React Native Made Easy Ep. 2 – Native Bridge - Huawei Developers

Introduction
React Native is a convenient tool for cross-platform development, and while it has grown more powerful with each release, it still has limits, for example its ability to interact with and use native components. Bridging native code with JavaScript is one of the most popular and effective ways to overcome them. Best of both worlds!
Not all HMS kits have official RN support yet. This article will walk you through how to create an Android native bridge to connect your RN app with HMS kits, using Scan Kit as the example.
This tutorial is based on https://github.com/clementf-hw/rn_integration_demo/tree/4b2262aa2110041f80cb41ebd7caa1590a48528a; you can find more details about the sample project in this article: https://forums.developer.huawei.com...d=0201230857831870061&fid=0101187876626530001.
Prerequisites
Basic Android development
Basic React Native development
These areas are already covered extensively on RN's official site, this forum, and other sources
HMS properly configured
You can also reference the above article for this matter
Major dependencies
RN Version: 0.62.2 (released on 9th April, 2020)
Gradle Version: 5.6.4
Gradle Plugin Version: 3.6.1
agcp: 1.2.1.301
This tutorial is broken into 3 parts:
Pt. 1: Create a simple native UI component as intro and warm up
Pt. 2: Bridging HMS Scan Kit into React Native
Pt. 3: Make Scan Kit into a standalone React Native Module that you can import into other projects or even upload to npm.
Bridging HMS Scan Kit
Now that we have some fundamental knowledge of how to bridge, let's bridge something meaningful. We will bridge the Scan Kit Default View as a QR code scanner, and also learn how to communicate from the native side to the React Native side.
First, we’ll have to configure the project following the guide to set Scan Kit up on the native side: https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/scan-preparation-4
Put agconnect-services.json in place
Add to allprojects > repositories in root level build.gradle
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Add to buildscript > repositories
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Add to buildscript > dependencies
Code:
buildscript {
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.2.1.301'
    }
}
Go to app/build.gradle and add this at the top
Code:
apply plugin: 'com.huawei.agconnect'
Add this to dependencies
Code:
dependencies {
    implementation 'com.huawei.hms:scanplus:1.1.3.300'
}
Add in proguard-rules.pro
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.**{*;}
Now do a Gradle sync. You can also try to build and run the app to confirm everything is OK, even though we have not done any actual development yet.
Add these to AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<application
    …
    <activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
</application>
So the basic setup/configuration is done. Similar to the warm-up, we will create a Module file first. Note that for the sake of variety and wider adaptability of the end product, this time we'll make it a plain Native Module instead of a Native UI Component.
Code:
package com.cfdemo.d001rn;

import androidx.annotation.NonNull;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;

public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
    private static final String REACT_CLASS = "ReactNativeHmsScan";
    private static ReactApplicationContext reactContext;

    public ReactNativeHmsScanModule(ReactApplicationContext context) {
        super(context);
        reactContext = context;
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }
}
We have seen how data flows from RN to native in the warm-up (e.g. the @ReactProp of our button). There are also several ways for data to flow from native to RN. Scan Kit uses startActivityForResult, so we need to implement the corresponding listener.
Code:
package com.cfdemo.d001rn;

import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;

public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
    private static final String REACT_CLASS = "ReactNativeHmsScan";
    private static ReactApplicationContext reactContext;

    public ReactNativeHmsScanModule(ReactApplicationContext context) {
        super(context);
        reactContext = context;
        reactContext.addActivityEventListener(mActivityEventListener);
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }

    private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
        @Override
        public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
        }
    };
}
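Incidentally, startActivityForResult is only one of the ways data can travel from native to RN. Another common one is the standard React Native event-emitter pattern, sketched below; this is general RN machinery rather than anything Scan Kit requires, and the helper class and event name here are our own examples:
Code:
// A minimal sketch of the RN event-emitter pattern for native-to-JS communication.
// The class name and the event name "onHmsScanEvent" are illustrative, not SDK constants.
import com.facebook.react.bridge.Arguments;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.WritableMap;
import com.facebook.react.modules.core.DeviceEventManagerModule;

public class ScanEventHelper {
    private final ReactApplicationContext reactContext;

    public ScanEventHelper(ReactApplicationContext reactContext) {
        this.reactContext = reactContext;
    }

    // JS subscribes with:
    // new NativeEventEmitter(NativeModules.ReactNativeHmsScan).addListener("onHmsScanEvent", ...)
    public void sendScanEvent(String value) {
        WritableMap params = Arguments.createMap();
        params.putString("value", value);
        reactContext
                .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter.class)
                .emit("onHmsScanEvent", params);
    }
}
For Scan Kit's Default View, though, the Promise-based flow below is a better fit, so that is what we will build.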
There are a couple of small details we'll need to add. First, the React Native JavaScript side expects a Promise for the result.
Code:
private Promise mScannerPromise;
We also need to add a request code to identify that this is our Scan Kit activity. 567 here is just an example; the value is at your discretion.
Code:
private static final int REQUEST_CODE_SCAN = 567;
There will be several error/reject conditions; let's identify and declare their codes first
Code:
private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
private static final String E_INVALID_CODE = "E_INVALID_CODE";
At this moment, the module should look like this
Code:
package com.cfdemo.d001rn;

import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;

public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
    private static final String REACT_CLASS = "ReactNativeHmsScan";
    private static ReactApplicationContext reactContext;
    private Promise mScannerPromise;

    private static final int REQUEST_CODE_SCAN = 567;

    private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
    private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
    private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
    private static final String E_INVALID_CODE = "E_INVALID_CODE";

    public ReactNativeHmsScanModule(ReactApplicationContext context) {
        super(context);
        reactContext = context;
        reactContext.addActivityEventListener(mActivityEventListener);
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }

    private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
        @Override
        public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
        }
    };
}
Now let’s implement the listener method
Code:
if (requestCode == REQUEST_CODE_SCAN) {
    if (mScannerPromise != null) {
        if (resultCode == Activity.RESULT_CANCELED) {
            mScannerPromise.reject(E_SCANNER_CANCELLED, "Scanner was cancelled");
        } else if (resultCode == Activity.RESULT_OK) {
            Object obj = intent.getParcelableExtra(ScanUtil.RESULT);
            if (obj instanceof HmsScan) {
                if (!TextUtils.isEmpty(((HmsScan) obj).getOriginalValue())) {
                    mScannerPromise.resolve(((HmsScan) obj).getOriginalValue().toString());
                } else {
                    mScannerPromise.reject(E_INVALID_CODE, "Invalid Code");
                }
                return;
            }
        }
    }
}
Let’s walk through what this does
When the listener receives an activity result, it checks if this is our request by checking the request code.
Afterwards, it checks that the promise object is not null. We will cover the promise object later, but briefly, it is passed from RN to native, and we rely on it to send the data back to RN.
Then, if the result is CANCELED (for example, the scanner was closed by the user), we tell RN by calling promise.reject()
If the result indicates OK, we’ll get the data by calling getParcelableExtra()
Now we’ll see if the resulting data matches our data type and is not empty, and then we’ll call promise.resolve()
Otherwise, we reject with a general error message. Of course, here you can expand and give a more detailed breakdown and resolution if you wish
This is a lot of checking and validation, but one can never be too safe, right?
Cool, now that we have finished the listener, let's work on the caller! This is the method we'll call from the RN side, marked with the @ReactMethod annotation.
Code:
@ReactMethod
public void startScan(final Promise promise) {
}
Give it some content
Code:
@ReactMethod
public void startScan(final Promise promise) {
    Activity currentActivity = getCurrentActivity();
    if (currentActivity == null) {
        promise.reject(E_ACTIVITY_DOES_NOT_EXIST, "Activity doesn't exist");
        return;
    }
    // Store the promise to resolve/reject when the scanner returns data
    mScannerPromise = promise;
    try {
        ScanUtil.startScan(currentActivity, REQUEST_CODE_SCAN, new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.ALL_SCAN_TYPE).create());
    } catch (Exception e) {
        mScannerPromise.reject(E_FAILED_TO_SHOW_SCANNER, e);
        mScannerPromise = null;
    }
}
Let’s do a walk through again
First we get the current activity reference and check if it is valid
Then we take the input promise and assign it to mScannerPromise which we declared earlier, so we can refer and use it throughout the process
Now we call the Scan Kit! This part is the same as a normal Android implementation.
Of course we wrap it with a try-catch for safety purposes
At this point we have finished the Module. As in the warm-up, we'll need to create a Package. This time it is a Native Module, so we register it in createNativeModules() and give createViewManagers() an empty list.
Code:
package com.cfdemo.d001rn;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;

public class ReactNativeHmsScanPackage implements ReactPackage {
    @Override
    public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
        return Arrays.<NativeModule>asList(new ReactNativeHmsScanModule(reactContext));
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }
}
Same as before, we’ll add the package to our MainApplication.java, import the Package, and add it in the getPackages() function
Code:
import com.cfdemo.d001rn.ReactNativeWarmUpPackage;
import com.cfdemo.d001rn.ReactNativeHmsScanPackage;

public class MainApplication extends Application implements ReactApplication {
    ...
    @Override
    protected List<ReactPackage> getPackages() {
        @SuppressWarnings("UnnecessaryLocalVariable")
        List<ReactPackage> packages = new PackageList(this).getPackages();
        // Packages that cannot be autolinked yet can be added manually here, for example:
        // packages.add(new MyReactNativePackage());
        packages.add(new ReactNativeWarmUpPackage());
        packages.add(new ReactNativeHmsScanPackage());
        return packages;
    }
All set! Let’s head back to RN side. This is our app from the warm up exercise(with a bit style change for the things we are going to add)
Let’s add a Button and set its onPress property as this.onScan() which we’ll implement after this
Code:
render() {
    const { displayText, region } = this.state
    return (
        <View style={styles.container}>
            <Text style={styles.textBox}>
                {displayText}
            </Text>
            <RNWarmUpView
                style={styles.nativeModule}
                text={"Render in Javascript"}
            />
            <Button
                style={styles.button}
                title={'Scan Button'}
                onPress={() => this.onScan()}
            />
            <MapView
                style={styles.map}
                region={region}
                showCompass={true}
                showsUserLocation={true}
                showsMyLocationButton={true}
            >
            </MapView>
        </View>
    );
}
Reload and see the button
Similar to the warm-up, we can declare the Native Module in this simple way
Code:
const RNWarmUpView = requireNativeComponent('RCTWarmUpView')
const RNHMSScan = NativeModules.ReactNativeHmsScan
Now we’ll implement onScan() which uses the async/await syntax for asynchronous coding
Code:
async onScan() {
    try {
        const data = await RNHMSScan.startScan();
        // handle your data here
    } catch (e) {
        console.log(e);
    }
}
Important! Scan Kit requires the CAMERA and READ_EXTERNAL_STORAGE permissions to function, so make sure you have handled this beforehand. One of the recommended ways to handle it is the react-native-permissions library: https://github.com/react-native-community/react-native-permissions. I will write another article on this topic, but for now you can refer to https://github.com/clementf-hw/rn_integration_demo if you are in need.
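If you prefer to keep a safety net on the native side as well, a minimal guard inside the module could look like the sketch below. It uses the standard androidx.core permission APIs; the request code 568 and the E_PERMISSION_MISSING error string are arbitrary choices of ours, not Scan Kit constants:
Code:
// Hypothetical native-side permission guard, callable at the top of startScan().
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

private static final int REQUEST_CODE_PERMISSIONS = 568; // arbitrary value

private boolean ensurePermissions(Activity activity, Promise promise) {
    boolean cameraGranted = ContextCompat.checkSelfPermission(
            activity, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED;
    boolean storageGranted = ContextCompat.checkSelfPermission(
            activity, Manifest.permission.READ_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED;
    if (cameraGranted && storageGranted) {
        return true;
    }
    // Ask the user; the outcome arrives in the activity's onRequestPermissionsResult().
    ActivityCompat.requestPermissions(activity,
            new String[]{Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE},
            REQUEST_CODE_PERMISSIONS);
    promise.reject("E_PERMISSION_MISSING", "Camera/storage permission not granted yet");
    return false;
}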
Now we click…TADA!
In this demo, this is what onScan() contains
Code:
async onScan() {
    try {
        const data = await RNHMSScan.startScan();
        const qrcodeData = {
            message: (JSON.parse(data)).message,
            location: (JSON.parse(data)).location,
            my_location: (JSON.parse(data)).my_location
        }
        this.handleData(qrcodeData)
    } catch (e) {
        console.log(e);
    }
}
Note: one minor modification is needed if you are working from the branch of the demo project mentioned earlier
Code:
onLocationReceived(locationData) {
    const location = typeof locationData === "object" ? locationData : JSON.parse(locationData)
    …
Now let’s try scan this
The actual data contained in the QR Code is
Code:
{"message": "Auckland", "location": {"lat": "-36.848461","lng": "174.763336"}}
Which brings us to Auckland!
Now your HMS Scan Kit in React Native is up and running!
Pt. 2 of this tutorial is done; please feel free to ask questions. You can also check out the repo of the sample project on GitHub: https://github.com/clementf-hw/rn_integration_demo, and raise an issue if you have any questions or updates.
In the 3rd and final part of this tutorial, we'll go through how to make this RN HMS Scan Kit bridge a standalone, downloadable and importable React Native Module, which you can use in multiple projects instead of recreating the Native Module project by project. You can even upload it to npm to share with fellow developers.

Related

Validate your news: Feat. Huawei ML Kit (Text Image Super-Resolution)

Introduction
Quality improvement has become crucial in this era of digitalization, where all our documents are kept in folders, shared over the network, and read on digital devices.
Imagine the struggle of an elderly person who has no way to read and understand an old prescribed medical document that has become blurred and deteriorated.
Can we avoid such issues?
Let’s unpack what Huawei ML Kit offers to overcome such day-to-day challenges.
Huawei ML Kit provides the Text Image Super-Resolution API to improve the quality and visibility of old and blurred text in an image.
Text Image Super-Resolution can zoom in on an image that contains text and significantly improve the definition of that text.
Limitations
The text image super-resolution service requires images with a maximum resolution of 800 x 800 px and a side length of at least 64 px.
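To make the constraint concrete, a small pre-check helper before handing a bitmap to the service might look like the sketch below; the helper name and the exact reading of the limits are our own, based on the sentence above, not an SDK API:
Code:
// Hypothetical pre-check against the stated limits: at most 800 x 800 px,
// with each side at least 64 px. Adjust to the official limits if they differ.
private boolean isSuitableForSuperResolution(android.graphics.Bitmap bitmap) {
    if (bitmap == null) {
        return false;
    }
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    return width <= 800 && height <= 800 && Math.min(width, height) >= 64;
}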
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the project-level build.gradle file, under buildscript > repositories and allprojects > repositories.
Code:
maven { url 'https://developer.huawei.com/repo/' }
Add below to build.gradle (app) file, under dependencies.
To use the Base SDK of ML Kit-Text Image Super Resolution, add the following dependencies:
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user’s device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="tisr" />
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit's Text Image Super-Resolution API.
Here is an example that shows how we can integrate this powerful API to improve text image quality, giving readers full access to old and blurred newspapers in an online news directory.
TextImageView Activity: Launcher Activity
This is the main activity of “The News Express” application.
Code:
package com.mlkitimagetext.example;

import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.mlkitimagetext.example.textimagesuperresolution.TextImageSuperResolutionActivity;

public class TextImageView extends AppCompatActivity {
    Button NewsExpress;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_text_image_view);
        NewsExpress = findViewById(R.id.bt1);
        NewsExpress.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                startActivity(new Intent(TextImageView.this, TextImageSuperResolutionActivity.class));
            }
        });
    }
}
activity_text_image_view.xml
This is the layout file for the above activity.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/im3">
    <LinearLayout
        android:id="@+id/ll_buttons"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="200dp"
        android:orientation="vertical">
        <Button
            android:id="@+id/bt1"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@android:color/transparent"
            android:layout_gravity="center"
            android:text="The News Express"
            android:textAllCaps="false"
            android:textStyle="bold"
            android:textSize="34dp"
            android:textColor="@color/mlkit_bcr_text_color_white"></Button>
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:textStyle="bold"
            android:text="Validate Your News"
            android:textSize="20sp"
            android:layout_gravity="center"
            android:textColor="#9fbfdf"/>
    </LinearLayout>
</RelativeLayout>
TextImageSuperResolutionActivity
This activity class performs the following actions:
Image picker implementation to pick an image from the gallery
Convert the selected image to a bitmap
Create a text image super-resolution analyzer
Create an MLFrame object using android.graphics.Bitmap
Perform super-resolution processing on the image containing text
Stop the analyzer to release detection resources
Code:
package com.mlkitimagetext.example;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolution;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzer;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzerFactory;
import com.mlkitimagetext.example.R;
import androidx.appcompat.app.AppCompatActivity;
import java.io.IOException;
public class TextImageSuperResolutionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "TextSuperResolutionActivity";
private MLTextImageSuperResolutionAnalyzer analyzer;
private static final int INDEX_3X = 1;
private static final int INDEX_ORIGINAL = 2;
private ImageView imageView;
private Bitmap srcBitmap;
Uri imageUri;
Boolean ImageSetupFlag = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_super_resolution);
imageView = findViewById(R.id.image);
imageView.setOnClickListener(this);
findViewById(R.id.btn_load).setOnClickListener(this);
createAnalyzer();
}
@Override
public void onClick(View view) {
if (view.getId() == R.id.btn_load) {
openGallery();
}else if (view.getId() == R.id.image)
{
if(ImageSetupFlag != true)
{
detectImage(INDEX_3X);
}else {
detectImage(INDEX_ORIGINAL);
ImageSetupFlag = false;
}
}
}
private void openGallery() {
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
srcBitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
imageView.setImageURI(imageUri);
}
}
private void release() {
if (analyzer == null) {
return;
}
analyzer.stop();
}
private void detectImage(int type) {
if (type == INDEX_ORIGINAL) {
setImage(srcBitmap);
return;
}
if (analyzer == null) {
return;
}
// Create an MLFrame by using the bitmap.
MLFrame frame = new MLFrame.Creator().setBitmap(srcBitmap).create();
Task<MLTextImageSuperResolution> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolution>() {
public void onSuccess(MLTextImageSuperResolution result) {
// success.
Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
setImage(result.getBitmap());
ImageSetupFlag = true;
}
})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// failure.
if (e instanceof MLException) {
MLException mlException = (MLException) e;
// Get the error code, developers can give different page prompts according to the error code.
int errorCode = mlException.getErrCode();
// Get the error message, developers can combine the error code to quickly locate the problem.
String errorMessage = mlException.getMessage();
Toast.makeText(getApplicationContext(), "Error:" + errorCode + " Message:" + errorMessage, Toast.LENGTH_SHORT).show();
} else {
// Other exception.
Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
}
}
});
}
private void setImage(final Bitmap bitmap) {
imageView.setImageBitmap(bitmap);
}
private void createAnalyzer() {
analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
release();
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202388336667910498&fid=0101187876626530001

Demystifying Document Skew Correction feat. HUAWEI ML KIT

Prolusion
This era is revolutionary for science and research, as most innovation targets consumer needs.
We all know that document scanning is a routine errand for most of us and a dire need in today's digital world.
For such needs, we often require a powerful mechanism that can correct the irregularities and skew in our documents.
Document Skew Correction is a technique that corrects tilted images to the right-facing angle, which further improves the visibility of the image.
Huawei ML Kit offers a robust API for skew correction which enables automatic position identification of a document in an image and corrects the shooting angle. It also allows users to customize the edge points.
Suggestions
It is recommended that the shooting angle of the image be within 30 degrees.
It is recommended that the image size be within the range of 320 x 320 px to 1920 x 1920 px.
The skew detection API supports the JPG, JPEG, and PNG image formats. (A size pre-check sketch follows this list.)
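As sketched below, the size recommendation can be pre-checked before calling the API; the helper name and strictness are our own choices rather than part of the SDK (the shooting angle, of course, cannot be verified from the bitmap alone):
Code:
// Hypothetical pre-check based on the recommendations above:
// image size within 320 x 320 px to 1920 x 1920 px. Not an SDK API.
private boolean isSuitableForSkewDetection(android.graphics.Bitmap bitmap) {
    if (bitmap == null) {
        return false;
    }
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    return width >= 320 && height >= 320 && width <= 1920 && height <= 1920;
}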
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the project-level build.gradle file, under buildscript > repositories and allprojects > repositories.
Code:
maven { url 'https://developer.huawei.com/repo/' }
Add below to build.gradle (app) file, under dependencies.
To use the Base SDK of ML Kit-Document Skew Correction, add the following dependencies:
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.4.300'
}
To use the Full SDK of ML Kit- Document Skew Correction, add the following dependencies:
Code:
dependencies {
    // Import the model package.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.4.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user’s device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="dsc" />
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit's Document Skew Correction API.
Here is an example, the “SUPER DOC” application, which allows users to capture or fetch images from the device's local memory and correct them. It shows how we can integrate this powerful API to correct a skewed document image and restore the document's proper angle, which ultimately improves its readability.
SkewDetect Activity
This activity is responsible for capturing and fetching images, detecting them for skew correction, aligning them, and providing the aligned document image as output.
Code:
package com.mlkit.documentSkewCorrection;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Point;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import androidx.appcompat.app.AppCompatActivity;
import com.google.android.material.floatingactionbutton.FloatingActionButton;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionConstant;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionResult;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzer;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewDetectResult;
import com.mlkit.documentSkewCorrection.R;
public class SkewDetect extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "SkewDetectActivity";
private MLDocumentSkewCorrectionAnalyzer analyzer;
private ImageView mImageView;
private Bitmap bitmap;
Uri imageUri;
private MLDocumentSkewCorrectionCoordinateInput input;
private MLFrame mlFrame;
Boolean FlagCameraClickDone = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
this.setContentView(R.layout.activity_document_skew_correction);
this.findViewById(R.id.image_refine).setOnClickListener(this);
this.mImageView = this.findViewById(R.id.image_refine_result);
if(FlagCameraClickDone)
{
this.findViewById(R.id.image_refine).setVisibility(View.VISIBLE);
}
else
{
this.findViewById(R.id.image_refine).setVisibility(View.GONE);
}
// Create the setting.
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting
.Factory()
.create();
// Get the analyzer.
this.analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
FlagCameraClickDone = false;
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
bitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
// Create a MLFrame by using the bitmap.
this.mlFrame = new MLFrame.Creator().setBitmap(this.bitmap).create();
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
FlagCameraClickDone = true;
this.findViewById(R.id.image_refine).setVisibility(View.VISIBLE);
mImageView.setImageURI(imageUri);
}
}
@Override
public void onClick(View v) {
this.analyzer();
}
private void analyzer() {
// Call document skew detect interface to get coordinate data
Task<MLDocumentSkewDetectResult> detectTask = this.analyzer.asyncDocumentSkewDetect(this.mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
@Override
public void onSuccess(MLDocumentSkewDetectResult detectResult) {
Log.e(TAG, detectResult.getResultCode() + ":");
if (detectResult != null) {
int resultCode = detectResult.getResultCode();
// Detect success.
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
SkewDetect.this.setDetectData(new MLDocumentSkewCorrectionCoordinateInput(coordinates));
SkewDetect.this.refineImg();
} else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
// Parameters error.
Log.e(TAG, "Parameters error!");
SkewDetect.this.displayFailure();
} else if (resultCode == MLDocumentSkewCorrectionConstant.DETECT_FAILD) {
// Detect failure.
Log.e(TAG, "Detect failed!");
SkewDetect.this.displayFailure();
}
} else {
// Detect exception.
Log.e(TAG, "Detect exception!");
SkewDetect.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for detect failure.
Log.e(TAG, e.getMessage() + "");
SkewDetect.this.displayFailure();
}
});
}
// Show result
private void displaySuccess(MLDocumentSkewCorrectionResult refineResult) {
if (this.bitmap == null) {
this.displayFailure();
return;
}
// Display the corrected image.
Bitmap corrected = refineResult.getCorrected();
if (corrected != null) {
this.mImageView.setImageBitmap(corrected);
} else {
this.displayFailure();
}
}
private void displayFailure() {
Toast.makeText(this.getApplicationContext(), "Fail", Toast.LENGTH_SHORT).show();
}
private void setDetectData(MLDocumentSkewCorrectionCoordinateInput input) {
this.input = input;
}
// Refine image
private void refineImg() {
// Call refine image interface
Task<MLDocumentSkewCorrectionResult> correctionTask = this.analyzer.asyncDocumentSkewCorrect(this.mlFrame, this.input);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
if (refineResult != null) {
int resultCode = refineResult.getResultCode();
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
SkewDetect.this.displaySuccess(refineResult);
} else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
// Parameters error.
Log.e(TAG, "Parameters error!");
SkewDetect.this.displayFailure();
} else if (resultCode == MLDocumentSkewCorrectionConstant.CORRECTION_FAILD) {
// Correct failure.
Log.e(TAG, "Correct failed!");
SkewDetect.this.displayFailure();
}
} else {
// Correct exception.
Log.e(TAG, "Correct exception!");
SkewDetect.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for refine failure.
SkewDetect.this.displayFailure();
}
});
}
@Override
protected void onDestroy() {
super.onDestroy();
if (this.analyzer != null) {
try {
this.analyzer.stop();
} catch (IOException e) {
Log.e(SkewDetect.TAG, "Stop failed: " + e.getMessage());
}
}
}
}
SkewDetect Activity View Class
This layout file defines the UI of the application.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/shape"
tools:context="com.huawei.mlkit.example.face.StillFaceAnalyseActivity">
<!-- Create an image view to hold the bitmap -->
<ImageView
android:id="@+id/image_refine_result"
android:layout_width="500dp"
android:layout_height="300dp"
android:layout_below="@+id/image_foreground"
android:layout_marginTop="20dp"></ImageView>
<RelativeLayout
android:id="@+id/relativeLayout1"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_margin="20dp">
<!-- Create a button to invoke the ML Kit skew correction API -->
<Button
android:id="@+id/imagecorrection"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_alignParentBottom="true"
android:layout_gravity="start|bottom"
android:background="@color/emui_color_gray_1"
android:text=" Skew Correction "
android:textAllCaps="false"
android:textColor="@color/emui_color_gray_7"></Button>
<!-- Create a floating button to fetch an image from the gallery -->
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:id="@+id/fab"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentBottom="true"
android:layout_gravity="end|bottom"
android:contentDescription="@string/camera"
android:outlineProvider="none"
android:src="@drawable/gall"
app:backgroundTint="@color/emui_color_gray_1"
app:borderWidth="0dp"
app:elevation="2dp" />
<!-- Create a floating button to capture an image with the camera -->
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:id="@+id/cam"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_gravity="bottom"
android:layout_alignParentLeft="true"
android:contentDescription="@string/camera"
android:outlineProvider="none"
android:src="@drawable/icon_cam"
app:backgroundTint="@color/emui_color_gray_1"
app:borderWidth="0dp"
app:elevation="2dp" />
</RelativeLayout>
</RelativeLayout>
Results
Conclusion
In this article, we took a small step to create and demonstrate the integration of the Document Skew Correction API from Huawei ML Kit for better document image readability.
Upcoming article will have the integration of multiple ML API’s under one powerful application.
Stay tuned!!
References
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/documentskewcorrection-0000001051703156

Intermediate: How to Verify Phone Number and Anonymous Account Login using Huawei Auth Service-AGC in Unity

Introduction
In this article, we will look at how Huawei Auth Service-AGC provides a secure and reliable user authentication system for your application. Building such a system yourself is a very difficult process; with the Huawei Auth Service SDK you only need to access the Auth Service capabilities, without implementing anything on the cloud.
Here I am covering anonymous sign-in, which lets users access your app as guests; when a user signs in anonymously, Auth Service provides a unique ID to identify that user. I am also covering authenticating a mobile number by verifying an OTP.
Overview
You need to have Unity installed, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create Unity project.
3. Add the Huawei HMS AGC Services to the project.
4. Download and save the configuration file.
Add the agconnect-services.json file under the directory Assets > Plugins > Android.
5. Add the following plugin and dependencies in LauncherTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
6. Add the following dependencies in MainTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.301'
implementation 'com.huawei.hms:base:5.2.0.300'
implementation 'com.huawei.hms:hwid:5.2.0.300'
7. Add the following to the buildscript and allprojects repositories, and to the classpath, in BaseProjectTemplate.
Code:
maven { url 'https://developer.huawei.com/repo/' }
8. Create an empty GameObject and rename it to GameManager; create a UI canvas with input text fields and buttons, and assign onClick events to the respective components as shown below.
MainActivity.java
Code:
package com.huawei.AuthServiceDemo22;
import android.content.Intent;
import android.os.Bundle;
import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import com.unity3d.player.UnityPlayerActivity;
import android.util.Log;
import com.huawei.agconnect.auth.AGConnectAuth;
import com.huawei.agconnect.auth.AGConnectAuthCredential;
import com.huawei.agconnect.auth.AGConnectUser;
import com.huawei.agconnect.auth.PhoneAuthProvider;
import com.huawei.agconnect.auth.SignInResult;
import com.huawei.agconnect.auth.VerifyCodeResult;
import com.huawei.agconnect.auth.VerifyCodeSettings;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hmf.tasks.TaskExecutors;
import java.util.Locale;
public class MainActivity extends UnityPlayerActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
LoginManager.getInstance().initialize(this);
Log.d("DDDD"," Inside onCreate ");
}
public static void AnoniomousLogin(){
AGConnectAuth.getInstance().signInAnonymously().addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
AGConnectUser user = signInResult.getUser();
String uid = user.getUid();
Log.d("DDDD"," Login Anonymous UID : "+uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside ERROR "+e.getMessage());
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
}
public static void sendVerifCode(String phone) {
VerifyCodeSettings settings = VerifyCodeSettings.newBuilder()
.action(VerifyCodeSettings.ACTION_REGISTER_LOGIN)
.sendInterval(30) // Shortest sending interval, 30–120s
.build();
String countCode = "+91";
String phoneNumber = phone;
if (notEmptyString(countCode) && notEmptyString(phoneNumber)) {
Task<VerifyCodeResult> task = PhoneAuthProvider.requestVerifyCode(countCode, phoneNumber, settings);
task.addOnSuccessListener(TaskExecutors.uiThread(), new OnSuccessListener<VerifyCodeResult>() {
@Override
public void onSuccess(VerifyCodeResult verifyCodeResult) {
Log.d("DDDD"," ==>"+verifyCodeResult);
}
}).addOnFailureListener(TaskExecutors.uiThread(), new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside onFailure");
}
});
}
}
static boolean notEmptyString(String string) {
return string != null && !string.isEmpty() && !string.equals("");
}
public static void linkPhone(String verifyCode1,String phone) {
Log.d("DDDD", " verifyCode1 "+verifyCode1);
String phoneNumber = phone;
String countCode = "+91";
String verifyCode = verifyCode1;
Log.e("DDDD", " verifyCode "+verifyCode);
AGConnectAuthCredential credential = PhoneAuthProvider.credentialWithVerifyCode(
countCode,
phoneNumber,
null, // password, can be null
verifyCode);
AGConnectAuth.getInstance().getCurrentUser().link(credential).addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
String phoneNumber = signInResult.getUser().getPhone();
String uid = signInResult.getUser().getUid();
Log.d("DDDD", "phone number: " + phoneNumber + ", uid: " + uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e("DDDD", "Login error, please try again, error:" + e.getMessage());
}
});
}
}
GameManager.cs
Code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class GameManager : MonoBehaviour
{
    public InputField OtpField, inputFieldPhone;
    string otp = null, phone = "";

    // Start is called before the first frame update
    void Start()
    {
        inputFieldPhone.text = "9740424108";
    }

    public void onClickButton()
    {
        phone = inputFieldPhone.text;
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("sendVerifCode", phone);
        }
    }

    public void LinkPhone()
    {
        otp = OtpField.text;
        Debug.Log(" OTP " + otp);
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("linkPhone", otp, phone);
        }
    }

    public void AnoniomousLogin()
    {
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("AnoniomousLogin");
        }
    }
}
10. To build the APK, choose File > Build Settings > Build; to build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add agconnect-services.json file without fail.
Make sure dependencies added in build files.
Make sure that you enabled the Auth Service in AG-Console.
Make sure that you enabled the Authentication mode in Auth Service.
Conclusion
We have learned the integration of Huawei Auth Service-AGC anonymous account login and mobile number verification through OTP in Unity game development. In conclusion, Auth Service provides a secure and reliable user authentication system for your application.
Thank you so much for reading; I hope this article helps you.
Reference
Official documentation service introduction
Unity Auth Service Manual
Auth Service CodeLabs
Checkout in forum

MVVM Architecture On HarmonyOS Using Retrofit And RxJava

In this tutorial, we will be discussing and implementing the HarmonyOS MVVM Architectural pattern in our Harmony app.
This project is available on GitHub; the link can be found at the end of the article
Table of contents
What is MVVM
Harmony MVVM example project structure
Adding dependencies
Model
Layout
Retrofit interface
ViewModel
Tips and Tricks
Conclusion
Recommended resources
What is MVVM
MVVM stands for Model, View, ViewModel:
Model: This holds the data of the application. It cannot directly talk to the View. Generally, it's recommended to expose the data to the ViewModel through ActiveData (Observables).
View: It represents the UI of the application devoid of any Application Logic. It observes the ViewModel.
ViewModel: It acts as a link between the Model and the View. It’s responsible for transforming the data from the Model. It provides data streams to the View. It also uses hooks or callbacks to update the View. It’ll ask for the data from the Model.
MVVM can be achieved in two ways:
Using Data binding
RxJava
In this tutorial we will implement MVVM in HarmonyOS using RxJava, as data binding is still under development and not ready to use in HarmonyOS.
Harmony MVVM example project structure
We will create packages by features. It will make your code more modular and manageable.
Adding the Dependencies
Add the following dependencies in your module level build.gradle file:
Code:
dependencies {
    //[...]
    // RxJava
    implementation "io.reactivex.rxjava2:rxjava:2.2.17"
    // Retrofit
    implementation 'com.squareup.retrofit2:retrofit:2.6.0'
    implementation "com.squareup.retrofit2:converter-moshi:2.6.0"
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
    // RxJava adapter for Retrofit
    implementation 'com.squareup.retrofit2:adapter-rxjava2:2.7.1'
}
Model
The Model would hold the user’s email and password. The following User.java class does it:
Code:
package com.megaache.mvvmdemo.model;

public class User {
    private String email;
    private String password;

    public User(String email, String password) {
        this.email = email;
        this.password = password;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public String getPassword() {
        return password;
    }

    @Override
    public String toString() {
        return "User{" +
                "email='" + email + '\'' +
                ", password='" + password + '\'' +
                '}';
    }
}
Layout
NOTE: For this tutorial, I have decided to create the layout for smart watch devices; however, it will work fine on all devices, you just need to re-arrange the components and modify the alignment.
The layout will consist of a login button, two text fields, and two error texts; each error text is shown or hidden, depending on the value of the text field above it, after the login button is clicked. The final UI will look like the screenshot below:
Before we create the layout, let's add some colors:
First, create the file color.json under resources/base/element and add the following JSON content:
Code:
{
    "color": [
        {
            "name": "primary",
            "value": "#283148"
        },
        {
            "name": "primaryDark",
            "value": "#283148"
        },
        {
            "name": "accent",
            "value": "#06EBBF"
        },
        {
            "name": "red",
            "value": "#FF406E"
        }
    ]
}
Then, let's design background elements for the text fields and the button:
Create the files background_text_field.xml and background_button.xml under resources/base/graphic as shown in the screenshot below.
Then add the following code:
background_text_field.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <corners
        ohos:radius="20"/>
    <solid
        ohos:color="#ffffff"/>
    <stroke
        ohos:width="2"
        ohos:color="$color:accent"/>
</shape>
background_button.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <corners
        ohos:radius="20"/>
    <solid
        ohos:color="$color:accent"/>
</shape>
Now let's create the background element for the main layout, called background_ability_login.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <solid
        ohos:color="$color:primaryDark"/>
</shape>
Finally, let’s create the layout file ability_login.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<ScrollView
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:id="$+id:scrollview"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:background_element="$graphic:background_ability_login"
ohos:layout_alignment="horizontal_center"
ohos:rebound_effect="true"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:padding="20vp"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
>
<TextField
ohos:id="$+id:tf_email"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="email"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_number"
ohos:text_size="15fp"/>
<Text
ohos:id="$+id:t_email_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:text="invalid email"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
ohos:top_margin="10vp">
<TextField
ohos:id="$+id:tf_password"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="password"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_password"
ohos:text_size="15fp"
/>
<Text
ohos:id="$+id:t_password_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:padding="0vp"
ohos:text="invalid password"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<Button
ohos:id="$+id:btn_login"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_button"
ohos:bottom_margin="30vp"
ohos:min_height="40vp"
ohos:text="login"
ohos:text_color="#fff"
ohos:text_size="18fp"
ohos:top_margin="10vp"/>
</DirectionalLayout>
</ScrollView>
Retrofit interface
Before we move to the ViewModel, we have to set up our Retrofit service and repository class.
To keep the project clean, I will create the class Config.java, which will hold our API URLs:
Code:
package com.megaache.mvvmdemo;

public class Config {
    // TODO: update the base URL with a valid URL
    public static final String BASE_URL = "https://example.com";
    public static final String API_VERSION = "/api/v1";
    public static final String LOGIN_URL = "auth/login";
}
Note: the URLs are just for demonstration. For the demo to work, you must replace them with valid ones.
First create interface APIServices.java:
For this tutorial, we assume the login endpoint's HTTP method is POST; you may change it depending on your API. The login method will return an Observable, which will be observed in the ViewModel using RxJava.
Code:
package com.megaache.mvvmdemo.network;

import com.megaache.mvvmdemo.Config;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
import retrofit2.http.Body;
import retrofit2.http.Headers;
import retrofit2.http.POST;

public interface APIServices {
    @POST(Config.LOGIN_URL)
    @Headers("Content-Type: application/json;charset=UTF-8")
    Observable<LoginResponse> login(@Body LoginRequest loginRequest);
}
Note: the class LoginRequest, which you will see later in this tutorial, must match the request that the server expects, in both the names of the variables and their types; otherwise the server will fail to process the request.
Then, add the method createRetrofitClient() to MyApplication.java. It will create and return a Retrofit instance that uses the Moshi converter to handle the conversion of JSON to our Java classes, and the RxJava2 adapter to return Observables that work with RxJava instead of the default Call class, which requires callbacks:
Code:
package com.megaache.mvvmdemo;

import com.megaache.mvvmdemo.network.APIServices;
import ohos.aafwk.ability.AbilityPackage;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;

public class MyApplication extends AbilityPackage {
    @Override
    public void onInitialize() {
        super.onInitialize();
    }

    public static APIServices createRetrofitClient() {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(60L, TimeUnit.SECONDS)
                .build();
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl(Config.BASE_URL + Config.API_VERSION)
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .addConverterFactory(MoshiConverterFactory.create())
                .client(client)
                .build();
        return retrofit.create(APIServices.class);
    }
}
NOTE: For cleaner code, you can create a file RetrofitClient.java and move the method createRetrofitClient() to it, as sketched below.
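A minimal sketch of what that could look like, assuming it lives in the same network package as APIServices (the lazy singleton is my own choice, not something the demo project mandates):
Code:
package com.megaache.mvvmdemo.network;
import com.megaache.mvvmdemo.Config;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;
public class RetrofitClient {
    private static APIServices instance;
    private RetrofitClient() {
    }
    // Lazily build a single APIServices instance shared across the app
    public static synchronized APIServices getClient() {
        if (instance == null) {
            OkHttpClient client = new OkHttpClient.Builder()
                    .connectTimeout(60L, TimeUnit.SECONDS)
                    .build();
            instance = new Retrofit.Builder()
                    .baseUrl(Config.BASE_URL + Config.API_VERSION)
                    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                    .addConverterFactory(MoshiConverterFactory.create())
                    .client(client)
                    .build()
                    .create(APIServices.class);
        }
        return instance;
    }
}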
Now, let’s work on the login feature. We are going to first create the request and response classes, then move to the ViewModel and the view:
We need LoginRequest and LoginResponse, which will extend BaseRequest and BaseResponse respectively; the code is shown below:
Create BaseRequest.java:
In a real-life project, your API may expect some parameters to be sent with every request, for example accessToken, language, deviceId, pushToken, etc., depending on your API. For this tutorial I added one field called deviceType with a static value.
Code:
package com.megaache.mvvmdemo.network.request;
public class BaseRequest {
private String deviceType;
public BaseRequest() {
deviceType = "harmony-watch";
}
public String getDeviceType() {
return deviceType;
}
public void setDeviceType(String deviceType) {
this.deviceType = deviceType;
}
}
Create the class LoginRequest.java, which will extend BaseRequest and have two fields, email and password, which will be provided by the end user:
Code:
package com.megaache.mvvmdemo.network.request;
public class LoginRequest extends BaseRequest {
private String email;
private String password;
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
Then, for the response, create BaseResponse.java first:
Code:
package com.megaache.mvvmdemo.network.response;
import java.io.Serializable;
public class BaseResponse implements Serializable {
}
Then LoginResponse.java extending BaseResponse:
Code:
package com.megaache.mvvmdemo.network.response;
import com.megaache.mvvmdemo.model.User;
import com.squareup.moshi.Json;
public class LoginResponse extends BaseResponse {
@Json(name = "user")
private User user;
@Json(name = "accessToken")
private String accessToken;
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
public String getAccessToken() {
return accessToken;
}
public void setAccessToken(String accessToken) {
this.accessToken = accessToken;
}
}
Note: this class must match the response you get from the server; otherwise the Retrofit Moshi converter will fail to convert the response to the LoginResponse class. Both the types of the variables and their names must equal those in the JSON response.
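For illustration, a server response that maps cleanly onto LoginResponse could look like the following; the fields inside user depend entirely on your own User class and API, so treat them as placeholders:
Code:
{
  "user": {
    "id": 1,
    "email": "user@example.com"
  },
  "accessToken": "some-token-value"
}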
ViewModel
In the ViewModel, we will wrap the data loaded with Retrofit inside the class LoggedIn in LoginViewState, and observe the states Observable, defined in BaseViewModel, in our Ability (or AbilitySlice). Whenever the value in states changes, the ability will be notified; we do not have to check whether the ability is still alive, because the observer is bound to the ability's lifecycle.
The code for LoginViewState.java, which extends the empty class BaseViewState.java, and for ErrorData.java (used in LoginViewState.java), is given below:
ErrorData.java:
Code:
package com.megaache.mvvmdemo.model;
import java.io.Serializable;
public class ErrorData implements Serializable {
private String message;
private int statusCode;
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public int getStatusCode() {
return statusCode;
}
public void setStatusCode(int statusCode) {
this.statusCode = statusCode;
}
}
LoginViewState.java:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewState;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.response.LoginResponse;
public class LoginViewState extends BaseViewState {
public static class Loading extends LoginViewState {
}
public static class Error extends LoginViewState {
private ErrorData message;
public Error(ErrorData message) {
this.message = message;
}
public void setMessage(ErrorData message) {
this.message = message;
}
public ErrorData getMessage() {
return message;
}
}
public static class LoggedIn extends LoginViewState {
private LoginResponse userDataResponse;
public LoggedIn(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
public LoginResponse getUserDataResponse() {
return userDataResponse;
}
public void setUserDataResponse(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
}
}
The code for the LoginViewModel.java is given below:
When the user clicks the login button, the method sendLoginRequest() will set up our Retrofit Observable; the request will not be sent until we call the method subscribe, which will be done in the view. Notice that we are subscribing on the Schedulers.io() scheduler, which will execute the requests on a background thread to avoid freezing the UI. Because of that, we have to create our own custom observer that will invoke the callback code on the UI thread after we receive data; more on this later:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewModel;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;
import ohos.aafwk.abilityjet.activedata.ActiveData;
public class LoginViewModel extends BaseViewModel<LoginViewState> {
private static final int MIN_PASSWORD_LENGTH = 6;
public ActiveData<Boolean> emailValid = new ActiveData<>();
public ActiveData<Boolean> passwordValid = new ActiveData<>();
public ActiveData<Boolean> loginState = new ActiveData<>();
public LoginViewModel() {
super();
}
public void login(String email, String password) {
boolean isEmailValid = isEmailValid(email);
emailValid.setData(isEmailValid);
if (!isEmailValid)
return;
boolean isPasswordValid = isPasswordValid(password);
passwordValid.setData(isPasswordValid);
if (!isPasswordValid)
return;
LoginRequest loginRequest = new LoginRequest();
loginRequest.setEmail(email);
loginRequest.setPassword(password);
super.subscribe(sendLoginRequest(loginRequest));
}
private Observable<LoginViewState> sendLoginRequest(LoginRequest loginRequest) {
return MyApplication.createRetrofitClient()
.login(loginRequest)
.doOnError(Throwable::printStackTrace)
.map(LoginViewState.LoggedIn::new)
.cast(LoginViewState.class)
.onErrorReturn(throwable -> {
ErrorData errorData = new ErrorData();
if (throwable.getMessage() != null)
errorData.setMessage(throwable.getMessage());
else
errorData.setMessage(" No internet! ");
return new LoginViewState.Error(errorData);
})
.subscribeOn(Schedulers.io())
.startWith(new LoginViewState.Loading());
}
private boolean isEmailValid(String email) {
return email != null && !email.isEmpty() && email.contains("@");
}
private boolean isPasswordValid(String password) {
return password != null && password.length() >= MIN_PASSWORD_LENGTH; // valid when at least 6 characters long
}
}
Setting up the ability (View)
As you know, the ability is our view. We instantiate the ViewModel and observe its states and ActiveDatas in the method observeData(). As mentioned before, Retrofit will send the request on a background thread, so the code in the observer will run on that same thread (Schedulers.io()), which will cause exceptions if that code attempts to update the UI. To prevent that, we will create a custom UiObserver class which extends DataObserver and runs our code in the UI task dispatcher of the ability (the UI thread). The code for UiObserver.java is shown below:
Code:
package com.megaache.mvvmdemo.utils;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.abilityjet.activedata.DataObserver;
import ohos.app.dispatcher.TaskDispatcher;
public abstract class UiObserver<T> extends DataObserver<T> {
private TaskDispatcher uiTaskDispatcher;
public UiObserver(Ability baseAbilitySlice) {
setLifecycle(baseAbilitySlice.getLifecycle());
uiTaskDispatcher = baseAbilitySlice.getUITaskDispatcher();
}
@Override
public void onChanged(T t) {
uiTaskDispatcher.asyncDispatch(() -> onValueChanged(t));
}
public abstract void onValueChanged(T t);
}
Code for LoginAbility.java is shown below:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.ResourceTable;
import com.megaache.mvvmdemo.utils.UiObserver;
import com.megaache.mvvmdemo.model.ErrorData;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.agp.components.TextField;
import ohos.agp.window.dialog.ToastDialog;
public class LoginAbility extends Ability {
private LoginViewModel loginViewModel;
private TextField emailTF;
private Text emailInvalidT;
private TextField passwordTF;
private Text passwordInvalidT;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
loginViewModel = new LoginViewModel();
initUI();
observeData();
}
private void initUI() {
super.setUIContent(ResourceTable.Layout_ability_login);
Button loginButton = (Button) findComponentById(ResourceTable.Id_btn_login);
loginButton.setClickedListener(c -> attemptLogin());
emailTF = (TextField) findComponentById(ResourceTable.Id_tf_email);
emailInvalidT = (Text) findComponentById(ResourceTable.Id_t_email_invalid);
passwordTF = (TextField) findComponentById(ResourceTable.Id_tf_password);
passwordInvalidT = (Text) findComponentById(ResourceTable.Id_t_password_invalid);
}
private void observeData() {
loginViewModel.emailValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
emailInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE); // show the error label only when the email is invalid
}
}, false);
loginViewModel.passwordValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
passwordInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE); // show the error label only when the password is invalid
}
}, false);
loginViewModel.getStates().addObserver(new UiObserver<LoginViewState>(this) {
@Override
public void onValueChanged(LoginViewState loginState) {
if (loginState instanceof LoginViewState.Loading) {
toggleLoadingDialog(true);
} else if (loginState instanceof LoginViewState.Error) {
toggleLoadingDialog(false);
manageError(((LoginViewState.Error) loginState).getMessage());
} else if (loginState instanceof LoginViewState.LoggedIn) {
toggleLoadingDialog(false);
showToast("logging successful!");
}
}
}, false);
}
private void attemptLogin() {
loginViewModel.login(emailTF.getText(), passwordTF.getText());
}
private void toggleLoadingDialog(boolean show) {
//todo: show/hide loading dialog
}
private void manageError(ErrorData errorData) {
showToast(errorData.getMessage());
}
private void showToast(String message) {
new ToastDialog(this)
.setText(message)
.show();
}
@Override
protected void onStop() {
super.onStop();
loginViewModel.unbind();
}
}
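The method toggleLoadingDialog() above is left as a TODO. A minimal sketch of one possible implementation, assuming HarmonyOS's ohos.agp.window.dialog.CommonDialog (the dialog's content and styling are up to you):
Code:
private CommonDialog loadingDialog;
private void toggleLoadingDialog(boolean show) {
    if (show) {
        if (loadingDialog == null) {
            loadingDialog = new CommonDialog(this);
            // Keep the dialog on screen until we dismiss it ourselves
            loadingDialog.setAutoClosable(false);
        }
        loadingDialog.show();
    } else if (loadingDialog != null) {
        loadingDialog.destroy();
    }
}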
Tips And Tricks
If you want your app to work offline, it is best to introduce a repository class that will query information from the server when the internet is available, or from the cache when it is not (a minimal sketch is given after this list)
For cleaner code, try re-using the ViewModel as much as possible by creating a base class and moving the shared code there
You should not keep a reference to a View (component) or context in the ViewModel, unless you have no other option
The ViewModel should not talk directly to the View; instead, the View observes the ViewModel and updates itself depending on the ViewModel's data
A correct implementation of ViewModel should allow you to change the UI with minimal or zero changes to the ViewModel.
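A minimal sketch of such a repository, using a simple in-memory cache as a stand-in for real persistence (the LoginRepository class and its caching strategy are illustrative, not part of the demo project):
Code:
package com.megaache.mvvmdemo.network;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
public class LoginRepository {
    private LoginResponse cached; // in-memory cache; swap in database or file storage for real offline support
    public Observable<LoginResponse> login(LoginRequest request) {
        return MyApplication.createRetrofitClient()
                .login(request)
                .doOnNext(response -> cached = response) // remember the last successful response
                .onErrorResumeNext(throwable -> cached != null
                        ? Observable.just(cached) // fall back to the cache when the network fails
                        : Observable.<LoginResponse>error(throwable));
    }
}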
Conclusion
MVVM combines the advantage of the separation of concerns provided by the MVP architecture with the advantages of RxJava or data binding. The result is a pattern where the model drives as many of the operations as possible, minimizing the logic in the view.
Finally, talk is cheap, and I strongly advise you to try and learn these things in the code so that you do not need to rely on people like me to tell you what to do.
Clone the project from GitHub, replace the API URLs in the class Config.java, and run it on a HarmonyOS device. You should see a toast that says "login successful!" if the credentials are correct; otherwise it should show a toast with the error, either "No internet!" or an error returned from the server.
This project is available on GitHub: Click here
Recommended sources:
HarmonyOS (essential topics): Essential Topics
Retrofit: Click here
Rxjava: Click here
Original Source
Comment below if you have any questions or suggestions.
Thank you!
In Android we have the Manifest file where we declare all the activity names. Do we have a similar file here?

Introduction of Pose estimation using Huawei HiAI Engine in Android

Introduction
In this article, we will learn how to perform human skeletal detection.
The key skeletal features are important for describing human posture and predicting human behavior. Therefore, the recognition of key skeletal features is the basis for a diversity of computer vision tasks, such as motion categorization, abnormal behavior detection, and auto-navigation. In recent years, skeletal feature recognition has improved greatly with the development of deep learning technology, especially in domains relating to computer vision.
Pose estimation mainly detects key human body features such as joints and facial features, and provides skeletal information based on such features.
Given an input portrait image, users will obtain the coordinate information of 14 key skeletal features for each portrait in it. The algorithm supports real-time processing and returns the result within 70 ms. The result presents posture information for the head, neck, right and left shoulders, right and left elbows, right and left wrists, right and left hips, right and left knees, and right and left ankles.
How to integrate Pose Estimation
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is Huawei’s mobile terminal–oriented artificial intelligence (AI) computing platform. It constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. This three-layer open platform, integrating terminals, chips, and the cloud, brings a more extraordinary experience to users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter the required information, such as the product name and package name, and click the Next button.
Step 4: Verify the application details and click the Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the jar file dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.RemoteException;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hiai.pdk.pluginservice.ILoadPluginCallback;
import com.huawei.hiai.pdk.resultcode.HwHiAIResultCode;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.detector.PoseEstimationDetector;
import com.huawei.hiai.vision.visionkit.image.detector.BodySkeletons;
import com.huawei.hiai.vision.visionkit.image.detector.PeConfiguration;
import com.huawei.hiai.vision.visionkit.text.config.VisionTextConfiguration;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class MainActivity extends AppCompatActivity {
private Object mWaitResult = new Object(); // The user establishes a semaphore and waits for the callback information of the bound service
private ImageView mImageView;
private ImageView yogaPose;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mImageView = (ImageView) findViewById(R.id.skeleton_img);
yogaPose = (ImageView) findViewById(R.id.yogaPose);
//The application needs to bind the CV service first, and monitor whether the service is successfully connected
VisionBase.init(getApplicationContext(), new ConnectionCallback() {
public void onServiceConnect() { // Listen to the message that the service is successfully bound
Log.d("SkeletonPoint", "HwVisionManager onServiceConnect OK.");
Toast.makeText(getApplicationContext(),"Service binding successfully!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
doSkeletonPoint();
}
}
public void onServiceDisconnect() { // Listen to the message that the binding service failed
Log.d("SkeletonPoint", "HwVisionManager onServiceDisconnect OK.");
Toast.makeText(getApplicationContext(),"Service binding failed!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
}
}
});
}
@Override
protected void onResume() {
super.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
}
private void doSkeletonPoint() {
// Declare the skeleton detection interface object, and set the plug-in to cross-process mode MODE_OUT (also can be set to the same process mode MODE_IN)
PoseEstimationDetector mPoseEstimationDetector = new PoseEstimationDetector(MainActivity.this);
PeConfiguration config = new PeConfiguration.Builder()
.setProcessMode(VisionTextConfiguration.MODE_OUT)
.build();
mPoseEstimationDetector.setConfiguration(config);
// Currently, the skeleton detection interface accepts input as Bitmap, which is encapsulated into VisionImage. Video streaming will be supported in the future
Bitmap bitmap = null;
VisionImage image = null;
// TODO: Developers need to create a Bitmap here
BufferedInputStream bis = null;
try {
bis = new BufferedInputStream(getAssets().open("0.jpg"));
} catch (IOException e) {
Log.d("SkeletonPoint", e.toString());
Toast.makeText(getApplicationContext(), e.toString(),Toast.LENGTH_LONG).show();
}
bitmap = BitmapFactory.decodeStream(bis);
yogaPose.setImageBitmap(bitmap);
Bitmap bitmap2 = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
image = VisionImage.fromBitmap(bitmap);
// Query whether the capability supports the installation of plug-ins at the same time. getAvailability() returns -6 to indicate that the current engine supports this ability, but the plug-in needs to be downloaded and installed on the cloud side
int availability = mPoseEstimationDetector.getAvailability();
int installation = HwHiAIResultCode.AIRESULT_UNSUPPORTED; // Indicates that it does not support
if (availability == -6) {
Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();
LoadPluginCallback cb = new LoadPluginCallback(lock, condition);
// Download and install the plugin
mPoseEstimationDetector.loadPlugin(cb);
lock.lock();
try {
condition.await(90, TimeUnit.SECONDS);
} catch (InterruptedException e) {
Log.e("SkeletonPoint", e.getMessage());
} finally {
lock.unlock();
}
installation = cb.mResultCode;
}
// You can call the interface after downloading and installing successfully
if ((availability == HwHiAIResultCode.AIRESULT_SUCCESS)
|| (installation == HwHiAIResultCode.AIRESULT_SUCCESS)) {
// Load model and resources
mPoseEstimationDetector.prepare();
// Skeleton point result returned
List<BodySkeletons> mBodySkeletons = new ArrayList<>();
// The run method is called synchronously. At present, the maximum interface run time is 70 ms, and it is recommended to use another thread to call every frame
// After detect, bitmap will be released
int resultCode = mPoseEstimationDetector.detect(image, mBodySkeletons, null);
Toast.makeText(getApplicationContext(),"resultCode: " + resultCode,Toast.LENGTH_LONG).show();
// Draw a point
if (mBodySkeletons.size() != 0) {
drawPointNew(mBodySkeletons, bitmap2);
mImageView.setImageBitmap(bitmap2);
}
// Release engine
mPoseEstimationDetector.release();
}
}
public static class LoadPluginCallback extends ILoadPluginCallback.Stub {
private int mResultCode = HwHiAIResultCode.AIRESULT_UNKOWN;
private Lock mLock;
private Condition mCondition;
LoadPluginCallback(Lock lock, Condition condition) {
mLock = lock;
mCondition = condition;
}
@Override
public void onResult(int resultCode) throws RemoteException {
Log.d("SkeletonPoint", "LoadPluginCallback, onResult: " + resultCode);
mResultCode = resultCode;
mLock.lock();
try {
mCondition.signalAll();
} finally {
mLock.unlock();
}
}
@Override
public void onProgress(int i) throws RemoteException {
}
}
private void drawPointNew(List<BodySkeletons> poseEstimationMulPeopleSkeletons, Bitmap bmp) {
if ((poseEstimationMulPeopleSkeletons == null)
|| (poseEstimationMulPeopleSkeletons.size() < 1)) {
return;
}
int humanNum = poseEstimationMulPeopleSkeletons.size();
int points = 14;
int size = humanNum * points;
int[] xArr = new int[size];
int[] yArr = new int[size];
for (int j = 0; (j < humanNum) && (j < 6); j++) {
for (int i = 0; i < points; i++) {
xArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).x);
yArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).y);
}
}
Paint p = new Paint();
p.setStyle(Paint.Style.FILL_AND_STROKE);
p.setStrokeWidth(5);
p.setColor(Color.GREEN);
Canvas canvas = new Canvas(bmp);
int len = xArr.length;
int[] color = {0xFF000000, 0xFF444444, 0xFF888888, 0xFFCCCCCC, 0xFFFF0000, 0xFF00FF00, 0xFF0000FF,
0xFFFFFF00, 0xFF00FFFF, 0xFFFF00FF, 0xFF8800FF, 0xFF4400FF, 0xFFFFDDDD};
p.setColor(color[4]);
for (int i = 0; i < len; i++) {
canvas.drawCircle(xArr[i], yArr[i], 10, p);
}
for (int i = 0; i < humanNum; i++) {
int j = 0;
p.setColor(color[j++]);
if ((xArr[0+points*i]>0) &&(yArr[0+points*i]>0)&&(xArr[1+points*i]>0)&&(yArr[1+points*i]>0)) {
canvas.drawLine(xArr[0+points*i], yArr[0+points*i], xArr[1+points*i], yArr[1+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[2+points*i]>0)&&(yArr[2+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[2+points*i], yArr[2+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[2+points*i]>0)&&(yArr[2+points*i]>0)&&(xArr[3+points*i]>0)&&(yArr[3+points*i]>0)) {
canvas.drawLine(xArr[2+points*i], yArr[2+points*i], xArr[3+points*i], yArr[3+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[3+points*i]>0)&&(yArr[3+points*i]>0)&&(xArr[4+points*i]>0)&&(yArr[4+points*i]>0)) {
canvas.drawLine(xArr[3+points*i], yArr[3+points*i], xArr[4+points*i], yArr[4+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[5+points*i]>0)&&(yArr[5+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[5+points*i], yArr[5+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[5+points*i]>0)&&(yArr[5+points*i]>0)&&(xArr[6+points*i]>0)&&(yArr[6+points*i]>0)) {
canvas.drawLine(xArr[5+points*i], yArr[5+points*i], xArr[6+points*i], yArr[6+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[6+points*i]>0)&&(yArr[6+points*i]>0)&&(xArr[7+points*i]>0)&&(yArr[7+points*i]>0)) {
canvas.drawLine(xArr[6+points*i], yArr[6+points*i], xArr[7+points*i], yArr[7+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[8+points*i]>0)&&(yArr[8+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[8+points*i], yArr[8+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[8+points*i]>0)&&(yArr[8+points*i]>0)&&(xArr[9+points*i]>0)&&(yArr[9+points*i]>0)) {
canvas.drawLine(xArr[8+points*i], yArr[8+points*i], xArr[9+points*i], yArr[9+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[9+points*i]>0)&&(yArr[9+points*i]>0)&&(xArr[10+points*i]>0)&&(yArr[10+points*i]>0)) {
canvas.drawLine(xArr[9+points*i], yArr[9+points*i], xArr[10+points*i], yArr[10+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[11+points*i]>0)&&(yArr[11+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[11+points*i], yArr[11+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[11+points*i]>0)&&(yArr[11+points*i]>0)&&(xArr[12+points*i]>0)&&(yArr[12+points*i]>0)) {
canvas.drawLine(xArr[11+points*i], yArr[11+points*i], xArr[12+points*i], yArr[12+points*i], p);
}
p.setColor(color[j]);
if ((xArr[12+points*i]>0)&&(yArr[12+points*i]>0)&&(xArr[13+points*i]>0)&&(yArr[13+points*i]>0)) {
canvas.drawLine(xArr[12+points*i], yArr[12+points*i], xArr[13+points*i], yArr[13+points*i], p);
}
}
}
}
Result
Tips and Tricks
This API provides optimal detection results when no more than three portraits appear in the image.
This API works better when the proportion of a portrait in an image is high.
At least four skeletal features of the upper part of the body are required for reliable recognition results.
If you are capturing video from the camera or picking images from the gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
Latest HMS Core APK is required.
The minimum SDK is 21; otherwise you will get a manifest merge issue.
Conclusion
In this article, we have learnt what pose estimation is and how to integrate it using Huawei HiAI in Android with Java. In the example, we were able to detect the skeleton in an image, including the head, neck, elbows, knees, and ankles.
Reference
Pose Estimation
Apply for Huawei HiAI
