Introduction of Pose estimation using Huawei HiAI Engine in Android - Huawei Developers

Introduction
In this article, we will learn how to perform human skeletal detection.
The key skeletal features are important for describing human posture and predicting human behavior. Therefore, recognition of key skeletal features is the basis for a diversity of computer vision tasks, such as motion categorization, abnormal behavior detection, and auto-navigation. In recent years, deep learning technology has greatly improved skeletal feature recognition, which is now widely applied in domains relating to computer vision.
Pose estimation mainly detects key human body features such as joints and facial features, and provides skeletal information based on such features.
If a portrait image is input, users will obtain the coordinates of 14 key skeletal features for each portrait in it. The algorithm supports real-time processing and returns the result within 70 ms. The result presents posture information for the head, neck, right and left shoulders, right and left elbows, right and left wrists, right and left hips, right and left knees, and right and left ankles.
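For reference, the sketch below shows how the returned coordinates can be read once detection has run. This is a minimal sketch: the index-to-joint mapping is an assumption inferred from the line-drawing code later in this article, and BodySkeletons with its getPosition() accessor is used exactly as in the full example below.
Code:
// Hedged sketch: log the 14 keypoints for each detected person.
// The joint order below (head, neck, right arm, left arm, right leg, left leg)
// is inferred from the drawing code in this article and may differ per SDK version.
private void logSkeletons(List<BodySkeletons> skeletons) {
    String[] joints = {"head", "neck",
            "right shoulder", "right elbow", "right wrist",
            "left shoulder", "left elbow", "left wrist",
            "right hip", "right knee", "right ankle",
            "left hip", "left knee", "left ankle"};
    for (int person = 0; person < skeletons.size(); person++) {
        for (int i = 0; i < joints.length; i++) {
            float x = (float) skeletons.get(person).getPosition().get(i).x;
            float y = (float) skeletons.get(person).getPosition().get(i).y;
            Log.d("SkeletonPoint", "person " + person + " " + joints[i]
                    + ": (" + x + ", " + y + ")");
        }
    }
}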
How to integrate Pose Estimation
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the jar file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.RemoteException;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hiai.pdk.pluginservice.ILoadPluginCallback;
import com.huawei.hiai.pdk.resultcode.HwHiAIResultCode;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.detector.PoseEstimationDetector;
import com.huawei.hiai.vision.visionkit.image.detector.BodySkeletons;
import com.huawei.hiai.vision.visionkit.image.detector.PeConfiguration;
import com.huawei.hiai.vision.visionkit.text.config.VisionTextConfiguration;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class MainActivity extends AppCompatActivity {
private Object mWaitResult = new Object(); // Lock object used to wait for the service-bind callback
private ImageView mImageView;
private ImageView yogaPose;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mImageView = (ImageView) findViewById(R.id.skeleton_img);
yogaPose = (ImageView) findViewById(R.id.yogaPose);
//The application needs to bind the CV service first, and monitor whether the service is successfully connected
VisionBase.init(getApplicationContext(), new ConnectionCallback() {
public void onServiceConnect() { // Listen to the message that the service is successfully bound
Log.d("SkeletonPoint", "HwVisionManager onServiceConnect OK.");
Toast.makeText(getApplicationContext(),"Service binding successfully!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
doSkeletonPoint();
}
}
public void onServiceDisconnect() { // Listen to the message that the binding service failed
Log.d("SkeletonPoint", "HwVisionManager onServiceDisconnect OK.");
Toast.makeText(getApplicationContext(),"Service binding failed!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
}
}
});
}
@Override
protected void onResume() {
super.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
}
private void doSkeletonPoint() {
// Declare the skeleton detection interface object, and set the plug-in to cross-process mode MODE_OUT (also can be set to the same process mode MODE_IN)
PoseEstimationDetector mPoseEstimationDetector = new PoseEstimationDetector(MainActivity.this);
PeConfiguration config = new PeConfiguration.Builder()
.setProcessMode(VisionTextConfiguration.MODE_OUT)
.build();
mPoseEstimationDetector.setConfiguration(config);
// Currently, the skeleton detection interface accepts input as Bitmap, which is encapsulated into VisionImage. Video streaming will be supported in the future
Bitmap bitmap = null;
VisionImage image = null;
// TODO: Developers need to create a Bitmap here
BufferedInputStream bis = null;
try {
bis = new BufferedInputStream(getAssets().open("0.jpg"));
} catch (IOException e) {
Log.d("SkeletonPoint", e.toString());
Toast.makeText(getApplicationContext(), e.toString(),Toast.LENGTH_LONG).show();
}
bitmap = BitmapFactory.decodeStream(bis);
yogaPose.setImageBitmap(bitmap);
Bitmap bitmap2 = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
image = VisionImage.fromBitmap(bitmap);
// Query whether the capability supports the installation of plug-ins at the same time. getAvailability() returns -6 to indicate that the current engine supports this ability, but the plug-in needs to be downloaded and installed on the cloud side
int availability = mPoseEstimationDetector.getAvailability();
int installation = HwHiAIResultCode.AIRESULT_UNSUPPORTED; // Indicates that it does not support
if (availability == -6) {
Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();
LoadPluginCallback cb = new LoadPluginCallback(lock, condition);
// Download and install the plugin
mPoseEstimationDetector.loadPlugin(cb);
lock.lock();
try {
condition.await(90, TimeUnit.SECONDS);
} catch (InterruptedException e) {
Log.e("SkeletonPoint", e.getMessage());
} finally {
lock.unlock();
}
installation = cb.mResultCode;
}
// You can call the interface after downloading and installing successfully
if ((availability == HwHiAIResultCode.AIRESULT_SUCCESS)
|| (installation == HwHiAIResultCode.AIRESULT_SUCCESS)) {
// Load model and resources
mPoseEstimationDetector.prepare();
// Skeleton point result returned
List<BodySkeletons> mBodySkeletons = new ArrayList<>();
// The run method is called synchronously. At present, the maximum interface run time is 70 ms, and it is recommended to use another thread to call every frame
// After detect, bitmap will be released
int resultCode = mPoseEstimationDetector.detect(image, mBodySkeletons, null);
Toast.makeText(getApplicationContext(),"resultCode: " + resultCode,Toast.LENGTH_LONG).show();
// Draw a point
if (mBodySkeletons.size() != 0) {
drawPointNew(mBodySkeletons, bitmap2);
mImageView.setImageBitmap(bitmap2);
}
// Release engine
mPoseEstimationDetector.release();
}
}
public static class LoadPluginCallback extends ILoadPluginCallback.Stub {
private int mResultCode = HwHiAIResultCode.AIRESULT_UNKOWN;
private Lock mLock;
private Condition mCondition;
LoadPluginCallback(Lock lock, Condition condition) {
mLock = lock;
mCondition = condition;
}
@Override
public void onResult(int resultCode) throws RemoteException {
Log.d("SkeletonPoint", "LoadPluginCallback, onResult: " + resultCode);
mResultCode = resultCode;
mLock.lock();
try {
mCondition.signalAll();
} finally {
mLock.unlock();
}
}
@Override
public void onProgress(int i) throws RemoteException {
}
}
private void drawPointNew(List<BodySkeletons> poseEstimationMulPeopleSkeletons, Bitmap bmp) {
if ((poseEstimationMulPeopleSkeletons == null)
|| (poseEstimationMulPeopleSkeletons.size() < 1)) {
return;
}
int humanNum = poseEstimationMulPeopleSkeletons.size();
int points = 14;
int size = humanNum * points;
int[] xArr = new int[size];
int[] yArr = new int[size];
for (int j = 0; (j < humanNum) && (j < 6); j++) {
for (int i = 0; i < points; i++) {
xArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).x);
yArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).y);
}
}
Paint p = new Paint();
p.setStyle(Paint.Style.FILL_AND_STROKE);
p.setStrokeWidth(5);
p.setColor(Color.GREEN);
Canvas canvas = new Canvas(bmp);
int len = xArr.length;
int[] color = {0xFF000000, 0xFF444444, 0xFF888888, 0xFFCCCCCC, 0xFFFF0000, 0xFF00FF00, 0xFF0000FF,
0xFFFFFF00, 0xFF00FFFF, 0xFFFF00FF, 0xFF8800FF, 0xFF4400FF, 0xFFFFDDDD};
p.setColor(color[4]);
for (int i = 0; i < len; i++) {
canvas.drawCircle(xArr[i], yArr[i], 10, p);
}
for (int i = 0; i < humanNum; i++) {
int j = 0;
p.setColor(color[j++]);
if ((xArr[0+points*i]>0) &&(yArr[0+points*i]>0)&&(xArr[1+points*i]>0)&&(yArr[1+points*i]>0)) {
canvas.drawLine(xArr[0+points*i], yArr[0+points*i], xArr[1+points*i], yArr[1+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[2+points*i]>0)&&(yArr[2+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[2+points*i], yArr[2+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[2+points*i]>0)&&(yArr[2+points*i]>0)&&(xArr[3+points*i]>0)&&(yArr[3+points*i]>0)) {
canvas.drawLine(xArr[2+points*i], yArr[2+points*i], xArr[3+points*i], yArr[3+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[3+points*i]>0)&&(yArr[3+points*i]>0)&&(xArr[4+points*i]>0)&&(yArr[4+points*i]>0)) {
canvas.drawLine(xArr[3+points*i], yArr[3+points*i], xArr[4+points*i], yArr[4+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[5+points*i]>0)&&(yArr[5+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[5+points*i], yArr[5+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[5+points*i]>0)&&(yArr[5+points*i]>0)&&(xArr[6+points*i]>0)&&(yArr[6+points*i]>0)) {
canvas.drawLine(xArr[5+points*i], yArr[5+points*i], xArr[6+points*i], yArr[6+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[6+points*i]>0)&&(yArr[6+points*i]>0)&&(xArr[7+points*i]>0)&&(yArr[7+points*i]>0)) {
canvas.drawLine(xArr[6+points*i], yArr[6+points*i], xArr[7+points*i], yArr[7+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[8+points*i]>0)&&(yArr[8+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[8+points*i], yArr[8+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[8+points*i]>0)&&(yArr[8+points*i]>0)&&(xArr[9+points*i]>0)&&(yArr[9+points*i]>0)) {
canvas.drawLine(xArr[8+points*i], yArr[8+points*i], xArr[9+points*i], yArr[9+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[9+points*i]>0)&&(yArr[9+points*i]>0)&&(xArr[10+points*i]>0)&&(yArr[10+points*i]>0)) {
canvas.drawLine(xArr[9+points*i], yArr[9+points*i], xArr[10+points*i], yArr[10+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[11+points*i]>0)&&(yArr[11+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[11+points*i], yArr[11+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[11+points*i]>0)&&(yArr[11+points*i]>0)&&(xArr[12+points*i]>0)&&(yArr[12+points*i]>0)) {
canvas.drawLine(xArr[11+points*i], yArr[11+points*i], xArr[12+points*i], yArr[12+points*i], p);
}
p.setColor(color[j]);
if ((xArr[12+points*i]>0)&&(yArr[12+points*i]>0)&&(xArr[13+points*i]>0)&&(yArr[13+points*i]>0)) {
canvas.drawLine(xArr[12+points*i], yArr[12+points*i], xArr[13+points*i], yArr[13+points*i], p);
}
}
}
}
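As the comment in doSkeletonPoint() notes, detect() is synchronous and can take up to 70 ms per call, so for per-frame processing it should run off the UI thread. Below is a minimal sketch of that pattern; it assumes mPoseEstimationDetector, image, and bitmap2 are prepared exactly as in doSkeletonPoint() above.
Code:
// Hedged sketch: run the synchronous detect() call on a worker thread and
// post the drawing back to the UI thread. The detector, image, and bitmap2
// are assumed to be set up as in doSkeletonPoint().
new Thread(() -> {
    List<BodySkeletons> skeletons = new ArrayList<>();
    int resultCode = mPoseEstimationDetector.detect(image, skeletons, null);
    runOnUiThread(() -> {
        if (resultCode == HwHiAIResultCode.AIRESULT_SUCCESS && !skeletons.isEmpty()) {
            drawPointNew(skeletons, bitmap2);
            mImageView.setImageBitmap(bitmap2);
        }
    });
}).start();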
Result
Tips and Tricks
This API provides optimal detection results when no more than three portraits appear in the image.
This API works better when the proportion of a portrait in an image is high.
At least four skeletal features of the upper part of the body are required for reliable recognition results.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that dependencies are added properly.
Latest HMS Core APK is required.
Min SDK is 21; otherwise, you will get a manifest merge issue.
Conclusion
In this article, we have learnt what pose estimation is and how to integrate pose estimation using Huawei HiAI in Android with Java. We were able to detect the skeleton in the example image, including the head, neck, elbows, knees, and ankles.
Reference
Pose Estimation
Apply for Huawei HiAI

Related

Intermediate: How to extract table information from Images using Huawei HiAI Table Recognition service in Android

Introduction
In this article, we will learn how to implement the Huawei HiAI kit using the Table Recognition service in an Android application. This service helps us extract table content from images.
The table recognition algorithm is based on the line structure of the table. Clear and detectable lines are necessary for the proper identification of cells.
Use case: Imagine you have lots of paperwork and documents containing tables, and you would like to manipulate that data. Conventionally, you would copy them manually or generate Excel files for third-party apps.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Required EMUI 9.0.0 and later version devices.
6. Required processors: Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full.
Features
1. Restores table information, including text in the cells, and identifies merged cells as well.
2. Fast recognition: returns the text of a cell containing 50 lines within 3 seconds.
3. Recognition accuracy level > 85%.
4. Recall rate > 80%.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under root folder.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like Product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip the downloaded SDK and add it to your Android project under the libs folder.
7. Add the jar file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
8. After completing the above setup, sync your Gradle file.
Let’s do code
I have created a project in Android Studio with an empty activity. Let's start coding.
In MainActivity.java we can create the business logic.
Code:
import android.Manifest;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.sr.ImageSuperResolution;
import com.huawei.hiai.vision.text.TableDetector;
import com.huawei.hiai.vision.visionkit.text.config.VisionTableConfiguration;
import com.huawei.hiai.vision.visionkit.text.table.Table;
import com.huawei.hiai.vision.visionkit.text.table.TableCell;
import com.huawei.hiai.vision.visionkit.text.table.TableContent;
import java.util.List;
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't support HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
if (isConnection) {
setTableAI();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setTableAI() {
textView.setText("Extraction Table Text");
contentText.setVisibility(View.VISIBLE);
TableDetector mTableDetector = new TableDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
.setAppType(VisionTableConfiguration.APP_NORMAL)
.setProcessMode(VisionTableConfiguration.MODE_OUT)
.build();
mTableDetector.setVisionConfiguration(mTableConfig);
mTableDetector.prepare();
Table table = new Table();
int mResult_code = mTableDetector.detect(image, table, null);
if (mResult_code == 0) {
int count = table.getTableCount();
List<TableContent> tc = table.getTableContent();
StringBuilder sbTableCell = new StringBuilder();
List<TableCell> tableCell = tc.get(0).getBody();
for (TableCell c : tableCell) {
List<String> words = c.getWord();
StringBuilder sb = new StringBuilder();
for (String s : words) {
sb.append(s).append(",");
}
String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
c.getEndColumn() + "; " + sb.toString();
sbTableCell.append(cell).append("\n");
contentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
}
}
}
}
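If you need the extracted content in a portable form, the recognized cells can be flattened further. The helper below is a minimal sketch using the same Table/TableCell API as setTableAI() above; it assumes getStartRow() returns an int row index (as its use above suggests), requires android.text.TextUtils, and does not re-lay-out merged cells.
Code:
// Hedged helper: flatten the first recognized table into comma-separated rows.
// Merged cells are emitted in place; getStartRow() is assumed to be an int.
private String tableToCsv(Table table) {
    StringBuilder csv = new StringBuilder();
    int currentRow = -1;
    for (TableCell cell : table.getTableContent().get(0).getBody()) {
        if (cell.getStartRow() != currentRow) {
            if (currentRow != -1) {
                csv.append("\n"); // a new row begins
            }
            currentRow = cell.getStartRow();
        } else {
            csv.append(",");
        }
        csv.append(TextUtils.join(" ", cell.getWord()));
    }
    return csv.toString();
}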
Demo
Tips and Tricks
1. Download the latest Huawei HiAI SDK.
2. Set the minimum SDK version to 23 or later.
3. Do not forget to add the jar files to the gradle file.
4. It supports images of slides.
5. Input resolution should be larger than 720p, with an aspect ratio smaller than 2:1.
6. It supports only printed text; images, formulas, handwritten content, seals, and watermarks cannot be identified.
7. Refer this URL for supported Countries/Regions list.
Conclusion
That's it! Now your table content is extracted from the image, for further analysis with statistics or just for editing. This works for tables with clear and simple structure information.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Table Recognition Kit URL
Original Source

Intermediate: Filter Pets by Scene Detection Using Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei scene detection using Huawei HiAI. We will build a pet store app where we can sell pets online and filter pets by scene detection using Huawei HiAI.
What is Scene Detection?
Scene detection can quickly classify images by identifying the type of scene to which the image content belongs, such as animals, green plants, food, buildings, and automobiles. Scene detection can also add smart classification labels to images, facilitating smart album generation and category-based image management.
Features
Fast: This algorithm is currently developed based on the deep neural network, to fully utilize the neural processing unit (NPU) of Huawei mobile phones to accelerate the neural network, achieving an acceleration of over 10 times.
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Abundant: Scene detection can identify 103 scenarios such as Cat, Dog, Snow, Cloudy sky, Beach, Greenery, Document, Stage, Fireworks, Food, Sunset, Blue sky, Flowers, Night, Bicycle, Historical buildings, Panda, Car, and Autumn leaves. The detection average accuracy is over 95% and the average recall rate is over 85% (lab data).
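Each detected scene is reported as a numeric type code inside the result JSON. The parsing sketch below is a minimal example; the result format {"resultCode":0,"scene":"{\"type\":13}"} is taken from a comment in the filtering code later in this article, where type 13 is handled as a dog scene.
Code:
// Hedged sketch: extract the scene type code from the detector's JSON result.
// Format assumed from the sample {"resultCode":0,"scene":"{\"type\":13}"}
// shown later in this article. Uses org.json.JSONObject/JSONException.
private int parseSceneType(JSONObject result) throws JSONException {
    JSONObject scene = new JSONObject(result.getString("scene"));
    return scene.getInt("type");
}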
How to integrate Scene Detection
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development, and click HUAWEI HiAI.
Step 2: Click Apply for the HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click the Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the jar file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- CAMERA -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Step 4: Build application.
First, request runtime permissions.
Code:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
Initialize the vision base.
Code:
private void initVisionBase() {
VisionBase.init(SceneDetectionActivity.this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
//This callback method is called when the connection to the service is successful.
//Here you can initialize the detector class, mark the service connection status, and more.
Log.i(LOG, "onServiceConnect ");
Toast.makeText(SceneDetectionActivity.this, "Service Connected", Toast.LENGTH_SHORT).show();
}
@Override
public void onServiceDisconnect() {
//This callback method is called when disconnected from the service.
//You can choose to reconnect here or to handle exceptions.
Log.i(LOG, "onServiceDisconnect");
Toast.makeText(SceneDetectionActivity.this, "Service Disconnected", Toast.LENGTH_SHORT).show();
}
});
}
Build an AsyncTask class for scene detection.
Code:
class SceneDetectionAsync extends AsyncTask<Bitmap, Void, JSONObject> {
@Override
protected JSONObject doInBackground(Bitmap... bitmaps) {
//Bitmap bitmap = BitmapFactory.decodeFile(imgPath);//Obtain the Bitmap image. (Note that the Bitmap must be in the ARGB8888 format, that is, bitmap.getConfig() == Bitmap.Config.ARGB8888.)
Frame frame = new Frame();//Construct the Frame object
frame.setBitmap(bitmaps[0]);
SceneDetector sceneDetector = new SceneDetector(SceneDetectionActivity.this);//Construct Detector.
JSONObject jsonScene = sceneDetector.detect(frame, null);//Perform scene detection.
Scene sc = sceneDetector.convertResult(jsonScene);//Obtain the Java class result.
if (sc != null) {
int type = sc.getType();//Obtain the identified scene type.
Log.d(LOG, "Type:" + type);
}
Log.d(LOG, "Json data:" + jsonScene.toString());
return jsonScene;
}
@Override
protected void onPostExecute(JSONObject data) {
super.onPostExecute(data);
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
adapter = new MyListAdapter(getPetsFilteredDataList(data));
recyclerView.setAdapter(adapter);
Toast.makeText(SceneDetectionActivity.this, "Data filtered successfully", Toast.LENGTH_SHORT).show();
}
}
Show the select-image dialog.
Code:
private void selectImage() {
try {
PackageManager pm = getPackageManager();
int hasPerm = pm.checkPermission(Manifest.permission.CAMERA, getPackageName());
if (hasPerm == PackageManager.PERMISSION_GRANTED) {
final CharSequence[] options = {"Take Photo", "Choose From Gallery", "Cancel"};
androidx.appcompat.app.AlertDialog.Builder builder = new androidx.appcompat.app.AlertDialog.Builder(this);
builder.setTitle("Select Option");
builder.setItems(options, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int item) {
if (options[item].equals("Take Photo")) {
dialog.dismiss();
fileUri = getOutputMediaFileUri();
Log.d(LOG, "end get uri = " + fileUri);
Intent i = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
i.putExtra(MediaStore.EXTRA_OUTPUT, fileUri);
startActivityForResult(i, REQUEST_IMAGE_TAKE);
} else if (options[item].equals("Choose From Gallery")) {
dialog.dismiss();
Intent i = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(i, REQUEST_IMAGE_SELECT);
} else if (options[item].equals("Cancel")) {
dialog.dismiss();
}
}
});
builder.show();
} else
Toast.makeText(this, "Camera Permission error", Toast.LENGTH_SHORT).show();
} catch (Exception e) {
Toast.makeText(this, "Camera Permission error", Toast.LENGTH_SHORT).show();
e.printStackTrace();
}
}
/**
* Create a file Uri for saving an image or video
*/
private Uri getOutputMediaFileUri() {
//return Uri.fromFile(getOutputMediaFile(type));
Log.d(LOG, "authority = " + getPackageName() + ".provider");
Log.d(LOG, "getApplicationContext = " + getApplicationContext());
return FileProvider.getUriForFile(this, getPackageName() + ".fileprovider", getOutputMediaFile());
}
/**
* Create a File for saving an image
*/
private static File getOutputMediaFile() {
File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), "LabelDetect");
// Create the storage directory if it does not exist
if (!mediaStorageDir.exists()) {
if (!mediaStorageDir.mkdirs()) {
Log.d(LOG, "failed to create directory");
return null;
}
}
// Create a media file name
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
File mediaFile = new File(mediaStorageDir.getPath() + File.separator +
"IMG_" + timeStamp + ".jpg");
Log.d(LOG, "mediaFile " + mediaFile);
return mediaFile;
}
When the user selects an image, start detection.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if ((requestCode == REQUEST_IMAGE_TAKE || requestCode == REQUEST_IMAGE_SELECT) && resultCode == RESULT_OK) {
String imgPath;
if (requestCode == REQUEST_IMAGE_TAKE) {
imgPath = Environment.getExternalStorageDirectory() + fileUri.getPath();
} else {
Uri selectedImage = data.getData();
String[] filePathColumn = {MediaStore.Images.Media.DATA};
Cursor cursor = SceneDetectionActivity.this.getContentResolver().query(selectedImage,
filePathColumn, null, null, null);
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
imgPath = cursor.getString(columnIndex);
cursor.close();
}
Log.d(LOG, "imgPath = " + imgPath);
bmp = BitmapFactory.decodeFile(imgPath);
if (bmp != null) {
//Toast.makeText(this, "Bit map is not null", Toast.LENGTH_SHORT).show();
dialog = ProgressDialog.show(SceneDetectionActivity.this,
"Predicting...", "Wait for one sec...", true);
SceneDetectionAsync async = new SceneDetectionAsync();
async.execute(bmp);
} else {
Toast.makeText(this, "Bit map is null", Toast.LENGTH_SHORT).show();
}
}
super.onActivityResult(requestCode, resultCode, data);
}
Data set
Code:
private MyListData[] getPetsList() {
MyListData[] listData = new MyListData[]{
new MyListData("Labrador Retriever", "20000INR", "Age: 1yr", R.drawable.labrador_retriever),
new MyListData("Bengal Cat", "8000INR", "Age: 1 month", R.drawable.bengal_cat),
new MyListData("Parrot", "2500INR", "Age: 3months", R.drawable.parrot),
new MyListData("Rabbit", "1500INR", "Age: 1 month", R.drawable.rabbit_image),
new MyListData("Beagle", "20500INR", "Age:6months", R.drawable.beagle),
new MyListData("Bulldog", "19000INR", "1yr", R.drawable.bulldog),
new MyListData("German Shepherd", "18000INR", "Age: 2yr", R.drawable.german_shepherd_dog),
new MyListData("German Shorthaired Pointer", "20000INR", "Age: 8 months", R.drawable.german_shorthaired_pointer),
new MyListData("Golder retriever", "12000INR", "Age: 7months", R.drawable.golden_retriever),
new MyListData("Pembroke Welsh corgi", "9000INR", "Age: 10months", R.drawable.pembroke_welsh_corgi),
new MyListData("Pomeranian", "25000INR", "Age: 10months", R.drawable.pomeranian),
new MyListData("Poodle", "15000INR", "Age: 3months", R.drawable.poodle),
new MyListData("Rottweiler", "1700INR", "Age:2yr", R.drawable.rottweiler),
new MyListData("Shihtzu", "18000INR", "Age: 5months", R.drawable.shih_tzu),
};
return listData;
}
private MyListData[] getPetsFilteredDataList(JSONObject jsonObject) {
MyListData[] listData = null;
try {
//{"resultCode":0,"scene":"{\"type\":13}"}
String scene = jsonObject.getString("scene");
JSONObject object = new JSONObject(scene);
int type = object.getInt("type");
switch (type) {
case 1:
break;
case 12:
//Get Cats filtered data here
break;
case 13:
listData = getDogsData();
break;
}
} catch (JSONException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
return listData;
}
private MyListData[] getDogsData() {
MyListData[] dogsList = new MyListData[]{
new MyListData("Labrador Retriever", "20000INR", "Age: 1yr", R.drawable.labrador_retriever),
new MyListData("Beagle", "20500INR", "Age:6months", R.drawable.beagle),
new MyListData("Bulldog", "19000INR", "1yr", R.drawable.bulldog),
new MyListData("German Shepherd", "18000INR", "Age: 2yr", R.drawable.german_shepherd_dog),
new MyListData("German Shorthaired Pointer", "20000INR", "Age: 8 months", R.drawable.german_shorthaired_pointer),
new MyListData("Golder retriever", "12000INR", "Age: 7months", R.drawable.golden_retriever),
new MyListData("Pembroke Welsh corgi", "9000INR", "Age: 10months", R.drawable.pembroke_welsh_corgi),
new MyListData("Pomeranian", "25000INR", "Age: 10months", R.drawable.pomeranian),
new MyListData("Poodle", "15000INR", "Age: 3months", R.drawable.poodle),
new MyListData("Rottweiler", "1700INR", "Age:2yr", R.drawable.rottweiler),
new MyListData("Shihtzu", "18000INR", "Age: 5months", R.drawable.shih_tzu),
};
return dogsList;
}
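MyListData itself is not defined in this article; a minimal model class matching the constructor calls above might look like the sketch below (field and getter names are assumptions).
Code:
// Hypothetical model class; the real project's MyListData may differ.
public class MyListData {
    private final String name;
    private final String price;
    private final String age;
    private final int imageResId; // drawable resource id

    public MyListData(String name, String price, String age, int imageResId) {
        this.name = name;
        this.price = price;
        this.age = age;
        this.imageResId = imageResId;
    }

    public String getName() { return name; }
    public String getPrice() { return price; }
    public String getAge() { return age; }
    public int getImageResId() { return imageResId; }
}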
Result
Before filter.
After filter.
Tips and Tricks
Check that dependencies are added properly.
Latest HMS Core APK is required.
Min SDK is 21; otherwise, you will get a manifest merge issue.
Run detect() on a background thread; otherwise, the app will crash.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
If the device does not support the service, you will get result code 601.
Maximum image size is 20 MB.
Conclusion
In this article, we have learnt the following concepts.
What is Scene detection?
Features of scene detection
How to integrate scene detection using Huawei HiAI
How to apply for Huawei HiAI
How to build the application
How to filter data by scene
Reference
Scene detection
Apply for Huawei HiAI

Share Educational Training Video Summary on Social Media by Huawei Video Summarization using Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei video summarization using Huawei HiAI. We will build a video preview maker application so the summary can be shared on social media to increase your video views.
What is Video summarization?
In general, video summarization is the process of distilling a raw video into a more compact form without losing much information.
This service can generate a 10-second, 15-second, or 30-second video summary of a single video or multiple videos, containing the original voice.
Note: Total video length should not exceed 10 minutes.
Implementing an advanced multi-dimensional scoring framework, the aesthetic engine assists with shooting, photo selection, video editing, and video splitting by comprehending complex subjective aspects of images and making high-level judgments about their attractiveness, memorability, and engaging nature.
Features
Fast: This algorithm is currently developed based on the deep neural network, to fully utilize the neural processing unit (NPU) of Huawei mobile phones to accelerate the neural network, achieving an acceleration of over 10 times.
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Comprehensive scoring: The aesthetic engine provides scoring to measure image quality from objective dimensions (image quality), subjective dimensions (sensory evaluation), and photographic dimensions (rule evaluation).
Portrait aesthetics scoring: An industry-leading portrait aesthetics scoring feature obtains semantic information about human bodies in the image, including the number of people, individual body builds, positions, postures, facial positions and angles, eye movements, mouth movements, and facial expressions. Aesthetic scores of the portrait are given according to the various types of the body semantic information.
How to integrate Video Summarization
Configure the application on the AGC.
Apply for HiAI Engine Library
Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the jar file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- CAMERA -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Step 4: Build application.
First, request runtime permissions.
Code:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
Initialize the vision base.
Code:
private void initVisionBase() {
VisionBase.init(VideoSummaryActivity.this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.i(LOG, "onServiceConnect ");
Toast.makeText(VideoSummaryActivity.this, "Service Connected", Toast.LENGTH_SHORT).show();
}
@Override
public void onServiceDisconnect() {
Log.i(LOG, "onServiceDisconnect");
Toast.makeText(VideoSummaryActivity.this, "Service Disconnected", Toast.LENGTH_SHORT).show();
}
});
}
Create the video AsyncTask class.
Code:
public class VideoAsyncTask extends AsyncTask<String, Void, String> {
private static final String LOG = VideoAsyncTask.class.getSimpleName();
private Context context;
private VideoCoverListener listener;
private AestheticsScoreDetector aestheticsScoreDetector;
public VideoAsyncTask(Context context, VideoCoverListener listener) {
this.context = context;
this.listener = listener;
}
@Override
protected String doInBackground(String... paths) {
Log.i(LOG, "init VisionBase");
VisionBase.init(context, ConnectManager.getInstance().getmConnectionCallback()); //try to start AIEngine
if (!ConnectManager.getInstance().isConnected()) { //wait for AIEngine service
ConnectManager.getInstance().waitConnect();
}
Log.i(LOG, "init videoCover");
aestheticsScoreDetector = new AestheticsScoreDetector(context);
AEModelConfiguration aeModelConfiguration;
aeModelConfiguration = new AEModelConfiguration();
aeModelConfiguration.getSummerizationConf().setSummerizationMaxLen(10);
aeModelConfiguration.getSummerizationConf().setSummerizationMinLen(2);
aestheticsScoreDetector.setAEModelConfiguration(aeModelConfiguration);
String videoResult = null;
if (listener.isAsync()) {
videoCoverAsync(paths);
videoResult = "-10000";
} else {
videoResult = videoCover(paths);
aestheticsScoreDetector.release();
}
//release engine after detect finished
return videoResult;
}
@Override
protected void onPostExecute(String resultScore) {
if (!resultScore.equals("-10000")) {
listener.onTaskCompleted(resultScore, false);
}
super.onPostExecute(String.valueOf(resultScore));
}
private String videoCover(String[] videoPaths) {
if (videoPaths == null) {
Log.e(LOG, "uri is null ");
return null;
}
JSONObject jsonObject = new JSONObject();
int position = 0;
Video[] videos = new Video[videoPaths.length];
for (String path : videoPaths) {
Video video = new Video();
video.setPath(path);
videos[position++] = video;
}
jsonObject = aestheticsScoreDetector.getVideoSummerization(videos, null);
if (jsonObject == null) {
Log.e(LOG, "return JSONObject is null");
return "return JSONObject is null";
}
if (!jsonObject.optString("resultCode").equals("0")) {
Log.e(LOG, "return JSONObject resultCode is not 0");
return jsonObject.optString("resultCode");
}
Log.d(LOG, "videoCover get score end");
AEVideoResult aeVideoResult = aestheticsScoreDetector.convertVideoSummaryResult(jsonObject);
if (null == aeVideoResult) {
Log.e(LOG, "aeVideoResult is null ");
return null;
}
String result = new Gson().toJson(aeVideoResult, AEVideoResult.class);
return result;
}
private void videoCoverAsync(String[] videoPaths) {
if (videoPaths == null) {
Log.e(LOG, "uri is null ");
return;
}
Log.d(LOG, "runVisionService " + "start get score");
CVisionCallback callback = new CVisionCallback();
int position = 0;
Video[] videos = new Video[videoPaths.length];
for (String path : videoPaths) {
Video video = new Video();
video.setPath(path);
videos[position++] = video;
}
aestheticsScoreDetector.getVideoSummerization(videos, callback);
}
public class CVisionCallback extends VisionCallback {
@Override
public void setRequestID(String requestID) {
super.setRequestID(requestID);
}
@Override
public void onDetectedResult(AnnotateResult annotateResult) throws RemoteException {
if (annotateResult != null) {
Log.e("Visioncallback", annotateResult.toString());
}
Log.e("Visioncallback", annotateResult.getResult().toString());
JSONObject jsonObject = null;
try {
jsonObject = new JSONObject(annotateResult.getResult().toString());
} catch (JSONException e) {
e.printStackTrace();
}
if (jsonObject == null) {
Log.e(LOG, "return JSONObject is null");
aestheticsScoreDetector.release();
return;
}
if (!jsonObject.optString("resultCode").equals("0")) {
Log.e(LOG, "return JSONObject resultCode is not 0");
aestheticsScoreDetector.release();
return;
}
AEVideoResult aeVideoResult = aestheticsScoreDetector.convertVideoSummaryResult(jsonObject);
if (aeVideoResult == null) {
aestheticsScoreDetector.release();
return;
}
String result = new Gson().toJson(aeVideoResult, AEVideoResult.class);
aestheticsScoreDetector.release();
listener.onTaskCompleted(result, true);
}
@Override
public void onDetectedInfo(InfoResult infoResult) throws RemoteException {
JSONObject jsonObject = null;
try {
jsonObject = new JSONObject(infoResult.getInfoResult().toString());
AEDetectVideoStatus aeDetectVideoStatus = aestheticsScoreDetector.convertDetectVideoStatusResult(jsonObject);
if (aeDetectVideoStatus != null) {
listener.updateProcessProgress(aeDetectVideoStatus);
} else {
Log.d(LOG, "[ASTaskPlus onDetectedInfo]aeDetectVideoStatus result is null!");
}
} catch (JSONException e) {
e.printStackTrace();
}
}
@Override
public void onDetectedError(ErrorResult errorResult) throws RemoteException {
Log.e(LOG, errorResult.getResultCode() + "");
aestheticsScoreDetector.release();
listener.onTaskCompleted(String.valueOf(errorResult.getResultCode()), true);
}
@Override
public String getRequestID() throws RemoteException {
return null;
}
}
}
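VideoCoverListener is referenced throughout VideoAsyncTask but never defined in this article. The shape below is inferred from the calls the task makes (isAsync(), onTaskCompleted(), updateProcessProgress()), so treat it as an assumption; a hypothetical invocation from an Activity follows.
Code:
// Inferred from how VideoAsyncTask uses the listener above; the real
// interface in the project may differ.
public interface VideoCoverListener {
    boolean isAsync();
    void onTaskCompleted(String result, boolean isAsync);
    void updateProcessProgress(AEDetectVideoStatus status);
}

// Hypothetical usage (the video path is a placeholder):
VideoCoverListener listener = new VideoCoverListener() {
    @Override
    public boolean isAsync() { return true; }

    @Override
    public void onTaskCompleted(String result, boolean async) {
        Log.d("VideoSummary", "summary result: " + result);
    }

    @Override
    public void updateProcessProgress(AEDetectVideoStatus status) {
        Log.d("VideoSummary", "progress update received");
    }
};
new VideoAsyncTask(this, listener).execute("/sdcard/DCIM/video1.mp4");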
Result
Tips and Tricks
Check that dependencies are added properly.
Latest HMS Core APK is required.
Min SDK is 21; otherwise, you will get a manifest merge issue.
If you are taking a video from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Maximum video length is 10 minutes.
Resolution should be between 144p and 2160p.
Conclusion
In this article, we have learnt the following concepts.
What is Video summarization?
Features of Video summarization
How to integrate Video summarization using Huawei HiAI
How to apply for Huawei HiAI
How to build the application
Reference
Video summarization
Apply for Huawei HiAI
Happy coding

Extract table data from Image using Huawei Table Recognition by Huawei HiAI in Android

Introduction
The API can retrieve table information as well as the text within cells; in addition, merged cells can be recognized. It supports recognition of tables with clear and unbroken lines, but not tables with crooked lines or cells divided by a color background. Currently the API supports recognition from printed materials and snapshots of slide meetings, but it does not work for screenshots or photos of Excel sheets or any other table-editing software.
Here, the image resolution should be higher than 720p (1280 x 720 px), and the aspect ratio (length-to-width ratio) should be lower than 2:1.
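A simple pre-check along these lines can avoid sending unsupported images to the service. This is a sketch based only on the constraints just stated (resolution above 1280 x 720 px, aspect ratio below 2:1); the method name is hypothetical.
Code:
// Hedged pre-check derived from the stated requirements.
private boolean isSupportedTableImage(Bitmap bmp) {
    int longSide = Math.max(bmp.getWidth(), bmp.getHeight());
    int shortSide = Math.min(bmp.getWidth(), bmp.getHeight());
    return longSide >= 1280 && shortSide >= 720
            && (float) longSide / shortSide < 2f;
}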
In this article, we will learn how to implement the Huawei HiAI kit using the Table Recognition service in an Android application. This service helps us extract table content from images.
Software requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Required EMUI 9.0.0 and later version devices.
6. Required processors: Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full.
How to integrate Table recognition.
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei's AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the jar file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.Manifest;
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TableLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.sr.ImageSuperResolution;
import com.huawei.hiai.vision.text.TableDetector;
import com.huawei.hiai.vision.visionkit.text.config.VisionTableConfiguration;
import com.huawei.hiai.vision.visionkit.text.table.Table;
import com.huawei.hiai.vision.visionkit.text.table.TableCell;
import com.huawei.hiai.vision.visionkit.text.table.TableContent;
import java.util.List;
public class TableRecognition extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView tableContentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
private TableLayout tableLayout;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_table_recognition);
requestPermissions(permission, REQUEST_CODE);
initializeVisionBase();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
tableContentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
tableLayout = findViewById(R.id.tableLayout);
btnImage.setOnClickListener(v -> {
selectImage();
tableLayout.removeAllViews();
});
}
private void initializeVisionBase() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
doesDeviceSupportTableRecognition();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void doesDeviceSupportTableRecognition() {
// The sample checks HiAI service availability through the image super-resolution API
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports the HiAI image super-resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device does not support the HiAI image super-resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
originalImage.setImageBitmap(bitmap);
if (isConnection) {
extractTableFromTheImage();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void extractTableFromTheImage() {
tableContentText.setVisibility(View.VISIBLE);
TableDetector mTableDetector = new TableDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
.setAppType(VisionTableConfiguration.APP_NORMAL)
.setProcessMode(VisionTableConfiguration.MODE_OUT)
.build();
mTableDetector.setVisionConfiguration(mTableConfig);
mTableDetector.prepare();
Table table = new Table();
int mResult_code = mTableDetector.detect(image, table, null);
if (mResult_code == 0) {
int count = table.getTableCount();
List<TableContent> tc = table.getTableContent();
StringBuilder sbTableCell = new StringBuilder();
if (tc == null || tc.isEmpty()) {
return;
}
List<TableCell> tableCell = tc.get(0).getBody();
for (TableCell c : tableCell) {
List<String> words = c.getWord();
StringBuilder sb = new StringBuilder();
for (String s : words) {
sb.append(s).append(",");
}
// startRow:endRow: startColumn:endColumn; comma-separated cell words
String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
c.getEndColumn() + "; " + sb.toString();
sbTableCell.append(cell).append("\n");
}
// Update the UI once, after all cells have been collected
tableContentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
}
}
}
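For completeness, a minimal layout sketch for activity_table_recognition.xml; the view IDs match the findViewById() calls above, while widget sizes and ordering are assumptions:
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <Button
        android:id="@+id/btn_album"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Select Image" />
    <ImageView
        android:id="@+id/super_origin"
        android:layout_width="match_parent"
        android:layout_height="200dp" />
    <ImageView
        android:id="@+id/super_image"
        android:layout_width="match_parent"
        android:layout_height="200dp" />
    <TextView
        android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
    <TextView
        android:id="@+id/content_text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:visibility="gone" />
    <TableLayout
        android:id="@+id/tableLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>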
Result
Tips and Tricks
The recommended image size is larger than 720 px.
Multiple-table recognition is currently not supported.
If you are picking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
The minimum SDK version is 21 (see the sketch after this list); otherwise, you will get a manifest merge issue.
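A minimal module-level build.gradle sketch for the Min SDK tip above; the target SDK version shown is an assumption, not from the original article:
Code:
android {
    defaultConfig {
        minSdkVersion 21 // required to avoid the manifest merge issue
        targetSdkVersion 29 // assumed target version
    }
}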
Conclusion
In this article, we have extracted table content from an image, whether for further statistical analysis or simply for editing. This works for tables with a clear and simple structure. We have learnt the following concepts:
1. What table recognition is
2. How to integrate table recognition using Huawei HiAI
3. How to apply for Huawei HiAI
4. How to build the application
Reference
Table Recognition
Apply for Huawei HiAI
Happy coding

Compare your face with celebrities using Huawei HiAI Engine in Android

Introduction
In this article, we will learn how to compare two faces for entertainment purposes and check how similar they are, that is, whether both photos show the same person or two different people. People often say you look like a particular actor, actress, or singer; now it's time to check what percentage of similarity you share with a celebrity.
Facial comparison recognizes and extracts key facial features, performs high-precision comparisons of human faces, provides confidence scores, and determines whether the same person appears across different images. Cutting-edge facial comparison technology intelligently categorizes and manages photos. An intelligent image recognition algorithm ensures accurate facial recognition, and enables apps to provide an enhanced user experience.
It is not recommended that this algorithm be used for identity authentication (such as phone unlocking and secure payment). It can be applied in in-app scenarios where face similarity comparison is required, for example: to compare the similarity between two faces, or the similarity between a person and a celebrity in an entertainment app.
For two photos of one person, the result will show that the same person is appearing in both photos, and the confidence score will be high. For photos of two different persons, the result will show that the two photos are not of the same person, and the confidence score will be low.
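As a minimal sketch of how such a result can be read in code (FaceCompareResult, isSamePerson(), and getSocre() are the APIs used in the full example later in this article; note that the SDK sample spells the score getter getSocre()):
Java:
// Sketch only: jsonObject and faceComparator come from the compare step
// shown in the full example below.
FaceCompareResult result = faceComparator.convertResult(jsonObject);
if (result != null && result.isSamePerson()) {
    // Two photos of one person: isSamePerson() is true and the score is high
    Log.d(TAG, "Same person, confidence score: " + result.getSocre());
} else if (result != null) {
    // Photos of two different persons: the confidence score is low
    Log.d(TAG, "Different persons, confidence score: " + result.getSocre());
}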
How to integrate Facial Comparison
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file and paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter the required information, such as Product name and Package name, and click the Next button.
Step 4: Verify the application details and click the Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the AAR and JAR dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies in Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Request the runtime permission.
Java:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
}
Java:
public void onClickButton(View view) {
int requestCode;
switch (view.getId()) {
case R.id.btnPerson1: {
Log.d(TAG, "btnPerson1");
isPerson1 = true;
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
requestCode = PHOTO_REQUEST_GALLERY;
startActivityForResult(intent, requestCode);
break;
}
case R.id.btnPerson2: {
Log.d(TAG, "btnPerson2");
isPerson1 = false;
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
requestCode = PHOTO_REQUEST_GALLERY;
startActivityForResult(intent, requestCode);
break;
}
case R.id.btnstarCompare: {
startCompare();
break;
}
default:
break;
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == Activity.RESULT_OK) {
if (data == null) {
return;
}
Uri selectedImage = data.getData();
int type = data.getIntExtra("type", TYPE_CHOOSE_PHOTO_CODE4PERSON1);
Log.d(TAG, "select uri:" + selectedImage.toString() + "type: " + type);
mTxtViewResult.setText("");
getBitmap(type, selectedImage);
}
}
private void getBitmap(int type, Uri imageUri) {
String[] pathColumn = {MediaStore.Images.Media.DATA};
Cursor cursor = getContentResolver().query(imageUri, pathColumn, null, null, null);
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(pathColumn[0]);
String picturePath = cursor.getString(columnIndex);
cursor.close();
if (isPerson1) {
mBitmapPerson1 = BitmapFactory.decodeFile(picturePath);
mHandler.sendEmptyMessage(TYPE_CHOOSE_PHOTO_CODE4PERSON1);
} else {
mBitmapPerson2 = BitmapFactory.decodeFile(picturePath);
mHandler.sendEmptyMessage(TYPE_CHOOSE_PHOTO_CODE4PERSON2);
}
}
@SuppressLint("HandlerLeak")
private Handler mHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
int status = msg.what;
Log.d(TAG, "handleMessage status = " + status);
switch (status) {
case TYPE_CHOOSE_PHOTO_CODE4PERSON1: {
if (mBitmapPerson1 == null) {
Log.e(TAG, "bitmap person1 is null !!!! ");
return;
}
mImageViewPerson1.setImageBitmap(mBitmapPerson1);
break;
}
case TYPE_CHOOSE_PHOTO_CODE4PERSON2: {
if (mBitmapPerson2 == null) {
Log.e(TAG, "bitmap person2 is null !!!! ");
return;
}
mImageViewPerson2.setImageBitmap(mBitmapPerson2);
break;
}
case TYPE_SHOW_RESULE: {
FaceCompareResult result = (FaceCompareResult) msg.obj;
if (result == null) {
mTxtViewResult.setText("Not the same person, result is null");
break;
}
if (result.isSamePerson()) {
mTxtViewResult.setText("The same Person, and score: " + result.getSocre());
} else {
mTxtViewResult.setText("Not the same Person, and score:" + result.getSocre());
}
break;
}
default:
break;
}
}
};
private void startCompare() {
mTxtViewResult.setText("Is the same Person ?");
synchronized (mWaitResult) {
mWaitResult.notifyAll();
}
}
private Thread mThread = new Thread(new Runnable() {
@Override
public void run() {
mFaceComparator = new FaceComparator(getApplicationContext());
FaceComparator faceComparator = mFaceComparator;
while (true) {
try {
synchronized (mWaitResult) {
mWaitResult.wait();
}
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage());
}
Log.d(TAG, "start Compare !!!! ");
Frame frame1 = new Frame();
frame1.setBitmap(mBitmapPerson1);
Frame frame2 = new Frame();
frame2.setBitmap(mBitmapPerson2);
JSONObject jsonObject = faceComparator.faceCompare(frame1, frame2, null);
if (jsonObject != null)
Log.d(TAG, "Compare end !!!! json: " + jsonObject.toString());
FaceCompareResult result = faceComparator.convertResult(jsonObject);
Log.d(TAG, "convertResult end !!!! ");
Message msg = new Message();
msg.what = TYPE_SHOW_RESULE;
msg.obj = result;
mHandler.sendMessage(msg);
}
}
});
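The code above defines mThread but never starts it. Below is a minimal onCreate() sketch showing how the pieces fit together, reusing the VisionBase.init() pattern from the table-recognition example earlier; the layout name and view IDs are assumptions:
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_face_compare); // assumed layout name
    requestPermissions();
    mImageViewPerson1 = findViewById(R.id.imgPerson1); // assumed view IDs
    mImageViewPerson2 = findViewById(R.id.imgPerson2);
    mTxtViewResult = findViewById(R.id.txtResult);
    VisionBase.init(this, new ConnectionCallback() {
        @Override
        public void onServiceConnect() {
            Log.d(TAG, "HiAI vision service connected");
        }
        @Override
        public void onServiceDisconnect() {
        }
    });
    // The worker thread blocks on mWaitResult until startCompare() notifies it,
    // so it is safe to start it right away
    mThread.start();
}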
Result
Same person, different photos.
Different persons, different photos.
Tips and Tricks
An error code is returned if the size of an image exceeds 20 MP.
Multi-thread invoking is currently not supported.
Both intra-process and inter-process invocation are supported.
This API can be called only by 64-bit apps (see the Gradle sketch after this list).
If you are picking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
The minimum SDK version is 21; otherwise, you will get a manifest merge issue.
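Because the API can be called only by 64-bit apps, one way to ensure that only 64-bit native libraries are packaged is an abiFilters entry in the app-level build.gradle; this sketch is an assumption, not from the original article:
Code:
android {
    defaultConfig {
        ndk {
            // Package only the 64-bit ABI required by HiAI facial comparison
            abiFilters 'arm64-v8a'
        }
    }
}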
Conclusion
In this article, we have learnt what facial comparison is and how to integrate it using Huawei HiAI in an Android application with Java. We compared the results for two different photos of the same person and for photos of two different persons.
Reference
Facial Comparison
Apply for Huawei HiAI