Extract table data from Image using Huawei Table Recognition by Huawei HiAI in Android - Huawei Developers

Introduction
The API retrieves table information as well as the text in each cell; merged cells can also be recognized. It supports recognition of tables with clear and unbroken lines, but not tables with crooked lines or cells divided only by a color background. Currently the API supports recognition from printed materials and snapshots of meeting slides, but it does not work on screenshots or photos of Excel sheets or any other table-editing software.
Here, the image resolution should be higher than 720p (1280×720 px), and the aspect ratio (length-to-width ratio) should be lower than 2:1.
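It can be worth validating these constraints before handing a bitmap to the detector. Below is a minimal sketch; isSupportedTableImage is a hypothetical helper for illustration, not part of the HiAI API.
Java:
// Hypothetical helper: checks the documented input constraints
// (resolution above 720p, aspect ratio below 2:1).
private boolean isSupportedTableImage(Bitmap bitmap) {
    int longSide = Math.max(bitmap.getWidth(), bitmap.getHeight());
    int shortSide = Math.min(bitmap.getWidth(), bitmap.getHeight());
    boolean resolutionOk = longSide >= 1280 && shortSide >= 720;
    boolean aspectRatioOk = (float) longSide / shortSide < 2.0f;
    return resolutionOk && aspectRatioOk;
}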
In this article, we will learn how to integrate the Huawei HiAI Table Recognition service into an Android application. This service helps us extract table content from images.
Software requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Devices running EMUI 9.0.0 or later.
6. Devices with Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processors.
How to integrate Table Recognition
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei's AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR/AAR dependencies to the app-level build.gradle file.
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
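Here, the flatDir repository points Gradle at the local libs directory so that the AAR and JAR files copied in Step 6 can be resolved.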
Client application development process
Follow the steps.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.Manifest;
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TableLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.sr.ImageSuperResolution;
import com.huawei.hiai.vision.text.TableDetector;
import com.huawei.hiai.vision.visionkit.text.config.VisionTableConfiguration;
import com.huawei.hiai.vision.visionkit.text.table.Table;
import com.huawei.hiai.vision.visionkit.text.table.TableCell;
import com.huawei.hiai.vision.visionkit.text.table.TableContent;
import java.util.List;
public class TableRecognition extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView tableContentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
private TableLayout tableLayout;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_table_recognition);
requestPermissions(permission, REQUEST_CODE);
initializeVisionBase();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
tableContentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
tableLayout = findViewById(R.id.tableLayout);
btnImage.setOnClickListener(v -> {
selectImage();
tableLayout.removeAllViews();
});
}
private void initializeVisionBase() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DoesDeviceSupportTableRecognition();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DoesDeviceSupportTableRecognition() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
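// getAvailability() returns 0 when the capability is supported on this device.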
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
originalImage.setImageBitmap(bitmap);
if (isConnection) {
extractTableFromTheImage();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void extractTableFromTheImage() {
tableContentText.setVisibility(View.VISIBLE);
TableDetector mTableDetector = new TableDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
.setAppType(VisionTableConfiguration.APP_NORMAL)
.setProcessMode(VisionTableConfiguration.MODE_OUT)
.build();
mTableDetector.setVisionConfiguration(mTableConfig);
mTableDetector.prepare();
Table table = new Table();
int mResult_code = mTableDetector.detect(image, table, null);
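// detect() returns 0 on success and fills the Table object in place.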
if (mResult_code == 0) {
int count = table.getTableCount();
List<TableContent> tc = table.getTableContent();
StringBuilder sbTableCell = new StringBuilder();
List<TableCell> tableCell = tc.get(0).getBody();
for (TableCell c : tableCell) {
List<String> words = c.getWord();
StringBuilder sb = new StringBuilder();
for (String s : words) {
sb.append(s).append(",");
}
String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
c.getEndColumn() + "; " + sb.toString();
sbTableCell.append(cell).append("\n");
}
// Update the UI once after all cells have been processed.
tableContentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
}
}
}
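Each appended line has the form startRow:endRow: startColumn:endColumn; word1,word2,. If row-oriented output is more convenient, the same cells can be regrouped by their starting row. Below is a minimal sketch under that assumption; tableToCsv is a hypothetical helper, not part of the HiAI API.
Java:
// Hypothetical helper: flattens one TableContent into CSV-like rows,
// grouping cells by their starting row index.
private String tableToCsv(TableContent content) {
    java.util.Map<Integer, StringBuilder> rows = new java.util.TreeMap<>();
    for (TableCell cell : content.getBody()) {
        StringBuilder row = rows.get(cell.getStartRow());
        if (row == null) {
            row = new StringBuilder();
            rows.put(cell.getStartRow(), row);
        }
        if (row.length() > 0) {
            row.append(",");
        }
        // Join the recognized words of this cell with spaces.
        row.append(android.text.TextUtils.join(" ", cell.getWord()));
    }
    StringBuilder csv = new StringBuilder();
    for (StringBuilder row : rows.values()) {
        csv.append(row).append("\n");
    }
    return csv.toString();
}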
Result
Tips and Tricks
The recommended image resolution is higher than 720p (1280×720 px).
Recognition of multiple tables in one image is currently not supported.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
Min SDK is 21; otherwise you will get a manifest merge issue.
Conclusion
In this article, we have extracted table content from an image, for further analysis with statistics or simply for editing. This works for tables with clear and simple structure information. We have learnt the following concepts:
1. Introduction to Table Recognition
2. How to integrate Table Recognition using Huawei HiAI
3. How to apply for Huawei HiAI
4. How to build the application
Reference
Table Recognition
Apply for Huawei HiAI
Happy coding


Related

Intermediate: How to extract the data from Image using Huawei HiAI Text Recognition service in Android

Introduction
In this article, we will learn how to integrate the Huawei HiAI Text Recognition service into an Android application. This service helps us extract text from screenshots and photos.
Nowadays nobody wants to type content manually, and there are many reasons to integrate this service into our apps. The user can capture an image or pick one from the gallery to retrieve the text, so that the content can be edited easily.
Use case: Using this HiAI kit, the user can extract content from an unreadable image and make it useful. Let's start.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Devices running EMUI 9.0.0 or later.
6. Devices with Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processors.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under the root folder.
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip downloaded SDK and add into your android project under lib folder.
7. Add the JAR/AAR dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
8. After completing the above setup, sync your Gradle files.
Let’s do code
I have created a project in Android Studio with an empty activity; let's start coding.
In MainActivity.java we create the business logic.
Java:
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
setBitmap();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setBitmap() {
int height = bitmap.getHeight();
int width = bitmap.getWidth();
if (width <= 1440 && height <= 15210) {
originalImage.setImageBitmap(bitmap);
setTextHiAI();
} else {
Toast.makeText(this, "Image size should be below 1440*15210 pixels", Toast.LENGTH_SHORT).show();
}
}
private void setTextHiAI() {
textView.setText("Extraction Text");
contentText.setVisibility(View.VISIBLE);
TextDetector detector = new TextDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
TextConfiguration config = new TextConfiguration();
config.setEngineType(TextConfiguration.AUTO);
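// Note: the next call overrides the engine type set above.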
config.setEngineType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT_EF);
detector.setTextConfiguration(config);
Text result = new Text();
int statusCode = detector.detect(image, result, null);
if (statusCode != 0) {
Log.e("TAG", "Failed to start engine, try restart app,");
}
if (result.getValue() != null) {
contentText.setText(result.getValue());
Log.d("TAG", result.getValue());
} else {
Log.e("TAG", "Result test value is null!");
}
}
}
Demo
Tips and Tricks
1. Download the latest Huawei HiAI SDK.
2. Set the minSDK version to 23 or later.
3. Do not forget to add the JAR files to the Gradle file.
4. Screenshot size should be within 1440 x 15210 pixels.
5. The recommended photo size is 720p.
6. Refer to this URL for the supported countries/regions list.
Conclusion
In this article, we have learned how to implement the HiAI Text Recognition service in an Android application to extract content from screenshots and photos.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Kit URL
Original Source

Intermediate: How to extract table information from Images using Huawei HiAI Table Recognition service in Android

Introduction
In this article, we will learn how to integrate the Huawei HiAI Table Recognition service into an Android application. This service helps us extract table content from images.
The table recognition algorithm is based on the line structure of the table; clear and detectable lines are necessary for proper identification of cells.
Use case: Imagine you have lots of paperwork and documents containing tables, and you would like to manipulate that data. Conventionally you would copy it manually or generate Excel files for third-party apps.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Devices running EMUI 9.0.0 or later.
6. Devices with Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processors.
Features
1. Restores the table information, including text in the cells, and identifies merged cells as well.
2. Fast recognition: returns the text in a table containing 50 lines within 3 seconds.
3. Recognition accuracy level: >85%.
4. Recall rate: >80%.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under root folder.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like Product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip downloaded SDK and add into your android project under lib folder.
7. Add the JAR/AAR dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
8. After completing the above setup, sync your Gradle files.
Let’s do code
I have created a project in Android Studio with an empty activity; let's start coding.
In MainActivity.java we create the business logic.
Code:
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
if (isConnection) {
setTableAI();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setTableAI() {
textView.setText("Extraction Table Text");
contentText.setVisibility(View.VISIBLE);
TableDetector mTableDetector = new TableDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
.setAppType(VisionTableConfiguration.APP_NORMAL)
.setProcessMode(VisionTableConfiguration.MODE_OUT)
.build();
mTableDetector.setVisionConfiguration(mTableConfig);
mTableDetector.prepare();
Table table = new Table();
int mResult_code = mTableDetector.detect(image, table, null);
if (mResult_code == 0) {
int count = table.getTableCount();
List<TableContent> tc = table.getTableContent();
StringBuilder sbTableCell = new StringBuilder();
List<TableCell> tableCell = tc.get(0).getBody();
for (TableCell c : tableCell) {
List<String> words = c.getWord();
StringBuilder sb = new StringBuilder();
for (String s : words) {
sb.append(s).append(",");
}
String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
c.getEndColumn() + "; " + sb.toString();
sbTableCell.append(cell).append("\n");
}
// Update the UI once after all cells have been processed.
contentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
}
}
}
Demo
Tips and Tricks
1. Download the latest Huawei HiAI SDK.
2. Set the minSDK version to 23 or later.
3. Do not forget to add the JAR files to the Gradle file.
4. It supports images of slides.
5. Input resolution should be larger than 720p, with an aspect ratio smaller than 2:1.
6. It supports only printed text; images, formulas, handwritten content, seals, and watermarks cannot be identified.
7. Refer to this URL for the supported countries/regions list.
Conclusion
That's it! Your table content is now extracted from the image, for further analysis with statistics or simply for editing. This works for tables with clear and simple structure information.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Table Recognition Kit URL
Original Source

Intermediate: Filter Pets by Scene Detection Using Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei scene detection using Huawei HiAI. We will build a pets cart app where we can sell pets online and filter them by scene detection using Huawei HiAI.
What is Scene Detection?
Scene detection can quickly classify images by identifying the type of scene to which the image content belongs, such as animals, green plants, food, buildings, and automobiles. Scene detection can also add smart classification labels to images, facilitating smart album generation and category-based image management.
Features
Fast: This algorithm is currently developed based on the deep neural network, to fully utilize the neural processing unit (NPU) of Huawei mobile phones to accelerate the neural network, achieving an acceleration of over 10 times.
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Abundant: Scene detection can identify 103 scenarios such as Cat, Dog, Snow, Cloudy sky, Beach, Greenery, Document, Stage, Fireworks, Food, Sunset, Blue sky, Flowers, Night, Bicycle, Historical buildings, Panda, Car, and Autumn leaves. The detection average accuracy is over 95% and the average recall rate is over 85% (lab data).
How to integrate Scene Detection
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development, and click HUAWEI HiAI.
Step 2: Click Apply for the HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click the Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip downloaded SDK and add to your android project under the libs folder.
Step 7: Add jar files dependencies into app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
Client application development process
Follow the steps
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- CAMERA -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Step 4: Build application.
First, request runtime permissions.
Code:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
Initialize vision base
private void initVisionBase() {
VisionBase.init(SceneDetectionActivity.this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
//This callback method is called when the connection to the service is successful.
//Here you can initialize the detector class, mark the service connection status, and more.
Log.i(LOG, "onServiceConnect ");
Toast.makeText(SceneDetectionActivity.this, "Service Connected", Toast.LENGTH_SHORT).show();
}
@Override
public void onServiceDisconnect() {
//This callback method is called when disconnected from the service.
//You can choose to reconnect here or to handle exceptions.
Log.i(LOG, "onServiceDisconnect");
Toast.makeText(SceneDetectionActivity.this, "Service Disconnected", Toast.LENGTH_SHORT).show();
}
});
}
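Note: construct detectors and run detection only after onServiceConnect fires; the other samples in this series gate detection on a connection flag for the same reason.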
Build Async class for scene detection.
Code:
class SceneDetectionAsync extends AsyncTask<Bitmap, Void, JSONObject> {
@Override
protected JSONObject doInBackground(Bitmap... bitmaps) {
//Bitmap bitmap = BitmapFactory.decodeFile(imgPath);//Obtain the Bitmap image. (Note that the Bitmap must be in the ARGB8888 format, that is, bitmap.getConfig() == Bitmap.Config.ARGB8888.)
Frame frame = new Frame();//Construct the Frame object
frame.setBitmap(bitmaps[0]);
SceneDetector sceneDetector = new SceneDetector(SceneDetectionActivity.this);//Construct Detector.
JSONObject jsonScene = sceneDetector.detect(frame, null);//Perform scene detection.
Scene sc = sceneDetector.convertResult(jsonScene);//Obtain the Java class result.
if (sc != null) {
int type = sc.getType();//Obtain the identified scene type.
Log.d(LOG, "Type:" + type);
}
Log.d(LOG, "Json data:" + jsonScene.toString());
return jsonScene;
}
@Override
protected void onPostExecute(JSONObject data) {
super.onPostExecute(data);
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
adapter = new MyListAdapter(getPetsFilteredDataList(data));
recyclerView.setAdapter(adapter);
Toast.makeText(SceneDetectionActivity.this, "Data filtered successfully", Toast.LENGTH_SHORT).show();
}
}
Show select image dialog.
private void selectImage() {
try {
PackageManager pm = getPackageManager();
int hasPerm = pm.checkPermission(Manifest.permission.CAMERA, getPackageName());
if (hasPerm == PackageManager.PERMISSION_GRANTED) {
final CharSequence[] options = {"Take Photo", "Choose From Gallery", "Cancel"};
androidx.appcompat.app.AlertDialog.Builder builder = new androidx.appcompat.app.AlertDialog.Builder(this);
builder.setTitle("Select Option");
builder.setItems(options, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int item) {
if (options[item].equals("Take Photo")) {
dialog.dismiss();
fileUri = getOutputMediaFileUri();
Log.d(LOG, "end get uri = " + fileUri);
Intent i = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
i.putExtra(MediaStore.EXTRA_OUTPUT, fileUri);
startActivityForResult(i, REQUEST_IMAGE_TAKE);
} else if (options[item].equals("Choose From Gallery")) {
dialog.dismiss();
Intent i = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(i, REQUEST_IMAGE_SELECT);
} else if (options[item].equals("Cancel")) {
dialog.dismiss();
}
}
});
builder.show();
} else
Toast.makeText(this, "Camera Permission error", Toast.LENGTH_SHORT).show();
} catch (Exception e) {
Toast.makeText(this, "Camera Permission error", Toast.LENGTH_SHORT).show();
e.printStackTrace();
}
}
/**
* Create a file Uri for saving an image or video
*/
private Uri getOutputMediaFileUri() {
//return Uri.fromFile(getOutputMediaFile(type));
Log.d(LOG, "authority = " + getPackageName() + ".provider");
Log.d(LOG, "getApplicationContext = " + getApplicationContext());
return FileProvider.getUriForFile(this, getPackageName() + ".fileprovider", getOutputMediaFile());
}
/**
* Create a File for saving an image
*/
private static File getOutputMediaFile() {
File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), "LabelDetect");
// Create the storage directory if it does not exist
if (!mediaStorageDir.exists()) {
if (!mediaStorageDir.mkdirs()) {
Log.d(LOG, "failed to create directory");
return null;
}
}
// Create a media file name
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
File mediaFile = new File(mediaStorageDir.getPath() + File.separator +
"IMG_" + timeStamp + ".jpg");
Log.d(LOG, "mediaFile " + mediaFile);
return mediaFile;
}
When the user selects an image, start detecting.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if ((requestCode == REQUEST_IMAGE_TAKE || requestCode == REQUEST_IMAGE_SELECT) && resultCode == RESULT_OK) {
String imgPath;
if (requestCode == REQUEST_IMAGE_TAKE) {
imgPath = Environment.getExternalStorageDirectory() + fileUri.getPath();
} else {
Uri selectedImage = data.getData();
String[] filePathColumn = {MediaStore.Images.Media.DATA};
Cursor cursor = SceneDetectionActivity.this.getContentResolver().query(selectedImage,
filePathColumn, null, null, null);
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
imgPath = cursor.getString(columnIndex);
cursor.close();
}
Log.d(LOG, "imgPath = " + imgPath);
bmp = BitmapFactory.decodeFile(imgPath);
if (bmp != null) {
//Toast.makeText(this, "Bit map is not null", Toast.LENGTH_SHORT).show();
dialog = ProgressDialog.show(SceneDetectionActivity.this,
"Predicting...", "Wait for one sec...", true);
SceneDetectionAsync async = new SceneDetectionAsync();
async.execute(bmp);
} else {
Toast.makeText(this, "Bit map is null", Toast.LENGTH_SHORT).show();
}
}
super.onActivityResult(requestCode, resultCode, data);
}
Data set
Code:
private MyListData[] getPetsList() {
MyListData[] listData = new MyListData[]{
new MyListData("Labrador Retriever", "20000INR", "Age: 1yr", R.drawable.labrador_retriever),
new MyListData("Bengal Cat", "8000INR", "Age: 1 month", R.drawable.bengal_cat),
new MyListData("Parrot", "2500INR", "Age: 3months", R.drawable.parrot),
new MyListData("Rabbit", "1500INR", "Age: 1 month", R.drawable.rabbit_image),
new MyListData("Beagle", "20500INR", "Age:6months", R.drawable.beagle),
new MyListData("Bulldog", "19000INR", "1yr", R.drawable.bulldog),
new MyListData("German Shepherd", "18000INR", "Age: 2yr", R.drawable.german_shepherd_dog),
new MyListData("German Shorthaired Pointer", "20000INR", "Age: 8 months", R.drawable.german_shorthaired_pointer),
new MyListData("Golder retriever", "12000INR", "Age: 7months", R.drawable.golden_retriever),
new MyListData("Pembroke Welsh corgi", "9000INR", "Age: 10months", R.drawable.pembroke_welsh_corgi),
new MyListData("Pomeranian", "25000INR", "Age: 10months", R.drawable.pomeranian),
new MyListData("Poodle", "15000INR", "Age: 3months", R.drawable.poodle),
new MyListData("Rottweiler", "1700INR", "Age:2yr", R.drawable.rottweiler),
new MyListData("Shihtzu", "18000INR", "Age: 5months", R.drawable.shih_tzu),
};
return listData;
}
private MyListData[] getPetsFilteredDataList(JSONObject jsonObject) {
MyListData[] listData = null;
try {
//{"resultCode":0,"scene":"{\"type\":13}"}
String scene = jsonObject.getString("scene");
JSONObject object = new JSONObject(scene);
int type = object.getInt("type");
switch (type) {
case 1:
break;
case 12:
//Get Cats filtered data here
break;
case 13:
listData = getDogsData();
break;
}
} catch (JSONException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
return listData;
}
private MyListData[] getDogsData() {
MyListData[] dogsList = new MyListData[]{
new MyListData("Labrador Retriever", "20000INR", "Age: 1yr", R.drawable.labrador_retriever),
new MyListData("Beagle", "20500INR", "Age:6months", R.drawable.beagle),
new MyListData("Bulldog", "19000INR", "1yr", R.drawable.bulldog),
new MyListData("German Shepherd", "18000INR", "Age: 2yr", R.drawable.german_shepherd_dog),
new MyListData("German Shorthaired Pointer", "20000INR", "Age: 8 months", R.drawable.german_shorthaired_pointer),
new MyListData("Golder retriever", "12000INR", "Age: 7months", R.drawable.golden_retriever),
new MyListData("Pembroke Welsh corgi", "9000INR", "Age: 10months", R.drawable.pembroke_welsh_corgi),
new MyListData("Pomeranian", "25000INR", "Age: 10months", R.drawable.pomeranian),
new MyListData("Poodle", "15000INR", "Age: 3months", R.drawable.poodle),
new MyListData("Rottweiler", "1700INR", "Age:2yr", R.drawable.rottweiler),
new MyListData("Shihtzu", "18000INR", "Age: 5months", R.drawable.shih_tzu),
};
return dogsList;
}
Result
Before Filter.
After filter
Tips and Tricks
Check dependencies downloaded properly.
Latest HMS Core APK is required.
Min SDK is 21. Otherwise we get Manifest merge issue.
Run detect() background thread otherwise app will crash with error.
If you are taking image from a camera or gallery make sure your app has camera and storage permission.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar, huawei-hiai-pdk-1.0.0.aar file to libs folder.
If device does not supports you will get 601 code in the result code
Maximum 20MB image
Conclusion
In this article, we have learnt the following concepts.
What is Scene detection?
Features of scene detection
How to integrate scene detection using Huawei HiAI
How to Apply Huawei HiAI
How to build the application
How to filter data by scene
Reference
Scene detection
Apply for Huawei HiAI

Introduction of Pose estimation using Huawei HiAI Engine in Android

Introduction
In this article, we will learn how to detect the human skeleton.
The key skeletal features are important for describing human posture and predicting human behavior. Therefore, the recognition of key skeletal features is the basis for a diversity of computer vision tasks, such as motion categorizations, abnormal behavior detection, and auto-navigation. In recent years, improved skeletal feature recognition has been widely applied to the development of deep learning technology, especially domains relating to computer vision.
Pose estimation mainly detects key human body features such as joints and facial features, and provides skeletal information based on such features.
If a portrait image is input, you will obtain the coordinate information of 14 key skeletal features for each portrait in it. The algorithm supports real-time processing and returns the result within 70 ms. The result presents posture information for the head, neck, right and left shoulders, right and left elbows, right and left wrists, right and left hips, right and left knees, and right and left ankles.
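For quick reference, the 14 points come back as an indexed list. The labels below are an assumption inferred from the body-part list above and the skeleton-drawing code later in this article (in particular, the right/left assignment is not confirmed by the source), so verify them against the official documentation.
Java:
// Assumed index order of the 14 returned keypoints (inferred from the
// drawing logic in drawPointNew(); the right/left split is a guess).
private static final String[] KEYPOINT_NAMES = {
        "head", "neck",
        "right shoulder", "right elbow", "right wrist",
        "left shoulder", "left elbow", "left wrist",
        "right hip", "right knee", "right ankle",
        "left hip", "left knee", "left ankle"
};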
How to integrate Pose Estimation
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip downloaded SDK and add into your android project under libs folder.
Step 7: Add jar files dependences into app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
Client application development process
Follow the steps.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.RemoteException;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hiai.pdk.pluginservice.ILoadPluginCallback;
import com.huawei.hiai.pdk.resultcode.HwHiAIResultCode;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.detector.PoseEstimationDetector;
import com.huawei.hiai.vision.visionkit.image.detector.BodySkeletons;
import com.huawei.hiai.vision.visionkit.image.detector.PeConfiguration;
import com.huawei.hiai.vision.visionkit.text.config.VisionTextConfiguration;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class MainActivity extends AppCompatActivity {
private Object mWaitResult = new Object(); // Lock object used to wait for the service connection callback
private ImageView mImageView;
private ImageView yogaPose;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mImageView = (ImageView) findViewById(R.id.skeleton_img);
yogaPose = (ImageView) findViewById(R.id.yogaPose);
//The application needs to bind the CV service first, and monitor whether the service is successfully connected
VisionBase.init(getApplicationContext(), new ConnectionCallback() {
public void onServiceConnect() { // Listen to the message that the service is successfully bound
Log.d("SkeletonPoint", "HwVisionManager onServiceConnect OK.");
Toast.makeText(getApplicationContext(),"Service binding successfully!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
doSkeletonPoint();
}
}
public void onServiceDisconnect() { // Listen to the message that the binding service failed
Log.d("SkeletonPoint", "HwVisionManager onServiceDisconnect OK.");
Toast.makeText(getApplicationContext(),"Service binding failed!",Toast.LENGTH_LONG).show();
synchronized (mWaitResult) {
mWaitResult.notifyAll();
}
}
});
}
@Override
protected void onResume() {
super.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
}
private void doSkeletonPoint() {
// Declare the skeleton detection interface object, and set the plug-in to cross-process mode MODE_OUT (also can be set to the same process mode MODE_IN)
PoseEstimationDetector mPoseEstimationDetector = new PoseEstimationDetector(MainActivity.this);
PeConfiguration config = new PeConfiguration.Builder()
.setProcessMode(VisionTextConfiguration.MODE_OUT)
.build();
mPoseEstimationDetector.setConfiguration(config);
// Currently, the skeleton detection interface accepts input as Bitmap, which is encapsulated into VisionImage. Video streaming will be supported in the future
Bitmap bitmap = null;
VisionImage image = null;
// TODO: Developers need to create a Bitmap here
BufferedInputStream bis = null;
try {
bis = new BufferedInputStream(getAssets().open("0.jpg"));
} catch (IOException e) {
Log.d("SkeletonPoint", e.toString());
Toast.makeText(getApplicationContext(), e.toString(),Toast.LENGTH_LONG).show();
}
bitmap = BitmapFactory.decodeStream(bis);
yogaPose.setImageBitmap(bitmap);
Bitmap bitmap2 = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
image = VisionImage.fromBitmap(bitmap);
// Query whether the capability supports the installation of plug-ins at the same time. getAvailability() returns -6 to indicate that the current engine supports this ability, but the plug-in needs to be downloaded and installed on the cloud side
int availability = mPoseEstimationDetector.getAvailability();
int installation = HwHiAIResultCode.AIRESULT_UNSUPPORTED; // Indicates that it does not support
if (availability == -6) {
Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();
LoadPluginCallback cb = new LoadPluginCallback(lock, condition);
// Download and install the plugin
mPoseEstimationDetector.loadPlugin(cb);
lock.lock();
try {
condition.await(90, TimeUnit.SECONDS);
} catch (InterruptedException e) {
Log.e("SkeletonPoint", e.getMessage());
} finally {
lock.unlock();
}
installation = cb.mResultCode;
}
// You can call the interface after downloading and installing successfully
if ((availability == HwHiAIResultCode.AIRESULT_SUCCESS)
|| (installation == HwHiAIResultCode.AIRESULT_SUCCESS)) {
// Load model and resources
mPoseEstimationDetector.prepare();
// Skeleton point result returned
List<BodySkeletons> mBodySkeletons = new ArrayList<>();
// The run method is called synchronously. At present, the maximum interface run time is 70 ms, and it is recommended to use another thread to call every frame
// After detect, bitmap will be released
int resultCode = mPoseEstimationDetector.detect(image, mBodySkeletons, null);
Toast.makeText(getApplicationContext(),"resultCode: " + resultCode,Toast.LENGTH_LONG).show();
// Draw a point
if (mBodySkeletons.size() != 0) {
drawPointNew(mBodySkeletons, bitmap2);
mImageView.setImageBitmap(bitmap2);
}
// Release engine
mPoseEstimationDetector.release();
}
}
public static class LoadPluginCallback extends ILoadPluginCallback.Stub {
private int mResultCode = HwHiAIResultCode.AIRESULT_UNKOWN;
private Lock mLock;
private Condition mCondition;
LoadPluginCallback(Lock lock, Condition condition) {
mLock = lock;
mCondition = condition;
}
@Override
public void onResult(int resultCode) throws RemoteException {
Log.d("SkeletonPoint", "LoadPluginCallback, onResult: " + resultCode);
mResultCode = resultCode;
mLock.lock();
try {
mCondition.signalAll();
} finally {
mLock.unlock();
}
}
@Override
public void onProgress(int i) throws RemoteException {
}
}
private void drawPointNew(List<BodySkeletons> poseEstimationMulPeopleSkeletons, Bitmap bmp) {
if ((poseEstimationMulPeopleSkeletons == null)
|| (poseEstimationMulPeopleSkeletons.size() < 1)) {
return;
}
int humanNum = poseEstimationMulPeopleSkeletons.size();
int points = 14;
int size = humanNum * points;
int[] xArr = new int[size];
int[] yArr = new int[size];
for (int j = 0; (j < humanNum) && (j < 6); j++) {
for (int i = 0; i < points; i++) {
xArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).x);
yArr[j * points + i] = (int)((float)poseEstimationMulPeopleSkeletons.get(j).getPosition().get(i).y);
}
}
Paint p = new Paint();
p.setStyle(Paint.Style.FILL_AND_STROKE);
p.setStrokeWidth(5);
p.setColor(Color.GREEN);
Canvas canvas = new Canvas(bmp);
int len = xArr.length;
int[] color = {0xFF000000, 0xFF444444, 0xFF888888, 0xFFCCCCCC, 0xFFFF0000, 0xFF00FF00, 0xFF0000FF,
0xFFFFFF00, 0xFF00FFFF, 0xFFFF00FF, 0xFF8800FF, 0xFF4400FF, 0xFFFFDDDD};
p.setColor(color[4]);
for (int i = 0; i < len; i++) {
canvas.drawCircle(xArr[i], yArr[i], 10, p);
}
for (int i = 0; i < humanNum; i++) {
int j = 0;
p.setColor(color[j++]);
if ((xArr[0+points*i]>0) &&(yArr[0+points*i]>0)&&(xArr[1+points*i]>0)&&(yArr[1+points*i]>0)) {
canvas.drawLine(xArr[0+points*i], yArr[0+points*i], xArr[1+points*i], yArr[1+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[2+points*i]>0)&&(yArr[2+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[2+points*i], yArr[2+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[2+points*i]>0)&&(yArr[2+points*i]>0)&&(xArr[3+points*i]>0)&&(yArr[3+points*i]>0)) {
canvas.drawLine(xArr[2+points*i], yArr[2+points*i], xArr[3+points*i], yArr[3+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[3+points*i]>0)&&(yArr[3+points*i]>0)&&(xArr[4+points*i]>0)&&(yArr[4+points*i]>0)) {
canvas.drawLine(xArr[3+points*i], yArr[3+points*i], xArr[4+points*i], yArr[4+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[5+points*i]>0)&&(yArr[5+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[5+points*i], yArr[5+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[5+points*i]>0)&&(yArr[5+points*i]>0)&&(xArr[6+points*i]>0)&&(yArr[6+points*i]>0)) {
canvas.drawLine(xArr[5+points*i], yArr[5+points*i], xArr[6+points*i], yArr[6+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[6+points*i]>0)&&(yArr[6+points*i]>0)&&(xArr[7+points*i]>0)&&(yArr[7+points*i]>0)) {
canvas.drawLine(xArr[6+points*i], yArr[6+points*i], xArr[7+points*i], yArr[7+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[8+points*i]>0)&&(yArr[8+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[8+points*i], yArr[8+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[8+points*i]>0)&&(yArr[8+points*i]>0)&&(xArr[9+points*i]>0)&&(yArr[9+points*i]>0)) {
canvas.drawLine(xArr[8+points*i], yArr[8+points*i], xArr[9+points*i], yArr[9+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[9+points*i]>0)&&(yArr[9+points*i]>0)&&(xArr[10+points*i]>0)&&(yArr[10+points*i]>0)) {
canvas.drawLine(xArr[9+points*i], yArr[9+points*i], xArr[10+points*i], yArr[10+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[1+points*i]>0)&&(yArr[1+points*i]>0)&&(xArr[11+points*i]>0)&&(yArr[11+points*i]>0)) {
canvas.drawLine(xArr[1+points*i], yArr[1+points*i], xArr[11+points*i], yArr[11+points*i], p);
}
p.setColor(color[j++]);
if ((xArr[11+points*i]>0)&&(yArr[11+points*i]>0)&&(xArr[12+points*i]>0)&&(yArr[12+points*i]>0)) {
canvas.drawLine(xArr[11+points*i], yArr[11+points*i], xArr[12+points*i], yArr[12+points*i], p);
}
p.setColor(color[j]);
if ((xArr[12+points*i]>0)&&(yArr[12+points*i]>0)&&(xArr[13+points*i]>0)&&(yArr[13+points*i]>0)) {
canvas.drawLine(xArr[12+points*i], yArr[12+points*i], xArr[13+points*i], yArr[13+points*i], p);
}
}
}
}
Result
Tips and Tricks
This API provides optimal detection results when no more than three portraits appear in the image.
This API works better when the proportion of a portrait in an image is high.
At least four skeletal features of the upper part of the body are required for reliable recognition results.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar, huawei-hiai-pdk-1.0.0.aar file to libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
Min SDK is 21; otherwise you will get a manifest merge issue.
Conclusion
In this article, we have learnt what pose estimation is and how to integrate it using Huawei HiAI in Android with Java. We were able to detect the skeleton in the example image, including the head, neck, elbows, knees, and ankles.
Reference
Pose Estimation
Apply for Huawei HiAI

Expert: Courier App MVVM Jetpack (HMS CloudDB Kit) in Android using Kotlin- Part-3

Overview
In this article, I will create a courier Android application using Kotlin, in which I will integrate HMS Core kits such as HMS Account, AuthService, Push, and Cloud DB Kit.
We have integrated HMS Account and AuthService Kit in Part-1 and client push notifications using HMS Push Kit in Part-2 of this series. Kindly go through the links below:
Part-1 https://forums.developer.huawei.com/forumPortal/en/topic/0202841957497640128
Part-2 https://forums.developer.huawei.com/forumPortal/en/topic/0201847982965230092
The app will make use of Android MVVM clean architecture using Jetpack components such as DataBinding, AndroidViewModel, Observer, LiveData, and much more.
In this article, we are going to implement DataBinding using Observable pattern.
Huawei Cloud DB Kit Introduction
Huawei Cloud DB is a device-cloud synergy database product that enables seamless data synchronization between the device and cloud and between devices, and supports offline application operations, helping you quickly develop device-cloud and multi-device synergy applications.
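As a quick orientation before the setup steps, here is a minimal sketch of opening a Cloud DB zone and writing one record. The zone name "CourierZone" and the generated ObjectTypeInfoHelper are assumptions (the helper class is produced when you export object types from the AGC console), so treat this as an outline rather than the final implementation.
Code:
// Sketch only: zone name and ObjectTypeInfoHelper are assumptions.
try {
    AGConnectCloudDB.initialize(context);
    AGConnectCloudDB cloudDB = AGConnectCloudDB.getInstance();
    // Register the object types exported from the AGC console.
    cloudDB.createObjectType(ObjectTypeInfoHelper.getObjectTypeInfo());
    CloudDBZoneConfig config = new CloudDBZoneConfig("CourierZone",
            CloudDBZoneConfig.CloudDBZoneSyncProperty.CLOUDDBZONE_CLOUD_CACHE,
            CloudDBZoneConfig.CloudDBZoneAccessProperty.CLOUDDBZONE_PUBLIC);
    config.setPersistenceEnabled(true);
    cloudDB.openCloudDBZone2(config, true).addOnSuccessListener(zone -> {
        // Upsert a CourierInfo record (the model class is shown later in this article).
        CourierInfo info = new CourierInfo();
        info.setCourierID(1);
        info.setCourierName("Sample Courier");
        zone.executeUpsert(info);
    });
} catch (AGConnectCloudDBException e) {
    e.printStackTrace();
}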
Prerequisite
Huawei phone with EMUI 3.0 or later.
Non-Huawei phone with Android 4.4 or later (API level 19 or higher).
HMS Core APK 4.0.0.300 or later.
Android Studio
AppGallery Account
AppGallery Integration process
Sign In and Create or Choose a project on AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
Navigate to My Projects > Project Settings > Build > Cloud DB, and click Enable now.
Click Add to create the object type and define its fields.
Click Export, select Java and Android, enter the package name, and click OK.
App Development
Add Required Dependencies:
Launch Android Studio and create a new project. Once the project is ready, navigate to the Gradle scripts folder, open build.gradle (module: app), and add the following dependencies for the HMS Cloud DB and Auth kits.
Code:
implementation 'com.huawei.agconnect:agconnect-cloud-database:1.5.0.300'
implementation "com.huawei.agconnect:agconnect-auth-huawei:1.6.0.300"
implementation 'com.huawei.agconnect:agconnect-auth:1.5.0.300'
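If it is not already applied from Part-1 of this series, the AGConnect plugin must also be applied at the bottom of the same module-level file so that agconnect-services.json is processed (standard AGC setup, shown here as a reminder):
Code:
apply plugin: 'com.huawei.agconnect'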
Navigate to the Gradle scripts folder, open the project-level build.gradle file, and make sure its buildscript section contains the following.
Code:
buildscript {
    ext.kotlin_version = "1.4.21"
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.1"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}
Code Implementation
Create the following packages: model, clouddb, and viewmodel.
Model: in your primary folder, create a new package and name it model; the Cloud DB classes exported from AppGallery Connect go into the clouddb package.
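For orientation, the package layout assumed by the listings below (inferred from their import statements; the model and push contents come from the earlier parts of this series) is:
Code:
com.hms.corrierapp
├── clouddb      // CourierInfo, ObjectTypeInfoHelper
├── model        // data classes such as Email, SMS
├── push         // push classes from Part-2
└── viewmodel    // CloudDBViewModel, CloudDBViewModelFactory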
CourierInfo.java:
Code:
package com.hms.corrierapp.clouddb;

import com.huawei.agconnect.cloud.database.CloudDBZoneObject;
import com.huawei.agconnect.cloud.database.annotations.DefaultValue;
import com.huawei.agconnect.cloud.database.annotations.EntireEncrypted;
import com.huawei.agconnect.cloud.database.annotations.Indexes;
import com.huawei.agconnect.cloud.database.annotations.NotNull;
import com.huawei.agconnect.cloud.database.annotations.PrimaryKeys;

@PrimaryKeys({"CourierID"})
@Indexes({"ID:CourierID"})
public final class CourierInfo extends CloudDBZoneObject {

    private Integer CourierID;

    @NotNull
    @DefaultValue(stringValue = "Courier Name")
    private String CourierName;

    @DefaultValue(stringValue = "Desc")
    private String Desc;

    @EntireEncrypted(isEncrypted = true)
    private String FromAdress;

    @EntireEncrypted(isEncrypted = true)
    private String ToAddress;

    public CourierInfo() {
        super(CourierInfo.class);
        this.CourierName = "Courier Name";
        this.Desc = "Desc";
    }

    public Integer getCourierID() {
        return CourierID;
    }

    public void setCourierID(Integer CourierID) {
        this.CourierID = CourierID;
    }

    public String getCourierName() {
        return CourierName;
    }

    public void setCourierName(String CourierName) {
        this.CourierName = CourierName;
    }

    public String getDesc() {
        return Desc;
    }

    public void setDesc(String Desc) {
        this.Desc = Desc;
    }

    public String getFromAdress() {
        return FromAdress;
    }

    public void setFromAdress(String FromAdress) {
        this.FromAdress = FromAdress;
    }

    public String getToAddress() {
        return ToAddress;
    }

    public void setToAddress(String ToAddress) {
        this.ToAddress = ToAddress;
    }
}
ObjectTypeInfoHelper.java:
Code:
package com.hms.corrierapp.clouddb;

import com.huawei.agconnect.cloud.database.CloudDBZoneObject;
import com.huawei.agconnect.cloud.database.ObjectTypeInfo;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class ObjectTypeInfoHelper {

    private static final int FORMAT_VERSION = 2;
    private static final int OBJECT_TYPE_VERSION = 1;

    public static ObjectTypeInfo getObjectTypeInfo() {
        ObjectTypeInfo objectTypeInfo = new ObjectTypeInfo();
        objectTypeInfo.setFormatVersion(FORMAT_VERSION);
        objectTypeInfo.setObjectTypeVersion(OBJECT_TYPE_VERSION);
        List<Class<? extends CloudDBZoneObject>> objectTypeList = new ArrayList<>();
        Collections.addAll(objectTypeList, CourierInfo.class);
        objectTypeInfo.setObjectTypes(objectTypeList);
        return objectTypeInfo;
    }
}
CloudDBViewModel.kt:
Code:
package com.hms.corrierapp.viewmodel

import android.app.Application
import android.content.Context
import android.text.TextUtils
import android.util.Log
import android.widget.Toast
import androidx.databinding.Bindable
import androidx.databinding.Observable
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import com.hms.corrierapp.clouddb.ObjectTypeInfoHelper
import com.hms.corrierapp.model.Email
import com.hms.corrierapp.model.SMS
import com.hms.corrierapp.push.*
import com.hms.corrierapp.push.NotificationMessageBody
import com.huawei.agconnect.cloud.database.AGConnectCloudDB
import com.huawei.agconnect.cloud.database.CloudDBZone
import com.huawei.agconnect.cloud.database.CloudDBZoneConfig
import com.huawei.agconnect.cloud.database.exceptions.AGConnectCloudDBException
import com.huawei.agconnect.config.AGConnectServicesConfig
import com.huawei.hms.aaid.HmsInstanceId
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response

class CloudDBViewModel(application: Application) : AndroidViewModel(application), Observable {

    var mCloudDBZone: CloudDBZone? = null

    private fun setAGC_DB() {
        // val user = AGConnectAuth.getInstance().currentUser
        // Initialize Cloud DB and register the object types exported from AGC.
        AGConnectCloudDB.initialize(getApplication())
        val mCloudDB = AGConnectCloudDB.getInstance()
        try {
            mCloudDB.createObjectType(ObjectTypeInfoHelper.getObjectTypeInfo())
        } catch (e: AGConnectCloudDBException) {
            e.printStackTrace()
        }
        // Open a public, cache-mode zone named "Zone1" with local persistence enabled.
        val mConfig = CloudDBZoneConfig(
            "Zone1",
            CloudDBZoneConfig.CloudDBZoneSyncProperty.CLOUDDBZONE_CLOUD_CACHE,
            CloudDBZoneConfig.CloudDBZoneAccessProperty.CLOUDDBZONE_PUBLIC
        )
        mConfig.persistenceEnabled = true
        val openDBZoneTask = mCloudDB.openCloudDBZone2(mConfig, true)
        openDBZoneTask.addOnSuccessListener { cloudDBZone ->
            Toast.makeText(getApplication(), "Cloud DB successfully opened", Toast.LENGTH_SHORT).show()
            if (mCloudDBZone == null) {
                mCloudDBZone = cloudDBZone
            }
        }.addOnFailureListener { e -> e.printStackTrace() }
    }

    override fun addOnPropertyChangedCallback(callback: Observable.OnPropertyChangedCallback?) {
        TODO("Not yet implemented")
    }

    override fun removeOnPropertyChangedCallback(callback: Observable.OnPropertyChangedCallback?) {
        TODO("Not yet implemented")
    }
}
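Not part of the original listing, but as a minimal sketch of how the zone gets used once mCloudDBZone is open: a CourierInfo record can be written with executeUpsert. The helper name upsertCourier and the sample values are assumptions.
Code:
import android.util.Log
import com.hms.corrierapp.clouddb.CourierInfo
import com.huawei.agconnect.cloud.database.CloudDBZone

// Inserts a CourierInfo record, or updates it if the primary key already exists.
fun upsertCourier(zone: CloudDBZone) {
    val info = CourierInfo().apply {
        courierID = 1
        courierName = "Sample Courier"
        desc = "Documents"
        fromAdress = "Bangalore"   // stored encrypted via @EntireEncrypted
        toAddress = "Delhi"        // stored encrypted via @EntireEncrypted
    }
    zone.executeUpsert(info)
        .addOnSuccessListener { count -> Log.i("CloudDB", "Upserted $count record(s)") }
        .addOnFailureListener { e -> e.printStackTrace() }
}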
CloudDBViewModelFactory.kt:
Code:
package com.hms.corrierapp.viewmodel

import android.app.Application
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProvider

// The factory supplies the Application instance required by the
// AndroidViewModel-based CloudDBViewModel's constructor.
class CloudDBViewModelFactory(private val application: Application) : ViewModelProvider.Factory {
    override fun <T : ViewModel?> create(modelClass: Class<T>): T {
        if (modelClass.isAssignableFrom(CloudDBViewModel::class.java)) {
            @Suppress("UNCHECKED_CAST")
            return CloudDBViewModel(application) as T
        }
        throw IllegalArgumentException("UnknownViewModel")
    }
}
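For completeness, a hedged usage sketch (the call site is assumed; the article does not show it): the factory is handed to ViewModelProvider from an Activity.
Code:
import androidx.lifecycle.ViewModelProvider

// Inside an Activity, e.g. in onCreate():
val viewModel = ViewModelProvider(this, CloudDBViewModelFactory(application))
    .get(CloudDBViewModel::class.java)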
App Build Result
Tips and Tricks
Identity Kit displays the HUAWEI ID registration or sign-in page first; the user can use the functions provided by Identity Kit only after signing in with a registered HUAWEI ID.
Make sure you have added the SHA-256 fingerprint without fail.
Set the minSdkVersion to 24 or later; otherwise you will get an AndroidManifest merge issue.
Conclusion
In this article, we have learned how to integrate Huawei ID, Push Kit, and Cloud DB Kit in an Android application. After reading it, you can easily implement Huawei ID sign-in, client-side push notifications, and submit courier information to Cloud DB in the Courier Android application using Kotlin.
Thanks for reading this article. Be sure to like and comment if you found it helpful; it means a lot to me.
References
HMS Docs:
https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001050048870
https://developer.huawei.com/consum...-Guides/service-introduction-0000001050040060
https://developer.huawei.com/consum...des/agc-clouddb-introduction-0000001054212760
Cloud DB Kit Training Video:
https://developer.huawei.com/consumer/en/training/course/video/101628259491513909
