Huawei ML Kit – Integration of Scene Detection

Introduction
To help understand image content, the scene detection service can classify the scenario content of images and add labels, such as outdoor scenery, indoor places, and buildings. You can create more customised experiences for users based on the data detected from the image. Currently, Huawei supports detection of 102 scenarios. For details about the scenarios, refer to the List of Scenario Identification Categories.
This service can be used to identify image sets by scenario and create intelligent album sets. You can also select various camera parameters based on the scene in your app, to help users to take better-looking photos.
Prerequisite
The scene detection service supports integration with Android 6.0 and later versions.
Scene detection requires the READ_EXTERNAL_STORAGE and CAMERA permissions to be declared in AndroidManifest.xml.
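The corresponding declarations are:
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />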
Implementation of dynamic (runtime) permissions for the camera and storage is not covered in this article; please make sure your app requests these permissions before using the service.
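As a minimal sketch of such a runtime request (PERMISSION_REQUEST_CODE is an arbitrary constant defined in your app):
Code:
// Request the storage and camera permissions at runtime if they have not been granted yet.
String[] permissions = {Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA};
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED
        || ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, permissions, PERMISSION_REQUEST_CODE);
}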
Development
  1. Register as a developer in AppGallery Connect.
  2. Create an application and enable ML Kit in AppGallery Connect. Refer to Service Enabling.
  3. Integrate the AppGallery Connect SDK. Refer to AppGallery Connect Service Getting Started.
  4. Add the Huawei scene detection dependencies in the app-level build.gradle.
Code:
// ML Scene Detection SDK
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Import the scene detection model package.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.3.300'
  5. Sync the Gradle project.
We have an activity (MainActivity.java) with floating buttons to select either static scene detection or live scene detection.
Static scene detection detects the scene in a static image: when we select a photo, the scene detection service returns the results.
Camera stream (live) scene detection processes camera streams: it converts video frames into MLFrame objects and detects scenarios using the static image detection method (see the sketch below).
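Here is a minimal sketch of the camera stream mode based on ML Kit's LensEngine pattern (LensEngine, MLAnalyzer, and MLTransactor come from the ML Kit common SDK); the surfaceHolder is assumed to come from a SurfaceView that shows the camera preview:
Code:
// Create an analyzer with default settings and attach a transactor for streaming results.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLSceneDetection>() {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> result) {
        // Called for each analyzed camera frame.
        SparseArray<MLSceneDetection> scenes = result.getAnalyseList();
        for (int i = 0; i < scenes.size(); i++) {
            Log.d(TAG, "Scene: " + scenes.valueAt(i).getResult()
                    + ", confidence: " + scenes.valueAt(i).getConfidence());
        }
    }

    @Override
    public void destroy() {
        // Release any resources held by the transactor.
    }
});
// Bind the analyzer to a LensEngine that feeds it camera frames.
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1440, 1080)
        .applyFps(30.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    lensEngine.run(surfaceHolder); // start the preview and the frame analysis
} catch (IOException e) {
    lensEngine.release();
}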
Implementation of Static scene detection
Code:
private void sceneDetectionEvaluation(Bitmap bitmap) {
    // Create a scene detection analyzer instance based on the customized configuration.
    MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
            // Set the confidence threshold for scene detection (a float value).
            .setConfidence(confidence)
            .create();
    analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
        @Override
        public void onSuccess(List<MLSceneDetection> result) {
            // Processing logic for scene detection success.
            for (MLSceneDetection sceneDetection : result) {
                sb.append("Detected Scene : " + sceneDetection.getResult() + " , "
                        + "Confidence : " + sceneDetection.getConfidence() + "\n");
                tvResult.setText(sb.toString());
            }
            // Stop the analyzer after all results have been processed.
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            // Processing logic for scene detection failure.
            if (e instanceof MLException) {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and customize respective messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error information. You can quickly locate the fault based on the result code.
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "MLException : " + errorMessage + ", error code: " + errorCode);
            } else {
                // Other errors.
                Log.e(TAG, "Exception : " + e.getMessage());
            }
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    });
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}
In MLSceneDetectionAnalyzerSetting we set the confidence threshold for scene detection; setConfidence() takes a float value. Once the settings are fixed, we create the analyzer from them and build an MLFrame from the bitmap. Lastly, we create a task that returns a list of MLSceneDetection objects, with listeners for success and failure. Each result carries two parameters, the scene result and its confidence, which we append to the TextView tvResult.
For more details, you can visit https://forums.developer.huawei.com/forumPortal/en/topic/0204400184662360088

Will we get accurate results?

Related

Android: How to Develop an ID Photo DIY Applet in 30 Min

The source code will be shared, and it's quite friendly to users who are new to Android. The whole process will take only 30 minutes.
This is application-level development, and we won't go through the image segmentation algorithm. I use Huawei ML Kit, which provides the image segmentation capability, to develop this app. Developers will learn how to quickly develop an ID photo DIY applet using the SDK.
Background
I don't know if you have had such an experience: all of a sudden, your school or company needs one-inch or two-inch head photos, for a passport or student card application that has requirements for the photo's background color. However, many people don't have time to visit a photo studio, or they have old photos whose background color doesn't meet the requirements. I had a similar experience. At that time, the school asked for a passport photo while the school photo studio was closed. I took photos with my mobile phone in a hurry, using the bedspread as the background. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning offers an image segmentation function. Using this SDK to develop a small ID photo DIY applet could perfectly solve that old embarrassment.
Here is the demo for the result.
How effective is it? Pretty great, and we just need to write a small program to achieve it quickly!
Core tip: this SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level build.gradle
Open the Android studio project level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the base SDK and the image segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add Model in Android manifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the user installs your application from the Huawei application market, add the following statement to the AndroidManifest.xml file of the application:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permission in Android manifest.xml File
Code:
<!-- Storage and camera permissions -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
2. Two Key Steps of Code Development
2.1 Dynamic Authority Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this,
            Manifest.permission.READ_EXTERNAL_STORAGE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the settings page for this package.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setExact(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create "mlframe" Object through android.graphics.bitmap for Analyzer to Detect Pictures
The image segmentation detector can be created through the image segmentation detection configurator "MLImageSegmentationSetting".
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call "asyncanalyseframe" Method for Image Segmentation
Code:
// Create a task to process the result returned by the image segmentation detector asynchronously.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
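Note that the drawing-cache APIs used above are deprecated on newer Android versions; a Canvas-based composition is one alternative. This is only a sketch, assuming backgroundBitmap and foreground are non-null bitmaps:
Code:
// Compose the segmented foreground over the chosen background without the drawing cache.
Bitmap output = Bitmap.createBitmap(backgroundBitmap.getWidth(),
        backgroundBitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(output);
canvas.drawBitmap(backgroundBitmap, 0, 0, null);
// Stretch the foreground to the background size if the two differ.
Rect dst = new Rect(0, 0, output.getWidth(), output.getHeight());
canvas.drawBitmap(foreground, null, dst, null);
this.processedImage = output;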
Conclusion
In this way, a small ID photo DIY program has been made. Let's see the demo.
If you have strong hands-on ability, you can also add or change suits and perform other operations. The source code has been uploaded to GitHub, and you can improve this function there as well.
https://github.com/HMS-Core/hms-ml-demo/tree/master/ID-Photo-DIY
The project directory in the repository is ID-Photo-DIY.
Based on the image segmentation capability, you can not only build this ID photo DIY program, but also realize the following related functions:
1. People's portraits in daily life can be cut out, some interesting photos can be made by changing the background, or the background can be virtualized to get more beautiful and artistic photos.
2. Identify the sky, plants, food, cats and dogs, flowers, water surface, sand surface, buildings, mountains and other elements in the image, and make special beautification for these elements, such as making the sky bluer and the water clearer.
3. Identify the objects in the video stream, edit the special effects of the video stream, and change the background.
For other functions, please brainstorm together!
For a more detailed development guide, please refer to the official website of Huawei developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4

Integrate ML Kit’s Document Skew Correction Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's text recognition capability. With this capability, users just need to provide an image, and your app will automatically recognize all of the key information. But what if the image is skewed? Can we still get all the same key information? Yes, we can! ML Kit's document skew correction capability automatically recognizes the position of the document, corrects the shooting angle, and lets users customize the edge points. So even if the document is skewed, your users can still get all of the information they need.
Application Scenarios
Document skew correction is useful in a huge range of situations. For example, if your camera is at an angle when you shoot a paper document, the resulting image may be difficult to read. You can use document skew correction to adjust the document to the correct angle, which makes it easier to read.
Document skew correction also works for cards.
And if you’re traveling, you can use it to straighten out pictures of road signs that you’ve taken from an angle.
Isn’t it convenient? Now, I will show you how to quickly integrate this capability.
Document Correction Development
1. Preparations
You can find detailed information about the preparations you need to make in the HUAWEI Developers Development Process.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.2.300'
// Import the document detection/correction model package.
implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.huawei.agconnect'
apply plugin: 'com.android.application'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "dsc"/>
1.5 Apply for Camera Permission and Local Image Reading Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Text Box Detection/Correction Analyzer
Code:
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
MLDocumentSkewCorrectionAnalyzer analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
2.2 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
JPG, JPEG, and PNG images are supported. It is recommended that you limit the image size to between 320 x 320 px and 1920 x 1920 px (a downscaling helper sketch follows the code below).
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
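A minimal helper sketch for keeping an oversized bitmap within the recommended 1920 px limit (the method name is my own):
Code:
// Downscale a bitmap so that its longer edge does not exceed maxEdge (for example, 1920).
private Bitmap scaleDownIfNeeded(Bitmap src, int maxEdge) {
    int longerEdge = Math.max(src.getWidth(), src.getHeight());
    if (longerEdge <= maxEdge) {
        return src; // already within the recommended range
    }
    float ratio = (float) maxEdge / longerEdge;
    int width = Math.round(src.getWidth() * ratio);
    int height = Math.round(src.getHeight() * ratio);
    return Bitmap.createScaledBitmap(src, width, height, true);
}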
2.3 Call the asyncDocumentSkewDetect Asynchronous Method or analyseFrame Synchronous Method to Detect the Text Box
When the return code is MLDocumentSkewCorrectionConstant.SUCCESS, the coordinates of the four vertices of the text box are returned. These coordinates are relative to the input image. If they are inconsistent with the coordinates of the device display, you need to convert them (see the sketch after the code below); otherwise, the returned data will be meaningless.
Code:
// Call the asyncDocumentSkewDetect asynchronous method.
Task<MLDocumentSkewDetectResult> detectTask = analyzer.asyncDocumentSkewDetect(mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
@Override
public void onSuccess(MLDocumentSkewDetectResult detectResult) {
// Detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
// Call the analyseFrame synchronous method.
SparseArray<MLDocumentSkewDetectResult> detect = analyzer.analyseFrame(mlFrame);
if (detect != null && detect.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
// Detection success.
} else {
// Detection failure.
}
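As for the coordinate conversion mentioned above, here is a simple sketch for mapping a vertex from input-image coordinates to on-screen view coordinates; it assumes the image is displayed stretched to fill the view, with no letterboxing:
Code:
// Map a point from image coordinates to the coordinates of the on-screen view.
private Point imageToViewCoordinates(Point p, int imageWidth, int imageHeight,
                                     int viewWidth, int viewHeight) {
    float scaleX = (float) viewWidth / imageWidth;
    float scaleY = (float) viewHeight / imageHeight;
    return new Point(Math.round(p.x * scaleX), Math.round(p.y * scaleY));
}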
2.4 Obtain the Coordinate Data of the Four Vertices of the Text Box Once the Detection is Successful
Use the upper left vertex as the starting point, and add the upper left vertex, upper right vertex, lower right vertex, and lower left vertex to the list (List<Point>). Finally, create an MLDocumentSkewCorrectionCoordinateInput object.
If the synchronous method analyseFrame is called, the detection results will be obtained first, as shown in the following code. (If the asynchronous method asyncDocumentSkewDetect is called, skip this step.)
Code:
MLDocumentSkewDetectResult detectResult = detect.get(0);
Obtain the coordinate data for the four vertices of the text box and create an MLDocumentSkewCorrectionCoordinateInput object.
Code:
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(coordinates);
2.5 Call the asyncDocumentSkewCorrect Asynchronous Method or syncDocumentSkewCorrect Synchronous Method to Correct the Text Box
Code:
// Call the asyncDocumentSkewCorrect asynchronous method.
Task<MLDocumentSkewCorrectionResult> correctionTask = analyzer.asyncDocumentSkewCorrect(mlFrame, coordinateData);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// Detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
// Call the syncDocumentSkewCorrect synchronous method.
SparseArray<MLDocumentSkewCorrectionResult> correct= analyzer.syncDocumentSkewCorrect(mlFrame, coordinateData);
if (correct != null && correct.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
// Correction success.
} else {
// Correction failure.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
Can we use ML Kit to translate the document into another language?

A Programmer's Perfect Father's Birthday Gift: A Restored Old Photo

Everyone's family has some old photos filed away in an album. Despite the simple backgrounds and casual poses, these photos reveal quite a bit, telling stories and providing insight on what life was like in the past.
In anticipation of Father's Birthday, John, a programmer at Huawei, was racking his brains about what gift to get for his father. He thought about it for quite a while. Then suddenly, a glimpse at an old photo album piqued his interest. "Why not use my coding expertise to restore my father's old photo, and shed light on his youthful personality?", he mused. Intrigued by this thought, John started to look into how he could achieve this goal.
Image super-resolution in HUAWEI ML Kit was ultimately what he settled on. With this service, John was able to convert the wrinkled and blurry old photo into a hi-res image, and presented it to his father. His father was deeply touched by the gesture.
Actual Effects:
Image Super-Resolution
This service converts an unclear, low-resolution image into a high-resolution image, increasing pixel intensity and displaying details that were missed when the image was originally taken.
Image super-resolution is ideal for computer vision, where it can help enhance image recognition and analysis capabilities. The technology has improved rapidly and plays a growing role in day-to-day work and life. It can be used to sharpen common images, such as portrait shots, as well as vital images in fields like medical imaging, security surveillance, and satellite imaging.
Image super-resolution offers both 1x and 3x super-resolution capabilities. 1x super-resolution removes compression noise, and 3x super-resolution effectively suppresses compression noise, while also providing a 3x enlargement capability.
The image super-resolution service can enhance images of a wide range of objects and items, such as greenery, food, and employee ID cards. You can even use it to turn low-quality images, such as news images obtained from the network, into clear, enlarged ones.
Development Preparations
For more details about configuring the Huawei Maven repository and integrating the image super-resolution SDK, please refer to the Development Guide of ML Kit on HUAWEI Developers.
Configuring the Integrated SDK
Open the build.gradle file in the app directory. Add build dependencies for the image super-resolution SDK under the dependencies block.
Code:
implementation 'com.huawei.hms:ml-computer-vision-imagesuperresolution:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-imagesuperresolution-model:2.0.4.300'
Configuring the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main folder. Apply for the storage read permission as needed by adding the following statement before <application>:
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Add the following statements within <application> so that the app, once installed, automatically updates the machine learning model on the device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "imagesuperresolution"/>
Development Procedure
Configuring the Application for the Storage Read Permission
In onCreate() of MainActivity, check whether the app has the storage read permission. If not, apply for it through requestPermissions; if so, call startSuperResolutionActivity() to start super-resolution processing on the image.
Code:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, REQUEST_CODE);
} else {
startSuperResolutionActivity();
}
Check the permission application results:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == REQUEST_CODE) {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
startSuperResolutionActivity();
} else {
Toast.makeText(this, "Permission application failed, you denied the permission", Toast.LENGTH_SHORT).show();
}
}
}
After the permission request is complete, create a button and configure it so that tapping the button lets the app read images from the storage.
Code:
private void selectLocalImage() {
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
Configuring the Image Super-Resolution Analyzer
Before the app can perform super-resolution processing on the image, create and configure an analyzer. The example below configures two parameters for the 1x super-resolution capability and 3x super-resolution capability respectively. Which one of them is used depends on the value of selectItem.
Code:
private MLImageSuperResolutionAnalyzer createAnalyzer() {
if (selectItem == INDEX_1X) {
return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer();
} else {
MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting.Factory()
.setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_3X)
.create();
return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(setting);
}
}
Constructing and Processing the Image
Before the app can perform super-resolution processing, the image must be converted into a bitmap in ARGB_8888 color format, from which an MLFrame object is created (a decode sketch follows the code below). After an image is selected, obtain its information by overriding onActivityResult.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
if (data != null) {
imageUri = data.getData();
}
reloadAndDetectImage(true, false);
} else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
finish();
}
}
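A decode sketch that forces the ARGB_8888 format, as an alternative to the BitmapUtils helper used below (imageUri is the URI obtained above):
Code:
// Decode the selected image as an ARGB_8888 bitmap, the format expected by MLFrame.
try (InputStream is = getContentResolver().openInputStream(imageUri)) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    srcBitmap = BitmapFactory.decodeStream(is, null, options);
} catch (IOException e) {
    Log.e(TAG, "Failed to decode image", e);
}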
Create an MLFrame object using the bitmap.
Code:
srcBitmap = BitmapUtils.loadFromPathWithoutZoom(this, imageUri, IMAGE_MAX_SIZE, IMAGE_MAX_SIZE);
MLFrame frame = MLFrame.fromBitmap(srcBitmap);
Call the asynchronous method asyncAnalyseFrame to perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
public void onSuccess(MLImageSuperResolutionResult result) {
// Recognition success.
desBitmap = result.getBitmap();
setImage(desImageView, desBitmap);
setImageSizeInfo(desBitmap.getWidth(), desBitmap.getHeight());
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Recognition failure.
Log.e(TAG, "Failed." + e.getMessage());
Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
}
});
After the recognition is complete, stop the analyzer.
Code:
if (analyzer != null) {
analyzer.stop();
}
Meanwhile, override onDestroy of the activity to release the bitmap resources.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
if (desBitmap != null) {
desBitmap.recycle();
}
if (analyzer != null) {
analyzer.stop();
}
}
References
To learn more, please visit:
Official webpages for Image Super-Resolution and ML Kit
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original source

Integrating Huawei Map Kit in a HarmonyOS Wearable Device Application

Introduction
In this article, we will learn about Huawei Map Kit in HarmonyOS. Map Kit is an SDK for map development. It covers map data of more than 200 countries and regions, and supports over 70 languages. With this SDK, you can easily integrate map-based functions into your HarmonyOS application.
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A HarmonyOS smart watch (with its USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
Steps:
Step 1: Create a HarmonyOS application.
Step 2: Create a project in AppGallery Connect.
Step 3: Configure the app in AppGallery Connect.
Step 4: Follow the SDK integration steps.
Let's start coding
MapAbilitySlice.java
Java:
public class MapAbilitySlice extends AbilitySlice {
private static final HiLogLabel LABEL_LOG = new HiLogLabel(3, 0xD001100, "TAG");
private MapView mMapView;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
CommonContext.setContext(this);
// Declaring and Initializing the HuaweiMapOptions Object
HuaweiMapOptions huaweiMapOptions = new HuaweiMapOptions();
// Initialize Camera Properties
CameraPosition cameraPosition =
new CameraPosition(new LatLng(12.972442, 77.580643), 10, 0, 0);
huaweiMapOptions
// Set Camera Properties
.camera(cameraPosition)
// Enables or disables the zoom function. By default, the zoom function is enabled.
.zoomControlsEnabled(false)
// Sets whether the compass is available. The compass is available by default.
.compassEnabled(true)
// Specifies whether the zoom gesture is available. By default, the zoom gesture is available.
.zoomGesturesEnabled(true)
// Specifies whether to enable the scrolling gesture. By default, the scrolling gesture is enabled.
.scrollGesturesEnabled(true)
// Specifies whether the rotation gesture is available. By default, the rotation gesture is available.
.rotateGesturesEnabled(false)
// Specifies whether the tilt gesture is available. By default, the tilt gesture is available.
.tiltGesturesEnabled(true)
// Sets whether the map is in lite mode. The default value is No.
.liteMode(false)
// Set Preference Minimum Zoom Level
.minZoomPreference(3)
// Set Preference Maximum Zoom Level
.maxZoomPreference(13);
// Initialize the MapView object.
mMapView = new MapView(this,huaweiMapOptions);
// Create the MapView object.
mMapView.onCreate();
// Obtains the HuaweiMap object.
mMapView.getMapAsync(new OnMapReadyCallback() {
@Override
public void onMapReady(HuaweiMap huaweiMap) {
HuaweiMap mHuaweiMap = huaweiMap;
mHuaweiMap.setOnMapClickListener(new OnMapClickListener() {
@Override
public void onMapClick(LatLng latLng) {
new ToastDialog(CommonContext.getContext()).setText("onMapClick ").show();
}
});
if (null == mHuaweiMap) {
return;
}
// Add a circle to the map.
Circle mCircle = mHuaweiMap.addCircle(new CircleOptions()
.center(new LatLng(12.972442, 77.580643))
.radius(500)
.fillColor(Color.GREEN.getValue()));
new ToastDialog(CommonContext.getContext()).setText("color green: " + Color.GREEN.getValue()).show();
int strokeColor = Color.RED.getValue();
float strokeWidth = 15.0f;
// Set the edge color of a circle
mCircle.setStrokeColor(strokeColor);
// Sets the edge width of a circle
mCircle.setStrokeWidth(strokeWidth);
}
});
// Create a layout.
ComponentContainer.LayoutConfig config = new ComponentContainer.LayoutConfig(ComponentContainer.LayoutConfig.MATCH_PARENT, ComponentContainer.LayoutConfig.MATCH_PARENT);
PositionLayout myLayout = new PositionLayout(this);
myLayout.setLayoutConfig(config);
ShapeElement element = new ShapeElement();
element.setShape(ShapeElement.RECTANGLE);
element.setRgbColor(new RgbColor(255, 255, 255));
myLayout.addComponent(mMapView);
super.setUIContent(myLayout);
}
}
Result
Tips and Tricks
Add required dependencies without fail.
Add required images in resources > base > media.
Add custom strings in resources > base > element > string.json.
Define supporting devices in config.json file.
Do not log sensitive data.
Enable required service in AppGallery Connect.
Use respective Log methods to print logs.
Conclusion
In this article, we have learnt how to integrate Huawei Map Kit in a HarmonyOS wearable device application. The sample application shows how to implement Map Kit on a HarmonyOS wearable device. I hope this article helps you understand and integrate Map Kit, so that you can use this feature in your HarmonyOS application to display a map on wearable devices.
Thank you for reading this article, and I hope it helps you understand Huawei Map Kit in HarmonyOS. Please leave your comments in the comment section and like.
References
Map Kit
Check out the discussion in the forum

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven {
        url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
    }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
new MLLocalClassificationAnalyzerSetting.Factory()
.setMinAcceptablePossibility(0.8f)
.create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);
// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
@Override
public void onSuccess(List<MLImageClassification> classifications) {
// Recognition success.
// Callback when the MLImageClassification list is returned, to obtain information like image categories.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
try {
MLException mlException = (MLException)e;
// Obtain the result code. You can process the result code and customize relevant messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error message. You can quickly locate the fault based on the result code.
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
}
});
5. Stop the analyzer after recognition is complete.
Java:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
// Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode; a camera-stream sketch follows below.
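For reference, a camera-stream sketch built on ML Kit's LensEngine pattern (surfaceHolder is assumed to come from a SurfaceView showing the camera preview):
Java:
// Attach a transactor to receive streaming results, then feed camera frames via LensEngine.
MLImageClassificationAnalyzer analyzer =
        MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLImageClassification>() {
    @Override
    public void transactResult(MLAnalyzer.Result<MLImageClassification> result) {
        // Called for each analyzed camera frame.
        SparseArray<MLImageClassification> items = result.getAnalyseList();
        for (int i = 0; i < items.size(); i++) {
            Log.d("Classifier", "Category: " + items.valueAt(i).getName());
        }
    }

    @Override
    public void destroy() {
        // Release any resources held by the transactor.
    }
});
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .create();
try {
    lensEngine.run(surfaceHolder); // start the preview and the frame analysis
} catch (IOException e) {
    lensEngine.release();
}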
I came up with a bunch of application scenarios for image classification. For example: education apps, where users can categorize images taken over a period into different albums; travel apps, which can classify images according to where they were taken or by the objects in them; and file sharing apps, where users can upload and share images by image category.
References
>> Image Classification Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
