Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever to find the image I want. As a coder, I can't help wondering if there's a solution. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using the image classification service.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven {
        url 'https://developer.huawei.com/repo/'
    }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
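For example, both calls can be made from a custom Application subclass so that they run exactly once at startup. This is a minimal sketch; the class name is illustrative and not part of the official guide, and remember to register the class via android:name in AndroidManifest.xml.
Java:
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Set the authentication information once, before any analyzer is created.
        MLApplication.getInstance().setApiKey("your ApiKey");
    }
}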
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for on-device recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);

// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap, which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be at least 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
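If you don't already have a bitmap at hand, you can decode one from your app's resources, for example. The drawable name below is a placeholder assumption:
Java:
// Decode a sample image bundled with the app (resource name is illustrative).
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.sample_image);
MLFrame frame = MLFrame.fromBitmap(bitmap);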
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information like image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
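Inside onSuccess, each MLImageClassification entry carries a category label and a confidence value. A minimal sketch of reading them (the log tag is illustrative):
Java:
for (MLImageClassification classification : classifications) {
    // getName() returns the category label; getPossibility() returns the confidence.
    String category = classification.getName();
    float possibility = classification.getPossibility();
    Log.d("ImageClassifier", category + ": " + possibility);
}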
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
I came up with a bunch of application scenarios for image classification. For example: education apps can let users categorize images taken over a given period into different albums; travel apps can classify images by where they were taken or by the objects in them; and file sharing apps can let users upload and share images by category.
References
>> Image classification Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems


HMS Image Super Resolution Application

For more information like this, you can visit the HUAWEI Developer Forum.
Introduction:
HMS ML Kit features the image super-resolution service, which provides a 1x super-resolution capability. This feature removes compression noise from images to obtain clear images.
Precautions:
1. Before calling the image super-resolution service, convert the input image into a bitmap in ARGB format. After processing, the service also outputs a bitmap in ARGB format.
2. The maximum size of an input image is 1024 x 768 px or 768 x 1024 px. The minimum size is 64 x 64 px.
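A minimal sketch of enforcing these two precautions before calling the service (the helper name is mine, not from the original post):
Code:
private Bitmap prepareInput(Bitmap src) {
    // Ensure the bitmap is in ARGB format.
    Bitmap argb = src.getConfig() == Bitmap.Config.ARGB_8888
            ? src : src.copy(Bitmap.Config.ARGB_8888, false);
    int w = argb.getWidth();
    int h = argb.getHeight();
    // Check the supported size range: minimum 64 x 64 px, maximum 1024 x 768 px or 768 x 1024 px.
    boolean sizeOk = w >= 64 && h >= 64
            && ((w <= 1024 && h <= 768) || (w <= 768 && h <= 1024));
    if (!sizeOk) {
        // Scale down or reject the image here.
        return null;
    }
    return argb;
}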
Integration:
1. Create a project in Android Studio and Huawei AGC.
2. Provide the SHA-256 key in the App Information section.
3. Download the agconnect-services.json file from AGC and save it into the app directory.
4. In the root build.gradle
Navigate to allprojects > repositories and buildscript > repositories and add the line below.
Code:
maven { url 'https://developer.huawei.com/repo/' }
5. In the app build.gradle
Configure the Maven dependencies.
Code:
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution:2.0.2.300'
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution-model:2.0.2.300'
Apply plugin
Code:
apply plugin: 'com.huawei.agconnect'
6. Permissions in Manifest
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Code Implementation:
Here is an image converter application that uses the HMS image super-resolution service. This application can select an image from the gallery and can also capture an image with the camera. Follow the steps below.
1. Create an image super-resolution analyzer.
Code:
private void createAnalyzer() {
    MLImageSuperResolutionAnalyzerSetting settings = new MLImageSuperResolutionAnalyzerSetting.Factory()
            // Set the scale of image super resolution to 1x.
            .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X)
            .create();
    analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(settings);
}
2. Create an MLFrame object by using android.graphics.Bitmap.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(srcBitmap).create();
3. Perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Recognition success.
        Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
        setImage(result.getBitmap());
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
4. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
private void release() {
    if (analyzer == null) {
        return;
    }
    analyzer.stop();
}
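A natural place to call release() is the activity's onDestroy() (a lifecycle suggestion, not from the original post):
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    // Release recognition resources together with the activity.
    release();
}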
5. Capture a picture from the camera.
Code:
private void capturePictureFromCamera() {
    if (checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        requestPermissions(new String[]{Manifest.permission.CAMERA}, MY_CAMERA_PERMISSION_CODE);
    } else {
        Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(cameraIntent, CAMERA_REQUEST);
    }
}
6. Accessing an image from the gallery.
Code:
private void getImageFromGallery() {
    Intent intent = new Intent();
    intent.setAction(Intent.ACTION_GET_CONTENT);
    intent.setType("image/*");
    startActivityForResult(intent, GALLERY_REQUEST);
}
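Both capture and selection deliver their results to onActivityResult. Here is a hedged sketch of handling them, assuming the request codes above and the srcBitmap field used when creating the MLFrame:
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode != RESULT_OK || data == null) {
        return;
    }
    try {
        if (requestCode == CAMERA_REQUEST) {
            // The camera intent returns a thumbnail in the "data" extra.
            srcBitmap = (Bitmap) data.getExtras().get("data");
        } else if (requestCode == GALLERY_REQUEST) {
            Uri uri = data.getData();
            srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), uri);
        }
    } catch (IOException e) {
        // Handle the read failure.
    }
}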
Screenshots:
Conclusion:
The image super-resolution service is widely used in day-to-day life and supports common scenarios such as improving low-quality images on the network, obtaining clear images while reading news, and improving the clarity of ID card images. The service intelligently reduces image noise and provides a clear image without changing the resolution.
Reference:
HMS Core Guides
Are there any image size limitations?
cool thing, I'll definitely have to try it, especially with my old photos.
Does this work with every image? I've got some old black and white photos.
JohnMes said:
cool thing, I'll definitely have to try it, especially with my old photos.
It will be a good choice.
Rushikesh787 said:
Does this work with every image? I've got some old black and white photos.
I think it's worth a try. Maybe we'll get a surprise.
Will it convert images of all formats?
I would definitely want to try this out
Awesome
I would definitely try this app on my girlfriend's photo

Integrate ML Kit’s Document Skew Correction Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's text recognition capability. With this capability, users just need to provide an image, and your app will automatically recognize all of the key information. But what if the image is skewed? Can we still get all the same key information? Yes, we can! ML Kit's document skew correction capability automatically recognizes the position of the document, corrects the shooting angle, and lets users customize the edge points. So even if the document is skewed, your users can still get all of the information they need.
Application Scenarios
Document skew correction is useful in a huge range of situations. For example, if your camera is at an angle when you shoot a paper document, the resulting image may be difficult to read. You can use document skew correction to adjust the document to the correct angle, which makes it easier to read.
Document skew correction also works for cards.
And if you’re traveling, you can use it to straighten out pictures of road signs that you’ve taken from an angle.
Isn’t it convenient? Now, I will show you how to quickly integrate this capability.
Document Correction Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers Development Process page.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.2.300'
    // Import the document detection/correction model package.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="dsc" />
1.5 Apply for Camera Permission and Local Image Reading Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Text Box Detection/Correction Analyzer
Code:
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
MLDocumentSkewCorrectionAnalyzer analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
2.2 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
JPG, JPEG, and PNG images are supported. It is recommended that you limit the image size to between 320 x 320 px and 1920 x 1920 px.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncDocumentSkewDetect Asynchronous Method or the analyseFrame Synchronous Method to Detect the Text Box
When the return code is MLDocumentSkewCorrectionConstant.SUCCESS, the coordinates of the four vertices of the text box are returned. These coordinates are relative to the input image. If they are inconsistent with the coordinates of the device display, you need to convert them; otherwise, the returned data will be meaningless.
Code:
// Call the asyncDocumentSkewDetect asynchronous method.
Task<MLDocumentSkewDetectResult> detectTask = analyzer.asyncDocumentSkewDetect(mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
    @Override
    public void onSuccess(MLDocumentSkewDetectResult detectResult) {
        // Detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});

// Call the analyseFrame synchronous method.
SparseArray<MLDocumentSkewDetectResult> detect = analyzer.analyseFrame(mlFrame);
if (detect != null && detect.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
    // Detection success.
} else {
    // Detection failure.
}
2.4 Obtain the Coordinate Data of the Four Vertices of the Text Box Once the Detection is Successful
Use the upper left vertex as the starting point, and add the upper left, upper right, lower right, and lower left vertices to a list (List<Point>). Finally, create an MLDocumentSkewCorrectionCoordinateInput object.
If the synchronous method analyseFrame is called, obtain the detection result first, as shown below. (If the asynchronous method asyncDocumentSkewDetect is called, skip this step.)
Code:
MLDocumentSkewDetectResult detectResult = detect.get(0);
Obtain the coordinate data for the four vertices of the text box and create an MLDocumentSkewCorrectionCoordinateInput object.
Code:
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(coordinates);
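As noted in step 2.3, if the detected coordinates are in image space while your UI works in view space, convert them before displaying or editing the vertices. A minimal sketch under the assumption of simple uniform scaling (no letterboxing):
Code:
private Point mapToView(Point p, int imageWidth, int imageHeight, int viewWidth, int viewHeight) {
    // Scale an image-space point into view-space (assumes the view shows the full image).
    float scaleX = (float) viewWidth / imageWidth;
    float scaleY = (float) viewHeight / imageHeight;
    return new Point(Math.round(p.x * scaleX), Math.round(p.y * scaleY));
}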
2.5 Call the asyncDocumentSkewCorrect Asynchronous Method or syncDocumentSkewCorrect Synchronous Method to Correct the Text Box
Code:
// Call the asyncDocumentSkewCorrect asynchronous method.
Task<MLDocumentSkewCorrectionResult> correctionTask = analyzer.asyncDocumentSkewCorrect(mlFrame, coordinateData);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
    @Override
    public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
        // Correction success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Correction failure.
    }
});

// Call the syncDocumentSkewCorrect synchronous method.
SparseArray<MLDocumentSkewCorrectionResult> correct = analyzer.syncDocumentSkewCorrect(mlFrame, coordinateData);
if (correct != null && correct.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
    // Correction success.
} else {
    // Correction failure.
}
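Once correction succeeds, the corrected image can be read from the result. A short sketch, assuming getCorrected() returns the corrected bitmap (as in the Kit's sample code) and that imageView is an ImageView in your layout:
Code:
MLDocumentSkewCorrectionResult refineResult = correct.get(0);
// getCorrected() returns the corrected image as a bitmap (assumption based on the sample code).
Bitmap corrected = refineResult.getCorrected();
imageView.setImageBitmap(corrected);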
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
Can we use ML Kit to translate the document into other languages?

Huawei ML Kit – Integration of Scene Detection

Introduction
To help apps understand image content, the scene detection service can classify the scenario content of images and add labels, such as outdoor scenery, indoor places, and buildings. You can create more customized experiences for users based on the data detected from images. Currently, Huawei supports detection of 102 scenarios. For details about the scenarios, refer to the List of Scenario Identification Categories.
This service can be used to identify image sets by scenario and create intelligent album sets. You can also select various camera parameters based on the scene in your app, to help users take better-looking photos.
Prerequisites
The scene detection service supports integration with Android 6.0 and later versions.
Scene detection needs the READ_EXTERNAL_STORAGE and CAMERA permissions in AndroidManifest.xml.
Implementing dynamic permissions for camera and storage is not covered in this article; make sure to integrate a dynamic permission feature, for example as in the sketch below.
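A minimal runtime-permission sketch (the request code and helper name are illustrative):
Code:
private static final int PERMISSION_REQUEST_CODE = 100;

private void checkPermissions() {
    String[] permissions = {
            Manifest.permission.CAMERA,
            Manifest.permission.READ_EXTERNAL_STORAGE};
    for (String permission : permissions) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, permissions, PERMISSION_REQUEST_CODE);
            return;
        }
    }
}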
Development
1. Register a developer account in AppGallery Connect.
2. Create an application and enable ML Kit from AppGallery Connect. Refer to Service Enabling.
3. Integrate the AppGallery Connect SDK. Refer to AppGallery Connect Service Getting Started.
4. Add the Huawei scene detection dependencies in the app-level build.gradle.
Code:
// ML Scene Detection SDK
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Import the scene detection model package.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.3.300'
5. Sync the Gradle project.
We have an activity (MainActivity.java) with floating buttons to select static scene detection or live scene detection.
Static scene detection detects the scene in static images: when we select a photo, the scene detection service returns the results.
Camera stream (live) scene detection can process camera streams, convert video frames into MLFrame objects, and detect scenarios using the static image detection method. A sketch of this mode follows the static example below.
Implementation of Static scene detection
Code:
private void sceneDetectionEvaluation(Bitmap bitmap) {
    // Create a scene detection analyzer instance based on the customized configuration.
    MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
            // Set the confidence threshold for scene detection.
            .setConfidence(confidence)
            .create();
    analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
        @Override
        public void onSuccess(List<MLSceneDetection> result) {
            // Processing logic for scene detection success.
            for (MLSceneDetection sceneDetection : result) {
                sb.append("Detected Scene : " + sceneDetection.getResult() + " , "
                        + "Confidence : " + sceneDetection.getConfidence() + "\n");
            }
            tvResult.setText(sb.toString());
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            // Processing logic for scene detection failure.
            if (e instanceof MLException) {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and customize respective messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error information. You can quickly locate the fault based on the result code.
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "MLException : " + errorMessage + ", error code: " + errorCode);
            } else {
                // Other errors.
                Log.e(TAG, "Exception : " + e.getMessage());
            }
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    });
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}
We configure MLSceneDetectionAnalyzerSetting and set the confidence threshold for scene detection; setConfidence() takes a float value. Once the settings are fixed, we create the analyzer with them and then build the frame from the bitmap. Finally, we create a task that returns a list of MLSceneDetection objects, with listener callbacks for success and failure. Each result carries two parameters, the detected scene and its confidence, and we display the response in the tvResult TextView.
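For the camera stream (live) mode mentioned earlier, the usual HMS pattern is to attach a transactor to the analyzer and feed it frames through a LensEngine. The following is a hedged sketch based on that common pattern; the display dimensions and frame rate are illustrative, and surfaceHolder is assumed to come from the SurfaceView that previews the camera:
Code:
analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLSceneDetection>() {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
        // Called for each processed frame with the detected scenes.
        SparseArray<MLSceneDetection> scenes = results.getAnalyseList();
    }

    @Override
    public void destroy() {
        // Release resources held by the transactor.
    }
});
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1440, 1080)
        .applyFps(30.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    lensEngine.run(surfaceHolder);
} catch (IOException e) {
    lensEngine.release();
}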
For more details, you can visit https://forums.developer.huawei.com/forumPortal/en/topic/0204400184662360088
Will we get an accuracy result?

Animate a Picture with the Moving Picture Capability

Ever wondered how to animate a static image? The moving picture capability of Video Editor Kit has the answer. It adds authentic facial expressions to an image of faces by leveraging AI algorithms such as face detection, face key point detection, facial expression feature extraction, and facial expression animation.
Impressive stuff, right? Let's move on and see how this capability can be integrated.
Integration Procedure
Preparations
For details, please check the official document.
Configuring a Video Editing Project
1. Set the app authentication information.
You can set the information through an API key or access token.
Use the setAccessToken method to set an access token during initialization when the app is started. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID.
This ID is used to manage your usage quotas, so ensure that the ID is unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the running environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its running environment. When exiting a video editing project, release the HuaweiVideoEditor object.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the position for the preview area.
This area renders video images, which is implemented by creating a SurfaceView in the fundamental capability SDK. Ensure that the preview area position in your app is specified before creating this area.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the layout of the preview area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the running environment. If the license verification fails, LicenseException is thrown.
After the HuaweiVideoEditor object is created, it has not occupied any system resource. You need to manually set the time for initializing the running environment of the object. Then, necessary threads and timers will be created in the fundamental capability SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
4. Add a video or image.
Create a video lane and add a video or image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the video lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Integrating the Moving Picture Capability
Code:
// Add the moving picture effect.
videoAsset.addFaceReenactAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Handling progress.
    }

    @Override
    public void onSuccess() {
        // Handling success.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Handling failure.
    }
});

// Remove the moving picture effect.
videoAsset.removeFaceReenactAIEffect();
Result
This article presents the moving picture capability of Video Editor Kit. For more, check here.
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Implementing Hair Color Change in a Tap

Users sometimes want to recolor people's hair in their videos. The color hair capability of HMS Core Video Editor Kit makes it possible for users to choose from a rich array of preset colors to recolor a person's hair, with just a simple tap. In this way, the capability helps make users' videos more interesting.
Function Overview
Processes an image or a video in real time.
Supports hair color change for multiple people in an image or a video.
Supports color strength adjustment.
Integration Procedure
Preparations
For details, please check the official document.
Configuring a Video Editing Project
1. Set the app authentication information.
You can set the information through an API key or access token.
Use the setAccessToken method to set an access token when the app is started. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Use the setApiKey method to set an API key when the app is started. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
This ID is used to manage your usage quotas, so ensure that the ID is unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the running environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its running environment. When exiting a video editing project, release the HuaweiVideoEditor object.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the position for the preview area.
This area renders video images, which is implemented by creating a SurfaceView in the fundamental capability SDK. Ensure that the preview area position in your app is specified before creating this area.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the layout of the preview area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the running environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor object is created, it has not occupied any system resource. You need to manually set the time for initializing the running environment of the object. Then, necessary threads and timers will be created in the fundamental capability SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
4. Add a video or image.
Create a video lane and add a video or image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the video lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Integrating the Color Hair Capability
Code:
// Initialize the AI algorithm for the color hair effect.
asset.initHairDyeingEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress.
    }

    @Override
    public void onSuccess() {
        // The initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The initialization failed.
    }
});

// Add the color hair effect by specifying a color and the default strength.
asset.addHairDyeingEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Handling progress.
    }

    @Override
    public void onSuccess() {
        // The handling is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The handling failed.
    }
}, colorPath, defaultStrength);

// Remove the color hair effect.
asset.removeHairDyeingEffect();
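Note that colorPath and defaultStrength are inputs you supply: colorPath points to the color material file and defaultStrength controls how strong the dyeing effect is. The values below are illustrative assumptions only; obtain real color materials as described in the official document:
Code:
// Hypothetical values for illustration only.
String colorPath = "/sdcard/hairdyeing/color_blue"; // path to a downloaded color material (assumption)
int defaultStrength = 80; // effect strength (assumed scale; check the official guide)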
This article presents the hair dyeing capability of Video Editor Kit. For more, check here.
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
