Machine Learning Made Easy: Image Segmentation Using Huawei ML Kit and Kotlin - Huawei Developers

Introduction
Image Segmentation – This service extracts segments such as people, plants, and the sky from an image. It supports static images from both the camera and the gallery, and it also supports dynamic camera streams.
Internally it uses Mean Intersection-over-Union (Mean IoU), a common evaluation metric for semantic image segmentation.
Once segmentation is done, it can create a coordinate array for each element such as a human, plant, or animal. Once you get this array, you can replace any value with another one to create some wonderful results.
Here we will learn how to extract a person from an image, change the background, and create a wonderful new result using Huawei ML Kit.
I will divide this process into 4 steps:
1. How to choose an image from gallery or camera
2. Once we get the result how to send it to Huawei ML Kit
3. How to complete the processing of image with desired result
4. How to showcase newly extracted image onto the screen
Have you noticed how easy it is to select an image and change its background as we like?
Notice also how sharp the selection is, which is why the extracted image looks so clear.
Once we get the segmented image, we can create different results from it as well.
So let's get to the point.
Step 1:
To choose an image from gallery or camera use the below code.
By Camera:
Code:
private fun uploadByCamera() {
    val takePicture = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    startActivityForResult(takePicture, 1222)
}
By Gallery:
Code:
private fun uploadByGallery() {
    val photoPickerIntent = Intent(Intent.ACTION_PICK)
    photoPickerIntent.type = "image/*"
    photoPickerIntent.putExtra(Intent.EXTRA_LOCAL_ONLY, false)
    startActivityForResult(
        Intent.createChooser(photoPickerIntent, "Choosing picture from gallery"),
        loadImageGalleryCode
    )
}
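Note: the request codes are assumed to be plain constants declared in the activity; the original post does not show them. The values below are illustrative, except that loadImageCameraCode must match the 1222 passed in uploadByCamera() so that the check in onActivityResult works.
Code:
// Hypothetical request codes used by the activity.
private val loadImageCameraCode = 1222
private val loadImageGalleryCode = 1233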
Step 2:
Once the intent is triggered, you are directed to the system activity, where you choose an image.
Control then returns to our ImageSegmentActivity in the onActivityResult method.
We have to get the bitmap and save it for later use. Below is the code for reference.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == loadImageGalleryCode && resultCode == Activity.RESULT_OK && data != null) {
        val pickedImage: Uri? = data.data
        val filePath = arrayOf(MediaStore.Images.Media.DATA)
        val cursor: Cursor? = contentResolver.query(pickedImage!!, filePath, null, null, null)
        cursor!!.moveToFirst()
        val imagePath: String = cursor.getString(cursor.getColumnIndex(filePath[0]))
        val options = BitmapFactory.Options()
        options.inPreferredConfig = Bitmap.Config.ARGB_8888
        val bitmap: Bitmap = BitmapFactory.decodeFile(imagePath, options)
        imageSegmentationViewModel.bitmap.value = bitmap
        imageSegmentationViewModel.imageSegmentation()
        cursor.close()
    } else if (requestCode == loadImageCameraCode && resultCode == Activity.RESULT_OK && data != null) {
        val bitmap: Bitmap? = data.extras!!["data"] as Bitmap?
        imageSegmentationViewModel.bitmap.value = bitmap
        imageSegmentationViewModel.imageSegmentation()
    }
}
Step 3:
After getting the bitmap, I set a MutableLiveData that lives in my ImageSegmentationViewModel and call the imageSegmentation() method.
Code:
fun imageSegmentation() {
    val setting = MLImageSegmentationSetting.Factory()
        .setExact(false)
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setScene(MLImageSegmentationScene.ALL)
        .create()
    analyzer = MLAnalyzerFactory.getInstance()
        .getImageSegmentationAnalyzer(setting)
    val frame = MLFrame.fromBitmap(bitmap.value)
    val task = analyzer.asyncAnalyseFrame(frame)
    task.addOnSuccessListener {
        bitmap.value = it.foreground
    }.addOnFailureListener {
        Log.d("Image Segmentation: ", "Error occurred: " + it.message)
    }
}
Let us discuss the code used above in detail.
Before we create the MLImageSegmentationAnalyzer, we need to tune our settings to get the desired result. The settings below are configured to quickly extract the human body segment from the image.
Once this MLImageSegmentationSetting object is created for tuning, we pass it to the MLImageSegmentationAnalyzer.
setExact(false): true requests fine segmentation, false requests fast segmentation.
setAnalyzerType(MLImageSegmentationSetting.BODY_SEG): Sets the mode to identify and extract the human body from the image.
setScene(MLImageSegmentationScene.ALL): In total we can request 4 scene types from an image, listed below with their keys.
MLImageSegmentationScene.ALL: All segmentation results are returned (pixel-level label information, human body image with a transparent background, and gray-scale image with a white human body and black background).
MLImageSegmentationScene.MASK_ONLY: Only pixel-level label information is returned.
MLImageSegmentationScene.FOREGROUND_ONLY: Only a human body image with a transparent background is returned.
MLImageSegmentationScene.GRAYSCALE_ONLY: Only a gray-scale image with a white human body and black background is returned.
For our example we use MLImageSegmentationScene.ALL, as we want all the scene results from the image.
Finally, we provide the MLImageSegmentationSetting object created above to the MLImageSegmentationAnalyzer:
val analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting)
We can also create the MLImageSegmentationAnalyzer object with the default settings:
val analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer()
But with this approach customization is very limited.
The next step is to create an MLFrame with the code below and provide the previously fetched image in bitmap format.
val frame = MLFrame.fromBitmap(bitmap)
On the analyzer object we call asyncAnalyseFrame(frame), passing the MLFrame we just created.
This yields a Task<MLImageSegmentation> object, on which you get 2 callbacks:
onSuccess
onFailure
You can save the new resource in onSuccess() and stop the analyzer to release detection resources with the analyzer.stop() method (a minimal sketch follows below).
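As a rough sketch (not from the original project), here is how the other results could be read inside onSuccess when MLImageSegmentationScene.ALL is used, and how the extracted person could be composited over a new background before stopping the analyzer. The accessors foreground, grayscale and masks are assumed from the MLImageSegmentation result class, and newBackground is a hypothetical bitmap you supply.
Code:
task.addOnSuccessListener { result ->
    // Human body with a transparent background.
    val person: Bitmap? = result.foreground
    // Gray-scale mask and pixel-level labels, returned because we requested ALL scenes.
    val grayscale: Bitmap? = result.grayscale
    val labels: ByteArray? = result.masks

    person?.let { cutout ->
        // Draw the new background first, then the cutout on top (sizes assumed equal here).
        val output = Bitmap.createBitmap(newBackground.width, newBackground.height, Bitmap.Config.ARGB_8888)
        val canvas = Canvas(output)
        canvas.drawBitmap(newBackground, 0f, 0f, null)
        canvas.drawBitmap(cutout, 0f, 0f, null)
        bitmap.value = output
    }
    // Release detection resources once processing is done.
    analyzer.stop()
}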
Step 4:
Here is how we set the bitmap on the ImageView.
Inside our main layout the view model was added with the name dashboardViewModel.
We also added a custom attribute named customImageSrc.
Code:
<ImageView
    android:id="@+id/imageView"
    customImageSrc="@{dashboardViewModel.bitmap}"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:contentDescription="@string/user_image"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintHorizontal_bias="0.5"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />
We created an adapter class named CustomImageAdapter, where we bind our ImageView through the @BindingAdapter("customImageSrc") annotation.
Once our MutableLiveData in the ViewModel is updated by the success callback from Task<MLImageSegmentation>,
this ImageView is updated automatically by the BindingAdapter.
Code:
class CustomImageAdapter {
    companion object {
        @JvmStatic
        @BindingAdapter("customImageSrc")
        fun setImageViewResource(imageView: ImageView, bitmap: Bitmap?) {
            bitmap?.apply { imageView.setImageBitmap(this) }
        }
    }
}
I hope you liked this article. I would love to hear your ideas on how you can use this kit in your applications.
In case you don't have a real device, you can check out this article.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/home

Related

Huawei Analytics Kit-Track the growth of your application

This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
In this post, we will learn Kotlin with data binding in Android. It gives you the ability to communicate between your view and model. It keeps the code clean and sorted.
Note: For the configuration part, check the previous article.
How to Use Data Binding Library with Kotlin – A Step By Step Guide
It’s a library that allows you to bind the data of your models directly to the xml views in a very flexible way.
Kotlin was recently introduced as a second official language for Android development, alongside Java. It is similar to Java in many ways but is a little easier to learn and get to grips with. Some of the biggest companies have adopted Kotlin and seen amazing results.
If you want to use data binding and Kotlin, here are a few things to keep in mind:
· Data binding is a support library, so it can be used with all Android platform versions all the way back to Android 2.1 (API level 7+).
· To use data binding, you need Android Plugin for Gradle 1.5.0-alpha1 or higher. You can see here how to update the Android Plugin for Gradle.
· Android Studio 3.0 fully supports Kotlin.
First of all, create an Android Studio project, add a dependency for Kotlin, and make a few changes to your app-level build.gradle (a minimal sketch follows below).
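As a minimal sketch (using the Kotlin Gradle DSL; the plugin IDs and the data binding flag syntax depend on your Android Gradle plugin version), the app-level changes boil down to applying the Kotlin plugins and enabling data binding:
Code:
// build.gradle.kts (app module) - illustrative only.
plugins {
    id("com.android.application")
    id("org.jetbrains.kotlin.android")
    id("org.jetbrains.kotlin.kapt") // data binding uses annotation processing
}

android {
    // ... existing android { } configuration ...
    buildFeatures {
        // On older Android Gradle plugin versions this was dataBinding { enabled = true }.
        dataBinding = true
    }
}

dependencies {
    implementation(kotlin("stdlib"))
}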
· Add the below line to the root-level build.gradle:
Code:
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
Data Binding:
· Android offers support to write declarative layouts using data binding.
· This minimizes the necessary code in your application logic to connect to the user interface elements.
· The usage of data binding requires changes in your layout files. Such layout files start with a layout root tag, followed by a data element and a view root element.
· The data element describes the data that is available for binding. The view element contains your root hierarchy, similar to layout files that are not used with data binding.
· References to the data elements or expressions within the layout are written in the attribute properties using the @{} or @={} syntax.
1. The user variable within data describes a property that may be used within this layout
2. Normal view hierarchy.
3. Binding Input data.
Code Implementation:
1.SignInActivity.kt
Code:
class SignInActivity : AppCompatActivity() {
    private var mInstance: HiAnalyticsInstance? = null
    private lateinit var mDataBinding: ActivitySigninBinding
    var viewmodel: SignInViewModel? = null
    var customeProgressDialog: CustomeProgressDialog? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        mDataBinding = DataBindingUtil.setContentView(this, R.layout.activity_signin)
        viewmodel = ViewModelProviders.of(this).get(SignInViewModel::class.java)
        mDataBinding.lifecycleOwner = this
        mDataBinding.viewmodel = viewmodel
        customeProgressDialog = CustomeProgressDialog(this)
        initObservables()
        init()
    }

    private fun initObservables() {
        viewmodel?.userLogin?.observe(this, Observer { userInfo ->
            Toast.makeText(this, "welcome, ${userInfo?.email}", Toast.LENGTH_LONG).show()
            val bundle = Bundle()
            bundle.putString("email", userInfo?.email)
            bundle.putString("password", userInfo?.password)
            mInstance!!.onEvent(HAEventType.SIGNIN, bundle)
            val intent = Intent(this, ProfileScreen::class.java)
            startActivity(intent)
        })
        viewmodel?.progressDialog?.observe(this, Observer {
            if (it!!) customeProgressDialog?.show() else customeProgressDialog?.dismiss()
        })
    }

    private fun init() {
        HiAnalyticsTools.enableLog()
        mInstance = HiAnalytics.getInstance(this)
        mInstance?.setAnalyticsEnabled(true)
        mInstance?.regHmsSvcEvent()
    }
}
Viewmodel Class:
· The view model coordinates the view's interaction with the model.
· It may convert or manipulate data so that it can be easily consumed by the view and may implement additional properties that may not be present on the model.
· The view model may define logical states that the view can represent visually to the user.
Code:
class SignInViewModel(application: Application) : AndroidViewModel(application) {
    var btnSelected: ObservableBoolean? = null
    var email: ObservableField<String>? = null
    var password: ObservableField<String>? = null
    var userLogin: MutableLiveData<UserInfo>? = null
    var progressDialog: SingleLiveEvent<Boolean>? = null

    init {
        btnSelected = ObservableBoolean(false)
        email = ObservableField("")
        password = ObservableField("")
        userLogin = MutableLiveData()
        progressDialog = SingleLiveEvent<Boolean>()
    }

    fun onEmailChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(password?.get()!!.length != 0)
    }

    fun onPasswordChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(s.toString().length >= 8)
    }

    fun onLoginClick() {
        progressDialog?.value = false
        val userInfo = UserInfo(email?.get(), password?.get())
        userLogin?.value = userInfo
    }
}
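For completeness: UserInfo and SingleLiveEvent are not shown in the post. UserInfo is assumed to be a simple model class like the sketch below, and SingleLiveEvent is the commonly used single-shot LiveData wrapper (a plain MutableLiveData<Boolean> also works here if you do not need single-shot semantics).
Code:
// Hypothetical model class assumed by SignInViewModel.
data class UserInfo(val email: String?, val password: String?)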
Result:
Overview of application:
Event Analysis:
· Event analysis collects data about interactions with your content.
· An event hit includes a value for each component, and these values are displayed in the report.
If you have any questions about this, you can get answers from the HUAWEI Developer Forum.

Android: How to Develop an ID Photo DIY Applet in 30 Min

The source code will be shared, and it's quite friendly to users who are new to Android. The whole process will take only 30 minutes.
This is application-level development, and we won't go through the image segmentation algorithm itself. I use Huawei ML Kit, which provides the image segmentation capability, to develop this app. Developers will learn how to quickly develop an ID photo DIY applet using this SDK.
Background
I don't know if you have had such an experience. All of a sudden, schools or companies needed to provide one inch or two inch head photos of individuals. They needed to apply for a passport or student card which have requirements for the background color of the photos. However, many people don't have time to take photos at the photo studio. Or they have taken them before, but the background color of the photos doesn't meet the requirements. I had a similar experience. At that time, the school asked for a passport, and the school photo studio was closed again. I took photos with my mobile phone in a hurry, and then used the bedspread as the background to deal with it. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning offers image segmentation. Using this SDK to develop a small ID photo DIY applet would have perfectly solved that embarrassment.
Here is the demo for the result.
How effective is it? Quite impressive, and we only need to write a small applet to achieve it!
Core tip: this SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add Huawei Maven Warehouse in Project Level Gradle
Open the Android Studio project-level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the image segmentation SDK and the body segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the user installs your application from Huawei AppGallery, add the following statement to the AndroidManifest.xml file of the application:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permission in the AndroidManifest.xml File
Code:
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Two Key Steps of Code Development
2.1 Dynamic Authority Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the settings page for this application package.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Through android.graphics.Bitmap for the Analyzer to Detect Pictures
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to process the result returned by the image segmentation detector asynchronously.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        // Transacting logic for segmentation success.
        if (mlImageSegmentationResults != null) {
            StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
            StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
            StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
            StillCutPhotoActivity.this.changeBackground();
        } else {
            StillCutPhotoActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Transacting logic for segmentation failure.
        StillCutPhotoActivity.this.displayFailure();
    }
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
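Note that the drawing-cache APIs used above (setDrawingCacheEnabled/getDrawingCache) are deprecated on recent Android versions. As an alternative sketch (written in Kotlin, not part of the original demo), the segmented foreground can be composited over the background bitmap directly with a Canvas:
Code:
// Minimal sketch: draw the segmented foreground over the chosen background.
// backgroundBitmap and foreground are assumed to be the bitmaps used above.
fun composeIdPhoto(backgroundBitmap: Bitmap, foreground: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(backgroundBitmap.width, backgroundBitmap.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(result)
    canvas.drawBitmap(backgroundBitmap, 0f, 0f, null)
    // Scale the foreground to the background size before drawing it on top.
    val scaledForeground = Bitmap.createScaledBitmap(foreground, backgroundBitmap.width, backgroundBitmap.height, true)
    canvas.drawBitmap(scaledForeground, 0f, 0f, null)
    return result
}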
Conclusion
In this way, a small ID photo DIY applet has been made. Let's see the demo.
If you have strong hands-on skills, you can also add features such as changing suits or other operations. The source code has been uploaded to GitHub, and you are welcome to improve it there.
https://github.com/HMS-Core/hms-ml-demo/tree/master/ID-Photo-DIY
Please find the source code at the GitHub address above (the project directory is ID-Photo-DIY).
Based on the image segmentation capability, you can not only build an ID photo DIY applet, but also realize the following related functions:
1. People's portraits in daily life can be cut out, some interesting photos can be made by changing the background, or the background can be virtualized to get more beautiful and artistic photos.
2. Identify the sky, plants, food, cats and dogs, flowers, water surface, sand surface, buildings, mountains and other elements in the image, and make special beautification for these elements, such as making the sky bluer and the water clearer.
3. Identify the objects in the video stream, edit the special effects of the video stream, and change the background.
For other functions, please brainstorm together!
For a more detailed development guide, please refer to the official website of Huawei developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4

Huawei ML kit – Integration of Scene Detection

Introduction
To help understand the image content, the scene detection service can classify the scenario content of images and add labels, such as outdoor scenery, indoor places, and buildings. You can create more customised experiences for users based on the data detected from images. Currently Huawei supports detection of 102 scenarios. For details about the scenarios, refer to the List of Scenario Identification Categories.
This service can be used to identify image sets by scenario and create intelligent album sets. You can also select various camera parameters based on the scene in your app, to help users to take better-looking photos.
Prerequisite
The scene detection service supports integration with Android 6.0 and later versions.
Scene detection needs the READ_EXTERNAL_STORAGE and CAMERA permissions in AndroidManifest.xml.
Implementation of dynamic permissions for camera and storage is not covered in detail in this article; please make sure to integrate the dynamic permission feature (a minimal sketch is shown below).
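For reference, a minimal Kotlin sketch of requesting both permissions at runtime inside an Activity (the request code and helper name are illustrative, not from the original article):
Code:
// Hypothetical request code for the runtime permission request.
private val PERMISSION_REQUEST_CODE = 100

private fun requestPermissionsIfNeeded() {
    val needed = arrayOf(Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE)
        .filter { ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED }
    if (needed.isNotEmpty()) {
        ActivityCompat.requestPermissions(this, needed.toTypedArray(), PERMISSION_REQUEST_CODE)
    }
}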
Development
  1. Register as a developer account in AppGallery Connect.
  2. Create an application and enable ML kit from AppGallery connect.
  3. Refer to Service Enabling. Integrate AppGallery connect SDK. Refer to AppGallery Connect Service Getting Started.
4. Add Huawei Scene detection dependencies in app-level build.gradle.
Code:
// ML Scene Detection SDK
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Import the scene detection model package.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.3.300'
 5. Sync the gradle.
We have an Activity (MainActivity.java) which has floating buttons to select static scene detection and live scene detection.
Static scene detection is used to detect scene in static images. When we select a photo, the scene detection service returns the results.
Camera stream (live) scene detection processes camera streams by converting video frames into MLFrame objects and detecting scenarios using the static image detection method; a sketch of binding the analyzer to a camera stream follows below.
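For the camera stream mode, the general ML Kit pattern is to attach a result transactor to the analyzer and feed it frames through a LensEngine. The sketch below (in Kotlin) follows that pattern; the class and method names come from the ML Kit common SDK but should be verified against the official guide, and surfaceHolder is assumed to come from a SurfaceView in your layout.
Code:
// Sketch only: bind the scene detection analyzer to a camera stream via LensEngine.
analyzer.setTransactor(object : MLAnalyzer.MLTransactor<MLSceneDetection> {
    override fun transactResult(results: MLAnalyzer.Result<MLSceneDetection>) {
        val items = results.analyseList
        // Handle the detected scenes for the current frame here.
    }

    override fun destroy() {
        // Release transactor-related resources if needed.
    }
})

val lensEngine = LensEngine.Creator(applicationContext, analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1440, 1080) // illustrative preview size
    .applyFps(30.0f)
    .enableAutomaticFocus(true)
    .create()

lensEngine.run(surfaceHolder)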
Implementation of Static scene detection
Code:
private void sceneDetectionEvaluation(Bitmap bitmap) {
    // Create a scene detection analyzer instance based on the customized configuration.
    MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
            // Set confidence for scene detection.
            .setConfidence(confidence)
            .create();
    analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
        public void onSuccess(List<MLSceneDetection> result) {
            // Processing logic for scene detection success.
            for (MLSceneDetection sceneDetection : result) {
                sb.append("Detected Scene : " + sceneDetection.getResult() + " , "
                        + "Confidence : " + sceneDetection.getConfidence() + "\n");
                tvResult.setText(sb.toString());
                if (analyzer != null) {
                    analyzer.stop();
                }
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        public void onFailure(Exception e) {
            // Processing logic for scene detection failure.
            if (e instanceof MLException) {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and customize respective messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error information. You can quickly locate the fault based on the result code.
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "MLException : " + errorMessage + ", error code: " + String.valueOf(errorCode));
            } else {
                // Other errors.
                Log.e(TAG, "Exception : " + e.getMessage());
            }
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    });
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}
We configure the analyzer through MLSceneDetectionAnalyzerSetting() and set the confidence level for scene detection; setConfidence() takes a float value. Once the settings are fixed, we create the analyzer with them, and then set the frame with the bitmap. Lastly, we create a task for the list of MLSceneDetection objects, with listener functions for success and failure. The service returns a list of results; each result has two parameters, result and confidence. We set the response on the TextView tvResult.
For more details, you can visit https://forums.developer.huawei.com/forumPortal/en/topic/0204400184662360088

A Programmer's Perfect Father's Birthday Gift: A Restored Old Photo

Everyone's family has some old photos filed away in an album. Despite the simple backgrounds and casual poses, these photos reveal quite a bit, telling stories and providing insight on what life was like in the past.
In anticipation of Father's Birthday, John, a programmer at Huawei, was racking his brains about what gift to get for his father. He thought about it for quite a while, and then a glimpse at an old photo album piqued his interest. "Why not use my coding expertise to restore my father's old photo, and shed light on his youthful personality?", he mused. Intrigued by this thought, John started to look into how he could achieve this goal.
Image super-resolution in HUAWEI ML Kit was ultimately what he settled on. With this service, John was able to convert the wrinkled and blurry old photo into a hi-res image, and presented it to his father. His father was deeply touched by the gesture.
Actual Effects:
Image Super-Resolution
This service converts an unclear, low-resolution image into a high-resolution image, increasing pixel intensity and displaying details that were missed when the image was originally taken.
Image super-resolution is ideal in computer vision, where it can help enhance image recognition and analysis capabilities. Image super-resolution technology has improved rapidly and plays a growing role in day-to-day work and life. It can be used to sharpen common images, such as portrait shots, as well as vital images in fields like medical imaging, security surveillance, and satellite imaging.
Image super-resolution offers both 1x and 3x super-resolution capabilities. 1x super-resolution removes compression noise, and 3x super-resolution effectively suppresses compression noise, while also providing a 3x enlargement capability.
The Image super-resolution service can help enhance images for a wide range of objects and items, such as greenery, food, and employee ID cards. You can even use it to enhance low-quality images such as news images obtained from the network into clear, enlarged ones.
Development Preparations
For more details about configuring the Huawei Maven repository and integrating the image super-resolution SDK, please refer to the Development Guide of ML Kit on HUAWEI Developers.
Configuring the Integrated SDK
Open the build.gradle file in the app directory. Add build dependencies for the image super-resolution SDK under the dependencies block.
Code:
implementation'com.huawei.hms:ml-computer-vision-imagesuperresolution:2.0.4.300'
implementation'com.huawei.hms:ml-computer-vision-imagesuperresolution-model:2.0.4.300'
Configuring the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main folder. Apply for the storage read permission as needed by adding the following statement before <application>:
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Add the following statements in <application>. Then the app, after being installed, will automatically update the machine learning model to the device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="imagesuperresolution" />
Development Procedure
Configuring the Application for the Storage Read Permission
Check whether the app has had the storage read permission in onCreate() of MainActivity. If no, apply for this permission through requestPermissions; if yes, call startSuperResolutionActivity() to start super-resolution processing on the image.
Code:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, REQUEST_CODE);
} else {
startSuperResolutionActivity();
}
Check the permission application results:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_CODE) {
        if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            startSuperResolutionActivity();
        } else {
            Toast.makeText(this, "Permission application failed, you denied the permission", Toast.LENGTH_SHORT).show();
        }
    }
}
After the permission application is complete, create a button and configure it so that, when tapped, the app reads images from the storage.
Code:
private void selectLocalImage() {
    Intent intent = new Intent(Intent.ACTION_PICK, null);
    intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
    startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
Configuring the Image Super-Resolution Analyzer
Before the app can perform super-resolution processing on the image, create and configure an analyzer. The example below configures two parameters for the 1x super-resolution capability and 3x super-resolution capability respectively. Which one of them is used depends on the value of selectItem.
Code:
private MLImageSuperResolutionAnalyzer createAnalyzer() {
    if (selectItem == INDEX_1X) {
        return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer();
    } else {
        MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting.Factory()
                .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_3X)
                .create();
        return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(setting);
    }
}
Constructing and Processing the Image
Before the app can perform super-resolution processing on the image, convert the image into a bitmap whose color format is ARGB8888. Create an MLFrame object using the bitmap. After the image is added, obtain its information and override onActivityResult.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
        if (data != null) {
            imageUri = data.getData();
        }
        reloadAndDetectImage(true, false);
    } else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
        finish();
    }
}
Create an MLFrame object using the bitmap.
Code:
srcBitmap = BitmapUtils.loadFromPathWithoutZoom(this, imageUri, IMAGE_MAX_SIZE, IMAGE_MAX_SIZE);
MLFrame frame = MLFrame.fromBitmap(srcBitmap);
Call the asynchronous method asyncAnalyseFrame to perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Recognition success.
        desBitmap = result.getBitmap();
        setImage(desImageView, desBitmap);
        setImageSizeInfo(desBitmap.getWidth(), desBitmap.getHeight());
    }
}).addOnFailureListener(new OnFailureListener() {
    public void onFailure(Exception e) {
        // Recognition failure.
        Log.e(TAG, "Failed." + e.getMessage());
        Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
After the recognition is complete, stop the analyzer.
Code:
if (analyzer != null) {
    analyzer.stop();
}
Meanwhile, override onDestroy of the activity to release the bitmap resources.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (srcBitmap != null) {
        srcBitmap.recycle();
    }
    if (desBitmap != null) {
        desBitmap.recycle();
    }
    if (analyzer != null) {
        analyzer.stop();
    }
}
References
To learn more, please visit:
Official webpages for Image Super-Resolution and ML Kit
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original source

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven {
        url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
    }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);
// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information like image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
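For reference, the on-cloud static image detection mode uses a remote analyzer instead of the local one. The sketch below (in Kotlin) is based on the cloud classification API as described in the ML Kit documentation; treat the class and factory method names as assumptions to verify, and note that the API key or access token set earlier plus a network connection are required.
Code:
// Sketch only: cloud-based image classification.
val remoteSetting = MLRemoteClassificationAnalyzerSetting.Factory()
    .setMinAcceptablePossibility(0.8f)
    .create()
val remoteAnalyzer = MLAnalyzerFactory.getInstance().getRemoteImageClassificationAnalyzer(remoteSetting)

val frame = MLFrame.fromBitmap(bitmap)
remoteAnalyzer.asyncAnalyseFrame(frame)
    .addOnSuccessListener { classifications ->
        // Each MLImageClassification carries a name and a possibility (confidence).
    }
    .addOnFailureListener { e ->
        // Handle failure, e.g. check MLException error codes as in the on-device sample.
    }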
I came up with a bunch of application scenarios for image classification, for example:
Education apps: with the help of image classification, such an app enables its users to categorize images taken in a period into different albums.
Travel apps: image classification allows such apps to classify images according to where they are taken or by the objects in them.
File sharing apps: image classification allows users of such apps to upload and share images by image category.
References
>>Image classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems
