Huawei Analytics Kit: Track the growth of your application - Huawei Developers

This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home​
In this post, we will learn how to use data binding with Kotlin in Android. Data binding lets your views communicate with your models, and it keeps the code clean and organized.
Note: For the configuration part, check the previous article:
How to Use Data Binding Library with Kotlin – A Step By Step Guide
It’s a library that allows you to bind the data of your models directly to XML views in a very flexible way.
Kotlin was introduced as a second official language for Android development alongside Java. It is similar to Java in many ways but is a little easier to learn and get to grips with. Some of the biggest companies have adopted Kotlin and seen great results.
If you want to use data binding and Kotlin, here are a few things to keep in mind:
· Data binding is a support library, so it can be used with all Android platform versions all the way back to Android 2.1 (API level 7+).
· To use data binding, you need Android Plugin for Gradle 1.5.0-alpha1 or higher. You can see here how to update the Android Plugin for Gradle.
· Android Studio 3.0 and later fully supports Kotlin.
First of all, create an Android Studio project and add the Kotlin dependency, along with a few changes to your app-level build.gradle.
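The original post showed these changes as an image; as a rough reference, a minimal sketch of the app-level build.gradle for Kotlin with data binding might look like the following (plugin names follow the standard Kotlin setup; versions are assumptions).
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-kapt'

android {
    // ... compileSdkVersion, defaultConfig, etc. ...
    dataBinding {
        enabled = true
    }
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
}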
· Add the line below to your root-level build.gradle:
Code:
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
Data Binding:
· Android offers support to write declarative layouts using data binding.
· This minimizes the necessary code in your application logic to connect to the user interface elements.
· The usage of data binding requires changes in your layout files. Such layout files start with a layout root tag, followed by a data element and a view root element.
· The data element describes the data that is available for binding. The view element contains your root hierarchy, similar to layout files that are not used with data binding.
· References to the data elements or expressions within the layout are written in the attribute properties using the @{} or @={} syntax.
1. The user variable within data describes a property that may be used within this layout.
2. The normal view hierarchy.
3. Binding input data.
These three pieces fit together as in the sketch below.
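As a reference, here is a minimal sketch of what the activity_signin.xml used below might look like. The package name and exact views are assumptions; the viewmodel variable name and the bound properties match the activity and ViewModel code that follows.
Code:
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <!-- 1. The variable that makes SignInViewModel available to this layout. -->
        <variable
            name="viewmodel"
            type="com.example.app.SignInViewModel" />
    </data>
    <!-- 2. The normal view hierarchy. -->
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">
        <!-- 3. Binding input data with two-way @={} expressions. -->
        <EditText
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Email"
            android:text="@={viewmodel.email}"
            android:onTextChanged="@{viewmodel::onEmailChanged}" />
        <EditText
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="Password"
            android:inputType="textPassword"
            android:text="@={viewmodel.password}"
            android:onTextChanged="@{viewmodel::onPasswordChanged}" />
        <Button
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:text="Sign In"
            android:enabled="@{viewmodel.btnSelected}"
            android:onClick="@{() -> viewmodel.onLoginClick()}" />
    </LinearLayout>
</layout>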
Code Implementation:
1. SignInActivity.kt
Code:
class SignInActivity : AppCompatActivity() {
    private var mInstance: HiAnalyticsInstance? = null
    private lateinit var mDataBinding: ActivitySigninBinding
    var viewmodel: SignInViewModel? = null
    var customeProgressDialog: CustomeProgressDialog? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate the layout and obtain the binding object.
        mDataBinding = DataBindingUtil.setContentView(this, R.layout.activity_signin)
        viewmodel = ViewModelProviders.of(this).get(SignInViewModel::class.java)
        // Let the binding observe LiveData with this activity's lifecycle.
        mDataBinding.lifecycleOwner = this
        mDataBinding.viewmodel = viewmodel
        customeProgressDialog = CustomeProgressDialog(this)
        initObservables()
        init()
    }

    private fun initObservables() {
        viewmodel?.userLogin?.observe(this, Observer { userInfo ->
            Toast.makeText(this, "welcome, ${userInfo?.email}", Toast.LENGTH_LONG).show()
            // Report a sign-in event to Analytics Kit.
            val bundle = Bundle()
            bundle.putString("email", userInfo?.email)
            bundle.putString("password", userInfo?.password)
            mInstance?.onEvent(HAEventType.SIGNIN, bundle)
            val intent = Intent(this, ProfileScreen::class.java)
            startActivity(intent)
        })
        viewmodel?.progressDialog?.observe(this, Observer {
            if (it == true) customeProgressDialog?.show() else customeProgressDialog?.dismiss()
        })
    }

    private fun init() {
        // Enable Analytics Kit logging and initialize the analytics instance.
        HiAnalyticsTools.enableLog()
        mInstance = HiAnalytics.getInstance(this)
        mInstance?.setAnalyticsEnabled(true)
        mInstance?.regHmsSvcEvent()
    }
}
Viewmodel Class:
· The view model coordinates the view's interaction with the model.
· It may convert or manipulate data so that it can be easily consumed by the view and may implement additional properties that may not be present on the model.
· The view model may define logical states that the view can represent visually to the user.
Code:
class SignInViewModel(application: Application) : AndroidViewModel(application) {
    var btnSelected: ObservableBoolean? = null
    var email: ObservableField<String>? = null
    var password: ObservableField<String>? = null
    var userLogin: MutableLiveData<UserInfo>? = null
    var progressDialog: SingleLiveEvent<Boolean>? = null

    init {
        btnSelected = ObservableBoolean(false)
        email = ObservableField("")
        password = ObservableField("")
        userLogin = MutableLiveData()
        progressDialog = SingleLiveEvent<Boolean>()
    }

    // Bound to the email field's onTextChanged; enables the button once a password has been entered.
    fun onEmailChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(!password?.get().isNullOrEmpty())
    }

    // Bound to the password field's onTextChanged; requires at least 8 characters.
    fun onPasswordChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(s.toString().length >= 8)
    }

    // Bound to the sign-in button's onClick.
    fun onLoginClick() {
        progressDialog?.value = false
        val userInfo = UserInfo(email?.get(), password?.get())
        userLogin?.value = userInfo
    }
}
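The UserInfo model referenced in onLoginClick is not shown in the original post; a minimal sketch would be a simple data holder:
Code:
// Simple holder for the credentials passed from the ViewModel to the activity.
data class UserInfo(val email: String?, val password: String?)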
Result:
Overview of application:
Event Analysis:
· Event analysis collects data about how users interact with your content.
· An event hit includes a value for each component, and these values are displayed in the reports.
If you have any questions, you can find answers on the HUAWEI Developer Forum.

Related

Machine Learning Made Easy: Image Segmentation Using Huawei ML Kit and Kotlin

Introduction
Image Segmentation – This service can extract segments from an image. Static images from both the camera and the gallery are supported, and dynamic camera streams are supported as well.
Internally it uses Mean Intersection-over-Union, a common evaluation metric for semantic image segmentation.
Once segmentation is done, it can create a coordinate array for each element, such as a human, plant, or animal. Once you have this array, you can replace any value with another one to create some wonderful results.
Here we will learn how to extract a person from an image and change the background to create a wonderful new result using Huawei ML Kit.
I will divide this process into 4 steps:
1. How to choose an image from the gallery or camera.
2. How to send the result to Huawei ML Kit.
3. How to complete the processing of the image with the desired result.
4. How to show the newly extracted image on the screen.
Have you noticed how easy it is to select an image and change its background as we choose?
One more thing to notice is how sharp the selection is, which results in a cleanly extracted image.
Once we get the segmented image, we can create different results as well.
So let’s get to the point.
Step 1:
To choose an image from the gallery or camera, use the code below.
By Camera:
Code:
private fun uploadByCamera() {
    val takePicture = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    // Use the same request code that onActivityResult checks for the camera flow.
    startActivityForResult(takePicture, loadImageCameraCode)
}
By Gallery:
Code:
private fun uploadByGallery() {
    val photoPickerIntent = Intent(Intent.ACTION_PICK)
    photoPickerIntent.type = "image/*"
    photoPickerIntent.putExtra(Intent.EXTRA_LOCAL_ONLY, false)
    startActivityForResult(
        Intent.createChooser(photoPickerIntent, "Choosing picture from gallery"),
        loadImageGalleryCode
    )
}
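Both request codes used in this step (loadImageCameraCode and loadImageGalleryCode) are assumed to be constants defined on the activity, for example:
Code:
companion object {
    // Request codes to tell the two flows apart in onActivityResult.
    // The exact values are arbitrary; 1222 matches the original camera snippet.
    const val loadImageCameraCode = 1222
    const val loadImageGalleryCode = 1223
}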
Step 2:
Once the intent is triggered, you are directed to the system activity where you choose an image.
Control then comes back to our ImageSegmentActivity in the onActivityResult method.
We have to get the bitmap and save it for future use. Below is the code for reference.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == loadImageGalleryCode && resultCode == Activity.RESULT_OK && data != null) {
        // Resolve the picked gallery Uri to a file path and decode it into a bitmap.
        val pickedImage: Uri? = data.data
        val filePath = arrayOf(MediaStore.Images.Media.DATA)
        val cursor: Cursor? = contentResolver.query(pickedImage!!, filePath, null, null, null)
        cursor!!.moveToFirst()
        val imagePath: String = cursor.getString(cursor.getColumnIndex(filePath[0]))
        val options = BitmapFactory.Options()
        options.inPreferredConfig = Bitmap.Config.ARGB_8888
        val bitmap: Bitmap = BitmapFactory.decodeFile(imagePath, options)
        imageSegmentationViewModel.bitmap.value = bitmap
        imageSegmentationViewModel.imageSegmentation()
        cursor.close()
    } else if (requestCode == loadImageCameraCode && resultCode == Activity.RESULT_OK && data != null) {
        // The camera returns a thumbnail bitmap in the "data" extra.
        val bitmap: Bitmap? = data.extras!!["data"] as Bitmap?
        imageSegmentationViewModel.bitmap.value = bitmap
        imageSegmentationViewModel.imageSegmentation()
    }
}
Step 3:
After getting the bitmap, I set a MutableLiveData value that lives in my ImageSegmentationViewModel and call the imageSegmentation() method.
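For context, here is a minimal sketch of the ViewModel hosting this logic; the class name and property names are assumptions inferred from the calls above, and the imageSegmentation() implementation itself follows right after.
Code:
class ImageSegmentationViewModel(application: Application) : AndroidViewModel(application) {
    // The layout observes this bitmap through data binding; setting it triggers a UI update.
    val bitmap = MutableLiveData<Bitmap>()
    private lateinit var analyzer: MLImageSegmentationAnalyzer

    fun imageSegmentation() {
        // Shown below.
    }
}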
Code:
fun imageSegmentation() {
    // Tune the analyzer: fast (not exact) human-body segmentation, returning all scenes.
    val setting = MLImageSegmentationSetting.Factory()
        .setExact(false)
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setScene(MLImageSegmentationScene.ALL)
        .create()
    analyzer = MLAnalyzerFactory.getInstance()
        .getImageSegmentationAnalyzer(setting)
    val frame = MLFrame.fromBitmap(bitmap.value)
    val task = analyzer.asyncAnalyseFrame(frame)
    task.addOnSuccessListener {
        // Foreground is the human body image with a transparent background.
        bitmap.value = it.foreground
    }.addOnFailureListener {
        Log.d("Image Segmentation: ", "Error occurred: " + it.message)
    }
}
Let us discuss the code above in detail.
Before we create the MLImageSegmentationAnalyzer, we need to tune the settings to get the desired result. The settings below are chosen to quickly extract the human body segment from the image. Once this MLImageSegmentationSetting object is created, we pass it to the MLImageSegmentationAnalyzer.
setExact(false): pass true for fine segmentation, or false for fast segmentation.
setAnalyzerType(MLImageSegmentationSetting.BODY_SEG): sets the mode to identify and extract the body from the image.
setScene(MLImageSegmentationScene.ALL): in total we can request four kinds of results from an image, listed below with their keys.
MLImageSegmentationScene.ALL: All segmentation results are returned (pixel-level label information, human body image with a transparent background, and gray-scale image with a white human body and black background).
MLImageSegmentationScene.MASK_ONLY: Only pixel-level label information is returned.
MLImageSegmentationScene.FOREGROUND_ONLY: Only a human body image with a transparent background is returned.
MLImageSegmentationScene.GRAYSCALE_ONLY: Only a gray-scale image with a white human body and black background is returned.
For our example we are using MLImageSegmentationScene.ALL, as we want all the scenes from the image.
Finally, we provide the MLImageSegmentationSetting object created above to the MLImageSegmentationAnalyzer:
val analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting)
We can also create the MLImageSegmentationAnalyzer object with the call below, but then customization is very limited:
val analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer()
The next step is to create an MLFrame with the code below, providing your previously fetched image in bitmap format:
val frame = MLFrame.fromBitmap(bitmap)
On the analyzer object we call asyncAnalyseFrame(frame), passing the MLFrame we just created.
This yields a Task<MLImageSegmentation> object, on which you can register two callbacks:
onSuccess
onFailure
You can save the new resource in onSuccess() and stop the analyzer to release detection resources with the analyzer.stop() method.
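As a minimal sketch, saving the result and then releasing the analyzer in the success callback could look like this (assuming the analyzer property from the ViewModel sketch above; stop() is declared to throw an IOException):
Code:
task.addOnSuccessListener {
    // Keep the extracted foreground, then release detection resources.
    bitmap.value = it.foreground
    try {
        analyzer.stop()
    } catch (e: IOException) {
        Log.d("Image Segmentation: ", "Failed to stop analyzer: ${e.message}")
    }
}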
Step 4:
Here is how we can set the bitmap on an ImageView.
Inside our main layout, the view model is declared with the name dashboardViewModel.
We also added a custom attribute named customImageSrc:
Code:
<ImageView
    android:id="@+id/imageView"
    customImageSrc="@{dashboardViewModel.bitmap}"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:contentDescription="@string/user_image"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintHorizontal_bias="0.5"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />
We created an adapter class named CustomImageAdapter, where we bind our ImageView through the @BindingAdapter("customImageSrc") attribute.
Once our MutableLiveData gets updated in the ViewModel by the success callback from Task<MLImageSegmentation>, this ImageView is automatically updated through the BindingAdapter.
Code:
class CustomImageAdapter {
    companion object {
        @JvmStatic
        @BindingAdapter("customImageSrc")
        fun setImageViewResource(imageView: ImageView, bitmap: Bitmap?) {
            bitmap?.apply { imageView.setImageBitmap(this) }
        }
    }
}
I hope you liked this article. I would love to hear your ideas on how you can use this kit in your applications.
In case you don't have a real device, you can check out this article.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/home

Read Daily Step Data with Health Kit

Hello everyone,
Especially recently, when we realized how important health is for us, we learned how important it is to stay active in order to lead a healthy life. For the days when we work from home and forget to move, we can track our daily steps, see how many steps we have taken in 3 months, and see how many calories we have burned. We developed a sample app that uses these features, with the help of Health Kit.
Before stepping into the development process, let’s examine what Huawei Health Kit is and what its main functions are.
Huawei Health Kit:
HUAWEI Health Kit allows ecosystem apps to access fitness and health data of users based on their HUAWEI ID and authorization. For consumers, Health Kit provides a mechanism for fitness and health data storage and sharing based on flexible authorization. For developers and partners, Health Kit provides a data platform and fitness and health open capabilities, so that they can build related apps and services based on a multitude of data types. Health Kit connects the hardware devices and ecosystem apps to provide consumers with health care, workout guidance, and ultimate service experience.
Main Functions:
· Data storage
Provides a data platform for developers to store fitness and health data.
· Data openness
Provides a wide range of fitness and health APIs and supports sharing of the various fitness and health data, including step count, weight, and heart rate.
· Data access authorization management
Provides settings for users and developers so users can manage developers’ access to their health and fitness data, guaranteeing users’ data privacy and legal rights.
· Device access
Provides hardware devices (including fitness and health devices) with APIs for measuring and uploading data through the standard Bluetooth protocol.
You can browse this page to examine it in more detail.
https://developer.huawei.com/consumer/en/hms/huaweihealth/
Development Process
We learned about Huawei Health Kit and its main functions. Now we can move on to the development part. First, we should create a project on App Gallery Connect. You can follow the steps from the link below.
https://medium.com/huawei-developers/android-integrating-your-apps-with-huawei-hms-core-1f1e2a090e98
After all these steps, we need to apply for Health Kit.
Then, we need to select the data types and their permissions.
Note: To access health data, we need to authenticate the user with their Huawei ID.
Let’s continue by coding the Data Controller to get our users’ step data.
Code:
private fun initDataController() {
    // Request read access to continuous step delta data.
    val hiHealthOptions = HiHealthOptions.builder()
        .addDataType(DataType.DT_CONTINUOUS_STEPS_DELTA, HiHealthOptions.ACCESS_READ)
        .build()
    val signInHuaweiId = HuaweiIdAuthManager.getExtendedAuthResult(hiHealthOptions)
    dataController = HuaweiHiHealth.getDataController(context!!, signInHuaweiId)
}
After creating the Data Controller, we write the readDaily function to get our users’ daily step data.
Code:
fun readDaily(startTime: Int, endTime: Int) {
    // startTime and endTime are days in yyyyMMdd format, e.g. 20200801.
    val dailySummationTask = dataController!!.readDailySummation(
        DataType.DT_CONTINUOUS_STEPS_DELTA,
        startTime,
        endTime
    )
    dailySummationTask.addOnSuccessListener { sampleSet ->
        sampleSet?.let { showSampleSet(it) }
    }
    dailySummationTask.addOnFailureListener { e ->
        // The error message may be either a numeric status code or an ApiException.
        val errorCode: String? = e.message
        val pattern: Pattern = Pattern.compile("[0-9]*")
        val isNum: Matcher = pattern.matcher(errorCode ?: "")
        when {
            e is ApiException -> {
                val eCode = e.statusCode
                val errorMsg = HiHealthStatusCodes.getStatusCodeMessage(eCode)
            }
            isNum.matches() -> {
                val errorMsg =
                    errorCode?.toInt()?.let { HiHealthStatusCodes.getStatusCodeMessage(it) }
            }
            else -> {
            }
        }
    }
}
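The showSampleSet helper used above is not shown in the original post; a minimal sketch that logs the step count of each sample point might look like this (the field accessors follow the Health Kit sample-point API, so treat the exact names as an assumption):
Code:
private fun showSampleSet(sampleSet: SampleSet) {
    for (samplePoint in sampleSet.samplePoints) {
        Log.i("TAG", "Steps: " + samplePoint.getFieldValue(Field.FIELD_STEPS_DELTA))
    }
}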
Let’s build the request body for reading activity records. We call the read method of the Data Controller to obtain activity records from the Health platform based on the conditions in the request body. We also need to set the time format; daily readings use this format:
"yyyy-MM-dd hh:mm:ss"
Code:
fun getActivityRecord() {
    val dateFormat = SimpleDateFormat("yyyy-MM-dd hh:mm:ss", Locale.getDefault())
    // Parse concrete timestamps; the dates below are placeholders for your own range.
    val startDate: Date = dateFormat.parse("2020-08-01 00:00:00")!!
    val endDate: Date = dateFormat.parse("2020-08-08 00:00:00")!!
    val readOptions = ReadOptions.Builder()
        .read(DataType.DT_CONTINUOUS_STEPS_DELTA)
        .setTimeRange(
            startDate.time,
            endDate.time,
            TimeUnit.MILLISECONDS
        )
        .build()
    val readReplyTask = dataController!!.read(readOptions)
    readReplyTask.addOnSuccessListener { readReply ->
        Log.i("TAG", "Successfully read SampleSets from HMS Core")
        for (sampleSet in readReply.getSampleSets()) {
            showSampleSet(sampleSet)
        }
    }.addOnFailureListener { e ->
        val errorCode: String? = e.message
        val pattern: Pattern = Pattern.compile("[0-9]*")
        val isNum: Matcher = pattern.matcher(errorCode ?: "")
        when {
            e is ApiException -> {
                val eCode = e.statusCode
                val errorMsg = HiHealthStatusCodes.getStatusCodeMessage(eCode)
            }
            isNum.matches() -> {
                val errorMsg =
                    errorCode?.toInt()?.let { HiHealthStatusCodes.getStatusCodeMessage(it) }
            }
            else -> {
            }
        }
    }
}
Bonus: I used pagination when displaying step data, for better user experience and resource management.
Code:
var count = 0
val today = Calendar.getInstance()
val nextDay = Calendar.getInstance()
nextDay.add(Calendar.DATE, -20)
val dateFormat = SimpleDateFormat("yyyyMMdd", Locale.getDefault())
println("Today : " + dateFormat.format(Date(today.timeInMillis)))
// Load today's steps plus the first 20-day page.
stepViewModel.readToday()
stepViewModel.readDaily(
    dateFormat.format(Date(nextDay.timeInMillis)).toInt(),
    dateFormat.format(Date(today.timeInMillis)).toInt() - 1
)
binding.stepList.addOnScrollListener(object : RecyclerView.OnScrollListener() {
    override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
        // When the list reaches the bottom, load the next 20-day page (up to 5 pages).
        if (!binding.stepList.canScrollVertically(1) && count < 5) {
            count++
            val otherDay = Calendar.getInstance()
            val nextDay = Calendar.getInstance()
            otherDay.add(Calendar.DATE, -(20 * count))
            nextDay.add(Calendar.DATE, -((20 * (count + 1) - 1)))
            stepViewModel.readDaily(
                dateFormat.format(Date(nextDay.timeInMillis)).toInt(),
                dateFormat.format(Date(otherDay.timeInMillis)).toInt()
            )
        }
    }
})
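The readToday() call above is assumed to wrap the Data Controller's today-summation API; a minimal sketch:
Code:
fun readToday() {
    val todaySummationTask = dataController!!.readTodaySummation(DataType.DT_CONTINUOUS_STEPS_DELTA)
    todaySummationTask.addOnSuccessListener { sampleSet ->
        sampleSet?.let { showSampleSet(it) }
    }
    todaySummationTask.addOnFailureListener { e ->
        Log.i("TAG", "readToday failed: " + e.message)
    }
}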
Finally, we completed our development process. Let’s see how it looks in our project.
Conclusion
We have created an application where we can track our daily step count by taking advantage of the features provided by Health Kit. I think it is necessary for all of us to face this reality. I hope this article has been useful for you. Your questions and opinions are very important to me.
References:
https://developer.huawei.com/consumer/en/hms/huaweihealth/

Intermediate: How to fetch Remote Configuration from Huawei AGC in Unity

Introduction
Huawei provides the Remote Configuration service to manage parameters online. With this service you can control or change the behavior and appearance of your app online, without requiring user interaction or an app update. By implementing the SDK, you can fetch the online parameter values delivered on the AG console to change the app's behavior and appearance.
Functional features
1. Parameter management: This function enables users to add new parameters, delete or update existing parameters, and set conditional values.
2. Condition management: This function enables users to add, delete, and modify conditions, as well as copy and modify existing conditions. Currently, you can set the following conditions: app version, country/region, audience, user attribute, user percentage, time, and language. You can expect more conditions in the future.
3. Version management: This function lets users manage parameters and conditions across up to 300 historical versions, with rollback for up to 90 days.
4. Permission management: This function allows the account holder, app administrators, R&D personnel, and operations personnel to access Remote Configuration by default.
Service use cases
Change app language by Country/Region
Show Different Content to Different Users
Change the App Theme by Time
Development Overview
You need to install Unity, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create a Unity project.
3. Add the Huawei HMS AGC services to the project.
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android
5. Add the following plugin and dependencies in LauncherTemplate:
Code:
apply plugin: 'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
6. Add the following dependencies in MainTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
7. Add the Maven repository below in the buildscript repositories and all-projects repositories sections, along with the class path, in BaseProjectTemplate:
Code:
maven { url 'https://developer.huawei.com/repo/' }
8. Configure the project in AGC.
9. Create an empty GameObject, rename it to RemoteConfigManager, create UI canvas texts and a button, and assign onClick events to the respective text and button as shown below.
RemoteConfigManager.cs
C#:
using UnityEngine;
using HuaweiService.RemoteConfig;
using HuaweiService;
using Exception = HuaweiService.Exception;
using System;

public class RemoteConfigManager : MonoBehaviour
{
    public static bool developerMode;

    public delegate void SuccessCallBack<T>(T o);
    public delegate void SuccessCallBack(AndroidJavaObject o);
    public delegate void FailureCallBack(Exception e);

    public void SetDeveloperMode()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        developerMode = !developerMode;
        config.setDeveloperMode(developerMode);
        Debug.Log($"set developer mode to {developerMode}");
    }

    public void showAllValues()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        if (config != null)
        {
            Map map = config.getMergedAll();
            var keySet = map.keySet();
            var keyArray = keySet.toArray();
            foreach (var key in keyArray)
            {
                Debug.Log($"{key}: {map.getOrDefault(key, "default")}");
            }
        }
        else
        {
            Debug.Log(" No data ");
        }
        config.clearAll();
    }

    void Start()
    {
        SetDeveloperMode();
        SetXmlValue();
    }

    public void SetXmlValue()
    {
        var config = AGConnectConfig.getInstance();
        // Get the resource id of the local default configuration file.
        int configId = AndroidUtil.GetId(new Context(), "xml", "remote_config");
        config.applyDefault(configId);
        // Read back the merged values.
        Map map = config.getMergedAll();
        var keySet = map.keySet();
        var keyArray = keySet.toArray();
        config.applyDefault(map);
        foreach (var key in keyArray)
        {
            var value = config.getSource(key);
            // Use the key and value ...
            Debug.Log($"{key}: {config.getSource(key)}");
        }
    }

    public void GetCloudSettings()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        config.fetch().addOnSuccessListener(new HmsSuccessListener<ConfigValues>((ConfigValues configValues) =>
        {
            config.apply(configValues);
            Debug.Log("===== ** Success ** ====");
            showAllValues();
            config.clearAll();
        }))
        .addOnFailureListener(new HmsFailureListener((Exception e) =>
        {
            Debug.Log("activity failure " + e.toString());
        }));
    }

    public class HmsFailureListener : OnFailureListener
    {
        public FailureCallBack CallBack;

        public HmsFailureListener(FailureCallBack c)
        {
            CallBack = c;
        }

        public override void onFailure(Exception arg0)
        {
            if (CallBack != null)
            {
                CallBack.Invoke(arg0);
            }
        }
    }

    public class HmsSuccessListener<T> : OnSuccessListener
    {
        public SuccessCallBack<T> CallBack;

        public HmsSuccessListener(SuccessCallBack<T> c)
        {
            CallBack = c;
        }

        public void onSuccess(T arg0)
        {
            if (CallBack != null)
            {
                CallBack.Invoke(arg0);
            }
        }

        public override void onSuccess(AndroidJavaObject arg0)
        {
            if (CallBack != null)
            {
                Type type = typeof(T);
                IHmsBase ret = (IHmsBase)Activator.CreateInstance(type);
                ret.obj = arg0;
                CallBack.Invoke((T)ret);
            }
        }
    }
}
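SetXmlValue above loads local default values from an XML resource named remote_config. A minimal sketch of that file, following the AGC Remote Configuration default-value format (the key and value here are placeholders):
Code:
<?xml version="1.0" encoding="utf-8"?>
<remoteconfig>
    <!-- Placeholder key/value; replace with your own defaults. -->
    <value key="test_key">test_default_value</value>
</remoteconfig>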
10. To build the APK, choose File > Build Settings > Build. To build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add the agconnect-services.json file without fail.
Make sure the dependencies are added in the build files.
Make sure that you release the configuration once parameters are added or updated.
Conclusion
We have learnt how to integrate the Huawei Remote Configuration service into Unity game development. The Remote Configuration service lets you fetch configuration data from a local XML file as well as online from the AG console; changes take effect as soon as you release them. In short, the service lets you change your app's behavior and appearance without an app update or user interaction.
Thank you so much for reading this article; I hope it helps you.
Reference
Unity Manual
GitHub Sample Android
Huawei Remote Configuration service
Read in huawei developer forum

Integrating Huawei Map Kit in a HarmonyOS Wearable Device Application

Introduction
In this article, we will learn about Huawei Map Kit in HarmonyOS. Map Kit is an SDK for map development. It covers map data for more than 200 countries and regions, and supports over 70 languages. With this SDK, you can easily integrate map-based functions into your HarmonyOS application.
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A HarmonyOS smart watch (with a USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
Steps:
Step 1: Create a HarmonyOS application.
Step 2: Create a project in AppGallery.
Step 3: Configure the app in AppGallery.
Step 4: Follow the SDK integration steps.
Let's start coding
MapAbilitySlice.java
Java:
public class MapAbilitySlice extends AbilitySlice {
    private static final HiLogLabel LABEL_LOG = new HiLogLabel(3, 0xD001100, "TAG");
    private MapView mMapView;

    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        CommonContext.setContext(this);
        // Declare and initialize the HuaweiMapOptions object.
        HuaweiMapOptions huaweiMapOptions = new HuaweiMapOptions();
        // Initialize camera properties.
        CameraPosition cameraPosition =
                new CameraPosition(new LatLng(12.972442, 77.580643), 10, 0, 0);
        huaweiMapOptions
                // Set camera properties.
                .camera(cameraPosition)
                // Enables or disables the zoom function. By default, the zoom function is enabled.
                .zoomControlsEnabled(false)
                // Sets whether the compass is available. The compass is available by default.
                .compassEnabled(true)
                // Specifies whether the zoom gesture is available. By default, the zoom gesture is available.
                .zoomGesturesEnabled(true)
                // Specifies whether to enable the scrolling gesture. By default, the scrolling gesture is enabled.
                .scrollGesturesEnabled(true)
                // Specifies whether the rotation gesture is available. By default, the rotation gesture is available.
                .rotateGesturesEnabled(false)
                // Specifies whether the tilt gesture is available. By default, the tilt gesture is available.
                .tiltGesturesEnabled(true)
                // Sets whether the map is in lite mode. The default value is No.
                .liteMode(false)
                // Set the preferred minimum zoom level.
                .minZoomPreference(3)
                // Set the preferred maximum zoom level.
                .maxZoomPreference(13);
        // Initialize the MapView object.
        mMapView = new MapView(this, huaweiMapOptions);
        // Create the MapView object.
        mMapView.onCreate();
        // Obtain the HuaweiMap object.
        mMapView.getMapAsync(new OnMapReadyCallback() {
            @Override
            public void onMapReady(HuaweiMap huaweiMap) {
                HuaweiMap mHuaweiMap = huaweiMap;
                mHuaweiMap.setOnMapClickListener(new OnMapClickListener() {
                    @Override
                    public void onMapClick(LatLng latLng) {
                        new ToastDialog(CommonContext.getContext()).setText("onMapClick ").show();
                    }
                });
                // Initialize the Circle object.
                Circle mCircle = new Circle(this);
                if (null == mHuaweiMap) {
                    return;
                }
                if (null != mCircle) {
                    mCircle.remove();
                    mCircle = null;
                }
                mCircle = mHuaweiMap.addCircle(new CircleOptions()
                        .center(new LatLng(12.972442, 77.580643))
                        .radius(500)
                        .fillColor(Color.GREEN.getValue()));
                new ToastDialog(CommonContext.getContext()).setText("color green: " + Color.GREEN.getValue()).show();
                int strokeColor = Color.RED.getValue();
                float strokeWidth = 15.0f;
                // Set the edge color of the circle.
                mCircle.setStrokeColor(strokeColor);
                // Set the edge width of the circle.
                mCircle.setStrokeWidth(strokeWidth);
            }
        });
        // Create a layout.
        ComponentContainer.LayoutConfig config = new ComponentContainer.LayoutConfig(
                ComponentContainer.LayoutConfig.MATCH_PARENT, ComponentContainer.LayoutConfig.MATCH_PARENT);
        PositionLayout myLayout = new PositionLayout(this);
        myLayout.setLayoutConfig(config);
        ShapeElement element = new ShapeElement();
        element.setShape(ShapeElement.RECTANGLE);
        element.setRgbColor(new RgbColor(255, 255, 255));
        myLayout.addComponent(mMapView);
        super.setUIContent(myLayout);
    }
}
Result
Tips and Tricks
Add required dependencies without fail.
Add required images in resources > base > media.
Add custom strings in resources > base > element > string.json.
Define supporting devices in config.json file.
Do not log the sensitive data.
Enable required service in AppGallery Connect.
Use respective Log methods to print logs.
Conclusion
In this article, we have learnt how to integrate Huawei Map Kit in a HarmonyOS wearable device application. The sample application shows how to implement Map Kit on a HarmonyOS wearable device. I hope this article helps you understand and integrate Map Kit; you can use this feature in your HarmonyOS application to display a map on wearable devices.
Thank you so much for reading this article, and I hope it helps you understand Huawei Map Kit in HarmonyOS. Please share your comments in the comment section.
References
Map Kit
Checkout in forum

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven {
        url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
    }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);
// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information like image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
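As a quick illustration (written in Kotlin, like the earlier articles in this digest), the success callback could iterate the returned list. I'd expect name and possibility accessors on MLImageClassification, but verify them against the current SDK:
Code:
task.addOnSuccessListener { classifications ->
    for (classification in classifications) {
        // Each result carries a category name and a confidence value.
        Log.d("ImageClassifier", "${classification.name}: ${classification.possibility}")
    }
}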
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
I came up with a bunch of application scenarios for image classification, for example:
· Education apps: with the help of image classification, such an app enables its users to categorize images taken in a period into different albums.
· Travel apps: image classification allows such apps to classify images according to where they are taken or by the objects in them.
· File sharing apps: image classification allows users of such apps to upload and share images by image category.
References
>>Image classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems
