How to Integrate the Volumetric Cloud Plug-in of HMS Core CG Kit - Huawei Developers

1. Introduction
Since childhood, I've always wondered what it would be like to walk among the clouds. Now, as a graphics programmer, I've become fascinated by an actual sea of clouds: the volumetric clouds in game clients, which are semi-transparent, irregular clouds produced by a physically based cloud rendering system. However, given the limited computing power of mobile devices, I'm still exploring how to best balance performance with visual quality.
As an avid fan of game development, I've kept an eye on the cutting-edge technologies in this field, and have even developed plug-ins based on some of them. When I recently came across the Volumetric Cloud plug-in introduced by Computer Graphics (CG) Kit of HUAWEI HMS Core, I spent two days integrating it into Unity by following the official documents. The following figure shows a simple integration (the upper clouds are the skybox effect, and the lower ones are the rendered volumetric clouds). You'll notice that the volumetric clouds are more true-to-life, with clear silver linings. Better yet, the plug-in supports dynamic lighting and lets the player travel freely amidst the clouds. Perhaps most surprisingly, when I tested its performance on a low-end smartphone (HONOR 8 Lite) at a resolution of 720p, the frame rate reached an astonishing 50 fps! Another highlight of this plug-in is cloud shape customization, which allowed me to shape the clouds exactly as I wanted.
Now, let's go through the process of integrating the Volumetric Cloud plug-in into Unity.
2. Prerequisites
1. Visual Studio 2017 or later
2. Android Studio 4.0 or later
3. Unity 2018.4.12 or later
4. Huawei phones running EMUI 8.0 or later or non-Huawei phones running Android 8.0 or later
5. Volumetric Cloud plug-in SDK
Click the links below to download the SDK, and reference the relevant documents on HUAWEI Developers:
SDK download
Development guide
API reference
The directory structure of the downloaded SDK is as follows.
According to the official documents, RenderingVolumeCloud.dll is a PC plug-in based on OpenGL, and libRenderingVolumeCloud.so is an Android plug-in based on OpenGL ES 3.0. The two .bin files in the assets directory are the resource files required for volumetric cloud rendering, and the header file in the include directory defines the APIs for the Volumetric Cloud plug-in.
3. Development Walkthrough
The Volumetric Cloud plug-in exposes C++ APIs. Therefore, to integrate it into Unity, it first needs to be encapsulated as a native Unity plug-in, which Unity can then call to render volumetric clouds. Next, I'm going to walk through my integration in detail.
3.1 Native Plug-in Creation
A native Unity plug-in is a library of native code written in C, C++, or Objective-C, which enables game code (in JavaScript or C#) to call functions from the library. You can visit the following links for more details on how to create a native Unity plug-in:
Unity - Manual: Low-level native plug-in interface
docs.unity3d.com
https://github.com/Unity-Technologies/NativeRenderingPlugin
The Unity code samples show that you need to build dynamic link libraries (.so and .dll) for Unity to call. A simple way to do this is to modify the open-source sample code linked above, in Android Studio and Visual Studio respectively. This lets you integrate the volumetric cloud rendering function and generate the required libraries as follows.
The functions in the RenderingPlugin.def file are APIs of the native plug-in for Unity, and are implemented in the RenderingPlugin.cpp file. In the RenderingPlugin.cpp file, you'll need to retain the required functions, including UnityPluginLoad, UnityPluginUnload, OnGraphicsDeviceEvent, OnRenderEvent, and GetRenderEventFunc, as well as corresponding static global variables, and then add three APIs (ReleaseSource, BakeMultiMesh, and SetRenderParasFromUnity), as shown below.
To integrate the Volumetric Cloud plug-in of CG Kit, modify these APIs in the given source code as follows:
(1) Modify OnGraphicsDeviceEvent. If eventType is set to kUnityGfxDeviceEventInitialize, call the CreateRenderAPI function of the Volumetric Cloud plug-in to create a variable of the RenderAPI class, and call the RenderAPI.CreateResources() function. If eventType is set to kUnityGfxDeviceEventShutdown, delete the variable of the RenderAPI class.
(2) Modify OnRenderEvent. Pass the static global variable set in the SetTextureFromUnity function to this function, and directly call RenderAPI.RenderCloudFrameTexture() in this function.
(3) Define SetTextureFromUnity. Pass the four inputs required by RenderAPI.RenderCloudFrameTexture() to the defined static global variable to facilitate future calls to this function for volumetric cloud rendering.
(4) Define SetRenderParasFromUnity. Call RenderAPI.SetRenderCloudParas() in this function.
(5) Define ReleaseSource. Call RenderAPI.ReleaseData() in this function.
The plug-in for PC will need to integrate the baking function for volumetric cloud shape customization. Therefore, an extra API is necessary for the .dll file, which means that the BakeMultiMesh function needs to be defined. Call CreateBakeShapeAPI to create a variable of the BakeShapeAPI class, and then call BakeShapeAPI.BakeMultiMesh() to perform baking.
3.2 Integration into Unity
Once the native plug-in is successfully created, you can obtain the libUnityPluginAdaptive.so and UnityPluginAdaptive.dll files that adapt Unity to the Volumetric Cloud plug-in.
Next, you'll need to create a Unity 3D project to implement volumetric cloud rendering. Here, I've used the ARM64 version as an example.
Place libUnityPluginAdaptive.so, libRenderingVolumeCloud.so, UnityPluginAdaptive.dll, and RenderingVolumeCloud.dll of the ARM64 version in the Assets/Plugins/x86_64 directory (if this directory does not exist, create it). Configure the .so and .dll files as follows.
In addition, you'll need to configure the OpenGL-based PC plug-in of the Volumetric Cloud plug-in as follows.
Also, configure the OpenGL ES 3.0–based Android plug-in of the Volumetric Cloud plug-in by performing the following.
The Volumetric Cloud plug-in contains two .bin files. Place them in any directory of the project, and set the corresponding input parameter to the path. noise.bin is the detail noise texture of the volumetric clouds, and shape.bin is the 3D shape texture. Cloud shape customization can also be performed by calling the BakeMultiMesh API for the plug-in, which I'll detail later. The following uses the provided 3D shape texture as an example.
3.3 Real-Time Volumetric Cloud Rendering
Before calling the Volumetric Cloud plug-in, you'll need to add the dependency (.jar) that counts calls to CG Kit, by modifying the app-level Gradle file. You can choose any package version.
Copy the downloaded .jar package to the Assets/Plugins/Android/bin directory of your Unity project (if this directory does not exist, create it).
Next, you can write a C# script to call the relevant APIs. The adaptation-layer APIs that are explicitly called are as follows (a sketch of their C# declarations follows this list).
(1) The SetTextureFromUnity function sets the cloudTexture pointer, depthTexture pointer, and cloudTexture size. cloudTexture is the texture for volumetric cloud drawing, and depthTexture is the texture with the depth of the current frame. Execute this function once.
(2) The SetRenderParasFromUnity function calls the API for setting parameters of the Volumetric Cloud plug-in of CG Kit. These parameters are updated in each frame. Therefore, this function needs to be executed for each frame.
(3) The GetRenderEventFunc function calls the plug-in APIs for drawing volumetric clouds on cloudTexture. It can be called in either of the following ways: GL.IssuePluginEvent(GetRenderEventFunc(), 1) or commandBuffer.IssuePluginEvent(GetRenderEventFunc(), 1). Execute this function for each frame.
(4) The ReleaseSource function calls the Volumetric Cloud plug-in to destroy resources. Call this function once at the end.
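For reference, here is a minimal sketch of how these adaptation-layer entry points might be declared in the C# script via P/Invoke. The function names come from the adaptation layer described above, but the exact signatures are my assumptions based on the descriptions in this article (the VolumeRenderParas struct is sketched further below); treat this as illustrative, not as the plug-in's official API.
Code:
using System;
using System.Runtime.InteropServices;

public static class VolumeCloudNative
{
    // Unity resolves this to UnityPluginAdaptive.dll on PC and
    // libUnityPluginAdaptive.so on Android.
    private const string PluginName = "UnityPluginAdaptive";

    // Registers the cloud/depth texture pointers and the cloud texture size; called once.
    [DllImport(PluginName)]
    public static extern void SetTextureFromUnity(IntPtr cloudTexture, IntPtr depthTexture,
                                                  int width, int height);

    // Passes the per-frame rendering parameters (struct layout mirrors VolumeRenderParas.h).
    [DllImport(PluginName)]
    public static extern void SetRenderParasFromUnity(ref VolumeRenderParas paras);

    // Returns the native render-event callback used with GL.IssuePluginEvent.
    [DllImport(PluginName)]
    public static extern IntPtr GetRenderEventFunc();

    // Destroys the plug-in's resources; called once at the end.
    [DllImport(PluginName)]
    public static extern void ReleaseSource();
}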
The call process is as follows.
The gray APIs in the figure should be implemented according to your own specific requirements. Here, I'll show a simple implementation. Before calling the rendering API, you'll need to create two RenderTextures: one stores the rendering result of the Volumetric Cloud plug-in, and the other stores the depth. Call the SetTextureFromUnity API, passing the native texture pointers and the sizes of the two RenderTextures, so that the plug-in can write the volumetric cloud rendering result.
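A minimal sketch of this setup step, using the hypothetical VolumeCloudNative wrapper declared above (the texture resolutions and formats are my own choices for illustration):
Code:
using UnityEngine;

public class VolumeCloudSetup : MonoBehaviour
{
    private RenderTexture cloudTexture;
    private RenderTexture depthTexture;

    void Start()
    {
        // One texture receives the plug-in's cloud rendering result, the other the scene depth.
        cloudTexture = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
        depthTexture = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);
        cloudTexture.Create();
        depthTexture.Create();

        // Hand the native texture pointers and the cloud texture size to the adaptation layer.
        VolumeCloudNative.SetTextureFromUnity(
            cloudTexture.GetNativeTexturePtr(),
            depthTexture.GetNativeTexturePtr(),
            cloudTexture.width,
            cloudTexture.height);
    }
}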
In the update phase, the struct parameters of the Volumetric Cloud plug-in need to be updated, with reference to the VolumeRenderParas.h file in the include directory of the plug-in package. The same struct needs to be defined in the C# script. For details about the parameters, please refer to the relevant documents for the Volumetric Cloud plug-in. Please note that the struct must be 1-byte aligned, and that the four arrays representing matrices in the struct must be in row-major order. The following is a simple example.
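A sketch of such a struct, with illustrative field names only (copy the actual layout from VolumeRenderParas.h); what matters here is the 1-byte packing and the row-major matrix arrays:
Code:
using System.Runtime.InteropServices;

// Pack = 1 enforces the 1-byte alignment the plug-in requires.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct VolumeRenderParas
{
    // Four 4x4 matrices flattened to 16 floats each, stored in row-major order.
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16)]
    public float[] viewMatrix;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16)]
    public float[] projectionMatrix;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16)]
    public float[] invViewMatrix;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16)]
    public float[] invProjectionMatrix;

    // Paths of the 3D shape texture and detail noise texture (.bin files);
    // fixed-size char arrays in the native header, marshaled as strings here.
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)]
    public string shapeTexturePath;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)]
    public string noiseTexturePath;

    // ...remaining lighting/weather parameters omitted; see the plug-in documentation.
}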
After the struct variables are updated, call the SetRenderParasFromUnity API to pass the volumetric cloud rendering parameter references. Later rendering will be performed with these parameters.
After calling the rendering API, you can implement OnRenderImage in the post-processing phase to draw the volumetric clouds on the screen. You can also use command buffers in other phases. In the OnRenderImage call, first draw the depth onto the created depthTexture RenderTexture, then call GL.IssuePluginEvent(GetRenderEventFunc(), 1) to draw the volumetric clouds onto the created cloudTexture RenderTexture, and lastly blend cloudTexture with the input src of OnRenderImage into dst according to transparency. The volumetric clouds can then be seen on the screen.
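Putting the per-frame steps together, a simplified script might look like the sketch below. The two materials are hypothetical helper shaders (one copies _CameraDepthTexture into depthTexture, the other alpha-blends cloudTexture over the camera image), and BuildParasForThisFrame is a stub you would fill in from the plug-in documentation:
Code:
using UnityEngine;

public class VolumeCloudCompositor : MonoBehaviour
{
    public Material depthCopyMaterial; // hypothetical shader that outputs scene depth
    public Material blendMaterial;     // hypothetical shader that blends clouds by transparency
    public RenderTexture cloudTexture; // created as in the setup sketch above
    public RenderTexture depthTexture;

    void Update()
    {
        // Refresh the rendering parameters before this frame's render event fires.
        VolumeRenderParas paras = BuildParasForThisFrame();
        VolumeCloudNative.SetRenderParasFromUnity(ref paras);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // 1. Draw the current frame's depth onto depthTexture.
        Graphics.Blit(src, depthTexture, depthCopyMaterial);

        // 2. Let the plug-in draw the volumetric clouds onto cloudTexture.
        GL.IssuePluginEvent(VolumeCloudNative.GetRenderEventFunc(), 1);

        // 3. Composite cloudTexture over the camera image according to transparency.
        blendMaterial.SetTexture("_CloudTex", cloudTexture);
        Graphics.Blit(src, dst, blendMaterial);
    }

    private VolumeRenderParas BuildParasForThisFrame()
    {
        // Fill in camera matrices (row-major), .bin file paths, and the other
        // parameters per the plug-in documentation; left as a stub in this sketch.
        return new VolumeRenderParas();
    }
}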
3.4 APK Testing on the Phone
After debugging on the PC, you can directly generate the APK for testing on an Android phone. The only difference between the Android version and the PC version is the two string arrays in the struct, which indicate the paths of the shape.bin and noise.bin files; these paths differ between the two platforms. You can put the two .bin files in the Application.persistentDataPath directory.
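For example, the paths could be resolved per platform as in this sketch (it assumes you copy the .bin files to Application.persistentDataPath on first launch on Android, and read them from StreamingAssets on PC):
Code:
using System.IO;
using UnityEngine;

public static class CloudBinPaths
{
    public static string Resolve(string fileName)
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // On Android, the plug-in reads plain files, so the .bin files are
        // assumed to have been copied here beforehand.
        return Path.Combine(Application.persistentDataPath, fileName);
#else
        // On PC, StreamingAssets can be read directly.
        return Path.Combine(Application.streamingAssetsPath, fileName);
#endif
    }
}

// Usage: paras.shapeTexturePath = CloudBinPaths.Resolve("shape.bin");
//        paras.noiseTexturePath = CloudBinPaths.Resolve("noise.bin");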
3.5 3D Shape Texture Baking
The Volumetric Cloud plug-in also offers a baking API for 3D shape texture customization. Integrate this API in the adaptation layer as detailed above, with the following API function:
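A sketch of how this baking entry point and its BakeData input might be declared on the C# side; the field list below merely illustrates the kinds of members the article describes (array pointers, integers, and the minBox/maxBox bounds), and the authoritative definition is in the plug-in's header:
Code:
using System;
using System.Runtime.InteropServices;

// Sketch of the baking input; copy the real field list from the plug-in header.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct BakeData
{
    public IntPtr vertices;   // array pointer: merged vertex positions of all meshes
    public IntPtr indices;    // array pointer: merged triangle indices
    public int vertexCount;
    public int indexCount;

    // Bounding box that delimits the baked region.
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
    public float[] minBox;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
    public float[] maxBox;
}

public static class VolumeCloudBaker
{
    // Bakes the meshes inside the bounding box into a 3D texture written to savePath.
    [DllImport("UnityPluginAdaptive")]
    public static extern int BakeMultiMesh(ref BakeData data, string savePath);
}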
The function takes a BakeData struct variable and the .bin file save path as its inputs, and outputs a file similar to shape.bin. savePath should be a complete path, including the file name extension (.bin). The member variables of this struct are array pointers or integers. For more details, please refer to the relevant documents for the Volumetric Cloud plug-in of CG Kit. According to the documents, the baking API bakes the meshes inside a bounding box into a 3D texture; the size of the bounding box is determined by minBox and maxBox in the struct.
As shown below, before calling the API, you'll need to combine multiple 3D models. To visualize which areas will be baked, you can draw a wireframe box in the scene based on minBox and maxBox; areas outside the wireframe are not baked. Once each model is in position, you can call this API to perform the baking.
It's worth noting that the baked 3D shape texture is sampled cyclically (tiled) during volumetric cloud rendering. Therefore, when arranging the 3D models, you'll need to ensure horizontal continuity for models that intersect the vertical faces of the bounding box. Take the x axis as an example: the models intersecting the two vertical faces perpendicular to the x axis must be identical and offset from each other by exactly the bounding box's length along x (with the same y and z coordinates).
Upon completion of baking, you can render volumetric clouds with the customized 3D texture, following a process similar to the one described above for shape.bin.
Demo Download
The demo is uploaded as a .unitypackage file to Google Drive, and available at:
VolumeCloud - Google Drive
drive.google.com
Choose Assets > Import Package > Custom Package to import the .unitypackage file. The Plugin directory contains the .dll and .so files of the adaptation layer and the Volumetric Cloud plug-in of CG Kit; the volumeCloud directory includes the relevant scripts, shaders, and pre-built scenes; and the StreamingAssets directory contains the resource files (shape.bin and noise.bin). The volumeCloud/Readme.txt file provides notes for running the demo. Read it and configure the corresponding parameters as described before running the demo.
For more details, you can go to:
Our official website
Our Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
Original Source

Related

HUAWEI HMS Scan Kit vs Zxing

This is originally from HUAWEI Developer Forum (https://forums.developer.huawei.com/forumPortal/en/home)​
Brief Introduction of Both
Zxing is a common third-party open-source SDK. However, it has a notable defect: it only implements basic barcode scanning and does not support complex scanning environments such as strong light, bent surfaces, and deformation. The mainstream practice is therefore to optimize the Zxing source code, but the results are still not ideal, and many people spend a lot of time on the optimization.
The Huawei Scan Kit service provides convenient barcode and QR code scanning, parsing, and generation capabilities, helping developers quickly build QR code scanning functions into their apps. Thanks to Huawei's long-term work in the computer vision field, Scan Kit can detect and automatically zoom in on long-distance or small codes, and it is optimized for common complex scanning scenarios (such as reflection, dim light, smudges, blur, and cylindrical surfaces), improving the scanning success rate and user experience.
Now, let’s compare the capabilities of Zxing and Huawei HMS Scan Kit from the following aspects:
Long-Distance Code Scanning
The success of long-distance QR code scanning depends on the QR code specifications (the more information a code carries, the more difficult it is to identify) and the distance between the camera and the code.
Because Zxing has no automatic zoom-in optimization, it struggles to recognize a code that occupies less than one fifth of the screen.
Scan Kit has a pre-detection function that automatically zooms in on a distant QR code, even one that can barely be identified by the naked eye.
Conclusion: Scan Kit Wins
Scanning Codes in Complex Scenarios
Complex scanning scenarios include reflection, dim light, smudges, blur, and cylindrical surfaces. In these scenarios, Zxing's recognition performance is poor.
Such scenarios are common in daily life. Outdoors you may encounter reflection, dim light, and smudged codes; a QR code attached to a product may wrap around curved surfaces or even edges and corners; and scanning a code while walking introduces motion blur. The following figure shows the test comparison in these scenarios.
Conclusion: Scan Kit Wins
Code Scanning at Any Angle
Zxing essentially supports only head-on scanning. When the code is tilted within 10 degrees, Zxing can still recognize it with high accuracy, but once the tilt exceeds 10 degrees, its recognition accuracy drops sharply. Scan Kit, by contrast, is unaffected by the tilt angle, and its recognition accuracy does not decrease.
Conclusion: Scan Kit Wins
Multi-Code Recognition
Multi-code recognition helps identify multiple codes at a time in scenarios such as express delivery and supermarket checkout, improving processing efficiency. In multi-code mode, Scan Kit can recognize up to five codes on the screen at the same time and return the types and values of all of them at once.
Conclusion: Scan Kit Wins
SDK Package Size
The Zxing package is about 500 KB, which is a satisfactory size. Scan Kit has two modes: Lite and Pro. In Lite mode, the package size is 700 KB; in Pro mode, it is 3.3 MB. To summarize: Zxing is about 500 KB, Scan Kit Lite is 700 KB, and Scan Kit Pro is 3.3 MB.
The two modes differ slightly on non-Huawei phones: in my tests, the Lite version's recognition results on non-Huawei phones were slightly lower than the Pro version's. So if you are not sensitive to package size on non-Huawei phones, choose the Pro version.
Conclusion: Zxing has the advantage.
Platform Support
Zxing and Scan Kit support both iOS and Android platforms.
Conclusion: The score is even.
Integration Modes
Zxing's integration is relatively simple: the SDK can be integrated with only a few lines of code. However, real product development also involves building the scanning interface and auxiliary functions, and Zxing provides no quick integration path for these, although integration guides have long been available online, which lowers the difficulty somewhat. In summary: first, Zxing provides no default interface; second, you have to implement functions such as automatic zoom and flashlight yourself.
Scan Kit provides multiple access modes, including single-code access, multi-code access, and customized access. The differences between these modes are as follows:
Single-code access comes with a default layout, and functions such as automatic zoom and flashlight are preset, so developers do not need to implement them. Integration takes only about five lines of code, which makes it especially suitable for quickly adding or replacing a code scanning function.
Customized access lets you design the layout yourself: only the basic scanning and decoding functions and a blank layout are provided, so you can match the layout to your app's style. However, you then need to implement functions such as automatic zoom and flashlight yourself. The corresponding technical documents can be found on the official Huawei Developers website. Compared with single-code access, this mode is more complicated.
The integration mode is as follows:
Zxing
1. Create a project and import the Zxing module.
2. Add permissions and dynamically apply for them.
3. Override the onActivityResult method.
4. Invoke the decoding function.
5. Build the UI and ensure that it is displayed correctly.
Scan Kit
The default view mode provides two functions: camera QR code scanning and image-based QR code scanning. In this mode, developers do not need to develop the UI for QR code scanning.
The process is almost the same as that of Zxing.
1. Create a project and import the Scan Kit module.
2. Add permissions and dynamically apply for permissions.
3. Override the onActivityResult method.
4. Invoke the decoding function.
The following uses the Default View Mode as an example to describe the integration procedure.
1. Create a project and add the online dependency in the app/build.gradle file.
Code:
implementation 'com.huawei.hms:scan:{version}'
2. Declare the QR code scanning page in the AndroidManifest.xml file of the calling module.
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Reading the file permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!--Features-->
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
3. Create QR code scanning options based on the site requirements.
Code:
HmsScanAnalyzerOptions options = new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.QRCODE_SCAN_TYPE, HmsScan.DATAMATRIX_SCAN_TYPE).create();
4. Invoke the static method startScan of ScanUtil to start the Default View QR code scanning page.
Code:
ScanUtil.startScan(this, REQUEST_CODE_SCAN_ONE, options);
The comparison shows that Scan Kit and Zxing have similar dependency and permission application steps. However, Scan Kit provides a default UI (with built-in flashlight, automatic zoom, and image-based scanning), whereas Zxing requires you to implement the UI yourself and then build these functions manually.
Conclusion: Scan Kit Wins
Technical Analysis
Why is Scan Kit better than Zxing? The following analyzes both from the perspective of their implementation principles.
Zxing
Zxing uses a traditional recognition algorithm: it detects a code by analyzing the image from a fixed viewpoint. The algorithm tolerates only a small degree of deformation; for example, a square code skewed by less than 10 degrees still fits the expected pixel pattern, but if the code is deformed too much or tilted too far, the algorithm cannot locate it. Zxing's detection process splits into two paths, tried serially: one-dimensional code detection and two-dimensional code detection.
For one-dimensional codes, Zxing uses a line-by-line scanning mechanism for feature recognition. Because a 1D barcode consists of alternating black and white bars, any roughly equally spaced black-and-white sequence is treated as a candidate code. The length of the candidate is determined by finding the start and end characters, and the sequence is then passed serially through the different 1D decoding modules, which takes a long time; when serial decoding fails, reporting the failure also takes a long time. Moreover, once the barcode is wrinkled, rotated, or deformed, line-by-line scanning can no longer find a sequence that meets the requirements, so the barcode cannot be detected under complex conditions.
Figure: structure of a 1D barcode — 1. Quiet zone (front); 2. Start character; 3. Data symbols; 4. Terminator; 5. Quiet zone (rear).
For two-dimensional codes, Zxing uses a different detection algorithm per code type. For example, the most common type, the QR code, has three position detection patterns, so Zxing again scans the image line by line looking for the patterns' characteristic black-white ratio of 1:1:3:1:1. Once found, the center of each position detection pattern is used as a reference point for an affine transformation, and the corrected image is sent to the QR decoding module. Because the QR code's finder patterns allow rotation to be corrected, Zxing adapts well to rotated codes. However, it cannot handle cases where a finder pattern is partially blocked, deformed, smudged, or reflective. As the figure shows, finding the position detection patterns is the decisive step: once one of them fails to be detected, the whole QR code cannot be detected.
HUAWEI Scan Kit
Scan Kit uses a deep learning algorithm, which is spatially invariant. By training detectors for the corresponding code types, Scan Kit can quickly find all the codes it needs.
Actual process:
Both the barcode detection module and the angle prediction module use deep learning models.
Barcode detection: Scan Kit is not bound to Zxing's serial process of detecting 1D and 2D codes separately. A trained detector directly outputs the code type and its position, so each barcode is sent to the right decoding module after a single detection pass, with no serial trial decoding. Because serial decoding involves high-overhead operations such as repeated line scanning, and information cannot be shared across code types, eliminating it greatly reduces end-to-end latency and avoids a lot of repeated, unnecessary computation.
Angle prediction: the model returns the code's three-dimensional rotation angles, which are used for a perspective transformation. In practice, the core of barcode detection is obtaining the boundary points accurately; simply binarizing the image and sending it to the decoding module yields poor results. Angle prediction is therefore the key step for recognizing barcodes in complex scenarios.
To sum up, deep learning turns Zxing's serial detect-then-decode process into a parallel one. In addition, the three-dimensional angle of the barcode is returned, and after the transformation an aligned, front-facing barcode is obtained. This greatly improves the detection success rate and greatly reduces latency.
Nice and useful article
do huawei and zxing scan detect damaged qr code?
riteshchanchal said:
do huawei and zxing scan detect damaged qr code?
Hi, it depends on how damaged the QR code is. If the code is only a little unclear or slightly broken, it can still be recognized.
If possible, send a picture or describe the degree of damage; that will help you get a more accurate reply.


Enrich Your life with HMS Core AI Service

Introduction:
HMS ML Kit provides diversified, industry-leading machine learning capabilities that are easy to use, helping you develop various AI apps. ML Kit comprises a wide range of services, and this article introduces each of them to developers in detail.
Text-related Services
1. Text Recognition, can extract text from images of receipts, business cards, and documents. This service is widely used in office, education, transit, and other apps.
2. Document Recognition, can recognize text with paragraph formats in document images. It can extract text from document images to convert paper documents into electronic copies, greatly improving the information input efficiency and reducing labor costs.
3. Bank Card Recognition, can quickly recognize information such as the bank card number, covering mainstream bank cards such as China Union Pay, American Express, MasterCard, Visa, and JCB around the world. It is widely used in finance and payment scenarios requiring bank card binding to quickly extract bank card information, realizing quick input of bank card information.
4. General Card Recognition, provides a universal development framework based on the text recognition technology. It allows you to customize the post-processing logic to extract required information from any fixed-format cards, such as Exit-Entry Permit for Traveling to and from Hong Kong and Macao, Hong Kong identity card, and Mainland Travel Permit for Hong Kong and Macao Residents.
Language/Voice-related Services
1. Translation, can detect the language of text and translate the text into different languages. Currently, this service can translate text online between 21 languages and translate text offline between 17 languages.
2. Language Detection, supports both online and offline modes. Currently, 52 languages can be detected on the cloud and 51 languages can be detected on the device.
3. Automatic Speech Recognition (ASR), can convert speech (no more than 60 seconds) into text in real time. Currently, Mandarin Chinese (including Chinese-English bilingual speech), English, French, German, Spanish, and Italian are supported.
4. Text to Speech (TTS), can convert text information into audio output. Real-time audio data can be output from the on-device API (offline models can be downloaded). Rich timbres, and volume and speed options are supported to produce more natural sounds.
5. Audio File Transcription, can convert an audio file into text, output punctuation, and generate text information with timestamps. Currently, the service supports Chinese and English.
6. Video Course Creator, can automatically create video courses based on courseware and commentaries, reducing video creation costs and improving efficiency.
7. Real-Time Transcription, enables your app to convert long speech (no longer than 5 hours) into text in real time. The generated text contains punctuation marks and timestamps.
8. Sound Detection, can detect sound events in online (real-time recording) mode. The detected sound events can help you perform subsequent actions.
Image-related Services
1. Image Classification, classifies elements in images into intuitive categories, such as people, objects, environments, activities, or artwork, to define image themes and application scenarios.
2. Object Detection and Tracking, can detect and track multiple objects in an image, so they can be located and classified in real time. This is useful for examining and recognizing images.
3. Landmark Recognition, can identify the names and latitude and longitude of landmarks in an image. You can use this information to create individualized experiences for users.
4. Image Segmentation, can differentiate elements in an image. For example, you can use this service to create photo editing apps that replace certain parts of photos, such as the background.
5. Product Visual Search, searches a pre-established product image library for products identical or similar to the one in a photo taken by the user, and returns the IDs of those products and related information.
6. Image Super-Resolution, provides the 1x and 3x super-resolution capabilities. 1x super-resolution removes the compression noise, and 3x super-resolution not only effectively suppresses the compression noise, but also provides a 3x enlargement capability.
7. Document Skew Correction, can automatically identify the location of a document in an image and correct the skew, so the image appears as if taken facing the document straight on.
8. Text Image Super-Resolution, can zoom in an image that contains text and significantly improve the definition of text in the image.
9. Scene Detection, can classify the scene content of images and add annotation information, such as outdoor scenery, indoor places, and buildings, to help understand the image content.
Face/Body-related Services
1. Face Detection, can detect the shapes and features of your user's face, including their facial expression, age, gender, and what they are wearing. You can use the service to develop apps that dynamically beautify users' faces during video calls.
2. Skeleton Detection, detects and locates key points of the human body, such as the top of the head, neck, shoulder, elbow, wrist, hip, knee, and ankle. For example, when taking a photo, the user can pose a posture similar to a preset one.
3. Liveness Detection, can detect whether a user in a service scenario is a real person. This service is useful in various scenarios.
4. Hand Keypoint Detection, can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return positions of the keypoints. Currently, static image detection and real-time video stream detection are supported.
Conclusion
Besides ML Kit, HMS also provides Awareness Kit, which gives your app the ability to obtain contextual information such as the user's current time, location, behavior, audio device status, ambient light, weather, and nearby beacons; Scan Kit, which scans and parses all major 1D and 2D barcodes and generates QR codes, helping you quickly build barcode scanning into your apps; and Nearby Service, which allows apps to easily discover nearby devices and communicate with them using technologies such as Bluetooth and Wi-Fi, through the Nearby Connection and Nearby Message APIs.

Are you wearing Face Mask? Let's detect using HUAWEI Face Detection ML Kit and AI engine MindSpore

Article Introduction
In this article, we will show how to integrate Huawei ML Kit (Face Detection) and the powerful AI engine MindSpore Lite in an Android application to detect in real time whether users are wearing masks. Due to Covid-19, face masks are mandatory in many parts of the world. With this in mind, the use case includes an option to remind users with audio commands.
Huawei ML Kit (Face Detection)
Huawei Face Detection service (offered by ML Kit) detects 2D and 3D face contours. The 2D face detection capability can detect features of your user's face, including their facial expression, age, gender, and wearing. The 3D face detection capability can obtain information such as the face keypoint coordinates, 3D projection matrix, and face angle. The face detection service supports static image detection, camera stream detection, and cross-frame face tracking. Multiple faces can be detected at a time.
Following are the important features supported by Face Detection service:
MindSpore Lite
MindSpore Lite is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides end-to-end solutions, and helps users enable AI capabilities. Following are some common scenarios for using MindSpore:
For this article, we implemented image classification. The camera stream yields frames, which we process to detect faces using ML Kit (Face Detection). Once we have the faces, we run them through our trained MindSpore Lite model to classify each face as WithMask or WithoutMask.
Pre-Requisites
Before getting started, we need to train our model and generate the .ms file. For that, I used the HMS Toolkit plugin for Android Studio. If you are migrating from TensorFlow, you can convert your model from .tflite to .ms using the same plugin.
The dataset used for this article is from Kaggle (the link is provided in the references). It provides 5,000 images for each case, plus testing and validation images to evaluate the model after training.
Step 1: Importing the images
To start the training, select HMS > Coding Assistance > AI > AI Create > Image Classification. Import both folders (WithMask and WithoutMask) in the Train Data section. Select the output folder and training parameters based on your requirements. You can read more about this in the official documentation (the link is provided in the references).
Step 2: Creating the Model
When you are ready, click the Create Model button. Training will take some time, depending on your machine; you can check the progress of training and validation throughout the process.
Once the process is completed, you will see a summary of the training and validation.
Step 3: Testing the Model
It is always recommended to test your model before using it practically. We used the provided test images in the dataset to complete the testing manually. Following were the test results for our dataset:
After testing, add the generated .ms file along with labels.txt in the assets folder of your project. You can also generate Demo Project from the HMS Toolkit plugin.
Development
Since this is an on-device capability, we don't need to integrate HMS Core or import agconnect-services.json into our project. Following are the major steps of development for this article:
Read full article.
Conclusion
Building smart solutions with AI capabilities is much easier with HUAWEI Mobile Services (HMS) ML Kit and the AI engine MindSpore Lite. Use cases can be developed for all industries, including but not limited to transportation, manufacturing, agriculture, and construction.
Here, we used the Face Detection ML Kit and the AI engine MindSpore to develop a face mask detection feature. The on-device open capabilities of HMS gave us highly efficient and optimized results: individual or multiple users without masks can be detected from afar in real time. This is applicable in public places, offices, malls, or at any entrance.
Tips & Tricks
Make sure to add all the permissions like WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, CAMERA, ACCESS_NETWORK_STATE, ACCESS_WIFI_STATE.
Make sure to add aaptOptions in the app-level build.gradle file after adding the .ms and labels.txt files to the assets folder. If you miss this, you might get a "Load model failed" error.
Always use animation libraries like Lottie to enhance UI/UX in your application. We also used OwlBottomSheet for the help bottom sheet.
The performance of the model is directly proportional to the number of training inputs: the more inputs, the higher the accuracy. In this article, we used 5,000 images for each case. You can add as many as possible to improve accuracy.
MindSpore Lite provides its output via a callback. Make sure to design your use case with this in mind.
If you have Tensorflow Lite Model file (.tflite), you can convert it to .ms using the HMS Toolkit plugin.
HMS Toolkit plugin is very powerful. It supports converting MindSpore Lite and HiAI models. MindSpore Lite supports TensorFlow Lite and Caffe and HiAI supports TensorFlow, Caffe, CoreML, PaddlePaddle, ONNX, MxNet and Keras.
If you want to use Tensorflow with HMS ML Kit, you can also implement that. I have created another demo where I put the processing engine as dynamic. You can check the link in the references section.
References
HUAWEI ML Kit (Face Detection) Official Documentation:
https://developer.huawei.com/consum...-Guides-V5/face-detection-0000001050038170-V5
HUAWEI HMS Toolkit AI Create Official Documentation:
https://developer.huawei.com/consumer/en/doc/development/Tools-Guides/ai-create-0000001055252424
HUAWEI Model Integration Official Documentation:
https://developer.huawei.com/consum...ols-Guides/model-integration-0000001054933838
MindSpore Lite Documentation:
Using MindSpore on Mobile and IoT — MindSpore Lite r1.1 documentation
MindSpore Lite Code Repo:
MindSpore/mindspore
MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
gitee.com
Kaggle Dataset Link:
Face Mask Detection ~12K Images Dataset
12K Images divided in training testing and validation directories.
www.kaggle.com
Lottie Android Documentation:
Lottie Docs
Lottie is a library for Android, iOS, Web, and Windows that parses Adobe After Effects animations exported as json with Bodymovin and renders them natively on mobile and on the web
airbnb.io
Tensorflow as a processor with HMS ML Kit:
https://github.com/yasirtahir/Huawe...icodelabs/fragments/mlkit/facemask/tensorflow
Github Code Link:
https://github.com/yasirtahir/DetectFaceMask
Read full article.
Nice and useful to know in the time of COVID-19.
How much accuracy does it provide?

Beginner: Integration of Huawei Remote configuration in flutter for taxi booking application

Introduction
Welcome, folks! In this article, I will explain what Huawei Remote Configuration is and how it works in Flutter. By the end of this tutorial, we will have created a Flutter taxi booking application that uses Huawei Remote Configuration.
In this example, I am enabling or disabling the ride-sharing feature via Remote Configuration. When the share feature is enabled, the user can book a shared cab; otherwise, the share option is hidden.
What is Huawei Remote Configuration?
Huawei Remote Configuration is a cloud service that changes the behavior and appearance of your app for all active users without requiring you to publish an app update on AppGallery. Essentially, Remote Configuration lets you maintain parameters in the cloud, and based on these parameters you control the behavior and appearance of your app. For example, for a festival scenario you can define parameters with the text, colors, and images for a theme, which can then be fetched via Remote Configuration.
How does Huawei Remote Configuration work?
Huawei Remote Configuration is a cloud service that allows you to change the behavior and appearance of your app without requiring users to download an update. When using Remote Configuration, you create in-app default values that control the app's behavior and appearance. Later, you can use the Huawei console or the Remote Configuration APIs to override these in-app defaults for all users or for segments of your user base. Your app controls when updates are applied; it can check for updates frequently and apply them with negligible impact on performance.
In short, you define in-app default values (for text, colors, images, and so on), and Remote Configuration can later fetch parameters from the cloud and override those defaults.
Integration of Remote Configuration
1. Configure application on the AGC.
2. Client application development process.
Configure application on the AGC
This involves a couple of steps, as follows.
Step 1: Register a developer account in AppGallery Connect. If you are already a developer, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on your current location.
Step 4: Enable Remote Configuration. Open AppGallery Connect and choose Grow > Remote Configuration.
Step 5: Generate a signing certificate fingerprint.
Step 6: Configure the signing certificate fingerprint.
Step 7: Download your agconnect-services.json file and paste it into the android/app directory of your Flutter project.
Client application development process
This also involves a couple of steps, as follows.
Step 1: Create a Flutter application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. In the project, open android > app > build.gradle and apply the following plugins:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Add the root-level Gradle dependencies in android/build.gradle: the Maven repository below goes under buildscript > repositories, and the classpath goes under buildscript > dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Add the following permissions to the AndroidManifest.xml file:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
Step 3: Add agconnect_remote_config to pubspec.yaml.
Step 4: Place the downloaded plugin folders next to the project directory and declare each plugin path in pubspec.yaml under dependencies:
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/
  huawei_location:
    path: ../huawei_location/
  huawei_map:
    path: ../huawei_map/
  huawei_analytics:
    path: ../huawei_analytics/
  huawei_site:
    path: ../huawei_site/
  huawei_push:
    path: ../huawei_push/
  huawei_dtm:
    path: ../huawei_dtm/
  agconnect_crash: ^1.0.0
  agconnect_remote_config: ^1.0.0
  http: ^0.12.2
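Once the dependency is declared, run flutter pub get and import the plugin in your Dart code. The import path follows the package name, per the standard Flutter convention:
import 'package:agconnect_remote_config/agconnect_remote_config.dart';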
To build the Remote Configuration example, let us follow these steps.
1. AGC Configuration
2. Build Flutter application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei Remote Configuration service.
3. Navigate to Grow > Remote Configuration.
Step 2: Build the Flutter application
As described above, the share feature is toggled from Remote Configuration: when it is enabled, the user can book a shared cab; otherwise, the share option is hidden.
Huawei Remote Configuration maintains three configuration states, as explained below.
Default Configuration: The default values defined in your app. If no matching key is found on the Remote Configuration server, the default value is copied into the active configuration and returned to the client.
Map<String, dynamic> defaults = {
  'enable_feature_share': false,
  'button_color': 'red',
  'text_color': 'white',
  'show_shadow_button': true,
  'default_distance': 4.5,
  'min_price': 80
};
AGCRemoteConfig.instance.applyDefaults(defaults);
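A natural place to register defaults is once at startup, before any fetch, so that getValue always has a fallback. A minimal sketch, assuming the code lives in the State class of your booking page:

@override
void initState() {
  super.initState();
  // Register in-app defaults before fetching, so every key has a fallback value.
  AGCRemoteConfig.instance.applyDefaults({
    'enable_feature_share': false,
  });
}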
Fetched Configuration: The most recent configuration fetched from the server but not yet activated. You need to activate these parameters; all values are then copied into the active configuration.
_fetchAndActivateNextTime() async {
  // Activate the configuration fetched during the previous session.
  await AGCRemoteConfig.instance.applyLastFetched();
  Map value = await AGCRemoteConfig.instance.getMergedAll();
  setState(() {
    _allValue = value;
  });
  // Fetch the latest configuration; it will be applied on the next launch.
  await AGCRemoteConfig.instance.fetch().catchError((error) => log(error.toString()));
}
Active Configuration: The configuration directly accessible from your app. It contains values from both the defaults and the fetched configuration.
fetchAndActivateImmediately() async {
  // Fetch the latest configuration and apply it in the current session.
  await AGCRemoteConfig.instance.fetch().catchError((error) => log(error.toString()));
  await AGCRemoteConfig.instance.applyLastFetched();
  Map value = await AGCRemoteConfig.instance.getMergedAll();
  setState(() {
    _allValue = value;
  });
}
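The difference between the two patterns is when newly fetched values become visible. _fetchAndActivateNextTime applies the values fetched during the previous session and only then triggers a new fetch, so the configuration never changes mid-session; fetchAndActivateImmediately awaits the fetch and applies it right away, guaranteeing the freshest values at the cost of a network round trip before they can be used. For a booking flow, the first pattern avoids the UI changing under the user.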
Fetch Parameter Value
After the default parameter values are set or parameter values are fetched from Remote Configuration, you can call AGCRemoteConfig.instance.getValue to obtain a parameter value by its key and use it in your app.
_fetchParameterValue() {
  AGCRemoteConfig.instance.getValue('enable_feature_share').then((value) {
    // onSuccess: values are returned as strings.
    setState(() {
      _isVisible = (value == 'true');
    });
  }).catchError((error) {
    // onFailure
  });
}
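To complete the share-cab example, the fetched flag can drive the widget tree. A minimal sketch, assuming _isVisible is a field of the surrounding State and ShareCabButton is a hypothetical widget that starts a shared-cab booking:

@override
Widget build(BuildContext context) {
  return Visibility(
    // Hide the share option entirely when the remote flag is off.
    visible: _isVisible,
    child: ShareCabButton(), // hypothetical widget
  );
}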
Resetting Parameter Values
You can clear all existing parameter values (both the fetched data and the applied defaults) using the function below.
_resetParameterValues() {
  AGCRemoteConfig.instance.clearAll();
}
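If clearAll also removes the applied defaults, a typical follow-up is to re-apply them before the next getValue call. A minimal sketch, assuming both plugin calls return Futures (as the calls above do) and reusing the defaults map defined earlier:

_resetAndRestoreDefaults() async {
  // Clear fetched data and applied defaults, then restore the in-app defaults.
  await AGCRemoteConfig.instance.clearAll();
  await AGCRemoteConfig.instance.applyDefaults(defaults);
}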
What can be done using Huawei Remote Configuration?
Displaying different content to different users: Remote Configuration can work with HUAWEI Analytics to personalize the content displayed to different audiences. For example, office workers and students will see different products and UI layouts in the app.
Adapting the app theme by time: You can set time conditions, different app colors, and various materials in Remote Configuration to change the app theme for specific occasions. For example, during graduation season, you can adapt your app to a graduation theme to attract more users.
Releasing new functions by user percentage: Releasing new functions to all users at the same time is risky. Remote Configuration lets you release a new function to a percentage of users, so you can slowly widen the target user scope and improve your app based on feedback from users already exposed to the new functions.
Features of Remote Configuration
1. Add parameters
2. Add conditions
1. Adding parameters: You can add as many parameters with values as you want. You can also change a value later, and the change is automatically reflected in the app. After adding all the required parameters, release them.
2. Adding conditions: This feature lets you add conditions based on the parameters below; the conditions can then be released.
App Version
OS version
Language
Country/Region
Audience
User Attributes
Predictions
User Percentage
Time
App Version: Conditions can be applied to app versions using four operators: Include, Exclude, Equal, and Include regular expression.
OS Version: You can add a condition based on the Android OS version.
Language: You can add a condition based on the language.
Country/Region: You can add a condition based on the country or region.
User Percentage: You can roll a feature out to a percentage of users, from 1% to 100%.
Time: You can use a time condition to enable or disable a feature based on time, for example, when a feature has to be enabled on a particular day.
After adding the required conditions, release them all.
Result
Tips and Tricks
Download the latest HMS Flutter plugins.
Check that the dependencies were downloaded properly.
The latest HMS Core APK is required.
Conclusion
In this article, we have learned how to integrate Huawei Remote Configuration into a Flutter taxi booking application: adding parameters and conditions, releasing them, fetching remote data in the application, and clearing the data.
Reference
Huawei Remote Configuration
Happy coding
