How to Analyse Your Users and Improve User Engagement - Huawei Developers

Notifications play a vital role in your app's success. They give you a platform to reach out to your users, interact with them, give them updates, refresh the app data, and much more.
While notifications are an important part of the whole app development process, they are somewhat trickier to integrate than normal Android APIs. The reason? Well, while normal APIs need configuration inside the app only, notifications need more than that: outside configuration such as handling user tokens and sending notifications using those tokens.
The main goal of this app is to serve as a sample of how to build a high-quality Android application that uses Architecture Components, MVVM, RxJava, HMS Push Kit, HMS Analytics Kit, and more, in Kotlin.
What is Kotlin? The Java alternative explained
Kotlin is a general purpose, free, open source, statically typed “pragmatic” programming language initially designed for the JVM (Java Virtual Machine) and Android that combines object-oriented and functional programming features
Safety features in Kotlin
· Speaking of avoiding common errors, Kotlin was designed to eliminate the danger of null pointer references and to streamline the handling of null values.
· It does this by making null illegal for standard types, adding nullable types, and implementing shortcut notations to handle null checks.
· To avoid the verbose grammar normally needed for null testing, Kotlin introduces a safe call, written as ?. (a question mark followed by a dot). For example, b?.length returns b.length if b is not null, and null otherwise; the type of this expression is Int?. A short sketch of these features follows.
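A minimal Kotlin sketch of these null-safety features; the function and variable names are just for illustration:
Code:
fun describe(name: String?): String {
    // Safe call: returns null instead of throwing if name is null.
    val length: Int? = name?.length

    // Elvis operator: supply a default when the left-hand side is null.
    val safeLength: Int = length ?: 0

    return "length = $safeLength"
}

fun main() {
    println(describe("Kotlin"))  // length = 6
    println(describe(null))      // length = 0
}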
RxJava:
· RxJava has become one of the most important libraries for Android development.
· Most of you will have worked with it in some form, either in your own codebase or through third-party libraries such as Fast Android Networking and Retrofit.
How is this Reactive Stream implemented?
· Publisher: The following interfaces (APIs) define the structure of RxJava.
Code:
interface Publisher<T> {
    fun subscribe(s: Subscriber<in T?>?)
}
· Subscriber: A Subscriber exposes the following callback methods.
Code:
interface Subscriber<T> {
    fun onSubscribe(s: Subscription?)
    fun onNext(t: T)
    fun onError(t: Throwable?)
    fun onComplete()
}
· Subscription: When the Subscriber becomes ready to start handling events, it signals the Publisher through its Subscription (a usage sketch follows the interface below).
Code:
interface Subscription {
    fun request(n: Long)
    fun cancel()
}
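To make the request/cancel contract concrete, here is a small sketch using the Reactive Streams interfaces that ship with RxJava 3 (org.reactivestreams); the one-item-at-a-time request pattern is just for illustration:
Code:
import io.reactivex.rxjava3.core.Flowable
import org.reactivestreams.Subscriber
import org.reactivestreams.Subscription

fun main() {
    Flowable.range(1, 5).subscribe(object : Subscriber<Int> {
        private lateinit var subscription: Subscription

        override fun onSubscribe(s: Subscription) {
            subscription = s
            // Signal readiness by requesting the first item.
            s.request(1)
        }

        override fun onNext(t: Int) {
            println("Received $t")
            // Request the next item only after handling the current one (backpressure).
            subscription.request(1)
        }

        override fun onError(t: Throwable) = t.printStackTrace()

        override fun onComplete() = println("Done")
    })
}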
Publishers
Let’s see how publishers are structured in RxJava. RxJava provides the following types of event publishers (a short usage sketch follows the list):
· Flowable: A Publisher that emits 0..N elements, and then completes successfully or with an error.
· Observable: Similar to Flowables but without a backpressure strategy. They were introduced in RxJava 1.x.
· Single: It completes successfully with a value or with an error (it does not have an onComplete callback; instead it has onSuccess(value)).
· Maybe: It completes with or without a value, or completes with an error.
· Completable: It just signals whether it has completed successfully or with an error.
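A short Kotlin sketch showing a few of these publishers in use; it assumes the RxJava 3 artifacts (io.reactivex.rxjava3):
Code:
import io.reactivex.rxjava3.core.Flowable
import io.reactivex.rxjava3.core.Maybe
import io.reactivex.rxjava3.core.Single

fun main() {
    // Flowable: 0..N items with backpressure support.
    Flowable.just("a", "b", "c")
        .subscribe({ item -> println("Flowable item: $item") },
                   { error -> error.printStackTrace() },
                   { println("Flowable completed") })

    // Single: exactly one value or an error (onSuccess instead of onComplete).
    Single.just(42)
        .subscribe({ value -> println("Single value: $value") },
                   { error -> error.printStackTrace() })

    // Maybe: a value, an empty completion, or an error.
    Maybe.empty<String>()
        .subscribe({ value -> println("Maybe value: $value") },
                   { error -> error.printStackTrace() },
                   { println("Maybe completed empty") })
}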
What is MVVM?
Model-View-ViewModel architecture consists of 3 parts.
· The View captures the user's actions and sends them to the ViewModel, or listens to the live data stream from the ViewModel and presents it to the user.
· The ViewModel receives the user's actions from the View and provides data to the View.
· The Model abstracts the data source; the View and ViewModel use it through the data stream.
LiveData:
· LiveData is one of the newly introduced architecture components. LiveData is an observable data holder.
· This allows the components in your app to be able to observe LiveData objects for changes without creating explicit and rigid dependency paths between them.
· This completely decouples the LiveData object producer from the LiveData object consumer.
ViewModel:
· ViewModel is also one of the newly introduced architecture components.
· Architecture components provide a new class called ViewModel, which is responsible for preparing the data for the UI/View.
· ViewModel gives you a good base class for your MVVM ViewModel layer, since classes extending ViewModel (and its child AndroidViewModel) automatically have the data they hold retained across configuration changes.
· This means that after a configuration change, the data held by the ViewModel is immediately available to the next activity or fragment instance, as in the sketch below.
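A minimal sketch of how ViewModel and LiveData fit together; the class, property, and view names are illustrative and assume the AndroidX lifecycle artifacts:
Code:
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

class CounterViewModel : ViewModel() {
    private val _count = MutableLiveData(0)

    // Expose an immutable LiveData so the View can only observe, never mutate.
    val count: LiveData<Int> = _count

    fun increment() {
        _count.value = (_count.value ?: 0) + 1
    }
}

// In a Fragment (the View layer), the observed data survives configuration changes:
// private val viewModel: CounterViewModel by viewModels()
//
// viewModel.count.observe(viewLifecycleOwner) { value ->
//     counterText.text = value.toString()
// }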
Project Structure:
We will create packages by feature, which makes the code more modular and manageable.
DataBinding:
· Android data binding allows for one-way or two-way binding within the layout XML files of Android. This should be familiar to anyone who has done web-based development with Angular or Ember, or C# WPF development.
· The most basic case is adding an on-click listener. This requires no registering of click listeners in any of your application code, just some XML in the layout file and a function to be called in your ViewModel class (a sketch of such a ViewModel follows the layout below).
Code:
<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">

    <data>
        <variable
            name="viewModel"
            type="" /> <!-- fully qualified class name of your ViewModel -->
    </data>

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerHorizontal="true"
            android:text="@{viewModel.getHourOfDay()}"
            tools:text="14:00" />
    </RelativeLayout>
</layout>
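For completeness, here is a sketch of the ViewModel side that the layout above could bind to. The class name ClockViewModel and the Activity wiring shown in the comments are assumptions for illustration only:
Code:
import java.util.Calendar
import androidx.lifecycle.ViewModel

class ClockViewModel : ViewModel() {

    // Referenced from the layout as @{viewModel.getHourOfDay()}.
    fun getHourOfDay(): String {
        val hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY)
        return String.format("%02d:00", hour)
    }
}

// In the Activity, after inflating the binding (the binding class name depends on the layout file):
// val binding: ActivityMainBinding = DataBindingUtil.setContentView(this, R.layout.activity_main)
// binding.viewModel = ClockViewModel()
// binding.lifecycleOwner = this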
Weather Notification API:
The Weather Notifications API enables you to send weather-driven push notifications to your applications. With predefined weather parameter threshold values, you can be informed when certain weather phenomena, such as rain, strong winds, or exceptional heat, begin.
The rest of the article covers the coding process.
If you are interested, check the full content at https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201235519702030082&fid=0101187876626530001
For more information about HMS, you can visit https://forums.developer.huawei.com/forumPortal/en/home


Optimize Conversion Rate Using A/B Testing

This article is originally from HUAWEI Developer Forum.
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
Experiment with your ideas and make changes to your website or applications; A/B Testing analytics will help you find which experiment yields better conversion rate optimization.
Introduction
A/B Testing (also known as Split Testing) is a fantastic method for finding the best online advertisement and marketing strategies. Designers are using it at this very moment to obtain valuable insight into visitor behavior and to improve application or website conversion rates.
A/B testing involves sending half your users to one version of a page and the other half to another, then watching the analytics to see which version is more effective in getting them to do what you want them to do (for example, sign up for a user account). This helps you reach a larger audience.
Implementation Process
1. Enable A/B Testing and access the dependent services
2. Create an experiment
3. Manage an experiment
Enabling A/B Testing
· Create a sample application in Android and configure it in AGC.
· Enable the Push and Analytics services.
· Open AGC, select the sample project, go to Growing -> A/B Testing, and enable A/B Testing.
After enabling the service, you will get two options:
· Create notifications experiment.
· Create remote configuration experiment.
Creating an Experiment
Create Remote Configuration Experiment
This is the best way to trial a new feature or an updated user experience; based on user responses, you can decide whether to roll the feature out. Follow the steps below.
Click Create remote configuration experiment and you will get the screen below.
Open AGC -> select your project -> Growing -> Remote Configuration, and enable the service.
Now open the A/B Testing tab.
Enter a name and description, then click the Next button.
· On the target user page, set filter conditions and select the target user ratio.
· Click Conditions and select one of the following filter conditions.
Version: Select the version of the app.
Language: Select the language of the device.
Country/Region: Select the country or region where the device is located.
Audience: Select an audience from HUAWEI Analytics Kit.
User attributes: Select user attributes from HUAWEI Analytics Kit.
After the setting is complete, click Next.
· Create Treatment & Control Groups
To add multiple parameters, use the New parameter option.
To add multiple treatment groups, use the New treatment group option.
You can create customized parameter names.
· After filling in all the details, click the Next button.
· Create Track Indicators
Select the main and optional indicators, then save all the details.
· After saving all the details, you will get the screen below.
· After the experiment is created, you can follow the steps below.
Test the experiment
Start the experiment
View the report
Increase the percentage of users who participate in the experiment.
Release the experiment
Test an Experiment
Before starting an experiment, you need to test it individually.
· Generate an AAID.
· Use Add test user and add the AAID of a test user.
· Run your app on the test user's device and ensure that the test user can receive information.
· Follow the step below to generate an AAID.
Code:
private fun getAAID() {
    val inst = HmsInstanceId.getInstance(this)
    val aaid = inst.id
    txtAaid.text = aaid
    Log.d("AAID", "getAAID success: $aaid")
}
Starting an experiment
· Click Start in the operation column and click OK.
Viewing an Experiment Report
· You can view experiment reports in any state. The system also provides the following information for each indicator by default
· Increment: The system compares the indicator value of the treatment group with that of the control group to measure the improvement of the indicator.
· Other related indicators: For example, for Retention rate after 1 day, the system also provides data about Retained users.
Releasing an Experiment
· Click Release in the operation column.
· Select the treatment group and set the condition name.
· Click the Go to Remote Configuration button; it will redirect you to a new page.
· If you want to modify any parameter values, click the operation, choose the Edit option to modify the values, and click the Save button.
Let’s do Code
· Setting Default In-app Parameter Values
Code:
val config = AGConnectConfig.getInstance()
val map: MutableMap<String, Any> = HashMap()
map["test1"] = "test1"
map["test2"] = "true"
config.applyDefault(map)
· Fetching Parameter Values from Remote Configuration
· You can call the fetch API to obtain parameter values from Remote Configuration in asynchronous mode.
· After the API is successfully called, data of the ConfigValues type is returned from Remote Configuration.
· After obtaining parameter value updates, the app calls the apply() method to make them effective.
· Calling the fetch() method within the fetch interval will return locally cached data instead of obtaining data from the cloud.
· We can customize the fetch interval (see the example after the code blocks below).
Loading Process
· Applying parameter values immediately.
Code:
config.fetch()
    .addOnSuccessListener { configValues ->
        config.apply(configValues)
    }
    .addOnFailureListener { error ->
        Log.d("TAG", error.message)
    }
· Applying parameter values upon the next startup
You can fetch parameter values at any time; in this case, the most recently fetched parameter values can be applied on the next startup without asynchronous waiting.
Code:
val last = config.loadLastFetched()
config.apply(last)
config.fetch()
    .addOnSuccessListener {
    }
    .addOnFailureListener { error ->
        Log.d("TAG", error.message)
    }
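The fetch interval mentioned earlier can be customized by passing an interval, in seconds, to fetch(); this assumes the interval-taking overload of the AGC Remote Configuration SDK, and the one-hour value below is only an example:
Code:
// Fetch from the cloud at most once per hour; within that window,
// fetch(3600) returns the locally cached values instead.
config.fetch(3600)
    .addOnSuccessListener { configValues ->
        config.apply(configValues)
    }
    .addOnFailureListener { error ->
        Log.d("TAG", error.message ?: "fetch failed")
    }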
Report
Conclusion:
A/B testing, which was earlier used mostly on e-commerce portals and websites, has now been introduced for Android apps as well. This form of testing has proven to be highly effective for developers.
So, that's how to implement A/B Testing in your next app.
If you have any questions, you can look for answers on the HUAWEI Developer Forum.

HMS ADS Integration into Unity Game | Installation and Example

Introduction
In this article, we will learn how to add ads to our Unity game. Get paid to show relevant ads from over a million advertisers with HMS Ads in your Unity games. Ads are an effective and easy way to earn revenue from your games. Ads Kit is a smart monetization platform for apps that helps you maximize revenue from ads and in-app purchases. Thousands of apps use HMS Ads Kit to generate a reliable revenue stream.
All you need to do is add the kit to your Unity game to place ads with just a few lines of code.
Implementation Steps
1. Creation of our App in App Gallery Connect
2. Evil Mind plugin Integration
3. Unity Configuration
4. Creation of the scene
5. Coding
6. Configuration for project execution
7. Final result
App Gallery Connect Configuration
Creating a new app in the AppGallery Connect console is a fairly simple procedure, but it requires paying attention to certain important aspects.
Once inside the console, we must create a project and add an app to that project.
When creating our app we will find the following form. It is important to take into account that the category of the app must be Game.
Once the app is created, we need to activate Account Kit for our APIs; we can also activate Game Service if we wish.
HUAWEI Push Kit establishes a messaging channel from the cloud to devices. By integrating Push Kit, you can send messages to your apps on users' devices in real time.
Once this configuration is completed, it will be necessary to add the SHA-256 fingerprint. For this we can use the keytool command, but first we must create a keystore, which we can do from Unity.
Once the keystore is created, we must open the console, go to the path where the keystore is located, and execute the following command. Remember that to use this command you must have the Java JDK installed.
Once inside that path, run:
keytool -list -v -keystore yournamekey.keystore
This will give us all the information in our keystore; we take the SHA-256 and add it to the app.
Unity Evil Mind Plugin configuration
We have concluded the creation of the app and the project, and now we know how to create a keystore and obtain the SHA-256 in Unity.
In case you have not done so yet, we must now create our project in Unity. Once the project is created, we must obtain the project package that we will use to connect our project with the AGC SDK. First of all, let's download the Evil Mind plugin.
GitHub - EvilMindDevs/hms-unity-plugin
Contribute to EvilMindDevs/hms-unity-plugin development by creating an account on GitHub.
github.com
At the link you can find the package to import into Unity. To import it, follow these steps.
Download the .unitypackage and then import it into Unity using the package importer.
Once imported, you will have a Huawei option in the toolbar. Click on it and add the data from your AppGallery Connect console into the required fields of the Unity plugin. With that, our AppGallery app is connected to the Unity project.
Now we can add the Push Notifications prefab to our scene. Remember that to do this we must create a new scene; for this example we can use the scene that the plugin provides.
Unity Configuration
We have to remember that when using Unity, certain configurations need to be done within the engine so that the APK runs correctly. In this section I want to detail some important configurations that we have to change in the engine.
Switch Platform.- Usually Unity will show PC/Mac as the default platform, so we have to change it to Android, as in the picture.
Scripting Backend.- In order to run correctly, the scripting backend must be changed to IL2CPP; by default Unity uses Mono as the scripting backend, so it is important to change this setting.
Minimum API Level.- Another configuration that needs to be done is the minimum API level; we have to set it to API level 21, otherwise the build won't work.
Creation of the scene
Within this step we must create a new scene from scratch, to which we will add the following elements.
AdsManager.- The prefab that comes with the plugin.
Canvas.- Where we will show the buttons that trigger the banners.
EventSystem.- To add input handling for Android handhelds.
AdsManager script.- Where the script that controls the ad instances lives.
Coding
Let's check the code of the prefab that comes with the plugin. In this article I want to use interstitial ads, so first let's review the characteristics of these ads. Interstitial ads are full-screen ads that cover the interface of an app. Such an ad is displayed when a user starts, pauses, or exits an app, without disrupting the user's experience.
The code of the interstitial ad provides a way to get an instance of the object.
Code:
public static InterstitalAdManager GetInstance(string name = "AdsManager") => GameObject.Find(name).GetComponent<InterstitalAdManager>();
We also have a getter/setter to assign the ad unit ID of the test or release ad.
Code:
public string AdId
{
    get => mAdId;
    set
    {
        Debug.Log($"[HMS] InterstitalAdManager: Set interstitial ad ID: {value}");
        mAdId = value;
        LoadNextInterstitialAd();
    }
}
This parameter needs to be assigned in the script that controls the ads' behavior. Finally, another important piece of code in this script is the listener of the interstitial ad. Methods that react to ad behavior, such as click, close, and fail, can be handled in this section.
Code:
private class InterstitialAdListener : IAdListener
{
    private readonly InterstitalAdManager mAdsManager;

    public InterstitialAdListener(InterstitalAdManager adsManager)
    {
        mAdsManager = adsManager;
    }

    public void OnAdClicked()
    {
        Debug.Log("[HMS] AdsManager OnAdClicked");
        mAdsManager.OnAdClicked?.Invoke();
    }

    public void OnAdClosed()
    {
        Debug.Log("[HMS] AdsManager OnAdClosed");
        mAdsManager.OnAdClosed?.Invoke();
        mAdsManager.LoadNextInterstitialAd();
    }

    public void OnAdFailed(int reason)
    {
        Debug.Log("[HMS] AdsManager OnAdFailed");
        mAdsManager.OnAdFailed?.Invoke(reason);
    }

    public void OnAdImpression()
    {
        Debug.Log("[HMS] AdsManager OnAdImpression");
        mAdsManager.OnAdImpression?.Invoke();
    }

    public void OnAdLeave()
    {
        Debug.Log("[HMS] AdsManager OnAdLeave");
        mAdsManager.OnAdLeave?.Invoke();
    }

    public void OnAdLoaded()
    {
        Debug.Log("[HMS] AdsManager OnAdLoaded");
        mAdsManager.OnAdLoaded?.Invoke();
    }

    public void OnAdOpened()
    {
        Debug.Log("[HMS] AdsManager OnAdOpened");
        mAdsManager.OnAdOpened?.Invoke();
    }
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topic/0201461409178140023

How I Developed a Smile Filter for My App

I recently read an article that explained how we as human beings are hardwired to enter the fight-or-flight mode when we realize that we are being watched. This feeling is especially strong when somebody else is trying to take a picture of us, which is why many of us find it difficult to smile in photos. This effect is so strong that we've all had the experience of looking at a photo right after it was taken and noticing straight away that the photo needs to be retaken because our smile wasn't wide enough or didn't look natural. So, the next time someone criticizes my smile in a photo, I'm just going to tell them, "It's not my fault. It's literally an evolutionary trait!"
Or, instead of making such an excuse, what about turning to technology for help? Actually, I have tried using some photo editor apps to modify my portrait photos, making my facial expression look nicer by, for example, removing my braces, whitening my teeth, and erasing my smile lines. However, maybe because of my rusty image editing skills, the modified images often turned out to be strange.
My lack of success with photo editing made me wonder: Wouldn't it be great if there was a function specially designed for people like me, who find it difficult to smile naturally in photos and who aren't good at photo editing, which could automatically give us picture-perfect smiles?
I then suddenly remembered that I had heard about an interesting function called smile filter that has been going viral on different apps and platforms. A smile filter is an app feature which can automatically add a natural-looking smile to a face detected in an image. I have tried it before and was really amazed by the result. In light of my sudden recall, I decided to create a demo app with a similar function, in order to figure out the principle behind it.
To provide my app with a smile filter, I chose to use the auto-smile capability provided by HMS Core Video Editor Kit. This capability automatically detects people in an image and then lightens up the detected faces with a smile (either closed- or open-mouth) that perfectly blends in with each person's facial structure. With the help of such a capability, a mobile app can create the perfect smile in seconds and save users from the hassle of having to use a professional image editing program.
Check the result out for yourselves:
Looks pretty natural, right? This is the result offered by my demo app integrated with the auto-smile capability. The original image looks like this:
Next, I will explain how I integrated the auto-smile capability into my app and share the relevant source code from my demo app.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Using an API key: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, using an access token: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID, which must be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. Such an area is used to render video images, which is implemented by SurfaceView created within the SDK. Before creating such an area, specify its position in the app first.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
Code:
// Apply the auto-smile effect. Currently, this effect only supports image assets.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the handling progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the handling is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the handling failed.
    }
});

// Stop applying the auto-smile effect.
imageAsset.interruptFaceSmile();

// Remove the auto-smile effect.
imageAsset.removeFaceSmileAIEffect();
And with that, I successfully integrated the auto-smile capability into my demo app, and now it can automatically add smiles to faces detected in the input image.
Conclusion
Research has demonstrated that it is normal for people to behave unnaturally when being photographed. Such unnaturalness becomes even more obvious when we try to smile. This explains why numerous social media and video/image editing apps have introduced smile filter functions, which allow users to easily and quickly add a natural-looking smile to faces in an image.
Among various solutions to such a function, HMS Core Video Editor Kit's auto-smile capability stands out by providing excellent, natural-looking results and featuring straightforward and quick integration.
What's better, the auto-smile capability can be used together with other capabilities from the same kit, to further enhance users' image editing experience. For example, when used in conjunction with the kit's AI color capability, you can add color to an old black-and-white photo and then use auto-smile to add smiles to the sullen expressions of the people in the photo. It's a great way to freshen up old and dreary photos from the past.
And that's just one way of using the auto-smile capability in conjunction with other capabilities. What ideas do you have? Looking forward to knowing your thoughts in the comments section.
References
How to Overcome Camera Shyness or Phobia
Introduction to Auto-Smile

How I Created a Smart Video Clip Extractor

Travel and life vlogs are popular among app users: such videos are compelling, covering the most attractive parts of a journey or a day. Creating such a video first requires a great editing effort to cut out the trivial and meaningless segments in the original footage, which used to be the domain of video editing pros.
This is no longer the case. Now we have an array of intelligent mobile apps that can help us automatically extract highlights from a video, so we can focus more on spicing up the video by adding special effects, for example. I opted to use the highlight capability from HMS Core Video Editor Kit to create my own vlog editor.
How It Works
This capability assesses how appealing video frames are and then extracts the most suitable ones. To this end, the capability takes into consideration the video properties that users care about most, a conclusion drawn from user surveys and experience assessments. On this basis, the highlight capability uses a comprehensive frame assessment scheme that covers various aspects. For example:
Aesthetics evaluation. This aspect relies on a data set built upon composition, lighting, color, and more, and is the essential part of the capability.
Tags and facial expressions. They represent the frames that are detected and likely to be extracted by the highlight capability, such as frames that contain people, animals, and laughter.
Frame quality and camera movement mode. The capability discards low-quality frames that are blurry, out-of-focus, overexposed, or shaky, to ensure such frames will not impact the quality of the finished video. Amazingly, despite all of these, the highlight capability is able to complete the extraction process in just 2 seconds.
See for yourself how the finished video by the highlight capability compares with the original video.
Backing Technology
The highlight capability stands out from the crowd by adopting models and a frame assessment scheme that are iteratively optimized. Technically speaking:
The capability introduces AMediaCodec for hardware decoding and the Open Graphics Library (OpenGL) for rendering frames and automatically adjusting the frame dimensions according to the screen dimensions. The capability's algorithm uses multiple neural network models: it checks the device model it runs on and then automatically chooses to run on the NPU, CPU, or GPU. As a result, the capability delivers higher running performance.
To provide the extraction result more quickly, the highlight capability uses a two-stage algorithm that moves from sparse sampling to dense sampling, checks how content is distributed across the video, and uses a frame buffer. All of these contribute to higher efficiency in determining the most attractive video frames. To keep the algorithm fast, the capability adopts thread pool scheduling and a producer-consumer model, so that the video decoder and the models can run at the same time.
During the sparse sampling stage, the capability decodes and processes some (up to 15) key frames in a video. The interval between the key frames is no less than 2 seconds. During the dense sampling stage, the algorithm picks out the best key frame and then extracts frames before and after to further analyze the highlighted part of the video.
The extraction result is closely related to the key frame positions. The processing result of the highlight capability will not be ideal when the sampling points are not dense enough, for example, when the video does not have enough key frames or its duration is too long (greater than 1 minute). For the capability to deliver optimal performance, it is recommended that the duration of the input video be less than 60 seconds.
Let's now move on to how this capability can be integrated.
Integration Process
Preparations
Make necessary preparations before moving on to the next part. Required steps include:
Configure the app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Declare necessary permissions.
Setting up the Video Editing Project
1. Configure the app authentication information by using either an access token or API key.
Method 1: Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Method 2: Call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
This ID is used to manage the usage quotas of Video Editor Kit and must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment of HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. When you exit the project, the instance shall be released.
Create an instance of HuaweiVideoEditor.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
Such an area renders video images, and this is implemented by SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout of the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it will not use any system resources, and we need to manually set the initialization time for the runtime environment. Then, the fundamental capability SDK will internally create necessary threads and timers.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Integrating the Highlight Capability
Code:
// Create an object that will be processed by the highlight capability.
HVEVideoSelection hveVideoSelection = new HVEVideoSelection();
// Initialize the engine of the highlight capability.
hveVideoSelection.initVideoSelectionEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// After the initialization is successful, extract the highlighted video. filePath indicates the video file path, and duration indicates the desired duration for the highlighted video.
hveVideoSelection.getHighLight(filePath, duration, new HVEVideoSelectionCallback() {
@Override
public void onResult(long start) {
// The highlighted video is successfully extracted.
}
});
// Release the highlight engine.
hveVideoSelection.releaseVideoSelectionEngine();
Conclusion
The vlog has played a vital part in this we-media era since its appearance. In the past, only a handful of people could create a vlog, because the process of picking out the most interesting parts of the original footage was so demanding.
Thanks to smart mobile app technology, even video editing amateurs can now create a vlog because much of the process can be completed automatically by an app with the function of highlighted video extraction.
The highlight capability from the Video Editor Kit is one such function. This capability introduces a set of features to deliver incredible results, such as AMediaCodec, OpenGL, neural networks, a two-stage algorithm (sparse sampling to dense sampling), and more. This capability can help create either a highlighted video extractor or build a highlighted video extraction feature in an app.

Must-Have Tool for Anonymous Virtual Livestreams

Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that is able to translate a person's every gesture, expression, and movement into real-time parameters and applying them to the virtual character, you may find that the virtual character cannot be blocked by real bodies during the livestream, which gives it a fake, ghost-like form. This is a problem I encountered when I developed my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately, I was able to find an SDK that helped me solve this problem: HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and movement tracking, to environment mesh, and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take too long. Once the integration was successful, I ran the demo on a test phone, and was amazed to see how well it worked. During livestreams my app was able to recognize and track the areas where I was located within the image, with an accuracy of up to 90%, and provided depth-related information about the area. Better yet, it was able to identify and track the profile of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, for example, changing backgrounds, hiding virtual characters behind real people, and even a feature that allows the audience to interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo​As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop
Preparations
Registering as a developer
Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app
Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK
Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
Configuring the Maven repository address for the AR Engine SDK
The procedure for configuring the Maven repository address in Android Studio is different for Gradle plugin versions earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
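For reference, here is a minimal sketch of the repository entry written in the Gradle Kotlin DSL; for Gradle plugin 7.0 and later it goes into settings.gradle.kts, while for earlier versions the same Maven URL goes into the project-level repositories blocks:
Code:
// settings.gradle.kts (Gradle plugin 7.0+)
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        // Maven repository that hosts the HMS SDKs, including AR Engine.
        maven(url = "https://developer.huawei.com/repo/")
    }
}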
Adding build dependencies
Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App
Checking the Availability
Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
Create a BodyActivity object to display body bones and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
    private BodyRendererManager mBodyRendererManager;

    protected void onCreate() {
        // Initialize surfaceView.
        mSurfaceView = findViewById();
        // Context for keeping the OpenGL ES running.
        mSurfaceView.setPreserveEGLContextOnPause(true);
        // Set the OpenGL ES version.
        mSurfaceView.setEGLContextClientVersion(2);
        // Set the EGL configuration chooser, including the number of bits of the color buffer and the number of depth bits.
        mSurfaceView.setEGLConfigChooser(……);
        mBodyRendererManager = new BodyRendererManager(this);
        mSurfaceView.setRenderer(mBodyRendererManager);
        mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }

    protected void onResume() {
        // Initialize ARSession to manage the entire running status of AR Engine.
        if (mArSession == null) {
            mArSession = new ARSession(this.getApplicationContext());
            mArConfigBase = new ARBodyTrackingConfig(mArSession);
            mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
            mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
            mArSession.configure(mArConfigBase);
        }
        // Pass the required parameters to setBodyMask.
        mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
        sessionResume(mBodyRendererManager);
    }
}
Create a BodyRendererManager object to render the body data obtained by AR Engine.
Code:
public class BodyRendererManager extends BaseRendererManager {
    public void drawFrame() {
        try {
            // Obtain the set of all traceable objects of the specified type.
            Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
            for (ARBody body : bodies) {
                if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
                    continue;
                }
                mBody = body;
                hasBodyTracking = true;
            }
            // Update the body recognition information displayed on the screen.
            StringBuilder sb = new StringBuilder();
            updateMessageData(sb, mBody);
            Size textureSize = mSession.getCameraConfig().getTextureDimensions();
            if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
                ((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
                    textureSize.getWidth(), textureSize.getHeight());
            }
            // Display the updated body information on the screen.
            mTextDisplay.onDrawFrame(sb.toString());
            for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
                bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
            }
        } catch (ArDemoRuntimeException e) {
            LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
        } catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
            ARUnavailableServiceApkTooOldException t) {
            Log(…);
        }
    }

    // Update gesture-related data for display.
    private void updateMessageData(StringBuilder sb, ARBody body) {
        if (body == null) {
            return;
        }
        float fpsResult = doFpsCalculate();
        sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
        int bodyAction = body.getBodyAction();
        sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
    }
}
Customize the camera preview class, which is used to implement human body drawing based on a certain confidence level.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
Obtain skeleton data and pass the data to the OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
Obtain skeleton point connection data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion
True-to-life AR livestreaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app by recognizing individuals within images with accuracy as high as 90%, and providing the detailed information required to support immersive, real-world interactions. Try it out on your own app to add powerful and interactive features that will have your users clamoring to shop more!
References
AR Engine Development Guide
Sample Code
API Reference
