Streamlining 3D Animation Creation via Rigging - Huawei Developers

I dare say there are two types of people in this world: people who love Toy Story and people who have not watched it.
Well, this is just the opinion of a huge fan of the animated film. When I was a child, I always dreamed of having toys that could move and play with me, like my own Buzz Lightyear. Thanks to a fancy technique called rigging, I can now bring my toys to life, although I'm probably too old for them now.
What Is Rigging in 3D Animation and Why Do We Need It?
Put simply, rigging is a process whereby a skeleton is created for a 3D model to make it move. In other words, rigging creates a set of connected virtual bones that are used to control a 3D model.
Rigging paves the way for animation because it allows a model to be deformed and therefore moved, which is why it is a prerequisite for 3D animation.
What Is Auto Rigging
3D animation has been adopted by mobile apps in a number of fields (gaming, e-commerce, video, and more) to achieve more realistic animations than 2D.
However, this technique has daunted many developers (like me) because rigging, one of its major prerequisites, is difficult and time-consuming for anyone unfamiliar with modeling. Most high-performing rigging solutions come with strict requirements: the input model should be in a standard pose, seven or eight key skeletal points need to be marked, inverse kinematics must be added to the bones, and so on.
Luckily, there are solutions that can automatically complete rigging, such as the auto rigging solution from HMS Core 3D Modeling Kit.
This capability delivers a wholly automated rigging process, requiring just a biped humanoid model that is generated using images taken from a mobile phone camera. After the model is input, auto rigging uses AI algorithms for limb rigging and generates the model skeleton and skin weights (which determine the degree to which a bone can influence a part of the mesh). Then, the capability changes the orientation and position of the skeleton so that the model can perform a range of preset actions, like walking, running, and jumping. The rigged model can also be driven by actions generated through motion capture technology, or imported into major 3D engines for animation.
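For readers unfamiliar with skin weights, the idea can be made concrete with the standard linear blend skinning formulation (a general technique, not something specific to this kit): the deformed position of a mesh vertex v is v' = Σi wi · Ti · v, where Ti is the current transform of bone i and the weights wi are non-negative and sum to 1. A weight of 0 means the bone has no influence on that vertex, while a weight close to 1 means the vertex follows that bone almost rigidly.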
These lower requirements do not compromise rigging accuracy: auto rigging is trained on hundreds of thousands of 3D model rigging data records, and this fine-tuned training data gives the algorithm strong accuracy and generalization.
I know that words alone are no proof, so check out the animated model I've created using the capability.
Movement is smooth, making the cute panda move almost like a real one. Now I'd like to show you how I created this model and how I integrated auto rigging into my demo app.
Integration Procedure
Preparations
Before moving on to the real integration work, make the necessary preparations, which include the following (a configuration sketch follows the list):
Configure app information in AppGallery Connect.
Integrate the HMS Core SDK with the app project, which includes Maven repository address configuration.
Configure obfuscation scripts.
Declare necessary permissions.
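For reference, the Maven repository, SDK dependency, and permissions typically end up looking like the sketch below. The artifact coordinate for the 3D Modeling Kit SDK is an assumption here; copy the exact coordinate and permission list from the kit's integration guide.
Code:
// Project-level build.gradle: add the HMS Core Maven repository.
repositories {
    google()
    mavenCentral()
    maven { url "https://developer.huawei.com/repo/" }
}

// App-level build.gradle: add the 3D Modeling Kit SDK.
// The artifact name below is an assumption; verify it in the official integration guide.
dependencies {
    implementation 'com.huawei.hms:modeling3d-object-reconstruct:{version}'
}

<!-- AndroidManifest.xml: permissions used by the demo (camera for capturing images,
     network and storage access for uploading images and downloading the rigged model). -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />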
Capability Integration
1. During app initialization, set an access token or API key (found in agconnect-services.json) for app authentication.
Using the access token: Call setAccessToken to set an access token. This task is required only once during app initialization.
Code:
ReconstructApplication.getInstance().setAccessToken("your AccessToken");
Using the API key: Call setApiKey to set an API key. This task is also required only once during app initialization.
Code:
ReconstructApplication.getInstance().setApiKey("your api_key");
Using the access token is recommended. If you prefer the API key, note that it is assigned to the app when the app is created in AppGallery Connect.
2. Create a 3D object reconstruction engine and initialize it. Then, create an auto rigging configurator.
Code:
// Create a 3D object reconstruction engine.
Modeling3dReconstructEngine modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(context);
// Create an auto rigging configurator.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
// Set the working mode of the engine to PICTURE.
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
// Set the task type to auto rigging.
.setTaskType(Modeling3dReconstructConstants.TaskType.AUTO_RIGGING)
.create();
3. Create a listener for the result of uploading images of an object.
Code:
private Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        // Callback when the upload progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
        // Callback when the upload is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when the upload failed.
    }
};
4. Use a 3D object reconstruction configurator to initialize the task, set an upload listener for the engine created in step 1, and upload images.
Code:
// Use the configurator to initialize the task, which should be done in a sub-thread.
Modeling3dReconstructInitResult modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
String taskId = modeling3dReconstructInitResult.getTaskId();
// Set an upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Call the uploadFile API of the 3D object reconstruction engine to upload images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
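Note that initTask is a blocking call, so the snippet above belongs on a worker thread. A minimal sketch of how it can be run off the main thread (the executor is plain Java and not part of the kit):
Code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService worker = Executors.newSingleThreadExecutor();
worker.execute(() -> {
    // Initialize the task and upload the captured images off the main thread.
    Modeling3dReconstructInitResult initResult = modeling3dReconstructEngine.initTask(setting);
    String taskId = initResult.getTaskId();
    modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
    modeling3dReconstructEngine.uploadFile(taskId, filePath);
});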
5. Query the status of the auto rigging task.
Code:
// Initialize the task processing class.
Modeling3dReconstructTaskUtils modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(context);
// Call queryTask in a sub-thread to query the task status.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
// Obtain the task status.
int status = queryResult.getStatus();
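queryTask is also a synchronous call, so one simple pattern is to poll it periodically from a background thread until the task reaches a terminal state. A sketch of that idea (the polling interval is arbitrary, and the exact status constants should be looked up in Modeling3dReconstructConstants in the official reference):
Code:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
poller.scheduleWithFixedDelay(() -> {
    // Query the task status in the background (TAG is your log tag).
    Modeling3dReconstructQueryResult result = modeling3dReconstructTaskUtils.queryTask(taskId);
    int status = result.getStatus();
    Log.i(TAG, "Auto rigging task status: " + status);
    // Compare status with the values defined by the kit (for example, a "completed" or
    // "failed" value) and shut the poller down / start the model download once the task has finished.
}, 0, 10, TimeUnit.SECONDS);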
6. Create a listener for the result of model file download.
Code:
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
    @Override
    public void onDownloadProgress(String taskId, double progress, Object ext) {
        // Callback when download progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
        // Callback when download is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when download failed.
    }
};
7. Pass the download listener to the 3D object reconstruction engine, to download the rigged model.
Code:
// Set download configurations.
Modeling3dReconstructDownloadConfig downloadConfig = new Modeling3dReconstructDownloadConfig.Factory()
// Set the model file format to OBJ or glTF.
.setModelFormat(Modeling3dReconstructConstants.ModelFormat.OBJ)
// Set the texture map mode to normal mode or PBR mode.
.setTextureMode(Modeling3dReconstructConstants.TextureMode.PBR)
.create();
// Set the download listener.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Call downloadModelWithConfig, passing the task ID, path to which the downloaded file will be saved, and download configurations, to download the rigged model.
modeling3dReconstructEngine.downloadModelWithConfig(taskId, fileSavePath, downloadConfig);
Where to Use
Auto rigging is used in many scenarios, for example:
Gaming. The most direct use of auto rigging is to create movable characters in a 3D game. It can also be combined with AR to create animated models that appear in a device's camera view and interact with users.
Online education. Auto rigging can animate 3D models of popular toys and liven them up with dance moves, voice-overs, and nursery rhymes, making educational videos far more appealing to kids.
E-commerce. Anime figurines look rather plain compared with how their characters behave on screen. To spice them up, auto rigging can animate their 3D models so that they look more engaging and dynamic.
Conclusion
3D animation is widely used in mobile apps, because it presents objects in a more fun and interactive way.
A key technique for creating great 3D animations is rigging. Conventional rigging requires modeling know-how and expertise, which puts off many amateur modelers.
Auto rigging is the perfect solution to this challenge because its fully automated rigging process can produce highly accurate rigged models that can be easily animated using major engines available on the market. Auto rigging not only lowers the costs and requirements of 3D model generation and animation, but also helps 3D models look more appealing.


How to Automatically Create a Scenic Timelapse Video

Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of Mother Nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since videos can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts (see the sketch after this list).
4. Declare necessary permissions.
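For step 3, the obfuscation scripts are the usual HMS Core keep rules added to the app's proguard-rules.pro file. A typical sketch is shown below; verify the exact rules required by the kit in its integration guide.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.hms.**{*;}
-keep class com.huawei.agconnect.**{*;}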
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
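Since the same License ID should be reported consistently for the same user or installation, one approach is to generate an ID once and persist it. A minimal sketch of that idea (the preference file and key names are arbitrary):
Code:
import android.content.Context;
import android.content.SharedPreferences;
import java.util.UUID;

SharedPreferences prefs = context.getSharedPreferences("video_editor_prefs", Context.MODE_PRIVATE);
String licenseId = prefs.getString("license_id", null);
if (licenseId == null) {
    // Generate the ID once and keep reusing it, so the kit's quota is tracked consistently.
    licenseId = UUID.randomUUID().toString();
    prefs.edit().putString("license_id", licenseId).apply();
}
MediaApplication.getInstance().setLicenseId(licenseId);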
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. The preview area renders video images and is implemented by a SurfaceView created within the SDK. Before the area is created, specify its position in the app.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
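The snippets below operate on an HVEImageAsset named imageAsset, that is, the image that has already been added to the editor's timeline. A sketch of how it might be obtained is shown here; the accessor names are assumptions based on the kit's timeline model, so double-check them against the Video Editor Kit API reference.
Code:
// Obtain the timeline from the editor and append the image to a video lane
// (method names are assumptions; verify them in the official API reference).
HVETimeLine timeLine = editor.getTimeLine();
HVEVideoLane videoLane = timeLine.appendVideoLane();
HVEImageAsset imageAsset = videoLane.appendImageAsset(imagePath);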
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});

// When the initialization is successful, check whether there is sky or water in the image.
// Note: in a real app, declare motionType as a member field (rather than a local variable)
// so that it can be assigned from inside the callback below.
int motionType = -1;
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType = state;
    }
});

// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction in which the sky moves;
// waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction in which the water moves.
// These values are chosen by your app (for example, from UI sliders). Build the options only after
// onResult has returned a valid state.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType)
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();

// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the processing progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the auto-timelapse effect is successfully added.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when adding the auto-timelapse effect failed.
    }
});

// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();

// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.

Tips on Creating a Cutout Tool

In photography, cutout is a function that is often used to edit images, for example by removing the background. To achieve this, a technique known as green screen, also called chroma keying, is widely used. This technique requires a green background to be added manually.
This, however, makes the green screen-dependent cutout a challenge to those new to video/image editing. The reason is that most images and videos do not come with a green background, and adding such a background is actually quite complex.
Luckily, a number of mobile apps on the market help with this, as they are able to automatically cut out the desired object for users to later edit. To create an app that is capable of doing this, I turned to the recently released object segmentation capability from HMS Core Video Editor Kit for help. This capability utilizes an AI algorithm, instead of a green screen, to intelligently separate an object from the other parts of an image or a video, delivering an ideal segmentation result for background removal and many other editing operations.
This is what my demo has achieved with the help of the capability.
It is a perfect cutout, right? As you can see, the cut-out object comes with a smooth edge, without any unwanted parts from the original video.
Before moving on to how I created this cutout tool with the help of object segmentation, let's see what lies behind the capability.
How It Works
The object segmentation capability takes an interactive approach to cutting out objects. A user first taps or draws a line on the object to be cut out; the interactive segmentation algorithm then checks the track of the user's input, identifies their intent, and selects and cuts out the object. Interactive segmentation is performed on the first video frame to obtain a mask of the object. The model behind the capability then traverses the frames that follow, applying the mask from the first frame to each of them and matching it with the object before cutting the object out.
The model assigns different weights to frames according to the segmentation accuracy of each frame. It then blends the weighted segmentation result of the intermediate frames with the mask obtained from the first frame to segment the desired object in the remaining frames. As a result, the capability cuts out the object as completely as possible, delivering higher segmentation accuracy.
What makes the capability even better is that it has no restrictions on object types. As long as an object is distinct from the other parts of the image or video and is set against a simple background, the capability can cut it out cleanly.
Now, let's check out how the capability can be integrated.
Integration Procedure
Making Preparations
Complete the following steps before moving on:
Configure app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Apply for necessary permissions.
Setting Up the Video Editing Project
1. Configure the app authentication information. Available options include:
Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
Because this ID is used to manage the usage quotas of the mentioned service, the ID must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. When you exit the project, the instance should be released.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
Such an area renders video images, and this is implemented by SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout for the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it will not use any system resources, and we need to manually set the initialization time for the runtime environment. Then, the fundamental capability SDK will internally create necessary threads and timers.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Integrating Object Segmentation
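As in the previous article, the code below works on an asset from the editor's timeline, here a video asset named videoAsset. A sketch of how it might be obtained (the accessor names are assumptions based on the kit's timeline model; verify them in the API reference):
Code:
// Append the video to be edited to a video lane on the timeline
// (method names are assumptions; verify them in the official API reference).
HVEVideoLane videoLane = editor.getTimeLine().appendVideoLane();
HVEVideoAsset videoAsset = videoLane.appendVideoAsset(videoPath);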
Code:
// Initialize the engine of object segmentation.
videoAsset.initSegmentationEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});

// After the initialization is successful, segment a specified object and then return the segmentation result.
// bitmap: video frame containing the object to be segmented.
// timeStamp: timestamp of the video frame on the timeline.
// points: set of coordinates determined according to the video frame, with the upper left vertex of the
// frame as the coordinate origin. It is recommended that the coordinate count be greater than or equal to
// two; all coordinates must be within the object to be segmented, and the object is determined according
// to the track of the coordinates.
int result = videoAsset.selectSegmentationObject(bitmap, timeStamp, points);

// After the handling is successful, apply the object segmentation effect.
videoAsset.addSegmentationEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Progress of object segmentation.
    }

    @Override
    public void onSuccess() {
        // The object segmentation effect is successfully added.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The object segmentation effect failed to be added.
    }
});

// Stop applying the object segmentation effect.
videoAsset.interruptSegmentation();

// Remove the object segmentation effect.
videoAsset.removeSegmentationEffect();

// Release the engine of object segmentation.
videoAsset.releaseSegmentationEngine();
And this concludes the integration process. A cutout function ideal for an image/video editing app was just created.
I can think of a bunch of fields where object segmentation can help, such as live commerce, online education, online conferencing, and more.
In live commerce, the capability helps replace the live stream background with product details, letting viewers conveniently learn about the product while watching a live stream.
In online education and online conferencing, the capability lets users replace their video background with an icon, or with an image of a classroom or meeting room. This makes online lessons and meetings feel more professional.
The capability is also ideal for video editing apps. Take my demo app for example. I used it to add myself to a vlog that my friend created, which made me feel like I was traveling with her.
I think the capability can also be used together with other video editing functions, to realize effects like copying an object, deleting an object, or even adjusting the timeline of an object. I'm sure you've also got some great ideas for using this capability. Let me know in the comments section.
Conclusion
Cutting out objects used to be something reserved for people with editing experience, a process that required the use of a green screen.
Luckily, things have changed thanks to the cutout function found in many mobile apps. It has become a basic function in mobile apps that support video/image editing and is essential for some advanced functions like background removal.
Object segmentation from Video Editor Kit is a straightforward way of implementing the cutout feature into your app. This capability leverages an elaborate AI algorithm and depends on the interactive segmentation method, delivering an ideal and highly accurate object cutout result.

Build an Emoji Making App Effortlessly

Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app then identifies users' facial expressions and applies them to the emoji. In this way, users are able to create an endless number of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert them into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation
AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
Demo
Facial expression-based emoji
Development Procedure
Requirements on the Development Environment
(The sketch after this list shows how these settings map onto a module-level build.gradle file.)
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
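The SDK version requirements above map onto the module-level build.gradle roughly as follows (a minimal sketch; keep your other settings as they are):
Code:
android {
    compileSdkVersion 29

    defaultConfig {
        minSdkVersion 26
        targetSdkVersion 29
    }
}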
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig config = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace. Therefore, you are advised to use the internal mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
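Other parameters are set in the same way through config.setXXX. For example, the focus mode setter (which also appears in the body tracking sample later in this compilation) can be applied to the config object created above, assuming the face tracking config supports it; a small sketch:
Code:
// Keep the face in focus while tracking.
config.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);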
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
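It is also worth pausing and stopping the session in step with the activity lifecycle. A minimal sketch, assuming mArSession is the session created above and using the standard ARSession pause/stop methods:
Code:
@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        // Suspend the camera and tracking while the activity is in the background.
        mArSession.pause();
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mArSession != null) {
        // Release camera and tracking resources when the activity is destroyed.
        mArSession.stop();
        mArSession = null;
    }
}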
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
    // Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
    void init(Context context) {
        ...
    }
}
5. Implement the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
    ARFaceGeometry faceGeometry = face.getFaceGeometry();
    updateFaceGeometryData(faceGeometry);
    updateModelViewProjectionData(camera, face);
    drawFaceGeometry();
    faceGeometry.release();
}
6. Implement updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass the face mesh data for configuration and set facial expression parameters using OpenGL ES.
Code:
private void updateFaceGeometryData(ARFaceGeometry faceGeometry) {
    FloatBuffer faceVertices = faceGeometry.getVertices();
    // Obtain an array consisting of face mesh texture coordinates, which is used together
    // with the vertex data returned by getVertices() during rendering.
    FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
}
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
    public FaceRenderManager(Context context, Activity activity) {
        mContext = context;
        mActivity = activity;
    }

    // Set ARSession to obtain the latest data.
    public void setArSession(ARSession arSession) {
        if (arSession == null) {
            LogUtil.error(TAG, "Set session error, arSession is null!");
            return;
        }
        mArSession = arSession;
    }

    // Set ARConfigBase to obtain the configuration mode.
    public void setArConfigBase(ARConfigBase arConfig) {
        if (arConfig == null) {
            LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
            return;
        }
        mArConfigBase = arConfig;
    }

    // Set the camera opening mode.
    public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
        isOpenCameraOutside = isOpenCameraOutsideFlag;
    }

    ...

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        mFaceGeometryDisplay.init(mContext);
    }
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        mFaceRenderManager = new FaceRenderManager(this, this);
        mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
        mFaceRenderManager.setTextView(mTextView);

        glSurfaceView.setRenderer(mFaceRenderManager);
        glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }
}
Conclusion
Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference
AR Engine Sample Code
Face Tracking Capability

Environment Mesh: Blend the Real with the Virtual

Augmented reality (AR) is now widely used in a diverse range of fields, to facilitate fun and immersive experiences and interactions. Many features like virtual try-on, 3D gameplay, and interior design, among many others, depend on this technology. For example, many of today's video games use AR to keep gameplay seamless and interactive. Players can create virtual characters in battle games, and make them move as if they are extensions of the player's body. With AR, characters can move and behave like real people, hiding behind a wall, for instance, to escape detection by the enemy. Another common application is adding elements like pets, friends, and objects to photos, without compromising the natural look in the image.
However, AR app development is still hindered by the so-called pass-through problem, which you may have encountered during the development. Examples include a ball moving too fast and then passing through the table, a player being unable to move even when there are no obstacles around, or a fast-moving bullet passing through and then missing its target. You may also have found that the virtual objects that your app applies to the physical world look as if they were pasted on the screen, instead of blending into the environment. This can to a large extent undermine the user experience and may lead directly to user churn. Fortunately there is environment mesh in HMS Core AR Engine, a toolkit that offers powerful AR capabilities and streamlines your app development process, to resolve these issues once and for all. After being integrated with this toolkit, your app will enjoy better perception of the 3D space in which a virtual object is placed, and perform collision detection using the reconstructed mesh. This ensures that users are able to interact with virtual objects in a highly realistic and natural manner, and that virtual characters will be able to move around 3D spaces with greater ease. Next we will show you how to implement this capability.
Demo
Implementation
AR Engine uses real-time computing to output the environment mesh, which includes the device orientation in real space and a 3D grid for the current camera view. AR Engine is currently supported on mobile phone models with rear ToF cameras, and only supports the scanning of static scenes. After being integrated with this toolkit, your app will be able to use environment meshes to accurately recognize the real-world 3D space where a virtual character is located, and allow the character to be placed anywhere in the space, whether on a horizontal, vertical, or reconstructable curved surface. You can use the reconstructed environment mesh to implement virtual-physical occlusion and collision detection, and even hide virtual objects behind physical ones, to effectively prevent pass-through.
Environment mesh technology has a wide range of applications. For example, it can be used to provide users with more immersive and refined virtual-reality interactions during remote collaboration, video conferencing, online courses, multi-player gaming, laser beam scanning (LBS), metaverse, and more.
Integration Procedure
Ensure that you have met the following requirements on the development environment:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. The following takes Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}
Development Procedure
1. Initialize the HitResultDisplay class to draw virtual objects based on the specified parameters.
Code:
public class HitResultDisplay implements SceneMeshComponenDisplay {
    // Initialize VirtualObjectData.
    VirtualObjectData mVirtualObject = new VirtualObjectData();

    // Pass the context to mVirtualObject in the init method.
    public void init(Context context) {
        mVirtualObject.init(context);
        // Pass the material attributes.
        mVirtualObject.setMaterialProperties();
    }

    // Pass ARFrame in the onDrawFrame method to obtain the estimated lighting.
    public void onDrawFrame(ARFrame arframe) {
        // Obtain the estimated lighting.
        ARLightEstimate le = arframe.getLightEstimate();
        // Obtain the pixel intensity of the current camera field of view.
        lightIntensity = le.getPixelIntensity();
        // Pass data to methods in mVirtualObject.
        mVirtualObject.draw(…, …, lightIntensity, …);
        // Pass the ARFrame object to the handleTap method to obtain the coordinate information.
        handleTap(arframe);
    }

    // Implement the handleTap method.
    private void handleTap(ARFrame frame) {
        // Call hitTest with the ARFrame object (tap is the MotionEvent of the user's screen tap).
        List<ARHitResult> hitTestResults = frame.hitTest(tap);
        // Check whether a surface is hit and whether it is hit in a plane polygon.
        for (int i = 0; i < hitTestResults.size(); i++) {
            ARHitResult hitResultTemp = hitTestResults.get(i);
            ARTrackable trackable = hitResultTemp.getTrackable();
            if (trackable instanceof ARPoint
                    && ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL) {
                isHasHitFlag = true;
                hitResult = hitResultTemp;
            }
        }
    }
}
2. Initialize the SceneMeshDisplay class to render the scene mesh.
Code:
public class SceneMeshDisplay implements SceneMeshComponenDisplay {
    // Implement OpenGL operations in init.
    public void init(Context context) {}

    // Obtain the current environment mesh in the onDrawFrame method.
    public void onDrawFrame(ARFrame arframe) {
        ARSceneMesh arSceneMesh = arframe.acquireSceneMesh();
        // Create a method for updating data and pass arSceneMesh to the method.
        updateSceneMeshData(arSceneMesh);
        // Release arSceneMesh when it is no longer in use.
        arSceneMesh.release();
    }

    // Implement this method to update data.
    public void updateSceneMeshData(ARSceneMesh sceneMesh) {
        // Obtain an array containing mesh vertex coordinates of the environment mesh in the current view.
        FloatBuffer meshVertices = sceneMesh.getVertices();
        // Obtain an array containing the indexes of the vertices in the mesh triangle plane in the current view.
        IntBuffer meshTriangleIndices = sceneMesh.getTriangleIndices();
    }
}
3. Initialize the SceneMeshRenderManager class to provide render managers for external scenes, including render managers for virtual objects.
Code:
public class SceneMeshRenderManager implements GLSurfaceView.Renderer {
    // Initialize the class for updating mesh data and performing rendering.
    private SceneMeshDisplay mSceneMesh = new SceneMeshDisplay();

    // Initialize the class for drawing virtual objects.
    private HitResultDisplay mHitResultDisplay = new HitResultDisplay();

    // Implement the onSurfaceCreated() method.
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Pass context to the mSceneMesh and mHitResultDisplay classes.
        mSceneMesh.init(mContext);
        mHitResultDisplay.init(mContext);
    }

    // Implement the onDrawFrame() method.
    public void onDrawFrame(GL10 gl) {
        // Configure the camera using the ARSession object (textureId is the OpenGL texture bound to the camera preview).
        mArSession.setCameraTextureName(textureId);
        ARFrame arFrame = mArSession.update();
        ARCamera arCamera = arFrame.getCamera();
        // Pass the data required by the SceneMeshDisplay class.
        mSceneMesh.onDrawFrame(arFrame, viewmtxs, projmtxs);
    }
}
4. Initialize the SceneMeshActivity class to implement display functions.
Code:
public class SceneMeshActivity extends BaseActivity {
    // Provides render managers for external scenes, including those for virtual objects.
    private SceneMeshRenderManager mSceneMeshRenderManager;

    // Manages the entire running status of AR Engine.
    private ARSession mArSession;

    // Initialize some classes and objects.
    protected void onCreate(Bundle savedInstanceState) {
        mSceneMeshRenderManager = new SceneMeshRenderManager();
    }

    // Initialize ARSession in the onResume method.
    protected void onResume() {
        // Initialize ARSession.
        mArSession = new ARSession(this.getApplicationContext());
        // Create an ARWorldTrackingConfig object based on the session parameters.
        ARConfigBase config = new ARWorldTrackingConfig(mArSession);
        // Pass ARSession to SceneMeshRenderManager.
        mSceneMeshRenderManager.setArSession(mArSession);
        // Enable the mesh, and call the setEnableItem method using config.
        config.setEnableItem(ARConfigBase.ENABLE_MESH | ARConfigBase.ENABLE_DEPTH);
        // Apply the config and start the session (as shown in the face tracking sample above).
        mArSession.configure(config);
        mArSession.resume();
    }
}
Conclusion
AR bridges the real and the virtual worlds, to make jaw-dropping interactive experiences accessible to all users. That is why so many mobile app developers have opted to build AR capabilities into their apps. Doing so can give your app a leg up over the competition.
When developing such an app, you will need to incorporate a range of capabilities, such as hand recognition, motion tracking, hit test, plane detection, and lighting estimate. Fortunately, you do not have to do any of this on your own. Integrating an SDK can greatly streamline the process, and provide your app with many capabilities that are fundamental to seamless and immersive AR interactions. If you are not sure how to deal with the pass-through issue, or your app is not good at presenting virtual objects naturally in the real world, AR Engine can do a lot of heavy lifting for you. After being integrated with this toolkit, your app will be able to better perceive the physical environments around virtual objects, and therefore give characters the freedom to move around as if they are navigating real spaces.
References
AR Engine Development Guide
Software and Hardware Requirements of AR Engine Features
AR Engine Sample Code

Must-Have Tool for Anonymous Virtual Livestreams

Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on-camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that is able to translate a person's every gesture, expression, and movement into real-time parameters and apply them to the virtual character, you may find that the virtual character can't be blocked by real bodies during the livestream, which gives it a fake, ghost-like form. This is a problem I encountered when I developed my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately, I was able to find an SDK that helped me solve this problem: HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and motion tracking to environment mesh and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been for me.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take too long. Once the integration was successful, I ran the demo on a test phone, and was amazed to see how well it worked. During livestreams my app was able to recognize and track the areas where I was located within the image, with an accuracy of up to 90%, and provided depth-related information about the area. Better yet, it was able to identify and track the profile of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, for example, changing backgrounds, hiding virtual characters behind real people, and even a feature that allows the audience to interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo
As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop
Preparations
Registering as a developer
Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app
Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK
Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
Configuring the Maven repository address for the AR Engine SDK
The procedure for configuring the Maven repository address in Android Studio is different for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
Adding build dependencies
Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App
Checking the Availability
Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
Create a BodyActivity object to display body skeletons and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
    private BodyRendererManager mBodyRendererManager;

    protected void onCreate() {
        // Initialize surfaceView.
        mSurfaceView = findViewById(R.id.surface_view);   // Replace with the ID of your SurfaceView.
        // Keep the OpenGL ES context alive while the activity is paused.
        mSurfaceView.setPreserveEGLContextOnPause(true);
        // Set the OpenGL ES version.
        mSurfaceView.setEGLContextClientVersion(2);
        // Set the EGL configuration chooser, including for the number of bits of the color buffer and the number of depth bits.
        mSurfaceView.setEGLConfigChooser(……);
        mBodyRendererManager = new BodyRendererManager(this);
        mSurfaceView.setRenderer(mBodyRendererManager);
        mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }

    protected void onResume() {
        // Initialize ARSession to manage the entire running status of AR Engine.
        if (mArSession == null) {
            mArSession = new ARSession(this.getApplicationContext());
            mArConfigBase = new ARBodyTrackingConfig(mArSession);
            mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
            mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
            mArSession.configure(mArConfigBase);
        }
        // Pass the required parameters to setBodyMask.
        mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
        sessionResume(mBodyRendererManager);
    }
}
Create a BodyRendererManager object to render the body data obtained by AR Engine.
Code:
public class BodyRendererManager extends BaseRendererManager {
    public void drawFrame() {
        try {
            // Obtain the set of all traceable objects of the specified type.
            Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
            for (ARBody body : bodies) {
                if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
                    continue;
                }
                mBody = body;
                hasBodyTracking = true;
            }
            // Update the body recognition information displayed on the screen.
            StringBuilder sb = new StringBuilder();
            updateMessageData(sb, mBody);
            Size textureSize = mSession.getCameraConfig().getTextureDimensions();
            if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
                ((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
                    textureSize.getWidth(), textureSize.getHeight());
            }
            // Display the updated body information on the screen.
            mTextDisplay.onDrawFrame(sb.toString());
            for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
                bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
            }
        } catch (ArDemoRuntimeException e) {
            LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
        } catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
            ARUnavailableServiceApkTooOldException t) {
            Log(…);
        }
    }

    // Update gesture-related data for display.
    private void updateMessageData(StringBuilder sb, ARBody body) {
        if (body == null) {
            return;
        }
        float fpsResult = doFpsCalculate();
        sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
        int bodyAction = body.getBodyAction();
        sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
    }
}
Customize the camera preview class, which is used to implement human body drawing based on a certain confidence level.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
Obtain skeleton data and pass the data to OpenGL ES, which renders it and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
Obtain skeleton point connection data and pass it to OpenGL ES, which renders it and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion
True-to-life AR live-streaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app, by recognizing individuals within images with accuracy as high as 90%, and providing the detailed information required to support immersive, real-world interactions. Try it out on your own app to add powerful and interactive features that will have your users clamoring to shop more!
References
AR Engine Development Guide
Sample Code
API Reference
