Lighting Estimate: Lifelike Virtual Objects in Real Environments - Huawei Developers

Augmented reality (AR) is a technology that blends virtual objects with the real world in a visually intuitive way, enabling immersive interactions. To ensure that virtual objects are naturally incorporated into the real environment, AR needs to estimate the environmental lighting conditions and apply them to the virtual world as well.
What we see around us is the result of interactions between light and objects. When light strikes an object, it is absorbed, reflected, or transmitted before reaching our eyes, and the resulting color, brightness, and shadows tell us how the object looks. Therefore, to integrate 3D virtual objects into the real world in a natural manner, AR apps need to provide lighting conditions that mirror those in the real world.
Feature Overview​
HMS Core AR Engine offers a lighting estimate capability that supplies real-world lighting conditions for virtual objects. With this capability, AR apps can track the light in the device's vicinity and calculate the average light intensity of the images captured by the camera. This information is fed back in real time to facilitate the rendering of virtual objects, ensuring that their colors change as the environmental light changes, no different from how the colors of real objects change over time.
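As a rough illustration of how an app might consume this information each frame, here is a minimal sketch in Java. The ARFrame, ARLightEstimate, and getPixelIntensity() names are assumptions based on the AR Engine Java SDK described in the AR Engine Development Guide and may differ between SDK versions; AppRenderer is a hypothetical placeholder for the app's own renderer.

```java
import com.huawei.hiar.ARFrame;
import com.huawei.hiar.ARLightEstimate;

/**
 * Minimal sketch: read the per-frame lighting estimate and feed it to the app's
 * own renderer. Class and method names are assumptions based on the AR Engine
 * Java SDK and may differ between SDK versions; check the Development Guide.
 */
public class LightingEstimateSketch {

    /** Hypothetical renderer abstraction owned by the app, not by AR Engine. */
    public interface AppRenderer {
        void setAmbientIntensity(float intensity);
    }

    /** Call once per frame with the ARFrame obtained from ARSession.update(). */
    public static void applyLighting(ARFrame frame, AppRenderer renderer) {
        ARLightEstimate lightEstimate = frame.getLightEstimate();
        if (lightEstimate == null) {
            return; // No estimate available for this frame.
        }
        // Average light intensity of the camera image; scale the virtual
        // object's ambient term so it brightens and darkens with the real scene.
        float pixelIntensity = lightEstimate.getPixelIntensity();
        renderer.setAmbientIntensity(pixelIntensity);
    }
}
```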
How It Works​
In real environments, the same material looks different depending on the lighting conditions. To ensure rendering that is as close to reality as possible, lighting estimate needs to implement the following:
Tracking where the main light comes from​
When the position of the virtual object and the viewpoint of the camera are fixed, the brightness, shadow, and highlights of objects will change dramatically when the main light comes from different directions.
Ambient light coloring and rendering​
When the color and material of a virtual object remain the same, the object can be brighter or less bright depending on the ambient lighting conditions.
Less bright lighting
Brighter lighting
The same is true for color. The lighting estimate capability allows virtual objects to reflect different colors in real time.
Color
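To make the effect of the main light direction and ambient brightness concrete, here is a minimal, self-contained sketch (not AR Engine code) that combines an estimated light direction and ambient intensity in a simple Lambertian shading step; a real app would do the equivalent in its shaders using the values from the lighting estimate.

```java
/**
 * Minimal shading sketch: combine an estimated main-light direction with an
 * ambient intensity using the Lambert (N dot L) model. This only illustrates
 * why the main light direction changes an object's brightness and highlights;
 * it is not AR Engine code.
 */
public final class SimpleShadingSketch {

    /** Returns the lit color of a surface point with the given normal and base color. */
    public static float[] shade(float[] normal, float[] lightDir,
                                float[] baseColor, float ambientIntensity) {
        float[] n = normalize(normal);
        float[] l = normalize(lightDir);
        // Diffuse term: surfaces facing the main light are brighter.
        float nDotL = Math.max(0f, n[0] * l[0] + n[1] * l[1] + n[2] * l[2]);
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            // Ambient part follows the estimated environment brightness;
            // diffuse part follows the estimated main light direction.
            out[i] = Math.min(1f, baseColor[i] * (ambientIntensity + nDotL));
        }
        return out;
    }

    private static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new float[] {v[0] / len, v[1] / len, v[2] / len};
    }
}
```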
Environment mapping​
If the surface of a virtual object is specular, the lighting estimate capability will simulate the mirroring effect, applying the texture of different environments to the specular surface.
Texture
Making virtual objects look vivid in real environments requires a 3D model and a high-quality rendering process. The lighting estimate capability in AR Engine builds true-to-life AR interactions, with precise light tracking, real-time information feedback, and realistic rendering.
References
HUAWEI Developers
AR Engine Development Guide

Related

Four steps for taking portraits with blurred backgrounds

Black and white photos have a degree of detail and contrast that gives them a unique, moody intensity. However, a carefully composed, artistic photo is easily ruined by background objects, which can distract the viewer. Good photographers sometimes manage to use creative camera angles to keep some of this "background noise" out of shot, but such techniques only get you so far.
For example, I originally intended for the photo below to center on the removal men at work, but they were drowned out by other objects in the foreground and background.
I took the following photo at a low angle to try to give the teddy bear a "larger-than-life" look, but once again background objects stole the show and detracted from the desired effect.
When I place my photos side by side with some of the slick, glossy photos my friends share with me, I'm too ashamed to even contemplate posting them on social media services such as Instagram and Facebook.
However, more recently, I discovered a clever trick on the HUAWEI P10/P10 Plus that can be used to blur out background objects and make the subject more prominent. This technique produces absorbing, arty shots that are guaranteed to garner you more "likes" on social media. Moreover, no fancy camera angles are necessary; simply take your phone, find an interesting subject, and point and shoot.
When you take ordinary black and white photos, usually both the foreground and background are in focus, so there is no obvious subject or theme. However, by combining the black and white and wide aperture shooting modes on the HUAWEI P10/P10 Plus, you can blur out the background and place emphasis on a particular object or person.
If you look closely at the images below, you will observe that the photo on the left is overexposed and has a cluttered background. The photo on the right, on the other hand, was taken with the HUAWEI P10/P10 Plus and effectively combines the black and white and wide aperture shooting modes to reduce background interference and create a more dramatic contrast. This is particularly noticeable in the "Cloud Park" lettering, which has a much clearer outline.
After learning and applying this technique, and with a bit of practice, my black and white photos now look infinitely better, to the point that I can proudly post them on social media for my friends to see. To achieve similar results yourself, simply follow the four steps that are set out in the animated graphic below.
By combining these two shooting modes on the HUAWEI P10/P10 Plus, you can produce photos with that timeless black and white look, while enjoying all of the speed and convenience that modern technology can offer.

HUAWEI HiAI Enables SketchAR with Exclusive Enhanced Drawing Experiences

1. What is SketchAR?
As a fully-fledged tool for teaching drawing, SketchAR overlays virtual images on a surface so that users can trace drawings from their phone. The app also detects what the user is drawing and provides learning tips for improvement accordingly.
2. Update Pain Points
SketchAR used to receive user complaints about the speed and accuracy of object recognition. Successfully recognizing a surface as the canvas is a prerequisite for drawing with the app, so slow recognition results in a negative user experience as waiting time increases. In addition, virtual images were frequently lost because surface recognition was unstable; as a result, drawing was often interrupted and learning tips were not responsive enough.
High waiting costs: Most phones cannot recognize the surface quickly, so users have to wait until recognition succeeds before they can start drawing.
Interrupted drawing: Losing the virtual image interrupts or even terminates the user's drawing.
Unstable learning tips: Learning tips are delayed or lost due to low recognition accuracy.
3. Solutions
With HiAI Foundation's heterogeneous scheduling and NPU acceleration, SketchAR has sped up its recognition process by over 40% on HUAWEI NPU smartphones, delivering a more enjoyable drawing experience.
Lower waiting costs: With HiAI Foundation's chip-level open capabilities, SketchAR accelerates its neural network algorithms, significantly improving recognition speed and reducing the time users spend waiting.
Smooth painting and responsive guidance: Painting interruptions and guidance delays caused by slow or insensitive recognition are greatly reduced.
4. Benefits
For SketchAR: Optimized object recognition, notably better performance, and fewer negative reviews.
For users: Less waiting time before drawing, more stable learning tips, and a smoother painting experience.
Listen to SketchAR founder and CEO Andrey Drobitko introduce HUAWEI HiAI and SketchAR here: https://www.youtube.com/watch?v=4w3FLXJnD1U
Want to leverage HUAWEI NPU's powerful computing capability?
Join HUAWEI HiAI's online course and learn the integration step by step: https://developer.huawei.com/consumer/en/videoCenter/play?id=101588756198447422

[HMS Core 6.0 Global Release] New CG Kit Plugins Offer Breathtaking HD 3D Graphics for Breakthrough Mobile Gaming Interactions

HMS Core 6.0 was released to global developers on July 15, providing a wide range of new capabilities and features. Notably, the new version features Volumetric Fog and Smart Fluid plugins within HMS Core Computer Graphics (CG) Kit, two capabilities that lay a solid technical foundation for an enhanced 3D mobile game graphics experience.
The Volumetric Fog plugin is an inventive mobile volumetric fog solution that renders realistic fogs characterized by complex lighting effects. It harnesses Huawei's prowess in low-level GPU hardware technology, resulting in premium, power-efficient performance. The plugin takes less than 4 ms to render a single frame on high-end Huawei phones, and comes equipped with height fog and noise fog features. Height fogs, like fogs in the real world, get thicker the closer they are to the ground, and likewise become thinner as the altitude increases. The noise fog feature allows developers to adjust the density, extinction coefficient, scattering coefficient, and shape of the fog, as well as wind direction. The plugin also supports volumetric shadows under global directional light and dynamic lighting with moving global ambient light (sunlight) or local lights (point lights and spot lights).
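CG Kit does not publish the formula behind its height fog, but the behavior described above (thicker near the ground, thinner at altitude) is commonly modeled as an exponential falloff of fog density with height. The following is a generic sketch of that idea under that assumption; it is not the Volumetric Fog plugin's API, and all names are made up.

```java
/**
 * Generic height-fog sketch (not CG Kit API): fog density decays
 * exponentially with altitude, so fog is thickest near the ground.
 */
public final class HeightFogSketch {

    private final double baseDensity;   // Fog density at ground level (y = 0).
    private final double falloff;       // How quickly density decreases with height.

    public HeightFogSketch(double baseDensity, double falloff) {
        this.baseDensity = baseDensity;
        this.falloff = falloff;
    }

    /** Fog density at a given height above the ground. */
    public double densityAt(double height) {
        return baseDensity * Math.exp(-falloff * Math.max(0.0, height));
    }

    public static void main(String[] args) {
        HeightFogSketch fog = new HeightFogSketch(0.8, 0.3);
        for (double h = 0; h <= 10; h += 2) {
            System.out.printf("height %.0f m -> density %.3f%n", h, fog.densityAt(h));
        }
    }
}
```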
CG Kit also comes with an all-new Smart Fluid plugin that provides three key features:
(1) Simulation of high-speed shaking with realistic physical behavior retained, a broadly applicable solution.
(2) Scaling, applicable to objects of various sizes, from a small backpack to a box that fills the entire screen.
(3) Rich interactions, including object floating and liquid splashes, which can be used to reproduce waterfalls, rain, snow, smoke, and fireworks.
The plugin takes mobile performance limitations and power consumption requirements into consideration, building on the native method to ensure highly vivid visuals while eliminating unnecessary overhead. It also uses compute shaders to tap into the device's compute power and deliver optimal performance per unit time. In addition, a scene-based in-depth analysis and optimization algorithm streamlines computing overhead, resulting in a mobile computing duration of less than 1 ms. The plugin employs smoothed-particle hydrodynamics (SPH) on mobile devices for the first time, a leap forward in mobile fluid rendering that enables developers to craft true-to-life interactive scenes in real time and strengthens the ties between players and games.
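The Smart Fluid plugin's internals are not public, but SPH itself is well documented: the fluid is represented as particles whose local density is estimated with a smoothing kernel. As a rough, self-contained illustration of that first step (not CG Kit code), here is a brute-force density pass using the standard poly6 kernel.

```java
/**
 * Minimal SPH sketch (not CG Kit code): estimate per-particle density with the
 * poly6 smoothing kernel, the usual first step of a smoothed-particle
 * hydrodynamics simulation step.
 */
public final class SphDensitySketch {

    /** poly6 kernel: W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3 for r <= h. */
    static double poly6(double r, double h) {
        if (r > h) {
            return 0.0;
        }
        double diff = h * h - r * r;
        return 315.0 / (64.0 * Math.PI * Math.pow(h, 9)) * diff * diff * diff;
    }

    /** Densities: rho_i = sum over j of m_j * W(|x_i - x_j|, h), brute force over all pairs. */
    static double[] densities(double[][] positions, double particleMass, double h) {
        int n = positions.length;
        double[] rho = new double[n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double dx = positions[i][0] - positions[j][0];
                double dy = positions[i][1] - positions[j][1];
                double dz = positions[i][2] - positions[j][2];
                double r = Math.sqrt(dx * dx + dy * dy + dz * dz);
                rho[i] += particleMass * poly6(r, h);
            }
        }
        return rho;
    }
}
```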
The new plugins of HMS Core CG Kit make it remarkably easy for developers to apply high-resolution game graphics and pursue trailblazing gameplay innovation bolstered by lifelike visuals.

Using 2D/3D Tracking Tech for Smarter AR Interactions

Augmented reality (AR) has been widely deployed in many fields, such as marketing, education, and gaming, as well as in exhibition halls. 2D image and 3D object tracking technologies allow users to add AR effects to photos or videos taken with their phones, whether of a 2D poster or card, or a 3D cultural relic or garage kit. More and more apps are using AR technologies to provide innovative and fun features. But standing out from the pack requires extra development resources, which is time-consuming and entails a huge workload.
HMS Core AR Engine makes development easier than ever. With 2D image and 3D object tracking based on device-cloud synergy, you will be able to develop apps that deliver a premium experience.
2D Image Tracking
Real-time 2D image tracking technology is widely employed by online shopping platforms for product demonstration, where shoppers interact with AR effects to view products from different angles. According to the background statistics of one platform, the sales volume of products with AR special effects is much higher than that of other products, with AR-based activities involving twice as much interaction as common activities. This is one example of how platforms can deploy AR technologies to make a profit.
To apply AR effects to more images in an app using a traditional device-side 2D image tracking solution, you need to release a new app version, which can be costly. In addition, increasing the number of images inflates the app size. That's why AR Engine adopts device-cloud synergy, which allows you to easily apply AR effects to new images by simply uploading them to the cloud, without updating your app or taking up extra space.
2D image tracking with device-cloud synergy​
This technology consists of the following modules:
Cloud-side image feature extraction
Cloud-side vector retrieval engine
Device-side visual tracking
To keep cloud round trips fast, AR Engine runs a high-performance vector retrieval engine that leverages the platform's hardware acceleration, ensuring millisecond-level retrieval from massive volumes of feature data.
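The vector retrieval engine runs on the cloud, so you do not implement it yourself, but conceptually it matches the feature vector extracted from the camera image against a library of stored image feature vectors. As a rough illustration of that matching step (not AR Engine code), here is a brute-force cosine-similarity search; the real engine relies on hardware acceleration and large-scale indexing to reach millisecond-level latency.

```java
import java.util.List;

/**
 * Conceptual sketch of feature-vector retrieval (not AR Engine code):
 * find the stored image whose feature vector is most similar to the query,
 * using brute-force cosine similarity.
 */
public final class VectorRetrievalSketch {

    /** Returns the index of the most similar stored vector, or -1 if the library is empty. */
    public static int nearest(float[] query, List<float[]> library) {
        int best = -1;
        double bestScore = -1.0;
        for (int i = 0; i < library.size(); i++) {
            double score = cosine(query, library.get(i));
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }

    private static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB) + 1e-12);
    }
}
```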
3D Object Tracking
AR Engine also allows real-time tracking of 3D objects like cultural relics and products. It presents 3D objects as holograms to supercharge images.
3D objects can be mundane items with a wide range of textures and materials, such as a textureless sculpture, or metal utensils that reflect light and appear shiny. In addition, as the light changes, 3D objects can cast shadows. These conditions pose a great challenge to 3D object tracking. AR Engine implements quick, accurate object recognition and tracking with multiple deep neural networks (DNNs) in three major steps: object detection, coarse positioning of object poses, and pose optimization.
3D object tracking with device-cloud synergy
This technology consists of the following modules:
Cloud-side AI-based generation of training samples
Cloud-side automatic training of DNNs
Cloud-side DNN inference
Device-side visual tracking
Training DNNs with manually labeled data is labor- and time-consuming. Based on massive offline data and generative adversarial networks (GANs), AR Engine uses an AI-based algorithm to generate training samples, so that it can accurately identify 3D objects in complex scenarios without manual labeling.
Currently, Huawei Cyberverse uses the 3D object tracking capability of AR Engine to create an immersive tour guide for Mogao Caves, to reveal never-before-seen details about the caves to tourists.
These premium technologies were built and released by the Central Media Technology Institute, 2012 Labs. They are open for you to bring users a differentiated AR experience.
Learn more about AR Engine at HMS Core AR Engine.

Bring a Cartoon Character to Life via 3D Tech

What do you usually do if you like a particular cartoon character? Buy a figurine of it?
That's what most people would do. Unfortunately, a figurine is just for decoration. So I tried to find a way of sending these figurines back to the virtual world. In short, I created a virtual but movable 3D model of a figurine.
This is done with auto rigging, a new capability of HMS Core 3D Modeling Kit. It can animate a biped humanoid model that can even interact with users.
Check out what I've created using the capability.
What a cutie.
The auto rigging capability is ideal for many types of apps when used together with other capabilities. Take those from HMS Core as an example:
Audio-visual editing capabilities from Audio Editor Kit and Video Editor Kit. We can use auto rigging to animate 3D models of popular stuffed toys, which can then be livened up with dances, voice-overs, and nursery rhymes to create educational videos for kids. With such adorable models, these videos are better at holding kids' attention and conveying knowledge.
The motion creation capability. This capability, from 3D Engine, is loaded with features like real-time skeletal animation, facial expression animation, full-body inverse kinematics (FBIK), blending of animation state machines, and more. These features help create smooth 3D animations. Combining models animated by auto rigging with these features, as well as numerous other 3D Engine features such as HD rendering, visual special effects, and intelligent navigation, is helpful for creating fully functioning games.
AR capabilities from AR Engine, including motion tracking, environment tracking, and human body and face tracking. They allow a model animated by auto rigging to appear in the camera display of a mobile device, so that users can interact with the model. These capabilities are ideal for a mobile game to implement model customization and interaction. This makes games more interactive and fun, which is illustrated perfectly in the image below.
As mentioned earlier, the auto rigging capability supports only biped humanoid objects. However, I think we could try adding two legs to an object (for example, a candlestick) for auto rigging to animate, to recreate the Be Our Guest scene from Beauty and the Beast.
How It Works
After a static model of a biped humanoid is input, auto rigging uses AI algorithms for limb rigging and automatically generates the skeleton and skin weights for the model, to finish the skeleton rigging process. Then, the capability changes the orientation and position of the model skeleton so that the model can perform a range of actions such as walking, jumping, and dancing.
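The kit generates the skeleton and skin weights automatically, but the way they drive the mesh can be illustrated with standard linear blend skinning: each vertex is deformed by a weighted combination of its influencing joints' transforms. The following is a minimal sketch of that idea only; it is not 3D Modeling Kit code.

```java
/**
 * Linear blend skinning sketch (not 3D Modeling Kit code): a vertex's deformed
 * position is the weighted sum of the positions produced by each influencing
 * joint's current transform. This is how skeleton poses plus skin weights
 * animate a mesh.
 */
public final class SkinningSketch {

    /**
     * @param restPosition  vertex position in the bind pose, {x, y, z, 1}
     * @param jointMatrices 4x4 skinning matrices (current pose * inverse bind pose), one per joint
     * @param jointIndices  indices of the joints influencing this vertex
     * @param weights       skin weights for those joints (summing to 1)
     * @return deformed vertex position {x, y, z}
     */
    public static float[] skinVertex(float[] restPosition, float[][][] jointMatrices,
                                     int[] jointIndices, float[] weights) {
        float[] out = new float[3];
        for (int k = 0; k < jointIndices.length; k++) {
            float[][] m = jointMatrices[jointIndices[k]];
            for (int row = 0; row < 3; row++) {
                float v = 0f;
                for (int col = 0; col < 4; col++) {
                    v += m[row][col] * restPosition[col];   // Transform by this joint.
                }
                out[row] += weights[k] * v;                 // Blend by skin weight.
            }
        }
        return out;
    }
}
```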
Advantages
Delivering a wholly automated rigging process
Rigging can be done either manually or automatically. Most highly accurate rigging solutions that are available on the market require the input model to be in a standard position and seven or eight key skeletal points to be added manually.
Auto rigging from 3D Modeling Kit does not have any of these requirements, yet it is able to accurately rig a model.
Utilizing massive data for high-level algorithm accuracy and generalization
Accurate auto rigging depends on hundreds of thousands of 3D model rigging data records that are used to train the Huawei-developed algorithms behind the capability. Thanks to these fine-tuned data records, auto rigging delivers ideal algorithm accuracy and generalization. It can even rig an object model created from photos taken with a standard mobile phone camera.
Input Model Specifications
The capability's official document lists the following suggestions for an input model that is to be used for auto rigging.
Source: a biped humanoid object (like a figurine or plush toy) that is not holding anything.
Appearance: The limbs and trunk of the object model are not separate, do not overlap, and do not feature any large accessories. The object model should stand on two legs, without its arms overlapping.
Posture: The object model should face forward along the z-axis and be upward along the y-axis. In other words, the model should stand upright, with its front facing forward. None of the model's joints should twist beyond 15 degrees, while there is no requirement on symmetry.
Mesh: The model meshes can be triangular or quadrilateral. The number of mesh vertices should not exceed 80,000. No large part of the mesh should be missing from the model.
Others: The limbs-to-trunk ratio of the object model complies with that of most toys. The limbs and trunk cannot be too thin or short, which means that the ratio of the arm width to the trunk width and the ratio of the leg width to the trunk width should be no less than 8% of the length of the object's longest edge.
Driven by AI, the auto rigging capability lowers the barrier to 3D modeling and animation creation, opening them up to amateur users.
While learning about this capability, I also came across three other fantastic capabilities of the 3D Modeling Kit. Wanna know what they are? Check them out here. Let me know in the comments section how your auto rigging has come along.
