Huawei Starts to Sell New SERES SF5 Car in its China Flagship Stores - Huawei Developers

Today (April 20), at the 19th Shanghai International Automobile Industry Exhibition, Huawei announced that Chinese automotive company SERES has launched an extended-range electric vehicle, the new SERES SF5, which will be available in Huawei flagship stores across China. For the first time, Huawei welcomes a vehicle partner into its "1+8+N" ecosystem.
A high-performance monster with Huawei technologies
In response to consumers' range and recharging anxieties about electric vehicles, Huawei and SERES have created a new electric drive range extension system. Equipped with the HUAWEI DriveONE Three-in-One Electric Drive, the system not only reduces range anxiety but also delivers world-class coupe performance. The new SERES SF5 offers users flexibility, with a range of 180 km in pure electric mode to meet the needs of daily city commuters,[1] and a range of over 1,000 km in extended range mode for long-distance travelers. Its high-performance capabilities include 0-100 km/h acceleration in 4.68 seconds. Performance is further aided by the lightweight aluminum chassis, the four-ball-joint double-wishbone front suspension, and the trapezoidal multi-link rear suspension, making the new SF5 a performance monster that delivers faster acceleration along with more stable body control and superior shock absorption.
HUAWEI HiCar solutions enable users to seamlessly hand over mobile phone applications to the vehicle's central control panel, providing access to navigation, music, and more, anytime, anywhere. This world-class coupe delivers entirely new user experiences: the music playing on your phone continues seamlessly when you get in the car. It also provides interactive voice control, allowing users to focus purely on driving. HUAWEI HiCar can also connect the car to other smart devices, so users in the car can turn on the air conditioner, smart screen, and other connected devices[2] at home with consummate ease.
The vehicle's three-dimensional surround audio system consists of 11 sound units, tuned by HUAWEI SOUND® core technicians and acoustic experts. Together with the vehicle's NVH solution, which achieves library-level quietness of 38 dB,[3] the audio system delivers authentic, immersive, opera-like sound quality.
The new SERES SF5 has a vehicle-to-vehicle (V2V) rescue recharge mode, which can provide emergency power to stranded vehicles in the wilderness. It also provides a vehicle-to-load (V2L) camping power supply mode that powers induction cookers, stereos, and other equipment, so users can have a barbecue when they stop.
The new SERES SF5's torsional and bending strength meets five-star safety design standards, with a body made of robust and reliable ultra-high-strength 1,500 MPa thermoformed steel. With L2+ level assisted driving, traffic congestion assistance, full-speed-domain adaptive cruise control, and other features, users need not worry about urban congestion or long-distance driving.
In addition, the front row is equipped with sporty, sophisticated seats offering ventilation, heating, and massage functions, and the smart welcome mode makes getting in and out of the car more comfortable and convenient.
The new model will be available in Huawei Stores from April 21st
Available in China at 246,800 CNY (4WD) and 216,800 CNY (2WD), the new SERES SF5 comes in four exterior colorways (Deep Ocean Blue, Charcoal Black, Pearl White, and Titanium Silver Grey) along with Midnight Black, Garnet Red, and Ivory White interior trims.
Starting from April 21, 2021, users can visit Huawei flagship shops nationwide for a test drive.
About SERES
Founded in 2016, SERES is one of the first brands in China to master extended-range electric drive technology, along with cutting-edge front and rear motor technologies and an all-in-one battery and electronic control system. SERES now holds more than 1,000 technology patents and applications. In addition, SERES has mature model production capacity and supply chain integration capabilities. Built on Industry 4.0, SERES' factory in Chongqing, China, monitors the full production process online and operates over 1,000 intelligent robots, with 100% automation in key production procedures.
[1] The data is based on the New European Driving Cycle (NEDC) test, with the vehicle fully charged and fueled.
[2] The devices refer to Huawei HiLink smart home products.
[3] Idling NVH data.


New Decade, New Tricks: The Future and How We Get There

The new decade gives us the opportunity to make a few predictions about where we will be in ten years’ time and chart a course to get there. It’s a decade of unbridled promise with mobile and ICT technologies converging to unleash a bold new future. To put this in perspective, let’s remember where we were ten years ago.
There was no Uber or Grab or Didi, and we arrived in new cities at the mercy of the local taxi companies. There was no Huawei Pay or Apple Pay or WeChat Pay, and we carried leather wallets containing an archaic artefact called cash. There was no Waze to direct us down the backroads to avoid traffic jams, and we relied instead on dumb GPS terminals that took us along the same congested highways as everyone else. And Airbnb was a mere start-up offering budget nights in strangers' living rooms on airbeds with breakfast thrown in. We take these apps for granted now.
So, let’s get out the crystal ball and make a few predictions. Some of the scenarios may feel oddly familiar to viewers of the TV show Black Mirror.
Death Proof
Let's start with an easy, uncontroversial one. By the end of the twenties, we will be driven around cities and across countries by autonomous vehicles that drive more safely and with greater awareness than humans. Drink driving will cease to exist, and with it the accidents and deaths that it causes. Car ownership will dwindle, as it will be cheaper and more convenient to call up a shared autonomous vehicle whenever you need to go somewhere.
Read more: How Will Driverless Cars Change Your City?
Conventional transport operators, from buses to taxis, will have to fundamentally change their business operations or cease to exist. Traffic jams will ease as AI determines the best route based on local conditions. We will raise the first generation that no longer needs to learn how to drive a car. Our journeys will instead become free time to work or entertain ourselves as the interior of the car becomes an extension of our offices or living rooms.
Total Recall
Retina implants will capture everything you see and hear in UHD video and transmit it in real time over ultra-high-speed mobile networks to massive amounts of cloud storage. AI will instantly recognise and record the people, places, and things in each frame, which will be used to retrieve digital memories more accurately than anything recalled by Maximilian the Amazing Memory Man. Can't remember that address? Spin back and find it instantly. Missed photographing your cat slipping on ice? Replay the moment time and again. Forgot what that argument was about? Relive it in all its glory. Our digital memories will be perfect.
And in the cinema, there will be an abundance of terrible political thrillers about the perils of deep fake videos.
Christmas Vacation
The digitalisation of the home has already begun, but to date it has largely been focused on getting a smart speaker to do something we used to do ourselves, such as set an alarm or play a particular piece of music. In ten years, AI, environmental controls, UHD screens, and holographic projections will have matured and converged to take control of the whole home.
The living room will detect your mood and set the environment accordingly, from the temperature to the view from the window, while holograms will replace flat panels as the main means of visual communication. This means that on a dark, cold winter's morning, your living room will detect your bad mood and automatically set the environmental controls and view to an idyllic Caribbean beach. Holidays will begin at home.
The Sixth Sense
Impossible though it may seem at the start of the decade, we are probably at peak mobile phone right now, and the device itself will rapidly become obsolete.
The smartphone will give way to smart wearables, which will in turn rapidly give way to tiny devices absorbed into the skin, complete with an augmented reality display. We will no longer need a powerful computing device and screen in the palm of our hand, because all the processing will take place in the cloud and be delivered to us over super-high-speed mobile networks. The era of seamless mobile communications as an addition to our current five senses will be a reality.
Office Space
While our personal lives and homes are being transformed, our workplaces will also undergo a radical digital transformation. Factories, manufacturing plants, production lines, healthcare, agriculture, heavy industry: all enterprises will be cloudifying their operations and introducing AI to deliver the next phase of automation. Great efficiencies will be achieved across many industries, transforming the way we work together and requiring new skills and competencies.
Read more: Is Education Today Teaching the Skills We Need Tomorrow?
Swing Shift
Underpinning all this will be three major social shifts. First, an increasingly aging population will force a great increase in productivity through automation.
Second, sustainability issues will become increasingly dominant demanding new efficiencies across the board.
And third, ICT vendors must work with regulators, governments, and service providers to create a bond of trust between industry and the consumer equivalent to (and as complex as) that of the civil aviation industry. Otherwise, any attempt to deliver the promise of 5G and digital transformation will be doomed by customer resistance.
Electric Dreams
The five key technology components that will enable this future are: 5G networks, cloud computing, AI, new devices, and new apps.
These technology enablers will increasingly become a co-dependent ecosystem where no component stands alone and all work and interoperate together to deliver a better future.
At Huawei, we are clear about our role and focus. We will continue to invest up to 15% of our annual revenue back into research and development, focusing on next-generation computing and connection technologies. As well as delivering industry-leading ICT networks and infrastructure, we will invest more in fundamental R&D: the new theories, new materials, and new engineering that will produce truly innovative ways of thinking and doing. We will employ more mathematicians, physicists, and chemists, and challenge established industry lore such as Moore's Law and Shannon's Theorem.
The new decade begins with such unbridled promise for transformation in our lives. Let’s get on and deliver it.
Sourced From https://blog.huawei.com/2020/01/15/new-decade-new-tricks-the-future-and-how-we-get-there/

GIV (Huawei's Global Industry Vision) predicts that by 2025, 14% of homes will have a domestic robot.

In what form? Think of a robot in 2020 and the scope is broader than ever, ranging from the cute (a child's smart toy or a vac bot bumping around your home) to the functional (the device you voice-command to do things in the home, or an assembly-line bot) to the classic dystopian humanoid (think I, Robot or the Terminator franchise).
Whatever it is, robots are here to stay. And given their potential benefits, that’s a good thing. Advances in materials science, perceptual AI, and network tech like 5G, cloud, and IoT are making it increasingly likely that a robot will be making your life better by 2025.
Based on GIV 2025, here’s a taster of some of the applications we can look forward to:
Nursing Bots
Two main trends are driving the demand for nursing bots:
(1) It's not just you who's getting older: Globally, the number of people entering the senior demographic (65 and older) is increasing by 3% per year. In today's Europe, for example, 20% of people are over 60. For society, this "demographic transition" means fewer people of working age taking care of more elderly people.
(2) There aren't enough healthcare professionals: The World Health Organization (WHO) estimates that by 2030 there will be a global shortfall of 9 million healthcare professionals, with Southeast Asia and Africa hit hardest. The future demand for home care workers is likely to far outstrip supply.
Many nations are stepping up research and investment into nursing-specific bots and applications to ensure that they stay ahead of the crisis curve. The functions of such bots outside of clinical settings will include:
Collecting data from sensors embedded in wearables or from around the home and performing predictive analytics on health.
Performing checkups, acting as smart first-aiders that can respond to incidents at the millisecond level and administer aid in the “first golden minute”, calling emergency services, and transmitting your medical data to hospitals.
Dispensing medicine at the correct times in the correct amounts.
Heading off unnecessary, time- and resource-wasting hospital trips before they happen.
Bionic Bots
Who doesn’t want to run faster, see better, stave off the effects of aging, or be safer at work? Demand for bionic tech is unsurprisingly on the rise, with human augmentation making Gartner’s Top Ten Strategic Technology Trends for 2020. Here are some examples:
Exoskeletons: Combining mechanics, sensors, AI, and mobile computing, smart exoskeletons are already available, and demand is set to increase. As well as serving as a mobility and protective tool for the elderly, exoskeletons will boost safety in industrial scenarios, help with gait rehabilitation, and help people with neurological disorders or stroke patients get about.
Prosthetics: Smart prosthetics are getting smarter, with machine learning able to make the brain-limb connection that automatically conveys the intention of an action to the limb, and current research looking at musculoskeletal computing models.
Augmentation devices: Beyond exoskeletons and prosthetics, sensory augmentation is on the horizon: for example, bionic lenses that could potentially replace the eye's natural lens with camera optics, and brain implants that can perform functions like controlling seizures.
Companion Bots
The evolution of perceptual AI, such as natural language processing and computer vision (including facial recognition), will enable multiple-round, multiple-level dialogues with nuanced, real-time changes in tone and intonation based on increasingly complex decision trees. In short, chatting to a robot buddy is destined to become indistinguishable from chatting to another person.
So, what kind of applications and experience can we expect from the artificial linguist?
Study bots: By 2025, it’s expected that every child will benefit from an “Einstein-like” smart tutor that can tailor learning methods to individuals. In more formal settings, AI-powered analytics will also be able to spot data-driven correlations too sophisticated for human teachers, for example, if exercise before math class improves performance or how nutrition affects learning outcomes.
Therapy bots: While machines may never replace the human connection, they’re already occupying a predictive and therapeutic role in certain health scenarios, including evaluating the brain signals of children with Autism Spectrum Disorder (ASD) using EEG and video cameras to record interaction with the robot.
Virtual therapists: With the WHO estimating that 300 million people worldwide suffer from depression, emotionally intelligent robots that employ empathy and decision-tree dialogues are already proving valuable in helping to address mental health issues. The CBT-wielding Woebot and the data-analyzing mental health monitor mind.me are two examples.
Friendship bots: Depression's unpleasant sibling, loneliness, is on the rise. Studies report that 14% of Brits are lonely and that loneliness in the US has tripled since the 1980s. But advances in perceptual and cognitive AI will increase the sophistication of decision trees and subtle responses to human stimuli, which may go some way toward ameliorating the issue.
Butler Bots
When it comes to housework, the majority of people would rather be doing something else. And the good news is that you probably will be able to – sooner than you think.
Currently, we’re seeing bots perform basic tasks like folding clothes, vacuuming, and picking things up. Butler bots will continue to learn the preferences and usage habits of individual family members and provide a range of home services for individuals and families based on voice commands, sensors, and apps.
For the full analysis and GIV 2025 predictions for how robots will impact home life, as well as the business opportunities and value that will be created, visit the Global Industry Vision website.
In the meantime, it might be time to start thinking about that new family member: will you welcome a robot helper, nurse, pet, or butler into your home, or not? Leave a comment below.

Lean AI: How Much AI Does Your Company Need?

As humans, we have a tendency to anthropomorphize the objects around us. Thus, in the 20th century, many technology inventions carried the names of the people who invented them, like Ford cars and Boeing airplanes. At the same time, pet owners love nothing more than attributing human characteristics to the behavior of their dogs, and who doesn't love the talking animals in Disney movies? Even the gods of Greek, Chinese, and Egyptian mythology resemble humans.
This trend holds true for AI, the field of creating an electronic mind that knows no fatigue or stress. However great the technology's promises, we often unconsciously expect more from AI than it can deliver.
Many companies have experimented with AI adoption but have found it challenging to prove the value of AI solutions. In its report Three Barriers to AI Adoption, Gartner names lack of skills, fear of the unknown, and data quality as the three most common barriers. There are surely other reasons as well; one may be high expectations of the technology's ability to be, in HR terms, a "self-starter" and a "team player". Companies should not mistake what they desire for reality, and should take full control of AI adoption into their own hands.
The hype surrounding AI has certainly put it in the spotlight: companies understand the benefits that an artificial mind can offer. At the same time, they need to know more about how to put it into practice. This article outlines three recommendations for AI adoption:
1) Ensure data quality and volume. Data has to go a long way from entering the storage system to creating value. Data discovery, tagging, and organizing are tedious activities, especially when today’s expectations center on fast innovation.
However, one kilobyte of structured data may bring far more value to an enterprise than terabytes of messy data. Ironically, companies that are drowning in data tend not to have enough of it: modern enterprises are likely to find that they lack some of the information they need to navigate a changing environment. For example, a supermarket may not know what its customers are searching for online, yet this potentially represents an opportunity for the retailer to grow sales. Companies need to take responsibility for putting their data in order and evaluating what information is missing from the full picture. A systematic approach to data collection and cataloging is the cornerstone of the overall enterprise data initiative.
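To make "cataloging" concrete, here is a minimal sketch of a completeness check against a catalog entry. It is illustrative only: the CatalogEntry type, field names, and example data are hypothetical, not part of any specific catalog product.

```kotlin
// Hypothetical catalog entry: which fields must be present for a dataset
// to be considered usable. All names here are illustrative.
data class CatalogEntry(
    val dataset: String,
    val requiredFields: Set<String>
)

// For each required field, report the fraction of records that carry a value.
fun completenessReport(
    entry: CatalogEntry,
    records: List<Map<String, Any?>>
): Map<String, Double> =
    entry.requiredFields.associateWith { field ->
        if (records.isEmpty()) 0.0
        else records.count { it[field] != null } / records.size.toDouble()
    }

fun main() {
    val entry = CatalogEntry("online_search_terms", setOf("customer_id", "query", "timestamp"))
    val records = listOf(
        mapOf("customer_id" to 1, "query" to "running shoes"),  // timestamp missing
        mapOf("customer_id" to 2, "query" to "milk", "timestamp" to "2020-08-07")
    )
    // Prints, e.g.: {customer_id=1.0, query=1.0, timestamp=0.5}
    println(completenessReport(entry, records))
}
```

A report like this makes the "what's missing" question measurable before any model is trained.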
2) Find the balance between predictive accuracy and required computing power. AI is basically a series of math problems, but the number of problems grows at an accelerating rate as the precision of the model increases. Complex AI models can consume a tremendous amount of computing power, resulting in mounting costs. At the same time, increasing model complexity means that incremental improvements in algorithm efficiency become smaller with every step. For example, in a recommendation system that isn't business-critical, implementing a model with 50% accuracy can pay off better than a beefier model with 90% accuracy, as the hardware costs will be an order of magnitude lower.
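As a back-of-the-envelope illustration of that trade-off, the sketch below compares the expected net value of a lean model against a heavier one. All numbers (request volume, value per correct recommendation, hardware costs) are invented for illustration.

```kotlin
// Illustrative only: compare a cheap, less accurate model with an
// expensive, more accurate one on expected net value per year.
data class ModelOption(
    val name: String,
    val accuracy: Double,            // fraction of correct recommendations
    val hardwareCostPerYear: Double  // cost of the compute it needs
)

fun netValuePerYear(m: ModelOption, valuePerHit: Double, requestsPerYear: Long): Double =
    m.accuracy * requestsPerYear * valuePerHit - m.hardwareCostPerYear

fun main() {
    val lean = ModelOption("lean 50% model", 0.50, 20_000.0)
    val heavy = ModelOption("heavy 90% model", 0.90, 200_000.0)  // ~10x the hardware
    for (m in listOf(lean, heavy)) {
        // Assume 10 million requests/year, each correct hit worth 0.02 units.
        println("${m.name}: net = ${netValuePerYear(m, 0.02, 10_000_000L)}")
    }
}
```

With these assumed numbers, the lean model nets 80,000 per year while the heavy model loses 20,000: a concrete version of the pay-off comparison above.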
Another reason for the increasing demand for massive computing power is the growing amount of data. For example, doubling both the width and height of a picture quadruples the number of pixels. Careful calculation, however, can show how much AI is enough for an organization. This is a matter of trade-off, and every company should decide where its sweet spot is. Hardware can be costly upfront, and once the salaries of experts and the cost of power consumption are added, early adopters who implemented a full-blown machine learning initiative may lose their motivation very fast. AI is real, but it requires thoughtful consideration.
Figure: Two Distinct Eras in Compute Usage in Training AI Systems (source: https://openai.com/blog/ai-and-compute/)
3) Companies should start with perceptual AI: natural language processing (NLP) and computer vision, the most mature technologies in the AI domain. While AI has a broad scope of applications, NLP and computer vision are well understood, largely due to their implementation in smart city scenarios around the world. Many companies invest in AI to explore its possibilities, and intelligent camera recognition, document scanning, and automatic video and image tagging should be among the first applications on an IT department's AI agenda. The industry has accumulated a lot of experience in this domain, so enterprises can build on industry blueprints, best practices, and previous experience. Here, the threshold for implementation is lowest.
While the potential of the technology is very strong, decision-makers should take full responsibility for AI adoption. The best approach is to think rationally about how AI can bring positive changes, and then to develop a step-by-step plan.
Source: https://blog.huawei.com/2020/08/07/lean-ai-how-much-ai-does-your-company-need/ (Ilya Brovashov, Marketing Manager, Data Storage, Huawei)
For details about Huawei developers and HMS, visit the website.
https://forums.developer.huawei.com/forumPortal/en/home?fid=

How Huawei Helps the Visually Impaired Help Themselves

Wang Zhongwei sits at the piano, hands hovering above the keys. Nearby sits a small box which contains all of his tools: tuning wrenches, tuning forks, and silencers, all neatly arranged in their proper compartments. He tilts his head to one side, straightens his back, and presses a key. As the note rings out, Zhongwei listens intently for subtleties of pitch. This one is a little flat. He takes a wrench from the box and reaches inside the piano to tighten the corresponding pin a little. This is how a piano tuner works. It may sound simple, but it is skilled work that requires intimate familiarity with the hundreds of densely arranged tuning pins inside the piano, a layout that Zhongwei, as a visually impaired person, has had to learn by feel alone.
Wang Zhongwei was born visually impaired. Since childhood, he has seen the world only as a vague blur, but his keen sensitivity to sound led him to his career as a piano tuner. He can tune every note by staying close to the piano, feeling it with his hands and hearing the sound repeatedly after recording it on his mobile phone.
Zhongwei spent nearly 10 years in special schools learning Braille, while studying the rest of the standard school curriculum through a magnifying glass. He has been using consumer electronics for nearly 10 years; in 2015, he bought his first Huawei phone, a P8max.
Since then, Zhongwei has got to know the accessibility features in Huawei phones, which have become a "good helper" in his daily life. With the ScreenReader function, he can operate his phone without the help of other people, and take advantage of functions that help him live a more independent life. He can access study materials online, especially now, with the rise of audiobooks.
In 2018, Zhongwei began to work as a piano tuner. He often uses the Recorder feature when tuning a piano. "It's one of the features that I use most," says Zhongwei, before going on to praise the sound quality of Huawei phones.
He feels around inside a piano to determine the position of the strings and tuning pins, adjusts the pins with a tuning wrench, and then presses the corresponding key and listens to the subtle change in pitch. He uses his Huawei phone to record the sound of the piano before and after tuning for comparative study, which helps him hone his ears.
"When I tune pianos, sing a song, or listen to a concert, I record the music with my phone. When I listen to the recordings, I feel as if I were there again. I've started using the AI Life app to manage other devices through my phone, and it has made my life much easier," says Zhongwei.
With the ScreenReader feature, Zhongwei can turn on the air conditioner and freely adjust the temperature and wind direction with the AI Life app. He can also help visiting friends connect to his Wi-Fi network with the Guest Wi-Fi feature, which is secure and convenient.
Zhongwei is effusive in his praise for the Huawei AI Life app. "It can control and manage all of my household appliances and smart devices. This is beyond what I could have imagined just a few years ago. In the past, every device came with a dedicated remote control. There was no voice command option for a lot of devices and, as a visually impaired person, I sometimes needed help when using them."
Unlike Wang Zhongwei, Wu Yiming was not born visually impaired. In 2017, he began gradually losing his eyesight due to advanced glaucoma. "I can only see a shimmer of light now," says Yiming.
Wu Yiming is now an accessibility engineer. His passion for software technology can be traced back to his high school days 7 years ago. Driven by this passion, he taught himself software development by taking online video courses and reading books.
"When medical treatment does not help, I try to look for the brightness from technology." In 2018, Wu Yiming joined the Accessibility Research Association, where he finds bugs related to accessibility experience in apps, software, and UI design, and then proposes solutions to developers.
"The way we interact with technology is very different from how a sighted person interacts with it, and developers of accessibility features should have a sense of empathy. I am both a promoter and a beneficiary of this effort. It's a wonderful feeling, and I'm always motivated to do the job as well as possible," Yiming says.
For Yiming, his phone's ScreenReader feature, which reads onscreen text aloud, is an essential function. Since 2018, Yiming has not just been a Huawei user, he has been an active contributor to the design process. He participated in the Huawei Gallery upgrade program, helping develop features that enable visually impaired users to share photos with friends and enjoy themselves while recording beautiful moments in their lives with their phones.
Accessibility experience and technology have been improving and upgrading thanks to a team of dedicated, empathetic engineers like Wu Yiming. In the future, Yiming will continue to explore the field of accessibility. He is also applying for some patents that will help visually impaired users embrace smart technologies. "I've gone from being a wheel user to a wheel maker," says Yiming, deploying one of his favorite metaphors. "In the future, I want to bring the convenience of technology to even more people."
For details about Huawei developers and HMS, visit the website.
forums.developer.huawei.com

Solution to 3D Modeling on Mobile Apps

"3D modeling" is commonly referred to the use of 3D production software to build a model with 3D data. 3D modeling is useful in a wide range of scenarios like product modeling, 3D printing, teaching demos, game development, and animation creation.
However, it takes at least several hours to build a low-polygon model for a simple object, and even longer for a high-polygon one. The average cost of building a simple model can exceed one hundred dollars, and a complex one costs even more.
3D Modeling Kit is designed to facilitate fast, simple, and cost-effective 3D content creation, improving 3D content production efficiency and helping you create 3D models and animations.
3D Modeling Kit currently offers three capabilities: 3D object reconstruction, material generation, and motion capture. In the near future, more capabilities, such as human body modeling and facial motion capture, will be available.
3D Object Reconstruction
Introduction
This capability is used for modeling, but differs from the two other common modeling methods, one of which uses a 3D modeling program and the other a scanner. 3D object reconstruction helps mobile apps develop user-generated content at lower cost. Instead of images with depth information, the capability requires only images collected from a standard RGB camera, captured from multiple angles of an object. Once the images are uploaded to the cloud, the capability automatically generates a 3D model with textures.
When it comes to hardware requirements, 3D object reconstruction currently supports Android and iOS devices. It may offer web APIs in the future, bringing these powerful functions to all kinds of apps.
As this capability implements image-based modeling, it has some special requirements for objects: they should be medium-sized rigid bodies with rich, non-reflective textures. Suitable objects include goods (plush toys, bags, and shoes), furniture (sofas and pillows), and cultural relics (bronzes, stone artifacts, and wooden artifacts).
The optimal dimensions for an object are between 10 x 10 x 10 cm and 200 x 200 x 200 cm. The capability also supports objects with bigger dimensions, but requires more time to model them. It typically takes less than five minutes for 3D object reconstruction to generate a model from 1080p images, and the model is saved in a standard file format, either OBJ or glTF.
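To make the upload-and-reconstruct workflow concrete, here is a hypothetical sketch of the capture-upload-poll-download loop described above. The interface and method names are illustrative stand-ins, not the actual 3D Modeling Kit API; consult the kit's documentation for the real classes.

```kotlin
import java.io.File

// Hypothetical service interface mirroring the described flow; not the real API.
interface ReconstructionService {
    fun createTask(): String                              // returns a task ID
    fun uploadImages(taskId: String, images: List<File>)  // multi-angle RGB shots
    fun queryStatus(taskId: String): String               // e.g. "RUNNING", "COMPLETED"
    fun downloadModel(taskId: String, target: File)       // textured OBJ/glTF output
}

// Capture images around the object, hand them to the cloud, poll, then download.
fun reconstruct(service: ReconstructionService, images: List<File>, output: File) {
    val taskId = service.createTask()
    service.uploadImages(taskId, images)
    while (service.queryStatus(taskId) != "COMPLETED") {
        Thread.sleep(10_000)  // modeling typically takes a few minutes
    }
    service.downloadModel(taskId, output)
}
```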
Last but not least, the size of the SDK for this capability is only 83 KB, which will not greatly increase the app size.
Application Scenarios
3D object reconstruction is ideal for the e-commerce industry. The image below demonstrates the function's usefulness by showing how to model a shoe or a cultural relic: put the object on a turntable in a light box, and then collect images following the image collection suggestions mentioned above.
3D models can be a key factor in boosting user conversion, as they provide a more immersive shopping experience and let users form a realistic impression of the products.
In the cultural relic industry, 3D object reconstruction can be used to generate 3D models of cultural relics, ensuring that they can be digitally preserved and displayed in multimedia.
Material Generation
Definition of Material
The second capability of 3D Modeling Kit is material generation. In computer graphics, appearance, also known as material, is a property that describes how light interacts with the surface and interior of an object. A material model and a set of control parameters define the surface appearance.
Materials play a vital role in making a virtual scene look more realistic. Take the following two images as an example. The image on the right appears more realistic than the one on the left, because the former is texturized to define the appearance of the walls, floor, oil drums, and table cloth.
Developers and users are faced with several obstacles when trying to create materials.
First, material creation is time-consuming, strenuous, and hard to standardize. Before being used to create life-like materials, images of textures in the real world have to be processed by an art designer using professional software. For example, obtrusive lighting must be removed, and roughness and dimensions must be adjusted.
Second, the cost of material creation is high. The classic approach of texture mapping requires skills and specific renderers. A further barrier is that images used in one project and renderer can rarely be reused in others. It is also hard for art designers to accumulate and pass on their experience.
To overcome these obstacles, we propose two solutions:
First, improve the efficiency and quality of material creation by using a deep learning network to easily generate physically based rendering (PBR) texture maps.
Second, make texture maps usable across different projects and renderers, and spread artistic experience, by standardizing the experience and material creation specifications of technical artists into data that complies with PBR standards.
Introduction to Material Generation
This capability supports one-tap generation of four types of texture maps from one or more RGB images.
All you need is an Android device with a standard RGB camera; no time-of-flight (ToF) camera or light detection and ranging (LiDAR) sensor is required. The supported material types include concrete, marble, rock, gravel, brick, gypsum, clay, metal, wood, bark, leather, fabric, paint, plastic, and composite material.
The generated texture maps come in four types: the diffuse map, normal map, specular map, and roughness map. Regarding input image quality, the capability requires a resolution between 1K and 4K, and the image must have no seams, bright spots, shadows, or reflections. The structural similarity (SSIM) score for the rendered material is greater than 0.9, which indicates that the material generation capability performs rather well.
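A small sketch of the input/output contract described above may help. The types and names here are hypothetical, not the kit's actual API; only the four map types and the 1K-4K input requirement come from the description above.

```kotlin
import java.io.File

// The four PBR texture maps the capability produces (per the description above).
data class PbrMaps(
    val diffuse: File,    // base color
    val normal: File,     // surface detail
    val specular: File,   // reflectivity
    val roughness: File   // micro-surface scattering
)

// Hypothetical generator interface: one or more RGB images in, four maps out.
interface MaterialGenerator {
    fun generate(images: List<File>, outputDir: File): PbrMaps
}

// Pre-check matching the stated input requirement (resolution between 1K and 4K).
fun resolutionOk(widthPx: Int, heightPx: Int): Boolean =
    widthPx in 1024..4096 && heightPx in 1024..4096
```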
The image below illustrates how to use material generation to quickly create a house. To begin with, generate the texture maps. Once this is done, drag the maps onto the spheres, and then copy the texture spheres onto the blank models as needed. Finally, the house is created via offline or online rendering.
Motion Capture
This capability became available recently. It quickly and accurately calculates the 3D data of 24 key skeleton points of the human body from an RGB image or continuous video frames shot with a common monocular camera.
It supports common poses including standing, walking, and running.
The optimal resolution for input images and video is in the range of 320p to 1080p. Above this, the capability takes longer to perform its calculations without significantly boosting accuracy. Therefore, for input video frames or RGB images with a resolution higher than 1080p, you are advised to scale them down first.
As for output, the capability simultaneously produces quaternions and 3D coordinates for the 24 key skeleton points, which can be used directly in some engines for skeleton animation.
When the capability runs on a phone's CPU, the detection frame rate is up to 30 fps; on a phone's NPU, it is up to 80 fps.
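For illustration, here is a hypothetical shape for the output just described: 24 joints, each with a quaternion and a 3D coordinate, plus the recommended down-scaling for frames above 1080p. None of these types are the kit's actual API.

```kotlin
// Hypothetical output types: 24 key skeleton points, each carrying a
// rotation quaternion and a 3D position.
data class Joint(
    val quaternion: FloatArray,  // (x, y, z, w)
    val position: FloatArray     // (x, y, z)
)

data class PoseFrame(val joints: List<Joint>) {
    init { require(joints.size == 24) { "expected 24 key skeleton points" } }
}

// Per the advice above: scale frames above 1080p down before inference,
// since extra resolution adds latency without a matching accuracy gain.
fun targetHeight(frameHeight: Int): Int = minOf(frameHeight, 1080)
```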
Motion capture can be integrated in either full SDK mode or base SDK mode. If your app runs on Huawei phones, you need to integrate the base SDK, which has a file size of only 200 KB, and download the algorithm package when the capability is first used.
