How a Programmer Developed a Text Reader App for His 80-Year-Old Grandpa - Huawei Developers

"John, have you seen my glasses?"
Our old friend John, a programmer at Huawei, has a grandpa who, despite his old age, is an avid reader. Leaning back, struggling to make out what was written in the newspaper through his glasses, yet unable to take his eyes off the text: this was how his grandpa used to read, John explained.
Reading this way was harmful to his grandpa's vision, and it occurred to John that the ears could take over the role of "reading" from the eyes. He soon developed a text-reading app that followed this logic, recognizing and then reading out text from a picture. Thanks to this app, John's grandpa can now "read" from the comfort of his rocking chair, without having to strain his eyes.
How to Implement
1. The user takes a picture of a text passage. The app automatically identifies the location of the text within the picture and corrects the image so that it appears to have been shot directly facing the text.
2. The app recognizes and extracts the text from the picture.
3. The app converts the recognized text into audio output by leveraging text-to-speech technology.
These functions are easy to implement when relying on three services in HUAWEI ML Kit: document skew correction, text recognition, and text to speech (TTS).
Preparations
1. Configure the Huawei Maven repository address.
2. Add the build dependencies for the HMS Core SDK.
Code:
dependencies {
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-voice-tts:2.1.0.300'
// Import the bee voice package.
implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.1.0.300'
// Import the eagle voice package.
implementation 'com.huawei.hms:ml-computer-voice-tts-model-eagle:2.1.0.300'
// Import a PDF file analyzer.
implementation 'com.itextpdf:itextg:5.5.10'
}
Tap PREVIOUS or NEXT to turn to the previous or next page. Tap speak to start reading; tap it again to pause reading.
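The page-turning buttons only need to move a page index while keeping it within bounds. A minimal plain-Java sketch of that clamping logic (helper names are my own, not from the app's source):

```java
public class Pager {
    // Clamp the page index to [0, pageCount - 1] when turning pages.
    static int turnPage(int current, int delta, int pageCount) {
        int next = current + delta;
        if (next < 0) return 0;
        if (next >= pageCount) return pageCount - 1;
        return next;
    }

    public static void main(String[] args) {
        System.out.println(turnPage(0, -1, 10)); // stays at 0: already the first page
        System.out.println(turnPage(9, 1, 10));  // stays at 9: already the last page
    }
}
```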
Development process
1. Create a TTS engine by using the custom configuration class MLTtsConfig. Here, on-device TTS is used as an example.
Java:
private void initTts() {
    // Set the authentication information used to download the model package from Huawei's server.
    MLApplication.getInstance().setApiKey(AGConnectServicesConfig
            .fromContext(getApplicationContext()).getString("client/api_key"));
    // Create a TTS engine configuration by using MLTtsConfig.
    mlTtsConfigs = new MLTtsConfig()
            // Set the language of the synthesized speech to English.
            .setLanguage(MLTtsConstants.TTS_EN_US)
            // Set the speaker to the English male voice (eagle).
            .setPerson(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE)
            // Set the speech speed. The range is (0, 5.0]; 1.0 indicates a normal speed.
            .setSpeed(0.8f)
            // Set the volume. The range is (0, 2); 1.0 indicates a normal volume.
            .setVolume(1.0f)
            // Set the TTS mode to on-device.
            .setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
    mlTtsEngine = new MLTtsEngine(mlTtsConfigs);
    // Update the configuration while the engine is running.
    mlTtsEngine.updateConfig(mlTtsConfigs);
    // Pass the TTS callback to the engine to process the synthesis result.
    mlTtsEngine.setTtsCallback(callback);
    // Create an on-device TTS model manager.
    manager = MLLocalModelManager.getInstance();
    isPlay = false;
}
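Per the comments above, the speed must be within (0, 5.0] and the volume within (0, 2). A plain-Java validation helper (a sketch of my own, not an ML Kit API) can reject out-of-range values before the config is built:

```java
public class TtsParamCheck {
    // Valid per the config comments: speed in (0, 5.0], volume in (0, 2); 1.0 is normal for both.
    static boolean isValidSpeed(float speed) {
        return speed > 0f && speed <= 5.0f;
    }

    static boolean isValidVolume(float volume) {
        return volume > 0f && volume < 2.0f;
    }

    public static void main(String[] args) {
        System.out.println(isValidSpeed(0.8f));  // true
        System.out.println(isValidVolume(2.0f)); // false: the upper bound 2.0 is excluded
    }
}
```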
2. Create a TTS callback function for processing the TTS result.
Java:
MLTtsCallback callback = new MLTtsCallback() {
@Override
public void onError(String taskId, MLTtsError err) {
// Processing logic for TTS failure.
}
@Override
public void onWarn(String taskId, MLTtsWarn warn) {
// Alarm handling without affecting service logic.
}
// Return the mapping between the currently played segment and the text.
// start (included): start position of the audio segment in the input text.
// end (excluded): end position of the audio segment in the input text.
@Override
public void onRangeStart(String taskId, int start, int end) {
// Process the mapping between the currently played segment and the text.
}
// taskId: ID of the TTS task corresponding to the audio.
// audioFragment: audio data.
// offset: offset of the audio segment to be transmitted in the queue. One TTS task corresponds to one TTS queue.
// range: text range of the audio segment; range.first (included) is the start position, and range.second (excluded) is the end position.
@Override
public void onAudioAvailable(String taskId, MLTtsAudioFragment audioFragment, int offset,
Pair<Integer, Integer> range, Bundle bundle) {
// Audio stream callback, used to return the synthesized audio data to the app.
}
@Override
public void onEvent(String taskId, int eventId, Bundle bundle) {
// Callback method of a TTS event. eventId indicates the event name.
boolean isInterrupted;
switch (eventId) {
case MLTtsConstants.EVENT_PLAY_START:
// Called when playback starts.
break;
case MLTtsConstants.EVENT_PLAY_STOP:
// Called when playback stops.
isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED);
break;
case MLTtsConstants.EVENT_PLAY_RESUME:
// Called when playback resumes.
break;
case MLTtsConstants.EVENT_PLAY_PAUSE:
// Called when playback pauses.
break;
// Pay attention to the following callback events when you focus on only the synthesized audio data but do not use the internal player for playback.
case MLTtsConstants.EVENT_SYNTHESIS_START:
// Called when TTS starts.
break;
case MLTtsConstants.EVENT_SYNTHESIS_END:
// Called when TTS ends.
break;
case MLTtsConstants.EVENT_SYNTHESIS_COMPLETE:
// TTS is complete. All synthesized audio streams are passed to the app.
isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_SYNTHESIS_INTERRUPTED);
break;
default:
break;
}
}
};
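The onRangeStart callback reports a half-open range [start, end) into the input text, which can drive word-by-word highlighting. A standalone illustration of the range contract (class and method names are mine):

```java
public class RangeDemo {
    // Extract the text segment currently being spoken, given the [start, end) range from onRangeStart.
    static String currentSegment(String text, int start, int end) {
        return text.substring(start, end); // end is exclusive, matching the callback contract
    }

    public static void main(String[] args) {
        String text = "Hello world";
        System.out.println(currentSegment(text, 0, 5)); // prints "Hello"
    }
}
```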
3. Extract text from a PDF file.
Java:
private String loadText(String path) {
    String result = "";
    try {
        PdfReader reader = new PdfReader(path);
        result = PdfTextExtractor.getTextFromPage(reader,
                mCurrentPage.getIndex() + 1).trim() + System.lineSeparator();
        reader.close();
    } catch (IOException e) {
        showToast(e.getMessage());
    }
    // Obtain the position of the header (first line separator).
    int header = result.indexOf(System.lineSeparator());
    // Obtain the position of the footer (last line separator).
    int footer = result.lastIndexOf(System.lineSeparator());
    // indexOf/lastIndexOf return -1 when no separator is found.
    if (header >= 0 && footer > header) {
        // Do not return the text in the header and footer.
        return result.substring(header + System.lineSeparator().length(), footer);
    } else {
        return result;
    }
}
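The header/footer stripping in loadText relies on indexOf and lastIndexOf over line separators. A standalone sketch of the same idea on a plain string (independent of iText; names are mine):

```java
public class StripDemo {
    // Remove the first line (header) and last line (footer) from a page of text, if both exist.
    static String stripHeaderFooter(String page, String sep) {
        int header = page.indexOf(sep);
        int footer = page.lastIndexOf(sep);
        if (header >= 0 && footer > header) {
            return page.substring(header + sep.length(), footer);
        }
        return page; // fewer than three lines: nothing to strip
    }

    public static void main(String[] args) {
        String page = "HEADER\nbody text\nFOOTER";
        System.out.println(stripHeaderFooter(page, "\n")); // prints "body text"
    }
}
```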
4. Perform TTS in on-device mode.
Java:
// Create an MLTtsLocalModel instance to set the speaker so that the language model corresponding to the speaker can be downloaded through the model manager.
MLTtsLocalModel model = new MLTtsLocalModel.Factory(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE).create();
manager.isModelExist(model).addOnSuccessListener(new OnSuccessListener<Boolean>() {
@Override
public void onSuccess(Boolean aBoolean) {
// If the model is not downloaded, call the download API. Otherwise, call the TTS API of the on-device engine.
if (aBoolean) {
String source = loadText(mPdfPath);
// Call the speak API to perform TTS. source indicates the text to be synthesized.
mlTtsEngine.speak(source, MLTtsEngine.QUEUE_APPEND);
if (isPlay){
// Pause playback.
mlTtsEngine.pause();
tv_speak.setText("speak");
}else {
// Resume playback.
mlTtsEngine.resume();
tv_speak.setText("pause");
}
isPlay = !isPlay;
} else {
// Call the API for downloading the on-device TTS model.
downloadModel(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE);
showToast("The offline model has not been downloaded!");
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
showToast(e.getMessage());
}
});
5. Release resources when the current UI is destroyed.
Java:
@Override
protected void onDestroy() {
super.onDestroy();
try {
if (mParcelFileDescriptor != null) {
mParcelFileDescriptor.close();
}
if (mCurrentPage != null) {
mCurrentPage.close();
}
if (mPdfRenderer != null) {
mPdfRenderer.close();
}
if (mlTtsEngine != null){
mlTtsEngine.shutdown();
}
} catch (IOException e) {
e.printStackTrace();
}
}
Other Applicable Scenarios
TTS can be used across a broad range of scenarios. For example, you could integrate it into an education app that reads bedtime stories to children, or into a navigation app that reads out directions.
For more details, you can go to:
Reddit to join our developer discussion
GitHub to download demos and sample code
Stack Overflow to solve any integration problems
Original Source

Well explained. Will it support all languages?

Related

Send Notification when Headset is Connected: Foreground Service with Huawei Awareness Kit

Hello everyone, in this article we will learn how to use Huawei Awareness Kit with foreground service to send notification when certain condition is met.
Huawei Awareness Kit enables us to observe some environmental factors such as time, location, behavior, audio device status, ambient light, weather and nearby beacons. So, why don’t we create our own conditions to be met and observe them even when the application is not running?
First of all, we need to do HMS Core integration to be able to use Awareness Kit. I will not go into the details of that because it is already covered here.
If you are done with the integration, let’s start coding.
Activity Class
We will keep our activity class pretty simple to prevent any confusion. It will only be responsible for starting the service:
Java:
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Intent serviceStartIntent = new Intent(this, MyService.class);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
startForegroundService(serviceStartIntent);
}
else {
startService(serviceStartIntent);
}
}
}
However, we need to add the following permission to start a service correctly:
XML:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
Service Class
Now, let’s talk about the service class. We are about to create a service that keeps running even after the application is killed. However, this comes with some restrictions. Since Android Oreo, an application that wants to start a foreground service must inform the user with a notification, which must remain visible for the lifetime of the foreground service. This notification is also what we pass to startForeground(). Therefore, our first job in the service class is to create this notification and call the startForeground() method with it:
Java:
Notification notification;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
notification = createCustomNotification();
else
notification = new Notification();
startForeground(1234, notification);
And here is how we create the information notification we need for SDK versions later than 25:
Java:
@RequiresApi(api = Build.VERSION_CODES.O)
private Notification createCustomNotification() {
    NotificationChannel notificationChannel = new NotificationChannel("1234", "name", NotificationManager.IMPORTANCE_HIGH);
    NotificationManager manager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
    manager.createNotificationChannel(notificationChannel);
    // The builder must use the same channel ID as the channel created above.
    NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this, notificationChannel.getId());
    return notificationBuilder
            .setSmallIcon(R.drawable.ic_notification)
            .setContentTitle("Observing headset status")
            .setPriority(NotificationCompat.PRIORITY_HIGH)
            .build();
}
Note: Use your own channel ID and name here. The channel ID passed to the NotificationCompat.Builder must match the one used to create the notification channel.
Now, it is time to prepare the parameters for creating a condition to be met, called a barrier:
Java:
// Define an action for the broadcast and create a matching intent (the action string here is an example).
String barrierReceiverAction = getApplication().getPackageName() + ".HEADSET_BARRIER_RECEIVER_ACTION";
Intent intent = new Intent(barrierReceiverAction);
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
headsetBarrierReceiver = new HeadsetBarrierReceiver();
registerReceiver(headsetBarrierReceiver, new IntentFilter(barrierReceiverAction));
AwarenessBarrier headsetBarrier = HeadsetBarrier.connecting();
createBarrier(this, HEADSET_BARRIER_LABEL, headsetBarrier, pendingIntent);
Here we have sent the required parameters to a method which will create a barrier for observing headset status. If you want, you can use other awareness barriers too.
Creating a barrier is a simple and standard process, which is taken care of by the following method:
Java:
private void createBarrier(Context context, String barrierLabel, AwarenessBarrier barrier, PendingIntent pendingIntent) {
BarrierUpdateRequest.Builder builder = new BarrierUpdateRequest.Builder();
BarrierUpdateRequest request = builder.addBarrier(barrierLabel, barrier, pendingIntent).build();
Awareness.getBarrierClient(context).updateBarriers(request)
.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void void1) {
System.out.println("Barrier Create Success");
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
System.out.println("Barrier Create Fail");
}
});
}
We will be done with the service class after adding the following methods:
Java:
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
return START_STICKY;
}
@Override
public void onDestroy() {
unregisterReceiver(headsetBarrierReceiver);
super.onDestroy();
}
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
And of course, we shouldn’t forget to add our service to the manifest file:
XML:
<service
android:name=".MyService"
android:enabled="true"
android:exported="true" />
Broadcast Receiver Class
Lastly, we need to create a broadcast receiver where we will observe and handle the changes in the headset status:
Java:
public class HeadsetBarrierReceiver extends BroadcastReceiver {
public static final String HEADSET_BARRIER_LABEL = "HEADSET_BARRIER_LABEL";
@Override
public void onReceive(Context context, Intent intent) {
BarrierStatus barrierStatus = BarrierStatus.extract(intent);
String barrierLabel = barrierStatus.getBarrierLabel();
int barrierPresentStatus = barrierStatus.getPresentStatus();
if (HEADSET_BARRIER_LABEL.equals(barrierLabel)) {
if (barrierPresentStatus == BarrierStatus.TRUE) {
System.out.println("The headset is connected.");
createNotification(context);
}
else if (barrierPresentStatus == BarrierStatus.FALSE) {
System.out.println("The headset is disconnected.");
}
}
}
}
When a change occurs in the headset status, this method receives the information. The value of barrierPresentStatus determines whether the headset is connected or disconnected.
At this point, we can detect that the headset has just been connected, so it is time to send a notification. The following method will take care of that:
Java:
private void createNotification(Context context) {
// Create PendingIntent to make user open the application when clicking on the notification
PendingIntent pendingIntent = PendingIntent.getActivity(context, 1234, new Intent(context, MainActivity.class), PendingIntent.FLAG_UPDATE_CURRENT);
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(context, "channelId")
.setSmallIcon(R.drawable.ic_headset)
.setContentTitle("Cool Headset!")
.setContentText("Want to listen to some music ?")
.setContentIntent(pendingIntent);
NotificationManager notificationManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
NotificationChannel mChannel = new NotificationChannel("channelId", "ChannelName", NotificationManager.IMPORTANCE_DEFAULT);
notificationManager.createNotificationChannel(mChannel);
}
notificationManager.notify(1234, notificationBuilder.build());
}
Output
When the headset is connected, the following notification will be created even if the application is not running:
Final Thoughts
We have learned how to use Barrier API of Huawei Awareness Kit with a foreground service to observe the changes in environmental factors even when the application is not running.
As you may have noticed, the permanent notification indicating that the application is running in the background cannot be dismissed by the user, which can be annoying. Even though such notifications can be dismissed with some third-party applications, not every user has them, so you should think carefully before deciding to build such services.
In this case, observing the headset status was just an example, so it might not be the best scenario for this use case, but Huawei Awareness Kit has many other great features that you can use with foreground services in your projects.
References
You can check the complete project on GitHub.
Note that you will not be able to run this project as-is, because you don’t have the agconnect-services.json file for it. Therefore, treat it as a reference for creating your own project.
Does Awareness Kit support wearable devices such as smart watches?

How to Build a 3D Product Model Within Just 5 Minutes

Displaying products with 3D models is something too good for an e-commerce app to ignore. With such models, an app can give users a fresh and memorable first impression of its products!
The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle, before they make a purchase. Together with the AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.
Despite its advantages, the 3D model has yet to be widely adopted. The underlying reason for this is that applying current 3D modeling technology is expensive:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.
Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.
Luckily, 3D object reconstruction, a capability in 3D Modeling Kit newly launched in HMS Core, makes 3D model building straightforward. This capability automatically generates a textured 3D model of an object from images shot at different angles with a standard RGB camera. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.
Actual Effect
Technical Solutions
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
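As noted above, the output is an OBJ file. OBJ is a plain-text format in which `v` lines define vertices and `f` lines define faces (the "patches" counted above). A small standalone Java sketch (class and method names are mine) that counts those records:

```java
public class ObjStats {
    // Count vertex ("v ") and face ("f ") records in OBJ-format text.
    static int[] countVerticesAndFaces(String obj) {
        int v = 0, f = 0;
        for (String line : obj.split("\n")) {
            if (line.startsWith("v ")) v++;
            else if (line.startsWith("f ")) f++;
        }
        return new int[] {v, f};
    }

    public static void main(String[] args) {
        String triangle = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n";
        int[] counts = countVerticesAndFaces(triangle);
        System.out.println(counts[0] + " vertices, " + counts[1] + " face");
    }
}
```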
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine. Then call uploadFile(), passing the task ID obtained in step 3 and the path of the images to be uploaded, to upload the images to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
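Since queryTask reports the status as a bare integer, a small lookup helper (a sketch of my own, not part of the SDK) keeps client code readable:

```java
public class ReconstructStatus {
    // Map the documented 3D object reconstruction status codes to readable labels.
    static String describe(int status) {
        switch (status) {
            case 0: return "images to be uploaded";
            case 1: return "image upload completed";
            case 2: return "model being generated";
            case 3: return "model generation completed";
            case 4: return "model generation failed";
            default: return "unknown status: " + status;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(3)); // prints "model generation completed"
    }
}
```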
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine. Call downloadModel, pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information
The object should have rich texture, be medium-sized, and be a rigid body. The object should not be reflective, transparent, or semi-transparent. Supported object types include goods (such as plush toys, bags, and shoes), furniture (such as sofas), and cultural relics (such as bronze, stone, and wooden artifacts).
The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: Place a single object on a stable plane of pure color. The environment should be neither dark nor dazzling. Keep all images in focus, free from blur caused by motion or shaking. Ensure images are taken from various angles, including the bottom, flat, and top (you are advised to upload more than 50 images per object). Move the camera as slowly as possible, and do not change the angle during shooting. Lastly, ensure the object-to-image ratio is as large as possible, and that all parts of the object are present.
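The size constraint above (each dimension between 15 cm and 150 cm) is easy to pre-check before starting an upload. A plain-Java sketch (helper names are my own, not a kit API):

```java
public class DimensionCheck {
    // The kit documents object dimensions from 15 x 15 x 15 cm to 150 x 150 x 150 cm.
    static boolean isSupportedSize(double wCm, double hCm, double dCm) {
        return inRange(wCm) && inRange(hCm) && inRange(dCm);
    }

    private static boolean inRange(double cm) {
        return cm >= 15.0 && cm <= 150.0;
    }

    public static void main(String[] args) {
        System.out.println(isSupportedSize(30, 30, 30)); // true
        System.out.println(isSupportedSize(10, 30, 30)); // false: one side is under 15 cm
    }
}
```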
These are all about the sample code of 3D object reconstruction. Try to integrate it into your app and build your own 3D models!
References
For more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems

How to Add AI Dubbing to App

Text to speech (TTS) is highly sought after by audio/video editors, thanks to its ability to automatically turn text into natural-sounding speech, as a low-cost alternative to human dubbing. It can be used on all kinds of video, long or short.
I recently stumbled upon the AI dubbing capability of HMS Core Audio Editor Kit, which does just that. It is able to turn input text into speech with just a tap, and comes loaded with a selection of smooth, naturally-sounding male and female timbres.
This is ideal for developing apps that involve e-books, creating audio content, and editing audio/video. Below describes how I integrated this capability.
Making Preparations
Complete all necessary preparations by following the official guide.
Configuring the Project
1. Set the app authentication information
The information can be set via an API key or access token (recommended).
Use setAccessToken to set an access token during app initialization.
Java:
HAEApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Java:
HAEApplication.getInstance().setApiKey("your ApiKey");
2. Initialize the runtime environment
Initialize HuaweiAudioEditor, and create a timeline and necessary lanes.
Java:
// Create a HuaweiAudioEditor instance.
HuaweiAudioEditor mEditor = HuaweiAudioEditor.create(mContext);
// Initialize the runtime environment of HuaweiAudioEditor.
mEditor.initEnvironment();
// Create a timeline.
HAETimeLine mTimeLine = mEditor.getTimeLine();
// Create a lane.
HAEAudioLane audioLane = mTimeLine.appendAudioLane();
Import audio.
Java:
// Add an audio asset to the end of the lane.
HAEAudioAsset audioAsset = audioLane.appendAudioAsset("/sdcard/download/test.mp3", mTimeLine.getCurrentTime());
3. Integrate AI dubbing.
Call HAEAiDubbingEngine to implement AI dubbing.
Java:
// Configure the AI dubbing engine.
HAEAiDubbingConfig haeAiDubbingConfig = new HAEAiDubbingConfig()
// Set the volume.
.setVolume(volumeVal)
// Set the speech speed.
.setSpeed(speedVal)
// Set the speaker.
.setType(defaultSpeakerType);
// Create a callback for an AI dubbing task.
HAEAiDubbingCallback callback = new HAEAiDubbingCallback() {
@Override
public void onError(String taskId, HAEAiDubbingError err) {
// Callback when an error occurs.
}
@Override
public void onWarn(String taskId, HAEAiDubbingWarn warn) {}
@Override
public void onRangeStart(String taskId, int start, int end) {}
@Override
public void onAudioAvailable(String taskId, HAEAiDubbingAudioInfo haeAiDubbingAudioFragment, int i, Pair<Integer, Integer> pair, Bundle bundle) {
// Start receiving and then saving the file.
}
@Override
public void onEvent(String taskId, int eventID, Bundle bundle) {
// Synthesis is complete.
if (eventID == HAEAiDubbingConstants.EVENT_SYNTHESIS_COMPLETE) {
// The AI dubbing task is complete, i.e. the synthesized audio data has been completely processed.
}
}
@Override
public void onSpeakerUpdate(List<HAEAiDubbingSpeaker> speakerList, List<String> lanList,
List<String> lanDescList) { }
};
// AI dubbing engine.
HAEAiDubbingEngine mHAEAiDubbingEngine = new HAEAiDubbingEngine(haeAiDubbingConfig);
// Set the listener for the playback process of an AI dubbing task.
mHAEAiDubbingEngine.setAiDubbingCallback(callback);
// Convert text to speech and play the speech. In the method, text indicates the text to be converted to speech, and mode indicates the mode for playing the converted audio.
String taskId = mHAEAiDubbingEngine.speak(text, mode);
// Pause playback.
mHAEAiDubbingEngine.pause();
// Resume playback.
mHAEAiDubbingEngine.resume();
// Stop AI dubbing.
mHAEAiDubbingEngine.stop();
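The onAudioAvailable callback above delivers the synthesized audio in fragments that the app is expected to receive and save. Below is a minimal, hypothetical sketch of how such fragments could be buffered until EVENT_SYNTHESIS_COMPLETE fires; the class name and the assumption that each fragment exposes its raw bytes are mine, not part of the kit's API:

```java
import java.io.ByteArrayOutputStream;

// Buffers audio fragments delivered by onAudioAvailable into one byte array.
// Illustrative helper: the fragment byte layout is an assumption.
public class DubbingAudioBuffer {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Call from onAudioAvailable with each fragment's raw bytes.
    public synchronized void append(byte[] fragment) {
        buffer.write(fragment, 0, fragment.length);
    }

    // Call once synthesis completes to obtain the full audio for saving.
    public synchronized byte[] toByteArray() {
        return buffer.toByteArray();
    }
}
```

In the callback above, you would call append() with each fragment's bytes in onAudioAvailable, then write the result of toByteArray() to a file when onEvent reports that synthesis is complete.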
Result
In my demo, I successfully implemented the AI dubbing function in the app. Now, I can convert text into emotionally expressive speech, with both default and custom timbres.
To learn more, please visit:
>> Audio Editor Kit official website
>> Audio Editor Kit Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Keep Track of Workouts While Running in the Background

It can be so frustrating to lose track of a workout because your fitness app stopped running in the background, for example when you turn off the screen, or switch to another app to listen to music or watch a video mid-workout. Talk about all of your sweat and effort going to waste!
Fitness apps work by recognizing and displaying the user's workout status in real time, using sensors on the phone or wearable device. They can obtain and display complete workout records only if they keep running in the background. Since most users turn off the screen or use other apps during a workout, staying alive in the background has become a must-have feature for fitness apps. However, to save battery power, most phones restrict or even forcibly close apps once they are running in the background, leaving workout data incomplete. When building your own fitness app, it's important to keep this limitation in mind.
There are two tried and tested ways to keep fitness apps running in the background:
Instruct the user to manually configure the settings on their phone or wearable device, for example, to disable battery optimization or to allow the app to run in the background. However, this process can be cumbersome and is not easy to follow.
Or, integrate development tools into your app, such as Health Kit, which provides APIs that allow your app to keep running in the background during workouts without losing track of any workout data.
The following details the process for integrating this kit.
Integration Procedure
1. Before you get started, apply for Health Kit on HUAWEI Developers, select the required data scopes, and integrate the Health SDK.
2. Obtain users' authorization, and apply for the scopes to read and write workout records.
3. Enable a foreground service to prevent your app from being frozen by the system, and call ActivityRecordsController in the foreground service to create a workout record that can run in the background.
4. Call beginActivityRecord of ActivityRecordsController to start the workout record. By default, an app will be allowed to run in the background for 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// 1. Build the start time of a new workout record.
long startTime = Calendar.getInstance().getTimeInMillis();
// 2. Build the ActivityRecord object and set the start time of the workout record.
ActivityRecord activityRecord = new ActivityRecord.Builder()
.setId("MyBeginActivityRecordId")
.setName("BeginActivityRecord")
.setDesc("This is ActivityRecord begin test!")
.setActivityTypeId(HiHealthActivities.RUNNING)
.setStartTime(startTime, TimeUnit.MILLISECONDS)
.build();
// 3. Construct the screen to be displayed when the workout record is running in the background. Note that you need to replace MyActivity with the Activity class of the screen.
ComponentName componentName = new ComponentName(this, MyActivity.class);
// 4. Construct a listener for the status change of the workout record.
OnActivityRecordListener activityRecordListener = new OnActivityRecordListener() {
@Override
public void onStatusChange(int statusCode) {
Log.i("ActivityRecords", "onStatusChange statusCode:" + statusCode);
}
};
// 5. Call beginActivityRecord to start the workout record.
Task<Void> task1 = activityRecordsController.beginActivityRecord(activityRecord, componentName, activityRecordListener);
// 6. ActivityRecord is successfully started.
task1.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "MyActivityRecord begin success");
}
// 7. ActivityRecord fails to be started.
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords", errorCode + ": " + errorMsg);
}
});
5. If the workout lasts for more than 10 minutes, call continueActivityRecord of ActivityRecordsController each time before a 10-minute period ends, to apply for the workout to continue for another 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call continueActivityRecord and pass the workout record ID for the record to continue in the background.
Task<Void> continueTask = activityRecordsController.continueActivityRecord("MyBeginActivityRecordId");
continueTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "continue backgroundActivityRecord was successful!");
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.i("ActivityRecords", "continue backgroundActivityRecord error");
}
});
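Since the 10-minute background window has to be renewed repeatedly, one way to drive these calls is a simple scheduler running inside the foreground service. The following is a sketch under stated assumptions, not part of the Health Kit API: the Runnable passed in would wrap the continueActivityRecord call shown above, and the class name and timings are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically runs a renewal action (e.g. wrapping continueActivityRecord)
// slightly before each 10-minute background window expires.
public class WorkoutKeepAlive {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // periodMillis should be a little under 10 minutes, e.g. 9 * 60 * 1000.
    public void start(Runnable renewAction, long periodMillis) {
        scheduler.scheduleAtFixedRate(renewAction, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    // Call when the workout ends, before endActivityRecord.
    public void stop() {
        scheduler.shutdownNow();
    }
}
```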
6. When the user finishes the workout, call endActivityRecord of ActivityRecordsController to stop the record and stop keeping it alive in the background.
Code:
// Note that this refers to an Activity object.
final ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call endActivityRecord to stop the workout record. The input parameter is null or the ID string of ActivityRecord.
// Stop a workout record of the current app by specifying the ID string as the input parameter.
// Stop all workout records of the current app by specifying null as the input parameter.
Task<List<ActivityRecord>> endTask = activityRecordsController.endActivityRecord("MyBeginActivityRecordId");
endTask.addOnSuccessListener(new OnSuccessListener<List<ActivityRecord>>() {
@Override
public void onSuccess(List<ActivityRecord> activityRecords) {
Log.i("ActivityRecords","MyActivityRecord End success");
// Return the list of workout records that have stopped.
if (activityRecords.size() > 0) {
for (ActivityRecord activityRecord : activityRecords) {
DateFormat dateFormat = DateFormat.getDateInstance();
DateFormat timeFormat = DateFormat.getTimeInstance();
Log.i("ActivityRecords", "Returned for ActivityRecord: " + activityRecord.getName() + "\n\tActivityRecord Identifier is "
+ activityRecord.getId() + "\n\tActivityRecord created by app is " + activityRecord.getPackageName()
+ "\n\tDescription: " + activityRecord.getDesc() + "\n\tStart: "
+ dateFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + "\n\tEnd: "
+ dateFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + "\n\tActivity:"
+ activityRecord.getActivityType());
}
} else {
// null will be returned if the workout record hasn't stopped.
Log.i("ActivityRecords","MyActivityRecord End response is null");
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords",errorCode + ": " + errorMsg);
}
});
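The success listener above logs the raw start and end timestamps of each returned record. If you also want to show the workout's duration to the user, a small helper like the following (the class and method names are mine, not part of the kit) can format the elapsed time:

```java
import java.util.concurrent.TimeUnit;

// Formats the elapsed time between a workout's start and end timestamps
// (both in milliseconds) as "HH:mm:ss". Illustrative helper.
public class WorkoutDuration {
    public static String format(long startMillis, long endMillis) {
        long seconds = TimeUnit.MILLISECONDS.toSeconds(endMillis - startMillis);
        return String.format("%02d:%02d:%02d",
                seconds / 3600, (seconds % 3600) / 60, seconds % 60);
    }
}
```

You would call it with activityRecord.getStartTime(TimeUnit.MILLISECONDS) and activityRecord.getEndTime(TimeUnit.MILLISECONDS) from the loop above.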
Note that the API for keeping your app running in the background is a sensitive one, and calling it requires manual approval. Make sure that your app meets the data security and compliance requirements before applying to release it.
Conclusion
Health Kit allows you to build apps that continue tracking workouts in the background, even when the screen has been turned off or another app has been opened in the foreground. It's a must-have for fitness app developers. Integrate the kit to get started today!
References
HUAWEI Developers
Development Procedure for Keeping Your App Running in the Background

3D Product Model: See How to Create One in 5 Minutes

Quick question: How do 3D models help e-commerce apps?
The most obvious answer is that it makes the shopping experience more immersive, and there are a whole host of other benefits they bring.
To begin with, a 3D model is a more impressive way of showcasing a product to potential customers. It displays richer details, allowing potential customers to rotate the product and view it from every angle, which helps them make more informed purchasing decisions. Not only that, customers can virtually try on 3D products, recreating the experience of shopping in a physical store. In short, all these factors contribute to boosting user conversion.
As great as they are, 3D models have not been widely adopted by those who want them. A major reason is that building a 3D model with existing advanced 3D modeling technology is very costly, due to:
Technical requirements: Building a 3D model requires someone with expertise, which can take time to master.
Time: It takes at least several hours to build a low-polygon model for a simple object, not to mention a high-polygon one.
Spending: The average cost of building just a simple model can reach hundreds of dollars.
Fortunately for us, the 3D object reconstruction capability found in HMS Core 3D Modeling Kit makes 3D model creation easy-peasy. This capability automatically generates a texturized 3D model for an object, via images shot from multiple angles with a standard RGB camera on a phone. And what's more, the generated model can be previewed. Let's check out a shoe model created using the 3D object reconstruction capability.
Shoe Model Images
Technical Solutions
3D object reconstruction requires both the device and cloud. Images of an object are captured on a device, covering multiple angles of the object. And then the images are uploaded to the cloud for model creation. The on-cloud modeling process and key technologies include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Once the model is created, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Now the boring part is out of the way. Let's move on to the exciting part: how to integrate the 3D object reconstruction capability.
Integrating the 3D Object Reconstruction Capability
Preparations
1. Configure the build dependency for the 3D Modeling SDK.
Add the build dependency for the 3D Modeling SDK in the dependencies block in the app-level build.gradle file.
Code:
// Build dependency for the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configure AndroidManifest.xml.
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission as needed:
Code:
<!-- Write into and read from external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Function Development
1. Configure the storage permission application.
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are granted, initialize the UI; if the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Create a 3D object reconstruction configurator.
Code:
// PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Create a 3D object reconstruction engine and initialize the task.
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Initialize the engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Create a 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Create a listener callback to process the image upload result.
Create a listener callback in which you can configure the operations triggered upon upload success and failure.
Code:
// Create a listener callback for the image upload task.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Set the image upload listener for the 3D object reconstruction engine and upload the captured images.
Pass the upload callback to the engine. Call uploadFile(), pass the task ID obtained in step 3 and the path of the images to be uploaded, and upload the images to the cloud server.
Code:
// Set the upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Upload captured images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Query the task status.
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Initialize the task processing class.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask to query the status of the 3D object reconstruction task.
Code:
// Query the reconstruction task execution result. The options are as follows: 0: To be uploaded; 1: Generating; 3: Completed; 4: Failed.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
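Since modeling runs asynchronously on the cloud, an app typically polls queryTask until a terminal status is returned. The following is a sketch of such a polling loop under stated assumptions: the status supplier stands in for the queryTask call above, the status codes follow the comment in the snippet (3 completed, 4 failed), and the class name, interval, and retry cap are mine.

```java
import java.util.function.IntSupplier;

// Polls a task-status supplier until the task reaches a terminal state.
// Illustrative helper; the supplier would wrap a queryTask call.
public class TaskPoller {
    public static int waitForResult(IntSupplier statusQuery,
                                    long intervalMillis,
                                    int maxAttempts) throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            int status = statusQuery.getAsInt();
            if (status == 3 || status == 4) {
                return status;  // completed or failed
            }
            Thread.sleep(intervalMillis);  // still uploading or generating
        }
        return -1;  // gave up without reaching a terminal state
    }
}
```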
7. Create a listener callback to process the model file download result.
Create a listener callback in which you can configure the operations triggered upon download success and failure.
Code:
// Create a download callback listener
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Pass the download listener callback to the engine to download the generated model file.
Pass the download listener callback to the engine. Call downloadModel, passing the task ID obtained in step 3 and the path for saving the model file, to download the file.
Code:
// Set the listener for the model file download task.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
Notes
1. To deliver an ideal modeling result, 3D object reconstruction has some requirements on the object to be modeled. For example, the object should have rich textures and a fixed shape. The object is expected to be non-reflective and medium-sized. Transparency or semi-transparency is not recommended. An object that meets these requirements may fall into one of the following types: goods (including plush toys, bags, and shoes), furniture (like sofas), and cultural relics (like bronzes, stone artifacts, and wooden artifacts).
2. The object dimensions should be within the range of 15 x 15 x 15 cm to 150 x 150 x 150 cm. (Larger dimensions require a longer modeling time.)
3. Modeling for the human body or face is not yet supported by the capability.
4. Suggestions for image capture: Put a single object on a stable, solid-colored plane. The environment should be well lit and plain. Keep all images in focus, free from blur caused by motion or shaking, and take pictures of the object from various angles, including the bottom, front, and top. Uploading more than 50 images per object is recommended. Move the camera as slowly as possible, and do not suddenly change the angle when taking pictures. The object should fill as much of each image as possible, with no part of the object missing.
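Note 2 above gives a hard range for object dimensions. If your app lets users enter or measure the object's size before capture, a tiny validation helper (illustrative, not part of the kit) can reject unsupported objects early:

```java
// Checks whether an object's dimensions (in cm) fall within the supported
// modeling range of 15 x 15 x 15 cm to 150 x 150 x 150 cm.
public class ModelSizeCheck {
    public static boolean isSupported(double widthCm, double heightCm, double depthCm) {
        return inRange(widthCm) && inRange(heightCm) && inRange(depthCm);
    }

    private static boolean inRange(double cm) {
        return cm >= 15 && cm <= 150;
    }
}
```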
With all of this in mind, plus the development procedure of the capability, we are now ready to create a 3D model like the shoe model shown earlier. We look forward to seeing the models you create with this capability in the comments section below.
References
Home page of 3D Modeling Kit
Service introduction to 3D Modeling Kit
Detailed information about 3D object reconstruction
