MVVM Architecture On HarmonyOS Using Retrofit And RxJava - Huawei Developers

In this tutorial, we will discuss and implement the MVVM architectural pattern in a HarmonyOS app.
This project is available on GitHub; the link can be found at the end of the article.
Table of contents
What is MVVM
Harmony MVVM example project structure
Adding dependencies
Model
Layout
Retrofit interface
ViewModel
Tips and Tricks
Conclusion
Recommended resources
What is MVVM
MVVM stands for Model, View, ViewModel:
Model: This holds the data of the application. It cannot talk to the View directly. Generally, it's recommended to expose the data to the ViewModel through ActiveData (observables).
View: It represents the UI of the application, devoid of any application logic. It observes the ViewModel.
ViewModel: It acts as a link between the Model and the View. It is responsible for transforming the data from the Model and providing data streams to the View. It also uses hooks or callbacks to update the View, and it asks the Model for data.
MVVM can be achieved in two ways:
Using Data binding
RxJava
In this tutorial we will implement MVVM in HarmonyOS using RxJava, as data binding is still under development and not ready to use on HarmonyOS.
Harmony MVVM example project structure
We will create packages by feature. This makes the code more modular and manageable.
Adding the Dependencies
Add the following dependencies in your module level build.gradle file:
Code:
dependencies {
    //[...]
    // RxJava
    implementation "io.reactivex.rxjava2:rxjava:2.2.17"
    // Retrofit
    implementation 'com.squareup.retrofit2:retrofit:2.6.0'
    // Moshi converter (used in this tutorial)
    implementation "com.squareup.retrofit2:converter-moshi:2.6.0"
    // Gson converter (optional; not used in this tutorial)
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
    // RxJava adapter for Retrofit
    implementation 'com.squareup.retrofit2:adapter-rxjava2:2.7.1'
}
Model
The Model would hold the user’s email and password. The following User.java class does it:
Code:
package com.megaache.mvvmdemo.model;

public class User {
    private String email;
    private String password;

    public User(String email, String password) {
        this.email = email;
        this.password = password;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public String getPassword() {
        return password;
    }

    @Override
    public String toString() {
        return "User{" +
                "email='" + email + '\'' +
                ", password='" + password + '\'' +
                '}';
    }
}
Layout
NOTE: For this tutorial, I have decided to create the layout for smart watch devices; however, it will work fine on all devices, you just need to re-arrange the components and modify the alignment.
The layout will consist of a login button, two text fields, and two error texts. After the login button is clicked, each error text is shown or hidden depending on the value of the text field above it. The final UI will look like the screenshot below:
Before we create the layout, let's add some colors:
First, create the file color.json under resources/base/element and add the following JSON content:
Code:
{
  "color": [
    {
      "name": "primary",
      "value": "#283148"
    },
    {
      "name": "primaryDark",
      "value": "#283148"
    },
    {
      "name": "accent",
      "value": "#06EBBF"
    },
    {
      "name": "red",
      "value": "#FF406E"
    }
  ]
}
Then, let's design background elements for the text fields and the button:
Create the files background_text_field.xml and background_button.xml under resources/base/graphic as shown in the screenshot below.
Then add the following code:
background_text_field.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <corners
        ohos:radius="20"/>
    <solid
        ohos:color="#ffffff"/>
    <stroke
        ohos:width="2"
        ohos:color="$color:accent"/>
</shape>
background_button.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <corners
        ohos:radius="20"/>
    <solid
        ohos:color="$color:accent"/>
</shape>
Now let's create the background element for the main layout; we'll call it background_ability_login.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:shape="rectangle">
    <solid
        ohos:color="$color:primaryDark"/>
</shape>
Finally, let’s create the layout file ability_login.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<ScrollView
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:id="$+id:scrollview"
    ohos:height="match_parent"
    ohos:width="match_parent"
    ohos:background_element="$graphic:background_ability_login"
    ohos:layout_alignment="horizontal_center"
    ohos:rebound_effect="true">

    <DirectionalLayout
        ohos:height="match_content"
        ohos:width="match_parent"
        ohos:orientation="vertical"
        ohos:padding="20vp">

        <DirectionalLayout
            ohos:height="match_content"
            ohos:width="match_parent"
            ohos:layout_alignment="center"
            ohos:orientation="vertical">

            <TextField
                ohos:id="$+id:tf_email"
                ohos:height="match_content"
                ohos:width="match_parent"
                ohos:background_element="$graphic:background_text_field"
                ohos:hint="email"
                ohos:left_padding="10vp"
                ohos:min_height="40vp"
                ohos:multiple_lines="false"
                ohos:text_alignment="vertical_center"
                ohos:text_color="black"
                ohos:text_input_type="pattern_text"
                ohos:text_size="15fp"/>

            <Text
                ohos:id="$+id:t_email_invalid"
                ohos:height="match_content"
                ohos:width="match_content"
                ohos:layout_alignment="center"
                ohos:text="invalid email"
                ohos:text_color="$color:red"
                ohos:text_size="15fp"/>
        </DirectionalLayout>

        <DirectionalLayout
            ohos:height="match_content"
            ohos:width="match_parent"
            ohos:layout_alignment="center"
            ohos:orientation="vertical"
            ohos:top_margin="10vp">

            <TextField
                ohos:id="$+id:tf_password"
                ohos:height="match_content"
                ohos:width="match_parent"
                ohos:background_element="$graphic:background_text_field"
                ohos:hint="password"
                ohos:left_padding="10vp"
                ohos:min_height="40vp"
                ohos:multiple_lines="false"
                ohos:text_alignment="vertical_center"
                ohos:text_color="black"
                ohos:text_input_type="pattern_password"
                ohos:text_size="15fp"/>

            <Text
                ohos:id="$+id:t_password_invalid"
                ohos:height="match_content"
                ohos:width="match_content"
                ohos:layout_alignment="center"
                ohos:padding="0vp"
                ohos:text="invalid password"
                ohos:text_color="$color:red"
                ohos:text_size="15fp"/>
        </DirectionalLayout>

        <Button
            ohos:id="$+id:btn_login"
            ohos:height="match_content"
            ohos:width="match_parent"
            ohos:background_element="$graphic:background_button"
            ohos:bottom_margin="30vp"
            ohos:min_height="40vp"
            ohos:text="login"
            ohos:text_color="#fff"
            ohos:text_size="18fp"
            ohos:top_margin="10vp"/>
    </DirectionalLayout>
</ScrollView>
Retrofit interface
Before we move to the ViewModel, we have to set up our Retrofit service and repository class.
To keep the project clean, I will create a class Config.java which will hold our API URLs:
Code:
package com.megaache.mvvmdemo;

public class Config {
    //todo: update the base URL with a valid URL
    public static final String BASE_URL = "https://example.com";
    // Retrofit requires the effective base URL to end with "/"
    public static final String API_VERSION = "/api/v1/";
    public static final String LOGIN_URL = "auth/login";
}
Note: the URLs are just for demonstration. For the demo to work, you must replace them.
First, create the interface APIServices.java:
For this tutorial, we assume the login endpoint uses the POST method; you may change this depending on your API. The login method returns an Observable, which will be observed in the ViewModel using RxJava.
Code:
package com.megaache.mvvmdemo.network;

import com.megaache.mvvmdemo.Config;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
import retrofit2.http.Body;
import retrofit2.http.Headers;
import retrofit2.http.POST;

public interface APIServices {

    @POST(Config.LOGIN_URL)
    @Headers("Content-Type: application/json;charset=UTF-8")
    Observable<LoginResponse> login(@Body LoginRequest loginRequest);
}
Note: the class LoginRequest, which you will see later in this tutorial, must match the request the server expects in both the names of its variables and their types; otherwise the server will fail to process the request.
Then, add the method createRetrofitClient() to MyApplication.java. It creates and returns a Retrofit instance that uses the Moshi converter to handle the conversion of JSON to our Java classes, and the RxJava2 adapter to return observables that work with RxJava instead of the default Call class, which requires callbacks:
Code:
package com.megaache.mvvmdemo;

import com.megaache.mvvmdemo.network.APIServices;
import ohos.aafwk.ability.AbilityPackage;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;

import java.util.concurrent.TimeUnit;

public class MyApplication extends AbilityPackage {

    @Override
    public void onInitialize() {
        super.onInitialize();
    }

    public static APIServices createRetrofitClient() {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(60L, TimeUnit.SECONDS)
                .build();
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl(Config.BASE_URL + Config.API_VERSION)
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .addConverterFactory(MoshiConverterFactory.create())
                .client(client)
                .build();
        return retrofit.create(APIServices.class);
    }
}
NOTE: For cleaner code, you can create file RetrofitClient.java and move the method createRetrofitClient() to it.
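If you do, a minimal sketch of such a RetrofitClient.java is shown below; the lazy caching of the Retrofit instance is an assumption for illustration, not part of the original project:
Code:
package com.megaache.mvvmdemo.network;

import com.megaache.mvvmdemo.Config;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;

import java.util.concurrent.TimeUnit;

public class RetrofitClient {
    private static APIServices apiServices;

    private RetrofitClient() {
    }

    // Build the Retrofit client once and reuse it across the app
    public static synchronized APIServices getApiServices() {
        if (apiServices == null) {
            OkHttpClient client = new OkHttpClient.Builder()
                    .connectTimeout(60L, TimeUnit.SECONDS)
                    .build();
            Retrofit retrofit = new Retrofit.Builder()
                    .baseUrl(Config.BASE_URL + Config.API_VERSION)
                    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                    .addConverterFactory(MoshiConverterFactory.create())
                    .client(client)
                    .build();
            apiServices = retrofit.create(APIServices.class);
        }
        return apiServices;
    }
}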
Now, let's work on the login feature. We are going to create the request and response classes first, then move on to the ViewModel and the View:
We need LoginRequest and LoginResponse, which will extend BaseRequest and BaseResponse respectively; the code is shown below:
Create BaseRequest.java:
In a real-life project, your API may expect some parameters to be sent with every request, for example accessToken, language, deviceId, pushToken, etc., depending on your API. For this tutorial I added one field called deviceType with a static value.
Code:
package com.megaache.mvvmdemo.network.request;

public class BaseRequest {
    private String deviceType;

    public BaseRequest() {
        deviceType = "harmony-watch";
    }

    public String getDeviceType() {
        return deviceType;
    }

    public void setDeviceType(String deviceType) {
        this.deviceType = deviceType;
    }
}
Create the class LoginRequest.java, which extends BaseRequest and has two fields, email and password, which will be provided by the end user:
Code:
package com.megaache.mvvmdemo.network.request;

public class LoginRequest extends BaseRequest {
    private String email;
    private String password;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}
Then, for the response, create BaseResponse.java first:
Code:
package com.megaache.mvvmdemo.network.response;

import java.io.Serializable;

public class BaseResponse implements Serializable {
}
Then LoginResponse.java extending BaseResponse:
Code:
package com.megaache.mvvmdemo.network.response;

import com.megaache.mvvmdemo.model.User;
import com.squareup.moshi.Json;

public class LoginResponse extends BaseResponse {
    @Json(name = "user")
    private User user;
    @Json(name = "accessToken")
    private String accessToken;

    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }

    public String getAccessToken() {
        return accessToken;
    }

    public void setAccessToken(String accessToken) {
        this.accessToken = accessToken;
    }
}
Note: this class must match the response you get from the server, otherwise the Retrofit Moshi converter will fail to convert the response to the LoginResponse class; both the types of the variables and their names must equal those in the JSON response.
ViewModel
In the ViewModel, we will wrap the data loaded with Retrofit inside the class LoggedIn in LoginViewState, and observe the states ActiveData defined in BaseViewModel in our Ability (or AbilitySlice). Whenever the value in states changes, the ability will be notified, without us having to check manually whether the ability is still alive.
The code for LoginViewState.java, which extends the empty class BaseViewState.java, and for ErrorData.java (used in LoginViewState.java) is given below:
ErrorData.java:
Code:
package com.megaache.mvvmdemo.model;

import java.io.Serializable;

public class ErrorData implements Serializable {
    private String message;
    private int statusCode;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public int getStatusCode() {
        return statusCode;
    }

    public void setStatusCode(int statusCode) {
        this.statusCode = statusCode;
    }
}
LoginViewState.java:
Code:
package com.megaache.mvvmdemo.ui.login;

import com.megaache.mvvmdemo.base.BaseViewState;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.response.LoginResponse;

public class LoginViewState extends BaseViewState {

    public static class Loading extends LoginViewState {
    }

    public static class Error extends LoginViewState {
        private ErrorData message;

        public Error(ErrorData message) {
            this.message = message;
        }

        public void setMessage(ErrorData message) {
            this.message = message;
        }

        public ErrorData getMessage() {
            return message;
        }
    }

    public static class LoggedIn extends LoginViewState {
        private LoginResponse userDataResponse;

        public LoggedIn(LoginResponse userDataResponse) {
            this.userDataResponse = userDataResponse;
        }

        public LoginResponse getUserDataResponse() {
            return userDataResponse;
        }

        public void setUserDataResponse(LoginResponse userDataResponse) {
            this.userDataResponse = userDataResponse;
        }
    }
}
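The article does not list BaseViewState.java or BaseViewModel.java. A minimal sketch consistent with how they are used in this tutorial (an empty marker class, plus getStates(), subscribe() and unbind()) is shown below; the original classes may differ:
Code:
// BaseViewState.java (empty marker class)
package com.megaache.mvvmdemo.base;

public class BaseViewState {
}

// BaseViewModel.java (same package, separate file)
package com.megaache.mvvmdemo.base;

import io.reactivex.Observable;
import io.reactivex.disposables.Disposable;
import ohos.aafwk.abilityjet.activedata.ActiveData;

public abstract class BaseViewModel<S extends BaseViewState> {
    private final ActiveData<S> states = new ActiveData<>();
    private Disposable disposable;

    // The Ability observes this to receive every new view state
    public ActiveData<S> getStates() {
        return states;
    }

    // Subscribing here is what actually triggers the Retrofit request;
    // every emitted state is pushed into the ActiveData
    protected void subscribe(Observable<S> observable) {
        disposable = observable.subscribe(states::setData, Throwable::printStackTrace);
    }

    // Called from the ability's onStop() to cancel any in-flight request
    public void unbind() {
        if (disposable != null && !disposable.isDisposed())
            disposable.dispose();
    }
}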
The code for LoginViewModel.java is given below:
When the user clicks the login button, the method sendLoginRequest() sets up our Retrofit Observable. The request is not actually sent until we call the subscribe method (done in BaseViewModel). Notice we are subscribing on the Schedulers.io() scheduler, which executes the request on a background thread to avoid freezing the UI; because of that, we have to create a custom observer that invokes the callback code on the UI thread after we receive data. More on this later:
Code:
package com.megaache.mvvmdemo.ui.login;

import com.megaache.mvvmdemo.base.BaseViewModel;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;
import ohos.aafwk.abilityjet.activedata.ActiveData;

public class LoginViewModel extends BaseViewModel<LoginViewState> {
    private static final int MIN_PASSWORD_LENGTH = 6;

    public ActiveData<Boolean> emailValid = new ActiveData<>();
    public ActiveData<Boolean> passwordValid = new ActiveData<>();
    public ActiveData<Boolean> loginState = new ActiveData<>();

    public LoginViewModel() {
        super();
    }

    public void login(String email, String password) {
        boolean isEmailValid = isEmailValid(email);
        emailValid.setData(isEmailValid);
        if (!isEmailValid)
            return;
        boolean isPasswordValid = isPasswordValid(password);
        passwordValid.setData(isPasswordValid);
        if (!isPasswordValid)
            return;
        LoginRequest loginRequest = new LoginRequest();
        loginRequest.setEmail(email);
        loginRequest.setPassword(password);
        super.subscribe(sendLoginRequest(loginRequest));
    }

    private Observable<LoginViewState> sendLoginRequest(LoginRequest loginRequest) {
        return MyApplication.createRetrofitClient()
                .login(loginRequest)
                .doOnError(Throwable::printStackTrace)
                .map(LoginViewState.LoggedIn::new)
                .cast(LoginViewState.class)
                .onErrorReturn(throwable -> {
                    ErrorData errorData = new ErrorData();
                    if (throwable.getMessage() != null)
                        errorData.setMessage(throwable.getMessage());
                    else
                        errorData.setMessage("No internet!");
                    return new LoginViewState.Error(errorData);
                })
                .subscribeOn(Schedulers.io())
                .startWith(new LoginViewState.Loading());
    }

    private boolean isEmailValid(String email) {
        return email != null && !email.isEmpty() && email.contains("@");
    }

    private boolean isPasswordValid(String password) {
        return password != null && password.length() > MIN_PASSWORD_LENGTH;
    }
}
Setting up the ability (View)
As you know, the ability is our View. We instantiate the ViewModel and observe its states and ActiveData fields in the method observeData(). As mentioned before, Retrofit sends the request on a background thread, so the code in the observer would run on that same thread (Schedulers.io()), which would cause exceptions if it attempted to update the UI. To prevent that, we create a custom UiObserver class, extending DataObserver, that runs our code on the ability's UI task dispatcher (UI thread). The code for UiObserver.java is shown below:
Code:
package com.megaache.mvvmdemo.utils;

import ohos.aafwk.ability.Ability;
import ohos.aafwk.abilityjet.activedata.DataObserver;
import ohos.app.dispatcher.TaskDispatcher;

public abstract class UiObserver<T> extends DataObserver<T> {
    private TaskDispatcher uiTaskDispatcher;

    public UiObserver(Ability ability) {
        setLifecycle(ability.getLifecycle());
        uiTaskDispatcher = ability.getUITaskDispatcher();
    }

    @Override
    public void onChanged(T t) {
        // Dispatch the callback onto the UI thread
        uiTaskDispatcher.asyncDispatch(() -> onValueChanged(t));
    }

    public abstract void onValueChanged(T t);
}
Code for LoginAbility.java is shown below:
Code:
package com.megaache.mvvmdemo.ui.login;

import com.megaache.mvvmdemo.ResourceTable;
import com.megaache.mvvmdemo.utils.UiObserver;
import com.megaache.mvvmdemo.model.ErrorData;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.agp.components.TextField;
import ohos.agp.window.dialog.ToastDialog;

public class LoginAbility extends Ability {
    private LoginViewModel loginViewModel;
    private TextField emailTF;
    private Text emailInvalidT;
    private TextField passwordTF;
    private Text passwordInvalidT;

    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        loginViewModel = new LoginViewModel();
        initUI();
        observeData();
    }

    private void initUI() {
        super.setUIContent(ResourceTable.Layout_ability_login);
        Button loginButton = (Button) findComponentById(ResourceTable.Id_btn_login);
        loginButton.setClickedListener(c -> attemptLogin());
        emailTF = (TextField) findComponentById(ResourceTable.Id_tf_email);
        emailInvalidT = (Text) findComponentById(ResourceTable.Id_t_email_invalid);
        passwordTF = (TextField) findComponentById(ResourceTable.Id_tf_password);
        passwordInvalidT = (Text) findComponentById(ResourceTable.Id_t_password_invalid);
    }

    private void observeData() {
        loginViewModel.emailValid.addObserver(new UiObserver<Boolean>(this) {
            @Override
            public void onValueChanged(Boolean aBoolean) {
                // Show the error text only when the email is invalid
                emailInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE);
            }
        }, false);
        loginViewModel.passwordValid.addObserver(new UiObserver<Boolean>(this) {
            @Override
            public void onValueChanged(Boolean aBoolean) {
                // Show the error text only when the password is invalid
                passwordInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE);
            }
        }, false);
        loginViewModel.getStates().addObserver(new UiObserver<LoginViewState>(this) {
            @Override
            public void onValueChanged(LoginViewState loginState) {
                if (loginState instanceof LoginViewState.Loading) {
                    toggleLoadingDialog(true);
                } else if (loginState instanceof LoginViewState.Error) {
                    toggleLoadingDialog(false);
                    manageError(((LoginViewState.Error) loginState).getMessage());
                } else if (loginState instanceof LoginViewState.LoggedIn) {
                    toggleLoadingDialog(false);
                    showToast("login successful!");
                }
            }
        }, false);
    }

    private void attemptLogin() {
        loginViewModel.login(emailTF.getText(), passwordTF.getText());
    }

    private void toggleLoadingDialog(boolean show) {
        //todo: show/hide loading dialog
    }

    private void manageError(ErrorData errorData) {
        showToast(errorData.getMessage());
    }

    private void showToast(String message) {
        new ToastDialog(this)
                .setText(message)
                .show();
    }

    @Override
    protected void onStop() {
        super.onStop();
        loginViewModel.unbind();
    }
}
Tips and Tricks
If you want your app to work offline, it's best to introduce repository classes that query information from the server when the internet is available, or from the cache when it is not (see the sketch after this list).
For cleaner code, try to re-use ViewModel logic as much as possible by creating a base class and moving the shared code there.
You should not keep a reference to a View (component) or context in the ViewModel, unless you have no other option.
The ViewModel should not talk directly to the View; instead, the View observes the ViewModel and updates itself depending on the ViewModel's data.
A correct implementation of ViewModel should allow you to change the UI with minimal or zero changes to the ViewModel.
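To illustrate the first tip above, here is a minimal, hypothetical LoginRepository sketch; the class name, the in-memory cache and the connectivity flag are assumptions for demonstration only:
Code:
package com.megaache.mvvmdemo.repository;

import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;

public class LoginRepository {
    // Hypothetical in-memory cache; a real app would persist to a database or file
    private LoginResponse cachedResponse;

    public Observable<LoginResponse> login(LoginRequest request, boolean isNetworkAvailable) {
        if (isNetworkAvailable) {
            return MyApplication.createRetrofitClient()
                    .login(request)
                    // Remember the last successful response for offline use
                    .doOnNext(response -> cachedResponse = response);
        }
        if (cachedResponse != null) {
            return Observable.just(cachedResponse);
        }
        return Observable.error(new IllegalStateException("No internet and no cached data"));
    }
}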
Conclusion
MVVM combines the separation of concerns provided by the MVP architecture with the advantages of RxJava or data binding. The result is a pattern where the model drives as many of the operations as possible, minimizing the logic in the view.
Finally, talk is cheap, and I strongly advise you to try and learn these things in code so that you do not need to rely on people like me to tell you what to do.
Clone the project from GitHub, replace the API URLs in the class Config.java, and run it on a HarmonyOS device. You should see a toast that says "login successful!" if the credentials are correct; otherwise it should show a toast with a "No internet!" error or an error returned from the server.
This project is available on GitHub: Click here
Recommended resources:
HarmonyOS (essential topics): Essential Topics
Retrofit: Click here
Rxjava: Click here
Original Source
Comment below if you have any questions or suggestions.
Thank you!

In Android we have the Manifest file where we declare all the activity names; do we have a similar file here?

Related

Bluetooth: stuck at connect()

I'm programming a simple Bluetooth client to send and receive text messages through RFCOMM as a serial port. I had a look at the Android SDK tutorials and did it the same way: an Activity which calls a thread to make the connection and, once done, another thread to take care of message reception.
I'm trying to connect to a Parallax EasyBluetooth. The connection works all right between a computer and the EasyBT, and also between a Java-based mobile and the EasyBT. So the problem must be in the code or, I hope not, in the Android phone's Bluetooth chip. Anyway, it turns on and off and detects other devices when scanning, so I guess the problem is just my coding.
The problem is that the code gets stuck at the connect() method. So let's see if anyone knows why.
The XML for the activity is simple:
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello" />

    <Button
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/boton"
        android:id="@+id/boton_enviar" />
</LinearLayout>
Of course I have added the Bluetooth permissions to the Manifest:
Code:
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
And the code is the following:
Code:
package uniovi.PFC;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import android.app.Activity;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.ArrayAdapter;
import android.widget.Button;

public class PruebaBTActivity extends Activity {
    private String TAG = "pruebaBT";
    private BluetoothAdapter mBluetoothAdapter;
    private Map<String, BluetoothDevice> mArrayAdapter;
    private ConnectedThread hiloEscuchas;
    private ConnectThread hiloConectando;
    private Handler mHandler;
    private Button botonEnviar;
    private static final UUID MY_UUID = UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
    private static final int REQUEST_ENABLE_BT = 1;
    private static final int MESSAGE_READ = 1;
    private byte bytes_enviar[];
    private String cmd;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        Log.d(TAG, "On create abriendo");
        mArrayAdapter = new HashMap<String, BluetoothDevice>();
        botonEnviar = (Button) findViewById(R.id.boton_enviar);
        botonEnviar.setEnabled(false);
        botonEnviar.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                cmd = "A";
                bytes_enviar = cmd.getBytes();
                hiloEscuchas.write(bytes_enviar);
            }
        });
        Log.d(TAG, "On create cerrando");
    }

    @Override
    public void onResume() {
        super.onResume();
        mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
        if (mBluetoothAdapter == null) {
            Log.d(TAG, "Device does not support Bluetooth");
        }
        if (!mBluetoothAdapter.isEnabled()) {
            Intent enableBtIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
            startActivityForResult(enableBtIntent, REQUEST_ENABLE_BT);
        }
        Set<BluetoothDevice> pairedDevices = mBluetoothAdapter.getBondedDevices();
        // If there are paired devices
        if (pairedDevices.size() > 0) {
            // Loop through paired devices
            for (BluetoothDevice device : pairedDevices) {
                // Add the name and address to an array adapter to show in a ListView
                mArrayAdapter.put(device.getName(), device);
            }
        }
        BluetoothDevice device = mArrayAdapter.get("EasyBT");
        hiloConectando = new ConnectThread(device);
        hiloConectando.run();
        //while(hiloEscuchas.isConnected()==false);
        //botonEnviar.setEnabled(true);
    }

    private class ConnectThread extends Thread {
        private final BluetoothSocket mmSocket;
        private final BluetoothDevice mmDevice;

        public ConnectThread(BluetoothDevice device) {
            // Use a temporary object that is later assigned to mmSocket,
            // because mmSocket is final
            BluetoothSocket tmp = null;
            mmDevice = device;
            // Get a BluetoothSocket to connect with the given BluetoothDevice
            try {
                // MY_UUID is the app's UUID string, also used by the server code
                tmp = device.createRfcommSocketToServiceRecord(MY_UUID);
            } catch (IOException e) { }
            mmSocket = tmp;
        }

        public void run() {
            // Cancel discovery because it will slow down the connection
            mBluetoothAdapter.cancelDiscovery();
            try {
                // Connect the device through the socket. This will block
                // until it succeeds or throws an exception
                mmSocket.connect();
            } catch (IOException connectException) {
                // Unable to connect; close the socket and get out
                try {
                    mmSocket.close();
                } catch (IOException closeException) { }
                return;
            }
            // Do work to manage the connection (in a separate thread)
            hiloEscuchas = new ConnectedThread(mmSocket);
            hiloEscuchas.run();
        }

        /** Will cancel an in-progress connection, and close the socket */
        public void cancel() {
            try {
                mmSocket.close();
            } catch (IOException e) { }
        }
    }

    private class ConnectedThread extends Thread {
        private final BluetoothSocket mmSocket;
        private final InputStream mmInStream;
        private final OutputStream mmOutStream;
        private boolean conectado;

        public ConnectedThread(BluetoothSocket socket) {
            mmSocket = socket;
            InputStream tmpIn = null;
            OutputStream tmpOut = null;
            conectado = false;
            mHandler = new Handler();
            // Get the input and output streams, using temp objects because
            // member streams are final
            try {
                tmpIn = socket.getInputStream();
                tmpOut = socket.getOutputStream();
            } catch (IOException e) { }
            mmInStream = tmpIn;
            mmOutStream = tmpOut;
            conectado = true;
        }

        public boolean isConnected() {
            return conectado;
        }

        public void run() {
            byte[] buffer = new byte[1024]; // buffer store for the stream
            int bytes; // bytes returned from read()
            // Keep listening to the InputStream until an exception occurs
            while (true) {
                try {
                    cmd = "A";
                    bytes_enviar = cmd.getBytes();
                    hiloEscuchas.write(bytes_enviar);
                    // Read from the InputStream
                    bytes = mmInStream.read(buffer);
                    // Send the obtained bytes to the UI Activity
                    mHandler.obtainMessage(MESSAGE_READ, bytes, -1, buffer)
                            .sendToTarget();
                } catch (IOException e) {
                    break;
                }
            }
        }

        /* Call this from the main Activity to send data to the remote device */
        public void write(byte[] bytes) {
            try {
                mmOutStream.write(bytes);
            } catch (IOException e) { }
        }

        /* Call this from the main Activity to shutdown the connection */
        public void cancel() {
            try {
                mmSocket.close();
            } catch (IOException e) { }
        }
    }
}
There was some code to scan for Bluetooth devices but, in order to keep things simple until it works, I wrote the MAC address manually into a variable. The comments explain this and also show where it gets stuck.
Thanks
Can anybody please help me with this? I'm stuck, and I need the Bluetooth connection working for my thesis.

React Native Made Easy Ep. 2 – Native Bridge

Introduction
React Native is a convenient tool for cross-platform development, and though it has become more and more powerful through the updates, there are limits to it, for example its capability to interact with and use native components. Bridging native code with Javascript is one of the most popular and effective ways to solve the problem. Best of both worlds!
Currently not all HMS Kits have official RN support yet; this article will walk you through how to create an Android native bridge to connect your RN app with HMS Kits, and Scan Kit will be used as the example here.
The tutorial is based on https://github.com/clementf-hw/rn_integration_demo/tree/4b2262aa2110041f80cb41ebd7caa1590a48528a, you can find more details about the sample project in this article: https://forums.developer.huawei.com...d=0201230857831870061&fid=0101187876626530001.
Prerequisites
Basic Android development
Basic React Native development
These areas have been covered immensely already on RN’s official site, this forum and other sources
HMS properly configured
You can also reference the above article for this matter
Major dependencies
RN Version: 0.62.2 (released on 9th April, 2020)
Gradle Version: 5.6.4
Gradle Plugin Version: 3.6.1
agcp: 1.2.1.301
This tutorial is broken into 3 parts:
Pt. 1: Create a simple native UI component as intro and warm up
Pt. 2: Bridging HMS Scan Kit into React Native
Pt. 3: Make Scan Kit into a standalone React Native Module that you can import into other projects or even upload to npm.
Bridging HMS Scan Kit
Now we have some fundamental knowledge on how to bridge, let’s bridge something meaningful. We will bridge the Scan Kit Default View as a QR Code Scanner, and also learn how to communicate from Native side to React Native side.
First, we’ll have to configure the project following the guide to set Scan Kit up on the native side: https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/scan-preparation-4
Put agconnect-services.json in place
Add to allprojects > repositories in root level build.gradle
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Add to buildscript > repositories
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Add to buildscript > dependencies
Code:
buildscript {
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.2.1.301'
    }
}
Go to app/build.gradle and add this at the top:
Code:
apply plugin: 'com.huawei.agconnect'
Add this to dependencies
Code:
dependencies {
    implementation 'com.huawei.hms:scanplus:1.1.3.300'
}
Add in proguard-rules.pro
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.**{*;}
Now do a Gradle sync. You can also try to build and run the app to check that everything is OK, even though we have not done any actual development yet.
Add these to AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />

<application
    …
    <activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
</application>
So the basic setup/configuration is done. Similar to the warm up, we will create a Module file first. Note that, for the sake of variety and wider adaptability of the end product, this time we'll make it a plain Native Module instead of a Native UI Component.
Code:
package com.cfdemo.d001rn;

import androidx.annotation.NonNull;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;

public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
    private static final String REACT_CLASS = "ReactNativeHmsScan";
    private static ReactApplicationContext reactContext;

    public ReactNativeHmsScanModule(ReactApplicationContext context) {
        super(context);
        reactContext = context;
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }
}
We have seen how data flows from RN to native in the warm up (e.g. the @ReactProp of our button). There are also several ways for data to flow from native to RN. Scan Kit utilizes startActivityForResult, therefore we need to implement its corresponding listener.
Code:
package com.cfdemo.d001rn;

import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;

public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
    private static final String REACT_CLASS = "ReactNativeHmsScan";
    private static ReactApplicationContext reactContext;

    public ReactNativeHmsScanModule(ReactApplicationContext context) {
        super(context);
        reactContext = context;
        reactContext.addActivityEventListener(mActivityEventListener);
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }

    private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
        @Override
        public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
        }
    };
}
There are a couple of small details we'll need to add. First, the React Native Javascript side expects a Promise for the result.
Code:
private Promise mScannerPromise;
We also need to add a request code to identify that this is our Scan Kit activity. 567 here is just an example; the value is at your own discretion.
Code:
private static final int REQUEST_CODE_SCAN = 567;
There will be several error/reject conditions, so let's identify and declare their codes first.
Code:
private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
private static final String E_INVALID_CODE = "E_INVALID_CODE";
At this moment, the module should look like this
Code:
package com.cfdemo.d001rn;
import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
private static final String REACT_CLASS = "ReactNativeHmsScan";
private static ReactApplicationContext reactContext;
private Promise mScannerPromise;
private static final int REQUEST_CODE_SCAN = 567;
private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
private static final String E_INVALID_CODE = "E_INVALID_CODE";
public ReactNativeHmsScanModule(ReactApplicationContext context) {
super(context);
reactContext = context;
reactContext.addActivityEventListener(mActivityEventListener);
}
@NonNull
@Override
public String getName() {
return REACT_CLASS;
}
private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
@Override
public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
}
};
}
Now let’s implement the listener method
Code:
if (requestCode == REQUEST_CODE_SCAN) {
    if (mScannerPromise != null) {
        if (resultCode == Activity.RESULT_CANCELED) {
            mScannerPromise.reject(E_SCANNER_CANCELLED, "Scanner was cancelled");
        } else if (resultCode == Activity.RESULT_OK) {
            Object obj = intent.getParcelableExtra(ScanUtil.RESULT);
            if (obj instanceof HmsScan) {
                if (!TextUtils.isEmpty(((HmsScan) obj).getOriginalValue())) {
                    mScannerPromise.resolve(((HmsScan) obj).getOriginalValue().toString());
                } else {
                    mScannerPromise.reject(E_INVALID_CODE, "Invalid Code");
                }
                return;
            }
        }
    }
}
Let's walk through what this does:
When the listener receives an activity result, it first checks whether this is our request by comparing the request code.
Afterwards, it checks that the promise object is not null. We will cover the promise object later; briefly, it is passed from RN to native, and we rely on it to send the data back to RN.
Then, if the result is CANCELED, we tell RN that the scanner was cancelled (for example, closed by the user) by calling promise.reject()
If the result indicates OK, we get the data by calling getParcelableExtra()
Next we check that the resulting data matches our data type and is not empty, and then we call promise.resolve()
Otherwise we reject with a general "Invalid Code" message. Of course, you can expand this and give a more detailed breakdown and resolution if you wish
This is a lot of checking and validation, but one can never be too safe, right?
Cool, now we have finished the listener, let's work on the caller! This is the method we'll be calling on the RN side, indicated by the @ReactMethod annotation.
Code:
@ReactMethod
public void startScan(final Promise promise) {
}
Give it some content:
Code:
@ReactMethod
public void startScan(final Promise promise) {
    Activity currentActivity = getCurrentActivity();
    if (currentActivity == null) {
        promise.reject(E_ACTIVITY_DOES_NOT_EXIST, "Activity doesn't exist");
        return;
    }
    // Store the promise to resolve/reject when the scanner returns data
    mScannerPromise = promise;
    try {
        ScanUtil.startScan(currentActivity, REQUEST_CODE_SCAN, new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.ALL_SCAN_TYPE).create());
    } catch (Exception e) {
        mScannerPromise.reject(E_FAILED_TO_SHOW_SCANNER, e);
        mScannerPromise = null;
    }
}
Let's do a walkthrough again:
First we get the current activity reference and check that it is valid
Then we take the input promise and assign it to the mScannerPromise we declared earlier, so we can refer to it throughout the process
Now we call the Scan Kit! This part is the same as a normal Android implementation
Of course we wrap it in a try-catch for safety purposes
At this point we have finished the Module. As in the warm up, we'll need to create a Package. This time it is a Native Module, therefore we register it in createNativeModules() and give createViewManagers() an empty list.
Code:
package com.cfdemo.d001rn;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;

public class ReactNativeHmsScanPackage implements ReactPackage {
    @Override
    public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
        return Arrays.<NativeModule>asList(new ReactNativeHmsScanModule(reactContext));
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }
}
Same as before, we'll add the package to our MainApplication.java: import the Package and add it in the getPackages() function
Code:
import com.cfdemo.d001rn.ReactNativeWarmUpPackage;
import com.cfdemo.d001rn.ReactNativeHmsScanPackage;

public class MainApplication extends Application implements ReactApplication {
    ...
    @Override
    protected List<ReactPackage> getPackages() {
        @SuppressWarnings("UnnecessaryLocalVariable")
        List<ReactPackage> packages = new PackageList(this).getPackages();
        // Packages that cannot be autolinked yet can be added manually here, for example:
        // packages.add(new MyReactNativePackage());
        packages.add(new ReactNativeWarmUpPackage());
        packages.add(new ReactNativeHmsScanPackage());
        return packages;
    }
All set! Let's head back to the RN side. This is our app from the warm up exercise (with a bit of a style change for the things we are going to add):
Let's add a Button and set its onPress property to this.onScan(), which we'll implement after this
Code:
render() {
  const { displayText, region } = this.state
  return (
    <View style={styles.container}>
      <Text style={styles.textBox}>
        {displayText}
      </Text>
      <RNWarmUpView
        style={styles.nativeModule}
        text={"Render in Javascript"}
      />
      <Button
        style={styles.button}
        title={'Scan Button'}
        onPress={() => this.onScan()}
      />
      <MapView
        style={styles.map}
        region={region}
        showCompass={true}
        showsUserLocation={true}
        showsMyLocationButton={true}
      >
      </MapView>
    </View>
  );
}
Reload and see the button
Similar to the one in the warm up, we can declare the Native Module in this simple way
Code:
const RNWarmUpView = requireNativeComponent('RCTWarmUpView')
const RNHMSScan = NativeModules.ReactNativeHmsScan
Now we’ll implement onScan() which uses the async/await syntax for asynchronous coding
Code:
async onScan() {
  try {
    const data = await RNHMSScan.startScan();
    // handle your data here
  } catch (e) {
    console.log(e);
  }
}
Important! Scan Kit requires the CAMERA and READ_EXTERNAL_STORAGE permissions to function; make sure you have handled this beforehand. One of the recommended ways to handle it is to use the react-native-permissions library https://github.com/react-native-community/react-native-permissions. I will write another article on this topic, but for now you can refer to https://github.com/clementf-hw/rn_integration_demo if you are in need. A native-side guard is also sketched below.
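For those who also want a guard on the native side, a minimal sketch is shown below; the helper and its use are assumptions for illustration, not part of the demo project:
Code:
// Hypothetical native-side guard (the demo handles permissions on the RN side).
// Requires: import android.content.pm.PackageManager;
//           import androidx.core.app.ActivityCompat;
//           import androidx.core.content.ContextCompat;
private boolean hasScanPermissions(Activity activity) {
    boolean granted =
            ContextCompat.checkSelfPermission(activity, android.Manifest.permission.CAMERA)
                    == PackageManager.PERMISSION_GRANTED
            && ContextCompat.checkSelfPermission(activity, android.Manifest.permission.READ_EXTERNAL_STORAGE)
                    == PackageManager.PERMISSION_GRANTED;
    if (!granted) {
        // Ask the user; the result arrives in the activity's onRequestPermissionsResult()
        ActivityCompat.requestPermissions(activity,
                new String[]{android.Manifest.permission.CAMERA,
                        android.Manifest.permission.READ_EXTERNAL_STORAGE}, 1);
    }
    return granted;
}
Calling this at the top of startScan() and rejecting the promise when it returns false would prevent the ScanKitActivity from opening without camera access.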
Now we click…TADA!
In this demo, this is what onScan() contains
Code:
async onScan() {
  try {
    const data = await RNHMSScan.startScan();
    const qrcodeData = {
      message: (JSON.parse(data)).message,
      location: (JSON.parse(data)).location,
      my_location: (JSON.parse(data)).my_location
    }
    this.handleData(qrcodeData)
  } catch (e) {
    console.log(e);
  }
}
Note: one minor modification is needed if you are working from the branch of this demo project mentioned before
Code:
onLocationReceived(locationData) {
  const location = typeof locationData === "object" ? locationData : JSON.parse(locationData)
  …
Now let's try to scan this
The actual data contained in the QR Code is
Code:
{"message": "Auckland", "location": {"lat": "-36.848461","lng": "174.763336"}}
Which brings us to Auckland!
Now your HMS Scan Kit in React Native is up and running!
Pt. 2 of this tutorial is done, please feel free to ask questions. You can also check out the repo of the sample project on github: https://github.com/clementf-hw/rn_integration_demo, and raise issue if you have any question or any update.
In the 3rd and final part of this tutorial, we'll go through how to make this RN HMS Scan Kit Bridge a standalone, downloadable and importable React Native Module, which you can use in multiple projects instead of creating the Native Module one by one, and you can even upload it to NPM to share with other fellow developers.

Comparison Between Huawei ML Kit Text Recognition & Firebase ML Kit Text Recognition

In this article, we will compare the usage of Huawei ML Kit Text Recognition and Firebase ML Kit Text Recognition, and we will also create sample Android applications to understand how they work. Let's get started.
Huawei ML Kit Text Recognition
About The Service
HUAWEI ML Kit allows your apps to easily leverage Huawei’s long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei’s technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Text Recognition
The text recognition service can extract text from images of receipts, business cards, and documents. This service is widely used in office, education, transit, and other apps. For example, you can use this service in a translation app to extract text from a photo and translate it, improving the user experience.
This service can run on the cloud or device, but the supported languages differ in the two scenarios. On-device APIs can recognize text in Simplified Chinese, Japanese, Korean, and Latin-based languages (refer to Latin Script Supported by On-device Text Recognition). When running on the cloud, the service can recognize text in languages such as Simplified Chinese, English, Spanish, Portuguese, Italian, German, French, Russian, Japanese, Korean, Polish, Finnish, Norwegian, Swedish, Danish, Turkish, Thai, Arabic, Hindi, and Indonesian.
Configure your project on AppGallery Connect
Registering a Huawei ID
You need to register a Huawei ID to use the plugin. If you don’t have one, follow the instructions here.
Preparations for Integrating HUAWEI HMS Core
First of all, you need to integrate Huawei Mobile Services with your application. I will not get into the details of how to integrate your application, but you can use this tutorial as a step-by-step guide.
1. Integrating the Text Recognition SDK
You need to integrate the base SDK and then one or more required language model packages in your app-level build.gradle.
Code:
//AGC Core
implementation 'com.huawei.agconnect:agconnect-core:1.4.0.300'
//ML OCR Base SDK
implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
//Latin-based Language Model Package
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
2. Automatically Updating the Machine Learning Model
To use the on-device text recognition service, add the following statements to the AndroidManifest.xml file.
Code:
<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="ocr" />
    ...
</manifest>
3. There will be an ImageView, a TextView, and two Buttons in our RelativeLayout
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/captured_image_view"
        android:layout_width="match_parent"
        android:layout_height="400dp" />

    <TextView
        android:id="@+id/detected_text_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@+id/captured_image_view"
        android:textSize="20sp"
        android:maxLines="10"
        android:layout_margin="10dp" />

    <Button
        android:id="@+id/capture_image"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_above="@+id/detect_text_from_image"
        android:text="@string/button_text"
        android:textAllCaps="false"
        android:background="@color/colorAccent"
        android:textColor="@android:color/white"
        android:textSize="28sp"
        android:layout_marginBottom="5dp" />

    <Button
        android:id="@+id/detect_text_from_image"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="@string/button_detect"
        android:textAllCaps="false"
        android:background="@color/colorPrimary"
        android:textColor="@android:color/white"
        android:textSize="28sp" />
</RelativeLayout>
4. Text Recognition from Images on the Device
Take a photo with a camera app
The Android way of delegating actions to other applications is to invoke an Intent that describes what you want done. This process involves three pieces: the Intent itself, a call to start the external Activity, and some code to handle the image data when focus returns to your activity. We will see the result in onActivityResult.
Code:
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.text.MLLocalTextSetting;
import com.huawei.hms.mlsdk.text.MLText;
import com.huawei.hms.mlsdk.text.MLTextAnalyzer;
import java.io.IOException;
public class MainActivity extends AppCompatActivity {
static final int REQUEST_IMAGE_CAPTURE = 1;
private MLTextAnalyzer mTextAnalyzer;
private ImageView capturedImageView;
private TextView detectedTextView;
private Button buttonAddImage, detectTextBtn;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
capturedImageView = findViewById(R.id.captured_image_view);
detectedTextView = findViewById(R.id.detected_text_view);
buttonAddImage = findViewById(R.id.capture_image);
detectTextBtn=findViewById(R.id.detect_text_from_image);
buttonAddImage.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
dispatchTakePictureIntent();
detectedTextView.setText("");
}
});
detectTextBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
createMLTextAnalyzer();
}
});
}
private void dispatchTakePictureIntent() {
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (intent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK && data != null) {
Bundle extras = data.getExtras();
Bitmap selectedBitmap = (Bitmap) extras.get("data");
capturedImageView.setImageBitmap(selectedBitmap);
asyncAnalyzeText(selectedBitmap);
}
}
5. Create the text analyzer MLTextAnalyzer to recognize text in images. You can set MLLocalTextSetting to specify languages that can be recognized. If you do not set the languages, only Latin-based languages can be recognized by default.
Code:
private void createMLTextAnalyzer() {
    MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
            .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
            .setLanguage("en")
            .create();
    mTextAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
6. Pass the MLFrame object to the asyncAnalyseFrame method for text recognition.
Code:
private void asyncAnalyzeText(Bitmap bitmap) {
    if (mTextAnalyzer == null) {
        createMLTextAnalyzer();
    }
    MLFrame frame = MLFrame.fromBitmap(bitmap);
    Task<MLText> task = mTextAnalyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<MLText>() {
        @Override
        public void onSuccess(MLText text) {
            detectedTextView.setText(text.getStringValue());
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            detectedTextView.setText(e.getMessage());
        }
    });
}
7. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        if (mTextAnalyzer != null)
            mTextAnalyzer.stop();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
8. You can see all the code in Main Activity below.
Code:
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.text.MLLocalTextSetting;
import com.huawei.hms.mlsdk.text.MLText;
import com.huawei.hms.mlsdk.text.MLTextAnalyzer;
import java.io.IOException;
public class MainActivity extends AppCompatActivity {
static final int REQUEST_IMAGE_CAPTURE = 1;
private MLTextAnalyzer mTextAnalyzer;
private ImageView capturedImageView;
private TextView detectedTextView;
private Button buttonAddImage, detectTextBtn;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
capturedImageView = findViewById(R.id.captured_image_view);
detectedTextView = findViewById(R.id.detected_text_view);
buttonAddImage = findViewById(R.id.capture_image);
detectTextBtn=findViewById(R.id.detect_text_from_image);
buttonAddImage.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
dispatchTakePictureIntent();
detectedTextView.setText("");
}
});
detectTextBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
createMLTextAnalyzer();
}
});
}
private void dispatchTakePictureIntent() {
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (intent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK && data != null) {
Bundle extras = data.getExtras();
Bitmap selectedBitmap = (Bitmap) extras.get("data");
capturedImageView.setImageBitmap(selectedBitmap);
asyncAnalyzeText(selectedBitmap);
}
}
private void createMLTextAnalyzer() {
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
.setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
.setLanguage("en")
.create();
mTextAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
private void asyncAnalyzeText(Bitmap bitmap) {
if (mTextAnalyzer == null) {
createMLTextAnalyzer();
}
MLFrame frame = MLFrame.fromBitmap(bitmap);
Task<MLText> task = mTextAnalyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText text) {
detectedTextView.setText(text.getStringValue());
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
detectedTextView.setText(e.getMessage());
}
});
}
@Override
protected void onDestroy() {
super.onDestroy();
try {
if (mTextAnalyzer != null)
mTextAnalyzer.stop();
} catch (IOException e) {
e.printStackTrace();
}
}
}
9. Here’s the result.
Firebase ML Kit Text Recognition
You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models.
Before you begin
1. If you haven’t already, add Firebase to your Android project.
2. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
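For reference, a typical project-level build.gradle of that era looked like the sketch below; jcenter() is shown only as the then-common default, and your repository list may differ.
Code:
buildscript {
    repositories {
        google() // Google's Maven repository
        jcenter()
    }
}
allprojects {
    repositories {
        google() // Google's Maven repository
        jcenter()
    }
}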
3. Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'
dependencies {
// ...
implementation 'com.google.firebase:firebase-core:15.0.2'
implementation 'com.google.firebase:firebase-ml-vision:15.0.0'
}
4. Add the following declarations to your app’s AndroidManifest.xml file:
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-feature android:name="android.hardware.camera"
android:required="true" />
<application ...>
...
<meta-data
android:name="com.google.firebase.ml.vision.DEPENDENCIES"
android:value="ocr" />
<!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>
5. Our RelativeLayout contains an ImageView, a TextView, and two Buttons:
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ImageView
android:id="@+id/captured_image_view"
android:layout_width="match_parent"
android:layout_height="400dp" />
<TextView
android:id="@+id/detected_text_view"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_below="@+id/captured_image_view"
android:textSize="20sp"
android:maxLines="10"
android:layout_margin="10dp"
/>
<Button
android:id="@+id/capture_image"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_above="@+id/detect_text_from_image"
android:text="@string/button_text"
android:textAllCaps="false"
android:background="@color/colorAccent"
android:textColor="@android:color/white"
android:textSize="28sp"
android:layout_marginBottom="5dp"
/>
<Button
android:id="@+id/detect_text_from_image"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:text="@string/button_detect"
android:textAllCaps="false"
android:background="@color/colorPrimary"
android:textColor="@android:color/white"
android:textSize="28sp"
/>
</RelativeLayout>
6. Take a photo with a camera app
Here’s the activity setup, including the dispatchTakePictureIntent() function that invokes an intent to capture a photo.
Code:
import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionText;
import com.google.firebase.ml.vision.text.FirebaseVisionTextDetector;
import java.util.List;
public class MainActivity extends AppCompatActivity {
private Button captureImageBtn, detectTextBtn;
private ImageView capturedImageView;
private TextView detectedTextView;
static final int REQUEST_IMAGE_CAPTURE = 1;
Bitmap imageBitmap;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
captureImageBtn = findViewById(R.id.capture_image);
detectTextBtn = findViewById(R.id.detect_text_from_image);
capturedImageView = findViewById(R.id.captured_image_view);
detectedTextView = findViewById(R.id.detected_text_view);
captureImageBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
dispatchTakePictureIntent();
detectedTextView.setText("");
}
});
detectTextBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
detectTextFromImage();
}
});
}
private void dispatchTakePictureIntent() {
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
}
}
7. The Android Camera application encodes the photo in the return Intent delivered to onActivityResult() as a small Bitmap in the extras, under the key "data". The following code retrieves this image and displays it in an ImageView.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
Bundle extras = data.getExtras();
imageBitmap = (Bitmap) extras.get("data");
capturedImageView.setImageBitmap(imageBitmap);
}
}
8. To create a FirebaseVisionImage object from a Bitmap object.
Code:
private void detectTextFromImage() {
FirebaseVisionImage firebaseVisionImage = FirebaseVisionImage.fromBitmap(imageBitmap);
FirebaseVisionTextDetector firebaseVisionTextDetector = FirebaseVision.getInstance().getVisionTextDetector();
firebaseVisionTextDetector.detectInImage(firebaseVisionImage).addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
@Override
public void onSuccess(FirebaseVisionText firebaseVisionText) {
displayTextFromImage(firebaseVisionText);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
Toast.makeText(MainActivity.this, "Error: " + e.getMessage(), Toast.LENGTH_SHORT).show();
Log.d("Error", e.getMessage());
}
});
}
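As noted in the “Before you begin” section, the general-purpose API also has a cloud-based model. The following is a minimal sketch of a cloud variant of detectTextFromImage(), assuming the firebase-ml-vision 15.0.0 cloud API (getVisionCloudTextDetector() and FirebaseVisionCloudText, from the com.google.firebase.ml.vision.cloud.text package); later SDK versions renamed these classes, so treat the names as assumptions rather than a definitive implementation.
Code:
private void detectTextFromImageInCloud() {
    FirebaseVisionImage firebaseVisionImage = FirebaseVisionImage.fromBitmap(imageBitmap);
    // Cloud-based detector instead of the on-device FirebaseVisionTextDetector
    // (class names assumed from the firebase-ml-vision 15.0.0 API).
    FirebaseVisionCloudTextDetector cloudDetector = FirebaseVision.getInstance().getVisionCloudTextDetector();
    cloudDetector.detectInImage(firebaseVisionImage)
            .addOnSuccessListener(new OnSuccessListener<FirebaseVisionCloudText>() {
                @Override
                public void onSuccess(FirebaseVisionCloudText cloudText) {
                    // The cloud result can be null when nothing was recognized.
                    if (cloudText != null) {
                        detectedTextView.setText(cloudText.getText());
                    } else {
                        Toast.makeText(MainActivity.this, "No Text Found in Image", Toast.LENGTH_SHORT).show();
                    }
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Toast.makeText(MainActivity.this, "Error: " + e.getMessage(), Toast.LENGTH_SHORT).show();
                }
            });
}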
9. Display the text from the image. Set the TextView once after collecting all blocks; calling setText() inside the loop would keep only the last block:
Code:
private void displayTextFromImage(FirebaseVisionText firebaseVisionText) {
    List<FirebaseVisionText.Block> blockList = firebaseVisionText.getBlocks();
    if (blockList.size() == 0) {
        Toast.makeText(this, "No Text Found in Image", Toast.LENGTH_SHORT).show();
    } else {
        // Collect every block; setting the TextView inside the loop would
        // overwrite all but the last block.
        StringBuilder detectedText = new StringBuilder();
        for (FirebaseVisionText.Block block : blockList) {
            detectedText.append(block.getText()).append("\n");
        }
        detectedTextView.setText(detectedText.toString());
    }
}
10. The complete MainActivity code is shown below.
Code:
import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionText;
import com.google.firebase.ml.vision.text.FirebaseVisionTextDetector;
import java.util.List;
public class MainActivity extends AppCompatActivity {
private Button captureImageBtn, detectTextBtn;
private ImageView capturedImageView;
private TextView detectedTextView;
static final int REQUEST_IMAGE_CAPTURE = 1;
Bitmap imageBitmap;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
captureImageBtn = findViewById(R.id.capture_image);
detectTextBtn = findViewById(R.id.detect_text_from_image);
capturedImageView = findViewById(R.id.captured_image_view);
detectedTextView = findViewById(R.id.detected_text_view);
captureImageBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
dispatchTakePictureIntent();
detectedTextView.setText("");
}
});
detectTextBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
detectTextFromImage();
}
});
}
private void dispatchTakePictureIntent() {
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
Bundle extras = data.getExtras();
imageBitmap = (Bitmap) extras.get("data");
capturedImageView.setImageBitmap(imageBitmap);
}
}
private void detectTextFromImage() {
FirebaseVisionImage firebaseVisionImage = FirebaseVisionImage.fromBitmap(imageBitmap);
FirebaseVisionTextDetector firebaseVisionTextDetector = FirebaseVision.getInstance().getVisionTextDetector();
firebaseVisionTextDetector.detectInImage(firebaseVisionImage).addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
@Override
public void onSuccess(FirebaseVisionText firebaseVisionText) {
displayTextFromImage(firebaseVisionText);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
Toast.makeText(MainActivity.this, "Error: " + e.getMessage(), Toast.LENGTH_SHORT).show();
Log.d("Error", e.getMessage());
}
});
}
private void displayTextFromImage(FirebaseVisionText firebaseVisionText) {
    List<FirebaseVisionText.Block> blockList = firebaseVisionText.getBlocks();
    if (blockList.size() == 0) {
        Toast.makeText(this, "No Text Found in Image", Toast.LENGTH_SHORT).show();
    } else {
        // Collect every block; setting the TextView inside the loop would
        // overwrite all but the last block.
        StringBuilder detectedText = new StringBuilder();
        for (FirebaseVisionText.Block block : blockList) {
            detectedText.append(block.getText()).append("\n");
        }
        detectedTextView.setText(detectedText.toString());
    }
}
}
11. Here’s the result.
For more information about HUAWEI ML Kit, visit:
https://developer.huawei.com/consumer/en/hms/huawei-mlkit
Other Resources:
https://developer.android.com/training/camera/photobasics#java
https://firebase.google.com/docs/ml-kit/android/recognize-text
https://firebase.google.com/support/release-notes/android
Related Links
Original post: https://medium.com/huawei-developers/comparison-between-huawei-ml-kit-text-recognition-and-firebase-ml-kit-text-recognition-98217e3cfa84
Nice article
Is Huawei ML Kit better than Firebase ML Kit?
riteshchanchal said:
Is Huawei ML Kit better than Firebase ML Kit?
I'm not sure which one is better overall; here I only used text recognition. In this example, Huawei ML Kit worked better. Also, payment is required to use the latest version of Firebase ML Kit, while Huawei ML Kit is free.

Validate your news: Feat. Huawei ML Kit (Text Image Super-Resolution)

Introduction
Quality improvement has become crucial in this era of digitalization, where all our documents are kept in folders, shared over networks, and read on digital devices.
Imagine the struggle of an elderly person who has no way to read and understand an old prescribed medical document that has become blurred and deteriorated.
Can we avoid such issues?
Let’s unpack what Huawei ML Kit offers to overcome such day-to-day challenges.
Huawei ML Kit provides the Text Image Super-Resolution API to improve the quality and visibility of old and blurred text in an image.
Text Image Super-Resolution can zoom in on an image that contains text and significantly improve the definition of the text.
Limitations
The text image super-resolution service requires images with a maximum resolution of 800 x 800 px and a side length of at least 64 px.
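A small guard like the hypothetical helper below (not part of the ML Kit API) can enforce these limits before the image reaches the analyzer:
Code:
// Hypothetical helper, not part of the ML Kit API: checks the documented limits
// (maximum 800 x 800 px, each side at least 64 px) before analysis.
private boolean isValidForSuperResolution(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    return width <= 800 && height <= 800 && width >= 64 && height >= 64;
}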
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section, and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the project-level build.gradle file, under both buildscript/repositories and allprojects/repositories:
Code:
maven { url 'http://developer.huawei.com/repo/' }
Add the following to the app-level build.gradle file, under dependencies, to use the base SDK of ML Kit Text Image Super-Resolution:
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user’s device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "tisr"/>
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit’s Text Image Super-Resolution API.
The following example shows how we can integrate this API to improve text-image quality, giving readers full access to old, blurred newspapers from an online news directory.
TextImageView Activity: Launcher Activity
This is the main activity of “The News Express” application.
Code:
package com.mlkitimagetext.example;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.mlkitimagetext.example.textimagesuperresolution.TextImageSuperResolutionActivity;
public class TextImageView extends AppCompatActivity {
Button NewsExpress;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_image_view);
NewsExpress = findViewById(R.id.bt1);
NewsExpress.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity(new Intent(TextImageView.this, TextImageSuperResolutionActivity.class));
}
});
}
}
activity_text_image_view.xml
This is the layout file for the above activity.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/im3">
<LinearLayout
android:id="@+id/ll_buttons"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="200dp"
android:orientation="vertical">
<Button
android:id="@+id/bt1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:layout_gravity="center"
android:text="The News Express"
android:textAllCaps="false"
android:textStyle="bold"
android:textSize="34sp"
android:textColor="@color/mlkit_bcr_text_color_white"></Button>
<TextView
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:textStyle="bold"
android:text="Validate Your News"
android:textSize="20sp"
android:layout_gravity="center"
android:textColor="#9fbfdf"/>
</LinearLayout>
</RelativeLayout>
TextImageSuperResolutionActivity
This activity class performs the following actions:
Image picker implementation to pick an image from the gallery
Convert the selected image to a Bitmap
Create a text image super-resolution analyzer
Create an MLFrame object using android.graphics.Bitmap
Perform super-resolution processing on the image containing text
Stop the analyzer to release detection resources
Code:
package com.mlkitimagetext.example;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolution;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzer;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzerFactory;
import com.mlkitimagetext.example.R;
import androidx.appcompat.app.AppCompatActivity;
import java.io.IOException;
public class TextImageSuperResolutionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "TextSuperResolutionActivity";
private MLTextImageSuperResolutionAnalyzer analyzer;
private static final int INDEX_3X = 1;
private static final int INDEX_ORIGINAL = 2;
private ImageView imageView;
private Bitmap srcBitmap;
Uri imageUri;
Boolean ImageSetupFlag = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_super_resolution);
imageView = findViewById(R.id.image);
imageView.setOnClickListener(this);
findViewById(R.id.btn_load).setOnClickListener(this);
createAnalyzer();
}
@Override
public void onClick(View view) {
if (view.getId() == R.id.btn_load) {
openGallery();
}else if (view.getId() == R.id.image)
{
if(ImageSetupFlag != true)
{
detectImage(INDEX_3X);
}else {
detectImage(INDEX_ORIGINAL);
ImageSetupFlag = false;
}
}
}
private void openGallery() {
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
srcBitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
imageView.setImageURI(imageUri);
}
}
private void release() {
if (analyzer == null) {
return;
}
analyzer.stop();
}
private void detectImage(int type) {
if (type == INDEX_ORIGINAL) {
setImage(srcBitmap);
return;
}
if (analyzer == null) {
return;
}
// Create an MLFrame by using the bitmap.
MLFrame frame = new MLFrame.Creator().setBitmap(srcBitmap).create();
Task<MLTextImageSuperResolution> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolution>() {
public void onSuccess(MLTextImageSuperResolution result) {
// success.
Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
setImage(result.getBitmap());
ImageSetupFlag = true;
}
})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// failure.
if (e instanceof MLException) {
MLException mlException = (MLException) e;
// Get the error code, developers can give different page prompts according to the error code.
int errorCode = mlException.getErrCode();
// Get the error message, developers can combine the error code to quickly locate the problem.
String errorMessage = mlException.getMessage();
Toast.makeText(getApplicationContext(), "Error:" + errorCode + " Message:" + errorMessage, Toast.LENGTH_SHORT).show();
} else {
// Other exceptions.
Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
}
}
});
}
private void setImage(final Bitmap bitmap) {
imageView.setImageBitmap(bitmap);
}
private void createAnalyzer() {
analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
release();
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202388336667910498&fid=0101187876626530001
Which image formats are supported?

Beginner: Page Ability features in Huawei Harmony OS

Introduction
In this article, we will create an application demonstrating the features below:
1. Page Ability and Ability Slice
2. Page Ability life cycle and Ability Slice life cycle
3. Switching between Ability slices
4. Switching between abilities.
5. Transfer data between abilities.
Requirements
1. DevEco IDE
2. Wearable watch (a simulator can also be used)
Harmony OS supports two types of abilities:
1. Feature Ability
2. Particle Ability
In this article, we will try out the Feature Ability template, called the Page template.
UI Design
Ability Slice:
An Ability Slice represents a single screen and its control logic.
Page Template (Page Abilities):
The Page template is used by a Feature Ability to interact with users; one Page template can contain one or more Ability Slices.
When a Page ability appears in the foreground, it presents one of its ability slices by default.
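A minimal sketch of such a Page ability is shown below. setMainRoute() selects the default slice, and addActionRoute() (assumed from the ohos.aafwk.ability.Ability API) maps an intent action to another slice; the class name RoutedAbility and the action string are hypothetical, and the action would also need to be registered under skills in config.json.
Java:
package com.example.threadingsample;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
public class RoutedAbility extends Ability {
    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        // Default slice presented when this Page ability reaches the foreground.
        super.setMainRoute(com.example.threadingsample.slice.MainAbilitySlice.class.getName());
        // Hypothetical action; maps matching intents to a second slice.
        addActionRoute("action.child", com.example.threadingsample.slice.ChildAbilitySlice.class.getName());
    }
}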
config.json
I have declared two abilities with type page.
JSON:
{
"app": {
"bundleName": "com.example.threadingsample",
"vendor": "example",
"version": {
"code": 1,
"name": "1.0"
},
"apiVersion": {
"compatible": 3,
"target": 3
}
},
"deviceConfig": {},
"module": {
"package": "com.example.threadingsample",
"name": ".MyApplication",
"deviceType": [
"wearable"
],
"distro": {
"deliveryWithInstall": true,
"moduleName": "entry",
"moduleType": "entry"
},
"abilities": [
{
"skills": [
{
"entities": [
"entity.system.home"
],
"actions": [
"action.system.home"
]
}
],
"orientation": "landscape",
"name": "com.example.threadingsample.MainAbility",
"icon": "$media:icon",
"description": "$string:mainability_description",
"label": "ThreadingSample",
"type": "page",
"launchType": "standard"
},
{
"orientation": "landscape",
"name": "com.example.threadingsample.second.SecondAbility",
"icon": "$media:icon",
"description": "$string:mainability_description",
"label": "SecondAbility",
"type": "page",
"launchType": "standard"
}
]
}
}
Page Ability life cycle
For more information: https://developer.harmonyos.com/en/...uides/ability-page-lifecycle-0000000000029840
Ability Slice life cycle:
An ability slice's lifecycle is bound to the Page ability that hosts it. You must override the onStart() callback of ability slices and use setUIContent() to set the UI content to display in this callback.
Switching between slices
Add the below code in MainAbilitySlice.java
Java:
package com.example.threadingsample.slice;
import com.example.threadingsample.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.aafwk.content.Operation;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.app.dispatcher.TaskDispatcher;
import ohos.app.dispatcher.task.TaskPriority;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
public class MainAbilitySlice extends AbilitySlice {
static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
Text text;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_main);
HiLog.info(LABEL_LOG, "MainAbilitySlice->"+Thread.currentThread().getName());
text = (Text) findComponentById(ResourceTable.Id_text);
Button launchNewSlice = (Button) findComponentById(ResourceTable.Id_button_launch_new_slice);
launchNewSlice.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
present(new ChildAbilitySlice(), new Intent());
}
});
Button launchNewAbility = (Button) findComponentById(ResourceTable.Id_button_launch_new_ability);
launchNewAbility.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
HiLog.info(LABEL_LOG, "MainAbilitySlice launch new [email protected]@->"+Thread.currentThread().getName());
Intent intent = new Intent();
// Use the OperationBuilder class of Intent to construct an Operation object and set the deviceId (left empty if a local ability is required), bundleName, and abilityName attributes for the object.
Operation operation = new Intent.OperationBuilder()
.withDeviceId("")
.withBundleName("com.example.threadingsample")
.withAbilityName("com.example.threadingsample.second.SecondAbility")
.build();
intent.setParam("TEST_KEY", "apple");
// Set the created Operation object to the Intent as its operation attribute.
intent.setOperation(operation);
startAbility(intent);
}
});
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
}
Add below code in ability_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:background_element="#8c7373"
ohos:padding="32">
<Text
ohos:multiple_lines="true"
ohos:id="$+id:text"
ohos:height="match_content"
ohos:width="200"
ohos:layout_alignment="horizontal_center"
ohos:text="Text"
ohos:text_size="10fp"/>
<Button
ohos:id="$+id:button_launch_new_slice"
ohos:height="match_content"
ohos:width="match_content"
ohos:background_element="$graphic:background_button"
ohos:layout_alignment="horizontal_center"
ohos:padding="5"
ohos:text="Launch new Slice"
ohos:text_size="30"
ohos:top_margin="5"/>
<Button
ohos:id="$+id:button_launch_new_ability"
ohos:height="match_content"
ohos:width="match_content"
ohos:background_element="$graphic:background_button"
ohos:layout_alignment="horizontal_center"
ohos:padding="5"
ohos:text="Launch new Ability"
ohos:text_size="30"
ohos:top_margin="5"/>
</DirectionalLayout>
Add the below code in ChildAbilitySlice.java
Java:
package com.example.threadingsample.slice;
import com.example.threadingsample.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.aafwk.content.Operation;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.app.dispatcher.TaskDispatcher;
import ohos.app.dispatcher.task.TaskPriority;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
public class ChildAbilitySlice extends AbilitySlice {
static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
Text text;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_child_slice_two);
HiLog.info(LABEL_LOG, "ChildAbilitySlice->"+Thread.currentThread().getName());
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
}
Add the below code in child_slice_two.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:background_element="#8c7373"
ohos:padding="32">
<Text
ohos:multiple_lines="true"
ohos:id="$+id:text"
ohos:height="match_content"
ohos:width="200"
ohos:layout_alignment="horizontal_center"
ohos:text="Child Slice"
ohos:text_size="10fp"/>
</DirectionalLayout>
Switch from MainAbilitySlice to ChildAbilitySlice using the present() method.
Java:
Button launchNewSlice = (Button) findComponentById(ResourceTable.Id_button_launch_new_slice);
launchNewSlice.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
present(new ChildAbilitySlice(), new Intent());
}
});
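If the child slice needs to hand data back, AbilitySlice also offers a presentForResult()/onResult() pair. The sketch below assumes those HarmonyOS Java API names and a hypothetical "RESULT_KEY" parameter set by the child slice; treat it as a sketch, not a definitive implementation.
Java:
private static final int REQUEST_CHILD = 100;
// Launch the child slice and expect a result back (API names assumed).
private void launchChildForResult() {
    presentForResult(new ChildAbilitySlice(), new Intent(), REQUEST_CHILD);
}
@Override
protected void onResult(int requestCode, Intent resultIntent) {
    super.onResult(requestCode, resultIntent);
    if (requestCode == REQUEST_CHILD && resultIntent != null) {
        // "RESULT_KEY" is a hypothetical parameter set by the child slice.
        text.setText(resultIntent.getStringParam("RESULT_KEY"));
    }
}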
Switching from MainAbility to SecondAbility.
Java:
Button launchNewAbility = (Button) findComponentById(ResourceTable.Id_button_launch_new_ability);
launchNewAbility.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
HiLog.info(LABEL_LOG, "MainAbilitySlice launch new [email protected]@->"+Thread.currentThread().getName());
Intent intent = new Intent();
// Use the OperationBuilder class of Intent to construct an Operation object and set the deviceId (left empty if a local ability is required), bundleName, and abilityName attributes for the object.
Operation operation = new Intent.OperationBuilder()
.withDeviceId("")
.withBundleName("com.example.threadingsample")
.withAbilityName("com.example.threadingsample.second.SecondAbility")
.build();
intent.setParam("TEST_KEY", "apple");
// Set the created Operation object to the Intent as its operation attribute.
intent.setOperation(operation);
startAbility(intent);
}
});
Add the below code to SecondAbility.java
Java:
package com.example.threadingsample.second;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
public class SecondAbility extends Ability {
private static final int MY_PERMISSIONS_REQUEST_LOCATION = 1001;
static final HiLogLabel LABEL = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setMainRoute(SecondAbilitySlice.class.getName());
}
}
Add the below code in SecondAbilitySlice.java
Java:
package com.example.threadingsample.second;
import com.example.threadingsample.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Text;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
public class SecondAbilitySlice extends AbilitySlice {
static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_two);
HiLog.info(LABEL_LOG, "inside SecondAbilitySlice!!");
if(getAbility() != null) {
if (getAbility().getIntent() != null) {
if (getAbility().getIntent().hasParameter("TEST_KEY")) {
String valueFromFirstAbility = getAbility().getIntent().getStringParam("TEST_KEY");
HiLog.info(LABEL_LOG, "inside [email protected]@-->"+valueFromFirstAbility);
Text text = (Text) findComponentById(ResourceTable.Id_text);
text.setText(valueFromFirstAbility);
} else {
HiLog.info(LABEL_LOG, "TEST_KEY parameter is not present");
}
} else {
HiLog.info(LABEL_LOG, "intent is null");
}
}else{
HiLog.info(LABEL_LOG, "ability is null");
}
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
}
Add the below code in ability_two.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:background_element="#8c7373"
ohos:padding="32">
<Text
ohos:multiple_lines="true"
ohos:id="$+id:text"
ohos:height="match_content"
ohos:width="200"
ohos:layout_alignment="horizontal_center"
ohos:text="Ability2"
ohos:text_size="10fp"/>
</DirectionalLayout>
Transfer data from one ability to another
Java:
Intent intent = new Intent();
intent.setParam("TEST_KEY", "apple");
Retrieve data on other ability
Java:
if(getAbility() != null) {
if (getAbility().getIntent() != null) {
if (getAbility().getIntent().hasParameter("TEST_KEY")) {
String valueFromFirstAbility = getAbility().getIntent().getStringParam("TEST_KEY");
HiLog.info(LABEL_LOG, "inside [email protected]@-->"+valueFromFirstAbility);
Text text = (Text) findComponentById(ResourceTable.Id_text);
text.setText(valueFromFirstAbility);
} else {
HiLog.info(LABEL_LOG, "TEST_KEY parameter is not present");
}
} else {
HiLog.info(LABEL_LOG, "intent is null");
}
}else{
HiLog.info(LABEL_LOG, "ability is null");
}
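Intent params are not limited to strings; typed setters and getters with a default value are available. The method names below are assumed from the ohos.aafwk.content.Intent API and the keys are hypothetical.
Java:
// Sending side: typed params (overloads assumed from ohos.aafwk.content.Intent).
Intent intent = new Intent();
intent.setParam("COUNT_KEY", 5);
intent.setParam("FLAG_KEY", true);
// Receiving side: getters take a default value used when the key is absent.
int count = getAbility().getIntent().getIntParam("COUNT_KEY", 0);
boolean flag = getAbility().getIntent().getBooleanParam("FLAG_KEY", false);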
Tips and Tricks
All abilities must be registered in config.json.
Conclusion
In this article, we have covered Page abilities and ability slices, their life cycles, navigation between them, and transferring data between two pages.
Reference
Harmony Official document
DevEco Studio User guide
JS API Reference
Read In Forum
Is this Page ability like a Fragment in Android?
Is there any life cycle for Page ability?
