[Q] I have an XML file with code in it and I need to integrate that code in AS3

I want to add several lines of data to my Flash AS3 project, but those lines live in an XML file, and the data is retrieved by a function that loads the XML. How can I get that data into Flash?
Example:
<?xml version="1.0" encoding="utf-8"?>
<game>
    <!-- The "Level id" number is not required; I just added it for quick reference -->
    <!-- Page 1 -->
    <Level id="1" star3="6" star2="10">
        <Row>2,2,2,2</Row>
        <Row>2,2,0,0</Row>
        <Row>2,2,2,2</Row>
        <Row>1,1,4,4</Row>
        <Row>1,1,4,4</Row>
    </Level>
    <Level id="2" star3="8" star2="12">
        <Row>2,2,2,2</Row>
        <Row>2,2,2,2</Row>
        <Row>0,1,1,0</Row>
        <Row>3,3,4,4</Row>
        <Row>3,3,4,4</Row>
    </Level>
    <Level id="3" star3="8" star2="12">
        <Row>1,0,1,1</Row>
        <Row>1,0,1,1</Row>
        <Row>1,1,4,4</Row>
        <Row>1,1,4,4</Row>
        <Row>1,1,1,1</Row>
    </Level>
    <Level id="4" star3="8" star2="12">
        <Row>3,1,1,0</Row>
        <Row>3,1,1,0</Row>
        <Row>2,2,2,2</Row>
        <Row>3,3,4,4</Row>
        <Row>3,3,4,4</Row>
    </Level>
    <!-- ... more levels ... -->
</game>
The function that loads this XML is the following:
//playBtn.visible = false;
loadXML();

// Load data from the XML file.
function loadXML():void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, completeXMLHandler);
    // Define the request.
    var request:URLRequest = new URLRequest("data.xml");
    // Try/catch any load error.
    try {
        loader.load(request);
    } catch (error:Error) {
        trace('error');
    }
}

// Parse the loaded XML.
function completeXMLHandler(event:Event):void {
    var loader:URLLoader = URLLoader(event.target);
    var result:XML = new XML(loader.data);
    // Define a new XML document (legacy XMLDocument/XMLNode API).
    var myXML:XMLDocument = new XMLDocument();
    myXML.ignoreWhite = true;
    myXML.parseXML(result.toXMLString());
    // Define the root node.
    var node:XMLNode = myXML.firstChild;
    // Derive totalLevel from the number of <Level> child nodes.
    totalLevel = int(node.childNodes.length);
    // Create a temporary array for each level, then push it to the main array (dataArray).
    for (var i:int = 0; i < totalLevel; i++) {
        scoreArray.push(9999);
        var temp_array:Array = new Array();
        // Note: the current level node is node.childNodes[i]; indexing
        // node.childNodes directly (without [i]) is undefined.
        var row:int = int(node.childNodes[i].childNodes.length);
        starInfoArray.push(node.childNodes[i].attributes.star3 + "_" + node.childNodes[i].attributes.star2);
        for (var j:int = 0; j < row; j++) {
            var myData = node.childNodes[i].childNodes[j].firstChild.nodeValue;
            temp_array.push(myData);
        }
        dataArray.push(temp_array);
    }
}
Please help me
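As a sketch of how to use the loaded data: each `<Row>` node's text is a comma-separated string such as "2,2,0,0", so splitting it yields the numeric grid for a level. The snippet below is plain JavaScript for illustration (`String.split` and `parseInt`/`int()` behave the same way in AS3); the function name `parseLevelRows` is made up here, not part of the original code.

```javascript
// Turn row strings loaded from the XML (e.g. "2,2,0,0") into arrays
// of numbers, one inner array per row of a level's grid.
function parseLevelRows(rowStrings) {
    return rowStrings.map(function (row) {
        return row.split(",").map(function (cell) {
            return parseInt(cell, 10);
        });
    });
}

// Example: the first rows of Level 1 from data.xml above.
var level1 = parseLevelRows(["2,2,2,2", "2,2,0,0", "2,2,2,2"]);
```

Once each level is a 2-D number array, the game code can read `level1[rowIndex][colIndex]` directly instead of re-parsing strings.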


How to Integrate Huawei Map Kit Javascript Api to cross-platforms

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202330537081990041&fid=0101187876626530001
Article Introduction
In this article we will cover an introduction to the HUAWEI Map Kit JavaScript API. Next, we will implement HUAWEI Map in an Ionic/Cordova project. Lastly, we will implement the HUAWEI Map Kit JavaScript API in a native application.
Technology Introduction
HUAWEI Map Kit provides JavaScript APIs for you to easily build map apps applicable to browsers.
It provides the basic map display, map interaction, route planning, place search, geocoding, and other functions to meet requirements of most developers.
Restriction
Before using the service, you need to apply for an API key on the HUAWEI Developers website. For details, please refer to "Creating an API Key" in API Console Operation Guide. To enhance the API key security, you are advised to restrict the API key. You can configure restrictions by app and API on the API key editing page.
Generating API Key
Go to HMS API Services > Credentials and click Create credential.
Click API key to generate new API Key.
In the dialog box that is displayed, click Restrict to set restrictions on the key to prevent unauthorized use or quota theft. This step is optional.
The restrictions include app restrictions and API restrictions.
App restrictions: control which websites or apps can use your key. Set up to one app restriction per key.
API restrictions: specify the enabled APIs that this key can call.
After you set the app and API restrictions, the API key is generated.
The API key is successfully created. Copy the API key to use in your project.
Huawei Web Map API introduction
1. Make a Basic Map
Code:
function loadMapScript() {
    const apiKey = encodeURIComponent("API_KEY");
    const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
    const mapScript = document.createElement("script");
    mapScript.setAttribute("src", src);
    document.head.appendChild(mapScript);
}

function initMap() {
    const mapOptions = {};
    mapOptions.center = { lat: 48.856613, lng: 2.352222 };
    mapOptions.zoom = 8;
    mapOptions.language = "ENG";
    const map = new HWMapJsSDK.HWMap(
        document.getElementById("map"),
        mapOptions
    );
}

loadMapScript();
Note: Please replace API_KEY with the key you generated. In the script URL we declare a callback function, which is invoked automatically once the Huawei Map API has loaded successfully.
2. Map Interactions
Map Controls
Code:
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 10;
scaleControl
Code:
mapOptions.scaleControl = true; // Set to display the scale.
mapOptions.scaleControlOptions = {
    units: "imperial" // Set the scale unit to inch.
};
zoomSlider
Code:
mapOptions.zoomSlider = true ; // Set to display the zoom slider.
zoomControl
Code:
mapOptions.zoomControl = false; // Set not to display the zoom button.
rotateControl (Manage Compass)
Code:
mapOptions.rotateControl = true; // Set to display the compass.
navigationControl
Code:
mapOptions.navigationControl = true; // Set to display the pan button.
copyrightControl
Code:
mapOptions.copyrightControl = true; // Set to display the copyright information.
mapOptions.copyrightControlOptions = {value: "HUAWEI"}; // Set the copyright information.
locationControl
Code:
mapOptions.locationControl= true; // Set to display the current location.
Camera
Map moving: you can call map.panTo(latLng) to move the map center to a specified position.
Map shift: you can call map.panBy(x, y) to shift the map by a pixel offset.
Zoom: you can use the map.setZoom(zoom) method to set the zoom level of a map.
Area control: you can use map.fitBounds(bounds) to set the map display scope.
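For the area-control call, you first need a bounds object covering your points. Below is a minimal sketch of computing one from a list of coordinates, assuming bounds are expressed as north/south/east/west edges as in the ground-overlay example later in this article; the helper name `boundsFor` is an illustration, not part of the SDK.

```javascript
// Compute a bounding box (north/south/east/west, in degrees) that
// covers all of the given points, suitable for an area-control call
// such as map.fitBounds(bounds).
function boundsFor(points) {
    var b = { north: -90, south: 90, east: -180, west: 180 };
    points.forEach(function (p) {
        b.north = Math.max(b.north, p.lat);
        b.south = Math.min(b.south, p.lat);
        b.east = Math.max(b.east, p.lng);
        b.west = Math.min(b.west, p.lng);
    });
    return b;
}

var bounds = boundsFor([
    { lat: 48.8, lng: 2.3 },
    { lat: 49.0, lng: 2.5 }
]);
// map.fitBounds(bounds); // with a live HWMap instance
```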
Map Events
Map click event:
Code:
map.on('click', () => {
    map.zoomIn();
});
Map center change event:
Code:
map.onCenterChanged(centerChangePost);
function centerChangePost() {
    var center = map.getCenter();
    alert('Lng: ' + center.lng + '\nLat: ' + center.lat);
}
Map heading change event:
Code:
map.onHeadingChanged(headingChangePost);
function headingChangePost() {
    alert('Heading Changed!');
}
Map zoom level change event:
Code:
map.onZoomChanged(zoomChangePost);
function zoomChangePost() {
    alert('Zoom Changed!');
}
3. Drawing on Map
Marker:
You can add markers to a map to identify locations such as stores and buildings, and provide additional information with information windows.
Code:
var map;
var mMarker;

function initMap() {
    var mapOptions = {};
    mapOptions.center = {lat: 48.856613, lng: 2.352222};
    mapOptions.zoom = 8;
    map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
    mMarker = new HWMapJsSDK.HWMarker({
        map: map,
        position: {lat: 48.85, lng: 2.35},
        zIndex: 10,
        label: 'A',
        icon: {
            opacity: 0.5
        }
    });
}
Marker Result:
Marker Clustering:
The HMS Core Map SDK allows you to cluster markers to effectively manage them on the map at different zoom levels. When a user zooms in on the map to a high level, all markers are displayed on the map. When the user zooms out, the markers are clustered on the map for orderly display.
Code:
var map;
var markers = [];
var markerCluster;
var locations = [
    { lat: 51.5145160, lng: -0.1270060 },
    { lat: 51.5064490, lng: -0.1244260 },
    { lat: 51.5097080, lng: -0.1200450 },
    { lat: 51.5090680, lng: -0.1421420 },
    { lat: 51.4976080, lng: -0.1456320 },
    ···
    { lat: 51.5061590, lng: -0.140280 },
    { lat: 51.5047420, lng: -0.1470490 },
    { lat: 51.5126760, lng: -0.1189760 },
    { lat: 51.5108480, lng: -0.1208480 }
];

function initMap() {
    var mapOptions = {};
    mapOptions.center = {lat: 48.856613, lng: 2.352222};
    mapOptions.zoom = 3;
    map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
    generateMarkers(locations);
    markerCluster = new HWMapJsSDK.HWMarkerCluster(map, markers);
}

function generateMarkers(locations) {
    for (let i = 0; i < locations.length; i++) {
        var opts = {
            position: locations[i]
        };
        markers.push(new HWMapJsSDK.HWMarker(opts));
    }
}
Cluster markers Result:
Information Window:
The HMS Core Map SDK supports the display of information windows on the map. There are two types of information windows: One is to display text or image independently, and the other is to display text or image in a popup above a marker. The information window provides details about a marker.
Code:
var infoWindow;

function initMap() {
    var mapOptions = {};
    mapOptions.center = {lat: 48.856613, lng: 2.352222};
    mapOptions.zoom = 8;
    var map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
    infoWindow = new HWMapJsSDK.HWInfoWindow({
        map,
        position: {lat: 48.856613, lng: 2.352222},
        content: 'This is to show mouse event of another marker',
        offset: [0, -40],
    });
}
Info window Result:
Ground Overlay
The builder function of GroundOverlay uses the URL, LatLngBounds, and GroundOverlayOptions of an image as the parameters to display the image in a specified area on the map. The sample code is as follows:
Code:
var map;
var mGroundOverlay;

function initMap() {
    var mapOptions = {};
    mapOptions.center = {lat: 48.856613, lng: 2.352222};
    mapOptions.zoom = 8;
    map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
    var imageBounds = {
        north: 49,
        south: 48.5,
        east: 2.5,
        west: 1.5,
    };
    mGroundOverlay = new HWMapJsSDK.HWGroundOverlay(
        // Path to a local image or URL of an image.
        'huawei_logo.png',
        imageBounds,
        {
            map: map,
            opacity: 1,
            zIndex: 1
        }
    );
}
Ground overlay Result:
Ionic / Cordova Map Implementation
In this part of the article we add the Huawei Map JavaScript API to an Ionic/Cordova project.
Update index.html to include the Huawei Map JS scripts:
You need to update src/index.html and include the Huawei Map JavaScript cloud script URL.
Code:
function loadMapScript() {
    const apiKey = encodeURIComponent("API_KEY");
    const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
    const mapScript = document.createElement("script");
    mapScript.setAttribute("src", src);
    document.head.appendChild(mapScript);
}

function initMap() { }

loadMapScript();
Make new Map page:
Code:
ionic g page maps
Update the maps.page.ts file with the following TypeScript:
Code:
import { Component, OnInit, ChangeDetectorRef } from "@angular/core";
import { Observable } from "rxjs";

declare var HWMapJsSDK: any;
declare var cordova: any;

@Component({
  selector: "app-maps",
  templateUrl: "./maps.page.html",
  styleUrls: ["./maps.page.scss"],
})
export class MapsPage implements OnInit {
  map: any;
  baseLat = 24.713552;
  baseLng = 46.675297;

  ngOnInit() {
    this.showMap(this.baseLat, this.baseLng);
  }

  ionViewWillEnter() {
  }

  ionViewDidEnter() {
  }

  showMap(lat = this.baseLat, lng = this.baseLng) {
    const mapOptions: any = {};
    mapOptions.center = { lat: lat, lng: lng };
    mapOptions.zoom = 10;
    mapOptions.language = "ENG";
    this.map = new HWMapJsSDK.HWMap(document.getElementById("map"), mapOptions);
    this.map.setCenter({ lat: lat, lng: lng });
  }
}
Ionic / Cordova App Result:
Native Application Huawei JS API Implementation
In this part of the article we load a JavaScript-based Huawei Map HTML page into a native app through a WebView. This approach is helpful for developers who need a very minimal map integration.
Create the assets/www/map.html file.
Add the following script to map.html (inside a script tag, together with a map container element and the API script loader shown earlier):
Code:
var map;
var mMarker;
var infoWindow;

function initMap() {
    const LatLng = { lat: 24.713552, lng: 46.675297 };
    const mapOptions = {};
    mapOptions.center = LatLng;
    mapOptions.zoom = 10;
    mapOptions.scaleControl = true;
    mapOptions.locationControl = true;
    mapOptions.language = "ENG";
    map = new HWMapJsSDK.HWMap(
        document.getElementById("map"),
        mapOptions
    );
    map.setCenter(LatLng);

    mMarker = new HWMapJsSDK.HWMarker({
        map: map,
        position: LatLng,
        zIndex: 10,
        label: 'A',
        icon: {
            opacity: 0.5
        }
    });
    mMarker.addListener('click', () => {
        infoWindow.open();
    });

    infoWindow = new HWMapJsSDK.HWInfoWindow({
        map,
        position: LatLng,
        content: 'This is the info window of the marker',
        offset: [0, -40],
    });
    infoWindow.close();
}
Add the WebView to your layout:
Code:
<WebView
    android:id="@+id/webView_map"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
Update your Activity class to load the HTML file:
Code:
class MainActivity : AppCompatActivity() {
    lateinit var context: Context

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        context = this
        val mWebview = findViewById<WebView>(R.id.webView_map)
        mWebview.webChromeClient = WebChromeClient()
        mWebview.webViewClient = WebViewClient()
        mWebview.settings.javaScriptEnabled = true
        mWebview.settings.setAppCacheEnabled(true)
        mWebview.settings.mediaPlaybackRequiresUserGesture = true
        mWebview.settings.domStorageEnabled = true
        mWebview.loadUrl("file:///android_asset/www/map.html")
    }
}
Internet permission:
Don’t forget to add Internet permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Native app Result:
References:
Huawei Map JavaScript API:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/javascript-api-introduction-0000001050164063
Complete Ionic JS Map Project:
https://github.com/salmanyaqoob/Ionic-All-HMS-Kits
Conclusion
The Huawei Map JavaScript API helps JavaScript developers implement Huawei Map on cross-platform frameworks such as Cordova, Ionic, and React Native, and also helps native developers integrate it into their projects. Developers can also implement Huawei Maps on websites.
Thank you very much, very helpful.

Create and Monitor Geofences with HuaweiMap in Xamarin.Android Application

A geofence is a virtual perimeter set around a real geographic area. Combining a user's position with a geofence perimeter, it is possible to know whether the user is inside the geofence, or is entering or exiting the area.
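The inside/outside decision described above boils down to comparing the great-circle distance between the user and the geofence center with the radius. A rough illustrative sketch follows (plain JavaScript, haversine formula; in this article the HMS Location Kit performs this detection for us, so this is only to make the concept concrete):

```javascript
// Haversine distance in meters between two {lat, lng} points.
function distanceMeters(a, b) {
    var R = 6371000; // mean Earth radius in meters
    var toRad = function (d) { return d * Math.PI / 180; };
    var dLat = toRad(b.lat - a.lat);
    var dLng = toRad(b.lng - a.lng);
    var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) *
            Math.sin(dLng / 2) * Math.sin(dLng / 2);
    return 2 * R * Math.asin(Math.sqrt(h));
}

// A position is inside a circular geofence if its distance to the
// center does not exceed the radius.
function insideGeofence(position, center, radiusMeters) {
    return distanceMeters(position, center) <= radiusMeters;
}
```

Enter/exit events are then just transitions of this boolean between consecutive position updates.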
In this article, we will discuss how to use the geofence to notify the user when the device enters/exits an area using the HMS Location Kit in a Xamarin.Android application. We will also add and customize HuaweiMap, which includes drawing circles, adding pointers, and using nearby searches in search places. We are going to learn how to use the below features together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
First of all, you need to be a registered Huawei mobile developer and create an application in the Huawei App Console in order to use the HMS Map, Location, and Site Kits. You can follow these steps to complete the configuration required for development.
Configuring App Information in AppGallery Connect --> shorturl.at/rL347
Creating Xamarin Android Binding Libraries --> shorturl.at/rBP46
Integrating the HMS Map Kit Libraries for Xamarin --> shorturl.at/vAHPX
Integrating the HMS Location Kit Libraries for Xamarin --> shorturl.at/dCX07
Integrating the HMS Site Kit Libraries for Xamarin --> shorturl.at/bmDX6
Integrating the HMS Core SDK --> shorturl.at/qBISV
Setting Package in Xamarin --> shorturl.at/brCU1
When we create our Xamarin.Android application following the steps above, we need to make sure that the package name is the same as the one entered in the console. Also, don’t forget to enable the kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring permissions that we need as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element to embed your app ID in the application tag; it is required for this app to authenticate on Huawei's cloud server. You can find this ID in the agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Code:
private void RequestPermissions()
{
    if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
        ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
        ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
        ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
        ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
    {
        ActivityCompat.RequestPermissions(this,
            new System.String[]
            {
                Manifest.Permission.AccessCoarseLocation,
                Manifest.Permission.AccessFineLocation,
                Manifest.Permission.WriteExternalStorage,
                Manifest.Permission.ReadExternalStorage,
                Manifest.Permission.Internet
            },
            100);
    }
    else
        GetCurrentPosition();
}
Add a Map
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Code:
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
Implement the IOnMapReadyCallback interface to MainActivity and override OnMapReady method which is triggered when the map is ready to use. Then use GetMapAsync to register for the map callback.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
    ...
    public void OnMapReady(HuaweiMap map)
    {
        hMap = map;
        hMap.UiSettings.MyLocationButtonEnabled = true;
        hMap.UiSettings.CompassEnabled = true;
        hMap.UiSettings.ZoomControlsEnabled = true;
        hMap.UiSettings.ZoomGesturesEnabled = true;
        hMap.MyLocationEnabled = true;
        hMap.MapClick += HMap_MapClick;
        if (selectedCoordinates == null)
            selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
    }
}
As you can see above, with the UiSettings property of the HuaweiMap object we enable the my-location button, the compass, and so on. Now, when the app launches, we directly get the current location and move the camera to it. To do that, we use the FusedLocationProviderClient we instantiated and call the LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant listeners for success and failure. In the success listener we move the map's camera position to the last known position.
Code:
private void GetCurrentPosition()
{
    var locationTask = fusedLocationProviderClient.LastLocation;
    locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
    locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
    ...
    public void OnSuccess(Java.Lang.Object location)
    {
        Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
        if (location != null)
        {
            MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
            mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
        }
    }
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
There are several methods for changing the camera position. Briefly, these are:
NewLatLng: Change camera’s latitude and longitude, while keeping other properties
NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained with a CameraPosition.Builder. And then we can set target, bearing, tilt and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
    var cameraPosition = new CameraPosition.Builder();
    cameraPosition.Target(new LatLng(lat, lng));
    cameraPosition.Zoom(1000);
    cameraPosition.Bearing(45);
    cameraPosition.Tilt(20);
    CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
    hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
In this part, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. After this section, we are able to set a geofence around the clicked point when the map’s clicked. We attached the Click event to our map in the OnMapReady method. In this Click event, we will add a marker to the clicked point and draw a circle around it.
Also, we will use the Seekbar at the bottom of the page to adjust the circle radius. We set selectedCoordinates variable when adding the marker. Let’s create the following method to create the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
    selectedCoordinates.LatLng = e.P0;
    if (circle != null)
    {
        circle.Remove();
        circle = null;
    }
    AddMarkerOnMap();
}

void AddMarkerOnMap()
{
    if (marker != null) marker.Remove();
    var markerOption = new MarkerOptions()
        .InvokeTitle("You are here now")
        .InvokePosition(selectedCoordinates.LatLng);
    hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
    marker = hMap.AddMarker(markerOption);
    bool isInfoWindowShown = marker.IsInfoWindowShown;
    if (isInfoWindowShown)
        marker.HideInfoWindow();
    else
        marker.ShowInfoWindow();
}
Add a MapInfoWindowAdapter class to the project to render the custom info window, and implement the HuaweiMap.IInfoWindowAdapter interface on it. Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let's create a custom info window layout and name it map_info_view.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/btnInfoWindow"
        android:text="Add geofence"
        android:width="100dp"
        style="@style/Widget.AppCompat.Button.Colored"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</LinearLayout>
Customize and return it in the GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
    private MainActivity activity;
    private GeofenceModel selectedCoordinates;
    private View addressLayout;

    public MapInfoWindowAdapter(MainActivity currentActivity) { activity = currentActivity; }

    public View GetInfoContents(Marker marker) { return null; }

    public View GetInfoWindow(Marker marker)
    {
        if (marker == null)
            return null;
        selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
        View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
        var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
        if (radiusBar.Visibility == Android.Views.ViewStates.Invisible)
        {
            radiusBar.Visibility = Android.Views.ViewStates.Visible;
            radiusBar.SetProgress(30, true);
        }
        activity.FindViewById<SeekBar>(Resource.Id.radiusBar)?.SetProgress(30, true);
        activity.DrawCircleOnMap(selectedCoordinates);
        Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
        button.Click += btnInfoWindow_ClickAsync;
        return mapInfoView;
    }
}
Now we create a method to draw a circle around the marker representing the geofence radius. Create a new DrawCircleOnMap method in MainActivity for this. To construct a circle, we must specify its Center and Radius. We also set other properties such as StrokeColor.
Code:
public void DrawCircleOnMap(GeofenceModel geoModel)
{
    if (circle != null)
    {
        circle.Remove();
        circle = null;
    }
    CircleOptions circleOptions = new CircleOptions()
        .InvokeCenter(geoModel.LatLng)
        .InvokeRadius(geoModel.Radius)
        .InvokeFillColor(Color.Argb(50, 0, 14, 84))
        .InvokeStrokeColor(Color.Yellow)
        .InvokeStrokeWidth(15);
    circle = hMap.AddCircle(circleOptions);
}

private void radiusBar_ProgressChanged(object sender, SeekBar.ProgressChangedEventArgs e)
{
    selectedCoordinates.Radius = e.Progress;
    DrawCircleOnMap(selectedCoordinates);
}
We will use SeekBar to change the radius of the circle. As the value changes, the drawn circle will expand or shrink.
Reverse Geocoding
Now let's handle the click event of the info window.
Before opening that window, we need to reverse-geocode the selected coordinates to get a formatted address. HUAWEI Site Kit provides a set of HTTP APIs, including the one we need: reverseGeocode.
Let’s add the GeocodeManager class to our project and update it as follows:
Code:
public async Task<ReverseGeocodeResponse> ReverseGeocode(double lat, double lng)
{
    string result = "";
    using (var client = new HttpClient())
    {
        MyLocation location = new MyLocation();
        location.Lat = lat;
        location.Lng = lng;
        var root = new ReverseGeocodeRequest();
        root.Location = location;
        var settings = new JsonSerializerSettings();
        settings.ContractResolver = new LowercaseSerializer();
        var json = JsonConvert.SerializeObject(root, Formatting.Indented, settings);
        var data = new StringContent(json, Encoding.UTF8, "application/json");
        var url = "https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=" + Android.Net.Uri.Encode(ApiKey);
        var response = await client.PostAsync(url, data);
        result = response.Content.ReadAsStringAsync().Result;
    }
    // Return the full response so the caller can check ReturnCode and read Sites.
    return JsonConvert.DeserializeObject<ReverseGeocodeResponse>(result);
}
In the above code, we request the address corresponding to a given latitude/longitude. Also specified that the output must be in JSON format.
siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=APIKEY
Request model:
Code:
public class MyLocation
{
    public double Lat { get; set; }
    public double Lng { get; set; }
}

public class ReverseGeocodeRequest
{
    public MyLocation Location { get; set; }
}
Note that the JSON response contains three root elements:
“returnCode”: for details, please refer to Result Codes.
“returnDesc”: the result description.
“sites”: an array of geocoded address information.
Generally, only one entry in the “sites” array is returned for an address lookup, though the geocoder may return several results when the address query is ambiguous.
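As an illustration of that structure, here is a hedged sketch of checking the return code and pulling the first site. It is plain JavaScript working on a hand-written sample object; only the three root elements described above are assumed, and the site field `formatAddress` in the sample is illustrative, not a guaranteed schema.

```javascript
// Extract the first geocoded site from a reverseGeocode-style
// response, or null if the call did not succeed. Only the root
// elements returnCode / returnDesc / sites are relied upon.
function firstSite(response) {
    if (!response) return null;
    var ok = response.returnCode === "0" || response.returnCode === 0;
    if (!ok) return null;
    return (response.sites && response.sites[0]) || null;
}

// Hand-written sample response, shaped like the description above.
var sample = {
    returnCode: "0",
    returnDesc: "OK",
    sites: [{ formatAddress: "Paris, France" }]
};
var site = firstSite(sample);
```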
Add the following code to our MapInfoWindowAdapter, where we get results from the Reverse Geocode API and set the UI elements.
Code:
private async void btnInfoWindow_ClickAsync(object sender, System.EventArgs e)
{
    addressLayout = activity.LayoutInflater.Inflate(Resource.Layout.reverse_alert_layout, null);
    GeocodeManager geocodeManager = new GeocodeManager(activity);
    var addressResult = await geocodeManager.ReverseGeocode(selectedCoordinates.LatLng.Latitude, selectedCoordinates.LatLng.Longitude);
    if (addressResult.ReturnCode != 0)
        return;
    var address = addressResult.Sites.FirstOrDefault();
    var txtAddress = addressLayout.FindViewById<TextView>(Resource.Id.txtAddress);
    var txtRadius = addressLayout.FindViewById<TextView>(Resource.Id.txtRadius);
    txtAddress.Text = address.FormatAddress;
    txtRadius.Text = selectedCoordinates.Radius.ToString();
    AlertDialog.Builder builder = new AlertDialog.Builder(activity);
    builder.SetView(addressLayout);
    builder.SetTitle(address.Name);
    builder.SetPositiveButton("Save", (sender, arg) =>
    {
        selectedCoordinates.Conversion = GetSelectedConversion();
        GeofenceManager geofenceManager = new GeofenceManager(activity);
        geofenceManager.AddGeofences(selectedCoordinates);
    });
    builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
    AlertDialog alert = builder.Create();
    alert.Show();
}
Now, after selecting the conversion, we can complete the process by calling the AddGeofence method in the GeofenceManager class by pressing the save button in the dialog window.
Code:
public void AddGeofences(GeofenceModel geofenceModel)
{
    // Set parameters.
    geofenceModel.Id = Guid.NewGuid().ToString();
    if (geofenceModel.Conversion == 5) // Expiration value that indicates the geofence should never expire.
        geofenceModel.Timeout = Geofence.GeofenceNeverExpire;
    else
        geofenceModel.Timeout = 10000;
    List<IGeofence> geofenceList = new List<IGeofence>();
    // Geofence service.
    GeofenceService geofenceService = LocationServices.GetGeofenceService(activity);
    PendingIntent pendingIntent = CreatePendingIntent();
    GeofenceBuilder somewhereBuilder = new GeofenceBuilder()
        .SetUniqueId(geofenceModel.Id)
        .SetValidContinueTime(geofenceModel.Timeout)
        .SetRoundArea(geofenceModel.LatLng.Latitude, geofenceModel.LatLng.Longitude, geofenceModel.Radius)
        .SetDwellDelayTime(10000)
        .SetConversions(geofenceModel.Conversion);
    // Create the geofence request.
    geofenceList.Add(somewhereBuilder.Build());
    GeofenceRequest geofenceRequest = new GeofenceRequest.Builder()
        .CreateGeofenceList(geofenceList)
        .Build();
    // Register the geofence.
    var geoTask = geofenceService.CreateGeofenceList(geofenceRequest, pendingIntent);
    geoTask.AddOnSuccessListener(new CreateGeoSuccessListener(activity));
    geoTask.AddOnFailureListener(new CreateGeoFailListener(activity));
}
In the AddGeofences method, we set the geofence request parameters with GeofenceBuilder: the selected conversion, a unique ID, the timeout according to the conversion, and so on. We also create a GeofenceBroadcastReceiver that displays a toast message when a geofence action occurs.
Code:
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY" })]
class GeofenceBroadcastReceiver : BroadcastReceiver
{
    public static readonly string ActionGeofence = "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY";

    public override void OnReceive(Context context, Intent intent)
    {
        if (intent != null)
        {
            var action = intent.Action;
            if (action == ActionGeofence)
            {
                GeofenceData geofenceData = GeofenceData.GetDataFromIntent(intent);
                if (geofenceData != null)
                {
                    Toast.MakeText(context, "Geofence triggered: " + geofenceData.ConvertingLocation.Latitude + "\n"
                        + geofenceData.ConvertingLocation.Longitude + "\n"
                        + geofenceData.Conversion.ToConversionName(), ToastLength.Long).Show();
                }
            }
        }
    }
}
After that, in CreateGeoSuccessListener and CreateGeoFailListener, which implement IOnSuccessListener and IOnFailureListener respectively, we display a toast message to the user like this:
Code:
public class CreateGeoFailListener : Java.Lang.Object, IOnFailureListener
{
private readonly Activity mainActivity;
public CreateGeoFailListener(Activity activity) { mainActivity = activity; }
public void OnFailure(Java.Lang.Exception ex)
{
Toast.MakeText(mainActivity, "Geofence request failed: " + GeofenceErrorCodes.GetErrorMessage((ex as ApiException).StatusCode), ToastLength.Long).Show();
}
}
public class CreateGeoSuccessListener : Java.Lang.Object, IOnSuccessListener
{
private readonly Activity mainActivity;
public CreateGeoSuccessListener(Activity activity) { mainActivity = activity; }
public void OnSuccess(Java.Lang.Object data)
{
Toast.MakeText(mainActivity, "Geofence request successful", ToastLength.Long).Show();
}
}
Set geofence location using Nearby Search
On the main layout, when the user clicks the Search Nearby Places button, a search dialog like the one below appears:
Create search_alert_layout.xml with a search input. In MainActivity, create a click event for that button and open an alert dialog after its view is set to search_alert_layout. Then perform the nearby search when the Search button is clicked:
Code:
private void btnGeoWithAddress_Click(object sender, EventArgs e)
{
search_view = base.LayoutInflater.Inflate(Resource.Layout.search_alert_layout, null);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.SetView(search_view);
builder.SetTitle("Search Location");
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
search_view.FindViewById<Button>(Resource.Id.btnSearch).Click += btnSearchClicked;
alert = builder.Create();
alert.Show();
}
private void btnSearchClicked(object sender, EventArgs e)
{
string searchText = search_view.FindViewById<TextView>(Resource.Id.txtSearch).Text;
GeocodeManager geocodeManager = new GeocodeManager(this);
geocodeManager.NearbySearch(CurrentPosition, searchText);
}
We pass the search text and the current location into the GeocodeManager NearbySearch method as parameters. We need to modify the GeocodeManager class and add a nearby search method to it:
Code:
public void NearbySearch(LatLng currentLocation, string searchText)
{
ISearchService searchService = SearchServiceFactory.Create(activity, Android.Net.Uri.Encode("YOUR_API_KEY"));
NearbySearchRequest nearbySearchRequest = new NearbySearchRequest();
nearbySearchRequest.Query = searchText;
nearbySearchRequest.Language = "en";
nearbySearchRequest.Location = new Coordinate(currentLocation.Latitude, currentLocation.Longitude);
nearbySearchRequest.Radius = (Integer)2000;
nearbySearchRequest.PageIndex = (Integer)1;
nearbySearchRequest.PageSize = (Integer)5;
nearbySearchRequest.PoiType = LocationType.Address;
searchService.NearbySearch(nearbySearchRequest, new NearbySearchResultListener(activity as MainActivity));
}
To handle the result, we create a listener class that implements the ISearchResultListener interface:
Code:
public class NearbySearchResultListener : Java.Lang.Object, ISearchResultListener
{
private readonly MainActivity context;
public NearbySearchResultListener(MainActivity context) { this.context = context; }
public void OnSearchError(SearchStatus status)
{
Toast.MakeText(context, "Error Code: " + status.ErrorCode + " Error Message: " + status.ErrorMessage, ToastLength.Long).Show();
}
public void OnSearchResult(Java.Lang.Object results)
{
NearbySearchResponse nearbySearchResponse = (NearbySearchResponse)results;
if (nearbySearchResponse != null && nearbySearchResponse.TotalCount > 0)
context.SetSearchResultOnMap(nearbySearchResponse.Sites);
}
}
In the OnSearchResult method, a NearbySearchResponse object is returned. We will insert a marker on the map for each site in this response. The map will look like this:
In MainActivity, create a method named SetSearchResultOnMap that takes IList<Site> as a parameter and inserts multiple markers on the map.
Code:
public void SetSearchResultOnMap(IList<Com.Huawei.Hms.Site.Api.Model.Site> sites)
{
hMap.Clear();
if (searchMarkers != null && searchMarkers.Count > 0)
foreach (var item in searchMarkers)
item.Remove();
searchMarkers = new List<Marker>();
for (int i = 0; i < sites.Count; i++)
{
MarkerOptions marker1Options = new MarkerOptions()
.InvokePosition(new LatLng(sites[i].Location.Lat, sites[i].Location.Lng))
.InvokeTitle(sites[i].Name).Clusterable(true);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
var marker1 = hMap.AddMarker(marker1Options);
searchMarkers.Add(marker1);
RepositionMapCamera(sites[i].Location.Lat, sites[i].Location.Lng);
}
hMap.SetMarkersClustering(true);
alert.Dismiss();
}
Now we add markers as we did above, but here we call SetMarkersClustering(true) to consolidate markers into clusters when zooming out of the map.
You can download the source code from below:
github.com/stugcearar/HMSCore-Xamarin-Android-Samples/tree/master/LocationKit/HMS_Geofence
Also if you have any questions, ask away in Huawei Developer Forums.
Errors
If your location permission is set to "Allowed only while in use" instead of "Allowed all the time", the exception below will be thrown.
int GEOFENCE_INSUFFICIENT_PERMISSION
Insufficient permission to perform geofence-related operations.
You can see all result codes for the Location service, including errors, here.
You can find the result codes for geofence requests, with details, here.

Intermediate: An Introduction to HarmonyOs RDB using Java

Introduction
HarmonyOs is a next-generation operating system that empowers interconnection and collaboration between smart devices. It delivers smooth, simple interaction that is reliable in all scenarios.
SQLite is an open-source relational database used to perform database operations on devices, such as storing, manipulating, or retrieving persistent data.
HarmonyOs uses SQLite for managing the local database and calls it the HarmonyOs RDB (relational database).
Takeaways
Integrate HarmonyOs RDB in the application.
Navigate from one Ability Slice to another and send data while doing so.
Learn to create a UI using DirectionalLayout.
Default and customized dialogs.
Providing a background color to buttons or layouts programmatically.
HarmonyOs animation.
Demo
To understand how HarmonyOs works with SQLite DB, I have created a quiz app and inserted all the question data using the SQLite database, as shown below:
Integrating HarmonyOs RDB
Step 1: Create Questions model (POJO) class.
Java:
public class Questions {
private int id;
private String topic;
private String question;
private String optionA;
private String optionB;
private String optionC;
private String optionD;
private String answer;
public Questions(String topc, String ques, String opta, String optb, String optc, String optd, String ans) {
topic = topc;
question = ques;
optionA = opta;
optionB = optb;
optionC = optc;
optionD = optd;
answer = ans;
}
public Questions() {
id = 0;
topic = "";
question = "";
optionA = "";
optionB = "";
optionC = "";
optionD = "";
answer = "";
}
public void setId(int id) {
this.id = id;
}
public String getTopic() {
return topic;
}
public void setTopic(String topic) {
this.topic = topic;
}
public String getQuestion() {
return question;
}
public void setQuestion(String question) {
this.question = question;
}
public String getOptionA() {
return optionA;
}
public void setOptionA(String optionA) {
this.optionA = optionA;
}
public String getOptionB() {
return optionB;
}
public void setOptionB(String optionB) {
this.optionB = optionB;
}
public String getOptionC() {
return optionC;
}
public void setOptionC(String optionC) {
this.optionC = optionC;
}
public String getOptionD() {
return optionD;
}
public void setOptionD(String optionD) {
this.optionD = optionD;
}
public String getAnswer() {
return answer;
}
public void setAnswer(String answer) {
this.answer = answer;
}
}
Step 2: Create a class and name it QuizDatabaseHelper.
Step 3: Extend the class from the DatabaseHelper class.
Step 4: After that, we need to configure the RDB store. For that, we use StoreConfig.
Java:
StoreConfig config = StoreConfig.newDefaultConfig("QuizMania.db");
Step 5: Use the RdbOpenCallback abstract class to create the table. If we need to modify the table later, we can use this class to upgrade the database version and avoid crashes.
Java:
RdbOpenCallback callback = new RdbOpenCallback() {
@Override
public void onCreate(RdbStore store) {
store.executeSql("CREATE TABLE " + TABLE_NAME + " ( " + ID + " INTEGER PRIMARY KEY AUTOINCREMENT , " + TOPIC + " VARCHAR(255), " + QUESTION + " VARCHAR(255), " + OPTIONA + " VARCHAR(255), " + OPTIONB + " VARCHAR(255), " + OPTIONC + " VARCHAR(255), " + OPTIOND + " VARCHAR(255), " + ANSWER + " VARCHAR(255))");
}
@Override
public void onUpgrade(RdbStore store, int oldVersion, int newVersion) {
}
};
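For readability, the concatenated statement inside onCreate expands to a single CREATE TABLE command. The following plain-Java sketch (no ohos dependencies) assembles the same string, so the final SQL is easy to inspect:

```java
// Plain-Java sketch: assembles the same CREATE TABLE statement that
// the onCreate() callback above executes against the RdbStore.
public class CreateTableSqlSketch {
    static final String TABLE_NAME = "QUIZMASTER";
    static final String ID = "_ID";
    static final String TOPIC = "TOPIC";
    static final String QUESTION = "QUESTION";
    static final String OPTIONA = "OPTIONA";
    static final String OPTIONB = "OPTIONB";
    static final String OPTIONC = "OPTIONC";
    static final String OPTIOND = "OPTIOND";
    static final String ANSWER = "ANSWER";

    static String createTableSql() {
        return "CREATE TABLE " + TABLE_NAME + " ( "
                + ID + " INTEGER PRIMARY KEY AUTOINCREMENT , "
                + TOPIC + " VARCHAR(255), "
                + QUESTION + " VARCHAR(255), "
                + OPTIONA + " VARCHAR(255), "
                + OPTIONB + " VARCHAR(255), "
                + OPTIONC + " VARCHAR(255), "
                + OPTIOND + " VARCHAR(255), "
                + ANSWER + " VARCHAR(255))";
    }

    public static void main(String[] args) {
        // Prints the fully expanded SQL statement.
        System.out.println(createTableSql());
    }
}
```

Seeing the expanded SQL makes it easier to spot column typos before shipping, since executeSql only fails at run time.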
Step 6: Use DatabaseHelper class to obtain the RDB store.
Java:
DatabaseHelper helper = new DatabaseHelper(context);
store = helper.getRdbStore(config, 1, callback, null);
Step 7: To insert the question data, we will use the ValuesBucket of RDB.
Java:
private void insertAllQuestions(ArrayList<Questions> allQuestions){
ValuesBucket values = new ValuesBucket();
for(Questions question : allQuestions){
values.putString(TOPIC, question.getTopic());
values.putString(QUESTION, question.getQuestion());
values.putString(OPTIONA, question.getOptionA());
values.putString(OPTIONB, question.getOptionB());
values.putString(OPTIONC, question.getOptionC());
values.putString(OPTIOND, question.getOptionD());
values.putString(ANSWER, question.getAnswer());
long id = store.insert("QUIZMASTER", values);
}
}
Step 8: To retrieve the question data, we will use RdbPredicates and ResultSet. RdbPredicates helps us combine SQL statements simply by calling its methods, such as equalTo, notEqualTo, groupBy, orderByAsc, and beginsWith. ResultSet, on the other hand, helps us retrieve the data we have queried.
Java:
public List<Questions> getAllListOfQuestions(String topicName) {
List<Questions> questionsList = new ArrayList<>();
String[] columns = new String[] {ID, TOPIC, QUESTION, OPTIONA,OPTIONB,OPTIONC,OPTIOND,ANSWER};
RdbPredicates rdbPredicates = new RdbPredicates(TABLE_NAME).equalTo(TOPIC, topicName);
ResultSet resultSet = store.query(rdbPredicates, columns);
while (resultSet.goToNextRow()){
Questions question = new Questions();
question.setId(resultSet.getInt(0));
question.setTopic(resultSet.getString(1));
question.setQuestion(resultSet.getString(2));
question.setOptionA(resultSet.getString(3));
question.setOptionB(resultSet.getString(4));
question.setOptionC(resultSet.getString(5));
question.setOptionD(resultSet.getString(6));
question.setAnswer(resultSet.getString(7));
questionsList.add(question);
}
return questionsList;
}
Step 9: Let's call the QuizDatabaseHelper class in an Ability Slice and get all the questions from the stored database.
Java:
QuizDatabaseHelper quizDatabaseHelper = new QuizDatabaseHelper(getContext());
quizDatabaseHelper.initDb();
if (quizDatabaseHelper.getAllListOfQuestions(topicName).size() == 0) {
quizDatabaseHelper.listOfAllQuestion();
}
List<Questions> list = quizDatabaseHelper.getAllListOfQuestions(topicName);
Collections.shuffle(list);
Questions questionObj = list.get(questionId);
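The Collections.shuffle call above randomizes the question order on each run. A minimal self-contained sketch (plain Java, with hypothetical question strings) illustrating that shuffling only reorders the list without losing elements:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Self-contained illustration of the shuffle step used before
// picking questionObj = list.get(questionId).
public class ShuffleSketch {
    static List<String> shuffled(List<String> questions, long seed) {
        // Copy first so the caller's list is left untouched.
        List<String> copy = new ArrayList<>(questions);
        // A seeded Random keeps this example reproducible; the app
        // calls Collections.shuffle(list) for real randomness.
        Collections.shuffle(copy, new Random(seed));
        return copy;
    }

    public static void main(String[] args) {
        List<String> questions = List.of("Q1", "Q2", "Q3", "Q4", "Q5");
        System.out.println(shuffled(questions, 42L));
    }
}
```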
QuizDatabaseHelper.java
Java:
public class QuizDatabaseHelper extends DatabaseHelper {
Context context;
StoreConfig config;
RdbStore store;
private static final String TABLE_NAME = "QUIZMASTER";
private static final String ID = "_ID";
private static final String TOPIC = "TOPIC";
private static final String QUESTION = "QUESTION";
private static final String OPTIONA = "OPTIONA";
private static final String OPTIONB = "OPTIONB";
private static final String OPTIONC = "OPTIONC";
private static final String OPTIOND = "OPTIOND";
private static final String ANSWER = "ANSWER";
public QuizDatabaseHelper(Context context) {
super(context);
this.context = context;
}
public void initDb(){
config = StoreConfig.newDefaultConfig("QuizMania.db");
RdbOpenCallback callback = new RdbOpenCallback() {
@Override
public void onCreate(RdbStore store) {
store.executeSql("CREATE TABLE " + TABLE_NAME + " ( " + ID + " INTEGER PRIMARY KEY AUTOINCREMENT , " + TOPIC + " VARCHAR(255), " + QUESTION + " VARCHAR(255), " + OPTIONA + " VARCHAR(255), " + OPTIONB + " VARCHAR(255), " + OPTIONC + " VARCHAR(255), " + OPTIOND + " VARCHAR(255), " + ANSWER + " VARCHAR(255))");
}
@Override
public void onUpgrade(RdbStore store, int oldVersion, int newVersion) {
}
};
DatabaseHelper helper = new DatabaseHelper(context);
store = helper.getRdbStore(config, 1, callback, null);
}
public void listOfAllQuestion() {
// Generic type is Questions POJO class.
ArrayList<Questions> arraylist = new ArrayList<>();
// General Knowledge Questions...
arraylist.add(new Questions("gk","India has largest deposits of ____ in the world.", "Gold", "Copper", "Mica", "None of the above", "Mica"));
arraylist.add(new Questions("gk","Who was known as Iron man of India ?", "Govind Ballabh Pant", "Jawaharlal Nehru", "Subhash Chandra Bose", "Sardar Vallabhbhai Patel", "Sardar Vallabhbhai Patel"));
arraylist.add(new Questions("gk", "India participated in Olympics Hockey in", "1918", "1928", "1938", "1948", "1928"));
arraylist.add(new Questions("gk","Who is the Flying Sikh of India ?", "Mohinder Singh", "Joginder Singh", "Ajit Pal Singh", "Milkha singh", "Milkha singh"));
arraylist.add(new Questions("gk","How many times has Brazil won the World Cup Football Championship ?", "Four times", "Twice", "Five times", "Once", "Five times"));
// Sports Questions..
arraylist.add(new Questions("sp","Which was the 1st non Test playing country to beat India in an international match ?", "Canada", "Sri Lanka", "Zimbabwe", "East Africa", "Sri Lanka"));
arraylist.add(new Questions("sp","Ricky Ponting is also known as what ?", "The Rickster", "Ponts", "Ponter", "Punter", "Punter"));
arraylist.add(new Questions("sp","India won its first Olympic hockey gold in...?", "1928", "1932", "1936", "1948", "1928"));
arraylist.add(new Questions("sp","The Asian Games were held in Delhi for the first time in...?", "1951", "1963", "1971", "1982", "1951"));
arraylist.add(new Questions("sp","The 'Dronacharya Award' is given to...?", "Sportsmen", "Coaches", "Umpires", "Sports Editors", "Coaches"));
// History Questions...
arraylist.add(new Questions("his","The Battle of Plassey was fought in", "1757", "1782", "1748", "1764", "1757"));
arraylist.add(new Questions("his","The title of 'Viceroy' was added to the office of the Governor-General of India for the first time in", "1848 AD", "1856 AD", "1858 AD", "1862 AD", "1858 AD"));
arraylist.add(new Questions("his","Tipu sultan was the ruler of", "Hyderabad", "Madurai", "Mysore", "Vijayanagar", "Mysore"));
arraylist.add(new Questions("his","The Vedas contain all the truth was interpreted by", "Swami Vivekananda", "Swami Dayananda", "Raja Rammohan Roy", "None of the above", "Swami Dayananda"));
arraylist.add(new Questions("his","The Upanishads are", "A source of Hindu philosophy", "Books of ancient Hindu laws", "Books on social behavior of man", "Prayers to God", "A source of Hindu philosophy"));
// General Science Questions...
arraylist.add(new Questions("gs","Which of the following is a non metal that remains liquid at room temperature ?", "Phosphorous", "Bromine", "Chlorine", "Helium", "Bromine"));
arraylist.add(new Questions("gs","Which of the following is used in pencils?", "Graphite", "Silicon", "Charcoal", "Phosphorous", "Graphite"));
arraylist.add(new Questions("gs","The gas usually filled in the electric bulb is", "Nitrogen", "Hydrogen", "Carbon Dioxide", "Oxygen", "Nitrogen"));
arraylist.add(new Questions("gs","Which of the gas is not known as green house gas ?", "Methane", "Nitrous oxide", "Carbon dioxide", "Hydrogen", "Hydrogen"));
arraylist.add(new Questions("gs","The hardest substance available on earth is", "Gold", "Iron", "Diamond", "Platinum", "Diamond"));
this.insertAllQuestions(arraylist);
}
private void insertAllQuestions(ArrayList<Questions> allQuestions){
ValuesBucket values = new ValuesBucket();
for(Questions question : allQuestions){
values.putString(TOPIC, question.getTopic());
values.putString(QUESTION, question.getQuestion());
values.putString(OPTIONA, question.getOptionA());
values.putString(OPTIONB, question.getOptionB());
values.putString(OPTIONC, question.getOptionC());
values.putString(OPTIOND, question.getOptionD());
values.putString(ANSWER, question.getAnswer());
long id = store.insert("QUIZMASTER", values);
}
}
public List<Questions> getAllListOfQuestions(String topicName) {
List<Questions> questionsList = new ArrayList<>();
String[] columns = new String[] {ID, TOPIC, QUESTION, OPTIONA,OPTIONB,OPTIONC,OPTIOND,ANSWER};
RdbPredicates rdbPredicates = new RdbPredicates(TABLE_NAME).equalTo(TOPIC, topicName);
ResultSet resultSet = store.query(rdbPredicates, columns);
while (resultSet.goToNextRow()){
Questions question = new Questions();
question.setId(resultSet.getInt(0));
question.setTopic(resultSet.getString(1));
question.setQuestion(resultSet.getString(2));
question.setOptionA(resultSet.getString(3));
question.setOptionB(resultSet.getString(4));
question.setOptionC(resultSet.getString(5));
question.setOptionD(resultSet.getString(6));
question.setAnswer(resultSet.getString(7));
questionsList.add(question);
}
return questionsList;
}
}
HarmonyOs Navigation
An Ability Slice represents a single screen and its control logic. In Android terms, it is like a Fragment, while a Page Ability is like an Activity; an ability slice's lifecycle is bound to the Page ability that hosts it.
Now, if we need to navigate with data from one Ability Slice to another, we use the present method of HarmonyOs.
Java:
public final void present(AbilitySlice targetSlice, Intent intent) {
throw new RuntimeException("Stub!");
}
GameAbilitySlice.java
Java:
private void goToQuizPage(String topic){
Intent intent = new Intent();
intent.setParam("TEST_KEY", topic);
present(new QuizAbilitySlice(), intent);
}
Here the targetSlice is QuizAbilitySlice.
QuizAbilitySlice.java
Java:
String topicName = intent.getStringParam("TEST_KEY");
Here we are getting the value from the source Ability Slice.
HarmonyOs User Interface
Layouts
There are six layouts available in HarmonyOs:
DirectionalLayout
DependentLayout
StackLayout
TableLayout
PositionLayout
AdaptiveBoxLayout
We will be using DirectionalLayout for our UI. In Android terms, it is like LinearLayout: it has orientation, weight, and many other properties that we also find in LinearLayout.
Text and Button Components
Yes, you heard it right: any widget in HarmonyOs is treated as a Component, so both Text and Button are Components. As HarmonyOs uses XML for the UI, all the XML properties we see in Android can be used here. The only difference you will find is in providing a background colour to buttons or layouts: we need to create a graphic XML file under the graphic folder of the resources.
btn_option.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<shape
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="20"/>
<solid
ohos:color="#2c3e50"/>
</shape>
After that, we will use the btn_option.xml file as the background colour for our buttons via the background_element property.
XML:
<Button
ohos:id="$+id:btnD"
ohos:height="80fp"
ohos:width="match_parent"
ohos:margin="10fp"
ohos:text_color="#ecf0f1"
ohos:text_size="30fp"
ohos:text="Gold"
ohos:background_element="$graphic:btn_option"/>
ability_quiz.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:alignment="center"
ohos:orientation="vertical">
<DirectionalLayout
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:weight="0.5"
ohos:alignment="center"
ohos:background_element="$graphic:background_question_area">
<Text
ohos:id="$+id:txtQuestion"
ohos:height="match_content"
ohos:width="match_content"
ohos:text_alignment="center"
ohos:multiple_lines="true"
ohos:margin="20fp"
ohos:text_size="40vp"
ohos:text="Question"
/>
</DirectionalLayout>
<DirectionalLayout
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:alignment="center"
ohos:weight="1">
<Button
ohos:id="$+id:btnA"
ohos:height="80fp"
ohos:width="match_parent"
ohos:margin="10fp"
ohos:text_color="#ecf0f1"
ohos:text_size="30fp"
ohos:text="Gold"
ohos:background_element="$graphic:btn_option"
/>
<Button
ohos:id="$+id:btnB"
ohos:height="80fp"
ohos:width="match_parent"
ohos:margin="10fp"
ohos:text_color="#ecf0f1"
ohos:text_size="30fp"
ohos:text="Gold"
ohos:background_element="$graphic:btn_option"
/>
<Button
ohos:id="$+id:btnC"
ohos:height="80fp"
ohos:width="match_parent"
ohos:margin="10fp"
ohos:text_color="#ecf0f1"
ohos:text_size="30fp"
ohos:text="Gold"
ohos:background_element="$graphic:btn_option"
/>
<Button
ohos:id="$+id:btnD"
ohos:height="80fp"
ohos:width="match_parent"
ohos:margin="10fp"
ohos:text_color="#ecf0f1"
ohos:text_size="30fp"
ohos:text="Gold"
ohos:background_element="$graphic:btn_option"
/>
</DirectionalLayout>
</DirectionalLayout>
HarmonyOs Dialogs
There are six dialog types available in HarmonyOs:
DisplayDialog
CommonDialog
BaseDialog
PopupDialog
ListDialog
ToastDialog
We will be using CommonDialog to show both a default and a customized dialog in our application. A dialog in HarmonyOs is also known as a Component. CommonDialog lets us provide button functionality like we see in Android dialogs.
Default CommonDialog
Java:
private void wrongAnsDialog(){
CommonDialog commonDialog = new CommonDialog(getContext());
commonDialog.setTitleText("WRONG ANSWER");
commonDialog.setSize(1000,300);
commonDialog.setButton(1, "OKAY", new IDialog.ClickedListener() {
@Override
public void onClick(IDialog iDialog, int i) {
commonDialog.hide();
present(new GameAbilitySlice(), new Intent());
}
});
commonDialog.show();
}
Customized CommonDialog
Java:
private void correctAnsDialog(){
CommonDialog commonDialog = new CommonDialog(getContext());
DependentLayout dependentLayout = new DependentLayout (getContext());
dependentLayout.setWidth(DependentLayout.LayoutConfig.MATCH_PARENT);
dependentLayout.setHeight(DependentLayout.LayoutConfig.MATCH_PARENT);
dependentLayout.setBackground(new ShapeElement(this,ResourceTable.Graphic_correct_dialog));
Text text = new Text(getContext());
text.setText("CORRECT ANSWER");
text.setTextSize(60);
text.setTextColor(Color.WHITE);
DependentLayout.LayoutConfig textConfig = new DependentLayout.LayoutConfig(DependentLayout.LayoutConfig.MATCH_CONTENT,
DependentLayout.LayoutConfig.MATCH_CONTENT);
textConfig.addRule(DependentLayout.LayoutConfig.CENTER_IN_PARENT);
textConfig.addRule(DependentLayout.LayoutConfig.ALIGN_PARENT_TOP);
text.setLayoutConfig(textConfig);
Button btnNext = new Button(getContext());
btnNext.setText("NEXT QUESTION");
btnNext.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
commonDialog.hide();
questionId++;
questionObj = list.get(questionId);
onNextQuestionAndOption();
resetButtonColors();
enableAllButtons();
}
});
btnNext.setBackground(new ShapeElement(this,ResourceTable.Graphic_btn_next));
btnNext.setTextColor(Color.BLACK);
btnNext.setPadding(20,20,20,20);
btnNext.setTextSize(50);
DependentLayout.LayoutConfig btnConfig = new DependentLayout.LayoutConfig(DependentLayout.LayoutConfig.MATCH_PARENT,
DependentLayout.LayoutConfig.MATCH_CONTENT);
btnConfig.addRule(DependentLayout.LayoutConfig.CENTER_IN_PARENT);
btnConfig.addRule(DependentLayout.LayoutConfig.ALIGN_PARENT_BOTTOM);
btnNext.setLayoutConfig(btnConfig);
dependentLayout.addComponent(text);
dependentLayout.addComponent(btnNext);
commonDialog.setContentCustomComponent(dependentLayout);
commonDialog.setSize(1000,300);
commonDialog.show();
}
Programmatically changing color
To change the color of a button or layout programmatically, we use the ShapeElement class.
Java:
// For Buttons …
private void resetButtonColors() {
btnA.setBackground(new ShapeElement(this,ResourceTable.Graphic_btn_option));
btnB.setBackground(new ShapeElement(this,ResourceTable.Graphic_btn_option));
btnC.setBackground(new ShapeElement(this,ResourceTable.Graphic_btn_option));
btnD.setBackground(new ShapeElement(this,ResourceTable.Graphic_btn_option));
}
// For Layouts …
DependentLayout dependentLayout = new DependentLayout (getContext());
dependentLayout.setWidth(DependentLayout.LayoutConfig.MATCH_PARENT);
dependentLayout.setHeight(DependentLayout.LayoutConfig.MATCH_PARENT);
dependentLayout.setBackground(new ShapeElement(this,ResourceTable.Graphic_correct_dialog));
Here, ResourceTable is treated the same as R in Android.
HarmonyOs Animation
HarmonyOs provides four major classes for animation:
FrameAnimationElement
AnimatorValue
AnimatorProperty
AnimatorGroup
We will be using AnimatorProperty for the animation on our splash screen.
Step 1: Create an AnimatorProperty object.
Java:
AnimatorProperty topAnim = logImg.createAnimatorProperty();
topAnim.alphaFrom((float) 0.1).alpha((float) 1.0).moveFromY(0).moveToY(700).setDuration(2000);
Here logImg is an Image.
Step 2: Create animator_property.xml file in resource/base/animation folder.
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<animator xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:duration="2000"/>
Step 3: Parse the animator_property.xml file and apply its configuration using the AnimatorScatter class.
Java:
AnimatorScatter scatter = AnimatorScatter.getInstance(getContext());
Animator animator = scatter.parse(ResourceTable.Animation_topanim);
if (animator instanceof AnimatorProperty) {
topAnim = (AnimatorProperty) animator;
topAnim.setTarget(logImg);
topAnim.moveFromY(0).moveToY(700);
}
logImg.setBindStateChangedListener(new Component.BindStateChangedListener() {
@Override
public void onComponentBoundToWindow(Component component) {
topAnim.start();
}
@Override
public void onComponentUnboundFromWindow(Component component) {
topAnim.stop();
}
});
Step 4: Start Animation
Java:
topAnim.start();
Tips & Tricks
Kindly follow my articles; the entire series is full of tips & tricks. I have also mentioned Android keywords to make Android developers familiar with HarmonyOs terminology.
Conclusion
In this article, we learned how to integrate SQLite DB in a HarmonyOs application. You can now use this knowledge to create applications such as library management, school management, games, etc.
Feel free to comment, share, and like the article. You can also follow me to get an awesome article like this every week.
For more reference
https://developer.harmonyos.com/en/docs/documentation/doc-guides/database-relational-overview-0000000000030046
https://developer.harmonyos.com/en/docs/documentation/doc-guides/ui-java-overview-0000000000500404
Original Source

Integration of Huawei Direction API in Navigation Glove IoT application Using Kotlin - Part 4

Introduction​
In this series of articles, we learn about the Navigation Glove application, including the integration of the Huawei Direction API into the Navigation Glove IoT application. We will also learn how to draw a polyline on the map.
If you are new to this series, follow my previous articles:
Beginner: Integration of Huawei Account kit in Navigation Glove IoT application Using Kotlin - Part 1
Beginner: Integration of Huawei Map kit in Navigation Glove IoT application Using Kotlin - Part 2
Beginner: Integration of Huawei Site kit in Navigation Glove IoT application Using Kotlin - Part 3
What is Direction API?
Huawei Map Kit provides a set of HTTP/HTTPS APIs which you can use to build map data functions such as route planning, static maps, and raster maps.
The Directions API is a set of HTTPS-based APIs used to plan routes. The Directions API returns data in JSON format, which you can parse to draw the route on the map.
It supports the following types of routes:
Walking: you can plan routes up to 150 kilometers.
Cycling: you can plan routes up to 100 kilometers.
Driving: driving routes provide the following functions:
1. It returns up to 3 routes per request.
2. It supports up to 5 waypoints.
3. It gives real-time traffic conditions.
Using REST services, we are integrating the following:
Direction API
Prerequisites
AppGallery Account
Android Studio 3.X
SDK Platform 19 or later
Gradle 4.6 or later
HMS Core (APK) 4.0.0.300 or later
Huawei Phone EMUI 3.0 or later
Non-Huawei Phone Android 4.4 or later
Service integration on AppGallery.
1. We need to register as a developer account in AppGallery Connect.
2. Create an app by referring to Creating a Project and Creating an App in the Project.
3. Set the data storage location based on the current location.
4. Enabling Map Kit Service on AppGallery Connect.
5. Generating a Signing Certificate Fingerprint.
6. Configuring the Signing Certificate Fingerprint.
7. Add your agconnect-services.json file to the app's root directory.
Client development
1. Create android project in android studio IDE.
2. Add the maven URL inside the repositories of buildscript and allprojects respectively (project level build.gradle file).
Code:
maven { url 'https://developer.huawei.com/repo/' }
3. Add the classpath inside the dependency section of the project level build.gradle file.
Code:
classpath 'com.huawei.agconnect:agcp:1.5.2.300'
4. Add the plugin in the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the below library in the app-level build.gradle file dependencies section.
Code:
implementation 'com.huawei.hms:maps:4.0.0.302'
implementation 'com.google.code.gson:gson:2.8.6'
implementation 'com.squareup.retrofit2:retrofit:2.7.2'
implementation 'com.squareup.retrofit2:converter-gson:2.7.2'
6. Add all the below permissions in the AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
7. Sync the project.
By now, the user has selected the source and destination locations, and we have added markers on the map.
Direction API Code
For the Direction API, we are using the Retrofit library.
First, let's see what the request, the response, and the endpoint look like.
Method Type: POST
URL:
https://mapapi.cloud.huawei.com/mapApi/v1/routeService/driving?key=API key
Note: Replace "API key" with your project's API key.
Data Format: Request: Content-Type: application/json
Response: Content-Type: application/json
Request Example
POST https://mapapi.cloud.huawei.com/mapApi/v1/routeService/driving?key=API key HTTP/1.1
Content-Type: application/json
Accept: application/json
JSON:
{
"destination":{
"lat":12.982126,
"lng":77.533103
},
"origin":{
"lat":12.9702763,
"lng":77.5373127
}
}
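As a purely illustrative sketch (the app actually serializes a DirectionRequest model class with Gson through Retrofit, as shown later), the request body above can be assembled as a raw JSON string like this:

```java
import java.util.Locale;

// Illustrative only: builds the Direction API request body shown above
// as a raw JSON string; the real app uses Retrofit + Gson model classes.
public class DirectionBodySketch {
    static String requestBody(double originLat, double originLng,
                              double destLat, double destLng) {
        // Locale.ROOT avoids locale-dependent decimal separators.
        return String.format(Locale.ROOT,
                "{\"destination\":{\"lat\":%s,\"lng\":%s},"
                        + "\"origin\":{\"lat\":%s,\"lng\":%s}}",
                destLat, destLng, originLat, originLng);
    }

    public static void main(String[] args) {
        // Sample coordinates from the request example above.
        System.out.println(requestBody(12.9702763, 77.5373127, 12.982126, 77.533103));
    }
}
```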
Response
HTTP/1.1 200 OK
Content-type: application/json
Just check it in Postman, RestClient, or any other tool that shows the response to an HTTP request.
Now let us set up the client code.
Step 1: Create Const class.
package com.huawei.navigationglove.api
object Const {
const val BASE_URL = "https://mapapi.cloud.huawei.com/mapApi/v1/"
}
Step 2: Create the ApiClient object.
Kotlin:
package com.huawei.navigationglove.api
import android.content.Context
import android.text.TextUtils
import okhttp3.Interceptor
import okhttp3.logging.HttpLoggingInterceptor
import okhttp3.OkHttpClient
import retrofit2.converter.gson.GsonConverterFactory
import com.jakewharton.retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory
import okhttp3.HttpUrl
import okhttp3.Request
import retrofit2.Retrofit
import java.util.concurrent.TimeUnit
object ApiClient {
private val retrofit: Retrofit = Retrofit.Builder()
.baseUrl(Const.BASE_URL)
.client(setInterceptors())
.addConverterFactory(GsonConverterFactory.create())
.build()
private fun setInterceptors(): OkHttpClient {
val logger = HttpLoggingInterceptor()
logger.level = HttpLoggingInterceptor.Level.BODY
return OkHttpClient.Builder()
.readTimeout(60, TimeUnit.SECONDS)
.connectTimeout(60, TimeUnit.SECONDS)
.addInterceptor { chain ->
val url: HttpUrl = chain.request().url.newBuilder()
.addQueryParameter(
"key",
"ADD_YOUR_API_KEY_HERE"
)
.build()
val request = chain.request().newBuilder()
.header("Content-Type", "application/json")
.url(url)
.build()
chain.proceed(request)
}
.addInterceptor(logger)
.build()
}
fun createApiService(): ApiService {
return ApiClient.retrofit.create(ApiService::class.java)
}
fun <S> createService(serviceClass: Class<S>?): S {
return retrofit.create(serviceClass)
}
}
Step 3: Create the ApiService interface.
Kotlin:
package com.huawei.navigationglove.api
import com.huawei.navigationglove.api.request.DirectionRequest
import com.huawei.navigationglove.api.response.DirectionResponse
import io.reactivex.Single
import retrofit2.Call
import retrofit2.http.*
interface ApiService {
@POST("routeService/{type}")
fun getDirectionsWithType(
@Path(value = "type",encoded = true) type : String,
@Body directionRequest: DirectionRequest
): Call<DirectionResponse>
}
Step 4: Create the TypeOfDirection enum to request driving, walking, or cycling routes.
package com.huawei.navigationglove.api
enum class TypeOfDirection(val type: String) {
WALKING("walking"),
BICYCLING("bicycling"),
DRIVING("driving")
}
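The enum's `type` string is what fills the `{type}` path parameter of `ApiService`, so switching the travel mode only changes the endpoint's last path segment. A minimal sketch (`endpointFor` is a hypothetical helper, not part of the app):

```kotlin
// Sketch: the enum value becomes the {type} segment that Retrofit
// substitutes into "routeService/{type}".
enum class TypeOfDirection(val type: String) {
    WALKING("walking"),
    BICYCLING("bicycling"),
    DRIVING("driving")
}

fun endpointFor(direction: TypeOfDirection): String =
    "routeService/" + direction.type

fun main() {
    // Prints routeService/walking, routeService/bicycling, routeService/driving.
    TypeOfDirection.values().forEach { println(endpointFor(it)) }
}
```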
Before creating the request and response classes, let us add a plugin that converts JSON into Kotlin data classes. Choose File > Settings > Plugins.
Step 5: Now create the request Kotlin classes.
Origin.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class Origin(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
Destination.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class Destination(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
DirectionRequest.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class DirectionRequest(@SerializedName("origin")
val origin: Origin,
@SerializedName("destination")
val destination: Destination)
Step 6: Now create the response classes, in the same way as the request classes.
DirectionResponse.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class DirectionResponse(@SerializedName("routes")
val routes: List<RoutesItem>?,
@SerializedName("returnCode")
val returnCode: String = "",
@SerializedName("returnDesc")
val returnDesc: String = "")
Bounds.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Bounds(@SerializedName("southwest")
val southwest: Southwest,
@SerializedName("northeast")
val northeast: Northeast)
EndLocation.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class EndLocation(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
Northeast.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Northeast(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
PathItems.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class PathsItem(@SerializedName("duration")
val duration: Double = 0.0,
@SerializedName("durationText")
val durationText: String = "",
@SerializedName("durationInTrafficText")
val durationInTrafficText: String = "",
@SerializedName("durationInTraffic")
val durationInTraffic: Double = 0.0,
@SerializedName("distance")
val distance: Double = 0.0,
@SerializedName("startLocation")
val startLocation: StartLocation,
@SerializedName("startAddress")
val startAddress: String = "",
@SerializedName("distanceText")
val distanceText: String = "",
@SerializedName("steps")
val steps: List<StepsItem>?,
@SerializedName("endLocation")
val endLocation: EndLocation,
@SerializedName("endAddress")
val endAddress: String = "")
PolylineItems.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class PolylineItem(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
RoutesItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class RoutesItem(@SerializedName("trafficLightNum")
val trafficLightNum: Int = 0,
@SerializedName("dstInDiffTimeZone")
val dstInDiffTimeZone: Int = 0,
@SerializedName("crossCountry")
val crossCountry: Int = 0,
@SerializedName("hasRestrictedRoad")
val hasRestrictedRoad: Int = 0,
@SerializedName("hasRoughRoad")
val hasRoughRoad: Int = 0,
@SerializedName("hasTrafficLight")
val hasTrafficLight: Int = 0,
@SerializedName("crossMultiCountries")
val crossMultiCountries: Int = 0,
@SerializedName("dstInRestrictedArea")
val dstInRestrictedArea: Int = 0,
@SerializedName("overviewPolyline")
val overviewPolyline: String = "",
@SerializedName("paths")
val paths: List<PathsItem>?,
@SerializedName("bounds")
val bounds: Bounds,
@SerializedName("hasTolls")
val hasTolls: Int = 0,
@SerializedName("hasFerry")
val hasFerry: Int = 0)
Southwest.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Southwest(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
StartLocation.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class StartLocation(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
StepsItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class StepsItem(@SerializedName("duration")
val duration: Double = 0.0,
@SerializedName("orientation")
val orientation: Int = 0,
@SerializedName("durationText")
val durationText: String = "",
@SerializedName("distance")
val distance: Double = 0.0,
@SerializedName("startLocation")
val startLocation: StartLocation,
@SerializedName("instruction")
val instruction: String = "",
@SerializedName("action")
val action: String = "",
@SerializedName("distanceText")
val distanceText: String = "",
@SerializedName("roadName")
val roadName: String = "",
@SerializedName("endLocation")
val endLocation: EndLocation,
@SerializedName("polyline")
val polyline: List<PolylineItem>?)
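Because `routes` and `paths` are nullable lists, a successful response has to be consumed with null-safe calls, as the later `getDirection` code does with `routes?.get(0)?.paths?.get(0)`. A stand-in sketch of that access pattern (`Path`, `Route`, `Direction`, and `firstPath` here are simplified illustrations, not the real classes):

```kotlin
// Simplified stand-ins mirroring the nullable response shape.
data class Path(val distanceText: String)
data class Route(val paths: List<Path>?)
data class Direction(val routes: List<Route>?, val returnCode: String = "")

// Drill into routes[0].paths[0] without risking a NullPointerException.
fun firstPath(direction: Direction): Path? =
    direction.routes?.firstOrNull()?.paths?.firstOrNull()

fun main() {
    val ok = Direction(listOf(Route(listOf(Path("1.2 km")))), "0")
    println(firstPath(ok)?.distanceText)      // a path is present
    println(firstPath(Direction(null)))       // safely yields null
}
```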
Now everything is set up; let us call the API.
If you are wondering about originReq and destinationReq, refer to my previous article: these two request objects are created when the user selects the source and destination locations.
Kotlin:
navigate.setOnClickListener {
val directionRequest = DirectionRequest(originReq!!, destinationReq!!)
getDirection(directionType, directionRequest)
}
private fun getDirection(directionType: String, directionRequest: DirectionRequest) {
ApiClient.createApiService()
.getDirectionsWithType(directionType, directionRequest)
.enqueue(object : Callback<DirectionResponse> {
override fun onFailure(call: Call<DirectionResponse>, t: Throwable) {
Toast.makeText(
this@HomeScreenActivity,
"Failure" + t.localizedMessage + "\n" + t.message,
Toast.LENGTH_SHORT
).show()
}
override fun onResponse(
call: Call<DirectionResponse>,
response: Response<DirectionResponse>
) {
if (response.isSuccessful) {
response.body()?.let {
it.routes?.get(0)?.paths?.get(0)?.let { it1 -> addPolyLines(it1) }
Toast.makeText(this@HomeScreenActivity, "Success", Toast.LENGTH_SHORT)
.show()
}
//startActivity(Intent(this@HomeScreenActivity, MapsActivity::class.java))
}
}
})
}
Adding polyline
Kotlin:
var polyLine: Polyline? = null
private fun addPolyLines(path: PathsItem) {
if (polyLine != null) {
polyLine!!.remove()
}
val options = PolylineOptions()
options.add(LatLng(path.startLocation.lat, path.startLocation.lng))
path.steps!!.forEach {
it.polyline!!.forEach { it1 ->
options.add(LatLng(it1.lat, it1.lng))
}
}
options.add(LatLng(path.endLocation.lat, path.endLocation.lng))
options.color(Color.BLACK)
options.width(6f)
polyLine = hMap!!.addPolyline(options)
}
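The core of `addPolyLines` above is flattening the nested step polylines into one ordered point list: the path's start point, then every point of every step, then the end point. A plain-Kotlin sketch of that logic, with `Point` and `Step` as stand-ins for the SDK's `LatLng` and `StepsItem` types:

```kotlin
// Stand-ins for the SDK types, so the flattening logic can run on its own.
data class Point(val lat: Double, val lng: Double)
data class Step(val polyline: List<Point>)

// Same ordering as addPolyLines: start, every step's points, then end.
fun flattenPath(start: Point, steps: List<Step>, end: Point): List<Point> {
    val points = mutableListOf(start)
    steps.forEach { step -> points.addAll(step.polyline) }
    points.add(end)
    return points
}

fun main() {
    val route = flattenPath(
        Point(12.97, 77.53),
        listOf(Step(listOf(Point(12.975, 77.534), Point(12.978, 77.537)))),
        Point(12.98, 77.54)
    )
    println(route.size)
}
```

The resulting list is what gets handed to `PolylineOptions` point by point.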
Now let us see the full code.
HomeScreenActivity.kt
Kotlin:
package com.huawei.navigationglove.ui
import android.content.Intent
import android.graphics.Color
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import android.view.View
import android.widget.Toast
import androidx.activity.result.contract.ActivityResultContracts
import com.huawei.hms.maps.*
import com.huawei.hms.maps.model.*
import com.huawei.hms.site.api.model.Site
import com.huawei.hms.site.widget.SearchIntent
import com.huawei.navigationglove.R
import com.huawei.navigationglove.api.ApiClient
import com.huawei.navigationglove.api.TypeOfDirection
import com.huawei.navigationglove.api.request.Destination
import com.huawei.navigationglove.api.request.DirectionRequest
import com.huawei.navigationglove.api.request.Origin
import com.huawei.navigationglove.api.response.DirectionResponse
import com.huawei.navigationglove.api.response.PathsItem
import kotlinx.android.synthetic.main.activity_home_screen.*
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
import java.net.URLEncoder
import android.widget.RadioButton
class HomeScreenActivity : AppCompatActivity(), OnMapReadyCallback {
private val TAG = HomeScreenActivity::class.java.name
private val API_KEY: String =
"Add your API Key here"
private var originReq: Origin? = null
private var destinationReq: Destination? = null
var hMap: HuaweiMap? = null
private val searchIntent = SearchIntent()
private var mMarker: Marker? = null
private var mCircle: Circle? = null
private var directionType: String = TypeOfDirection.WALKING.type
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_home_screen)
val mSupportMapFragment: SupportMapFragment? =
supportFragmentManager.findFragmentById(R.id.mapfragment_mapfragmentdemo) as SupportMapFragment?
mSupportMapFragment!!.getMapAsync(this)
//Site kit
val apiKey = URLEncoder.encode(
API_KEY,
"utf-8"
)
searchIntent.setApiKey(apiKey)
//Select source location
start.setOnClickListener {
//You can try the below one for older method
//selectSourceLocation()
/*val intent = searchIntent.getIntent(this)
startActivityForResult(intent, SearchIntent.SEARCH_REQUEST_CODE)*/
selectSourceLocation()
}
//Select Destination Location
destination.setOnClickListener {
selectDestinationLocation()
}
navigate.setOnClickListener {
val directionRequest = DirectionRequest(originReq!!, destinationReq!!)
getDirection(directionType, directionRequest)
}
directionTypeGroup.setOnCheckedChangeListener { group, checkedId -> // checkedId is the RadioButton selected
val directionTypeRadioButton = findViewById<View>(checkedId) as RadioButton
when (directionTypeRadioButton.text.toString().toLowerCase()) {
"walking" -> directionType = TypeOfDirection.WALKING.type
"driving" -> directionType = TypeOfDirection.DRIVING.type
"cycling" -> directionType = TypeOfDirection.BICYCLING.type
}
Toast.makeText(applicationContext, directionTypeRadioButton.text, Toast.LENGTH_SHORT)
.show()
}
}
private fun selectSourceLocation() {
val intent = searchIntent.getIntent(this)
sourceLocationLauncher.launch(intent)
}
private var sourceLocationLauncher =
registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
val data: Intent? = result.data
if (data != null) {
if (SearchIntent.isSuccess(result.resultCode)) {
val site: Site = searchIntent.getSiteFromIntent(data)
start.text = site.getName()
originReq = Origin(site.location.lng, site.location.lat)
//originReq = Origin(-4.66529, 54.216608)
//Toast.makeText(application, site.getName(), Toast.LENGTH_LONG).show()
val build = CameraPosition.Builder()
.target(LatLng(site.location.lat, site.location.lng)).zoom(16f).build()
val cameraUpdate = CameraUpdateFactory.newCameraPosition(build)
hMap!!.animateCamera(cameraUpdate)
//Setting max and min zoom
//hMap!!.setMaxZoomPreference(10f)
//hMap!!.setMinZoomPreference(1f)
// Marker can be add by HuaweiMap
mMarker = hMap!!.addMarker(
MarkerOptions().position(LatLng(site.location.lat, site.location.lng))
.icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_baseline_location_on_24))
.clusterable(true)
)
mMarker?.showInfoWindow()
// circle can be added to HuaweiMap
/*mCircle = hMap!!.addCircle(
CircleOptions().center(LatLng(28.7041, 77.1025)).radius(45000.0).fillColor(
Color.GREEN))
mCircle?.fillColor = Color.TRANSPARENT*/
}
} else {
Toast.makeText(application, "Unable to find the data", Toast.LENGTH_LONG).show()
}
}
private fun selectDestinationLocation() {
val intent = searchIntent.getIntent(this)
destinationLocationLauncher.launch(intent)
}
private var destinationLocationLauncher =
registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
val data: Intent? = result.data
if (data != null) {
if (SearchIntent.isSuccess(result.resultCode)) {
val site: Site = searchIntent.getSiteFromIntent(data)
destination.text = site.getName()
destinationReq = Destination(site.location.lng, site.location.lat)
//destinationReq = Destination(-4.66552, 54.2166)
val build = CameraPosition.Builder()
.target(LatLng(site.location.lat, site.location.lng)).zoom(16f).build()
val cameraUpdate = CameraUpdateFactory.newCameraPosition(build)
hMap!!.animateCamera(cameraUpdate)
mMarker = hMap!!.addMarker(
MarkerOptions().position(LatLng(site.location.lat, site.location.lng))
.icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_baseline_destination_location_on_24))
.clusterable(true)
)
mMarker?.showInfoWindow()
}
} else {
Toast.makeText(application, "Unable to find the data", Toast.LENGTH_LONG).show()
}
}
override fun onMapReady(huaweiMap: HuaweiMap) {
Log.d(TAG, "onMapReady: ")
hMap = huaweiMap
hMap!!.mapType = HuaweiMap.MAP_TYPE_NORMAL
hMap!!.uiSettings.isMyLocationButtonEnabled = true
// Add a polyline (mPolyline) to a map.
/* val mPolyline = hMap!!.addPolyline(
PolylineOptions().add(
LatLng(47.893478, 2.334595),
LatLng(48.993478, 3.434595),
LatLng(48.693478, 2.134595),
LatLng(48.793478, 2.334595)
)
)
// Set the color of the polyline (mPolyline) to red.
mPolyline.color = Color.RED
// Set the width of the polyline (mPolyline) to 10 pixels.
mPolyline.width = 10f*/
//Adding Polygon
/*hMap!!.addPolygon(PolygonOptions().addAll(createRectangle(LatLng(12.9716, 77.5946), 0.1, 0.1))
.fillColor(Color.GREEN)
.strokeColor(Color.BLACK))*/
//Adding Circle on the Map
/*hMap!!.addCircle(CircleOptions()
.center(LatLng(12.9716, 77.5946))
.radius(500.0)
.fillColor(Color.GREEN))*/
}
private fun createRectangle(
center: LatLng,
halfWidth: Double,
halfHeight: Double
): List<LatLng> {
return listOf(
LatLng(center.latitude - halfHeight, center.longitude - halfWidth),
LatLng(center.latitude - halfHeight, center.longitude + halfWidth),
LatLng(center.latitude + halfHeight, center.longitude + halfWidth),
LatLng(center.latitude + halfHeight, center.longitude - halfWidth)
)
}
private fun getDirection(directionType: String, directionRequest: DirectionRequest) {
ApiClient.createApiService()
.getDirectionsWithType(directionType, directionRequest)
.enqueue(object : Callback<DirectionResponse> {
override fun onFailure(call: Call<DirectionResponse>, t: Throwable) {
Toast.makeText(
this@HomeScreenActivity,
"Failure" + t.localizedMessage + "\n" + t.message,
Toast.LENGTH_SHORT
).show()
}
override fun onResponse(
call: Call<DirectionResponse>,
response: Response<DirectionResponse>
) {
if (response.isSuccessful) {
response.body()?.let {
it.routes?.get(0)?.paths?.get(0)?.let { it1 -> addPolyLines(it1) }
Toast.makeText(this@HomeScreenActivity, "Success", Toast.LENGTH_SHORT)
.show()
}
//startActivity(Intent(this@HomeScreenActivity, MapsActivity::class.java))
}
}
})
}
var polyLine: Polyline? = null
private fun addPolyLines(path: PathsItem) {
if (polyLine != null) {
polyLine!!.remove()
}
val options = PolylineOptions()
options.add(LatLng(path.startLocation.lat, path.startLocation.lng))
path.steps!!.forEach {
it.polyline!!.forEach { it1 ->
options.add(LatLng(it1.lat, it1.lng))
}
}
options.add(LatLng(path.endLocation.lat, path.endLocation.lng))
options.color(Color.BLACK)
options.width(6f)
polyLine = hMap!!.addPolyline(options)
}
}
activity_home_screen.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:map="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ui.HomeScreenActivity">
<fragment
android:id="@+id/mapfragment_mapfragmentdemo"
class="com.huawei.hms.maps.SupportMapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:cameraTargetLat="12.9716"
map:cameraTargetLng="77.5946"
map:cameraZoom="10" />
<androidx.cardview.widget.CardView
android:id="@+id/cardview"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal|top"
android:layout_marginLeft="20dp"
android:layout_marginTop="20dp"
android:layout_marginRight="20dp"
android:elevation="100dp"
app:cardBackgroundColor="@android:color/white"
app:cardCornerRadius="8dp">
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="horizontal">
<LinearLayout
android:id="@+id/locationLayout"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_weight="9.5"
android:background="@color/white"
android:orientation="vertical"
android:padding="20dp">
<TextView
android:id="@+id/start"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="1dp"
android:background="@android:color/transparent"
android:hint="Choose a starting point..."
android:maxLines="1"
android:textColor="@color/purple_200"
android:textSize="18sp" />
<View
android:layout_width="match_parent"
android:layout_height="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="50dp"
android:layout_marginBottom="5dp"
android:background="@drawable/dottet_line" />
<TextView
android:id="@+id/destination"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:hint="Choose a destination..."
android:maxLines="1"
android:textSize="18sp"
android:textColor="@color/purple_200"/>
</LinearLayout>
<ImageView
android:id="@+id/navigate"
android:layout_width="36dp"
android:layout_height="36dp"
android:layout_gravity="center"
android:layout_marginRight="10dp"
android:layout_weight="0.5"
android:src="@drawable/ic_baseline_send_24" />
</LinearLayout>
<View
android:layout_width="match_parent"
android:layout_height="5dp"
android:background="@drawable/dottet_line" />
<RadioGroup
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:gravity="center"
android:id="@+id/directionTypeGroup"
android:orientation="horizontal">
<RadioButton
android:id="@+id/walking"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:checked="true"
android:text="Walking" />
<RadioButton
android:id="@+id/driving"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:text="Driving" />
<RadioButton
android:id="@+id/cycling"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:text="Cycling" />
</RadioGroup>
</LinearLayout>
</androidx.cardview.widget.CardView>
</FrameLayout>
Result
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minimum SDK version to 21 or later; otherwise you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
6. If you want location feature in the Map, make sure you add location permission in AndroidManifest.xml file.
7. If the app is installed on Android 6 or later, make sure you handle runtime permissions.
8. Make sure you have encoded your API key with URLEncoder using UTF-8.
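The API-key encoding tip can be sketched like this; `encodeApiKey` is an illustrative wrapper around the JDK's `URLEncoder`, matching the call made in `HomeScreenActivity.onCreate`:

```kotlin
import java.net.URLEncoder

// API keys often contain '+', '/' and '=' characters, which must be
// percent-encoded before being sent as a query parameter value.
fun encodeApiKey(rawKey: String): String = URLEncoder.encode(rawKey, "UTF-8")

fun main() {
    println(encodeApiKey("a+b/c="))  // '+', '/' and '=' become %2B, %2F, %3D
}
```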
Conclusion
In this article, we have learnt how to integrate the Direction API into the Navigation Glove application using Android Studio and Kotlin, how to convert JSON into Kotlin data classes, and how to integrate Retrofit into the application.
Reference
Map Kit - Official document
Map Kit - Code lab
Map Kit - Training Video
Map Kit - Direction API

Find hand points using Hand Gesture Recognition feature by Huawei ML Kit in Android (Kotlin)

Introduction
In this article, we will learn how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return their positions. The hand gesture recognition capability can detect and return the positions of all rectangular areas of the hand from images and videos, together with the type and confidence of a gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and the number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
Use Cases
Hand keypoint detection is widely used in daily life. For example, after integrating this capability, users can convert the detected hand keypoints into a 2D model, and synchronize the model to the character model, to produce a vivid 2D animation. In addition, when shooting a short video, special effects can be generated based on dynamic hand trajectories. This allows users to play finger games, thereby making the video shooting process more creative and interactive. Hand gesture recognition enables your app to call various commands by recognizing users' gestures. Users can control their smart home appliances without touching them. In this way, this capability makes the human-machine interaction more efficient.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. Must have a laptop or desktop with Android Studio installed, with JDK 1.8, SDK platform 26, and Gradle 4.6 or later.
4. Minimum API Level 21 is required.
5. Required EMUI 9.0.0 and later version devices.
How to integrate HMS Dependencies
1. First register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Registering a Huawei ID.
2. Create a project in Android Studio; refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint: in the upper-right corner of the Android project, click Gradle, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name depends on the user created name.
5. Create an App in AppGallery Connect.
6. Download the agconnect-services.json file from App information, and copy it into the Android project's app directory, as follows.
7. Enter SHA-256 certificate fingerprint and click Save button, as follows.
Note: Steps 1 to 7 above are common to all Huawei Kits.
8. Click Manage APIs tab and enable ML Kit.
9. Add the below Maven URL in the build.gradle (Project) file under the repositories of buildscript and allprojects; refer to Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
10. Add the below plugin and dependencies in build.gradle(Module) file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.5.300'
// ML Kit Hand Gesture
// Import the base SDK
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'
11. Now sync the Gradle project.
12. Add the required permission to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
In MainActivity.kt we can find the business logic for the buttons.
Kotlin:
class MainActivity : AppCompatActivity() {
private var staticButton: Button? = null
private var liveButton: Button? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
staticButton = findViewById(R.id.btn_static)
liveButton = findViewById(R.id.btn_live)
staticButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, StaticHandKeyPointAnalyse::class.java)
startActivity(intent)
}
liveButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, LiveHandKeyPointAnalyse::class.java)
startActivity(intent)
}
}
}
In LiveHandKeyPointAnalyse.kt we can find the business logic for the live analysis.
Kotlin:
class LiveHandKeyPointAnalyse : AppCompatActivity(), View.OnClickListener {
private val TAG: String = LiveHandKeyPointAnalyse::class.java.getSimpleName()
private var mPreview: LensEnginePreview? = null
private var mOverlay: GraphicOverlay? = null
private var mFacingSwitch: Button? = null
private var mAnalyzer: MLHandKeypointAnalyzer? = null
private var mLensEngine: LensEngine? = null
private val lensType = LensEngine.BACK_LENS
private var mLensType = 0
private var isFront = false
private var isPermissionRequested = false
private val CAMERA_PERMISSION_CODE = 0
private val ALL_PERMISSION = arrayOf(Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_live_hand_key_point_analyse)
if (savedInstanceState != null) {
mLensType = savedInstanceState.getInt("lensType")
}
initView()
createHandAnalyzer()
if (Camera.getNumberOfCameras() == 1) {
mFacingSwitch!!.visibility = View.GONE
}
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else {
checkPermission()
}
}
private fun initView() {
mPreview = findViewById(R.id.hand_preview)
mOverlay = findViewById(R.id.hand_overlay)
mFacingSwitch = findViewById(R.id.handswitch)
mFacingSwitch!!.setOnClickListener(this)
}
private fun createHandAnalyzer() {
// Create an analyzer. You can create an analyzer using the provided customized hand keypoint detection parameter: MLHandKeypointAnalyzerSetting
val setting = MLHandKeypointAnalyzerSetting.Factory()
.setMaxHandResults(2)
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
mAnalyzer!!.setTransactor(HandAnalyzerTransactor(this, mOverlay!!) )
}
// Check the permissions required by the SDK.
private fun checkPermission() {
if (Build.VERSION.SDK_INT >= 23 && !isPermissionRequested) {
isPermissionRequested = true
val permissionsList = ArrayList<String>()
for (perm in getAllPermission()!!) {
if (PackageManager.PERMISSION_GRANTED != checkSelfPermission(perm.toString())) {
permissionsList.add(perm.toString())
}
}
if (!permissionsList.isEmpty()) {
requestPermissions(permissionsList.toTypedArray(), 0)
}
}
}
private fun getAllPermission(): MutableList<Array<String>> {
return Collections.unmodifiableList(listOf(ALL_PERMISSION))
}
private fun createLensEngine() {
val context = this.applicationContext
// Create LensEngine.
mLensEngine = LensEngine.Creator(context, mAnalyzer)
.setLensType(mLensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create()
}
private fun startLensEngine() {
if (mLensEngine != null) {
try {
mPreview!!.start(mLensEngine, mOverlay)
} catch (e: IOException) {
Log.e(TAG, "Failed to start lens engine.", e)
mLensEngine!!.release()
mLensEngine = null
}
}
}
// Permission application callback.
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
var hasAllGranted = true
if (requestCode == CAMERA_PERMISSION_CODE) {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else if (grantResults[0] == PackageManager.PERMISSION_DENIED) {
hasAllGranted = false
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, permissions[0]!!)) {
showWaringDialog()
} else {
Toast.makeText(this, R.string.toast, Toast.LENGTH_SHORT).show()
finish()
}
}
return
}
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
}
override fun onSaveInstanceState(outState: Bundle) {
outState.putInt("lensType", lensType)
super.onSaveInstanceState(outState)
}
private class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyse?,
private val mGraphicOverlay: GraphicOverlay) : MLTransactor<MLHandKeypoints?> {
// Process the results returned by the analyzer.
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints?>) {
mGraphicOverlay.clear()
val handKeypointsSparseArray = result.analyseList
val list: MutableList<MLHandKeypoints?> = ArrayList()
for (i in 0 until handKeypointsSparseArray.size()) {
list.add(handKeypointsSparseArray.valueAt(i))
}
val graphic = HandKeypointGraphic(mGraphicOverlay, list)
mGraphicOverlay.add(graphic)
}
override fun destroy() {
mGraphicOverlay.clear()
}
}
override fun onClick(v: View?) {
when (v!!.id) {
R.id.handswitch -> switchCamera()
else -> {}
}
}
private fun switchCamera() {
isFront = !isFront
mLensType = if (isFront) {
LensEngine.FRONT_LENS
} else {
LensEngine.BACK_LENS
}
if (mLensEngine != null) {
mLensEngine!!.close()
}
createLensEngine()
startLensEngine()
}
private fun showWaringDialog() {
val dialog = AlertDialog.Builder(this)
dialog.setMessage(R.string.Information_permission)
.setPositiveButton(R.string.go_authorization,
DialogInterface.OnClickListener { dialog, which ->
val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
val uri = Uri.fromParts("package", applicationContext.packageName, null)
intent.data = uri
startActivity(intent)
})
.setNegativeButton("Cancel", DialogInterface.OnClickListener { dialog, which -> finish() })
.setOnCancelListener(dialogInterface)
dialog.setCancelable(false)
dialog.show()
}
var dialogInterface = DialogInterface.OnCancelListener { }
override fun onResume() {
super.onResume()
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
startLensEngine()
} else {
checkPermission()
}
}
override fun onPause() {
super.onPause()
mPreview!!.stop()
}
override fun onDestroy() {
super.onDestroy()
if (mLensEngine != null) {
mLensEngine!!.release()
}
if (mAnalyzer != null) {
mAnalyzer!!.stop()
}
}
}
Create the LensEnginePreview.kt class, which contains the business logic for the lens engine preview.
Kotlin:
class LensEnginePreview(private val mContext: Context, attrs: AttributeSet?) : ViewGroup(mContext, attrs) {
private val mSurfaceView: SurfaceView
private var mStartRequested = false
private var mSurfaceAvailable = false
private var mLensEngine: LensEngine? = null
private var mOverlay: GraphicOverlay? = null
@Throws(IOException::class)
fun start(lensEngine: LensEngine?) {
if (lensEngine == null) {
stop()
}
mLensEngine = lensEngine
if (mLensEngine != null) {
mStartRequested = true
startIfReady()
}
}
@Throws(IOException::class)
fun start(lensEngine: LensEngine?, overlay: GraphicOverlay?) {
mOverlay = overlay
this.start(lensEngine)
}
fun stop() {
if (mLensEngine != null) {
mLensEngine!!.close()
}
}
@Throws(IOException::class)
private fun startIfReady() {
if (mStartRequested && mSurfaceAvailable) {
mLensEngine!!.run(mSurfaceView.holder)
if (mOverlay != null) {
val size = mLensEngine!!.displayDimension
val min = Math.min(size.width, size.height)
val max = Math.max(size.width, size.height)
if (isPortraitMode) {
// Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
mOverlay!!.setCameraInfo(min, max, mLensEngine!!.lensType)
} else {
mOverlay!!.setCameraInfo(max, min, mLensEngine!!.lensType)
}
mOverlay!!.clear()
}
mStartRequested = false
}
}
private inner class SurfaceCallback : SurfaceHolder.Callback {
override fun surfaceCreated(surface: SurfaceHolder) {
mSurfaceAvailable = true
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
override fun surfaceDestroyed(surface: SurfaceHolder) {
mSurfaceAvailable = false
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
}
override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
var previewWidth = 480
var previewHeight = 360
if (mLensEngine != null) {
val size = mLensEngine!!.displayDimension
if (size != null) {
previewWidth = size.width
previewHeight = size.height
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (isPortraitMode) {
val tmp = previewWidth
previewWidth = previewHeight
previewHeight = tmp
}
val viewWidth = right - left
val viewHeight = bottom - top
val childWidth: Int
val childHeight: Int
var childXOffset = 0
var childYOffset = 0
val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
// To fill the view with the camera preview, while also preserving the correct aspect ratio,
// it is usually necessary to slightly oversize the child and to crop off portions along one
// of the dimensions. We scale up based on the dimension requiring the most correction, and
// compute a crop offset for the other dimension.
if (widthRatio > heightRatio) {
childWidth = viewWidth
childHeight = (previewHeight.toFloat() * widthRatio).toInt()
childYOffset = (childHeight - viewHeight) / 2
} else {
childWidth = (previewWidth.toFloat() * heightRatio).toInt()
childHeight = viewHeight
childXOffset = (childWidth - viewWidth) / 2
}
for (i in 0 until this.childCount) {
// One dimension will be cropped. We shift child over or up by this offset and adjust
// the size to maintain the proper aspect ratio.
getChildAt(i).layout(-childXOffset, -childYOffset,
childWidth - childXOffset, childHeight - childYOffset)
}
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
private val isPortraitMode: Boolean
get() {
val orientation = mContext.resources.configuration.orientation
if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
return false
}
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
return true
}
Log.d(TAG, "isPortraitMode returning false by default")
return false
}
companion object {
private val TAG = LensEnginePreview::class.java.simpleName
}
init {
mSurfaceView = SurfaceView(mContext)
mSurfaceView.holder.addCallback(SurfaceCallback())
this.addView(mSurfaceView)
}
}
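The oversize-and-crop computation in `onLayout` above can be checked in isolation. Below is a hypothetical standalone sketch of the same math (the function name and the returned `(width, height, xOffset, yOffset)` convention are mine, not part of the SDK):

```kotlin
// Scale the preview so it fills the view completely, then centre-crop the
// overflow on whichever axis over-shoots. Mirrors the onLayout logic above.
fun computeChildLayout(viewWidth: Int, viewHeight: Int,
                       previewWidth: Int, previewHeight: Int): IntArray {
    val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
    val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
    return if (widthRatio > heightRatio) {
        // Width needs more scaling: match the view width, crop top/bottom.
        val childHeight = (previewHeight * widthRatio).toInt()
        intArrayOf(viewWidth, childHeight, 0, (childHeight - viewHeight) / 2)
    } else {
        // Height needs more scaling: match the view height, crop left/right.
        val childWidth = (previewWidth * heightRatio).toInt()
        intArrayOf(childWidth, viewHeight, (childWidth - viewWidth) / 2, 0)
    }
}

fun main() {
    // A 1080x1920 portrait view showing a preview rotated to 360x480:
    val r = computeChildLayout(1080, 1920, 360, 480)
    println(r.joinToString())  // prints: 1440, 1920, 180, 0
}
```

For the portrait case above, the height ratio (4.0) beats the width ratio (3.0), so the child is scaled to 1440x1920 and 180 px is cropped off each side.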
Create the HandKeypointGraphic.kt class, which contains the drawing logic for hand key points.
Kotlin:
class HandKeypointGraphic(overlay: GraphicOverlay?, private val handKeypoints: MutableList<MLHandKeypoints?>) : GraphicOverlay.Graphic(overlay!!) {
private val rectPaint: Paint
private val idPaintnew: Paint
companion object {
private const val BOX_STROKE_WIDTH = 5.0f
}
private fun translateRect(rect: Rect): Rect {
var left: Float = translateX(rect.left)
var right: Float = translateX(rect.right)
var bottom: Float = translateY(rect.bottom)
var top: Float = translateY(rect.top)
if (left > right) {
val size = left
left = right
right = size
}
if (bottom < top) {
val size = bottom
bottom = top
top = size
}
return Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
}
init {
val selectedColor = Color.WHITE
idPaintnew = Paint()
idPaintnew.color = Color.GREEN
idPaintnew.textSize = 32f
rectPaint = Paint()
rectPaint.color = selectedColor
rectPaint.style = Paint.Style.STROKE
rectPaint.strokeWidth = BOX_STROKE_WIDTH
}
override fun draw(canvas: Canvas?) {
for (i in handKeypoints.indices) {
val mHandKeypoints = handKeypoints[i]
if (mHandKeypoints!!.getHandKeypoints() == null) {
continue
}
val rect = translateRect(handKeypoints[i]!!.getRect())
canvas!!.drawRect(rect, rectPaint)
for (handKeypoint in mHandKeypoints.getHandKeypoints()) {
if (!(Math.abs(handKeypoint.getPointX() - 0f) == 0f && Math.abs(handKeypoint.getPointY() - 0f) == 0f)) {
canvas!!.drawCircle(translateX(handKeypoint.getPointX().toInt()),
translateY(handKeypoint.getPointY().toInt()), 24f, idPaintnew)
}
}
}
}
}
Create the GraphicOverlay.kt class, which contains the business logic for the graphic overlay.
Kotlin:
class GraphicOverlay(context: Context?, attrs: AttributeSet?) : View(context, attrs) {
private val mLock = Any()
private var mPreviewWidth = 0
private var mWidthScaleFactor = 1.0f
private var mPreviewHeight = 0
private var mHeightScaleFactor = 1.0f
private var mFacing = LensEngine.BACK_LENS
private val mGraphics: MutableSet<Graphic> = HashSet()
// Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
// this and implement the [Graphic.draw] method to define the graphics element. Add instances to the overlay using [GraphicOverlay.add].
abstract class Graphic(private val mOverlay: GraphicOverlay) {
// Draw the graphic on the supplied canvas. Drawing should use the following methods to
// convert to view coordinates for the graphics that are drawn:
// 1. [Graphic.scaleX] and [Graphic.scaleY] adjust the size of the supplied value from the preview scale to the view scale.
// 2. [Graphic.translateX] and [Graphic.translateY] adjust the coordinate from the preview's coordinate system to the view coordinate system.
// @param canvas drawing canvas
abstract fun draw(canvas: Canvas?)
// Adjusts a horizontal value of the supplied value from the preview scale to the view scale.
fun scaleX(horizontal: Float): Float {
return horizontal * mOverlay.mWidthScaleFactor
}
// Adjusts a vertical value of the supplied value from the preview scale to the view scale.
fun scaleY(vertical: Float): Float {
return vertical * mOverlay.mHeightScaleFactor
}
// Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
fun translateX(x: Int): Float {
return if (mOverlay.mFacing == LensEngine.FRONT_LENS) {
mOverlay.width - scaleX(x.toFloat())
} else {
scaleX(x.toFloat())
}
}
// Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
fun translateY(y: Int): Float {
return scaleY(y.toFloat())
}
}
// Removes all graphics from the overlay.
fun clear() {
synchronized(mLock) { mGraphics.clear() }
postInvalidate()
}
// Adds a graphic to the overlay.
fun add(graphic: Graphic) {
synchronized(mLock) { mGraphics.add(graphic) }
postInvalidate()
}
// Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
fun setCameraInfo(previewWidth: Int, previewHeight: Int, facing: Int) {
synchronized(mLock) {
mPreviewWidth = previewWidth
mPreviewHeight = previewHeight
mFacing = facing
}
postInvalidate()
}
// Draws the overlay with its associated graphic objects.
override fun onDraw(canvas: Canvas) {
super.onDraw(canvas)
synchronized(mLock) {
if (mPreviewWidth != 0 && mPreviewHeight != 0) {
mWidthScaleFactor = canvas.width.toFloat() / mPreviewWidth.toFloat()
mHeightScaleFactor = canvas.height.toFloat() / mPreviewHeight.toFloat()
}
for (graphic in mGraphics) {
graphic.draw(canvas)
}
}
}
}
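The overlay's coordinate translation boils down to per-axis scaling from preview space to view space, plus a horizontal mirror when the front lens is active (the front camera delivers a mirror image of the scene). A hypothetical standalone sketch of that arithmetic — the class and the lens constants here are illustrative placeholders, not the HMS types:

```kotlin
// Illustrative lens-facing constants (placeholders for LensEngine's values).
const val BACK_LENS = 0
const val FRONT_LENS = 1

// Minimal stand-in for the scaling/translation done by GraphicOverlay.Graphic.
class CameraInfo(previewW: Int, previewH: Int,
                 private val viewW: Int, viewH: Int,
                 private val facing: Int) {
    private val sx = viewW.toFloat() / previewW
    private val sy = viewH.toFloat() / previewH

    // Scale x, mirroring around the view width for the front lens.
    fun translateX(x: Int): Float =
        if (facing == FRONT_LENS) viewW - x * sx else x * sx

    // y is only scaled; no mirroring is needed vertically.
    fun translateY(y: Int): Float = y * sy
}

fun main() {
    val back = CameraInfo(480, 360, 960, 720, BACK_LENS)
    println(back.translateX(100))   // prints: 200.0 (plain 2x scale)
    val front = CameraInfo(480, 360, 960, 720, FRONT_LENS)
    println(front.translateX(100))  // prints: 760.0 (mirrored: 960 - 200)
}
```

This mirroring is why a key point detected on the user's right hand still lands over the right hand on screen when the front camera preview is shown un-flipped.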
The StaticHandKeyPointAnalyse.kt class contains the business logic for static hand key point analysis.
Kotlin:
class StaticHandKeyPointAnalyse : AppCompatActivity() {
var analyzer: MLHandKeypointAnalyzer? = null
var bitmap: Bitmap? = null
var mutableBitmap: Bitmap? = null
var mlFrame: MLFrame? = null
var imageSelected: ImageView? = null
var picUri: Uri? = null
var pickButton: Button? = null
var analyzeButton:Button? = null
var permissions = arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_static_hand_key_point_analyse)
pickButton = findViewById(R.id.pick_img)
analyzeButton = findViewById(R.id.analyse_img)
imageSelected = findViewById(R.id.selected_img)
initialiseSettings()
pickButton!!.setOnClickListener(View.OnClickListener {
pickRequiredImage()
})
analyzeButton!!.setOnClickListener(View.OnClickListener {
asynchronouslyStaticHandkey()
})
checkRequiredPermission()
}
private fun checkRequiredPermission() {
if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, 111)
}
}
private fun initialiseSettings() {
val setting = MLHandKeypointAnalyzerSetting.Factory() // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
// MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
// MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL) // Set the maximum number of hand regions that can be detected in an image. By default, a maximum of 10 hand regions can be detected.
.setMaxHandResults(1)
.create()
analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
}
private fun asynchronouslyStaticHandkey() {
val task = analyzer!!.asyncAnalyseFrame(mlFrame)
task.addOnSuccessListener { results ->
val canvas = Canvas(mutableBitmap!!)
val paint = Paint()
paint.color = Color.GREEN
paint.style = Paint.Style.FILL
val mlHandKeypoints = results[0]
for (mlHandKeypoint in mlHandKeypoints.getHandKeypoints()) {
canvas.drawCircle(mlHandKeypoint.pointX, mlHandKeypoint.pointY, 48f, paint)
}
imageSelected!!.setImageBitmap(mutableBitmap)
checkAnalyserForStop()
}.addOnFailureListener { // Detection failure.
checkAnalyserForStop()
}
}
private fun checkAnalyserForStop() {
if (analyzer != null) {
analyzer!!.stop()
}
}
private fun pickRequiredImage() {
val intent = Intent()
intent.type = "image/*"
intent.action = Intent.ACTION_PICK
startActivityForResult(Intent.createChooser(intent, "Select Picture"), 20)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == 20 && resultCode == RESULT_OK && null != data) {
picUri = data.data
val filePathColumn = arrayOf(MediaStore.Images.Media.DATA)
val cursor = contentResolver.query( picUri!!, filePathColumn, null, null, null)
cursor!!.moveToFirst()
cursor.close()
imageSelected!!.setImageURI(picUri)
imageSelected!!.invalidate()
val drawable = imageSelected!!.drawable as BitmapDrawable
bitmap = drawable.bitmap
mutableBitmap = bitmap!!.copy(Bitmap.Config.ARGB_8888, true)
mlFrame = null
mlFrame = MLFrame.fromBitmap(bitmap)
}
}
}
In activity_main.xml, create the main UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<Button
android:id="@+id/btn_static"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Static Detection"
android:textAllCaps="false"
android:textSize="18sp"
app:layout_constraintBottom_toTopOf="@+id/btn_live"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintTop_toTopOf="parent" />
<Button
android:id="@+id/btn_live"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Live Detection"
android:textAllCaps="false"
android:textSize="18sp"
android:layout_marginBottom="150dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintRight_toRightOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
In activity_live_hand_key_point_analyse.xml, create the UI screen for live detection.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".LiveHandKeyPointAnalyse">
<com.example.mlhandgesturesample.LensEnginePreview
android:id="@+id/hand_preview"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:ignore="MissingClass">
<com.example.mlhandgesturesample.GraphicOverlay
android:id="@+id/hand_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</com.example.mlhandgesturesample.LensEnginePreview>
<Button
android:id="@+id/handswitch"
android:layout_width="35dp"
android:layout_height="35dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
android:layout_marginBottom="35dp"
android:background="@drawable/front_back_switch"
android:textOff=""
android:textOn=""
tools:ignore="MissingConstraints" />
</androidx.constraintlayout.widget.ConstraintLayout>
In activity_static_hand_key_point_analyse.xml, create the UI screen for static detection.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".StaticHandKeyPointAnalyse">
<com.google.android.material.button.MaterialButton
android:id="@+id/pick_img"
android:text="Pick Image"
android:textSize="18sp"
android:textColor="@android:color/black"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAllCaps="false"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
<ImageView
android:visibility="visible"
android:id="@+id/selected_img"
android:layout_width="350dp"
android:layout_height="350dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<com.google.android.material.button.MaterialButton
android:id="@+id/analyse_img"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textColor="@android:color/black"
android:text="Analyse"
android:textSize="18sp"
android:textAllCaps="false"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintTop_toBottomOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
</androidx.constraintlayout.widget.ConstraintLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
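Regarding tip 5, the hand keypoint capability is pulled in through an app-level Gradle dependency. A sketch in Gradle Kotlin DSL — the artifact coordinate and version number here are assumptions for illustration; check the current names in the HMS documentation:

```kotlin
dependencies {
    // HMS ML Kit hand keypoint detection (version is illustrative;
    // use the latest one listed in the Huawei developer documentation).
    implementation("com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300")
}
```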
Conclusion
In this article, we have learned how to detect hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. The service provides two capabilities: hand keypoint detection and hand gesture recognition. Hand keypoint detection can detect 21 hand key points (including fingertips, knuckles, and wrists) and return their positions. Hand gesture recognition can detect and return the positions of all rectangular hand regions in images and videos, along with the type and confidence of each gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and the number gestures 1 to 9. Both capabilities support detection from static images and real-time camera streams.
I hope you found this article helpful. If so, please leave likes and comments.
Reference
ML Kit – Hand Gesture Recognition
ML Kit – Training Video
