How to Integrate Huawei Map Kit JavaScript API to Cross-Platforms - Huawei Developers

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202330537081990041&fid=0101187876626530001
Article Introduction
In this article, we will introduce the HUAWEI Map Kit JavaScript API. Next, we will implement a HUAWEI Map in an Ionic/Cordova project. Lastly, we will integrate the HUAWEI Map Kit JavaScript API into a native application.
Technology Introduction
HUAWEI Map Kit provides JavaScript APIs for you to easily build map apps applicable to browsers.
It provides the basic map display, map interaction, route planning, place search, geocoding, and other functions to meet requirements of most developers.
Restriction
Before using the service, you need to apply for an API key on the HUAWEI Developers website. For details, please refer to "Creating an API Key" in API Console Operation Guide. To enhance the API key security, you are advised to restrict the API key. You can configure restrictions by app and API on the API key editing page.
Generating API Key
Go to HMS API Services > Credentials and click Create credential.
Click API key to generate new API Key.
In the dialog box that is displayed, click Restrict to set restrictions on the key to prevent unauthorized use or quota theft. This step is optional.
The restrictions include app restrictions and API restrictions.
App restrictions: control which websites or apps can use your key. Set up to one app restriction per key.
API restrictions: specify the enabled APIs that this key can call.
After you set up the app and API restrictions, the API key is generated.
The API key is successfully created. Copy it to use in your project.
Huawei Web Map API introduction
1. Make a Basic Map
Code:
function loadMapScript() {
const apiKey = encodeURIComponent(
"API_KEY"
);
const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
const mapScript = document.createElement("script");
mapScript.setAttribute("src", src);
document.head.appendChild(mapScript);
}
function initMap() {
const mapOptions = {};
mapOptions.center = { lat: 48.856613, lng: 2.352222 };
mapOptions.zoom = 8;
mapOptions.language = "ENG";
const map = new HWMapJsSDK.HWMap(
document.getElementById("map"),
mapOptions
);
}
loadMapScript();
Note: Please replace API_KEY with the key you have generated. In the script URL we declare a callback function, initMap, which is invoked automatically once the Huawei Map API has loaded successfully.
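The key must be URL-encoded before it is placed in the query string, since keys issued by the console can contain characters such as "+" and "/". As a small sketch (the helper name is our own, not part of the Map Kit API), the script URL from the snippet above can be built like this:

```javascript
// Sketch: build the Map Kit script URL with a URL-encoded key.
// buildMapScriptUrl is our own helper, not part of the HMS SDK.
function buildMapScriptUrl(apiKey, callbackName) {
  const base = "https://mapapi.cloud.huawei.com/mapjs/v1/api/js";
  return base + "?callback=" + encodeURIComponent(callbackName) +
         "&key=" + encodeURIComponent(apiKey);
}

// '+' and '/' must be percent-encoded, otherwise the key is corrupted in transit.
console.log(buildMapScriptUrl("abc+def/ghi", "initMap"));
// → https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=abc%2Bdef%2Fghi
```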
2. Map Interactions
Map Controls
Code:
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 10;
scaleControl
Code:
mapOptions.scaleControl = true; // Set to display the scale.
mapOptions.scaleControlOptions = {
units: "imperial" // Set the scale unit to inch.
};
zoomSlider
Code:
mapOptions.zoomSlider = true ; // Set to display the zoom slider.
zoomControl
Code:
mapOptions.zoomControl = false; // Set not to display the zoom button.
rotateControl (Manage Compass)
Code:
mapOptions.rotateControl = true; // Set to display the compass.
navigationControl
Code:
mapOptions.navigationControl = true; // Set to display the pan button.
copyrightControl
Code:
mapOptions.copyrightControl = true; // Set to display the copyright information.
mapOptions.copyrightControlOptions = {value: "HUAWEI",} // Set the copyright information.
locationControl
Code:
mapOptions.locationControl= true; // Set to display the current location.
Camera
Map moving: You can call the map.panTo(latLng)
Map shift: You can call the map.panBy(x, y)
Zoom: You can use the map.setZoom(zoom) method to set the zoom level of a map.
Area control: You can use map.fitBounds(bounds) to set the map display scope.
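The four camera calls above can be sketched together as follows. The function name and the coordinate values are our own, and the bounds object shape is assumed to match the ground-overlay example later in this article; the snippet expects an HWMap instance created as in the earlier samples:

```javascript
// Sketch only: exercises the four camera methods listed above on an
// existing HWMap instance passed in as `map`.
function demoCamera(map) {
  map.panTo({ lat: 48.8566, lng: 2.3522 }); // move the center to a coordinate
  map.panBy(100, 0);                        // shift the view 100 px to the right
  map.setZoom(12);                          // jump to zoom level 12
  map.fitBounds({                           // fit the view to a bounding box
    north: 49, south: 48.5, east: 2.5, west: 1.5
  });
}
```

In a page, you would call `demoCamera(map)` after the map has been created in initMap().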
Map Events
Map click event:
Code:
map.on('click', () => {
map.zoomIn();
});
Map center change event:
Code:
map.onCenterChanged(centerChangePost);
function centerChangePost() {
    var center = map.getCenter();
    alert('Lng:' + center.lng + '\n' + 'Lat:' + center.lat);
}
Map heading change event:
Code:
map.onHeadingChanged(headingChangePost);
function headingChangePost() {
alert('Heading Changed!');
}
Map zoom level change event:
Code:
map.onZoomChanged(zoomChangePost);
function zoomChangePost() {
alert('Zoom Changed!')
}
3. Drawing on Map
Marker:
You can add markers to a map to identify locations such as stores and buildings, and provide additional information with information windows.
Code:
var map;
var mMarker;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: {lat: 48.85, lng: 2.35},
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
}
Marker Result:
Marker Clustering:
The HMS Core Map SDK allows you to cluster markers to effectively manage them on the map at different zoom levels. When a user zooms in on the map to a high level, all markers are displayed on the map. When the user zooms out, the markers are clustered on the map for orderly display.
Code:
var map;
var markers = [];
var markerCluster;
var locations = [
{lat: 51.5145160, lng: -0.1270060},
{ lat : 51.5064490, lng : -0.1244260 },
{ lat : 51.5097080, lng : -0.1200450 },
{ lat : 51.5090680, lng : -0.1421420 },
{ lat : 51.4976080, lng : -0.1456320 },
···
{ lat : 51.5061590, lng : -0.140280 },
{ lat : 51.5047420, lng : -0.1470490 },
{ lat : 51.5126760, lng : -0.1189760 },
{ lat : 51.5108480, lng : -0.1208480 }
];
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 3;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
generateMarkers(locations);
markerCluster = new HWMapJsSDK.HWMarkerCluster(map, markers);
}
function generateMarkers(locations) {
for (let i = 0; i < locations.length; i++) {
var opts = {
position: locations[i]
};
markers.push(new HWMapJsSDK.HWMarker(opts));
}
}
Cluster markers Result:
Information Window:
The HMS Core Map SDK supports the display of information windows on the map. There are two types of information windows: One is to display text or image independently, and the other is to display text or image in a popup above a marker. The information window provides details about a marker.
Code:
var infoWindow;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
var map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: {lat: 48.856613, lng: 2.352222},
content: 'This is to show mouse event of another marker',
offset: [0, -40],
});
}
Info window Result:
Ground Overlay
The builder function of GroundOverlay uses the URL, LatLngBounds, and GroundOverlayOptions of an image as the parameters to display the image in a specified area on the map. The sample code is as follows:
Code:
var map;
var mGroundOverlay;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
var imageBounds = {
north: 49,
south: 48.5,
east: 2.5,
west: 1.5,
};
mGroundOverlay = new HWMapJsSDK.HWGroundOverlay(
// Path to a local image or URL of an image.
'huawei_logo.png',
imageBounds,
{
map: map,
opacity: 1,
zIndex: 1
}
);
}
Ground overlay Result:
Ionic / Cordova Map Implementation
In this part of the article, we will add the Huawei Map JavaScript API to an Ionic/Cordova project.
Update index.html to implement the Huawei Map JS scripts:
You need to update src/index.html and include the Huawei Map JavaScript cloud script URL.
Code:
function loadMapScript() {
const apiKey = encodeURIComponent(
"API_KEY"
);
const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
const mapScript = document.createElement("script");
mapScript.setAttribute("src", src);
document.head.appendChild(mapScript);
}
function initMap() { }
loadMapScript();
Create a new map page:
Code:
ionic g page maps
Update the maps.page.ts file with the following TypeScript:
Code:
import { Component, OnInit, ChangeDetectorRef } from "@angular/core";
import { Observable } from "rxjs";
declare var HWMapJsSDK: any;
declare var cordova: any;
@Component({
selector: "app-maps",
templateUrl: "./maps.page.html",
styleUrls: ["./maps.page.scss"],
})
export class MapsPage implements OnInit {
map: any;
baseLat = 24.713552;
baseLng = 46.675297;
ngOnInit() {
this.showMap(this.baseLat, this.baseLng);
}
ionViewWillEnter() {
}
ionViewDidEnter() {
}
showMap(lat = this.baseLat, lng = this.baseLng) {
const mapOptions: any = {};
mapOptions.center = { lat: lat, lng: lng };
mapOptions.zoom = 10;
mapOptions.language = "ENG";
this.map = new HWMapJsSDK.HWMap(document.getElementById("map"), mapOptions);
this.map.setCenter({ lat: lat, lng: lng });
}
}
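The TypeScript above looks up an element with the id "map", so the page template must provide it. A minimal maps.page.html sketch (the toolbar title and inline styling are our own assumptions):

```html
<ion-header>
  <ion-toolbar>
    <ion-title>Huawei Map</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <!-- The HWMap constructor in maps.page.ts targets this element by id. -->
  <div id="map" style="width: 100%; height: 100%;"></div>
</ion-content>
```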
Ionic / Cordova App Result:
Native Application Huawei JS API Implementation
In this part of the article, we will load a JavaScript-based Huawei Map HTML page into a native application through a WebView. This approach is helpful for developers who need a very minimal map implementation.
Create an assets/www/map.html file.
Add the following script code to the map.html file:
Code:
var map;
var mMarker;
var infoWindow;
function initMap() {
const LatLng = { lat: 24.713552, lng: 46.675297 };
const mapOptions = {};
mapOptions.center = LatLng;
mapOptions.zoom = 10;
mapOptions.scaleControl = true;
mapOptions.locationControl= true;
mapOptions.language = "ENG";
map = new HWMapJsSDK.HWMap(
document.getElementById("map"),
mapOptions
);
map.setCenter(LatLng);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: LatLng,
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
mMarker.addListener('click', () => {
infoWindow.open();
});
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: LatLng,
content: 'This is to info window of marker',
offset: [0, -40],
});
infoWindow.close();
}
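Only the script content is shown above; the page itself also needs a map container and the Map Kit script include. A minimal skeleton for map.html could look like this (the styling is our own assumption; the callback script is placed last so that initMap already exists when the Map Kit loads):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <style>html, body, #map { width: 100%; height: 100%; margin: 0; }</style>
  </head>
  <body>
    <div id="map"></div>
    <script>
      /* ... paste the initMap() script shown above here, so the
         callback is defined before the Map Kit script loads ... */
    </script>
    <!-- Replace API_KEY with the key generated earlier. -->
    <script src="https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=API_KEY"></script>
  </body>
</html>
```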
Add a WebView to your layout:
Code:
<WebView
    android:id="@+id/webView_map"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
Update your Activity class to load the HTML file:
Code:
class MainActivity : AppCompatActivity() {
lateinit var context: Context
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
context = this
val mWebview = findViewById<WebView>(R.id.webView_map)
mWebview.webChromeClient = WebChromeClient()
mWebview.webViewClient = WebViewClient()
mWebview.settings.javaScriptEnabled = true
mWebview.settings.setAppCacheEnabled(true)
mWebview.settings.mediaPlaybackRequiresUserGesture = true
mWebview.settings.domStorageEnabled = true
mWebview.loadUrl("file:///android_asset/www/map.html")
}
}
Internet permission:
Don’t forget to add the Internet permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Native app Result:
References:
Huawei Map JavaScript API:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/javascript-api-introduction-0000001050164063
Complete Ionic JS Map Project:
https://github.com/salmanyaqoob/Ionic-All-HMS-Kits
Conclusion
The Huawei Map JavaScript API helps JavaScript developers implement Huawei Maps on cross-platform frameworks such as Cordova, Ionic, and React Native, and also helps native developers integrate maps into their projects. Developers can also use it to add Huawei Maps to websites.


Related

Cordova HMS Location Kit | Installation and Example

Introduction
This article covers how to integrate the Cordova HMS Location Kit into a Cordova application.
The Cordova HMS Location Kit supports the services listed below:
Fused Location Service
Activity Identification Service
Geofence Service
These services have a number of use cases; you can combine them or use them individually to build different functionality in your app. For a basic understanding, please read the use cases from here.
Prerequisites
Step 1
Prepare your development environment using this guide.
After reading this guide you should have the Cordova development environment set up, HMS Core (APK) installed, and the Android SDK installed.
Step 2
Configure your app information in App Gallery by following this guide.
After reading this guide you should have a Huawei developer account, an AppGallery app, a keystore file, and the Location Kit service enabled in AppGallery.
Integrating Cordova Hms Location Kit
To download or use HUAWEI Location Kit, you must agree to Huawei Developer Service Agreement, HUAWEI APIs Use Agreement, and AppGallery Connect Data Processing Addendum. You understand and undertake that downloading or using said HUAWEI Location Kit shall be deemed that you have agreed to all the preceding agreements, and you will fulfill and assume the legal responsibilities and obligations in accordance with said agreements.
Warning : Please make sure that, prerequisites part successfully completed.
Step 1
Add Android platform to your project if you haven’t yet.
Code:
cordova platform add android
Step 2
Check Cordova requirements with following command.
Code:
cordova requirements
Step 3
You can either install the plugin through npm or download it from the Cordova Location Plugin downloads page. Run the following command in the root directory of your Cordova project to install it through npm.
Code:
cordova plugin add @hms-core/cordova-plugin-hms-location
Run the following command in the root directory of your Cordova project to install it manually after downloading the plugin.
Code:
cordova plugin add <CORDOVA_PLUGIN_PATH>
Step 4
Check whether the Cordova Location Plugin is successfully added to “plugins” folder in the root directory of the Cordova project.
Step 5
Open build.gradle file in project-dir > android folder.
Go to buildscript > repositories and allprojects > repositories, and configure the Maven repository address.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add dependency configurations.
Code:
buildscript {
dependencies {
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
}
Step 6
Open build.gradle file which is located under project.dir > android > app directory.
Add the AppGallery Connect plug-in dependency to the file header.
The apply plugin: ‘com.huawei.agconnect’ configuration must be added after the apply plugin: ‘com.android.application’ configuration.
The minimum Android API level (minSdkVersion) required for Location Kit is 19.
Configure build dependencies of your project.
Code:
dependencies {
...
implementation 'com.huawei.agconnect:agconnect-core:1.0.0.301'
}
Create An Application for Cordova Location Kit
In the application we will build, the user will be able to view their last location and receive a notification when they leave an area they have defined.
So, we need to initialize the HMSLocationKit, HMSFusedLocation, and HMSGeofence services in the onDeviceReady callback.
Step 1
Code:
function onDeviceReady() {
// Initialize LocationKit
HMSLocationKit.init();
HMSFusedLocation.init();
HMSGeofence.init();
// Device ready effect
var parentElement = $('deviceready');
var listeningElement = parentElement.querySelector('.listening');
var receivedElement = parentElement.querySelector('.received');
listeningElement.setAttribute('style', 'display:none;');
receivedElement.setAttribute('style', 'display:block;');
};
An app can use the HUAWEI Location Kit service API to obtain the last known location of a device. In most cases, the last known location is the current location of the device. The sample code below calls the getLastLocation() method to obtain the last known location.
Note: Before running this code snippet, please check your app permissions.
Step 2
Code:
$('requestPermission').onclick = async () => {
try {
const permissionResult = await HMSFusedLocation.requestPermission();
console.log({permissionResult});
$('requestPermissionResult').innerHTML = JSON.stringify(permissionResult, null, 1);
} catch (ex) {
$('requestPermissionResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('hasPermission').onclick = async () => {
try {
const hasPermissionResult = await HMSFusedLocation.hasPermission();
console.log({hasPermissionResult});
$('hasPermissionResult').innerHTML = JSON.stringify(hasPermissionResult, null, 1);
} catch (ex) {
$('hasPermissionResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
Step 3
Code:
const $ = (x) => document.getElementById(x);
document.addEventListener('deviceready', onDeviceReady, false);
function onDeviceReady() {
// Initialize LocationKit
HMSLocationKit.init();
HMSFusedLocation.init();
HMSGeofence.init();
// Device ready effect
var parentElement = $('deviceready');
var listeningElement = parentElement.querySelector('.listening');
var receivedElement = parentElement.querySelector('.received');
listeningElement.setAttribute('style', 'display:none;');
receivedElement.setAttribute('style', 'display:block;');
};
const newLocationRequest = () => { return {
id: "locationRequest" + Math.random() * 10000,
priority:
HMSFusedLocation.PriorityConstants.PRIORITY_HIGH_ACCURACY,
interval: 3,
numUpdates: 1,
fastestInterval: 1000.0,
expirationTime: 1000.0,
expirationTimeDuration: 1000.0,
smallestDisplacement: 0.0,
maxWaitTime: 1000.0,
needAddress: false,
language: "en",
countryCode: "en",
}};
$('getLastLocation').onclick = async () => {
try {
const lastLocation = await HMSFusedLocation.getLastLocation();
console.log({lastLocation});
$('getLastLocationResult').innerHTML = JSON.stringify(lastLocation, null, 1);
} catch (ex) {
$('getLastLocationResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('requestPermission').onclick = async () => {
try {
const permissionResult = await HMSFusedLocation.requestPermission();
console.log({permissionResult});
$('requestPermissionResult').innerHTML = JSON.stringify(permissionResult, null, 1);
} catch (ex) {
$('requestPermissionResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('hasPermission').onclick = async () => {
try {
const hasPermissionResult = await HMSFusedLocation.hasPermission();
console.log({hasPermissionResult});
$('hasPermissionResult').innerHTML = JSON.stringify(hasPermissionResult, null, 1);
} catch (ex) {
$('hasPermissionResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('getLocationAvailability').onclick = async () => {
try {
const locationAvailability = await HMSFusedLocation.getLocationAvailability();
console.log({locationAvailability});
$('getLocationAvailabilityResult').innerHTML = JSON.stringify(locationAvailability, null, 1);
} catch (ex) {
$('getLocationAvailabilityResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('getLastLocationWithAddress').onclick = async () => {
try {
const getLastLocationWithAddressResult = await HMSFusedLocation.getLastLocationWithAddress(newLocationRequest());
console.log({getLastLocationWithAddressResult});
$('getLastLocationWithAddressResult').innerHTML = JSON.stringify(getLastLocationWithAddressResult, null, 1);
} catch (ex) {
$('getLastLocationWithAddressResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('createGeofenceList').onclick = async () => {
const geofence1 = {
longitude: 29.117645,
latitude: 41.012429,
radius: 2.0,
uniqueId: 'geofence' + Math.random() * 10000,
conversions: 1,
validContinueTime: 10000.0,
dwellDelayTime: 10,
notificationInterval: 1,
};
const geofence2 = {
longitude: 41.0,
latitude: 27.0,
radius: 340.0,
uniqueId: 'geofence' + Math.random() * 10000,
conversions: 2,
validContinueTime: 1000.0,
dwellDelayTime: 10,
notificationInterval: 1,
};
try {
const createGeofenceListResult = await HMSGeofence.createGeofenceList(
[geofence1],
HMSGeofence.GeofenceRequestConstants.ENTER_INIT_CONVERSION,
HMSGeofence.GeofenceRequestConstants.COORDINATE_TYPE_WGS_84
);
console.log({createGeofenceListResult});
$('createGeofenceListResult').innerHTML = JSON.stringify(createGeofenceListResult, null, 1);
} catch (ex) {
$('createGeofenceListResult').innerHTML = JSON.stringify(ex, null, 1);
}
};
$('registerGeofenceUpdates').onclick = async () => {
registerHMSEvent(HMSGeofence.Events.GEOFENCE_RESULT, (result) => {
console.log('new geofence update');
$('geofenceUpdateResult').innerHTML = JSON.stringify(result, null, 1);
});
};
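The conversions field in the geofence objects above is a bitmask of transition types. Assuming the usual HMS values (1 = enter, 2 = exit, 4 = dwell; please verify against the plugin's geofence constants), types can be combined with bitwise OR:

```javascript
// Assumed HMS transition-type values; verify against the plugin's
// geofence constants before relying on them.
const ENTER = 1, EXIT = 2, DWELL = 4;

// Combine transition types with bitwise OR; test membership with bitwise AND.
const conversions = ENTER | EXIT;          // notify on both enter and exit
console.log(conversions);                  // 3
console.log((conversions & DWELL) !== 0);  // false: dwell was not requested
```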
Test the Location Kit App
Run the application.
Code:
cordova run android
Press “Get Last Location” button.
Wait for the result.
Here it comes. You will see your last location on screen.
Conclusion
In this article, we integrated and used the Cordova HMS Location Kit in our application. You can enrich your application by integrating this useful and simple-to-use kit.
You can write comments for your questions.
Related Links
Thanks to Furkan Aybastı for this article.

Create and Monitor Geofences with HuaweiMap in Xamarin.Android Application

A geofence is a virtual perimeter set on a real geographic area. Combining a user position with a geofence perimeter, it is possible to know whether the user is inside the geofence, or is entering or exiting the area.
In this article, we will discuss how to use geofences to notify the user when the device enters or exits an area, using the HMS Location Kit in a Xamarin.Android application. We will also add and customize a HuaweiMap, which includes drawing circles, adding markers, and using Nearby Search to find places. We are going to learn how to use the features below together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
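Conceptually, the geofence containment test is just a distance check: the device is inside a circular geofence when its great-circle distance to the center is at most the radius. A self-contained sketch of that idea (our own illustration, not part of any HMS API; the coordinates are arbitrary):

```javascript
// Great-circle (haversine) distance in meters between two lat/lng points.
function distanceMeters(a, b) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// A position is inside a circular geofence when its distance to the
// center does not exceed the radius.
function insideGeofence(position, center, radiusMeters) {
  return distanceMeters(position, center) <= radiusMeters;
}

const center = { lat: 41.012429, lng: 29.117645 };
console.log(insideGeofence(center, center, 2)); // true: the center is inside
```

In practice the HMS Location Kit performs this monitoring for you and fires enter/exit transitions; the sketch is only to make the underlying geometry concrete.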
First of all, you need to be a registered Huawei mobile developer and create an application in the Huawei App Console in order to use the HMS Map, Location, and Site Kits. You can follow the steps below to complete the configuration required for development.
Configuring App Information in AppGallery Connect --> shorturl.at/rL347
Creating Xamarin Android Binding Libraries --> shorturl.at/rBP46
Integrating the HMS Map Kit Libraries for Xamarin --> shorturl.at/vAHPX
Integrating the HMS Location Kit Libraries for Xamarin --> shorturl.at/dCX07
Integrating the HMS Site Kit Libraries for Xamarin --> shorturl.at/bmDX6
Integrating the HMS Core SDK --> shorturl.at/qBISV
Setting Package in Xamarin --> shorturl.at/brCU1
When we create our Xamarin.Android application in the above steps, we need to make sure that the package name is the same as the one we entered in the console. Also, don’t forget to enable the kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring permissions that we need as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element to embed your app id in the application tag, it is required for this app to authenticate on the Huawei’s cloud server. You can find this id in agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Code:
private void RequestPermissions()
{
if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
{
ActivityCompat.RequestPermissions(this,
new System.String[]
{
Manifest.Permission.AccessCoarseLocation,
Manifest.Permission.AccessFineLocation,
Manifest.Permission.WriteExternalStorage,
Manifest.Permission.ReadExternalStorage,
Manifest.Permission.Internet
},
100);
}
else
GetCurrentPosition();
}
Add a Map
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Code:
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
Implement the IOnMapReadyCallback interface to MainActivity and override OnMapReady method which is triggered when the map is ready to use. Then use GetMapAsync to register for the map callback.
Later, we will request the address corresponding to a given latitude/longitude and specify that the output must be in JSON format.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
...
public void OnMapReady(HuaweiMap map)
{
hMap = map;
hMap.UiSettings.MyLocationButtonEnabled = true;
hMap.UiSettings.CompassEnabled = true;
hMap.UiSettings.ZoomControlsEnabled = true;
hMap.UiSettings.ZoomGesturesEnabled = true;
hMap.MyLocationEnabled = true;
hMap.MapClick += HMap_MapClick;
if (selectedCoordinates == null)
selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
}
}
As you can see above, with the UiSettings property of the HuaweiMap object we set my location button, enable compass, etc. Now when the app launch, directly get the current location and move the camera to it. In order to do that we use FusedLocationProviderClient that we instantiated and call LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant listeners for success and failure. In the success listener we move the map’s camera position to the last known position.
Code:
private void GetCurrentPosition()
{
var locationTask = fusedLocationProviderClient.LastLocation;
locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
...
public void OnSuccess(Java.Lang.Object location)
{
Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
if (location != null)
{
MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
}
}
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
There are several methods for creating a CameraUpdate. Briefly, these are:
NewLatLng: Change camera’s latitude and longitude, while keeping other properties
NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained from a CameraPosition.Builder, on which we can set the target, bearing, tilt, and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
var cameraPosition = new CameraPosition.Builder();
cameraPosition.Target(new LatLng(lat, lng));
cameraPosition.Zoom(1000);
cameraPosition.Bearing(45);
cameraPosition.Tilt(20);
CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
In this part, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. In this section, we set a geofence around the point where the map is clicked. We attached the Click event to our map in the OnMapReady method. In this event handler, we add a marker at the clicked point and draw a circle around it.
Also, we will use the SeekBar at the bottom of the page to adjust the circle radius. We set the selectedCoordinates variable when adding the marker. Let’s create the following methods to handle the click and create the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
selectedCoordinates.LatLng = e.P0;
if (circle != null)
{
circle.Remove();
circle = null;
}
AddMarkerOnMap();
}
void AddMarkerOnMap()
{
if (marker != null) marker.Remove();
var markerOption = new MarkerOptions()
.InvokeTitle("You are here now")
.InvokePosition(selectedCoordinates.LatLng);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
marker = hMap.AddMarker(markerOption);
bool isInfoWindowShown = marker.IsInfoWindowShown;
if (isInfoWindowShown)
marker.HideInfoWindow();
else
marker.ShowInfoWindow();
}
Add a MapInfoWindowAdapter class to the project for rendering the custom info window, and implement the HuaweiMap.IInfoWindowAdapter interface on it. Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let’s create a custom info window layout and name it map_info_view.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:text="Add geofence"
android:width="100dp"
style="@style/Widget.AppCompat.Button.Colored"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnInfoWindow" />
</LinearLayout>
And return it after customizing it in GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
private MainActivity activity;
private GeofenceModel selectedCoordinates;
private View addressLayout;
public MapInfoWindowAdapter(MainActivity currentActivity){activity = currentActivity;}
public View GetInfoContents(Marker marker){return null;}
public View GetInfoWindow(Marker marker)
{
if (marker == null)
return null;
selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
if (radiusBar != null && radiusBar.Visibility == Android.Views.ViewStates.Invisible)
{
radiusBar.Visibility = Android.Views.ViewStates.Visible;
}
radiusBar?.SetProgress(30, true);
activity.DrawCircleOnMap(selectedCoordinates);
Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
button.Click += btnInfoWindow_ClickAsync;
return mapInfoView;
}
}
Now let's create a method to draw a circle around the marker representing the geofence radius. Create a new DrawCircleOnMap method in MainActivity for this. To construct a circle, we must specify its center and radius. I also set other properties such as StrokeColor:
Code:
public void DrawCircleOnMap(GeofenceModel geoModel)
{
if (circle != null)
{
circle.Remove();
circle = null;
}
CircleOptions circleOptions = new CircleOptions()
.InvokeCenter(geoModel.LatLng)
.InvokeRadius(geoModel.Radius)
.InvokeFillColor(Color.Argb(50, 0, 14, 84))
.InvokeStrokeColor(Color.Yellow)
.InvokeStrokeWidth(15);
circle = hMap.AddCircle(circleOptions);
}
private void radiusBar_ProgressChanged(object sender, SeekBar.ProgressChangedEventArgs e)
{
selectedCoordinates.Radius = e.Progress;
DrawCircleOnMap(selectedCoordinates);
}
We will use the SeekBar to change the radius of the circle. As its value changes, the drawn circle expands or shrinks.
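The circle drawn here is only a visual aid; the geofence itself is the set of points within the chosen radius of the center. The membership test can be sketched language-agnostically (shown here in Python, with an illustrative function name) using the haversine formula on a spherical Earth model; in practice the HMS Geofence service performs this check for you on the device.

```python
import math

def is_inside_geofence(lat, lng, center_lat, center_lng, radius_m):
    """Return True if (lat, lng) lies within radius_m metres of the centre.

    Uses the haversine formula with a mean Earth radius of 6,371 km.
    Illustrative only: the HMS Geofence service does this check for you.
    """
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(center_lat), math.radians(lat)
    dphi = math.radians(lat - center_lat)
    dlmb = math.radians(lng - center_lng)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    distance = 2 * r * math.asin(math.sqrt(a))  # great-circle distance in metres
    return distance <= radius_m
```

For example, a point about 111 m north of the center is inside a 500 m geofence, while a point about 1.1 km north is not.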
Reverse Geocoding
Now let’s handle the click event of the info window.
But before opening that window, we need to reverse-geocode the selected coordinates to get a formatted address. HUAWEI Site Kit provides a set of HTTP APIs, including the one we need: reverseGeocode.
Let’s add the GeocodeManager class to our project and update it as follows:
Code:
public async Task<ReverseGeocodeResponse> ReverseGeocode(double lat, double lng)
{
string result = "";
using (var client = new HttpClient())
{
MyLocation location = new MyLocation();
location.Lat = lat;
location.Lng = lng;
var root = new ReverseGeocodeRequest();
root.Location = location;
var settings = new JsonSerializerSettings();
settings.ContractResolver = new LowercaseSerializer();
var json = JsonConvert.SerializeObject(root, Formatting.Indented, settings);
var data = new StringContent(json, Encoding.UTF8, "application/json");
var url = "https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=" + Android.Net.Uri.Encode(ApiKey);
var response = await client.PostAsync(url, data);
result = await response.Content.ReadAsStringAsync();
}
return JsonConvert.DeserializeObject<ReverseGeocodeResponse>(result);
}
In the above code, we request the address corresponding to the given latitude/longitude, and specify that the output must be in JSON format.
https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=APIKEY
Request model:
Code:
public class MyLocation
{
public double Lat { get; set; }
public double Lng { get; set; }
}
public class ReverseGeocodeRequest
{
public MyLocation Location { get; set; }
}
Note that the JSON response contains three root elements:
“returnCode”: For details, please refer to Result Codes.
“returnDesc”: description
“sites” contains an array of geocoded address information
Generally, only one entry in the “sites” array is returned for address lookups, though the geocoder may return several results when address queries are ambiguous.
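The response handling described above can be sketched independently of C# and JSON.NET. The Python sketch below uses a trimmed, made-up response body containing only the fields mentioned in the article (returnCode, sites, formatAddress); the field names follow the article, the values are placeholders.

```python
import json

# Trimmed sample of the reverseGeocode response shape described above.
# Field names follow the article; the values are made up for illustration.
sample = '''
{
  "returnCode": "0",
  "returnDesc": "OK",
  "sites": [
    {"name": "Sample Place", "formatAddress": "1 Example Street, Sample City"}
  ]
}
'''

def first_address(response_text):
    """Return the formatAddress of the first site, or None on error/no result."""
    data = json.loads(response_text)
    if data.get("returnCode") != "0":
        return None  # look the code up in the Result Codes table
    sites = data.get("sites", [])
    return sites[0]["formatAddress"] if sites else None
```

Note that returnCode arrives as a string, so it is compared against "0", matching how the Kotlin Geocoding class later in this thread checks it.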
Add the following code to our MapInfoWindowAdapter, where we get the result from the reverseGeocode API and set the UI elements.
Code:
private async void btnInfoWindow_ClickAsync(object sender, System.EventArgs e)
{
addressLayout = activity.LayoutInflater.Inflate(Resource.Layout.reverse_alert_layout, null);
GeocodeManager geocodeManager = new GeocodeManager(activity);
var addressResult = await geocodeManager.ReverseGeocode(selectedCoordinates.LatLng.Latitude, selectedCoordinates.LatLng.Longitude);
if (addressResult.ReturnCode != 0)
return;
var address = addressResult.Sites.FirstOrDefault();
var txtAddress = addressLayout.FindViewById<TextView>(Resource.Id.txtAddress);
var txtRadius = addressLayout.FindViewById<TextView>(Resource.Id.txtRadius);
txtAddress.Text = address.FormatAddress;
txtRadius.Text = selectedCoordinates.Radius.ToString();
AlertDialog.Builder builder = new AlertDialog.Builder(activity);
builder.SetView(addressLayout);
builder.SetTitle(address.Name);
builder.SetPositiveButton("Save", (sender, arg) =>
{
selectedCoordinates.Conversion = GetSelectedConversion();
GeofenceManager geofenceManager = new GeofenceManager(activity);
geofenceManager.AddGeofences(selectedCoordinates);
});
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
AlertDialog alert = builder.Create();
alert.Show();
}
Now, after selecting the conversion, the user can complete the process by pressing the Save button in the dialog, which calls the AddGeofences method of the GeofenceManager class.
Code:
public void AddGeofences(GeofenceModel geofenceModel)
{
//Set parameters
geofenceModel.Id = Guid.NewGuid().ToString();
if (geofenceModel.Conversion == 5) //Expiration value that indicates the geofence should never expire.
geofenceModel.Timeout = Geofence.GeofenceNeverExpire;
else
geofenceModel.Timeout = 10000;
List<IGeofence> geofenceList = new List<IGeofence>();
//Geofence Service
GeofenceService geofenceService = LocationServices.GetGeofenceService(activity);
PendingIntent pendingIntent = CreatePendingIntent();
GeofenceBuilder somewhereBuilder = new GeofenceBuilder()
.SetUniqueId(geofenceModel.Id)
.SetValidContinueTime(geofenceModel.Timeout)
.SetRoundArea(geofenceModel.LatLng.Latitude, geofenceModel.LatLng.Longitude, geofenceModel.Radius)
.SetDwellDelayTime(10000)
.SetConversions(geofenceModel.Conversion);
//Create geofence request
geofenceList.Add(somewhereBuilder.Build());
GeofenceRequest geofenceRequest = new GeofenceRequest.Builder()
.CreateGeofenceList(geofenceList)
.Build();
//Register geofence
var geoTask = geofenceService.CreateGeofenceList(geofenceRequest, pendingIntent);
geoTask.AddOnSuccessListener(new CreateGeoSuccessListener(activity));
geoTask.AddOnFailureListener(new CreateGeoFailListener(activity));
}
In the AddGeofences method, we set the geofence request parameters with GeofenceBuilder: the selected conversion, a unique ID, the timeout according to the conversion, and so on. We then create a GeofenceBroadcastReceiver and display a toast message when a geofence action occurs.
Code:
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY" })]
class GeofenceBroadcastReceiver : BroadcastReceiver
{
public static readonly string ActionGeofence = "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY";
public override void OnReceive(Context context, Intent intent)
{
if (intent != null)
{
var action = intent.Action;
if (action == ActionGeofence)
{
GeofenceData geofenceData = GeofenceData.GetDataFromIntent(intent);
if (geofenceData != null)
{
Toast.MakeText(context, "Geofence triggered: " + geofenceData.ConvertingLocation.Latitude +"\n" + geofenceData.ConvertingLocation.Longitude + "\n" + geofenceData.Conversion.ToConversionName(), ToastLength.Long).Show();
}
}
}
}
}
After that, in CreateGeoSuccessListener and CreateGeoFailListener, which implement IOnSuccessListener and IOnFailureListener respectively, we display a toast message to the user like this:
Code:
public class CreateGeoFailListener : Java.Lang.Object, IOnFailureListener
{
private readonly Context mainActivity;
public CreateGeoFailListener(Context context) { mainActivity = context; }
public void OnFailure(Java.Lang.Exception ex)
{
Toast.MakeText(mainActivity, "Geofence request failed: " + GeofenceErrorCodes.GetErrorMessage((ex as ApiException).StatusCode), ToastLength.Long).Show();
}
}
public class CreateGeoSuccessListener : Java.Lang.Object, IOnSuccessListener
{
private readonly Context mainActivity;
public CreateGeoSuccessListener(Context context) { mainActivity = context; }
public void OnSuccess(Java.Lang.Object data)
{
Toast.MakeText(mainActivity, "Geofence request successful", ToastLength.Long).Show();
}
}
Set geofence location using Nearby Search
On the main layout, when the user clicks the Search Nearby Places button, a search dialog like the one below appears:
Create search_alert_layout.xml with a search input. In MainActivity, create the click event of that button and open an alert dialog whose view is set to search_alert_layout. Then run a nearby search when the Search button is clicked:
Code:
private void btnGeoWithAddress_Click(object sender, EventArgs e)
{
search_view = base.LayoutInflater.Inflate(Resource.Layout.search_alert_layout, null);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.SetView(search_view);
builder.SetTitle("Search Location");
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
search_view.FindViewById<Button>(Resource.Id.btnSearch).Click += btnSearchClicked;
alert = builder.Create();
alert.Show();
}
private void btnSearchClicked(object sender, EventArgs e)
{
string searchText = search_view.FindViewById<TextView>(Resource.Id.txtSearch).Text;
GeocodeManager geocodeManager = new GeocodeManager(this);
geocodeManager.NearbySearch(CurrentPosition, searchText);
}
We pass the search text and the current location into the NearbySearch method of GeocodeManager as parameters. We need to modify the GeocodeManager class and add the nearby search method to it.
Code:
public void NearbySearch(LatLng currentLocation, string searchText)
{
ISearchService searchService = SearchServiceFactory.Create(activity, Android.Net.Uri.Encode("YOUR_API_KEY"));
NearbySearchRequest nearbySearchRequest = new NearbySearchRequest();
nearbySearchRequest.Query = searchText;
nearbySearchRequest.Language = "en";
nearbySearchRequest.Location = new Coordinate(currentLocation.Latitude, currentLocation.Longitude);
nearbySearchRequest.Radius = (Integer)2000;
nearbySearchRequest.PageIndex = (Integer)1;
nearbySearchRequest.PageSize = (Integer)5;
nearbySearchRequest.PoiType = LocationType.Address;
searchService.NearbySearch(nearbySearchRequest, new NearbySearchResultListener(activity as MainActivity));
}
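The PageIndex and PageSize parameters in the request above control server-side paging: PageIndex is 1-based and PageSize caps how many sites one response holds. A small language-agnostic sketch (in Python, with an illustrative function name) of which slice of the full result set a given page covers:

```python
def page_bounds(page_index, page_size, total_count):
    """Return the (start, end) half-open slice of results covered by a page.

    Mirrors the pageIndex/pageSize idea of NearbySearchRequest:
    page_index starts at 1, page_size caps how many sites one page holds,
    and the last page may be shorter than page_size.
    """
    start = (page_index - 1) * page_size
    end = min(start + page_size, total_count)
    return start, end
```

With PageSize 5 as in the request above, page 1 of 12 results covers items 0..4 and page 3 covers only the last two items.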
To handle the result, we create a listener class and implement the ISearchResultListener interface in it.
Code:
public class NearbySearchResultListener : Java.Lang.Object, ISearchResultListener
{
private readonly MainActivity context;
public NearbySearchResultListener(MainActivity mainActivity) { context = mainActivity; }
public void OnSearchError(SearchStatus status)
{
Toast.MakeText(context, "Error Code: " + status.ErrorCode + " Error Message: " + status.ErrorMessage, ToastLength.Long).Show();
}
public void OnSearchResult(Java.Lang.Object results)
{
NearbySearchResponse nearbySearchResponse = (NearbySearchResponse)results;
if (nearbySearchResponse != null && nearbySearchResponse.TotalCount > 0)
context.SetSearchResultOnMap(nearbySearchResponse.Sites);
}
}
In the OnSearchResult method, a NearbySearchResponse object is returned. We will add a marker to the map for each site in this response. The map will look like this:
In MainActivity, create a method named SetSearchResultOnMap that takes an IList<Site> parameter, to place multiple markers on the map.
Code:
public void SetSearchResultOnMap(IList<Com.Huawei.Hms.Site.Api.Model.Site> sites)
{
hMap.Clear();
if (searchMarkers != null && searchMarkers.Count > 0)
foreach (var item in searchMarkers)
item.Remove();
searchMarkers = new List<Marker>();
for (int i = 0; i < sites.Count; i++)
{
MarkerOptions marker1Options = new MarkerOptions()
.InvokePosition(new LatLng(sites[i].Location.Lat, sites[i].Location.Lng))
.InvokeTitle(sites[i].Name).Clusterable(true);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
var marker1 = hMap.AddMarker(marker1Options);
searchMarkers.Add(marker1);
RepositionMapCamera(sites[i].Location.Lat, sites[i].Location.Lng);
}
hMap.SetMarkersClustering(true);
alert.Dismiss();
}
Now we add markers as we did above. But here we also call SetMarkersClustering(true) to consolidate markers into clusters when zooming out of the map.
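Conceptually, marker clustering snaps nearby markers into one symbol as you zoom out. The toy grid-based sketch below (in Python) illustrates the idea only; it is not the algorithm the Map SDK uses internally.

```python
from collections import defaultdict

def cluster_markers(points, cell_deg):
    """Group (lat, lng) marker coordinates into clusters by grid cell.

    A toy version of what SetMarkersClustering(true) achieves visually:
    markers falling into the same cell_deg x cell_deg grid cell are
    merged into one cluster. Larger cell_deg mimics zooming further out.
    """
    cells = defaultdict(list)
    for lat, lng in points:
        key = (int(lat // cell_deg), int(lng // cell_deg))
        cells[key].append((lat, lng))
    return list(cells.values())
```

Two markers roughly a kilometre apart end up in one cluster at a coarse grid, while a marker in another city stays on its own.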
You can download the source code from below:
https://github.com/stugcearar/HMSCore-Xamarin-Android-Samples/tree/master/LocationKit/HMS_Geofence
Also if you have any questions, ask away in Huawei Developer Forums.
Errors
If the location permission is set to "Allowed only while in use" instead of "Allowed all the time", the exception below will be thrown.
int GEOFENCE_INSUFFICIENT_PERMISSION
Insufficient permission to perform geofence-related operations.
You can see all result codes including errors, in here for Location service.
You can find result codes with details here for Geofence request.

Huawei Cab Application Part-2 – Get Location updates using Map and Site Kit

Introduction
Huawei Cab Application explores HMS kits in a real-time scenario. Developers can use this app as a reference during HMS integration to easily understand how HMS works in a real-world app; please refer to the previous article for the first part.
Dashboard Module
The Dashboard page allows a user to book a cab with the help of the Huawei Location, Map, and Site kits, as follows:
Location Kit provides the current location and location updates, offering flexible location-based services to users globally.
Huawei Map Kit displays maps; it covers map data of more than 200 countries and regions, so you can search for any location address.
Site Kit provides users with convenient and secure access to diverse, place-related services.
Kits covered in Dashboard Module:
1. Location kit
2. Map kit
3. Site kit
4. Ads kit
Third-party APIs:
1. TomTom Directions API
App Screenshots:
Auto fetch current location and also customize current location button in huawei map.
Pick the destination you want to travel to with a marker in the Huawei map view.
Pick a destination using HMS Site Kit.
Draw a route between the start and end locations.
Huawei Location Kit
HUAWEI Location Kit combines the GPS, Wi-Fi, and base station location functionalities in your app to build up global positioning capabilities, allowing you to provide flexible location-based services to users around the globe. Currently, it provides three main capabilities: fused location, activity identification, and geofencing. You can call one or more of these capabilities as needed.
Huawei Map Kit
The HMS Core Map SDK is a set of APIs for map development in Android. The map data covers most countries outside China and supports multiple languages. The Map SDK uses the WGS 84 GPS coordinate system, which can meet most requirements of map development outside China. You can easily add map-related functions in your Android app, including:
Map display: Displays buildings, roads, water systems, and Points of Interest (POIs).
Map interaction: Controls the interaction gestures and buttons on the map.
Map drawing: Adds location markers, map layers, overlays, and various shapes.
Site Kit
HUAWEI Site Kit provides users with convenient and secure access to diverse, place-related services.
Ads Kit
Huawei Ads provides developers extensive data capabilities to deliver high-quality ad content to users. By integrating the HMS Ads Kit we can start earning right away. It is particularly useful when we are publishing a free app and want to earn some money from it.
Integrating the HMS Ads Kit does not take more than 10 minutes. It currently offers five ad formats: Banner, Native, Rewarded, Interstitial, and Splash ads.
In this article, we will see banner ad also.
Banner Ad: Banner ads are rectangular images that occupy a spot at the top, middle, or bottom within an app's layout. Banner ads refresh automatically at regular intervals. When a user taps a banner ad, the user is redirected to the advertiser's page in most cases.
TomTom for Direction API
TomTom, "technology for a moving world", is a leading independent location, navigation, and map technology specialist.
Follow the steps for TomTom Direction API.
Step 1: Visit the TomTom Developer Portal: https://developer.tomtom.com/
Step 2: Log in or sign up.
Step 3: Click PRODUCTS and select Directions API.
Step 4: Open the Routing API documentation: https://developer.tomtom.com/routing-api/routing-api-documentation-routing/calculate-route
Step 5: Click MY DASHBOARD, copy the key, and use it in your application.
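Per the Routing API documentation linked in Step 4, a calculateRoute request URL has the shape `api.tomtom.com/routing/1/calculateRoute/{locations}/json?key=...`, where locations is a colon-separated list of `lat,lng` pairs. A sketch of assembling it (in Python; the key and coordinates below are placeholders, not working credentials):

```python
from urllib.parse import quote

def build_route_url(api_key, start, end):
    """Assemble a TomTom Routing API calculateRoute URL.

    start and end are (lat, lng) tuples; the locations path segment is
    "lat,lng:lat,lng". Commas and colons are kept literal, everything
    else is percent-encoded.
    """
    locations = f"{start[0]},{start[1]}:{end[0]},{end[1]}"
    return (
        "https://api.tomtom.com/routing/1/calculateRoute/"
        f"{quote(locations, safe=':,')}/json?key={quote(api_key, safe='')}"
    )
```

The returned URL can then be fetched with any HTTP client to obtain the route geometry as JSON.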
Integration Preparations
To integrate HUAWEI Map, Site, Location and Ads Kit, you must complete the following preparations:
Create an app in AppGallery Connect.
Create a project in Android Studio.
Generate a signing certificate.
Generate a signing certificate fingerprint.
Configure the signing certificate fingerprint.
Add the app package name and save the configuration file.
Add the AppGallery Connect plug-in and the Maven repository in the project-level build.gradle file.
Configure the signature file in Android Studio.
Configuring the Development Environment
Enabling HUAWEI Map, Site, Location and Ads Kit.
1. Sign in to AppGallery Connect, select My apps, click an app, and navigate to Develop > Overview > Manage APIs.
2. Click agconnect-services.json to download the configuration file.
3. Copy the agconnect-services.json file to the app's root directory.
4. Open the build.gradle file in the root directory of your Android Studio project.
5. Configure the following information in the build.gradle file.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath 'com.huawei.agconnect:agcp:1.3.2.301'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
6. Open the build.gradle file in the app directory.
7. Add Dependencies in app level build.gradle.
Code:
apply plugin: 'com.huawei.agconnect'

dependencies {
implementation 'com.huawei.hms:maps:5.0.1.300'
implementation 'com.huawei.hms:location:5.0.0.302'
implementation 'com.huawei.hms:site:5.0.0.300'
implementation 'com.huawei.hms:ads-lite:13.4.30.307'
}
8. Declare the relevant permissions at the same level as the application element in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
9. Add the Huawei MapView and a banner ad in the layout file activity_mapview.xml:
Code:
<!--MapView-->
<com.huawei.hms.maps.MapView
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:map="http://schemas.android.com/apk/res-auto"
android:id="@+id/mapview"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:cameraTargetLat="51"
map:cameraTargetLng="10"
map:cameraZoom="8.5"
map:mapType="normal"
map:uiCompass="true"
map:uiZoomControls="true" />
<!--BannerView-->
<com.huawei.hms.ads.banner.BannerView
xmlns:hwads="http://schemas.android.com/apk/res-auto"
android:id="@+id/hw_banner_view"
android:layout_width="match_parent"
android:layout_height="wrap_content"
hwads:adId="@string/banner_ad_id"
android:layout_alignParentTop="true"
hwads:bannerSize="BANNER_SIZE_360_57" />
10. Add the configuration for calling the MapView to the activity.
Code:
class DashboardFragment : Fragment(), OnMapReadyCallback, ReverseGeoCodeListener {
private var hMap: HuaweiMap? = null
private var mMapView: MapView? = null
private var pickupLat: Double = 0.0
private var pickupLng: Double = 0.0
private var dropLat: Double = 0.0
private var dropLng: Double = 0.0
private var mPolyline: Polyline? = null
private var mMarkerDestination: Marker? = null
override fun onActivityCreated(savedInstanceState: Bundle?) {
super.onActivityCreated(savedInstanceState)
// create fusedLocationProviderClient
mFusedLocationProviderClient = LocationServices.getFusedLocationProviderClient(activity)
mMapView?.onCreate(savedInstanceState)
mMapView?.onResume()
mMapView?.getMapAsync(this)
}
fun initBannerAds() {
// Obtain BannerView based on the configuration in layout.
val adParam: AdParam = AdParam.Builder().build()
hw_banner_view.loadAd(adParam)
}
override fun onMapReady(map: HuaweiMap) {
Log.d(TAG, "onMapReady: ")
hMap = map
hMap?.isMyLocationEnabled = true
hMap?.uiSettings?.isMyLocationButtonEnabled = false
getLastLocation()
hMap?.setOnCameraMoveStartedListener {
when (it) {
HuaweiMap.OnCameraMoveStartedListener.REASON_GESTURE -> {
Log.d(TAG, "The user gestured on the map.")
val midLatLng: LatLng = hMap?.cameraPosition!!.target
Log.d("Moving_LatLng ", "" + midLatLng)
dropLat = midLatLng.latitude
dropLng = midLatLng.longitude
val task = MyAsyncTask(this, false)
task.execute(midLatLng.latitude, midLatLng.longitude) }
HuaweiMap.OnCameraMoveStartedListener
.REASON_API_ANIMATION -> {
Log.d(TAG, "The user tapped something on the map.")
}
HuaweiMap.OnCameraMoveStartedListener
.REASON_DEVELOPER_ANIMATION -> {
Log.d(TAG, "The app moved the camera.")
}
}
}
}
11. Add the lifecycle methods of the MapView.
Code:
override fun onStart() {
super.onStart()
mMapView?.onStart()
}
override fun onStop() {
super.onStop()
mMapView?.onStop()
}
override fun onDestroy() {
super.onDestroy()
mMapView?.onDestroy()
}
override fun onPause() {
mMapView?.onPause()
super.onPause()
}
override fun onResume() {
super.onResume()
mMapView?.onResume()
}
override fun onLowMemory() {
super.onLowMemory()
mMapView?.onLowMemory()
}
12. Check permissions to let the user allow the app to access the device location on Android 6.0 and later.
Code:
if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.P) {
Log.i(TAG, "sdk <= 28, below Q")
if (checkSelfPermission(
this,
ACCESS_FINE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this,
ACCESS_COARSE_LOCATION
) != PERMISSION_GRANTED
) {
val strings = arrayOf(ACCESS_FINE_LOCATION, ACCESS_COARSE_LOCATION)
requestPermissions(this, strings, 1)
}
} else {
if (checkSelfPermission(
this,
ACCESS_FINE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this, ACCESS_COARSE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this,
"android.permission.ACCESS_BACKGROUND_LOCATION"
) != PERMISSION_GRANTED
) {
val strings = arrayOf(
ACCESS_FINE_LOCATION,
ACCESS_COARSE_LOCATION,
"android.permission.ACCESS_BACKGROUND_LOCATION"
)
requestPermissions(this, strings, 2)
}
}
13. In your activity, handle the permission response in this way.
Code:
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 1) {
if (grantResults.size > 1 && grantResults[0] == PERMISSION_GRANTED && grantResults[1] == PERMISSION_GRANTED) {
Log.i(TAG, "onRequestPermissionsResult: apply LOCATION PERMISSION successful")
} else {
Log.i(TAG, "onRequestPermissionsResult: apply LOCATION PERMISSION failed")
}
}
if (requestCode == 2) {
if (grantResults.size > 2 && grantResults[2] == PERMISSION_GRANTED && grantResults[0] == PERMISSION_GRANTED && grantResults[1] == PERMISSION_GRANTED) {
Log.i(TAG, "onRequestPermissionsResult: apply ACCESS_BACKGROUND_LOCATION successful")
} else {
Log.i(TAG, "onRequestPermissionsResult: apply ACCESS_BACKGROUND_LOCATION failed")
}
}
}
You can perform subsequent steps by referring to the RequestLocationUpdatesWithCallbackActivity.kt file.
14. Create a location provider client and device setting client.
Code:
// create fusedLocationProviderClient
fusedLocationProviderClient = LocationServices.getFusedLocationProviderClient(this)
// create settingsClient
settingsClient = LocationServices.getSettingsClient(this)
15. Create a location request.
Code:
mLocationRequest = LocationRequest().apply {
// set the interval for location updates, in milliseconds
interval = 1000
needAddress = true
// set the priority of the request
priority = LocationRequest.PRIORITY_HIGH_ACCURACY
}
16. Create a result callback.
Code:
mLocationCallback = object : LocationCallback() {
override fun onLocationResult(locationResult: LocationResult?) {
if (locationResult != null) {
val locations: List<Location> =
locationResult.locations
if (locations.isNotEmpty()) {
for (location in locations) {
LocationLog.i(TAG,"onLocationResult location[Longitude,Latitude,Accuracy]:${location.longitude} , ${location.latitude} , ${location.accuracy}")
}
}
}
}
override fun onLocationAvailability(locationAvailability: LocationAvailability?) {
locationAvailability?.let {
val flag: Boolean = locationAvailability.isLocationAvailable
LocationLog.i(TAG, "onLocationAvailability isLocationAvailable:$flag")
}
}
}
17. Request location updates.
Code:
private fun requestLocationUpdatesWithCallback() {
try {
val builder = LocationSettingsRequest.Builder()
builder.addLocationRequest(mLocationRequest)
val locationSettingsRequest = builder.build()
// check devices settings before request location updates.
//Before requesting location update, invoke checkLocationSettings to check device settings.
val locationSettingsResponseTask: Task<LocationSettingsResponse> = settingsClient.checkLocationSettings(locationSettingsRequest)
locationSettingsResponseTask.addOnSuccessListener { locationSettingsResponse: LocationSettingsResponse? ->
Log.i(TAG, "check location settings success {$locationSettingsResponse}")
// request location updates
fusedLocationProviderClient.requestLocationUpdates(mLocationRequest, mLocationCallback, Looper.getMainLooper())
.addOnSuccessListener {
LocationLog.i(TAG, "requestLocationUpdatesWithCallback onSuccess")
}
.addOnFailureListener { e ->
LocationLog.e(TAG, "requestLocationUpdatesWithCallback onFailure:${e.message}")
}
}
.addOnFailureListener { e: Exception ->
LocationLog.e(TAG, "checkLocationSetting onFailure:${e.message}")
when ((e as ApiException).statusCode) {
LocationSettingsStatusCodes.RESOLUTION_REQUIRED -> try {
val rae = e as ResolvableApiException
rae.startResolutionForResult(
this@RequestLocationUpdatesWithCallbackActivity, 0
)
} catch (sie: SendIntentException) {
Log.e(TAG, "PendingIntent unable to execute request.")
}
}
}
} catch (e: Exception) {
LocationLog.e(TAG, "requestLocationUpdatesWithCallback exception:${e.message}")
}
}
18. Remove location updates.
Code:
private fun removeLocationUpdatesWithCallback() {
try {
fusedLocationProviderClient.removeLocationUpdates(mLocationCallback)
.addOnSuccessListener {
LocationLog.i(
TAG,
"removeLocationUpdatesWithCallback onSuccess"
)
}
.addOnFailureListener { e ->
LocationLog.e(
TAG,
"removeLocationUpdatesWithCallback onFailure:${e.message}"
)
}
} catch (e: Exception) {
LocationLog.e(
TAG,
"removeLocationUpdatesWithCallback exception:${e.message}"
)
}
}
19. Reverse Geocoding
Code:
companion object {
private const val TAG = "MapViewDemoActivity"
private const val MAPVIEW_BUNDLE_KEY = "MapViewBundleKey"
class MyAsyncTask internal constructor(
private val context: DashboardFragment,
private val isStartPos: Boolean
) : AsyncTask<Double, Double, String?>() {
private var resp: String? = null
lateinit var geocoding: Geocoding
override fun onPreExecute() {
}
override fun doInBackground(vararg params: Double?): String? {
try {
geocoding = Geocoding()
geocoding.reverseGeoCodeListener = context
geocoding.reverseGeocoding(
"reverseGeocode",
"YOUR_API_KEY",
params[0],
params[1],
isStartPos
)
} catch (e: InterruptedException) {
e.printStackTrace()
} catch (e: Exception) {
e.printStackTrace()
}
return resp
}
override fun onPostExecute(result: String?) {
}
override fun onProgressUpdate(vararg values: Double?) {
}
}
}
20. Geocoding class for converting a LatLng into an address.
Code:
class Geocoding {
var reverseGeoCodeListener: ReverseGeoCodeListener? = null
val ROOT_URL = "https://siteapi.cloud.huawei.com/mapApi/v1/siteService/"
val connection = "?key="
val JSON = MediaType.parse("application/json; charset=utf-8")
open fun reverseGeocoding(serviceName: String, apiKey: String?, lat: Double?, lng: Double?, isStartPos: Boolean) {
var sites: String = ""
val json = JSONObject()
val location = JSONObject()
try {
location.put("lng", lng)
location.put("lat", lat)
json.put("location", location)
json.put("language", "en")
json.put("politicalView", "CN")
json.put("returnPoi", true)
Log.d("MapViewDemoActivity", json.toString())
} catch (e: JSONException) {
Log.e("error", e.message ?: "JSON error")
}
val body: RequestBody = RequestBody.create(JSON, json.toString())
val client = OkHttpClient()
val request = Request.Builder()
.url(ROOT_URL + serviceName + connection + URLEncoder.encode(apiKey, "UTF-8"))
.post(body)
.build()
client.newCall(request).enqueue(object : Callback {
@Throws(IOException::class)
override fun onResponse(call: Call, response: Response) {
var str_response = response.body()!!.string()
val json_response:JSONObject = JSONObject(str_response)
val responseCode:String = json_response.getString("returnCode")
if (responseCode.contentEquals("0")) {
Log.d("ReverseGeocoding", str_response)
var jsonarray_sites: JSONArray = json_response.getJSONArray("sites")
var i:Int = 0
var size:Int = jsonarray_sites.length()
var json_objectdetail:JSONObject=jsonarray_sites.getJSONObject(0)
sites = json_objectdetail.getString("formatAddress")
Log.d("formatAddress", sites)
reverseGeoCodeListener?.getAddress(sites, isStartPos)
} else{
sites = "No Result"
Log.d("formatAddress", "")
reverseGeoCodeListener?.onAdddressError(sites)
}
}
override fun onFailure(call: Call, e: IOException) {
sites = "No Result"
Log.e("ReverseGeocoding", e.toString())
reverseGeoCodeListener?.onAdddressError(sites)
}
})
}
}
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201352135916100198&fid=0101187876626530001
When I am integrating Site Kit, I am getting error code 6. Can you explain why?
sujith.e said:
When I am integrating Site Kit, I am getting error code 6. Can you explain why?
Hi, sujith.e. The possible cause is that the API key contains special characters. You need to encode the special characters, for example with encodeURI.
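To illustrate the suggested fix, percent-encoding the key before appending it to the query string can be sketched as follows (in Python; encodeURIComponent in JavaScript or URLEncoder in Java/Kotlin achieve the same):

```python
from urllib.parse import quote

def encoded_key_param(api_key):
    """Percent-encode an API key before placing it in a query string.

    Keys may contain characters such as '+', '/', or '=' that are unsafe
    in a raw query string; sending them unencoded is the kind of problem
    the forum reply above describes.
    """
    return "key=" + quote(api_key, safe="")
```

For a key like `abc+def/1==`, the encoded parameter becomes `key=abc%2Bdef%2F1%3D%3D`, which the server decodes back to the original value.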

Exo Player — HMS Video Kit Comparison by Building a Flavored Android Application, Part I

In this comparison article you will:
Learn about simple transition animations.
Learn about JUnit testing for URLs.
Learn how to record videos using the CameraX library.
Learn to create a simple messaging system with Firebase Realtime Database. (Bonus content)
Learn how to add video files to Firebase Storage, store their URLs in the Firebase Realtime Database, and use those as our video sources.
Learn how to implement both Exo Player (with Widevine) and HMS Video Kit (with WisePlayer), and their pros and cons, while building both flavor dimensions of the project.
Please also note that the code supporting this article will be in Kotlin and XML.
Global Setups for our application
What you will need for building this application is listed below:
Hardware Requirements
A computer that can run Android Studio.
An Android phone for debugging.
Software Requirements
Android SDK package
Android Studio 3.X
API level of at least 23
HMS Core (APK) 4.0.1.300 or later (not needed for Exo Player 2.0)
To keep the app from taking up too much space on our phone, let’s start by adding the necessary plugins to the corresponding build files.
Code:
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt'
id 'kotlin-android-extensions'
}
if (getGradle().getStartParameter().getTaskRequests().toString().toLowerCase().contains("gms")) {
apply plugin: 'com.google.gms.google-services'
} else {
apply plugin: 'com.huawei.agconnect'
}
To compare them both, let us first create a flavor dimension in our project, separating GMS (Google Mobile Services) and HMS (Huawei Mobile Services) products.
Code:
flavorDimensions "version"
productFlavors {
hms {
dimension "version"
}
gms {
dimension "version"
}
}
After that, add gms and hms directories in your src folder. The selected build variant will be highlighted in blue. Note that the “java” and “res” folders must also be added by hand! And don’t worry, I’ll show you how to fill them properly.
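The reason those per-flavor folders matter is that each source set can ship its own implementation of the same class name, and only the selected variant's copy gets compiled. A plain-Kotlin sketch of the pattern (the interface and class names here are hypothetical, not from the project):

```kotlin
// In the real project these two classes would live in src/gms/java and
// src/hms/java under the same package, and only one would exist in a given
// build variant. They are shown side by side here so the sketch runs as-is.
interface VideoPlayerProvider {
    fun playerName(): String
}

// Hypothetical src/gms implementation
class GmsPlayerProvider : VideoPlayerProvider {
    override fun playerName() = "ExoPlayer"
}

// Hypothetical src/hms implementation
class HmsPlayerProvider : VideoPlayerProvider {
    override fun playerName() = "WisePlayer"
}

fun main() {
    // The build variant decides which class exists; simulated here by picking one.
    val provider: VideoPlayerProvider = HmsPlayerProvider()
    println(provider.playerName()) // prints "WisePlayer"
}
```

Because only one provider class exists per variant, call sites can reference the shared name without any runtime branching.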
Next, enable the build features that we’ll use a lot in this project.
Code:
buildFeatures {
dataBinding true
viewBinding true
}
Now that we have product flavors, let us separate our dependencies. Note that the flavor-specific implementations start with the flavor name as a prefix (gmsImplementation, hmsImplementation).
Code:
/* ExoPlayer */
def exoP = "2.12.1"
/* Slider Activity */
def roadkill = "2.1.0"
/* CameraX */
def camerax_version = "1.0.0-beta03"
def camerax_extensions = "1.0.0-alpha10"
/* BottomBar */
implementation 'nl.joery.animatedbottombar:library:1.0.9'
/* Firebase */
implementation platform('com.google.firebase:firebase-bom:26.1.0')
implementation 'com.google.firebase:firebase-analytics'
implementation 'com.google.firebase:firebase-database:19.5.1'
implementation 'com.google.firebase:firebase-storage:19.2.1'
implementation 'com.firebaseui:firebase-ui-auth:6.2.0'
implementation 'com.firebaseui:firebase-ui-database:6.2.0'
implementation 'com.firebaseui:firebase-ui-firestore:6.2.0'
implementation 'com.firebaseui:firebase-ui-storage:6.2.0'
/* Simple Video View */
implementation 'com.klinkerapps:simple_videoview:1.2.4'
/* Lottie */
implementation "com.airbnb.android:lottie:3.4.0"
/*Sliding Activity*/
implementation "com.r0adkll:slidableactivity:$roadkill"
/* Exo */
gmsImplementation "com.google.android.exoplayer:exoplayer:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-core:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-dash:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-ui:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-hls:$exoP"
gmsImplementation "com.google.android.exoplayer:extension-okhttp:$exoP"
/* Huawei */
implementation 'com.huawei.agconnect:agconnect-core:1.4.1.300'
hmsImplementation 'com.huawei.agconnect:agconnect-auth:1.4.1.300'
hmsImplementation 'com.huawei.hms:hwid:5.0.3.302'
hmsImplementation 'com.huawei.hms:videokit-player:1.0.1.300'
// CameraX core library using the camera2 implementation
//noinspection GradleDependency
implementation "androidx.camera:camera-core:${camerax_version}"
//noinspection GradleDependency
implementation "androidx.camera:camera-camera2:${camerax_version}"
// If you want to additionally use the CameraX Lifecycle library
//noinspection GradleDependency
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
// If you want to additionally use the CameraX View class
//noinspection GradleDependency
implementation "androidx.camera:camera-view:${camerax_extensions}"
// If you want to additionally use the CameraX Extensions library
//noinspection GradleDependency
implementation "androidx.camera:camera-extensions:${camerax_extensions}"
// Login
…
Adding Predefined Video URLs
These URLs will be our predefined test sources.
XML:
<resources>
<string-array name="data_url">
<item>http://videoplay-mos-dra.dbankcdn.com/P_VT/video_injection/61/v3/519249A7370974110613784576/MP4Mix_H.264_1920x1080_6000_HEAAC1_PVC_NoCut.mp4?accountinfo=Qj3ukBa%2B5OssJ6UBs%2FNh3iJ24kpPHADlWrk80tR3gxSjRYb5YH0Gk7Vv6TMUZcd5Q%2FK%2BEJYB%2BKZvpCwiL007kA%3D%3D%3A20200720094445%3AUTC%2C%2C%2C20200720094445%2C%2C%2C-1%2C1%2C0%2C%2C%2C1%2C%2C%2C%2C1%2C%2C0%2C%2C%2C%2C%2C1%2CEND&GuardEncType=2&contentCode=M2020072015070339800030113000000&spVolumeId=MP2020072015070339800030113000000&server=videocontent-dra.himovie.hicloud.com&protocolType=1&formatPriority=504*%2C204*%2C2</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerEscapes.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/Sintel.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/SubaruOutbackOnStreetAndDirt.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/TearsOfSteel.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/VolkswagenGTIReview.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WhatCarCanYouGetForAGrand.mp4</item>
</string-array>
<string-array name="data_name">
<item>Analytics - HMS Core</item>
<item>Big Buck Bunny</item>
<item>Elephant Dream</item>
<item>For Bigger Blazes</item>
<item>For Bigger Escape</item>
<item>For Bigger Fun</item>
<item>For Bigger Joyrides</item>
<item>For Bigger Meltdowns</item>
<item>Sintel</item>
<item>Subaru Outback On Street And Dirt</item>
<item>Tears of Steel</item>
<item>Volkswagen GTI Review</item>
<item>We Are Going On Bullrun</item>
<item>What car can you get for a grand?</item>
</string-array>
</resources>
Defining Firebase constants
I am skipping the Firebase project setup and moving directly to the part that interests you most, my dear reader. Since we are about to bind videos to each individual user, there must also be a login process, which I will likewise skip.
Let us start by getting the logged-in user's ID.
Code:
object AppUser {
private var userId = ""
fun setUserId(userId: String) {
if (userId != "")
this.userId = userId
else
this.userId = "dummy"
}
fun getUserId() = userId
}
Firebase Database Helper:
Code:
object FirebaseDbHelper {
fun rootRef() : DatabaseReference {
return FirebaseDatabase.getInstance().reference
}
fun getUser(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("User").child(userID)
}
fun onDisconnect(db : DatabaseReference) : Task<Void> {
return db.child("onlineStatus").onDisconnect().setValue(ServerValue.TIMESTAMP)
}
fun onConnect(db : DatabaseReference) : Task<Void> {
return db.child("onlineStatus").setValue("true")
}
fun getUserInfo(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("UserInfo").child(userID)
}
fun getVideoFeed(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("UserFeed/Video/$userID")
}
fun getVideoFeedItem(userID: String, listId:String):DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("UserFeed/Video/$userID/$listId")
}
fun getShared(): DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/Shareable")
}
fun getShareItem(listId: String):DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/Shareable/$listId")
}
fun getUnSharedItem(listId: String): DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/UnShareable/$listId")
}
fun getPostMessageRootRef() : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessage")
}
fun getPostMessageRef(listID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessage/$listID")
}
fun getPostMessageUnderRef(postID : String, listID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessageUnder/$postID/$listID/Message")
}
}
Now that we have the helper, let's create our global constants:
Code:
class Constants {
companion object {
/* Firebase References */
val fUserInfoDB = FirebaseDbHelper.getUserInfo(AppUser.getUserId())
val fFeedRef = FirebaseDbHelper.getVideoFeed(AppUser.getUserId())
val fSharedRef = FirebaseDbHelper.getShared()
/* Recording Video */
const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
const val VIDEO_EXTENSION = ".mp4"
var recPath = Environment.getExternalStorageDirectory().path + "/Pictures/ExoVideoReference"
fun getOutputDirectory(context: Context): File {
val appContext = context.applicationContext
val mediaDir = context.externalMediaDirs.firstOrNull()?.let {
File(
recPath
).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else appContext.filesDir
}
fun createFile(baseFolder: File, format: String, extension: String) =
File(
baseFolder, SimpleDateFormat(format, Locale.ROOT)
.format(System.currentTimeMillis()) + extension
)
}
}
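The only logic in Constants that does not depend on Android is the timestamped file naming; isolated, it looks like this (recPath and the media-directory lookup are Android-specific and omitted here):

```kotlin
import java.io.File
import java.text.SimpleDateFormat
import java.util.Locale

const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
const val VIDEO_EXTENSION = ".mp4"

// Same helper as in Constants: a timestamp-named file inside the given folder.
fun createFile(baseFolder: File, format: String, extension: String) =
    File(baseFolder, SimpleDateFormat(format, Locale.ROOT)
        .format(System.currentTimeMillis()) + extension)

fun main() {
    val file = createFile(File("/tmp"), FILENAME, VIDEO_EXTENSION)
    println(file.name) // e.g. 2021-01-30-17-05-12-345.mp4
}
```

Locale.ROOT keeps the date formatting stable regardless of the device locale, which matters because the name doubles as a sortable timestamp.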
Let’s create the data classes we will use:
Code:
object DataClass {
data class ProfileVideoDataClass(
val shareStat: String = "",
val like: String = "",
val timeDate: String = "",
val timeMillis: String = "",
val uploaderID: String = "",
val videoUrl: String = "",
val videoName:String = ""
)
data class UploadsShareableDataClass(
val like: String = "",
val timeDate: String = "",
val timeMillis: String = "",
val uploaderID: String = "",
val videoUrl: String = "",
val videoName:String = ""
)
data class PostMessageDataClass(
val comment : String = "",
val comment_lovely : String = "",
val commenter_ID : String = "",
val commenter_image : String = "",
val commenter_name : String = "",
val time : String = "",
val type : String = ""
)
}
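Every property above defaults to an empty string, and that is not cosmetic: Firebase Realtime Database materializes data classes through a no-argument constructor, which Kotlin only generates when all constructor parameters have defaults. A runnable sketch of that mechanism (the PostMessage class here is a trimmed, hypothetical stand-in):

```kotlin
// Trimmed stand-in for the article's PostMessageDataClass.
data class PostMessage(
    val comment: String = "",
    val time: String = ""
)

fun main() {
    // Kotlin synthesizes a parameterless constructor because every property
    // has a default; Firebase's deserializer relies on exactly this.
    val noArgCtor = PostMessage::class.java.getDeclaredConstructor()
    val empty = noArgCtor.newInstance()
    println(empty) // PostMessage(comment=, time=)
}
```

Drop one default and `getDeclaredConstructor()` would throw NoSuchMethodException, which is the usual cause of Firebase mapping failures with Kotlin data classes.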
Custom URL Test Implementation with JUnit
We need a test-backed validator to help our users enter a valid URL address. For that, let's begin by creating a new URL validator class:
Code:
class UrlValidatorHelper : TextWatcher {
internal var isValid = false
override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}
override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {}
override fun afterTextChanged(s: Editable?) {
isValid = isValidUrl(s)
}
companion object {
fun isValidUrl(url: CharSequence?): Boolean {
return url!=null && URLUtil.isValidUrl(url.trim().toString()) && Patterns.WEB_URL.matcher(url).matches()
}
}
}
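Note that URLUtil and Patterns come from the Android framework, so the helper above cannot run in a plain JVM unit test without Robolectric or mocking. A looser, framework-free sketch of the same idea using java.net.URI (an assumption on my part, not what the article uses; it does not validate domain shape the way Patterns.WEB_URL does):

```kotlin
import java.net.URI

// Framework-free approximation: accept only absolute http/https URLs with a host.
fun isLikelyValidUrl(url: CharSequence?): Boolean {
    if (url.isNullOrBlank()) return false
    return try {
        val uri = URI(url.trim().toString())
        uri.scheme in setOf("http", "https") && !uri.host.isNullOrBlank()
    } catch (e: Exception) {
        false // malformed URI syntax
    }
}

fun main() {
    println(isLikelyValidUrl("https://example.com/watch?v=abc")) // true
    println(isLikelyValidUrl("www.youtube.com/watch?v=abc"))     // false: no scheme
    println(isLikelyValidUrl(null))                              // false
}
```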
Now that we have the URL helper, let's create our test class under test >> java >> “package_name” >> UrlValidatorTest:
Code:
class UrlValidatorTest {
@Test
fun urlValidator_InvalidCertificate_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("www.youtube.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidIdentifier_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://youtube.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidDomain_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://www.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidExtension_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://www.youtube/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_EmptyUrl_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl(""))
}
@Test
fun urlValidator_NullUrl_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl(null))
}
}
------------------------------
END OF PART - I
Interesting.

Find hand points using Hand Gesture Recognition feature by Huawei ML Kit in Android (Kotlin)

Introduction
In this article, we will learn how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return their positions. The hand gesture recognition capability can detect and return the rectangular regions of the hand from images and videos, together with the type and confidence of a gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
Use Cases
Hand keypoint detection is widely used in daily life. For example, after integrating this capability, users can convert the detected hand keypoints into a 2D model, and synchronize the model to the character model, to produce a vivid 2D animation. In addition, when shooting a short video, special effects can be generated based on dynamic hand trajectories. This allows users to play finger games, thereby making the video shooting process more creative and interactive. Hand gesture recognition enables your app to call various commands by recognizing users' gestures. Users can control their smart home appliances without touching them. In this way, this capability makes the human-machine interaction more efficient.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. Must have a laptop or desktop with Android Studio, Jdk 1.8, SDK platform 26 and Gradle 4.6 and above installed.
4. Minimum API Level 21 is required.
5. Devices running EMUI 9.0.0 or later are required.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.
2. Create a project in Android Studio; refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio project, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name depends on the name the user created.
5. Create an app in AppGallery Connect.
6. Download the agconnect-services.json file from App information, then copy and paste it into the Android project under the app directory, as follows.
7. Enter the SHA-256 certificate fingerprint and click the Save button, as follows.
Note: Steps 1 to 7 above are common to all Huawei Kits.
8. Click the Manage APIs tab and enable ML Kit.
9. Add the below Maven URL in the build.gradle(Project) file under the repositories of buildscript and allprojects, and the classpath under dependencies of buildscript; refer to Add Configuration.
Code:
maven { url 'http://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
10. Add the below plugin and dependencies in build.gradle(Module) file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.5.300'
// ML Kit Hand Gesture
// Import the base SDK
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'
11. Now sync the Gradle files.
12. Add the required permissions to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
In MainActivity.kt we can find the business logic for the buttons.
Code:
class MainActivity : AppCompatActivity() {
private var staticButton: Button? = null
private var liveButton: Button? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
staticButton = findViewById(R.id.btn_static)
liveButton = findViewById(R.id.btn_live)
staticButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, StaticHandKeyPointAnalyse::class.java)
startActivity(intent)
}
liveButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, LiveHandKeyPointAnalyse::class.java)
startActivity(intent)
}
}
}
In LiveHandKeyPointAnalyse.kt we can find the business logic for live analysis.
Code:
class LiveHandKeyPointAnalyse : AppCompatActivity(), View.OnClickListener {
private val TAG: String = LiveHandKeyPointAnalyse::class.java.getSimpleName()
private var mPreview: LensEnginePreview? = null
private var mOverlay: GraphicOverlay? = null
private var mFacingSwitch: Button? = null
private var mAnalyzer: MLHandKeypointAnalyzer? = null
private var mLensEngine: LensEngine? = null
private val lensType = LensEngine.BACK_LENS
private var mLensType = 0
private var isFront = false
private var isPermissionRequested = false
private val CAMERA_PERMISSION_CODE = 0
private val ALL_PERMISSION = arrayOf(Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_live_hand_key_point_analyse)
if (savedInstanceState != null) {
mLensType = savedInstanceState.getInt("lensType")
}
initView()
createHandAnalyzer()
if (Camera.getNumberOfCameras() == 1) {
mFacingSwitch!!.visibility = View.GONE
}
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else {
checkPermission()
}
}
private fun initView() {
mPreview = findViewById(R.id.hand_preview)
mOverlay = findViewById(R.id.hand_overlay)
mFacingSwitch = findViewById(R.id.handswitch)
mFacingSwitch!!.setOnClickListener(this)
}
private fun createHandAnalyzer() {
// Create an analyzer. You can create an analyzer using the provided customized hand keypoint detection parameter: MLHandKeypointAnalyzerSetting
val setting = MLHandKeypointAnalyzerSetting.Factory()
.setMaxHandResults(2)
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
mAnalyzer!!.setTransactor(HandAnalyzerTransactor(this, mOverlay!!) )
}
// Check the permissions required by the SDK.
private fun checkPermission() {
if (Build.VERSION.SDK_INT >= 23 && !isPermissionRequested) {
isPermissionRequested = true
val permissionsList = ArrayList<String>()
for (perm in getAllPermission()!!) {
if (PackageManager.PERMISSION_GRANTED != checkSelfPermission(perm.toString())) {
permissionsList.add(perm.toString())
}
}
if (!permissionsList.isEmpty()) {
requestPermissions(permissionsList.toTypedArray(), 0)
}
}
}
private fun getAllPermission(): MutableList<Array<String>> {
return Collections.unmodifiableList(listOf(ALL_PERMISSION))
}
private fun createLensEngine() {
val context = this.applicationContext
// Create LensEngine.
mLensEngine = LensEngine.Creator(context, mAnalyzer)
.setLensType(mLensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create()
}
private fun startLensEngine() {
if (mLensEngine != null) {
try {
mPreview!!.start(mLensEngine, mOverlay)
} catch (e: IOException) {
Log.e(TAG, "Failed to start lens engine.", e)
mLensEngine!!.release()
mLensEngine = null
}
}
}
// Permission application callback.
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
var hasAllGranted = true
if (requestCode == CAMERA_PERMISSION_CODE) {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else if (grantResults[0] == PackageManager.PERMISSION_DENIED) {
hasAllGranted = false
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, permissions[0]!!)) {
showWaringDialog()
} else {
Toast.makeText(this, R.string.toast, Toast.LENGTH_SHORT).show()
finish()
}
}
return
}
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
}
override fun onSaveInstanceState(outState: Bundle) {
outState.putInt("lensType", lensType)
super.onSaveInstanceState(outState)
}
private class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyse?,
private val mGraphicOverlay: GraphicOverlay) : MLTransactor<MLHandKeypoints?> {
// Process the results returned by the analyzer.
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints?>) {
mGraphicOverlay.clear()
val handKeypointsSparseArray = result.analyseList
val list: MutableList<MLHandKeypoints?> = ArrayList()
for (i in 0 until handKeypointsSparseArray.size()) {
list.add(handKeypointsSparseArray.valueAt(i))
}
val graphic = HandKeypointGraphic(mGraphicOverlay, list)
mGraphicOverlay.add(graphic)
}
override fun destroy() {
mGraphicOverlay.clear()
}
}
override fun onClick(v: View?) {
when (v!!.id) {
R.id.handswitch -> switchCamera()
else -> {}
}
}
private fun switchCamera() {
isFront = !isFront
mLensType = if (isFront) {
LensEngine.FRONT_LENS
} else {
LensEngine.BACK_LENS
}
if (mLensEngine != null) {
mLensEngine!!.close()
}
createLensEngine()
startLensEngine()
}
private fun showWaringDialog() {
val dialog = AlertDialog.Builder(this)
dialog.setMessage(R.string.Information_permission)
.setPositiveButton(R.string.go_authorization,
DialogInterface.OnClickListener { dialog, which ->
val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
val uri = Uri.fromParts("package", applicationContext.packageName, null)
intent.data = uri
startActivity(intent)
})
.setNegativeButton("Cancel", DialogInterface.OnClickListener { dialog, which -> finish() })
.setOnCancelListener(dialogInterface)
dialog.setCancelable(false)
dialog.show()
}
var dialogInterface = DialogInterface.OnCancelListener { }
override fun onResume() {
super.onResume()
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
startLensEngine()
} else {
checkPermission()
}
}
override fun onPause() {
super.onPause()
mPreview!!.stop()
}
override fun onDestroy() {
super.onDestroy()
if (mLensEngine != null) {
mLensEngine!!.release()
}
if (mAnalyzer != null) {
mAnalyzer!!.stop()
}
}
}
Create LensEnginePreview.kt class to find the business logic for lens engine view.
Code:
class LensEnginePreview(private val mContext: Context, attrs: AttributeSet?) : ViewGroup(mContext, attrs) {
private val mSurfaceView: SurfaceView
private var mStartRequested = false
private var mSurfaceAvailable = false
private var mLensEngine: LensEngine? = null
private var mOverlay: GraphicOverlay? = null
@Throws(IOException::class)
fun start(lensEngine: LensEngine?) {
if (lensEngine == null) {
stop()
}
mLensEngine = lensEngine
if (mLensEngine != null) {
mStartRequested = true
startIfReady()
}
}
@Throws(IOException::class)
fun start(lensEngine: LensEngine?, overlay: GraphicOverlay?) {
mOverlay = overlay
this.start(lensEngine)
}
fun stop() {
if (mLensEngine != null) {
mLensEngine!!.close()
}
}
@Throws(IOException::class)
private fun startIfReady() {
if (mStartRequested && mSurfaceAvailable) {
mLensEngine!!.run(mSurfaceView.holder)
if (mOverlay != null) {
val size = mLensEngine!!.displayDimension
val min = Math.min(size.width, size.height)
val max = Math.max(size.width, size.height)
if (isPortraitMode) {
// Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
mOverlay!!.setCameraInfo(min, max, mLensEngine!!.lensType)
} else {
mOverlay!!.setCameraInfo(max, min, mLensEngine!!.lensType)
}
mOverlay!!.clear()
}
mStartRequested = false
}
}
private inner class SurfaceCallback : SurfaceHolder.Callback {
override fun surfaceCreated(surface: SurfaceHolder) {
mSurfaceAvailable = true
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
override fun surfaceDestroyed(surface: SurfaceHolder) {
mSurfaceAvailable = false
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
}
override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
var previewWidth = 480
var previewHeight = 360
if (mLensEngine != null) {
val size = mLensEngine!!.displayDimension
if (size != null) {
previewWidth = size.width
previewHeight = size.height
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (isPortraitMode) {
val tmp = previewWidth
previewWidth = previewHeight
previewHeight = tmp
}
val viewWidth = right - left
val viewHeight = bottom - top
val childWidth: Int
val childHeight: Int
var childXOffset = 0
var childYOffset = 0
val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
// To fill the view with the camera preview, while also preserving the correct aspect ratio,
// it is usually necessary to slightly oversize the child and to crop off portions along one
// of the dimensions. We scale up based on the dimension requiring the most correction, and
// compute a crop offset for the other dimension.
if (widthRatio > heightRatio) {
childWidth = viewWidth
childHeight = (previewHeight.toFloat() * widthRatio).toInt()
childYOffset = (childHeight - viewHeight) / 2
} else {
childWidth = (previewWidth.toFloat() * heightRatio).toInt()
childHeight = viewHeight
childXOffset = (childWidth - viewWidth) / 2
}
for (i in 0 until this.childCount) {
// One dimension will be cropped. We shift child over or up by this offset and adjust
// the size to maintain the proper aspect ratio.
getChildAt(i).layout(-1 * childXOffset, -1 * childYOffset,
childWidth - childXOffset,childHeight - childYOffset )
}
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
private val isPortraitMode: Boolean
get() {
val orientation = mContext.resources.configuration.orientation
if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
return false
}
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
return true
}
Log.d(TAG, "isPortraitMode returning false by default")
return false
}
companion object {
private val TAG = LensEnginePreview::class.java.simpleName
}
init {
mSurfaceView = SurfaceView(mContext)
mSurfaceView.holder.addCallback(SurfaceCallback())
this.addView(mSurfaceView)
}
}
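The onLayout arithmetic above is worth isolating: the preview is scaled by whichever ratio needs the most correction, and the overflow along the other axis is cropped symmetrically (center-crop). A standalone sketch with made-up sample dimensions:

```kotlin
data class ChildLayout(val width: Int, val height: Int, val xOffset: Int, val yOffset: Int)

// Mirrors the center-crop math in LensEnginePreview.onLayout:
// scale by the dimension requiring the most correction, crop the other.
fun computeChildLayout(viewWidth: Int, viewHeight: Int,
                       previewWidth: Int, previewHeight: Int): ChildLayout {
    val widthRatio = viewWidth.toFloat() / previewWidth
    val heightRatio = viewHeight.toFloat() / previewHeight
    return if (widthRatio > heightRatio) {
        val childHeight = (previewHeight * widthRatio).toInt()
        ChildLayout(viewWidth, childHeight, 0, (childHeight - viewHeight) / 2)
    } else {
        val childWidth = (previewWidth * heightRatio).toInt()
        ChildLayout(childWidth, viewHeight, (childWidth - viewWidth) / 2, 0)
    }
}

fun main() {
    // 1080x1920 portrait view, 480x640 preview (already width/height-swapped).
    val layout = computeChildLayout(1080, 1920, 480, 640)
    println(layout) // ChildLayout(width=1440, height=1920, xOffset=180, yOffset=0)
}
```

The offsets are then applied negatively when laying out the child, which is why the code calls layout(-1 * childXOffset, ...).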
Create HandKeypointGraphic.kt class to find the business logic for hand key point.
Code:
class HandKeypointGraphic(overlay: GraphicOverlay?, private val handKeypoints: MutableList<MLHandKeypoints?>) : GraphicOverlay.Graphic(overlay!!) {
private val rectPaint: Paint
private val idPaintnew: Paint
companion object {
private const val BOX_STROKE_WIDTH = 5.0f
}
private fun translateRect(rect: Rect): Rect {
var left: Float = translateX(rect.left)
var right: Float = translateX(rect.right)
var bottom: Float = translateY(rect.bottom)
var top: Float = translateY(rect.top)
if (left > right) {
val size = left
left = right
right = size
}
if (bottom < top) {
val size = bottom
bottom = top
top = size
}
return Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
}
init {
val selectedColor = Color.WHITE
idPaintnew = Paint()
idPaintnew.color = Color.GREEN
idPaintnew.textSize = 32f
rectPaint = Paint()
rectPaint.color = selectedColor
rectPaint.style = Paint.Style.STROKE
rectPaint.strokeWidth = BOX_STROKE_WIDTH
}
override fun draw(canvas: Canvas?) {
for (i in handKeypoints.indices) {
val mHandKeypoints = handKeypoints[i]
if (mHandKeypoints!!.getHandKeypoints() == null) {
continue
}
val rect = translateRect(handKeypoints[i]!!.getRect())
canvas!!.drawRect(rect, rectPaint)
for (handKeypoint in mHandKeypoints.getHandKeypoints()) {
if (!(Math.abs(handKeypoint.getPointX() - 0f) == 0f && Math.abs(handKeypoint.getPointY() - 0f) == 0f)) {
canvas!!.drawCircle(translateX(handKeypoint.getPointX().toInt()),
translateY(handKeypoint.getPointY().toInt()), 24f, idPaintnew)
}
}
}
}
}
Create GraphicOverlay.kt class to find the business logic for graphic overlay.
Code:
class GraphicOverlay(context: Context?, attrs: AttributeSet?) : View(context, attrs) {
private val mLock = Any()
private var mPreviewWidth = 0
private var mWidthScaleFactor = 1.0f
private var mPreviewHeight = 0
private var mHeightScaleFactor = 1.0f
private var mFacing = LensEngine.BACK_LENS
private val mGraphics: MutableSet<Graphic> = HashSet()
// Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
// this and implement the [Graphic.draw] method to define the graphics element. Add instances to the overlay using [GraphicOverlay.add].
abstract class Graphic(private val mOverlay: GraphicOverlay) {
// Draw the graphic on the supplied canvas. Drawing should use the following methods to
// convert to view coordinates for the graphics that are drawn:
// 1. [Graphic.scaleX] and [Graphic.scaleY] adjust the size of the supplied value from the preview scale to the view scale.
// 2. [Graphic.translateX] and [Graphic.translateY] adjust the coordinate from the preview's coordinate system to the view coordinate system.
// @param canvas drawing canvas
abstract fun draw(canvas: Canvas?)
// Adjusts a horizontal value of the supplied value from the preview scale to the view scale.
fun scaleX(horizontal: Float): Float {
return horizontal * mOverlay.mWidthScaleFactor
}
// Adjusts a vertical value of the supplied value from the preview scale to the view scale.
fun scaleY(vertical: Float): Float {
return vertical * mOverlay.mHeightScaleFactor
}
// Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
fun translateX(x: Int): Float {
return if (mOverlay.mFacing == LensEngine.FRONT_LENS) {
mOverlay.width - scaleX(x.toFloat())
} else {
scaleX(x.toFloat())
}
}
// Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
fun translateY(y: Int): Float {
return scaleY(y.toFloat())
}
}
// Removes all graphics from the overlay.
fun clear() {
synchronized(mLock) { mGraphics.clear() }
postInvalidate()
}
// Adds a graphic to the overlay.
fun add(graphic: Graphic) {
synchronized(mLock) { mGraphics.add(graphic) }
postInvalidate()
}
// Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
fun setCameraInfo(previewWidth: Int, previewHeight: Int, facing: Int) {
synchronized(mLock) {
mPreviewWidth = previewWidth
mPreviewHeight = previewHeight
mFacing = facing
}
postInvalidate()
}
// Draws the overlay with its associated graphic objects.
override fun onDraw(canvas: Canvas) {
super.onDraw(canvas)
synchronized(mLock) {
if (mPreviewWidth != 0 && mPreviewHeight != 0) {
mWidthScaleFactor = canvas.width.toFloat() / mPreviewWidth.toFloat()
mHeightScaleFactor = canvas.height.toFloat() / mPreviewHeight.toFloat()
}
for (graphic in mGraphics) {
graphic.draw(canvas)
}
}
}
}
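The only subtle part of the overlay is the front-camera mirror inside translateX: front-lens previews are horizontally flipped, so x coordinates must be reflected across the overlay width. Extracted as a pure function (the lens constants below are illustrative placeholders, not the real LensEngine values):

```kotlin
// Illustrative stand-ins for LensEngine.BACK_LENS / FRONT_LENS.
const val BACK_LENS = 0
const val FRONT_LENS = 1

// Maps a preview-space x coordinate into view space, mirroring for the front camera.
fun translateX(x: Int, widthScale: Float, overlayWidth: Int, facing: Int): Float {
    val scaled = x * widthScale
    return if (facing == FRONT_LENS) overlayWidth - scaled else scaled
}

fun main() {
    // Preview x=100, view twice as wide as the preview, 1080 px overlay.
    println(translateX(100, 2.0f, 1080, BACK_LENS))  // 200.0
    println(translateX(100, 2.0f, 1080, FRONT_LENS)) // 880.0
}
```

Without the mirror, drawn keypoints would track the opposite side of the screen whenever the user switches to the front camera.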
In StaticHandKeyPointAnalyse.kt we can find the business logic for static hand key point analysis.
Code:
class StaticHandKeyPointAnalyse : AppCompatActivity() {

    var analyzer: MLHandKeypointAnalyzer? = null
    var bitmap: Bitmap? = null
    var mutableBitmap: Bitmap? = null
    var mlFrame: MLFrame? = null
    var imageSelected: ImageView? = null
    var picUri: Uri? = null
    var pickButton: Button? = null
    var analyzeButton: Button? = null
    var permissions = arrayOf(
        Manifest.permission.READ_EXTERNAL_STORAGE,
        Manifest.permission.WRITE_EXTERNAL_STORAGE,
        Manifest.permission.CAMERA
    )

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_static_hand_key_point_analyse)
        pickButton = findViewById(R.id.pick_img)
        analyzeButton = findViewById(R.id.analyse_img)
        imageSelected = findViewById(R.id.selected_img)
        initialiseSettings()
        pickButton!!.setOnClickListener { pickRequiredImage() }
        analyzeButton!!.setOnClickListener { asynchronouslyStaticHandkey() }
        checkRequiredPermission()
    }

    private fun checkRequiredPermission() {
        if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
            || PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
            || PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)) {
            ActivityCompat.requestPermissions(this, permissions, 111)
        }
    }

    private fun initialiseSettings() {
        val setting = MLHandKeypointAnalyzerSetting.Factory()
            // TYPE_ALL indicates that all results are returned.
            // TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
            // TYPE_RECT_ONLY indicates that only palm information is returned.
            .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
            // Maximum number of hand regions that can be detected in an image (default: 10).
            .setMaxHandResults(1)
            .create()
        analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
    }

    private fun asynchronouslyStaticHandkey() {
        val task = analyzer!!.asyncAnalyseFrame(mlFrame)
        task.addOnSuccessListener { results ->
            // Guard against an empty result list: no hand was detected.
            val mlHandKeypoints = results.firstOrNull()
            if (mlHandKeypoints != null) {
                val canvas = Canvas(mutableBitmap!!)
                val paint = Paint()
                paint.color = Color.GREEN
                paint.style = Paint.Style.FILL
                for (mlHandKeypoint in mlHandKeypoints.getHandKeypoints()) {
                    canvas.drawCircle(mlHandKeypoint.pointX, mlHandKeypoint.pointY, 48f, paint)
                }
                imageSelected!!.setImageBitmap(mutableBitmap)
            }
            checkAnalyserForStop()
        }.addOnFailureListener {
            // Detection failure.
            checkAnalyserForStop()
        }
    }

    private fun checkAnalyserForStop() {
        analyzer?.stop()
    }

    private fun pickRequiredImage() {
        val intent = Intent()
        intent.type = "image/*"
        intent.action = Intent.ACTION_PICK
        startActivityForResult(Intent.createChooser(intent, "Select Picture"), 20)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == 20 && resultCode == RESULT_OK && null != data) {
            picUri = data.data
            imageSelected!!.setImageURI(picUri)
            imageSelected!!.invalidate()
            val drawable = imageSelected!!.drawable as BitmapDrawable
            bitmap = drawable.bitmap
            mutableBitmap = bitmap!!.copy(Bitmap.Config.ARGB_8888, true)
            mlFrame = MLFrame.fromBitmap(bitmap)
        }
    }
}
In activity_main.xml we create the UI for the home screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/btn_static"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Static Detection"
        android:textAllCaps="false"
        android:textColor="@color/black"
        android:textSize="18sp"
        app:layout_constraintBottom_toTopOf="@+id/btn_live"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <Button
        android:id="@+id/btn_live"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginBottom="150dp"
        android:text="Live Detection"
        android:textAllCaps="false"
        android:textColor="@color/black"
        android:textSize="18sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
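The layout above only declares the two buttons; the article does not show the MainActivity that wires them up. A minimal sketch, assuming the activity class names used in the tools:context attributes of the other layouts, could look like this:

```kotlin
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        // Launch the static-image analysis screen.
        findViewById<Button>(R.id.btn_static).setOnClickListener {
            startActivity(Intent(this, StaticHandKeyPointAnalyse::class.java))
        }
        // Launch the live camera-stream analysis screen.
        findViewById<Button>(R.id.btn_live).setOnClickListener {
            startActivity(Intent(this, LiveHandKeyPointAnalyse::class.java))
        }
    }
}
```

Both activities would also need to be declared in AndroidManifest.xml for the navigation to work.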
In activity_live_hand_key_point_analyse.xml we create the UI for the live detection screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".LiveHandKeyPointAnalyse">

    <com.example.mlhandgesturesample.LensEnginePreview
        android:id="@+id/hand_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:ignore="MissingClass">

        <com.example.mlhandgesturesample.GraphicOverlay
            android:id="@+id/hand_overlay"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

    </com.example.mlhandgesturesample.LensEnginePreview>

    <!-- Front/back camera switch; a ToggleButton so that textOn/textOff apply. -->
    <ToggleButton
        android:id="@+id/handswitch"
        android:layout_width="35dp"
        android:layout_height="35dp"
        android:layout_marginBottom="35dp"
        android:background="@drawable/front_back_switch"
        android:textOff=""
        android:textOn=""
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
In activity_static_hand_key_point_analyse.xml we create the UI for the static detection screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".StaticHandKeyPointAnalyse">

    <com.google.android.material.button.MaterialButton
        android:id="@+id/pick_img"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Pick Image"
        android:textAllCaps="false"
        android:textColor="@android:color/black"
        android:textSize="18sp"
        app:layout_constraintBottom_toTopOf="@id/selected_img"
        app:layout_constraintLeft_toLeftOf="@id/selected_img"
        app:layout_constraintRight_toRightOf="@id/selected_img"
        app:layout_constraintTop_toTopOf="parent" />

    <ImageView
        android:id="@+id/selected_img"
        android:layout_width="350dp"
        android:layout_height="350dp"
        android:visibility="visible"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <com.google.android.material.button.MaterialButton
        android:id="@+id/analyse_img"
        style="@style/Widget.MaterialComponents.Button.MyTextButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Analyse"
        android:textAllCaps="false"
        android:textColor="@android:color/black"
        android:textSize="18sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="@id/selected_img"
        app:layout_constraintRight_toRightOf="@id/selected_img"
        app:layout_constraintTop_toBottomOf="@id/selected_img" />

</androidx.constraintlayout.widget.ConstraintLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
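Regarding the dependencies in tip 5, the app-level build.gradle typically declares the hand keypoint SDK and its model package. The artifact coordinates below follow the HMS ML Kit naming convention, but the version number is an assumption and should be checked against the latest release notes:

```groovy
dependencies {
    // HMS ML Kit hand keypoint detection SDK (version shown is an assumption; use the latest).
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
    // Hand keypoint detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
```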
Conclusion
In this article, we have learned how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability can detect 21 hand keypoints (including fingertips, knuckles, and the wrist) and return their positions. The hand gesture recognition capability can detect and return the rectangular areas of the hand in images and videos, along with the type and confidence of a gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
I hope you found this article helpful; if so, please leave likes and comments.
Reference
ML Kit – Hand Gesture Recognition
ML Kit – Training Video
