[GUIDE] For Flash Developers - C++ or Other Android Development Languages

Though most Android development forums advocate native applications, those who are more fluent with Flash shouldn't give up hope just yet. Android's support for the Flash environment isn't too shabby, and it's far better than what runs on Apple devices (trust us, we tried). Flash provides some useful tools for quickly mocking up concepts and/or visuals.
To get you started, check out Adobe's guide to AIR on Android.
But of course, nothing runs better or faster than a native app.
Happy Coding!

How to intercept device sleep and wake events in Adobe AIR
If you are developing your app on Adobe AIR, you will definitely come across the need to decide what your app should do when the user closes it and how it will resume.
The following short guide will show you how to do this easily.
Add the following lines to attach the function calls to the respective events:
import flash.desktop.NativeApplication;
import flash.events.Event;

NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, handleDeactivate, false, 0, true);
NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE, handleActivate, false, 0, true);

function handleDeactivate(event:Event):void {
    // Called when the app is sent to the background (sleeps).
}

function handleActivate(event:Event):void {
    // Called when the app resumes.
}

How to handle back button key press
To control the behavior of the app when the back button is pressed, simply attach an event listener as shown below:
import flash.desktop.NativeApplication;
import flash.events.KeyboardEvent;
import flash.ui.Keyboard;

NativeApplication.nativeApplication.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown, false, 0, true);

protected function onKeyDown(event:KeyboardEvent):void {
    if (event.keyCode == Keyboard.BACK) {
        event.preventDefault();
        event.stopImmediatePropagation();
        // Handle the back button press here.
    }
}

How to enable your Adobe AIR app to be moved to the external SD card
By default, Adobe AIR apps cannot be moved to external storage. Many users want to be able to move apps there because of limited storage space on their phones, and because some apps take up a lot of space.
To allow this behavior, you need to manually change the setting in the application.xml file under the Android settings. Simply add the android:installLocation="auto" attribute shown below and voilà, Android will now allow your app to be moved between internal and external storage.
<android>
<manifestAdditions>
<![CDATA[<manifest android:installLocation="auto">
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.DISABLE_KEYGUARD"/>
<uses-permission android:name="android.permission.WAKE_LOCK"/>
</manifest>]]>
</manifestAdditions>
</android>

I appreciate your work, man.


[DEV] Accessing IR interface

I just started developing for my new Sony tablet and was curious whether I can send IR codes with my own app instead of using the Sony app. Here is what I've achieved so far. Maybe someone finds this information useful and maybe we can provide further information based on this.
The service kind of works: the callbacks get called, I can read raw commands using the learnKey() method, and I can get the keys for specific devices using getKeyList(). Sending IR patterns or key codes to devices, however, does not seem to work quite right, although the callback reports status OK after sending.
Sony created a DataProvider to share data between the app and the service. Thanks to this, I queried the URI content://com.sony.nfx.app.irremote.provider/learnt and found all the custom learnt codes I had added via the Sony app. But even using this exact data doesn't seem to do anything when sent to the device.
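If you want to reproduce that query from your own app, here is a minimal sketch (the URI comes straight from the post above; the column names are unknown to me, so this simply dumps every column of every row to logcat, and it assumes it runs inside an Activity that provides getContentResolver()):
Code:
import android.database.Cursor;
import android.net.Uri;
import android.util.Log;

Uri uri = Uri.parse("content://com.sony.nfx.app.irremote.provider/learnt");
Cursor cursor = getContentResolver().query(uri, null, null, null, null);
if (cursor != null) {
    while (cursor.moveToNext()) {
        // Dump all columns of each learnt-code row; getString may not
        // render blob columns meaningfully, but it reveals the schema.
        for (int i = 0; i < cursor.getColumnCount(); i++) {
            Log.d("IRDump", cursor.getColumnName(i) + " = " + cursor.getString(i));
        }
    }
    cursor.close();
}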
Access to the IR service is restricted by permission and by process, so our AndroidManifest.xml should look like this:
Code:
<uses-permission android:name="com.sony.nfx.app.irremoteservice.permission.EXECUTE_SERVICE"/>
<application ... android:process=":remote">
...
</application>
Required AIDL files:
Code:
package com.sony.nfx.app.irremoteservice;
import com.sony.nfx.app.irremoteservice.IUEIControlServiceCallback;
interface IUEIControlService {
int sendKey(int i, int j, int k, byte byte0);
int sendSonyCode(int i, int j, byte byte0, int k, byte byte1);
int sendIRPattern(int i, int j, byte byte0, in byte[] pattern);
int sendStopIRSend(int i);
int getKeyList(int i, int j);
int learnKey(int i);
void registerCallback(IUEIControlServiceCallback callback);
void unregisterCallback(IUEIControlServiceCallback callback);
}
Code:
package com.sony.nfx.app.irremoteservice;
interface IUEIControlServiceCallback {
void onCommandComplete(int i, int j);
void onLearntKey(int i, in byte[] abyte0);
void onGetKeyList(int i, int j, in int[] ai);
}
Basic sample code:
Code:
package ir.remote.android;
import java.util.Arrays;
import android.app.Activity;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.Bundle;
import android.os.IBinder;
import android.os.RemoteException;
import android.util.Log;
import com.sony.nfx.app.irremoteservice.IUEIControlService;
import com.sony.nfx.app.irremoteservice.IUEIControlServiceCallback;
public class RemoteTest extends Activity implements ServiceConnection {
private static final String LOG_TAG = "RemoteTest";
private ServiceBinder serviceBinder = new ServiceBinder();
private IUEIControlService remoteService = null;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
String serviceAction = IUEIControlService.class.getName();
Intent serviceIntent = new Intent(serviceAction);
bindService(serviceIntent, this, Context.BIND_AUTO_CREATE);
}
public void onServiceConnected(ComponentName name, IBinder service) {
try {
Log.d(LOG_TAG, "Service connected!");
remoteService = IUEIControlService.Stub.asInterface(service);
remoteService.registerCallback(serviceBinder);
} catch (Exception e) {
Log.e(LOG_TAG, "Error connecting to service!", e);
}
}
public void onServiceDisconnected(ComponentName name) {
Log.d(LOG_TAG, "Service disconnected!");
remoteService = null;
}
public static class ServiceBinder extends IUEIControlServiceCallback.Stub {
public void onCommandComplete(int i, int j) throws RemoteException {
Log.d(LOG_TAG, "onCommandComplete(" + i + ", " + j + ")");
}
public void onGetKeyList(int i, int j, int[] ai) throws RemoteException {
Log.d(LOG_TAG, "onGetKeyList(" + i + ", " + j + ", " + Arrays.toString(ai) + ")");
}
public void onLearntKey(int key, byte[] value) throws RemoteException {
Log.d(LOG_TAG, "onLearntKey(" + key + ", " + Arrays.toString(value) + ")");
}
}
}
Good start
I'm really glad to see someone working on accessing the IR interface. The IR is the primary differentiator that made me decide to buy the Sony Tablet S. I haven't got it yet though, getting it on Monday evening.
I'm only a noob at development, but have started learning Android dev and the start you've made is bound to help. So thanks a lot.
I look forward to seeing your progress.
Luke
Hello Peacemaker2000,
I'm quite new to programming on Android (and in Java for that matter), so sorry for the noob questions here.
I just received my Sony tablet last weekend and I think building a custom remote to my needs would be a great hands-on exercise.
Also, I'm a bit disappointed by the native app, which supports neither macros nor look customization.
I have tried to start working with all your samples and code to see what I can get to work, but my first concern is about the AIDL files: how do you import com.sony.nfx.app.irremoteservice.IUEIControlServiceCallback?
The only file that I have found on my tablet, in a "framework" folder somewhere, is com.sony.nfx.app.irremoteserviceif.jar. I have tried to reference it by adding it to the build path, but that is as far as I have gotten for now.
Any more help or explanation would be greatly appreciated!
Thanks,
Tom.
Good Job Peacemaker,
The main reason I bought it is the universal remote control (of course I'm an Android fan also). If you consider that you have to pay $2,000 for a Philips Pronto remote, Sony made a good start. I am waiting for customizable remote software so I'll be able to add new buttons, keys, etc.
Thank you,
Chris
+1 for me. A custom laid-out remote with macros and such: I would buy that app.
I ASSUME you have signed up for the Sony developer kit they just released. I downloaded it but don't have the time to get into development currently.
Alan
I did subscribe and download the SDK, but I only found samples/references for the large-screen/dual-screen layouts of the two Sony tablets, nothing related to the IR.
I did look into doing something with the IR blaster but couldn't find anything about how to access it. I might start digging around to build the app I was thinking of in my time off over Christmas.
Hello everybody, and I wish you a Happy New Year!
Any news about IR?
I wonder who will make new remote control software that is more customizable, with macros, buttons, etc.
Chris
Athens GREECE
I agree with everyone so far about wanting a better app than the remote control one supplied by Sony. I tried getting it to learn a macro from my PDA's universal remote control app, but it refused to learn it. It was OK with single key presses; indeed, it is very sensitive.
I bought the Tablet S for my wife, simply because of the eBook reader capability amongst other things, but also as a trial for a new replacement universal remote control. This is the first device for ages to have an IR facility built in. I use an old HP Jornada, but the screen is misbehaving after around 10 years of hard labour as a universal remote control. I have found the ability to program my own designed GUI for my numerous cinema, TV and other devices to be excellent. The Tablet S app is OK in that it has enough keys to cover most remotes. The facility to re-label the keys and choose which of the icons map to which keys is good. It is slightly limited, as most of the icons on the left-hand panel have fixed labels. Therefore it could be improved.
I would support anyone writing such an app. I would even pay for it! Now that's novel..... Good luck developers.
Thank you so much Peacemaker2000 for sharing your code and ideas.
It works like a charm for getting the keys from a remote control and using the data in your own app. I would really like to know what kind of parameters you have to send to get a full key list for a TV.
Right now I'm just saving the key data from my remote control and don't have any clue how to retrieve all the possible keys my TV understands. It would be great if you, or anybody else who knows this, could share it with us.
If anybody would like a functional piece of code from my current prototype, just let me know; right now I have no problems controlling my Sony Bravia KDL-32W4000.
Perhaps a Mystery Gift app could be developed for use with Pokémon Gold on a Game Boy Color. Those were the days.
Seems to be quite easy to get a keylist from the actual service:
Code:
int REMOTE_TYPE_ID_KDL_32W4000 = 508;
remoteService.getKeyList(0, REMOTE_TYPE_ID_KDL_32W4000);
You'll get the list in IUEIControlServiceCallback.Stub.onGetKeyList() and execute the appropriate key with:
Code:
int REMOTE_TYPE_ID_KDL_32W4000 = 508;
int ON_OFF_KEY = 18;
remoteService.sendKey(0, REMOTE_TYPE_ID_KDL_32W4000, ON_OFF_KEY, (byte) 0);
remoteService.sendStopIRSend(0);
There you go, TV goes on and off.
I will donate to anyone developing a better IR control app.
It would put all those smart LCD-screen remotes selling for $1,000 and more out of existence.
There is an example of a really good IR remote control application, extremely feature-rich: RemoteControl II from http://wincesoft.de/html/remotecontrol_ii.html.
It is only available for Windows CE/Mobile devices.
open source project
Hi all, it would be nice to start an open source project, so everyone can collaborate, myself included.
david8 said:
Hi all, it would be nice to start an open source project, so everyone can collaborate, myself included.
I've just whipped up a quick Git repo now. Here's the link: https://github.com/agc93/Tablet-S-IR-Control
It'll only be empty atm, because I'm a bit busy at work, but if anyone wants to give it a try, feel free!
agc93 said:
I've just whipped up a quick Git repo now. Here's the link: https://github.com/agc93/Tablet-S-IR-Control
It'll only be empty atm, because I'm a bit busy at work, but if anyone wants to give it a try, feel free!
Great! It's a good beginning.
Hi,
I'm a beginner Android developer.
I recently bought a Sony Tablet S and am trying to create my own IR remote app. Why?
Sony's default IR remote app doesn't provide macros.
Here are the main features I'm thinking about:
A dialog box pops up automatically when the tablet is removed from its dock and shaken (this is managed by a local service).
This dialog provides two options: 1) default usage of the tablet, or 2) launch my own IR remote app.
The custom app provides only two macros: 1) listen to music, or 2) watch TV.
The main challenge I'm facing (as you can easily imagine) is the IR interface implementation.
There's not much sample code on the web, probably because the Sony Tablet S is the only device on the market embedding an IR interface.
Here are my questions:
Has anyone successfully worked with this technology?
Is it necessary to root the device before coding against the IR interface?
Does Sony provide a specific library for accessing the IR interface?
I suppose that each device I want to remote control (a Denon AVR-2311 amplifier and a DVico TViX 6600n for music listening) has its own infrared hex codes. How do I use them in my classes?
Waiting for your help
Regards
I for one am very interested in any application that comes close to a Philips Pronto in terms of functionality. The ability to design your own buttons and layouts, as well as have macros, is all that is missing from the Sony software, and I'm sure it's possible.
My Pronto is over 10 years old now, and one of the reasons for purchasing the Tablet S was as a replacement.
Have a look at the Pronto forums on remotecentral.com; there's a vast array of downloadable layouts for ProntoEdit, and hence hex codes which can be extracted. Many include discrete codes that can't be learned from remotes and are invaluable for macros.
You can also obtain the original editing software and load layouts onto a virtual Pronto. If you can produce an app anything like that, then I think you're onto a winner.
Apologies, I'm unable to assist on the coding front as I'm a bit of a technonumpty.
Another very confused developer here, trying to work out how Sony's bizarre remote calls function. To be honest, I'm not having much luck, and Sony won't help.
If anyone knows how to get it going, please share, because that's all that's stopping me at the moment.
I already posted a working example for sending an IR signal with the tablet. ;-)
I'm also working on my own app with macro capabilities, custom button layouts (own text, pictures, size, placement) and even custom gestures. I think I can put out a demo in about two weeks, so be a little more patient. ;-)
Of course the app will only work with the Tablet S.

[APP] WearShell

Having had an LG G Watch for a while, I thought it would be interesting to run code directly on the watch without having to create an APK. Sometimes I just want to run some code snippets on the watch and view the result instantly.
Creating a complete project, compiling and deploying the APK on the Wear device is quite time consuming and somewhat annoying.
I tried to write an app to execute BeanShell code directly on the Wear device. This is all experimental and the possibilities are quite limited compared to a regular app, but for executing a few code snippets it seemed like a good idea.
The result is WearShell, an app that consists of a mobile part and a Wear part. The mobile part moves the code to the smartwatch for execution, collects the result, and passes it to the calling application.
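For example, a trivial BeanShell snippet like the following (a made-up example; any valid BeanShell works) can be typed on the phone, shipped to the watch for evaluation, and its string result returned:
Code:
import android.os.Build;
return "Hello from " + Build.MODEL;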
So I hope owners of an Android Wear smart watch have fun with the app and find it as interesting as I did to explore things from the perspective of a watch.
The complete blog post can be found here: Wear Shell - Exploring Android Wear
Now also available on Google Play.
Current Version
0.5.0
Download
WearShell APK
Extensions
Additional BeanShell Commands
Screenshot
Does this run Java code only?
I had an idea of displaying WebView info, like some NFL scores and stats. I think that'd be pretty sweet.
After reading, I think I found my answers. Still pretty cool.
Thank you so much. You don't understand what a pain in the ass it is to have to get on my laptop every time I want to execute a simple command on the watch.
I just realized how great this project is, yet for a long time it has attracted little attention... The barrier for most readers may be the complicated Java source code; a lot of people would rather choose the easier Python via SL4A.
Well, I'm just starting out as a newbie. My little suggestion is to add some push notification examples (the main function of a watch, isn't it?) to attract more people.
For example, I wrote a toast:
Code:
import android.widget.Toast;
Toast.makeText(context, "message", Toast.LENGTH_SHORT).show();
I am also wondering if it is possible to pop up or send an image to the watch.
I need some time to understand the intent. But honestly, I would appreciate it if anyone could tell me how to call it through "am" or a Tasker intent.
Besides, is there an option to hide the running popup?
After half a day of trying, I found it hard to go on. The main reason is that there is no tutorial explaining which commands and libraries are available and which are not.
For example, on the desktop version I can use classBrowser(), but it is no longer available on Wear (apparently because it needs a window, but what about the others?).
For example, I can import android.widget.Toast but cannot import android.support.v4.app.NotificationCompat.WearableExtender. Besides, the error output is not so user-friendly: the error only shows up later, at call time.
As a beginner, I should not have to resort to the ineffective approach of trying every command/library just to see if there is an error.
Any resources that could help?
Android Wear supports almost the complete Android API.
Here are the exceptions:
android.webkit
android.print
android.app.backup
android.appwidget
android.hardware.usb
The Wear app does not include any support library, so android.support.v4 classes are not available.
Nevertheless you should be able to use notifications the same way you do on a normal Android device.
Here is an example:
Code:
import android.app.Notification;
import android.content.Context;

noti = new Notification.Builder(context)
.setContentTitle("Title")
.setContentText("Text")
.setSmallIcon(com.android.internal.R.drawable.emo_im_cool)
.build();

notificationManager = context.getSystemService(Context.NOTIFICATION_SERVICE);
notificationManager.notify(3, noti);
I can't get the exec intent to work. I tried Tasker, am, and a custom app, and none of them seem to work.
Code:
Intent intent = new Intent("de.fun2code.android.wear.shell.EXEC");
intent.putExtra("bsh", "source(\"/sdcard/alwaysoff.bsh\");");
sendBroadcast(intent);
(The file is there; if I execute the exact same code via the web interface, it works.)
Any ideas?
I've noticed that WearShell only works when my phone's screen is on and unlocked. Thanks to qingtest and matejdro I have Tasker toggle theater mode in certain conditions, but it doesn't work if my phone's screen is off or it's locked. Secure Settings can allow Tasker to turn on the screen, but I haven't found anything to swipe at the lock screen to unlock it. That being the case, it'd be nice if WearShell could send the command while the phone is locked.
@joschi70 I followed the instructions in this topic http://forum.xda-developers.com/showthread.php?t=3098425 and I am using Tasker to send an intent. However, I noticed that sometimes the command is not sent to the watch if the phone is off, and when I turn the phone on I see the message "Communicating with Wear Device".
Posted via Tapatalk
The Result Intent requires the phone to be unlocked.
I'm currently working on a new version that supports Broadcasts which should hopefully solve this issue.
The new version will support background service, an option to start the service on boot and an enhanced web interface.
Hope I can finish the new version in a couple of weeks.
WearShell 0.4 is available for download.
joschi70 said:
WearShell 0.4 is available for download.
Where can I find the changelog? And does the result intent still require the phone to be unlocked?
Posted via Tapatalk
The new version supports broadcasts, so the phone does not need to be unlocked.
The below info is also available inside the web interface.
Sending a Broadcast
Request
Action: de.fun2code.android.wear.shell.EXEC
Request String Extra: bsh
RequestCode Integer Extra: requestCode (optional)
Response
Intent Filter: de.fun2code.android.wear.shell.EXEC_RESULT
RequestCode Integer Extra: requestCode
Response String Extra: result
Hope this is working as expected.
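For anyone triggering this from their own Android app rather than Tasker, here is a minimal sketch of the round trip, using only the actions and extras listed above (it assumes it runs inside an Activity or Service that provides a Context):
Code:
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.util.Log;

// Listen for the result broadcast from WearShell.
BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context ctx, Intent intent) {
        int requestCode = intent.getIntExtra("requestCode", -1);
        String result = intent.getStringExtra("result");
        Log.d("WearShell", "Request " + requestCode + " returned: " + result);
    }
};
registerReceiver(receiver, new IntentFilter("de.fun2code.android.wear.shell.EXEC_RESULT"));

// Send BeanShell code to be executed on the watch.
Intent exec = new Intent("de.fun2code.android.wear.shell.EXEC");
exec.putExtra("bsh", "return 1 + 1;");
exec.putExtra("requestCode", 42); // optional
sendBroadcast(exec);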
joschi70 said:
The new version supports broadcasts, so the phone does not need to be unlocked.
The below info is also available inside the web interface.
Sending a Broadcast
Request
Action: de.fun2code.android.wear.shell.EXEC
Request String Extra: bsh
RequestCode Integer Extra: requestCode (optional)
Response
Intent Filter: de.fun2code.android.wear.shell.EXEC_RESULT
RequestCode Integer Extra: requestCode
Response String Extra: result
Hope this is working as expected.
Thanks a lot. It is working fine and serves my purpose.
joschi70 said:
The new version supports broadcasts, so the phone does not need to be unlocked.
The below info is also available inside the web interface.
Sending a Broadcast
Request
Action: de.fun2code.android.wear.shell.EXEC
Request String Extra: bsh
RequestCode Integer Extra: requestCode (optional)
Response
Intent Filter: de.fun2code.android.wear.shell.EXEC_RESULT
RequestCode Integer Extra: requestCode
Response String Extra: result
Hope this is working as expected.
I can't seem to receive the result intent inside Tasker. I have Tasker sending a piece of inline BeanShell to the watch, but I can't seem to get any response. That said, my BeanShell script doesn't return anything; could that be the reason? Or should I still be receiving something as a result? I have attempted to clone the %result variable that the user guide says should be created to %RESULT (global), but it's empty when I check.
EDIT: My apologies, I answered my own question and got the result to appear in a test task.
By the way, great app! You solved my problem with not being able to automate theater mode.
Hello, is it possible to send a file from the smartphone to the smartwatch?
Or to play a sound file from the smartphone on the smartwatch?
Thanks
Sent from my SM-N9005 using Tapatalk
Cool! I will try this sometime.
How do you send a browser intent to WearShell from Tasker?
First off, this app has so much potential and it's so cool. I admit I don't have much skill with Java or scripting, and I've looked all over the Internet for a tutorial on how to do this, but I can't find one. All I want to know is how to send a browser intent to WearShell from Tasker. I put the UC Mini browser on my Huawei Watch and it is working, and I can run Google searches from a Tasker APK that I put on the watch, but that's a little slow and consumes a lot of battery. If I knew how to send an intent from Tasker to WearShell, I think that would be faster and save battery. I have AutoVoice set up perfectly to intercept Google Now commands on my Galaxy Note 3, and the ability to feed commands back to WearShell would be so awesome. I also think it is so cool that this app allows commands to be sent to my watch without debugging being enabled.
The BeanShell code for opening a URL should look like this:
Code:
import android.content.Intent;
import android.net.Uri;
uri = "http://xda-developers.com";
intent = new Intent(Intent.ACTION_VIEW, Uri.parse(uri));
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(intent);
joschi70 said:
The BeanShell code for opening an URL should look like this:
Code:
import android.content.Intent;
import android.net.Uri;
uri = "http://xda-developers.com";
intent = new Intent(Intent.ACTION_VIEW, Uri.parse(uri));
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(intent);
I appreciate the help. I put this into the "data" section of Tasker's Send Intent action, as well as the "extra" section. It did open the WearShell app on my phone showing the server is running, though. Thanks for helping someone with no coding knowledge. I'll keep trying variations of this.
EDIT: Got it working by just putting "bsh:" at the beginning of that code in the "Extra" section. This is awesome and so fast, and UC Mini is so quick over Bluetooth. Wow, my projects hardly ever exceed my expectations. Thanks, WearShell!

How do I integrate HMS to my game using HMS Unity Plugin 2.0? — Part 2

Introduction
If you are not coming to this article from Part 1 of the series, you can read it here. This is the second part of our HMS Unity Plugin 2.0 integration guide. As you know, I wanted to cover a part of GameService here because it requires a bit more work, not because of the plugin but because of its intrinsic nature. Adding products, managing behaviors, configuring achievements, etc. take a bit more time on the AGC side. I will try to give as many details as I can in this article, but since some of the topics are not directly related to the plugin, you may need to research further the tasks that I do not cover extensively here.
I will be talking about just the Achievements part of GameService. However, it also has SaveGame and Leaderboard capabilities. You can read more about them in other articles and, believe me, they are as easy to integrate as the ones I cover here, thanks to the HMS Unity Plugin 2.0.
This article also assumes that you have completed the steps in Part 1, at least the essential ones. GameService depends on Account Kit, so you must enable Account Kit as well from the Kit Settings menu, even if you will not use it directly in your game. (The plugin should automatically tick it for you once you tick GameService.)
You do not have to integrate other kits to integrate these two, but some AGC-side requirements are standard for all kits. I will talk about the GameService-specific parts here, and about IAP (in-app purchases) in Part 3.
Without further ado, let’s get started.
GameService
After enabling GameService in the HMS Settings (aka Kit Settings) menu, a "Game Service" tab will automatically be added to the settings menu, as can be seen in the screenshots below. Now I will walk you through GameService step by step for those who want a bit of additional information.
Sign-In Function Implementation
As I warned in Part 1, using GameService requires sign-in. This must be done either through Account Kit by yourself, or through GameService.
The easiest way to do this is to just check the box at the bottom of the GameService tab. When you tick "Initialize On Start", your game will try to log the user in immediately whenever it starts, and users will see the "Welcome *username*" greeting message right away if they have logged in at least once before.
On the first-ever login, users will be directed to the Huawei login page automatically, at the very first opening of your app. If you choose this route, you do not even have to implement Account Kit. That's it: login is done and you are ready to continue with just one click.
If you opt not to check the box because this is not a desirable in-app behavior for you, then you must initialize GameService manually and use Account Kit in your own logic to log the user in.
It requires a bit of code but is not hard at all. Let's assume that you sign in within the Start() function of your app using Account Kit. What you have to do is implement the SignInSuccess callback. If login is successful, the success callback will be executed automatically, and in there you must initialize GameService with just one line of code.
void Start()
{
HMSAccountManager.Instance.SignIn(); //sign the user in
HMSAccountManager.Instance.OnSignInSuccess = OnSignInSuccess; //register the callback in Start()
}
private void OnSignInSuccess(AuthAccount obj)
{
HMSGameManager.Instance.Init();
}
That’s it for the manual control. Now, you control where you want to sign your users in and also initialize the GameManager so that you can use Achievements, Leaderboards and SaveGame features.
I suggest that whichever way you choose, you do this at the first scene of the app (like a main menu etc.) and not inside the game itself, so users will not be bothered by sign-in process in-game.
Achievements​I want to add achievements to my app so when the user has done certain actions, I will reward them by unlocking some achievements. There are mainly two actions required to be done by you, the developer: First, add achievements to your app in AGC and get their ID. And second, enter the IDs to Achievements part of the plugin and implement in-game logic. That means, you need to determine where you will grant your users an achievement in your game. What kind of actions are needed to be carried out to get them?
In my case, this process is very simple. I have “Beginner, Medium and Master Scorer” achievements defined and I grant them whenever the user completes a certain score in my game. Since my game is very simple, the score is the utmost indicator of a “skilled” player, so I thought, why not?
First, let’s go to AGC (AppGallery Connect) together to add some achievements. You can go to AGC by using this link. Sign in to your developer account, click “My apps” and choose your game from the list. You will be directed to “Distribute” tab. From the left-upper bar, choose “Operate” tab instead. There, you will have “Product Management” tab opened at first from the left navigation menu, which I will use it for IAP later. Now, move to the Achievements tab to add some achievements to your game. Click Create on right to create an achievement.
You enter a name and a description to remember what this achievement is for. You can leave “incremental achievement” unchecked because I do not need it for this simple game. Also for the “revealed, hidden” option, what I did was to make the BeginnerScorer achievement revealed and the other two are hidden. So user will see them in achievements list but will not know what they are before achieving the previous achievement. You can configure them however you like. Make sure they are fitting to your game content, so users will try to play longer to achieve them. Also, I set the same logo for every one of them but I suggest you design different icons for each and every one of your achievements.
After you are done, it should look like something like this:
Do not release your achievements so you can test them. If you release them, they will be checked by AGC and be approved if they are proper. However, then, you cannot reset their progress even if you did not publish your game yet. Thus, to make sure that the development side works correctly, I will leave them as it is. Whenever you achieve them in your own game testing, you can just reset the progress and keep testing if you want to change something you do not like.
Now that I am done creating them, you can click “Obtain Resources” above and copy their IDs one by one. Then, paste them to our HMS Settings menu. After you copied them all, click “Create Constant Classes”, so HMS Unity Plugin can create a constant class for you.
The constant class will be called HMSAchievementConstants. Now let’s see how can I use them. I will need the “state”s of these achievements for my game implementation because I will check the states to grant the achievements one by one. Imagine a scenario where BeginnerScorer needs 15 points and MediumScorer needs 25 points to unlock. If the user surpasses 25 points in the first game, then the game would grant them consecutively in one run. This is not what I want, so I will access the achievement states and that requires Achievement objects. You do not have to use Achievement objects, you can just use the constant class to retrieve the IDs and immediately reveal and/or unlock them.
public void TakeDamage(int damageAmount)
{
//...
if (health <= 0)
{
//...
//Player is dead
losePanel.SetActive(true);
HMSAchievementsManager.Instance.GetAchievementsList();
}
}
Remember my TakeDamage function shown above. Since I will be unlocking achievements when the game is done, I call GetAchievementsList() after the player dies. This function is necessary because it has several callbacks that I will use. You should decide when to call this function depending on your game logic and code structure. As I always do, I describe my structure in detail so you can map out where to put yours.
void Start()
{
//...
HMSAchievementsManager.Instance.OnGetAchievementsListSuccess = OnGetAchievemenListSuccess;
HMSAchievementsManager.Instance.OnGetAchievementsListFailure = OnGetAchievementsListFailure; //optional
}
In the Start() function of wherever you will call the GetAchievementsList() function, do as above. Basically, you are registering these callbacks, so when getting the achievements list succeeds, the OnGetAchievemenListSuccess handler that you will write is triggered. The failure callback is optional; you can track the errors there and add a user warning if you like.
using System.Linq;
private void OnGetAchievemenListSuccess(IList<Achievement> achievementList)
{
//Implement your own achievement system here...
//Achievement beginnerScorer = achievementList[3]; -> Same thing as the line below
Achievement beginnerScorer = achievementList.First(ach => ach.Id == HMSAchievementConstants.BeginnerScorer); //Score of 15 is needed
Achievement mediumScorer = achievementList[4]; //Score of 25 is needed
Achievement masterScorer = achievementList[5]; //Score of 50 is needed
if (score >= 15 && beginnerScorer.State != 3)
{
HMSAchievementsManager.Instance.UnlockAchievement(beginnerScorer.Id);
//HMSAchievementsManager.Instance.UnlockAchievement(HMSAchievementConstants.BeginnerScorer); -> same as above
HMSAchievementsManager.Instance.RevealAchievement(mediumScorer.Id);
}
else if (score >= 25 && beginnerScorer.State == 3 && mediumScorer.State != 3)
{
HMSAchievementsManager.Instance.UnlockAchievement(mediumScorer.Id);
HMSAchievementsManager.Instance.RevealAchievement(masterScorer.Id);
}
else if (score >= 50 && mediumScorer.State == 3 && masterScorer.State != 3)
{
HMSAchievementsManager.Instance.UnlockAchievement(masterScorer.Id);
}
}
private void OnGetAchievementsListFailure(HMSException obj)
{
Debug.Log("OnGetAchievementsListFailure with code: " + obj.ErrorCode);
}
Let me explain the code above. It may look a bit complicated, but it is not hard to understand. Since I registered my callbacks, I need to implement them now. You need to implement this callback yourself so users can unlock achievements.
As I said, since I need the states, I use objects of the Achievement class. Normally, if I did not care about the states, I would not even need them. I would just do:
HMSAchievementsManager.Instance.UnlockAchievement(HMSAchievementConstants.BeginnerScorer);
So, if you do not need the states or other properties of the Achievement class, you can do the same. Your development cost is much lower this way, thanks to the plugin. As you can see, you do not even need to copy the long IDs to wherever you want to use them; you can just call the constants class and use the IDs by the names you gave them.
In the following part of the code, I get my achievements one by one from the callback parameter. A list is already returned to me, and I can pick what I want. Since I previously added 3 more achievements that I did not show you, my ordinal numbers start from 4. (You can check the AGC console screenshot above.)
Achievement beginnerScorer = achievementList.First(ach => ach.Id == HMSAchievementConstants.BeginnerScorer); //15 score is needed
Achievement mediumScorer = achievementList[4]; //25 score is needed
What I do is access index 3 (ordinal 4 minus 1) to get my beginner achievement. You can always match the indices of the achievements to the AGC console ordinals. There is also another way: if you import System.Linq, you can use the First() function to get the achievements without using index numbers, as shown above. This is just to provide some alternatives for you.
In the rest of the code, I check the states, and if an achievement is not unlocked yet (and the previous one surely is), I unlock it. Since I made the other two achievements hidden, I also reveal each one when the user unlocks the previous achievement. It is all under my control, so you can code your own logic however you like.
Also notice that I use the instance of HMSAchievementsManager when revealing and unlocking achievements. No further code is required, because the plugin handles the other cumbersome processes for you.
Achievements are done. You can see how I have become the master of my own game.
One little thing is left, though. Users should be able to see what achievements exist, even the hidden ones (which will be shown as hidden).
AppGallery already provides an interface for this, so if you want this functionality you can just call a one-line function.
public void ShowAchievements()
{
HMSAchievementsManager.Instance.ShowAchievements();
}
Since I use a button click to call this function, I put the code in another local function. Depending on your requirements, you can call it directly.
Tips & Tricks​
There are certain other callbacks related to the kits you use through the plugin. I did not talk about them because they were irrelevant to my use case. You can always check them via IntelliSense suggestions while coding; it should suggest the available callbacks after you type Instance.
Conclusion
I have integrated simple achievements into my game so that users will spend more time in it. You can adjust the details I provided to your use case and devise a scenario that works for you.
I hope this article has been helpful. You can always ask questions below if you have anything unanswered in your mind.
The only remaining kit is IAP, and it will be talked about in Part 3, here.
See you there!
References​
HMS Unity Plugin 2.0 Branch (Github Page)
Documentation of every single kit in Huawei Docs (Links are present in the GitHub readme)

How do I integrate HMS to my game using HMS Unity Plugin 2.0? — Part 4

Introduction
Before I begin: if you are not coming to this article from Part 3, you can read it here. For Part 2 of the series, you can read it here. If you have not seen Part 1 either, you can click here. This is the fourth part of our HMS Unity Plugin 2.0 integration guide. This time I will be talking about the other features of GameService: Leaderboards and SaveGame.
Normally, this part of the series was not planned; however, I thought that developers interested in the other two parts of GameService might be left without a guide. Thus, I am adding this fourth part. I will be using a different game than in the other three parts, but I will try to be as helpful and as guiding as I can in this article as well, so you can adapt these two features wherever you want.
Small Warning Before We Proceed
I will show the AGC-side steps as much as I can, but this article also assumes that you have completed the Part 1 app/project creation, have an app running in contact with AGC, and have the plugin ready to use. (You can just enable Banner Ads and tick test ads to check that the plugin is working.) Also, for the tests, make sure your account is registered as a sandbox test account. Details can be found in the docs link if you have not done this yet.
My Game​
As I said, I am using a different game for this part, but again a very simple hyper-casual one. You have a rock, and five rocks in total at the beginning of the game. You throw the rock along a projected trajectory to hit balloons and score points. Since the balloon generation and speed are determined at random, it is not as easy as it looks, but the logic is very simple. Its name is "Hit The Target".
GameService — Leaderboards
Leaderboards let players compete and see how they rank in comparison to others. As with achievements, Huawei has its own UI to help you set up the leaderboard system. All you have to do is make sure that GameService is enabled in AppGallery Connect (aka AGC), create a leaderboard with some pre-defined rules, and use the plugin's easy-to-use managers to submit scores to leaderboards. Once AGC is set up, it literally takes one line to submit a score in simple scenarios, thanks to HMS Unity Plugin 2.0.x.
AGC Side
Sign in to AGC and go to My apps, then choose your app. Mine in this case is "Hit The Target". Go from the "Distribute" tab to the "Operate" tab and choose "Leaderboards", then click the "New" button. You should see the screen below.
Add the details of your leaderboard. What kind of scores you want, how the formatting should be, and the min/max values that can be submitted can all be edited here. When you are done, click "Save".
Now, we need to copy the ID of the leaderboard, so we can feed it to the plugin and use it in our game.
Do not release the leaderboards. Click “Obtain Resources” and copy the ID of the leaderboard you just created.
Unity Side
Now head to Unity. Open the drop-down Huawei menu and click Kit Settings. Enable GameService (Account Kit will automatically be enabled, which is okay) and go to the GameService tab.
Add a name for your leaderboard (I used the same long name that I used in AGC) and paste the ID you copied in the previous step. Then click "Create Constant Classes". Make sure to check the "Initialize On Start" box, or else you will have to write additional code.
Coding Phase
The coding phase for Leaderboards is very easy. All you have to do is submit the score to the leaderboard you have created.
Code:
HMSLeaderboardManager.Instance.SubmitScore(HMSLeaderboardConstants.HitTheTargetGeneralLeaderboard, GameManager.score /*score you want to submit*/);
You use the instance of HMSLeaderboardManager as usual and just call the SubmitScore() function. Use the constant class automatically generated by the plugin to select which leaderboard to submit to, and pass the score you want to submit as the second parameter.
That's it for submitting the score; you should see it in the leaderboards and in AGC.
One thing is left for the integration. You should allow your users to open the leaderboard UI provided by Huawei to see which leaderboards exist and which scores have been submitted. This will help with the competitiveness of the game.
For that, all you need to do is call another one-liner, thanks to the plugin.
Code:
HMSLeaderboardManager.Instance.ShowLeaderboards();
I use this line inside a function and call that function from a UI Button onClick. Thus, whenever users click the button, they are taken to the leaderboard UI and can check which leaderboards are present. It should look like the screenshot below.
GameService — SaveGame
SaveGame takes more time than usual because of its very nature and purpose, but it is a very powerful tool. As the name suggests, this feature saves the player's game progress to the Huawei cloud and lets players load the saved progress into the current game. This way, users never lose progress. It has its own UI to show saved games, but it is also possible to implement your own UI if you wish.
You may save and load the game progress automatically in the background and set up a load-on-prompt system, or, as I would do it, save and load on the user's actions. It is totally up to you and your game.
In my game, since it is very simple, I save the progress (score) and the rockCount, and let users save whenever they wish. Later, the user can load this progress anytime from the pause menu and keep playing from that saved game. I will use the default Huawei UI, but if you wish to implement your own, I will leave links to the docs covering custom UI in the references section. Make sure you check out that link, or alternatively click here. I will talk about the code details later. First, let's solve an error code that you may well bump into.
Error Code 7219 in HMS GameService and Its Solution​
If you have already started development, you might have gotten error 7219 in the GameService SaveGame implementation and wondered why. It arises because you need to agree to the user agreement of Huawei's Drive Kit, located at https://cloud.huawei.com/, to be able to use the SaveGame feature. SaveGame stores the game files in the cloud using Drive Kit, and if that agreement is not signed by your developer account, you will receive error 7219 and will not be able to proceed with or test your code. Make sure you click the link, sign in, and agree to it, preferably before you start the implementation.
Coding Phase
Before going into actual coding, let me mention this first. To let users see their saved games in the default UI and load a game with a simple click, call the one-liner function below (just like for leaderboards). It will open the UI provided by AppGallery.
Code:
HMSSaveGameManager.Instance.ShowArchive();
Make sure you assign this code as an onClick handler to a UI button, or implement your own logic to access that UI.
Now, for SaveGame we follow this doc, but in a Unity setting with the plugin. The order does not change, but bear with me to see how you should code it. I will share the full new class from my game and break the code down afterwards. You do not have to open the docs; I will share the steps with you below, but always keep the doc in mind for the latest updates.
What needs to be done:
The order in the official doc (written in Java):
Request DRIVE_DATA permission from the user and get ArchivesClient() object.
Get maxThumbnailSize and detailSize from the SDK. These must be requested, although you may not need them in your code.
Determine the details to save (your own parameters to save) and create ArchiveDetails object.
Write the archive metadata (such as the archive description, progress, and cover image) to the ArchiveSummaryUpdate object.
Call addArchive() method to save the game to the drive.
Notes:
You do not need to request user permission on the Unity side; the plugin handles it automatically.
The other steps are covered in detail below, using the simple game I mentioned. If you have more complicated cases that cannot be adjusted this way, please refer to the official documentation.
Coding in C#​Let me share the code first.
Code:
using UnityEngine;
using HmsPlugin;
using HuaweiMobileServices.Game;
using System.Text;
public class ManagerOfSaveGame : MonoBehaviour
{
// Start is called before the first frame update
int maxThumbnailSize;
int detailSize;
GameStarterScript gameStarterScript;
void Start()
{
gameStarterScript = GameObject.Find("PauseButton").GetComponent<GameStarterScript>();
//HMSSaveGameManager.Instance.GetArchivesClient().LimitThumbnailSize.AddOnSuccessListener((x) => { });
HMSSaveGameManager.Instance.GetArchivesClient().LimitThumbnailSize.AddOnSuccessListener(LimitThumbnailSizeSuccess);
HMSSaveGameManager.Instance.GetArchivesClient().LimitDetailsSize.AddOnSuccessListener(LimitDetailSizeSuccess);
HMSSaveGameManager.Instance.SelectedAction = SelectedActionCreator;
HMSSaveGameManager.Instance.AddAction = AddActionCreator;
}
private void LimitDetailSizeSuccess(int returnedDetailSize)
{
// Stores the maximum details size returned by the SDK.
detailSize = returnedDetailSize;
}
private void LimitThumbnailSizeSuccess(int thumbnailSize)
{
// Stores the maximum thumbnail size returned by the SDK.
maxThumbnailSize = thumbnailSize;
}
private void SelectedActionCreator(ArchiveSummary archiveSummary)
{
//load your game
Debug.Log("YOU ENTERED SELECTED ACTION CALLBACK!");
long score = archiveSummary.CurrentProgress;
long rockCount = archiveSummary.ActiveTime;
if (GameManager.rockCount <= 0)
{
gameStarterScript.PlayGameWithParameters((int)score, (int)rockCount);
}
else
{
Debug.Log("Cannot load a finished game");
}
//start the game but change the parameters to load.
}
private void AddActionCreator(bool obj)
{
if(GameManager.rockCount != 0)
{
//save your game
string description = "Rock:" + GameManager.rockCount + " Score:" + GameManager.score;
long playedTime = GameManager.rockCount; //rock count
long progress = GameManager.score;
ArchiveDetails archiveContents = new ArchiveDetails.Builder().Build();
archiveContents.Set(Encoding.ASCII.GetBytes(progress + description + playedTime));
ArchiveSummaryUpdate archiveSummaryUpdate =
new ArchiveSummaryUpdate.Builder()
.SetActiveTime(playedTime)
.SetCurrentProgress(progress)
.SetDescInfo(description)
//.SetThumbnail(bitmap)
//.SetThumbnailMimeType(imageType)
.Build();
HMSSaveGameManager.Instance.GetArchivesClient().AddArchive(archiveContents, archiveSummaryUpdate, true).AddOnSuccessListener((archiveSummary) => {
string fileName = archiveSummary.FileName;
string archiveId = archiveSummary.Id;
//if you wanna use these you can. But this just indicates that it is successfully saved.
print("fileName is: " + fileName + " and archiveId is " + archiveId);
print("GamePlayer is: " + archiveSummary.GamePlayer + " and GameSummary is " + archiveSummary.GameSummary);
print("CurrentProgress is: " + archiveSummary.CurrentProgress + " and ActiveTime is " + archiveSummary.ActiveTime);
}).AddOnFailureListener((exception) => {
print("statusCode:" + exception.GetBaseException());
});
}
else
{
print("Game is over. Cannot save a finished game!");
}
}
}
Let's break down the code.
First, I created a separate script called ManagerOfSaveGame.cs to manage save games. I also created a game object in my scene and attached the script to it. It has no visible appearance in the scene; it exists just for control.
In the Start() function, I call the methods that request the size parameters, because the documentation lists this as the first thing to do. I will not use them later, so I just get the parameters and am done with it.
Then I assign my own functions to the SelectedAction and AddAction fields. The first loads the game from the UI on click, and the second saves the game to the drive.
AddAction (Save your game)
If I were you, I would copy the contents of the shown function, paste it into my game, and then alter the parameters I want to alter. I first put in an if check to see whether the game is over. Since my game is a simple throwing game and my rock count goes from 5 to 0, a rock count of 0 means a finished game. Although I could, I do not allow my users to save a game that is already over, because my UI technically allows access to the save-game screen after the game ends.
You have several parameters that you can adjust. My code above follows the documentation order, so you can be sure of that. What you should do for your game is determine a description for the save games, a progress indicator, and, if needed, the active time. I keep my score in the progress parameter and my rock count in the active-time parameter. Normally I do not use time-related functions, but since the parameters one can save are limited, I decided to use active time for an in-game value. You can do the same if you need to. Typically, you can keep the level information, score information, etc. in the progress (long) parameter and retrieve it when loading the game.
The rest goes according to the "rules". You can take them as they are and adjust where needed. I do not save a Bitmap or image with my save files. You could alternatively take a screenshot at the save moment and store it with the current progress; the Huawei SDK allows that too, but I do not need it in my game.
Then you call the AddArchive method as shown, and the success callback indicates that the game was saved. You do not need to do anything with the return parameter, but I showed how to retrieve its values nonetheless, in case anyone ever needs them.
You can also get the exception message if the game cannot be saved for some reason.
SelectedAction (Load your game)
This function is called automatically when the user clicks a previously saved game in the default Huawei UI. Thus, what you need to do is retrieve the values that you stored while saving and load the game according to your game logic.
In my case, I retrieve the rock count and score as shown, then start my scene with these parameters. They are static, so I can alter them easily.
You can adjust this depending on your game logic and how you want to load your game when the user clicks it. For example, if you kept the level information in the progress parameter, you could reload that Unity scene to start the level from scratch.
Tips and Tricks​
Do not publish the leaderboards if you want to keep testing them. Unless you are done with testing and want to publish your app in AppGallery, a leaderboard should always be left in Testable mode; releasing it will hinder your testing efforts.
Custom UI can be programmed, although Huawei already provides a UI for Leaderboards and SaveGames (for Achievements too!). Please refer to docs below in references to see the details.
When loading your game, beware that the progress parameter is called "CurrentProgress". If you called it something else (like I called mine just "progress"), make sure you retrieve the "CurrentProgress" field, because there is no field called "progress".
Conclusion
That's it! You have successfully integrated the Leaderboards and SaveGame features. They have a wide variety of use cases, and I know mine are simple; but I believe I have given you enough insight to adapt these features to your game and draw more users.
I hope this article series has been helpful. You can always ask questions below if you have anything unanswered in your mind.
Good luck on the store and see you in my other articles!
References​
HMS Unity Plugin 2.0 Branch (Github Page)
GameService Result Codes Page
SaveGame Docs
Leaderboard Docs
Documentation of every kit in Huawei Docs (Links are present in the GitHub readme)
Original Source

Intuitive Controls with AR-based Gesture Recognition

The emergence of AR technology has allowed us to interact with our devices in new and unexpected ways. Smart devices, from PCs to mobile phones and beyond, have become dramatically simpler to use. Interactions have been streamlined to the point where only slides and taps are required, and even children as young as 2 or 3 can use them.
Rather than having to rely on tools like keyboards, mouse devices, and touchscreens, we can now control devices in a refreshingly natural and easy way. Traditional interactions with smart devices have tended to be cumbersome and unintuitive, and there is a hunger for new engaging methods, particularly among young people. Many developers have taken heed of this, building practical but exhilarating AR features into their apps. For example, during live streams, or when shooting videos or images, AR-based apps allow users to add stickers and special effects with newfound ease, simply by striking a pose; in smart home scenarios, users can use specific gestures to turn smart home appliances on and off, or switch settings, all without any screen operations required; or when dancing using a video game console, the dancer can raise a palm to pause or resume the game at any time, or swipe left or right to switch between settings, without having to touch the console itself.
So what is the technology behind these groundbreaking interactions between humans and devices?
HMS Core AR Engine is a preferred choice among AR app developers. Its SDK provides AR-based capabilities that streamline the development process. The SDK can recognize specific gestures with a high level of accuracy, output the recognition result, and provide the screen coordinates of the palm detection box; both the left and right hands can be recognized. Note, however, that when there are multiple hands in an image, only the recognition results and coordinates of the hand captured most clearly, with the highest degree of confidence, are sent back to your app. You can switch freely between the front and rear cameras during recognition.
Gesture recognition allows you to place virtual objects in the user's hand and trigger specific states based on changes to the hand gestures, providing a wealth of fun interactions within your AR app.
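To give a rough idea of how the recognition output can drive app behavior, here is a minimal sketch. It assumes the ARHand accessors getGestureType() and getGestureHandBox() from the AR Engine SDK, and the gesture IDs and the actions they trigger are purely hypothetical examples.
Code:
import com.huawei.hiar.ARHand;

public class GestureDispatcher {
    // Called once per frame with the recognized hand.
    public void dispatch(ARHand hand) {
        int gestureType = hand.getGestureType();   // recognized gesture ID
        float[] box = hand.getGestureHandBox();    // palm detection box (screen coordinates), e.g. for positioning overlays
        switch (gestureType) {
            case 1: // assumption: an "open palm" gesture ID
                pauseGame();
                break;
            case 6: // assumption: a "fist" gesture ID
                resumeGame();
                break;
            default:
                break;
        }
    }

    private void pauseGame() { /* game-specific logic */ }

    private void resumeGame() { /* game-specific logic */ }
}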
The hand skeleton tracking capability works by detecting and tracking the positions and postures of up to 21 hand joints in real time, and generating true-to-life hand skeleton models with attributes like fingertip endpoints and palm orientation, as well as the hand skeleton itself.
AR Engine detects the hand skeleton in a precise manner, allowing your app to superimpose virtual objects on the hand with a high degree of accuracy, including on the fingertips or palm. You can also perform a greater number of precise operations on virtual hands and objects, to enrich your AR app with fun new experiences and interactions.
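Since getHandskeletonArray() (used in step 5 below) returns the joint coordinates as one flat array, recovering the individual joints is a matter of stepping through it in triples. A minimal sketch, under the assumption that each joint contributes three consecutive coordinates:
Code:
import com.huawei.hiar.ARHand;

// A minimal sketch: read up to 21 joints from the flat coordinate array.
void readJoints(ARHand hand) {
    float[] skeletons = hand.getHandskeletonArray();
    int jointCount = skeletons.length / 3;
    for (int i = 0; i < jointCount; i++) {
        float x = skeletons[3 * i];
        float y = skeletons[3 * i + 1];
        float z = skeletons[3 * i + 2];
        // Anchor a virtual object to this joint, e.g. a fingertip.
    }
}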
Getting Started
Prepare the development environment as follows:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Before getting started, make sure that the AR Engine APK is installed on the device. You can download it from AppGallery. Click here to see which devices the demo can be tested on.
Note that you will need to first register as a Huawei developer and verify your identity on HUAWEI Developers. Then, you will be able to integrate the AR Engine SDK via the Maven repository in Android Studio. Check which Gradle plugin version you are using, and configure the Maven repository address according to the specific version.
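For older Gradle plugin versions that declare repositories in the project-level build.gradle, the setup typically looks like the sketch below. The repository URL is the standard HMS Maven address; the dependency coordinate and version are assumptions, so check the official docs for the current ones.
Code:
// Project-level build.gradle: add the HMS Maven repository.
// (Newer Gradle plugin versions declare repositories in settings.gradle instead.)
buildscript {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

// App-level build.gradle: add the AR Engine SDK dependency.
// The coordinate and version are placeholders; verify them in the docs.
dependencies {
    implementation 'com.huawei.hms:arenginesdk:<latest-version>'
}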
App Development
1. Check whether AR Engine has been installed on the current device. Your app can run properly only on devices with AR Engine installed. If it is not installed, prompt the user to download and install it, for example by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity is the activity that redirects users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
2. Initialize an AR scene. AR Engine supports the following five scenes: motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
Call ARHandTrackingConfig to initialize the hand recognition scene.
Code:
mArSession = new ARSession(context);
ARHandTrackingConfig config = new ARHandTrackingConfig(mArSession);
3. After obtaining an ARHandTrackingConfig object, you can set the front or rear camera as follows.
Code:
config.setCameraLensFacing(ARConfigBase.CameraLensFacing.FRONT);
4. After obtaining config, configure it in the ARSession and start hand recognition.
Code:
mArSession.configure(config);
mArSession.resume();
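The session should also be released in step with the Activity lifecycle. A minimal sketch, assuming the standard ARSession pause() and stop() methods:
Code:
@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        mArSession.pause();   // release the camera while in the background
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mArSession != null) {
        mArSession.stop();    // release all session resources
        mArSession = null;
    }
}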
5. Initialize the HandSkeletonLineDisplay class, which draws the hand skeleton based on the coordinates of the hand skeleton points.
Code:
class HandSkeletonLineDisplay implements HandRelatedDisplay {
    // Methods used in this class are as follows:

    // Initialization method.
    public void init() {
    }

    // Method for drawing the hand skeleton. Pass the tracked ARHand objects to this method to obtain the data.
    public void onDrawFrame(Collection<ARHand> hands) {
        for (ARHand hand : hands) {
            // Call getHandskeletonArray() to obtain the coordinates of the hand skeleton points.
            float[] handSkeletons = hand.getHandskeletonArray();
            // Pass handSkeletons to the method that updates the data in real time.
            updateHandSkeletonsData(handSkeletons);
        }
    }

    // Method for updating the hand skeleton point data. Call this method whenever a frame is updated.
    private void updateHandSkeletonsData(float[] handSkeletons) {
        // Create and initialize the data store of the buffer object.
        GLES20.glBufferData(..., mVboSize, ...);
        // Update the data in the buffer object.
        GLES20.glBufferSubData(..., mPointsNum, ...);
    }
}
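The HandRelatedDisplay interface implemented above is not defined in this excerpt. A hypothetical sketch of what it might look like (the official demo's version may carry extra parameters, such as a projection matrix):
Code:
import java.util.Collection;
import com.huawei.hiar.ARHand;

public interface HandRelatedDisplay {
    // Set up shaders and buffer objects.
    void init();

    // Draw the display using the latest tracked hands.
    void onDrawFrame(Collection<ARHand> hands);
}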
6. Initialize the HandRenderManager class, which is used to render the data obtained from AR Engine.
Code:
public class HandRenderManager implements GLSurfaceView.Renderer {
    private ARSession mSession;

    // Set the ARSession object so that the latest data can be obtained in onDrawFrame().
    public void setArSession(ARSession arSession) {
        mSession = arSession;
    }
}
7. Initialize the onDrawFrame() method in the HandRenderManager class.
Code:
public void onDrawFrame(GL10 gl) {
    // Call methods such as setCameraTextureName() and update() to obtain the latest calculation results from AR Engine.
    mSession.setCameraTextureName(mTextureId); // mTextureId: the OpenGL texture bound to the camera preview
    ARFrame arFrame = mSession.update();
    ARCamera arCamera = arFrame.getCamera();
    // Obtain the tracking results returned during hand tracking.
    Collection<ARHand> hands = mSession.getAllTrackables(ARHand.class);
    // Pass each tracked hand to the method that updates the gesture recognition information.
    for (ARHand hand : hands) {
        updateMessageData(hand);
    }
}
8. On the HandActivity page, set a renderer for the SurfaceView.
Code:
mSurfaceView.setRenderer(mHandRenderManager);
// Set the rendering mode.
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
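Before attaching the renderer, the GLSurfaceView itself needs its EGL context configured. A minimal sketch using standard GLSurfaceView calls (the field name is carried over from the snippet above; the ES version matches the GLES20 calls in step 5):
Code:
// Call these before setRenderer().
mSurfaceView.setEGLContextClientVersion(2);       // render with OpenGL ES 2.0
mSurfaceView.setPreserveEGLContextOnPause(true);  // keep the GL context alive across pauses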
Physical controls and gesture-based interactions each come with their own advantages and disadvantages. For example, gestures cannot provide the tactile feedback of physical keys, which is especially crucial in shooting games, where pulling the trigger is an essential operation; in simulation games and social networking, on the other hand, gesture-based interactions offer a high level of versatility.
Gestures are unable to replace physical controls in situations that require tactile feedback, and physical controls are unable to naturally reproduce the effects of hand movements and complex hand gestures, but there is no doubt that gestures will become indispensable to future smart device interactions.
Many somatosensory games, smart home appliances, and camera-dependent games now use AR to offer a diverse range of smart, convenient features. Common gestures include eye movements, pinches, taps, swipes, and shakes, all of which users can perform without any additional learning. These gestures are captured and identified by mobile devices and used to trigger specific functions for users. When developing an AR-based mobile app, you first need to enable your app to identify these gestures. AR Engine helps by dramatically streamlining the development process: integrate the SDK to equip your app with the ability to accurately identify common user gestures and trigger the corresponding operations. Try out the toolkit for yourself and explore a treasure trove of powerful, interesting AR features.
References​
AR Engine Development Guide
AR Engine Sample Code
