CSS and javascript - Advantage X7500, MDA Ameo ROM Development

Has anyone been able to render cascading style sheets (CSS) and/or run JavaScript on your Athena/Advantage using any browser in WM6 or WM5?

If you navigate to this site and you can see the menus drop down in the sample they provide, then CSS and JavaScript work.
Thanks!

Related

[TOOL] Augmented Android code snippets in web pages

A new extension for Chrome, livecode by Codota, augments Android code snippets in web pages.
Livecode makes important Java source code elements interactive:
highlights calls to Android APIs and shows the relevant docs in neat bubbles
finds related real-world code examples from GitHub
warns when a deprecated API is used in the snippet
Codota livecode is available (for free) at the Chrome Web Store, and currently works on Stack Overflow for standard Android APIs. Support for additional sites and platforms is on the way.
Check it out at codota.com/livecode
It's free, and we'd be very happy to hear what you think.

Android: using native and Java activities in the same app

Is it possible to have a mix of native and regular activities in the same app? I have a scenario where some GUI handling needs to be done with a C++ native library.
The Android documentation's "Sample: native-activity" shows how to write a fully native activity, but how do I add one to an existing Android app? If it's possible, how do I invoke the native activity?
I followed the steps in the documentation and added a new native activity (everything except setting it as the main activity), but I couldn't find a way to launch it from Java, as the intent is looking for a class object.

How to identify requests from quick apps on an HTML5 page

How can I identify requests from quick apps on an HTML5 page, so that the service logic does not instruct users to download an app?
The web component of a quick app uses the same standard HTML execution environment as browsers such as Google Chrome and Safari. When an HTML5 web page is loaded, the User-Agent attribute is reported to the server, and the page can obtain the User-Agent of the current execution environment through JavaScript. Therefore, you can set the User-Agent attribute of the web component to identify requests from quick apps.
The implementation procedure is as follows:
1. In the quick app, set the User-Agent attribute of the web component to default.
Code:
<web id="web" src="{{websrc}}" allowthirdpartycookies="true" useragent="default"></web>
2. On the HTML5 page, check whether the window.navigator.userAgent object contains the hap keyword. If yes, the request is from a quick app.
Code:
if (window.navigator.userAgent.indexOf("hap") >= 0) {
// The request comes from a quick app.
}
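For convenience, the check above can be wrapped in a small helper on the HTML5 page. This is a minimal sketch: the "hap" keyword is the one described above, while the function name isQuickApp is my own and not part of any quick app API.

```javascript
// Returns true when the given user-agent string indicates the
// quick app runtime (its user agent contains the "hap" keyword).
function isQuickApp(userAgent) {
  return typeof userAgent === "string" && userAgent.indexOf("hap") >= 0;
}

// On the HTML5 page you would typically call:
// if (isQuickApp(window.navigator.userAgent)) { /* skip the download prompt */ }
```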
For details about Huawei developers and HMS, visit the website.
https://forums.developer.huawei.com/forumPortal/en/home?fid=0101246461018590361

How Can I Integrate HUAWEI Ads into a Huawei HTML5 Quick Game?

Symptom:
Currently, no ad API is provided for an HTML5 quick game. How can I integrate HUAWEI Ads into my quick game?
Analysis:
Currently, HUAWEI Ads supports only quick apps and runtime quick games, not HTML5 quick games. However, you can use the two-way communication mechanism between the web component on the UX page of a quick game and the game's HTML5 page to integrate HUAWEI Ads into an HTML5 quick game. The onMessage lifecycle function on the UX page receives messages from the HTML5 page, calls the ad API of quick apps to obtain ad information (available only for native ads and rewarded video ads), and sends the obtained ad information back to the HTML5 page through this.$element('web').postMessage({ message: JSON.stringify(result) });.
Solution:
It is recommended that ad creation and ad API requests be encapsulated in separate functions, rather than placed together in the onInit or onMessage lifecycle function. The onInit function is called during page initialization, which makes loading quicker, but it is not suitable for processing complex logic; the onMessage function receives character strings passed from HTML5 pages, and after you add a judgment branch, the corresponding function is called to perform the specific ad processing.
Note: Currently, the quick app framework supports creating ad objects only in the onInit function, not in functions such as onMessage, so keep the ad-creation code block in onInit.
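As a rough sketch of the judgment-branch pattern described above (the message strings and the loadNativeAd/loadRewardedAd handler names here are illustrative placeholders, not actual Ads Kit API names):

```javascript
// Hypothetical dispatch helper for the onMessage lifecycle function
// on the UX page: map a message string from the HTML5 page to the
// corresponding ad-handling function. Unknown messages are ignored.
function dispatchAdMessage(message, handlers) {
  switch (message) {
    case "requestNativeAd":
      return handlers.loadNativeAd();   // wrapper around the native ad request
    case "requestRewardedAd":
      return handlers.loadRewardedAd(); // wrapper around the rewarded video request
    default:
      return null;                      // unknown message: do nothing
  }
}
```

In the real UX page script, onMessage would call a dispatcher like this and then send the result back with this.$element('web').postMessage({ message: JSON.stringify(result) });.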
For more sample code, you can refer to this site.
https://gist.github.com/Mayism/0a9f12438da0e86f06594e01e7643895
For more FAQs and cases about HUAWEI Ads integration, visit the following link:
https://developer.huawei.com/consumer/en/doc/development/quickApp-Guides/quickapp-access-ads-kit

How to Integrate the Volumetric Cloud Plug-in of HMS Core CG Kit

1. Introduction
Since childhood, I've always wondered what it would be like to walk among the clouds. Now, as a graphics programmer, I've become fascinated by the sea of clouds made possible by volumetric clouds in game clients: semi-transparent, irregular clouds produced by a physically based cloud rendering system. However, due to the computing constraints of mobile devices, I'm still exploring how to best balance performance with effects.
As an avid fan of game development, I've kept an eye on the cutting-edge technologies in this field, and have even developed plug-ins based on some of them. When I recently came across the Volumetric Cloud plug-in introduced by Computer Graphics (CG) Kit of HUAWEI HMS Core, I spent two days integrating the plug-in into Unity by following the official documents. The following figure shows a simple integration (the upper clouds are the skybox effect, and the lower ones are the rendered volumetric clouds). You'll notice that the volumetric clouds are more true-to-life, with clear silver linings. Better yet, the plug-in supports dynamic lighting and lets players travel freely amidst the clouds. Perhaps most surprisingly, when I tested its performance on a low-end smartphone (HONOR 8 Lite) at a resolution of 720p, the frame rate reached an astonishing 50 fps! Another highlight of this plug-in is cloud shape customization, which allowed me to shape the clouds exactly as I desired.
Now, let's go through the process of integrating the Volumetric Cloud plug-in into Unity.
2. Prerequisites
1. Visual Studio 2017 or later
2. Android Studio 4.0 or later
3. Unity 2018.4.12 or later
4. Huawei phones running EMUI 8.0 or later or non-Huawei phones running Android 8.0 or later
5. Volumetric Cloud plug-in SDK
Click the links below to download the SDK, and reference the relevant documents on HUAWEI Developers:
SDK download
Development guide
API reference
The directory structure of the downloaded SDK is as follows.
According to the official documents, RenderingVolumeCloud.dll is a PC plug-in based on OpenGL, and libRenderingVolumeCloud.so is an Android plug-in based on OpenGL ES 3.0. The two .bin files in the assets directory are the resource files required for volumetric cloud rendering, and the header file in the include directory defines the APIs for the Volumetric Cloud plug-in.
3. Development Walkthrough
The Volumetric Cloud plug-in is available as C++ APIs. Therefore, to integrate this plug-in into Unity, it needs to be encapsulated into a native Unity plug-in, which can then be used in Unity to render volumetric clouds. Next, I'm going to show my integration details.
3.1 Native Plug-in Creation
A native Unity plug-in is a library of native code written in C, C++, or Objective-C, which enables game code (in JavaScript or C#) to call functions from the library. You can visit the following links for more details on how to create a native Unity plug-in:
Unity - Manual: Low-level native plug-in interface
https://github.com/Unity-Technologies/NativeRenderingPlugin
Unity's code samples show that you need to build dynamic link libraries (.so and .dll) for Unity to call. A simple way to do this is to modify the open-source sample code in Android Studio and Visual Studio, respectively. This enables you to integrate the volumetric cloud rendering function and generate the required libraries in the following manner.
The functions in the RenderingPlugin.def file are APIs of the native plug-in for Unity, and are implemented in the RenderingPlugin.cpp file. In the RenderingPlugin.cpp file, you'll need to retain the required functions, including UnityPluginLoad, UnityPluginUnload, OnGraphicsDeviceEvent, OnRenderEvent, and GetRenderEventFunc, as well as corresponding static global variables, and then add three APIs (ReleaseSource, BakeMultiMesh, and SetRenderParasFromUnity), as shown below.
To integrate the Volumetric Cloud plug-in of CG Kit, modify these APIs in the given source code as follows:
(1) Modify OnGraphicsDeviceEvent. If eventType is set to kUnityGfxDeviceEventInitialize, call the CreateRenderAPI function of the Volumetric Cloud plug-in to create a variable of the RenderAPI class, and call the RenderAPI.CreateResources() function. If eventType is set to kUnityGfxDeviceEventShutdown, delete the variable of the RenderAPI class.
(2) Modify OnRenderEvent. Pass the static global variable set in the SetTextureFromUnity function to this function, and directly call RenderAPI.RenderCloudFrameTexture() in this function.
(3) Define SetTextureFromUnity. Pass the four inputs required by RenderAPI.RenderCloudFrameTexture() to the defined static global variable to facilitate future calls to this function for volumetric cloud rendering.
(4) Define SetRenderParasFromUnity. Call RenderAPI.SetRenderCloudParas() in this function.
(5) Define ReleaseSource. Call RenderAPI.ReleaseData() in this function.
The plug-in for PC will need to integrate the baking function for volumetric cloud shape customization. Therefore, an extra API is necessary for the .dll file, which means that the BakeMultiMesh function needs to be defined. Call CreateBakeShapeAPI to create a variable of the BakeShapeAPI class, and then call BakeShapeAPI.BakeMultiMesh() to perform baking.
3.2 Integration into Unity
Once the native plug-in is successfully created, you can obtain the libUnityPluginAdaptive.so and UnityPluginAdaptive.dll files that adapt between Unity and the Volumetric Cloud plug-in.
Next, you'll need to create a Unity 3D project to implement volumetric cloud rendering. Here, I've used the ARM64 version as an example.
Place libUnityPluginAdaptive.so, libRenderingVolumeCloud.so, UnityPluginAdaptive.dll, and RenderingVolumeCloud.dll of the ARM64 version in the Assets/Plugins/x86_64 directory (if this directory does not exist, create it). Configure the .so and .dll files as follows.
In addition, you'll need to configure the OpenGL-based PC plug-in of the Volumetric Cloud plug-in as follows.
Also, configure the OpenGL ES 3.0–based Android plug-in of the Volumetric Cloud plug-in by performing the following.
The Volumetric Cloud plug-in contains two .bin files. Place them in any directory of the project, and set the corresponding input parameter to the path. noise.bin is the detail noise texture of the volumetric clouds, and shape.bin is the 3D shape texture. Cloud shape customization can also be performed by calling the BakeMultiMesh API for the plug-in, which I'll detail later. The following uses the provided 3D shape texture as an example.
3.3 Real-Time Volumetric Cloud Rendering
Before calling the Volumetric Cloud plug-in, you'll need to add the dependency (.jar) for counting calls to CG Kit by modifying the app-level Gradle file. You can choose any package version.
Copy the downloaded .jar package to the Assets/Plugins/Android/bin directory of your Unity project (if this directory does not exist, create it).
Next, you can write a C# script to call the relevant APIs. The adaptation layer APIs that are explicitly called are as follows.
(1) The SetTextureFromUnity function sets the cloudTexture pointer, depthTexture pointer, and cloudTexture size. cloudTexture is the texture for volumetric cloud drawing, and depthTexture is the texture with the depth of the current frame. Execute this function once.
(2) The SetRenderParasFromUnity function calls the API for setting parameters of the Volumetric Cloud plug-in of CG Kit. These parameters are updated in each frame. Therefore, this function needs to be executed for each frame.
(3) The GetRenderEventFunc function calls plug-in APIs for drawing volumetric clouds on cloudTexture. This function can be called in either of the following ways: GL.IssuePluginEvent(GetRenderEventFunc(), 1) or commandBuffer.IssuePluginEvent(GetRenderEventFunc(), 1). Execute this function for each frame.
(4) The ReleaseSource function calls the Volumetric Cloud plug-in to destroy resources. Call this function once at the end.
The call process is as follows.
The gray APIs in the figure should be implemented according to your own specific requirements. Here, I'll show a simple implementation. Before calling the rendering API, you'll need to create two RenderTextures. One stores the rendering result of the Volumetric Cloud plug-in, and the other stores the depth. Call the SetTextureFromUnity API, and pass NativeTexturePtrs and the sizes of the two RenderTextures to the API. This ensures that the volumetric cloud rendering result is obtained.
In the update phase, the struct parameters of the Volumetric Cloud plug-in need to be updated by referring to the VolumeRenderParas.h file in the include directory of the plug-in package. The same struct needs to be defined in the C# script. For details about the parameters, refer to the relevant documents for the Volumetric Cloud plug-in. Note that the struct must be 1-byte aligned, and that the four arrays representing matrices in the struct must be in row-major order. The following is a simple example.
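The row-major requirement simply means that a 4×4 matrix is flattened row by row into a 16-element array before being copied into the struct. A minimal illustration of the flattening order (written in JavaScript for brevity here; in the actual project the struct lives in your C# script, and toRowMajor is my own helper name):

```javascript
// Flatten a 4x4 matrix (an array of 4 row arrays) into a 16-element
// array in row-major order: all of row 0 first, then row 1, and so on.
function toRowMajor(m) {
  const out = [];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out.push(m[row][col]);
    }
  }
  return out;
}
```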
After the struct variables are updated, call the SetRenderParasFromUnity API to pass the volumetric cloud rendering parameter references. Later rendering will be performed with these parameters.
After calling the rendering API, you can call OnRenderImage in the post-processing phase to draw the volumetric clouds on the screen. You can also use command buffers in other phases. In the OnRenderImage call, first draw the depth onto the created depthTexture RenderTexture, then call GL.IssuePluginEvent(GetRenderEventFunc(), 1) to draw the volumetric clouds onto the created cloudTexture RenderTexture, and lastly apply cloudTexture and the input src of OnRenderImage to dst with transparency blending. The volumetric clouds can then be seen on the screen.
3.4 APK Testing on the Phone
After debugging on the PC, you can directly generate the APK for testing on an Android phone. The only difference between the Android version and the PC version is the two string arrays in the struct, which indicate the paths of the shape.bin and noise.bin files; these paths differ between the two platforms. You can put the two .bin files in the Application.persistentDataPath directory.
3.5 3D Shape Texture Baking
The Volumetric Cloud plug-in also offers a baking API for 3D shape texture customization. Integrate this API in the adaptation layer as detailed above, with the following API function:
The function takes the BakeData struct variable and the .bin file path as its inputs, and outputs a file similar to shape.bin. savePath should be a complete path, including the file name extension (.bin). The member variables of this struct are array pointers or integers. For more details, please refer to the relevant documents for the Volumetric Cloud plug-in of CG Kit. According to the documents, the baking API is used to bake meshes in a bounding box into 3D textures. The size of a bounding box depends on minBox and maxBox in the struct.
As shown below, before calling the API, you'll need to combine multiple 3D models. To visualize which areas will be baked, you can draw a wireframe in the scene based on minBox and maxBox; the areas outside the wireframe are not baked. Once each model is positioned, you can call this API to perform baking.
It's worth noting that the baked 3D shape texture is cyclically sampled during volumetric cloud rendering. Therefore, when arranging the 3D models, you'll need to ensure the horizontal continuity of the models that intersect the vertical faces of the bounding box. Take the x-axis as an example: the models intersecting the two vertical faces on the x-axis should be identical and offset by the bounding box's x-length (with the same y- and z-coordinates).
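The cyclic sampling mentioned above behaves like wrap-around indexing: a sample that falls past one edge of the baked texture re-enters from the opposite edge, which is why the models crossing opposite faces must match up. A toy illustration of the wrap-around (wrapCoord is my own helper, not part of the plug-in API):

```javascript
// Wrap a coordinate into the range [0, size), so that sampling past
// one edge of the baked 3D texture re-enters from the opposite edge.
// The double-modulo handles negative coordinates correctly in JS.
function wrapCoord(x, size) {
  return ((x % size) + size) % size;
}
```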
Upon the completion of baking, you can render volumetric clouds with the customized 3D texture. A similar process, as officially outlined for using shape.bin, can be followed for using the customized texture.
Demo Download
The demo is uploaded as a .unitypackage file to Google Drive, and is available at:
VolumeCloud - Google Drive
Choose Assets > Import Package > Custom Package to import the .unitypackage package, where the Plugin directory contains the .dll and .so files of the adaptation layer and the Volumetric Cloud plug-in of CG Kit. The volumeCloud directory includes the relevant scripts, shaders, and pre-built scenes. The StreamingAssets directory consists of resource files (shape.bin and noise.bin). The volumeCloud/Readme.txt file provides some notes for running the demo. You can read it, and configure the corresponding parameters as detailed, before running the demo.
For more details, you can go to:
Our official website
Our Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
Original Source
