ART, Why? - Android Q&A, Help & Troubleshooting

I know some people will not like what I'm going to write here, but should users use the ART runtime? No.
First, let's talk about Go. Go is the language from Google, and Android is the mobile OS from Google. So far there is no Android SDK for Go, and Go doesn't support JNI, so switching to ART sounds like a good idea, since ART can compile through two backends, "Quick" and "Portable" (the latter LLVM-based).
If it's LLVM, that pretty much opens the door for running Native Client apps on Android, and possibly even merging Android with ChromeOS, right?
Go handles 64-bit much better than 32-bit, and 64-bit applications are Google's next target. Porting Android's Dalvik to 64-bit would be like converting a subcompact econobox into an all-terrain SUV. It would make far more sense to throw it away and start from scratch, where scratch, for Google, is ART + Go.
Why Go?
Go is fast, the Go language ships with high-quality libraries, and Go is from Google!
Now, FlexyCore. FlexyCore's most prominent product is "DroidBooster," an app that generates heavily optimized ARM binaries and promises to make your Android device run 10x faster and increase battery life... sound familiar? That's ART.
FlexyCore was developed outside the public eye, and now Google has acquired the company for $23.1 million! Google's statement was: "The FlexyCore team has strong expertise in building software to optimize Android device performance, and we think they'd be a great fit with our team."
So let's come back to why you shouldn't use ART now.
Android is implemented in Java. Pretty much all of Google Play Services is implemented in Java.
Android already relies on the fact that the graphics pipeline and the guts of many UI widgets are native code under a thin layer of Java; it has always been a Java OS on top of native underpinnings. So you will not notice any difference if you switch to ART unless you are doing significant computation in your own app, and a synthetic benchmark will vastly over-emphasize the impact of a better compiler. ART is there only so Google can get early feedback from developers and partners.
Sources:
(ycombinator - arstechnica - paradigmx - Android)

Related

Android insights: HW Acceleration, performance, Lags

Hi @all,
yesterday I posted a nice article about how HW acceleration has been done since Honeycomb (and how it differed before).
I think it's a good idea to post nice and interesting articles here - this can help not only the devs but also the users understand
why it is sometimes simply impossible to code or fix certain things.
Please don't spam this thread, even though it's in the general section - let users read interesting things instead of pages full of ****
"The Reason Android is Laggy"
Dianne starts off her post with a surprising revelation:
“Looking at drawing inside of a window, you don’t necessarily need to do this in hardware to achieve full 60fps rendering.
This depends very much on the number of pixels in your display and the speed of your CPU. For example, Nexus S has no
trouble doing 60fps rendering of all the normal stuff you see in the Android UI like scrolling lists on its 800x480 screen.”
Huh? How can this be the case? Anybody who’s used a Nexus S knows it slows down in all but the simplest of ListViews.
And forget any semblance of decent performance if a background task is occurring, like installing an app or updating the
UI from disk. On the other hand, iOS is 100% smooth even when installing apps. But we know Dianne isn’t lying about the
potential CPU performance, so what’s going on?
The Root Cause
It’s not GC pauses. It’s not because Android runs bytecode and iOS runs native code. It’s because on iOS all UI rendering
occurs in a dedicated UI thread with real-time priority. On the other hand, Android follows the traditional PC model of rendering
occurring on the main thread with normal priority.
This is not an abstract or academic difference. You can see it for yourself. Grab your closest iPad or iPhone and open Safari.
Start loading a complex web page like Facebook. Half way through loading, put your finger on the screen and move it around.
All rendering instantly stops. The website will literally never load until you remove your finger. This is because the UI thread is
intercepting all events and rendering the UI at real-time priority.
If you repeat this exercise on Android, you’ll notice that the browser will attempt to both animate the page and render the HTML,
and do an ‘ok’ job at both. On Android, this is a case where an efficient dual-core processor really helps, which is why the Galaxy
S II is famous for its smoothness.
On iOS when an app is installing from the app store and you put your finger on the screen, the installation instantly pauses until
all rendering is finished. Android tries to do both at the same priority, so the frame rate suffers. Once you notice this happening,
you’ll see it everywhere on an Android phone. Why is scrolling in the Movies app slow? Because movie cover thumbnails are
dynamically added to the movie list as you scroll down, while on iOS they are lazily added after all scrolling stops.
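The usual mitigation for app developers is to keep that kind of work off the main thread entirely. A minimal sketch (hypothetical names, plain framework threading APIs) of decoding a thumbnail on a worker thread and posting only the cheap UI update back to the main thread:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Handler;
import android.os.Looper;
import android.widget.ImageView;

/** Hypothetical helper: decode a thumbnail off the main thread, then post it back. */
final class ThumbLoader {
    private final Handler uiHandler = new Handler(Looper.getMainLooper());

    void load(final String path, final ImageView target) {
        new Thread(new Runnable() {
            @Override public void run() {
                // Heavy work (disk read + decode) happens on a worker thread,
                // so the main thread is free to keep rendering the list.
                final Bitmap thumb = BitmapFactory.decodeFile(path);
                uiHandler.post(new Runnable() {
                    @Override public void run() {
                        // Back on the main thread: only the cheap UI update runs here.
                        target.setImageBitmap(thumb);
                    }
                });
            }
        }).start();
    }
}
```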
Other Reasons
The fundamental reason Android is laggy is UI rendering threading and priority, but it’s not the only reason. First, hardware
acceleration, despite Dianne’s reservations, does help. My Nexus S has never been snappier since upgrading to ICS. Hardware
acceleration makes a huge difference in apps like the home screen and Android market. Offloading rendering to the GPU also
increases battery life, because GPUs are fixed-function hardware, so they operate at a lower power envelope.
Second, contrary to what I claimed earlier, garbage collection is still a problem, even with the work on concurrent GC in Dalvik.
For example, if you’ve ever used the photo gallery app in Honeycomb or ICS you may wonder why the frame rate is low. It turns
out the frame rate is capped at 30 FPS because without the cap, swiping through photos proceeds at 60 FPS most of the time,
but occasionally a GC pause causes a noticeable “hiccup”. Capping the frame rate at 30 fixes the hiccup problem at the expense
of buttery smooth animations at all times.
Third, there are the hardware problems that Dianne discussed. The Tegra 2, despite Nvidia’s grandiose marketing claims, is hurt
by low memory bandwidth and no NEON instruction set support (NEON instructions are the ARM equivalent of Intel’s SSE, which
allow for faster matrix math on CPUs). Honeycomb tablets would be better off with a different GPU, even if it was theoretically
less powerful in some respects than the Tegra 2. For example, the Samsung Hummingbird in the Nexus S or Apple A4. It’s telling
that the fastest released Honeycomb tablet, the Galaxy Tab 7.7, is running the Exynos CPU from the Galaxy S II.
Fourth, Android has a ways to go toward more efficient UI compositing. On iOS, each UI view is rendered separately and stored
in memory, so many animations only require the GPU to recomposite UI views. GPUs are extremely good at this. Unfortunately, on
Android, the UI hierarchy is flattened before rendering, so animations require every animating section of the screen to be redrawn.
Fifth, the Dalvik VM is not as mature as a desktop class JVM. Java is notorious for terrible GUI performance on desktop. However,
many of the issues don’t carry over to the Dalvik implementation. Swing was terrible because it was a cross platform layer on top
of native APIs. It is interesting to note that Windows Phone 7’s core UI is built in native code, even though the original plan was to
base it entirely on Silverlight. Microsoft ultimately decided that to get the kind of UI performance required, the code would have to
be native. It’s easy to see the difference between native and bytecode on Windows Phone 7, because third party apps are written
in Silverlight and have inferior performance (NoDo and Mango have alleviated this problem and the Silverlight UIs are generally very
smooth now).
Thankfully, each of the five issues listed above is solvable without radical changes to Android. Hardware acceleration will be on all
Android phones running ICS, Dalvik continues to improve GC efficiency, the Tegra 2 is finally obsolete, there are existing workarounds
for the UI compositing problems, and Dalvik becomes a faster VM with every release. I recently asked +Jason Kincaid of +TechCrunch
if his Galaxy Nexus was smooth, and he had this to say:
“In general I've found ICS on the Galaxy Nexus to be quite smooth. There are occasional stutters — the one place where I can
consistently get jitters on the Galaxy Nexus is when I hit the multitasking button, where it often will pause for a quarter second.
That said, I find that the iPhone 4S also jitters more than I had expected, especially when I go to access the systemwide search
(where you swipe left from the home screen).”
So there you go, the Android lag problem is mostly solved, right? Not so fast.
Going Forward
Android UI will never be completely smooth because of the design constraints I discussed at the beginning:
- UI rendering occurs on the main thread of an app
- UI rendering has normal priority
Even with a Galaxy Nexus, or the quad-core EeePad Transformer Prime, there is no way to guarantee a smooth frame rate if these
two design constraints remain true. It’s telling that it takes the power of a Galaxy Nexus to approach the smoothness of a three year
old iPhone. So why did the Android team design the rendering framework like this?
Work on Android started before the release of the iPhone, and at the time Android was designed to be a competitor to the Blackberry.
The original Android prototype wasn’t a touch screen device. Android’s rendering trade-offs make sense for a keyboard and trackball device.
When the iPhone came out, the Android team rushed to release a competitor product, but unfortunately it was too late to rewrite the UI
framework.
This is the same reason why Windows Mobile 6.5, Blackberry OS, and Symbian have terrible touch screen performance. Like Android, they
were not designed to prioritise UI rendering. Since the iPhone’s release, RIM, Microsoft, and Nokia have abandoned their mobile OS’s and
started from scratch. Android is the only mobile OS left that existed pre-iPhone.
So, why doesn’t the Android team rewrite the rendering framework? I’ll let Romain Guy explain:
“...a lot of the work we have to do today is because of certain choices made years ago... ...having the UI thread handle animations is the
biggest problem. We are working on other solutions to try to improve this (schedule drawing on vsync instead of block on vsync after drawing,
possible use a separate rendering thread, etc.) An easy solution would of course to create a new UI toolkit but there are many downsides to
this also.”
Romain doesn’t elaborate on what the downsides are, but it’s not difficult to speculate:
- All Apps would have to be re-written to support the new framework
- Android would need a legacy support mode for old apps
- Work on other Android features would be stalled while the new framework is developed
However, I believe the rewrite must happen, despite the downsides. As an aspiring product manager, I find Android’s lagginess absolutely
unacceptable. It should be priority #1 for the Android team.
When the topic of Android comes up with both technical and nontechnical friends, I hear over and over that Android is laggy and slow.
The reality is that Android can open apps and render web pages as fast or faster than iOS, but perception is everything. Fixing the UI lag
will go a long way to repairing Android’s image.
Beyond the perception issue, lag is a violation of one of Google’s core philosophies. Google believes that things should be fast. That’s a driving
philosophy behind Google Search, Gmail, and Chrome. It’s why Google created SPDY to improve on HTTP. It’s why Google builds tools to help
websites optimize their sites. It’s why Google runs its own CDN. It’s why Google Maps is rendered in WebGL. It’s why buffering on YouTube is
something most of us remember, but rarely see anymore.
But perhaps the most salient reason why UI lag in Android is unacceptable comes from the field of Human-Computer Interaction (HCI). Modern
touch screens imply an affordance language of 1 to 1 mapping between your finger and animations on the screen. This is why the iOS over-scroll
(elastic band) effect is so cool, fun, and intuitive. And this is why the touch screens on Virgin America Flights are so frustrating: they are incredibly
laggy, unresponsive, and imprecise.
A laggy UI breaks the core affordance language of a touch screen. The device no longer feels natural. It loses the magic. The user is pulled out of
their interaction and must implicitly acknowledge they are using an imperfect computer simulation. I often get “lost” in an iPad, but I cringe when a
Xoom stutters between home screens. The 200 million users of Android deserve better.
And I know they will have it eventually. The Android team is one of the most dedicated and talented development teams in the world. With stars like
+Dianne Hackborn and +Romain Guy around, the Android rendering framework is in good hands.
I hope this post has reduced confusion surrounding Android lag. With some luck, Android 5.0 will bring the buttery-smooth Android we’ve all dreamed
about since we first held an HTC G1. In the mean time, I’ll be in Redmond working my butt off trying to get a beautiful and smooth mobile OS some
of the recognition it deserves.
How do Android apps work - Java, its compilation, and the role of the Dalvik VM
OK, here goes mine...
I was researching the role of Java in Android and found this piece of info.
It explains the way Android apps work and related topics.
Visit here for the full article...
What is Java?
Android applications are developed using the Java language. As of now, that’s really your only option for native applications. Java is a very popular programming language developed by Sun Microsystems (now owned by Oracle). Developed long after C and C++, Java incorporates many of the powerful features of those powerful languages while addressing some of their drawbacks. Still, programming languages are only as powerful as their libraries. These libraries exist to help developers build applications.
Some of Java's important core features are:
It's easy to learn and understand
It's designed to be platform-independent and secure, using virtual machines
It's object-oriented
Android relies heavily on these Java fundamentals. The Android SDK includes many standard Java libraries (data structure libraries, math libraries, graphics libraries, networking libraries and everything else you could want) as well as special Android libraries that will help you develop awesome Android applications.
Why is Platform Independence Important?
With many programming languages, you need to use a compiler to reduce your code down into machine language that the device can understand. While this is well and good, different devices use different machine languages. This means that you might need to compile your applications for each different device or machine language—in other words, your code isn’t very portable. This is not the case with Java. The Java compilers convert your code from human-readable Java source files to something called “bytecode” in the Java world. These are interpreted by a Java Virtual Machine, which operates much like a physical CPU might operate on machine code, to actually execute the compiled code. Although it might seem like this is inefficient, much effort has been put into making this process very fast and efficient. These efforts have paid off in that Java performance is generally second only to C/C++ in common language performance comparisons.
Android applications run in a special virtual machine called the Dalvik VM. While the details of this VM are unimportant to the average developer, it can be helpful to think of the Dalvik VM as a bubble in which your Android application runs, allowing you to not have to worry about whether the device is a Motorola Droid, an HTC Evo, or the latest toaster running Android. You don’t care so long as the device is Dalvik VM friendly—and that’s the device manufacturer’s job to implement, not yours.
Why is Java Secure?
Let’s take this bubble idea a bit further. Because Java applications run within the bubble that is a virtual machine, they are isolated from the underlying device hardware. Therefore, a virtual machine can encapsulate, contain, and manage code execution in a safe manner compared to languages that operate in machine code directly. The Android platform takes things a step further. Each Android application runs on the (Linux-based) operating system using a different user account and in its own instance of the Dalvik VM. Android applications are closely monitored by the operating system and shut down if they don’t play nice (e.g. use too much processing power, become unresponsive, waste resources, etc.). Therefore, it’s important to develop applications that are stable and responsive. Applications can communicate with one another using well-defined protocols.
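One such protocol is the Intent system. A minimal sketch (the strings here are made up) of one sandboxed app handing a piece of text to whichever app the user picks:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Minimal sketch: one sandboxed app hands a piece of text to another app via
// the well-defined Intent protocol; the receiver runs in its own process and VM.
public class ShareExampleActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        Intent share = new Intent(Intent.ACTION_SEND);
        share.setType("text/plain");
        share.putExtra(Intent.EXTRA_TEXT, "Hello from another sandboxed app");
        // This Intent is the only point of contact between the two applications.
        startActivity(Intent.createChooser(share, "Share via"));
    }
}
```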
Compiling Your Code
Like many languages, Java is still a compiled language even though it doesn’t compile all the way down to machine code. This means you, the developer, need to compile your Android projects and package them up to deploy onto devices. The Eclipse development environment (used with the Android Development plug-in) makes this pretty painless. In Eclipse, automatic compilation is often turned on by default. This means that every time you save a project file, Eclipse recompiles the changes for your application package. You immediately see compile errors. Eclipse also interprets Java as you type, providing handy code coloring and formatting as well as showing many types of errors as you go. Often, you can click on the error and have Eclipse automatically fix a typo, or add an import statement, or provide a method stub for you, saving lots of typing.
You can still manually compile your code if you so desire. Within Eclipse, you’ll find the Build settings under the project menu. If you have “Build Automatically” turned on, you can still choose the “Clean…” option that will allow you to do full rebuild of all files. If “Build Automatically” is turned off, “Build All” and “Build Project” menu options are enabled. “Build All” means to build all of the projects in the workspace. You can have many projects in an Eclipse workspace.
The build process, for regular Java projects, results in a file with the extension of JAR – Java ARchive. Android applications take JAR files and package them for deployment on devices as Android PacKage files with an extension .apk. These formats not only include your compiled Java code, but also any other resources, such as strings, images, or sound files, that your application requires to run as well as the Application Manifest file, AndroidManifest.xml. The Android Manifest file is a file required by all Android applications, which you use to define configuration details about your app.
And here goes another article, by an ex-intern, Andrew Munn, who worked on the Android project.
I'll just post the link here; it's a huge article...
Follow-up to “Android graphics true facts”, or The Reason Android is Laggy
An extremely important thread for me... My friend has an iPhone 3GS and he always considers it better than Android, underestimating my LG O1. I've proved him wrong many times, but never on the technical aspects... Now he'll understand what ANDROID is!!!!
D3oDex3D_Ayush717 said:
An extremely important thread for me... My friend has an iPhone 3GS and he always considers it better than Android, underestimating my LG O1. I've proved him wrong many times, but never on the technical aspects... Now he'll understand what ANDROID is!!!!
Exactly! Android is 100 times better and more powerful than iOS! If UI rendering on an iPhone didn't happen on a dedicated thread, it would be 100 times more laggy than Android. One other thing that shows iOS concentrates completely on the UI while scrolling: swipe quickly left and right through the home screens (even with all apps closed) and you'll see that the dots below, which indicate which screen you're on, don't change at all until you've stopped scrolling, and then the indicator jumps straight to the current screen!
BTW, here is an article that contradicts the one you posted, Andy:
Dianne Hackborn - 00:38 (edited) - Public
A few days ago I wrote a post trying to correct a lot of the inaccurate statements I have seen repeatedly mentioned about how graphics on Android works. This resulted in a lot of nice discussion, but unfortunately has also lead some people to come up with new, novel, and often technically inaccurate complaints about how Android works.
These new topics have been more about some fundamental design decisions in Android, and why they are wrong. I’d like to help people better understand (and judge) these discussions by giving some real background on why Android’s UI was designed the way it is and how it actually works.
One issue that has been raised is that Android doesn’t use thread priorities to reduce how much background work interrupts the user interface. This is outright wrong. It actually uses a number of priorities, which you can even find defined right here http://developer.android.com/reference/android/os/Process.html#THREAD_PRIORITY_AUDIO in the SDK.
The most important of these are the background and default priorities. User interface threads normally run at the default priority; background threads run in the background priority. Application processes that are in the background have all of their threads forced to the background priority.
Android’s background priority is actually pretty interesting. It uses a Linux facility called cgroups to put all background threads into a special scheduling group which, all together, can’t use more than 10% of the CPU. That is, if you have 10 processes in the background all trying to run at the same time, when combined they can't take away more than 10% of the time needed by foreground threads. This is enough to allow background threads to make some forward progress, without having enough of an impact on the foreground threads to be generally visible to the user.
(You may have noticed that a “foreground” priority is also defined. This is not used in current Android; it was in the original implementation, but we found that the Linux scheduler does not give enough preference to threads based on pure priority, so switched to cgroups in Android 1.6.)
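A minimal sketch of what this looks like from application code, using the SDK priority constants mentioned above:

```java
import android.os.Process;

// Minimal sketch: a worker thread opting into the background priority described
// above, so the cgroup keeps it from starving the UI thread.
final class BackgroundWork {
    static void start(final Runnable heavyTask) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // Must be called from the thread whose priority is being changed.
                Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
                heavyTask.run();
            }
        }).start();
    }
}
```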
I have also seen a number of claims that the basic Android design is fundamentally flawed and archaic because it doesn’t use a rendering thread like iOS. There are certainly some advantages to how iOS works, but this view is too focused on one specific detail to be useful, and glosses over actual similarities in how they behave.
Android had a number of very different original design goals than iOS did. A key goal of Android was to provide an open application platform, using application sandboxes to create a much more secure environment that doesn’t rely on a central authority to verify that applications do what they claim. To achieve this, it uses Linux process isolation and user IDs to prevent each application from being able to access the system or other applications in ways that are not controlled and secure.
This is very different from iOS’s original design constraints, which remember didn’t allow any third party applications at all.
An important part of achieving this security is having a way for (EDIT: It has been pointed out to me that iOS does in fact use multiple windows and multiple GL contexts. Lesson to me, just don't talk about anything I haven't directly verified. That still doesn't change things for Android, though, where as I mention later we simply did not have hardware and drivers that could do multiple GL contexts until fairly recently.)
individual UI elements to share the screen in a secure way. This is why there are windows on Android. The status bar and its notification shade are windows owned and drawn by the system. These are separate from the application’s window, so the application can not touch anything about the status bar, such as to scrape the text of SMS messages as they are displayed there. Likewise the soft keyboard is a separate window, owned by a separate application, and it and the application can only interact with each other through a well defined and controlled interface. (This is also why Android can safely support third party input methods.)
Another objective of Android was to allow close collaboration between applications, so that for example it is easy to implement a share API that launches a part of another application integrated with the original application’s flow. As part of this, Android applications traditionally are split into pieces (called “Activities”) that handle a single specific part of the UI of the application. For example, the contacts list is one activity, the details of a contact is another, and editing a contact is a third. Moving between those parts of the contacts UI means switching between these activities, and each of these activities is its own separate window.
Now we can see something interesting: in almost all of the places in the original Android UI where you see animations, you are actually seeing windows animate. Launching Contacts is an animation of the home screen window and the contacts list window. Tapping on a contact to see its details is an animation of the contacts list window and the contacts details window. Displaying the soft keyboard is an animation of the keyboard window. Showing the dialog where you pick an app to share with is an animation of a window displaying that dialog.
When you see a window on screen, what you are seeing is actually something called a “surface”. This is a separate piece of shared memory that the window draws its UI in, and is composited with the other windows to the screen by a separate system service (in a separate thread, running at a higher than normal priority) called the “surface flinger.” Does this sound familiar? In fact this is very much like what iOS is doing with its views being composited by a separate thread, just at a less fine-grained but significantly more secure level. (And this window composition has been hardware accelerated in Android from the beginning.)
The other main interesting interaction in the UI is tracking your finger -- scrolling and flinging a list, swiping a gallery, etc. These interactions involve updating the contents inside of a window, so require re-rendering that window for each movement. However, being able to do this rendering off the main thread probably doesn’t gain you much. These are not simple “move this part of the UI from X to Y, and maybe tell me when you are done” animations -- each movement is based on events received about the finger on the screen, which need to be processed by the application on its main thread.
That said, being able to avoid redrawing all of the contents of the parts of the UI that are moving can help performance. And this is also a technique that Android has employed since before 1.0; UI elements like a ListView that want to scroll their content can call http://developer.android.com/reference/android/view/View.html#setDrawingCacheEnabled(boolean) to have that content rendered into a cache so that only the bitmap needs to be drawn as it moves.
Traditionally on Android, views only have their drawing cache enabled as a transient state, such as while scrolling or tracking a finger. This is because they introduce a fair amount more overhead: extra memory for the bitmap (which can easily total to multiple times larger than the actual frame buffer if there are a number of visual layers), and when the contents inside of a cached view need to be redrawn it is more expensive because there is an additional step required to draw the cached bitmap back to the window.
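A minimal sketch of that transient pattern, with the enable and disable calls tied to the start and end of a scroll:

```java
import android.view.View;

// Sketch of the transient drawing-cache pattern described above: enable the cache
// only while the content is moving, then drop it to reclaim the bitmap memory.
final class ScrollCacheHelper {
    static void onScrollStarted(View content) {
        content.setDrawingCacheEnabled(true);   // content is rendered once into a cached bitmap
    }

    static void onScrollFinished(View content) {
        content.setDrawingCacheEnabled(false);  // release the cache when movement stops
    }
}
```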
So, all those things considered, in Android 1.0 having each view drawn into a texture and those textures composited to the window in another thread is just not that much of a gain, with a lot of cost. The cost is also in engineering time -- our time was better spent working on other things like a layout-based view hierarchy (to provide flexibility in adjusting for different screen sizes) and “remote views” for notifications and widgets, which have significantly benefited the platform as it develops.
In fact it was just not feasible to implement hardware accelerated drawing inside windows until recently. Because Android is designed around having multiple windows on the screen, to have the drawing inside each window be hardware accelerated means requiring that the GPU and driver support multiple active GL contexts in different processes running at the same time. The hardware at that time just didn’t support this, even ignoring the additional memory needed for it that was not available. Even today we are in the early stages of this -- most mobile GPUs still have fairly expensive GL context switching.
I hope this helps people better understand how Android works. And just to be clear again from my last point -- I am not writing this to make excuses for whatever things people don’t like about Android, I just get tired of seeing people write egregiously wrong explanations about how Android works and worse present themselves as authorities on the topic.
There are of course many things that can be improved in Android today, just as there are many things that have been improved since 1.0. As other more pressing issues are addressed, and hardware capabilities improve and change, we continue to push the platform forward and make it better.
One final thought. I saw an interesting comment from Brent Royal-Gordon on what developers sometimes need to do to achieve 60fps scrolling in iOS lists: “Getting it up to sixty is more difficult—you may have to simplify the cell's view hierarchy, or delay adding some of the content, or remove text formatting that would otherwise require a more expensive text rendering API, or even rip the subviews out of the cell altogether and draw everything by hand.”
I am no expert on iOS, so I’ll take that as true. These are the exact same recommendations that we have given to Android’s app developers, and based on this statement I don't see any indication that there is something intrinsically flawed about Android in making lists scroll at 60fps, any more than there is in iOS.
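For reference, a minimal Android-side sketch of those same recommendations is the familiar row-recycling ("ViewHolder") pattern; the example below uses only framework-provided layout resources, nothing app-specific:

```java
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.TextView;
import java.util.List;

// Minimal sketch of the row-recycling ("ViewHolder") pattern that keeps getView()
// cheap enough for smooth list scrolling.
class FastListAdapter extends ArrayAdapter<String> {
    FastListAdapter(Context ctx, List<String> items) {
        super(ctx, android.R.layout.simple_list_item_1, items);
    }

    private static class Holder {
        TextView text;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        Holder holder;
        if (convertView == null) {
            // Inflate and look up child views only once; recycled rows skip both steps.
            convertView = LayoutInflater.from(getContext())
                    .inflate(android.R.layout.simple_list_item_1, parent, false);
            holder = new Holder();
            holder.text = (TextView) convertView.findViewById(android.R.id.text1);
            convertView.setTag(holder);
        } else {
            holder = (Holder) convertView.getTag();
        }
        holder.text.setText(getItem(position));
        return convertView;
    }
}
```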
D3oDex3D_Ayush717 said:
An extremely important thread for me... My friend has an iPhone 3GS and he always considers it better than Android, underestimating my LG O1. I've proved him wrong many times, but never on the technical aspects... Now he'll understand what ANDROID is!!!!
Ehhhh, I still think the 3GS is better than the Optimus One.
That's a lot of bull**** in even more words...
Kidding
But I don't understand any of it; I'll leave it to the real devs (A)
Good luck dev'ing
OK, here is some basic information on how long it takes (and why) before a developer can
release a complete OS:
http://developer.sonyericsson.com/w...from-source-code-release-to-software-upgrade/
About Accelerated Android Rendering:
It's pretty interesting, although these guys aren't talking about Ice Cream Sandwich (they're talking about Honeycomb, which introduced hardware-accelerated 2D rendering).
http://www.youtube.com/watch?v=v9S5EO7CLjo
I could be way off base here, but is there anything useful that could be extracted from Qualcomm's Adreno SDK?
https://developer.qualcomm.com/
terratrix said:
Ehhhh, I still think the 3GS is better than the Optimus One.
I am not comparing the 3GS and the O1; I'm talking about the difference between Android OS and iOS...
Sent from my LG Optimus One P500 using XDA App
This week, Google started a nice topic on the Android developer page, "Best practices to develop Android applications".
Reading some of the articles is recommended for developers who want to save battery and/or network traffic, speed
up ListViews, save RAM, and other good things:
look here:
Improving Layout Performance
Optimizing Battery Life
Sharing content between applications
I hope this can help someone understand what we can do to make things nice.

Questions about going from JAVA to C++

Hi everyone
I've been coding games in OpenGL ES 2, 100% in Java. My question: will I get a performance boost (in FPS) if I code some parts of my games in C++, like the rendering part, etc.? Can I get an estimate (2x, etc.)?
Also, C++ is compiled, so I suppose I will need to make two APKs, one for ARM and another for x86?
kamuikun said:
Hi everyone
I've been coding games in OpenGL ES 2, 100% in Java. My question: will I get a performance boost (in FPS) if I code some parts of my games in C++, like the rendering part, etc.? Can I get an estimate (2x, etc.)?
Also, C++ is compiled, so I suppose I will need to make two APKs, one for ARM and another for x86?
You will get a performance boost; C++ runs natively while Java runs on a VM.
How much of a boost I don't know; I have never used C/C++. On today's modern hardware I presume not too much.
You can make two APKs; the Google Play Store allows uploading separate APKs for each supported architecture (MIPS, ARM, x86).
But you don't have to if you don't want to.
You can also compile native libraries for both ARM and x86 and package them together; then in Java you determine which one to use.
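As a rough sketch (the library name below is hypothetical): if the .so files for both ABIs are packaged under lib/<abi>/, System.loadLibrary() normally resolves the matching one at load time, so the Java side mostly just declares the native entry points.

```java
import android.os.Build;
import android.util.Log;

// Rough sketch (library name is hypothetical): when both ARM and x86 copies of a
// native library are packaged under lib/<abi>/, System.loadLibrary() resolves the
// one matching the device; the ABI log line is only for illustration.
public final class NativeLoader {
    static {
        Log.d("NativeLoader", "Device ABI: " + Build.CPU_ABI);
        System.loadLibrary("gamecore"); // loads libgamecore.so for this device's ABI
    }

    // Declared in Java, implemented in the C/C++ library loaded above.
    public static native void renderFrame();

    private NativeLoader() { }
}
```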
Thanks for your answer.
Can someone who actually made the jump (from Java to C++) share their experience? How much of an FPS boost should I expect?
I have a simple scene right now in 100% Java that renders at 17 FPS on an old single-core 1 GHz device, which is quite low. I was wondering whether, if I optimized the rendering in C++, I would get 60 FPS on that device...
I can't talk from an OpenGL point of view but I made an Equation Solver a while back using the NDK and C++ as the engine for it. I can say the performance increase is quite dramatic.
http://www.youtube.com/watch?v=pHS4aXqPo-A
Go to 7:30; it shows an application that executes the same algorithm via Java and via native code. You can see a large difference in performance.
It depends on what you're doing. If you have a heavy physics engine or massive calculations of whatever else, you gain a lot by using the NDK in addition to the SDK. Small calculations and simple games aren't worth the overhead, because each call into native code via JNI has a significant cost.
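A hedged sketch of the usual way around that cost: batch the work so there is one JNI crossing per frame instead of many (library and method names below are hypothetical).

```java
// Sketch of the "batch the work" idea: one JNI crossing that processes a whole
// array is far cheaper than crossing once per element every frame.
public final class PhysicsBridge {
    static {
        System.loadLibrary("physics");
    }

    // Preferred: a single native call steps every body in the simulation.
    public static native void stepWorld(float[] positions, float deltaSeconds);

    // Costly if called per body: each invocation pays the JNI transition overhead.
    public static native void stepBody(int bodyIndex, float deltaSeconds);

    private PhysicsBridge() { }
}
```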
C++ performance gain
Hi,
Indeed you should expect a performance gain. As a rule of thumb, C is between 3x and 50x faster than Java, and C++ is roughly 3% slower than C. Those were the figures around 2000; Java may have improved since. Definitely, you will get a performance boost. For all serious animated games it should be C++, no doubt.
There might be a trick to compile code at runtime, saving you the burden of compiling for all platforms. For instance, RenderScript, which allows computation on the GPU, compiles its C code (C99) at runtime.
Hope this will help.
DP
It could be faster. Remember that Java and C++ have quite a few differences: no garbage collection in C++, pointers, and no reflection, for example. I'm not trying to put you off, but it could mean altering your design to compensate. Having said that, I used JNI in a project years ago to talk to some hardware (not for performance) and that was no problem.
It is not easy to go from Java to C++. C++ has some concepts that are very different from Java, for example pointers and freeing/deleting memory. I think manual memory management is very hard for a Java engineer.
kamuikun said:
I've been coding games in OpenGL ES 2, 100% in Java. My question: will I get a performance boost (in FPS) if I code some parts of my games in C++, like the rendering part, etc.? Can I get an estimate (2x, etc.)?
Nice stone-age write-up (for/about loopers) with some metrics.
The first thing I recommend is to profile your code and get a clear idea of where most of the time is spent; if your time is mostly spent on the GPU / draw calls, then I don't think the language is your problem.
my experience
I initially implemented my PlotimFree plotting app in Java and was not satisfied with the results. For the move to C++ I gave the Marmalade SDK a shot, and while I'm not sure I'd use it again if I started from scratch (due to pretty awful support), the performance boost was amazing. I expect pure NDK to be at least as good.
About the two versions: indeed, you need to compile for each target separately, which Marmalade limits depending on the license you acquire. And there are a few more targets besides ARM and x86. I can testify, though, that converting my Android app to BB10 (which is also ARM-based but still a different target) was no more than a two-hour process.
Lyonsbane said:
The first thing I recommend is to profile your code and get a clear idea of where most of the time is spent; if your time is mostly spent on the GPU / draw calls, then I don't think the language is your problem.
This is exactly right: profile first and find out what is consuming the most processing time. From there you can determine whether that component is something you can write natively in C and invoke from your Java code.
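One simple way to get that picture is method tracing; a minimal sketch (the traced section here is hypothetical), with the resulting .trace file opened in Traceview:

```java
import android.os.Debug;

// Minimal sketch: wrap the suspect section in method tracing, then open the
// generated .trace file in Traceview to see where the time actually goes.
final class FrameProfiler {
    static void profileOneFrame(Runnable drawScene) {
        Debug.startMethodTracing("frame_profile"); // writes frame_profile.trace
        drawScene.run();                           // the code under suspicion
        Debug.stopMethodTracing();
    }
}
```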
kamuikun said:
Hi everyone
I've been coding games in OpenGL ES 2, 100% in Java. My question: will I get a performance boost (in FPS) if I code some parts of my games in C++, like the rendering part, etc.? Can I get an estimate (2x, etc.)?
Also, C++ is compiled, so I suppose I will need to make two APKs, one for ARM and another for x86?
It depends on your skills; I would consider optimizing the shaders rather than migrating to a different language.
As others have mentioned, maybe look at the way you are rendering, or at the shaders. On most Android devices now we have a JIT; this basically compiles sections of the Java code at runtime to the platform's native format.
Many times I have considered switching to C++ but really cannot convince myself yet! The main benefits I see to it are:
- No garbage collector, so hopefully you can control your allocations more easily
- Platform independence, you can write most of your code in modules and keep non-platform specific parts away to make it easier to port later.
A lot depends on the quality of your Java code as well.
A big problem with doing graphics in Java is getting things smooth. If you create a lot of objects each frame, you will overload the garbage collector, and every few seconds, you will see a slowdown.
C++ won't suffer from this issue.
I did a Java software-rendering demo years ago (search for 'Croissant 9' on pouet.net), and spent a lot of time minimizing the load on the garbage collector by re-using objects with a simple pool system, so that the demo ran smoothly throughout.
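A minimal sketch of that pooling idea (class and method names are hypothetical):

```java
import java.util.ArrayDeque;

// Minimal sketch of the pooling idea: reuse temporary vectors instead of
// allocating new ones every frame, so the garbage collector has nothing to
// clean up mid-animation.
final class Vec3Pool {
    private final ArrayDeque<float[]> free = new ArrayDeque<float[]>();

    float[] obtain() {
        float[] v = free.poll();
        return (v != null) ? v : new float[3]; // allocate only when the pool is empty
    }

    void recycle(float[] v) {
        free.push(v); // hand the object back for reuse next frame
    }
}
```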
I recently tried to port it to Android, and found that Dalvik is worse at memory management than the JVMs I used 10 years ago when developing the original demo. Dalvik is also a lot slower. The original ran fine even on simple 1.6 GHz Northwood Celeron, easily 30-50 fps... A modern high-end Android phone should be faster, but the code runs with single-digit framerates.
So there's a lot to gain with C++ there.
But with hardware rendering, the bottleneck is not so much on the CPU, but on the GPU, if your code is designed properly.
As for APKs, I believe if you just add multiple platforms to your makefile, it will pack the multiple platforms into a single APK, and the proper code will get deployed on the device automatically.
I can confirm what many of the pro-C++ posts are saying, but I can also correct a few assumptions about Java development for OpenGL ES 2 targets.
I have decades of experience in C/C++, with targets ranging from 3D engines to robotics control systems.
Most of the high performance work you see on Android devices was written in C++, not Java.
There is very little if any benefit, contrary to the Android documentation, in mixing Java and C++ together. The JNI interface is a considerable bottleneck. If you're going to work in C++ for OpenGL ES 2 targets, you should work entirely in C++ - viewing the Java nature of Android applications as a necessary evil for getting I/O from touch.
There are a number of free C++ engines targeting Android. I can't heartily recommend them for you, but they exist as examples from which to base judgement. One that you can EASILY unpack and compare right now is GamePlay3D - from Blackberry (I know, it's a surprise source). With that, and the NDK, you can build 3D example games from the package and see for yourself what an all C++ development target does on your device.
Contrary to Google's claims, Java is slow by the standards expected from engineers familiar with C++ development. Little to zero can be done to change that. However, C++ is a complicated language to use effectively. The learning curve is steep, the potential perils are high, and for Android it was an unwelcome, highly resisted addition to the platform (when the NDK was introduced). Since NDK 7, C++ has become a relatively first-class member of Android development work, and virtually all high performance games use it for both portability and performance.
A lot of people seem to expect all kinds of magic performance improvements from going to C++, without realizing that their graphics coding might be sub-optimal. Too many state changes, too many draw calls, textures that are too big or shaders that are too complex, those are all quite common causes of slow graphics that are not going to be fixed by moving to C++.
Sure, if you really need every last bit of performance (and you know how to get it), go with C++. But given the power of today's hardware, most people should be able to get by just fine with Java.
It would be a different story if you're planning to port your stuff to iOS: then coding in C++ actually makes sense. I also do a lot of OpenGL stuff in C++, simply because it allows me to plug the renderer into a Qt application on the desktop, avoiding the upload to the device and making debugging a lot easier.
BTW: the pain of writing JNI code can be eased a lot by using SWIG, which generates all the required wrapper code based on interface definitions.

Explanation of the new Android Runtime (ART)

Looking through the forums on this and other devices with 4.4+ ROMs, the question always comes up: what is ART? Well, I happened to run across this explanation. This should clarify it for even the noobiest of flashers out there, so I felt it should be shared. Hope this helps everyone understand what it is and why not all apps are compatible just yet. (Credit to XDA member @bippi79 for the great writeup!)
"A quick little post for something we have been working on lately.
Quite a few of you will have heard about Android KitKat and the bag of goodies it brings along with it. One of the important changes, though, is very much under the hood. It is called ART, or Android Runtime. So what is it, and why is it important to us?
As described by Google, ART is a new Android runtime being introduced experimentally in the 4.4 release. It is a preview of work in progress in KitKat that can be turned on in Settings > Developer options. Before this, every Android application ran in its own process, with its own instance of the Dalvik virtual machine. Dalvik was written so that a device can run multiple VMs efficiently. The Dalvik VM executes files in the Dalvik Executable (.dex) format, which is optimized for a minimal memory footprint. Now, with Android 4.4, Google has revealed that the Dalvik replacement, called Android Runtime (ART), should improve the performance of Android apps by a huge margin. The early version of ART in Android 4.4 has already been reported to speed up apps by around 100%, though it is too early to know the whole truth. It might as well be a placebo until then.
ART straddles an interesting middle ground between compiled and interpreted code, called ahead-of-time (AOT) compilation. Currently, Android apps are interpreted or JIT-compiled at runtime every time you open them. This is slow. (iOS apps, by comparison, are compiled to native code, which is much faster.) With ART enabled, each Android app is compiled to native code when you install it. Then, when it’s time to run the app, it performs with all the speed and responsiveness of a native app.
Now, a lot of developers as well as enthusiasts will be really interested in this. It allows Android developers to continue writing the exact same code and have their apps work across a wide range of hardware specs and form factors, but now their apps will run significantly faster, feel more responsive, and your device’s battery life should improve.
And this is where our new website, www.androidruntime.com comes in. Right now, in these initial stages, people are still confused as to which apps can run in ART and which cannot. Our website solves this tiny issue by letting you search for any app, and see if ART supports it or not. Missing an app but you know the answer to the question? You can even contribute and add to the supported app list. We will do a quick verification and update the search results.
Now the website is still in its infancy, but we are sure that a lot of people might be interested in it. So spread the word and do what you guys really do best.
Oh, and have a great weekend!"
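(If you are curious which runtime a given device is actually using, a crude check, assuming the documented java.vm.version convention, looks like this:)

```java
import android.util.Log;

// Crude sketch, assuming the documented convention that "java.vm.version"
// reports 2.0.0 or higher when ART is the active runtime.
final class RuntimeCheck {
    static void logRuntime() {
        String vmVersion = System.getProperty("java.vm.version");
        boolean isArt = vmVersion != null && vmVersion.startsWith("2");
        Log.d("RuntimeCheck", (isArt ? "ART" : "Dalvik") + " (java.vm.version=" + vmVersion + ")");
    }
}
```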
Sent from my XT912 using XDA Premium 4 mobile app
Great article!! As our devices evolve, so should the supporting framework!! Most of us rely on our devices (are they just "phones" anymore?) for social interactivity, so the speed, responsiveness, and capabilities improved by ART will be a boon. Time to "let sleeping Dalvik lie"!!
Thanks for the article. Very helpful.
Thx for the great article
Thanks for this great article... It was very useful...
Good write up
Thanks for the article on ART very helpful!

Android app framework suggestion.

Hello all!
Would it be possible to develop a framework of some sort to install Android apps natively, instead of using Darwin or a JVM, on Ubuntu Touch?
The way I see it (it might be a little too simple) is to convert the bytecode to native code and write wrapper functions for Android function calls, or something like that. Is that possible?
Android doesn't use Darwin or the JVM (in the strictest sense); it uses either the Dalvik VM (which is based on Java, but is not Java, nor is it a JVM in the strictest sense [as in, one that executes applications made for the Java JVM]) or the Android Runtime (ART).
To my knowledge there is no Android analog to WINE (what you're proposing). There are applications such as Genymotion and Shashlik but those either aren't targeting ARM or aren't ready for ARM yet.
It's quite possible, if you ported the Android Runtime or Dalvik VM over to a WINE-esque application, but no, to my knowledge there are no projects doing this.
As Android is coming to Chromebooks, it may be possible to just replicate the same container on Ubuntu Touch - it may require some work to make Mir work with whatever interface Chromebook's Android port talks to (Freon?), but Google will do most of the effort here.
grandrew said:
As Android is coming to Chromebooks, it may be possible to just replicate the same container on Ubuntu Touch - it may require some work to make Mir work with whatever interface Chromebook's Android port talks to (Freon?), but Google will do most of the effort here.
Android itself is NOT coming to Chromebooks; this is simply Google's Android Runtime for Chrome being released into mainstream usage (see the ARChon project for more information about this runtime). It is still an emulated Android device (albeit more optimized so it runs faster, likely using x86 binaries) that runs under Chrome's NaCl platform; it is NOT Android on Chromebooks as many news sources would like to suggest.
At the moment, Chromium does not run very well inside XMir (which is why I didn't give it any real thought there), but yes, this and the Android Runtime for Chrome should fill this gap perfectly. In the future though, it may be possible to use the LXC container with some work done to surface_flinger and the Android frameworks (to support Mir's windowing system ofc) to run the applications inside the Android container.
Like I said though, Google's Chromium runtime would work. But we currently have no hardware acceleration inside legacy X11 applications, limited filesystem access inside said legacy container (only the XDG standard folders - Downloads, Documents, Music, Pictures - are mounted into the legacy container), and we also don't really have good, working OpenGL available to X11 applications, since GPU access is done through libhybris, which X11 knows nothing about.
It's possible to get around this, similar to how Canonical has done it in the PC's development version of Unity8, and get hardware acceleration by turning on DRI/DRM (truthfully, I haven't tested whether it works with the android mirplatform packages). I did this in my tweaked kernel, but it doesn't seem to provide any performance improvement with the freedreno X11 driver installed (I saw no indication that XMir initialized the freedreno driver at all, only Mesa's software rasterizing driver).

Very lightweight Android emulator

Sorry if this isn't the best section to be asking in, it's a big forum out there with so many boards.
I'm trying to run many instances of a certain Android app concurrently. I need either Google Play services to be available for a one-time sign in on each instance, or preferably a way to import app data to remove the need for Google Play or any other components of the Android environment besides the basic runtime needed to run the app.
I'm currently using an Android emulator (Nox) running at 640x360 and 20 fps, which is able to get me about 14 instances running on my local Windows machine before things start crashing. The limiting factors seem to be the frequency of snapshots taken by the VMs and running low on RAM, which in turn increases CPU usage for defragmentation and page file management.
Is there any more efficient way to accomplish this task? Perhaps an x86 Android runtime with settings to reduce graphics quality? I've also looked at the Genymotion AMI on AWS but all of Amazon's VM options seem too powerful (and costly) to run my app on so many machines.
Thanks!
