The power of Cortex A-15 - Nexus 10 General

Here is a video, guys, comparing the Cortex-A15 to a quad-core Cortex-A9:
http://www.youtube.com/watch?v=zx84_mQ1YLw

The video has been removed.

Loading times can also be affected by other hardware in the device (flash speeds, available RAM and its speed, etc.). The only way to do a true comparison is to have the exact same device specs with different CPUs, IMO. A closer comparison, I guess, would be to run some CPU benchmark apps with the performance governor set, while making sure no thermal throttling is taking place.
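For what it's worth, on a rooted device you can pin the governor and check for throttling yourself instead of eyeballing it. A minimal sketch, assuming the standard Linux cpufreq/thermal sysfs layout (exact paths vary by kernel):

```python
import glob

# Pin every core to the performance governor (needs root).
for gov in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
    with open(gov, "w") as f:
        f.write("performance")

# While the benchmark runs: if scaling_cur_freq sits below cpuinfo_max_freq,
# the SoC is thermal-throttling and the score isn't comparable.
for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    cur = int(open(cpu + "/scaling_cur_freq").read())
    top = int(open(cpu + "/cpuinfo_max_freq").read())
    print(cpu, cur, "of", top, "kHz", "<- throttled" if cur < top else "")
```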

Video has been removed.

Related

[Q] Is the Galaxy S's GPU more powerful than the S II's?

So, is it? Can anyone provide links?
SGS: PowerVR SGX540 GPU
SGS2: Mali-400MP GPU
Benchmarks here:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
And a detailed analysis of the Mali architecture here:
http://www.design-reuse.com/article...ntages-of-the-mali-graphics-architecture.html
Basically, the conclusion seems to be that it's definitely a more advanced GPU; however, that may not translate into better raw benchmark scores in every situation. The focus seems to be more on power consumption and efficiency.

dual core vs quad core

So I've been lurking on the Prime's forums for a while now and noticed the debate over whether the new Qualcomm dual core will be better than the Tegra 3 the Prime currently has. Obviously, if both were clocked the same, the Tegra 3 would be better, and I understand the Tegra 3's GPU is better too. However, for a normal user (surfing the web, playing movies, songs, etc.), isn't a dual core at 1.5 GHz better, in that an average user will rarely use more than 2 cores? The way I understand it, each core is able to handle 1 task, so in order to activate the 3rd core you would have to have 3 things going on at the same time? Could someone please explain this to me?
First of all, the Tegra 3 can go up to 1.6 GHz. Secondly, all 4 cores can be utilized by a multithreaded app. Lastly, battery life is great on the Tegra 3 due to the companion core.
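To make the threading point concrete: the OS scheduler spreads independent threads or processes across cores, so one app can keep all four busy; you don't need "three separate things going on" to wake a third core. A minimal sketch with a hypothetical workload, comparing a quad-core parallel run against the same work done serially:

```python
import multiprocessing as mp
import time

def burn(n):
    # CPU-bound busy work; each worker process lands on its own core.
    x = 0
    for i in range(n):
        x += i * i
    return x

if __name__ == "__main__":
    jobs = [5_000_000] * mp.cpu_count()

    t = time.time()
    with mp.Pool(mp.cpu_count()) as pool:   # one process per core
        pool.map(burn, jobs)
    print("parallel:", round(time.time() - t, 2), "s")

    t = time.time()
    for n in jobs:                          # same work on a single core
        burn(n)
    print("serial:  ", round(time.time() - t, 2), "s")
```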
jdeoxys said:
First of all, the Tegra 3 can go up to 1.6 GHz. Secondly, all 4 cores can be utilized by a multithreaded app. Lastly, battery life is great on the Tegra 3 due to the companion core.
But the native clock for that Qualcomm would be 1.5 GHz, meaning overclocking can take it higher. Also, doesn't being dual-core rather than quad-core give it an edge in battery life? You do bring up a good point with multithreaded apps. And to clarify, I am not standing up for the Qualcomm chip or putting down the Tegra 3, just trying to get things straight.
Thanks
Hey, I'm the ....idiot aboard here....lol
But the Tegra 3 has a companion core, a fifth core, that takes over when the tablet is not stressed, thus saving the battery.
I am just repeating what I have read; I have no knowledge of how it all works. I guess that is how we can get better battery life.
Just trying to help the OP. Maybe someone way smarter can chime in. Shouldn't be hard....lol
Quad core is better by far. On low-level tasks, simple things, and with the screen off or in deep sleep, the companion core takes over, meaning it's running on a single low-powered core. This companion core has a max speed of only 500 MHz. So in deep sleep or on low-level tasks, the companion core alone runs everything at only 102-500 MHz, most of the time at the lower end. Therefore the Tegra 3 has the better battery life, since all its low-power tasks are run by a single low-powered companion core. That's 1 low-powered core compared to 2 high-powered cores trying to save battery. Quad core is better all around. We haven't even begun real overclocking yet. The 1.6 GHz speed was already in the kernel, so if you're rooted and using ViperControl, ATP Tweaks or the Virtuous ROM, you can access those speeds at any time. Once we really start overclocking higher than 1.6 GHz we will have an even bigger advantage. Anyone knows 4 strong men are stronger than 2..lol. Tegra 3 and Nvidia are the future. Tegra 3 is just the chip that kicked down the door on the evolution of mobile SoCs.
If you really want to learn the ins and outs of the Tegra 3, all the details, and how it's better than any dual core, check out the thread I made. There's a whitepaper attached in that thread that you can download and read. It's written by Nvidia themselves and goes into great detail on the Tegra 3. Check it out.
http://forum.xda-developers.com/showthread.php?t=1512936
aamir123 said:
But the native clock for that Qualcomm would be 1.5 GHz, meaning overclocking can take it higher...
The maximum clock speed isn't all that important, since during tasks like web browsing, watching videos and movies, and listening to music you will never push the processor to its highest available clock speed anyway. All mobile devices underclock their processors so that you rarely have unused clock cycles eating up battery life. So, all things being relatively equal, performance would be about the same between both tablets during these lightweight tasks.
If you have a lot of background processes running, then the quad-core system might have an edge in performance, since theoretically different tasks can be pushed off to different processors. However, this use case is rarely found in Android. You might have an app checking weather or syncing photos in the background, or music playing while you web-surf, but those are generally fairly lightweight tasks that won't test the processor performance of your device.
In tasks that do stress your processor, such as 3D gaming, quad cores have a very large advantage over dual-core systems, despite the slight difference in maximum clock speeds. In addition, the Tegra 3 has a more powerful GPU than the new Qualcomm chip, which will definitely make a noticeable difference in gaming performance.
Now, when it comes to ultra-low-power tasks, or when the tablet is on standby, the Tegra 3 uses its "companion core", which has incredibly low power requirements, so it can continue to sync your email, Twitter and weather updates for days (or weeks) while having very little impact on the Transformer Prime's battery.
So in short, the Tegra 3 is more likely to outperform the Qualcomm in situations where you actually need extra performance. In light tasks, performance between the two should be about the same. Battery life is yet to be definitively determined; however, the Tegra 3's ultra-low-power companion core should give it an edge when doing only light tasks or on standby.
Keep in mind, the Tegra 3 in the TF Prime has a maximum clock speed of 1300 MHz with all four cores active, and a single core can reach 1400 MHz. All things being equal, a difference of 100-200 MHz on a 1 GHz+ processor is practically unnoticeable in daily usage.
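To put those clock deltas in perspective, it's simple arithmetic on the figures above:

```python
# Tegra 3 all-core max vs the rival chip's 1.4-1.5 GHz clocks
base = 1300  # MHz
for rival in (1400, 1500):
    print(f"{rival} vs {base} MHz: +{100 * (rival - base) / base:.1f}% at best")
# ~7.7% and ~15.4%: real, but invisible in the lightweight tasks described above.
```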
almightywhacko said:
The maximum clock speed isn't all that important...
Wow! Thanks for taking the time to break it down for me like that! I understand exactly where you're coming from and now have to agree.
demandarin said:
Quad core is better by far.
At least that is what Nvidia would like you to think.
The Tegra 3 uses an older ARM core for its quad-core design, while Qualcomm uses its own ARM-instruction-set-compatible core for its Krait S4 design. In most current benchmarks the Qualcomm Krait S4 dual core seems to outpace the Tegra 3 by quite a large margin. And of course Krait will be expanded to quad core later this year.
http://www.anandtech.com/show/5563/qualcomms-snapdragon-s4-krait-vs-nvidias-tegra-3
Dave_S said:
The Tegra 3 uses an older ARM core for its quad-core design, while Qualcomm uses its own ARM-instruction-set-compatible core...
There's already another thread on what you just mentioned, and the Krait claims were easily shot down. Tegra 3 is still the better chip overall. Plus, the Krait GPU was subpar compared to the Tegra 3's. We have more links and stuff in the other thread showing the Prime is still right up there.
demandarin said:
There's already another thread on what you just mentioned, and the Krait claims were easily shot down...
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks (not self-serving white papers) would be appreciated. I have glanced at your Tegra 3 thread, but did not read it all the way through once I saw that it seemed to depend heavily on a white paper rather than real comparison tests. It is true that the current GPU the Krait uses is not as fast as the one in the Tegra 3, but graphics is only one element of overall performance. The only benchmarks on which I have seen the Tegra beat the Krait were ones that emphasized more than two threads, and then not by much.
Dave_S said:
As unlikely as that seems considering the slower cores that Nvidia uses, links to real benchmarks would be appreciated...
It's not my Tegra 3 thread I'm talking about; I think it's the Prime alternatives thread created by shinzz. We had a huge debate over it. There are more benchmarks and supporting arguments in that thread. Check it out if you get the chance.
demandarin said:
It's not my Tegra 3 thread I'm talking about; I think it's the Prime alternatives thread created by shinzz...
Thanks, will do. Gotta run to a doctor's appointment right now though.
I frankly think the power savings from the fifth core are mostly hype. According to many battery tests I've read online, and my own experience with my Prime, it doesn't get much better battery life than dual-core tablets.
Quad core is better for the future, though backwards compatibility can be a problem... it's definitely good for a tablet.
jedi5diah said:
Quad core is better for the future, though backwards compatibility can be a problem...
Here is another benchmark showing that there is at least one current dual core that can soundly beat the Nvidia quad core in benchmarks that are not heavily multithreaded.
http://www.extremetech.com/computin...ragon-s4-has-the-competition-on-the-defensive
Buddy Revell said:
I frankly think the power savings from the fifth core are mostly hype...
No dual-core Android tablet's battery lasts longer than an iPad 1's. My Prime easily outlasts my iPad in battery life. The battery hype is real: tons of people here are seeing 9-11+ hours on a single charge with moderate to semi-heavy use on balanced mode, and even longer in power-saving mode.
demandarin said:
No dual-core Android tablet's battery lasts longer than an iPad 1's...
Really? I get 9-12 hours constant use on balanced. Plus 6 more with the dock.
Sent from my PG8610000 using xda premium
I think if Krait were to come out in quad core it would beat the Tegra 3; otherwise, no. Also, they are supposed to improve the chip with an updated 3.xx GPU in future releases. And benchmarks have been proven wrong in the past, so who knows? It's not like benchmarks can determine real-life performance, nor does the average user need that much power.
Companion core really does work
jdeoxys said:
Really? I get 9-12 hours of constant use on balanced, plus 6 more with the dock.
Strange. We just started uni here (Australia) and I've been using my Prime all day, showing it off to friends (to their absolute amazement!): Glowball, the camera effects with eyes, mouth etc., 2 hours of lecture typing, gaming on the train. I watched a few videos and an episode of Community, played music on the speaker for about 40 minutes, web-browsed, and so on. I started using it lightly at 9 am (only properly at, say, 1:30 pm), it's 10:00 pm now, and GET THIS:
72% battery on the tablet and 41% on the dock. It's just crazy, man. No joke, it just keeps going. I can't help but admit the power saving must be real :/
Edit: Whoops, I quoted the wrong guy, but you get the idea.
That's what I'm saying. Battery life on the Prime is great. Add a dock and battery life is sick!
I do agree a quad-core variant of Krait or the S4 will give Tegra 3 a really good battle. Regardless, I'm more than satisfied with the power of Tegra 3. I'm not the type who, as soon as I see a newer or higher-spec tab, feels like mine is useless or outdated. We have development going hard on this device now. Just wait till the 1.8-2 GHz+ overclock ROMs and kernels drop; then we would give even new higher-clocked quad-core chips a good run.
Above all of that, Android needs more apps developed to take advantage of the more powerful chips like Tegra 3 and those upcoming. Software is still trying to catch up to hardware specs; Android apps haven't even all been made to take advantage of Tegra 2's power yet, lol. With Nvidia and Tegra 3 we have an advantage, because developers are encouraged to make apps and games that exploit Tegra 3's power.
Regardless, we're all Android. We need to focus on the bigger enemy, Apple and iOS.

iPad 4 vs 5250 (Nexus 10 SoC) GLBenchmark full results. UPDATE: now with Anandtech!

XXXUPDATEXXX
Anandtech has now published its performance preview of the Nexus 10; let the comparison begin!
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Well, the first full result for the iPad 4 has appeared on GLBenchmark, so I have created a comparison with the Samsung Arndale board, which uses exactly the same SoC as the Nexus 10 and so will be very close in performance to Google's newest tablet. GLBenchmark, as its name suggests, tests OpenGL graphics performance, which is an important criterion for gaming.
Which device wins? Click the link to find out.
http://www.glbenchmark.com/compare....ly=1&D1=Apple iPad 4&D2=Samsung Arndale Board
If you're really impatient: the iPad 4 keeps its lead in tablet graphics. The Nexus 10 may perform slightly better in final spec, but the underlying low-level performance will not change much.
I've also made a comparison between the iPad 3 & 4.
Interestingly, the in-game test GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p), which runs independent of native screen resolution, shows the following:
iPad 4: 48.6 FPS
iPad 3: 25.9 FPS
5250 : 33.7 FPS
So the iPad 4 is twice as fast as its older brother; the Exynos will probably score nearer 40 FPS in final spec, with new drivers and running 4.2 (the board runs ICS, though Jelly Bean did not really boost GL performance over ICS). What is interesting is that the iPad 4, whose GPU is supposedly clocked at 500 MHz vs 250 MHz in the iPad 3, does not perform twice as fast in the low-level tests.
Fill rate, triangle throughput, vertex output etc. are not double the iPad 3's, so although the faster A6 CPU helps, I reckon a lot of the improvement in the Egypt HD test comes from improved drivers for the SGX 543MP4 in the iPad 4. The Galaxy S2 received a big jump in GL performance when it got updated Mali drivers, so I imagine we should see good improvements for the T604, which is still a new product and not as mature as the SGX 543.
http://www.glbenchmark.com/compare....tified_only=1&D1=Apple iPad 4&D2=Apple iPad 3
I'd imagine the new iPad will keep the lead in benchmarks for now, as it'll take Sammy and Google some time to optimize the beast. In the end, however, actual app and user-interface performance is what matters, and reports on the Nexus 10 are overwhelmingly positive.
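A quick sanity check on the numbers quoted above (the FPS values are just those three results):

```python
# GLBenchmark 2.5 Egypt HD offscreen (1080p), FPS from the post above
fps = {"iPad 4": 48.6, "iPad 3": 25.9, "Exynos 5250": 33.7}

print("iPad 4 vs iPad 3:", round(fps["iPad 4"] / fps["iPad 3"], 2), "x")       # ~1.88x
print("iPad 4 vs 5250:  ", round(fps["iPad 4"] / fps["Exynos 5250"], 2), "x")  # ~1.44x
```

So "twice as fast" is really 1.88x, and the gap to the 5250 is 1.44x before final drivers.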
So the Mali T604 didn't come in at 5x the Mali 400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with the Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after a firmware update it was way ahead of the other SoCs in Android handsets.
hung2900 said:
So the Mali T604 didn't come in at 5x the Mali 400, or maybe Samsung underclocked it...
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Damn..now I have to get an iPad.
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers as optimized for the SoC as expected on the N10. Samsung has this habit of optimizing drivers in later updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic, even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011; check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing worse-than-Tegra-2 numbers! It was that bad initially.
Then look at what happened when Anand finally reviewed the device a few months later:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher, and they could have been higher still if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure that would not make such a big difference in the numbers.
If you check the numbers for the SGS2 now, there's another 50% improvement in performance since Anand did his review.
Check the SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This just shows how big an effect driver optimization can have on performance. My point is that we have to wait for proper testing on the final release of the N10.
Also, look closely at the fill rate in the Arndale board test. It's much lower than expected: ARM says a Mali-T604 clocked at 500 MHz should reach a fill rate of 2 GPixels/s, yet it's showing only about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU at 533 MHz, so it shouldn't be coming in so low.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
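The fill-rate expectation is simple arithmetic. A minimal sketch, assuming the 4-core T604 writes one pixel per clock per core (as the ARM blog linked above describes), with the measured figure back-derived from the "about 60%" in the post, so treat it as an assumption:

```python
cores, px_per_clk, mhz = 4, 1, 533           # Exynos 5250 figures quoted above
peak = cores * px_per_clk * mhz / 1000       # GPixels/s
print(f"theoretical peak: {peak:.2f} GPixels/s")   # ~2.13

measured = 1.2   # assumption: ~60% of ARM's 2.0 GPixels/s quote
print(f"efficiency: {100 * measured / peak:.0f}%")
```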
hung2900 said:
So the Mali T604 didn't come in at 5x the Mali 400, or maybe Samsung underclocked it...
In areas where the Mali 400 lacked performance, like fragment- and vertex-lit triangle output, the T604 is comfortably 5x the performance. Similarly, in the low-level tests the iPad 4 is not a solid 2x the power of the iPad 3, yet it achieves twice the FPS in Egypt HD. I suspect drivers are a big factor here, and the Exynos 5250 will get better as its drivers mature.
hot_spare said:
I believe we have to take the Arndale board numbers with a pinch of salt... Let's wait and see how the final device shapes up.
I agree with most of what you have said. On the GPixel figure, this is like ATI GPU teraflops figures always being much higher than Nvidia's: in theory, with code written to suit the device perfectly, you might see those high figures, but in reality the Nvidia cards with lower on-paper numbers equaled or beat ATI in actual game FPS. It all depends on whether the underlying architecture is as efficient in real-world tests, versus maximum technical numbers that can't be replicated in actual game environments.
I think the current resolution of the iPad / Nexus 10 is actually crazy, and we would see prettier games at lower resolutions; the amount of resources needed to drive those high-MP displays means lots of compromises in effects, polygon complexity etc. to keep decent FPS, especially when you consider that driving Battlefield 3 at 2560x1600 with AA and high textures requires a PC burning 400+ watts, not a 10-watt SoC.
Overall, considering the Nexus 10 has twice the RAM for game developers to use and faster CPU cores, games should look equally nice on both; the biggest factor will be the level of support game developers give each device, and the iPad will probably be stronger in that regard. Nvidia was able to coax prettier games out of Tegra 3 through developer support; hopefully Google won't forget the importance of this.
What's the point of speculation? Just wait for the device to be released and run all the tests you want to confirm performance. It doesn't hurt to wait.
BoneXDA said:
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too?...
Both the A9 and A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs become powerful enough to max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
I can't wait to see this Exynos 5250 in a 2.0 GHz quad-core variant in the semi-near future... Ohhhh, the possibilities. Samsung has one hell of a piece of silicon on its hands.
Chrome
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth, not an insult; the stock AOSP browser would be so much faster.
Turbotab said:
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth...
True... Chrome on mobile is still not up to desktop level yet. I believe it's v18 or something, right? The stock browser would get much better results in SunSpider/Browsermark. The N4 numbers look even worse; somewhere the optimizations aren't working.
The GLBenchmark tests are weird. The Optimus G posts much better results than the N4 when both are the same hardware; in fact the N4 scores lower than the Adreno 225 in some cases. This is totally whacked.
For the N10, I am still wondering about the fill rate. Need to check what you guys say about this.
Is it running some debugging code on the devices at this time?
Turbotab said:
Both the A9 and A15 use the same instruction set architecture (ISA), so no, they won't...
Actually, no. The A8 and A9 are the same ISA (ARMv7), while the A5, A7 and A15 are in another group (ARMv7a).
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali T604 at 750 MHz would destroy everything.
hung2900 said:
Actually, no. The A8 and A9 are the same ISA (ARMv7), while the A5, A7 and A15 are in another group (ARMv7a).
I have to disagree; this is from ARM's info site:
The ARM Cortex-A15 MPCore processor has an out-of-order superscalar pipeline with a tightly-coupled low-latency level-2 cache that can be up to 4MB in size. The Cortex-A15 processor implements the ARMv7-A architecture.
The ARM Cortex-A9 processor is a very high-performance, low-power, ARM macrocell with an L1 cache subsystem that provides full virtual memory capabilities. The Cortex-A9 processor implements the ARMv7-A architecture and runs 32-bit ARM instructions, 16-bit and 32-bit Thumb instructions, and 8-bit Java bytecodes in Jazelle state.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.cortexa/index.html
Keion said:
Once we get rid of the underclock, no tablet will be able to match it...
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Do remember that this awesome resolution taxes the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Harry GT-S5830 said:
Do remember that this awesome resolution taxes the GPU a lot...
Indeed it does, but not in offscreen testing, where Anand made his proclamation.
Sent from my iPad Mini using Tapatalk
Hemlocke said:
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Nah, I think we can beat that too.
Drivers + OC.

Nexus 10's Mali T604 GPU specs

Does anyone know an app or a website with the Mali T604's full specifications, like GPU clock speed, VRAM, GFLOPS, anything else? Thank you in advance. (I already know it's quad-core, and I read that it's 423 MHz with 512 MB of VRAM, but I need confirmation, lol.)
Sent from my shooter using xda app-developers app
http://www.arm.com/products/multimedia/mali-graphics-plus-gpu-compute/mali-t604.php shows some of the features, but nothing like clock speed or GFLOPS.
Judging from http://forum.xda-developers.com/showthread.php?p=39719050#post39719050 it would seem the GPU can clock at 100 MHz, 160 MHz, 266 MHz, 400 MHz, 450 MHz, and 533 MHz (not sure if this is custom behavior or stock),
and custom kernels have shown we can EASILY (like, on stock volts easy) overclock from 533 MHz up to 720 MHz. This GPU can become a real powerhouse as it gets clocked higher, and I am thinking that if we overvolted it enough we could probably even have it running at 1 GHz.
As for VRAM, it doesn't have any: VRAM is shared with system RAM, and something like 1 GB of system memory is reserved for the GPU on this tablet. People theorize that it dedicates so much because of our huge resolution, and that lesser devices would not need to hoard as much memory.
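If you're rooted, you can read the GPU's DVFS table yourself through the generic Linux devfreq interface instead of guessing. A minimal sketch; the Mali node's name varies by kernel, so the paths here are assumptions:

```python
import glob

# Every devfreq device publishes its frequency table and current clock.
for dev in sorted(glob.glob("/sys/class/devfreq/*")):
    try:
        freqs = open(dev + "/available_frequencies").read().split()
        cur = int(open(dev + "/cur_freq").read())
        print(dev.split("/")[-1],
              "steps:", [int(f) // 1_000_000 for f in freqs], "MHz,",
              "now:", cur // 1_000_000, "MHz")
    except OSError:
        pass  # node without a published frequency table
```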
EniGmA1987 said:
...and I am thinking that if we overvolted it enough we could probably even have it running at 1 GHz.
That'd be insane, lol... my desktop GPU (Radeon HD 7850) is factory OC'd and isn't even at 1 GHz.
espionage724 said:
That'd be insane, lol... my desktop GPU (Radeon HD 7850) is factory OC'd and isn't even at 1 GHz.
Yeah, but you can't compare MHz between architectures very easily; your desktop card has WAY more power than this tablet-grade GPU. Makes me wish I could get my hands on a Mali T628 though. With the same OC we have now, I could see that thing blowing away anything else on the market or coming out soon.
Unfortunately, Ktoonsez said it looks like our GPU frequency table is maxed out, so I don't know if we will be able to OC higher, whether or not the GPU is capable of it.
GPU clock speed isn't always THAT important. Just look at the GTX Titan: it's only 700-800 MHz, yet it's the world's fastest GPU.
Sent from my shooter using xda app-developers app
Afroninja said:
GPU clock speed isn't always THAT important. Just look at the GTX Titan...
At the risk of turning this into a desktop GPU thread: I believe AMD's 7990 currently takes the spot as the world's fastest GPU. I'm almost certain it slaughters the Titan at compute, and pretty sure it beats the Titan in most gaming benchmarks. In terms of frame latency, though, AMD might be lacking, but not for long :good:
I do agree, though, that clock speed isn't that important in most cases. I almost got a Radeon HD 7770 GHz Edition card just because of the 1 GHz core clock, but the 7850 I got still outperforms it (to be fair, though, it's only 50 MHz short of 1 GHz).
Regardless, with the Nexus 10's resolution I'm pretty sure we need a nice balance of memory frequency and GPU clock speed. The GPU can be as fast as it wants, but it won't help much if memory bandwidth is being choked :/
Afroninja said:
GPU clock speed isn't always THAT important. Just look at the GTX Titan...
That's why I said the T628 could be the fastest if we can OC it the same. Our current GPU has "four cores", and at a 720 MHz GPU clock we can push 2560x1440 pixels at 58 frames per second average on the Unreal 3 engine. The T628 is the same as what we have but with twice as many cores, so twice the computing resources. Sure, there are other things coming out that are pretty fast, but think of what 2x the power of our current GPU could do. At that point, though, espionage724 would be right: we would probably hit a memory bottleneck, so we would need to step up from DDR-1600 to DDR-2133. Still, testing I have done shows we are only just starting to hit a memory bottleneck with our GPU at 720 MHz, and if we OC the memory up to DDR-1728 we have lots of extra bandwidth to spare. So moving the memory up to 2133 would remove any bottleneck that could ever show up in that area, even with twice as many GPU cores. (Rough numbers for this are sketched below.)
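The bandwidth argument is easy to put rough numbers on. A back-of-the-envelope sketch: the overdraw factor is an assumption, and real traffic also includes textures and geometry, so this only shows the direction of the math:

```python
# Framebuffer write traffic at the figures quoted above
w, h, bytes_px, fps = 2560, 1440, 4, 58
overdraw = 3    # assumption: average writes per pixel per frame
print(f"framebuffer: ~{w * h * bytes_px * fps * overdraw / 1e9:.1f} GB/s")  # ~2.6

# Peak DRAM bandwidth = transfer rate x bus width (Nexus 10: 64-bit LPDDR3)
for mts in (1600, 1728, 2133):
    print(f"DDR-{mts}: {mts * 8 / 1000:.1f} GB/s peak")   # 12.8 / 13.8 / 17.1
```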

Overclocking / Undervolting GPU and CPU?

Hi all,
is there a way to undervolt and/or OC the CPU and GPU?
I remember reading an article a month ago about a GPU OC, but somehow that was it; no way to download the mentioned app, etc.
Is there anything for the Mi 10 / SD865?
Snapdragons don't overclock because they're not underclocked.
shivadow said:
Snapdragons don't overclock because they're not underclocked.
According to this news page, we still have some potential left in our SD865:
Xiaomi Mi 10 Overclocking Has Improved Significantly
The game performance has also been improved, in addition to the higher benchmark scores of the overclocked Snapdragon 865 models.
www.igeekphone.com
Here is even an XDA link for another phone, without any links to the app itself...
Abandoned
forum.xda-developers.com
So there is a way to OC and UV the SD865.
That isn't what it appears to be to me. It appears to be an ongoing project to OC/UV the Snapdragon CPU and GPU, and so far all they've managed to change is the RAM clock speed. The pro of undervolting the RAM is less heat, but the con is bottlenecking under load, because RAM voltage correlates with clock speed. If you overvolt the RAM it produces more heat and pushes more data, but stability goes out the window completely. This is NOT CPU/GPU core clocking and won't have any effect on the cores whatsoever, only on data throughput.
I'll stand by my word. Coming from HTC to Xiaomi, both Snapdragon phones: you can't overclock a Snapdragon because they're not underclocked. It has been that way for a long time. What they advertise the chip as capable of is what the chip is capable of by design, and that is how it will actually behave in the field.
If you want proof, just browse around the later HTC phones and you won't see anything about core clocking, and probably not RAM volting either.
shivadow said:
That isn't what it appears to be to me. It appears to be an ongoing project to OC/UV the Snapdragon CPU and GPU...
No offense, but have you read either of the two links I posted?
They literally explain that they changed/overclocked the GPU frequency to 865 MHz;
the stock frequency should be 587 MHz.
They did overclock the GPU.
They even proved it by showing some benchmarks and comparing it to the SD888.
And this is the first time I've read that SDs are not overclockable...
865 MHz is the bus and RAM frequency; the cores are in the GHz range.
All that has been achieved is higher throughput, and that equates to more heat and more power used. This stuff is well researched.
I honestly don't get where you are getting the RAM OC thing.
They do not use DDR7 or DDR8 on a mobile SoC, because the tech isn't there yet...
If you search for the Adreno 650 GPU you will see in its specs that it is clocked at 587 MHz (and not the RAM).
They are of course adjusting the RAM timings too, but the XDA link says the following:
"2. edit your settings in the 3 tables. (start with adding the extra step form 865+ to 865)
3. press "Save GPU Freq Table" after editing any page, before you move to another."
If you want to stick with your view on this topic, that's fine.
I just want to know where we can get the KonaBess app, because Google only links me to Chinese pages, and somehow this topic isn't as popular as I thought.
Not 100% sure if this link is allowed... https://github.com/xzr467706992/KonaBess/releases/tag/v0.12
Scroll down to Assets and it's in there.
shivadow said:
Not 100% sure if this link is allowed... https://github.com/xzr467706992/KonaBess/releases/tag/v0.12
Thanks a lot!
I don't know why I couldn't find it with Google.
BTW: I hope I really didn't offend you with any of my sentences.
RaZoR No1 said:
Thanks a lot! I don't know why I couldn't find it with Google...
Just a heads up: not all Snapdragons are created equal. That said, there is definitely performance left to gain by OC'ing the Adreno 650 GPU of the 865. I'm currently running a massive 930 MHz on my Adreno 650 and a very small CPU OC, and with that it blows the 865+ away in benchmarks and trades wins with a stock SD888 in CPU and GPU bench scores. With any OC you do, I highly recommend running a stress test before assuming you're stable.
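For anyone wondering what a minimal stability soak could look like: this sketch pegs every CPU core and logs the hottest thermal zone once a second, assuming Python is available on the device (e.g. via Termux) and the standard Linux thermal sysfs. It only soaks the CPU side; loop a GPU benchmark alongside it to test a GPU OC:

```python
import glob, multiprocessing as mp, time

def spin():
    while True:     # pure busy loop; one per core pegs the whole CPU
        pass

if __name__ == "__main__":
    for _ in range(mp.cpu_count()):
        mp.Process(target=spin, daemon=True).start()

    for _ in range(600):    # ten-minute soak; a freeze or reboot = unstable OC
        temps = [int(open(z).read()) // 1000   # millidegrees -> degrees C
                 for z in glob.glob("/sys/class/thermal/thermal_zone*/temp")]
        if temps:
            print("hottest zone:", max(temps), "C")
        time.sleep(1)
```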
1dopewrx05 said:
Just a heads up: not all Snapdragons are created equal. That said, there is definitely performance left to gain by OC'ing the Adreno 650...
Thanks for the heads up; I'm already aware of the "silicon lottery".
I'm amazed how much juice is still left in the SD865, such that OC'd it can even beat the 888 and be more consistent.
Do you use any app to monitor your temps, and how did you OC your CPU? AFAIK KonaBess only allows GPU OC?
1dopewrx05 said:
Just a heads up: not all Snapdragons are created equal. That said, there is definitely performance left to gain by OC'ing the Adreno 650...
Hi, I can't seem to find any tutorial online on how to overclock. Could you help me out? I'm going to order a Black Shark 4 with the Snapdragon 870 soon, which is the best cherry-picked chip of the same family as the 865, which means it's more likely to be a silicon-lottery win... Can we get in touch on Discord? My username is Meli #6318.
Please guide me through modifying the necessary things to overclock (kernel, files...).
Anyway, I still want to overclock the CPU and GPU.
1dopewrx05 said:
Just a heads up: not all Snapdragons are created equal. That said, there is definitely performance left to gain by OC'ing the Adreno 650...
How did you OC the CPU? And is it possible to OC the GPU of the SD870, which is also the Adreno 650? I've heard it's locked by Qualcomm's TrustZone. Is that just a problem on the SD870, or also on the 888 and 8 Gen 1? Thanks in advance.
