Droid DNA benchmarked and lives up to expectations! - HTC Droid DNA

Source : http://mblog.gsmarena.com/htc-droid-dna-snapdragon-s4-pro-chipset-performance-examined-in-detail/
BenchmarkPi: 263
We started with the BenchmarkPi test, which reflects per-core CPU performance. As expected, the Krait architecture helped the DROID DNA beat its quad-core rivals. More impressively, though, it managed to beat the LG Optimus G despite packing the same CPU cores as the LG flagship.
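A BenchmarkPi-style test boils down to timing how long a single thread takes to compute pi to a fixed precision, so only one core's speed matters. A minimal Python sketch of the idea (the algorithm, digit count, and timing loop are my own choices for illustration, not BenchmarkPi's internals):

```python
import time

def pi_digits(n: int) -> str:
    """Compute n digits of pi on a single core using Machin's formula
    with plain integer arithmetic (no threads involved)."""
    scale = 10 ** (n + 10)  # 10 guard digits to absorb truncation error

    def arctan_inv(x: int) -> int:
        """scale * arctan(1/x) via the alternating Taylor series."""
        total, term, k = 0, scale // x, 1
        while term:
            if k % 4 == 1:
                total += term // k
            else:
                total -= term // k
            term //= x * x
            k += 2
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi // 10 ** 10)  # drop the guard digits

start = time.perf_counter()
digits = pi_digits(2000)
elapsed = time.perf_counter() - start
print(digits[:10], f"computed in {elapsed:.3f}s")  # lower time = better score
```

Because the loop never leaves one thread, extra cores don't help; only per-core speed (clock and IPC, i.e. the Krait advantage) moves the number.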
Linpack: 602
Linpack is another CPU-testing benchmark, but it supports multi-threading and thus gives a better idea of the quad-core architecture's overall processing power. Here the HTC DROID DNA lost to the Optimus G by a very small margin, but blew the rest of the competitors out of the water.
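The single-thread/multi-thread distinction is the whole point here: a quad-core only pulls ahead when the benchmark splits its work across cores. A rough Python illustration of that split using multiprocessing (the workload is a stand-in sum, nothing to do with Linpack's actual linear-algebra kernels):

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One core's share of the work: sum of squares over a slice."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum(n: int, workers: int = 4) -> int:
    """Split [0, n) into one chunk per core, compute in parallel, reduce."""
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    assert parallel_sum(n) == sum(i * i for i in range(n))
    print("4-way split matches the serial result")
```

With four chunks in flight, all four Krait cores can be busy at once, which is why Linpack's multi-threaded score tracks overall chip throughput rather than per-core speed.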
Quadrant: 8513
The Quadrant benchmark tries to evaluate the overall processing power of the chipset – CPU, GPU and memory performance. The HTC DROID DNA managed to top the competition here, which is a really impressive achievement given that its screen resolution poses a bigger challenge to the graphical power.
AnTuTu: 11551
The HTC DROID DNA didn't do as well in AnTuTu, which is our other all-round benchmark. The smartphone did beat the LG Optimus G and the Galaxy S III, but came in behind the rest of the quad-core top dogs.
GLBenchmark 2.5 (1080p off-screen): 30
The 3D graphics department is handled by the Adreno 320 GPU. The first test we ran was NenaMark 2, but just like its competitors the HTC DROID DNA managed to hit the 60fps limit of the screen, rendering the results moot.
GLBenchmark let us get a better idea of the graphics power of the DROID DNA – the smartphone got the highest score of any smartphone we have tested so far. Given that its screen resolution is much higher than its competitors', though, it can't compete as successfully in on-screen tests.
SunSpider: 1421
BrowserMark: 140270
The only area of the HTC DROID DNA's performance that wasn't exactly inspiring was the browser tests. The smartphone came some way off the leaders in both SunSpider and BrowserMark, which is rather hard to understand given that it runs Android Jelly Bean with its optimized JavaScript engine. We suspect this is caused by some kind of driver issue with the relatively new Snapdragon S4 Pro chipset, and we expect to see better results from it in the future.

nikufellow said:
Source : http://mblog.gsmarena.com/htc-droid-dna-snapdragon-s4-pro-chipset-performance-examined-in-detail/
[full benchmark rundown quoted above]
Awesome! What about Geekbench 2?
Sent from my CM9 HTC Thunderbolt from Tapatalk 2.4

Don't know, mate, will let you know if I get that from any other sources.
Btw GB 2.5 is there in the list!

nikufellow said:
Don't know, mate, will let you know if I get that from any other sources.
Btw GB 2.5 is there in the list!
Oh didn't see that.
Sent from my CM9 HTC Thunderbolt from Tapatalk 2.4

Is this what you are looking for?
Sent from my HTC6435LVW using xda app-developers app

Hey, thanks north!

Related

Why are benchmarks so low?

I've noticed that the Evo 3D has been scoring lower than the Sensation, which is spec'd the same performance-wise: exactly the same except for the Evo having more RAM. So I don't really understand why it would be scoring lower.
Also, if you're here to try and dispute the credibility of benchmarks, leave now, because that's not the point of this topic.
Sent from my G2X
Well, the Evo 3D does have the ability to do 3D, so I imagine it will take up some resources, but I have a feeling that the benchmark scores will only get better as HTC and Sprint release updates and fixes for it.
Probably the bloatware
Benchmarks are boo boo! For a benchmark to read correctly the cores need to be ramped up to max for the test. The app does not draw full ramp from the dual cores. Plus they are asynchronous; once we root and have kernel source for added tweaks we will blow Tegra away (even with Tegra tweaked)!
For the most part synthetic benchmarks are not really useful. How much are they off anyways? I'll bet you'll never notice the difference.
Swyped from my Atari 2600
because you touch yourself at night.
cordell12 said:
Benchmarks are boo boo! For a benchmark to read correctly the cores need to be ramped up to max for the test. The app does not draw full ramp from the dual cores. Plus they are asynchronous, once we root and have kernel source for added tweeks we will blow tegra away (even with tegra tweeked)!
Pretty much what I came in to say. The Nexus S scores don't blow you away before you root either, but once rooted, it is capable of truly amazing power.
Pretty much every review says the Evo 3D feels much faster and much more fluid than the Sensation.
hondarider525 said:
because you touch yourself at night.
LMAO!
10 char
The processor is asynchronous and the cores are able to run at different speeds for different tasks. The programs doing the testing are better suited to your normal synchronous processor, where both cores are always running at full speed all the time.
Code that takes advantage of the async design and its methods of reaching max must be included in the programming before these tools will ever be able to measure the chip's full potential.
In those tests I could guarantee you one core is running at max and one is not, if at all; and if it is, it's only by a little, as the program has not told both cores to run at max on an async chip.
Add the qHD screen and the program would need to account for that too.
Imagine if the screen was AMOLED or just 800 x 480; this thing would be a brutal beast.
But at the end of the day I love HTC phones.
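The claim that the cores run at independent speeds is checkable: on a rooted Android (or any Linux box), each core's current clock is exposed under sysfs. A hedged sketch; the paths follow the standard Linux cpufreq layout, and machines without cpufreq simply return an empty dict:

```python
import glob

def core_frequencies() -> dict:
    """Read each online core's current clock (kHz) from the cpufreq sysfs
    interface; asynchronous SoCs can show a different value per core."""
    freqs = {}
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
        core = path.split("/")[5]  # the 'cpuN' path component
        try:
            with open(path) as f:
                freqs[core] = int(f.read().strip())
        except (OSError, ValueError):
            pass  # core went offline or the file was unreadable
    return freqs

print(core_frequencies())
```

On a synchronous design the entries track together; seeing, say, cpu0 pinned at max while cpu1 idles low is exactly the behavior the benchmark apps above fail to account for.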
HTC Sense is known to produce low benchmark scores. Once AOSP gets on this baby, it will fly through the irrelevant benchmarks like nothing.
Not only that but benchmarks are known to produce pointless infighting and petty bickering over measures that are not only highly suspect but also not related to actual use...
...or so the old wives' tale goes...
Sent from my PC36100
xdmds said:
I've noticed that the evo 3d has been scoring lower than the sensation, which is spec'd performance-wise. Exactly the same except for the evo having more ram. So I don't really understand why it would eb scoeing lower
Also if you're here to try and dispute the credibility of benchmarks leave now because that's not the point of this topic.
Sent from my G2X
Actually, check out Anandtech's bench of the Evo 3D and Sensation from a couple of weeks ago, and then check out the same bench of those 2 devices when they tested the Droid 3 a couple of days ago.
Comparing the scores, the 3vo scored the same both times. First time it was higher than the Sensation, and second time lower. So somewhere in between, the Sensation got a software update that made it score higher on those benchmarks. I'm guessing we'll see the same kind of improvement with the 3vo in time.
leaving now. Just beating a dead horse here, this has been debated a million times.
You're holding it wrong?
NewZJ said:
You're holding it wrong?
Yeah he should call up for his free rubber band.
freeza said:
HTC Sense is known to produce low benchmark scores. Once AOSP gets on this baby, it will fly through the irrelevant benchmarks like nothing.
While I did run AOSP on my Evo, no way will an AOSP ROM touch my Evo 3D. Sense 3.0 is great and I doubt AOSP will support 3D.
Sent from my GT-P7510 using XDA Premium App
I hate people who point out benchmarks on a phone... :|
LOL, if HTC's scrap Snapdragon dual core had good benchmarks I bet all of you would be posting about how elite your phone is and how good it does in benchmarks, but since it sucks you say benchmarks don't matter. Don't fool yourself, benchmarks do matter. Yes, Quadrant can be tricked by unlocked phones with edits, but benchmarks run on Tegra 2 & crap Snapdragon using Smartbench 2011 (which does use both cores) give realistic performance.
Evo 3D:
2089 CPU
1648 GPU (lol, slower than Galaxy S 1)
Tegra 2 (stock Atrix 2.3.4 with crap Motoblur):
2737 CPU
2661 GPU
Tegra 2 overclocked:
3989 CPU
2900 GPU
shep211 said:
LOL if htc's scrap snapdragon duel core had good benchmarks I bet all of you would be posting about how elite your phone is...
Bro, it's because Tegra manages things differently. Tegra uses both cores to do one single task, while the EVO 3D chipset is asynchronous. This means when you run a benchmark only one core is working on that application; the other core is running other processes to keep your EVO lag-free and running smoothly. A benchmark is only a number anyway.
Remember this forever, though: benchmarks are like a girl in a bikini, they show a lot but not quite everything.
Sent from a dual core beast 3VO. Do this on your iFail 4

iPhone 4S faster than Galaxy SII?

I picked up my Galaxy SII after seeing the disappointing specs on the iPhone 4S. But today I read preliminary benchmarks and it smokes the SII.
Sorry, unable to post a link yet.
How can an 800 MHz CPU beat the SII's 1.2 GHz processor?
I am confused. Am I missing something?
026TB4U said:
I picked up my Galaxy SII after seeing the disappointing specs on the iPhone 4S. But today I read preliminary benchmarks and it smokes the SII.
Sorry, unable to post a link yet.
How can a 800 mhz cpu beat the SII's 1.2 ghz processor?
I am confused. Am I missing something?
Because benchmarks don't mean jack ****.
Look at how Quadrant scores are all over the damned place with no correspondence to actual usability.
It's all about the software. I expect some good gains when moving over to ICS.
Edit, corrected iPhone processor family name.
Trying to benchmark across different operating systems and hardware is not easy to accomplish, but I can tell you that an (Apple A5) A9 800 MHz dual-core Samsung processor is not faster than an (Exynos) A9 1.2 GHz dual-core Samsung processor.
Yes, both phones' processors are made by Samsung.
Sent from my SAMSUNG-SGH-I777 using XDA App
Entropy512 said:
Because benchmarks don't mean jack ****.
Look at how Quadrant scores are all over the damned place with no correspondence to actual usability.
+1 10 char
dayv said:
Trying to benchmark across different operating systems and hardware is not easy to accomplish, but I can tell you that an A5 800 mhz duel core Samsung processor is not faster than A9 1.2 ghz duel core Samsung processor.
Yes both phones processors are made by Samsung
Sent from my SAMSUNG-SGH-I777 using XDA App
This is true but your wording is a bit confusing. An "Apple A5" processor is a dual-core A9 processor with a PowerVR 543MP2 GPU; a Cortex-A5, by contrast, is an ARM core made for ultra low power. Basically both the Apple A5 and the Exynos processor have the same processor architecture, but there are many other factors that can influence performance, like the GPU, memory, cache, decoders, etc. In this case I think the main discrepancy will be the software that's so different between the two.
footballrunner800 said:
This is true but your wording is a bit confusing...
I did not doubt that both processors were of the same type and architecture, but I did not realize that Apple A5 was just an Apple brand and that both processors were A9. Both are still Samsung-made processors, one clocked at 800 MHz, one clocked at 1.2 GHz.
Thank you for the correction
Sent from my SAMSUNG-SGH-I777 using XDA App
The iPhone is probably utilizing the processor to its full extent, while Gingerbread (and Android in general) does a terrible job of utilizing the power of the hardware.
ICS should see a nice performance increase on dual cores.
OP is probably referring to the benchmarks for gaming. It's not the processor that lacks on the GS2. If the iPhone 4S does come with the same A5 as the iPad 2, its GPU will smoke the Mali-400 in the GS2 in almost every benchmark test (in most benchmarks, it is twice as fast as the Mali-400). Just check out the review of the international GS2 by Anandtech.com with benchmark comparisons of the GS2 vs iPad 2 and other smartphones. It is not the Quadrant or Linpack benchmarks but rather the professional benchmarks measuring fill rates and triangle throughputs etc.
Processor-performance-wise, it is probably a wash because both are based on the same ARM design.
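The clock-speed puzzle in the OP has a simple arithmetic core: runtime is instructions / (IPC x clock), so a lower-clocked chip wins whenever higher per-cycle throughput, or software that issues fewer instructions, outweighs the clock deficit. A toy Python illustration; the IPC values and instruction count are made up for the example, not measured from either phone:

```python
def run_time(instructions: float, ipc: float, clock_hz: float) -> float:
    """Seconds to retire a workload at `ipc` instructions per cycle."""
    return instructions / (ipc * clock_hz)

workload = 2e9  # 2 billion instructions, arbitrary

# Hypothetical: the 800 MHz chip retires more work per cycle.
t_slow_clock = run_time(workload, ipc=2.0, clock_hz=800e6)
t_fast_clock = run_time(workload, ipc=1.2, clock_hz=1200e6)

print(f"800 MHz @ IPC 2.0: {t_slow_clock:.2f}s")  # 1.25s
print(f"1.2 GHz @ IPC 1.2: {t_fast_clock:.2f}s")  # 1.39s, slower despite the clock
```

The same formula also covers the software angle: a better-optimized OS or browser simply shrinks the instruction count for the same user-visible work.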
Although I do agree that benchmarks are just benchmarks, I am still surprised.
Is it true that Gingerbread only utilizes one cpu? And will Ice Cream Sandwich utilize both?
And BTW, I am by no means an Apple fanboy. I had been waiting for this phone to come out to replace my dinosaur BB 9000, so I wouldn't have to get an iPhone and deal with iTunes.
iOS5 > gingerbread. Sad but true.
Hope ICS comes out soon. It seems to be on par from what I hear.
Sent from my Galaxy S II using Tapatalk
I think I saw the benchmark in question - it was a GPU-heavy benchmark for a workload that most users won't experience 99% of the time. (It was a GPU-bound OpenGL benchmark. The GPU of the iPhone 4S IS faster than ours for 3D work - but unless you do lots of 3D gaming, it's wasted. Also, 3D is kind of a waste on a 3.5" screen.)
Apple has an extremely long history of misleading the public with selective benchmarking. Back in the Pentium II or III days, they claimed one of their machines was twice as fast as an Intel machine clocked at least 50% higher. While I agree that MHz isn't everything, there's a limit to that. In that case, on a single Photoshop benchmark that was optimized for PowerPC by using AltiVec and running non-optimized on the Intel chip (despite an optimized MMX or SSE implementation being available), the Apple did better - and Apple used that to try and make users believe the machine was twice as fast for all workloads.
026TB4U said:
Is it true that Gingerbread only utilizes one cpu? And will Ice Cream Sandwich utilize both?
It is true.
I guess the benchmarking was for JavaScript using the Safari browser, so it's apples vs oranges; they're also two completely different OSes. Let's run Quadrant, if it's available for iOS, and see how it handles. In the meantime enjoy the best and fastest smartphone currently on the market no matter what others say.
Sent from my SAMSUNG-SGH-I777 using xda premium
It could be ten times faster than a GSII, but it still has a 3.5" screen, and i-jail. My wife and kids have iPhone 4s and there is no way I would trade no matter how fast this new one is.
aintwaven said:
It could be ten times faster than a GII, but it still has a 3.5" screen, and I-jail. My wife and kids have Iphone 4's and there is no way I would trade no matter how fast this new one is.
Except for the wife and kids part(I have neither) this. Very much this.
Just ran the SunSpider Javascript on CM7.1. Results seem to be quite a bit better than the ones I see posted on AnandTech. Obviously they were running the GS2 stock but I was surprised to see my numbers so low. Also did the GLBenchmark and while the Egypt was slower, the Pro was faster on CM7.1. Coin flip to me it seems...
Those are just plain synthetic benchmarks; what do they mean for real-life usage? Not a damn thing.
You think all the fashionistas buying the iPhone 4S are gonna care how fast their CPUs are?
footballrunner800 said:
its all about the software. I expect some good gains when moving over to ICS.
That's the problem with Android; it is always "wait for the next version of the software, it'll be better then." How about making a good version now?
Sent from my SAMSUNG-SGH-I777 using Tapatalk
arctia said:
iOS5 > gingerbread. Sad but true.
Hope ICS comes out soon. It seems to be on par from what I hear.
Sent from my Galaxy S II using Tapatalk
Are you high and drunk?? As far as I'm aware, iOS 5 is just playing catch-up to Android. There isn't one feature they implemented that hasn't already been introduced in Android since the Froyo days.
http://www.youtube.com/watch?v=FUEG7kQegSA&feature=share

RLY?! Xperia X10 gets ICS port but not Atrix?

The X10 is garbage! This is outrageous!
Yes really, they got it working. You want it so bad, try porting it yourself.
Sent from my MB860 using XDA App
Cry about it?
If you want it so bad for your phone, learn to port it yourself. Until then, since you rely solely on other people's hard work and sweat, shut up and be patient.
dLo GSR said:
cry about it?
if you want it so bad for your phone, learn to port it yourself. until then, since you rely solely on other peoples' hard work and sweat, shut up and be patient.
Oh snap. That was awesome.
Sent from my MB860 using XDA App
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
firefox3 said:
I might start to look into trying to port it this weekend
Sent from my MB860 using XDA App
Good news man
Sent from my MB860 using XDA App
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel, there doesn't seem to be a huge point until the official updates start to hit for a range of devices.
Sure, Desire, HD, X10, N1 have ports of a sort at the moment, in fact there shouldn't be too many problems getting them working aside from the graphics drivers but they're just for fun with the framebuffer driver given how much of ICS' UI rendering is done with GPU acceleration in mind. You wouldn't want to use it day-to-day. The browser is surprisingly responsive on the Desire though (I'd say moreso than GB, despite the software rendering), as is the Market (the new one always lagged really badly for me on the Desire before) - glimmers of hope for ICS' eventual performance on older devices. The keyboard lags like you wouldn't believe though!
The Atrix should fly under 4.0.1 though, if it ever happens - bearing in mind the fact that the SGX 540 in the Galaxy Nexus is pretty much in a dead heat with Tegra 2's GPU, we've got a lower-resolution screen, and we can overclock past its stock speeds.
Javi97100 said:
Good news man
Sent from my MB860 using XDA App
It's turning out to be harder than I thought... I think no one will get it until official updates come out for other phones.
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel there doesn't seem to be a huge point until the official updates start to hit for a range of devices...
So EGL = GPU driver? If that's the only setback, would it be possible to get an ICS ROM with software rendering as a proof of concept, or are there other pieces missing?
GB/CM7 is pretty good on the Atrix; if we don't see ICS for a few months it doesn't hurt us in any way. I'd like to think most of us can be patient if we lack the skills to help.
I noticed the Captivate got a port of it too, since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone, lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not on the iPhone already, it will be soon. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it, I love my Atrix in its current state.
According to Anandtech, Tegra 2 support is essentially ready, so I think as long as Nvidia releases the source for ICS (libs?), someone will try to port it. Hell, I have a good 5 weeks during break, I might as well try then.
Sent from my MB860 using XDA App
Azurael said:
Being that there are currently no EGL libs for anything except PowerVR SGX devices under ICS yet, and they're closed source and tightly dependent on the kernel there doesn't seem to be a huge point until the official updates start to hit for a range of devices...
Actually, no: despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2, thanks to its higher clock rate, by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Buddy, check out any of the kernels available in the dev thread and you'll see that the GPUs are overclocked.
WiredPirate said:
I noticed the Captivate got a port of it too since i9000 ROMs and Cap ROMs are interchangeable...
Doubt the iPhone will see ICS, the newest model that can run android as far as I know is the iPhone 3G, which was incredibly slow under Gingerbread.
mac208x said:
X10 is garbage! this is outrageous!
222 posts and zero thanks? Is this what you do, go around XDA and post useless threads like the guy complaining about returning home early despite nobody asking him to "to get MIUI ported on his grandma's phone"?
Are you guys related by any chance?
edgeicator said:
Actually, no, despite being a much older GPU, the SGX 540 found in the GNexus outpaces the Tegra 2 due to its higher clock rate by 7% or 45% depending on the GLBenchmark being run. Both GPU tests were done at 720p resolution. Also, you can't overclock the GPU, only the CPU.
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460 thanks to its dual-channel memory controller and high clock (and that's probably the part directly relevant to UI rendering, to be honest, though as I said - lower resolution screen), but the Tegra 2 is quite substantially ahead in geometry-heavy rendering (and games on mobiles are starting to move that way now, following the desktop landscape over the past 5 years or so). Averaged out, the performance of the two is very close.
Plus, as I said, the GPU in my phone is running at 400MHz, which ought to even things out in the GLMark 720p tests somewhat even if they are biased to one architecture or the other. While the GPU in the OMAP4460 may overclock just as well from its stock 400MHz, I'm only really concerned that the phone can run as fast as a stock GNexus, to maybe skip the next generation of mobile hardware and tide it over until Cortex A15-based SoCs on a 28nm process start to emerge with stronger GPUs. I don't really think I'm CPU-performance bound with a 1.4GHz dual-core A9 - and increasing the number of equivalent cores without a really substantial boost in GPU horsepower seems worthless right now, even if ICS takes better advantage of SMP (re: disappointing early Tegra 3 benchmarks - although it does seem GLMark stacks the odds against NVidia GPUs more than other benchmarks?)
Azurael said:
Depends on the benchmark, yes - texture-heavy rendering tends to perform better on the 540 in the OMAP4460...
I would expect the Tegra to beat a nearly 5-year-old GPU, but it only does so in triangle throughput. Tegra just uses a very poor architecture in general. Look at how little actual horsepower it can pull. The Tegra 3 GPU pulls 7.2 GFLOPS @300MHz. The iPad GPU and the upcoming Adreno 225 both pull 19.2 GFLOPS at that same clock speed. I honestly have no idea what the engineers are thinking over at Nvidia. It's almost as bad as AMD's latest Bulldozer offerings. It's really more of Tegra's shortcomings than GLMark stacking the odds. PowerVR's offerings from 2007 are keeping up with a chip that debuted in 2010/2011. The GeForce just doesn't seem to scale very well at all on mobile platforms. But yeah, all Nvidia did with Tegra 3 was slap in 2 extra cores, clock them higher, throw in the sorely missed NEON instruction set, increase the SIMDs on the GPU by 50% (8 to 12), and then tack on a 5th hidden core to help save power. Tegra 3 stayed with the 40nm process whereas every other SoC is dropping down to 28nm, with some bringing in a brand new architecture as well.
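The GFLOPS figures being thrown around here are just lanes x FLOPs-per-lane-per-cycle x clock. Working the arithmetic backwards from the quoted numbers (my calculation; the 2-FLOPs-per-lane figure assumes one multiply-add per cycle):

```python
def gflops(lanes: int, flops_per_lane_cycle: int, clock_mhz: int) -> float:
    """Peak throughput: lanes x FLOPs per lane per cycle x clock (MHz)."""
    return lanes * flops_per_lane_cycle * clock_mhz / 1000

# Tegra 3 GPU: 12 SIMD lanes, assumed 1 MADD (2 FLOPs) per lane per cycle
print(gflops(12, 2, 300))  # 7.2 GFLOPS, matching the figure quoted above

# 19.2 GFLOPS at the same 300 MHz implies 64 FLOPs retired every cycle
print(19.2e9 / 300e6)      # 64.0
```

By this count the competing designs are doing roughly 2.7x the per-cycle floating-point work at the same clock, which is the scaling complaint compressed into a single number.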
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput...
Don't you get tired of writing those long rants? We understand you know something about CPU architecture, and that Tegra isn't the best one out there, but damn man, it's the same thing in every thread. Just chill out and try to stay on topic for once.
Sent from my MB860 using Tapatalk
edgeicator said:
I would expect the Tegra to beat a nearly 5 year old GPU, but it only does so in triangle throughput. …
I think you are not seeing the whole picture...
The Tegra 3 (et al.) is not just about its quad-core implementation; remember that the GPU will offer 12 cores, which will translate into performance not yet seen on any other platform.
Benchmarks don't tell the whole story! Especially those benchmarking tools which are not Tegra 3 optimized yet.
Cheers!
Sent from my Atrix using Tapatalk
WiredPirate said:
I noticed the Captivate got a port of it too, since i9000 ROMs and Cap ROMs are interchangeable. I thought it's funny that it's running on the HD, a Windows Mobile 6.5 phone lol. Let's all try to be patient and we will eventually see it.
Edit: not to mention I'm sure if it's not already, it will soon be on the iPhone too. It seems like iPhones always get the new Android versions kinda early. I'm not sweating it, I love my Atrix in its current state.
LOL, I ran all the iDroid ports on my iPhone. Not one was even in alpha stage; I would not even count iDroid as a port since you can't use anything on it.

HTC One X - AT&T vs International (Benchmarks)

Disclaimer: I am not a professional reviewer, these are just my observations.
Due to a lack of research on my part, I am currently the owner of both a US and an international HTC One X. The short story is that I bought a used AT&T One X after looking at the spec sheet, not realizing until after I had received it that it was essentially half the phone I was expecting. I didn't read the part about the US version being different. So I then tracked down an international version and have been doing some comparisons.
I have read several places, especially on the xda forums, that the US version is faster than the International version. However, after running several benchmarks, my results do not reflect that, not even remotely. For a comparison, I am including my current Droid RAZR. These are free apps from the Google Play market. These scores are the average of three runs, rounded up.
CF Bench(Higher is better): International - 13231, US - 9496, Droid RAZR - 4692
CPU Benchmark(Lower is better): International - 498ms, US - 602ms, Droid RAZR - 702ms
PassMark CPU Test(Higher is better): International - 7410, US - 3556, Droid RAZR - 3061
PassMark Disk Test(Higher is better): International - 3835, US - 3523, Droid RAZR - 2411
PassMark Memory Test(Higher is better): US - 1635, Droid RAZR - 938, International - 920
PassMark 2D Graphics Test(Higher is better): International - 1878, Droid RAZR - 1833, US - 1533
PassMark 3D Graphics Test(Higher is better): International - 716, US - 668, Droid RAZR - 654
Quadrant Standard CPU(Higher is better): International - 13223, US - 8395, Droid RAZR - 4882
Quadrant Standard MEM(Higher is better): US - 7547, International - 3498, Droid RAZR - 2985
Quadrant Standard I/O(Higher is better): International - 5479, US - 3630, Droid RAZR - 2941
Quadrant Standard 2D(Higher is better): International - 1000, US - 990, Droid RAZR - 981
Quadrant Standard 3D(Higher is better): International - 2365, Droid RAZR - 2303, US - 2126
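The scoring method described above (average of three runs, rounded up) is trivial to reproduce; a minimal sketch, with run values invented for illustration:

```python
# Reproducing the scoring method described above: each score is the
# average of three runs, rounded up. Run values below are invented.
import math

def score(runs):
    return math.ceil(sum(runs) / len(runs))

print(score([13180, 13250, 13255]))  # -> 13229
```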
The international version is faster in almost every benchmark with the exception of the Memory(RAM) benchmarks.
My conclusion: People in the US market are getting screwed. The US version is slower (albeit only slightly in some cases), has half the storage space, comes with AT&T crapware, and is also slightly slower than my Droid RAZR in some benchmarks.
What are your thoughts?
I've had both, and the US version lags WAY more; with the intl one I had zero lag ever whatsoever. Even the battery drains faster on the AT&T version. The One S was always compared to the One X because the advantage of its lower-res screen helped it stay close to the X in the benchmarks.
Sent from my LG-P880 using xda premium
The freaking storage you get on the US version is what, 9GB usable? Totally inexcusable. The price is finally where it needs to be, at $99 on contract.
Sent from my LG-P880 using xda premium
There are three issues with the U.S. One X (which is actually an XL):
- Adreno reserves 350MB of RAM for itself, which kills multitasking (and multitasking isn't too hot to begin with).
- There's only 10GB of user-available storage.
- The radios (BT & Wi-Fi) are having tons more issues than on the international version. It could just be AT&T's implementation of them.
If I had to buy a U.S. carrier phone, as much as I like the One X, I'd buy an SGS3. There's a 1.7GHz Teg3/Icera LTE XL being tested for use on AT&T. That could be an interesting phone.
Here's something else I've just noticed: In "Settings" under the "Apps" heading and then the "Running" tab, you should get a good idea of how much RAM your phone is using. Both phones are supposed to have 1 GB of RAM. My international HOX shows 541 MB used and 438 MB free for a total of 979 MB (close enough). My US HOX shows 341 MB used and 331 MB free for a total of 672 MB of RAM.
Where is the rest of the RAM on my US HOX?
EDIT: Just saw a previous post that answered my question.
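The arithmetic on those reported totals lines up roughly with the ~350MB Adreno carve-out mentioned earlier in the thread:

```python
# Back-of-the-envelope on the reported "Running" tab totals, versus the
# ~350MB Adreno carve-out mentioned earlier in the thread.
intl_total = 541 + 438   # international HOX: used + free = 979 MB
us_total = 341 + 331     # US HOX: used + free = 672 MB
missing = intl_total - us_total
print(missing)           # 307 MB unaccounted for, close to the ~350MB figure
```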
First time I've noticed this thread and I have to say nice job and thanks :good:
You are basically confirming everything we analysed and predicted in the One X mega information thread.
The dual-Krait Snapdragon S4 was overhyped for the sake of competition with Tegra 3; in terms of graphics performance it's no match for Tegra 3 at native 720p.
Also, all this abundance of memory bandwidth is not needed unless you are on a quad-core processor; on the other hand, the Tegra 3 One X has no memory bandwidth issues in actual real-world use thanks to its massive I/O speed.
This leaves battery: with the Snapdragon S4 One X you are trading screen-on time for the screen-off time you get on the Tegra 3 One X.
Every Snapdragon device I ever tested had the problem of drain during screen-off; in fact nothing in the Android world can match the dead sleep time of Tegra 3, thanks to the low-power core.
As for heat, things got a lot better in the latest update for the One X, and it only appears when you push it hard with heavy games; you should fully expect this when you know you're buying a quad core on 40nm.
Thanks for your tests. Since the tests were made in early August (before the post date of 02-Aug-12), both HOXes had Android 4.0.3 at that time. Now both have 4.0.4 (firmware 2.17 for international & 2.20 for US) which, at least for the international version, brought major speed improvements.
hamdir said:
First time i notice this thread and i have to say nice job and thanks :good: …
Wait till you see Tegra 4: 32nm and a 1.8GHz quad.
As usual, an even more powerful GeForce GPU.
Here are some more benchmarks. Took screenshots. 1st pic is the US HOX, 2nd one is the Global HOX.

iPad 4 vs 5250 (Nexus 10 SoC) GLBenchmark full results. UPDATE: now with Anandtech!!

XXXUPDATEXXX
Anandtech have now published their performance preview of the Nexus 10, let the comparison begin!
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Well, the first full result has appeared on GLBenchmark for the iPad 4, so I have created a comparison with the Samsung Arndale board, which uses exactly the same SoC as the Nexus 10 and so will be very close in performance to Google's newest tablet. GLBenchmark, as its name suggests, tests OpenGL graphics performance, which is an important criterion for gaming.
Which device wins, click the link to find out.
http://www.glbenchmark.com/compare....ly=1&D1=Apple iPad 4&D2=Samsung Arndale Board
If you're really impatient: the iPad 4 maintains its lead in tablet graphics. The Nexus 10 may perform slightly better in final spec, but the underlying low-level performance will not change much.
I've also made a comparison between the iPad 3 & 4.
Interestingly, the in-game test GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p), which is run independent of native screen resolution, shows the following:
iPad 4: 48.6 FPS
iPad 3: 25.9 FPS
5250 : 33.7 FPS
So the iPad 4 is roughly twice as fast as its older brother. The Exynos will probably score nearer 40 FPS in final spec, with new drivers and running 4.2 (the board runs ICS; Jelly Bean, however, did not really boost GL performance over ICS). What is interesting is that the iPad 4, whose GPU is supposedly clocked at 500 MHz vs 250 MHz in the iPad 3, does not perform twice as fast in the low-level tests.
Fill rate, triangle throughput, vertex output etc. are not double those of the iPad 3, so although the faster A6 CPU helps, I reckon a lot of the improvement in the Egypt HD test is caused by improved drivers for the SGX 543MP4 in the iPad 4. The Galaxy S2 received a big jump in GL performance when it got updated Mali drivers, so I imagine we should see good improvements for the T604, which is still a new product and not as mature as the SGX 543.
http://www.glbenchmark.com/compare....tified_only=1&D1=Apple iPad 4&D2=Apple iPad 3
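The ratios behind those claims, computed from the Egypt HD offscreen FPS figures quoted above:

```python
# Speedups implied by the Egypt HD offscreen FPS figures quoted above.
ipad4, ipad3, exynos_5250 = 48.6, 25.9, 33.7

print(round(ipad4 / ipad3, 2))        # 1.88 -> "roughly twice as fast"
print(round(ipad4 / exynos_5250, 2))  # 1.44 -> the iPad 4's current lead
```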
I'd imagine the new iPad will keep the lead in benchmarks for now, as it'll take Sammy and Google some time to optimize the beast. In the end, however, actual app and user-interface performance is what matters, and reports are overwhelmingly positive on the Nexus 10.
So the Mali-T604 didn't turn out 5 times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with the Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after a firmware update it was way better than other SoCs in Android handsets.
hung2900 said:
So Mali 604T didn't match 5 times better than Mali 400, or maybe Samsung underclocked it. …
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Damn..now I have to get an iPad.
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers as optimized for the SoC as we can expect on the N10. Samsung has a habit of optimizing its drivers with further updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic. It was even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011, check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing less than Tegra2 numbers! It was that bad initially.
Then look when Anand finally reviewed the device after a few months:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher, and they could have been higher still if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure this would not make such a big difference in the numbers.
If you check the numbers for the SGS2 now, there's another 50% improvement in performance since the time Anand did his review.
Check this SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10 device.
Also, check the fill rate in the Arndale board test: it's much less than expected. ARM says that a Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. The board is actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be getting so little.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
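The quoted figure checks out if you assume the T604's four shader cores each retire one pixel per clock (the pixels-per-clock value is my assumption, but it matches Samsung's slide):

```python
# Theoretical fill rate for a 533MHz Mali-T604, assuming 4 pixels per
# clock (one per shader core); the per-clock figure is my assumption.
clock_hz = 533e6
pixels_per_clock = 4
peak_gpix = clock_hz * pixels_per_clock / 1e9
print(round(peak_gpix, 2))        # 2.13, in line with Samsung's 2.1 figure
print(round(peak_gpix * 0.6, 2))  # ~60% of peak: roughly what the board shows
```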
hung2900 said:
So Mali 604T didn't match 5 times better than Mali 400, or maybe Samsung underclocked it. …
In the areas where the Mali-400 lacked performance, like fragment- and vertex-lit triangle output, the T604 is comfortably 5x the performance, whereas in these low-level tests the iPad 4 is not a solid 2x the power of the iPad 3, yet achieves twice the FPS in Egypt HD of its older brother. I suspect drivers are a big factor here, and the Exynos 5250 will get better as the drivers mature.
hot_spare said:
I believe we have to take the Arndale board numbers with pinch of salt. It's a dev board, and I doubt it has optimized drivers for the SoC like it's expected for N10. …
I agree with most of what you have said. On the GPixel figure, this is like ATI GPU teraflops figures always being much higher than Nvidia's: in theory, with code written to hit the device perfectly, you might see those high figures, but in reality the Nvidia cards with lower on-paper numbers equaled or beat ATI in actual game FPS. It all depends on whether the underlying architecture is as efficient in real-world tests, versus maximum technical numbers that can't be replicated in actual game environments.
I think the current resolution of the iPad / Nexus 10 is actually crazy, and we would see prettier games at lower resolutions. The amount of resources needed to drive those high-MP displays means lots of compromises will be made in terms of effects / polygon complexity etc. to ensure decent FPS, especially when you consider that driving Battlefield 3 at 2560 x 1600 with AA and high textures requires a PC that burns 400+ watts of power, not a 10-watt SoC.
Overall, when we consider that the Nexus 10 has twice the RAM for game developers to use and faster CPU cores, games should look equally nice on both; the biggest factor will be the level of support game developers provide for each device, and the iPad will probably be stronger in that regard. Nvidia was able to coax prettier games out of Tegra 3 through developer support; hopefully Google won't forget the importance of this.
What's the point of speculation? Just wait for the device to be released and run all the tests you want to get confirmation on performance. Doesn't hurt to wait.
BoneXDA said:
Not sure about this, but don't benchmark tools need to be upgraded for new architectures to? …
Both the A9 & A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs are too powerful and max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
I can't wait to see this Exynos 5250 in a 2.0GHz quad-core variant in the semi-near future... Ohhhh the possibilities. Samsung has one hell of a piece of silicon on their hands.
Chrome
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth, not an insult; the stock AOSP browser would be so much faster.
Turbotab said:
Google if you want to use Chrome as the stock browser, then develop to fast and smooth and not an insult… 
True... Chrome on mobile is still not up to desktop level yet. I believe it's v18 or something, right? The stock browser would get much better results in SunSpider/Browsermark. The N4 numbers look even worse; somewhere the optimizations aren't working.
The GLBenchmark tests are weird. The Optimus G posts much better results than the N4 when both are the same hardware; it in fact scores lower than the Adreno 225 in some cases. This is totally whacked.
For the N10, I am still wondering about fill rate. Need to check what the guys say about this.
Is it running some debugging code in the devices at this time?
Turbotab said:
Both A9 & A15 use the same instruction set architecture (ISA) so no they won't. …
Actually not. The A8 and A9 are the same ISA (ARMv7), while the A5, A7 and A15 are in another group (ARMv7-A).
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali-T604 at 750 MHz would destroy everything.
hung2900 said:
Actually not. A8 and A9 are the same ISA (Armv7), while A5 A7 and A15 are in another group (Armv7a) …
I have to disagree; this is from ARM's info site:
The ARM Cortex-A15 MPCore processor has an out-of-order superscalar pipeline with a tightly-coupled low-latency level-2 cache that can be up to 4MB in size. The Cortex-A15 processor implements the ARMv7-A architecture.
The ARM Cortex-A9 processor is a very high-performance, low-power, ARM macrocell with an L1 cache subsystem that provides full virtual memory capabilities. The Cortex-A9 processor implements the ARMv7-A architecture and runs 32-bit ARM instructions, 16-bit and 32-bit Thumb instructions, and 8-bit Java bytecodes in Jazelle state.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.cortexa/index.html
Keion said:
Once we get rid of the underclock no tablet will be able to match. …
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Do remember that a resolution like that does tax the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Harry GT-S5830 said:
Do remember that Awesome resolution does tax the GPU a lot. …
Indeed it does, but not in offscreen testing, where Anand made his proclamation.
Sent from my iPad Mini using Tapatalk
Hemlocke said:
Except the iPad 4, which has a GPU that is currently 57% faster than the T604. …
Nah, I think we can beat that too.
Drivers + OC.
