Two RN3 Pro but with different hardware! - Xiaomi Redmi Note 3 Questions & Answers

Hi guys!
I have a weird problem. I have two of these devices, bought from different shops, both RN3 Pro 32GB.
When I check the hardware in AnTuTu, one of them says:
CPU model: 6 Core ARMv8 Processor (VFPv4 NEON)
and the other one says:
CPU model: Qualcomm Snapdragon 650 MSM8956
Both have the Adreno 510 GPU.
The result for the first one was 66000, but for the second one 76000!
Also, the light sensor of the first one is from Liteon and the second one's is from Sensortek!
Besides, I can see the color accuracy of their screens is not the same, both on the same settings.
Please check your device if possible and help me understand why they have these differences! Is it normal?

Not sure, but it looks like you might have an MTK version and a Snapdragon version.

Don't worry, it's the same processor.

The first pic is the result of AnTuTu not being able to verify the exact CPU model because your device isn't online. However, what it reports corresponds to the 650 anyway. Try going online and it'll show the real name. As far as benchmarks go, heat, ROM and usage can affect scores greatly. I wouldn't worry about it.
Oh, and as far as the sensors go: yeah, Xiaomi made several RN3 "versions" with different fingerprint/light/camera sensors. Who knows what else was changed, to be honest. Probably cutting costs after pushing "higher quality" devices in the first batch and gaining some popularity for the device.
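If anyone wants to double-check outside AnTuTu, here's a minimal sketch (Python, run from a terminal app or over adb shell) that simply reads what the kernel and Android's property system report, independent of any benchmark app. It assumes the standard /proc/cpuinfo node and the stock getprop tool; ro.board.platform normally names the SoC family (msm8956 for the Snapdragon 650), but treat the exact property values as device-dependent.

```python
# Minimal sketch: ask the kernel and Android's property system what SoC this is,
# independently of any benchmark app. Run in a terminal app or via "adb shell".
import subprocess

def getprop(name):
    # getprop is the stock Android property tool
    return subprocess.run(["getprop", name], capture_output=True, text=True).stdout.strip()

print("ro.board.platform:", getprop("ro.board.platform"))  # usually the SoC family, e.g. msm8956
print("ro.hardware:      ", getprop("ro.hardware"))

with open("/proc/cpuinfo") as f:
    print(f.read())  # per-core entries, plus a "Hardware" line on many ARM kernels
```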

Completely normal, nothing to worry about.
1. Don't rely on AnTuTu scores; they are inconsistent every time. Both of your devices are original and run on the Snapdragon 650.
2. Xiaomi uses internal hardware from different manufacturers, ranging from sensors and camera modules to RAM and display panels.
3. Same as above: both devices have display panels from different OEMs, calibrated differently.

Thank you all. Now I'm relieved.
I have another issue that I don't understand! In recent apps, one of the phones shows the recent apps with each app's last screen image, but the other one shows the apps with just their icons, although both are running 7.3.7 global stable! Is it a hidden setting or something?

Two-finger pinch to zoom in/out, if I remember correctly.

Most manufacturers, like Xiaomi, use parts from different suppliers. None of them is better or worse than the others. The displays seem different because the Sharp panel is "cooler" and the BOE panel is "warmer". That's just how the manufacturers of the displays calibrated them.


Running full speed: interesting observation

OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If I understand the way it works, those programs are still running in the background, right? Then it starts kicking in the other 4 cores instead of just running on the 5th at 500MHz! I really think we'd see a speed boost if we could get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500MHz is fast enough to run on its own. Your thoughts and observations?
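For anyone who wants to watch this happening rather than guess, here's a rough sketch (Python, needs a terminal or adb shell; a stock HC kernel may hide some nodes) that polls the standard Linux cpufreq/hotplug sysfs files. Note that the Tegra 3 companion core is swapped in behind the kernel's back, so it never shows up here directly; what you would see is cores 1-3 going on/offline and cpu0's frequency moving with load.

```python
# Rough sketch: poll which cores the OS currently has online and cpu0's frequency.
# Assumes the usual /sys/devices/system/cpu layout; the companion core itself is
# managed transparently by the chip, so it is not listed as a separate CPU here.
import glob, time

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for _ in range(30):  # sample once a second for ~30 seconds
    online = {p.split("/")[-2]: read(p)
              for p in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]/online"))}
    freq = read("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
    print("online:", online, " cpu0 kHz:", freq)
    time.sleep(1)
```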
markimar said:
OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If I understand the way it works, those programs are still running in the background, right? Then it starts kicking in the other 4 cores instead of just running on the 5th at 500MHz! I really think we'd see a speed boost if we could get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500MHz is fast enough to run on its own. Your thoughts and observations?
I'll check on this when I get home. This issue, I'm assuming, is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
I don't have it yet (mine gets delivered on Wednesday), but what you observed makes perfect sense. Can they change it to run at, say, a constant 800MHz, dropping "down" to 500MHz only for the simplest tasks? Obviously I too do not believe that 500MHz will be sufficient at all times to handle screen scrolling and such on its own.
I'm really hoping that the few performance issues people are seeing are resolved in firmware updates and a Tegra 3 optimized version of ICS. Maybe Asus/Nvidia needs to do more tweaking to HC before the ICS build is pushed, if it will take a while for ICS to arrive on the Prime (past January).
The cores are optimized just fine. They kick in when rendering a web page or a game, but go idle and use the 5th core when done. Games always render.
ryan562 said:
I'll check on this when I get home. This issue, I'm assuming, is with Honeycomb itself. We would assume that ICS would properly use those cores.
Sent from my Samsung Galaxy S II t989
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
markimar said:
OK, I've got mine on normal mode, and this kind of confirms my original thought that the 500MHz 5th core is clocked too low. I find the pad actually speeds up when I have multiple items in my recently run tab! If I understand the way it works, those programs are still running in the background, right? Then it starts kicking in the other 4 cores instead of just running on the 5th at 500MHz! I really think we'd see a speed boost if we could get that 5th core over 500. Yes, it's supposed to save battery life, but I really don't think 500MHz is fast enough to run on its own. Your thoughts and observations?
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower-powered devices weren't. If you run it at full speed, does it stutter? One hypothesis is that it's the cores stepping up and down that's causing the stuttering.
BarryH_GEG said:
Nothing's changed over HC in the way ICS uses h/w acceleration. And I'd assume apps using h/w acceleration do so via calls to the OS, not to the chip directly. So it appears what you've got is what you're going to get.
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
Mithent said:
Also, correct me if I'm wrong, but I don't think that the OS knows about the fifth core? I believe the chip's own scheduler manages the transition between the quad-core and the companion core, not the Android scheduler.
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips. Someone with more h/w experience correct me if I'm wrong. Also, does anyone know if the chip manufacturer can add additional APIs that developers can write to directly, either instead of or in parallel with the OS? I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I tried out the power savings mode for a while. It seemed to perform just fine. The immediate difference is that it lowers the contrast ratio on the display. This happens as soon as you press the power savings tab. The screen will look like the brightness dropped a bit, but if you look closely, you'll see it lowered the contrast ratio. The screen still looks good but not as sharp as in the other 2 modes. The UI still seems to perform just fine. Plus I think the modes don't affect gaming or video playback performance. I read that somewhere, either AnandTech or Engadget. When watching vids or playing games, it goes into normal mode. So those things won't be affected no matter what power mode you're in, I think.. lol
I was thinking of starting a performance mode thread, to see different people's results and thoughts on the different power modes. I read some people post that they just use it in power/battery savings mode. Some keep it in normal all the time. Others in balanced mode. It would be good to see how these different modes perform in real-life usage, from a user perspective. I've noticed, so far, that in balanced mode the battery drains about 10% an hour. This is with nonstop use including gaming, watching vids, web surfing, etc. Now in battery savings mode, it drains even less per hour. I haven't run normal mode long enough to see how it drains compared to the others. One thing though: web surfing drains the battery just as fast as gaming.
BarryH_GEG said:
I ask because how can a game be optimized for Tegra if to the OS all chips are treated the same?
I hate quoting myself, but I found the answer on Nvidia's website. Any optimizations are handled through OpenGL. So games written to handle additional calls that Teg2 can support are making those calls through OpenGL, with the OS (I'm guessing) used as a pass-through. It would also explain why Tegra-optimized games fail on non-Teg devices: they wouldn't be able to process the additional requests. So it would appear that Teg optimization isn't being done through the OS. Again, correct me if I'm wrong.
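To make that concrete, the usual pattern (as I understand it) is that the game queries the GL extension string at runtime and only enables its extra code paths when the vendor extensions it wants are advertised; no OS-level chip awareness is needed. A hypothetical sketch of that check is below; the extension names are just examples of NV-prefixed extensions, and in a real app the string would come from glGetString(GL_EXTENSIONS) inside a GL context rather than being hard-coded.

```python
# Hypothetical sketch: gating "Tegra-enhanced" content on the GL extension string.
# In a real engine the string comes from glGetString(GL_EXTENSIONS); hard-coded here.
extensions = "GL_OES_depth_texture GL_OES_texture_npot GL_NV_fbo_color_attachments"

wanted = {"GL_NV_fbo_color_attachments"}   # illustrative, not an exhaustive list
available = set(extensions.split())

if wanted <= available:
    print("Vendor (GL_NV_*) extensions present -> enable the enhanced effects path")
else:
    print("Fall back to the plain GLES path")
```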
BarryH_GEG said:
That's the way I'd guess it would work. I don't think Android addresses different chips differently. I'd assume it's up to the SoC to manage the incoming instructions and react accordingly. If Android was modified for dual-core, I don't think it differentiates between the different implementations of dual-core chips.
I did some research on it; here's what Nvidia say:
The Android 3.x (Honeycomb) operating system has built-in support for multi-processing and is capable of leveraging the performance of multiple CPU cores. However, the operating system assumes that all available CPU cores are of equal performance capability and schedules tasks to available cores based on this assumption. Therefore, in order to make the management of the Companion core and main cores totally transparent to the operating system, Kal-El implements both hardware-based and low level software-based management of the Companion core and the main quad CPU cores.

Patented hardware and software CPU management logic continuously monitors CPU workload to automatically and dynamically enable and disable the Companion core and the main CPU cores. The decision to turn on and off the Companion and main cores is purely based on current CPU workload levels and the resulting CPU operating frequency recommendations made by the CPU frequency control subsystem embedded in the operating system kernel. The technology does not require any application or OS modifications.
http://www.nvidia.com/content/PDF/t...e-for-Low-Power-and-High-Performance-v1.1.pdf
So it uses the existing architecture for CPU power states, but intercepts that at a low level and uses it to control the companion core/quad-core switch?
Edit: I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
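For anyone who wants to experiment with that idea, here's a rough sketch (root required) of raising the cpufreq floor through the standard sysfs interface. Since, per the Nvidia paper above, the companion/main switch follows the kernel's frequency recommendations, raising the floor might in theory keep the device off the companion core more often; whether it actually does on Kal-El is exactly the open question here. The 800000 kHz value is only an example, and the node may be locked down or behave differently on the Prime's kernel.

```python
# Rough experiment sketch (root needed): raise the cpufreq minimum so the governor
# never recommends very low frequencies. Per Nvidia's description, the companion/main
# switch follows those recommendations, so this *might* shift the switch point.
MIN_KHZ = "800000"   # example value only

path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq"
try:
    with open(path, "w") as f:
        f.write(MIN_KHZ)
    print("scaling_min_freq set to", MIN_KHZ)
except OSError as e:
    print("Could not write", path, "-", e)   # not root, or the kernel refuses the value
```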
Mithent said:
I wonder if that means that tinkering with the scheduler/frequency control would allow the point at which the companion core/quad-core switch happens to be altered? If the OP is correct, this might allow the companion core to be utilised less if an increase in "smoothness" was desired, at the cost of some battery life?
So what we guessed was right. The OS treats all multi-cores the same and it's up to the chip maker to optimize requests and return them. To your point, what happens between the three processors (1+1x2+1x2) is a black box controlled by Nvidia. To any SetCPU-type program it's just going to show up as a single chip. People have tried in vain to figure out how to make the Qualcomm dual-cores act independently, so I'd guess Teg3 will end up the same way. And Nvidia won't even publish their drivers, so I highly doubt they'll provide any outside hooks to control something as sensitive as the performance of each individual core in what they're marketing as a single chip.
BarryH_GEG said:
Do you have Pulse installed? A bunch of people using it were reporting stuttering where their lower-powered devices weren't. If you run it at full speed, does it stutter? One hypothesis is that it's the cores stepping up and down that's causing the stuttering.
I have been running mine in balanced mode and have had Pulse installed since day one; no lag or stuttering in anything. Games and other apps work fine.
Well, my phone lags when clocked at 500, so I wouldn't be surprised.
Sent from my VS910 4G using xda premium

iPad 4 vs 5250 (Nexus 10 SoC) GLBenchmark full results. UPDATE: now with Anandtech!

XXXUPDATEXXX
Anandtech have now published their performance preview of the Nexus 10, so let the comparison begin!
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Well, the first full result has appeared on GLBenchmark for the iPad 4, so I have created a comparison with the Samsung Arndale board, which uses exactly the same SoC as the Nexus 10 and so will be very close in performance to Google's newest tablet. GLBenchmark, as its name suggests, tests OpenGL graphics performance, which is an important criterion for gaming.
Which device wins? Click the link to find out.
http://www.glbenchmark.com/compare....ly=1&D1=Apple iPad 4&D2=Samsung Arndale Board
If you're really impatient: the iPad 4 maintains its lead in tablet graphics. The Nexus 10 may perform slightly better in final spec, but the underlying low-level performance will not change much.
I've also made a comparison between the iPad 3 & 4.
Interestingly, the in-game test GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p), which runs independently of native screen resolution, shows the following:
iPad 4: 48.6 FPS
iPad 3: 25.9 FPS
5250 : 33.7 FPS
So the iPad 4 is nearly twice as fast as its older brother. The Exynos will probably score nearer 40 FPS in final spec, with new drivers and running 4.2 (the board runs ICS, though Jelly Bean did not really boost GL performance over ICS). What is interesting is that the iPad 4, whose GPU is supposedly clocked at 500MHz vs 250MHz in the iPad 3, does not perform twice as fast in the low-level tests.
Fill rate, triangle throughput, vertex output etc. are not double those of the iPad 3, so although the faster A6 CPU helps, I reckon a lot of the improvement in the Egypt HD test is caused by improved drivers for the SGX 543MP4 in the iPad 4. The Galaxy S2 received a big jump in GL performance when it got updated Mali drivers, so I imagine we should see good improvements for the T604, which is still a new product and not as mature as the SGX 543.
http://www.glbenchmark.com/compare....tified_only=1&D1=Apple iPad 4&D2=Apple iPad 3
I'd imagine the new iPad will take the lead in benchmarks for now, as it'll take Sammy and Google some time to optimize the beast. In the end, however, actual app and user-interface performance is what matters, and reports on the Nexus 10 are overwhelmingly positive.
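For reference, the ratios implied by the offscreen numbers above work out as follows; this is just arithmetic on the FPS figures already quoted.

```python
# Plain arithmetic on the Egypt HD offscreen (1080p) figures quoted above.
ipad4, ipad3, exynos5250 = 48.6, 25.9, 33.7   # FPS

print(f"iPad 4 vs iPad 3:      {ipad4 / ipad3:.2f}x")       # ~1.88x, close to but not quite double
print(f"iPad 4 vs Exynos 5250: {ipad4 / exynos5250:.2f}x")  # ~1.44x against the dev-board drivers
```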
So the Mali-T604 didn't manage 5 times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with the Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after updating the firmware it's way better than other SoCs in Android handsets.
hung2900 said:
So the Mali-T604 didn't manage 5 times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Damn..now I have to get an iPad.
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers as optimized for the SoC as is expected for the N10. Samsung has a habit of optimizing the drivers with further updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic. It was even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011, check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing less than Tegra 2 numbers! It was that bad initially.
Then look at when Anand finally reviewed the device a few months later:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher. They could have been higher if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure that would not make such a big difference in the numbers.
If you check the numbers for the SGS2 again now, there's another 50% improvement in performance since the time Anand did his review.
Check these SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10 device.
Also, check the fill rate in the Arndale board test. It's much less than expected. ARM says that a Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be this low.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
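The theoretical figure above follows directly from clock x pixels per clock, and ARM's "2 GPixels/s at 500MHz" implies 4 pixels per clock for the T604. A quick sanity check of where the ~60% claim lands (the 0.6 multiplier below just restates that observation, it is not a measured number):

```python
# Theoretical fill rate = core clock * pixels written per clock.
# ARM's 2 GPixels/s at 500 MHz implies 4 pixels/clock for the Mali-T604.
pixels_per_clock = 4

for clock_mhz in (500, 533):
    peak_gpix = clock_mhz * 1e6 * pixels_per_clock / 1e9
    print(f"{clock_mhz} MHz -> {peak_gpix:.2f} GPixels/s theoretical")

# The Arndale run is reported at roughly 60% of the theoretical value:
print(f"~60% of the 533 MHz figure = {0.6 * 533e6 * pixels_per_clock / 1e9:.2f} GPixels/s")
```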
hung2900 said:
So the Mali-T604 didn't manage 5 times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with the Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after updating the firmware it's way better than other SoCs in Android handsets.
In areas where the Mali-400 lacked performance, like fragment- and vertex-lit triangle output, the T604 is comfortably 5x the performance, whereas in these low-level tests the iPad 4 is not a solid 2x the power of the iPad 3, yet achieves twice the FPS of its older brother in Egypt HD. I suspect drivers are a big factor here, and the Exynos 5250 will get better as the drivers mature.
hot_spare said:
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers as optimized for the SoC as is expected for the N10. Samsung has a habit of optimizing the drivers with further updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic. It was even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011, check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing less than Tegra 2 numbers! It was that bad initially.
Then look at when Anand finally reviewed the device a few months later:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher. They could have been higher if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure that would not make such a big difference in the numbers.
If you check the numbers for the SGS2 again now, there's another 50% improvement in performance since the time Anand did his review.
Check these SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10 device.
Also, check the fill rate in the Arndale board test. It's much less than expected. ARM says that a Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be this low.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
I agree with most of what you have said. On the GPixel figure, this is like ATI GPU teraflops figures always being much higher than Nvidia's: in theory, with code written to hit the device perfectly, you might see those high figures, but in reality the Nvidia cards with lower on-paper numbers equalled or beat ATI in actual game FPS. It all depends on whether the underlying architecture is as efficient in real-world tests, versus maximum technical numbers that can't be replicated in actual game environments.
I think the current resolution of the iPad / Nexus 10 is actually crazy, and we would see prettier games at lower resolutions. The amount of resources needed to drive those high-MP displays means lots of compromises will be made in terms of effects / polygon complexity etc. to ensure decent FPS, especially when you consider that driving Battlefield 3 at 2560 x 1600 with AA and high textures requires a PC that burns 400+ watts of power, not a 10-watt SoC.
Overall, when we consider that the Nexus 10 has twice the RAM for game developers to use and faster CPU cores, games should look equally nice on both; the biggest factor will be the level of support game developers provide for each device, and the iPad will probably be stronger in that regard. Nvidia was able to coax prettier games out of Tegra 3 through developer support; hopefully Google won't forget the importance of this.
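To put the resolution point in rough numbers (the 2048x1536 iPad Retina figure is the only value below not already mentioned in the thread):

```python
# Pixels the GPU has to fill every frame at the resolutions discussed above.
panels = {
    "Nexus 10 (2560x1600)": 2560 * 1600,
    "iPad 3/4 (2048x1536)": 2048 * 1536,
    "1080p offscreen test (1920x1080)": 1920 * 1080,
}

baseline = 1920 * 1080
for name, px in panels.items():
    print(f"{name}: {px / 1e6:.2f} MP per frame, {px / baseline:.2f}x the 1080p offscreen load")
```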
What's the point of speculation? Just wait for the device to be released and run all the tests you want to get confirmation of performance. It doesn't hurt to wait.
BoneXDA said:
Not sure about this, but don't benchmark tools need to be upgraded for new architectures too? A15 is quite a big step; SW updates may be necessary for a proper bench.
Both A9 & A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs are too powerful and max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
I can't wait to see this Exynos 5250 in a 2.0GHz quad-core variant in the semi-near future... Ohhhh the possibilities. Samsung has one hell of a piece of silicon on their hands.
Chrome
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth and not an insult; the stock AOSP browser would be so much faster.
Turbotab said:
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth and not an insult; the stock AOSP browser would be so much faster.
True.. Chrome on mobile is still not up to desktop level yet. I believe it's v18 or something, right? The stock browser would get much better results in SunSpider/Browsermark. The N4 numbers look even worse. Somewhere the optimization isn't working.
The GLBenchmark tests are weird. The Optimus G posts much better results than the N4 when both are the same hardware. It in fact scores lower than the Adreno 225 in some cases. This is totally whacked.
For the N10, I am still wondering about the fill rate. Need to check what you guys say about this.
Is it running some debugging code in the devices at this time?
Turbotab said:
Both A9 & A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs are too powerful and max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
Actually not. A8 and A9 are the same ISA (ARMv7), while A5, A7 and A15 are in another group (ARMv7a).
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali-T604 at 750MHz would destroy everything.
hung2900 said:
Actually not. A8 and A9 are the same ISA (ARMv7), while A5, A7 and A15 are in another group (ARMv7a).
I have to disagree, this is from ARM's info site.
The ARM Cortex-A15 MPCore processor has an out-of-order superscalar pipeline with a tightly-coupled low-latency level-2 cache that can be up to 4MB in size. The Cortex-A15 processor implements the ARMv7-A architecture.
The ARM Cortex-A9 processor is a very high-performance, low-power, ARM macrocell with an L1 cache subsystem that provides full virtual memory capabilities. The Cortex-A9 processor implements the ARMv7-A architecture and runs 32-bit ARM instructions, 16-bit and 32-bit Thumb instructions, and 8-bit Java bytecodes in Jazelle state.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.cortexa/index.html
Keion said:
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali-T604 at 750MHz would destroy everything.
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Do remember that the awesome resolution does tax the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Harry GT-S5830 said:
Do remember that the awesome resolution does tax the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Indeed it does, but not in offscreen testing, where Anand made his proclamation.
Sent from my iPad Mini using Tapatalk
Hemlocke said:
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Nah, I think we can beat that too.
Drivers + OC.

Optimization help

Okay, so I'm REALLY anal about the speed of my phone; the slightest bit of stutter or lag from just the notification center itself really bothers me. I was wondering if someone could recommend some really good settings for my phone.
I am currently running:
JellyBam 6.3.0 (JB 4.2.2)
4Aces Kernel
I would like some good settings regarding governor, CPU frequency, and anything else I can do, including stuff in developer options, if that helps. Thanks!
It is likely that you will always have "some" degree of lag present on the Note 1, due in large part to our GPU. We are also limited in performance by our dual-core CPU.
That being said, the closest to zero lag I've found is using Flappjaxxx's current JB AOSP build (0225), combined with Nova Launcher and his newest 3.0.9 ALT kernel.
Window, transition, and animator settings set to "off" in development settings.
Wheatley governor set to 384 min, 1.72 max.
System Tuner app system controls set to "recommended".
No over/undervolt.
Forced GPU rendering on in development settings.
These are my main settings, but yours will likely differ.
Happy tuning....g
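For reference, here's a rough scripted version of the CPU-side settings above (Python, root required). The sysfs node names are the standard cpufreq ones; whether a "wheatley" governor exists depends entirely on the kernel you're running, and 1.72GHz is assumed to be exposed as the 1728000 kHz step, which may differ on other kernels.

```python
# Rough sketch (root needed): apply the governor and frequency limits described above.
# "wheatley" only exists on kernels that ship it; 1728000 kHz is assumed for 1.72 GHz.
import glob

SETTINGS = {
    "scaling_governor": "wheatley",
    "scaling_min_freq": "384000",    # 384 MHz
    "scaling_max_freq": "1728000",   # assumed step for 1.72 GHz
}

for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]/cpufreq"):
    for node, value in SETTINGS.items():
        try:
            with open(f"{cpu_dir}/{node}", "w") as f:
                f.write(value)
        except OSError as e:
            print("skipped", cpu_dir, node, "-", e)   # offline core, or not running as root
```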
^^Limited performance from "only a dual core" ...
Hardware is WAY ahead of software right now.
The second core is offline most of the time due to no load and developers of apps not fully understanding how to utilise multiple threads...
Adding more cores on top of an unused core ain't really gonna help much.
And yet we can't even stream a quality YouTube video above 22 FPS, all the while the MSM8660 specs boast a capability of nearly 60 FPS with the Adreno 220 GPU.
So my question is: are we seeing reduced performance from the CPU, or the GPU? It can't be all software, as we see the reductions across software ranging from GB to JB.
Drivers are in play of course, but I can hardly imagine a piece of code so poorly made as to reduce output capacity by 50%.
Not doubting you, brother, because I "know" you know your way around this machine, and because we have so many times traveled the same paths of thought. And it's entirely possible I'm missing a critical point here. But damn... I wanted the stated video speeds, and I'm not getting what Qual and company promised me. And in a direct comparison to the Note 2 quad, it's night and day as I watch the same video on my Note 1 next to my wife's Note 2. The odds are in favor of 2 cores running at low speed on the quad-core unit, as opposed to our Note 1 running a single core wide open until the second one is needed. That, of course, was the basis for my statement.
OP can tweak for many great improvements, but I personally feel like we were duped on the claimed output of the 8660.....g
Just get a wife - then the phone lag will seem so trivial.
LOL .....he speaks the truth ....g

My theory/rant about Qualcomm and their Snapdragon 808/810 processors.

So on my thread for the Nexus 6P bootloop fix, @btvolta asked me this question:
btvolta said:
I am still on the previous modified EX kernel and my phone seems to run just the same as before my BLOD experience. How many cores are normally running if not for the BLOD issue?
He had a good question, as many other people were reporting that the 6P was running almost the same, if not even better, with only half the cores enabled.
Below is the reply I gave to him. I decided to post it into this thread, because I would like to know what you guys think about my theory about Qualcomm's chips, and even correct me if I'm wrong, as I would like to understand this situation as accurately as possible. (Although I do ask that those of you who do disagree with me, do it respectfully, and I will treat you the same)
XCnathan32 said:
So me typing this reply ended up with me going on a long rant about my theories about Qualcomm. TL;DR to your question: a stock 6P uses 8 cores, a fixed 6P uses 4. A Qualcomm 810 using 8 cores probably overheats so much that it thermal throttles heavily, resulting in performance only slightly higher than the same processor with 4 cores, which thermal throttles much less.
Trigger warning for anyone about to read this: I harshly bash Qualcomm in this semi-angry rant; if you are a diehard Qualcomm fan, you should probably not read this.
On a stock Nexus 6P, 8 cores are enabled in ARM's big.LITTLE configuration. big.LITTLE is where there is a cluster of power-efficient, slower cores to handle smaller tasks (in the 6P's case, 4 Cortex-A53s running at 1.55GHz), and a cluster of more power-hungry, high-performance cores (for the 6P, 4 Cortex-A57s running at 2GHz).
On a bootlooping 6P, a hardware malfunction related to the big cluster causes the bootloop, so this fix remedies the problem by disabling the high-performance big cores.
The stock 6P is supposed to use the Cortex-A57 cores and some of the Cortex-A53 cores for foreground tasks. So you would think that a working phone should have double the performance of a phone with this fix, right? After all, it's using 4 more cores, and those cores are clocked almost 25% higher. The reason that (I think) performance is not noticeably affected is that Qualcomm's Snapdragon 808/810 SoCs are a horrible, rushed project that could have been designed better by a group of monkeys.
Even with 4 cores disabled, my phone can still thermal throttle (for those who don't know, thermal throttling is when CPU/GPU performance is intentionally limited by software to keep temperatures in check) when playing games, or even watching YouTube. The big cores run way hotter, and they thermal throttle insanely easily; see this graph here. In 30 seconds, the big cores are already down to 1.8GHz (from 2GHz), in 60 seconds they are down to 1.4GHz, and in 3 minutes, 3 freaking minutes, the big cores are thermal throttled down to 850MHz, which is less than half of the advertised 2000MHz (a 2.35x drop) and well below even the little cores' 1.55GHz (a 1.82x gap).
So my guess is that the big cores thermal throttle so easily, and the high heat output of the big cores results in the little cores overheating, which results in the little cores being thermal throttled along with the big cores. So 4 cores that typically do not thermal throttle are better than 8 which do. Either that, or when the big cores overheat, the device turns off the big cores and only uses the little cores, which is essentially this fix.
For those of you who think my description of the 808/810 was slightly (extremely) harsh, you're right. However, here's why I was so hard on them: I feel like Qualcomm rushed development of the 808 and 810 to get them into flagship devices. The 808 and 810 were also the first (and last) of its processors to use the TSMC 20nm manufacturing process. So my guess would be that Qualcomm designed the processor around that manufacturing process, and then, after finding out about the poor thermals of the new chips, it was too late to redesign them, because they had to be given to manufacturers. After all, a "flagship device" can't use a last-gen processor. So the overheating chips were given to manufacturers just so their phones could look better on a spec sheet.
Also, Qualcomm VP McDonough said "The rumours are rubbish, there was not an overheating problem with the Snapdragon 810 in commercial devices" (source). However, his response to heat issues and benchmarking problems in the early Flex 2 and One M9 was that they weren't final commercial versions of the devices: "Everything you're saying is fair. But we all build pre-released products to find bugs and do performance optimisation. So when pre-released hardware doesn't act like commercial hardware, it's just part of the development process." In that context, performance optimisation most likely means "allow the devices to run hotter than they should before they throttle" (source), which results in problems later down the line (like maybe half of the cores failing, causing a record number of bootloops in devices?).
The whole reason I typed this rant was to express my frustration at how Qualcomm (most likely) caused tens of thousands of people to have devices that performed worse than they should have on paper, and even ended up with broken devices. And I haven't seen many people blame Qualcomm for the bootlooping problem; everyone blames Huawei/LG/Google, while Qualcomm twiddles their thumbs and keeps raking in money from their domination of the mobile SoC market. Now obviously, I'm not 100% sure that Qualcomm is to blame for the bootlooping problems, and no one will probably ever know who caused the problem. So this is just a theory that I have. But it is awfully suspicious how the same chip has had problems in multiple devices, even when different companies manufactured those devices.
Even if Qualcomm isn't to blame for the bootlooping problems, it is hard to deny that their chips have serious overheating issues. Samsung themselves basically admitted that the 810 had problems: every single one of their Galaxy S devices (at least the US models) has used a Snapdragon processor, except for the Galaxy S6, where Samsung opted to use their own Exynos processor instead of the 810, even on the US model.
Please feel free to reply and discuss/argue my points, as I would really like to hear what you guys think about my theory.
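If you want to see the effect of the fix (or watch the throttling) on your own unit, here's a small sketch that reads the standard sysfs nodes. On the 6P, cpu0-3 are the little A53 cluster and cpu4-7 the big A57 cluster; frequencies are in kHz, and whether all of these nodes are readable without root varies by ROM.

```python
# Small sketch: list which cores the kernel has online and what frequency each runs at.
# On the 6P, cpu0-3 are the little (A53) cluster and cpu4-7 the big (A57) cluster.
import glob, os

def read(path, default="?"):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]")):
    cpu = os.path.basename(cpu_dir)
    online = read(f"{cpu_dir}/online", default="1")        # cpu0 usually has no 'online' node
    freq = read(f"{cpu_dir}/cpufreq/scaling_cur_freq")      # kHz; "?" if the core is offline
    print(f"{cpu}: online={online}  cur_freq={freq}")
```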
Well, if it's true what you're saying, Qualcomm is the bad guy here. It all points towards an overheating issue with the power cores, which are designed and made by them. However, I feel that the OEMs who purchase these SoCs from them should take responsibility for their choice to use them in their devices and step up. If this theory of yours can be proven by extensive testing, a lawsuit should be fairly easy to win, and Qualcomm should be forced to improve their development and testing.
I may be jumping the gun a bit here, but seeing as Qualcomm has a bit of a monopoly on the SoC market, we, the consumers, should stop putting our trust in devices using their chipsets. I've had several devices with a Qualcomm chipset and every single one of them was crap. I've had a Samsung Galaxy S2 (which I hated because of the software Samsung put on that device), but the hardware (Exynos) was top-notch at the time.
Ok, that's about all of my two cents. Thanks for the good read btw.
NeoS said:
Well, if it's true what you're saying, Qualcomm is the bad guy here. It all points towards an overheating issue with the power cores, which are designed and made by them. However, I feel that the OEMs who purchase these SoCs from them should take responsibility for their choice to use them in their devices and step up. If this theory of yours can be proven by extensive testing, a lawsuit should be fairly easy to win, and Qualcomm should be forced to improve their development and testing.
I may be jumping the gun a bit here, but seeing as Qualcomm has a bit of a monopoly on the SoC market, we, the consumers, should stop putting our trust in devices using their chipsets. I've had several devices with a Qualcomm chipset and every single one of them was crap. I've had a Samsung Galaxy S2 (which I hated because of the software Samsung put on that device), but the hardware (Exynos) was top-notch at the time.
Ok, that's about all of my two cents. Thanks for the good read btw.
It just happened that Huawei was making the device, and even though Huawei has their own in-house chip, the Huawei brand was not really familiar in the US, so maybe Google was not convinced to market a Nexus-branded device with a HiSilicon Kirin processor, but they needed to get another Nexus device out that year.
If it had been Samsung making the Nexus device back then, maybe Google would have been OK with a Samsung Exynos chip.
How great would the 6P be IF it could utilize the A57 cores? I'm using Franco Kernel, and he has it set up to barely use the big cores, I'm guessing mostly for battery savings of course. But on a 6P that thus far hasn't had the infamous battery meltdown, having half of the cores (and the most powerful ones) sitting at the lowest frequency 95% of the time is kind of a shame. I'm willing to dust off my pitchfork.
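If you're curious how little the big cluster actually runs under a battery-oriented kernel, the cpufreq time_in_state statistics give a rough answer. A sketch below, assuming the stats are enabled in your kernel (CONFIG_CPU_FREQ_STAT) and the usual format of one "<freq_kHz> <time>" pair per line, with time typically in 10ms units.

```python
# Sketch: how much time cpu4 (a big core on the 6P) has spent at each frequency since boot.
# Requires cpufreq stats in the kernel; lines are "<frequency_kHz> <time in ~10ms units>".
path = "/sys/devices/system/cpu/cpu4/cpufreq/stats/time_in_state"

try:
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
except OSError as e:
    raise SystemExit(f"stats not available: {e}")

total = sum(int(t) for _, t in rows) or 1
for freq, t in rows:
    print(f"{int(freq) // 1000:5d} MHz: {100 * int(t) / total:5.1f}% of the time")
```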

Exynos 9820 Performance

There's too much misinformation around, and once I get my unit I will have about 28 days to decide whether to keep it or skip this generation. I would like to use this thread to build evidence on how good or bad the international version of this device is; if Samsung scammed 90% of the world, then they don't deserve our money.
I'm getting mixed feelings about this chip. In Speed Test G the 855 beats it by a huge margin, so most people went back to spitting at it for being a badly optimized SoC.
Anandtech's comparisons show super disappointing scores for the S10 Exynos version, but many of the scores presented make no sense, with older hardware from the same OEM scoring better than the newest. I don't know how much to believe that review, and I hope it is fake or badly executed. As for my own interest, my pre-order comes with the Exynos version and there's no way to get warranty on an 855 in the UK.
Then, the positive evidence we have is that it beats every other released phone on the market in battery usage. There's no such video about the 855 yet, so we can't compare them, but that's all I found about the battery of this chip.
In an S10+ vs iPhone XS Max speed test, the S10+ (again the Exynos) beats the iPhone in almost every application. I didn't expect that to happen, since it almost never has. The apps are supposedly the same most of the time, though they may well use completely different algorithms under the hood for the same task; but generally iOS apps are cleaner inside and their developers have higher standards of work, so how can the Exynos be THAT much better?
From what I see, the Exynos 9820 is not what it is perceived to be here on XDA...
Duncan1982 said:
From what I see, the Exynos 9820 is not what it is perceived to be here on XDA...
And that's a demo unit with 6GB of RAM; there are even higher benchmark runs from real ones around:
https://browser.geekbench.com/v4/cpu/12211126
But everyone is dismissing Geekbench as "not reliable", and in a way it is not reliable, since it doesn't demonstrate the effectiveness of a good scheduler; even the crappiest ones will be pushed to maximum performance once Geekbench runs. We need more comparisons with other tools.
It is a solid fact that the Exynos is much faster than anything else in single-core performance except the Apple A12, and is much faster than the SD855, while the SD855 is faster in multi-core but not by much.
My only concern with the Exynos is the stuttering, frame drops and overall smoothness. I don't really care about benchmarks, and the S10 is already ultra fast at launching apps on both the Exynos and the SD855, but the main concern, as I mentioned, is smoothness, which I think will come down to how the kernel handles things and is optimized, and whether it targets performance or only efficiency.
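Since "how fast it launches apps" keeps coming up, one way to get numbers instead of impressions is the activity manager's built-in timing. A rough sketch over adb is below; it assumes the usual "am start -W" output (ThisTime/TotalTime/WaitTime lines, in milliseconds) and uses the Settings activity purely as an example target.

```python
# Rough sketch: measure launch time with the activity manager's own timing over adb.
# "am start -W" prints ThisTime/TotalTime/WaitTime in milliseconds.
import subprocess

ACTIVITY = "com.android.settings/.Settings"   # example target present on most ROMs

out = subprocess.run(
    ["adb", "shell", "am", "start", "-W", "-n", ACTIVITY],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if line.strip().startswith(("ThisTime", "TotalTime", "WaitTime")):
        print(line.strip())
```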
