Powercolor Vega Red Devil 56 - Non-Review Stuff - PC Hardware General

Hello folks!
You may have noticed on Twitter that we have the Powercolor Red Devil Vega 56 in the house for review. But since our reviews will remain largely focused on Linux, even for GPUs, I wanted to find a place to ask and discuss anything else that readers may want to know. That's where this will come in.
If there's anything you want to see - certain photos, tests (3DMark or game benchmarks) - let me know. For Windows testing I can throw it in the Coffee Lake test bench while another machine grinds out the actual benchmarks for the review.

Here's the progress on the benchmarks:
3DMark (Windows, so I don't put in the review): Under 5% difference versus the 1080 Founder's Edition on Fire Strike & Time Spy.
Destiny 2 vs. the 580: the difference is noticeable but hard to measure. I think 1080p is still okay for comparing the 580 and Vega, but I want to run additional benchmarks at higher resolutions to better test against the 1080.
GPU Computing: There has been a change from Polaris to Vega in OpenCL support. Vega was the starting point of the transition to Radeon Open Compute (ROCm), so getting OpenCL working is not as easy as before. As a matter of fact, I've been unable to get OpenCL tests to work after two days. I'm going to spend some more time on it soon, but I'm growing concerned - this is one of the biggest things in testing GPUs for XDA. Developers have legit reasons to use these beyond gaming and we want to highlight that - just like I did when testing the 1080 Founder's Edition on Thunderbolt 3.
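For anyone who wants to sanity-check OpenCL visibility on their own setup, here's a minimal sketch (assuming Python with the pyopencl package on top of an installed OpenCL runtime such as ROCm's; any OpenCL info tool would do the same job). It simply enumerates platforms and devices - if nothing shows up, the runtime isn't being picked up:
Code:
import pyopencl as cl  # assumes pyopencl is installed over an OpenCL ICD (e.g. ROCm's)

# List every OpenCL platform and device the runtime exposes.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Compute units: {device.max_compute_units}")
        print(f"    Global memory: {device.global_mem_size // (1024 ** 2)} MiB")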
Hopefully we can get this sorted soon. This is the first GPU we've received for review and I want to do it right. Will have more here soon.

Related

Dual Tegra 2 (G Tablet) vs dual Snapdragon (HP Touchpad)?

Does anyone have any thoughts on this issue?
Which one is faster: the Tegra 2 dual core or the Snapdragon dual core?
http://www.anandtech.com/show/4054/first-look-viewsonic-gtablet-and-tegra-2-performance-preview/2
I don't know about speed; they seem comparable in those tests at least. But as an owner of both, if they get Android working right on the Touchpad, I will gladly move on. The screen on the G Tablet is just sooooo bad and the Touchpad's is sooooo nice.
I've often wondered about these types of questions. It's like watching Top Gear: a car is .01 seconds faster than another and they start singing its praises... in practical use, does it really matter much?
I remember when PCs were getting fractionally faster and faster. Sure, a PC from 1994 is going to seem SLOOOW compared to the best available now, but in 2003, would an increase in megahertz really be all that different? To the trained eye, maybe, but to the guy making spreadsheets all day, probably not.
Besides, this question will be moot when the Tegra 3 is released...lol
Do you want to know if it's faster running webOS, or faster running Android 2.2, 2.3, or 3.x? Each tablet is capable of running certain OSs with a certain degree of proficiency.
I'm trying my best to get my hands on a Touchpad too.
Just curious as to why some tablets have the Snapdragon and some have the Tegra 2.
The Snapdragon has a more powerful GPU but a less powerful, smartphone-oriented CPU. The scores for the HTC Sensation tell the story, since it uses the same SoC and radio. At retail price the Touchpad was a horrific, substandard tablet; at $99, not so much. The Snapdragon in that tablet is also capable of clocking the cores independently of each other, which allegedly saves power.
Sent from my GtabComb using Tapatalk

iPad 4 vs 5250 (Nexus 10 SoC) GLBenchmark full results. UPDATE: now with AnandTech!!

XXXUPDATEXXX
AnandTech has now published the performance preview of the Nexus 10; let the comparison begin!
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Well, the first full result has appeared on GLBenchmark for the iPad 4, so I have created a comparison with the Samsung Arndale board, which uses exactly the same SoC as the Nexus 10 and so should be very close in performance to Google's newest tablet. GLBenchmark, as its name suggests, tests OpenGL graphics performance, which is an important criterion for gaming.
Which device wins? Click the link to find out.
http://www.glbenchmark.com/compare....ly=1&D1=Apple iPad 4&D2=Samsung Arndale Board
If you're really impatient: the iPad 4 maintains its lead in tablet graphics. The Nexus 10 may perform slightly better in final spec, but the underlying low-level performance will not change much.
I've also made a comparison between the iPad 3 & 4.
Interestingly, the in-game test GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p), which runs independent of native screen resolution, shows the following:
iPad 4: 48.6 FPS
iPad 3: 25.9 FPS
5250 : 33.7 FPS
So the iPad 4 is twice as fast as its older brother; the Exynos will probably score nearer 40 FPS in final spec, with new drivers and running 4.2 (the board runs ICS, though Jelly Bean did not really boost GL performance over ICS). What is interesting is that the iPad 4, whose GPU is supposedly clocked at 500 MHz vs 250 MHz in the iPad 3, does not perform twice as fast in the low-level tests.
Fill rate, triangle throughput, vertex output etc. are not double those of the iPad 3, so although the faster A6 CPU helps, I reckon a lot of the improvement in the Egypt HD test comes from improved drivers for the SGX 543MP4 in the iPad 4. The Galaxy S2 received a big jump in GL performance when it got updated Mali drivers, so I imagine we should see good improvements for the T604, which is still a new product and not as mature as the SGX 543.
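For what it's worth, a quick back-of-the-envelope check of those ratios (a Python sketch using only the Egypt HD numbers quoted above):
Code:
# Ratios from the Egypt HD offscreen (1080p) scores quoted above.
scores = {"iPad 4": 48.6, "iPad 3": 25.9, "Exynos 5250 (Arndale)": 33.7}

print(f"iPad 4 vs iPad 3: {scores['iPad 4'] / scores['iPad 3']:.2f}x")                  # ~1.88x, roughly double
print(f"iPad 4 vs 5250:   {scores['iPad 4'] / scores['Exynos 5250 (Arndale)']:.2f}x")   # ~1.44x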
http://www.glbenchmark.com/compare....tified_only=1&D1=Apple iPad 4&D2=Apple iPad 3
I'd imagine the new iPad will keep the lead in benchmarks for now, as it'll take Sammy and Google some time to optimize the beast; in the end, however, actual app and user-interface performance is what matters, and reports are overwhelmingly positive on the Nexus 10.
So the Mali-T604 didn't turn out five times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with its Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after a firmware update it's way better than other SoCs in Android handsets.
hung2900 said:
So the Mali-T604 didn't turn out five times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
Click to expand...
Click to collapse
Not sure about this, but don't benchmark tools need to be updated for new architectures too? The A15 is quite a big step; SW updates may be necessary for a proper bench.
Damn..now I have to get an iPad.
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers optimized for the SoC as is expected for the N10. Samsung has a habit of optimizing its drivers with further updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic. It was even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011; check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing numbers lower than Tegra 2! It was that bad initially.
Then look when Anand finally reviewed the device after a few months:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher; they could have been higher still if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure this would not make such a big difference in the numbers.
If you check the SGS2 numbers again now, there's another 50% improvement in performance from the time Anand did his review.
Check this SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10.
Also, look carefully at the fill rate in the Arndale board test. It's much less than expected. ARM says that a Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing only about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be falling so far short.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
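To put that shortfall into rough numbers (a Python sketch using only the figures quoted above; the 60% is approximate, and it assumes fill rate scales linearly with clock):
Code:
# Rough fill-rate check from the figures quoted above.
quoted_at_500mhz = 2.0   # GPixels/s, ARM's quoted figure for Mali-T604 @ 500 MHz
clock_mhz = 533          # Samsung's clock in the Exynos 5250

expected = quoted_at_500mhz * clock_mhz / 500   # ~2.13 GPixels/s if it scaled linearly
measured = 0.6 * quoted_at_500mhz               # "about 60% of quoted" -> ~1.2 GPixels/s

print(f"Expected: {expected:.2f} GPixels/s")
print(f"Measured (approx.): {measured:.2f} GPixels/s")
print(f"Shortfall: {(1 - measured / expected) * 100:.0f}%")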
hung2900 said:
So the Mali-T604 didn't turn out five times better than the Mali-400, or maybe Samsung underclocked it.
Still very good, but not the best.
________________
Edit: I forgot that the Exynos 4210 with its Mali-400MP4 GPU had very bad GLBenchmark scores initially (even worse than the PowerVR SGX540), but after a firmware update it's way better than other SoCs in Android handsets.
Click to expand...
Click to collapse
In areas where the Mali-400 lacked performance, like fragment- and vertex-lit triangle output, the T604 is comfortably 5x the performance, whereas in these low-level tests the iPad 4 is not a solid 2x the power of the iPad 3, yet it achieves twice the FPS of its older brother in Egypt HD. I suspect drivers are a big factor here, and the Exynos 5250 will get better as the drivers mature.
hot_spare said:
I believe we have to take the Arndale board numbers with a pinch of salt. It's a dev board, and I doubt it has drivers optimized for the SoC as is expected for the N10. Samsung has a habit of optimizing its drivers with further updates.
The SGS2 makes for a good case study. When it was launched at MWC 2011, its numbers were really pathetic. It was even worse than Tegra 2.
Anand ran benchmarks on the pre-release version of the SGS2 at MWC 2011; check this:
http://www.anandtech.com/show/4177/samsungs-galaxy-s-ii-preliminary-performance-mali400-benchmarked
It was showing numbers lower than Tegra 2! It was that bad initially.
Then look when Anand finally reviewed the device after a few months:
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
Egypt (native resolution) numbers went up by 3.6x and Pro also got 20% higher; they could have been higher still if not limited by vsync. GLBenchmark moved from 2.0 to 2.1 during that phase, but I am sure this would not make such a big difference in the numbers.
If you check the SGS2 numbers again now, there's another 50% improvement in performance from the time Anand did his review.
Check this SGS2 numbers now:
http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
This is just to show how driver optimization can have a big effect on performance. My point is that we have to wait for proper testing on the final release of the N10.
Also, look carefully at the fill rate in the Arndale board test. It's much less than expected. ARM says that a Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing only about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Samsung has clocked the GPU @ 533MHz, so it shouldn't be falling so far short.
According to Samsung, it's more like 2.1 GPixels/s: http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Fill rate is a low-level test, and there shouldn't be such a big difference from the quoted value. Let's wait and see how the final device shapes up.
Click to expand...
Click to collapse
I agree with most of what you have said. On the GPixel figure, this is like ATI GPU teraflop figures always being much higher than Nvidia's: in theory, with code written to hit the device perfectly, you might see those high figures, but in reality Nvidia cards with lower on-paper numbers equaled or beat ATI in actual game FPS. It all depends on whether the underlying architecture is as efficient in real-world tests, versus maximum technical numbers that can't be replicated in actual game environments.
I think the current resolution of the iPad / Nexus 10 is actually crazy, and we would see prettier games at lower resolutions; the amount of resources needed to drive those high-MP displays means lots of compromises will be made in terms of effects / polygon complexity etc. to ensure decent FPS, especially when you consider that driving Battlefield 3 at 2560 x 1600 with AA and high textures requires a PC that burns 400+ watts of power, not a 10-watt SoC.
Overall, considering the Nexus 10 has twice the RAM for game developers to use and faster CPU cores, games should look equally nice on both; the biggest factor will be the level of support game developers provide for each device, and the iPad will probably be stronger in that regard. Nvidia was able to coax prettier games out of Tegra 3 through developer support; hopefully Google won't forget the importance of this.
What's the point of speculation? Just wait for the device to be released and run all the test you want to get confirmation on performance. Doesn't hurt to wait
BoneXDA said:
Not sure about this, but don't benchmark tools need to be updated for new architectures too? The A15 is quite a big step; SW updates may be necessary for a proper bench.
Click to expand...
Click to collapse
Both the A9 and A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs are too powerful and max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
I can't wait to see this Exynos 5250 in a 2.0GHz quad-core variant in the semi-near future... Ohhhh, the possibilities. Samsung has one hell of a piece of silicon on their hands.
Chrome
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth, not an insult; the stock AOSP browser would be so much faster.
Turbotab said:
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
Google, if you want to use Chrome as the stock browser, then develop it to be fast and smooth, not an insult; the stock AOSP browser would be so much faster.
Click to expand...
Click to collapse
True.. Chrome on mobile is still not up to desktop level yet. I believe it's v18 or something, right? The stock browser would have much better results in SunSpider/Browsermark. The N4 numbers look even worse; somewhere the optimizations aren't working.
The GLBenchmark tests are weird. The Optimus G posts much better results than the N4 when both are the same hardware. It in fact scores lower than the Adreno 225 in some cases. This is totally whacked.
For the N10, I am still wondering about the fill rate. Need to check what the guys say about this.
Is it running some debugging code on the devices at this time?
Turbotab said:
Both the A9 and A15 use the same instruction set architecture (ISA), so no, they won't. Benchmarks may need to be modified if new SoCs are too powerful and max out the old benches, but for GLBenchmark that has not happened yet, and there are already new updates in the pipeline.
Click to expand...
Click to collapse
Actually not. A8 and A9 are the same ISA (Armv7), while A5 A7 and A15 are in another group (Armv7a)
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali-T604 at 750 MHz would destroy everything.
hung2900 said:
Actually not. A8 and A9 are the same ISA (Armv7), while A5 A7 and A15 are in another group (Armv7a)
Click to expand...
Click to collapse
I have to disagree, this is from ARM's info site.
The ARM Cortex-A15 MPCore processor has an out-of-order superscalar pipeline with a tightly-coupled low-latency level-2 cache that can be up to 4MB in size. The Cortex-A15 processor implements the ARMv7-A architecture.
The ARM Cortex-A9 processor is a very high-performance, low-power, ARM macrocell with an L1 cache subsystem that provides full virtual memory capabilities. The Cortex-A9 processor implements the ARMv7-A architecture and runs 32-bit ARM instructions, 16-bit and 32-bit Thumb instructions, and 8-bit Java bytecodes in Jazelle state.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.cortexa/index.html
Keion said:
Once we get rid of the underclock, no tablet will be able to match it. I'm sure the Mali-T604 at 750 MHz would destroy everything.
Click to expand...
Click to collapse
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Do remember that the awesome resolution does tax the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Harry GT-S5830 said:
Do remember that the awesome resolution does tax the GPU a lot. Heck, most lower-end desktop GPUs would struggle.
Click to expand...
Click to collapse
Indeed it does, but not in offscreen testing, where Anand made his proclamation.
Sent from my iPad Mini using Tapatalk
Hemlocke said:
Except the iPad 4, which has a GPU that is currently 57% faster than the T604.
Sent from my iPad Mini using Tapatalk
Click to expand...
Click to collapse
Nah, I think we can beat that too.
Drivers + OC.

Optimization help

Okay, so I'm REALLY anal about the speed of my phone; the slightest bit of stutter or lag, even just from the notification center itself, really bothers me. I was wondering if someone could recommend some really good settings for my phone.
I currently am running
JellyBam 6.3.0 (JB 4.2.2)
4Aces Kernel
I would like some good settings regarding governor, CPU Frequency, and any other things I can do including stuff in developer options, if that helps. Thanks!
It is likely that you will always have "some" degree of lag present on the Note 1, due in large part to our GPU. We are also limited in performance by our dual-core CPU.
That being said, the closest to zero lag I've found is using Flappjaxxx's current JB AOSP build (0225), combined with Nova Launcher and his newest 3.0.9 ALT kernel.
Window, transition, and animator animation settings to "off" in developer settings.
Wheatley governor set to 384 MHz min, 1.72 GHz max.
System Tuner app system controls set to "recommended".
No over/undervolt.
Forced GPU rendering in developer settings.
These are my main settings, but yours will likely differ - a rough sketch of applying them follows after this post.
Happy tuning....g
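For anyone who prefers to script the governor/frequency settings above instead of using an app, here's a minimal sketch (assuming a rooted device with the standard cpufreq sysfs nodes; node paths, available governors and frequency steps vary by kernel, so treat the values as placeholders taken from the post above):
Code:
# Apply governor and frequency limits via the standard cpufreq sysfs nodes (root required).
CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

settings = {
    "scaling_governor": "wheatley",  # must be a governor your kernel actually ships
    "scaling_min_freq": "384000",    # kHz -> 384 MHz
    "scaling_max_freq": "1728000",   # kHz -> ~1.72 GHz
}

for node, value in settings.items():
    with open(f"{CPUFREQ}/{node}", "w") as f:
        f.write(value)
    print(f"{node} -> {value}")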
^^Limited performance from "only a dual core" ...
Hardware is WAY ahead of software right now.
The second core is offline most of the time due to no load and developers of apps not fully understanding how to utilise multiple threads...
Adding more cores on top of an unused core ain't really gonna help much.
And yet we can't even stream a quality YouTube video above 22 FPS, all the while the MSM8660 specs boast a capability of nearly 60 FPS with the Adreno 220 GPU.
So my question is: are we seeing reduced performance from the CPU, or the GPU? It can't be all software, as we see the reductions across software ranging from GB to JB.
Drivers are in play of course, but I can hardly imagine a piece of code so poorly made as to reduce output capacity by 50%.
Not doubting you, brother, because I "know" you know your way around this machine, and because we have so many times traveled the same paths of thought. And it's entirely possible I'm missing a critical point here. But damn... I wanted the stated video speeds, and I'm not getting what Qualcomm and company promised me. In a direct comparison to the Note 2's quad core it's night and day, as I watch the same video on my Note 1 next to my wife's Note 2. The odds are in favor of two cores running at low speed on the quad-core unit, as opposed to our Note 1 running a single core wide open until the second one is needed. That of course was the basis for my statement.
The OP can tweak for many great improvements, but I personally feel like we were duped on the claimed output of the 8660.....g
Just get a wife - then the phone lag will seem so trivial.
LOL .....he speaks the truth ....g

[INFO] Nexus 10 vs Nexus 7 and emulators

Last summer, I decided to buy a Nexus 7 to use mainly as an ebook reader. It's perfect for that, with its very sharp 1280x800 screen. It was my first Android device and I love this little tablet.
I'm a fan of retro gaming and I install emulators on every device I have: Pocket PC, Xbox, PSP Go, iPhone, iPad 3, PS3. So I discovered that the Android platform has one of the most active communities for emulation fans like me, and I bought many emulators, including all those made by Robert Broglia (the .EMU series). They were running great on the N7, but I found that 16GB was too small, as was the screen.
I waited and waited until the 32 GB Nexus 10 became available here in Canada and bought it soon after (10 days ago). With its A15 cores, I was expecting the N10 to be a great device for emulation, but I am now a little disappointed. When buying the N10, I expected everything to run faster than on the N7 by a noticeable margin.
Many emulators run slower on the N10 than on the N7. MAME4droid and MAME4droid Reloaded are no longer completely smooth with more demanding ROMs; Omega 500, Colleen, UAE4droid and SToid are slower, and some others needed much more tweaking than on the N7. I'm a little extreme about accuracy of emulation and I like everything to be as close to the real thing as possible. A solid 60 fps for me is a must (or 50 fps for PAL machines).
On the other hand, there are other emus that run very well: the .EMU series and RetroArch, for example. These emulators are much more polished than the average quick port and they run without a flaw. They're great on the 10-inch screen and I enjoy them very much. The CPU-intensive emulators (Mupen64Plus AE and FPse) gained some speed, but less than I anticipated.
So is this because of the monster Nexus 10's 2560x1600 resolution? Or is it because of limited memory bandwidth? Maybe some emulators are not tweaked for the N10 yet. I wish some emulators had the option to set a lower resolution for rendering and then upscale the output. I think that many Android apps just try to push frames at the native resolution without checking first if there is a faster way.
The N7 has a lower-clocked quad-core CPU but only 1/4 the resolution. I think it's a more balanced device than the N10, which may have a faster dual-core CPU but has too many pixels to push. It's much like the iPad 3, which was twice as fast as the iPad 2 but had a 4x increase in resolution.
I am now considering a custom ROM on the N10, but I wonder if I will see an increase in emulation speed. Maybe those of you who made the jump can tell me. I'm thinking about AOKP, maybe.
Any suggestion on that would be appreciated, thanks!
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to provide the full 12.8 GB/s of memory bandwidth that a 2560x1600 resolution requires.
EniGmA1987 said:
The emulators just need to be tweaked a bit to perform better on the completely different processor architecture. Really, our processor is far more powerful than the Nexus 7's, so the emulators should run faster. I too am a fan of the old games, and I play Super Nintendo and Game Boy Advance (and some Color) games quite often. I find performance to be perfect with no issues at all, but then again those aren't exactly "demanding" emulators.
We do not have any sort of memory bandwidth limitation on the Nexus 10. The tablet has been designed to provide the full 12.8 GB/s of memory bandwidth that a 2560x1600 resolution requires.
Click to expand...
Click to collapse
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Having to render to that size of screen [2560x1600] will slow the game down. It’s called being ‘fill rate bound’. Even for a good processor it's a lot of work as the game uses quite a lot of overdraw.
The solution is to draw everything to a smaller screen (say half at 1280x800) and then stretch the final image to fill the screen.
Click to expand...
Click to collapse
A sad truth: my Nexus 10 gets damn hot and I have to play games at 1.4 or 1.2 GHz; that sucks.
Sent from my XT925 using xda app-developers app
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Click to expand...
Click to collapse
But fill rate isn't memory bandwidth. We need both more MHz and more raster operations (ROPs) to get a higher fill rate in pixels per second. We can overclock the GPU to get the MHz, and that will help, but we also have to find a way to deal with the higher heat output. More ROPs are impossible, as how many we have is fixed by the hardware design. If we ever get to overclock up to around 750 MHz, then we should see a 30-40% improvement in fill rate. At that point we may have memory bandwidth problems, but we won't know for sure until we get there. But the 12.8 GB/s of bandwidth that we currently have is enough to support 2560x1600 resolution at our current GPU power. Our Nexus 10 also has the highest fill rate of any Android phone or tablet to date, about 1.4 GTexels/s. And if we had a memory bandwidth limitation, then we would see no improvement at all from the current overclock we do have (up to 612-620 MHz), because the clock speed wouldn't be where the bottleneck is. Yet we can clearly see in benchmarks and real gaming that we get FPS increases with higher MHz, so our current problem is the fill rate and not the memory bandwidth.
Also, the solution is not to render the game at half the resolution, as that is a band-aid over the real problem. If the developer of a game coded the game properly we wouldn't have this problem; or, if they don't feel like doing that, then they should at least stop trying to put more into the game than their un-optimized, lazy project is capable of running nicely.
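A quick sketch of where that 30-40% figure comes from, assuming fill rate scales linearly with GPU clock at a fixed ROP count (the clocks are the ones mentioned in this thread):
Code:
# Fill rate scales roughly linearly with GPU clock for a fixed ROP count.
stock = 533                # MHz, stock Mali-T604 clock in the Nexus 10
for clock in (620, 750):   # current overclock and the hoped-for one
    gain = (clock / stock - 1) * 100
    print(f"{clock} MHz -> ~{gain:.0f}% more fill rate than stock")
# 620 MHz -> ~16%, 750 MHz -> ~41%, in line with the 30-40% estimate above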
espionage724 said:
Hmm, if no memory bandwidth limitation exists on the N10, wouldn't I be able to run GTA 3 at 100% screen resolution and not have significantly lower FPS, as compared to 50% resolution?
Even Beat Hazard Ultra seems to be a bit laggy on the N10. When I inquired about it to the developer, he said:
Click to expand...
Click to collapse
With that logic you could buy any video card for a PC and it would run any game at any resolution the card supports. That isn't the case, because rendering involves more than just fill rate: there are textures, polygons, multiple rendering passes, filtering, and so on. As EniGmA1987 mentioned, nothing has been optimized to take advantage of this hardware yet; developers were literally crossing their fingers hoping their games would run 'as is'. Thankfully the A15 CPU cores in the Exynos will be used in the Tegra 4 as well, so we can look forward to CPU optimizations soon, which will definitely help.
Emulators are more CPU-intensive than anything else; give it a little time and you won't have any problems with your old-school games. Run the new 3DMark bench to see what this tablet can do - it runs at native resolution and it's not even fully optimized for this architecture yet.
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions, so fill rate, rendering and overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation, and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fill rate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough about it that I started googling around to find out why.
Lodovik said:
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth
Click to expand...
Click to collapse
Just curious, but what is that calculation supposed to be? The total bandwidth needed? Because I don't see your bit depth in there, unless the 4 is supposed to be that? If that's true, then you are calculating on 4-bit color depth?
And then the result would just be the bandwidth required for pixel data to memory, wouldn't it? It wouldn't include texture data in and out of memory and other special functions like post-processing.
2560*1600 = number of pixels on the screen
4 = bytes per pixel for 32-bit depth
60 = frames / second
/1024/1024 = divide twice to get the result in MB
Actually, I made a typo; the result is 937.5 MB/s or 0.92 GB/s. This is just a rough estimate to get an idea of what is needed at this resolution just to push all the pixels on the screen in flat 2D at 60 fps, assuming that emulators don't use accelerated functions.
My point was that with 12.8 GB/s of memory bandwidth, we should have more than enough, even if this estimate isn't very accurate.
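For completeness, here's the same estimate as a small Python sketch (it just reproduces the arithmetic above, nothing more):
Code:
# Rough framebuffer bandwidth needed to push every pixel at 60 fps in flat 2D.
width, height = 2560, 1600
bytes_per_pixel = 4   # 32-bit color
fps = 60

mb_per_s = width * height * bytes_per_pixel * fps / 1024 / 1024
print(f"~{mb_per_s:.1f} MB/s")          # ~937.5 MB/s
print(f"~{mb_per_s / 1024:.2f} GB/s")   # ~0.92 GB/s, a small fraction of the 12.8 GB/s available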
Thanks for the explanation
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (KTManta up to 720MHz), both ROMs have increased memory speeds, which raise memory bandwidth to 13.8GB/s, up from 12.8 stock.
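As a rough illustration of where those bandwidth figures come from (a sketch assuming a 64-bit LPDDR3 interface, which is what the 12.8 GB/s stock figure implies; the overclocked transfer rate is simply back-calculated from 13.8 GB/s):
Code:
# Bandwidth = bus width (bytes) * effective transfer rate (transfers/second).
bus_bytes = 64 // 8   # assumed 64-bit memory interface

for label, transfers_per_s in (("stock", 1600e6), ("overclocked", 1725e6)):
    print(f"{label}: {bus_bytes * transfers_per_s / 1e9:.1f} GB/s")
# stock: 12.8 GB/s, overclocked: ~13.8 GB/s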
Thanks for the info. There's so many configuration options available for the Nexus 10. I really enjoy having all those possibilities.
EniGmA1987 said:
If there really were a memory bandwidth limitation, the newer Trinity kernels and the newest KTManta should help. In addition to the higher GPU speeds they both allow (KTManta up to 720MHz), both ROMs have increased memory speeds, which raise memory bandwidth to 13.8GB/s, up from 12.8 stock.
Click to expand...
Click to collapse
Lodovik said:
2560*1600*4*60/1024/1024 = 937.3 MB/s for a 60 fps game at 32-bit depth. Most emulators don't use 3D functions, so fill rate, rendering and overdraw won't be a factor. Most emulators are single-threaded (correct me if I'm wrong) and the A15 should shine in this particular situation, and even more so in multi-threaded scenarios. With its out-of-order pipeline and greatly enhanced efficiency it should be perfectly suited for the job.
We have the fill rate, we have enough CPU power, and I'm still wondering why simple apps like emulators aren't much faster than that. Is it Android? Is it the Dalvik VM? Or is it because some emulators need to be written in native code instead of running in the Java VM? I'm not a developer and I have only minimal knowledge in this department. I can only speculate, but I'm curious enough about it that I started googling around to find out why.
Click to expand...
Click to collapse
You are taking what I said out of context. I was responding to someone else, thus the "quote" above my post.
Since you posted, I loaded up some Super Nintendo, N64, and PlayStation games on my N10 without any issues. It may just be your setup. There are a lot of tweaks out there that could easily increase performance; one great and very simple one is enabling 2D GPU rendering, which is in developer options. Just do some searching. GPU overclocking won't help much since, as you said above, your games are only 2D. I am sure you can get them running just fine.

Vorke Z3 RK3399 Powered 4K Smart TV Box with Type C SATA Interface

Rumours about the RK3399 chipset from Rockchip have swept across the internet, and we seem to be barely a soundbite away from its arrival. Promising to shake up the decades-old way we think about the TV box, Vorke has included the RK3399-powered Z3 in its ambitious plans. Next February is expected to mark the birth of the gadget.
Adopting the Rockchip RK3399, the Vorke Z3 is set to outperform almost all of its competitors on the market. Unveiled at CES 2016, the RK3399 is a hexa-core processor built around a dual-core ARM Cortex-A72 plus quad-core ARM Cortex-A53 arrangement, paired with a Mali-T860MP4 quad-core GPU. Reportedly, it offers a significant performance boost over its predecessors, including the RK3288, outranking SoCs from Amlogic and Allwinner. It scored 72,483 in AnTuTu - what madness!
Unlike other contenders, the Z3 offers 4GB of LPDDR3 RAM and 32GB of eMMC storage to back up your TV entertainment, so you can use it right out of the box. Alongside the storage, it's worth pointing out that Vorke brings together the compelling KODI entertainment center and the RK3399's insane 4K image quality in one nifty box. The SoC also brings support for H.265 and VP9 4K video decoding.
One of the problems with any Android box is that it needs a good internet connection and enough speed for the bandwidth-hogging streams that video can demand. On account of this, the Z3 equips itself with AC WiFi, the newest WiFi protocol, offering data transfer speeds of up to 1200Mbps under some circumstances. Besides, there is also support for Gigabit Ethernet.
When it comes to expansion possibilities, there is no compromise. With 1x USB 2.0, 1x USB 3.0, 1x Type-C and 1x SATA placed along the sides, the Z3 lets you attach USB peripherals or even more storage. The aforementioned Type-C port is another highlight for an Android 6.0 box, and when you factor in the sheer number of connections on the Z3, you begin to realize why it is a little bigger than the Z1. With support for SATA, USB 2.0, USB 3.0, Gigabit Ethernet, SPDIF and HDMI 2.0, there are few devices you won't be able to connect to it.
What to Expect?
Rockchip RK3399
Android 6.0
4GB + 32GB
AC WIFI + Gigabit Ethernet
4K VP9
USB3.0, Type C and SATA interface
Just got confirmation that my Z1 is supposed to arrive in 3-5 days.......... So any idea what the price will be on this puppy, the Z3? When will it launch? The AnTuTu score is badasszz.
How much was your Z1?
I'm guessing the Z3 will be about 40% to 50% dearer.
The Z1 in some webshops is US$ 120 - I'd expect the Z3 to be about $160-180.
It looks like a decent unit - but also look around here: AnTuTu is NOT a reliable bench to quote.
Some have reported an Xperia Z3 (if my memory serves me) as doing 71000 in AnTuTu, while others say it should be about 50k.
Fluffbutt said:
How much was your Z1?
I'm guessing the Z3 will be about 40% to 50% dearer.
The Z1 in some webshops is US$ 120 - I'd expect the Z3 to be about $160-180.
It looks like a decent unit - but also look around here: AnTuTu is NOT a reliable bench to quote.
Some have reported an Xperia Z3 (if my memory serves me) as doing 71000 in AnTuTu, while others say it should be about 50k.
Click to expand...
Click to collapse
The Z1 is on sale at $74.99.
The Z3 is expected to be twice the price of the Z1.
We will also try CPU-Z.
Some devices do optimizations for the AnTuTu benchmark so that they score pretty high.
linhuizhen said:
The Z1 is on sale at $74.99.
The Z3 is expected to be twice the price of the Z1.
We will also try CPU-Z.
Some devices do optimizations for the AnTuTu benchmark so that they score pretty high.
Click to expand...
Click to collapse
I had an email from Vorke; they say "under $200".. so I replied, "Hopefully under $150 as well" hahah! Hope it's not too dear!
What about that 3DMark bench, Firestorm (or is it Fire Strike)... that seems to be a decent test, but it's mainly GPU, isn't it?
Fluffbutt said:
I had an email from Vorke; they say "under $200".. so I replied, "Hopefully under $150 as well" hahah! Hope it's not too dear!
What about that 3DMark bench, Firestorm (or is it Fire Strike)... that seems to be a decent test, but it's mainly GPU, isn't it?
Click to expand...
Click to collapse
We will check :fingers-crossed:
As a slight sideways jump - I notice its competitor boxes both have heat sinks + fans listed in their specs - does anyone know if the Vorke is using active cooling?
I think I was right in a different forum's post - maybe the RK chip runs a little hotter than passive cooling can deal with?
So I got my Z1 here in Northern Virginia on Monday the 16th, ordered Jan. 5 from Geekbuying. After three days I'm pretty happy. Games I hadn't been able to play - because of a bricked MINIX X8-H, and not being able to root the MiBox so a PS3 SIXAXIS controller could work - run flawlessly on the Z1. Showbox runs smooth, as do YouTube and Kodi 16.1; AnTuTu 3D score with no tweaks is 41723. Jide has a crowd-funded Rockchip RK3399 TV box for March or May, USD $99-129.
I don't trust a company that needs Kickstarter/crowd funding to develop a device - they smell like "fly-by-nighters" to me... produce the device, rake in some dosh, run like buggery when people start to complain or want tech support.
That Mio/MixIO whatever it's called... US$129 on Kickstarter... nope, zero trust...
Two things I like about Vorke - they exist as a company with self-funded development... and they responded to my silly queries; pre-sales support suggests good after-sales support.
Fluffbutt said:
I don't trust a company that needs Kickstarter/crowd funding to develop a device - they smell like "fly-by-nighters" to me... produce the device, rake in some dosh, run like buggery when people start to complain or want tech support.
That Mio/MixIO whatever it's called... US$129 on Kickstarter... nope, zero trust...
Two things I like about Vorke - they exist as a company with self-funded development... and they responded to my silly queries; pre-sales support suggests good after-sales support.
Click to expand...
Click to collapse
Just out of interest, I found a US site with a price for the UGOOS UT5 (basically the same as the Vorke Z3) -- $349!!!
What an absolute joke - you could buy a half-decent laptop and use that as a TV box, FFS!
I guess that pointless little AMOLED display adds to the price.
Still, early days, it might just be a place-holder price.
Edit - why did that double post? Weird.
Fluffbutt said:
Just out of interest, I found a US site with a price for the UGOOS UT5 (basically the same as the Vorke Z3) -- $349!!!
What an absolute joke - you could buy a half-decent laptop and use that as a TV box, FFS!
Click to expand...
Click to collapse
I agree. What's more, I really don't know what all the fuss about the RK3399 is, when for more than a year there has been the Amazon Fire TV available with the MediaTek MT8173 SoC (2x Cortex-A72 + 2x Cortex-A53).
Maybe this is not the best forum to discuss that, because MediaTek isn't fond of open source, but people prefer a working solution over one you must fiddle with all the time to make it work for a while.
Yes - but the Amazon Fire TV is sort of locked, isn't it - its own UI, low customisation, an Amazon bias (understandable).
I've heard that the Vorke will be completely unlocked, rooted, open... maybe...
Anyway, the specs say differently:
Qualcomm Snapdragon 8064, quad core, 4x @ 1.7GHz, Qualcomm Adreno 320
MediaTek MT8173C, quad core, 2x @ 2GHz & 2x @ 1.6GHz, PowerVR Rogue GX6250 GPU
Click to expand...
Click to collapse
Neither of those will match the RK3399, and the Mali-T860MP4 is a very good SoC GPU. Not "superb" or "the best", but certainly good enough for nearly everything.
I do NOT like AnTuTu as a benchmark (it's heavily biased toward certain chips), but the AFTV gets 53K while the RK3399 gets 73K.
Fluffbutt said:
I do NOT like AnTuTu as a benchmark (it's heavily biased toward certain chips), but the AFTV gets 53K while the RK3399 gets 73K.
Click to expand...
Click to collapse
I would be more impressed if my RK3288 device couldn't already do 62k in AnTuTu. AFAIK AnTuTu is quite strongly GPU-biased, which might indicate that the RK3399's GPU is slower than the RK3288's, or that there are problems with drivers.
Besides, I prefer "clear" CPU benchmarks which can give me an indication of internet browsing performance. When I did some research on the MT8173 more than a year ago I found something like browser.geekbench.com/v4/cpu/compare/939975?baseline=1646914. (Note: Nexus 9 is the spoofed identity of my RK3288 device.) At the time I was quite pleased with that performance improvement, but devices with the MT8173 came only with 2GB RAM, which is too small an amount for me. Even so, the RK3399 isn't really more impressive to me than the MT8173 (check: browser.geekbench.com/v4/cpu/compare/939975?baseline=993600), and we are more than a year on.
Fluffbutt said:
Just out of interest, I found a US site with a price for the UGOOS UT5 (basically the same as the Vorke Z3) -- $349!!!
What an absolute joke - you could buy a half-decent laptop and use that as a TV box, FFS!
I guess that pointless little AMOLED display adds to the price.
Still, early days, it might just be a place-holder price.
Edit - why did that double post? Weird.
Click to expand...
Click to collapse
Ugoos is a trusted brand.
Jagee said:
I would be more impressed if my RK3288 device couldn't already do a 70k AnTuTu score. AFAIK AnTuTu is quite strongly GPU-biased, which might indicate that the RK3399's GPU is slower than the RK3288's, or that there are problems with drivers.
Besides, I prefer "clear" CPU benchmarks which can give me an indication of internet browsing performance. When I did some research on the MT8173 more than a year ago I found something like browser.geekbench.com/v4/cpu/compare/939975?baseline=1646914. (Note: Nexus 9 is the spoofed identity of my RK3288 device.) At the time I was quite pleased with that performance improvement, but devices with the MT8173 came only with 2GB RAM, which is too small an amount for me. Even so, the RK3399 isn't really more impressive to me than the MT8173 (check: browser.geekbench.com/v4/cpu/compare/939975?baseline=993600), and we are more than a year on.
Click to expand...
Click to collapse
We will try other benchmarks also. Recommend one please
linhuizhen said:
We will try other benchmarks also. Recommend one please
Click to expand...
Click to collapse
I slightly overestimated the AnTuTu score for my RK3288 device; it should be 62k, not 70k. Nevertheless I would really like to see Vellamo and Octane benchmark scores for the RK3399.
My RK3288 device can do about 4800 points with Chrome in Vellamo.
The Geekbench v4 score for the RK3399 is rather well known. You can brag only if you beat a 1600 single-core and 3000 multi-core score. browser.geekbench.com/v4/cpu/993600
linhuizhen said:
Ugoos is a trusted brand.
Click to expand...
Click to collapse
I'm not disputing that - they have a good track record... but that doesn't stop $349 being too high for this device. And I give them mucho credit for NOT trying to effing Kickstarter the device!
But the AMOLED screen is really a non-item - what's the point? A TV box is stuck under the TV table; I don't even look at mine, just use it...
And what can a little screen show anyway, apart from the time or some form of channel display? Any more info would mean you'd have to get down on the floor, closer to it, to read it!
An AMOLED screen is perhaps $50 of that $150 of over-pricing ($200 is all I'd be paying for this spec of TV box) - for $350 you can get a full i7 with Intel 520 GPU (400 GFLOPS) TV box! The Mali GPU in the RK3399 is rated at about 90 GFLOPS.
It smells like a gimmick to make it stand out from the other 3 or 4 RK3399 boxes coming.
So I'm less knocking Ugoos themselves and more knocking their "vision" of the yet-to-come TV Box.
*********************************************************************************
Geekbench 1520 and 2840 isn't too bad - just 160 lower than that magical 3000 isn't to be sniffed at.
Mind you, even Geekbench can be misleading - GB 3 gives over 5300 for the Samsung Galaxy Tab S2 9.7 (Qualcomm SoC). (Some reviews say 4300, 4800 or 5300, rounded off.)
Fluffbutt said:
An AMOLED screen is perhaps $50 of that $150 of over-pricing
It smells like a gimmick to make it stand out from the other 3 or 4 RK3399 boxes coming.
Click to expand...
Click to collapse
It might be a gimmick to make it stand out, but your estimation of the cost of such a display is quite off. First, it isn't an AMOLED display like in smartphones. Second, it doesn't have a similar size (5", 16:9 proportions).
I've seen OLED displays on MP3 players priced lower than $30.
Fluffbutt said:
Geekbench 1520 and 2840 isn't too bad - just 160 lower than that magical 3000 isn't to be sniffed at.
Mind you, even Geekbench can be misleading - GB 3 gives over 5300 for the Samsung Galaxy Tab S2 9.7 (Qualcomm SoC). (Some reviews say 4300, 4800 or 5300, rounded off.)
Click to expand...
Click to collapse
3000 is just a number for the multi-core Geekbench v4 score, which depends heavily on the number of cores. Even the Qualcomm SoC (MSM8976) you mentioned in the new Samsung Galaxy Tab S2 9.7 can get over 5000 points in Geekbench v3 with 8 cores (4xA57 + 4xA53) working simultaneously (source: http://browser.primatelabs.com/geekbench3/compare/4803220?baseline=5849462). That doesn't necessarily translate to better performance than even just 2xA73.
Another problem when comparing benchmarks is that some devices contain different hardware than previous "batches", like the new (Qualcomm MSM8976) and "old" (Samsung Exynos 5433 Octa) Galaxy Tab S2 9.7.
Another factor is the benchmarking software and OS used. There is a clear example: http://browser.primatelabs.com/geekbench3/compare/4803220?baseline=5849462 where one test is for the older design and on a 32-bit OS rather than 64-bit (AArch64). I don't even want to start on overheating problems, which can vary per device.
Jagee said:
It might be a gimmick to make it stand out, but your estimation of the cost of such a display is quite off. First, it isn't an AMOLED display like in smartphones. Second, it doesn't have a similar size (5", 16:9 proportions).
I've seen OLED displays on MP3 players priced lower than $30.
3000 is just a number for the multi-core Geekbench v4 score, which depends heavily on the number of cores. Even the Qualcomm SoC (MSM8976) you mentioned in the new Samsung Galaxy Tab S2 9.7 can get over 5000 points in Geekbench v3 with 8 cores (4xA57 + 4xA53) working simultaneously (source: http://browser.primatelabs.com/geekbench3/compare/4803220?baseline=5849462). That doesn't necessarily translate to better performance than even just 2xA73.
Another problem when comparing benchmarks is that some devices contain different hardware than previous "batches", like the new (Qualcomm MSM8976) and "old" (Samsung Exynos 5433 Octa) Galaxy Tab S2 9.7.
Another factor is the benchmarking software and OS used. There is a clear example: http://browser.primatelabs.com/geekbench3/compare/4803220?baseline=5849462 where one test is for the older design and on a 32-bit OS rather than 64-bit (AArch64). I don't even want to start on overheating problems, which can vary per device.
Click to expand...
Click to collapse
Oh, I agree on the guesstimate - I have no real idea what an OLED screen costs; your $30 is probably more valid than my $50 - but that makes it worse, not better.
What else is there to make this over $150 more than the other RK3399 boxes - "under $200" stated by Vorke, a $115 pre-sale price for another, and so on?
It MIGHT be a "place holder" number - but it's working against them... I'm not going back to look and see what the release price is, the $349 has put me off... especially if it's a case of "state a high price so the release over-price will sound better".
That's a marketing con, really - you state $349 before release, then drop it to a still-too-high $299 or $259 after release, and people will flock to it thinking they're getting a great deal. (Not saying they're doing it that way.)
But the other RK3399 boxes? £115 pre-release, expected to be £140 after; Vorke has stated to me, in email, "definitely under $200".
And I FULLY agree with you about the benchies.
Perhaps the better benchies are the real-world ones... game FPS -- 5.2 for one SoC, 14.8 for another, 25.7 for a third, and so on.
****************
Still - I'm liking this 6-core box... it's better than almost everything I've looked at, allowing for the Tegra devices that are semi-locked (heavily controlled launchers and biased "features" - understandable, of course, like the Amazon devices).
