Question: Android and zswap in kernel

I was recently trying out various kernels and hit one very intriguing feature in Devil kernel. It allows you to set aside some memory as zswap and change the kernel's swappiness value. I googled and found a lot of articles about zswap and the Linux kernel, but absolutely none on how it impacts Android performance.
So how does giving RAM to zswap benefit Android?
Is it something that should be done even on devices with 2 GB of RAM?
Does the fsync setting come into the picture, and does zswap increase the chances of data loss?

Zswap in Android
Zswap compresses pages headed for swap and keeps them in a RAM-based pool, which frees up available memory; pages only hit the actual swap area when the pool fills up. The difference is clearly visible with cat /proc/meminfo in the MemFree, SwapTotal and SwapFree fields.
I have tested it with 700 MB of RAM on one Android device and observed a clear difference.
I feel it should be useful even with 2 GB of RAM, since it is used on servers as well.
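If you want to poke at it yourself, something like this should work from a rooted shell, assuming the kernel was built with zswap (the exact paths can vary between kernels, so treat this as a sketch):

cat /sys/module/zswap/parameters/enabled       # Y means zswap is active
cat /sys/module/zswap/parameters/compressor    # which compression algorithm is in use
grep -E 'MemFree|SwapTotal|SwapFree' /proc/meminfo
echo 20 > /sys/module/zswap/parameters/max_pool_percent   # cap the compressed pool at 20% of RAM

max_pool_percent controls how much of your RAM the compressed pool may occupy before zswap starts writing pages out to the real swap device.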
PS: I have also only recently started working on this.
Thanks,

Related

[Q] I/O Settings: What are all the different settings for?

I know about bfq and cfq but I have no idea about the others.... What are the advantages and disadvantages of each? Anyone know?
Please try to elaborate a little more. Your questions should include more information that can be helpful to the general community as well.
noop is a simple FIFO scheduler that doesn't make any attempt to reorder the queue. Due to the massive speed of random read operations and minuscule seek times on flash memory, it is often considered the best choice on devices like this where SIO or VR aren't options (and in some cases it even outperforms them).
Deadline, as the name implies, imposes a time limit on all IO operations and sorts them into queues based on read versus write operations, pulling from the queues based on which operation is next to expire. I'm not sure of any advantage it would convey over noop or SIO on solid-state memory, and in fact many Android kernels don't even include it any more.
CFQ is much more complicated, sorting IO requests by process and priority and trying to ensure that they are executed in the most efficient order. While the advantages on mechanical drives are immediately obvious, on flash memory the added CPU overhead is largely wasted since seek times are irrelevant.
BFQ is a tweaked version of CFQ. It has been shown to actually improve overall write throughput even on flash memory, but again there is a CPU overhead cost involved that is unlikely to be worth it for day-to-day operations.
The TL;DR version is to use noop until kernels with SIO are available, though of course feel free to try them all and use whatever feels smoothest to you. In the long run it probably doesn't make much noticeable difference.
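For reference, you can check and switch schedulers at runtime from a rooted shell; this is a sketch, and the block device name (mmcblk0) is an assumption that varies per device:

cat /sys/block/mmcblk0/queue/scheduler       # the active scheduler is shown in brackets, e.g. noop deadline [cfq]
echo noop > /sys/block/mmcblk0/queue/scheduler

The change lasts until reboot; most kernel tweak apps just run the same echo at boot.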
thanks
awesome info

[Q] Overclock Honeycomb View

Hey guys,
I know that there was an overclocked kernel for GB, just curious if any developers are planning on coming out with an updated kernel for the View on Honeycomb that will support overclocking... now with HC we can really push these things to the limit and get the most out of them.
LeeDroid's ROM has a kernel which allows overclocking.
http://forum.xda-developers.com/showthread.php?t=1406851
You need kernel source first
Sent from my HTC Flyer P510e using xda premium
The usable CPU limit is about 1.8 GHz vs. the stock 1.5 GHz. So you can get a little more, but you have to remember that for most intensive tasks it's not the CPU but the GPU that does the brunt of the work, so you might not even notice the difference when the CPU is overclocked. For example, your GT3 game won't render frames any faster, because the video is rendered by the GPU, not the CPU. The custom kernels also enable some different CPU governor schemes, which might improve perceived performance for some things. All of these factor into battery usage; in general, increased performance means more aggressive use of battery power.
Hey, what about our bragging rights?
We already have the fastest CPU on the market. Do you want to crush and disenfranchise the competition?
I actually would prefer underclocking, undervolting and better process management. I want better battery life.
LeeDroid's custom kernel, and others, allow for under- or overclocking and also for precise voltage adjustments at each speed. Another way to get better battery life is one of the various governors that are less aggressive than the performance governor; check the LeeDroid and other threads for more info. Of course the biggest power eaters are the display and the GPU: if you are doing video or games the GPU will consume a lot, and there is not much you can do about that. Having the display on eats battery too; reducing brightness helps.
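If you want to experiment with governors yourself, the standard cpufreq sysfs interface works from a rooted shell (a sketch; the paths are per-core and can differ between kernels):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors   # list what the kernel offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor              # the one currently active
echo conservative > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

A governor like conservative or ondemand ramps the clock up only under load, which is usually the battery-friendly choice compared to performance.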

Zram and Swappiness

I'm specifically after an answer on the swappiness, tbh.
I'm using Yank kernel and it allows zRAM, which I usually set at about 300 MB. The swappiness part I don't understand and can't really find any info on it; it has settings going up in steps of 10, from 60 to 100. I set it at 80, but that's only because it's in the middle.
Could someone explain to me, or link me to something that isn't too technical, about what these steps are for and what they do?
It just means how aggressively the kernel forces things into zram. The higher the number (up to 100), the more readily it uses the 300 MB you set as zram instead of normal RAM.
Swappiness is usually a term to describe swap bias, as above but towards a swap partition or file. Technically zram and swap aren't the same thing, but the term swappiness still applies, since zram presents itself to the kernel as a swap device.
This is really oversimplified, but I think it should be sufficient.
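In case it helps, this is roughly what a kernel script does under the hood when you pick 300 MB of zRAM and a swappiness of 80; a sketch for a rooted shell, assuming the kernel exposes a zram0 device:

echo $((300 * 1024 * 1024)) > /sys/block/zram0/disksize   # size the zram device at 300 MB
mkswap /dev/block/zram0                                   # format it as swap
swapon /dev/block/zram0                                   # enable it
echo 80 > /proc/sys/vm/swappiness                         # how eagerly the kernel swaps

Swappiness 0 means swap only when almost out of memory; 100 means swap very aggressively. The value resets on reboot unless a boot script reapplies it.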
Sent from my GT-I9300 using Tapatalk 2
Cheers that'll do me, at least it gives me an idea and I can have a play about with it

[DEV]Possible way of enabling mali400 dynamic memory use and allocation

I did find this https://groups.google.com/forum/#!topic/linux-sunxi/dYCL84IQH_Y
In short: by changing an option you can have Mali do dynamic RAM allocation, which means it uses only the RAM it REALLY needs, not a hardcoded amount like 192 MB. So when it doesn't need that much RAM, the system has more free RAM. Everyone likes having more usable RAM, right?
Pasting the relevant parts:
Siarhei Siamashka said:
After this fix, Mali400 GPU is configured to use up to 256 MiB of
normal memory when CONFIG_SUNXI_MALI_RESERVED_MEM is not enabled.
When CONFIG_SUNXI_MALI_RESERVED_MEM is enabled, it uses 64 MiB of
reserved memory and up to 192 MiB of normal memory (same as before).
and this
Mali has MMU and does not strictly need any physically contiguous
memory reservation. Most users will likely prefer more flexible
memory allocation instead of always wasting 64 MiB (even when Mali
is not used or needs much less).
CONFIG_SUNXI_MALI_RESERVED_MEM option is still available and can
be enabled if needed (for performance or some other reasons).
I wanted to ask if something like this is doable for our Xperia devices here that have a Mali400 (U, Sola, Go and P). And if yes, how can we do this? Or at least try it to see if it works.
Is it an option that has to be set in a config file when compiling the kernel? Can it be set via build.prop or init.d or whatever?
I'm not a dev and I don't have the environment/skills to compile anything, so unless it is something easy I'm probably not going to manage it myself.
High hopes for this though... :laugh:
@95A31 @percy_g2 @DevSwift1
I looked at our kernel code now; it's possible to apply it, but I don't know, it shouldn't be that easy :/
If the choice of pre-allocating RAM for Mali was made for "safety" reasons, like avoiding potential hacks to pirate stuff or sidestep limitations imposed by licensing (like some Xposed Framework modules that allow YouTube content over the HDMI port), it could just be a matter of disabling settings.
Or maybe they pushed it into production before the drivers were stable enough for dynamic RAM allocation to work reliably.
Some people running Linux on ARM boards with a Mali400 GPU already recommend building kernels with that option disabled to "maximize available memory".
http://www.malaya-digital.org/make-...-an-external-usb-hard-disk-over-your-network/
pasting the relevant part here (note that this is for compiling a Linux-on-ARM kernel for a different device; it may or may not apply to our phones):
9] To maximize available memory, edit .config and look for the line having "CONFIG_CMDLINE". Then edit the line to become:
CONFIG_CMDLINE="[email protected] console=ttyS0,115200 sunxi_ve_mem_reserve=0 sunxi_g2d_mem_reserve=0 sunxi_no_mali_mem_reserve sunxi_fb_mem_reserve=8"
10] Look for CONFIG_SUNXI_MALI_RESERVED_MEM, CONFIG_MALI, and CONFIG_MALI400 in .config . Set them to "n" :
CONFIG_SUNXI_MALI_RESERVED_MEM=n
CONFIG_MALI=n
CONFIG_MALI400=n
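If someone does get a kernel built this way, the effect should be directly visible, since a carveout is subtracted from what Linux reports as total memory; a quick check (a sketch, nothing device-specific):

grep MemTotal /proc/meminfo   # should grow by roughly the size of the dropped reservation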
Besides, I don't see why the heck a non-HD-resolution phone must keep 192 MB of dedicated GPU RAM when anything else (desktop/laptop or console) can do perfectly fine at full HD resolution with hardware-accelerated H.264 decoding on a crappy card that has only 64 MB of RAM (plus the hardware accelerator, of course).
It would make more of a difference if we had a ROM that actually used the leaked sources, so it doesn't run like ****.
I'd still like this though, as I don't game on my phone at all, so it's pointless for the GPU to reserve so much on mine.
Are there some devs working on it at the moment? It would be just great to have a few more MB of RAM
I compared our kernel with this one and they are totally different. AFAIK it's not possible.
But it's still on the AOSP todo list, so we can hope? I think this could radically change our user experience, couldn't it?
Our board uses only 32MB of RAM for Mali, not 64MB
Then why do we have only 392 MB out of 512? I know that some MB are dedicated to the GPU, but 120 MB for the GPU????
In the Xperia U, 15 MB are used for the memory debugger, 1 MB for shared memory, 16 MB for the modem, 1 MB for ISSW, 64 MB (may vary) for hardware and 32 MB are reserved for Mali; that's 129 MB reserved in total, leaving 383 MB (may vary) available for the system.
Regarding dynamic memory allocation, I don't think using it would be a good idea, as it can lead to allocation failures, memory leaks and other logical errors. I'm sure there is a reason why the Igloo Community doesn't apply Mali dynamic memory allocation to Snowball.
BTW, you can read here the opinion of a professional about dynamic memory allocation in real-time systems: http://www.mentor.com/embedded-soft...10b6e1-f9e9-4d87-8c88-6ced717a9f7a?cmpid=8688 :good:
Said professional even goes further, advising against the practice - and yet it landed on the sunxi 3.4 branches, with the dev even sounding almost shocked by the senselessness of the previous state of affairs.
My educated guess is that if the Igloo Community never lifted a finger, it's just because they were axed before such a secondary issue could become relevant in the grand scheme of things.

Qualcomm hardware cryptographic engine doesn't work on Z5C?

Hello,
I just got my Z5C yesterday and so far I'm more than happy. But there is one issue:
I use the AOSP full-disk encryption on the phone, but it seems like the native Qualcomm hardware cryptographic engine doesn't work well - I benchmarked the internal storage before and after, here are the results:
Before: read ~ 200 MB/s, write ~ 120 MB/s
After: read ~ 120 MB/s, write ~ 120 MB/s
(Benchmarked with A1 SD Bench)
I'm using FDE on my Windows 10 notebook with an eDrive, resulting in something like a 5% performance loss. The decrease in read speed on the Z5C is noticeable. What do you think, is there something wrong or is this normal behaviour?
Cheers
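One way to narrow this down would be to measure raw cipher throughput separately from the flash, e.g. with OpenSSL if you have a terminal environment that ships it (Termux, for instance; treat this as a sketch):

openssl speed -evp aes-256-cbc   # prints throughput for various block sizes, using the fastest implementation available

If the cipher alone is much faster than ~120 MB/s, the bottleneck is more likely in the dm-crypt layer or the storage itself than in AES.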
I don't know if this helps, but it seems that the Nexus 5X and 6P won't use hardware encryption according to this:
DB> Encryption is software accelerated. Specifically the ARMv8 as part of 64-bit support has a number of instructions that provides better performance than the AES hardware options on the SoC.
Source: The Nexus 5X And 6P Have Software-Accelerated Encryption, But The Nexus Team Says It's Better Than Hardware Encryption
So maybe, Sony is following the same path...
Sadly they don't; it seems the write-speed decrease is just on the same level as the N6 back then. Let's hope they include the libs in the kernel with the Marshmallow update.
Why would they use Qualcomm's own crappy crypto engine if the standard Cortex-A57 is really fast at AES thanks to NEON and possibly additional, newer optimizations/instructions? AFAIK the latter are supported in newer Linux kernels by default, so there's no need for additional libraries to enable support, or for the Qualcomm crypto stuff.
But it would be nice if someone with actual insight and detailed knowledge about this could say a few words of clarification.
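For what it's worth, whether the cores advertise the ARMv8 crypto extensions at all can be checked from any terminal app, no root needed (a sketch; the flag names are for a 64-bit kernel):

grep Features /proc/cpuinfo
# on aarch64, look for aes, pmull, sha1 and sha2 among the flags

Whether the kernel's dm-crypt path actually uses those instructions is a separate question, but if the flags are missing, hardware-assisted AES is off the table from the start.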
I have neither insight nor big knowledge, but I benchmarked the system, and reads dropping to roughly 60% of their former speed doesn't feel like an optimized kernel either :/
Qualcomm is a no-go. On the Android platform, only the Exynos 7420 (not sure about the 5xxx series) really uses the hardware encryption/decryption engine with no slowdown.
That's not only off-topic, it's also wrong. The Exynos SoCs don't have a substantially different crypto engine or "better"/"faster" crypto/hashing acceleration via the ARM cores. If anything, the Samsung guys are smart enough to optimize their software so that it makes use of the good hardware. That seems to be missing here, but for no obvious reason.
I agree that all ARMv8-A CPUs support hardware AES and SHA. Both the Exynos 7420 and the S810 should have that ability, but it turns out it doesn't work on the Z5C, which is a fact. I'm sure the S6 has it working, but I'm not sure about other S810 phones; it might be that Qualcomm is missing driver support.
Please show us the kernel source code proving that fact.
What you call a "fact" is the result of a simple before-and-after comparison done with a flash-memory benchmark app, run by one person on one device. To conclude that the only possible reason for that result is that the Z5(c) can't do HW acceleration of AES or SHA is a bit far-fetched, don't you think?
I've got an S6 and it is no slower after encryption, and we had a thread discussing this on the S6 board.
I don't own a Z5C yet because it isn't on sale here in HK where I live (I came here because I'm considering selling my S6 and Z1C and swapping to a Z5C later), so I can't test it, but according to the OP there is a substantial slowdown.
All ARMv8-A CPUs should support hardware AES/SHA; the S6 result is not just a cached benchmark artifact. That's real.
A few things to ponder...
This is confusing. I was always under the impression that decryption (reads) is usually a tad faster than encryption (writes). That at least seems true for TrueCrypt benchmarks, but it may be comparing apples and oranges.
A few thoughts...
In some other thread it was mentioned that the Z5C optimizes RAM usage by doing internal on-the-fly compression/decompression to make very efficient use of the RAM. Since ciphertext is usually incompressible, could this be a source of the slowdown on flash R/W, either through an actual slowdown or by confusing the benchmarking tool's measurements?
These days SSD flash controllers also transparently compress data before writing, to reduce wear on the flash. If you send a huge ASCII plaintext file into the write queue, the measured write speed will be ridiculously high; if you send incompressible data like video, the write rate goes way down. This happens at the hardware level, independent of any encryption/decryption at the OS level.
Is there maybe a similar function in today's smartphone flash controllers?
Can I ask the OP: in what situations do you notice the slower read rate on the encrypted device? Not so long ago, when spinning-rust disks were still the norm in desktop and laptop computers, read rates of 120 MB/s were totally out of reach. What kind of usage do you have on your smartphone where you actually notice the lag? Is it when loading huge games or PDF files or something similar?
