[KERNEL][ROM][NOUGAT]Despair Kernel/UBERSTOCK - Nexus 6 Android Development

This is the new refined home for DarkRoom Development. If you submit bug reports without a log, you may be prosecuted...or executed.
Disclaimer:
If your device fails to comply with your standards of what you consider functioning, I am not liable. This is provided free of charge and does not come with a warranty. That said, if you provide a log, I can at least assure you that I will look into your issue.
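If you are not sure how to grab a log, something like the following from a PC with adb set up usually does the job (these are generic commands, not specific to this kernel; /proc/last_kmsg only exists if your kernel provides it):
Code:
adb shell dmesg > dmesg.txt
adb logcat -d > logcat.txt
adb shell su -c "cat /proc/last_kmsg" > last_kmsg.txt
Attach whichever covers the problem: kernel issues usually need dmesg or last_kmsg, app/ROM issues need logcat.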
Links:
Social:
Twitter - http://twitter.com/DespairDev
G+ Community - https://plus.google.com/u/0/communities/117685307734094084120
Downloads:
Google Drive – https://drive.google.com/drive/folders/0Bwcofov-xyI0ZVhQUWJhMm9PMkU
Source:
Github – https://github.com/matthewdalex/
Github – https://github.com/UBERROMS/
Credits:
faux123
franco
Google
flar2
imoseyon
Cl3Kener
neobuddy89
Star Wars
XDA:DevDB Information
[KERNEL][ROM][NOUGAT]Despair Kernel/UBERSTOCK, Kernel for the Nexus 6
Contributors
DespairFactor
Source Code: https://github.com/UBERROMS
Kernel Special Features:
Version Information
Status: Testing
Created 2015-07-07
Last Updated 2017-06-17

Packet Schedulers/Congestion Avoidance Algorithms:
CDG vs. Cubic vs. Westwood:
CDG
CAIA-Delay Gradient (CDG) is a hybrid congestion control algorithm which reacts to both packet loss and inferred queuing delay. It attempts to operate as a delay-based algorithm where possible, but utilises heuristics to detect loss-based TCP cross traffic and will compete effectively as required. CDG is therefore incrementally deployable and suitable for use on shared networks. During delay-based operation, CDG uses a delay-gradient based probabilistic backoff mechanism, and will also try to infer non congestion related packet losses and avoid backing off when they occur. During loss-based operation, CDG essentially reverts to reno-like behaviour. CDG switches to loss-based operation when it detects that a configurable number of consecutive delay-based backoffs have had no measurable effect. It periodically attempts to return to delay-based operation, but will keep switching back to loss-based operation as required.
Cubic
CUBIC is an enhanced version of BIC: it simplifies the BIC window control and improves its TCP-friendliness and RTT-fairness. The window growth function of CUBIC is governed by a cubic function in terms of the elapsed time since the last loss event. Our experience indicates that the cubic function provides a good stability and scalability. Furthermore, the real-time nature of the protocol keeps the window growth rate independent of RTT, which keeps the protocol TCP friendly under both short and long RTT paths.
Westwood
TCP Westwood estimates the available bandwidth by counting and filtering the flow of returning ACKs and adaptively sets the cwnd and the ssthresh after congestion by taking into account the estimated bandwidth. TCP Westwood is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths (large pipes), with potential packet loss due to transmission or other errors (leaky pipes) and with dynamic load (dynamic pipes). TCP Westwood+ is an evolution of TCP Westwood: it was soon discovered that the original Westwood bandwidth estimation algorithm did not work well in the presence of reverse traffic due to ACK compression. Westwood+ is friendly towards TCP Reno and fairer than Reno in bandwidth allocation.
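If you want to try one of these congestion control algorithms yourself, the active one is exposed through procfs. A minimal sketch from a root shell, assuming the algorithm you pick is actually compiled into the kernel you are running:
Code:
cat /proc/sys/net/ipv4/tcp_available_congestion_control
echo westwood > /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/ipv4/tcp_congestion_control
The first command lists what is available; the echo switches the default used for new connections.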
Packet Schedulers:
Why use a non default packet scheduler?
Packet schedulers are the part of the kernel that queues network data on a specific interface and governs how it is transmitted and received, including buffering. Below I will break down a couple of the packet schedulers included in this kernel.
fq_codel
FQ_CoDel (Fair Queuing Controlled Delay) is a queuing discipline that combines fair queuing with the CoDel AQM scheme. FQ_CoDel uses a stochastic model to classify incoming packets into different flows and is used to provide a fair share of the bandwidth to all the flows using the queue. Each such flow is managed by the CoDel queuing discipline. Reordering within a flow is avoided since CoDel internally uses a FIFO queue.
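To try fq_codel on its own, the same tc approach shown in the how-to further down applies. A sketch with a couple of its common tunables (the values are illustrative, not tuned for this device):
Code:
tc qdisc add dev wlan0 root fq_codel limit 1024 target 5ms interval 100ms ecn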
pfifo_fast
The FIFO algorithm forms the basis for the default qdisc on all Linux network interfaces (pfifo_fast). It performs no shaping or rearranging of packets. It simply transmits packets as soon as it can after receiving and queuing them. This is also the qdisc used inside all newly created classes until another qdisc or a class replaces the FIFO.
A real FIFO qdisc must, however, have a size limit (a buffer size) to prevent it from overflowing in case it is unable to dequeue packets as quickly as it receives them. Linux implements two basic FIFO qdiscs, one based on bytes, and one on packets. Regardless of the type of FIFO used, the size of the queue is defined by the parameter limit. For a pfifo the unit is understood to be packets and for a bfifo the unit is understood to be bytes.
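As a quick sketch, both FIFO variants are attached the same way with tc; pick one per interface (an interface has a single root qdisc), and note the example limits are arbitrary (packets for pfifo, bytes for bfifo):
Code:
tc qdisc add dev wlan0 root pfifo limit 100
tc qdisc replace dev wlan0 root bfifo limit 65536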
pie
PIE is designed to control delay effectively. First, an average dequeue rate is estimated based on the standing queue. The rate is used to calculate the current delay. Then, on a periodic basis, the delay is used to calculate the dropping probability. Finally, on arrival, a packet is dropped (or marked) based on this probability. PIE adjusts the probability based on the trend of the delay, i.e. whether it is going up or down. The delay converges quickly to the specified target value. alpha and beta are statically chosen parameters that control the drop probability growth and are determined through control-theoretic approaches. alpha determines how the deviation between the current and target latency changes the probability. beta exerts additional adjustments depending on the latency trend. The drop probability is used to mark packets in ECN mode. However, as in RED, beyond 10% packets are dropped based on this probability. The byte mode is used to drop packets proportionally to the packet size.
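A sketch of attaching PIE with a tighter latency target and ECN marking enabled (the values are examples only):
Code:
tc qdisc add dev wlan0 root pie limit 1000 target 20ms tupdate 30ms ecn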
fq
A packet scheduler is charged with organizing the flow of packets through the network stack to meet a set of policy objectives. The kernel has quite a few of them, including CBQ for fancy class-based routing, CHOKe for routers, and a couple of variants on the CoDel queue management algorithm. FQ joins this list as a relatively simple scheduler designed to implement fair access across large numbers of flows with local endpoints while keeping buffer sizes down; it also happens to implement TCP pacing.
FQ keeps track of every flow it sees passing through the system. To do so, it calculates an eight-bit hash based on the socket associated with the flow, then uses the result as an index into an array of red-black trees. The data structure is designed, according to Eric, to scale well up to millions of concurrent flows. A number of parameters are associated with each flow, including its current transmission quota and, optionally, the time at which the next packet can be transmitted.
That transmission time is used to implement the TCP pacing support. If a given socket has a pace specified for it, FQ will calculate how far the packets should be spaced in time to conform to that pace. If a flow's next transmission time is in the future, that flow is added to another red-black tree with the transmission time used as the key; that tree, thus, allows the kernel to track delayed flows and quickly find the one whose next packet is due to go out the soonest. A single timer is then used, if needed, to ensure that said packet is transmitted at the right time.
The scheduler maintains two linked lists of active flows, the "new" and "old" lists. When a flow is first encountered, it is placed on the new list. The packet dispatcher services flows on the new list first; once a flow uses up its quota, that flow is moved to the old list. The idea here appears to be to give preferential treatment to new, short-lived connections — a DNS lookup or HTTP "GET" command, for example — and not let those connections be buried underneath larger, longer-lasting flows. Eventually the scheduler works its way through all active flows, sending a quota of data from each; then the process starts over.
There are a number of additional details, of course. There are limits on the amount of data queued for each flow, as well as a limit on the amount of data buffered within the scheduler as a whole; any packet that would exceed one of those limits is dropped. A special "internal" queue exists for high-priority traffic, allowing it to reach the wire more quickly. And so on.
One other detail is garbage collection. One problem with this kind of flow tracking is that nothing tells the scheduler when a particular flow is shut down; indeed, nothing can tell the scheduler for flows without local endpoints or for non-connection-oriented protocols. So the scheduler must figure out on its own when it can stop tracking any given flow. One way to do that would be to drop the flow as soon as there are no packets associated with it, but that would cause some thrashing as the queues empty and refill; it is better to keep flow data around for a little while in anticipation of more traffic. FQ handles this by putting idle flows into a special "detached" state, off the lists of active flows. Whenever a new flow is added, a pass is made over the associated red-black tree to clean out flows that have been detached for a sufficiently long time — three seconds in the current patch.
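A sketch of attaching FQ; the optional maxrate parameter caps the pacing rate of each flow (10mbit here is just an example value):
Code:
tc qdisc add dev wlan0 root fq maxrate 10mbit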
cake
The CAKE Principle:
(or, how to have your cake and eat it too)
This is a combination of several shaping, AQM and FQ techniques into one easy-to-use package:
- An overall bandwidth shaper, to move the bottleneck away from dumb CPE equipment and bloated MACs. This operates in deficit mode (as in sch_fq), eliminating the need for any sort of burst parameter (e.g. token bucket depth). Burst support is limited to that necessary to overcome scheduling latency.
- A Diffserv-aware priority queue, giving more priority to certain classes, up to a specified fraction of bandwidth. Above that bandwidth threshold, the priority is reduced to avoid starving other classes.
- Each priority class has a separate Flow Queue system, to isolate traffic flows from each other. This prevents a burst on one flow from increasing the delay to another. Flows are distributed to queues using a set-associative hash function.
- Each queue is actively managed by CoDel. This serves flows fairly, and signals congestion early via ECN (if available) and/or packet drops, to keep latency low. The CoDel parameters are auto-tuned based on the bandwidth setting, as is necessary at low bandwidths.
The configuration parameters are kept deliberately simple for ease of use. Everything has sane defaults. Complete generality of configuration is not a goal.
The priority queue operates according to a weighted DRR scheme, combined with a bandwidth tracker which reuses the shaper logic to detect which side of the bandwidth-sharing threshold the class is operating on. This determines whether a priority-based weight (high) or a bandwidth-based weight (low) is used for that class in the current pass.
This qdisc incorporates much of Eric Dumazet's fq_codel code, customised for use as an integrated subordinate.
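A sketch of attaching cake with a shaper rate and three-tier Diffserv handling. Option names have varied between out-of-tree cake versions, so treat this as illustrative and check what your tc binary actually accepts:
Code:
tc qdisc add dev wlan0 root cake bandwidth 20mbit diffserv3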
How to apply a packet scheduler:
1. Open terminal on your device
2. Use the "su" command to become root
3. Use tc to change the packet scheduler (qdisc) on your device. I have included an example below; the first line is for WiFi and the second for mobile data. In the example, we are setting the qdisc to fq_pie, which is a mix of PIE with per-flow rate shaping from fq.
Code:
tc qdisc add dev wlan0 root fq_pie
tc qdisc add dev rmnet_data0 root fq_pie
4. Confirm your packet scheduler has been applied by using the tc tool again. I have included an example below.
Code:
tc qdisc
To use another packet scheduler after applying a previous one, you will need to either reboot or remove the added qdisc from each interface using the command I have included below.
Code:
tc qdisc del root dev wlan0
tc qdisc del root dev rmnet_data0
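These tc settings do not survive a reboot. Assuming your ROM and kernel run init.d scripts, a small hypothetical script such as /system/etc/init.d/99qdisc can reapply your preferred qdisc at boot; a sketch:
Code:
#!/system/bin/sh
# hypothetical init.d script: reapply the preferred qdisc at boot
tc qdisc replace dev wlan0 root fq_pie
tc qdisc replace dev rmnet_data0 root fq_pie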
zzmoove Breakdown:
To set a zzmoove profile using a kernel tweaker such as Kernel Adiutor, put the corresponding profile number into the tunable labeled "profile_number" (a terminal example follows the profile list below). I have included a description of the profiles below from @ZaneZam over here: http://forum.xda-developers.com/showpost.php?p=42637787&postcount=3
1 - for Default (set governor defaults)
2 - for Yank Battery -> old untouched setting (a very good battery/performance balanced setting DEV-NOTE: highly recommended!)
3 - for Yank Battery Extreme -> old untouched setting (like yank battery but focus on battery saving)
4 - for ZaneZam Battery -> old untouched setting (a more 'harsh' setting strictly focused on battery saving DEV-NOTE: might give some lags!)
5 - for ZaneZam Battery Plus -> NEW! reworked 'faster' battery setting (DEV-NOTE: recommended too! )
6 - for ZaneZam Optimized -> old untouched setting (balanced setting with no focus in any direction DEV-NOTE: relict from back in the days, even though some people still like it!)
7 - for ZaneZam Moderate -> NEW! setting based on 'zzopt' which has mainly (but not strictly only!) 2 cores online
8 - for ZaneZam Performance -> old untouched setting (all you can get from zzmoove in terms of performance but still has the fast down scaling/hotplugging behaving)
9 - for ZaneZam InZane -> NEW! based on performance with new auto fast scaling active. a new experience!
10 - for ZaneZam Gaming -> NEW! based on performance with new scaling block enabled to avoid cpu overheating during gameplay
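If you would rather do this from a terminal than a tweaker app, the tunable is normally exposed under the governor's sysfs directory. A sketch assuming the usual zzmoove path (it can differ between kernels):
Code:
echo 5 > /sys/devices/system/cpu/cpufreq/zzmoove/profile_number
cat /sys/devices/system/cpu/cpufreq/zzmoove/profile_number
The echo selects profile 5 (ZaneZam Battery Plus); the cat confirms it took.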

Will this work with CM12 nightly builds?
Sent from my Nexus 6 using Tapatalk

violentbydezign said:
Will this work with CM12 nightly builds?
Sent from my Nexus 6 using Tapatalk
Click to expand...
Click to collapse
Yes sir!

Righteous
Sent from my Nexus 6 using Tapatalk

Well that was a fast leave of absence lol... I was just doing my daily routine of reading through my XDA threads and found this new revision of your kernel. I think I pissed off my sleeping girlfriend by shouting with excitement, oops (totally worth it though). Glad you came back ripng, er, DespairFactor. Keep up the great work!!

Downloaded, now time to play.
Thanks OP

Sweet man. Thanks for continuing on....

@DespairFactor, would you please add the kernel version to the title too?
Thanks

dany20mh said:
@DespairFactor, would you please add the kernel version to the title too?
Thanks
Click to expand...
Click to collapse
I will when I am on R2

In for execution......

You're fast my friend hahaha... Thanks for staying with us.

R2 is up, I added a changelog to the OP

Does your kernel still support all cores on all the time with mako hotplug?
Sent from my Nexus 6

rignfool said:
Does your kernel still support all cores on all the time with mako hotplug?
Sent from my Nexus 6
Click to expand...
Click to collapse
Of course, it's part of mako hotplug.

Woot, you're the man, glad to see you back. Time to feed my addiction and get rid of the shakes.

Is there a difference between this r2 and the one from a few days ago?

joeyddr said:
Is there a difference between this r2 and the one from a few days ago?
Click to expand...
Click to collapse
Compare the feature lists; it's basically a cleaner code base with only the useful stuff added back.

Is force encryption on? I booted the kernel up on a clean flash of Euphoria and it gave me a decryption error? Just juandering.

bmg1001 said:
Is force encryption on? I booted the kernel up on a clean flash of Euphoria and it gave me a decryption error? Just juandering.
Click to expand...
Click to collapse
Never. I would never force anyone to encrypt their device; I don't encrypt mine, and obviously I would run my own work.

Related

[MOD] CrossBreeder - Entropy Lag Reduce/DNS Speedup/Clean Adblock

CrossBreeder is a 5-in-1 package created to make Android devices run faster with less lag and to give a significant performance boost.
Tested and confirmed to give our Wildfires a considerable boost. Tested on CM6, CM7 and CM9.
Head to the original thread to get the flashable ZIP, and please remember to read the OP carefully before using this.
Zip files are also attached to this post.
Original Thread -
forum.xda-developers.com/showthread.php?t=2113150
All credit goes to idcrisis for creating this package. I'm just sharing this with my fellow buzzers.
What CrossBreeder actually does, QUOTED FROM THE ORIGINAL POST:
This is a combination of 5 different key methodologies to improve the Android experience:
1. It's a big new feature: DNS caching, parallelising and tether boost. A lot of the lag in a lot of apps, apart from the GUI lag, is due to slow DNS querying, especially on the mobile network.
CrossBreeder now runs a caching, parallelising DNS client on the device. So now most of your DNS queries will be served from the cache and, if not found, the query will be sent in parallel to multiple DNS servers, including the two Google DNS servers and your two ISP servers, and the quickest reply will be served to you, hot and transparent. You can read the rationale for this approach here - http://ma.ttwagner.com/make-dns-fly-...q-all-servers/
This speeds up network access and networked apps - browsers of course, but also Tapatalk, Gmail and thousands of others - drastically, and removes a lot of the lag where it was due to DNS querying. This will not increase your network or download speed, but pages will load much faster.
This will future-proof your devices as more and more apps start using HTML5 and/or reside completely as web pages or the like.
CrossBreeder boosts your tethering connection. Client devices tethered to your device will take advantage of the new DNS, so their usage is also improved! In many cases this update might even fix a broken tethering feature on your phone. So if your ROM doesn't have working tethering support, you can try and install this update. It might magically start working!
CrossBreeder blocks ads and spyware in an efficient manner by blocking access to the host. It does this using a static block list of known ad sites, behaving as an authoritative DNS server for these sites and redirecting them to a dummy address. CrossBreeder runs a simple web server serving empty images and pages, so ads completely disappear instead of showing an ugly Page/Image Not Found error.
You can update this block list from an external specialised tool like AdAway if you need to.
It also renames any existing /etc/hosts file on your device. Testing has shown that keeping a system-wide /etc/hosts file, as is used by most other ad-blocking software, actually slows down your system, so it is recommended to use this method instead. Check this out for a demonstration of the slowdown and how to test it yourself - http://forum.xda-developers.com/show...php?p=41877518
In order to achieve all this DNS-related functionality, CrossBreeder relies on the excellent open source utilities DNRD and Dnsmasq.
2. Modulate OS entropy levels for lag reduction a la Seeder. The whole OS reads either /dev/random or /dev/urandom and both need entropy. However, this mod uses a completely different, lightweight and efficient random number generator called Havege. This sharply reduces CPU consumption and the corresponding battery life loss compared to Seeder. It also does a better job at keeping entropy levels high, hence your device is more responsive. It doesn't run in a CPU-intensive loop either. The extend-queue functionality has also been added to CrossBreeder. See here for another rationale favouring Havege over Rngd - ( http://code.google.com/p/csrng/ - look for the limitations).
3. Change kernel parameters, especially the wakeup threshold ones, so read blocks are released instantly and writes never wake up, as we have an external entropy generator. Plus a host of other fail-safe and working tweaks from the community for each key subsystem (one can look inside /etc/CrossBreeder/zzCrossBreeder).
4. Remove /dev/random as it's blocking, and link it to the non-blocking /dev/urandom. /dev/random is blocking and designed to protect us from quantum alien cryptographers with mathematical certainty, while urandom is the non-blocking pseudo-random device that most apps and OSs are using anyway; with Haveged running it is as secure anyway, as it's very difficult to empty the entropy pool faster than Havege can replenish it. Pre-ICS devices have a lot to gain from this, but ICS+ devices show visible gains too.
5. Frandom support (optional) - CrossBreeder now supports linking both of your random devices to the extremely fast alternative Frandom ( http://billauer.co.il/frandom.html ). This module is orders of magnitude (10-50 times) faster than the standard character devices (check this out - http://forum.xda-developers.com/show...&postcount=134 ). The erandom character device, also installed by Frandom, doesn't use up system entropy at all on top of being fast. You will need to ask your ROM developer to build the kernel module for you and then place it in /system/lib/modules. CrossBreeder will then try to load it and, if successful, make all the necessary adjustments so that /dev/random and /dev/urandom point to /dev/frandom and /dev/erandom respectively. The speed benefits have to be seen to be believed. But since each ROM requires a unique kernel module, this option is left optional (but auto-detected). Advanced users can even try to load a frandom module built for other kernels, if they don't have one readily available for their own kernel version, using the Punchmod utility. Read this: http://forum.xda-developers.com/show...5#post41920265
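For reference, the /dev/random replacement described in point 4 boils down to roughly the following; this is a hand-written sketch of the idea, not the actual code shipped in the CrossBreeder package:
Code:
mv /dev/random /dev/random.bak
ln -s /dev/urandom /dev/random
CrossBreeder's own scripts handle this (plus the frandom/erandom case) automatically, so there is normally no need to do it by hand.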
Remember to download both the CrossBreeder and the uninstall ZIP files. It's very unlikely that CrossBreeder will cause any problems, but it's better to be prepared.
Feedback is welcome.
Edit - Attachments will no longer be updated. Visit the original thread for latest versions.
lakshan_456 said:
CrossBreeder is a 5-in-1 package created to make Android devices run faster with less lag and to give a significant performance boost. ...
Click to expand...
Click to collapse
Tested on rempuzzle: changed nothing, will test other things.
Also tested on MIUI v2.3: it improves performance.
On AOKP v5 it causes a bootloop when the ROM is used for too long (without OC).
On MIUI v4 it improves stability.
Sent from my HTC Wildfire using xda app-developers app
Pator57 said:
Tested on rempuzzle: changed nothing, will test other things.
Also tested on MIUI v2.3: it improves performance.
On AOKP v5 it causes a bootloop when the ROM is used for too long (without OC).
On MIUI v4 it improves stability.
Sent from my HTC Wildfire using xda app-developers app
Click to expand...
Click to collapse
Well, rempuzzle probably doesn't need this, since it's fast enough as it is now. But it would have been great if this worked.
I think AOKP v5 has Seeder integrated; maybe they are conflicting with each other. But CrossBreeder says it prevents Seeder from executing, so I'm not sure.
Anyway, thanks for the info.
but crossbreeder says it prevents seeder from executing, so I'm not sure.
Click to expand...
Click to collapse
It does. I had that myself when crossbreeder stopped seeder app automatically after having flashed it.
gerope said:
It does. I had that myself when crossbreeder stopped seeder app automatically after having flashed it.
Click to expand...
Click to collapse
yes, I know it does, I was just guessing a probable cause for the bootloop.
Maybe it's something else, better leave that to devs to look into.
Tested this on DK froyo ROM. It does make a difference.
Sent from my HTC Wildfire using Tapatalk 2
Updated Attached files to latest version (6.23.13_v2)
If your current kernel doesn't support init.d execution, try an app like Universal Init.d:
https://play.google.com/store/apps/...&utm_medium=organic&utm_term=universal+init.d
Pator57 said:
Tested on rempuzzle: changed nothing, will test other things.
Also tested on MIUI v2.3: it improves performance.
On AOKP v5 it causes a bootloop when the ROM is used for too long (without OC).
On MIUI v4 it improves stability.
Sent from my HTC Wildfire using xda app-developers app
Click to expand...
Click to collapse
I flashed AOKP v5 and deleted the Seeder script from the init.d folder after flashing CrossBreeder, and I did not experience any bootloops while I was using that ROM (I used it for about 2 weeks).
Does Rempuzzle support init.d scripts?
lakshan_456 said:
I flashed AOKP v5 and deleted the Seeder script from the init.d folder after flashing CrossBreeder, and I did not experience any bootloops while I was using that ROM (I used it for about 2 weeks).
Does Rempuzzle support init.d scripts?
Click to expand...
Click to collapse
Yea, I'm sure.
Sent from my HTC Wildfire using xda app-developers app
This mod definitely has an effect; CM7 becomes smoother.
Scrolling when coming back out of the app drawer to the main screen has also improved.
Thanks
Sent from my HTC Wildfire using Tapatalk 2
Interesting...
Thanks for this m8. Going to try it, hopefully it will reduce many lags :]
-------------------------------
Btw, I'm new to all this :] :good:
New update available.
Visit the Original thread for more info and downloads.
http://forum.xda-developers.com/showthread.php?t=2113150

What use are kernel logging, debugging, etc.?

Some Kernels have disabled "unnecessary" logging and tracing functions, e.g. Speedmod.
1) What exactly are these logging, debugging and other functions?
2) Why do stock kernels have these functions?
3) Do they really slow the system down?
4) Are these functions only for human analysis or does Android make use of the logged data itself?
1) As far as I know these are tools the kernel uses to put errors/crashes into log files. It's a great way for developers to fix certain issues because users can extract these logs from the device and send them over or upload them in the forums.
2) I don't know if they have it. Anyways, I imagine that the logs created are useful for service centers / supporters if you've a software issue.
3) I'm using DorimanX kernel and you can disable all loggers. But I don't feel a performance increase nor does battery last significantly longer. As long as the kernel is stable this may be called fine tuning :b
4) I guess the system doesn't touch them. Not completely sure though.
Thank you for your reply.
Since you assert that disabling logging would not save battery, why do developers promote their kernels as more power-saving than the stock kernels? Not counting underclocking or undervolting.
Let's take Speedmod again as an example. It is - alongside the brilliant work of other developers, of course - known for its power-saving qualities, without touching any of the conventional power loads (CPU, display, ...).
It's not all about overclocking and undervolting. Just to name a few examples: developers can alter how and when the CPU scales up - the governor is responsible for that - or provide several schedulers, or options to save battery in deep sleep. Take DorimanX as an example: you can activate "Auto WiFi" and set, say, 30 seconds, so if the screen is off for 30 seconds, WiFi will turn off. If you've got a data plan you'll still receive WhatsApp/Facebook messages, but it's more battery-saving because WiFi doesn't drain anymore :b
So in general it's about a code-efficient kernel and how you tweak it.

SecAndy : let's get the party started

Pronounced "say candy", the goal of SecAndy is to come up with as secure and private of an OS as possible. So as not to reinvent the wheel, we'll base this initiative on our open source code of choice (Android or maybe other developers' choice).
I am not a developer myself but I can without a doubt, because of former professional experiences, organize a project and gather the right people together as a community in order to make sure that project sees the light of day after it has acquired a life of its own if needed, which I think we will agree is something that this kind of project requires because of the scrutiny it will quickly attract.
I am officially calling upon this post all interested developers that could help us fork Android or other open source OS.
Let's get a kickstarter funded and let the party begin. I will update you later today on the advancement of such.
This thread welcomes constructive ideas and developer participation, but here are beginning requirements we'll need to fulfill eventually to privatize and secure android :
- default browser allowing custom search engines such as https://ixquick.com or duckduckgo
- default system search pointing to those custom engines for online component
- control of GPS at the firmware level so it can be fully disabled
- peer to peer file exchange (think BitTorrent sync) with 1024 to 2048 bit encryption
- implementation of secure sms and mms exchange (think textsecure)
- implementation of encrypted voice channels (think redphone or SIP with end-to-end encryption)
- root vpn for all online access
- systemwide warning of insecure solutions (example : wanting to use gmail or regular email)
- PGP transparent email solution
- Tor option for root vpn (subject to mitm attacks but more on that later)
- peerguardian type auto-updated database to identify suspicious IP address ranges
- systematic in-out firewall control auto updated with peerguardian database and community based rules database
- hardened malware protection and app permissions with automatic permission audit based on application type
- full device encryption and lockup (in case of unauthorized user)
- full remote wipe out and bricking with auto IMEI reporting (in case of theft, might have to be amended because of attack vector)
- full remote location capability with real time tracking (that one might have to be scratched, high security risk because of attack vector)
This obviously doesn't cover all the bases, but it would be a good start... I know a lot of these options can be implemented with a mishmash of apps and custom ROMs, but having it all at an OS level, AOKP-style, would greatly help in building an "Android by the people, for the people" community that could eventually loosen the stranglehold of less-than-transparent corporations.
100 views total between both threads in 24 hours and not one comment. Obviously I'm approaching this the wrong way. More news at 11.

[Q] Framework doesn't gracefully handle foreground application that uses too much CPU

Hello,
There is a known behaviour when the device is low on memory (the OOM killer is activated, the least recently used activities are killed and their resources are freed).
But I didn't find any info about how Android handles a foreground application that overloads the system. I mean, starting a stress app overloads the system so much (CPU: 130%, loadavg 30, 25, 20) that there is no CPU time left for system activities (system_server threads and so on). After a couple of minutes, timeouts start appearing in FinalizerWatchdogDaemon and many other places.
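For anyone trying to reproduce or quantify this, the load figures above can be watched from a host shell while the stress app runs; a rough sketch using standard commands:
Code:
adb shell cat /proc/loadavg
adb shell top -n 1
adb shell dumpsys cpuinfo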
Any ideas and info about this situation are appreciated.
Thanks,
B.

[KERNEL(Nougat)][ROM]Phasma Kernel/UBERSTOCK

This is the new refined home for DarkRoom Development. If you submit bug reports without a log, you may be prosecuted...or executed.
Disclaimer:
If your device fails to comply with your standards of what you consider functioning, I am not liable. This is provided free of charge and does not come with a warranty. That said, if you provide a log, I can at least assure you that I will look into your issue.
Links:
Social:
Twitter - http://twitter.com/DespairDev
G+ Community - https://plus.google.com/u/0/communities/117685307734094084120
Telegram - https://t.me/darkroomdev
Discord - https://discord.gg/BGTFutW
Downloads:
https://go.hunternott.com/darkroom
Source:
Github – https://github.com/matthewdalex/
Github – https://github.com/UBERROMS/
Credits:
faux123
franco
Google
flar2
imoseyon
Cl3Kener
neobuddy89
Star Wars
XDA:DevDB Information
[KERNEL(Nougat)][ROM]Phasma Kernel/UBERSTOCK, ROM for the LG Nexus 5X
Contributors
DespairFactor, Cl3Kener
Source Code: https://github.com/UBERROMS
ROM OS Version: 6.0.x Marshmallow
ROM Kernel: Linux 3.10.x
Based On: AOSP
Version Information
Status: Testing
Created 2015-11-18
Last Updated 2017-12-28
Ubermallow is coming for the 5X as well, it is compiling now.
Support fauxsound?
added to index
[INDEX] LG NEXUS 5X Resources Compilation Roll-Up
Awesome! Thanks Despair!
Dwayne01 said:
Support fauxsound?
Click to expand...
Click to collapse
Just added it in R1.3
dbrohrer said:
Awesome! Thanks Despair!
Click to expand...
Click to collapse
You are welcome!
Thanks. Any details on your ROM? AOSP based?
NisseGurra said:
Thanks. Any details on your ROM? AOSP based?
Click to expand...
Click to collapse
It's aosp based with tons of optimizations
Sent from my Nexus 6P using Tapatalk
DespairFactor said:
It's aosp based with tons of optimizations
Sent from my Nexus 6P using Tapatalk
Click to expand...
Click to collapse
Nice, I'll try it. Any recommendations on GApps?
NisseGurra said:
Nice, I'll try it. Any recommendations on GApps?
Click to expand...
Click to collapse
Use the purenexus arm64 gapps
So far it feels snappy; notification LEDs are functional, the charging LED is not.
No bugs so far.
@DespairFactor I took a gander at some of your other kernels in your signature. They seem pretty well optimized. The BFS scheduler for the Nexus 6 intrigued me as well. Are some of those features and optimizations built in (or planned to be built into) this kernel or is this simply a loosened up stock kernel that allows users to tweak more settings?
Alcolawl said:
@DespairFactor I took a gander at some of your other kernels in your signature. They seem pretty well optimized. The BFS scheduler for the Nexus 6 intrigued me as well. Are some of those features and optimizations built in (or planned to be built into) this kernel or is this simply a loosened up stock kernel that allows users to tweak more settings?
Click to expand...
Click to collapse
Check my github
Sent from my Nexus 6P using Tapatalk
Where do I find feature list etc for this ROM?
stackz07 said:
Where do I find feature list etc for this ROM?
Click to expand...
Click to collapse
On github or on your phone when you flash it...
https://github.com/ubermallow
NisseGurra said:
So far it feels snappy; notification LEDs are functional, the charging LED is not.
No bugs so far.
Click to expand...
Click to collapse
Can you help? How do I flash it on MDA98E? Stuck on boot.
georgiem9 said:
Can you help? How do I flash it on MDA98E? Stuck on boot.
Click to expand...
Click to collapse
Wipe system/data/cache/dalvik and then flash rom, gapps and kernel
Sent from my Nexus 6P using Tapatalk
georgiem9 said:
Can you help? How do I flash it on MDA98E? Stuck on boot.
Click to expand...
Click to collapse
I think it's not working on MDA89E. I flashed MDB08I, then rooted and installed recovery, then flashed UBER + GApps + kernel. Booting now, thanks.
georgiem9 said:
I think it's not working on MDA89E. I flashed MDB08I, then rooted and installed recovery, then flashed UBER + GApps + kernel. Booting now, thanks.
Click to expand...
Click to collapse
I suppose there are different ramdisk offsets for the older build.
