[TIP] Audio Programming on Android - C++ or Other Android Development Languages

Hey, just wanted to post my experience with audio programming on Android. I developed an app recently and went through hell and back to get it working properly, given some of the known issues with Android's audio subsystem.
I wrote an OpenSL ES implementation using the Android simple buffer queue (SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE) and used all the proper real-time audio constructs: a circular buffer, no memory allocation during sound playback, a buffer queue, and so on.
However, I found that while multitasking with a release build on my Android phone, the audio would sometimes pause or skip. What finally gave me acceptable results was to encapsulate the audio layer in a Service, specifically a foreground-priority service. There are some excellent docs covering this in the official Android developer documentation.
So I recommend:
Implement a Service separate from your main activity and ensure Android recognizes it as a foreground service
Use a statically allocated circular buffer; Boost has an excellent implementation, so leverage that
Don't allocate any memory in real time while playing audio or filling the next buffer in the queue
Compile your application for the armeabi-v7a ABI and make sure optimizations are enabled
Use lock-free constructs and never block the audio thread (a minimal sketch of the callback side follows below)
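
To make the buffer-queue and lock-free points concrete, here is a minimal sketch of the callback side, assuming 16-bit stereo PCM. The names (gRing, gPlayBuf, bqCallback) and buffer sizes are illustrative, not taken from my app:

Code:
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <boost/lockfree/spsc_queue.hpp>
#include <cstdint>
#include <cstring>

constexpr size_t kFramesPerBuffer = 256;  // tune to the device's native buffer size
constexpr size_t kChannels        = 2;
constexpr size_t kSamplesPerBuf   = kFramesPerBuffer * kChannels;

// Statically allocated, lock-free SPSC ring buffer: the mixer thread pushes
// samples, the OpenSL callback pops them. Nothing allocates at run time.
static boost::lockfree::spsc_queue<int16_t,
    boost::lockfree::capacity<kSamplesPerBuf * 8>> gRing;

// Double buffer handed to the buffer queue, also statically allocated.
static int16_t gPlayBuf[2][kSamplesPerBuf];
static int     gCurBuf = 0;

// Runs on the OpenSL playback thread each time a buffer drains.
// It must not block, lock, or allocate.
static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void* /*context*/) {
    int16_t* buf = gPlayBuf[gCurBuf];
    gCurBuf ^= 1;
    // Pop whatever is available; zero-fill on underrun instead of blocking.
    const size_t got = gRing.pop(buf, kSamplesPerBuf);
    if (got < kSamplesPerBuf)
        std::memset(buf + got, 0, (kSamplesPerBuf - got) * sizeof(int16_t));
    (*bq)->Enqueue(bq, buf, kSamplesPerBuf * sizeof(int16_t));
}

The producer side just calls gRing.push(samples, count) from the mixer thread; because spsc_queue's capacity is a template parameter, the storage is fixed at compile time, so the audio path never touches the allocator.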

Useful tips
Carandiru said: (original post quoted above)
Good job, and thanks for your tips. Now I know we can process audio more powerfully with the native OpenSL ES APIs.

I implemented a reverb effect using OpenSL ES on Android, but I can't hear any effect in the output, so please let me know what the problem might be. I did add the permissions in the manifest file.
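
For reference, the wiring used in the NDK native-audio sample looks like the sketch below. A common reason for hearing no effect is that the reverb has to live on the output mix (requested in its interface list) and each player has to enable an effect send to it; note also that on some devices the low-latency "fast" path bypasses effects entirely:

Code:
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// A stock I3DL2 preset; any SLEnvironmentalReverbSettings works here.
static SLEnvironmentalReverbSettings reverbSettings =
        SL_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR;

// Assumes outputMix was created with SL_IID_ENVIRONMENTALREVERB in its
// requested-interface list, and player with SL_IID_EFFECTSEND in its list.
void attachReverb(SLObjectItf outputMix, SLObjectItf player) {
    SLEnvironmentalReverbItf reverbItf;
    (*outputMix)->GetInterface(outputMix, SL_IID_ENVIRONMENTALREVERB,
                               &reverbItf);
    (*reverbItf)->SetEnvironmentalReverbProperties(reverbItf, &reverbSettings);

    // Without this send, the player stays dry and you hear no reverb at all.
    SLEffectSendItf effectSend;
    (*player)->GetInterface(player, SL_IID_EFFECTSEND, &effectSend);
    (*effectSend)->EnableEffectSend(effectSend, reverbItf,
                                    SL_BOOLEAN_TRUE, (SLmillibel)0);
}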

I used OpenSL ES to play motor sounds in my racing game, but that library is not well supported by Google. Android 4, for example, removed the ability to change playback speed, and I had to switch to SoundPool from Java.
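
For context, the playback-rate interface is still there to query, but on many Android releases a PCM buffer-queue player accepts only the default 1000 per-mille (1.0x), which is what broke here. Probing at run time is safer than assuming support; a hedged sketch:

Code:
#include <SLES/OpenSLES.h>

// Returns true if the device actually accepted the requested rate
// (SLpermille: 1000 = normal speed, 2000 = double, and so on).
bool trySetPlaybackRate(SLObjectItf player, SLpermille rate) {
    SLPlaybackRateItf rateItf;
    if ((*player)->GetInterface(player, SL_IID_PLAYBACKRATE, &rateItf)
            != SL_RESULT_SUCCESS)
        return false;  // interface not exposed on this player
    if ((*rateItf)->SetRate(rateItf, rate) != SL_RESULT_SUCCESS)
        return false;  // rate rejected by this device/OS version
    return true;
}

If this returns false, falling back to SoundPool.setRate() on the Java side (as described above) or resampling the PCM yourself are the usual workarounds.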

Related

[Q] How to disable microphone globally

I'm developing a customized Android ROM with which users can disable the microphone globally.
My requirements are:
The implementation should be as hard to circumvent as possible.
The implementation should cause as few app crashes as possible (perhaps by returning white noise?).
I've taken a look at the source code of the android.media.MediaRecorder class. By modifying the native implementation of the start() method, some recording apps can no longer record audio (of course, they crash at some point, e.g., while recording or playing back). However, this method does not work for all recording apps; I guess the apps that circumvent the prohibition use OpenSL ES from the Android NDK.
Another method I've tried is modifying the checkUidPermission method of PackageManagerService. When the permission to be checked is android.permission.RECORD_AUDIO, the method simply returns PackageManager.PERMISSION_DENIED. This method works for a wider range of recording apps, but I'm afraid this kind of prohibition can also be circumvented and apps are likely to crash.
Does anyone know how to better implement this feature? Where should I get started? Or help evaluate my methods? Currently I'm developing on a Nexus 4.
Any help would be appreciated!
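
One hedged sketch of the "white noise" idea, one level below MediaRecorder: since you are building the ROM, you could wrap the audio HAL's capture read() so every client, including NDK/OpenSL ES recorders, receives faint noise instead of mic data. Everything here is illustrative; the actual hook point depends on the device's HAL:

Code:
#include <hardware/audio.h>
#include <cstdint>
#include <cstdlib>

// Original read() saved when the input stream is opened (hypothetical hook).
static ssize_t (*real_in_read)(struct audio_stream_in*, void*, size_t);

static ssize_t muted_in_read(struct audio_stream_in* stream,
                             void* buffer, size_t bytes) {
    // Call the real read so blocking/timing behaviour is preserved; apps
    // then see a "live" stream and are less likely to error out or crash.
    ssize_t ret = real_in_read(stream, buffer, bytes);
    if (ret > 0) {
        // Overwrite with quiet white noise (assumes 16-bit PCM) so apps
        // that treat pure silence as a dead mic still behave normally.
        int16_t* pcm = static_cast<int16_t*>(buffer);
        const size_t samples = static_cast<size_t>(ret) / sizeof(int16_t);
        for (size_t i = 0; i < samples; ++i)
            pcm[i] = static_cast<int16_t>((std::rand() % 65) - 32);
    }
    return ret;
}

Because this sits below both the Java framework and OpenSL ES, it cannot be circumvented from an app, and no permission check is involved, so apps should keep running normally instead of crashing on a denial.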

How to create a video overlay?

I'm attempting to create a video overlay on top of Chrome (31.0) in Android (4.2).
I'm building an art installation involving a robot using a Nexus 7 in place of his face. Players will interact with the robot by shoving various cubes inside his head and triggering media and other reactions.
I've created the game within Construct 2 (an HTML5 game making tool) and have it running full screen in Chrome. I've removed and disabled all other UI elements to maintain immersion and prevent players from interfering.
My issue comes from attempting to load video content through the cubes without revealing any of the underlying UI within Chrome. I'm using Tasker in conjunction with NFC tags inside the cubes to trigger the loading of content within the game, since Construct is unable to load video files natively.
I'm able to call and load files with Tasker using any flavor of media player, but leaving Chrome means it does not return to full screen after the video has ended, which reveals the interface. I've also found there isn't a way to force Chrome into full screen without action on the user's part.
I've attempted loading a video through Tasker's webview function, but it can't handle video, and I'm unable to get support or find evidence of similar scenarios with its function elsewhere online.
I've found applications such as YouTube Overlay, but it's very slow and only works with YouTube videos. Still, it demonstrates that this basic type of function is possible, even if it takes a peculiar circumstance to make use of it.
I'm wondering if I'm missing something or if there is an effective way to create my own Webview without having to write my own custom Android application.

Real-Time Audio Effect - DSP implementation - pitch shifter

Hi
I want to make an app very similar to this one (which is no longer supported after Android API 14):
audioshift.surina.net
www.youtube.com/watch?v=AGZ7z_OVahU
It is basically a real-time audio effect that grabs the output mix, does some DSP work, and puts the result back out.
I reverse-engineered the app and found that it ships a native library implementing its own specific variant of the generic android.media.audiofx.AudioEffect (using TYPE_NULL).
I suppose this library is a slight variant of the open-source SoundTouch library (soundtouch.surina.net).
So my question is: how can I make this modification to the SoundTouch library? As far as I can see, the library only has a method that operates on samples from a file, not from a stream.
Thanks
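
For what it's worth, SoundTouch does expose a streaming interface in addition to the file-oriented helper: you feed blocks with putSamples() and drain with receiveSamples(). A rough sketch (assuming the default float-sample build; parameter values are illustrative):

Code:
#include <SoundTouch.h>

static soundtouch::SoundTouch st;

void initShifter(int sampleRate, int channels, float semitones) {
    st.setSampleRate(sampleRate);
    st.setChannels(channels);
    st.setPitchSemiTones(semitones);  // shifts pitch, preserves tempo
}

// Call once per audio block: feed input frames, drain whatever output is
// ready. SoundTouch buffers internally, so the output count varies per call.
int processBlock(const float* in, int inFrames,
                 float* out, int maxOutFrames) {
    st.putSamples(in, inFrames);
    return static_cast<int>(st.receiveSamples(out, maxOutFrames));
}

Note that setPitchSemiTones() changes pitch while keeping the tempo, whereas a plain sample-rate change (like the audio-policy trick in the reply below) shifts both, which is why the tempo slows down there.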
I'm interested in how to do this too.
I found a partial solution by changing the sample rate in the audio policy files, but my problem is that the tempo slows down, while the pitch is right for me.

SoundMods - reality or fraud?

Greetings, dear developers of sound modifications for Android OS. We want to ask you a few questions about the feasibility and performance of some components inside your audio mods. We are very interested in the features and their structure, resources, and components. We are not great experts in *nix systems, but there are some situations where the uncertainty is obvious even to a child. Let's take it in order:
For convenience, the questions are numbered.
#1. The first thing we would like to know: what is the point of shipping the binaries and libraries of player applications in /system/lib? Even if it works, why would you add a library from Onkyo HF? That media application is exclusive to Onkyo's headphones and amplifiers, as is their player app; it has no unique properties such as hi-res audio or other "goodies". We also found no circuits or connections that actually involve those libs. None of the players becomes a system component just by sitting in system directories, and even if one did, what for? Hardware decoding via a system-level resampler to replace the core mixer? After all, the application has its own lib, and it was never able to bypass the whole system to begin with, only the FX chain. This seems very paradoxical.
#2. Beats Audio. The first thing to consider is the deep binding of the Beats API into the flinger and the srsfx library. If we open the original file from the HTC M7, we see that the effect starts with a reference to sub_16A44 (a substream?) and then goes on to distribute across multiple channels, and that is just from first observations. In any case, simply "ripping" those lines out is not possible (the only one who got anywhere with porting it was @sun_dream). Granted, we can replace the HTC M8 flinger with the Beats one in its entirety and it will work (the same trick will probably work on other Snapdragon 801 devices), but it only works under OS 5.0, so why do you say it is confirmed for MM? Besides, if we install such a modification on a phone that already has Beats by default, the same configs simply replace themselves, and this trick is not fully reflected in the updater scripts. Furthermore, we are puzzled by the presence of "dummy binaries" such as /bin/beatsbass, /bin/beatsnormal, /bin/spkamp, /bin/snd, etc. The system runs safely without them, and their presence does nothing to get the effects (Beats, Sony, etc.) working.
#3. SRS. From a brief analysis of a dozen different versions, we have not seen a competent implementation of this technology. Like the previously mentioned Beats, SRS is processor-dependent (as are the other libs), so it will never work on Exynos or MediaTek, since most of these technologies are deeply tied to the same libaudioflinger, whether home-grown or taken from a donor ROM. Even where it works properly, there are very interesting things going on in the configs: in Project Yume, for example (and probably all other mods by @PDesire), the compressor value is 15000. Why, when the maximum is only 1? We consider these (and most of the libs) to be just "hanging" in the system and wasting its space. Speaking of SRS, we should also not forget that SRS prefers little-endian data, while some mods configure Float_64 (sic!) with unrealistic resampling.
#4. Sony DSEE-HX, Sony LDAC, and other Sony software technologies. Why are these shipped in universal modifications when their various builds are unique to particular Qualcomm-based devices? Or have you actually forced DSEE-HX, which is wired directly into Sony's hardware libraries, to work? If it's a secret, then how? There are no deep links into the system, nor any other real implementation. The only truly working tech port from Sony is –, but even with that one you should be careful. LDAC is a hardware-only implementation.
#5. ALSA. As developers, you know that MediaTek and Exynos do not have the full ALSA stack (the one you stick in your mods); at the low level, their kernel code works through tinyalsa, while the "usr" folder in your mods is designed for a Qualcomm environment. So there will never be any sample rates above those that MTK or Exynos support by default!
#6. Stagefright decoders are not registered in media_codecs.xml; their mere presence in the system has no effect.
#7. 24-bit/192 kHz is impossible without touching the true low level. ALSA does not work on 90% of devices; we talked about this in the previous point.
We don't want to discredit anyone; maybe we missed something. We really do wonder how the mod authors have coped with porting so many libraries. At the moment, however, it looks more like fraud and an exorbitant inflation of the system partition with a bunch of garbage files.
Sorry for our bad English.
With regards,
Android Modders from the Other Side. If you have questions, ask me.
@ahrion @Ben Feutrill @guitardedhero @PDesire @androidexpert35 @DeadRod @anandmore @A.R.I.S.E. Sound Systems @TheRoyalSeeker @mrchezco1995 @Hani K. and others.

Developing a launcher/kiosk app for video playback

This is sort of a research thread and I hope someone here is willing to weigh in with their knowledge.
I'm a Ruby/Java/Python/JS/PHP developer who did a little bit of Android game development during my studies back in 2012. I assume things have changed since then.
I'm working on a commercial project where we need a network-controllable video player for LED TVs and/or video projectors. Currently we are using a Raspberry Pi 3-based design with OMX Player, but that board is somewhat weak, and the player is cumbersome to interact with and has limitations, especially when it comes to rendering multiple layers with transparency. I would like to work on a platform that gives me a rich, object-oriented multimedia API for rendering sound and video.
I have gotten myself an Asus Tinker Board, which has an official Android distribution. It runs rather smoothly, and from what I can tell, the Android APIs appear rich and flexible. So my questions are:
1) Is it possible to develop a launcher/kiosk app that boots into a "blank" screen and lets the app place video surfaces, image surfaces, and text layers? I should also be able to interact with the sound card and play back PCM audio, ideally through an API that supports audio mixing, amplification, etc. There is no direct user input on the device, so I need a solution that presents no status bars, Google account wizards, Wi-Fi wizards, update prompts, notifications, or anything else. In fact, when the Tinker is powered on, there should ideally be nothing indicating that it's Android.
I guess what I'm asking for is kind of a console video game engine / SDK, minus game controller support.
2) What kind of libraries or API's would I need to dive into and understand? Where should I start?
3) How complex is it? What is the scope of it? How much development time? Days? Weeks? Months? Years? Would I need more developers with specific skills?
4) Is there any developer here who's interested in participating in such a project as a paid freelance developer?
5) Are there any alternative software/OS platforms I should look into? I want to boot into a custom passive user interface that is remotely controlled over REST by another device. I would like to avoid dealing with the low-level implementation of video decoding and rendering, but at the same time I would prefer to have control over screen resolution, refresh rate, and color depth, and I would like to run an SSH server on the client so it can be serviced. Ideally, the platform should be able both to stream from the internet and to accept commands to download content to local storage and play it from there.
6) Is there any alternative hardware platform I should look into?
7) Anything else I should consider? Problems that I'll need to address / prepare for?
