We use AMI to call the AudioSocket application. We discovered that a CPU core spikes to 100% while the call is in AudioSocket.
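For context, this is roughly the shape of the Originate we send over AMI, sketched in Go; the AMI address, credentials, channel, UUID, and AudioSocket server address below are placeholders rather than our real values:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical AMI endpoint and credentials; adjust for your deployment.
	conn, err := net.Dial("tcp", "127.0.0.1:5038")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Log in to AMI.
	fmt.Fprint(conn, "Action: Login\r\nUsername: ami\r\nSecret: secret\r\n\r\n")

	// Originate a call directly into the AudioSocket() dialplan application.
	// The first Data argument is the call's UUID, the second is the
	// AudioSocket server address (both placeholders here).
	fmt.Fprint(conn,
		"Action: Originate\r\n"+
			"Channel: PJSIP/1000\r\n"+
			"Application: AudioSocket\r\n"+
			"Data: 40325ec2-5efd-4bd3-805f-53576e581d13,127.0.0.1:3278\r\n"+
			"Async: true\r\n\r\n")

	// Read back the AMI responses (greeting, login ack, originate ack).
	buf := make([]byte, 4096)
	for i := 0; i < 3; i++ {
		n, err := conn.Read(buf)
		if err != nil {
			break
		}
		fmt.Print(string(buf[:n]))
	}
}
```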
I found this project, which discusses the issue: https://github.com/NormHarrison/audiosocket_server.
He wrote: "When AudioSocket is used like a channel driver, for example Dial(AudioSocket/127.0.0.1:3278/), CPU usage remains perfectly normal, but... depending on the other channel it's going to be bridged with (for example, a softphone connected via SIP), the audio sent to your AudioSocket server instance will no longer be in 16-bit, 8 kHz, mono LE PCM format.
Instead... it will be encoded and sent as whatever audio codec was agreed upon between the two channels. So in my experience, when a SIP softphone that uses the u-law (G.711) codec makes a call to a place in the dialplan that eventually invokes AudioSocket, the audio you will be sent will also be encoded as u-law, which can be both a positive and a negative."
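For reference, what our server sees on the wire is a simple framed protocol: a 1-byte message type, a 16-bit big-endian payload length, then the payload. Below is a minimal reader sketch, assuming the usual frame types (0x00 hangup, 0x01 UUID, 0x10 audio, 0xff error) and a placeholder listen port:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	hdr := make([]byte, 3)
	for {
		// Each AudioSocket message: 1-byte kind, 2-byte big-endian length, payload.
		if _, err := io.ReadFull(conn, hdr); err != nil {
			return
		}
		kind := hdr[0]
		length := binary.BigEndian.Uint16(hdr[1:3])
		payload := make([]byte, length)
		if _, err := io.ReadFull(conn, payload); err != nil {
			return
		}
		switch kind {
		case 0x00: // hangup / terminate
			return
		case 0x01: // 16-byte call UUID
			fmt.Printf("call UUID: %x\n", payload)
		case 0x10: // audio: slin (16-bit, 8 kHz, mono) from the AudioSocket() app,
			// or, per the quote above, the negotiated codec via Dial(AudioSocket/...)
			fmt.Printf("audio frame: %d bytes\n", len(payload))
		case 0xff: // error reported by Asterisk
			fmt.Printf("error frame: %x\n", payload)
		}
	}
}

func main() {
	// Placeholder listen address.
	ln, err := net.Listen("tcp", ":3278")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn)
	}
}
```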
First, is there a solution in development for the CPU core spike? Obviously, this would not scale very well.
Second, is Norm correct about AudioSocket encoding when using Dial()? We have some WebRTC (Opus), some u-law, and potentially additional codecs on various channels. If the AudioSocket audio is encoded according to the channel's codec, wouldn't the AudioSocket server need an indication of which encoding it is receiving? (A sketch of what that implies follows below.)
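To make that concrete: if the server somehow knows out of band (e.g. from the dialplan, or a lookup keyed on the call UUID) that a given channel negotiated u-law, it would have to transcode the audio itself, for example with a standard G.711 decoder. This is only an illustration of that burden, not an existing AudioSocket feature:

```go
package main

import "fmt"

// ulawToLinear decodes one G.711 mu-law byte to a 16-bit linear PCM sample
// (the classic Sun/CCITT algorithm).
func ulawToLinear(u byte) int16 {
	u = ^u                         // mu-law bytes are stored complemented
	t := (int(u&0x0f) << 3) + 0x84 // quantization bits, biased
	t <<= (uint(u) & 0x70) >> 4    // scale by segment number
	if u&0x80 != 0 {
		return int16(0x84 - t)
	}
	return int16(t - 0x84)
}

func main() {
	// Decode a hypothetical mu-law audio payload into 16-bit slin samples.
	payload := []byte{0xff, 0x7f, 0x00, 0x80}
	pcm := make([]int16, len(payload))
	for i, b := range payload {
		pcm[i] = ulawToLinear(b)
	}
	fmt.Println(pcm)
}
```

And that is the easy case; an Opus channel would require a full decoder (e.g. libopus) on the server side, which is considerably heavier.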