Search Results

Search found 18 results on 1 page for 'audioqueue'.


  • AudioQueue in-memory playback example

    - by Jonesy
    Does anybody know of any examples using AudioQueue that play from an in-memory source? All the examples I can find play from files (using AudioFileReadPackets), but in my particular case I am generating the data myself in real time, so ideally I want to enqueue the data myself rather than pulling it out of a file in the callback. Any help much appreciated. (A minimal sketch follows below.)

    Read the article
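
    A hedged sketch of the usual approach: create an output queue and have the callback synthesize samples straight into the buffer it is handed, then re-enqueue it. fill_samples() below is a toy stand-in for the poster's own generator; the format and buffer sizes are assumptions.

        #include <AudioToolbox/AudioToolbox.h>

        // Toy generator standing in for real synthesis code:
        // a ~441 Hz square wave at 44.1 kHz.
        static void fill_samples(SInt16 *dst, UInt32 count) {
            static UInt32 t = 0;
            for (UInt32 i = 0; i < count; i++, t++)
                dst[i] = ((t / 50) & 1) ? 16000 : -16000;
        }

        // Fill the completed buffer with fresh samples and hand it back.
        static void output_callback(void *userData, AudioQueueRef aq,
                                    AudioQueueBufferRef buf) {
            fill_samples((SInt16 *)buf->mAudioData,
                         buf->mAudioDataBytesCapacity / sizeof(SInt16));
            buf->mAudioDataByteSize = buf->mAudioDataBytesCapacity;
            AudioQueueEnqueueBuffer(aq, buf, 0, NULL);
        }

        void start_generated_playback(void) {
            AudioStreamBasicDescription fmt = {0};
            fmt.mSampleRate       = 44100;
            fmt.mFormatID         = kAudioFormatLinearPCM;
            fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                                  | kAudioFormatFlagIsPacked;
            fmt.mChannelsPerFrame = 1;
            fmt.mBitsPerChannel   = 16;
            fmt.mBytesPerFrame    = 2;
            fmt.mFramesPerPacket  = 1;
            fmt.mBytesPerPacket   = 2;

            AudioQueueRef aq;
            AudioQueueNewOutput(&fmt, output_callback, NULL,
                                CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                                0, &aq);
            // Prime a few buffers so the hardware never starves.
            for (int i = 0; i < 3; i++) {
                AudioQueueBufferRef buf;
                AudioQueueAllocateBuffer(aq, 4096, &buf);
                output_callback(NULL, aq, buf);
            }
            AudioQueueStart(aq, NULL);
        }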

  • AudioQueue on iPhone

    - by Sridhar
    Hi, is there any way to record sound at a slower rate using Audio Queues on the iPhone (maybe in the callback function)? Currently I am recording linear PCM at 22050 Hz. Basically I want to adjust the audio samples to match my video frame rate, which is 10 FPS. Thanks. (A buffer-sizing sketch follows below.)

    Read the article
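
    One hedged reading of this question: rather than recording "slowly", size each input buffer to exactly one video frame of audio, so the input callback fires once per frame and the samples stay aligned with the 10 FPS video. At 22050 Hz that is 22050 / 10 = 2205 samples per frame. A sketch, assuming 16-bit mono linear PCM; inputQueue is the poster's existing recording queue:

        #include <AudioToolbox/AudioToolbox.h>

        // Sketch: size each input buffer to one 10 fps video frame of
        // audio so the recording callback fires once per video frame.
        // Assumes 16-bit mono linear PCM at 22050 Hz.
        void enqueue_frame_sized_buffers(AudioQueueRef inputQueue) {
            const Float64 sampleRate = 22050.0;
            const int     videoFPS   = 10;
            // 2205 samples per video frame, 2 bytes each.
            UInt32 bufferBytes = (UInt32)(sampleRate / videoFPS) * sizeof(SInt16);
            for (int i = 0; i < 3; i++) {        // keep a few buffers in flight
                AudioQueueBufferRef buf;
                AudioQueueAllocateBuffer(inputQueue, bufferBytes, &buf);
                AudioQueueEnqueueBuffer(inputQueue, buf, 0, NULL);
            }
        }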

  • 2 AudioQueue questions

    - by iter
    I am learning to use AudioQueue. I wish to generate an audio stream programmatically. I have two issues that I cannot account for. First, I am getting audio when I run in the simulator, but not on an iPhone (other apps do produce sound on the phone). Second, I get roughly 20 ms gaps of silence between buffers. In my testing, I generate an audio buffer on startup and repeatedly enqueue it without modification, so I don't spend any processing time filling audio buffers at runtime, not even copying them. Ari. (A session-category sketch follows below.)

    Read the article
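
    For the device-only silence, a common cause (an assumption here, not confirmed by the post) is the ring/silent switch: the default audio session category is muted by the hardware switch, while the simulator ignores it. A sketch of opting into a playback category before starting the queue; the 20 ms gaps look like the single-buffer starvation covered under 'AudioQueue ate my buffer' further down:

        #import <AVFoundation/AVFoundation.h>

        // Sketch: use a playback category so the ring/silent switch
        // no longer mutes the queue's output on the device.
        void configure_session_for_playback(void) {
            NSError *error = nil;
            [[AVAudioSession sharedInstance]
                setCategory:AVAudioSessionCategoryPlayback error:&error];
            [[AVAudioSession sharedInstance] setActive:YES error:&error];
            if (error) NSLog(@"audio session error: %@", error);
        }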

  • AudioQueue recording as float

    - by niklassaers
    Hi guys, I would like to have the result of my recording as a float in the range [0.0, 1.0], or alternatively [-1.0, 1.0], because of a bit of math I want to do on it. When I set my recording format to float, like this: mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat; I get: Error: AudioQueueNewInput failed ('fmt?') Does this mean the hardware doesn't support recording to floats? If not, how do I set it to record in floats? If so, is there a processor-friendly way to convert a signed integer array to a float array? Cheers, Nik (A conversion sketch follows below.)

    Read the article
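
    On the conversion question, a hedged sketch assuming the recording is 16-bit signed linear PCM (the usual iPhone capture format): dividing each sample by 32768 lands in [-1.0, 1.0):

        #include <stddef.h>
        #include <stdint.h>

        // Sketch: convert signed 16-bit PCM samples to floats in [-1.0, 1.0).
        // For [0.0, 1.0] instead, use (s + 32768) / 65535.0f per sample.
        void int16_to_float(const int16_t *src, float *dst, size_t count) {
            for (size_t i = 0; i < count; i++)
                dst[i] = src[i] / 32768.0f;   // 32768 = -INT16_MIN
        }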

  • AudioQueue ate my buffer (first 15 milliseconds of it)

    - by iter
    I am generating audio programmatically. I hear gaps of silence between my buffers. When I hook my phone to a scope, I see that the first few samples of each buffer are missing, and in their place is silence. The length of this silence varies from almost nothing to as much as 20 ms. My first thought was that my original callback function takes too much time, so I replaced it with the shortest one possible, which re-enqueues the same buffer over and over. I observe the same behavior. (A buffer-priming sketch follows below.)

        AudioQueueRef aq;
        AudioQueueBufferRef aq_buffer;
        AudioStreamBasicDescription asbd;

        void aq_callback (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
            OSStatus s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
        }

        void aq_init(void) {
            OSStatus s;
            asbd.mSampleRate = AUDIO_SAMPLES_PER_S;
            asbd.mFormatID = kAudioFormatLinearPCM;
            asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
            asbd.mBytesPerPacket = 1;
            asbd.mFramesPerPacket = 1;
            asbd.mBytesPerFrame = 1;
            asbd.mChannelsPerFrame = 1;
            asbd.mBitsPerChannel = 8;
            asbd.mReserved = 0;
            int PPM_PACKETS_PER_SECOND = 50;
            // one buffer is as long as one PPM frame
            int BUFFER_SIZE_BYTES = asbd.mSampleRate/PPM_PACKETS_PER_SECOND*asbd.mBytesPerFrame;
            s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);
            s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer);
            // put samples in the buffer
            buffer_data(my_data, aq_buffer);
            s = AudioQueueStart(aq, NULL);
            s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
        }

    Read the article
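
    The snippet above enqueues a single buffer, and only after AudioQueueStart. A hedged fix, following the standard Audio Queue pattern rather than anything verified against this exact code: allocate several buffers, fill and enqueue them all, then start, and have the callback re-enqueue the buffer it was handed. This reuses the post's asbd, aq, my_data and buffer_data(); the ASBD setup is the same as above and omitted:

        // Sketch: prime multiple buffers before AudioQueueStart so the
        // hardware never runs dry between callbacks.
        enum { kNumBuffers = 3 };

        static void aq_callback_primed(void *aqData, AudioQueueRef inAQ,
                                       AudioQueueBufferRef inBuffer) {
            // Re-enqueue the buffer that just completed, not a fixed global one.
            AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
        }

        void aq_init_primed(void) {
            // ASBD initialization as in the post, omitted here.
            int bufferSizeBytes = asbd.mSampleRate / 50 * asbd.mBytesPerFrame; // one PPM frame
            AudioQueueNewOutput(&asbd, aq_callback_primed, NULL,
                                CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);
            for (int i = 0; i < kNumBuffers; i++) {
                AudioQueueBufferRef buf;
                AudioQueueAllocateBuffer(aq, bufferSizeBytes, &buf);
                buffer_data(my_data, buf);                 // fill before starting
                AudioQueueEnqueueBuffer(aq, buf, 0, NULL);
            }
            AudioQueueStart(aq, NULL);                     // start with a full pipeline
        }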

  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and play back audio. What I would like to do is record some audio, allow the user to pause the record queue, and seek back and forward through the audio to select a position from which they can start recording again. I have got over the seeking issue by making the playback AudioQueueBuffer sizes small enough that the play callback fires at a rate that lets the user hear the audio as they scrub a slider back and forth. I think I can achieve recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the recording queue callback. The trouble is that this only writes over the previously recorded audio; the file length obviously doesn't change, so if the user goes backwards and records less audio than before, the old audio still remains after the end of the newly recorded audio. Is there a way to get the AudioFile to truncate at the point the user starts to insert the new audio, some other way to remove the old audio from the insert position onward, or a better way of going about this task? Thanks. (A truncate-by-copy sketch follows below.)

    Read the article
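
    Audio File Services has no truncate call that I know of, so one hedged workaround is to copy packets up to the insertion point into a fresh file and continue recording into that, discarding the original afterwards. A sketch with hypothetical names, assuming linear PCM at no more than 2 bytes per packet:

        #include <AudioToolbox/AudioToolbox.h>

        // Sketch: "truncate" by copying packets [0, insertPacket) into
        // a new file, then continue recording into that file.
        OSStatus copy_head_to_new_file(AudioFileID srcFile,
                                       const AudioStreamBasicDescription *fmt,
                                       CFURLRef dstURL, SInt64 insertPacket,
                                       AudioFileID *outDstFile) {
            OSStatus err = AudioFileCreateWithURL(dstURL, kAudioFileCAFType, fmt,
                                                  kAudioFileFlags_EraseFile,
                                                  outDstFile);
            if (err != noErr) return err;

            enum { kChunkPackets = 4096 };
            char buffer[kChunkPackets * 2];    // assumes <= 2 bytes per packet
            SInt64 packet = 0;
            while (packet < insertPacket) {
                SInt64 remaining = insertPacket - packet;
                UInt32 numPackets = remaining > kChunkPackets
                                    ? kChunkPackets : (UInt32)remaining;
                UInt32 numBytes = sizeof(buffer);
                err = AudioFileReadPackets(srcFile, false, &numBytes, NULL,
                                           packet, &numPackets, buffer);
                if (err != noErr || numPackets == 0) break;
                err = AudioFileWritePackets(*outDstFile, false, numBytes, NULL,
                                            packet, &numPackets, buffer);
                if (err != noErr) break;
                packet += numPackets;
            }
            return err;
        }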

  • audio power on AudioQueue

    - by Tomoyuki
    Hi everyone. I'm creating an application that uses speech recognition. To check the audio power coming in through the microphone, I wrote a method as follows:

        - (void)checkPower:(AudioQueueRef)queue {
            AudioQueueLevelMeterState audioLevels;    // one element: mono queue
            UInt32 expectedSize = sizeof(AudioQueueLevelMeterState);
            AudioQueueGetProperty(queue, kAudioQueueProperty_CurrentLevelMeter,
                                  &audioLevels, &expectedSize);
            NSLog(@"average:%f peak:%f",
                  audioLevels.mAveragePower, audioLevels.mPeakPower);
        }

    I found that sometimes mAveragePower was larger than mPeakPower, and when mAveragePower was 1.0 (in other words, when the average power was at its maximum), mPeakPower was lower than 1.0. I think this result should generally be impossible. Please let me know if you have any information about sound power in Core Audio. Thanks. (A metering-setup sketch follows below.)

    Read the article
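
    One thing worth checking (an assumption, not a confirmed diagnosis): the queue's level metering has to be switched on before kAudioQueueProperty_CurrentLevelMeter returns meaningful values. A sketch:

        #include <AudioToolbox/AudioToolbox.h>

        // Sketch: enable metering once, before the meter is ever polled.
        void enable_metering(AudioQueueRef queue) {
            UInt32 on = 1;
            AudioQueueSetProperty(queue, kAudioQueueProperty_EnableLevelMetering,
                                  &on, sizeof(on));
        }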

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then under certain conditions can begin or end writing out a file from that already-instantiated AudioQueue. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen() {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler,
                                                 this, NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue,
                                                    kAudioQueueProperty_StreamDescription,
                                                    &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat,
                                                         kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize,
                                                           &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            } catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            } catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue, "from now until the stop signal, start writing this queue out to tmp as a file". I understand how to tell an AudioQueue to write out a file at the time it is created, how to set the file's format, and so on, but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks. (A callback-gating sketch follows below.)

    Read the article
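
    A hedged approach that stays inside the SpeakHere structure: keep a boolean on the recorder and let the input callback consult it, so the queue runs continuously and the file is only touched while the flag is set. mWritingToFile is an invented member; mRecordFile, mRecordPacket and IsRunning() are the existing SpeakHere ones:

        // Sketch: gate the file write inside the input callback with a flag.
        void AQRecorder::MyInputBufferHandler(void *inUserData, AudioQueueRef inAQ,
                                              AudioQueueBufferRef inBuffer,
                                              const AudioTimeStamp *inStartTime,
                                              UInt32 inNumPackets,
                                              const AudioStreamPacketDescription *inPacketDesc)
        {
            AQRecorder *aqr = (AQRecorder *)inUserData;
            if (aqr->mWritingToFile && inNumPackets > 0) {
                // Only touch the file while writing is switched on.
                AudioFileWritePackets(aqr->mRecordFile, false,
                                      inBuffer->mAudioDataByteSize, inPacketDesc,
                                      aqr->mRecordPacket, &inNumPackets,
                                      inBuffer->mAudioData);
                aqr->mRecordPacket += inNumPackets;
            }
            // Keep the queue fed whether or not the file is being written.
            if (aqr->IsRunning())
                AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
        }

    Switching writing on would then just create the file with AudioFileCreateWithURL, reset mRecordPacket to 0 and set the flag; switching off clears the flag and closes the file.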

  • Sound File editing in Objective C

    - by Biranchi
    Hi all, I am able to record and create audio files using AudioFileCreateWithURL in the AudioToolbox framework. I want to figure out whether there is any way to edit .caf sound files; I want to insert another recorded audio clip inside the main audio file. Any thoughts or suggestions on how to proceed? Thanks. (The packet-copy sketch under the Record/Playback entry above shows one building block for this kind of splice.)

    Read the article

  • iPhone SDK: Audio Queue control

    - by codemercenary
    Hi all, I am new to Audio Queue Services, so I have taken an example from a book called iPhone Cool Projects, where it describes how to stream audio. I want to extend this to play a continuous playlist of links to mp3 files, like an internet radio. The problem with the example code is that it does not detect when a stream ends and never calls AudioQueueStop, so I added a counter of buffers added to the queue, and decrement it each time audioQueueOutputCallback is called by the queue. This works fine, except that when the buffer count goes to 0 and I then call AudioQueueFlush(audioQueue) followed by AudioQueueStop(audioQueue, false), I get an error. If I only call AudioQueueReset, it continues to load the buffers again, but plays them out faster than it loads them, getting stuck in a loop and then crashing:

        2010-04-14 13:56:29.745 AudioPlayer[2269:207] init player with URL
        2010-04-14 13:56:29.941 AudioPlayer[2269:207] did recieve data
        2010-04-14 13:56:29.942 AudioPlayer[2269:207] audio request didReceiveData
        2010-04-14 13:56:29.944 AudioPlayer[2269:207] >>> start audio queue
        2010-04-14 13:56:29.960 AudioPlayer[2269:207] packetCallback count 2
        2010-04-14 13:56:29.961 AudioPlayer[2269:207] add buffer: 1
        2010-04-14 13:56:29.962 AudioPlayer[2269:207] did recieve data
        2010-04-14 13:56:29.963 AudioPlayer[2269:207] audio request didReceiveData
        2010-04-14 13:56:29.963 AudioPlayer[2269:207] packetCallback count 1
        2010-04-14 13:56:29.964 AudioPlayer[2269:207] add buffer: 2
        2010-04-14 13:56:29.965 AudioPlayer[2269:207] packetCallback count 13
        2010-04-14 13:56:29.967 AudioPlayer[2269:207] add buffer: 3
        2010-04-14 13:56:29.968 AudioPlayer[2269:207] done with buffer: 3
        2010-04-14 13:56:29.969 AudioPlayer[2269:207] done with buffer: 2
        2010-04-14 13:56:29.974 AudioPlayer[2269:207] done with buffer: 1

    This loop continues some 20-30 times and then it crashes. The first time it plays an audio file it queues up the buffers and then plays sound, but doesn't call back to delete them until some 100 or more have been played. Can anyone explain this behavior? I read that there was a limit of one audio queue for MP3 playback on the iPhone; is that still true? If not, then I suppose I should use another audio queue for the next mp3 stream. I've had a look through the Apple docs but they don't explain this in any particular detail. A better insight into this would be great. TIA. (A stop-at-end sketch follows below.)

    Read the article
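
    A hedged pattern for the end-of-stream case: set a flag when the network source is exhausted, and have the output callback stop the queue only when the last in-flight buffer completes. The error the poster sees may come from stopping while buffers are still queued, but that is an assumption. buffersInFlight is incremented by the (not shown) enqueueing code:

        #include <AudioToolbox/AudioToolbox.h>
        #include <stdbool.h>

        // Maintained by the networking/enqueueing code (not shown):
        // ++buffersInFlight per enqueue; streamEnded when the source is done.
        static volatile int  buffersInFlight = 0;
        static volatile bool streamEnded     = false;

        static void audioQueueOutputCallback(void *userData, AudioQueueRef aq,
                                             AudioQueueBufferRef buf) {
            buffersInFlight--;
            if (streamEnded && buffersInFlight == 0) {
                // Drain any data still inside the queue, then stop
                // asynchronously; only now is stopping safe.
                AudioQueueFlush(aq);
                AudioQueueStop(aq, false);
                // Kick off the next playlist item from here.
            }
        }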

  • Xcode best audio practices

    - by Zachary Webert
    What is the best practice for creating an audio prompt within my app that appends different portions of audio together to ask a question? e.g. "What is" + "foo"? "What is" + "bar"? I have developed an "AudioQueue" object using AudioToolbox that uses AudioServicesPlaySystemSound(), and it is working perfectly. Is there anything wrong with playing this type of audio through the alert system? If so, what are my alternatives? Thank you. (A clip-chaining sketch follows below.)

    Read the article
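
    System Sound Services is intended for short UI alerts, so chaining phrases through it is fragile. A hedged alternative is AVAudioPlayer with its delegate callback used to start the next clip; PromptPlayer and the file-path queue are hypothetical names, and the memory management follows the manual-retain conventions of this SDK era:

        #import <AVFoundation/AVFoundation.h>

        // Sketch: play "What is" then "foo" back to back.
        @interface PromptPlayer : NSObject <AVAudioPlayerDelegate>
        @property (nonatomic, retain) NSMutableArray *clipQueue;  // of file paths
        @property (nonatomic, retain) AVAudioPlayer *player;
        - (void)playNext;
        @end

        @implementation PromptPlayer
        @synthesize clipQueue, player;

        - (void)playNext {
            if ([self.clipQueue count] == 0) return;
            NSString *path = [self.clipQueue objectAtIndex:0];
            [self.clipQueue removeObjectAtIndex:0];
            self.player = [[[AVAudioPlayer alloc]
                initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                error:nil] autorelease];
            self.player.delegate = self;
            [self.player play];
        }

        // Called when one clip ends; start the next to form the sentence.
        - (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)p
                               successfully:(BOOL)flag {
            [self playNext];
        }
        @end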

  • Finding out estimated duration of a stream using Core Audio

    - by Reflog
    I am streaming an MP3 over the network using custom feeding code, not AVAudioPlayer (which only works with URLs), using APIs like AudioFileStreamOpen and so on. Is there any way to estimate the length of the stream? I know that I can get an 'elapsed' property using:

        if (AudioQueueGetCurrentTime(queue.audioQueue, NULL, &t, &b) < 0)
            return 0;
        return t.mSampleTime / dataFormat.mSampleRate;

    But what about the total duration, to create a progress bar? Is that possible? (An estimation sketch follows below.)

    Read the article
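
    A hedged estimate for total duration: once the stream parser has seen the header, duration ≈ audio data bytes × 8 / bit rate; for VBR streams this is only approximate. A sketch, where streamID is the AudioFileStreamID the feeding code already opened:

        #include <AudioToolbox/AudioToolbox.h>

        // Sketch: estimate total duration of a streamed MP3 from its
        // bit rate and audio data size, both reported by the parser.
        double estimated_duration_seconds(AudioFileStreamID streamID) {
            UInt32 bitRate = 0;       // bits per second
            UInt64 dataBytes = 0;     // size of the audio data region
            UInt32 size = sizeof(bitRate);
            if (AudioFileStreamGetProperty(streamID,
                                           kAudioFileStreamProperty_BitRate,
                                           &size, &bitRate) != noErr
                || bitRate == 0)
                return 0.0;
            size = sizeof(dataBytes);
            if (AudioFileStreamGetProperty(streamID,
                                           kAudioFileStreamProperty_AudioDataByteCount,
                                           &size, &dataBytes) != noErr)
                return 0.0;
            return (double)dataBytes * 8.0 / (double)bitRate;  // seconds
        }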

  • Audio queue start failed

    - by mobapps99
    Hi, I'm developing a project which has both audio streaming and playing audio from a file. For audio streaming I'm using AudioStreamer, and for playing from a file I'm using AVAudioPlayer. Both streaming and playing work perfectly as long as the app is not interrupted by a phone call or SMS. But when a call or SMS comes in, after dismissing the call, when I try to restart streaming I get the error "Audio queue start failed". This happens only when I have used the AVAudioPlayer at least once and then used streaming; when the AVAudioPlayer object has not been created, there is no problem resuming streaming after dismissing the call. My guess is that something is wrong with the audio queue. Help is very much appreciated. (An interruption-handling sketch follows below.)

    Read the article
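
    A hedged guess at the cause: the phone call deactivates the audio session and nothing reactivates it before the queue restarts. A sketch of handling the interruption in whichever class is set as the AVAudioSession delegate; streamer is assumed to be the AudioStreamer instance:

        // In the class registered as the audio session delegate:
        - (void)beginInterruption {
            [streamer pause];   // streamer: the AudioStreamer instance (assumed)
        }

        - (void)endInterruption {
            NSError *error = nil;
            // Reactivate the session the call knocked out, then restart.
            [[AVAudioSession sharedInstance] setActive:YES error:&error];
            if (error == nil) [streamer start];
        }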

  • AVFoundation: Video to OpenGL texture working - How to play and sync audio?

    - by j00hi
    I've managed to load the video track of a movie frame by frame into an OpenGL texture with AVFoundation. I followed the steps described in the answer here: "iOS4: how do I use video file as an OpenGL texture?" and took some code from the GLVideoFrame sample from WWDC2010, which can be downloaded here: http://bit.ly/cEf0rM How do I play the audio track of the movie synchronously with the video? I think it would not be a good idea to play it in a separate player, but to use the audio track of the same AVAsset:

        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    I retrieve a video frame and its timestamp in the CADisplayLink callback via:

        CMSampleBufferRef sampleBuffer = [self.readerOutput copyNextSampleBuffer];
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    where readerOutput is of type AVAssetReaderTrackOutput*. How do I get the corresponding audio samples, and how do I play them? Edit: I've looked around a bit and I think the best approach would be to use AudioQueue from AudioToolbox.framework, using the approach described here: "AVAssetReader and Audio Queue streaming problem". There is also an audio player in AVFoundation, AVAudioPlayer, but I don't know exactly how I should pass data to its initWithData: initializer, which expects NSData. Furthermore, I don't think it's the best choice for my case, because as I understand it a new AVAudioPlayer instance would have to be created for every new chunk of audio samples. Any other suggestions? What's the best way to play the raw audio samples which I get from the AVAssetReaderTrackOutput? (An audio-reading sketch follows below.)

    Read the article
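
    For pulling the raw audio, a hedged sketch: attach a second AVAssetReaderTrackOutput for the audio track, ask it to decode to linear PCM, and copy each sample buffer's bytes into an AudioQueue buffer. assetReader and audioQueue are assumed to exist already; error handling is omitted:

        #import <AVFoundation/AVFoundation.h>
        #import <CoreMedia/CoreMedia.h>
        #import <AudioToolbox/AudioToolbox.h>

        // Setup: decode the audio track to PCM alongside the video reads.
        NSDictionary *pcmSettings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, nil];
        AVAssetReaderTrackOutput *audioOutput =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                                       outputSettings:pcmSettings];
        [assetReader addOutput:audioOutput];

        // Per chunk: copy the decoded PCM into an AudioQueue buffer.
        CMSampleBufferRef sbuf = [audioOutput copyNextSampleBuffer];
        if (sbuf) {
            CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sbuf);
            size_t length = CMBlockBufferGetDataLength(block);
            AudioQueueBufferRef aqBuf;
            AudioQueueAllocateBuffer(audioQueue, (UInt32)length, &aqBuf);
            CMBlockBufferCopyDataBytes(block, 0, length, aqBuf->mAudioData);
            aqBuf->mAudioDataByteSize = (UInt32)length;
            AudioQueueEnqueueBuffer(audioQueue, aqBuf, 0, NULL);
            CFRelease(sbuf);
        }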

  • Can AVAudioSession do full duplex?

    - by Eric Christensen
    It would seem like it should be able to, but the following breakout test code can't do both:

        // play a file:
        NSArray *pathsArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [pathsArray objectAtIndex:0];
        NSString *playFilePath = [documentsDirectory stringByAppendingPathComponent:@"testplayfile.wav"];
        AVAudioPlayer *tempplayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:playFilePath] error:nil];
        [tempplayer prepareToPlay];
        [tempplayer play];

        // and record a file:
        NSString *recFilePath = [documentsDirectory stringByAppendingPathComponent:@"testrecordfile.wav"];
        AVAudioRecorder *soundrecording = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:recFilePath] settings:nil error:nil];
        [soundrecording prepareToRecord];
        [soundrecording record];

    This is the minimum I can think of to individually play one file and record another, and it works just fine in the simulator: I can play back a file and record at the same time. But it doesn't work on the iPhone itself. If I comment out either function, the other performs fine. The playback plays fine either alone or, if it runs first, with both; if I comment out the playback, the record records fine. (There's additional code to stop the recording, not shown here.) So each works fine, but not together. I know Audio Queues have a setting to allow both, but I don't see an analogue for AVAudioSession. Any idea if it's possible, and if so, what I need to add? Thanks! (A session-category sketch follows below.)

    Read the article
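
    A likely culprit, though it is an assumption: the default session category on the device permits playback or recording but not both, and the simulator does not enforce categories. A sketch of requesting full duplex before starting either object, which matches the edit shown in the next entry:

        // Before creating the player and the recorder:
        NSError *error = nil;
        [[AVAudioSession sharedInstance]
            setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
        [[AVAudioSession sharedInstance] setActive:YES error:&error];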

  • AVAudioPlayer working in Simulator, but not on device

    - by cannyboy
    My mp3-playing code is:

        NSError *error;
        soundObject = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:audioPathString] error:&error];
        if (soundObject == nil)
            NSLog(@"%@", [error description]);
        soundObject.delegate = self;
        soundObject.numberOfLoops = 0;
        soundObject.volume = 1.0;
        NSLog(@"about to play");
        [soundObject prepareToPlay];
        [soundObject play];
        NSLog(@"[soundObject play];");

    The mp3 used to play fine, and it still does in the simulator, but not on the device. I've recently added some sound recording code (not mine) to the software. It uses AudioQueue stuff which is slightly beyond me. Does that conflict with AVAudioPlayer, or what could be the problem? I've noticed that as soon as the audio recording code starts working, I can't adjust the volume on the device anymore, so maybe it blocks the audio playback? EDIT: The solution seems to be to put this in my code; I put it in applicationDidFinishLaunching:

        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
        UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
        AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);

    The first line allows both play and record, while the other lines apparently reroute things to make the volume louder. All audio code is voodoo to me.

    Read the article

  • iPhone AVAudioPlayer failed to find codec

    - by Anthony
    Hello, I am writing an app that downloads a wav file from a server and needs to play that file. The files use the mu-law codec with 2:1 compression. These wav files are dynamically created by a separate process, so there is no way for me to pre-convert them to a different format or codec; I need to be able to play them as is. I am using an AVAudioPlayer instance initialized as follows:

        NSURL *audioURL = [[NSURL alloc] initWithString:@"http://xxx.../file.wav"];
        NSData *audioData = [[NSData alloc] initWithContentsOfURL:audioURL];
        AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithData:audioData error:nil];
        [audio play];

    However, when the play method executes, I get the following console output when executing on the simulator:

        AudioQueue codec policy 1: failed to find a codec of the requested type

    I also tried saving the downloaded data to a local file and using a file URL, but that yields the same results. The downloaded file plays fine on both Mac- and Windows-based desktop media players. The SDK docs state that the mu-law codec is supported on the iPhone, so I am unsure why it fails to find it. Any assistance would be greatly appreciated. Thanks. (An AudioQueue fallback sketch follows below.)

    Read the article
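
    If AVAudioPlayer's codec lookup really is the blocker, a hedged fallback is to parse the file with Audio File Services and play it through an Audio Queue, handing AudioQueueNewOutput a mu-law AudioStreamBasicDescription (kAudioFormatULaw). A sketch of the format setup only; the 8 kHz rate is an assumption that should instead be read from the file, and playbackCallback is a hypothetical callback:

        #include <AudioToolbox/AudioToolbox.h>

        static void playbackCallback(void *userData, AudioQueueRef aq,
                                     AudioQueueBufferRef buf);  // hypothetical

        void open_ulaw_queue(AudioQueueRef *outQueue) {
            // mu-law stores 8 bits per sample; rate assumed, not known.
            AudioStreamBasicDescription ulaw = {0};
            ulaw.mSampleRate       = 8000.0;
            ulaw.mFormatID         = kAudioFormatULaw;
            ulaw.mChannelsPerFrame = 1;
            ulaw.mBitsPerChannel   = 8;
            ulaw.mBytesPerFrame    = 1;
            ulaw.mFramesPerPacket  = 1;
            ulaw.mBytesPerPacket   = 1;
            AudioQueueNewOutput(&ulaw, playbackCallback, NULL,
                                CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                                0, outQueue);
        }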
