Search Results

Search found 5304 results on 213 pages for 'audio streaming'.


  • What does the LAME text do in an MP3 file?

    - by Dims
    I see here http://en.wikipedia.org/wiki/MP3 that an MP3 file consists of MP3 headers interleaved with MP3 data, and that an MP3 header is only a few bytes long. But here is a dump of my MP3 file with the ID3 tag cut off. The header is highlighted in blue, and you can see the text "LAME3.96" highlighted in green. What is it doing there? Is it part of the MP3 elementary stream, or part of some header I haven't accounted for?
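
    For what it's worth, the "LAME3.96" string is almost certainly the encoder version stamp that the LAME encoder writes into the Xing/Info metadata frame at the start of the file; it is metadata, not audio data. A minimal sketch (in Java, assuming the tag sits in the first few KB of the file, as it normally does once the ID3 tag is cut) that scans the file head for the usual markers:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class LameTagScan {
            public static void main(String[] args) throws IOException {
                // The Xing/Info header (and LAME version stamp) lives in the
                // first MP3 frame, so only the head of the file matters here.
                byte[] data = Files.readAllBytes(Paths.get(args[0]));
                int n = Math.min(data.length, 4096);
                String head = new String(data, 0, n, StandardCharsets.ISO_8859_1);
                for (String marker : new String[] { "Xing", "Info", "LAME" }) {
                    int at = head.indexOf(marker);
                    System.out.println(marker + (at >= 0 ? " found at byte " + at : " not found"));
                }
            }
        }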

    Read the article

  • Is there an open source live video stream editing project with an API for my needs?

    - by Ole Jak
    I need an open source project with an API capable of reading a live video stream (the stream codec can be anything the API can read; I can provide practically any live-streamable format), giving me the latest image data for some processing (like brightness/contrast or more exotic filtering), accepting the data I've changed, and then streaming that data on to some http://localhost:port/ address in some format. I need it to be easily accessible from C# (even better, written in C#).

    Read the article

  • How can I get latency info from Android's AudioTrack class?

    - by Ryan
    I've noticed that the C++ classes underlying the AudioTrack and AudioRecord APIs in Android both have a latency() method that is not exposed via JNI. As far as I can see, the latency() method in AudioRecord still does not take into account the hardware latency (they have a TODO comment for that), but the latency() method in AudioTrack does add in the hardware latency. I absolutely need to get this latency value from AudioTrack. Is there any possible way I can do this? I don't care what kind of crazy hack is needed as long as it doesn't require a rooted phone (the resulting code must still be packaged as an app on the market).
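
    One known (and decidedly unsupported) approach: on some Android builds, the native latency() is wrapped by a hidden Java method named getLatency() on AudioTrack, reachable via reflection without root. A hedged sketch; the method is not part of the public API and may be absent or behave differently across devices:

        import android.media.AudioTrack;
        import java.lang.reflect.Method;

        public final class LatencyHack {
            // Returns the AudioTrack's reported latency in milliseconds,
            // or -1 if the hidden method is unavailable on this build.
            public static int getLatencyMs(AudioTrack track) {
                try {
                    Method m = AudioTrack.class.getMethod("getLatency", (Class<?>[]) null);
                    return (Integer) m.invoke(track, (Object[]) null);
                } catch (Exception e) {
                    return -1; // hidden API missing or blocked
                }
            }
        }

    Since this is plain reflection, it packages as an ordinary market app, which matches the question's constraint.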

    Read the article

  • Is it possible to display video information from an RTSP stream in an Android app UI?

    - by Joseph Cheung
    I have managed to get a working video player that can stream RTSP links. However, I'm not sure how to display the video's current time position in the UI. I have used the getDuration and getCurrentPosition calls, stored the information in a string, and tried to display it in the UI, but it doesn't seem to work.

    In main.xml:

        <TextView android:id="@+id/player"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_margin="1px"
            android:text="@string/cpos" />

    In strings.xml:

        <string name="cpos">""</string>

    In Player.java:

        private void playVideo(String url) {
            try {
                media.setEnabled(false);
                if (player == null) {
                    player = new MediaPlayer();
                    player.setScreenOnWhilePlaying(true);
                } else {
                    player.stop();
                    player.reset();
                }
                player.setDataSource(url);
                player.getCurrentPosition();
                player.setDisplay(holder);
                player.setAudioStreamType(AudioManager.STREAM_MUSIC);
                player.setOnPreparedListener(this);
                player.prepareAsync();
                player.setOnBufferingUpdateListener(this);
                player.setOnCompletionListener(this);
            } catch (Throwable t) {
                Log.e(TAG, "Exception in media prep", t);
                goBlooey(t);
                try {
                    try {
                        player.prepare();
                    } catch (IOException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    Log.v(TAG, "Duration: === " + player.getDuration());
                } catch (IllegalStateException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

        private Runnable onEverySecond = new Runnable() {
            public void run() {
                if (lastActionTime > 0 && SystemClock.elapsedRealtime() - lastActionTime > 3000) {
                    clearPanels(false);
                }
                if (player != null) {
                    timeline.setProgress(player.getCurrentPosition());
                    // stores getCurrentPosition as a string
                    cpos = String.valueOf(player.getCurrentPosition());
                    System.out.print(cpos);
                }
                if (player != null) {
                    timeline.setProgress(player.getDuration());
                    // stores getDuration as a string
                    cdur = String.valueOf(player.getDuration());
                    System.out.print(cdur);
                }
                if (!isPaused) {
                    surface.postDelayed(onEverySecond, 1000);
                }
            }
        };
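
    One likely catch in the snippet above: android:text="@string/cpos" bakes in a compile-time resource, so updating the cpos string variable never touches the screen. The TextView has to be updated at runtime with setText(), on the UI thread. A minimal sketch of that, assuming the player field and the @+id/player TextView from the question:

        import android.media.MediaPlayer;
        import android.os.Handler;
        import android.widget.TextView;

        // In the activity's onCreate(): positionText = (TextView) findViewById(R.id.player);
        private TextView positionText;
        private MediaPlayer player;
        private final Handler uiHandler = new Handler();

        private final Runnable updatePosition = new Runnable() {
            public void run() {
                if (player != null && positionText != null) {
                    int posMs = player.getCurrentPosition();   // milliseconds
                    int durMs = player.getDuration();
                    positionText.setText((posMs / 1000) + "s / " + (durMs / 1000) + "s");
                }
                uiHandler.postDelayed(this, 1000);  // re-run once a second
            }
        };

        // Kick it off once playback is ready, e.g. in onPrepared():
        //     uiHandler.post(updatePosition);
        // and stop it with uiHandler.removeCallbacks(updatePosition) in onPause().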

    Read the article

  • JWPlayer plays videos at 3x speed on the first run, then works fine in subsequent runs on the same page

    - by Josiah Kiehl
    Go here: http://nano.materials.drexel.edu/research/videolibrary For some bizarre reason, the videos will play at 3x speed on the first run through, but then will play at normal speed each subsequent play. This doesn't happen all the time, and it's not always the same video(s) that do it. I'm utterly baffled. I've reconverted the videos from m4p to flv (using BitComet's converter) several times, double checking the settings each time through with no change to the behavior. Anyone have a clue what's going on?

    Read the article

  • Android PCM Bytes

    - by Pintac
    Hi, I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic, and that is working nicely. Now I need to convert the PCM bytes into decibels. I have a formula that converts sound pressure in Pa into dB: dB = 20 * log10(Pa / Pa_ref). So the question is: what do the bytes I am getting from the AudioRecord buffer actually represent? Amplitude, sound pressure in pascals, or something else? I tried putting the values into the formula, but it comes back with a very high dB figure, so I do not think that is right. Thanks.
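
    For a 16-bit AudioRecord stream, the buffer holds raw linear PCM amplitudes on a full-scale range of -32768..32767: dimensionless sample values, not pascals, which is why plugging them straight into an SPL formula gives implausible numbers. Without microphone calibration you can only compute dB relative to full scale (dBFS). A short sketch:

        // Convert a buffer of 16-bit PCM samples (as read from AudioRecord
        // with ENCODING_PCM_16BIT) into dB relative to full scale (dBFS).
        public static double dbFullScale(short[] samples, int count) {
            if (count <= 0) return Double.NEGATIVE_INFINITY;
            double sumSquares = 0.0;
            for (int i = 0; i < count; i++) {
                double s = samples[i] / 32768.0;   // normalize to [-1, 1)
                sumSquares += s * s;
            }
            double rms = Math.sqrt(sumSquares / count);
            return 20.0 * Math.log10(rms);         // 0 dBFS = full scale
        }

    Mapping dBFS to actual dB SPL then requires a fixed calibration offset measured against a reference source on the particular device.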

    Read the article

  • Open Source sound engine

    - by Steph Thirion
    When I started using SoundEngine (from CrashLanding and TouchFighter), I had read about a few people recommending not to use it, for it was, according to them, not stable enough. Still it was the only solution I knew of to play sounds with pitch and position control without learning C++ and OpenAL, so I ignored the warnings and went on with it. But now I'm starting to worry. The 2.2 SDK introduced AVFoundation. Using both SoundEngine from CrashLanding (for sounds) and AVAudioPlayer (for music), I found out SoundEngine behaves strangely when the only existing AVAudioPlayer is released (all sounds stop until a new AVAudioPlayer is initiated). Around the same time as the 2.2 SDK came out, the CrashLanding sample code was mysteriously removed from the ADC site. I'm worried there are more bad surprises to come. My question is, is anyone aware of an Open Source alternative to SoundEngine? Maybe even a C++ library that uses OpenAL?

    Read the article

  • Why do calls to waveOutGetPosition hang?

    - by MusiGenesis
    I'm using the winmm.dll API method waveOutGetPosition to get the current position of the playback of a WAV file. Sometimes this works as expected for me, but eventually one of the calls never returns and my application locks up. I found this thread with a few users who have experienced the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/c6a1e80e-4a18-47e7-af11-56a89f638ad7 but no solution. Has anyone run into this problem before?

    Read the article

  • Can the Python wave module accept a StringIO object?

    - by user368005
    I'm trying to use the wave module to read WAV files in Python. What's not typical about my application is that I'm NOT using a file or a filename to read the WAV file; instead I have the WAV file in a buffer. Here's what I'm doing:

        import StringIO
        buffer = StringIO.StringIO()
        buffer.output(wav_buffer)
        file = wave.open(buffer, 'r')

    But I'm getting an EOFError when I run it:

        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 493, in open
            return Wave_read(f)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 163, in __init__
            self.initfp(f)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 128, in initfp
            self._file = Chunk(file, bigendian = 0)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/chunk.py", line 63, in __init__
            raise EOFError

    I know the StringIO stuff works for the creation of a WAV file; I tried the following and it works:

        import StringIO
        buffer = StringIO.StringIO()
        audio_out = wave.open(buffer, 'w')
        audio_out.setframerate(m.getRate())
        audio_out.setsampwidth(2)
        audio_out.setcomptype('NONE', 'not compressed')
        audio_out.setnchannels(1)
        audio_out.writeframes(raw_audio)
        audio_out.close()
        buffer.flush()

        # these lines do not work...
        # buffer.output(wav_buffer)
        # file = wave.open(buffer, 'r')

        # this file plays out fine in VLC
        file = open(FILE_NAME + ".wav", 'w')
        file.write(buffer.getvalue())
        file.close()
        buffer.close()

    Read the article

  • AVAudioPlayer currentTime problem

    - by StrAbZ
    Hi, I'm trying to use AVAudioPlayer with a slider in order to seek into a track (nothing complicated), but I see some weird behavior: for some values of currentTime (between 0 and the track duration), the player stops playing the track and goes into audioPlayerDidFinishPlaying:successfully: with successfully set to NO, yet it does not go into audioPlayerDecodeErrorDidOccur:error:. It's as if it can't read the time I'm giving it. For example, the duration of the track is 295.784424 seconds and I set currentTime to 55.0s (i.e. 54.963878, 54.963900, 54.987755, etc., when printed as %f). The "crashes" always happen when the currentTime is 54.987755... and I really don't understand why. So if you have any ideas... ^^

    Read the article

  • Simple sound effect loop using AudioToolbox

    - by Typeoneerror
    I've created a few sounds for use in my game, and I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle;
        mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);

        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism which allows you to shake the device to activate some functionality. I've got the shaking code done, so I get a "shaking started" and a "shaking ended" message in my game. What I need is to start playing shake.wav when shaking starts and loop it until shaking stops. Is there a way to do this with AudioToolbox/AudioServices? How could I do it if not?

    Read the article

  • Loading a page into memory in Rails

    - by titaniumdecoy
    My Rails app produces XML when I load /reports/generate_report.xml. On a separate page, I want to read this XML into a variable and save it to the database. How can I do this? Can I somehow stream the response from the /reports/generate_report.xml URI into a variable? Or is there a better way to do it, since the XML is produced by the same web app?

    Read the article

  • Webcam video stream processing

    - by vikramtheone
    Hi guys, I'm working on an image processing project whose final goal is to detect features in a real-time video and then track those features. I will be working with an embedded processor platform, Freescale's i.MX515; it is a 32-bit media processor running Ubuntu 9.04. Right now I'm working on the algorithms to locate the features, so I'm using still images. When I'm satisfied with the results I will have to start using a video stream, and I don't want to use a video file as the source, because then I would have to worry about video decoders. Instead I would like to plug a USB webcam into the embedded platform (it has USB ports), take the frames directly as they are captured, and send them to my application. I will take care to buy a webcam that is supported on Linux (device driver). But my question is: will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames to a particular memory location whose pointer I can simply pass to my application? (Confused!!!) I hope I could convey my doubts. Can anyone guide me through the steps I have to take to achieve all of this easily? Do you foresee any impossibility here? Help!!! Regards, Vikram

    Read the article

  • How to play an extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];
        int p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_left[p] = header[c];
            p++;
        }
        int q = 0;
        for (s = startByte; s < length; s = s + 3)
        {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }
        p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_right[p] = header[c];
            p++;
        }

    This part reads the header and data for both the right and left channels and saves them to the arrays sLeft[] and sRight[]. This part is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    I added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e)
        {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?

    Read the article

  • How to record sound from a microphone in VB6?

    - by Clay Nichols
    We've been recording sound for over a decade using what seems like a very clunky method: Winmm.dll and mciSendString. I've read that this doesn't set the recording quality value correctly (not sure if that article was ever true, or is still true). I was wondering if there is any better way to record sound.

    Read the article

  • Stream writing lags my GUI

    - by blez
    I have a thread that dequeues data from a queue and writes it to another application's STDIN. I'm using a Stream, but with .Write and even .BeginWrite, when I send 1 MB chunks to the second app, my GUI gets laggy for about a second. Why?

    Read the article

  • Rapid calls to fread crash the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2) {
            params.leftChannelData = new unsigned char[params.dataSize/2];
            params.rightChannelData = new unsigned char[params.dataSize/2];
            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample/8);
            for (int i = 0; i < params.dataSize; i += stride) {
                std::cout << "i = " << i << " ";
                if (isLeft) {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                } else {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        } else {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
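
    Two things in the loop stand out: fread(..., file + i) performs pointer arithmetic on a FILE*, which is not valid (fread advances the stream position by itself), and isLeft != isLeft compares rather than toggles (isLeft = !isLeft). The asker's one-pass idea is the usual fix: read the whole data chunk once, then split the interleaved frames. A language-agnostic sketch of that split, written in Java here purely for illustration:

        // Split an interleaved 16-bit stereo buffer [L0 L1 R0 R1 ...] into
        // separate left/right byte arrays, each data.length / 2 bytes long.
        public static void deinterleave16(byte[] data, byte[] left, byte[] right) {
            int out = 0;
            for (int in = 0; in + 3 < data.length; in += 4, out += 2) {
                left[out]      = data[in];
                left[out + 1]  = data[in + 1];
                right[out]     = data[in + 2];
                right[out + 1] = data[in + 3];
            }
        }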

    Read the article

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then under certain conditions can begin or end writing out a file from that already-instantiated AudioQueue. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen()
        {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this, NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription, &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            }
            catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            }
            catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue: "from now until the stop signal, start writing this queue out to tmp as a file". I understand how to tell an AudioQueue to write out as a file at the time it is created, how to set file formats, etc., but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks.

    Read the article

  • What exactly does raw microphone data represent?

    - by esperantist
    I'm using PyAudio, a PortAudio wrapper for Python, and I'm getting data from a microphone: a continuous stream of bytes divided into chunks (of a size determined by me). I've tried to plot the signal, assuming the bytes represent the current signal amplitude, but I get an interesting image that I can't easily describe. ^^ It seems to be composed of two waves, one shifted from the other. What exactly do the particular bytes represent, and how does this change when I'm recording only one channel instead of two? Any explanations, suggestions, code snippets, anything is very welcome! (I'm new at this.) Thanks!
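
    With PortAudio's common 16-bit format, each chunk is a sequence of little-endian two-byte signed samples, interleaved frame by frame across channels (L R L R ... for stereo, one sample per frame for mono). Plotting individual bytes as if they were samples splits every sample into a low byte and a high byte, which is a plausible source of the "two shifted waves" look. A sketch of the reinterpretation (in Java for illustration; the same idea applies to Python's struct or audioop):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        // Reinterpret a chunk of interleaved 16-bit little-endian PCM bytes
        // as signed samples; for stereo, even indices are the left channel.
        public static short[] bytesToSamples(byte[] chunk) {
            short[] samples = new short[chunk.length / 2];
            ByteBuffer.wrap(chunk)
                      .order(ByteOrder.LITTLE_ENDIAN)
                      .asShortBuffer()
                      .get(samples);
            return samples;
        }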

    Read the article

  • Java - Download a PDF file from a web server

    - by Augusto Picciani
    I need to download a PDF file from a web server to my PC and save it locally. I used HttpClient to connect to the web server and get the content body:

        HttpEntity entity = response.getEntity();
        InputStream in = entity.getContent();
        String stream = CharStreams.toString(new InputStreamReader(in));
        int size = stream.length();
        System.out.println("stringa html page LENGTH:" + stream.length());
        System.out.println(stream);
        SaveToFile(stream);

    Then I save the content to a file:

        // check CRLF (I don't know if I need to do this)
        String[] fix = stream.split("\r\n");
        File file = new File("C:\\Users\\augusto\\Desktop\\progetti web\\test\\test2.pdf");
        PrintWriter out = new PrintWriter(new FileWriter(file));
        for (int i = 0; i < fix.length; i++) {
            out.print(fix[i]);
            out.print("\n");
        }
        out.close();

    I also tried to save the String content to a file directly:

        OutputStream out = new FileOutputStream("pathPdfFile");
        out.write(stream.getBytes());
        out.close();

    But the result is always the same: I can open the PDF file, but I see only white pages. Is the mistake somewhere around the stream/endstream charset encoding? Does the PDF content between stream and endstream need to be manipulated in some other way?
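
    The white pages are consistent with one specific cause: routing the PDF bytes through String (CharStreams.toString, PrintWriter, split("\r\n")) decodes and re-encodes them as text, which corrupts binary content. Copying the entity's InputStream to the file byte for byte avoids that. A sketch against the HttpEntity from the question:

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Copy the HTTP entity's stream byte for byte; never route binary
        // content through String/Reader/Writer classes.
        static void savePdf(InputStream in, String path) throws IOException {
            OutputStream out = new FileOutputStream(path);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);   // raw bytes, no charset involved
            }
            out.close();
            in.close();
        }

        // Usage: savePdf(entity.getContent(), "C:\\Users\\augusto\\Desktop\\progetti web\\test\\test2.pdf");

    Once the copy is binary-safe, no CRLF handling or stream/endstream manipulation is needed.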

    Read the article
