Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

Page 88 of 213

  • Is there any live video stream editing open source project for my needs?

    - by Ole Jak
    I need an open source project with an API capable of reading a live video stream (the codec can be anything the API can read; I can provide practically any live-streamable format), giving me the latest image data for processing (such as brightness/contrast adjustment or more exotic filtering), accepting the data I've changed, and streaming the result out to some http://localhost:port/ address in some format. It needs to be easily accessible from C# (or, even better, written in C#).

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's the proper name). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cutoff frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to map the values returned by the linear function directly onto the range of frequencies, as in cf = freqRange * lf(x). Although it worked, it sounded as if the sweep ran much faster while moving through the low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has to do with human hearing perceiving changes in frequency non-linearly. My next attempt was to move the filter's cutoff frequency logarithmically. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
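
    One common remedy, offered as a sketch rather than the definitive answer: interpolate exponentially between the sweep's endpoints, so that each step multiplies the cutoff by a constant ratio; equal frequency ratios sound like roughly equal pitch intervals. A mel or ERB scale gets even closer to perceived uniformity. The names below (cutoff, fMin, fMax) are illustrative, not from the post:

        // Map a 0..1 control value to a cutoff frequency linearly in
        // log-frequency: cf = fMin * (fMax / fMin)^lf, i.e. equal ratios
        // per unit of lf, heard as a steady sweep.
        public final class FilterSweep {
            static double cutoff(double lf, double fMin, double fMax) {
                return fMin * Math.pow(fMax / fMin, lf);
            }

            public static void main(String[] args) {
                // Ten equal perceptual steps from 100 Hz to 10 kHz.
                for (int i = 0; i <= 10; i++) {
                    double lf = i / 10.0; // stand-in for the user-provided function
                    System.out.printf("lf=%.1f -> %.0f Hz%n",
                            lf, cutoff(lf, 100.0, 10000.0));
                }
            }
        }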

  • Advice on embedding video content via CMS - what format?

    - by ted776
    If I set up the facility for people to embed video content on their site via their CMS (using the TinyMCE editor), is there any reliable cross-platform video format that should be used? From what I can find online, the only reliable way to embed and stream video is FLV. Other formats seem to have caveats, e.g. codecs or QuickTime updates being required. Ideally I'd like to avoid this type of situation. If FLV is the preferred option, that involves asking people to encode their video content to FLV before uploading, so there is an extra step required here (unless I can set up the encoding on the back end, but this might take a while to process depending on the size of the video). Does anyone have any additional advice on this? The type of video I'd imagine people will be working with is raw camera footage, so I need to figure out the easiest and most reliable way of getting the footage onto a web page.

  • Java vs Flash for webcam access

    - by Alfredo Palhares
    I am going to build a video chat website, but coming from PHP and Python for the web, I have no experience with video streaming. What do you recommend, Java or Flash? Which is more flexible? I am even thinking of making a C++ server application for stream control with a PHP front end, since it is going to be a high-traffic website and performance is a must. Can you point me in some direction? Any documentation? Frameworks?

  • how to do p2p with flash? [closed]

    - by Female Gay
    Possible duplicates: "how to do p2p with flash?" and "Does Flash10 + p2p really work?"

  • Play Shoutcast MP3 radio stream with Python?

    - by Zachary Brown
    I have managed to create an online radio station using SHOUTcast and SAM Broadcaster. Now I want to build my own player for that radio station, and I am not sure where to begin; I have googled, but no luck. I am using Python 2.6 on Microsoft Windows. I have managed to capture the stream and save it as an MP3 on the hard disk, but I am not sure what to do with it next. I tried playing the file back, but it always raises errors. This is the code I have so far:

        import urllib

        target = open("broadcast.mp3", "wb")  # binary write mode, not the default read mode
        conn = urllib.urlopen("http://78.159.104.175:80")
        while True:
            target.write(conn.read(5200))     # was 'con.read(5200)', a NameError

    Any help would be greatly appreciated!

  • getting started with libmms

    - by Vnuce
    Actually, the title explains it all: I want to read a stream with libmms, but have no idea where to start. I've searched the web for documentation, a tutorial, or anything else, with no luck. Any help using this library would be much appreciated.

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are tens of thousands of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    push(key, timestamp, event) - pushes event onto the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost-sorted order.

    tail(key, timestamp) - gets all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with the pushes for the same key.

    This has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
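
    One way to picture the access pattern, as a purely in-memory sketch (persistence is out of scope here; the same layout maps onto any ordered key-value store that can range-scan a composite (key, timestamp) key, which is essentially what tail() needs). All class and method names are illustrative:

        import java.util.*;
        import java.util.concurrent.*;

        // Hypothetical in-memory model: one ordered timestamp -> events map
        // per key, so tail() is a range scan from the requested timestamp.
        public class EventStreams<E> {
            private final ConcurrentMap<String, ConcurrentSkipListMap<Long, List<E>>>
                    streams = new ConcurrentHashMap<>();

            public void push(String key, long timestamp, E event) {
                streams.computeIfAbsent(key, k -> new ConcurrentSkipListMap<>())
                       .computeIfAbsent(timestamp, t -> new CopyOnWriteArrayList<>())
                       .add(event);
            }

            // All events for `key` at or after `timestamp`, in timestamp order.
            public List<E> tail(String key, long timestamp) {
                List<E> out = new ArrayList<>();
                ConcurrentSkipListMap<Long, List<E>> stream = streams.get(key);
                if (stream != null) {
                    for (List<E> batch : stream.tailMap(timestamp, true).values()) {
                        out.addAll(batch);
                    }
                }
                return out;
            }
        }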

  • Corba sequence<octet> a lot slower than using a socket

    - by Totonga
    I have a CORBA-related question. In my Java app I use typedef sequence<octet> Data; and have been experimenting with this Data type. If I read the CORBA specification correctly, sequence<octet> will be converted either to xs:base64Binary or to xs:hexBinary. It should be an opaque type, and so it should not need any marshalling. I tried different IDL styles: void Get(out Data d); and Data Get(); but what I see is that moving the data over CORBA is a lot slower than using a socket directly. I am fine with a little overhead, but it looks to me as if the data is still being marshalled. Do I need to configure my ORB somehow to suppress the marshalling, or did I miss something?

  • byte[] to wav file

    - by John
    It would be great if you could tell me how I could save a byte[] to a .wav file. Sometimes I need to set a different sample rate, number of bits, and channel count. Thanks for your help.
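
    For uncompressed PCM this comes down to writing a 44-byte RIFF/WAVE header in front of the raw bytes. A minimal sketch, assuming the byte[] already holds little-endian PCM samples and that plain java.io is acceptable (class and parameter names are illustrative):

        import java.io.*;

        // WAV headers are little-endian; DataOutputStream writes big-endian,
        // so each numeric field is byte-reversed before writing.
        public class WavWriter {
            public static void write(File out, byte[] data,
                                     int sampleRate, int bitsPerSample, int channels)
                    throws IOException {
                int byteRate = sampleRate * channels * bitsPerSample / 8;
                int blockAlign = channels * bitsPerSample / 8;
                try (DataOutputStream os = new DataOutputStream(
                        new BufferedOutputStream(new FileOutputStream(out)))) {
                    os.writeBytes("RIFF");
                    os.writeInt(Integer.reverseBytes(36 + data.length)); // RIFF chunk size
                    os.writeBytes("WAVE");
                    os.writeBytes("fmt ");
                    os.writeInt(Integer.reverseBytes(16));               // fmt chunk size
                    os.writeShort(Short.reverseBytes((short) 1));        // 1 = PCM
                    os.writeShort(Short.reverseBytes((short) channels));
                    os.writeInt(Integer.reverseBytes(sampleRate));
                    os.writeInt(Integer.reverseBytes(byteRate));
                    os.writeShort(Short.reverseBytes((short) blockAlign));
                    os.writeShort(Short.reverseBytes((short) bitsPerSample));
                    os.writeBytes("data");
                    os.writeInt(Integer.reverseBytes(data.length));      // data chunk size
                    os.write(data);
                }
            }
        }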

  • Read media stream from servlet in a webpage?

    - by khue
    Hi, I have a servlet that construct response to a media file request by reading the file from server: File uploadFile = new File("C:\TEMP\movie.mov"); FileInputStream in = new FileInputStream(uploadFile); Then write that stream to the response stream. My question is how do I play the media file in the webpage using or tag to read the media stream from the response. Thank you very much. Regards K.

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but is a file on the storage card. Even though neither the load nor the play function of SoundPool throws any exception or prints anything to the console, the sound is not played. The same code, but passing the file path string to SoundPool.load, works perfectly. This is how I tried the loading (start is 0 and length is the length of the file in milliseconds):

        FileInputStream fileIS = new FileInputStream(new File(mFile));
        mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    If I use this instead, it works:

        mStreamID = mSoundPool.load(mFile, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    Any ideas, anyone? Thanks
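
    One hypothesis worth testing: in that overload of load(), offset and length are byte positions within the file rather than milliseconds, so passing a duration in milliseconds asks SoundPool to decode a truncated slice. A sketch loading the entire file by its byte length:

        // Hypothesis: offset/length are byte offsets into the file.
        File f = new File(mFile);
        FileInputStream fileIS = new FileInputStream(f);
        int soundID = mSoundPool.load(fileIS.getFD(), 0, f.length(), 1);
        // load() decodes asynchronously; allow it to finish (or attach an
        // OnLoadCompleteListener on API levels that have it) before play().
        int playingStreamID = mSoundPool.play(soundID, 1f, 1f, 1, 0, 1f);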

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example, using some code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE; i++) { // zero the whole buffer
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ... // process the data
        }

    This works perfectly on the HTC Dream and the HTC Magic, without any log warnings or errors, but causes problems on the emulators and on a Nexus One. On the Nexus One it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2) I get weird errors from the AudioFlinger and buffer overflows from the AudioRecord thread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
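
    A hedged guess at the cause, with a sketch: the final constructor argument is the size in bytes of AudioRecord's internal buffer, and the legal minimum varies per device, so a hard-coded 8000 may be too small everywhere except on those two HTC models. Querying the minimum and checking the recorder's state avoids reading from a half-initialized instance:

        // Size the internal buffer from the device's reported minimum
        // instead of a hard-coded constant, and verify initialization.
        int minBuf = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
        AudioRecord recordInstance = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                FREQUENCY, CHANNEL, ENCODING,
                Math.max(minBuf, 8000));
        if (recordInstance.getState() != AudioRecord.STATE_INITIALIZED) {
            // Initialization failed; report it rather than reading garbage.
        }

    Reading in chunks considerably larger than 50 samples should also cut per-call overhead, which may be behind the emulator slowdowns.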

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object-oriented model to wrap OpenAL and have a little problem understanding devices, buffers and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I want to change its listener, if I got that right.) So far, so good. What irritates me, though, is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work, then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)
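
    My reading of the spec, offered as an assumption rather than a verdict: alGenBuffers() operates on the device that owns the context current at the time of the call; the buffers then belong to that device and are shared by all of its contexts, but are never visible to contexts on other devices, and closing the device invalidates them. A sketch using the LWJGL 2 Java bindings (binding-specific names; other bindings differ):

        import java.nio.IntBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.openal.*;

        // Assumption-laden sketch: buffers belong to the device of the
        // context that is current when alGenBuffers runs.
        public class AlScopes {
            public static void main(String[] args) {
                ALCdevice dev = ALC10.alcOpenDevice(null);           // default device
                ALCcontext ctxA = ALC10.alcCreateContext(dev, null);
                ALCcontext ctxB = ALC10.alcCreateContext(dev, null);

                ALC10.alcMakeContextCurrent(ctxA);
                IntBuffer ids = BufferUtils.createIntBuffer(1);
                AL10.alGenBuffers(ids);          // created on dev via the current context
                int buffer = ids.get(0);

                ALC10.alcMakeContextCurrent(ctxB);
                // `buffer` is usable here too: contexts on the same device
                // share buffers; a context on another device never sees it.

                ALC10.alcDestroyContext(ctxA);
                ALC10.alcDestroyContext(ctxB);
                ALC10.alcCloseDevice(dev);       // invalidates `buffer`
            }
        }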
