Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

Page 86/213

  • If I want to play the same sound 10 times per second, must I have 10 copies of that sound in memory?

    - by mystify
    I have a sound that needs to be played 10 times per second. The sound itself is 1 second long, so up to 10 instances overlap at any given moment. As far as I understand the Finch sound library, I would need 10 separate instances of the sound in place so that I can play them at almost the same time. With just one instance, the sound stops and restarts from the beginning on every trigger instead of overlapping with itself. How can I do that?
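
    A note on the underlying idea: what you need is multiple playback voices, not multiple copies of the sample data. A minimal sketch of that pattern in Java's javax.sound.sampled (not Finch; the iOS specifics will differ, and "tick.wav" is a placeholder):

        import javax.sound.sampled.*;
        import java.io.File;

        public class OverlapDemo {
            public static void main(String[] args) throws Exception {
                // Load the sample data once...
                AudioInputStream ais = AudioSystem.getAudioInputStream(new File("tick.wav"));
                AudioFormat fmt = ais.getFormat();
                byte[] data = ais.readAllBytes(); // Java 9+

                // ...then open N independent playback voices over the same bytes.
                Clip[] voices = new Clip[10];
                for (int i = 0; i < voices.length; i++) {
                    voices[i] = AudioSystem.getClip();
                    voices[i].open(fmt, data, 0, data.length);
                }

                // Round-robin: each trigger rewinds the least recently used voice,
                // so up to 10 instances of the sound overlap.
                for (int n = 0; n < 30; n++) {
                    Clip voice = voices[n % voices.length];
                    voice.stop();
                    voice.setFramePosition(0);
                    voice.start();
                    Thread.sleep(100); // 10 triggers per second
                }
            }
        }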

    Read the article

  • Pitch detection and change in Java

    - by omegas27
    Hello, I'm French, so I'm sorry if you have trouble understanding some of my sentences. Anyway, I saw in some topics that pitch can be detected using the Fourier transform, but I didn't really understand how to implement it. Moreover, I couldn't find out how to change the pitch of a WAV file and, if possible, an MP3 file. I am listening to the music using JavaSound for WAV and JLayer for MP3. Thanks
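
    For the detection half, the idea can be illustrated without a full FFT: a naive time-domain autocorrelation finds the same fundamental. A sketch (the 50-1000 Hz search range is an assumption):

        // Estimate the fundamental frequency of a mono float[] buffer by
        // finding the lag whose autocorrelation is strongest.
        public static float detectPitch(float[] samples, float sampleRate) {
            int minLag = (int) (sampleRate / 1000); // search no higher than 1000 Hz
            int maxLag = (int) (sampleRate / 50);   // ...and no lower than 50 Hz
            int bestLag = -1;
            double bestScore = 0;
            for (int lag = minLag; lag <= maxLag; lag++) {
                double score = 0;
                for (int i = 0; i + lag < samples.length; i++) {
                    score += samples[i] * samples[i + lag];
                }
                if (score > bestScore) {
                    bestScore = score;
                    bestLag = lag;
                }
            }
            return bestLag > 0 ? sampleRate / bestLag : -1; // Hz, or -1 if nothing found
        }

    Changing the pitch is a separate problem: resampling shifts pitch and speed together, while keeping the duration constant requires a phase vocoder or similar time-stretching technique.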

    Read the article

  • Streaming a non-PCM WAV file to a Silverlight application

    - by Satumba
    Hi, I would like to let users play back recorded WAV files stored on a server in a Silverlight client application. I saw that there is a way to play a WAV file in Silverlight (here), but when I tried to implement it, I got an error playing the file because it is not in PCM format but encoded. The files I'm trying to play are encoded with a special encoder, so I thought the only way is to decode the WAV file on the server and stream it back to the client. The constraint is that the decoding has to happen in real time, because it is not reasonable to convert all the existing WAV files up front. Is this possible? Which streaming server could I use? (Could Windows Media Services help here?) Does anybody have experience with such a scenario? I'd appreciate your help.

    Read the article

  • Wireshark doesn't recognise RTMP streams

    - by Andrew
    Hello! I found a few samples on the web of tracking RTMP (Real Time Messaging Protocol) with Wireshark, but it doesn't work for me. All RTMPT packets are rendered as basic TCP packets, like this:

        149 14.324999 85.115.xxx.xxx 192.168.1.20 TCP macromedia-fcs > 54557 [ACK] Seq=1 Ack=1452 Win=69 Len=0

    I'm using Wireshark 1.2.8 with all protocols installed, on Windows Vista. What can I do to fix it? Thanks!

    Read the article

  • Dimdim Change name

    - by islam
    I built Dimdim v4.5 on my PC and it works fine. Each time I want to start a meeting I type my PC's IP address, like this: http://<my-ip-address>/dimdim. I want to change the word dimdim to something else, like: http://<my-ip-address>/meeting. Regards

    Read the article

  • Capturing video from an IP camera

    - by Ruby
    I am trying to capture video from an IP camera into my application, and it throws this exception:

        com.sun.image.codec.jpeg.ImageFormatException: Not a JPEG file: starts with 0x0d 0x0a
            at sun.awt.image.codec.JPEGImageDecoderImpl.readJPEGStream(Native Method)
            at sun.awt.image.codec.JPEGImageDecoderImpl.decodeAsBufferedImage(Unknown Source)
            at test.AxisCamera1.readJPG(AxisCamera1.java:130)
            at test.AxisCamera1.readMJPGStream(AxisCamera1.java:121)
            at test.AxisCamera1.readStream(AxisCamera1.java:100)
            at test.AxisCamera1.run(AxisCamera1.java:171)
            at java.lang.Thread.run(Unknown Source)

    The exception is thrown at image = decoder.decodeAsBufferedImage(). Here is the code I am trying:

        private static final long serialVersionUID = 1L;
        public boolean useMJPGStream = true;
        public String jpgURL = "http://ip here/video.cgi/jpg/image.cgi?resolution=640x480";
        public String mjpgURL = "http://ip here /video.cgi/mjpg/video.cgi?resolution=640x480";
        DataInputStream dis;
        private BufferedImage image = null;
        public Dimension imageSize = null;
        public boolean connected = false;
        private boolean initCompleted = false;
        HttpURLConnection huc = null;
        Component parent;

        /** Creates a new instance of AxisCamera */
        public AxisCamera1(Component parent_) {
            parent = parent_;
        }

        public void connect() {
            try {
                URL u = new URL(useMJPGStream ? mjpgURL : jpgURL);
                huc = (HttpURLConnection) u.openConnection();
                // System.out.println(huc.getContentType());
                InputStream is = huc.getInputStream();
                connected = true;
                BufferedInputStream bis = new BufferedInputStream(is);
                dis = new DataInputStream(bis);
                if (!initCompleted)
                    initDisplay();
            } catch (IOException e) {
                // in case no connection exists, wait and try again,
                // instead of printing the error
                try {
                    huc.disconnect();
                    Thread.sleep(60);
                } catch (InterruptedException ie) {
                    huc.disconnect();
                    connect();
                }
                connect();
            } catch (Exception e) {
            }
        }

        public void initDisplay() { // set up the display
            if (useMJPGStream)
                readMJPGStream();
            else {
                readJPG();
                disconnect();
            }
            imageSize = new Dimension(image.getWidth(this), image.getHeight(this));
            setPreferredSize(imageSize);
            parent.setSize(imageSize);
            parent.validate();
            initCompleted = true;
        }

        public void disconnect() {
            try {
                if (connected) {
                    dis.close();
                    connected = false;
                }
            } catch (Exception e) {
            }
        }

        public void paint(Graphics g) { // draw the current frame on the panel
            if (image != null)
                g.drawImage(image, 0, 0, this);
        }

        public void readStream() { // continuously read the stream
            try {
                if (useMJPGStream) {
                    while (true) {
                        readMJPGStream();
                        parent.repaint();
                    }
                } else {
                    while (true) {
                        connect();
                        readJPG();
                        parent.repaint();
                        disconnect();
                    }
                }
            } catch (Exception e) {
            }
        }

        public void readMJPGStream() { // strip the MJPG encapsulation
            readLine(3, dis); // discard the first 3 lines
            readJPG();
            readLine(2, dis); // discard the last two lines
        }

        public void readJPG() { // read the embedded JPEG image
            try {
                JPEGImageDecoder decoder = JPEGCodec.createJPEGDecoder(dis);
                image = decoder.decodeAsBufferedImage();
            } catch (Exception e) {
                e.printStackTrace();
                disconnect();
            }
        }

        public void readLine(int n, DataInputStream dis) { // strip out header lines
            for (int i = 0; i < n; i++) {
                readLine(dis);
            }
        }

        public void readLine(DataInputStream dis) {
            try {
                boolean end = false;
                String lineEnd = "\n"; // assumes the end of the line is marked with this
                byte[] lineEndBytes = lineEnd.getBytes();
                byte[] byteBuf = new byte[lineEndBytes.length];
                while (!end) {
                    dis.read(byteBuf, 0, lineEndBytes.length);
                    String t = new String(byteBuf);
                    System.out.print(t); // uncomment to see what the lines actually look like
                    if (t.equals(lineEnd))
                        end = true;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void run() {
            System.out.println("in Run...................");
            connect();
            readStream();
        }

        @SuppressWarnings("deprecation")
        public static void main(String[] args) {
            JFrame jframe = new JFrame();
            jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            AxisCamera1 axPanel = new AxisCamera1(jframe);
            new Thread(axPanel).start();
            jframe.getContentPane().add(axPanel);
            jframe.pack();
            jframe.show();
        }
        }

    Any suggestions as to what I am doing wrong here?

    Read the article

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine-wave bursts interspersed with silence periods (the code), which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms. It works quite well in Managed DirectX: to get a precisely timed tone I create 1 sec of sine wave in a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer and play. I've tried System.Media.SoundPlayer. It's a loser, because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long and variable with CPU load, and it takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, again seeking forward to leave the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that it spins off a separate thread for every tone burst Play(). Eventually (5 min or so) it just stops working, and you can see thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream and call Play() over and over, so I don't think it's a problem with orphaned buffers or the like piling up until it chokes. I'm out of patience on this one, so I'm asking in the hope that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
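
    Setting the C# specifics aside, the sample-counting approach itself can be sketched in Java: a blocking write of exactly N frames gives durations that are accurate to the sample, independent of CPU load (the format and frequency values here are illustrative):

        import javax.sound.sampled.*;

        public class MorseTone {
            static final float RATE = 44100f;
            static final AudioFormat FMT = new AudioFormat(RATE, 16, 1, true, false);

            // Write exactly durationMs worth of frames: sine for a tone, zeros for a space.
            static void emit(SourceDataLine line, int durationMs, double freq) {
                int frames = (int) (RATE * durationMs / 1000);
                byte[] buf = new byte[frames * 2]; // 16-bit mono, little-endian
                for (int i = 0; freq > 0 && i < frames; i++) {
                    short s = (short) (Math.sin(2 * Math.PI * freq * i / RATE) * 12000);
                    buf[2 * i] = (byte) s;
                    buf[2 * i + 1] = (byte) (s >> 8);
                }
                line.write(buf, 0, buf.length); // blocks until the device accepts the frames
            }

            public static void main(String[] args) throws Exception {
                SourceDataLine line = AudioSystem.getSourceDataLine(FMT);
                line.open(FMT);
                line.start();
                emit(line, 40, 600);  // 40 ms dit at 600 Hz
                emit(line, 40, 0);    // 40 ms inter-element space
                emit(line, 120, 600); // dah
                line.drain();
                line.close();
            }
        }

    Because the silences are written as samples rather than produced with Sleep(), the timing no longer depends on the scheduler at all.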

    Read the article

  • Playing an arbitrary tone with Android.

    - by fiXedd
    Is there any way to make Android emit a sound of arbitrary frequency (meaning, I don't want to have pre-recorded sound files)? I've looked around and ToneGenerator was the only thing I was able to find that was even close, but it seems to only be capable of outputting the standard DTMF tones. Any ideas?
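
    One commonly suggested approach is to synthesize the PCM yourself and hand it to AudioTrack. A rough fragment (the sample rate and amplitude are arbitrary choices):

        // Play one second of a pure sine tone at an arbitrary frequency.
        int sampleRate = 44100;
        double freqHz = 440.0;
        short[] samples = new short[sampleRate]; // 1 second, mono
        for (int i = 0; i < samples.length; i++) {
            samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * i / sampleRate) * Short.MAX_VALUE);
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                samples.length * 2, AudioTrack.MODE_STATIC);
        track.write(samples, 0, samples.length);
        track.play();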

    Read the article

  • Playing Multiple sounds at the same time in Android

    - by Wrapper
    I am unable to use the following code to play multiple sounds/beeps simultaneously. In my OnClickListener I have added:

        public void onClick(View v) {
            mSoundManager.playSound(1);
            mSoundManager.playSound(2);
        }

    But this plays only one sound at a time: the sound with index 1 followed by the sound with index 2. How can I play at least two sounds simultaneously using this code whenever there is an onClick() event?

        public class SoundManager {
            private SoundPool mSoundPool;
            private HashMap<Integer, Integer> mSoundPoolMap;
            private AudioManager mAudioManager;
            private Context mContext;

            public SoundManager() {
            }

            public void initSounds(Context theContext) {
                mContext = theContext;
                mSoundPool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
                mSoundPoolMap = new HashMap<Integer, Integer>();
                mAudioManager = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
            }

            public void addSound(int Index, int SoundID) {
                // Note: as posted this used a hard-coded key, mSoundPoolMap.put(1, ...),
                // so every loaded sound overwrote entry 1; it should key on Index.
                mSoundPoolMap.put(Index, mSoundPool.load(mContext, SoundID, 1));
            }

            public void playSound(int index) {
                int streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
                mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 1, 0, 1f);
            }

            public void playLoopedSound(int index) {
                int streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
                mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 1, -1, 1f);
            }
        }

    Read the article

  • No mic activity with setLoopBack set to false - AS3

    - by Franky
    Trying to figure out why setLoopBack needs to be set to true for microphone activity to be detected. The problem is the echo feedback when using a MacBook with a built-in mic. If anyone has any ideas about this, let me know. Right now I'm experimenting with toggling gain depending on activity to simulate echo reduction. Not optimal, though. @lessfame

    Read the article

  • Stream (.NET) handling best-practices

    - by Jader Dias
    The question's title uses the word "Stream" because the question below is a concrete example of a more general doubt I have about streams. I have a problem that admits two solutions, and I want to know the best one: (1) I download a file, save it to disk (2 min), then read it and write the contents to the DB (+2 min); (2) I download a file and write the contents directly to the DB (3 min). If the write to the DB fails, I'll have to download the file again in the second case, but not in the first. Which is best? Which would you use?

    Read the article

  • Correct way to convert 16-bit PCM wave data to float

    - by fredley
    I have a wave file in 16-bit PCM form. I've got the raw data in a byte[] and a method for extracting samples, and I need them in float format, i.e. a float[], to do a Fourier transform. Here's my code; does this look right? I'm working on Android, so javax.sound.sampled etc. is not available.

        private static short getSample(byte[] buffer, int position) {
            // little-endian: the low byte comes first
            return (short) (((buffer[position + 1] & 0xff) << 8) | (buffer[position] & 0xff));
        }

        ...

        float[] samples = new float[samplesLength];
        // note: as posted, the loop bound was input.length/2, which together with
        // the i += 2 step converts only the first half of the samples
        for (int i = 0; i < input.length; i += 2) {
            samples[i / 2] = (float) getSample(input, i) / (float) Short.MAX_VALUE;
        }

    Read the article

  • SoundPlayer causing Memory Leaks?

    - by Nick Udell
    I'm writing a basic writing app in C# and I want the program to make typewriter sounds as you type. I've hooked the KeyPress event of my RichTextBox to a function that uses a SoundPlayer to play a short WAV file every time a key is pressed. However, I've noticed that after a while my computer slows to a crawl, and checking my processes, audiodg.exe was using 5 GIGABYTES of RAM. The code I'm using is as follows: I initialize the SoundPlayer as a global variable on program start with SoundPlayer sp = new SoundPlayer("typewriter.wav"); then on the KeyPress event I simply call sp.Play(); Does anybody know what's causing the heavy memory usage? The file is less than a second long, so it shouldn't be clogging things up too much.

    Read the article

  • Extracting note onset from MIDI

    - by Dolphin
    Hi. I need to extract musical features (note details: pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (two staves, treble and bass; the bass may also have chords). I'm using the jMusic API to extract these details from the MIDI file. My approach is to go through each score, into parts, then phrases, and finally notes, and extract the details. With this approach it reads all the treble notes first and then the bass notes, but chords are not captured (only a single note of each chord is taken), and I cannot identify from which point onwards the bass notes start. So I tried to get the note onsets (the start time of each note being played), since the start times of the first treble and bass notes of the piece should be the same, but I cannot extract the note onset using the jMusic API; it always returns 0.0. Is there any way I can identify the voice (treble or bass) of a note? And all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance.
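
    For reference, at the file level MIDI does not store an onset field per note: every NOTE_ON event carries a tick timestamp, the notes of a chord are simply several NOTE_ONs at the same tick, and treble and bass usually live on separate tracks or channels. A sketch using the JDK's javax.sound.midi (independent of jMusic; "piece.mid" is a placeholder) that dumps these raw values:

        import javax.sound.midi.*;
        import java.io.File;

        public class OnsetDump {
            public static void main(String[] args) throws Exception {
                Sequence seq = MidiSystem.getSequence(new File("piece.mid"));
                System.out.println("resolution=" + seq.getResolution() + " ticks/quarter");
                Track[] tracks = seq.getTracks();
                for (int t = 0; t < tracks.length; t++) {
                    for (int i = 0; i < tracks[t].size(); i++) {
                        MidiEvent ev = tracks[t].get(i);
                        if (!(ev.getMessage() instanceof ShortMessage)) continue;
                        ShortMessage sm = (ShortMessage) ev.getMessage();
                        // A NOTE_ON with velocity > 0 is an onset; velocity 0 doubles as a note-off.
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                            System.out.printf("track=%d channel=%d tick=%d pitch=%d velocity=%d%n",
                                    t, sm.getChannel(), ev.getTick(), sm.getData1(), sm.getData2());
                        }
                    }
                }
            }
        }

    Notes sharing a tick form a chord, and a note's duration is the distance to its matching note-off. Whether voices sit on different tracks or different channels does vary from file to file.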

    Read the article

  • XMLStreamReader and a real stream

    - by Yuri Ushakov
    Update: there is no ready XML parser in the Java community which can do both NIO and XML parsing. This is the closest I found, and it's incomplete: http://wiki.fasterxml.com/AaltoHome

    I have the following code:

        InputStream input = ...;
        XMLInputFactory xmlInputFactory = XMLInputFactory.newInstance();
        XMLStreamReader streamReader = xmlInputFactory.createXMLStreamReader(input, "UTF-8");

    The question is, why does the method #createXMLStreamReader() expect to have an entire XML document in the input stream? Why is it called a "stream reader" if it can't process a portion of XML data? For example, if I feed

        <root>
        <child>

    to it, it tells me I'm missing the closing tags, even before I begin iterating the stream reader itself. I suspect that I just don't know how to use an XMLStreamReader properly. I should be able to supply it with data in pieces, right? I need this because I'm processing an XML stream coming in from a network socket, and I don't want to load the whole source text into memory. Thank you for help, Yuri.
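
    For what it's worth, a StAX reader does not actually demand the whole document up front: it pulls bytes lazily and only reports missing end tags if the underlying stream is closed early. A small experiment with piped streams (JDK only) that feeds the parser in pieces:

        import javax.xml.stream.*;
        import java.io.*;

        public class PartialFeed {
            public static void main(String[] args) throws Exception {
                PipedOutputStream feed = new PipedOutputStream();
                PipedInputStream source = new PipedInputStream(feed);

                // A stand-in for the network: deliver the document in two chunks.
                new Thread(() -> {
                    try {
                        feed.write("<root><child>".getBytes("UTF-8"));
                        Thread.sleep(1000); // the rest of the data isn't available yet
                        feed.write("hello</child></root>".getBytes("UTF-8"));
                        feed.close(); // only now is end-of-input signalled
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();

                XMLStreamReader r = XMLInputFactory.newInstance()
                        .createXMLStreamReader(source, "UTF-8");
                while (r.hasNext()) {
                    // next() blocks waiting for more bytes rather than failing
                    if (r.next() == XMLStreamConstants.START_ELEMENT) {
                        System.out.println("start: " + r.getLocalName());
                    }
                }
            }
        }

    The catch is that this is still blocking I/O, with a thread parked per connection; true non-blocking (NIO) parsing needs an async parser such as the Aalto project linked above.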

    Read the article

  • Is it possible to detect when the system is recording a sound and then perform some action in Python?

    - by Jorge
    I began learning Python a few days ago, and I was wondering about a practical use for a program. Then I came up with the following: if my brother is in his room recording himself playing guitar, an LED plugged into a USB port and wired so that it sits outside his door lights up; then I'll know he's recording and I'll take care not to make any noise. The main questions are: How can Python detect any recording going on in the system? How would I interface with the USB port so I can actually turn the LED on?

    Read the article

  • Android Stream Data Over Wifi?

    - by Neb
    I'm trying to make an Android app that streams accelerometer data, to be used as a game controller on my PC, over a local wifi connection. Is it possible to make some kind of wifi stream of the accelerometer values in the Android app and then have the PC somehow 'read' this stream? Or would it be better for the PC to make endless calls to the phone, getting the newest accelerometer values from a local server on the phone? It would also have to send commands from the phone such as 'button1 pressed', 'button1 released'.
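
    The push model is usually the simpler of the two: sample the sensor and fire each reading at the PC as a UDP datagram. A rough fragment of the phone side (the PC address, port, and message format are made up for illustration; socket setup and checked exceptions are elided):

        SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        final DatagramSocket socket = new DatagramSocket();
        final InetAddress pc = InetAddress.getByName("192.168.1.2"); // your PC's LAN address

        sm.registerListener(new SensorEventListener() {
            public void onSensorChanged(SensorEvent e) {
                byte[] msg = (e.values[0] + "," + e.values[1] + "," + e.values[2]).getBytes();
                try {
                    socket.send(new DatagramPacket(msg, msg.length, pc, 5555));
                } catch (IOException ex) {
                    // a dropped sample is fine for a controller; the next one is ~20 ms away
                }
            }
            public void onAccuracyChanged(Sensor s, int accuracy) {}
        }, accel, SensorManager.SENSOR_DELAY_GAME);

    On the PC side, a plain DatagramSocket.receive() loop reads the stream, and the button events can travel over the same socket as distinct messages.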

    Read the article

  • Pros and cons of MPMoviePlayerController versus launching UIWebView to stream movie

    - by Nosredna
    I have a client who has video content for the web in Flash format. My task is to help them show the videos in an iPhone app. I realize that step one is to get these videos into the appropriate QuickTime format for the iPhone. Then I'm going to have to help the client figure out how or where to host these files. If that's tricky, I assume they can be hosted at YouTube. My chief concern, though, is which approach to take to stream the video. What are the pros and cons of MPMoviePlayerController versus launching UIWebView with the URL of the stream? Is there any difference? Is one of them more or less forgiving? Is one of them a better user experience? Any gotchas I might expect to run into? I'm assuming playing video is pretty easy on the iPhone. Is it reasonable to try both and have one available as a fallback, or would that be a waste of time? I'm trying to schedule this out a bit, so I'd love to hear real-world experiences from anyone who's done this.

    Read the article

  • Getting the following exception: javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz not supported

    - by angelina
    Dear All, I tried to play a WAV file and get its duration using the code below:

        URL url = new URL("foo.wav");
        Clip clip = AudioSystem.getClip();
        AudioInputStream ais = AudioSystem.getAudioInputStream(url);
        clip.open(ais);
        System.out.println(clip.getMicrosecondLength());

    but got the following exception. Please help; I am using a WAV file format.

        javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, not supported.
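
    A common workaround (a sketch, not guaranteed to be the fix for this particular file) is to convert the ULAW stream to linear PCM before opening the Clip, using the standard format-conversion API:

        AudioInputStream ais = AudioSystem.getAudioInputStream(url);
        AudioFormat ulaw = ais.getFormat(); // ULAW 8000.0 Hz, 8 bit, mono
        AudioFormat pcm = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                ulaw.getSampleRate(), 16, ulaw.getChannels(),
                ulaw.getChannels() * 2, ulaw.getSampleRate(), false);
        // ask the audio system for a converted stream, then open the line on that
        AudioInputStream pcmStream = AudioSystem.getAudioInputStream(pcm, ais);
        Clip clip = AudioSystem.getClip();
        clip.open(pcmStream);
        System.out.println(clip.getMicrosecondLength());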

    Read the article

  • What is the best way to merge mp3 files?

    - by Dan Williams
    I've got many, many MP3 files that I would like to merge into a single file. I've used the command-line method copy /b 1.mp3+2.mp3 3.mp3, but it's a pain when there are a lot of them and their names are inconsistent. The duration never seems to come out right either.
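
    The inconsistent-names pain can be scripted around: the same byte-level concatenation copy /b performs, but with the file list sorted programmatically. A Java sketch (the directory argument and output name are illustrative; this inherits copy /b's flaw, so the first file's header still misreports the total duration):

        import java.io.*;
        import java.util.*;

        public class Mp3Cat {
            public static void main(String[] args) throws IOException {
                File[] files = new File(args[0]).listFiles((d, n) -> n.toLowerCase().endsWith(".mp3"));
                Arrays.sort(files); // or any comparator that matches your naming scheme
                try (OutputStream out = new BufferedOutputStream(new FileOutputStream("merged.mp3"))) {
                    for (File f : files) {
                        try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                            in.transferTo(out); // Java 9+; raw byte concatenation
                        }
                    }
                }
            }
        }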

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing) For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse) So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
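
    The span-aggregation idea in the last paragraph can be made concrete in a few lines: a hypothetical in-memory session record that merges heartbeats into watched intervals, to be flushed to the database once the session goes quiet (all names and thresholds are illustrative):

        import java.util.*;

        class ViewSession {
            static final long MAX_GAP_SEC = 6; // heartbeats every ~5 s; a bigger jump means a scrub
            final List<long[]> spans = new ArrayList<>(); // {startSec, endSec} in video time
            long lastHeartbeatMillis;

            void heartbeat(long positionSec) {
                lastHeartbeatMillis = System.currentTimeMillis();
                if (!spans.isEmpty()) {
                    long[] last = spans.get(spans.size() - 1);
                    if (positionSec >= last[1] && positionSec - last[1] <= MAX_GAP_SEC) {
                        last[1] = positionSec; // contiguous viewing: extend the current span
                        return;
                    }
                }
                spans.add(new long[]{positionSec, positionSec}); // a scrub opens a new span
            }

            boolean inactive(long timeoutMillis) {
                return System.currentTimeMillis() - lastHeartbeatMillis > timeoutMillis;
            }
        }

    A periodic sweep then writes out and evicts sessions whose inactive(...) check passes, so the database stores a handful of spans per view instead of one row per heartbeat.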

    Read the article

  • SoundPool repeating issue for Samsung Galaxy S3

    - by Alaa Eldin
    I'm trying to play a background sound for my application using the SoundPool class. My problem is that the sound plays well only when I set the loop parameter to zero; it doesn't work for any other value. My initialization code is:

        soundpool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
        soundsMap = new HashMap<Integer, Integer>();
        soundsMap.put(1, soundpool.load(this, R.raw.soundfile_1, 1));
        soundsMap.put(2, soundpool.load(this, R.raw.soundfile_2, 1));

    My code for playing is:

        soundpool.play(1, 0.9f, 0.9f, 1, -1, 1f);

    As mentioned, the sound works when I put 0 instead of -1 for the loop value. Does anyone have any idea why -1, or any value other than 0, produces no output sound?
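
    One frequent SoundPool pitfall worth ruling out (it may or may not be this device's issue) is calling play() before the sample has finished loading: a one-shot play can appear to work by luck while a looped play fails silently. A sketch that defers the looping play until the load completes:

        soundpool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
            @Override
            public void onLoadComplete(SoundPool sp, int sampleId, int status) {
                if (status == 0) { // 0 means the sample loaded successfully
                    sp.play(sampleId, 0.9f, 0.9f, 1, -1, 1f); // start looping only now
                }
            }
        });
        soundsMap.put(1, soundpool.load(this, R.raw.soundfile_1, 1));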

    Read the article

  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. The application successfully plays a sound when I run it as my user from the Terminal. I've configured the app to run as a LaunchDaemon, with the following plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.myorg.appidentifier</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Users/varunm/path/to/cli/application</string>
            </array>
            <key>KeepAlive</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    When the service is launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code:

        NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO];
        [soundToPlay setDelegate:stopper];
        [soundToPlay play];
        while (g_keepRunning) {
            [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
        }
        [soundToPlay setCurrentTime:0.0];

    Is there any way to get my LaunchDaemon application to play sound? This machine gets used by different people, and sometimes no one is logged in, which is why I have to configure it as a LaunchDaemon.

    Read the article
