Search Results

Search found 496 results on 20 pages for 'wav'.


  • Improving the join of two wave files?

    - by kaki
    I have written code for joining two wave files. It works fine when I am joining larger segments, but since I need to join very small segments the clarity is not good. I have learned that a signal-processing technique such as a windowed join can be used to improve the joining of the files:

        y[n] = w[n] * s[n]

    i.e. multiply the value of the signal at sample number n by the value of a windowing function, here a Hamming window:

        w[n] = 0.54 - 0.46 * cos(2*pi*n / L),  0 <= n <= L

    I do not understand how to get the value of the signal at sample n or how to implement this. The code I am using for joining is:

        import wave

        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        i = 1
        a = m[i]
        infiles = [a, "C:/begpython/S0001_0002.wav", a]
        outfile = "C:/begpython/S0001_00367.wav"

        data = []
        data1 = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            data1 = [w.getnframes()]  # getnframes is a method, so it needs to be called
            data.append([w.getparams(), w.readframes(w.getnframes())])
            # data1 = [ord(character) for character in data1]
            # print data1
            # data1 = ''.join(chr(character) for character in data1)
            w.close()

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        output.writeframes(data[0][1])
        output.writeframes(data[1][1])
        output.writeframes(data[2][1])
        output.close()
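
    A minimal sketch of one way to apply such a window at the join, assuming 16-bit mono PCM files at the same sample rate and with numpy available (the crossfade length of 256 samples is an arbitrary choice):

        # Sketch: crossfade two WAV segments with complementary Hamming-window halves.
        import wave
        import numpy as np

        def read_wav(path):
            w = wave.open(path, 'rb')
            params = w.getparams()
            samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
            w.close()
            return params, samples.astype(np.float64)

        params, s1 = read_wav('C:/begpython/S0001_0001.wav')
        _, s2 = read_wav('C:/begpython/S0001_0002.wav')

        fade_len = 256                      # samples to overlap at the join
        win = np.hamming(2 * fade_len)      # 0.54 - 0.46*cos(2*pi*n/(L-1))
        fade_in, fade_out = win[:fade_len], win[fade_len:]

        s1[-fade_len:] *= fade_out          # taper the tail of the first segment
        s2[:fade_len] *= fade_in            # taper the head of the second segment
        joined = np.concatenate([s1[:-fade_len],
                                 s1[-fade_len:] + s2[:fade_len],  # overlap-add region
                                 s2[fade_len:]])

        out = wave.open('C:/begpython/joined.wav', 'wb')
        out.setparams(params)               # frame count is patched as frames are written
        out.writeframes(joined.astype(np.int16).tobytes())
        out.close()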

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get the frequency of the music played in the WAV file. For now, there will be one frequency in the WAV file and I would like to find a library that is compatible with Android. I understand that I need to use an FFT to get the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want? EDIT: I tried to use JTransforms to get the FFT of the WAV file but always failed at getting the correct frequency. Currently, the WAV file contains a sine curve of 440 Hz, music note A4. However, I got the result as 441. Then I tried to get the frequency of G4 and I got the result as 882 Hz, which is incorrect; the frequency of G4 is supposed to be 783 Hz. Could it be due to not enough samples? If yes, how many samples should I take?

        // DFT
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copy audio data to the FFT buffer; imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        for (int i = 0; i < fftData.length; i += 2) {
            // complex numbers -> vectors: magnitude is sqrt(re^2 + im^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            // fd.append(Double.toString(vlen));
            // fd.append(",");
            if (max_fftval < vlen) {
                // this magnitude is bigger than the stored biggest magnitude
                max_fftval = vlen;
                max_i = i;
            }
        }
        // double dominantFreq = ((double) max_i / fftData.length) * sampleRate;
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out? EDIT2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting the overtones as the frequency. Any idea how to fix that? Should I use the Harmonic Product Spectrum or an autocorrelation algorithm?
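
    The bin spacing of a DFT is sampleRate / numOfFrames, so short signals land between bins (one reason 440 Hz can read as 441). For the overtone problem, the Harmonic Product Spectrum mentioned above is a common fix; a minimal numpy sketch of the idea, with hypothetical variable names, to show the algorithm rather than an Android-ready implementation:

        # Sketch: Harmonic Product Spectrum pitch estimate.
        import numpy as np

        def hps_pitch(samples, sample_rate, num_harmonics=4):
            windowed = samples * np.hanning(len(samples))
            spectrum = np.abs(np.fft.rfft(windowed))
            hps = spectrum.copy()
            for h in range(2, num_harmonics + 1):
                # decimating the spectrum by h lines the h-th harmonic up with the
                # fundamental, so multiplying reinforces the true pitch bin
                decimated = spectrum[::h]
                hps[:len(decimated)] *= decimated
            peak_bin = np.argmax(hps[1:]) + 1       # skip the DC bin
            return peak_bin * sample_rate / len(samples)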

    Read the article

  • Can't play wav file from Javascript in Firefox for Mac

    - by Mike Royle
    I have the following html file that plays a wav file when the user hovers over the 'Play' anchor tag. It works perfectly in IE, Chrome, Firefox, Opera and Safari on both Windows and Mac - except for Firefox on the Mac, which does not play the file.

        <html>
        <head>
            <title></title>
            <script>
                function PlayAudio() {
                    var s = document.getElementById("soundFile");
                    s.Play();
                }
            </script>
        </head>
        <body>
            <embed src="MySound.wav" enablejavascript="true" type="audio/wav"
                   autostart="false" width="0" height="0" id="soundFile" />
            <a href="#" onmouseover="PlayAudio()">Play</a>
        </body>
        </html>

    If the autostart attribute of the embed tag is set to true then the wav file plays as expected in Firefox for Mac, but not on the mouseover of the anchor tag. Any ideas?

    Read the article

  • Reproduce PIPE functionality in IronPython

    - by Muppet Geoff
    Hi, I am hoping some genius out there can help me out with this... I am using SoX to merge and resample a group of WAV files, and pipe the output directly to the input of NeroAACEnc for encoding to AAC format. I originally ran the process in a script, which included:

        sox.exe d:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav - | neroAacEnc.exe -q 0.5 -if - -of test.m4a

    This worked as expected. The '-' in the command line translates as 'pipe/redirect input/output (stdin/stdout)', so SoX pipes to stdout, NeroAACEnc reads from stdin, and the | joins them together. I then migrated the whole solution to Python, and the equivalent command became:

        from subprocess import call, Popen, PIPE
        runwav = Popen(['sox.exe', 'd:\audio\1.wav', 'd:\audio\2.wav', 'd:\audio\3.wav',
                        '-c', '1', '-r', '22050', '-t', 'wav', '-'],
                       shell=False, stdout=PIPE)
        runm4b = call(['neroAacEnc.exe', '-q', '0.5', '-if', '-', '-of', 'test.m4a'],
                      shell=False, stdin=runwav.stdout)

    This also worked like a charm, exactly as expected. Slightly more convoluted, but hey :) Well, now I have to move it to IronPython, and the subprocess module isn't available (the partial implementation that is doesn't have Popen/PIPE support - plus it seems silly to add a custom library when there is probably a native alternative). I should mention here that I opted for IronPython over C# because I am comfortable with Python now - however, there is a chance of moving it again later to native C#, and I am using IronPython to ease myself into it :) I have no C# or .NET experience. So far I have the following equivalent, which sets up the two processes:

        from System.Diagnostics import Process

        wav = Process()
        wav.StartInfo.UseShellExecute = False
        wav.StartInfo.RedirectStandardOutput = True
        wav.StartInfo.FileName = 'sox.exe'
        wav.StartInfo.Arguments = 'd:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav -'
        wav.Start()

        m4b = Process()
        m4b.StartInfo.UseShellExecute = False
        m4b.StartInfo.RedirectStandardInput = True
        m4b.StartInfo.FileName = 'neroAacEnc.exe'
        m4b.StartInfo.Arguments = '-q 0.5 -if - -of test.m4a'
        m4b.Start()

    I know that these two processes start (I can see Nero and SoX in the task manager), but what I can't figure out for the life of me is how to string the two output/input streams together, as with the previous two solutions. I have searched and searched, so I thought I'd ask! If anyone knows either: how to join the two streams with the same net result as the Python and command-line versions; or a better way to achieve what I am trying to do, I would be extremely grateful! Many thanks in advance, Geoff P.S. A code sample based off the above would be awesome :) or a specific code example of a similar process that I can easily translate... this has broked my brayne.
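
    A minimal sketch of one way to wire the two processes together, assuming the Process objects set up above: copy SoX's redirected stdout into the encoder's stdin in chunks, then close the encoder's input so it knows the stream has ended (the 4096-byte buffer size is arbitrary):

        from System import Array, Byte

        buf = Array.CreateInstance(Byte, 4096)
        while True:
            n = wav.StandardOutput.BaseStream.Read(buf, 0, buf.Length)
            if n == 0:
                break                                    # SoX has finished writing
            m4b.StandardInput.BaseStream.Write(buf, 0, n)

        m4b.StandardInput.BaseStream.Flush()
        m4b.StandardInput.Close()                        # end-of-input for the encoder
        wav.WaitForExit()
        m4b.WaitForExit()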

    Read the article

  • Access files (.wav) in Java package

    - by Highmastdon
    I want to access my .wav files, which are in a package inside my project. For example, I have two packages: package program and package sounds. From inside program/something.class I'd like to play sounds/asound.wav. How is this possible?

        clip.open(AudioSystem.getAudioInputStream(new File(filename)));
        clip.start();
        //.... something in between
        clip.stop();

    Here filename is C:\\projects\\something\\sounds\\, but how is it possible to just give a relative path to asound.wav in the package?

    Read the article

  • How to use a wav file in Eclipse

    - by AlphaAndOmega
    I've been trying to add audio to a project I've been working on. I found some code on here for HTML that is also supposed to work with a file, but it keeps saying:

        Exception in thread "main" javax.sound.sampled.UnsupportedAudioFileException:
            could not get audio input stream from input file
            at javax.sound.sampled.AudioSystem.getAudioInputStream(Unknown Source)
            at LoopSound.main(LoopSound.java:15)

    The code:

        public class LoopSound {
            public static void main(String[] args) throws Throwable {
                File file = new File("c:\\Users\\rabidbun\\Pictures\\10177-m-001.wav");
                Clip clip = AudioSystem.getClip();
                // getAudioInputStream() also accepts a File or InputStream
                AudioInputStream wav = AudioSystem.getAudioInputStream(file);
                clip.open(wav);
                // loop continuously
                clip.loop(-1000);
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        // A GUI element to prevent the Clip's daemon Thread
                        // from terminating at the end of main()
                        JOptionPane.showMessageDialog(null, "Close to exit!");
                    }
                });
            }
        }

    What is wrong with the code?

    Read the article

  • What software will rip a CD to FLAC files in one step?

    - by jcollum
    I'm using EAC to rip files to FLAC. Seems fine. But for a large number of CDs (200 or so) the process is a little labor-intensive. There are 4 or 5 clicks in there that seem extraneous - EAC seems to want to rip the CD to a set of WAV files, and then (after all those files are ripped) I have to select "Process WAV files" (I'd get it exact, but EAC is occupied right now). I'd like to send all of it to FLAC files in just a few steps: get CDDB info, select art, rip. Is there a way to do this with EAC that I'm missing? I haven't been able to find anything on the web that explains this. Is there a better program for this?

    Read the article

  • How to convert from MIDI to WAV

    - by nXqd
    I really need software which can convert from MIDI to WAV. I've found midiConverter, but it doesn't seem to work: after clicking the record button, I receive a WEIRD file which is just messy sound, not the fancy melody. Thanks for reading this, and I'll vote on every answer :)

    Read the article

  • Sound/Silence in a wav file.

    - by Vivek
    Hi, I am searching for a utility/code that could detect and let me know whether my 1-minute wav file contains sound or not. Alternatively, if it could detect the duration of the silence (if any exists) at any position in the wav file, that would also serve the purpose. Does SoX support any command for that? I tried with Java, but didn't find anything in JMF. Thanks Vivek
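
    A minimal sketch of doing the detection directly with Python's wave and audioop modules, assuming 16-bit PCM input (the frame length and RMS threshold are arbitrary values to tune per recording):

        # Sketch: flag silent stretches in a PCM WAV by per-frame RMS level.
        import wave
        import audioop

        THRESHOLD = 500        # RMS below this counts as silence
        FRAME_MS = 50          # analysis window length in milliseconds

        w = wave.open('input.wav', 'rb')
        width, rate = w.getsampwidth(), w.getframerate()
        frames_per_chunk = rate * FRAME_MS // 1000

        t = 0.0
        while True:
            chunk = w.readframes(frames_per_chunk)
            if not chunk:
                break
            rms = audioop.rms(chunk, width)   # root-mean-square of this window
            if rms < THRESHOLD:
                print("silence around %.2f s (rms=%d)" % (t, rms))
            t += FRAME_MS / 1000.0
        w.close()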

    Read the article

  • Extract wav file from video file

    - by Nikos Steiakakis
    I am developing an application in which I need to extract the audio from a video. The audio needs to be extracted in .wav format, but I do not have a problem with the video format; any format will do, as long as I can extract the audio to a wav file. Currently I am using the Windows Media Player COM control in a Windows Forms app to play the videos, but any other embedded player will do as well. Any suggestions on how to do this? Thanks
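
    If shelling out to an external tool is an option, ffmpeg can do the extraction regardless of the container; a minimal sketch of the call (shown from Python purely for illustration - the flags are what matter, and the file names are placeholders):

        # Sketch: extract a video's audio track to 16-bit PCM WAV with ffmpeg.
        import subprocess

        subprocess.check_call([
            'ffmpeg',
            '-i', 'input_video.avi',   # any container ffmpeg understands
            '-vn',                     # drop the video stream
            '-acodec', 'pcm_s16le',    # uncompressed 16-bit PCM audio
            '-ar', '44100',            # sample rate
            '-ac', '2',                # stereo
            'output_audio.wav',
        ])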

    Read the article

  • Mixing two wav music files of different sizes

    - by iphoneDev
    Hi, I want to mix audio files of different sizes into one single .wav file. There is a sample through which we can mix files of the same size [(http://www.modejong.com/iOS/#ex4) (Example 4)]. I modified the code to get the mixed file as a .wav file, but I am not able to understand how to modify this code for unequal-sized files. If someone can help me out with some code snippet, I'll be really thankful.
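
    The core of handling unequal lengths is just padding the shorter stream with silence before summing the samples. A minimal sketch of that step (numpy here for brevity; it assumes both files are already decoded to 16-bit PCM at the same sample rate):

        # Sketch: mix two 16-bit PCM sample arrays of unequal length.
        import numpy as np

        def mix(a, b):
            # pad the shorter array with zeros (silence) so both have equal length
            n = max(len(a), len(b))
            a = np.pad(a.astype(np.int32), (0, n - len(a)))
            b = np.pad(b.astype(np.int32), (0, n - len(b)))
            # sum in a wider type, then clip back into 16-bit range to avoid wrap-around
            return np.clip(a + b, -32768, 32767).astype(np.int16)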

    Read the article

  • How to play .wav files with Java

    - by HellRider
    Hi, I am trying to play a .wav file with Java. I need it to play a short beep sound when a button is pressed. I have googled it, but most of the code wasn't working. Can someone give me some simple code to play a .wav file?

    Read the article

  • Reading a WAV file into VST.Net to process with a plugin

    - by Paul
    Hello, I'm trying to use the VST.Net and NAudio frameworks to build an application that processes audio using a VST plugin. Ideally, the application should load a wav or mp3 file, process it with the VST, and then write a new file. I have done some poking around with the VST.Net library and was able to compile and run the samples (specifically the VST Host one). What I have not figured out is how to load an audio file into the program and have it write a new file back out. I'd like to be able to configure the properties for the VST plugin via C#, and be able to process the audio with 2 or more consecutive VSTs. Using NAudio, I was able to create this simple script to copy an audio file. Now I just need to get the output from the WaveFileReader into the VST.Net framework somehow.

        private void processAudio()
        {
            reader = new WaveFileReader("c:/bass.wav");
            writer = new WaveFileWriter("c:/bass-copy.wav", reader.WaveFormat);
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                writer.WriteData(buffer, 0, read);
            }
            textBox1.Text = "done";
            reader.Close();
            reader.Dispose();
            writer.Close();
            writer.Dispose();
        }

    Please help!! Thanks

    References:
    http://vstnet.codeplex.com (VST.Net)
    http://naudio.codeplex.com (NAudio)

    Read the article

  • Preload *.wav with SystemSoundID?

    - by fuzzygoat
    I am playing a wav file to give a little audio feedback when a button in my UI is pressed. My question is: when you first press the button there is a delay (about 1.5 secs) whilst the sound file "sound.wav" is loaded and cached. Is there a way to pre-cache this file (maybe in my viewDidLoad)? I guess I could do it by just playing it in viewDidLoad, but I would really need to disable the audio so it does not "beep" each time the app starts. Many thanks for any help. gary EDIT: Looks like my question is a duplicate of this post, unless anyone has any new info? Maybe a way to turn the play volume down temporarily, unless the audio is cleared each time through the run loop.

    Read the article

  • How would I down-sample a .wav file then reconstruct it using Nyquist? - in MATLAB [closed]

    - by martin
    This is all done in MATLAB 2010. My objective is to show the results of undersampling, sampling at the Nyquist rate, and oversampling. First I need to downsample the .wav file to get an incomplete or partial data stream that I can then reconstruct. The flow is: analog signal - sampling analog filter - ADC - resample down - resample up - DAC - reconstruction analog filter. What needs to be achieved: f = frequency in Hz (cycles/sec), e.g. 100 Hz = 100 cycles/sec, and the sampling period is T = 1/(2f). Example problem: for a highest frequency of 1000 Hz, T = 1/(2*1000) = 1/2000 sec/cycle, i.e. a sampling interval of 0.5 ms. This is my first signal-processing project using MATLAB. What I have so far:

        % Fs = sampling frequency (44100 Hz, the sampling frequency of a CD)
        [test, fs] = wavread('test.wav');   % loads the .wav file
        left = test(:,1);

        % Plot of the .wav signal: time vs. strength
        time = (1/44100) * length(left);
        t = linspace(0, time, length(left));
        plot(t, left)
        xlabel('time (sec)');
        ylabel('relative signal strength')

        % this is where I would need to sample it at the different frequencies
        % (below, at and above the Nyquist frequency), *I think*

        soundsc(left, fs)   % plays the resulting audio file, which is the same as the
                            % original (only at or above the Nyquist frequency, however)

    Can anyone tell me how to make it better, and how to do the sampling at the various frequencies?
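
    Since the question is really about the down/up resampling step, here is a sketch of the same experiment in Python/scipy for reference (MATLAB's resample works analogously; the decimation factor is an arbitrary choice):

        # Sketch: downsample then upsample a WAV to compare against the original.
        from scipy.io import wavfile
        from scipy.signal import resample_poly

        rate, data = wavfile.read('test.wav')
        left = data[:, 0] if data.ndim > 1 else data

        factor = 8                              # undersample when rate/factor < 2*f_max
        down = resample_poly(left, 1, factor)   # resample down: new rate = rate/factor
        back = resample_poly(down, factor, 1)   # resample back up to the original rate

        wavfile.write('reconstructed.wav', rate, back.astype(data.dtype))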

    Read the article

  • Make VLC play all of a sound clip?

    - by bill weaver
    In VLC 1.0.5, sound files (mp3 and wav) are not playing completely. I think the leading 0.1 sec and trailing 0.5 sec, more or less, are not playing. I remember this behavior from some years ago, but am not sure of the app or how I fixed it. It seems like there was some setting that was causing it to clip the sounds, but I can't find or remember it. The Zune software and Windows Media Player work fine. Windows 7 Pro, if it matters. Any suggestions?

    Read the article

  • Record 8 separate Line IN Channels from M-Audio Delta 1010 Card

    - by Peter Hoffmann
    I want to record the 8 separate Line IN channels of my M-Audio Delta 1010 card. The card is recognized nicely and I can record a single channel via:

        arecord -d 10 -f cd -t wav -D channel1 out2.wav

    I've set up the different channels in ~/.asoundrc. Now if I want to record a second channel in parallel (arecord -d 10 -f cd -t wav -D channel2 out2.wav) I get the error:

        arecord: main:564: audio open error: Device or resource busy

    As I understand it, the Delta 1010 is a single-access card, so only one application can access it at a time. Is this correct? The next step was to configure a dual-channel input in .asoundrc:

        # envy24 channels 1+2 only
        pcm.test {
            type plug
            ttable.0.0 1
            ttable.0.1 1
            slave.pcm ice1712
        }

    This works OK when I do arecord -d 10 -f cd -t wav -D test -c 2 out.wav (BTW, can anyone point me to a tool to split a multi-channel wav into a file per channel?). But when I want to record the channels separately with the -I option:

        arecord -d 10 -f cd -t wav -D test -c 2 -I channel1.wav channel2.wav

    I get no recordings. Did I miss something in the configuration, or what are my options to record all 8 channels via arecord? I have no experience with jackd. Is it an option to install jackd and record the line-ins via jackd?
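
    On the side question of splitting a multi-channel wav into per-channel files, a minimal sketch of one way to do it (Python with scipy, assuming integer PCM input; the file names are placeholders):

        # Sketch: split a multi-channel WAV into one mono WAV per channel.
        from scipy.io import wavfile

        rate, data = wavfile.read('multichannel.wav')   # data shape: (frames, channels)
        channels = 1 if data.ndim == 1 else data.shape[1]

        for ch in range(channels):
            mono = data if data.ndim == 1 else data[:, ch]
            wavfile.write('channel%d.wav' % (ch + 1), rate, mono)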

    Read the article

  • Dynamically created iframe used to download file triggers onload with Firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking for information on why. For the fix, see my comment below. I have a web application which repeatedly downloads wav files dynamically (after a timeout or as instructed by the user) into an iframe in order to trigger the default audio player to play them. The application targets only FF 2 or 3. In order to determine when the file has downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer, I am creating a new iframe each time. As long as Firebug is enabled in the browser using the application, everything works great. Without Firebug, the onload never fires. The version of Firebug is 1.3.1, while I've tested Firefox 2.0.0.19 and 3.0.7. Any ideas how I can get the onload from the iframe to reliably trigger when the wav file has downloaded? Or is there another way to signal the completion of the download? Here's the pertinent code. HTML (hidden's only attribute is display:none;):

        <div id="audioContainer" class="hidden">
        </div>

    JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read):

        waitingForFile = true; // (declared at the beginning of closure)
        $("#loading").removeClass("hidden");
        var content = "<iframe id='audioPlayer' name='audioPlayer' src='" +
            /path/to/file.wav + "' onload='notifyLoaded()'></iframe>";
        document.getElementById("audioContainer").innerHTML = content;

    And the content of notifyLoaded:

        function notifyLoaded() {
            waitingForFile = false; // (declared at beginning of the closure)
            $("#loading").addClass("hidden");
        }

    I have also tried creating the iframe via document.createElement, but I found the same behavior: the onload triggered each time with Firebug enabled and never without it. EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, no console.log calls here.

    Read the article

  • Trimming a bit of the beginning off a recorded waveform

    - by Lowgain
    I've got a Flash 10.1 app that lets me record microphone input to a wav without a media server, which I am saving to an Amazon S3 bucket. I have another process running on a server which gets wavs from this bucket, converts them to mp3 using LAME and puts them into another bucket. This all works fine, but in converting wav to mp3, about 0.1 sec or so of silence is added to my sound. In the application this is being used in, perfect sync is critical, so I need to trim off that little bit. If I have to trim it off the original waveform that is okay; I don't expect anything important to happen in that first fraction of a second. What is the best way to go about this? I am using Adobe's WavWriter to convert my ByteArray into a proper waveform. Is there a way I can easily trim off the first few samples from my ByteArray without invalidating the structure? Alternatively, is there a good server-side tool I can use to trim the wav before running it through LAME, or an argument I can give LAME? Or could I even trim that sound off the mp3 after it has been converted? Thanks!
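
    On the server-side option, a minimal sketch of trimming a fixed amount off the front of the WAV before handing it to LAME (Python's standard wave module; the 0.1 sec figure is the one mentioned above and the file names are placeholders):

        # Sketch: drop the first TRIM_SECONDS of a WAV file.
        import wave

        TRIM_SECONDS = 0.1

        src = wave.open('recording.wav', 'rb')
        params = src.getparams()
        skip = int(src.getframerate() * TRIM_SECONDS)

        src.readframes(skip)                       # discard the leading frames
        rest = src.readframes(src.getnframes() - skip)
        src.close()

        dst = wave.open('trimmed.wav', 'wb')
        dst.setparams(params)                      # frame count is corrected on write
        dst.writeframes(rest)
        dst.close()

    SoX's trim effect (e.g. sox in.wav out.wav trim 0.1) is another server-side option if an extra dependency is acceptable.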

    Read the article

  • How to play extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];

        int p = 0;
        for (int c = 0; c < 43; c++)   // copies the first 43 header bytes (indices 0..42)
        {
            song_left[p] = header[c];
            p++;
        }
        int q = 0;
        for (s = startByte; s < length; s = s + 3)
        {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }
        p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_right[p] = header[c];
            p++;
        }

    This part is reading the header and data from both the right and left channels and saving them to the arrays sLeft[] and sRight[]. This part is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    Added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e)
        {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?
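
    For reference, a canonical PCM WAV header is 44 bytes, and the RIFF and data chunk size fields inside it must match the payload that actually follows. A sketch of building such a header from scratch (Python's struct module here just to show the layout; assumes uncompressed PCM):

        # Sketch: build a canonical 44-byte PCM WAV header.
        import struct

        def wav_header(num_data_bytes, sample_rate=44100, channels=1, bits=16):
            byte_rate = sample_rate * channels * bits // 8
            block_align = channels * bits // 8
            return (b'RIFF' + struct.pack('<I', 36 + num_data_bytes) + b'WAVE'
                    + b'fmt ' + struct.pack('<IHHIIHH', 16, 1, channels,
                                            sample_rate, byte_rate, block_align, bits)
                    + b'data' + struct.pack('<I', num_data_bytes))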

    Read the article

  • CentOS - Convert Each WAV File to MP3/OGG

    - by Benny
    I am trying to build a script (I'm pretty new to Linux scripting) and I can't seem to figure out why I'm not able to run it. If I keep the header (#!/bin/sh) in, I get the following:

        -bash: /tmp/ConvertAndUpdate.sh: /bin/sh^M: bad interpreter: No such file or directory

    If I take it out, I get the following:

        /tmp/ConvertAndUpdate.sh: line 2: syntax error near unexpected token `do'
        /tmp/ConvertAndUpdate.sh: line 2: `do'

    Any ideas? Here is the full script:

        #!/bin/sh
        for file in *.wav; do
            mp3=$(basename "$file" .wav).mp3;
            #echo $mp3
            nice lame -b 16 -m m -q 9 --resample 8 "$file" "$mp3";
            touch --reference "$file" "$mp3";
            chown apache.apache "$mp3";
            chmod 600 "$mp3";
            rm -f "$file";
            mv "$file" /converted;
            sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
            echo $sql | mysql --user=me --password=pasword Recordings
            #echo $sql
        done

    Read the article

  • Audio processing in C# or C++

    - by melculetz
    Hi, I would like to create an application that uses AI techniques and allows the user to record a part of a song and then tries to find that song in a database of wav files. I would have liked to use some already existing libraries for the audio processing part. So, could you recommend any libraries in C# which can read a wav file, get input from a microphone, have some audio filters (low pass, high pass, FFT etc.) and maybe have the ability to plot the audio signal as well? I would prefer to develop in C#, but if there aren't good libraries for audio processing, I guess I could work in C++ as well. As far as I know, MATLAB already has the above-mentioned functionality, but I can't use it in my application.

    Read the article
