Search Results

Search found 7 results on 1 page for 'pyaudio'.

Page 1/1

  • record output sound in python

    - by aaronstacy
    I want to programmatically record the sound coming out of my laptop in Python. I found PyAudio and came up with the following program that accomplishes the task:

        import pyaudio, wave, sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = sys.argv[1]

        p = pyaudio.PyAudio()
        channel_map = (0, 1)
        stream_info = pyaudio.PaMacCoreStreamInfo(
            flags = pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
            channel_map = channel_map)
        stream = p.open(format = FORMAT,
                        rate = RATE,
                        input = True,
                        input_host_api_specific_stream_info = stream_info,
                        channels = CHANNELS)

        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        stream.close()
        p.terminate()

        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    The problem is I have to connect the headphone jack to the microphone jack. I tried replacing these lines:

        input = True,
        input_host_api_specific_stream_info = stream_info,

    with these:

        output = True,
        output_host_api_specific_stream_info = stream_info,

    but then I get this error:

        Traceback (most recent call last):
          File "./test.py", line 25, in
            data = stream.read(chunk)
          File "/Library/Python/2.5/site-packages/pyaudio.py", line 562, in read
            paCanNotReadFromAnOutputOnlyStream)
        IOError: [Errno Not input stream] -9975

    Is there a way to instantiate the PyAudio stream so that it reads from the computer's output and I don't have to connect the headphone jack to the microphone? Is there a better way to go about this? I'd prefer to stick with a Python app and avoid Cocoa.

    Read the article
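
    A note for anyone finding this: PyAudio can only read from devices that advertise input channels, so capturing the machine's own output generally means installing a virtual loopback device (on Mac OS X, something like Soundflower) and recording from that. A minimal sketch under that assumption; the device name below is only a guess and must match whatever the loopback driver registers:

        # Sketch: record from a loopback device that mirrors the system output.
        # Assumes a virtual loopback driver (e.g. Soundflower) is installed and
        # that system output is routed to it; the device name is a guess.
        import pyaudio, wave

        LOOPBACK_NAME = "Soundflower (2ch)"   # assumption: match your driver's name
        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 2
        RATE = 44100
        RECORD_SECONDS = 5

        p = pyaudio.PyAudio()

        # find the loopback device's index by name
        device_index = None
        for i in range(p.get_device_count()):
            info = p.get_device_info_by_index(i)
            if LOOPBACK_NAME in info["name"] and info["maxInputChannels"] > 0:
                device_index = i
                break
        if device_index is None:
            raise RuntimeError("no loopback input device found")

        stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                        input=True, input_device_index=device_index,
                        frames_per_buffer=chunk)
        frames = [stream.read(chunk) for _ in range(RATE // chunk * RECORD_SECONDS)]
        sample_width = p.get_sample_size(FORMAT)
        stream.close()
        p.terminate()

        wf = wave.open("loopback.wav", "wb")
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(sample_width)
        wf.setframerate(RATE)
        wf.writeframes(b"".join(frames))
        wf.close()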

  • Python/Tkinter Audio Player

    - by Nicholas Quirk
    Hey everyone reading this, I've recently gotten into GUI development with Python. Tkinter seems like the easiest and most logical choice starting out. I did a little with wxPython, but it was more sophisticated than what I needed. Anyway, I'm developing a media player. Right now it's a simple window with a button to load .wav files. The problem is that I would like to implement a pause button now, but while an audio file is playing the GUI isn't accessible (no buttons can be pushed) until the file is done playing. How can I keep the GUI responsive while an audio file is playing? I was thinking this may be because I'm using PyAudio and its implementation doesn't allow this. Anyway, thanks for any advice beforehand.

    Read the article
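
    The freeze described here is usually a symptom of running the blocking PyAudio write loop on the same thread as Tkinter's mainloop, not a PyAudio limitation. A small sketch of one common workaround: play on a background thread and toggle a threading.Event from the buttons (the widget layout and the "song.wav" path are just illustrative):

        # Sketch: keep the Tk mainloop responsive by doing playback on a worker
        # thread; pausing just clears an Event that the worker waits on.
        import threading, wave, pyaudio
        import tkinter as tk

        play_allowed = threading.Event()
        play_allowed.set()                 # start in the "not paused" state

        def play_wav(path):
            wf = wave.open(path, "rb")
            p = pyaudio.PyAudio()
            stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                            channels=wf.getnchannels(),
                            rate=wf.getframerate(),
                            output=True)
            data = wf.readframes(1024)
            while data:
                play_allowed.wait()        # blocks here while paused
                stream.write(data)
                data = wf.readframes(1024)
            stream.close()
            p.terminate()
            wf.close()

        def toggle_pause():
            if play_allowed.is_set():
                play_allowed.clear()       # pause
            else:
                play_allowed.set()         # resume

        root = tk.Tk()
        tk.Button(root, text="Play",
                  command=lambda: threading.Thread(target=play_wav,
                                                   args=("song.wav",),
                                                   daemon=True).start()).pack()
        tk.Button(root, text="Pause/Resume", command=toggle_pause).pack()
        root.mainloop()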

  • Detect and record a sound with python

    - by Jean-Pierre
    I'm using this program to record a sound in Python:

        import pyaudio
        import wave
        import sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = "output.wav"

        p = pyaudio.PyAudio()
        stream = p.open(format = FORMAT,
                        channels = CHANNELS,
                        rate = RATE,
                        input = True,
                        frames_per_buffer = chunk)

        print "* recording"
        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        print "* done recording"
        stream.close()
        p.terminate()

        # write data to WAVE file
        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    I want to change the program so that it starts recording when sound is detected by the sound card input. I should probably compare the input sound level in each chunk, but how do I do this?

    Read the article
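
    One way to get at the "input sound level" per chunk is the standard-library audioop module, which can compute the RMS of a block of 16-bit samples. A rough sketch that waits for the level to cross a threshold before it starts keeping frames; the THRESHOLD value is an arbitrary guess and needs tuning for the actual microphone:

        # Sketch: start keeping audio once the per-chunk RMS crosses a threshold.
        # THRESHOLD is an arbitrary guess; tune it for your microphone and room.
        import audioop, pyaudio

        chunk = 1024
        RATE = 44100
        THRESHOLD = 500

        p = pyaudio.PyAudio()
        stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                        input=True, frames_per_buffer=chunk)

        frames = []
        recording = False
        while True:
            data = stream.read(chunk)
            level = audioop.rms(data, 2)          # 2 bytes per sample for paInt16
            if not recording and level > THRESHOLD:
                recording = True                  # sound detected: start recording
            if recording:
                frames.append(data)
                if len(frames) >= RATE // chunk * 5:   # keep roughly 5 seconds
                    break

        stream.close()
        p.terminate()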

  • Capturing Mac OS X System Audio output with Python

    - by richbs
    Hello, I've been trying to "hijack" the Mac OS X system audio using PyAudio and save it to a WAV file in Python. That is, I do not want to record from an input device such as a microphone; I want to grab the sound output from any or all applications. I have followed the tutorials on the PyAudio site, but these do not appear to cover my use case, and when I try to read from the output stream I unsurprisingly get the paCanNotReadFromAnOutputOnlyStream exception. Fair enough! Is there a way to do what I am proposing with PyAudio or another FOSS Python library?

    Read the article
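
    As above, PortAudio (and therefore PyAudio) can only record from devices that expose input channels, so system output is reachable only if some loopback driver presents it as an input. A short sketch that simply lists the input-capable devices, which makes it easy to check whether such a device is available:

        # Sketch: list the devices PyAudio can actually record from; system
        # output is only reachable if a loopback driver shows up in this list.
        import pyaudio

        p = pyaudio.PyAudio()
        for i in range(p.get_device_count()):
            info = p.get_device_info_by_index(i)
            if info["maxInputChannels"] > 0:
                print("%d: %s (%d input channels)"
                      % (i, info["name"], info["maxInputChannels"]))
        p.terminate()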

  • Yes, another thread question...

    - by Michael
    I can't understand why I am losing control of my GUI even though I am implementing a thread to play a .wav file. Can someone pinpoint what is incorrect?

        #!/usr/bin/env python
        import wx, pyaudio, wave, easygui, thread, time, os, sys, traceback, threading
        import wx.lib.delayedresult as inbg

        isPaused = False
        isStopped = False

        class Frame(wx.Frame):
            def __init__(self):
                print 'Frame'
                wx.Frame.__init__(self, parent=None, id=-1, title="Jasmine", size=(720, 300))
                # initialize panel
                panel = wx.Panel(self, -1)
                # initialize grid bag
                sizer = wx.GridBagSizer(hgap=20, vgap=20)
                # initialize buttons
                exitButton = wx.Button(panel, wx.ID_ANY, "Exit")
                pauseButton = wx.Button(panel, wx.ID_ANY, 'Pause')
                prevButton = wx.Button(panel, wx.ID_ANY, 'Prev')
                nextButton = wx.Button(panel, wx.ID_ANY, 'Next')
                stopButton = wx.Button(panel, wx.ID_ANY, 'Stop')
                # add widgets to sizer
                sizer.Add(pauseButton, pos=(1,10))
                sizer.Add(prevButton, pos=(1,11))
                sizer.Add(nextButton, pos=(1,12))
                sizer.Add(stopButton, pos=(1,13))
                sizer.Add(exitButton, pos=(5,13))
                # initialize song time gauge
                #timeGauge = wx.Gauge(panel, 20)
                #sizer.Add(timeGauge, pos=(3,10), span=(0, 0))
                # initialize menuFile widget
                menuFile = wx.Menu()
                menuFile.Append(0, "L&oad")
                menuFile.Append(1, "E&xit")
                menuBar = wx.MenuBar()
                menuBar.Append(menuFile, "&File")
                menuAbout = wx.Menu()
                menuAbout.Append(2, "A&bout...")
                menuAbout.AppendSeparator()
                menuBar.Append(menuAbout, "Help")
                self.SetMenuBar(menuBar)
                self.CreateStatusBar()
                self.SetStatusText("Welcome to Jasime!")
                # place sizer on panel
                panel.SetSizer(sizer)
                # initialize icon
                self.cd_image = wx.Image('cd_icon.png', wx.BITMAP_TYPE_PNG)
                self.temp = self.cd_image.ConvertToBitmap()
                self.size = self.temp.GetWidth(), self.temp.GetHeight()
                wx.StaticBitmap(parent=panel, bitmap=self.temp)
                # set binding
                self.Bind(wx.EVT_BUTTON, self.OnQuit, id=exitButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.pause, id=pauseButton.GetId())
                self.Bind(wx.EVT_BUTTON, self.stop, id=stopButton.GetId())
                self.Bind(wx.EVT_MENU, self.loadFile, id=0)
                self.Bind(wx.EVT_MENU, self.OnQuit, id=1)
                self.Bind(wx.EVT_MENU, self.OnAbout, id=2)

            # Load file using FileDialog, and create a thread for user control while running the file
            def loadFile(self, event):
                foo = wx.FileDialog(self, message="Open a .wav file...", defaultDir=os.getcwd(),
                                    defaultFile="", style=wx.FD_MULTIPLE)
                foo.ShowModal()
                self.queue = foo.GetPaths()
                self.threadID = 1
                while len(self.queue) != 0:
                    self.song = myThread(self.threadID, self.queue[0])
                    self.song.start()
                    while self.song.isAlive():
                        time.sleep(2)
                    self.queue.pop(0)
                    self.threadID += 1

            def OnQuit(self, event):
                self.Close()

            def OnAbout(self, event):
                wx.MessageBox("This is a great cup of tea.", "About Jasmine",
                              wx.OK | wx.ICON_INFORMATION, self)

            def pause(self, event):
                global isPaused
                isPaused = not isPaused

            def stop(self, event):
                global isStopped
                isStopped = not isStopped

        class myThread(threading.Thread):
            def __init__(self, threadID, wf):
                self.threadID = threadID
                self.wf = wf
                threading.Thread.__init__(self)

            def run(self):
                global isPaused
                global isStopped
                self.waveFile = wave.open(self.wf, 'rb')
                # initialize stream
                self.p = pyaudio.PyAudio()
                self.stream = self.p.open(format = self.p.get_format_from_width(self.waveFile.getsampwidth()),
                                          channels = self.waveFile.getnchannels(),
                                          rate = self.waveFile.getframerate(),
                                          output = True)
                self.data = self.waveFile.readframes(1024)
                isPaused = False
                isStopped = False
                # main play loop, with pause event checking
                while self.data != '':
                    # while isPaused != True:
                    #     if isStopped == False:
                    self.stream.write(self.data)
                    self.data = self.waveFile.readframes(1024)
                    #     elif isStopped == True:
                    #         self.stream.close()
                    #         self.p.terminate()
                self.stream.close()
                self.p.terminate()

        class App(wx.App):
            def OnInit(self):
                self.frame = Frame()
                self.frame.Show()
                self.SetTopWindow(self.frame)
                return True

        def main():
            app = App()
            app.MainLoop()

        if __name__ == '__main__':
            main()

    Read the article
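
    One reading of the code above: the playback thread is fine, but loadFile blocks the wx event loop itself, because the "while self.song.isAlive(): time.sleep(2)" loop runs inside the menu handler and never returns control to MainLoop. A hedged sketch of an alternative where the worker thread drains the queue on its own and pause/stop are threading.Events checked inside the write loop, so the handler can return immediately:

        # Sketch: the worker thread drains the queue itself, so the wx handler can
        # return immediately; pause/stop are Events checked inside the write loop.
        import threading, wave, pyaudio

        pause_evt = threading.Event()   # set() to pause, clear() to resume
        stop_evt = threading.Event()    # set() to skip the current song

        def play_queue(paths):
            p = pyaudio.PyAudio()
            for path in paths:
                wf = wave.open(path, "rb")
                stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                                channels=wf.getnchannels(),
                                rate=wf.getframerate(),
                                output=True)
                data = wf.readframes(1024)
                while data and not stop_evt.is_set():
                    if pause_evt.is_set():
                        stop_evt.wait(0.1)         # idle briefly while paused
                        continue
                    stream.write(data)
                    data = wf.readframes(1024)
                stream.close()
                wf.close()
                stop_evt.clear()
            p.terminate()

        # In loadFile, instead of start()ing a thread and then sleeping until it
        # finishes, hand the whole queue to one thread and return right away:
        #     threading.Thread(target=play_queue, args=(foo.GetPaths(),)).start()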

  • [python] voice communication for python help!

    - by Eric
    Hello! I'm currently trying to write a voice-chat program in Python, and all tips and tricks are welcome. So far I found PyAudio, which is a wrapper around PortAudio. I played around with that and got an input stream from my microphone played back through my speakers (raw only, of course). But I can't send raw data over the network (due to the size, duh), so I'm looking for a way to encode it. I searched around the net and stumbled over this speex wrapper for Python. It seemed too good to be true, and believe me, it was.

    You see, in PyAudio you can set the size of the chunks you want to take from your input audio buffer, and in the sample code on that link it's set to 320. Then when a chunk is encoded, it's roughly 40 bytes of data, which is fairly acceptable I guess.

    And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them and plays them (not sending over the network, since I'm testing). If I just let my computer idle and run this program, it works great. But as soon as I do something, e.g. start Firefox, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer.

    OK, so why am I taking only 320 bytes from the stream? I could just take 1024 bytes or so, and that would ease the pressure on the buffer. BUT: if I give speex 1024 bytes of data to encode/decode, it either crashes and says that's too big for its buffer, or it encodes/decodes it but the sound is very noisy and "choppy", as if it only encoded a tiny bit of that 1024-byte chunk and the rest is static noise. So the sound ends up like a helicopter, lol.

    I did some research and it seems that speex can only convert 320 bytes of data at a time (well, 640 for wide-band). But is that the standard? How can I fix this problem? How should I structure my program to work with speex? I could use a middle buffer that takes all available data from the input buffer and then chunks it up into 320-byte pieces to encode/decode, but that takes a bit longer and seems like a very bad solution to the problem. Because as far as I know, there's no other encoder for Python that compresses the audio into packets small enough to send over the network, or is there? I've been googling for three days now. There is also the pyMedia library, but I don't know if converting to mp3/ogg is a good fit for this kind of software.

    Thanks in advance for reading this; hope someone can help me! (:

    Read the article
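
    The "middle buffer" idea at the end is essentially how streaming codecs are normally fed: read whatever block size keeps PyAudio's input buffer from overflowing, append it to a byte buffer, and slice fixed-size codec frames off the front. A sketch of just that framing logic, using 320 samples per frame to match the sample code the question refers to (the correct frame size depends on the speex mode); the 8000 Hz rate is an assumption, and encode_frame() is a placeholder since the wrapper's exact API isn't shown here:

        # Sketch: read large chunks from PyAudio, buffer them, and slice off
        # fixed-size codec frames. FRAME_SAMPLES matches the 320 used in the
        # question's sample code; encode_frame() stands in for the real codec.
        import pyaudio

        FRAME_SAMPLES = 320                   # samples per codec frame
        FRAME_BYTES = FRAME_SAMPLES * 2       # 16-bit mono
        READ_CHUNK = 2048                     # bigger reads ease buffer pressure

        def encode_frame(frame_bytes):
            return frame_bytes                # placeholder: call the real encoder here

        p = pyaudio.PyAudio()
        stream = p.open(format=pyaudio.paInt16, channels=1, rate=8000,
                        input=True, frames_per_buffer=READ_CHUNK)

        pending = b""
        packets = []
        for _ in range(50):                       # short test run
            pending += stream.read(READ_CHUNK)
            while len(pending) >= FRAME_BYTES:    # slice off full codec frames
                frame, pending = pending[:FRAME_BYTES], pending[FRAME_BYTES:]
                packets.append(encode_frame(frame))

        stream.close()
        p.terminate()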

  • What exactly does raw microphone data represent?

    - by esperantist
    I'm using PyAudio, a PortAudio wrapper for Python, to get data from a microphone. The data arrives as a continuous stream of bytes divided into chunks (of a size determined by me). I've tried to plot the signal, assuming the bytes represent the current signal amplitude, but I get an interesting image that I can't easily describe. ^^ It seems to be composed of two waves, one shifted from the other. What exactly do the particular bytes represent, and how does this change when I'm recording only one channel instead of two? Any explanations, suggestions, code snippets, anything, very welcome! (I'm new at this.) Thanks!

    Read the article
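
    With format paInt16, every two bytes form one signed 16-bit little-endian sample, and with two channels the samples are interleaved (left, right, left, right, ...), which is likely what produces the two shifted waves when the raw bytes are plotted naively. A small sketch that decodes one chunk into per-channel sample lists with the standard struct module:

        # Sketch: interpret a paInt16 chunk as signed 16-bit samples and
        # de-interleave the two channels; with channels=1 every sample belongs
        # to the single channel and the "two waves" effect goes away.
        import struct, pyaudio

        chunk = 1024
        CHANNELS = 2
        RATE = 44100

        p = pyaudio.PyAudio()
        stream = p.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                        input=True, frames_per_buffer=chunk)
        data = stream.read(chunk)             # chunk frames, CHANNELS samples each
        stream.close()
        p.terminate()

        # each sample is 2 bytes, little-endian, signed: '<h' in struct notation
        count = len(data) // 2
        samples = struct.unpack("<%dh" % count, data)

        left = samples[0::2]                  # channel 0
        right = samples[1::2]                 # channel 1
        print(left[:10], right[:10])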
