Search Results

Search found 21392 results on 856 pages for 'audio output'.

  • How to convert m4a file to aac adts file in Xcode?

    - by Bird Hsuie
    I have an mp4 file copied from the iPod library and saved to my Documents folder; for my next step, I need to convert it to .mp3 or .aac (ADTS type). I used this code and it failed:

        -(IBAction)compressFile:(id)sender {
            NSLog (@"handleConvertToPCMTapped");

            // open an ExtAudioFile
            NSLog (@"opening %@", exportURL);
            ExtAudioFileRef inputFile;
            CheckResult (ExtAudioFileOpenURL((__bridge CFURLRef)exportURL, &inputFile),
                         "ExtAudioFileOpenURL failed");

            // prepare to convert to a plain ol' PCM format
            AudioStreamBasicDescription myPCMFormat;
            myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
            myPCMFormat.mFormatID = kAudioFormatMPEGLayer3;
            myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
            myPCMFormat.mChannelsPerFrame = 2;
            myPCMFormat.mFramesPerPacket = 1;
            myPCMFormat.mBitsPerChannel = 16;
            myPCMFormat.mBytesPerPacket = 4;
            myPCMFormat.mBytesPerFrame = 4;
            CheckResult (ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                                 sizeof (myPCMFormat), &myPCMFormat),
                         "ExtAudioFileSetProperty failed");

            // allocate a big buffer. size can be arbitrary for ExtAudioFile.
            // you have 64 KB to spare, right?
            UInt32 outputBufferSize = 0x10000;
            void* ioBuf = malloc (outputBufferSize);
            UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
            UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

            // set up output file
            NSString *outputPath = [myDocumentsDirectory() stringByAppendingPathComponent:@"m_export.mp3"];
            NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
            NSLog (@"creating output file %@", outputURL);
            AudioFileID outputFile;
            CheckResult(AudioFileCreateWithURL((__bridge CFURLRef)outputURL, kAudioFileCAFType,
                                               &myPCMFormat, kAudioFileFlags_EraseFile, &outputFile),
                        "AudioFileCreateWithURL failed");

            // start convertin'
            UInt32 outputFilePacketPosition = 0; // in bytes
            while (true) {
                // wrap the destination buffer in an AudioBufferList
                AudioBufferList convertedData;
                convertedData.mNumberBuffers = 1;
                convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
                convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
                convertedData.mBuffers[0].mData = ioBuf;

                UInt32 frameCount = packetsPerBuffer;

                // read from the extaudiofile
                CheckResult (ExtAudioFileRead(inputFile, &frameCount, &convertedData),
                             "Couldn't read from input file");
                if (frameCount == 0) {
                    printf ("done reading from file");
                    break;
                }

                // write the converted data to the output file
                CheckResult (AudioFileWritePackets(outputFile, false, frameCount, NULL,
                                                   outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                                   &frameCount, convertedData.mBuffers[0].mData),
                             "Couldn't write packets to file");
                NSLog (@"Converted %ld bytes", outputFilePacketPosition);

                // advance the output file write location
                outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
            }

            // clean up
            ExtAudioFileDispose(inputFile);
            AudioFileClose(outputFile);

            // show size in label
            NSLog (@"checking file at %@", outputPath);
            [self transMitFile:outputPath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath]) {
                NSError *fileManagerError = nil;
                unsigned long long fileSize = [[[NSFileManager defaultManager]
                    attributesOfItemAtPath:outputPath error:&fileManagerError] fileSize];
            }
        }

    Any suggestion? Thanks for your great help!
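
    Note: a likely culprit is the client data format; ExtAudioFile's kExtAudioFileProperty_ClientDataFormat must describe linear PCM, so kAudioFormatMPEGLayer3 will be rejected, and iOS ships no MP3 encoder at all. For AAC/ADTS output, one approach (a pointer, not tested code) is to keep a PCM client format on the input file and create the destination with ExtAudioFileCreateWithURL, passing kAudioFileAAC_ADTSType and an AAC AudioStreamBasicDescription, then pump buffers across with ExtAudioFileRead/ExtAudioFileWrite so the encoder performs the conversion.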

  • Java writes bad wave files

    - by Cliff
    I'm writing out wave files in Java using:

        AudioInputStream output = new AudioInputStream(
            new ByteArrayInputStream(rawPCMSamples),
            new AudioFormat(22000, 16, 1, true, false),
            rawPCMSamples.length);
        AudioSystem.write(output, AudioFileFormat.Type.WAVE, new FileOutputStream("somefile.wav"));

    And I get what appear to be corrupt wave files on OSX. They won't play from Finder. However, the same code behind a servlet, writing directly to the response stream and setting the Content-Type to audio/wave, seems to play fine in QuickTime. What gives?
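
    Note: the third AudioInputStream argument is a length in sample frames, not bytes. With 16-bit mono audio each frame is two bytes, so passing rawPCMSamples.length declares twice as much data as actually exists, corrupting the WAV header; rawPCMSamples.length / 2 (the byte count divided by the format's frame size) should produce a correct file. Streaming through a servlet may hide the problem simply because players tend to be more forgiving about declared lengths on a stream.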

  • Redirect logging output using custom logging handler

    - by mridang
    Hi guys, I'm using a module in my Python app that writes a lot of messages using the logging module. Initially I was using this in a console application, and it was pretty easy to get the logging output to display on the console using a console handler. Now I've developed a GUI version of my app using wxPython, and I'd like to display all the logging output in a custom control: a multi-line TextCtrl. Is there a way I could create a custom logging handler so I can redirect all the logging output there and display the logging messages wherever/however I want, in this case a wxPython app? Thanks
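
    Note: subclassing logging.Handler and overriding emit() is the standard route. A minimal sketch (the text_ctrl name is hypothetical; wx.CallAfter keeps the append on the GUI thread in case other threads log):

        import logging
        import wx

        class TextCtrlHandler(logging.Handler):
            # forwards each formatted log record to a multi-line wx.TextCtrl
            def __init__(self, text_ctrl):
                logging.Handler.__init__(self)
                self.text_ctrl = text_ctrl

            def emit(self, record):
                msg = self.format(record)
                # AppendText must run on the GUI thread
                wx.CallAfter(self.text_ctrl.AppendText, msg + "\n")

        # usage, once the frame has created its TextCtrl:
        # logging.getLogger().addHandler(TextCtrlHandler(my_text_ctrl))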

  • iPad MPMoviePlayerController only hearing audio, no videos!

    - by Steph Moreau
    I am currently rebuilding my app for the iPad. I would like to play videos sourced online. I display the information, and when I go to play the video all I get is the audio... no video is shown at all. My page looks exactly the same except that I have some "background" noise. These are the same videos I use in the iPhone app, and there they work perfectly. This is the code I call to play my videos:

        - (IBAction) playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:url];
            [moviePlayer play];
        }

    I am using this on a button on the right-side view of a splitViewController. I get the same result in the simulator as on an iPad. Not sure if I'm missing something, but if anyone can help it would be greatly appreciated!
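
    Note: as of iPhone OS 3.2 (the iPad release), MPMoviePlayerController no longer presents itself full-screen automatically; its view has to be put on screen by the caller, e.g. by setting moviePlayer.view.frame and calling [self.view addSubview:moviePlayer.view], or by wrapping it in MPMoviePlayerViewController. Code written against the older iPhone behaviour plays audio with no picture, which matches the symptom here.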

  • gdb+osx: no output when using printf/CFShow

    - by yairchu
    I attached to a program with gdb on OSX and I want to use CFShow in the gdb console, etc. However, nothing shows up. printf shows nothing as well:

        (gdb) call (int) printf("Hello\n")
        $10 = 6
        (gdb) call (int) printf("Hello World!\n")
        $11 = 13

    Apple suggests the following tip for when attaching with gdb, to make the output appear in the gdb console:

        (gdb) call (void) close(1)
        (gdb) call (void) close(2)
        (gdb) shell tty
        /dev/ttyp1
        (gdb) call (int) open("/dev/ttyp1", 2, 0)
        $1 = 1
        (gdb) call (int) open("/dev/ttyp1", 2, 0)
        $2 = 2

    In Xcode's gdb console, tty gives "not a tty", so I tried gdb in a terminal. There tty does work, but after redirecting stdout there's still no output. There is also no output if I redirect stdout to a file... :/ Any salvation?

  • Eclipse/adb error message in Vista "Failed to parse the output of adb version"

    - by watchman317
    I am trying to learn Android development, so I downloaded Eclipse Galileo and the Android SDK. However, whenever I start Eclipse, I get the error message "Failed to parse the output of adb version." In the Console/DDMS pane, the debug output reads:

        [2010-06-07 20:15:13 - ddms] Failed to reopen debug port for Selected Client to: 8700
        [2010-06-07 20:15:13 - ddms] Address family not supported by protocol family: bind
        java.net.SocketException: Address family not supported by protocol family: bind
            at sun.nio.ch.Net.bind(Native Method)
            at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
            at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
            at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
            at com.android.ddmlib.MonitorThread.reopenDebugSelectedPort(Unknown Source)
            at com.android.ddmlib.MonitorThread.run(Unknown Source)
        [2010-06-07 20:15:17 - adb] Failed to parse the output of 'adb version'

    I am running Eclipse Galileo, have the most recent Android SDK downloaded, and am running Windows Vista 32-bit SP2. I am sure that the Android SDK path is correct and that all the files are there. I would appreciate any assistance anyone could provide.

    P.S. If anyone could direct me to any useful Android development resources, I would appreciate it.

  • How to view output .mp files from Functional MetaPost

    - by Jared Updike
    I'm interested in using Functional MetaPost on Mac OS X: http://cryp.to/funcmp/ I'm looking for a tutorial like http://haskell.org/haskellwiki/Haskell_in_5_steps but for a trivial FuncMP example, i.e. using GHC, I can compile something simple such as:

        import FMP

        myPicture = text "blah"

        main = generate "foo" 1 myPicture

    but I can't figure out how to view this foo.1.mp output. (It gives a runtime error about not finding 'virmp'; my MetaPost binary is 'mpost'; I can't figure out how to override this Parameter or what my .FunMP file is or should be doing...) I can run mpost on that, but the output (foo.1.1) is what, PostScript? EPS? How do I use this? (I imagine I just need a simple LaTeX file with an EPS figure in it or something...) Preferably, I'd like to generate output (.ps or .pdf that I can view) so I can actually get somewhere with Functional MetaPost: learning it, playing with it, not banging my head against paths and binaries and shell commands.
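
    Note: MetaPost output such as foo.1.1 is Encapsulated PostScript. One common way to view it (a suggestion, with the file names assumed from the example above) is to rename the file with an .mps extension, say foo-1.mps, and include it from a minimal LaTeX document with \usepackage{graphicx} and \includegraphics{foo-1.mps}; pdflatex understands .mps figures natively and produces a viewable PDF directly.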

  • Problem setting output flags for ALU in "Nand to Tetris" course

    - by MahlerFive
    Although I tagged this homework, it is actually for a course which I am doing on my own for free. Anyway, the course is called "From Nand to Tetris" and I'm hoping someone here has seen or taken the course so I can get some help. I am at the stage where I am building the ALU with the supplied HDL language. My problem is that I can't get my chip to compile properly. I am getting errors when I try to set the output flags for the ALU. I believe the problem is that I can't subscript any intermediate variable, since when I just try setting the flags to true or false based on some random variable (say an input flag), I do not get the errors. I know the problem is not with the chips I am trying to use, since I am using all builtin chips. Here is my ALU chip so far:

        /**
         * The ALU. Computes a pre-defined set of functions out = f(x,y)
         * where x and y are two 16-bit inputs. The function f is selected
         * by a set of 6 control bits denoted zx, nx, zy, ny, f, no.
         * The ALU operation can be described using the following pseudocode:
         *     if zx=1 set x = 0        // 16-bit zero constant
         *     if nx=1 set x = !x       // Bit-wise negation
         *     if zy=1 set y = 0        // 16-bit zero constant
         *     if ny=1 set y = !y       // Bit-wise negation
         *     if f=1  set out = x + y  // Integer 2's complement addition
         *     else    set out = x & y  // Bit-wise And
         *     if no=1 set out = !out   // Bit-wise negation
         *
         * In addition to computing out, the ALU computes two 1-bit outputs:
         *     if out=0 set zr = 1 else zr = 0  // 16-bit equality comparison
         *     if out<0 set ng = 1 else ng = 0  // 2's complement comparison
         */
        CHIP ALU {
            IN  // 16-bit inputs:
                x[16], y[16],
                // Control bits:
                zx, // Zero the x input
                nx, // Negate the x input
                zy, // Zero the y input
                ny, // Negate the y input
                f,  // Function code: 1 for add, 0 for and
                no; // Negate the out output

            OUT // 16-bit output
                out[16],
                // ALU output flags
                zr, // 1 if out=0, 0 otherwise
                ng; // 1 if out<0, 0 otherwise

            PARTS:
            // Zero the x input
            Mux16( a=x, b=false, sel=zx, out=x2 );
            // Zero the y input
            Mux16( a=y, b=false, sel=zy, out=y2 );
            // Negate the x input
            Not16( in=x, out=notx );
            Mux16( a=x, b=notx, sel=nx, out=x3 );
            // Negate the y input
            Not16( in=y, out=noty );
            Mux16( a=y, b=noty, sel=ny, out=y3 );
            // Perform f
            Add16( a=x3, b=y3, out=addout );
            And16( a=x3, b=y3, out=andout );
            Mux16( a=andout, b=addout, sel=f, out=preout );
            // Negate the output
            Not16( in=preout, out=notpreout );
            Mux16( a=preout, b=notpreout, sel=no, out=out );
            // zr flag
            Or8way( in=out[0..7], out=zr1 ); // PROBLEM SHOWS UP HERE
            Or8way( in=out[8..15], out=zr2 );
            Or( a=zr1, b=zr2, out=zr );
            // ng flag
            Not( in=out[15], out=ng );
        }

    So the problem shows up when I am trying to send a subscripted version of 'out' to the Or8Way chip. I've tried using a different variable than 'out', but with the same problem. Then I read that you are not able to subscript intermediate variables. I thought maybe if I sent the intermediate variable to some other chip, and that chip subscripted it, it would solve the problem, but it has the same error. Unfortunately I just can't think of a way to set the zr and ng flags without subscripting some intermediate variable, so I'm really stuck! Just so you know, if I replace the problematic lines with the following, it will compile (but not give the right results, since I'm just using some random input):

        // zr flag
        Not( in=zx, out=zr );
        // ng flag
        Not( in=zx, out=ng );

    Anyone have any ideas?

    Edit: Here is the appendix of the book for the course, which specifies how the HDL works. Specifically, look at section 5, which talks about buses and says: "An internal pin (like v above) may not be subscripted".

    Edit: Here is the exact error I get: "Line 68, Can't connect gate's output pin to part". The error message is sort of confusing, though, since that does not seem to be the actual problem. If I just replace "Or8way( in=out[0..7], out=zr1 );" with "Or8way( in=false, out=zr1 );" it will not generate this error, which is what led me to look up the appendix and find that the out variable, since it was derived as intermediate, could not be subscripted.
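
    Note: the usual workaround is to fan the final Mux16 out to several named pins at once; sub-bus ranges are legal on a part's output connections even though internal pins may not be subscripted afterwards. Something along the lines of Mux16( a=preout, b=notpreout, sel=no, out=out, out[0..7]=zlo, out[8..15]=zhi, out[15]=ng ) hands the flag logic the slices it needs without ever subscripting an internal pin.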

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert> like this:

        <rich:panel header="my head">
            <a4j:outputPanel ajaxRendered="true">
                <rich:insert src="#{MyBacking.myPath}" highlight="groovy" />
            </a4j:outputPanel>
        </rich:panel>

    If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the larger the wait for content. The recommendation in the answer to another related question is to introduce paging. I will, but the question is: why does it take so long to output 60k of text using JSF/RichFaces? That is, reading off a local disk on a Windows XP SP2 PC; I can see from the log that the data has already been written to disk from the network. Other scripting languages appear to be faster than this. Is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks

  • Disable debug output in libxml2 and xmlsec

    - by ereOn
    Hi, in my software I use libxml2 and xmlsec to manipulate (obviously) XML data structures. I mainly use XSD schema validation, and so far it works well. When the data structure input by the client doesn't match the XSD schema, libxml2 (or xmlsec) outputs some debug strings to the console. Here is an example:

        Entity: line 1: parser error : Start tag expected, '<' not found
        DUMMY<?xml
        ^

    While those strings are useful for debugging purposes, I don't want them to appear and pollute the console output in the released software. So far, I couldn't find an official way of doing this. Do you know how to suppress the debug output or (even better) redirect it to a custom function? Many thanks.
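
    Note: libxml2 routes these messages through replaceable handlers; xmlSetGenericErrorFunc (and, for structured errors, xmlSetStructuredErrorFunc) installs a callback that receives the messages instead of stderr. Passing a no-op callback silences them, and a real one redirects them wherever the application wants.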

  • STAThread and Process output capture in c#

    - by alex
    Hi: this is a strange problem I encountered. I have a Windows application written in C# to do testing. It has an MDI parent form that hosts a few child forms. One of the forms launches test scripts by creating processes and captures the scripts' output to a text box. Another form opens the serial port and monitors the status of the device I am working on (like a shell). If I run both of them together, the output of the script only appears in the text box after the test is done. However, if I don't open the serial port form, the output of the script is captured in real time. Does anyone know what's causing the problem? I notice the onDataReceived event handler for the serial port form has an [STAThread] attribute on it. Will this cause the serial port thread to have a higher priority than other processes? Thanks in advance.

  • How to burn an Audio CD programmatically in Mac OS X

    - by Adion
    All the info I can find about burning CDs is for Windows, or is about full programs that burn CDs. I would, however, like to be able to burn an Audio CD directly from within my program. I don't mind using Cocoa or Carbon, or, if there are no APIs available to do this directly, a command-line program that can take a wav/aiff file as input would be a possibility too, if it can be distributed with my application. Because it will be used to burn DJ mixes to CD, it would also be great if it is possible to create different tracks without a gap between them.
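
    Note: on Mac OS X the Cocoa-level answer is the DiscRecording framework; a DRBurn object writes a layout of DRTrack objects, and for Red Book audio the per-track properties include the pre-gap length, which is how gapless mix tracks would be arranged. The bundled drutil command-line tool can also burn audio and could be shelled out to, though the framework gives far more control. (This is a pointer to the relevant APIs, not tested code.)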

  • How to send Sound Stream of a file from disk over network using FMOD?

    - by chris
    Hey everyone, I'm currently working on a project in college. My application should do some things with audio files from my computer. I'm using FMOD as the sound library. The problem I have is that I don't know how to access the data of a sound file (which was opened and started using the FMOD methods) in order to stream it over the network for playback on another PC on the net. Has anyone had a similar problem? Any help is appreciated. Thanks in advance. chris

  • Highlighting effect on text and/or images, synchronized with audio

    - by Irfan Mulic
    I am looking at how to approach the following problem: we have an application that displays text along with audio-recorded material. We use a Browser Control (Internet Explorer) in a Delphi app to do this. We respond to events in Delphi code, setting innerHTML for elements when we have to update the style... Now the request is to add an option to dynamically move the cursor or dynamically highlight the words spoken in the paragraph. It doesn't need to match the exact spoken word perfectly, so we will have to dynamically update the position of the highlighted word based on a timer or something (because it is not text-to-speech). What would be the most practical and easy approach to this kind of problem? All answers are greatly appreciated. Thanks.

  • Sharing output streams through a JNI interface

    - by Chris Conway
    I am writing a Java application that uses a C++ library through a JNI interface. The C++ library creates objects of type Foo, which are duly passed up through JNI to Java. Suppose the library has an output function void Foo::print(std::ostream &os) and I have a Java OutputStream out. How can I invoke Foo::print from Java so that the output appears on out? Is there any way to coerce the OutputStream to a std::ostream in the JNI layer? Can I capture the output in a buffer in the JNI layer and then copy it into out?
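
    Note: there is no direct coercion between an OutputStream and a std::ostream, so the buffer route is the practical one: the native method can call print into a std::ostringstream, copy the str() result into a Java byte array with the JNI NewByteArray/SetByteArrayRegion calls (or NewStringUTF if the text is known to be UTF-8-safe), return it, and let the Java side write it to out. A streaming variant would mean writing a custom std::streambuf whose overflow/sync methods call back into Java; it works, but is considerably more fiddly.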

  • Storing Shell Output

    - by Emil Radoncik
    Hello everybody, I am trying to read the output of a shell command into a string buffer. The reading and appending of the values is OK, except for the fact that the appended values are every second line of the shell output. For example, if I have 10 rows of shell output, this code only stores rows 1, 3, 5, 7, 9. Can anyone point out why I am not able to catch every row with this code? Any suggestion or idea is welcome :)

        import java.io.*;

        public class Linux {
            public static void main(String args[]) {
                try {
                    StringBuffer s = new StringBuffer();
                    Process p = Runtime.getRuntime().exec("cat /proc/cpuinfo");
                    BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    while (input.readLine() != null) {
                        //System.out.println(line);
                        s.append(input.readLine() + "\n");
                    }
                    System.out.println(s.toString());
                } catch (Exception err) {
                    err.printStackTrace();
                }
            }
        }
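
    Note: readLine() is called twice per iteration here, once in the loop condition and once inside append(), and each call consumes a line; that is exactly why every second row disappears. Reading once per iteration into a variable keeps them all: String line; while ((line = input.readLine()) != null) s.append(line + "\n");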

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit a JavaStreamingAudio class from the FreeTTS package, then bypass the write method, which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert to float and try to process, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
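
    Note: static usually means the byte-to-sample reassembly is off (wrong endianness, wrong offset, or treating each byte as its own sample). Assuming the buffered data is 16-bit signed little-endian PCM, a common SourceDataLine format, the reassembly looks like the sketch below; it is written in Python for brevity, but the same pairing and scaling applies in Java:

        import struct

        def pcm16_to_floats(raw_bytes):
            # two bytes per 16-bit sample; '<' forces little-endian interpretation
            count = len(raw_bytes) // 2
            samples = struct.unpack("<%dh" % count, raw_bytes[:count * 2])
            # scale from [-32768, 32767] into [-1.0, 1.0)
            return [s / 32768.0 for s in samples]

    For stereo data the samples alternate left/right within each frame, so they would additionally need de-interleaving into per-channel arrays.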

  • PHP JSON encode output number as string

    - by mitch
    I am trying to output a JSON string using PHP and MySQL, but the latitude and longitude are output as strings, with quotes around the values. This causes an issue when I try to add the markers to a Google map. Here is my code:

        $sql = mysql_query('SELECT * FROM markers WHERE address !=""');
        $results = array();
        while ($row = mysql_fetch_array($sql)) {
            $results[] = array(
                'latitude'   => $row['lat'],
                'longitude'  => $row['lng'],
                'address'    => $row['address'],
                'project_ID' => $row['project_ID'],
                'marker_id'  => $row['marker_id']
            );
        }
        $json = json_encode($results);
        echo "{\"markers\":";
        echo $json;
        echo "}";

    Here is the expected output:

        {"markers":[{"latitude":0.000000,"longitude":0.000000,"address":"2234 2nd Ave, Seattle, WA","project_ID":"7","marker_id":"21"}]}

    Here is the output that I am getting:

        {"markers":[{"latitude":"0.000000","longitude":"0.000000","address":"2234 2nd Ave, Seattle, WA","project_ID":"7","marker_id":"21"}]}

    Notice the quotes around the latitude and longitude values.
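
    Note: the MySQL client returns every column as a PHP string, and json_encode encodes exactly what it is given. Casting the two values before encoding, 'latitude' => (float)$row['lat'] and likewise for longitude, yields bare numbers; on PHP 5.3.3+, json_encode($results, JSON_NUMERIC_CHECK) is a blanket alternative, though it will also convert any other numeric-looking strings such as project_ID.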

  • Emacs shell output buffer height

    - by jimbo
    Hi, I have the following in my .emacs file (thanks to an SOer, nikwin), which evaluates the current buffer content and displays the output in another buffer:

        (defun shell-compile ()
          (interactive)
          (save-buffer)
          (shell-command (concat "python " (buffer-file-name))))

        (add-hook 'python-mode-hook
                  (lambda ()
                    (local-set-key (kbd "\C-c\C-c") 'shell-compile)))

    The problem is that the output window takes half the Emacs screen. Is there any way to set the output window's height to something smaller? I googled for 30 minutes or so and could not find anything that worked. Thanks in advance.

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck on the fade effect. I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?
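
    Note: a sketch of one way this might look, assuming PyQt4 exposes Phonon.VolumeFaderEffect (the Qt counterpart of KDE's KVolumeFader) with fadeIn/fadeOut methods; treat the API names as unverified assumptions. A true crossfade would also need two MediaObject/Path pairs, one fading out while the other fades in, since a single pipeline plays only one source at a time:

        from PyQt4.phonon import Phonon

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                self.path = Phonon.createPath(self.m_media, audioOutput)
                # assumed API: insert a volume fader effect into the audio path
                self.fader = Phonon.VolumeFaderEffect(self)
                self.path.insertEffect(self.fader)

        def fade_to_next(self, ms=3000):
            # ramp the current track down over ms milliseconds...
            self.fader.fadeOut(ms)
            # ...then, once ms have elapsed (e.g. via QTimer.singleShot):
            # self.enqueueNextSource(); self.fader.fadeIn(ms)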

  • How to fetch output when calling R using QProcess or system

    - by SYK
    Hi experts, I would like to execute an R script simply as R --file=x.R. It runs well on the command line. However, when I try the system call in C++ with QProcess::execute("R --file=x.R"); or system("R --file=x.R"); the program R runs and quits, but I can't see the output the program is supposed to generate. If a program uses no stdout (such as R), how do I fetch the output after a system call, either as an output file or in the program's own console? Thanks for your time.
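
    Note: both QProcess::execute and system() let the child inherit the parent's standard output, and a GUI app usually has no console attached, so R's output goes nowhere visible. Starting the process with start() and reading the channels back afterwards captures everything; a minimal sketch (shown with PyQt4, where QProcess carries the same API as in C++):

        from PyQt4.QtCore import QProcess

        proc = QProcess()
        proc.start("R", ["--file=x.R"])
        proc.waitForFinished(-1)  # block until the script exits
        stdout_text = str(proc.readAllStandardOutput())
        stderr_text = str(proc.readAllStandardError())
        print(stdout_text)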

  • cannot output a json encoded dict containing accents (noob inside)

    - by user296546
    Hi all, here is a fairly simple example which has been driving me nuts for a couple of days. Consider the following script:

        # -*- coding: utf-8 -*-
        from json import dumps as json_dumps

        machaine = u"une personne émérite"
        print(machaine)
        output = {}
        output[1] = machaine
        jsonoutput = json_dumps(output)
        print(jsonoutput)

    The result of this from the CLI:

        une personne émérite
        {"1": "une personne \u00e9m\u00e9rite"}

    I don't understand why there is such a difference between the two strings. I have been trying all sorts of encode, decode, etc., but I can't seem to find the right way to do it. Does anybody have an idea? Thanks in advance. Matthieu
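
    Note: nothing is actually broken here; json.dumps defaults to ensure_ascii=True, which escapes every non-ASCII character so the output stays 7-bit safe, and any JSON parser will turn \u00e9 back into é. To keep the accents visible in the serialized text:

        jsonoutput = json_dumps(output, ensure_ascii=False)
        print(jsonoutput)  # {"1": "une personne émérite"}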

  • Convert Audio File to text using System.Speech

    - by Kushal Kalambi
    I am looking to convert a .wav file, recorded through an Android phone at 16000 Hz, to text using C#, namely the System.Speech namespace. My code is mentioned below:

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        label1.Text = result.Text;

    This works perfectly with the sample .wav "Hello world" file. However, when I record something on the phone and try to convert it on the PC, the converted text is nowhere close to what I recorded. Is there some way to make sure the audio file is transcribed accurately?

  • Playing audio from a wav file in iPhone SpeakHere example

    - by Mo
    I'm working with the iPhone SpeakHere example, and I would like to be able to play audio from either the mic (as in the example) or from a wav file. I have working code to play from a particular wav file, which looks like this:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"basketBall" ofType:@"wav"];
        AVAudioPlayer *theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
        theAudio.delegate = self;
        [theAudio play];

    So I'm fine with actually getting the wav to play in the application (I can hook it up to a button, etc.), but I would also like it to behave the same way pushing the "Play" button does after recorded speech, in that it should be connected to the same visualization (which I have modified quite a bit, but it essentially shows the current volume, among other things). Thanks for your help!
