Search Results

Search found 21392 results on 856 pages for 'audio output'.

Page 67/856 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • File command output differs for the same file on different machines

    - by Coka
    I get different output from the file command on the same file (I checked the inode) from different machines. One machine runs SUSE 10 SP3 and the other RHEL 4.

        machine1> file x.tcl
        x.tcl: ASCII English text
        machine2> file x.tcl
        x.tcl: data

    Even in the vi editor the same file looks different on each machine. Any clue? One more thing - a third machine, also SUSE 10 SP3, works fine. Is this a machine issue?
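
    A first diagnostic step (a hedged sketch, not a fix) is to rule out differing file(1) versions or magic databases and to confirm the bytes really are identical on both machines:

        #!/bin/sh
        # Compare file(1) itself before trusting its verdict.
        file --version             # differing versions ship differing magic databases
        md5sum x.tcl               # run on both machines: same checksum = same bytes
        file x.tcl                 # the verdict under test
        head -c 64 x.tcl | od -c   # look for a BOM or odd line endings at the start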

    Read the article

  • Attaching strace to 100% CPU Apache process - output

    - by knef
    I am having a problem with Apache2 spawning processes that use 100% CPU. Attaching strace to one such process sometimes produces no output, and sometimes gives this:

        2672 17:18:07 poll([{fd=14, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
        2672 17:18:07 write(14, "\236\3\0\0\3SELECT FLOOR(((price_index."..., 930) = 930
        2672 17:18:07 read(14, "\1\0\0\1\2\33\0\0\2\3def\0\0\0\5range\0\f?\0\r\0\0\0\10\0"..., 16384) = 85

    I would be grateful for any ideas on interpreting the above.
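
    For what it's worth, the trace itself looks like database traffic: a zero-timeout poll on fd 14, a query written, a small result read. A hedged sketch for gathering more context, using only standard strace/lsof options:

        # per-syscall timestamps and durations, following forks
        strace -f -tt -T -p 2672 -o apache-trace.log
        # confirm what fd 14 actually is (a MySQL socket, if the guess is right)
        lsof -p 2672 | awk '$4 ~ /^14/'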

    Read the article

  • Incorrect units in iotop output

    - by brodie
    iotop is behaving strangely on an openSUSE 11.2 server. All of a sudden it started reporting output in the wrong units: kilobytes per second are now terabytes per second, gigabytes now petabytes. This server is also having stability issues, so I'm curious whether whatever the system is misreporting to iotop is related to the other problems. Has anyone else seen similar behaviour?
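
    One way to cross-check (a hedged sketch; /proc/<pid>/io requires task I/O accounting enabled in the kernel) is to read the raw per-process counters that iotop derives its numbers from:

        PID=1234                 # hypothetical busy process
        cat /proc/$PID/io        # read_bytes / write_bytes, in plain bytes
        sleep 5
        cat /proc/$PID/io        # delta over 5 s = true rate, no unit scaling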

    Read the article

  • Unable to capture standard output of process using Boost.Process

    - by Chris Kaminski
    I am currently using Boost.Process from the Boost sandbox, and am having issues getting it to capture my standard output properly; I'm wondering if someone can give me a second pair of eyes on what I might be doing wrong. I'm trying to take thumbnails out of RAW camera images using DCRAW (latest version) and capture them for conversion to Qt QImages. The process launch function:

        namespace bf = ::boost::filesystem;
        namespace bp = ::boost::process;

        QImage DCRawInterface::convertRawImage(string path) {
            // command line: dcraw -e -c <srcfile> -> piped to stdout
            if ( bf::exists( path ) ) {
                std::string exec = "bin\\dcraw.exe";
                std::vector<std::string> args;
                args.push_back("-v");
                args.push_back("-c");
                args.push_back("-e");
                args.push_back(path);
                bp::context ctx;
                ctx.stdout_behavior = bp::capture_stream();
                bp::child c = bp::launch(exec, args, ctx);
                bp::pistream &is = c.get_stdout();
                ofstream output("C:\\temp\\testcfk.jpg");
                streamcopy(is, output);
            }
            return (NULL);
        }

        inline void streamcopy(std::istream& input, std::ostream& out) {
            char buffer[4096];
            int i = 0;
            while (!input.eof()) {
                memset(buffer, 0, sizeof(buffer));
                int bytes = input.readsome(buffer, sizeof buffer);
                out.write(buffer, bytes);
                i++;
            }
        }

    Invoking the converter:

        DCRawInterface DcRaw;
        DcRaw.convertRawImage("test/CFK_2439.NEF");

    The goal is simply to verify that I can copy the input stream to an output file. Currently, if I comment out the following line:

        args.push_back("-c");

    then the thumbnail is written by DCRAW to the source directory with a name of CFK_2439.thumb.jpg, which proves to me that the process is getting invoked with the right arguments. What's not happening is connecting to the output pipe properly. FWIW: I'm performing this test on Windows XP under Eclipse 3.5 / latest MinGW (GCC 4.4).
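
    Two things stand out as hedged guesses: the ofstream is opened in text mode (on Windows that mangles binary JPEG data), and readsome() can legitimately return 0 before EOF. A minimal sketch using the modern Boost.Process API (shipped with Boost 1.64+, a different API from the sandbox version above):

        #include <boost/process.hpp>
        #include <fstream>
        namespace bp = boost::process;

        int main() {
            bp::ipstream out;   // pipe attached to the child's stdout
            bp::child c("bin\\dcraw.exe", "-c", "-e", "test/CFK_2439.NEF",
                        bp::std_out > out);
            // std::ios::binary matters on Windows: no CRLF translation
            std::ofstream jpg("C:\\temp\\testcfk.jpg", std::ios::binary);
            jpg << out.rdbuf();  // drain the whole stream
            c.wait();
            return c.exit_code();
        }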

    Read the article

  • File Output using Gforth

    - by sheepez
    As a first project I have been writing a short program to render the Mandelbrot fractal. I have got to the point of trying to output my results to a file (e.g. .bmp or .ppm) and got stuck. I have not really found any examples of exactly what I am trying to do, but I have found two examples of code that copy from one file to another. The examples in the Gforth documentation (Section 3.27) did not work for me (WinXP); in fact they seemed to open and create files but not write to them properly. This is the Gforth documentation example that copies the contents of one file to another:

        0 Value fd-in
        0 Value fd-out
        : open-input ( addr u -- ) r/o open-file throw to fd-in ;
        : open-output ( addr u -- ) w/o create-file throw to fd-out ;
        s" foo.in" open-input
        s" foo.out" open-output
        : copy-file ( -- )
          begin
            line-buffer max-line fd-in read-line throw
          while
            line-buffer swap fd-out write-line throw
          repeat ;

    I found this example (http://rosettacode.org/wiki/File_IO#Forth), which does work. The main problem is that I can't isolate the part that writes to a file and have it still work. The main confusion is that >r doesn't seem to consume TOS as I might expect.

        : copy-file2 ( a1 n1 a2 n2 -- )
          r/o open-file throw >r
          w/o create-file throw
          r>
          begin
            pad maxstring 2 pick read-file throw ?dup
          while
            pad swap 3 pick write-file throw
          repeat
          close-file throw
          close-file throw ;
        \ Invoke it like this:
        s" output.txt" s" input.txt" copy-file2

    I would be very grateful if someone could explain exactly how the open-file, create-file, read-file and write-file words actually work, as my investigation keeps resulting in somewhat bizarre stacks. Any clues as to why the Gforth examples do not work might help too. In summary, I want to output from Gforth to a file and so far have been thwarted. Can anyone offer any help?

    Thank you Vijay, I think that I understand the example that you gave. However, when I try to use something like this (which I think is similar):

        0 value test-file
        : write-test
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line ;

    I get ok but nothing is put into the file. Have I made a mistake?

    It seems that the problem was due to not flushing the relevant buffers or explicitly closing the file. Adding something like test-file flush-file throw or test-file close-file throw between write-line and ; makes it work. Thanks again Vijay for helping.
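
    Pulling the thread's own resolution together, a minimal self-contained Gforth sketch that creates a file, writes a line, and closes it (closing also flushes; write-line returns an ior, hence the extra throw):

        0 value test-file
        : write-test ( -- )
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line throw
          test-file close-file throw ;
        write-test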

    Read the article

  • How to play audio in a Java application

    - by user577829
    I'm making a Java application and I need to play audio. I'm playing mainly small sound files of my cannon firing (it's a cannon shooting game) and the projectiles exploding, though I plan on having looping background music. I have found two different methods to accomplish this, but neither works how I want. The first method is literally a method:

        public void playSoundFile(File file) {
            // http://java.ittoolbox.com/groups/technical-functional/java-l/sound-in-an-application-90681
            try {
                // get an AudioInputStream
                AudioInputStream ais = AudioSystem.getAudioInputStream(file);
                // get the AudioFormat for the AudioInputStream
                AudioFormat audioformat = ais.getFormat();
                System.out.println("Format: " + audioformat.toString());
                System.out.println("Encoding: " + audioformat.getEncoding());
                System.out.println("SampleRate: " + audioformat.getSampleRate());
                System.out.println("SampleSizeInBits: " + audioformat.getSampleSizeInBits());
                System.out.println("Channels: " + audioformat.getChannels());
                System.out.println("FrameSize: " + audioformat.getFrameSize());
                System.out.println("FrameRate: " + audioformat.getFrameRate());
                System.out.println("BigEndian: " + audioformat.isBigEndian());
                // ULAW/ALAW to PCM conversion
                if ((audioformat.getEncoding() == AudioFormat.Encoding.ULAW)
                        || (audioformat.getEncoding() == AudioFormat.Encoding.ALAW)) {
                    AudioFormat newformat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                            audioformat.getSampleRate(),
                            audioformat.getSampleSizeInBits() * 2,
                            audioformat.getChannels(),
                            audioformat.getFrameSize() * 2,
                            audioformat.getFrameRate(), true);
                    ais = AudioSystem.getAudioInputStream(newformat, ais);
                    audioformat = newformat;
                }
                // check for a supported output line
                DataLine.Info datalineinfo = new DataLine.Info(SourceDataLine.class, audioformat);
                if (!AudioSystem.isLineSupported(datalineinfo)) {
                    //System.out.println("Line matching " + datalineinfo + " is not supported.");
                } else {
                    //System.out.println("Line matching " + datalineinfo + " is supported.");
                    // open the sound output line
                    SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(datalineinfo);
                    sourcedataline.open(audioformat);
                    sourcedataline.start();
                    // copy data from the input stream to the output data line
                    int framesizeinbytes = audioformat.getFrameSize();
                    int bufferlengthinframes = sourcedataline.getBufferSize() / 8;
                    int bufferlengthinbytes = bufferlengthinframes * framesizeinbytes;
                    byte[] sounddata = new byte[bufferlengthinbytes];
                    int numberofbytesread = 0;
                    while ((numberofbytesread = ais.read(sounddata)) != -1) {
                        sourcedataline.write(sounddata, 0, numberofbytesread);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The problem with this is that my entire program stops until the sound file is finished, or at least nearly finished. The second method is this:

        File file = new File("Launch1.wav");
        AudioClip clip;
        try {
            clip = JApplet.newAudioClip(file.toURL());
            clip.play();
        } catch (Exception e) {
            e.getMessage();
        }

    The problem I have here is that the sound file either ends early or doesn't play at all, depending on where I place the code. Is there any way to play sound without the above mentioned problems? Am I doing something wrong? Any help is greatly appreciated.
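
    A hedged sketch of a third route: javax.sound.sampled.Clip plays on its own daemon thread, so it neither blocks the caller (the first method's problem) nor depends on the calling scope staying alive (a likely cause of the second method's early cutoff), and it loops natively:

        import javax.sound.sampled.*;
        import java.io.File;

        public class SoundPlayer {
            // Returns the playing clip so the caller can stop() it later.
            public static Clip play(File file, boolean loop) throws Exception {
                AudioInputStream ais = AudioSystem.getAudioInputStream(file);
                Clip clip = AudioSystem.getClip();
                clip.open(ais);
                if (loop) clip.loop(Clip.LOOP_CONTINUOUSLY); // background music
                else clip.start();                           // one-shot cannon fire
                return clip;
            }

            public static void main(String[] args) throws Exception {
                play(new File("Launch1.wav"), false);
                Thread.sleep(3000); // keep the JVM alive long enough to hear it
            }
        }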

    Read the article

  • Running a command that produces no output with SharpSSH

    - by Paolo Tedesco
    I want to run a command using ssh. I am using the SharpSSH library, as in this example:

        using System;
        using Tamir.SharpSsh;

        class Program {
            static void Main(string[] args) {
                string hostName = "host.foo.com";
                string userName = "user";
                string privateKeyFile = @"C:\privatekey.private";
                string privateKeyPassword = "xxx";
                SshExec sshExec = new SshExec(hostName, userName);
                sshExec.AddIdentityFile(privateKeyFile, privateKeyPassword);
                sshExec.Connect();
                string command = string.Join(" ", args);
                Console.WriteLine("command = {0}", command);
                string output = sshExec.RunCommand(command);
                int code = sshExec.ChannelExec.getExitStatus();
                sshExec.Close();
                Console.WriteLine("code = {0}", code);
                Console.WriteLine("output = {0}", output);
            }
        }

    My problem is that when the command I run produces no output, I get -1 as the return code, instead of the code returned by the command on the remote machine. Has someone encountered this problem, or am I doing something wrong?
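
    A hedged guess: getExitStatus() is only meaningful once the exec channel has closed, and with no output to drain the call may happen too early, so the underlying JSch default of -1 leaks through. A sketch of a workaround worth trying, assuming your SharpSSH version exposes the RunCommand overload that fills stdout/stderr by reference and returns the status itself:

        string stdOut = "", stdErr = "";
        int code = sshExec.RunCommand(command, ref stdOut, ref stdErr);
        Console.WriteLine("code = {0}", code);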

    Read the article

  • Any screen capture software that captures webcam and microphone inputs too?

    - by mohanr
    I am going to conduct a user study. Apart from capturing the screen while the user is interacting with the system, I also want to capture video/audio of the user. Is there any software that, in addition to capturing the screen, also overlays it with the webcam/microphone inputs? The goal is to capture the complete experience of the user: key/mouse interactions with the system along with their facial/vocal responses. I know that I could run screen-capture software and separate webcam audio/video capture alongside it, then try to sync/overlay both streams with timestamps. But I am going to be dealing with probably several hundred hours of data, so I am looking for a tool that can streamline the process as much as possible and help me keep my sanity at the end of it. Thanks,

    Read the article

  • oracle pl/sql bug: can't put_line more than 2000 characters

    - by FrustratedWithFormsDesigner
    Has anyone else noticed this phenomenon where dbms_output.put_line is unable to print more than 2000 characters at a time? The script is:

        set serveroutput on size 100000;
        declare
          big_str varchar2(2009);
        begin
          for i in 1..2009 loop
            big_str := big_str||'x';
          end loop;
          dbms_output.put_line(length(big_str));
          dbms_output.put_line(big_str);
        end;
        /

    I copied and pasted the output into an editor (Notepad++), which told me there were only 2000 characters, not the 2009 that I think should have been pasted. This also happens with a few of my test scripts - only 2000 characters get printed. I have a workaround to print like this:

        dbms_output.put_line(length(big_str));
        dbms_output.put_line(substr(big_str,1,1999));
        dbms_output.put_line(substr(big_str,2000));

    This adds new lines to the output, which makes it hard to read when the text you're working with is preformatted. Has anyone else noticed this? Is it really a bug or some sort of obscure feature? Is there a better workaround? Is there any other information on this out there? Oracle version is 10.2.0.3.0, using PL/SQL Developer (from Allround Automations).
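
    For reference, a hedged sketch that generalizes the workaround: chunk with put (not put_line) so no extra line breaks are inserted, emitting a single new_line at the end. Whether the 2000-character ceiling sits in the server or in the client's fetch of the output buffer is client-dependent, so this may or may not clear it:

        create or replace procedure put_long(p_str in varchar2) is
          c_chunk constant pls_integer := 1000;
        begin
          for i in 0 .. trunc((length(p_str) - 1) / c_chunk) loop
            dbms_output.put(substr(p_str, i * c_chunk + 1, c_chunk));
          end loop;
          dbms_output.new_line;
        end;
        /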

    Read the article

  • Programming R/Sweave for proper \Sexpr output

    - by deoksu
    Hi, I'm having a bit of a problem programming R for Sweave, and the #rstats twitter group often points here, so I thought I'd put this question to the SO crowd. I'm an analyst - not a programmer - so go easy on me; it's my first post.

    Here's the problem: I am drafting a survey report in Sweave with R and would like to report the marginal returns in line using \Sexpr{}. For example, rather than saying:

        Only 14% of respondents said 'X'.

    I want to write the report like this:

        Only \Sexpr{p.mean(variable)}$\%$ of respondents said 'X'.

    The problem is that Sweave() converts the result of the expression in \Sexpr{} to a character string, which means that the output from the expression in R and the output that appears in my document are different. For example, above I use the function p.mean:

        p.mean <- function (x) {
          options(digits=1)
          mmm <- weighted.mean(x, weight=weight, na.rm=T)
          print(100*mmm)
        }

    In R, the output looks like this:

        > p.mean(variable)
        [1] 14

    but when I use \Sexpr{p.mean(variable)}, I get an unrounded character string (in this case 13.5857142857143) in my document. I have tried to limit the output of my function to digits=1 in the global environment, in the function itself, and in various commands. The document only seems to contain the character transformation of the expression's value, not what R prints:

        > as.character(p.mean(variable))
        [1] 14
        [1] "13.5857142857143"

    Does anyone know what I can do to limit the digits printed in the LaTeX file, either by reprogramming the R function or with a setting in Sweave or \Sexpr{}? I'd greatly appreciate any help you can give. Thanks, David
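
    A hedged sketch of the likely fix: print() only controls what appears on the console, while \Sexpr{} uses the function's return value, so the function should return an already formatted string. (Note also that weighted.mean's weights argument is named w; the w = weight default below keeps the question's reliance on a global weight vector.)

        p.mean <- function(x, w = weight) {
          m <- weighted.mean(x, w = w, na.rm = TRUE)
          sprintf("%.0f", 100 * m)   # returns e.g. "14", already rounded
        }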

    Read the article

  • Silverlight 4 - encoding PCM data from the microphone

    - by Richard
    Hi, I've written a basic SL4 application to capture audio data from the microphone using CaptureSource. The trouble is, it's raw PCM output - which means huge and uncompressed. Given that I need this application to run purely within a SL4 environment, how can I compress the PCM audio data into something that can be delivered to a remote server more easily? In conversation, people have suggested Speex and WMA, for instance, but I haven't found any libraries or examples that work without referencing DLLs that won't work in an SL4 project. Thanks, Richard.

    Read the article

  • Grouping multiple media queries in the output of LESS CSS

    - by Goje87
    I was planning to use LESS CSS in my project (PHP). I am planning to use its nested @media query feature. I find that it fails to group the multiple media queries in the output CSS it generates. For example:

        // LESS
        .header {
          @media all and (min-width: 240px) and (max-width: 319px) { font-size: 12px; }
          @media all and (min-width: 320px) and (max-width: 479px) { font-size: 16px; font-weight: bold; }
        }
        .body {
          @media all and (min-width: 240px) and (max-width: 319px) { font-size: 10px; }
          @media all and (min-width: 320px) and (max-width: 479px) { font-size: 12px; }
        }

        // output CSS
        @media all and (min-width: 240px) and (max-width: 319px) {
          .header { font-size: 12px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .header { font-size: 16px; font-weight: bold; }
        }
        @media all and (min-width: 240px) and (max-width: 319px) {
          .body { font-size: 10px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .body { font-size: 12px; }
        }

    My expected output is (@media queries grouped):

        @media all and (min-width: 240px) and (max-width: 319px) {
          .header { font-size: 12px; }
          .body { font-size: 10px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .header { font-size: 16px; font-weight: bold; }
          .body { font-size: 12px; }
        }

    I would like to know if it can be done in LESS itself, or if there is any simple CSS parser I can use to manipulate the output CSS to group the @media queries.
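
    One hedged workaround, staying inside LESS: invert the nesting so each breakpoint appears exactly once by construction and pulls in per-breakpoint mixins (the mixin names here are made up):

        .narrow() {
          .header { font-size: 12px; }
          .body   { font-size: 10px; }
        }
        .wide() {
          .header { font-size: 16px; font-weight: bold; }
          .body   { font-size: 12px; }
        }
        @media all and (min-width: 240px) and (max-width: 319px) { .narrow(); }
        @media all and (min-width: 320px) and (max-width: 479px) { .wide(); }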

    Read the article

  • Is it possible to have AVFramework and AudioToolbox framework in one app?

    - by Satyam
    I'm trying to develop an audio-related application in which I'm using the AudioToolbox framework for recording sound and the AV framework to play sounds. When the app is started, it plays an mp3 file using the AV framework and also initializes AudioToolbox. In the simulator I'm able to record and play, but when I test on an iPhone I get the following error when initializing AudioToolbox:

        2009-12-11 22:25:51.599 StoryBook[807:207] AudioRecorder init
        AudioSessionInitialize failed with error: 1768843636

    Can someone tell me whether we can use both the AV and AudioToolbox frameworks in one application? Why am I getting that error?
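
    Core Audio OSStatus values are often four-character codes; a hedged decoding sketch (plain C) shows this one spells 'init', which - if it matches kAudioSessionAlreadyInitialized - would point at the session being initialized twice rather than at a framework conflict:

        #include <stdio.h>

        int main(void) {
            unsigned int err = 1768843636;
            printf("'%c%c%c%c'\n",
                   (err >> 24) & 0xFF, (err >> 16) & 0xFF,
                   (err >> 8) & 0xFF,  err & 0xFF);   // prints 'init'
            return 0;
        }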

    Read the article

  • After playing a MediaElement, how can I play it again?

    - by Edward Tanguay
    I have a MediaElement variable named TestAudio in my Silverlight app. When I click the button, it plays the audio correctly, but when I click the button again it does not play. How can I make the MediaElement play a second time? None of the attempts below to put the position back to 0 worked:

        private void Button_Click_PlayTest(object sender, RoutedEventArgs e)
        {
            //TestAudio.Position = new TimeSpan(0, 0, 0);
            //TestAudio.Position = TestAudio.Position.Add(new TimeSpan(0, 0, 0));
            //TestAudio.Position = new TimeSpan(0, 0, 0, 0, 0);
            //TestAudio.Position = TimeSpan.Zero;
            TestAudio.Play();
        }
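
    A hedged sketch of one thing to try: call Stop() first, so the element leaves its media-ended state before the rewind, then play:

        private void Button_Click_PlayTest(object sender, RoutedEventArgs e)
        {
            TestAudio.Stop();                    // reset out of the ended state
            TestAudio.Position = TimeSpan.Zero;  // rewind
            TestAudio.Play();
        }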

    Read the article

  • Changing volume in Java when using JLayer.

    - by Penchant
    I'm using JLayer to play an input stream of mp3 data from the internet. How do I change the volume of the output? I'm using this code to play it:

        URL u = new URL(s);
        URLConnection conn = u.openConnection();
        conn.setConnectTimeout(Searcher.timeoutms);
        conn.setReadTimeout(Searcher.timeoutms);
        bitstream = new Bitstream(conn.getInputStream() /*new FileInputStream(quick_file)*/);
        System.out.println(bitstream);
        decoder = new Decoder();
        decoder.setEqualizer(equalizer);
        audio = FactoryRegistry.systemRegistry().createAudioDevice();
        audio.open(decoder);
        for (int i = quick_positions[0]; i > 0; i--) {
            Header h = bitstream.readFrame();
            if (h == null) {
                return;
            }
            bitstream.closeFrame();
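
    JLayer's stock JavaSoundAudioDevice doesn't expose a volume setter, but it plays through a javax.sound.sampled SourceDataLine, and lines generally carry a gain control. A hedged sketch of the relevant control (actually reaching the line typically means subclassing the device):

        import javax.sound.sampled.*;

        public class Volume {
            /** Adjust playback volume on the line JLayer is writing to. */
            public static void setGain(SourceDataLine line, float db) {
                if (line.isControlSupported(FloatControl.Type.MASTER_GAIN)) {
                    FloatControl gain =
                        (FloatControl) line.getControl(FloatControl.Type.MASTER_GAIN);
                    gain.setValue(db);   // 0.0f = unchanged, negative = quieter
                }
            }
        }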

    Read the article

  • Output iterator's value_type

    - by wilhelmtell
    The STL commonly defines an output iterator like so:

        template<class Cont>
        class insert_iterator
            : public iterator<output_iterator_tag,void,void,void,void> {
            // ...

    Why do output iterators define value_type as void? It would be useful for an algorithm to know what type of value it is supposed to output. For example, consider a function that translates a URL query "key1=value1&key2=value2&key3=value3" into any container that holds key-value string elements:

        template<typename Ch,typename Tr,typename Out>
        void parse(const std::basic_string<Ch,Tr>& str, Out result)
        {
            std::basic_string<Ch,Tr> key, value;
            // loop over str, parse into key and value ...
            *result = typename iterator_traits<Out>::value_type(key, value);
        }

    The SGI reference page for value_type hints this is because it's not possible to dereference an output iterator. But that's not the only use of value_type: I might want to instantiate one in order to assign it through the iterator.
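
    A hedged workaround for the insert-iterator case specifically: std::insert_iterator (and friends) expose the adapted container as the member typedef container_type, whose value_type survives even though iterator_traits reports void:

        #include <iterator>
        #include <map>
        #include <string>

        template <typename Out>
        void emit(Out result) {
            // recover the element type from the adapted container
            typedef typename Out::container_type::value_type value_type;
            *result = value_type("key1", "value1");
        }

        int main() {
            std::map<std::string, std::string> m;
            emit(std::inserter(m, m.end()));
        }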

    Read the article

  • Play a beep that loops and change its frequency/speed

    - by Bono
    Hi all, I am creating an iPhone application that uses audio. I want to play a beep sound that loops indefinitely. I found an easy way to do that using the upper-layer AVAudioPlayer with numberOfLoops set to -1, and it works fine. But now I want to play this audio and be able to change the rate/speed. It would work like the sound a car makes when approaching an obstacle: at the beginning the beep has a low frequency, and that frequency accelerates until it reaches a continuous sound, biiiiiiiiiiiip... It seems this is not feasible using the high-layer AVAudioPlayer, but even looking at AudioToolbox I found no solution. Does anybody have information about how to do that? Thanks a lot for helping me!
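
    A hedged sketch of one approach: drop the infinite loop, keep the beep file short, and re-trigger it from a timer whose interval shrinks over time (the beepPlayer and interval properties here are made up):

        // self.beepPlayer is an AVAudioPlayer loaded with a short beep;
        // self.interval starts at, say, 1.0 seconds.
        - (void)scheduleBeep {
            self.beepPlayer.currentTime = 0;
            [self.beepPlayer play];
            self.interval = MAX(0.05, self.interval * 0.9); // speed up each time
            [NSTimer scheduledTimerWithTimeInterval:self.interval
                                             target:self
                                           selector:@selector(scheduleBeep)
                                           userInfo:nil
                                            repeats:NO];
        }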

    Read the article

  • Novocaine - How to loop file playback? (iOS)

    - by lppier
    I'm using alexbw's Novocaine for my audio project and playing around with the example code below for file reading. The file plays back with no problem. I would like to loop this recording, with a gap between the loops - any suggestion as to how I can do so? Thanks. Pier.

        // AUDIO FILE READING OHHH YEAHHHH
        // ========================================
        NSArray *pathComponents = [NSArray arrayWithObjects:
            [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject],
            @"testrecording.wav",
            nil];
        NSURL *inputFileURL = [NSURL fileURLWithPathComponents:pathComponents];
        NSLog(@"URL: %@", inputFileURL);
        fileReader = [[AudioFileReader alloc]
                      initWithAudioFileURL:inputFileURL
                      samplingRate:audioManager.samplingRate
                      numChannels:audioManager.numOutputChannels];
        [fileReader play];
        [fileReader setCurrentTime:0.0];
        //float duration = fileReader.getDuration;
        [audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
            [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
            NSLog(@"Time: %f", [fileReader getCurrentTime]);
        }];
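
    A hedged sketch building only on the calls already used above (getDuration, getCurrentTime, setCurrentTime:): when playback passes the end, emit silence for the gap, then rewind.

        __block float gapPlayed = 0.0f;        // seconds of silence emitted so far
        float duration = fileReader.getDuration;
        float gap = 2.0f;                      // desired pause between loops
        [audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
            if ([fileReader getCurrentTime] >= duration) {
                memset(data, 0, numFrames * numChannels * sizeof(float)); // silence
                gapPlayed += (float)numFrames / audioManager.samplingRate;
                if (gapPlayed >= gap) {
                    gapPlayed = 0.0f;
                    [fileReader setCurrentTime:0.0];   // restart the file
                }
                return;
            }
            [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
        }];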

    Read the article

  • How do I convert text to a WAV file with an inaudible waveform?

    - by Scott
    I am trying to create an audio watermarking system. I figure the best solution is to create an audio file (WAV) based on a unique string of text and then combine this with the original WAV. The part that makes this tricky (for me anyway) is: How do I convert the text string to a WAV? How do I ensure that the resulting waveform is inaudible (or at least barely noticeable to the listener)? I would prefer this be done server side (via PHP, etc.) but would be OK with something in Flash or JavaScript if the processing load isn't too much. I'd be willing to pay someone to create a workable solution for me (complete source code that functions as described). Thanks, Scott!
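
    For orientation, a hedged sketch of one classic technique (least-significant-bit embedding): flipping the LSB of a 16-bit sample perturbs it by roughly -90 dBFS, well below audibility. The sketch naively assumes a canonical 44-byte header and 16-bit little-endian PCM, which real WAV files don't always have:

        <?php
        $wav  = file_get_contents('original.wav');
        $head = substr($wav, 0, 44);        // naive: fixed-size canonical header
        $data = substr($wav, 44);

        // text -> bit string
        $bits = '';
        foreach (str_split('watermark-id-12345') as $ch) {
            $bits .= str_pad(decbin(ord($ch)), 8, '0', STR_PAD_LEFT);
        }

        // write each bit into the LSB of a sample's low byte
        for ($i = 0; $i < strlen($bits); $i++) {
            $lo = ord($data[$i * 2]);
            $data[$i * 2] = chr(($lo & 0xFE) | (int)$bits[$i]);
        }
        file_put_contents('watermarked.wav', $head . $data);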

    Read the article

  • Unwanted Shell expansion when assigning the output of a shell command to a variable

    - by Rob Goodwin
    I am exporting a portion of a local prototype svn repository to import into a different repo. We have a number of svn properties set throughout the repo, so I figured I would write a script to list the file elements and their corresponding properties. How hard can that be, right? So I started writing a bash script that would assign the output of svn proplist -v to a variable so I could check whether the specified file had any properties:

        #!/bin/bash
        svn proplist -v $1
        o=$(svn proplist -v "$1")
        echo $o

    Now this works fine and echoes the output of the svn proplist command. But if the proplist command returns something like

        svn:ignore : *
        build

    it performs a shell expansion on the * and inserts the entire directory listing prior to the build property value. So if the directory had a.txt, b.txt and build files/dirs in it, the output would look like:

        svn:ignore a.txt b.txt build

    I figure I need to somehow escape the output or something to keep the expansion from happening, but have yet to find something that works. There are other ways to do this, but I hate it when I cannot figure something out, and I have to admit, I think this one beat me (well, given the time I can spend on it).
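
    A hedged sketch of the likely culprit: the expansion happens at the unquoted echo $o, where word splitting and globbing are applied to the variable's contents, not at the assignment. Double quotes preserve the * (and the newlines):

        #!/bin/bash
        o=$(svn proplist -v "$1")
        echo "$o"      # quoted: no glob expansion, newlines kept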

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples to encode to mp3 with ffmpeg. First I configure the audio:

        /**
         * We need to specify the format we want to work with.
         * We use linear PCM because it's uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame, at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData) {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;
            // a variable where we check the status
            OSStatus status;
            /** This is the reference to the object that owns the callback. */
            AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;
            /** At this point we define the number of channels, which is mono
                for the iPhone. The number of frames is usually 512 or 1024. */
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);    // sample size
            buffer.mNumberChannels = 1;                                // one channel
            buffer.mData = malloc( inNumberFrames * sizeof(SInt16) );  // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags,
                                     inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);
            //NSLog(@"RECORD");
            return noErr;
        }

    With data:

        inBusNumber = 1
        inNumberFrames = 1024
        inTimeStamp = 80444304 // always the same inTimeStamp, this is strange

    However, the frame size that I need to encode mp3 is 1152. How can I configure it? If I buffer, that implies a delay, but I would like to avoid that because it is a real-time app. If I use this configuration, each buffer leaves me with trailing garbage samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.
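
    The render size generally isn't freely configurable to 1152, so the usual pattern (a hedged sketch, plain C) is a small FIFO between the callback and the encoder. It adds at most one mp3 frame of latency - 1152 samples, i.e. 144 ms at 8 kHz - which is usually acceptable even in real-time apps:

        #include <string.h>
        #include <stdint.h>

        #define MP3_FRAME 1152

        static int16_t fifo[4 * MP3_FRAME];
        static int fifoCount = 0;

        /* Called from the record callback with each 1024-sample buffer;
           hands the encoder exactly 1152 samples at a time. */
        void pushSamples(const int16_t *in, int n,
                         void (*encode)(const int16_t *frame)) {
            memcpy(fifo + fifoCount, in, n * sizeof(int16_t));
            fifoCount += n;
            while (fifoCount >= MP3_FRAME) {
                encode(fifo);
                fifoCount -= MP3_FRAME;
                memmove(fifo, fifo + MP3_FRAME, fifoCount * sizeof(int16_t));
            }
        }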

    Read the article

  • Signal amplitude against time in Java

    - by wsr74ws84
    I'm racking my brain trying to solve a knotty problem (at least for me). While playing an audio file (using Java), I want the signal amplitude to be displayed against time. I mean I'd like to implement a small panel showing a sort of oscilloscope, with the audio signal viewed in the time domain (the vertical axis is amplitude and the horizontal axis is time). Does anyone know how to do it? Is there a good tutorial I can rely on? Since I know very little about Java, I hope someone can help me.
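
    A hedged sketch of the display side (Swing): keep the most recent window of samples and draw it as a polyline; setSamples would be fed from the same buffer the playback code writes to its audio line:

        import javax.swing.*;
        import java.awt.*;

        public class ScopePanel extends JPanel {
            private volatile short[] window = new short[0];

            /** Call from the audio thread with the latest 16-bit samples. */
            public void setSamples(short[] samples) {
                window = samples.clone();
                repaint();
            }

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                short[] s = window;
                int w = getWidth(), h = getHeight(), mid = h / 2;
                g.drawLine(0, mid, w, mid);                 // time axis
                for (int x = 1; x < w && s.length > 1; x++) {
                    int i0 = (x - 1) * (s.length - 1) / w;  // map pixel -> sample
                    int i1 = x * (s.length - 1) / w;
                    g.drawLine(x - 1, mid - s[i0] * mid / 32768,
                               x,     mid - s[i1] * mid / 32768);
                }
            }
        }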

    Read the article
