Search Results

Search found 3512 results on 141 pages for 'circular buffer'.

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
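
    One pattern that avoids both templates and per-call switches is to resolve the format once, when the buffer is configured, into a function pointer. The sketch below is a minimal illustration of that idea (the names PcmReadFn and kReaders are invented here, and only three formats are shown); each accessor widens a sample to int32_t, and a macro stamps out the per-type bodies so they aren't duplicated by hand:

        #include <stdint.h>

        /* Read sample f from channel buffer ch, widened to int32_t. */
        typedef int32_t (*PcmReadFn)(const void *ch, int f);

        #define DEFINE_READER(name, type)                   \
            static int32_t name(const void *ch, int f) {    \
                return (int32_t)((const type *)ch)[f];      \
            }

        DEFINE_READER(read_u8,  uint8_t)
        DEFINE_READER(read_s16, int16_t)
        DEFINE_READER(read_s32, int32_t)

        /* Indexed by the runtime mFormat code (hypothetical ordering). */
        static const PcmReadFn kReaders[] = { read_u8, read_s16, read_s32 };

    The format switch then happens exactly once (for example, mRead = kReaders[mFormat]) and the per-sample path is just mRead(pcm[chan], frame). The 24 bit packed case needs a hand-written reader that assembles three bytes, and on a DSP with 16 bit bytes the type list changes, but the dispatch structure stays the same.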

  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem. I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback... everything here works:

        // Media API callback
        void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2)
        {
            // Data received
            if (uMsg == WIM_DATA)
            {
                // Get wav header
                LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1;

                // Now what?
                for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i)
                {
                    // I can see the chars, how do I get them into my file and audio buffers?
                    cout << mBuffer->lpData[i] << "\n";
                }

                // Re-use buffer
                mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR *
            }
        }

        // waveInOpen cannot use an instance method as its callback,
        // so we create a static method which calls the instance version
        void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
        {
            // Call instance version of method
            reinterpret_cast<AudioRecorder *>(dwParam1)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2);
        }

    Like I said, it works great, but I'm trying to do the following:

        - Convert the data to short and copy into an array
        - Convert the data to float and copy into an array
        - Copy the data to a larger char array which I'll write into a WAV
        - Relay the data to an arbitrary output device

    I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
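
    For 16-bit PCM, lpData is just a byte view of little-endian signed 16-bit samples, so the conversion is a reinterpretation plus a scale. A minimal sketch, assuming the capture format is WAVE_FORMAT_PCM with wBitsPerSample == 16:

        #include <cstring>
        #include <vector>

        // Turn the raw callback bytes into shorts and normalized floats.
        void convertBuffer(const char *lpData, unsigned bytesRecorded,
                           std::vector<short> &outShorts,
                           std::vector<float> &outFloats)
        {
            const unsigned sampleCount = bytesRecorded / sizeof(short);
            if (sampleCount == 0) return;

            outShorts.resize(sampleCount);
            // memcpy avoids alignment trouble that a straight pointer cast can hit
            std::memcpy(&outShorts[0], lpData, sampleCount * sizeof(short));

            outFloats.resize(sampleCount);
            for (unsigned i = 0; i < sampleCount; ++i)
                outFloats[i] = outShorts[i] / 32768.0f;   // maps to [-1.0, 1.0)
        }

    The "larger char array for the WAV" case needs no conversion at all: append the bytes as-is, since a WAV data chunk stores exactly this layout.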

  • Ping broadcast on Win XP SP3

    - by PaulH
    I'm trying to ping the broadcast address 255.255.255.255 on WinXP SP3. If I use the command line, I get a host error:

        C:\>ping 255.255.255.255
        Ping request could not find host 255.255.255.255. Please check the name and try again.

    If I try a C++ program using the iphlpapi, IcmpSendEcho() fails and GetLastError returns 11010 IP_REQ_TIMED_OUT.

        HANDLE h = ::IcmpCreateFile();
        IPAddr broadcast = inet_addr( "255.255.255.255" );
        BYTE payload[ 32 ] = { 0 };
        IP_OPTION_INFORMATION option = { 255, 0, 0, 0, 0 };
        // a buffer with room for 32 replies each containing the full payload
        std::vector< BYTE > replies( 32 * ( sizeof( ICMP_ECHO_REPLY ) + 32 ) );
        DWORD res = ::IcmpSendEcho( h, broadcast, payload, sizeof( payload ),
                                    &option, &replies[ 0 ], replies.size(), 1000 );
        ::IcmpCloseHandle( h );

    I can ping the local broadcast 192.168.0.255 with no problem. What do I need to do to ping the global broadcast? Thanks, PaulH
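
    Worth noting: 255.255.255.255 is the limited broadcast address, which routers never forward and which the Windows ping utility and ICMP helper appear to reject outright, consistent with the errors above. If the underlying goal is to reach every host on the local segment, the usual workaround is a UDP broadcast, which only needs SO_BROADCAST enabled. A hedged Winsock sketch (the port and payload here are arbitrary placeholders):

        #include <winsock2.h>
        #pragma comment(lib, "ws2_32.lib")

        void sendLimitedBroadcast()
        {
            WSADATA wsa;
            WSAStartup(MAKEWORD(2, 2), &wsa);

            SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
            BOOL yes = TRUE;
            // without this, sendto() to a broadcast address fails with WSAEACCES
            setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&yes, sizeof(yes));

            sockaddr_in dst = {};
            dst.sin_family = AF_INET;
            dst.sin_port = htons(9);                        // discard port, arbitrary
            dst.sin_addr.s_addr = inet_addr("255.255.255.255");

            const char payload[] = "ping?";
            sendto(s, payload, sizeof(payload), 0, (sockaddr *)&dst, sizeof(dst));

            closesocket(s);
            WSACleanup();
        }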

  • IE Information Bar, download file...how do I code for this?

    - by flatline
    I have a web page (asp.net) that compiles a package then redirects the user to the download file via javascript (window.location = ....). This is accompanied by a hard link on the page in case the redirect doesn't work, emulating the download process on many popular sites. When the IE information bar appears at the top due to restricted security settings, and a user clicks on it to download the file, it redirects the user to the page, not the download file, which refreshes the page and removes the hard link. What is the information bar doing here? Shouldn't it send the user to the location of the redirect? Am I setting something wrong in the headers of the download response, or doing something else wrong to send the file in the first place? C# code:

        m_context.Response.Buffer = false;
        m_context.Response.ContentType = "application/zip";
        m_context.Response.AddHeader("Content-Length", fs.Length.ToString());
        m_context.Response.AddHeader("Content-Disposition",
            string.Format("attachment; filename={0}_{1}.zip",
                downloadPrefix, DateTime.Now.ToString("yyyy-MM-dd_HH-mm")));
        //send the file

  • Downloading a webpage in C# example

    - by Chris
    I am trying to understand some example code on this web page (http://www.csharp-station.com/HowTo/HttpWebFetch.aspx) that downloads a file from the internet. The piece of code quoted below goes through a loop getting chunks of data and saving them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk and the loop runs until count is 0 (an empty chunk of data is downloaded). My question is, isn't it possible that count could be 0 without the file being completely downloaded? Say the network connection is interrupted: the stream may not have any data to read on a pass of the loop, count would be 0, and the download would end prematurely. Or does resStream.Read block the program until it gets data? Is this the correct way to save a stream?

        int count = 0;
        do
        {
            // fill the buffer with data
            count = resStream.Read(buf, 0, buf.Length);

            // make sure we read some data
            if (count != 0)
            {
                // translate from bytes to ASCII text
                tempString = Encoding.ASCII.GetString(buf, 0, count);

                // continue building the string
                sb.Append(tempString);
            }
        } while (count > 0); // any more data to read?

  • Why can't I get all UDP packets?

    - by Jack
    My program uses UdpClient to try to receive 27 responses from 27 hosts. The size of each response is 10KB. My broadband incoming bandwidth is 150KB/s. The 27 responses are sent from the hosts almost at the same time, once every 10 secs. However, I can only receive 8 - 17 responses each time. The number of responses that I can receive is quite dynamic, but within that range. Can anyone tell me why? Why can't I receive all of them? I understand UDP is not reliable, but I tried receiving 5 - 10 responses at the same time and it worked, so I guess the network links are not so bad. The code is very simple. On the 27 hosts, I just use UdpClient to send 10KB to my machine. On my machine, I have one UdpClient receiving datagrams. Each time I get a datagram, I create a thread to handle it (basically handling it means just printing out "I received 10KB", but it runs in a thread).

        listener = new UDPListener(Port);
        listener.Start();
        while (true)
        {
            try
            {
                UDPContext context = listener.Accept();
                ThreadPool.QueueUserWorkItem(new WaitCallback(HandleMessage), context);
            }
            catch (Exception) { }
        }

    If I reduce the size of the response down to 3KB, the case gets much better: roughly 25 responses can be received. Any more ideas? UDP buffer problems?

  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just methods like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and a source file). So I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include it says, of course:

        Game.cpp:33: error: ‘mouseX’ was not declared in this scope
        Game.cpp:34: error: ‘mouseY’ was not declared in this scope

    where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that want to access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)):

        In file included from ../trunk/source/domein/Game.h:14,
                         from ../trunk/source/domein/Game.cpp:8:
        ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before ‘*’ token
        ../trunk/source/domein/GameSystem.h:46: error: variable or field ‘InitGame’ declared void
        ../trunk/source/domein/GameSystem.h:46: error: ‘Game’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: ‘g’ was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘char’
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘bool’
        ../trunk/source/domein/FPS.h:46: warning: ‘void FPS_SleepMilliseconds(int)’ defined but not used

    This is the code which tries to access the two variables:

        SDL_Rect pointer;
        pointer.x = mouseX;
        pointer.y = mouseY;
        pointer.w = 3;
        pointer.h = 3;
        SDL_FillRect(buffer, &pointer, 0xFF0000);
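
    The errors suggest two separate problems: a circular include (Game.h and GameSystem.h including each other, which the ‘Game’ was not declared errors hint at) and variables defined, rather than declared, in a header. Defining int mouseX; in GameSystem.h gives every translation unit that includes it its own copy and breaks the one-definition rule; the usual fix is an extern declaration in the header with exactly one definition in a source file. A minimal sketch of that pattern:

        // GameSystem.h
        #ifndef GAMESYSTEM_H
        #define GAMESYSTEM_H

        extern int mouseX;   // declaration only: no storage allocated here
        extern int mouseY;

        #endif

        // GameSystem.cpp
        #include "GameSystem.h"

        int mouseX = 0;      // the single definition
        int mouseY = 0;

    Any file that includes GameSystem.h can then read or write mouseX and mouseY directly. The circular-include half is typically solved with a forward declaration (class Game;) instead of including Game.h from GameSystem.h.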

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read 100s of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(....). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget(Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?

  • Python file iterator over a binary file with newer idiom.

    - by drewk
    In Python, for a binary file, I can write this:

        buf_size = 1024*64 # this is an important size...
        with open(file, "rb") as f:
            while True:
                data = f.read(buf_size)
                if not data:
                    break
                # deal with the data....

    With a text file that I want to read line-by-line, I can write this:

        with open(file, "r") as file:
            for line in file:
                # deal with each line....

    Which is shorthand for:

        with open(file, "r") as file:
            for line in iter(file.readline, ""):
                # deal with each line....

    This idiom is documented in PEP 234, but I have failed to locate a similar idiom for binary files. I have tried this:

        >>> with open('dups.txt','rb') as f:
        ...     for chunk in iter(f.read,''):
        ...         i+=1
        >>> i
        1        # 30 MB file, i==1 means read in one go...

    I tried putting iter(f.read(buf_size),'') but that is a syntax error because of the parens after the callable in iter(). I know I could write a function, but is there a way with the default idiom of "for chunk in file:" where I can use a buffer size rather than a line orientation? Thanks for putting up with the Python newbie trying to write his first non-trivial and idiomatic Python script.

  • Any alternate way of writing to a file other than ofstream?

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch the data from an xml and write it into an output file (a1.txt). I am using MS Visual C++ 2008, on Windows XP. Currently I am using this method of writing to the output file:

        ofstream hdrOutputFile;
        /* few other stmts */
        hdrOutputFile.open(fileName, std::ios::out);

        hdrOutputFile << "#include \"commondata.h\"" << endl;
        hdrOutputFile << "#include \"Commonconfig.h\"" << endl;
        hdrOutputFile << "#include \"commontable.h\"" << endl << endl;
        hdrOutputFile << "#pragma pack(push,1)" << endl;
        hdrOutputFile << "typedef struct \n {" << endl;
        /* similar hdrOutputFile statements... */

    I have around 250 lines to write. Is there a better way to perform this task? I want to reduce these hdrOutputFile statements and use a buffer instead. Please guide me on how to do that. I mean something like:

        buff = "#include \"commontable.h\"" + "typedef struct \n {" + .......
        hdrOutputFile << buff;

    Is this way possible? Thanks, Ramm
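
    Since the text is fixed, one option is to let the compiler do the concatenation: adjacent string literals are merged at compile time, so the whole header body can be a single write. A minimal sketch (the content shown is abbreviated from the post, and kHeaderText is an invented name):

        #include <fstream>

        static const char kHeaderText[] =
            "#include \"commondata.h\"\n"
            "#include \"Commonconfig.h\"\n"
            "#include \"commontable.h\"\n\n"
            "#pragma pack(push,1)\n"
            "typedef struct \n {\n"
            /* ...the remaining lines, one literal each... */ ;

        void writeHeader(const char *fileName)
        {
            std::ofstream hdrOutputFile(fileName, std::ios::out);
            hdrOutputFile << kHeaderText;   // one buffered write instead of 250
        }

    If parts of the output are computed at runtime, a std::ostringstream can play the same role: append everything to the stream, then write str() to the file once.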

  • Convenient way to do "wrong way rebase" in git?

    - by Kaz
    I want to pull in newer commits from master into topic, but not in such a way that topic changes are replayed on top of master, but rather vice versa. I want the new changes from master to be played on top of topic, and the result to be installed as the new topic head. I can get exactly the right object if I rebase master to topic, the only problem being that the object is installed as the new head of master rather than topic. Is there some nice way to do this without manually shuffling around temporary head pointers?

    Edit: Here is how it can be achieved using a temporary branch head, but it's clumsy:

        git checkout master
        git checkout -b temp   # temp points to master
        git rebase topic       # topic is brought into temp, temp changes played on top

    Now we have the object we want, and it's pointed at by temp.

        git checkout topic
        git reset --hard temp

    Now topic has it; and all that is left is to tidy up by deleting temp:

        git branch -d temp

    Another way is to do away with temp and just rebase master, and then reset topic to master. Finally, reset master back to what it was by pulling its old head from the reflog, or a cut-and-paste buffer.

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

        - Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
        - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips):

        - Where can I find information about the maximum number of PTEs in a process?
        - Is this different (higher) for 64-bit systems/applications or not?
        - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager:

        - My application is rather specific, so I really want full control over memory management (can't give any more details).
        - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption).
        - We also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100, and couldn't find the exact position of the overwrite either.
        - We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).
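
    For reference, the guard-page technique itself usually looks something like the sketch below: reserve the user block plus one extra page, commit only the data pages, and leave the trailing page PAGE_NOACCESS so an overrun faults instantly. This is a hedged illustration (error handling and the matching free path omitted), not the poster's allocator:

        #include <windows.h>

        void *guardedAlloc(SIZE_T bytes)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            const SIZE_T page = si.dwPageSize;              // typically 4096
            const SIZE_T dataPages = (bytes + page - 1) / page;

            // one extra, never-committed page catches writes past the end
            BYTE *base = (BYTE *)VirtualAlloc(NULL, (dataPages + 1) * page,
                                              MEM_RESERVE, PAGE_NOACCESS);
            if (!base) return NULL;
            VirtualAlloc(base, dataPages * page, MEM_COMMIT, PAGE_READWRITE);

            // right-align the block so an overflow hits the guard page
            return base + dataPages * page - bytes;
        }

    Each allocation burns at least two pages of address space (data plus guard), which is exactly why the 2 GB address space and the kernel's bookkeeping for all those page mappings run out long before the "real" data does.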

  • Splitting Code into Headers/Source files

    - by cam
    I took the following code from the examples page on Asio:

        class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
        {
        public:
            typedef boost::shared_ptr<tcp_connection> pointer;

            static pointer create(boost::asio::io_service& io_service)
            {
                return pointer(new tcp_connection(io_service));
            }

            tcp::socket& socket()
            {
                return socket_;
            }

            void start()
            {
                message_ = make_daytime_string();
                boost::asio::async_write(socket_, boost::asio::buffer(message_),
                    boost::bind(&tcp_connection::handle_write, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
            }

        private:
            tcp_connection(boost::asio::io_service& io_service)
                : socket_(io_service)
            {
            }

            void handle_write(const boost::system::error_code& /*error*/,
                              size_t /*bytes_transferred*/)
            {
            }

            tcp::socket socket_;
            std::string message_;
        };

    I'm relatively new to C++ (from a C# background), and from what I understand, most people would split this into header and source files (declaration/implementation, respectively). Is there any reason I can't just leave it in the header file if I'm going to use it across many source files? If so, are there any tools that will automatically convert it to declaration/implementation for me? Can someone show me what this would look like split into header/source files (or just part of it, anyway)? I get confused around weird stuff like this:

        typedef boost::shared_ptr<tcp_connection> pointer;

    Do I include this in the header or the source? Same with tcp::socket& socket(). I've read many tutorials, but this has always been something that has confused me about C++.
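
    A hedged sketch of what the split could look like for part of this class: the typedef and all member declarations stay in the header, while method bodies move to the source file with the tcp_connection:: qualifier (start() and handle_write() would move the same way, omitted here for brevity):

        // tcp_connection.h
        #ifndef TCP_CONNECTION_H
        #define TCP_CONNECTION_H

        #include <boost/asio.hpp>
        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>
        #include <string>

        using boost::asio::ip::tcp;

        class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
        {
        public:
            typedef boost::shared_ptr<tcp_connection> pointer;  // types live in the header

            static pointer create(boost::asio::io_service& io_service);
            tcp::socket& socket();
            void start();

        private:
            explicit tcp_connection(boost::asio::io_service& io_service);
            void handle_write(const boost::system::error_code& error,
                              size_t bytes_transferred);

            tcp::socket socket_;
            std::string message_;
        };

        #endif

        // tcp_connection.cpp
        #include "tcp_connection.h"

        tcp_connection::pointer tcp_connection::create(boost::asio::io_service& io_service)
        {
            return pointer(new tcp_connection(io_service));
        }

        tcp::socket& tcp_connection::socket() { return socket_; }

        tcp_connection::tcp_connection(boost::asio::io_service& io_service)
            : socket_(io_service) {}

    Leaving it all in the header is legal too (member functions defined in-class are implicitly inline); the split mainly buys shorter rebuilds, since edits to bodies no longer touch every file that includes the header.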

  • Memory optimization while downloading

    - by lboregard
    Hello all, I have the following piece of code that I'm looking to optimize, since I'm consuming gobs of memory and this routine is heavily used. The first optimization would be to move the StringBuilder construction out of the download routine and make it a field of the class, then clear it inside the routine. Can you please suggest any other optimization, or point me in the direction of some resources that could help me with this (web articles, books, etc.)? I'm thinking about replacing the StringBuilder with a fixed (much larger) size buffer, or perhaps creating a larger-sized StringBuilder. Thanks in advance.

        StreamWriter _writer;
        StreamReader _reader;

        public string Download(string msgId)
        {
            _writer.WriteLine("BODY <" + msgId + ">");
            string response = _reader.ReadLine();
            if (!response.StartsWith("222"))
                return null;

            bool done = false;
            StringBuilder body = new StringBuilder(256 * 1024);
            do
            {
                response = _reader.ReadLine();
                if (OnProgress != null)
                    OnProgress(response.Length);

                if (response == ".")
                {
                    done = true;
                }
                else
                {
                    if (response.StartsWith(".."))
                        response = response.Remove(0, 1);
                    body.Append(response);
                    body.Append("\r\n");
                }
            } while (!done);

            return body.ToString();
        }

  • Split UInt32 (audio frame) into two SInt16s (left and right)?

    - by morgancodes
    Total noob to anything lower-level than Java, diving into iPhone audio, and reeling from all of the casting/pointers/raw memory access. I'm working with some example code which reads a WAV file from disk and returns stereo samples as single UInt32 values. If I understand correctly, this is just a convenient way to return the 32 bits of memory required to create two 16 bit samples. Eventually this data gets written to a buffer, and an audio unit picks it up down the road. Even though the data is written in UInt32-sized chunks, it is eventually interpreted as pairs of 16-bit samples. What I need help with is splitting these UInt32 frames into left and right samples. I'm assuming I'll want to convert each UInt32 into an SInt16, since an audio sample is a signed value. It seems to me that for efficiency's sake, I ought to be able to simply point to the same blocks in memory, and avoid any copying. So, in pseudo-code, it would be something like this:

        UInt32 myStereoFrame = getFramefromFilePlayer;
        SInt16* leftChannel = getFirst16Bits(myStereoFrame);
        SInt16* rightChannel = getSecond16Bits(myStereoFrame);

    Can anyone help me turn my pseudo into real code?
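
    A minimal sketch of the two usual ways to do this in C: either point an SInt16* at the frame (no copying, exactly as hoped) or peel the halves out with shifts and masks. Which 16 bits are "left" depends on the file's channel order and endianness, so the labels below are an assumption:

        #include <stdint.h>

        typedef uint32_t UInt32;
        typedef int16_t  SInt16;

        void splitFrame(UInt32 *frame)
        {
            /* View the same 4 bytes as two signed 16-bit samples.
               (A memcpy is the strictly portable alternative to this cast.) */
            SInt16 *samples = (SInt16 *)frame;
            SInt16 left  = samples[0];   /* assumed channel order */
            SInt16 right = samples[1];

            /* Equivalent arithmetic version, independent of pointer tricks: */
            SInt16 lo = (SInt16)(*frame & 0xFFFF);
            SInt16 hi = (SInt16)(*frame >> 16);
            (void)left; (void)right; (void)lo; (void)hi;
        }

    For a whole buffer, the same cast works once: treating N UInt32 frames as 2*N interleaved SInt16 samples, left at even indices and right at odd ones (under the same channel-order assumption).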

  • switch from glOrtho to gluPerspective

    - by Knitex
    I have a car drawn at (0,0) and some obstacles set up, but right now my main concern is switching from gluPerspective to glOrtho and vice-versa. All that I get when I switch from perspective to ortho is a black screen.

        void myinit(){
            glClearColor(0.0,0.0,0.0,0.0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(60,ww/wh,1,100);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(-5,5,3,backcarx,topcarx,0,0,0,1);
        }

        void menu(int id){
            /* menu selects if you want to view it in ortho or perspective */
            if(id == 1){
                glClear(GL_DEPTH_BUFFER_BIT);
                glViewport(0,0,ww,wh);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrtho(-2,100,-2,100,-1,1);
                glMatrixMode(GL_MODELVIEW);
                glutPostRedisplay();
            }
            if(id == 2){
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                gluPerspective(60,ww/wh,1,100);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                viewx = backcarx - 10;
                viewy = backcary - 10;
                gluLookAt(viewx,viewy,viewz,backcarx,topcarx,0,0,0,1);
            }
        }

    I've tried using the clear depth buffer and it still doesn't work.
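
    One likely culprit, offered as a guess from the code above: the ortho branch switches to GL_MODELVIEW but never resets it, so the old gluLookAt transform is still applied, and glOrtho's tight near/far range of -1..1 then clips the scene away. A hedged sketch of a cleaner switch, resetting both matrices and widening the depth range:

        void setOrtho(void)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* near/far wide enough to contain the whole scene */
            glOrtho(-2, 100, -2, 100, -100, 100);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();          /* drop the old gluLookAt transform */
            glutPostRedisplay();
        }

    If the camera transform is wanted in ortho mode too, reapply gluLookAt after the glLoadIdentity() instead of inheriting whatever the perspective branch left behind.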

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules:

        - The system has multiple users.
        - We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
        - I don't mind anyone with superuser access looking at our stored password.

    Here is what I've got so far:

        - Store password in password.txt
        - $ chmod 600 password.txt
        - Process reads from password.txt when it's needed
        - Buffer overwritten with zeros when it's no longer needed

    Although I'm sure there is a better way.
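
    For the read-then-scrub step in the list above, the main trap is that an ordinary memset before the buffer goes away can be optimized out as a dead store. A small C sketch of the idea, using volatile writes to keep the wipe (on newer systems, explicit_bzero or memset_s would be the purpose-built choice):

        #include <stdio.h>
        #include <string.h>

        static void scrub(char *buf, size_t len)
        {
            volatile char *p = buf;     /* volatile writes can't be elided */
            while (len--) *p++ = 0;
        }

        int readPassword(char *out, size_t cap)
        {
            FILE *f = fopen("password.txt", "r");   /* mode 600, per the rules above */
            if (!f) return -1;
            if (!fgets(out, (int)cap, f)) { fclose(f); return -1; }
            fclose(f);
            out[strcspn(out, "\n")] = '\0';         /* strip trailing newline */
            return 0;
        }

        /* usage: readPassword(pw, sizeof pw); ...use pw...; scrub(pw, sizeof pw); */

    Note that since the password ends up on another program's command line anyway, it is briefly visible to other users via ps; the scrubbing only narrows the window on this process's side.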

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt" but failed. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1 clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the local *STDOUT; in the code below, I can see the above output on the command prompt display (of course, the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;
        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print "inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport{
            print Win32::FormatMessage( Win32::GetLastError() );
        }

  • Audio Streaming Latency

    - by killianmcc
    I'm writing a UDP local area network video chat system and have got the video and audio streams working. However, I'm experiencing a little latency (about half a second) in the audio and was wondering what codecs would provide the least latency. I'm using NAudio (http://naudio.codeplex.com/) which provides me access to the following codecs for streaming:

        - Speex Narrow Band (VBR)
        - Speex Wide Band (16kHz) (VBR)
        - Speex Ultra Wide Band (32kHz) (VBR)
        - DSP Group TrueSpeech (8.5kbps)
        - GSM 6.10 (13kbps)
        - Microsoft ADPCM (32.8kbps)
        - G.711 a-law (64kbps)
        - G.722 16kHz (64kbps)
        - G.711 mu-law (64kbps)
        - PCM 8kHz 16 bit uncompressed (128kbps)

    I've tried them out and I'm not noticing much difference. Are there any others that I should download and try to reduce latency? I'm only going to be sending voice over the connection, and I'm not really worried about quality or background noise too much.

    UPDATE: I'm sending the audio in blocks like so:

        waveIn = new WaveIn();
        waveIn.BufferMilliseconds = 50;
        waveIn.DeviceNumber = inputDeviceNumber;
        waveIn.WaveFormat = codec.RecordFormat;
        waveIn.DataAvailable += waveIn_DataAvailable;

        void waveIn_DataAvailable(object sender, WaveInEventArgs e)
        {
            if (connected)
            {
                byte[] encoded = codec.Encode(e.Buffer, 0, e.BytesRecorded);
                udpSender.Send(encoded, encoded.Length);
            }
        }

  • How to throw meaningful data in exceptions?

    - by ZeroCool
    I would like to throw exceptions with meaningful explanations! I thought of using variable arguments, which worked fine, but lately I figured out it's impossible to extend the class in order to implement other exception types. So I thought of using a std::ostringstream as an internal buffer to deal with formatting... but it doesn't compile! I guess it has something to do with copy constructors. Anyhow, here is my piece of code:

        class Exception: public std::exception
        {
        public:
            Exception(const char *fmt, ...);
            Exception(const char *fname, const char *funcname, int line, const char *fmt, ...);
            //std::ostringstream &Get() { return os_; }
            ~Exception() throw();
            virtual const char *what() const throw();
        protected:
            char err_msg_[ERRBUFSIZ];
            //std::ostringstream os_;
        };

    The variable arguments constructor can't be inherited from! This is why I thought of std::ostringstream. Any advice on how to implement such an approach?
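
    One way to keep printf-style formatting and still allow subclassing, sketched under the assumption that a fixed-size message buffer is acceptable: format into err_msg_ with vsnprintf, and expose a protected helper taking a va_list so derived classes can forward their own variable arguments (which a varargs constructor alone cannot do):

        #include <cstdarg>
        #include <cstdio>
        #include <exception>

        #define ERRBUFSIZ 512

        class Exception : public std::exception
        {
        public:
            Exception(const char *fmt, ...)
            {
                va_list ap;
                va_start(ap, fmt);
                Format(fmt, ap);
                va_end(ap);
            }
            ~Exception() throw() {}
            virtual const char *what() const throw() { return err_msg_; }

        protected:
            Exception() { err_msg_[0] = '\0'; }     // for derived classes

            void Format(const char *fmt, va_list ap)
            {
                vsnprintf(err_msg_, sizeof(err_msg_), fmt, ap);
            }

            char err_msg_[ERRBUFSIZ];
        };

        // A derived type just repeats the same dance:
        class IoException : public Exception
        {
        public:
            IoException(const char *fmt, ...) : Exception()
            {
                va_list ap;
                va_start(ap, fmt);
                Format(fmt, ap);
                va_end(ap);
            }
        };

    A std::ostringstream member is indeed the problem in the posted code: stream classes are non-copyable, which makes the exception non-copyable, and exceptions must be copyable to be thrown.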

  • Qt XQuery into a QStringList

    - by Stewart
    Hi, I'm trying to use QXmlQuery to extract some data from XML. I'd like to put this into a QStringList. I try the following:

        QByteArray in = "this is where my xml lives";
        QBuffer received;
        received.setData(in);
        received.open(QIODevice::ReadOnly);

        QXmlQuery query;
        query.bindVariable("data", &received);
        query.setQuery(NAMESPACE //contains definition of the t namespace
                       "doc($data)//t:service/t:serviceID/text()");

        QBuffer outb;
        outb.open(QIODevice::ReadWrite);
        QXmlSerializer s(query, &outb);
        query.evaluateTo(&s);
        qDebug() << "buffer" << outb.data(); //This works perfectly!

        QStringList * queryResult = new QStringList();
        query.evaluateTo(queryResult);
        qDebug() << "string list" << *queryResult; //This gives me no output!

    As you can see, everything works fine when I send the output into a QBuffer via a QXmlSerializer... but I get nothing when I try to use a QStringList.
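
    If memory serves, QXmlQuery::evaluateTo(QStringList *) only populates the list when the query evaluates to a sequence of atomic strings, whereas text() yields text nodes; a serializer happily writes nodes out, but the QStringList overload silently declines them. A hedged sketch of the usual fix, atomizing the nodes inside the query itself:

        QXmlQuery query;
        query.bindVariable("data", &received);
        // string() turns each node into an xs:string, which the
        // QStringList overload of evaluateTo() can accept
        query.setQuery(NAMESPACE
                       "for $id in doc($data)//t:service/t:serviceID "
                       "return string($id)");

        QStringList queryResult;
        if (query.evaluateTo(&queryResult))
            qDebug() << "string list" << queryResult;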

  • OpenGL Pixel Format Attributes (NSOpenGLPixelFormatAttributes) explanation?

    - by nacho4d
    Hi, I am not new to OpenGL, but not an expert. Many tutorials teach how to draw, 3D, 2D, projections, orthogonal, etc., but how about setting up the view? (NSOpenGLView in Cocoa, Macs.) For example I have this:

        - (id) initWithFrame: (NSRect) frame
        {
            GLuint attribs[] = {   //PF: PixelAttributes
                NSOpenGLPFANoRecovery,
                NSOpenGLPFAWindow,
                NSOpenGLPFAAccelerated,
                NSOpenGLPFADoubleBuffer,
                NSOpenGLPFAColorSize, 24,
                NSOpenGLPFAAlphaSize, 8,
                NSOpenGLPFADepthSize, 24,
                NSOpenGLPFAStencilSize, 8,
                NSOpenGLPFAAccumSize, 0,
                0
            };
            NSOpenGLPixelFormat* fmt = [[NSOpenGLPixelFormat alloc]
                initWithAttributes: (NSOpenGLPixelFormatAttribute*) attribs];
            return self = [super initWithFrame:frame pixelFormat: [fmt autorelease]];
        }

    And I don't understand their usage very well, especially when combining them. For example:

        - If I want my view to be capable of full screen, should I write NSOpenGLPFAFullScreen only, or both? (By capable I mean not always in full screen.)
        - Regarding double buffering, what is this exactly? (Below: Apple's definition.) "If present, this attribute indicates that only double-buffered pixel formats are considered. Otherwise, only single-buffered pixel formats are considered."
        - Regarding color: if NSOpenGLPFAColorSize is 24 and NSOpenGLPFAAlphaSize is 8, does that mean that alpha and RGB components are treated differently? What happens if I set the former to 32 and the latter to 0?

    Etc., etc. In general, how do I learn to set up my view from scratch? Thanks in advance. Ignacio.

  • Linear Interpolation. How to implement this algorithm in C? (Python version is given)

    - by psihodelia
    There exists one very good linear interpolation method. It performs linear interpolation requiring at most one multiply per output sample. I found its description in the third edition of Understanding DSP by Lyons. This method involves a special hold buffer. Given a number of samples to be inserted between any two input samples, it produces output points using linear interpolation. Here, I have rewritten this algorithm using Python:

        temp1, temp2 = 0, 0
        iL = 1.0 / L
        for i in x:
            hold = [i-temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    where x contains input samples, L is the number of points to be inserted, and y will contain output samples. My question is how to implement such an algorithm in ANSI C in the most effective way, e.g. is it possible to avoid the second loop?

    NOTE: the presented Python code is just to understand how this algorithm works.

    UPDATE: here is an example of how it works in Python:

        x=[]
        y=[]
        hold=[]
        num_points=20
        points_inbetween = 2

        temp1,temp2=0,0

        for i in range(num_points):
            x.append( sin(i*2.0*pi * 0.1) )

        L = points_inbetween
        iL = 1.0/L
        for i in x:
            hold = [i-temp1] * L
            temp1 = i
            for j in hold:
                temp2 += j
                y.append(temp2 * iL)

    Let's say x=[.... 10, 20, 30 ....]. Then, if L=1, it will produce [... 10, 15, 20, 25, 30 ...]
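
    A hedged ANSI C transcription of the Python above (same variable roles: x is the input, y must have room for n*L doubles, L output points per input sample). Since every value in the hold buffer is the same difference d, the inner loop degenerates to one add and one multiply per output sample; it can't be removed outright, but this is already its cheapest form:

        void interp_hold(const double *x, double *y, int n, int L)
        {
            double temp1 = 0.0, temp2 = 0.0;
            const double iL = 1.0 / (double)L;
            int i, j, k = 0;

            for (i = 0; i < n; i++) {
                const double d = x[i] - temp1;   /* the value the hold buffer repeats */
                temp1 = x[i];
                for (j = 0; j < L; j++) {
                    temp2 += d;                  /* accumulate the step L times */
                    y[k++] = temp2 * iL;         /* one multiply per output sample */
                }
            }
        }

    For example, interp_hold(x, y, 20, 2) mirrors the Python example with num_points=20 and points_inbetween=2.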

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
            def mul(v: Double): T
            def max(v: T): T
            def div(v: Double): T
            def div(v: T): T
        }

    Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
            var data: Array[Double] = v

            def this() = this(Array(0.0, 0.0, 0.0))

            def toString(): String = {
                "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
            }

            def increment(v: TripleDoublePixel) {
                data(0) += v.data(0)
                data(1) += v.data(1)
                data(2) += v.data(2)
            }

            def mul(v: Double): TripleDoublePixel = {
                new TripleDoublePixel(data.map(x => x * v))
            }

            def div(v: Double): TripleDoublePixel = {
                new TripleDoublePixel(data.map(x => x / v))
            }

            def div(v: TripleDoublePixel): TripleDoublePixel = {
                var tmp = new Array[Double](3)
                tmp(0) = data(0) / v.data(0)
                tmp(1) = data(1) / v.data(1)
                tmp(2) = data(2) / v.data(2)
                new TripleDoublePixel(tmp)
            }

            def max(v: TripleDoublePixel): TripleDoublePixel = {
                val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
                val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
                if (lv > vv) (this) else v
            }
        }

    Now I want to write code to use this, that doesn't have to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
            for (i <- 0 until a.data.size) {
                a.data(i) = a.data(i).div(b.data(i))
            }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?

  • Setup for games animation: How do I know JFrame is finished setting itself up?

    - by Jokkel
    I'm using javax.swing.JFrame to draw game animations using a double buffer strategy. First, I set up the frame:

        JFrame frame = new JFrame();
        frame.setVisible(true);

    Now, I draw an object (let it be a circle, doesn't matter) like this:

        frame.createBufferStrategy(2);
        bufferStrategy = frame.getBufferStrategy();
        Graphics g = bufferStrategy.getDrawGraphics();
        circle.draw(g);
        bufferStrategy.show();

    The problem is that the frame is not always fully set up when the drawing takes place. It seems like JFrame needs up to three steps of resizing itself until it reaches its final size. That makes the drawing slide out of frame, or keeps it from appearing completely, from time to time. I already managed to delay things using SwingUtilities.invokeLater(). While this improved the result, there are still times when the drawing slides away / looks prematurely drawn. Any idea / strategy? Thanks in advance.

    EDIT: Ok, thanks so far. I didn't mention that I'm writing a little Pong game in the first place, sorry for the confusion. What I actually looked for was the right setup for accelerated game animations done in Java. While reading through the suggestions, I found my question answered (though indirectly) here, and this example made things clear for me. A summary of this might be that for animating game graphics in Java, the first step is to get rid of the GUI logic overhead.
