Search Results

Search found 4759 results on 191 pages for 'depth buffer'.


  • Run a macro in all buffers in vim

    - by Caleb Huitt - cjhuitt
    I know about the :bufdo command, and was trying to combine it with a macro I had recorded (@a) to add a #include in the proper spot of each of the header files I'd loaded. However, I couldn't find an easy way to run the macro on each buffer. Is there a way to execute a macro through ex mode, which is what :bufdo requires? Or is there another command I'm missing?
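
    A minimal sketch of one way to combine them, assuming the macro is in register a: the :normal ex command replays keystrokes, so :bufdo can drive it.

        " run the recorded macro in every listed buffer
        :bufdo execute "normal! @a"

        " or, since :normal takes the rest of the line as keystrokes:
        :bufdo normal! @a

    Note that :bufdo refuses to leave a modified buffer unless 'hidden' is set or | update is appended to write each buffer as it goes.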

    Read the article

  • Oracle SQL CMD Line!!!

    - by DAVID
    Hi, whenever I perform SELECT statements in the command-line tool, the output doesn't use all of the available space. I've modified the buffer size and the window size, but it just doesn't work. Here is a screenshot: http://img19.imageshack.us/img19/8954/cmdoracle.jpg
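
    Assuming the tool is SQL*Plus (an assumption, based on the screenshot), line wrapping is governed by its own formatting settings rather than by the console buffer. A minimal sketch with illustrative values:

        REM widen SQL*Plus output (values here are illustrative)
        SET LINESIZE 200
        SET PAGESIZE 50
        REM fixed display width for one column (hypothetical column name)
        COLUMN name FORMAT A30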

    Read the article

  • Double paging definition

    - by Albinoswordfish
    This is not a programming question but more of an operating-system question. Right now I'm trying to learn what exactly double paging means. I see two different terms: double paging on disk and double paging in memory. Apparently this problem arises when we introduce a buffer cache to store disk blocks when doing file I/O, but I'm not really sure what exactly the term means. If anybody could clarify, it would be very helpful.

    Read the article

  • How to create a complete binary tree of height 'h' using Python?

    - by Jack
    Here is the node structure:

        class Node:
            def __init__(self, data):
                # initializes the data members
                self.left = None
                self.right = None
                self.parent = None
                self.data = data

    Complete binary tree definition: "A binary tree in which every level, except possibly the deepest, is completely filled. At depth n, the height of the tree, all nodes must be as far left as possible." -- http://www.itl.nist.gov/div897/sqg/dads/HTML/completeBinaryTree.html

    I am looking for an efficient algorithm.
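
    A minimal sketch using the Node class above: it builds a perfect tree (every level full), which is one valid complete binary tree of height h.

        def build_complete_tree(h, depth=0, parent=None):
            """Return the root of a perfect binary tree of height h."""
            if depth > h:
                return None
            node = Node(depth)          # store the depth as sample data
            node.parent = parent
            node.left = build_complete_tree(h, depth + 1, node)
            node.right = build_complete_tree(h, depth + 1, node)
            return node

        root = build_complete_tree(2)   # 7 nodes on levels 0..2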

    Read the article

  • File Output using Gforth

    - by sheepez
    As a first project I have been writing a short program to render the Mandelbrot fractal. I have got to the point of trying to output my results to a file (e.g. .bmp or .ppm) and got stuck. I have not really found any examples of exactly what I am trying to do, but I have found two examples of code that copy from one file to another. The examples in the Gforth documentation (Section 3.27) did not work for me (WinXP); in fact they seemed to open and create files but not write to them properly. This is the Gforth documentation example that copies the contents of one file to another:

        0 Value fd-in
        0 Value fd-out
        : open-input ( addr u -- )  r/o open-file throw to fd-in ;
        : open-output ( addr u -- )  w/o create-file throw to fd-out ;
        s" foo.in" open-input
        s" foo.out" open-output
        : copy-file ( -- )
          begin
            line-buffer max-line fd-in read-line throw
          while
            line-buffer swap fd-out write-line throw
          repeat ;

    I found this example (http://rosettacode.org/wiki/File_IO#Forth), which does work:

        : copy-file2 ( a1 n1 a2 n2 -- )
          r/o open-file throw >r
          w/o create-file throw r>
          begin
            pad maxstring 2 pick read-file throw ?dup
          while
            pad swap 3 pick write-file throw
          repeat
          close-file throw close-file throw ;
        \ Invoke it like this:
        s" output.txt" s" input.txt" copy-file2

    The main problem is that I can't isolate the part that writes to a file and have it still work. The main confusion is that >r doesn't seem to consume TOS as I might expect. I would be very grateful if someone could explain exactly how the open-file, create-file, read-file and write-file words actually work, as my investigation keeps resulting in somewhat bizarre stacks. Any clues as to why the Gforth examples do not work might help too. In summary, I want to output from Gforth to a file and so far have been thwarted. Can anyone offer any help?

    Thank you Vijay, I think that I understand the example that you gave. However, when I try to use something like this (which I think is similar):

        0 value test-file
        : write-test
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line ;

    I get "ok" but nothing is put into the file; have I made a mistake?

    It seems that the problem was due to not flushing the relevant buffers or explicitly closing the file. Adding something like test-file flush-file throw or test-file close-file throw between write-line and ; makes it work. Thanks again Vijay for helping.
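
    In code form, a corrected sketch of that word based on the fix described above, also checking write-line's result code:

        0 value test-file
        : write-test ( -- )
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line throw
          test-file close-file throw ;  \ closing also flushes the buffer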

    Read the article

  • Performing full screen grab in Windows

    - by Steven Lu
    I am working on an idea that involves getting a full capture of the screen, including windows and apps, analyzing it, and then drawing items back onto the screen as an overlay. I want to learn image-processing techniques, and I could get lots of data to work with if I can directly access the Windows screen. I could use this to build automation tools the likes of which have never been seen before. More on that later.

    I have full-screen capture working for the most part:

        HWND hwind = GetDesktopWindow();
        HDC hdc = GetDC(hwind);
        int resx = GetSystemMetrics(SM_CXSCREEN);
        int resy = GetSystemMetrics(SM_CYSCREEN);
        int BitsPerPixel = GetDeviceCaps(hdc, BITSPIXEL);
        HDC hdc2 = CreateCompatibleDC(hdc);

        BITMAPINFO info;
        info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        info.bmiHeader.biWidth = resx;
        info.bmiHeader.biHeight = resy;
        info.bmiHeader.biPlanes = 1;
        info.bmiHeader.biBitCount = BitsPerPixel;
        info.bmiHeader.biCompression = BI_RGB;

        void *data;
        HBITMAP hbitmap = CreateDIBSection(hdc2, &info, DIB_RGB_COLORS, (void **)&data, 0, 0);
        SelectObject(hdc2, hbitmap);

    Once this is done, I can call this repeatedly:

        BitBlt(hdc2, 0, 0, resx, resy, hdc, 0, 0, SRCCOPY);

    The cleanup code (I have no idea if this is correct):

        DeleteObject(hbitmap);
        ReleaseDC(hwind, hdc);
        if (hdc2) {
            DeleteDC(hdc2);
        }

    Every time BitBlt is called it grabs the screen and saves it in memory that I can access through data. Performance is somewhat satisfactory: BitBlt executes in 50 ms (sometimes as low as 33 ms) at 1920x1200x32.

    What surprises me is that when I switch the display mode to 16-bit, 1920x1200x16, either through my graphics settings beforehand or by using ChangeDisplaySettings, I get a massively improved screen-grab time of between 1 ms and 2 ms, which cannot be explained by the factor-of-two reduction in bit depth alone. Using CreateDIBSection (as above) offers a significant speed-up in 16-bit mode, compared to setting up with CreateCompatibleBitmap (6-7 ms per frame).

    Does anybody know why dropping to 16-bit causes such a speed increase? Is there any hope of grabbing 32-bit at such speeds, if not for the color depth, then to avoid forcing a change of screen-buffer modes and the awful flickering?

    Read the article

  • Help with a t-sql query

    - by user324650
    Hi. Based on the following table:

        Path
        -----------------------
        area1
        area1\area2
        area1\area2\area3
        area1\area2\area3\area4
        area1\area2\area5
        area1\area2\area6
        area1\area7

    The input to my stored procedure is an area path and a number of children (the depth that needs to be considered from the input area path):

        areapath = area1, children = 2

    The above should give:

        Path
        -----------
        area1
        area1\area2
        area1\area2\area3
        area1\area2\area5
        area1\area2\area6
        area1\area7

    Similarly, for areapath = area2 and children = 1, the output should be:

        Path
        ---------------
        area1\area2
        area1\area2\area3
        area1\area2\area5
        area1\area2\area6

    I am confused about how to write a query for this.
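
    One possible shape for the query, sketched under the assumption (hypothetical) that the table is named Areas with a Path column, and that depth can be measured by counting '\' separators:

        DECLARE @areapath varchar(255);
        DECLARE @children int;
        SET @areapath = 'area1';
        SET @children = 2;

        SELECT a.Path
        FROM Areas AS a
        JOIN Areas AS root
          ON root.Path LIKE '%' + @areapath                           -- the row whose path ends in the input area
         AND (a.Path = root.Path OR a.Path LIKE root.Path + '\%')     -- the root itself plus its subtree
        WHERE (LEN(a.Path) - LEN(REPLACE(a.Path, '\', '')))
            - (LEN(root.Path) - LEN(REPLACE(root.Path, '\', ''))) <= @children;

    Counting separators makes the depth test cheap, at the cost of a trailing-match LIKE that can pick up other areas whose names merely end with the input string.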

    Read the article

  • Using Boost.Asio to get "the whole packet"

    - by wowus
    I have a TCP client connecting to my server and sending raw data packets. How, using Boost.Asio, can I get the "whole" packet every time (asynchronously, of course)? Assume these packets can be any size, up to the full size of my memory. Basically, I want to avoid creating a statically sized buffer.
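
    TCP itself delivers a byte stream, not packets, so getting a "whole" packet usually means adding framing. A minimal sketch, assuming a hypothetical Connection class and a 4-byte big-endian length prefix written by the sender before each packet:

        #include <boost/asio.hpp>
        #include <cstdint>
        #include <memory>
        #include <vector>

        using boost::asio::ip::tcp;

        // reads one length-prefixed packet at a time, growing the body buffer to fit
        class Connection : public std::enable_shared_from_this<Connection> {
        public:
            explicit Connection(tcp::socket socket) : socket_(std::move(socket)) {}
            void start() { read_header(); }

        private:
            void read_header() {
                auto self = shared_from_this();
                boost::asio::async_read(socket_, boost::asio::buffer(header_),
                    [this, self](boost::system::error_code ec, std::size_t) {
                        if (ec) return;
                        // decode the 4-byte big-endian length
                        std::uint32_t n = (std::uint32_t(header_[0]) << 24) |
                                          (std::uint32_t(header_[1]) << 16) |
                                          (std::uint32_t(header_[2]) << 8)  |
                                           std::uint32_t(header_[3]);
                        body_.resize(n);   // sized per packet, not statically
                        read_body();
                    });
            }
            void read_body() {
                auto self = shared_from_this();
                boost::asio::async_read(socket_, boost::asio::buffer(body_),
                    [this, self](boost::system::error_code ec, std::size_t) {
                        if (ec) return;
                        // body_ now holds exactly one whole packet; process it, then wait for the next
                        read_header();
                    });
            }

            tcp::socket socket_;
            unsigned char header_[4];
            std::vector<char> body_;
        };

        int main() {
            boost::asio::io_context io;
            tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));  // hypothetical port
            acceptor.async_accept([](boost::system::error_code ec, tcp::socket s) {
                if (!ec) std::make_shared<Connection>(std::move(s))->start();
            });
            io.run();
        }

    The shared_from_this() captures keep the connection alive across handlers, and the body buffer is resized per packet rather than being allocated at some fixed maximum.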

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get the frequency of the music played in the WAV file. For now, there will be one frequency in the WAV file, and I would like to find a library that is compatible with Android. I understand that I need to use FFT to get the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want?

    EDIT: I tried to use JTransforms to get the FFT of the WAV file but kept failing to get the correct frequency of the file. Currently, the WAV file contains a sine curve of 440 Hz, the musical note A4, but I got the result 441. Then I tried to get the frequency of G4 and got the result 882 Hz, which is incorrect. The frequency of G4 is supposed to be 783 Hz. Could it be due to not enough samples? If yes, how many samples should I take?

        // DFT
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copying audio data to the fft data buffer, imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        for (int i = 0; i < fftData.length; i += 2) {
            // complex numbers -> vectors, so we compute the length of the vector,
            // which is sqrt(realpart^2 + imaginarypart^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            // fd.append(Double.toString(vlen));
            // fd.append(",");
            if (max_fftval < vlen) {
                // if this length is bigger than our stored biggest length
                max_fftval = vlen;
                max_i = i;
            }
        }
        // double dominantFreq = ((double) max_i / fftData.length) * sampleRate;
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out?

    EDIT 2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use Harmonic Product Spectrum or autocorrelation algorithms?

    Read the article

  • Cannot remove git repository completely

    - by Aleyna
    I have been using git on Windows (msysgit). Whenever I try to remove a repository completely, either using Explorer or using:

        $ git rm -rf ptp/
        fatal: Not a git repository (or any of the parent directories): .git

    it errors out with "The data present in the reparse point buffer is invalid" or the fatal error above. What's wrong with me/git? Thanks in advance.

    Read the article

  • stderr to file; but without buffering

    - by l.thee.a
    I am trying to isolate a nasty bug which brings down my Linux kernel. I am printing messages to stderr, and stderr is redirected to a log file. Is there a way to disable buffering on the file access? When the kernel hangs, I lose the messages still in the buffer.
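
    A minimal sketch, assuming the messages go out through stdio: setvbuf turns off the stream's own buffering, and fsync pushes the kernel's page cache to disk, which also matters when the whole machine hangs.

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            /* turn off stdio buffering on the stream entirely */
            setvbuf(stderr, NULL, _IONBF, 0);

            fprintf(stderr, "checkpoint 1\n");

            /* a kernel hang can also lose data sitting in the page cache,
               so push it to disk after each critical message */
            fsync(fileno(stderr));
            return 0;
        }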

    Read the article

  • Emacs : problem with tags file ?

    - by KaluSingh Gabbar
    I am using ctags to create tags for Emacs to read symbols from, using Cygwin. Emacs says "visit-tags-table-buffer: File /home/superman/tags is not a valid tags table". Here are my options to find files and generate tags:

        $ find . -type f -regex '.*\.[hc]\|.*\.cpp' -print0 | xargs -0 ctags -e --extra=+q --fields=+fksaiS --c++-kinds=+px --append -f ~/tags

    Read the article

  • A question regarding cerr, cout and clog

    - by skydoor
    Can anybody explain the difference between cerr, cout and clog, and why these different objects are provided? I know the differences are as below:

    1) cout can be redirected but cerr can't.
    2) clog uses a buffer.

    I am confused about point 2; I would be grateful if anybody could elaborate on it.
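
    A small sketch of point 2: cerr has the unitbuf flag set, so it flushes after every insertion, while clog buffers output until the buffer fills or is flushed explicitly (both write to the standard error stream).

        #include <iostream>

        int main() {
            std::clog << "buffered: may not appear until a flush";
            std::cerr << "unit-buffered: flushed after every insertion\n";
            std::clog << std::endl;  // std::endl flushes clog's buffer
            return 0;
        }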

    Read the article

  • Transferring binary data through a SOAP webservice? C# / .NET

    - by Jason
    I have a web service that returns the binary array of an object. Is there an easier way to transfer this with SOAP, or does it need to be contained in the XML? It's working, but I had to increase the send and receive buffers to a large value. How much is too much? Transferring binary in XML as an array seems really inefficient, but I can't see any way to add a binary attachment using .NET.
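
    If the service is an ASMX-style web service, a byte[] return value is serialized as base64 text in the SOAP body, which is more compact than an explicit XML array of numbers. A minimal sketch, with a hypothetical service name and payload path:

        using System.Web.Services;

        public class TransferService : WebService
        {
            // byte[] maps to xsd:base64Binary, so it travels as one base64 string
            [WebMethod]
            public byte[] GetObjectBytes()
            {
                // hypothetical payload path, for illustration only
                return System.IO.File.ReadAllBytes(@"C:\data\payload.bin");
            }
        }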

    Read the article

  • How to learn a language that has very little coverage?

    - by bennybdbc
    I recently came across the Kogut language and was interested in it. However, the only website with information about it is the SourceForge page that hosts the project, and I had no idea how to even attempt to look at the language in more depth. So what I'm asking is: has anyone here learnt a language that doesn't have the thousands of resources that Ruby, Python, etc. have? What would be the best method of doing so?

    Read the article

  • sending data packet just before closing socket

    - by xopht
    Before disconnecting a client, the server wants to send it some info: why I (the server) am disconnecting you (the client). If I send the info packet and close the client socket immediately, closesocket() returns -1; and if I use the linger option so that closesocket() succeeds, the info cannot be sent completely. How can I accomplish this, and is it possible to know when the socket buffer is empty (meaning my packet has been sent in full)? Thanks.
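
    One common pattern, sketched here for Winsock (an assumption, given closesocket()): half-close the connection after sending, then wait for the peer to finish reading before closing for real.

        /* graceful close sketch; link against ws2_32 and call WSAStartup first */
        #include <winsock2.h>

        void notify_and_close(SOCKET s, const char *msg, int len)
        {
            char drain[256];
            send(s, msg, len, 0);          /* queue the disconnect notice */
            shutdown(s, SD_SEND);          /* FIN goes out after the queued data */
            while (recv(s, drain, sizeof drain, 0) > 0)
                ;                          /* recv() == 0 means the peer saw everything and closed */
            closesocket(s);
        }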

    Read the article

  • Are there dtrace ustack() helpers on Mac OS X?

    - by raichoo
    Hi, I've been wondering whether there is any chance to use dtrace ustack() helpers on Mac OS X for Python and other interpreted languages. I know that you can figure out what Python and PHP on OpenSolaris are doing by giving ustack() some extra memory for a buffer. Is this somehow possible on Mac OS X? Regards, raichoo

    Read the article

  • C strange array behaviour

    - by LukeN
    After learning that strncmp is not what it seems to be and that strlcpy is not available on my operating system (Linux), I figured I could try to write it myself. I found a quote from Ulrich Drepper, the libc maintainer, who posted an alternative to strlcpy using mempcpy. I don't have mempcpy either, but its behaviour was easy to replicate.

    First off, this is the test case I have:

        #include <stdio.h>
        #include <string.h>

        #define BSIZE 10

        void insp(const char *s, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                printf("%c ", s[i]);
            printf("\n");
            for (i = 0; i < n; i++)
                printf("%02X ", s[i]);
            printf("\n");
            return;
        }

        int copy_string(char *dest, const char *src, int n)
        {
            int r = strlen(memcpy(dest, src, n - 1));
            dest[r] = 0;
            return r;
        }

        int main()
        {
            char b[BSIZE];
            memset(b, 0, BSIZE);
            printf("Buffer size is %d", BSIZE);
            insp(b, BSIZE);

            printf("\nFirst copy:\n");
            copy_string(b, "First", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            printf("\nSecond copy:\n");
            copy_string(b, "Second", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            return 0;
        }

    And this is its output:

        Buffer size is 10
        00 00 00 00 00 00 00 00 00 00

        First copy:
        F i r s t b =
        46 69 72 73 74 00 62 20 3D 00
        b = 'First'

        Second copy:
        S e c o n d
        53 65 63 6F 6E 64 00 00 01 00
        b = 'Second'

    You can see in the internal representation (the lines insp() created) that there's some noise mixed in, like the printf() format string in the inspection after the first copy, and a foreign 0x01 in the second copy. The strings are copied intact, and it correctly handles too-long source strings (let's ignore the possible issue with passing 0 as the length to copy_string for now; I'll fix that later). But why are there foreign array contents (from the format string) inside my destination? It's as if the destination was actually RESIZED to match the new length.

    Read the article
