Search Results

Search found 3117 results on 125 pages for 'buffer'.

Page 95 of 125

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering, what is the longest possible name length allowed by the Windows kernel? E.g.: I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar, hard restriction on a file name (rather than path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest possible theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.) (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)
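
    For reference, a small sketch of the arithmetic being described; the struct below mirrors the documented UNICODE_STRING fields, but the name and the program around it are purely illustrative:

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>

        // Mirrors the documented UNICODE_STRING layout: Length counts *bytes*
        // of UTF-16 data and is only 16 bits wide.
        struct UnicodeStringSketch {
            std::uint16_t Length;         // current length, in bytes
            std::uint16_t MaximumLength;  // buffer capacity, in bytes
            wchar_t      *Buffer;         // not necessarily NUL-terminated
        };

        int main() {
            const std::size_t maxBytes = UINT16_MAX;    // 65535
            const std::size_t maxChars = maxBytes / 2;  // 32767 == 2^15 - 1 UTF-16 units
            std::printf("longest representable path: %zu characters\n", maxChars);
            return 0;
        }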

    Read the article

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I update the new source image, save it, and then try to read the source image to crop/scale it to a thumbnail, I get an "I/O operation on closed file" error from PIL. If I update the source image, don't save it, and then try to read it to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop but instead upload on one page and then crop/scale on another page, this all works fine. This seems to be a cached buffer being reused somehow, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?

    Read the article

  • C++ code beautifier for emacs/linux

    - by aaa
    Hi, I am looking for a code beautifier for UNIX/Emacs. I have looked at GNU indent and Artistic Style, but I need something a bit different. For example, I would like the following: for( int x= 0;; ++ x) if(x) break; to be formatted as for (int x = 0; ; ++x) if (x) break;. As far as I can tell Artistic Style does not do that (correct me if I am wrong). What can you recommend? Thanks. Edit: both Artistic Style and indent remove whitespace. Here is a small interactive command to beautify the region:

        (defun my-emacs-command-beautify-region ()
          (interactive)
          (let ((cmd "astyle"))
            (shell-command-on-region (region-beginning) (region-end) cmd (current-buffer) t)))

    Read the article

  • getnameinfo specifies socklen_t

    - by bobby
    The second argument of the getnameinfo prototype asks for a socklen_t, but sizeof yields a size_t. So how can I get a socklen_t? Prototype:

        int getnameinfo(const struct sockaddr *restrict sa, socklen_t salen,
                        char *restrict node, socklen_t nodelen,
                        char *restrict service, socklen_t servicelen,
                        int flags);

    Example:

        struct sockaddr_in SIN;
        memset(&SIN, 0, sizeof(SIN)); // This should also be socklen_t ?
        SIN.sin_family = AF_INET;
        SIN.sin_addr.s_addr = inet_addr(IP);
        SIN.sin_port = 0;
        getnameinfo((struct sockaddr *)&SIN, sizeof(SIN) /* socklen_t */,
                    BUFFER, NI_MAXHOST, NULL, 0, 0);
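
    As an aside (not part of the original post): sizeof yields a size_t, but the ordinary integer conversions let that value be passed, or explicitly cast, where a socklen_t is expected. A minimal sketch of spelling the type out, with the names here being illustrative:

        #include <netdb.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>

        void name_example(const struct sockaddr_in *sin) {
            char host[NI_MAXHOST];
            /* explicit socklen_t, if you want the types to match visibly */
            socklen_t salen = (socklen_t)sizeof(*sin);
            getnameinfo((const struct sockaddr *)sin, salen,
                        host, sizeof(host), NULL, 0, 0);
        }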

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read word by word, delimited by whitespace, from a file. I am using code along these lines:

        std::istream in;
        string word;
        while (in.good()) {
            in >> word;
            // Processing, etc.
            ...
        }

    My issue is that the processing on the words themselves is actually rather light. The major time consumer is a set of mySQL queries I run. What I was thinking is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently to avoid a great many IO operations. Thoughts and advice?
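
    A rough sketch of the approach being floated; the class name, chunk size, and whitespace handling here are all invented for illustration. It reads the file in kilobyte-sized chunks, holds back any word that straddles a chunk boundary, and hands out words from an in-memory stringstream:

        #include <fstream>
        #include <sstream>
        #include <string>

        class BufferedWordReader {
        public:
            explicit BufferedWordReader(const std::string& path) : file_(path) {}

            // Returns false once the file (and the carried-over tail) is exhausted.
            bool next(std::string& word) {
                while (!(buffer_ >> word)) {
                    if (!refill()) return false;
                }
                return true;
            }

        private:
            bool refill() {
                char chunk[1024];
                file_.read(chunk, sizeof chunk);
                std::string text = carry_ + std::string(chunk, file_.gcount());
                carry_.clear();
                if (text.empty()) return false;

                if (file_) {
                    // Not at EOF yet: the last word may be cut off, so hold it back.
                    std::string::size_type cut = text.find_last_of(" \t\r\n");
                    if (cut == std::string::npos) { carry_ = text; return refill(); }
                    carry_ = text.substr(cut + 1);
                    text.erase(cut + 1);
                }
                buffer_.clear();
                buffer_.str(text);
                return true;
            }

            std::ifstream file_;
            std::istringstream buffer_;
            std::string carry_;
        };

    Usage is then a loop like BufferedWordReader reader("input.txt"); std::string word; while (reader.next(word)) { /* run the heavy per-word work */ } so the underlying file is read roughly once per kilobyte rather than once per word.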

    Read the article

  • clear html webpage?

    - by noname
    I am currently developing a crawler that crawls all links on the web and displays them in the web browser (and saves them, of course). But after some hours there will be a huge list displayed in the browser, and I want to display only, say, 1000 links at a time; then I clear the HTML and display the next 1000 links. This is also better for RAM, since otherwise it will eat up all memory. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does this have anything to do with my case?

    Read the article

  • c++ library for endian-aware reading of raw file stream metadata?

    - by Kache4
    I've got raw data streams from image files, like:

        vector<char> rawData(fileSize);
        ifstream inFile("image.jpg");
        inFile.read(&rawData[0], fileSize);

    I want to parse the headers of different image formats for height and width. Is there a portable library that can read ints, longs, shorts, etc. from the buffer/stream, converting for endianness as specified? I'd like to be able to do something like:

        short x = rawData.readLeShort(offset);

    or

        long y = rawData.readBeLong(offset);

    An even better option would be a lightweight and portable image metadata library (without the extra weight of an image manipulation library) that can work on raw image data. I've found that the Exif libraries out there don't support PNG and GIF.
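
    In the absence of a ready-made library, small helpers are easy to hand-roll; this sketch borrows the function names from the question (they are not from any existing API) and assumes the buffer is addressed by byte offset:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Read a 16-bit little-endian value starting at byte `offset`.
        inline std::uint16_t readLeShort(const std::vector<char>& buf, std::size_t offset) {
            return static_cast<std::uint16_t>(
                static_cast<unsigned char>(buf[offset]) |
                (static_cast<unsigned char>(buf[offset + 1]) << 8));
        }

        // Read a 32-bit big-endian value starting at byte `offset`.
        inline std::uint32_t readBeLong(const std::vector<char>& buf, std::size_t offset) {
            std::uint32_t v = 0;
            for (int i = 0; i < 4; ++i)
                v = (v << 8) | static_cast<unsigned char>(buf[offset + i]);
            return v;
        }

    For instance, a PNG stores its width and height as big-endian 32-bit integers at byte offsets 16 and 20 of the file, so readBeLong(rawData, 16) would yield the width.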

    Read the article

  • jQuery Audio Player

    - by tony noriega
    I was given two MP3 files, one 4.5 MB and one 5.6 MB, and was instructed to have them play on a website I am managing. I have found a nice, clean-looking CSS-based jQuery audio player. My question is: is this the right solution for files that big? I am not sure if the player preloads the file or streams it (if that is the correct terminology); I don't deal much with audio players and such. The player is from happyworm.com/jquery/jplayer/latest/demo-01.htm. Is there another approach I should take to get this to play properly? I don't want it to have to buffer and make the visitor wait, or slow down page loading, etc.; I want it to play cleanly and not affect the visitor's session on the site. Thanks.

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux /proc/meminfo shows a number of memory usage statistics:

        MemTotal:     4040732 kB
        MemFree:        23160 kB
        Buffers:       163340 kB
        Cached:       3707080 kB
        SwapCached:         0 kB
        Active:       1129324 kB
        Inactive:     2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belongs to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first I was inclined to use "free" + "inactive", but Linux's free utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious what the better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric for measuring available memory?
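
    One rough heuristic (offered only as a sketch, not an authoritative answer) is MemFree + Buffers + Cached, the same arithmetic behind the old free(1) "-/+ buffers/cache" line; it overestimates somewhat, because not all of the page cache can simply be dropped (dirty pages, tmpfs, and so on):

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        // Sum MemFree + Buffers + Cached from /proc/meminfo, in kB.
        // Only an approximation of what the kernel could actually reclaim.
        long estimate_reclaimable_kb() {
            std::ifstream meminfo("/proc/meminfo");
            std::string line;
            long total = 0;
            while (std::getline(meminfo, line)) {
                std::istringstream fields(line);
                std::string key;
                long value = 0;
                if ((fields >> key >> value) &&
                    (key == "MemFree:" || key == "Buffers:" || key == "Cached:"))
                    total += value;
            }
            return total;
        }

        int main() {
            std::cout << estimate_reclaimable_kb() << " kB roughly available\n";
        }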

    Read the article

  • nxhtml and geben: debug mode stops responding to keystrokes upon step into html/php mixed line

    - by artistoex
    I'm using the PHP debugger geben and nxhtml-mode. While debugging, as soon as I step into a mixed line such as <foo><?php bar(); ?></foo>, the debugger no longer accepts any keystrokes. However, the mode line still indicates the debugger's presence (the *debugging* entry). I guess this is due to nxhtml's mode changes, because it's the exact same behavior geben shows after disabling and re-enabling it. Does anybody use nxhtml together with geben and has fixed this? Or is it possible to configure Emacs to enable nxhtml conditionally, such that php-mode is used instead when the buffer was opened by geben?

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:

        import cPickle as pickle

        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()                 # This part doesn't take long
        graph = pickle.loads(binary_data)      # This takes ages

    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.

    Read the article

  • SSIS process files from folder

    - by RT
    Background: I have a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them. The SSIS package is scheduled to run once every minute. I'm picking up the files in ascending order of file creation time, building an array of files, and then processing and deleting them one at a time. Problem: if an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. I was wondering whether there was a way to avoid the exception condition. Thanks.

    Read the article

  • Double Buffering with awt

    - by DDP
    Is double buffering (in Java) possible with AWT? Currently, I'm aware that Swing should not be mixed with AWT, so I can't use BufferStrategy and whatnot. If double buffering is possible with AWT, do I have to write the buffer by hand? Unlike Swing, AWT doesn't seem to have the same built-in double buffering capability. If I do have to write the code by hand, is there a good tutorial to look at? Or is it just easier/advisable for a novice programmer to use Swing instead? Sorry about the multi-step question. Thanks for your time :)

    Read the article

  • .NET SerialPort.Read skips bytes

    - by Lukas Rieger
    Solution: reading the data byte-wise via port.ReadByte is too slow; the problem is inside the SerialPort class. I changed it to reading bigger chunks via port.Read, and there are now no buffer overruns. Although I found the solution myself, writing it down helped me, and maybe someone else has the same problem and finds this via Google... (How can I mark it as answered?)

    EDIT 2: by setting port.ReadBufferSize = 2000000; I can delay the problem for ~30 seconds, so it seems .NET really is too slow... Since my application is not that critical, I just set the buffer to 20 MB, but I am still interested in the cause.

    EDIT: I just tested something I had not thought of before (shame on me):

        port.ErrorReceived += (object self, SerialErrorReceivedEventArgs se_arg) =>
        {
            Console.Write("| Error: {0} | ",
                System.Enum.GetName(se_arg.EventType.GetType(), se_arg.EventType));
        };

    and it seems that I have an overrun. Is the .NET implementation too slow for 500k, or is there an error on my side?

    Original question: I built a very primitive oscilloscope (an AVR which sends ADC data over UART to an FTDI chip). On the PC side I have a WPF program that displays this data. The protocol is: two sync bytes (0xaffe) - 14 data bytes - two sync bytes - 14 data bytes - ... I use 16-bit values, so inside the 14 data bytes are 7 channels (LSB first). I verified the uC firmware with hTerm, and it does send and receive everything correctly. But if I try to read the data with C#, sometimes some bytes are lost. The oscilloscope program is a mess, but I created a small sample application which has the same symptoms. I added two extension methods to a) read one byte from the COM port and ignore -1 (EOF) and b) wait for the sync pattern. The sample program first syncs onto the data stream by waiting for 0xaffe and then compares the received bytes with the expected values. The loop runs a few times until an "assert failed" message pops up. I could not find anything about lost bytes via Google; any help would be appreciated.

    Code:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO.Ports;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;

        namespace SerialTest
        {
            public static class SerialPortExtensions
            {
                public static byte ReadByteSerial(this SerialPort port)
                {
                    int i = 0;
                    do
                    {
                        i = port.ReadByte();
                    } while (i < 0 || i > 0xff);
                    return (byte)i;
                }

                public static void WaitForPattern_Ushort(this SerialPort port, ushort pattern)
                {
                    byte hi = 0;
                    byte lo = 0;
                    do
                    {
                        lo = hi;
                        hi = port.ReadByteSerial();
                    } while (!(hi == (pattern >> 8) && lo == (pattern & 0x00ff)));
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    // 500000 baud, 8n1
                    SerialPort port = new SerialPort("COM3", 500000, Parity.None, 8, StopBits.One);
                    port.Open();
                    port.DiscardInBuffer();
                    port.DiscardOutBuffer();

                    // Sync onto the stream
                    port.WaitForPattern_Ushort(0xaffe);

                    byte hi = 0;
                    byte lo = 0;
                    int val;
                    int n = 0;

                    // Start loop, the stream is already synced
                    while (true)
                    {
                        // Read 7 16-bit values (= 14 bytes)
                        for (int i = 0; i < 7; i++)
                        {
                            lo = port.ReadByteSerial();
                            hi = port.ReadByteSerial();
                            val = ((hi << 8) | lo);
                            Debug.Assert(val != 0xaffe);
                        }

                        // Read the two sync bytes
                        lo = port.ReadByteSerial();
                        hi = port.ReadByteSerial();
                        val = ((hi << 8) | lo);
                        Debug.Assert(val == 0xaffe);
                        n++;
                    }
                }
            }
        }

    Read the article

  • "Streaming" MJPG using python.

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It's coming through as Motion-JPEG. I want to try to process the stuff "live," but really what I want to do is this:

    1. Open the URL and start streaming data into some buffer
    2. Read x bytes (where x is the image size) into an image
    3. Process that image
    4. Display it in the result panel
    5. Return to step 2

    The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on whether the images are separated by some header or what. Anybody have any ideas?

    Read the article

  • Open a file with su/sudo inside Emacs

    - by Chris Conway
    Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is:

        (require 'tramp)
        C-c C-f /sudo::/path/to/file

    but this requires an expensive round-trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for are variations of find-file and save-buffer that can be "piped" through su/sudo.

    Read the article

  • Emacs: Define a function which loads the file where the function itself is defined

    - by damd
    I'm refactoring a bit in my Emacs setup and have come to the conclusion that I want to use a different init file than the default one. So basically, in my ~/.emacs file, I have this:

        (load "/some/directory/init.el")

    Up until now, that's been working just fine. However, now I want to redefine an old command that I've used for ages, which opens my init file:

        (defun conf ()
          "Open a buffer with the user init file."
          (interactive)
          (find-file user-init-file))

    As you can see, this will open ~/.emacs no matter what I do. I want it to open /some/directory/init.el, or wherever the conf command itself is defined. How would I do that?

    Read the article

  • How to play multiple online videos on IOS continuously

    - by Matt.Z
    The scenario is like this: I have a long video and slice it into small files (MP4, for example, 5 minutes per file) hosted on some website. I want to play these MP4 videos on iOS continuously, one by one, and try not to let the user notice a pause between video pieces, so I need to buffer the next video while the current one is playing. But I don't know where to start. What should I do? Can anyone point me to related documentation or source code I can study?

    Read the article

  • What is the return value of BeautifulSoup.find?

    - by prosseek
    I run this to get some value as score:

        score = soup.find('div', attrs={'class' : 'summarycount'})

    Running 'print score' gives me the following:

        <div class="summarycount">524</div>

    I need to extract the number part. I used the re module but it failed:

        m = re.search("[^\d]+(\d+)", score)

        TypeError: expected string or buffer
        function search in re.py at line 142
        return _compile(pattern, flags).search(string)

    What's the return type of the find function? How do I get the number from the score variable? Is there any easy way to let BeautifulSoup return the value (in this case 524) itself?

    Read the article

  • How to name variables which are structs

    - by evilpie
    Hello, I often work on private projects using the WinAPI, and as you might know, it has thousands of named and typedefed structs like MEMORY_BASIC_INFORMATION. I will stick to this one in my question: what is preferred, or better, when you want to name a variable of this type? Is there some kind of style guide for this case? For example, if I need that variable for the VirtualQueryEx function. Some ideas:

        MEMORY_BASIC_INFORMATION memoryBasicInformation;
        MEMORY_BASIC_INFORMATION memory_basic_information;

    Just use the name of the struct, not capitalized, with or without the underscores.

        MEMORY_BASIC_INFORMATION basicInformation;
        MEMORY_BASIC_INFORMATION information;

    Short form?

        MEMORY_BASIC_INFORMATION mbi;

    I often see this style, using the abbreviation of the struct name.

        MEMORY_BASIC_INFORMATION buffer;

    VirtualQueryEx names its third parameter lpBuffer (where you pass the pointer to the struct), so using this name might be an idea, too. Cheers

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2
            ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8 GB of RAM, fine-tuned for the MySQL workload.

    Read the article

  • producing a typewriter-like effect

    - by Tony Ennis
    Android newb here. Please use small words :-) I'd like to simulate typewriter output on my Android device. The output being displayed is generated by a game and is somewhat freeform. The effect I want is to see individual characters appear at a rate of about 6 characters a second; when a 'carriage return' is seen, I'd like to insert a delay and then resume typing at the left. What are some suggestions on views? Would the view of choice for this be a TextView? Even that seems like overkill for this read-only, coarsely scrolling output. I saw something on this thread about an AsyncTask. That looks useful. Perhaps my game will write to some manner of buffer, and a subclass of AsyncTask will pull characters out every 0.15 seconds or so, add them to the TextView, then invalidate() the TextView? Sound like a plan?

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Currently I do this:

        import subprocess

        proc = subprocess.Popen('./generate_out',
                                shell=False,
                                stdout=subprocess.PIPE,
                                )
        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    But it is slow; sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe... Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters are put into the pipe between the two processes, nothing happens; then, when enough is produced, I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between the generate_out output and the Python print. What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!

    Read the article

  • Does malloc() allocate a contiguous block of memory?

    - by user66854
    I have a piece of code written by a very old-school programmer :-) . It goes something like this:

        typedef struct ts_request
        {
            ts_request_buffer_header_def header;
            char package[1];
        } ts_request_def;

        ts_request_buffer_def *request_buffer =
            malloc(sizeof(ts_request_def) + (2 * 1024 * 1024));

    The programmer is basically working on a buffer overflow concept. I know the code looks dodgy. So my questions are:

    1. Does malloc always allocate a contiguous block of memory? Because in this code, if the blocks are not contiguous, the code will fail big time.
    2. Doing free(request_buffer), will it free all the bytes allocated by malloc, i.e. sizeof(ts_request_def) + (2 * 1024 * 1024), or only the bytes of the size of the structure, sizeof(ts_request_def)?
    3. Do you see any evident problems with this approach? I need to discuss this with my boss and would like to point out any loopholes with this approach.
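
    For what it's worth, this is the classic "struct hack": a single malloc returns one contiguous region, the trailing char package[1] member is just a convenient way to address the bytes that follow the header, and a single free releases the entire allocation. A self-contained sketch of the same idiom, with the types and sizes below invented for illustration:

        #include <cstddef>
        #include <cstdlib>
        #include <cstring>

        // Invented stand-ins for the header type and payload size in the question.
        struct header_def { int id; std::size_t payload_len; };

        struct request_def {
            header_def header;
            char package[1];   // the "struct hack": the real payload follows in the same block
        };

        int main() {
            const std::size_t payload = 2 * 1024 * 1024;

            // One malloc, one contiguous block: header immediately followed by payload bytes.
            request_def *req =
                static_cast<request_def *>(std::malloc(sizeof(request_def) + payload));
            if (!req) return 1;

            req->header.payload_len = payload;
            // Indexing past package[1] is exactly the "dodgy" part: the bytes are really
            // there, but only the C99 flexible-array-member form blesses this officially.
            std::memset(req->package, 0xAB, payload);

            std::free(req);   // releases the header *and* the 2 MB payload in one call
            return 0;
        }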

    Read the article

  • Interop Structure: Should Unsigned Short be Mapped to byte[]?

    - by Ngu Soon Hui
    I have a C++ structure like this:

        typedef struct _FILE_OP_BLOCK
        {
            unsigned short fid;      // objective file ID
            unsigned short offset;   // operating offset
            unsigned char  len;      // buffer length (update)
                                     // read length (read)
            unsigned char  buff[MAX_BUFF_SIZE];
        } FILE_OP_BLOCK;

    And now I want to map it in .NET. The tricky thing is that I should pass a 2-byte array for fid and an integer for len, even though in C# fid is an unsigned short and len is an unsigned char. I wonder whether my structure (in C#) below is correct?

        public struct File_OP_Block
        {
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 2)]
            public byte[] fid;
            public ushort offset;
            public byte length;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)]
            public char[] buff;
        }

    Read the article
