Search Results

Search found 7188 results on 288 pages for 'frame buffer'.

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Currently I do this:

        import subprocess
        proc = subprocess.Popen('./generate_out', shell=False,
                                stdout=subprocess.PIPE)
        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    but it is slow: a lot of time sometimes passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think some buffer is slowing down my pipe. Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters have been put into the pipe between the two processes, nothing happens; then, once enough has been produced, I get one huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 or more) between a generate_out print and the Python print. What can I do? Would communicate() be faster? Anything else? Thank you a lot!
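
    The long stalls are most likely the child's stdio buffering, not Python's: most programs line-buffer stdout only when it is a terminal and switch to block buffering (typically 4-8 KB) when writing to a pipe, which matches the "nothing for a long time, then a huge burst" symptom. communicate() won't help, since it waits for the whole process to finish. A minimal sketch of one workaround, assuming ./generate_out cannot be modified and coreutils' stdbuf is available:

        import subprocess

        # stdbuf -oL forces the child's stdout into line-buffered mode, so each
        # line reaches the pipe as soon as it is written (Python 2 shown).
        proc = subprocess.Popen(['stdbuf', '-oL', './generate_out'],
                                stdout=subprocess.PIPE)
        for out in iter(proc.stdout.readline, ''):
            print(doStuff(out))
        proc.wait()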

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics:

        MemTotal:   4040732 kB
        MemFree:      23160 kB
        Buffers:     163340 kB
        Cached:     3707080 kB
        SwapCached:       0 kB
        Active:     1129324 kB
        Inactive:   2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (counted in both "Cached" and "Active") and inactive page cache ("Inactive" + "Cached"). What I want to measure is "free" memory, in a sense that includes used pages that could be dropped without a significant impact on the overall system's performance. At first I was inclined to use "free" + "inactive", but Linux's free utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious which approach is better. When the kernel runs out of memory, in what order of priority are pages dropped, and what is the more appropriate metric for available memory?
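
    Since kernel 3.14, /proc/meminfo answers this question directly with a MemAvailable field: the kernel's own estimate of how much memory a new workload could use without swapping, accounting for reclaimable page cache and slab. A minimal sketch, assuming a recent enough kernel (it falls back to MemFree otherwise):

        def mem_available_kb():
            # Parse /proc/meminfo into {field: kB}; lines look like "Key:  N kB".
            fields = {}
            with open('/proc/meminfo') as f:
                for line in f:
                    key, value = line.split(':')
                    fields[key] = int(value.split()[0])
            return fields.get('MemAvailable', fields['MemFree'])

        print(mem_available_kb())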

  • nxhtml and geben: debug mode stops responding to keystrokes upon step into html/php mixed line

    - by artistoex
    I'm using the PHP debugger geben and nxhtml-mode. While debugging, as soon as I step into a mixed line such as <foo><?php bar(); ?></foo>, the debugger no longer accepts any keystrokes. However, the mode line still indicates the debugger's presence (a *debugging* entry). I guess this is due to nxhtml's mode changes, because it's the exact same behaviour geben shows after disabling and re-enabling it. Does anybody use nxhtml together with geben and has fixed this? Or is it possible to configure Emacs to enable nxhtml conditionally, so that php-mode is used instead when the buffer was opened by geben?

  • LaTeX: two captioned verbatim environments side by side

    - by egon
    How do I get two verbatim environments, inside floats with automatic captioning, side by side?

        \usepackage{float,fancyvrb}
        ...
        \DefineVerbatimEnvironment{filecontents}{Verbatim}%
          {fontsize=\small, fontfamily=tt, gobble=4, frame=single,
           framesep=5mm, baselinestretch=0.8, labelposition=topline,
           samepage=true}
        \newfloat{fileformat}{thp}{lof}[chapter]
        \floatname{fileformat}{File Format}

        \begin{fileformat}
          \begin{filecontents}
            A B C
          \end{filecontents}
          \caption{example.abc}
        \end{fileformat}

        \begin{fileformat}
          \begin{filecontents}
            C B A
          \end{filecontents}
          \caption{example.cba}
        \end{fileformat}

    So basically I just need those examples side by side (while keeping the automatic numbering of the captions). I've been trying for a while now.
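
    One approach that should work with the definitions above is to put both examples into a single fileformat float and wrap each in a minipage: fancyvrb's Verbatim runs fine inside a minipage, and each \caption inside the float gets its own number. A sketch, assuming the preamble stays as above (mind the indentation of the verbatim content, since gobble=4 strips four columns):

        \begin{fileformat}
          \begin{minipage}[t]{0.48\textwidth}
            \begin{filecontents}
                A B C
            \end{filecontents}
            \caption{example.abc}
          \end{minipage}\hfill
          \begin{minipage}[t]{0.48\textwidth}
            \begin{filecontents}
                C B A
            \end{filecontents}
            \caption{example.cba}
          \end{minipage}
        \end{fileformat}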

  • Clear HTML webpage?

    - by noname
    I am currently developing a crawler that crawls all links on the web and displays them in the web browser (saving them too, of course). But after some hours a huge list will have built up in the browser, and I want to display only, say, 1000 links at a time: show 1000, then clear the page and display the next 1000. This is also better for RAM, which would otherwise fill up. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does that have anything to do with my case?
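
    If the links are appended to the page as DOM elements, one client-side approach is to cap the visible list after each batch: the oldest entries are removed from the document (they are already saved, so nothing is lost). A sketch, assuming a hypothetical container element with id "links":

        // Keep at most maxItems links on screen; drop the oldest ones.
        function trimLinkList(maxItems: number): void {
            const list = document.getElementById("links");
            if (!list) return;
            while (list.children.length > maxItems && list.firstElementChild) {
                list.removeChild(list.firstElementChild);
            }
        }
        trimLinkList(1000);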

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web browser bookmarklet, and I get the following behaviour: In Chromium, the "Launch Application" dialog comes up and calls xdg-open org-protocol://..., which ends up firing a new Chromium frame. In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the error message "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program", without even being shown an external-application selection dialog. I'm not using any desktop environment, so I need to make this work strictly with xdg. However, despite reading the shared MIME info spec etc., I still can't fathom a working configuration.
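
    With xdg alone (no desktop environment), custom URL schemes are handled as MIME types of the form x-scheme-handler/<scheme>, registered through an ordinary .desktop file. A sketch of the usual setup (file name and path are illustrative):

        # ~/.local/share/applications/org-protocol.desktop
        [Desktop Entry]
        Name=org-protocol handler
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

        # Register it as the default handler and rebuild the cache:
        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol
        update-desktop-database ~/.local/share/applications

    After this, xdg-open org-protocol://... should dispatch to emacsclient, which is also what Chromium's "Launch Application" path ends up calling.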

  • Unable to diagnose Windows 7 lockup

    - by Delyan
    Basic info: a Dell Studio XPS 13 laptop (Intel P9600, 4GB RAM, NVidia 9400M card, 256GB Samsung PM800 SSD) running Windows 7 Ultimate as well as Fedora 14. Here's the deal: Windows just locks up out of nowhere, with no log entries, no dumps, no BSOD; it simply freezes. This happens mostly when idle (but it has happened while I was using it too) and follows no particular time frame. No input is accepted; the only solution is to hold the power button. Although this sounds like a clear-cut hardware issue, the reason I'm willing to rule that out is that my primary OS is Fedora 14: it has been working fine for the past two years, and I've been stress testing the hardware (intentionally or not) every once in a while with no issues. I would like to ask if there's any way to get diagnostic output from Windows in a situation such as this. The next step in my testing is to leave it in Safe Mode overnight and see if it locks up, but even then I still need to figure out which component freezes it during normal operation.
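
    One way to get a dump out of a machine that still services the keyboard interrupt is Windows' keyboard-initiated crash: with the registry flag below set (and after a reboot), holding the right Ctrl key and pressing Scroll Lock twice forces a MANUALLY_INITIATED_CRASH bugcheck and writes a memory dump that WinDbg can inspect. A sketch of the .reg file for a PS/2 keyboard; USB keyboards use the kbdhid service key instead of i8042prt:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters]
        "CrashOnCtrlScroll"=dword:00000001

    If the freeze is so hard that even this does not fire, that in itself points back toward hardware.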

  • .NET SerialPort.Read skips bytes

    - by Lukas Rieger
    Solution: Reading the data byte-wise via port.ReadByte is too slow; the problem is inside the SerialPort class. I changed the code to read bigger chunks via port.Read, and there are now no buffer overruns. Although I found the solution myself, writing it down helped me, and maybe someone else has the same problem and finds this via Google. (How can I mark it as answered?)

    EDIT 2: By setting port.ReadBufferSize = 2000000; I can delay the problem for about 30 seconds, so it seems .NET really is too slow. Since my application is not that critical, I just set the buffer to 20 MB, but I am still interested in the cause.

    EDIT: I just tested something I had not thought of before (shame on me):

        port.ErrorReceived += (object self, SerialErrorReceivedEventArgs se_arg) =>
        {
            Console.Write("| Error: {0} | ",
                System.Enum.GetName(se_arg.EventType.GetType(), se_arg.EventType));
        };

    and it seems that I have an overrun. Is the .NET implementation too slow for 500k, or is there an error on my side?

    Original question: I built a very primitive oscilloscope (an AVR which sends ADC data over UART to an FTDI chip). On the PC side I have a WPF program that displays this data. The protocol is: two sync bytes (0xaffe), 14 data bytes, two sync bytes, 14 data bytes, and so on. I use 16-bit values, so the 14 data bytes hold 7 channels (LSB first). I verified the uC firmware with hTerm, and it sends and receives everything correctly. But if I try to read the data with C#, some bytes are sometimes lost. The oscilloscope program is a mess, but I created a small sample application which shows the same symptoms. I added two extension methods to (a) read one byte from the COM port, ignoring -1 (EOF), and (b) wait for the sync pattern. The sample program first syncs onto the data stream by waiting for 0xaffe and then compares the received bytes with the expected values. The loop runs a few times until an "assert failed" message pops up. I could not find anything about lost bytes via Google; any help would be appreciated. Code:

        using System;
        using System.Diagnostics;
        using System.IO.Ports;

        namespace SerialTest
        {
            public static class SerialPortExtensions
            {
                public static byte ReadByteSerial(this SerialPort port)
                {
                    int i;
                    do
                    {
                        i = port.ReadByte();
                    } while (i < 0 || i > 0xff);
                    return (byte)i;
                }

                public static void WaitForPattern_Ushort(this SerialPort port, ushort pattern)
                {
                    byte hi = 0;
                    byte lo;
                    do
                    {
                        lo = hi;
                        hi = port.ReadByteSerial();
                    } while (!(hi == (pattern >> 8) && lo == (pattern & 0x00ff)));
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    // 500000 baud, 8n1
                    SerialPort port = new SerialPort("COM3", 500000, Parity.None, 8, StopBits.One);
                    port.Open();
                    port.DiscardInBuffer();
                    port.DiscardOutBuffer();

                    // Sync onto the stream
                    port.WaitForPattern_Ushort(0xaffe);

                    int n = 0;
                    while (true)
                    {
                        byte hi, lo;
                        int val;
                        // Read 7 16-bit values (= 14 bytes)
                        for (int i = 0; i < 7; i++)
                        {
                            lo = port.ReadByteSerial();
                            hi = port.ReadByteSerial();
                            val = (hi << 8) | lo;
                            Debug.Assert(val != 0xaffe);
                        }
                        // Read the two sync bytes
                        lo = port.ReadByteSerial();
                        hi = port.ReadByteSerial();
                        val = (hi << 8) | lo;
                        Debug.Assert(val == 0xaffe);
                        n++;
                    }
                }
            }
        }
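
    For anyone hitting the same wall: the per-byte ReadByte path has considerable per-call overhead inside SerialPort, and at 500 kbaud (roughly 50 KB/s of payload) it can fall behind until the driver's receive buffer overruns. A sketch of the chunked style that fixed it here, assuming the same framing (a real reader must carry leftover bytes across calls, since frames can span two Reads):

        byte[] buf = new byte[4096];
        // Read returns as soon as at least one byte is available.
        int got = port.Read(buf, 0, buf.Length);
        for (int i = 0; i + 1 < got; i += 2)
        {
            int val = (buf[i + 1] << 8) | buf[i];   // LSB first, as in the protocol
            // ... same sync/assert logic as above ...
        }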

  • SSIS process files from folder

    - by RT
    Background: I have a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them, and it is scheduled to run once every minute. I pick up the files in ascending order of file creation time, build an array of them, and then process and delete them one at a time. Problem: If an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance still has in its buffer. By the time the second instance of the package gets around to processing a file, the first instance may already have deleted it, creating an exception condition. I was wondering whether there is a way to avoid the exception condition. Thanks.
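
    One common way to avoid the race is to have each instance claim its files up front by moving them into an instance-specific working folder; a rename within the same volume is atomic, so a file can only be claimed once, and the loser simply skips it. A sketch of the idea as an SSIS Script Task body (paths and names are illustrative):

        // using System; using System.IO;
        string inbox = @"\\server\inbox";
        string claim = Path.Combine(@"\\server\work", Guid.NewGuid().ToString());
        Directory.CreateDirectory(claim);
        foreach (string f in Directory.GetFiles(inbox))
        {
            try { File.Move(f, Path.Combine(claim, Path.GetFileName(f))); }
            catch (IOException) { /* already claimed by another instance */ }
        }
        // ... process and delete everything under 'claim' ...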

  • Emacs: Define a function which loads the file where the function itself is defined

    - by damd
    I'm refactoring my Emacs setup a bit and have come to the conclusion that I want to use a different init file than the default one. So basically, in my ~/.emacs file, I have this:

        (load "/some/directory/init.el")

    Up until now, that's been working just fine. However, now I want to redefine an old command that I've used for ages, which opens my init file:

        (defun conf ()
          "Open a buffer with the user init file."
          (interactive)
          (find-file user-init-file))

    As you can see, this will open ~/.emacs no matter what I do. I want it to open /some/directory/init.el, or wherever the conf command itself is defined. How would I do that?
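
    While Emacs is loading a file, it binds the variable load-file-name to that file's name, so the loaded file can capture its own location at load time. A minimal sketch to put in /some/directory/init.el:

        ;; Captured once, while this file is being loaded.
        (defvar my-real-init-file (or load-file-name buffer-file-name)
          "The file that defines `conf'.")

        (defun conf ()
          "Open a buffer with the real init file."
          (interactive)
          (find-file my-real-init-file))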

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file loads into memory quickly. In other words, if I run:

        import cPickle as pickle
        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()               # This part doesn't take long
        graph = pickle.loads(binary_data)    # This takes ages

    how can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which one I use. Also note that although I am using the loads ("load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping somebody can tell me how to increase some read buffer buried in the pickle implementation.
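
    A large share of unpickling time for container-heavy structures typically goes to the cyclic garbage collector, which keeps running as millions of objects are allocated; disabling it for the duration of the load is a well-known fix. A sketch under that assumption (also passing the file object straight to load, avoiding the intermediate string):

        import cPickle as pickle
        import gc

        gc.disable()                    # no GC passes during mass allocation
        try:
            with open("bigNetworkXGraph.pickle", "rb") as f:
                graph = pickle.load(f)
        finally:
            gc.enable()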

  • C++ library for endian-aware reading of raw file stream metadata?

    - by Kache4
    I've got raw data streams from image files, like:

        vector<char> rawData(fileSize);
        ifstream inFile("image.jpg");
        inFile.read(&rawData[0], fileSize);

    I want to parse the headers of different image formats for height and width. Is there a portable library that can read ints, longs, shorts, etc. from the buffer/stream, converting for endianness as specified? I'd like to be able to do something like:

        short x = rawData.readLeShort(offset);

    or

        long y = rawData.readBeLong(offset);

    An even better option would be a lightweight and portable image metadata library (without the extra weight of an image manipulation library) that can work on raw image data. I've found that the Exif libraries out there don't support PNG and GIF.
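
    If no library turns up, shift-and-OR helpers are short to write and portable, since they never depend on the host's own byte order. A minimal sketch:

        #include <cstddef>
        #include <stdint.h>
        #include <vector>

        // Read a little-endian 16-bit value at byte offset 'off'.
        uint16_t readLeShort(const std::vector<char>& b, size_t off) {
            return static_cast<uint16_t>(
                (static_cast<uint8_t>(b[off + 1]) << 8) |
                 static_cast<uint8_t>(b[off]));
        }

        // Read a big-endian 32-bit value at byte offset 'off'.
        uint32_t readBeLong(const std::vector<char>& b, size_t off) {
            return (static_cast<uint32_t>(static_cast<uint8_t>(b[off]))     << 24)
                 | (static_cast<uint32_t>(static_cast<uint8_t>(b[off + 1])) << 16)
                 | (static_cast<uint32_t>(static_cast<uint8_t>(b[off + 2])) <<  8)
                 |  static_cast<uint32_t>(static_cast<uint8_t>(b[off + 3]));
        }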

  • How to play multiple online videos on iOS continuously

    - by Matt.Z
    The scenario is like this: I have a long video and slice it into small files (MP4, for example 5 minutes per file), which I put on a website. I want to play these MP4 videos on iOS continuously, one by one, trying not to let the user feel a pause between the pieces. So I need to buffer the next video while playing the current one, but I don't know where to start. What should I do? Can anyone point me to related documentation or source code I can study?
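
    AVFoundation's AVQueuePlayer does essentially this: it plays a queue of AVPlayerItems back to back and starts buffering the next item while the current one is playing. A minimal sketch (in Swift for brevity; the URLs are illustrative):

        import AVFoundation

        let urls = [
            URL(string: "https://example.com/part1.mp4")!,
            URL(string: "https://example.com/part2.mp4")!,
        ]
        let player = AVQueuePlayer(items: urls.map { AVPlayerItem(url: $0) })
        player.play()
        // Display it by attaching AVPlayerLayer(player: player) to a view.

    If you control the slicing yourself, HTTP Live Streaming is the platform's native answer to exactly this problem and removes the seams entirely.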

  • "Streaming" MJPG using python.

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It comes through as Motion JPEG. I want to try to process the stuff "live", but really what I want to do is this:

        1. Open the URL and start streaming data into some buffer
        2. Read x bytes (where x is the image size) into an image
        3. Process that image
        4. Display it in the result panel
        5. Return to step 2

    The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on whether the images are separated by some header or what. Anybody have any ideas?
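
    Each frame in an MJPEG stream is an ordinary JPEG, so no byte count is needed: scan for the JPEG start-of-image marker (FF D8) and end-of-image marker (FF D9) and cut out whatever lies between them. (HTTP MJPEG streams usually also carry a multipart boundary whose Content-Length header can be used instead.) A sketch of the marker approach, assuming Python 2 and a hypothetical camera URL:

        import urllib2

        stream = urllib2.urlopen('http://camera.local/video.mjpg')
        buf = ''
        while True:
            buf += stream.read(4096)
            start = buf.find('\xff\xd8')                 # SOI marker
            if start != -1:
                end = buf.find('\xff\xd9', start + 2)    # EOI marker
                if end != -1:
                    jpg = buf[start:end + 2]             # one complete JPEG frame
                    buf = buf[end + 2:]
                    process(jpg)                         # your handler (hypothetical)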

  • Improve performance on Lync desktop sharing

    - by Trikks
    I'm using Lync 2010 Server to handle client communication and screen sharing. The biggest issue is the performance of screen sharing: it is of rather high quality, but the frame rate is very poor. I have been reading and searching a lot on the subject, and 95% of all topics are about bandwidth; we have a 200/200 Mbit Internet connection solely for this application. My test machines also run on an internal gigabit LAN, and the speeds between all boxes are hysterically fast. The next step was to ensure that there were profiles for different bandwidths, so I registered some:

        New-CsNetworkBandwidthPolicyProfile -Identity 50Mb_Link `
            -Description "BW profile for 50Mb links" `
            -AudioBWLimit 20000 -AudioBWSessionLimit 200 `
            -VideoBWLimit 14000 -VideoBWSessionLimit 700

        New-CsNetworkBandwidthPolicyProfile -Identity 100Mb_Link `
            -Description "BW profile for 100Mb links" `
            -AudioBWLimit 30000 -AudioBWSessionLimit 300 `
            -VideoBWLimit 25000 -VideoBWSessionLimit 1500

    Nothing fancy happened here either. None of the test boxes have anything from Norton installed, and they don't have any firewalls running (nor does the Lync server); all fences are down in this environment, just for testing. Is there anything I may have missed to improve the quality? Thanks

  • Tool to convert a series of images into a single sprite sheet

    - by senbei
    Are there any known tools that can take a series of pictures and generate a single sprite sheet from them (i.e. place all the images into a single image, one after the other)? I have found some very basic software that does this, but I am looking for a tool that can also detect duplicate frames and output some kind of data structure that lets me programmatically retrieve the coordinates of each frame on the generated image. I need this to convert existing animations with many duplicate frames, and I am a bit surprised that there is no standard software for it, as most of the information on the web is about people doing this completely manually. Any recommendations?

  • Open a file with su/sudo inside Emacs

    - by Chris Conway
    Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is:

        (require 'tramp)
        C-x C-f /sudo::/path/to/file

    but this requires an expensive round trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for are variations of find-file and save-buffer that can be "piped" through su/sudo.
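
    The /sudo:: method can be wrapped in small commands so the round trip is hidden; once a buffer visits a /sudo:: path, save-buffer goes through sudo as well, and TRAMP caches the authorization for the session. A minimal sketch:

        (require 'tramp)

        (defun sudo-find-file (file)
          "Open FILE as root on the local host via TRAMP sudo."
          (interactive "FOpen file as root: ")
          (find-file (concat "/sudo::" (expand-file-name file))))

        (defun sudo-reopen-current ()
          "Revisit the current file as root; saving then uses sudo too."
          (interactive)
          (when buffer-file-name
            (find-alternate-file (concat "/sudo::" buffer-file-name))))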

  • Double Buffering with awt

    - by DDP
    Is double buffering (in Java) possible with AWT? Currently, I'm aware that Swing should not be used with AWT, so I can't use BufferStrategy and whatnot. If double buffering is possible with AWT, do I have to write the buffer by hand? Unlike Swing, AWT doesn't seem to have the same built-in double buffering capability. If I do have to write the code by hand, is there a good tutorial to look at? Or is it just easier/advisable for a novice programmer to use Swing instead? Sorry about the multi-step question. Thanks for your time :)
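
    Double buffering in AWT is done by hand, but the pattern is short: render into an off-screen Image created with Component.createImage, copy it to the screen in a single drawImage call, and override update to stop AWT from clearing the background (the source of most flicker). Note also that BufferStrategy is itself part of AWT (java.awt.Canvas.createBufferStrategy), so that route is open without Swing. A minimal sketch:

        import java.awt.*;

        class BufferedCanvas extends Canvas {
            private Image back;   // off-screen buffer

            @Override
            public void update(Graphics g) {
                paint(g);         // skip the default clear-to-background
            }

            @Override
            public void paint(Graphics g) {
                if (back == null)
                    back = createImage(getWidth(), getHeight());
                Graphics bg = back.getGraphics();
                bg.setColor(Color.WHITE);
                bg.fillRect(0, 0, getWidth(), getHeight());
                bg.setColor(Color.BLACK);
                bg.drawString("double buffered", 20, 20);  // draw the scene here
                bg.dispose();
                g.drawImage(back, 0, 0, this);             // one blit, no flicker
            }
        }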

  • SharePoint MOSS - Serve HTTP content on an HTTPS page without Mixed Content Warning?

    - by kcb263
    Our "portal-like" SharePoint site is served using HTTPS/SSL. So a user goes to https://web.company.com and sees content and different Web Parts. So far, no problem. The desire now is to have new Web Parts added that either frame HTTP content (such as Weather Bug) or HTTP RSS feeds. The issue that arises is that by doing this, results in a "Mixed Content" warning in the browser. Has anybody successfully been able to implement such a scenario, or one similar to it? The options we have looked at, unsuccessfully, have been: using Apache Reverse Proxy Server mirror an external site Custom Web Parts

  • What is the fastest way to copy the contents of a DVD to hard disk using Linux?

    - by Ritesh
    I have gone through some links that talk about the fastest way of copying files in Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also discuss how read and write requests with buffer sizes of 256 KB and 128 KB are faster than with 1 MB. The link for that is: "Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads?" I am looking for a similar method in Linux that allows me to copy the contents of a DVD to hard disk quickly. So I want to know: are there file-operation flags in Linux that would give me the best result, and which way of copying is best in Linux? My code is all in C++.
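
    The rough Linux counterparts are O_DIRECT (the analogue of FILE_FLAG_NO_BUFFERING) and the readahead hints of posix_fadvise; for a single sequential pass over a DVD, telling the kernel the access pattern is sequential usually gets close to the drive's raw speed, since optical drives are far slower than any page cache overhead being avoided. A minimal C++ sketch (error handling elided):

        #include <fcntl.h>
        #include <unistd.h>
        #include <vector>

        void copy_sequential(const char* src, const char* dst) {
            int in  = open(src, O_RDONLY);
            int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            // Hint: 'in' will be read sequentially, so prefetch aggressively.
            posix_fadvise(in, 0, 0, POSIX_FADV_SEQUENTIAL);
            std::vector<char> buf(1 << 20);   // 1 MB; benchmark 128-512 KB too
            ssize_t n;
            while ((n = read(in, &buf[0], buf.size())) > 0)
                write(out, &buf[0], static_cast<size_t>(n));
            close(in);
            close(out);
        }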

  • Word document to PDF: open hyperlinks in new window

    - by baens
    I have a Microsoft Word document with hyperlinks in it. When I save the document as a PDF, those hyperlinks no longer open in a new window. I have tried all the settings under the "Target Frame..." option, but they don't seem to persist. Is there any setting that makes all hyperlinks in the document open in a new window? I am currently using the Acrobat plugin, but could move to a different plugin if it offers this feature.

  • What is the return value of BeautifulSoup.find?

    - by prosseek
    I run this to get some value as a score:

        score = soup.find('div', attrs={'class' : 'summarycount'})

    Running print score gives the following:

        <div class="summarycount">524</div>

    I need to extract the number part. I used the re module but failed:

        m = re.search("[^\d]+(\d+)", score)

        TypeError: expected string or buffer
        function search in re.py at line 142
        return _compile(pattern, flags).search(string)

    What's the return type of the find function? How do I get the number from the score variable? Is there any easy way to let BeautifulSoup return the value (in this case 524) itself?
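
    find returns a Tag object when a match exists (and None otherwise), which is exactly why re.search rejects it: it expects a string, not a Tag. Pulling the text out of the tag first solves both problems; a minimal sketch:

        score = soup.find('div', attrs={'class': 'summarycount'})
        if score is not None:
            count = int(score.string)   # the tag's text content, here '524'
            print count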

  • How to name variables which are structs

    - by evilpie
    Hello, I often work on private projects using the WinAPI, and as you might know, it has thousands of named and typedef'd structs like MEMORY_BASIC_INFORMATION. I will stick with this one in my question: what is preferred, or better, when you need to name a variable of this type? Is there some kind of style guide for this case? For example, if I need the variable for the VirtualQueryEx function. Some ideas:

        MEMORY_BASIC_INFORMATION memoryBasicInformation;
        MEMORY_BASIC_INFORMATION memory_basic_information;

    Just use the name of the struct, uncapitalized, with or without the underscores.

        MEMORY_BASIC_INFORMATION basicInformation;
        MEMORY_BASIC_INFORMATION information;

    A short form?

        MEMORY_BASIC_INFORMATION mbi;

    I often see this style, using the abbreviation of the struct name.

        MEMORY_BASIC_INFORMATION buffer;

    VirtualQueryEx names its third parameter lpBuffer (where you pass the pointer to the struct), so using this name might be an idea, too. Cheers

  • Export to Excel taking a long time from ASP pages?

    - by ricky
    I am using the following code to export to Excel from an .ASP page:

        GMID = Request.QueryString("GMID")
        Response.Buffer = False
        Response.ContentType = "application/vnd.ms-excel"
        DIR_YR = Request.QueryString("DIR_YR")
        CD = Request.QueryString("CD")
        YEAR = Request.QueryString("IND")

    The problem I am facing is that when there are around 2,000 records or more, the export-to-Excel prompt shows its Open option, but clicking it only shows "Download in progress..." and no Excel window actually opens. For 700-800 rows it works fine. I am not looking to change the whole code, since the problem only affects one sales rep who has more than 2,000 rows; I am looking for a one- or two-line change.
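
    Two one-line changes often help here: buffer the response so IIS sends one complete payload instead of trickling it out row by row, and add a Content-Disposition header so the browser treats the payload as a file download rather than trying to render it. A sketch under those assumptions (the filename is illustrative):

        Response.Buffer = True
        Response.ContentType = "application/vnd.ms-excel"
        Response.AddHeader "Content-Disposition", "attachment; filename=export.xls"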

  • Excel SUM From Different Sheets IF Date Found

    - by user329005
    I have a workbook with a separate sheet for each product (about 20 sheets, with more added on a regular basis). Each product is only available for a certain time frame and has daily sales data recorded on that product's sheet. I want an overall snapshot across all products for any given date, consolidated on a new sheet. This would sum a particular column on each of the other sheets wherever a matching date exists. I currently have a moderately passable formula with a separate VLOOKUP for each product sheet, like SUM(IF(ISERROR(VLOOKUP(DATECELL,SHEETNAME!ARRAY,COLUMN...)))) repeated per sheet, but it's incredibly cumbersome to update the formula whenever a new product is added. I'm thinking there's a much easier way using a named group of sheet names together with SUMIF, VLOOKUP, etc.; then when a new product sheet is added, I could simply add the sheet name to the named group rather than editing all the formulas. Any help would be much appreciated!
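
    The usual trick for a SUMIF across a list of sheets is SUMPRODUCT over SUMIF with INDIRECT: keep the sheet names in a named range and let the formula build each sheet reference from that list, so adding a product only means adding its sheet name to the range. A sketch, assuming the names live in a range called SheetList, dates are in column A and sales in column B of every product sheet, and the report date is in A2:

        =SUMPRODUCT(SUMIF(INDIRECT("'"&SheetList&"'!A:A"),A2,
                          INDIRECT("'"&SheetList&"'!B:B")))

    Sheets without that date simply contribute zero, so products outside their availability window need no special handling.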
