Search Results

Search found 6231 results on 250 pages for 'slow diver'.

  • Problem with `delete-directory` when `delete-by-moving-to-trash` is enabled

    - by Andreo
    There is strange behavior in the delete-directory function when the delete-by-moving-to-trash flag is enabled: it deletes files one by one instead of applying move-file-to-trash to the directory itself. As a result, Emacs deletes big directories slowly, and the trash holds many separate files after the deletion, so it is impossible to restore the directory as a whole. Example directory structure:

        ddd/
          ccc/
            1.txt

    After deleting ddd there are three entries in the trash:

        trash/
          ddd/
          ccc/
          1.txt

    instead of one:

        trash/
          ddd/

    It is very slow, because Emacs traverses the directory recursively, and I can't restore the deleted directory. What I need is exactly the behavior of move-file-to-trash, but transparent (i.e. via 'D x' in Dired mode). How can I solve this? As a temporary solution I am considering an advice function for `delete-directory'.
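
    A minimal sketch of such an advice, assuming Emacs 23-style defadvice (delete-by-moving-to-trash and move-file-to-trash are the stock Emacs names; the advice name is illustrative):

        (defadvice delete-directory (around trash-whole-directory activate)
          "Trash the directory in one step instead of file by file."
          (if delete-by-moving-to-trash
              (move-file-to-trash (ad-get-arg 0))
            ad-do-it))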

    Read the article

  • Will this make the object thread-safe?

    - by sharptooth
    I have a native Visual C++ COM object and I need to make it completely thread-safe so that I can legally mark it as "free-threaded" in the system registry. Specifically, I need to make sure that no more than one thread ever accesses any member variable of the object simultaneously. The catch is that I'm almost sure no sane consumer of my COM object will ever try to use it from more than one thread at a time, so I want the solution to be as simple as possible while still meeting the requirement above. Here's what I came up with: I add a mutex or critical section as a member variable of the object. Every COM-exposed method acquires the mutex/section at the beginning and releases it before returning control. I understand that this solution doesn't provide fine-grained access and might slow execution down, but since I suppose simultaneous access will not really occur, I don't care about that. Will this solution suffice? Is there a simpler one?
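
    A minimal sketch of that scheme, with an RAII guard so every exit path releases the lock; the class and method names are illustrative, not from the actual object:

        #include <windows.h>

        class ObjectGuard
        {
            CRITICAL_SECTION m_cs;
        public:
            ObjectGuard()  { InitializeCriticalSection(&m_cs); }
            ~ObjectGuard() { DeleteCriticalSection(&m_cs); }
            void Enter()   { EnterCriticalSection(&m_cs); }
            void Leave()   { LeaveCriticalSection(&m_cs); }
        };

        class ScopedLock
        {
            ObjectGuard& m_guard;
        public:
            explicit ScopedLock(ObjectGuard& g) : m_guard(g) { m_guard.Enter(); }
            ~ScopedLock() { m_guard.Leave(); }
        };

        // Inside each COM-exposed method (m_lock is an ObjectGuard member):
        // STDMETHODIMP CMyObject::DoWork()
        // {
        //     ScopedLock lock(m_lock); // serializes all member access
        //     ...
        //     return S_OK;
        // }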

    Read the article

  • iPhone Core Data: How should I relate many child entities to their parents

    - by Robert
    I am trying to import data from a database that uses primary key / foreign key relations into a Core Data store in Xcode. I have code that creates hundreds of child entities in a managed object context. Each child has an ID that corresponds to a parent:

        child1 parentID = 3
        child2 parentID = 17
        child3 parentID = 17
        ...
        childn parentID = 5

    I now need to relate each child to its parent. The parents are all stored in the persistent store. My first thought was to perform a fetch for each child to get its parent. However, I think this would be slow. Am I correct? How should I do this instead?
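
    A per-child fetch is indeed the slow path; the usual pattern is one fetch for all parents followed by an in-memory lookup. A sketch, where the entity name "Parent", the attribute "parentID", the relationship "parent", and the children array are all illustrative:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Parent"
                                       inManagedObjectContext:context]];
        NSArray *parents = [context executeFetchRequest:request error:NULL];
        [request release];

        // Index the parents by ID once.
        NSMutableDictionary *parentsByID = [NSMutableDictionary dictionary];
        for (NSManagedObject *parent in parents) {
            [parentsByID setObject:parent forKey:[parent valueForKey:@"parentID"]];
        }

        // Relate each child via the dictionary instead of a fetch.
        for (NSManagedObject *child in children) {
            [child setValue:[parentsByID objectForKey:[child valueForKey:@"parentID"]]
                     forKey:@"parent"];
        }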

    Read the article

  • How do I write a Java text file viewer for big log files

    - by Hannes de Jager
    I am working on a software product with an integrated log file viewer. The problem is that it's slow and unstable for really large files, because it reads the whole file into memory when you view a log file. I want to write a new log file viewer that addresses this problem. What are the best practices for writing viewers for large text files? How do editors like Notepad++ and Vim accomplish this? I was thinking of using a buffered bi-directional text stream reader together with Java's TableModel. Am I thinking along the right lines, and are such stream implementations available for Java?
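
    One common approach, sketched with java.io.RandomAccessFile: index line-start offsets in a background pass, then seek and read only the lines currently on screen. The offset index itself is assumed to exist already:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.ArrayList;
        import java.util.List;

        public class LogWindow {
            // Reads 'count' lines starting at 'offset', which must point at
            // the start of a line (e.g. taken from a prebuilt offset index).
            public static List<String> readLines(String path, long offset, int count)
                    throws IOException {
                List<String> lines = new ArrayList<String>();
                RandomAccessFile raf = new RandomAccessFile(path, "r");
                try {
                    raf.seek(offset);
                    String line;
                    while (lines.size() < count && (line = raf.readLine()) != null) {
                        lines.add(line);
                    }
                } finally {
                    raf.close();
                }
                return lines;
            }
        }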

    Read the article

  • Is there a faster way to download a page from the net to a string?

    - by cphil5
    I have tried other methods to download info from a URL, but need a faster one. I need to download and parse about 250 separate pages, and would like the app not to appear ridiculously slow. This is the code I am currently using to retrieve a single page; any insight would be great.

        try {
            URL myURL = new URL("http://www.google.com");
            URLConnection ucon = myURL.openConnection();
            InputStream inputStream = ucon.getInputStream();
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
            ByteArrayBuffer byteArrayBuffer = new ByteArrayBuffer(50);
            int current = 0;
            while ((current = bufferedInputStream.read()) != -1) {
                byteArrayBuffer.append((byte) current);
            }
            tempString = new String(byteArrayBuffer.toByteArray());
        } catch (Exception e) {
            Log.i("Error", e.toString());
        }
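
    The per-byte read() loop is the likely bottleneck; reading into a buffer amortizes the call overhead. A minimal sketch (the 8 KB buffer size and the UTF-8 charset are illustrative assumptions):

        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.net.URL;

        public class PageFetcher {
            public static String fetch(String address) throws Exception {
                InputStream in = new URL(address).openConnection().getInputStream();
                try {
                    ByteArrayOutputStream out = new ByteArrayOutputStream(8192);
                    byte[] chunk = new byte[8192];
                    int read;
                    while ((read = in.read(chunk)) != -1) {
                        out.write(chunk, 0, read); // append a whole chunk at once
                    }
                    return out.toString("UTF-8");
                } finally {
                    in.close();
                }
            }
        }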

    Read the article

  • Query to update rowNum

    - by BrokeMyLegBiking
    Can anyone help me write this query more efficiently? I have a table that captures TCP traffic, and I'd like to update a column called RowNumInFlow, which is simply the sequential number of the IP packet within its flow. The code below works fine, but it is slow.

        declare @FlowID int
        declare @LastRowNumInFlow int
        declare @counter1 int
        set @counter1 = 0

        while (@counter1 < 1)
        BEGIN
            set @counter1 = @counter1 + 1

            -- 1)
            select top 1 @FlowID = t.FlowID
            from Traffic t
            where t.RowNumInFlow is null

            if (@FlowID is null) break

            -- 2)
            set @LastRowNumInFlow = null
            select top 1 @LastRowNumInFlow = RowNumInFlow
            from Traffic
            where FlowID = @FlowID and RowNumInFlow is not null
            order by ID desc

            if @LastRowNumInFlow is null
                set @LastRowNumInFlow = 1
            else
                set @LastRowNumInFlow = @LastRowNumInFlow + 1

            update Traffic
            set RowNumInFlow = @LastRowNumInFlow
            where ID = (select top 1 ID from Traffic
                        where FlowID = @FlowID and RowNumInFlow is null)
        END

    Example table values after the query has run:

        ID      FlowID  RowNumInFlow
        448923  44      1
        448924  44      2
        448988  44      3
        448989  44      4
        448990  44      5
        448991  44      6
        448992  44      7
        448993  44      8
        448995  44      9
        448996  44      10
        449065  44      11
        449063  45      1
        449170  45      2
        449171  45      3
        449172  45      4
        449187  45      5
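
    A set-based sketch of the same numbering, assuming SQL Server 2005 or later (where ROW_NUMBER() is available). Note it renumbers every row in one statement, including rows already filled in:

        WITH NumberedTraffic AS (
            SELECT RowNumInFlow,
                   ROW_NUMBER() OVER (PARTITION BY FlowID ORDER BY ID) AS rn
            FROM Traffic
        )
        UPDATE NumberedTraffic
        SET RowNumInFlow = rn;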

    Read the article

  • MS SQL 2005 - Understanding output of DBCC SHOWCONTIG

    - by user169743
    I'm seeing some slow performance on a MS SQL 2005 database. I've been doing some research regarding MS SQL performance, but I'm having difficulty fully understanding the output of SHOWCONTIG and would be very grateful if someone could have a look and offer some suggestions to improve performance.

        TABLE level scan performed.
        - Pages Scanned................................: 19348
        - Extents Scanned..............................: 2427
        - Extent Switches..............................: 3829
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 63.16% [2419:3830]
        - Logical Scan Fragmentation ..................: 8.40%
        - Extent Scan Fragmentation ...................: 35.15%
        - Avg. Bytes Free per Page.....................: 938.1
        - Avg. Page Density (full).....................: 88.41%
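
    For what it's worth, a Scan Density of 63% together with 35% Extent Scan Fragmentation usually points at index fragmentation; the standard first step on SQL Server 2005 is to rebuild the table's indexes. A sketch with an illustrative table name:

        -- Rebuild every index on the table:
        ALTER INDEX ALL ON dbo.MyTable REBUILD;
        -- Older syntax, still available in SQL Server 2005:
        DBCC DBREINDEX ('dbo.MyTable');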

    Read the article

  • Can we view objects in the JVM memory?

    - by Sebastien Lorber
    Hey, at work we found that on some instances (particularly the slow ones) we have different behaviour, acquired at reboot. We guess a cache is not initialized correctly, or maybe there is a concurrency problem... Anyway, it's not reproducible in any environment other than production, and we don't actually have loggers to activate... it's an old component... Thus I'd like to know if there are tools that can help us see the different objects present in the JVM memory, in order to check the content of the cache. Thank you!
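
    A sketch of one way to do this with the stock JDK tools (jmap and jhat ship with Sun JDK 6; the PID and file name are illustrative): dump the heap, then browse every live object, including cache contents, in a web UI.

        jmap -dump:format=b,file=heap.bin 12345
        jhat heap.bin    # then browse the objects at http://localhost:7000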

    Read the article

  • How to submit using a link with my current code (PHP + jQuery + AJAX)

    - by ruslyrossi
    My current code:

        <script type="text/javascript">
        $(document).ready(function() {
            $('#form').ajaxForm({
                target: '#preview',
                success: function() {
                    $('#success_box').addClass('success');
                    setTimeout(function() {
                        $('#success_box').fadeOut('slow');
                    }, 3100); // <-- time in milliseconds
                }
            });
        });
        </script>

        <form action="controller/product_edit_update.php" method="post" id="form" name="form">
            bla..bla..
            <input type="submit" value="Save" />
        </form>

    Now I want to submit the form using a link as well:

        <a href="" id="submit_with_link">Save</a>
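
    A minimal sketch of the missing piece: a click handler that stops the empty href from navigating and triggers the form's submit event, which the ajaxForm plugin intercepts:

        $('#submit_with_link').click(function(e) {
            e.preventDefault();  // keep the empty href from reloading the page
            $('#form').submit(); // ajaxForm catches this and posts via AJAX
        });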

    Read the article

  • Purpose of IF, ELSE, FOR macros?

    - by psihodelia
    I have the source code of a library that uses a lot of strange IF, ELSE, FOR, etc. macros for all the common C keywords instead of the usual if, else, for, while keywords. These macros are defined like this:

        #define IF( a) if( increment_if(), a)

    where the increment_if() function is defined as:

        static __inline void increment_if( void)
        {
            // If the "IF" operator comes just after an "ELSE", its counter
            // must not be incremented.
            ... // implementation
        }

    I don't really understand what the purpose of such macros is. The library is for a real-time application, and I suppose using such macros must slow the application down.
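
    Wrappers like these typically exist to count how often each construct executes, e.g. for coverage or profiling instrumentation. A sketch of the mechanism with an illustrative counter; the comma operator runs increment_if() for its side effect and then uses 'a' as the actual condition:

        static unsigned long if_count;

        static __inline void increment_if(void)
        {
            ++if_count;                        /* side effect: bump the counter */
        }

        #define IF( a) if( increment_if(), a)  /* count first, then test a */

        /* IF( x > 0) { ... }  expands to:  if( increment_if(), x > 0) { ... } */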

    Read the article

  • Webservice for serial port devices

    - by camx
    Hi, I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each controlled separately) connected to a serial port. The problem is that I don't know how to pass back the information that a device has returned the requested data. For example, I send a move command to the motion device (which is very slow and can take a minute or more). Can I just set a big timeout on the client side (and server side) and return, for example, true/false when the operation is completed, or is this a bad idea? Is SOAP with big timeouts OK? The other question is whether Mono on Linux (Ubuntu 9.10, Mono 2.4) is stable enough for hosting a web service, or should I choose Java or some other language? I'm open to recommendations. Thanks for your help!
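
    One alternative to very long SOAP timeouts is a polling design: the service starts the slow move and returns a job ID immediately, and the client polls for completion. A sketch as a C# ASMX service (all names are illustrative and the serial-port work is elided):

        using System;
        using System.Collections.Generic;
        using System.Threading;
        using System.Web.Services;

        public class MotionService : WebService
        {
            static readonly Dictionary<string, bool> jobs =
                new Dictionary<string, bool>();

            [WebMethod]
            public string StartMove(int position)
            {
                string jobId = Guid.NewGuid().ToString();
                lock (jobs) { jobs[jobId] = false; }
                new Thread(() =>
                {
                    // ... drive the serial-port device here (may take a minute) ...
                    lock (jobs) { jobs[jobId] = true; }
                }).Start();
                return jobId; // the client polls IsMoveComplete with this ID
            }

            [WebMethod]
            public bool IsMoveComplete(string jobId)
            {
                lock (jobs) { return jobs.ContainsKey(jobId) && jobs[jobId]; }
            }
        }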

    Read the article

  • Replacing keywords in text with PHP & MySQL

    - by intacto
    Hello, I have a news site containing an archive with more than 1 million news items. I created a word definitions database with about 3000 entries, consisting of word-definition pairs. What I want to do is add a definition next to every occurrence of these words in the news text. I can't make the change statically, since I may add a new keyword every day, so it has to happen in real time or be cached. The question is: str_replace or preg_replace would be very slow for searching 3000 keywords in a text and replacing them. Are there any fast alternatives?
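
    One fast alternative is strtr() with a replacement map, which makes a single pass over the text instead of thousands of separate replacements. A minimal sketch; the map entries are illustrative and would really come from the definitions table (and could be cached):

        <?php
        $definitions = array(
            'inflation' => 'inflation (a general rise in prices)',
            'tariff'    => 'tariff (a tax on imports)',
        );

        $newsText  = 'The tariff debate fueled fears of inflation.';
        $annotated = strtr($newsText, $definitions); // one pass, longest match first
        echo $annotated;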

    Read the article

  • Using pow() for large numbers

    - by g4ur4v
    I am trying to solve a problem, part of which requires me to calculate (2^n) % 1000000007, where n <= 10^9. But my following code gives me the output "0" even for input like n = 99. Is there any way other than a loop that multiplies the output by 2 and takes the modulo each time (which is not what I am looking for, as it would be very slow for large n)?

        #include <stdio.h>
        #include <math.h>
        #include <iostream>
        using namespace std;

        int main()
        {
            unsigned long long gaps, total;
            while (1)
            {
                cin >> gaps;
                total = (unsigned long long) powf(2, gaps) % 1000000007;
                cout << total << endl;
            }
        }
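
    The standard alternative is binary (square-and-multiply) modular exponentiation: O(log n) multiplications, all in integer arithmetic, so there is no floating-point precision loss as with powf(). A minimal sketch:

        #include <iostream>

        typedef unsigned long long ull;

        // Computes (base^exp) % mod; the products fit in 64 bits here
        // because mod < 2^30.
        ull mod_pow(ull base, ull exp, ull mod)
        {
            ull result = 1;
            base %= mod;
            while (exp > 0) {
                if (exp & 1)                    // this bit is set: multiply in
                    result = (result * base) % mod;
                base = (base * base) % mod;     // square for the next bit
                exp >>= 1;
            }
            return result;
        }

        int main()
        {
            std::cout << mod_pow(2, 99, 1000000007ULL) << std::endl;
            return 0;
        }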

    Read the article

  • How should I capture clickstream data?

    - by editor
    I'd like to start using clickstream analysis to improve a dynamic site's user experience. I'd like to rule out two options: parameterizing URLs (index.php?src=http://www.example.com) and immediate database logging. The former makes pretty ugly URLs and isn't great for SEO, and the latter might slow down page rendering when there are lots of concurrent users. Assuming these aren't viable options, I think I'm left with doing an asynchronous POST to a server-side script that runs a database query and returns a 204 (No Content) response. Is this the best option for capturing clickstream data?
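
    A minimal sketch of that beacon in plain JavaScript (the endpoint name is illustrative; for brevity the handler only covers clicks landing directly on <a> elements):

        document.addEventListener('click', function (event) {
            var target = event.target;
            if (!target || target.tagName !== 'A') return;
            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/log-click.php', true); // async: doesn't block navigation
            xhr.setRequestHeader('Content-Type',
                                 'application/x-www-form-urlencoded');
            xhr.send('href=' + encodeURIComponent(target.href));
        });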

    Read the article

  • How to protect a Win32 DLL imported into a .NET application from memory issues

    - by Eric
    I have a C# application that needs to use a legacy Win32 DLL. The DLL is almost its own app: it has dialogs, operations with hardware, etc. When this DLL is imported and used, a couple of problems occur:

    1. Dragging a dialog (not a Windows system dialog, but one created by the DLL) across the managed app causes the UI not to repaint; further, it generates out-of-memory exceptions from various UI controls.
    2. The performance is incredibly slow.
    3. There seems to be no way to unload the DLL, so the memory never gets cleaned up. When we close our managed app, we get another memory exception.

    At the moment we import each method call as such:

        [DllImport("dllname.dll", EntryPoint = "MethodName", SetLastError = true,
            CharSet = CharSet.Auto, ExactSpelling = true,
            CallingConvention = CallingConvention.StdCall)]
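
    On point 3, P/Invoke via [DllImport] never releases the library; explicit LoadLibrary/GetProcAddress/FreeLibrary at least makes unloading possible. A sketch (the exported function's signature is an illustrative assumption):

        using System;
        using System.Runtime.InteropServices;

        static class NativeLoader
        {
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            static extern IntPtr LoadLibrary(string fileName);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr GetProcAddress(IntPtr module, string procName);

            [DllImport("kernel32.dll", SetLastError = true)]
            [return: MarshalAs(UnmanagedType.Bool)]
            static extern bool FreeLibrary(IntPtr module);

            delegate int MethodNameDelegate(); // illustrative signature

            public static int CallOnce()
            {
                IntPtr module = LoadLibrary("dllname.dll");
                if (module == IntPtr.Zero)
                    throw new InvalidOperationException("LoadLibrary failed");
                try
                {
                    IntPtr proc = GetProcAddress(module, "MethodName");
                    var call = (MethodNameDelegate)Marshal.GetDelegateForFunctionPointer(
                        proc, typeof(MethodNameDelegate));
                    return call();
                }
                finally
                {
                    FreeLibrary(module); // the DLL can actually unload here
                }
            }
        }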

    Read the article

  • WCF client hangs on response

    - by JohnIdol
    I have a WCF client (running on Win7) pointing to a WebSphere service. All is good from a test harness (a little test fixture outside my web project), but when my calls to the service originate from my web project, one of the calls is extremely slow to deserialize (it takes up to 10 times longer), and not just the first time. I can see from Fiddler that the response comes back quickly, but then the WCF client hangs on the response itself for more than a minute before the next line of code is hit by the debugger, almost as if the client were having trouble deserializing. This happens only if the response contains a given pdf string, base64 encoded and chunked. If, for example, the service raises a fault (so this pdf string is not there), the response is deserialized immediately. Again, if I send the exact same envelope through SoapUI or from outside the web project, all is good. I am at a loss: what should I be looking for, and is there some config setting that might do the trick? Any help appreciated!
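
    One config-side thing worth checking (a sketch, not a diagnosis): the client-side binding quotas that govern large strings. The binding name and the 10 MB values below are illustrative:

        <basicHttpBinding>
          <binding name="LargePayloadBinding"
                   maxReceivedMessageSize="10485760">
            <readerQuotas maxStringContentLength="10485760"
                          maxArrayLength="10485760" />
          </binding>
        </basicHttpBinding>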

    Read the article

  • Adding more OR searches with CONTAINS Brings Query to a Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan (guess I need more reputation before I can post the image).
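
    A sketch of the UNION rewrite described above, keeping the original join in each branch so the optimizer can drive each full-text search independently (note UNION also removes duplicates, unlike the original SELECT):

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE CONTAINS(b.*, '"*fa*"');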

    Read the article

  • Sorting 1000-2000 elements with many cache misses

    - by Soylent Graham
    I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted, and obviously I want to do this as quickly as possible. They are sorted by a member and not allocated contiguously, so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access, the inner loop of my quicksort is slow. I'm doing tests and trying things now (and seeing what the actual bottleneck is), but can anyone recommend a good alternative to speed this up? Should I do an insertion sort instead of quicksorting on-demand, or should I try to change my model to make the elements contiguous and reduce cache misses? Or is there a sort algorithm I've not come across which is good for data that is going to cache miss?
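
    One standard alternative is the key/pointer trick: copy each element's sort key next to its pointer into a contiguous scratch array, sort that, and write the pointers back. Each key then costs one cache miss up front instead of one per comparison. A sketch with an illustrative Object type:

        #include <algorithm>
        #include <vector>

        struct Object { int sortKey; /* ... */ };

        struct KeyedPtr
        {
            int     key;
            Object* ptr;
            bool operator<(const KeyedPtr& rhs) const { return key < rhs.key; }
        };

        void sortByKey(std::vector<Object*>& objects)
        {
            std::vector<KeyedPtr> keyed(objects.size());
            for (std::size_t i = 0; i < objects.size(); ++i) {
                keyed[i].key = objects[i]->sortKey; // one miss per element
                keyed[i].ptr = objects[i];
            }
            std::sort(keyed.begin(), keyed.end()); // comparisons stay in cache
            for (std::size_t i = 0; i < objects.size(); ++i)
                objects[i] = keyed[i].ptr;
        }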

    Read the article

  • g++: Use ZIP files as input

    - by Notinlist
    We have the Boost library on our side. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole boost directory when we change versions. Currently we keep the Boost sources in our SVN, file by file, which makes checkout operations very slow, especially on Windows. It would be nice if there were a notation / plugin to address C++ files inside ZIP files, something like:

        // @ZIPFS ASSIGN 'boost' 'boost.zip/boost'
        #include <boost/smart_ptr/shared_ptr.hpp>

    Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?
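
    As far as I know g++ has no such hook, but the same effect is possible at the filesystem level: mount the archive as a directory and point the include path at it. A sketch assuming the fuse-zip tool and an existing mount point:

        fuse-zip boost.zip /mnt/boostzip     # expose the archive as a directory
        g++ -I/mnt/boostzip -c myfile.cpp    # resolve Boost headers from it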

    Read the article

  • JS Framework that doesn't use CSS selectors?

    - by RoToRa
    A thing I noticed about most JavaScript frameworks is that the most common way to find/access DOM elements is to use CSS selectors. However, this usually requires the framework to include a CSS selector parser, because it needs to support selectors the browser doesn't support natively, foremost the framework's own proprietary extensions. I would think that these parsers are large and slow. Wouldn't it be more efficient to have something that doesn't require a parser, such as chained method calls? Something like:

        id("example").children().class("test").hasAttribute("href")

    instead of:

        $("#example > .test[href]")

    Are there any frameworks around that do something like this? And how do they compare with jQuery and friends in regard to performance and size? EDIT: You can consider this a theoretical discussion topic. I don't plan to use anything other than jQuery in any practical project in the near future. I was just wondering why there aren't any other, possibly better approaches.
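
    For illustration, the chained style needs only a thin wrapper over native DOM calls and no parser at all. A sketch (all names are illustrative; the class filter is named withClass here because "class" is awkward as a method name in older JavaScript engines):

        function Wrap(nodes) { this.nodes = nodes; }

        function id(elementId) {
            var el = document.getElementById(elementId);
            return new Wrap(el ? [el] : []);
        }

        Wrap.prototype.children = function () {
            var out = [];
            for (var i = 0; i < this.nodes.length; i++)
                for (var c = this.nodes[i].firstElementChild; c; c = c.nextElementSibling)
                    out.push(c);
            return new Wrap(out);
        };

        Wrap.prototype.withClass = function (name) {
            var out = [];
            for (var i = 0; i < this.nodes.length; i++)
                if ((' ' + this.nodes[i].className + ' ').indexOf(' ' + name + ' ') !== -1)
                    out.push(this.nodes[i]);
            return new Wrap(out);
        };

        Wrap.prototype.hasAttribute = function (name) {
            var out = [];
            for (var i = 0; i < this.nodes.length; i++)
                if (this.nodes[i].hasAttribute(name)) out.push(this.nodes[i]);
            return new Wrap(out);
        };

        // Usage: id("example").children().withClass("test").hasAttribute("href")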

    Read the article

  • PHP: Simulate multiple MySQL connections to same database

    - by Varun
    An INSERT query is constantly getting logged in my MySQL slow query log. I want to see how much time the INSERT query takes with 100 simultaneous insert operations (on the same table). So I need to simulate the following: 500 different simultaneous connections from PHP to the same database on a MySQL server, all of which are inserting a row (simultaneously) into the same table. Do I need to use a load testing tool, or can I simply write a PHP script to do this? Any thoughts?
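
    No PHP script is strictly needed: mysqlslap, which ships with MySQL 5.1 and later, simulates concurrent clients directly. A sketch (the schema, query, and counts are illustrative):

        mysqlslap --user=root --password \
          --create-schema=testdb \
          --query="INSERT INTO mytable (col) VALUES ('x')" \
          --concurrency=500 --number-of-queries=5000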

    Read the article

  • Search in big text log files

    - by 0xFF
    Hi, let's say you have a game server which creates text log files of gamers' actions, and from time to time you need to look something up in those log files (like investigating a scam or a lost item). Just for example, you have 100 files, and each file has a size between 20MB and 50MB. How would you search them quickly? What I have already tried is to create several threads, where each individual thread maps its own file to memory (say memory should not be a problem if it does not exceed 500MB of RAM) and performs the search there. The result was something around 1 second per file:

        File:a26.log - read in: 0.891, lines: 625282, matches: 78848

    Is there a better way to do that? Because it seems kind of slow to me. Thanks. (Java was used for this case.)
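
    A sketch of the mapped-file scan in Java NIO, with naive byte matching for brevity (a Boyer-Moore-style skip loop, or restricting matches to within lines, would be the next refinement):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class LogScanner {
            // Counts occurrences of 'needle' in the memory-mapped file.
            public static int countMatches(String path, byte[] needle) throws Exception {
                RandomAccessFile raf = new RandomAccessFile(path, "r");
                try {
                    FileChannel ch = raf.getChannel();
                    MappedByteBuffer buf =
                        ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                    int matches = 0;
                    for (int i = 0; i <= buf.limit() - needle.length; i++) {
                        int j = 0;
                        while (j < needle.length && buf.get(i + j) == needle[j]) j++;
                        if (j == needle.length) matches++;
                    }
                    return matches;
                } finally {
                    raf.close();
                }
            }
        }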

    Read the article

  • ASP.NET page flickers

    - by user464111
    I have an ASP.NET WebForm with an image as the page background, plus a menu bar. When I click an icon, it uses Response.Redirect() to redirect to a new page, but when the new page loads, the background image seems slow: I can see the background as white first, and then a split second later the background image loads. This creates an illusion of flickering whenever it redirects to a new page. How do I solve this problem? When I tested using the Visual Studio web server, it didn't flicker. I am using a background image in the master page, hosted on the internet.
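
    One common mitigation (a sketch, assuming IIS 7 integrated mode; the max-age value is illustrative) is to let the browser cache the background image so it is not re-fetched on every redirect, via web.config:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge"
                         cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>

    Setting the page's CSS background-color to match the image's dominant color can also hide the white flash while the image loads.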

    Read the article

  • Using UIImageViews for 'pages' in an iPhone/iPad storybook app?

    - by outtoplayinc
    I'm new to iPhone programming, and, well, what seems obvious to me may seem silly to a seasoned coder. I did a few 'switching views' tutorials on YouTube, and basically they seem to work nicely for adding pages to a storybook-type app: you add a UIViewController and an associated view for each page. My question is whether this would become insanely slow, or a memory hog, if I continued this method for, say, 35+ pages. Each page would also have a sound file associated with it that plays narration when the page loads and stops when we leave. Basically, think of a PowerPoint-type app with sound, possibly animated image elements, and next & back buttons. I'm probably thinking of this very simplistically, but that's where my experience is at for the moment. Any insight or tips as to better or more efficient ways to proceed would be greatly appreciated.
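
    Thirty-five resident view controllers (each holding a view and an audio player) would indeed be a memory hog; the usual pattern is to build each page on demand and release it on exit. A pre-ARC sketch, with an illustrative StoryPageViewController class and a retained currentPage property:

        - (void)showPage:(NSUInteger)index {
            [self.currentPage.view removeFromSuperview];
            self.currentPage = nil; // releases the old page (and its audio)

            StoryPageViewController *page =
                [[StoryPageViewController alloc] initWithPageNumber:index];
            [self.view addSubview:page.view];
            self.currentPage = page;
            [page release]; // the property retains it
        }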

    Read the article
