Search Results

Search found 29860 results on 1195 pages for 'write speed'.


  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:
    1. Compute the index of the function pointer location in the vtbl.
    2. Obtain the pointer to the vtbl.
    3. Dereference the pointer to obtain the beginning of the array of function pointers.
    4. Offset (in pointer scale) the beginning of the array by the index obtained in step 1.
    5. Issue a call instruction.
    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can anything other than a static function call be faster?
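
    For illustration, a minimal C# sketch of the five steps above, with an array of delegates standing in for the vtbl (purely an emulation; this is not how a real C++ compiler or the CLR lays anything out):

        using System;

        class VtblEmulation
        {
            static void MethodA() { Console.WriteLine("A"); }
            static void MethodB() { Console.WriteLine("B"); }

            // A stand-in "vtbl": an array of function pointers (delegates here).
            static readonly Action[] Vtbl = { MethodA, MethodB };

            static void Main()
            {
                int index = 1;                // step 1: compute the slot index
                Action[] table = Vtbl;        // steps 2-3: fetch and dereference the vtbl pointer
                Action target = table[index]; // step 4: offset into the array
                target();                     // step 5: issue the call
            }
        }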

    Read the article

  • UART speed possibly wrong

    - by Mike
    My brain is fried, so I thought I would pass this one to the community. When sending 1 character to my embedded system, it consistently thinks it receives 2 characters. The first received character seems to map to the transmitted character (in some unknown way) and the second received character is always 0xFF. Here is what I observed (the second byte, always FF, is left out):

        Tx char (hex) | Rx char (hex)
        --------------+--------------
              31      |      9D
              32      |      9B
              33      |      99
              61      |      3D
              62      |      3B
              63      |      39
              64      |      37
              65      |      35
              41      |      7D
              42      |      7B
              43      |      79

    I have checked my clocks and they seem to be OK. The only difference between this non-working version and the previous version is that I am now using an RS-485 chip. I have traced the signal all the way up to the MCU and it looks fine (confirmed the bit value on the RX pin).

    Read the article

  • Speed: XmlTextReader vs LinqToXml

    - by Michel
    Hi, I'm about to read some XML (who isn't :-)). This time, however, it's a lot of data: about 30,000 records with 5 properties, all in one file. Until now I've always read that the XmlTextReader is the fastest way to read XML data, but now there is also the (nice syntax of) LINQ to XML. Does anybody know of any performance issues with LINQ to XML, or whether there aren't any? Michel
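
    For context, a minimal C# sketch of the two approaches (the "record" element name is a hypothetical stand-in for the actual schema): XmlReader streams the file forward-only with roughly constant memory, while LINQ to XML loads the whole document into memory before querying.

        using System.Linq;
        using System.Xml;
        using System.Xml.Linq;

        static class XmlSpeedSketch
        {
            // Streaming: forward-only cursor, never holds the whole file.
            public static int CountStreaming(string path)
            {
                int count = 0;
                using (XmlReader reader = XmlReader.Create(path))
                    while (reader.ReadToFollowing("record"))
                        count++;
                return count;
            }

            // LINQ to XML: builds the full in-memory tree first.
            public static int CountLinq(string path)
            {
                return XDocument.Load(path).Descendants("record").Count();
            }
        }

    For 30,000 small records either is likely fast enough; the streaming reader mainly wins on memory use and very large files.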

    Read the article

  • Improve speed of own debug visualizer for Delphi 2010

    - by netcodecz
    I wrote a Delphi debug visualizer for TDataSet to display the values of the current row, source + screenshot: http://delphi.netcode.cz/text/tdataset-debug-visualizer.aspx . It works, but is very slow. I did some optimization (how to get the field names), but it still takes 10 seconds to show only 20 fields - very bad. The main problem seems to be the slow IOTAThread90.Evaluate used by the main code shown below; this procedure costs most of the time, and the line marked ** takes about 80% of it. FExpression is the name of the TDataSet in code.

        procedure TDataSetViewerFrame.mFillData;
        var
          iCount: Integer;
          I: Integer;
        //  sw: TStopwatch;
          s: string;
        begin
        //  sw := TStopwatch.StartNew;
          iCount := StrToIntDef(Evaluate(FExpression + '.Fields.Count'), 0);
          for I := 0 to iCount - 1 do
          begin
            s := s + Format('%s.Fields[%d].FieldName+'',''+', [FExpression, I]);
        //    FFields.Add(Evaluate(Format('%s.Fields[%d].FieldName', [FExpression, I])));
            FValues.Add(Evaluate(Format('%s.Fields[%d].Value', [FExpression, I])));  //** ~80% of the time
          end;
          if s <> '' then
            Delete(s, Length(s) - 4, 5);
          s := Evaluate(s);
          s := Copy(s, 2, Length(s) - 2);
          FFields.CommaText := s;
        {  sw.Stop;
          s := sw.Elapsed;
          Application.MessageBox(PChar(s), ''); }
        end;

    Now I have no idea how to improve performance.

    Read the article

  • Speed Difference between native OLE DB and ADO.NET

    - by weijiajun
    I'm looking for suggestions as well as any benchmarks or observations people have. We are looking to rewrite our data access layer and are trying to decide between native C++ OLE DB and ADO.NET for connecting to databases. Currently we are specifically targeting Oracle, which means we would use the Oracle OLE DB provider and ODP.NET. Requirements:
    1. All applications will be in managed code, so using native C++ OLE DB would require C++/CLI to work (no P/Invoke - way too slow).
    2. The application must work with multiple databases in the future; currently we are just targeting Oracle.
    Question: Would it be more performant to use ADO.NET, or native C++ OLE DB wrapped in a managed C++ interface for managed code to access? Any ideas, help, or places to look on the web would be greatly appreciated.

    Read the article

  • R: ESS shell.exec speed

    - by Musa
    I am using ESS on Windows XP. I have noticed that shell.exec is much slower with ESS than with RGui (the problem occurs when I try help(ls), for example; the help is displayed much faster in RGui). I tracked it down and it is due to shell.exec. Is there any reason for this? How can I fix it? My default browser is Firefox.

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs:

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    and

        long startTime = System.currentTimeMillis();
        for (long i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT compiler optimizes the int loop to death, and for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.

    Read the article

  • How to speed up loading the splash screen

    - by AngryHacker
    I am optimizing the startup of a WinForms app. One issue I identified is the loading of the splash screen form: it takes about half a second to a second. I know that multi-threading is a no-no for UI pieces; however, seeing as the splash screen is a fairly autonomous piece of the application, is it possible to mitigate its performance hit by throwing it on some other thread (perhaps the way Chrome does it), so that the important pieces of the application can get going?
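
    A hedged sketch of one common mitigation (not necessarily what Chrome does): give the splash form its own UI thread with its own message pump, so startup work on the main thread proceeds in parallel. SplashForm and MainForm are hypothetical names.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Second STA thread runs the splash screen's own message pump.
                var splashThread = new Thread(() => Application.Run(new SplashForm()));
                splashThread.SetApartmentState(ApartmentState.STA);
                splashThread.IsBackground = true; // exits with the process
                splashThread.Start();

                // Expensive startup work happens here, in parallel with the splash.
                Application.Run(new MainForm());
            }
        }

    One caveat: closing the splash from the main thread requires marshaling (e.g. Control.BeginInvoke), since the form lives on the other thread.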

    Read the article

  • Would it be possible to speed up the Android Emulator by removing unnecessary apps?

    - by Stan
    I am using Android SDK 1.6 and developing some simple apps. It seems that every time the Android Emulator boots, it loads every default app - messaging, music, browser, etc. I guess this makes the booting process slow (it takes a 1-minute overhead for me to test the code every time). Would it be possible to take these apps out and just have my apps on the emulator? My goal is a faster boot-up time for the emulator. Thanks.

    Read the article

  • How to speed up Eclipse startup?

    - by Ivan
    I've installed the Eclipse Modelling Framework (eclipse-modeling-galileo-SR1-incubation-linux-gtk.tar.gz) by extracting the 'features' and 'plugins' folders of the package to '~/.eclipse/org.eclipse.platform_3.5.0_1543616141'. Now the Eclipse splash screen is shown for many minutes (the whole system, incl. BIOS, Linux and KDE, takes much less time to start). I don't need the Eclipse Modelling Framework to be loaded at startup; I only need it when I start a modelling project. How do I set up Eclipse to start faster and not load all the plugins at startup time? My system runs Arch Linux 2010.05, OpenJDK 6.0, Eclipse 3.5.2.

    Read the article

  • MINA: Performing synchronous write requests / read responses

    - by Matt Huggins
    I'm attempting to perform a synchronous write/read in a demux-based client application with MINA 2.0 RC1, but it seems to get stuck. Here is my code:

        public boolean login(final String username, final String password) {
            // block inbound messages
            session.getConfig().setUseReadOperation(true);

            // send the login request
            final LoginRequest loginRequest = new LoginRequest(username, password);
            final WriteFuture writeFuture = session.write(loginRequest);
            writeFuture.awaitUninterruptibly();
            if (writeFuture.getException() != null) {
                session.getConfig().setUseReadOperation(true);
                return false;
            }

            // retrieve the login response
            final ReadFuture readFuture = session.read();
            readFuture.awaitUninterruptibly();
            if (readFuture.getException() != null) {
                session.getConfig().setUseReadOperation(true);
                return false;
            }

            // stop blocking inbound messages
            session.getConfig().setUseReadOperation(false);

            // determine if the login info provided was valid
            final LoginResponse loginResponse = (LoginResponse) readFuture.getMessage();
            return loginResponse.getSuccess();
        }

    I can see on the server side that the LoginRequest object is retrieved and a LoginResponse message is sent. On the client side, the DemuxingProtocolCodecFactory receives the response, but after throwing in some logging, I can see that the client gets stuck on the call to readFuture.awaitUninterruptibly(). I can't for the life of me figure out why it is stuck here based upon my own code. I properly set the read operation to true on the session config, meaning that messages should be blocked. However, it seems as if the message no longer exists by the time I try to read response messages synchronously. Any clues as to why this won't work for me?

    Read the article

  • Trying to speed up a SQLITE UNION QUERY

    - by user142683
    I have the below SQLite code:

        SELECT x.t,
               CASE WHEN S.Status = 'A' AND M.Nomorebets = 0
                    THEN S.PriceText ELSE '-' END AS Show_Price
        FROM sb_Market M
        LEFT OUTER JOIN (
            SELECT 2010 t UNION SELECT 2020 t UNION SELECT 2030 t UNION
            SELECT 2040 t UNION SELECT 2050 t UNION SELECT 2060 t UNION
            SELECT 2070 t
        ) AS x
        LEFT OUTER JOIN sb_Selection S
            ON S.MeetingId = M.MeetingId
            AND S.EventId = M.EventId
            AND S.MarketId = M.MarketId
            AND x.t = S.team
        WHERE M.meetingid = 8051
          AND M.eventid = 3
          AND M.Name = 'Correct Score'

    With the current interface restrictions, I have to use the above code to ensure that if one selection is missing, a '-' appears. The feed would be something like the following:

        SelectionId | Name   | Team | Status | PriceText
        ------------+--------+------+--------+----------
        1           | Barney | 2010 | A      | 10
        2           | Jim    | 2020 | A      | 5
        3           | Matt   | 2030 | A      | 6
        4           | John   | 2040 | A      | 8
        5           | Paul   | 2050 | A      | 15/2
        6           | Frank  | 2060 | S      | 10/11
        7           | Tom    | 2070 | A      | 15

    Is the above SQL code the quickest and most efficient? Please advise of anything that could help. Messages with updates would be preferable.

    Read the article

  • can't write to physical drive in win 7??

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure you write in multiples of sectors, i.e. 512 bytes. This worked fine in the past, on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails! Does anyone know why this might be? Could it have something to do with opening it with shared flags? This is what I have always done before, and it worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access to it? Some other tools that used to work don't anymore either, but some do, like the WD Diagnostics' erase functionality. And after it has erased the drive, my tool then works on it too! This leads me to believe there is some "unmount" step I need to perform on the drive first, to free up permission to write to it. Any ideas?
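
    A hedged guess at what changed: starting with Windows Vista/7, raw writes to disk regions that belong to a mounted volume are blocked, so the usual workaround is to lock and dismount each volume on the target disk before writing to the physical drive. A minimal C# P/Invoke sketch of that step (the volume path \\.\E: is a placeholder; the original utility is C/C++, but the same two DeviceIoControl calls apply there):

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        static class VolumeLock
        {
            const uint GENERIC_READ = 0x80000000;
            const uint GENERIC_WRITE = 0x40000000;
            const uint FILE_SHARE_READ = 0x00000001;
            const uint FILE_SHARE_WRITE = 0x00000002;
            const uint OPEN_EXISTING = 3;
            const uint FSCTL_LOCK_VOLUME = 0x00090018;
            const uint FSCTL_DISMOUNT_VOLUME = 0x00090020;

            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
                IntPtr security, uint creation, uint flags, IntPtr template);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool DeviceIoControl(SafeFileHandle device, uint ioControlCode,
                IntPtr inBuffer, int inSize, IntPtr outBuffer, int outSize,
                out int bytesReturned, IntPtr overlapped);

            // Lock and dismount one volume on the target disk (e.g. @"\\.\E:").
            // Keep the returned handle open for the duration of the raw write.
            public static SafeFileHandle LockAndDismount(string volumePath)
            {
                SafeFileHandle h = CreateFile(volumePath, GENERIC_READ | GENERIC_WRITE,
                    FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero,
                    OPEN_EXISTING, 0, IntPtr.Zero);
                if (h.IsInvalid)
                    throw new Win32Exception(); // wraps GetLastError

                int returned;
                if (!DeviceIoControl(h, FSCTL_LOCK_VOLUME, IntPtr.Zero, 0,
                        IntPtr.Zero, 0, out returned, IntPtr.Zero) ||
                    !DeviceIoControl(h, FSCTL_DISMOUNT_VOLUME, IntPtr.Zero, 0,
                        IntPtr.Zero, 0, out returned, IntPtr.Zero))
                    throw new Win32Exception();

                return h; // dispose after writing to \\.\PhysicalDrive1 is done
            }
        }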

    Read the article

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 core, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant

    Read the article

  • Improve drawingvisual render's speed

    - by Michael Hao
    I created my own FrameworkElement, overriding VisualChildrenCount { get; } and GetVisualChild(int index) to return my own DrawingVisual collection instance, and I have overridden OnRender. I will add 20-50 DrawingVisuals to this FrameworkElement, and every DrawingVisual will have 2000 line segments. The logical values of these points lie between 0 and 60000. When I zoom in to 1:1, the FrameworkElement's Height becomes 60000 and the rendering time is 15 minutes!! How do I improve the rendering performance?

    Read the article

  • Write to file using CopyHere without using WScript.Sleep

    - by mlevit
    Hi guys, I've written a small VBScript that creates a .zip file and then copies the contents of a specified folder into that .zip file. I copy the files over one by one for a reason (I know I can do the whole lot at once). However, my problem is that when I try to copy them one by one without a WScript.Sleep between each loop iteration, I get a "File not found or no read permission." error; if I place a WScript.Sleep 200 after each write, it works, but not 100% of the time. I'd like to stop relying on the Sleep function, because depending on the file size the write may take longer, and therefore 200 milliseconds may not be enough, etc. As you can see from the small piece of code below, I loop through the files, and if one matches the extension, I place it into the .zip (zipFile):

        For Each file In folderToZip.Items
            For Each extension In fileExtensions
                If (InStr(file, extension)) Then
                    zipFile.CopyHere(file)
                    WScript.Sleep 200
                    Exit For
                End If
            Next
        Next

    Any suggestions on how I can stop relying on the Sleep function? Thanks

    Read the article

  • HttpModule to Write Out JavaScript Script References to the Response

    - by Chris
    On my page, in the Page_Load event, I add a collection of strings to the Context object. I have an HttpModule that fires on EndRequest and retrieves the collection of strings. What I then do is write out a script reference tag (based on the collection of strings) to the response. The problem is that the page reads the script reference but doesn't retrieve the contents of the file (I imagine because this is occurring in the EndRequest event). I can't use the BeginRequest event because I won't have access to the Context.Items collection there. I also tried registering an HttpHandler which processes the request for the script reference, but I can't access the collection of strings in Context.Items from there. Any suggestions?

    Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            Context.Items.Add("ScriptFile", "/UserControls.js");
        }

    HttpModule:

        public void OnEndRequest(Object s, EventArgs e)
        {
            HttpApplication app = s as HttpApplication;
            object script = app.Context.Items["ScriptFile"];
            app.Response.Write("<script type='text/javascript' src='" + script + "'></script>");
        }

    Read the article

  • Speed up compilation with mockito on Android

    - by pbreault
    I am currently developing an Android app in Eclipse using:
    - One project for the app
    - One project for the tests (instrumentation and POJO tests)
    In the test project, I am importing the mockito library for standard POJO testing. However, when I import the library, the compilation time skyrockets from 1 second to about 30 seconds in Eclipse. The cause seems to be that the whole library is converted each time. So basically, each time I make a modification that I want to test, I have to wait 30 seconds. The only workarounds I have found so far would be:
    - Disable "Build Automatically"
    - Create a project that includes only POJO tests and put mockito only there
    - Use another library that compiles faster (e.g. EasyMock)
    Any other suggestion?

    Read the article

  • C# Lambda Expression Speed

    - by Nathan
    I have not used many lambda expressions before, and I ran into a case where I thought I could make slick use of one. I have a custom list of ~19,000 records and I need to find out whether a record exists in the list, so instead of writing a bunch of loops or using LINQ to go through the list, I decided to try this:

        for (int i = MinX; i <= MaxX; ++i)
        {
            tempY = MinY;
            while (tempY <= MaxY)
            {
                bool exists = myList.Exists(item => item.XCoord == i && item.YCoord == tempY);
                ++tempY;
            }
        }

    Only problem is, it takes ~9-11 seconds to execute. Am I doing something wrong, or is this just a case where I shouldn't be using an expression like this? Thanks.
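
    For scale: List<T>.Exists is a linear scan, so the nested loops perform roughly (MaxX - MinX + 1) x (MaxY - MinY + 1) x 19,000 comparisons. A hedged sketch of one common alternative - indexing the coordinates once in a HashSet so each lookup is O(1); the Record type and property names are assumptions based on the snippet above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Record { public int XCoord; public int YCoord; }

        static class ExistsSketch
        {
            static void Scan(List<Record> myList, int MinX, int MaxX, int MinY, int MaxY)
            {
                // One O(n) pass to index the list; every lookup afterwards is O(1).
                var coords = new HashSet<Tuple<int, int>>(
                    myList.Select(item => Tuple.Create(item.XCoord, item.YCoord)));

                for (int i = MinX; i <= MaxX; ++i)
                    for (int tempY = MinY; tempY <= MaxY; ++tempY)
                    {
                        bool exists = coords.Contains(Tuple.Create(i, tempY));
                        // ... use 'exists' ...
                    }
            }
        }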

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys of the form X.Y where X is in (1..10) and Y is in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I spawn into 8 threads, asking for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights

    Read the article

  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
        SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        ) foo
        --
        UNION
        --
        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN (
            SELECT user_id FROM (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                        AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
        -------------------------------
         user_id | f_name   | l_name
        ---------+----------+----------
         A1      | Adam     | Acorn
         A2      | Beth     | Berry
         A3      | Calv     | Chard

        import_tbl:
        -------------------------------
         user_id | f_name   | l_name
        ---------+----------+----------
         A1      | Adam     | Acorn
         A2      | Beth     | Butcher   <- last name different

        Expected output:
        -----------------------------------------------------------------------
         user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ----------+---------+---------+----------+---------+----------
         A1       | Adam    | Acorn   | A1       | Adam    | Acorn
         A2       | Beth    | Berry   | A2       | Beth    | Butcher
         A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of conditions where the row would be:

         A2 | Beth | Berry | A2 | Beth | Butcher

    but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation question from my other question. The succession of these improvements has dropped the query from ~32000 ms down to where it's at now, ~1200 ms - quite an improvement. I suspect I can optimize by using UNION ALL in the subquery and, of course, the usual index optimizations, but I'm looking for the best SQL optimization. FYI, this particular case is for PostgreSQL.

    Read the article

  • Ways to save enums in database

    - by corgrath
    Hey guys. I am wondering what the best way to save enums into a database is. I know there are the name() and valueOf() methods to turn an enum into a String and back. But are there any other (flexible) options for storing these values? Is there a smart way to make them into unique numbers (ordinal() is not safe to use)? Any comments and suggestions would be helpful :) Update: Thanks for all the awesome and fast answers! It was as I suspected. However, a note to 'toolkit': that is one way. The problem is that I would have to add the same methods to each enum type I create. That's a lot of duplicated code and, at the moment, Java does not support any solution to this (you cannot let an enum extend another class). However, thanks for all the answers!
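
    The question is about Java, but the pitfall is easy to show in C# (the language used for the other sketches on this page): enum members get implicit sequential values, the analogue of ordinal(), so the safe options are an explicit stable number or the name. A small sketch:

        using System;

        enum Status
        {
            // Explicit, stable codes survive reordering of the members;
            // implicit sequential values (like Java's ordinal()) do not.
            Active = 1,
            Inactive = 2,
            Banned = 3,
        }

        class EnumPersistenceDemo
        {
            static void Main()
            {
                int code = (int)Status.Inactive;            // store the stable number...
                string name = Status.Inactive.ToString();   // ...or store the name
                Status byCode = (Status)code;
                Status byName = (Status)Enum.Parse(typeof(Status), name);
                Console.WriteLine("{0} {1}", byCode, byName); // Inactive Inactive
            }
        }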

    Read the article

  • Speed up SQL Server Fulltext Index through Text Duplication of Non-Indexed Columns

    - by Alex
    1) I have the text fields FirstName, LastName, and City. They are full-text indexed. 2) I also have the FK int fields AuthorId and EditorId, which are not full-text indexed. A search on FirstName = 'abc' AND AuthorId = 1 will first search the entire full-text index for 'abc' and then narrow the result set to AuthorId = 1. This is bad because it is a huge waste of resources: the full-text search is performed on many records that won't be applicable. Unfortunately, to my knowledge, this can't be turned around (narrowing by AuthorId first and then full-text-searching the matching subset) because the FTS process is separate from SQL Server. Now my proposed solution, on which I seek feedback: does it make sense to create another computed column, included in the full-text index, which identifies the author as text (e.g. AUTHORONE)? That way I could get rid of the AuthorId restriction and instead make it part of my full-text search (a search for 'abc' would become 'abc AND AUTHORONE', all executed as part of the full-text search). Is this a good idea or not? Why?

    Read the article
