Search Results

Search found 20275 results on 811 pages for 'general performance'.

Page 196/811 | < Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >

  • .NET Data Provider for SqlServer

    - by DMcKenna
    Has anybody managed to get the ".NET Data Provider for SqlServer" performance counters to actually work within perfmon.exe? I have a .NET app that uses NHibernate to interact with a SQL Server 2005 database. All I want to do is view the NumberOfActiveConnectionPools, NumberOfActiveConnections and NumberOfFreeConnections counters within perfmon.exe. Can somebody explain to me exactly how I get this to work? Regards, David

    Read the article

  • Python: faster way to read fixed-length fields from a file into a dictionary

    - by Martlark
    I have a file of names and addresses as follows (example line):

        OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT

    I want to read it into a dictionary of field name and value. Here self.field_list is a list of (start point, length, name) tuples describing the fixed fields in the file. What ways are there to speed up this method? (Python 2.6)

        def line_to_dictionary(self, file_line, rec_num):
            file_line = file_line.lower()    # Make it all lowercase
            return_rec = {}                  # Return record as a dictionary
            for (field_start, field_length, field_name) in self.field_list:
                field_data = file_line[field_start:field_start + field_length]
                if (self.strip_fields == True):   # Strip off white spaces first
                    field_data = field_data.strip()
                if (field_data != ''):            # Only add non-empty fields to dictionary
                    return_rec[field_name] = field_data
            # Set hidden fields
            # return_rec['_rec_num_'] = rec_num
            return_rec['_dataset_name_'] = self.name
            return return_rec
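
    One possible speed-up, sketched below on my own (it is not from the original post), is to precompute slice objects once per file and build each record with a single generator expression, so the per-line work is just slicing. The make_line_parser factory and its arguments are hypothetical stand-ins for the poster's self.field_list / self.strip_fields / self.name attributes:

        # Sketch only: precompute slices once, then do minimal work per line (Python 2.6 compatible).
        def make_line_parser(field_list, strip_fields=True, dataset_name='dataset'):
            # field_list is assumed to hold (start, length, name) tuples, as in the question.
            field_slices = [(name, slice(start, start + length))
                            for (start, length, name) in field_list]

            def parse(line, rec_num):
                line = line.lower()
                if strip_fields:
                    pairs = ((name, line[sl].strip()) for (name, sl) in field_slices)
                else:
                    pairs = ((name, line[sl]) for (name, sl) in field_slices)
                rec = dict((name, value) for (name, value) in pairs if value != '')
                rec['_dataset_name_'] = dataset_name
                return rec

            return parse

    Binding the parser once per file keeps the attribute lookups and the slice arithmetic out of the inner loop; profiling with cProfile would show whether slicing or dictionary construction still dominates.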

    Read the article

  • SQL Server query using a function and a view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,
            ...

    In the Data column there's this xml:

        <data>
          <RRN>...</RRN>
          <DateOfBirth>...</DateOfBirth>
          <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId
        FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. A db specialist told me that adding a function and a view and changing the query would make it much faster to search for a user with a given RRN. But instead, these are the results when I execute it with the changes from the db specialist: 2584, 2342, 2322, 2383, ... ms.

    This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100) WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING
        AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN FROM dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId
        FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with the function and the view slower?

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the test project definitely depends on it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since the test and production code share the same memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to executables?

    Read the article

  • PHP fastest method of reading server response

    - by Peter John
    Hi there, I'm having some real problems with the lag produced by using fgets to grab the server's response to some batch database calls I'm making. I'm sending through a batch of, say, 10,000 calls, and I've tracked the lag down to fgets causing the hold-up in the speed of my application, as the response for each call needs to be grabbed. I have found this thread http://bugs.php.net/bug.php?id=32806 which explains the problem quite well, but he's reading a file, not a server response, so fread could be a bit tricky, as I could get part of the next line and extra stuff which I don't want. Any help much appreciated!

    Read the article

  • What programming language is this?

    - by Richard M.
    I recently stumbled over a very odd source listing on a rather old programming-related site (I lost it somewhere in my browser history as I didn't care about it at first). I think that this is part of a simple (console-based?) snake game. I searched and searched but didn't find a language that looked somewhat like this. It seems like a mix of Python, Ruby and C++. What the hell? What programming language is the below source listing written in? Maybe you can figure it out?

        my Snake.hasProps {
            length
            parts
            xDir
            yDir
        } & hasMethods {
            init:
                length = 0
                parts[0].x,y = 5
            move:
                parts[ 0 ].x,y.!add xDir | yDir   # Move the head
                map parts(i,v):
                    parts[ i ] = parts[ i + 1 ]
                checkBiteSelf
                checkFeed
            checkBiteSelf:
                part
        }

        my SnakePart.hasProps {
            x
            y
        }

        fork SnakePart to !Feed

        my Game.hasProps {
            frameTime = 30
        } & hasMethods {
            init:
                mainloop
            mainloop:
                sys.util.sleep frameTime
                Snake.move
                Field.getInput -> Snake.xDir | Snake.yDir
                Field.reDraw with Snake & Feed & Game   # For FPS
        }

        main.isMethod {
            game.init
        }

    Read the article

  • Ideas Related to Subset Sum with 2, 3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm quite sure there have been more than enough posts explaining it. However, in order to understand it fully, I wanted to share my thoughts and get more efficient solutions from all the great people here in relation to the subset sum problem. I've searched the Internet and there are actually a lot of sources, but I'm really willing to re-implement an algorithm, or find my own, in order to understand it fully. The key thing I'm struggling with is efficiency, considering the set size will be large (I do not have a limit, just conceptually large). The two phases I'm trying to implement ideas for are finding two numbers that sum to a given integer T, then three numbers, and eventually K numbers. Some ideas I've thought about: for the two-integer part, I'm thinking of basically sorting the array, O(n log n), and for each element searching for its complement T minus the element (for T = 0 that is its negative value, e.g. if the array element is 3, searching for -3). Maybe including a hash table would be better, providing O(1) lookup of the complement? For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author himself states that it is not applicable for large numbers. So, for 2, 3 and more integers, what ideas could be applied to the subset sum problem? I'm struggling to set up a dynamic programming method that will be efficient for large inputs as well.
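
    For concreteness, here is a minimal sketch (my own illustration, not from the post) of the hash-set idea for the two-integer case and the standard sort-plus-two-pointer extension for three integers:

        # Illustration only. two_sum runs in O(n) expected time with a hash set;
        # three_sum sorts once and runs a two-pointer scan per element, O(n^2) overall.
        def two_sum(nums, target):
            seen = set()
            for x in nums:
                if target - x in seen:        # the complement was already encountered
                    return (target - x, x)
                seen.add(x)
            return None

        def three_sum(nums, target):
            nums = sorted(nums)
            n = len(nums)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = nums[i] + nums[lo] + nums[hi]
                    if s == target:
                        return (nums[i], nums[lo], nums[hi])
                    elif s < target:
                        lo += 1
                    else:
                        hi -= 1
            return None

    For the general K-subset case, the classic dynamic-programming table over achievable sums is pseudo-polynomial (roughly O(n * T)), which is why it breaks down for very large targets, as the linked post also notes.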

    Read the article

  • WCF high instance count: does anyone know of negative side effects?

    - by Alex
    Hi there! Has anyone experienced, or does anyone know of, negative side effects from having a high service instance count, like 60k, aside from the memory consumption of course? I am planning to increase the threshold for the maximum allowed instance count in our production environments. I am basically sick of severe production incidents just because "something" forgot to close a proxy properly. I plan to go to something like 60k instances, which will allow the service to survive using default session timeouts at the average call rate for our clients. Thanks, Alex

    Read the article

  • Fastest way to recursively crawl NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and re-sort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473 ms for 4500 files (ThinkPad T60p, local Samsung 320 GB 2.5" HD).

        121481 files found in 123473 milliseconds

    Is this speed normal? This is my code:

        int testPrintDir(std::string strDir, std::string strPattern="*", bool recurse=true){
            struct dirent *ent;
            DIR *dir;
            dir = opendir (strDir.c_str());
            int retVal = 0;
            if (dir != NULL) {
                while ((ent = readdir (dir)) != NULL) {
                    if (strcmp(ent->d_name, ".") !=0 && strcmp(ent->d_name, "..") !=0){
                        std::string strFullName = strDir + "\\" + std::string(ent->d_name);
                        std::string strType = "N/A";
                        bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) !=0;
                        strType = (isDir) ? "DIR" : "FILE";
                        if ((!isDir)){
                            //printf ("%s <%s>\n", strFullName.c_str(), strType.c_str()); //ent->d_name);
                            retVal++;
                        }
                        if (isDir && recurse){
                            retVal += testPrintDir(strFullName, strPattern, recurse);
                        }
                    }
                }
                closedir (dir);
                return retVal;
            } else {
                /* could not open directory */
                perror ("DIR NOT FOUND!");
                return -1;
            }
        }

    Read the article

  • Does the order of columns matter in a group by clause?

    - by Jeff Meatball Yang
    If I have two columns, one with very high cardinality and one with very low cardinality (unique # of values), does it matter in which order I group by? Here's an example:

        select dimensionName, dimensionCategory, sum(someFact)
        from SomeFact f
        join SomeDim d on f.dimensionKey = d.dimensionKey
        group by
            d.dimensionName,     -- large number of unique values
            d.dimensionCategory  -- small number of unique values

    Are there situations where it matters?

    Read the article

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project where a single producer thread writes events into a buffer, and an additional single consumer thread takes events from the buffer. My goal is to optimize this for a single machine to achieve maximum throughput. Currently, I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread and therefore each pointer is only updated by a single thread).

        #define BUF_SIZE 32768

        struct buf_t {
            volatile int writepos;
            volatile void * buffer[BUF_SIZE];
            volatile int readpos;
        };

        void produce (buf_t *b, void * e) {
            int next = (b->writepos+1) % BUF_SIZE;
            while (b->readpos == next); // queue is full. wait
            b->buffer[b->writepos] = e;
            b->writepos = next;
        }

        void * consume (buf_t *b) {
            while (b->readpos == b->writepos); // nothing to consume. wait
            int next = (b->readpos+1) % BUF_SIZE;
            void * res = b->buffer[b->readpos];
            b->readpos = next;
            return res;
        }

        buf_t *alloc () {
            buf_t *b = (buf_t *)malloc(sizeof(buf_t));
            b->writepos = 0;
            b->readpos = 0;
            return b;
        }

    However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also resulted in some speed-up. What I need is a speedup of about 400%. Do you have any ideas how I could achieve this using things like padding etc.?

    Read the article

  • Avoid having a huge collection of ids by calling a DAO.getAll()

    - by Michael Bavin
    Instead of returning a List<Long> of ids when calling PersonDao.getAll(), we wanted to avoid holding an entire collection of ids in memory. It seems that returning an org.springframework.jdbc.support.rowset.SqlRowSet and iterating over this rowset would not hold every object in memory. The only problem here is that I cannot cast each row to my entity. Is there a better way to do this?

    Read the article

  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up. SO references: "autorelease-iphone"; "Why does this create a memory leak (iPhone)?". Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?

    Read the article

  • Is opening too many DataContexts bad?

    - by ryudice
    I've been checking my application with the LINQ to SQL profiler, and I noticed that it opens a lot of DataContexts; most of them are opened by the LinqDataSource I use, since my repositories use only the instance stored in Request.Items. Is it bad to open too many DataContexts? And how can I make my LinqDataSource use the DataContext that I store in Request.Items for the duration of the request? Thanks for any help!

    Read the article

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are tens of thousands of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

        push(key, timestamp, event) - pushes event onto the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
        tail(key, timestamp) - gets all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
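
    To make the intended semantics concrete, here is a minimal in-memory sketch of the interface (my own illustration; it deliberately ignores persistence, which is the actual hard part of the question). It keeps each stream sorted and uses bisect to serve tail calls:

        # Illustration only: in-memory push/tail with near-sorted timestamps, no persistence.
        import bisect
        from collections import defaultdict

        class EventStreams(object):
            def __init__(self):
                self._streams = defaultdict(list)   # key -> sorted list of (timestamp, event)

            def push(self, key, timestamp, event):
                stream = self._streams[key]
                # Timestamps arrive almost sorted, so appending is the common case.
                if not stream or stream[-1][0] <= timestamp:
                    stream.append((timestamp, event))
                else:
                    bisect.insort(stream, (timestamp, event))
                # A persistent variant would also append the event to a per-key log here.

            def tail(self, key, timestamp):
                stream = self._streams[key]
                i = bisect.bisect_left(stream, (timestamp,))
                return [event for (_, event) in stream[i:]]

    Whatever storage is chosen, the same two properties matter: appends per key should be cheap, and a range scan "from timestamp t onwards" for one key should not have to touch the whole stream.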

    Read the article

  • How to decide what hardware to deploy a web application on

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide which hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you translate parameters such as concurrent users, simultaneous connections and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...

    Read the article

  • Any faster alternative?

    - by kaushik
    I have to read a file starting from a particular line number, and I know the line number, say "n". I have been thinking of two choices:

        # choice 1
        for i in range(n):
            fname.readline()
        k = fname.readline()
        print k

        # choice 2
        i = 0
        for line in fname:
            dictionary[i] = line
            i = i + 1

    But I want to know a faster alternative, as I might have to perform this on different files 20000 times. Are there any better alternatives? Thank you.
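
    A commonly suggested alternative (a sketch only, not from the post) is itertools.islice, which skips the first n lines inside the iterator machinery instead of in a Python-level loop:

        # Sketch: read line n (0-based) of a file using itertools.islice.
        from itertools import islice

        def read_line_n(path, n):
            with open(path) as f:
                for line in islice(f, n, n + 1):   # yields only line n
                    return line
            return None    # the file has fewer than n + 1 lines

        print read_line_n('example.txt', 5)    # 'example.txt' is a hypothetical file name

    If the same files are read repeatedly, it may also be worth scanning each file once to record the byte offset of every line start, so later reads become a single seek() instead of a scan.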

    Read the article

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

        - The user clicks a button
        - The server retrieves a list of images in the form of a data table
        - Each row in the table contains 8 data cells that in turn each contain one hyperlink
        - Each request by the user can contain up to 50 rows (I can change this number if need be)
        - That means the table contains upwards of 800 individual DOM elements

    My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4 to 6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with large numbers of DOM elements that need to be removed or replaced. I hope my explanation helps.

    Read the article

  • Monetary Escrow Recommendation

    - by buzzappsoftware
    We are a software development company. We need to establish a monetary escrow for one of our clients. The idea is that they pay for the whole fixed-bid project up front, but the money is not released to us until agreed-upon milestones have been met. This is very similar to the system Elance uses. From our side, we need to protect ourselves in the event that we complete a client's project and, by the time that happens, they cannot pay for one reason or another. Thank you for your time. All help appreciated.

    Read the article
