Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • Entity data field validation and part data submission

    - by pradeeptp
    I have an entity class with 10 fields, and I am using the MS Validation Application Block to mark all of them as mandatory (IsRequired). I am implementing a security feature in which, during an update, not all fields in the entity class will have data. For example, some users can only view 5 fields while others can view all 10 during an update in the GUI. I have the following options: 1) Bring the data for all fields from the DB table and hide the ones not accessible to the user in the GUI. I am concerned about performance, because the GUI will pull unnecessary data every time. 2) Bring only the data (e.g. only 5 fields) that the user is permitted to view in the GUI. On submit, the Validation Block will throw an exception because all fields are marked IsRequired and data for only 5 fields is sent back to the server. I want to know if there are any other good approaches to problems like this. I am using .NET 3.5. Thanks.

    Read the article

  • Count of memory copies in *nix systems between packet at NIC and user application?

    - by Michael_73
    Hi there, this is just a general question relating to some high-performance computing I've been wondering about. A certain low-latency messaging vendor speaks in its supporting documentation about using raw sockets to transfer data directly from the network device to the user application, and in so doing about reducing messaging latency even further than its other (admittedly carefully thought-out) design decisions already do. My question is therefore to those who grok the networking stacks on Unix or Unix-like systems: how much difference are they likely to be able to realise using this method? Feel free to answer in terms of memory copies, numbers of whales rescued, or areas the size of Wales ;) Their messaging is UDP-based, as I understand it, so there's no problem with establishing TCP connections etc. Any other points of interest on this topic would be gratefully thought about! Best wishes, Mike

    Read the article

  • Can any JavaScript library perform as well as the Cut The Rope JavaScript implementation?

    - by joe
    Now that the canvas tag is starting to get hardware acceleration in many browsers, developing casual games in HTML5 is becoming more feasible. ZeptoLabs did a great job porting Cut The Rope to HTML5 for use as a Windows 8 Metro app. You can find some of the details here, but they do not get into specifics. I was wondering if anyone knows whether they used a library (such as Impact or Crafty), or whether you need to write all custom, optimized JavaScript code in order to get this type of performance. Thanks!

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics may only be updated using the ANALYZE command. If the ANALYZE command is not run often, the statistics in this table may not be accurate, and the query planner may make poor decisions which can degrade system performance. Another strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates and deletes). This approach would allow the query planner to have the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?
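    As an aside, a minimal sketch (the table name here is hypothetical) of how these statistics are refreshed and inspected in PostgreSQL, using the standard catalogs:

        -- Refresh the planner statistics for one table
        ANALYZE orders;

        -- The row/page estimates the planner reads from pg_class
        SELECT relname, reltuples, relpages
        FROM pg_class
        WHERE relname = 'orders';

        -- When the table was last analyzed, manually or by autovacuum
        SELECT relname, last_analyze, last_autoanalyze
        FROM pg_stat_user_tables
        WHERE relname = 'orders';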

    Read the article

  • Monitoring Reasoning Progress using the Pellet Reasoner

    - by Nico
    I am currently constructing an OWL ontology which, until very recently, classified rapidly using the Pellet reasoner. However, since the introduction of several new classes, the reasoning performance has slowed to a crawl. Although the reasoner completes and the ontology does not contain any unsatisfiable concepts etc., the time the reasoning takes is unacceptable. I am currently trying to track down the offending class or classes that may have led to the slowdown. Here's my question: is it possible to log the reasoning progress of Pellet? I.e., is it possible to produce some output that will document how long Pellet has spent on certain reasoning tasks, or trace how long reasoning over any given class and axiom takes? If so, does anyone have some Java code they could post up? Thanks in advance for your answers!

    Read the article

  • NSMutableSet addObject question

    - by Jacob Relkin
    I've got a class that wraps an NSMutableSet object, and I have an instance method that adds objects (using the addObject: method) to the NSMutableSet. This works well, but I'm smelling a performance hitch because inside the method I'm explicitly calling containsObject: before adding the object to the set. Three-part question: Do I need to call containsObject: before I add an object to the set? If so, which method should I actually be using, containsObject: or containsObjectIdenticalTo:? If not, which containment check gets invoked under the hood by addObject:? This is important to me because if I pass an object to containsObject: it would return true, but if I pass it to containsObjectIdenticalTo: it would return false.

    Read the article

  • Why are gettimeofday() intervals occasionally negative?

    - by Andres Jaan Tack
    I have an experimental library whose performance I'm trying to measure. To do this, I've written the following:

        struct timeval begin;
        gettimeofday(&begin, NULL);
        {
            // Experiment!
        }
        struct timeval end;
        gettimeofday(&end, NULL);

        // Print the time it took!
        std::cout << "Time: " << 100000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl;

    Occasionally, my results include negative timings, some of which are nonsensical. For instance:

        Time: 226762
        Time: 220222
        Time: 210883
        Time: -688976

    What's going on?
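    For comparison, a minimal sketch (assuming a POSIX system with clock_gettime; older glibc needs -lrt) that times the same region with a monotonic clock, which cannot jump backwards the way gettimeofday's wall-clock time can when NTP adjusts it. Note too that a second is 1,000,000 microseconds, not 100,000, which by itself skews the printed numbers:

        #include <time.h>      // clock_gettime, CLOCK_MONOTONIC (POSIX)
        #include <stdint.h>
        #include <iostream>

        int main() {
            struct timespec begin, end;
            clock_gettime(CLOCK_MONOTONIC, &begin);
            // Experiment!
            clock_gettime(CLOCK_MONOTONIC, &end);

            // 1,000,000 microseconds per second; 64-bit math avoids overflow on long runs
            const int64_t usec = 1000000LL * (end.tv_sec - begin.tv_sec)
                               + (end.tv_nsec - begin.tv_nsec) / 1000;
            std::cout << "Time: " << usec << std::endl;
            return 0;
        }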

    Read the article

  • POS desktop application using DB or local files? Using WPF

    - by Panindra
    I am planning to build a POS application for my shop. I have enough knowledge to build it using a database, and also using local files (System.IO binary files) to store and access its data. But I have no deployment experience and am confused about choosing a data-storage option. A database using an MDF file may be a good option (it may ease plenty of coding), but I don't want to have SQL Server on my desktop. As I am using WPF, my concern is that my application may get slow due to server response time and WPF design rendering. I then tried to use only local data (binary files) to store the data and retrieve it using classes and objects, but this coding is taking a lot of time, so in the middle of the process I am stuck in the dilemma of going back to a database. Please help: performance-wise, which one is better? And in the practical, professional world, which one is widely used? Please give suggestions.

    Read the article

  • Using IF in T-SQL weakens or breaks execution plan caching?

    - by AnthonyWJones
    It has been suggested to me that the use of IF statements in T-SQL batches is detrimental to performance. I'm trying to find some confirmation of this assertion. I'm using SQL Server 2005 and 2008. The assertion is that with the following batch:

        IF @parameter = 0
        BEGIN
            SELECT ... something
        END
        ELSE
        BEGIN
            SELECT ... something else
        END

    SQL Server cannot re-use the execution plan generated because the next execution may need a different branch. This implies that SQL Server will eliminate one branch entirely from the execution plan on the basis that, for the current execution, it can already determine which branch is needed. Is this really true? In addition, what happens in this case:

        IF EXISTS (SELECT ....)
        BEGIN
            SELECT ... something
        END
        ELSE
        BEGIN
            SELECT ... something else
        END

    where it's not possible to determine in advance which branch will be executed?
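    One commonly suggested workaround, sketched below with hypothetical table, column and procedure names, is to move each branch into its own stored procedure so that each branch is compiled and plan-cached independently of the other:

        CREATE PROCEDURE dbo.GetOrders_Open AS
            SELECT * FROM dbo.Orders WHERE Status = 'open';    -- hypothetical query
        GO
        CREATE PROCEDURE dbo.GetOrders_Closed AS
            SELECT * FROM dbo.Orders WHERE Status = 'closed';  -- hypothetical query
        GO
        CREATE PROCEDURE dbo.GetOrders @parameter int AS
        BEGIN
            IF @parameter = 0
                EXEC dbo.GetOrders_Open;
            ELSE
                EXEC dbo.GetOrders_Closed;
        END
        GO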

    Read the article

  • Validating Column Data Stored as CSV Against Another Table

    - by Jakkwylde
    I wanted to see what some suggested approaches would be to validate a field that is stored as a CSV against a table containing appropriate values. Although it would be desirable, it is NOT an option to split the CSV list into another related table. In the example data below I would be trying to capture the code 99 for widget D. Below is an example data representation.

        Table: Widgets
        WidgetName   WidgetCodeList
        A            1, 2, 3
        B            1
        C            2, 3
        D            99

        Table: WidgetCodes
        WidgetCode
        1
        2
        3

    An earlier approach was to query the CSV column as rows using various string manipulations and CONNECT_BY_LEVEL; however, the performance was not acceptable.
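    One possible approach, sketched here assuming Oracle 11g or later (for REGEXP_COUNT) and the table layout above, is to flag any widget whose list contains more entries than it has matches in WidgetCodes, which avoids splitting the CSV into rows:

        -- Flags widget D, whose list contains 99, a code missing from WidgetCodes.
        -- Caveat: a code duplicated within one list would also be flagged.
        SELECT w.WidgetName, w.WidgetCodeList
        FROM Widgets w
        WHERE REGEXP_COUNT(w.WidgetCodeList, ',') + 1 <>
              (SELECT COUNT(*)
                 FROM WidgetCodes c
                WHERE ',' || REPLACE(w.WidgetCodeList, ' ') || ','
                      LIKE '%,' || c.WidgetCode || ',%');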

    Read the article

  • UNIX-style RegExp replace running extremely slowly under Windows. Help?

    - by John Sullivan
    I'm trying to run a Unix-style regexp on every log file in a 1.12 GB directory, then replace the matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is murdering performance by several orders of magnitude. Find: ^(?!.*155[0-2][0-9]{4}\s.*).*$ -- NOTE: match any line NOT starting 152[0-2]NNNN, where N is a number 0-9. Replace with: ''. Is there some justifiable reason for my regexp to take this long to replace matching text, or is the program I am using (this is Windows / a program called "grepWin") most likely poorly optimized? Thanks.

    Read the article

  • Rails/mysql SUM distinct records - optimization

    - by pepernik
    Hey. How would you optimize this SQL?

        SELECT SUM(tmp.cost)
        FROM (
            SELECT DISTINCT clients.id AS client, countries.credits_cost AS cost
            FROM countries
            INNER JOIN clients ON clients.country_id = countries.id
            INNER JOIN clients_groups ON clients_groups.client_id = clients.id
            WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
            GROUP BY clients.id
        ) AS tmp;

    I'm using this example as part of my Ruby on Rails project. Note that my nested SQL (tmp) can have more than 10 million records. You can split it into more SQLs if the performance is better. Should I add any indexes to make it quicker (I have them on the IDs)?
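    For what it's worth, a sketch of one possible rewrite using the same tables and columns (and assuming clients.id is the primary key): the inner DISTINCT and GROUP BY duplicate each other, and the join against clients_groups only needs to test existence, so an EXISTS avoids materialising the multi-million-row derived table:

        SELECT SUM(countries.credits_cost)
        FROM clients
        INNER JOIN countries ON countries.id = clients.country_id
        WHERE EXISTS (
            SELECT 1
            FROM clients_groups
            WHERE clients_groups.client_id = clients.id
              AND clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
        );

    A composite index on clients_groups (group_id, client_id), alongside the existing index on clients.country_id, would be the usual candidate to support this shape of query.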

    Read the article

  • How bad is opening and closing a SQL connection several times? What is the exact effect?

    - by Eren
    For example, I need to fill lots of DataTables with SqlDataAdapter's Fill() method:

        DataAdapter1.Fill(DataTable1);
        DataAdapter2.Fill(DataTable2);
        DataAdapter3.Fill(DataTable3);
        DataAdapter4.Fill(DataTable4);
        DataAdapter5.Fill(DataTable5);
        ....

    Even though all the data adapter objects use the same SqlConnection, each Fill method will open and close the connection unless the connection state is already open before the method call. What I want to know is how unnecessarily opening and closing SqlConnections affects the performance of the application. How much does it need to scale before the bad effects show up (hundreds of thousands of concurrent users?)? In a mid-size website (50,000 users daily), is it worth bothering to find all the Fill() calls, keep them together in the code, open the connection before the first Fill() call and close it afterwards?
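    If the adapters really do share one SqlConnection, a minimal sketch (variable names hypothetical) of opening it once around the batch, so each Fill() reuses the open connection instead of opening and closing it per call:

        using (connection)              // the SqlConnection the adapters were built with
        {
            connection.Open();          // Fill() leaves an already-open connection open
            DataAdapter1.Fill(DataTable1);
            DataAdapter2.Fill(DataTable2);
            DataAdapter3.Fill(DataTable3);
            DataAdapter4.Fill(DataTable4);
            DataAdapter5.Fill(DataTable5);
        }                               // disposed here, which closes it once

    Note that with ADO.NET connection pooling enabled (the default), Close() only returns the connection to the pool, so the per-call cost is small to begin with.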

    Read the article

  • What could be adding "Pragma:no-cache" to my response Headers? (Apache, PHP)

    - by Daniel Magliola
    I have a website whose maintenance I've inherited, which is a big hairy mess. One of the things I'm doing is improving performance. Among other things, I'm adding Expires headers to images. Now, there are some images that are served through a PHP file, and I notice that they do have the Expires header, but they also get loaded every time. Looking at the response headers, I see this:

        Expires        Wed, 15 Jun 2011 18:11:55 GMT
        Cache-Control  no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma         no-cache

    which obviously explains the problem. Now, I've looked all over the code base, and it doesn't say "pragma" anywhere. .htaccess doesn't seem to have anything related either. Any ideas who could be setting those "Pragma" (and "Cache-Control") headers, and how I can avoid it? Thanks! Daniel
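    One frequent source of exactly this header trio in PHP, offered here only as a guess since the code base is unknown, is the session module: session_start() emits Expires, Cache-Control and Pragma headers according to session.cache_limiter, which defaults to nocache. A sketch of overriding that in the image-serving script:

        <?php
        // Must run before session_start(); 'public' makes PHP send cacheable
        // headers instead of its default no-cache set.
        session_cache_limiter('public');
        session_cache_expire(10080);      // minutes, i.e. one week
        session_start();

        // ...load and output the image as before...

    If the image script does not actually need the session, simply not calling session_start() there avoids these headers altogether.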

    Read the article

  • How to implement a mailing system with Rails that sends emails in the background

    - by Tam
    I want to implement a reliable mailing system with Ruby on Rails that sends emails in the background, since sending an email sometimes takes 10 seconds or more and I don't want the user to wait. Some ideas I thought of: 1) Write the email to a table in the DB and have a background process that goes over it and sends the emails (concern: potentially many reads/writes to the DB slow down my application). 2) A message-queue background process / Rake task (concern: if the server crashes, queued mails will be lost; it might also eat up a lot of memory if there are many emails). I was wondering if you know of a good solution that provides a balance between reliability and performance.

    Read the article

  • Should I use a hosted version of JQuery? Which one?

    - by ataylor
    Should I use a local copy of jQuery, or should I link to a copy provided by Google or Microsoft? I'm primarily concerned about speed. I've heard that just pulling content from other domains can have performance advantages related to how browsers limit connections per host. In particular, has anyone benchmarked the speed and latency of Google vs. Microsoft vs. local? Also, do I have to agree to any conditions or licenses to link to a third-party copy?
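    For reference, the widely used pattern (the version and local path shown are only an example) is to load from the Google CDN and fall back to a local copy if that request fails:

        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
          // If the CDN copy failed to load, window.jQuery is undefined; fall back locally.
          window.jQuery || document.write('<script src="/js/jquery-1.4.2.min.js"><\/script>');
        </script>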

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect the sort order

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
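    As a sketch of the dirty-read idea (table names are hypothetical, and the exact syntax varies by database), the sorted listing can be run at READ UNCOMMITTED so it neither blocks nor is blocked by in-flight vote updates:

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT e.id, e.title, COUNT(v.id) AS vote_count
        FROM entries e
        LEFT JOIN votes v ON v.entry_id = e.id
        GROUP BY e.id, e.title
        ORDER BY vote_count DESC;

    Another common shape for this access pattern is to keep a denormalized vote counter on the entry row itself, so the sort reads a single indexed column instead of aggregating the votes table.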

    Read the article

  • Website content hosted with Google. Good or bad?

    - by user305052
    I recently decided to host my styles.css and various scripts on Google Docs and link them into my website. I also have all my images hosted through Picasa so that they too will load much faster and consistently across users. My site has most of its traffic from Japan, Africa, and South America, so I assume there will be a performance boost for my users since my server is hosted in Hong Kong. I (in Canada) have measured my load times to be half of what they used to be. Basically it's a free CDN for my personal stuff. I'm not too sure about all of this yet, so here's my question: what are the caveats of this setup?

    Read the article

  • Should I use a global var or call the function every time? C++

    - by extintor
    I'm using:

        bool GetOS(LPTSTR pszOS)
        {
            OSVERSIONINFOEX osve;
            BOOL bOsVersionInfoEx;

            ZeroMemory(&osve, sizeof(OSVERSIONINFOEX));
            osve.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX);

            if( !(bOsVersionInfoEx = GetVersionEx((OSVERSIONINFO *) &osve)) )
                return false;

            TCHAR buf[80];
            StringCchPrintf(buf, 80, TEXT("%u.%u.%u.%u"),
                osve.dwPlatformId, osve.dwMajorVersion,
                osve.dwMinorVersion, osve.dwBuildNumber);
            StringCchCat(pszOS, BUFSIZE, buf);
            return true;
        }

    to get the Windows version, and I am planning to use pszOS every few minutes. Should I use pszOS as a global var or call GetOS() every time? What's the best option from a performance point of view?
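    Since the OS version cannot change while the process is running, a minimal sketch (wrapper name hypothetical, reusing the BUFSIZE constant from above) of computing it once and returning the cached string on later calls:

        bool GetOSCached(LPTSTR pszOS)
        {
            static TCHAR cached[BUFSIZE] = { 0 };
            static bool haveVersion = false;

            if (!haveVersion)
                haveVersion = GetOS(cached);   // fill the cache on first use
            if (!haveVersion)
                return false;

            StringCchCat(pszOS, BUFSIZE, cached);
            return true;
        }

    Either way, GetVersionEx is a cheap call, so this is more about tidiness than a measurable win; if the wrapper can be hit from multiple threads, guard the first initialisation.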

    Read the article

  • How to correct this glitch

    - by Rebol Tutorial
    I removed background: url(none); in my stylesheet because of load performance (see http://stackoverflow.com/questions/2577422/why-firebug-pretends-that-my-stylesheet-is-calling-my-xmlrpc). The problem is that it now causes a glitch in the CSS list. Any idea how to fix this? Thanks. Update: picture below. I tried to put background: none as suggested, but it didn't solve the problem:

        ul.sidebar_list li ul li ul {
            margin: 0px;
            padding: 0px !important;
            float: left;
            width: 100%;
            list-style-type: none;
            background: none;
        }

    Read the article

  • Best way to store chat messages and files

    - by Stnaire
    I would like to know what you think about storing chat messages in a database. I need to be able to bind other things to them (like files or contacts), and using a database is the best way I can see for now. The same question goes for files: because they can be bound to chat messages, I have to store them in the database too. With thousands of messages and files, I wonder about performance drops and database size. What do you think, considering I'm using PHP with MySQL/Doctrine?

    Read the article

  • Screen capture during testing

    - by Edwward
    This is an application for reviewing performance tests. Simple in concept, tricky to describe. Picture: 1) Recording interactions with a WPF program so the inputs can be played back. 2) Playing the inputs back while doing a continuous screen capture. 3) Capturing wall time as well as continuous CPU percentages during playback. 4) Repeating steps (2) and (3) lots of times. 5) Writing the relevant stuff out to files/db. 6) Reading it and putting it all in a fancy UI for easy review/analysis. The killer for me is (2). I could use some guidance on a good, possibly commercial, screen capture SDK. I would also welcome the news that my whole problem already has a solution. And of course any thoughts on the overall idea would also be great. Thanks. Ed

    Read the article

  • Google App Engine - Dealing with concurrency issues of storing an object

    - by Spines
    My User object that I want to create and store in the datastore has an email, and a username. How do I make sure when creating my User object that another User object doesn't also have either the same email or the same username? If I just do a query to see if any other users have already used the username or the email, then there could be a race condition. UPDATE: The solution I'm currently considering is to use the MemCache to implement a locking mechanism. I would acquire 2 locks before trying to store the User object in the datastore. First a lock that locks based on email, then another that locks based on username. Since creating new User objects only happens at user registration time, and it's even rarer that two people try to use either the same username or the same email, I think it's okay to take the performance hit of locking. I'm thinking of using the MemCache locking code that is here: http://appengine-cookbook.appspot.com/recipe/mutex-using-memcache-api/ What do you guys think?
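    For what it's worth, a minimal sketch of a memcache-based lock of the kind the linked recipe describes, assuming the Python runtime (function and key names are made up): memcache.add() is atomic and fails if the key already exists, which is what makes it usable as a mutex.

        from google.appengine.api import memcache

        def acquire_lock(name, timeout_secs=10):
            # add() succeeds only if the key is absent, so only one caller wins.
            return memcache.add('lock:' + name, True, time=timeout_secs)

        def release_lock(name):
            memcache.delete('lock:' + name)

    The usual caveat applies: memcache entries can be evicted at any time, so a lock built this way is advisory rather than guaranteed.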

    Read the article

  • Are there Adaptive Replacement Cache patent-free alternatives?

    - by aleccolocco
    An open source high-performance project I'm working on needs to keep a cache of parsed/compiled files. A plain LRU or a plain LFU wouldn't fit. Plain LRU wouldn't work, as there will be remote batch/spider processes hitting the service regularly. Plain LFU wouldn't work because content will age. ARC seems like the perfect solution, but since IBM holds patents on it, at least one open source project has dropped it. Are there any (good enough) alternatives? EDIT: I'm not looking for exactly the same thing, just something that could handle those two situations. Perhaps some simple strategy with timestamps and sources. There have to be many programmers who have faced this situation before. That's why the "good enough" bit.

    Read the article

  • MySQL: Get unique values across multiple columns in alphabetical order

    - by RuCh
    Hey everyone, if my table looks like this:

        id | colA   | colB | colC
        ---+--------+------+--------
        1  | red    | blue | yellow
        2  | orange | red  | red
        3  | orange | blue | cyan

    what SELECT query do I run such that the results returned are: blue, cyan, orange, red, yellow? Basically, I want to extract a collective list of distinct values across multiple columns and return them in alphabetical order. I am not concerned with performance optimization, because the results are being parsed to an XML file that will serve as a cache (the database is hardly updated). So even a dirty solution would be fine. Thanks for any help!
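    One straightforward way (the table name here is made up, since the question doesn't give one) is a UNION, which removes duplicates on its own, followed by an ORDER BY:

        SELECT colA AS value FROM my_table
        UNION
        SELECT colB FROM my_table
        UNION
        SELECT colC FROM my_table
        ORDER BY value;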

    Read the article
