Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • GetHashCode Method reliability in Silverlight/WP7.1

    - by abhinav
    I am attempting to hash and keep (the hash of) an object of type IEnumerable<anotherobject> which has about 1000 entries. I'll be generating another such object, but this time I'd like to check for any changes in the values of the entries using the hash codes of the two objects. Basically, I was wondering if GetHashCode() is apt for this, from both a performance and a reliability perspective (always getting different values for different object values, and the same value for the same object values). If I have to override it, what would be a good way to do so? Does it always depend on the type of anotherobject and on what Equals means when comparing two anotherobjects? Is there a generic way to do it? This concern is because my object can be quite big.
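
    A minimal sketch of one common approach (assuming anotherobject overrides GetHashCode()/Equals() with value semantics; SequenceHasher is a hypothetical helper, not an existing API): fold the element hashes together in order. Note that two different sequences can still collide, so an equal hash suggests "unchanged" but never proves it, and .NET hash codes are not guaranteed stable across CLR versions, so they should not be persisted between runs.

        using System.Collections.Generic;

        static class SequenceHasher
        {
            // Sketch: order-sensitive hash over a sequence. 17 and 31 are
            // conventional seed/multiplier primes, nothing magic.
            public static int Compute<T>(IEnumerable<T> items)
            {
                unchecked // let the arithmetic wrap instead of throwing
                {
                    int hash = 17;
                    foreach (var item in items)
                        hash = hash * 31 + (item == null ? 0 : item.GetHashCode());
                    return hash;
                }
            }
        }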

    Read the article

  • Is it possible to find out what FlashBuilder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:

    - Moving embedded resources into externally linked projects.
    - Using -incremental.
    - Tweaking the .ini JVM settings, including memory and -server.
    - Turning off automatic build (I'd prefer not to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
    - Deleting the project and re-checking it out from the repository.

    While some of these may help a bit, performance is still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells Flash Builder to show me which parts of the build process take so much time?

    Read the article

  • Tracing or Logging Resource Governor classification function behavior in Sql Server 2008

    - by nganju
    I'm trying to use the Resource Governor in SQL Server 2008, but I find it hard to debug the classification function and figure out what the input variables will hold, e.g. does SUSER_NAME() contain the domain name? What does the APP_NAME() string look like? It's also hard to verify that it's working correctly: which group did the function return? The only way I can see this is to fire up Performance Monitor and watch unblinkingly for little blips on the right CPU counter. Is there some way I can either run the function in debug mode, where I can set a breakpoint and step through and look at variable values, or at least do the old-school thing of writing trace statements to a file so I can see what's going on? Thanks...

    Read the article

  • assignment vs std::swap, and merging while keeping duplicates in a separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that), because I also check for duplicates and warn the user accordingly. To that end, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a stringset containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? And is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect the duplicates and merge the sets? I know two passes are slower than a single traversal doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.

    Read the article

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should use at most 0-2 workers in the system.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1 but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes run the same task; the only difference between them is how they're requested. They take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using celery?

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to run complex queries on, for example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I just cycle through each element of an NSArray and check whether it matches the query. Writing the queries this way is a pain, though; SQLite would be much nicer. I'm not sure which would be more efficient, since (to my understanding) a SQLite database resides on disk and not in memory. Would SQLite be as efficient or more efficient here? Or is there a better way than either of these that I haven't thought of? I need to run these queries against multiple sets of points thousands of times, so performance is important.

    Read the article

  • Subtract displaced mask using OpenCV

    - by dario_ramos
    I want to do: masked = image - mask. But I want to "displace" the mask, that is, move it vertically and horizontally (this is valid as long as the intersection between it and image is not empty). I have some hand-coded assembly (using MMX instructions) which does this, embedded in a C++ program, but it's unstable when doing vertical displacement, so I thought of using OpenCV instead. Would it be possible to do this by calling only one OpenCV function? Performance is critical; using OpenCV, the time should be at least in the same order of magnitude as the assembly code.

    Read the article

  • Determine if a decimal can be stored as int32

    - by anchandra
    I am doing some custom serializing, and in order to save some space, I want to serialize decimals as ints when the value allows it. Performance is a concern, since I am dealing with a high volume of data. The current method I use is:

        if ((value > Int32.MinValue) &&
            (value < Int32.MaxValue) &&
            ((valueAsInt = Decimal.ToInt32(value)) == value))
        {
            return true;
        }

    Can this be improved?
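
    One possible refinement (a sketch, not a benchmarked answer; FitsInInt32 is a hypothetical helper): the range test above excludes Int32.MinValue and Int32.MaxValue themselves, both of which are representable, and the assignment inside the condition is easy to misread. Using >= / <= semantics and decimal.Truncate makes both cases explicit:

        // Sketch: true (and valueAsInt set) when the decimal is an integral
        // value inside the Int32 range, boundary values included.
        static bool FitsInInt32(decimal value, out int valueAsInt)
        {
            valueAsInt = 0;
            if (value < int.MinValue || value > int.MaxValue)
                return false;                       // out of range
            if (decimal.Truncate(value) != value)
                return false;                       // has a fractional part
            valueAsInt = (int)value;                // exact: integral and in range
            return true;
        }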

    Read the article

  • Why hasn't anybody started a hosted continuous integration service?

    - by Teflon Ted
    There are a dozen services that provide hosted version control, hosted ticket tracking, hosted project management, and combinations of all of the above; there are even hosted web-based IDEs. But nobody has yet offered a hosted continuous integration service, at least none that I can find. The concept seems simple enough: I register and provide the URL to my source code repository; the service grabs my code and builds it via ant/rake/whatever, then runs the suite of tests and some metrics (code coverage, performance, etc.). Is there some prohibitive barrier to entry I'm not considering?

    Read the article

  • Available options for hosting FTP server in .NET application

    - by duane
    I need to implement an FTP service inside my .NET application (running as a Windows service) and have not had much luck finding good, current source code or vendors. Ideally it needs to respond to the basic FTP protocol and accept the data from an upload via a stream, enabling me to process the data as it is being received (think on-the-fly hashing). I need to be able to integrate it into my service because it will stack on top of our current code base, which has an existing custom TCP/IP communication protocol. I don't want to write (and then spend time debugging and performance-testing) my own protocol or implementation. I have already found plenty of FTP client implementations; I just need an acceptable server solution.

    Read the article

  • Java data structure to use with Hibernate to store unknown number of parameters?

    - by Lunikon
    Following problem: I want to render a news stream of short messages based on localized texts. In various places in these messages I have to insert parameters to "customize" them. I guess you know what I mean ;) My question probably falls into the "Which is the best style to do it?" category: how would you store these parameters (they may be strings and numbers that need to be formatted according to locale) in the database? I'm using Hibernate to do the ORM, and I can think of the following solutions:

    - Build a combined string and save it as such (ugly and hard to maintain, I think).
    - Do some kind of fancy normalization and make every parameter a single row in the database (clean, I guess, but a performance nightmare).
    - Put the params into an array, map or other Java data structure and save it in binary format (probably causes a lot of overhead size-wise).

    I tend towards option #3, but I'm afraid that it might be too costly in terms of size in the database. What do you think?

    Read the article

  • parsing/matching string occurrence in C

    - by David
    I have the following string:

        const char *str = "\"This is just some random text\" 130 28194 \"Some other string\" \"String 3\"";

    I would like to get the integer 28194. Of course the integer varies, so I can't just do strstr("28194"). I was wondering what would be a good way to get that part of the string. I was thinking of using #include <regex.h>, for which I already have a procedure to match regexps, but I'm not sure what the regex would look like using POSIX-style notation (something like [[:alpha:]]+[[:digit:]]+), or whether performance would be an issue. Or would it be better to use strchr/strstr? Any ideas will be appreciated.

    Read the article

  • Is there an additional runtime cost for using named parameters?

    - by Jurily
    Consider the following struct:

        public struct vip
        {
            string email;
            string name;
            int category;

            public vip(string email, int category, string name = "")
            {
                this.email = email;
                this.name = name;
                this.category = category;
            }
        }

    Is there a performance difference between the following two calls?

        var e = new vip(email: "foo", name: "bar", category: 32);
        var e = new vip("foo", 32, "bar");

    Is there a difference if there are no optional parameters defined?
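
    For what it's worth, this is general C# behaviour rather than anything specific to this struct: named and optional arguments are resolved entirely at compile time. The compiler reorders named arguments into positional order (introducing temporaries only when argument expressions have side effects) and bakes default values into the call site, so both calls above emit the same constructor invocation. A small sketch, assuming the vip struct from the question is in scope:

        var a = new vip(email: "foo", name: "bar", category: 32); // reordered at compile time
        var b = new vip("foo", 32, "bar");                        // identical emitted call
        var c = new vip("foo", 32);                               // default "" baked in at the call site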

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for or against it? The platform is Netezza; the size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, which issues should I focus on when arguing that this path will cause us trouble down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Managing Data Prefetching and Dependencies with .NET Typed Datasets

    - by Derek Morrison
    I'm using .NET typed datasets on a project, and I often get into situations where I prefetch data from several tables into a dataset and then pass that dataset to several methods for processing. It seems cleaner to let each method decide exactly which data it needs and then load the data itself. However, several of the methods work with the same data, and I want the performance benefit of loading data in the beginning only once. My problem is that I don't know of a good way or pattern to use for managing dependencies (I want to be sure I load all the data that I'm going to need for each class/method that will use the dataset). Currently, I just end up looking through the code for the various classes that will use the dataset to make sure I'm loading everything appropriately. What are good approaches or patterns to use in this situation? Am I doing something fundamentally wrong? Although I'm using typed datasets, this seems like it would be a common situation where prefetching data is used. Thanks!

    Read the article

  • How do I calculate a good hash code for a list of strings?

    - by Ian Ringrose
    Background: I have a short list of strings. The number of strings is not always the same, but it is nearly always on the order of a "handful". Our database stores these strings in a second, normalised table, and they are never changed once they are written. We wish to be able to match on these strings quickly in a query, without the performance hit of doing lots of joins. So I am thinking of storing a hash code of all these strings in the main table and including it in our index, so the joins are only processed by the database when the hash code matches. So how do I get a good hash code? I could:

    - XOR the hash codes of all the strings together.
    - XOR, but multiply the result after each string (say, by 31).
    - Concatenate all the strings, then take the hash code of the result.
    - Some other way.

    So what do people think? (If you care, we are using .NET and SQL Server.)
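
    One caveat before choosing: String.GetHashCode() is not guaranteed stable across CLR versions or machines, which matters once the hash is persisted in SQL Server. A minimal sketch of a deterministic alternative, shaped like option 2 but over raw bytes (FNV-1a with the standard 32-bit constants; ListHash is a hypothetical helper):

        using System.Collections.Generic;
        using System.Text;

        static class ListHash
        {
            // Sketch: deterministic, order-sensitive hash of a list of strings.
            // Collisions remain possible, so the join must still compare the
            // actual strings whenever the stored hash matches.
            public static int Compute(IEnumerable<string> strings)
            {
                unchecked
                {
                    const uint prime = 16777619;   // 32-bit FNV prime
                    uint hash = 2166136261;        // 32-bit FNV offset basis
                    foreach (string s in strings)
                    {
                        foreach (byte b in Encoding.UTF8.GetBytes(s))
                            hash = (hash ^ b) * prime;
                        hash = (hash ^ 0xFF) * prime; // separator between strings
                    }
                    return (int)hash;
                }
            }
        }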

    Read the article

  • How do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed but from the band that owns the song, and perhaps even from the venue that hosted the performance, to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

    Read the article

  • ExtJS Grid slow with 3000+ records

    - by Oliver Watkins
    I am using the ExtJS grid and it is getting pretty slow with 3000+ records; sorting takes about 4 seconds, which is slow compared to other JavaScript tables. I am thinking of using pagination in my table, but after reading the documentation I am still a bit unsure about how pagination works in ExtJS. Does it pull data from the server each time you turn a page? I would prefer that weren't the case: I would prefer that the 3000 records be kept in the browser and only a portion of those rows be rendered. Also, I am using ExtJS version 4.2.1; if I upgrade to version 5, will I get some performance improvements?

    Read the article

  • On Memory Allocation and C++

    - by Arpan
    And I quote from MSDN (http://msdn.microsoft.com/en-us/library/aa366533(VS.85).aspx): "The malloc function has the disadvantage of being run-time dependent. The new operator has the disadvantage of being compiler dependent and language dependent." Now the questions, folks: a) What does it mean that malloc is run-time dependent? What kind of dynamic memory allocation function could be independent of the run-time? This statement sounds really strange. b) new is language dependent? Of course it should be, right? Are HeapAlloc, LocalAlloc, etc. language independent? c) From a pure performance perspective, are the MSVC-provided routines preferable? Arpan

    Read the article

  • Dealing with Windows line-endings in Python

    - by Adam Nelson
    I've got a 700MB XML file coming from a Windows provider. As one might expect, the line endings are '\r\n' (or ^M in vi). What is the most efficient way to deal with this situation, aside from getting the supplier to send '\n' instead :-) Options I've considered:

    - Use os.linesep.
    - Use rstrip() (requiring opening the file ... which seems crazy).
    - Universal newline support is not standard on my Mac Snow Leopard, so it isn't an option.

    I'm open to anything that requires Python 2.6+, but it needs to work on Snow Leopard and Ubuntu 9.10 with minimal external requirements. I don't mind a small performance penalty, but I am looking for the standard, best way to deal with this.

    Read the article

  • Reducing template bloat with inheritance

    - by benoitj
    Does anyone have experience reducing template code bloat using inheritance? I am hesitant about rewriting our containers this way:

        class vectorBase
        {
        public:
            int size();
            void clear();
            int m_size;
            void *m_rawData;
            // ....
        };

        template< typename T >
        class vector : public vectorBase
        {
            void push_back( const T& );
            // ...
        };

    I need to keep maximum performance while reducing compile time. I'm also wondering why STL implementations do not use this approach. Thanks for your feedback.

    Read the article

  • Different i18n in spring according to url

    - by Fanooos
    I have a Spring web application that is required to work as follows: the application will be accessed from two different URLs, www.domain1.com and www.domain2.com, and the two URLs should look like two different applications, with different CSS and i18n. The CSS part is done, but I am stuck on the i18n part: how do I make Spring load different i18n properties files according to the domain name? The solution I thought of is to implement a filter that checks the request URL and, according to the URL, clears the message source bean and loads the required i18n file, but that does not look good for performance. By the way, I am using a ReloadableResourceBundleMessageSource message source. Another solution is to implement two different message sources. The problem with that solution is that from the source code I can manage which bean I use, but how can I tell the fmt:message tag which message source to use? Thanks in advance and best regards.

    Read the article

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems a little obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.
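
    For reference, and as general LINQ behaviour rather than the interviewer's official answer: Enumerable.Any() does not loop over the whole collection. Without a predicate it asks the source's enumerator for a single MoveNext(), so it is O(1) like an array's Length read, just with enumerator overhead; its real advantage is that it works on any IEnumerable<T>, including lazy sequences that have no Length at all. A small sketch:

        using System.Linq;

        int[] numbers = { 1, 2, 3 };

        bool viaLength = numbers.Length > 0;           // O(1) field read; arrays only
        bool viaAny    = numbers.Any();                // true after a single MoveNext()
        bool anyEven   = numbers.Any(n => n % 2 == 0); // predicate form: scans until a match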

    Read the article

  • Emulating a transaction-safe SEQUENCE in MySQL

    - by Michael Pliskin
    We're using MySQL with the InnoDB storage engine and transactions a lot, and we've run into a problem: we need a nice way to emulate Oracle's SEQUENCEs in MySQL. The requirements are:

    - concurrency support
    - transaction safety
    - maximum performance (meaning minimizing locks and deadlocks)

    We don't care if some of the values go unused, i.e. gaps in the sequence are OK. There is an easy way to achieve this by creating a separate InnoDB table with a counter; however, that table then takes part in the transaction and introduces locks and waiting. I am thinking of trying a MyISAM table with manual locks. Any other ideas or best practices?

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using containable to be able to filter deeper into the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get many repeated queries like the following, sometimes hundreds in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching from MySQL over and over again, once per row. We have datasets of several thousand rows, and I have a feeling this is going to hurt performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article
