Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • How to refactor this MySQL code?

    - by Jader Dias
    SELECT *
    FROM (
        SELECT * FROM `table1`
        WHERE `id` NOT IN (SELECT `id` FROM `table2` WHERE `col4` = 5)
        GROUP BY `col2`
        HAVING SUM(`col3`) > 0
      UNION
        SELECT * FROM `table1`
        WHERE `id` NOT IN (SELECT `id` FROM `table2` WHERE `col4` = 5)
        GROUP BY `col2`
        HAVING SUM(`col3`) = 0
    ) t1;

    For readability and performance reasons, I think this code could be refactored. But how?
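
    One possible refactoring (a sketch, assuming the two UNION branches really are identical apart from the HAVING condition, and that the grouped sums are never NULL): the conditions SUM(`col3`) > 0 and SUM(`col3`) = 0 together are just SUM(`col3`) >= 0, so the UNION and the outer derived table collapse into a single query:

        SELECT * FROM `table1`
        WHERE `id` NOT IN (SELECT `id` FROM `table2` WHERE `col4` = 5)
        GROUP BY `col2`
        HAVING SUM(`col3`) >= 0;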

    Read the article

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.)

    There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons, and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a high-level comparison of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from that for an in-house enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong. I am primarily focusing on the Microsoft approaches to keep this focused.

    ADO.NET Entity Framework
    - Database agnostic: good because it allows swapping backends in and out, bad because it can hit performance, and database vendors are not too happy about it
    - Seems to be MS's preferred route for the future
    - Complicated to learn (though, see 267357)
    - Accessed through LINQ to Entities, so it provides ORM and thus allows abstraction in your code

    LINQ to SQL
    - Uncertain future (see "Is LINQ to SQL truly dead?")
    - Easy to learn (?)
    - Only works with MS SQL Server
    - See also "Pros and cons of LINQ"

    "Standard" ADO.NET
    - No ORM
    - No abstraction, so you are back to "roll your own" and playing with dynamically generated SQL
    - Direct access, allowing potentially better performance

    This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is", and since that is an unanswerable question, hopefully we don't have to go into it too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code; you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows, then ORM makes complete sense. So I guess my argument for good old-fashioned ADO.NET would be the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures.

    ASP.NET Data Source Controls
    - Are these something altogether different, or just a layer over standard ADO.NET? Would you really use these if you had a DAL, or if you implemented LINQ or Entities?

    NHibernate
    - Seems to be a very powerful ORM
    - Open source

    Some other relevant links: "NHibernate or LINQ to SQL", "Entity Framework vs LINQ to SQL".
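
    To make the abstraction trade-off concrete, here is a minimal sketch of the same lookup both ways (the connection string, Customers table and MyDataContext type are hypothetical):

        using System.Data.SqlClient;

        class DataAccessComparison
        {
            // "Standard" ADO.NET: you write the SQL and map results by hand.
            static string GetCustomerNameAdoNet(string connectionString, int id)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT Name FROM Customers WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", id);
                    connection.Open();
                    return (string)command.ExecuteScalar();
                }
            }

            // ORM style (LINQ to SQL / LINQ to Entities): the query is written
            // over objects and the framework generates the SQL.
            // static string GetCustomerNameOrm(MyDataContext db, int id)
            // {
            //     return db.Customers.Where(c => c.Id == id).Select(c => c.Name).Single();
            // }
        }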

    Read the article

  • Using XmlSerializers.dll

    - by Erup
    I know the generated .XmlSerializers.dll is useful for improving the startup performance of an XmlSerializer when it serializes or deserializes objects. But how can clients use this assembly?
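
    For reference, a minimal sketch (the Person type is just an illustration): if MyApp.XmlSerializers.dll, generated by sgen.exe, is deployed next to MyApp.exe, the plain constructor below finds and loads it automatically - no special client code is needed. Note that only the XmlSerializer(Type) and XmlSerializer(Type, String) constructors use the pre-generated assembly; the other overloads still emit serialization code at runtime.

        using System;
        using System.Xml.Serialization;

        public class Person
        {
            public string Name { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // Loads the pre-generated serializer from MyApp.XmlSerializers.dll
                // if present, instead of generating one on the fly.
                var serializer = new XmlSerializer(typeof(Person));
                serializer.Serialize(Console.Out, new Person { Name = "test" });
            }
        }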

    Read the article

  • string.Format vs + for string concatenation

    - by AMissico
    Which is better with respect to performance and memory utilization?

        // + operator
        oMessage.Subject = "Agreement, # " + sNumber + ", Name: " + sName;

        // string.Format
        oMessage.Subject = string.Format("Agreement, # {0}, Name: {1}", sNumber, sName);

    My main concern is memory utilization. The + operator is used throughout the application; string.Format and StringBuilder are rarely used. I want to reduce the amount of memory fragmentation caused by excessive string allocations.
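
    For what it's worth, a single-expression chain of + operators compiles into one String.Concat call, so in a sketch like this both lines allocate only the final string; string.Format adds runtime parsing of the format string but no extra intermediates:

        using System;

        class ConcatComparison
        {
            static void Main()
            {
                string sNumber = "123", sName = "Smith";

                // Compiles to a single String.Concat(...) call: one result
                // allocation, no intermediate strings.
                string a = "Agreement, # " + sNumber + ", Name: " + sName;

                // Parses the format string at runtime - typically a bit slower,
                // but still just one result allocation.
                string b = string.Format("Agreement, # {0}, Name: {1}", sNumber, sName);

                // StringBuilder pays off mainly when concatenating in a loop.
                Console.WriteLine(a == b); // True
            }
        }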

    Read the article

  • Pointcut matching methods with annotated parameters

    - by Sinuhe
    I need to create an aspect with a pointcut matching a method if:

    - It is public
    - Its class is annotated with @Controller
    - One of its parameters (it can have many) is annotated with @MyParamAnnotation

    I think the first two conditions are easy, but I don't know if it's possible to accomplish the third with Spring. If it is not, maybe I can change it to:

    - One of its parameters is an instance of type com.me.MyType (or implements some interface)

    Do you think it's possible to achieve this? And will performance be good? Thanks

    Read the article

  • OpenGL - GL_FRONT versus GL_FRONT_AND_BACK

    - by Drew Noakes
    I'm tinkering with an open source project that uses OpenGL for rendering in 3D. In the construction of the materials I see code like this: // set ambient material reflectance glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, mAmbient); In other examples, this is used: glMaterialfv(GL_FRONT, GL_AMBIENT, mAmbient); So my question is, what is the difference here? Under what circumstances would it look different and, if my volume is enclosed with all normals pointing outwards, is there any performance difference?

    Read the article

  • NHibernate auditing in disconnected mode

    - by Ciaran
    I'm developing an app with a Silverlight UI, transferring my domain objects over WCF and persisting them via NHibernate. I'm therefore working with NHibernate in a disconnected mode. I'm already using the NHibernate PreUpdate and PreInsert EventListeners to perform some metadata operations (updating Create/Update date, created/updated by etc) and they are working fine. I now have a requirement to perform data logging on some of my domain objects. So I will need to have an audit table that has a before-save and after-save state of certain entities. I had wanted to use the @event.Persister.OldState and @event.Persister.NewState to perform this logging, but because I am in a disconnected scenario (using different Sessions from when data is retrieved to when it is persisted), @event.Persister.OldState is null when I am saving my changes back to the database. How is anyone else doing data logging in a disconnected scenario with NHibernate?
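
    One workaround others use (a sketch, not tested against any particular NHibernate version - the WriteAudit helper is hypothetical): since the old state is only populated for entities loaded into the current session, fetch the before-save state yourself inside the listener through a second session, then diff it against the incoming entity.

        using NHibernate.Event;

        public class AuditEventListener : IPreUpdateEventListener
        {
            public bool OnPreUpdate(PreUpdateEvent @event)
            {
                // In a disconnected scenario the old state is null, so load the
                // current database copy through a separate session (the current
                // session's first-level cache must not be reused here).
                using (var tempSession = @event.Session.Factory.OpenSession())
                {
                    var before = tempSession.Get(@event.Entity.GetType(), @event.Id);

                    // Diff 'before' (database state) against @event.Entity
                    // (incoming state) and write the audit row.
                    // WriteAudit(before, @event.Entity); // hypothetical helper
                }
                return false; // false = do not veto the update
            }
        }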

    Read the article

  • SQL: How to order values inside group by

    - by Denis Yaremov
    Consider the following MS SQL Server table:

        ID    | X     | Y
        ------+-------+-------
        1     | 1     | 1
        2     | 1     | 2
        3     | 1     | 3
        4     | 2     | 40
        5     | 2     | 500
        6     | 3     | 1
        7     | 3     | 100
        8     | 3     | 10

    I need to select the ID of the row that has the maximum value of Y, grouped by X, i.e.:

        ID    | X     | Y
        ------+-------+-------
        3     | 1     | 3
        5     | 2     | 500
        7     | 3     | 100

    The query will be nested several times, so an optimal performance solution is required... Thank you!
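
    One common approach on SQL Server 2005 and later (a sketch - the table name MyTable is a placeholder, and ties on Y are broken arbitrarily): rank the rows within each X partition and keep the top one.

        SELECT ID, X, Y
        FROM (
            SELECT ID, X, Y,
                   ROW_NUMBER() OVER (PARTITION BY X ORDER BY Y DESC) AS rn
            FROM MyTable
        ) ranked
        WHERE rn = 1;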

    Read the article

  • What is the best way to download files via HTTP using .NET?

    - by Shamika
    In one of my applications I'm using the WebClient class to download files from a web server. Depending on the web server, the application sometimes downloads millions of documents. It seems that when there are a lot of documents, WebClient doesn't scale up well performance-wise. It also seems that WebClient doesn't immediately close the connection it opened to the web server, even after it has successfully downloaded the particular document. I would like to know what other alternatives I have.
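
    One direction to try (a sketch - the URL and file name are placeholders): drop down to HttpWebRequest so the response and its stream are disposed deterministically, and raise the default limit of two concurrent connections per host before issuing requests.

        using System.IO;
        using System.Net;

        class Downloader
        {
            static void Main()
            {
                // The default is 2 concurrent connections per host, which
                // throttles bulk downloads; raise it up front.
                ServicePointManager.DefaultConnectionLimit = 20;

                var request = (HttpWebRequest)WebRequest.Create("http://example.com/doc1.pdf");
                request.KeepAlive = true; // reuse the TCP connection across documents

                // Disposing the response promptly returns the connection to the
                // pool instead of leaving it open.
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var file = File.Create("doc1.pdf"))
                {
                    stream.CopyTo(file); // .NET 4's Stream.CopyTo
                }
            }
        }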

    Read the article

  • Is there a common practice to make freeing memory easier for the Garbage Collector in .NET?

    - by MartyIX
    I've been wondering whether there's a way to speed up freeing memory in .NET. I'm creating a game in .NET (only managed code) where no significant graphics are needed, but I would still like to write it properly so as not to lose performance for nothing. For example, is it useful to assign null to objects that are no longer needed? I see this in a few samples around the Internet. Thanks for answers!
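
    To illustrate the usual guidance (my own example, not from the question): nulling locals buys nothing, because the JIT already treats a local as dead after its last use; clearing a long-lived field that outlives the data it references is the one case where it genuinely helps.

        using System;

        class Level
        {
            // This object lives for the whole game, so the field can keep a
            // large graph reachable long after it is actually needed.
            private byte[] _mapData = new byte[64 * 1024 * 1024];

            public void Unload()
            {
                _mapData = null; // useful: the map data becomes collectable now
            }
        }

        class Program
        {
            static void Main()
            {
                var level = new Level();

                byte[] scratch = new byte[1024];
                Console.WriteLine(scratch.Length);
                // scratch = null; // pointless: the JIT already knows 'scratch'
                //                 // is dead after its last use.

                level.Unload(); // the 64 MB array is now eligible for collection
            }
        }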

    Read the article

  • Is a display list best for this? (OpenGL)

    - by user146780
    I'm rendering 2D polygons with the GLUTesselator the first time; then they are stored in a display list for subsequent use. I think VBOs might be faster, but since I can't access what the tesselator outputs, and since it uses mixes of gl_triangle, quad, strip, etc., I'm not sure how I could do this, even though I would like to use VBOs once the GLUTesselator is done with them, for optimal performance. Thanks

    Read the article

  • Checking an empty Core Data relationship (SQLite)

    - by rwat
    I have a to-many relationship in my data model, and I'd like to get all the objects that have no corresponding objects in the relationship. For example: Customer - Purchases I want to get all Customers that have 0 Purchases. I've read somewhere that I could use "Purchases[SIZE] = 0", but this gives me an unsupported function expression error, which I think means it doesn't work with a SQLite backing store (which I don't want to switch from, due to some performance constraints). Any ideas?

    Read the article

  • Application Engineering and Number of Users

    - by Kramii
    Apart from performance concerns, should web-based applications be built differently according to the number of (concurrent) users? If so, what are the main differences for (say) 4, 40, 400 and 4000 users? I'm particularly interested in how logging, error handling, design patterns etc. would be used according to the number of concurrent users.

    Read the article

  • How to choose between UUIDs, autoincrement/sequence keys and sequence tables for database primary keys?

    - by Tim
    I'm looking at the pros and cons of these three primary methods of coming up with primary keys for database rows. So, assuming I am using a database that supports more than one of these methods, is there a simple heuristic to determine the best option for me? How do considerations such as distributed/multiple masters, performance requirements, ORM use, security and testing affect the choice? Any unexpected drawbacks that one might run into?

    Read the article

  • Which Visual Studio 2010 edition for sole developer

    - by bufferz
    I am the sole .NET developer for a small company. My projects span many .NET technologies, including WinForms, WPF, SQL, XNA, LINQ, WCF, WTF?, and others. I struggle to stay on top of all these projects, so I'm looking to make my life easier with the release of VS2010. Without a mentor, I rely heavily on Stack Overflow and whatever else Google comes up with. Should I convince my company to get an edition with an MSDN subscription? Is it one of those things where, once you have it, you can't imagine life without it? What about the source control that comes with VS2010 - do you all find it better than an SVN server? We're looking to hire another programmer this year; would I be best off getting a Team edition of VS2010 to be best prepared for that hire? Thanks!

    Read the article

  • Unfamiliar JavaScript Syntax

    - by user1051643
    Long and short of the story is, whilst reading John Resig's blog (specifically http://ejohn.org/blog/javascript-trie-performance-analysis/) I came across a line which makes absolutely no sense to me whatsoever. Essentially it boils down to:

        object = object[key] = something;

    (This can be found in the first code block of the article I've linked.) This has proven rather difficult to google, so if anyone can offer some insight, or a good online resource for me to learn from, I'd much appreciate it.
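
    For what it's worth: assignment in JavaScript is right-associative, and an assignment expression evaluates to the assigned value, so that line first stores something into object[key] and then repoints object at that same value - the idiom descends (or extends) a trie one level per step. The same shape sketched in C#, with a nested dictionary standing in for the trie node type:

        using System;
        using System.Collections.Generic;

        class ChainedAssignment
        {
            static void Main()
            {
                // Mirrors the JavaScript idiom: node = node[key] = something;
                var root = new Dictionary<char, object>();
                var node = root;

                foreach (char c in "hi")
                {
                    var child = new Dictionary<char, object>();
                    // Right-associative: node[c] = child runs first and yields
                    // child, which is then assigned to node.
                    node = (Dictionary<char, object>)(node[c] = child);
                }

                Console.WriteLine(root.Count); // 1: 'h' -> { 'i' -> {} }
            }
        }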

    Read the article

  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client/server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action, and possibly re-dispatch an answer message to one or more clients. Although doable with WCF, the application would get pretty complex with hundreds of different messages.

    For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working; everything's fine. Except that for each line of code I write, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each message received on a socket is immediately processed on the thread pool thread it was received on, not on a single consumer thread.

    Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code.

    So far, so good. Each resource is protected, and I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. This is done to prevent other clients from making changes to the state of this client while it has some messages being processed, which is something that will happen frequently.

    Now, I don't just need to protect resources from concurrent accesses; I must also keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients, and each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it, so that its ID is no longer valid for them. Every client must always have, at a given time, the same clients collection as the server, so that I can assume that everybody knows everybody. This way, if I'm sending a message to client #1 telling "Client #2 has done something", I know that it will always be correctly interpreted: client #1 will never wonder "but who is Client 2 anyway?".

    My first attempt at handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("newClient with id X has connected");
                }
            }
            clients.Add(newClient);
            newClient.Send("the list of other clients");
        }

    Now imagine that at the same time, another client has sent a packet that translates into a message that must be broadcast to every connected client. The pseudo-code would be something like this (remember that the current client - let's call it Y - is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("something");
                }
            }
        }

    An obvious deadlock occurs here: on one thread, X is locked, the clients lock has been entered, we start looping through the clients, and at some moment we must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released!

    This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, etc. I tried other types of locks, lock-free mechanisms and a bunch of other things. Either there were obvious deadlocks when I used too many locks for safety, or obvious race conditions otherwise. When I finally found a good middle point between the two, it usually came with very subtle race conditions / deadlocks and other multithreading issues... my head hurts very quickly, since for any single line of code I write I have to review almost the whole application to ensure everything will behave correctly with any number of threads.

    So here's my final question: how would you resolve this specific case, and the general case? And more importantly: aren't I going the wrong way here? I have few problems with the .NET framework, C#, simple concurrency or algorithms in general; still, I'm lost here. I know I could use only one thread to process the incoming requests and everything would be fine. However, that won't scale well at all with more clients... but I'm thinking more and more about going this simple way. What do you think?

    Thanks in advance to you, Stack Overflow people who have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
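
    Two standard ways out, sketched below with placeholder Client/Send types (an illustration under the question's assumptions, not a drop-in fix): either impose a strict global lock order - never hold the clients-collection lock while taking a per-client lock, and take per-client locks in a consistent order - or funnel every mutation of shared state through a single dispatcher thread, which is the "one thread" design mentioned at the end and often scales further than feared, since the per-message work is small.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        class Client
        {
            public void Send(string message) { /* async socket send goes here */ }
        }

        class Server
        {
            private readonly object _clientsLock = new object();
            private readonly List<Client> _clients = new List<Client>();

            public void AddClient(Client client)
            {
                lock (_clientsLock) { _clients.Add(client); }
            }

            // Option 1: a lock hierarchy. Snapshot under the collection lock,
            // then lock clients one at a time, so no thread ever holds the
            // collection lock and a client lock simultaneously. Caveat: the
            // per-client locks themselves still need a consistent order (or
            // the handler must release its own client's lock first), or the
            // X/Y cycle described above can still form between two clients.
            public void Broadcast(string message)
            {
                Client[] snapshot;
                lock (_clientsLock)
                {
                    snapshot = _clients.ToArray();
                }
                foreach (var client in snapshot)
                {
                    lock (client)
                    {
                        client.Send(message);
                    }
                }
            }

            // Option 2: a single dispatcher thread. Socket handlers only
            // enqueue work; this loop is the sole mutator of shared state,
            // so the shared collections need no locks at all.
            private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();

            public void Post(Action action)
            {
                _work.Add(action);
            }

            public void RunDispatcherLoop()
            {
                foreach (var action in _work.GetConsumingEnumerable())
                {
                    action();
                }
            }
        }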

    Read the article

  • How does JSON compare to XML in terms of file size and serialisation/deserialisation time?

    - by nbolton
    I have an application that performs a little slowly over the internet, due to bandwidth reasons. I have enabled GZip, which has improved download time by a significant amount, but I was also considering whether or not I could switch from XML to JSON in order to squeeze out that last bit of performance. Would using JSON make the message size significantly smaller, or just somewhat smaller? Let's say we're talking about 250kB of XML data (which compresses to 30kB).
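
    For intuition, the same hypothetical record in both formats - most of JSON's raw-size win comes from dropping the closing tags, which is exactly the redundancy gzip already compresses well, so the post-compression difference is usually modest:

        <customer><id>17</id><name>Smith</name><active>true</active></customer>
        {"id":17,"name":"Smith","active":true}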

    Read the article
