Search Results

Search found 14924 results on 597 pages for 'selector performance'.

  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
    I am writing a simple ASP.NET MVC application using the session-per-request and transaction-per-request patterns (via a custom HttpModule). It seems to work properly, but the performance is terrible (a simple page takes ~7 seconds to load). A transaction is created for every HTTP request, including requests for graphical resources (all the images on the site), and that seems to be what delays the loading: without transactions, load times per image are ~1-10 ms; with them, over 1 second. What is the proper way to manage transactions in an ASP.NET MVC + NHibernate stack? When I put all the transactions into my repository methods, for some obscure reason I got an 'implicit transactions' warning in NHProf (the SQL statements were executed outside a transaction, even though in code the session.Save()/Update()/etc. methods were invoked within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
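
    One commonly suggested direction, sketched under assumptions rather than prescribed: open the unit of work only for requests that can actually reach NHibernate, and skip static resources. The sketch assumes the IIS7 integrated pipeline (where managed modules also see static-file requests) and uses OpenSessionAndBeginTransaction as a hypothetical stand-in for the existing session-per-request bootstrap:

        using System;
        using System.Web;

        public class TransactionPerRequestModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    string path = context.Request.Path.ToLowerInvariant();

                    // Crude static-content filter; extend the list as needed.
                    if (path.EndsWith(".png") || path.EndsWith(".gif") ||
                        path.EndsWith(".jpg") || path.EndsWith(".css") ||
                        path.EndsWith(".js"))
                        return;

                    OpenSessionAndBeginTransaction(context);
                };
            }

            public void Dispose() { }

            void OpenSessionAndBeginTransaction(HttpContext context)
            {
                // ... existing NHibernate session/transaction setup ...
            }
        }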

  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project, with PostgreSQL as the database. ab -n 10000 -c 100: 285.69 [#/sec] (mean). I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.RC2 together with the JDBC postgres adapter and GlassFish. As the performance was really poor, I also tried running my application with "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Either way, I only get ab -n 10000 -c 100: 43.88 [#/sec] (mean). config.threadsafe! is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top). ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]. I wonder what I'm doing wrong and how to get better performance with JRuby than with REE? Thanks, Corin

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project uses a JavaScript framework (jQuery) along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. I understand that the more complex the UI tools, the more processing needs to be done, but the project is not yet large and I would call its functions average in complexity, and at this stage I can already see it is not going to scale well. I noticed that things like jQuery's 'each' command take quite a lot of processing time. Have others experienced extra latency when using JS frameworks? How do I minimise their effect on page performance? Are there best practices for building on top of JS frameworks? Thanks

  • mySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this:

        Songs table:      id | song_title | album_id | publisher_id | artist_id
        Albums table:     id | album_title | etc...
        Publishers table: id | publisher_name | etc...
        Artists table:    id | artist_name | etc...

    Then, thinking about normalization, I thought I should get rid of album_id, publisher_id and artist_id in the songs table and put them in intermediate tables like this:

        Table song_album:     song_id, album_id
        Table song_publisher: song_id, publisher_id
        Table song_artist:    song_id, artist_id

    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks

  • How to populate List<string> with DataRow values from a single column...

    - by James
    Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my datatables. For the more commonly used tables throughout the life of the program, I dump them to DataTables and query those instead. What I'm hoping to do is query a DataTable for, say, Columnx = "this", and convert the values of column "y" directly to a List to return to the caller:

        private List<string> LookupColumnY(string hex)
        {
            List<string> stringlist = new List<string>();
            DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC");
            foreach (DataRow row in rows)
            {
                stringlist.Add(row["Columny"].ToString());
            }
            return stringlist;
        }

    Anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering whether, if I do enough of these, iterating via a foreach loop will become a performance hit. TIA!
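
    A slightly tidier equivalent, as a sketch (it assumes .NET 3.5+ and a reference to System.Data.DataSetExtensions for the AsEnumerable/Field<T> extension methods):

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        // Same query expressed with LINQ to DataSet; tblDataTable as above.
        private List<string> LookupColumnY(string hex)
        {
            return tblDataTable.AsEnumerable()
                .Where(row => row.Field<string>("Columnx") == hex)
                .OrderBy(row => row.Field<string>("Columny"))
                .Select(row => row.Field<string>("Columny"))
                .ToList();
        }

    Either way the work is a linear scan plus a sort, so the foreach itself is unlikely to be the bottleneck.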

  • FreeBSD or NetBSD based commercial TCP/IP stack vendor?

    - by Vineet
    Hi - I'm seeking recommendations for a commercial TCP/IP stack implementation based on FreeBSD or NetBSD. The requirements are similar to a typical desktop PC running a browser, email and streaming voice/video - which is to say, rich network functionality for an end-host type of device, with a mature implementation and reasonable performance. BSD-derived network stacks have been deployed in a wide variety of situations for years and hence have mature implementations. It's supposed to run on a proprietary RTOS. Most vendors I found don't advertise whether their stack is based on BSD. Any recommendations? -- Vineet

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in each of five separate series - ~50k points all up). I have found the existing options from Dundas, Visifire, Microsoft etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with the existing chart controls are caused by their heavy use of vector graphics. Ergo, one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel-reinvention rabbit hole - has anyone found a third-party or OSS control that will manage large numbers of data points on a sparkline?
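
    If it does come to reinventing the wheel, the raster approach is compact. A minimal sketch (assuming Silverlight 3+, samples already normalized to [0,1], and an Image element in the XAML; all names are illustrative):

        using System.Windows.Controls;
        using System.Windows.Media.Imaging;

        // Plot samples as single pixels instead of ~50k vector elements.
        void RenderSparkline(Image chartImage, double[] samples, int width, int height)
        {
            var bmp = new WriteableBitmap(width, height);
            for (int i = 0; i < samples.Length; i++)
            {
                int x = (int)((double)i / samples.Length * (width - 1));
                int y = (int)((1.0 - samples[i]) * (height - 1));
                // Pixels is one premultiplied-ARGB int per pixel.
                bmp.Pixels[y * width + x] = unchecked((int)0xFF0000FF);
            }
            bmp.Invalidate();        // commit the pixel writes
            chartImage.Source = bmp; // display the raster chart
        }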

  • Faking a dynamic schema in Core Data?

    - by Gouldsc
    From reading the Apple docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to give the user the ability to create their own properties, would it work if I created some "dummy" attributes in the Core Data model - like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2", etc. - that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!

  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
    I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that NSTextView can't handle). So far the view won't actually scroll: I'm setting contentSize, and when I drag I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternate approach is to use a child view, and I've done this; the view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach - tiling the way UITableView does - would be a huge pain: I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching but haven't found anyone who is using UIScrollView to do the drawing. Has anyone done this, and do you know of any pitfalls?

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
    I have some javascript code that includes an ANTLR-generated lexer and parser, plus some associated syntax-tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets that process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser, and I would prefer not to re-implement this stuff in C# and end up with multiple bits of code that do the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my javascript code from C#? And how should I structure my script so that it plays nicely with my .NET web app?
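
    One direction, sketched rather than prescribed: host a JavaScript engine inside the .NET process and feed it the same generated parser script. The example assumes the open-source Jint interpreter (one of several embeddable engines) and a hypothetical parseSnippet function exported by the script:

        using System.IO;
        using Jint;

        static class ServerSideParser
        {
            // Runs the shared, generated parser against one snippet.
            public static object Parse(string userCode)
            {
                var engine = new Engine();
                engine.Execute(File.ReadAllText("parser.js"));     // lexer + parser
                return engine.Invoke("parseSnippet", userCode);    // hypothetical entry point
            }
        }

    Keeping the script free of browser globals (no document/window references) is what makes it runnable in both places.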

  • Flexible argument list in LibreOffice Calc Macro

    - by Patru
    I want to write a function that geometrically links performance data, which is usually provided as percentages, so the function will basically return (1+a)*(1+b)*(1+c)* … *(1+x) - 1. This should be done in LibreOffice Calc, and it should behave similarly to the regular SUM function: just as you may throw any number of arguments at SUM, I would like to be able to do the same with my alternative geoSum function. However, I am unable to find suitable documentation on handling a variable number of arguments with variable types (i.e. an arbitrary mix of numbers, cells and ranges). How would I have to declare the arguments to my LibreOffice Basic function, and how would I have to interpret them?

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how to set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from an XML file. Once running, the application will receive a datastream, which I hope to use to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: split the data from the datastream and simultaneously populate the file and the grid, or populate the XML file first and set up a timer to have the grid re-read the file? I'm really looking for optimal performance.
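
    For the first option, a minimal sketch (assuming WinForms data binding; the class, file name and OnStreamRecord hook are illustrative, not a prescribed design):

        using System.Data;
        using System.Windows.Forms;

        // One DataTable backs both the grid and the XML file, so a single
        // write per record keeps them in sync.
        public class StreamBackedGrid
        {
            readonly DataSet data = new DataSet();

            public StreamBackedGrid(DataGridView grid)
            {
                data.ReadXml("state.xml");          // initial load at startup
                grid.DataSource = data.Tables[0];   // binding keeps the grid current
            }

            // Called once per record arriving on the datastream.
            public void OnStreamRecord(object[] fields)
            {
                data.Tables[0].Rows.Add(fields);    // grid refreshes via binding
                data.WriteXml("state.xml");         // persist; consider batching writes
            }
        }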

  • Subsonic SQLite Multiple Files

    - by Marcus Vinicius de LIma
    Hi, I have an application that must be accessed by many users. To optimize performance, I intend to store each user's profile information in an independent database file. Every time a user logs into the application, I need to set up a new provider linked to his own database. All the databases have the same structure, so while querying, the common generated DAL classes must switch to the database file belonging to that user. Is there a way to configure SubSonic to do that switch at runtime? Thanks.

  • Situations to prefer Apache Lucene over Solr?

    - by Karussell
    There are several advantages to using Solr (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even if I embed search functionality in my Java application, I could use SolrJ to avoid the HTTP overhead of talking to Solr. So, when would you recommend using "pure" Lucene? Does it have better performance or require less RAM? Is it better unit-testable? PS: I am aware of this question.

  • SQL - query inside NOT IN takes longer than the complete query ??

    - by Aleksandar Tomic
    Hi everyone, I'm using NOT IN inside my SQL query. For example:

        select columnA from table1
        where columnA not in (select columnB from table2)

    How is it possible that this part of the query

        select columnB from table2

    takes 30 sec to complete, but the whole query above takes 0.1 sec to complete? Shouldn't the complete query take 30 sec or more? BTW, both queries return valid results. Thanks!

    Answers to comments:

    "Is it because the second query hasn't actually completed, but has only returned the first 'x' rows (out of a very large table)?" - No, the query completes after 30 seconds, and not many rows are returned (e.g. 50); the question is why the query containing the supposed performance killer is so fast. My point exactly.

    "Also, how long does select distinct columnB from table2 take to execute?" - Actually, the original query is "select distinct...

  • How to modularize a b2b webservice transformation application

    - by hstoerr
    How would you modularize a large application that has some incoming (SOAP) webservices, some outgoing webservices, transformations between them and internal formats, internal logging services, access to external archiving webservices, delayed/asynchronous processing, and so forth? One way is to split the functionality into a collection of WARs, deploy all of them on one application server, and have them communicate through internal webservices. This has some overhead, especially if the messages are large, and you might run into performance problems due to thread-count restrictions and so forth. Another way would be to put everything into one giant WAR so that the parts can communicate directly - which is not exactly modularization. What would you do?

  • select only new row in oracle

    - by Hlex
    Hi, I have a table with a varchar2 primary key and about 1,000,000 transactions per day. My app wakes up every 5 minutes to generate a text file containing only the new records; it remembers the last point it processed and picks up only the rows added since then. 1) Do you have an idea how to query this with good performance? I am able to add a new column if needed. 2) What should this process be implemented in? PL/SQL? Java?

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python, and started investigating writing it as a multithreaded script: one pool of threads downloading and one pool processing the results. Due to the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, and so on? Basically I'm asking: is a multi-threaded crawler in Python really going to buy me much performance vs single-threaded? Thanks!

  • How can I combine sequential expression trees into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.
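
    One way to approach it, as a sketch (assuming .NET 4 for Expression.Block): splice the lambda bodies into a single block, rewriting each lambda's own parameters to one shared (t, sb) pair, and compile once:

        using System;
        using System.Linq.Expressions;
        using System.Text;

        static class ExpressionCombiner
        {
            public static Action<T, StringBuilder> Combine<T>(
                params Expression<Action<T, StringBuilder>>[] exprs)
            {
                var t = Expression.Parameter(typeof(T), "t");
                var sb = Expression.Parameter(typeof(StringBuilder), "sb");
                var bodies = new Expression[exprs.Length];
                for (int i = 0; i < exprs.Length; i++)
                {
                    // Rebind each lambda's parameters to the shared pair.
                    var body = new Rebind(exprs[i].Parameters[0], t).Visit(exprs[i].Body);
                    bodies[i] = new Rebind(exprs[i].Parameters[1], sb).Visit(body);
                }
                return Expression.Lambda<Action<T, StringBuilder>>(
                    Expression.Block(bodies), t, sb).Compile();
            }

            // Substitutes one parameter node for another throughout a tree.
            class Rebind : ExpressionVisitor
            {
                readonly ParameterExpression from, to;
                public Rebind(ParameterExpression from, ParameterExpression to)
                {
                    this.from = from;
                    this.to = to;
                }
                protected override Expression VisitParameter(ParameterExpression node)
                {
                    return node == from ? to : node;
                }
            }
        }

    Calling ExpressionCombiner.Combine(expr1, expr2, expr3) yields a compiled delegate whose body is the three Append calls in sequence, so per-call cost should match the hand-written method.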

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code (I implemented L-BFGS for multivariate optimization). I traced the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing: close to native code. How can they do that? They say:

        Q. Is NMath "pure" .NET?
        A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap.

    Can someone explain more to me? Thanks!
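
    The pattern they describe, reduced to its skeleton: keep the public API in managed code and delegate the numeric kernels to a native library. A sketch with a made-up native routine (mathkernel.dll and native_dot are illustrative names, not a real library):

        using System.Runtime.InteropServices;

        static class FastMath
        {
            // Hypothetical native kernel; in NMath's case the real work is
            // done by the Intel MKL.
            [DllImport("mathkernel.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern double native_dot(double[] a, double[] b, int n);

            // Managed wrapper; the marshaller pins the arrays for the call,
            // so no copying is needed for blittable double[].
            public static double Dot(double[] a, double[] b)
            {
                return native_dot(a, b, a.Length);
            }
        }

    The managed/native transition has a small fixed cost per call, so it pays off when each call does substantial work (matrix multiplies, decompositions), which is exactly where libraries like MKL are used.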

  • Help replace this SQL cursor with better code

    - by user318573
    Can anyone give me a hand improving the performance of this cursor logic from SQL 2000? It runs great on SQL 2005 and SQL 2008, but takes at least 20 minutes on SQL 2000. BTW, I would never choose to use a cursor, and I didn't write this code; I'm just trying to get it to run faster. Upgrading this client to 2005/2008 is not an option in the immediate future.

        -------------------------------------------------------------------------------
        ------- Rollup totals in the chart of accounts hierarchy
        -------------------------------------------------------------------------------
        DECLARE @B_SubTotalAccountID int, @B_Debits money, @B_Credits money,
                @B_YTDDebits money, @B_YTDCredits money

        DECLARE Bal CURSOR FAST_FORWARD FOR
            SELECT SubTotalAccountID, Debits, Credits, YTDDebits, YTDCredits
            FROM xxx
            WHERE AccountType = 0
              AND SubTotalAccountID IS NOT NULL
              AND (abs(credits) + abs(debits) + abs(ytdcredits) + abs(ytddebits) <> 0)

        OPEN Bal
        FETCH NEXT FROM Bal
            INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits

        -- For each active account
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Loop until the end of the subtotal chain is reached
            WHILE @B_SubTotalAccountID IS NOT NULL
            BEGIN
                UPDATE xxx2
                SET Debits     = Debits     + @B_Debits,
                    Credits    = Credits    + @B_Credits,
                    YTDDebits  = YTDDebits  + @B_YTDDebits,
                    YTDCredits = YTDCredits + @B_YTDCredits
                WHERE GLAccountID = @B_SubTotalAccountID

                SET @B_SubTotalAccountID = (SELECT SubTotalAccountID
                                            FROM xxx2
                                            WHERE GLAccountID = @B_SubTotalAccountID)
            END

            FETCH NEXT FROM Bal
                INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits
        END

        CLOSE Bal
        DEALLOCATE Bal

  • How to Obtain Data to Pre-Populate Forms.

    - by Stan
    The objective is to have a form reflect a user's defined constraints on a search. At first I relied entirely on server-side scripting to achieve this; recently I tried to shift the functionality to JavaScript. On the server side, the search parameters are stored in a ColdFusion struct, which makes it particularly convenient to have the data JSON'ed and sent to the client. Then it's just a matter of separately iterating over 'checkable' and text fields to reflect the user's search parameters; jQuery proved exceptionally effective at simplifying the workload. One observable difference lies in performance: the second method appeared to be somewhat slower, and it didn't work in IE8, where the returned JSON'ed struct was evidently seen as an empty object. I'm sure it can be fixed, but before spending any more time on it, I'm curious how others would approach the task. I'd greatly appreciate any suggestions. --Stan

  • How reliable is HTTP compression using gzip?

    - by Liam
    YSlow has suggested that I use HTTP compression to improve the performance of my site. However, as noted by Yahoo, there are some problems:

        There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

    I understand that the most common problem occurs with IE6 behind a proxy. But how common are these problems today? To quantify it: roughly what percentage of web users experience bugs with HTTP compression?

  • Does a Collection<T> wrap an IList<T> or enumerate over the IList<T>?

    - by Brian Triplett
    If I am exposing an internal member via a Collection property:

        public Collection<T> Entries
        {
            get { return new Collection<T>(this.fieldImplementingIList); }
        }

    what happens when this property is called? For example, what happens when the following lines of code run:

        T test = instanceOfAbove.Entries[i];
        instanceOfAbove.Entries[i] = valueOfTypeT;

    It's clear that each time the property is called a new reference type is created, but what actually happens? Does it simply wrap the IList<T> underneath, or does it enumerate over the IList<T> to create a new Collection<T> instance? I'm concerned about performance if this property is used in a for loop.
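
    For what it's worth, the Collection<T>(IList<T>) constructor wraps the supplied list; it does not copy or enumerate it, so each property call costs one small wrapper allocation and reads/writes pass through to the underlying list. A quick sketch demonstrating the write-through behavior:

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        class Demo
        {
            static void Main()
            {
                var inner = new List<int> { 1, 2, 3 };
                var wrapper = new Collection<int>(inner); // wraps, does not copy

                wrapper.Add(4);                  // writes through to inner
                Console.WriteLine(inner.Count);  // 4: same underlying list
                Console.WriteLine(wrapper[2]);   // 3: indexer delegates to inner
            }
        }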

  • Compiler optimization of repeated accessor calls in C#

    - by apocalypse9
    I've found recently that, for some types of financial calculations, the following pattern is much easier to follow and test, especially in situations where we may need to get numbers from various stages of the computation:

        public class nonsensical_calculator
        {
            ...
            double _rate;
            int _term;
            int _days;

            double monthlyRate { get { return _rate / 12; } }
            public double days { get { return 1 - _days; } }
            double ar { get { return (1 + days) / (monthlyRate * days); } }
            double bleh { get { return Math.Pow(ar - days, _term); } }
            public double raar { get { return bleh * ar / 2 * ar / days; } }
            ....
        }

    Obviously this often results in multiple calls to the same accessor within a given formula. I'm curious whether the compiler is smart enough to optimize away these repeated calls when there is no intervening change in state, or whether this style is causing a decent performance hit. Further reading suggestions are always appreciated.
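
    A hedged note rather than a definitive answer: the JIT will generally inline trivial getters, but it does not promise to collapse repeated calls into one, since that requires proving no intervening state change. Hoisting each stage into a local makes the reuse explicit while keeping the staged values inspectable. A sketch of raar rewritten that way (same math, names as above):

        public double raar
        {
            get
            {
                // Evaluate each stage once and reuse the locals rather than
                // re-invoking the getter chain on every mention.
                double d = days;
                double ar = (1 + d) / (monthlyRate * d);
                double bleh = Math.Pow(ar - d, _term);
                return bleh * ar / 2 * ar / d;
            }
        }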
