Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database. ab -n 10000 -c 100: 285.69 [#/sec] (mean) I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.rc2 together with the JDBC postgres adapter and GlassFish. As the performance is really poor, I also tried running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Either way, I only get ab -n 10000 -c 100: 43.88 [#/sec] (mean) Threadsafe mode (config.threadsafe!) is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top). ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java] I wonder what I'm doing wrong, and how to get better performance with JRuby than with REE? Thanks, Corin

    Read the article

  • Cost of using repeated parameters

    - by Palimondo
    I'm considering refactoring a few method signatures that currently take a parameter of type List or Set of a concrete class (e.g. List[Foo]) to use repeated parameters instead: Foo*. This would allow me to use the same method name and overload it based on the parameter type. That was not possible with List or Set, because List[Foo] and List[Bar] have the same type after erasure: List[Object]. In my case the refactored methods work fine with the scala.Seq[Foo] that results from the repeated parameter. I would have to change all the invocations and add a sequence argument type annotation to all collection parameters: baz.doStuffWith(foos: _*). Given that switching from a collection parameter to a repeated parameter is semantically equivalent, does this change have some performance impact that I should be aware of? Is the answer the same for Scala 2.7.x and 2.8?
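
    A minimal sketch of the motivating overload (Foo, Bar and doStuffWith are the question's placeholder names; erasure forbids the List-based pair, while the varargs pair compiles fine):

        case class Foo(n: Int)
        case class Bar(s: String)

        object Baz {
          // these two would clash after erasure if they took List[Foo] / List[Bar]
          def doStuffWith(foos: Foo*): Int = foos.map(_.n).sum
          def doStuffWith(bars: Bar*): String = bars.map(_.s).mkString(", ")
        }

        val foos = List(Foo(1), Foo(2))
        Baz.doStuffWith(foos: _*) // the sequence type ascription at each call site

    As far as I know, the ascription passes the existing sequence through without copying, so the per-call overhead should be minimal.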

    Read the article

  • mySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this: Songs Table: id | song_title | album_id | publisher_id | artist_id Albums Table: id | album_title | etc... Publishers Table: id | publisher_name | etc... Artists Table: id | artist_name | etc... Then, thinking about normalization, I wondered whether I should get rid of album_id, publisher_id and artist_id in the songs table and put them in intermediate tables like this: Table song_album: song_id, album_id Table song_publisher: song_id, publisher_id Table song_artist: song_id, artist_id Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks
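
    For reference, a sketch of the distinction (column types are illustrative): plain foreign keys on the songs table are already normalized when each song has exactly one album, publisher and artist; junction tables only earn their keep for many-to-many relationships.

        -- one-to-many: a foreign key per relationship is fine
        CREATE TABLE songs (
            id INT PRIMARY KEY,
            song_title VARCHAR(255),
            album_id INT,
            publisher_id INT,
            artist_id INT,
            FOREIGN KEY (album_id) REFERENCES albums(id),
            FOREIGN KEY (publisher_id) REFERENCES publishers(id),
            FOREIGN KEY (artist_id) REFERENCES artists(id)
        );

        -- many-to-many (e.g. a song credited to several artists) is where a
        -- junction table becomes necessary
        CREATE TABLE song_artist (
            song_id INT,
            artist_id INT,
            PRIMARY KEY (song_id, artist_id),
            FOREIGN KEY (song_id) REFERENCES songs(id),
            FOREIGN KEY (artist_id) REFERENCES artists(id)
        );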

    Read the article

  • FreeBSD or NetBSD based commercial TCP/IP stack vendor?

    - by Vineet
    Hi - I'm seeking recommendations for a commercial TCP/IP stack implementation based on FreeBSD or NetBSD. Requirements are similar to a typical desktop PC running a browser, email and streaming voice/video; which is to say, rich network functionality for an end-host type of device, with a mature implementation and reasonable performance. BSD-derived network stacks have been deployed in a wide variety of situations for years and hence have mature implementations. It's supposed to run on a proprietary RTOS. Most vendors I found don't advertise whether their stack is based on BSD. Any recommendations? -- Vineet

    Read the article

  • How to populate List<string> with DataRow values from a single column...

    - by James
    Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my datatables. For the more commonly used tables throughout the life of the program, I dump them to DataTables and query those instead. What I'm hoping to do is query a DataTable for, say, column x = "this", and convert the values of column y directly to a List<string> to return to the caller: private List<string> LookupColumnY(string hex) { List<string> stringlist = new List<string>(); DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC"); foreach (DataRow row in rows) { stringlist.Add(row["Columny"].ToString()); } return stringlist; } Anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering, if I do enough of these, whether iterating via a foreach loop will become a performance hit. TIA!
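
    If LINQ is available (.NET 3.5+), the loop collapses into a projection over the same Select call; a sketch using the question's own hypothetical column names:

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        private List<string> LookupColumnY(string hex)
        {
            // same filter and sort as before, then project column y to strings
            return tblDataTable
                .Select("Columnx = '" + hex + "'", "Columny ASC")
                .Select(row => row["Columny"].ToString())
                .ToList();
        }

    Either way the work is O(rows); the foreach itself is cheap next to the DataTable.Select filtering.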

    Read the article

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project is using a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is taken processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done. But the project is not yet at a large stage, and the functions involved are, I think, fairly average. At this stage I can see it is not going to scale well. I noticed things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementation using JS frameworks? Thanks
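
    Two of the usual first-aid measures, sketched with illustrative names: cache jQuery objects instead of re-querying the DOM, and replace .each() with a plain loop in hot paths, since .each() adds a function call per element.

        // query the DOM once and reuse the wrapped set
        var $rows = $('#grid .row');

        // hot path: plain loop over the raw DOM nodes instead of $rows.each(...)
        for (var i = 0, len = $rows.length; i < len; i++) {
            var row = $rows[i];            // raw element, no per-item wrapper
            row.className += ' processed';
        }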

    Read the article

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in five separate series, ~50k points all up). I have found the existing options produced by Dundas, Visifire, Microsoft etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls are caused by the heavy use of vector graphics. Ergo, one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel re-invention rabbit hole - has anyone found a third-party or OSS control that will manage large numbers of data points on a sparkline?
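
    For what it's worth, a minimal sketch of the raster approach (Silverlight 3+, using System.Windows.Media.Imaging; the series and scaling are illustrative): plot each normalized sample into a WriteableBitmap once, instead of materializing thousands of vector shapes.

        // 'points' holds values normalized to 0..1; assumes at least two points
        static WriteableBitmap RenderSparkline(double[] points, int width, int height)
        {
            var bmp = new WriteableBitmap(width, height);
            for (int i = 0; i < points.Length; i++)
            {
                int x = i * (width - 1) / (points.Length - 1);
                int y = (height - 1) - (int)(points[i] * (height - 1));
                bmp.Pixels[y * width + x] = unchecked((int)0xFF336699); // opaque ARGB
            }
            bmp.Invalidate();   // push the pixel buffer to the screen
            return bmp;
        }

        // usage: chartImage.Source = RenderSparkline(series, 300, 60);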

    Read the article

  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
    I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that UITextView can't). So far the view won't actually scroll. I'm setting contentSize, and when I drag, I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternate approach is to use a child view, and I've done this. The view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach, being like UITableView, would be a huge pain -- I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching, but haven't found anyone who is using UIScrollView to do the drawing. Has anyone done this and know of any pitfalls?
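
    The usual pitfall is that UIScrollView scrolls by shifting its bounds origin rather than moving subviews, so the cached output of drawRect: never updates. A sketch of one workaround (redraw on every layout pass; a CATiledLayer-backed child view is the gentler option for 5000-pixel content):

        @interface TextScrollView : UIScrollView
        @end

        @implementation TextScrollView

        - (void)layoutSubviews {
            [super layoutSubviews];   // invoked continuously while scrolling
            [self setNeedsDisplay];   // invalidate so drawRect: runs for the new offset
        }

        - (void)drawRect:(CGRect)rect {
            // self.bounds.origin equals contentOffset here; draw only the visible slice
        }

        @end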

    Read the article

  • Faking a dynamic schema in Core Data?

    - by Gouldsc
    From reading the Apple docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to provide the user the ability to create their own properties in a Core Data model, would it work if I created some "dummy" attributes like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2" etc. that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!
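
    The other common workaround, sketched with hypothetical names, is a property-bag entity instead of dummy columns: model CustomField { name, stringValue, decimalValue } with a to-many relationship from the parent entity, so users can add as many named values as they like.

        // assumes a 'CustomField' entity and an 'item' inverse relationship in the model
        NSManagedObject *field =
            [NSEntityDescription insertNewObjectForEntityForName:@"CustomField"
                                          inManagedObjectContext:context];
        [field setValue:@"rating" forKey:@"name"];
        [field setValue:[NSDecimalNumber decimalNumberWithString:@"4.5"]
                 forKey:@"decimalValue"];
        [field setValue:item forKey:@"item"]; // attach to the owning object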

    Read the article

  • NSThread vs. NSOperationQueue vs. ??? on the iPhone

    - by kubi
    Currently I'm using NSThread to cache images in another thread: [NSThread detachNewThreadSelector:@selector(cacheImage:) toTarget:self withObject:image]; Alternatively: [self performSelectorInBackground:@selector(cacheImage:) withObject:image]; Alternatively, I can use an NSOperationQueue: NSInvocationOperation *invOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(cacheImage:) object:image]; NSOperationQueue *opQueue = [[NSOperationQueue alloc] init]; [opQueue addOperation:invOperation]; Is there any reason to switch away from NSThread? GCD is a 4th option when it's released for the iPhone, but unless there's a significant performance gain, I'd rather stick with methods that work on most platforms.
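
    For comparison, the GCD version (it arrived with iOS 4) is a one-liner using the question's own selector and object:

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [self cacheImage:image];   // runs on a shared background queue
        });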

    Read the article

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
    I have some JavaScript code that includes an ANTLR-generated lexer and parser, and some associated syntax-tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets which process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser. I would prefer not to have to re-implement this stuff in C# and have multiple bits of code that do the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my JavaScript code from C#? And how can I format my script so that it plays nicely with my .NET web app?
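
    One route is embedding a JavaScript engine in the .NET process; a sketch using Jint (the engine choice, file name, and entry-point function are assumptions, not from the question):

        using System.IO;
        using Jint;

        var engine = new Engine();
        engine.Execute(File.ReadAllText("parser.js"));         // the same ANTLR-generated code
        var tree = engine.Invoke("parseSnippet", snippetText); // shared entry point

    An in-process engine like this keeps deployment down to plain assemblies, with no extra runtime to host alongside the web app.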

    Read the article

  • Subsonic SQLite Multiple Files

    - by Marcus Vinicius de LIma
    Hi, I have an application that must be accessed by many users. To optimize performance I intend to store each user profile's information in an independent database file. Every time a user logs in to the application, I need to set up a new provider linked with his own database. All databases have the same structure. So while querying, the common generated DAL classes must switch to the database file belonging to that user. Is there a way to configure SubSonic to do that switch at runtime? Thanks.

    Read the article

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated via an XML file. Once running, the application will receive a datastream that I hope to use to update both the file and the datagrid. So I'm curious what you would suggest for the workflow (i.e., split the data from the datastream and simultaneously populate the file and grid, or would you suggest populating the XML file first and setting up a timer to have the grid read the file?). I'm really looking for optimal performance.
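
    A sketch of the first option with illustrative names (WinForms assumed, using System.Data and System.Windows.Forms): bind the grid to an in-memory DataTable, merge stream updates into it as they arrive, and flush to XML on a timer so file I/O is decoupled from both the stream and the UI.

        DataTable table = new DataTable("Items");
        table.ReadXml("items.xml");       // initial population at launch
        grid.DataSource = table;          // grid tracks in-memory changes

        // per datastream message: merge the update into 'table'; the grid follows

        var saveTimer = new System.Windows.Forms.Timer { Interval = 5000 };
        saveTimer.Tick += (s, e) => table.WriteXml("items.xml", XmlWriteMode.WriteSchema);
        saveTimer.Start();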

    Read the article

  • Flexible argument list in LibreOffice Calc Macro

    - by Patru
    I want to write a function that geometrically links performance data, which is usually provided as percentages, so the function will basically return (1+a)*(1+b)*(1+c)* … *(1+x)-1 This should be done using LibreOffice Calc, and it should behave similarly to the regular SUM function. As you may throw any number of arguments at SUM, I would like to be able to do the same with my alternative GEOSUM function, but I am unable to find suitable documentation on handling a variable number of arguments with variable types (i.e. an arbitrary mix of numbers, cells and ranges). How would I have to specify the arguments to my LibreOffice Basic function, and how would I have to interpret them?
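
    Plain LibreOffice Basic lacks ParamArray (it is only available in VBA-compatibility mode), so one workaround is a fixed list of Optional parameters checked with IsMissing(); single cells arrive as scalars and ranges as 2D arrays, which IsArray() distinguishes. A sketch (extend the parameter list as needed):

        Function GEOSUM(a, Optional b, Optional c, Optional d, Optional e) As Double
            Dim total As Double
            total = LinkPart(a)
            If Not IsMissing(b) Then total = total * LinkPart(b)
            If Not IsMissing(c) Then total = total * LinkPart(c)
            If Not IsMissing(d) Then total = total * LinkPart(d)
            If Not IsMissing(e) Then total = total * LinkPart(e)
            GEOSUM = total - 1.0
        End Function

        ' multiplies (1 + x) over a scalar or over every cell of a range
        Function LinkPart(v) As Double
            Dim acc As Double, i As Integer, j As Integer
            acc = 1.0
            If IsArray(v) Then
                For i = LBound(v, 1) To UBound(v, 1)
                    For j = LBound(v, 2) To UBound(v, 2)
                        acc = acc * (1.0 + v(i, j))
                    Next j
                Next i
            Else
                acc = 1.0 + v
            End If
            LinkPart = acc
        End Function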

    Read the article

  • Situations to prefer Apache Lucene over Solr?

    - by Karussell
    There are several advantages to using Solr (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even if I embed search functionality in my Java application, I could use SolrJ to avoid the HTTP trade-off of using Solr. So, when would you recommend using "pure" Lucene? Does it have better performance or require less RAM? Is it easier to unit-test? PS: I am aware of this question.
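
    The classic case for plain Lucene is fully in-process indexing, e.g. an in-memory index inside a unit test; a sketch against the Lucene 3.x-era API (version constants are illustrative):

        Directory dir = new RAMDirectory();
        IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_36,
                new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, cfg);

        Document doc = new Document();
        doc.add(new Field("title", "hello lucene", Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();   // no servlet container, no HTTP round-trip

    With no extra process or HTTP layer involved, embedded Lucene also tends to need less RAM than a full Solr deployment for the same index.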

    Read the article

  • SQL - query inside NOT IN takes longer than the complete query ??

    - by Aleksandar Tomic
    Hi everyone, I'm using NOT IN inside my SQL query. For example: select columnA from table1 where columnA not in (select columnB from table2) How is it possible that this part of the query, select columnB from table2, takes 30 sec to complete on its own, but the whole query above takes 0.1 sec to complete? Shouldn't the complete query take 30 sec or more? BTW, both queries return valid results. Thanks! Answers to comments: "Is it because the second query hasn't actually completed but has only returned the first 'x' rows (out of a very large table)?" No, the query is complete after 30 seconds; not too many rows are returned (e.g. 50). "But @Aleksandar wondered why the query containing the performance killer was so fast." My point exactly. "Also, how long does select distinct columnB from table2 take to execute?" Actually, the original query is "select distinct...
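
    The usual explanation: the optimizer does not have to materialize the subquery at all; it can evaluate the whole statement as an anti-join, probing table2 per outer row and stopping at the first match. The effect is easier to see in the rewrite below (equivalent when columnB has no NULLs):

        SELECT columnA
        FROM table1 t1
        WHERE NOT EXISTS (SELECT 1
                          FROM table2 t2
                          WHERE t2.columnB = t1.columnA);

    Running "select columnB from table2" standalone forces every row out, which is a different amount of work entirely.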

    Read the article

  • select only new row in oracle

    - by Hlex
    Hi, I have a table with a varchar2 primary key. Transactions are about 1,000,000 per day. My app wakes up every 5 minutes to generate a text file by querying only new records. It remembers the last point it reached and processes only new records. 1) Do you have an idea how to query this with good performance? I am able to add a new column if needed. 2) What do you think this process should be written in? PL/SQL? Java?
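
    A common pattern, sketched with placeholder names: add a sequence-backed numeric column (or a timestamp), index it, and have each run fetch only rows past the high-water mark it saved last time.

        ALTER TABLE t ADD (seq_id NUMBER);
        CREATE SEQUENCE t_seq;
        CREATE INDEX t_seq_idx ON t (seq_id);

        -- populate seq_id on insert (trigger or application code), then each run:
        SELECT *
          FROM t
         WHERE seq_id > :last_seen_seq_id
         ORDER BY seq_id;

    PL/SQL scheduled with DBMS_JOB/DBMS_SCHEDULER is a natural fit for a job that wakes every 5 minutes, though an external Java process works equally well.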

    Read the article

  • How to modularize a b2b webservice transformation application

    - by hstoerr
    How would you modularize a large application that has some incoming (SOAP) webservices, some outgoing webservices, transformations between them and internal formats, internal logging services, access to external archiving webservices, and work that is delayed and handled asynchronously, and so forth? One way is to split the functionality into a collection of WARs, deploy all of them on one application server, and have them communicate via internal webservices. This has some overhead, especially if the messages are large, and you might run into performance problems due to thread count restrictions and so forth. Another way would be to put everything into a giant WAR, so that components can communicate directly. Not exactly modularization. What would you do?

    Read the article

  • How can I combine sequential expression trees into a fast method?

    - by chillitom
    Suppose I have the following expressions: Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name); Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", "); Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description); I'd like to be able to compile these into a method/delegate equivalent to the following: void Method(T t, StringBuilder sb) { sb.Append(t.Name); sb.Append(", "); sb.Append(t.Description); } What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.
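
    A sketch using Expression.Block (.NET 4): rebind each lambda onto one shared parameter pair, then compile a single delegate.

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Text;

        static Action<T, StringBuilder> Combine<T>(
            params Expression<Action<T, StringBuilder>>[] exprs)
        {
            var t  = Expression.Parameter(typeof(T), "t");
            var sb = Expression.Parameter(typeof(StringBuilder), "sb");

            // invoke each expression with the shared parameters, in sequence
            var body = Expression.Block(exprs.Select(e => Expression.Invoke(e, t, sb)));
            return Expression.Lambda<Action<T, StringBuilder>>(body, t, sb).Compile();
        }

    Expression.Invoke leaves one delegate call per fragment; inlining the bodies with a parameter-replacing ExpressionVisitor gets closest to the hand-written method.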

    Read the article

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python. I was starting to investigate writing it as a multithreaded script, with one pool of threads downloading and one pool processing results. Due to the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc.? Basically I'm asking: is doing a multi-threaded crawler in Python really going to buy me much performance vs single-threaded? Thanks!
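
    For the download half, yes: CPython releases the GIL around blocking socket I/O, so download threads overlap even though bytecode execution is serialized. A modern Python 3 sketch:

        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        def fetch(url):
            with urlopen(url) as resp:      # GIL released while waiting on the network
                return url, len(resp.read())

        urls = ["http://example.com/"] * 10
        with ThreadPoolExecutor(max_workers=5) as pool:
            for url, size in pool.map(fetch, urls):
                print(url, size)

    The CPU-bound processing pool is where the GIL bites; that half is better served by multiple processes (multiprocessing) than by threads.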

    Read the article

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04, and the HAProxy version is 1.4.18. When I start HAProxy I get the following error: Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms. I am not really sure what the issue could be. I have verified the following: EC2 security group ports are open; I have pored over my config files looking for issues and currently do not see any; xinetd is installed; and I am using the correct IP address of the MySQL server. Any help with this is greatly appreciated. Here are my current configs.

    Load balancer /etc/haproxy/haproxy.cfg:

        global
                log 127.0.0.1 local0
                log 127.0.0.1 local1 notice
                maxconn 4096
                user haproxy
                group haproxy
                debug
                #quiet
                daemon

        defaults
                log global
                mode http
                option tcplog
                option dontlognull
                retries 3
                option redispatch
                maxconn 2000
                contimeout 5000
                clitimeout 50000
                srvtimeout 50000

        frontend pxc-front
                bind 0.0.0.0:3307
                mode tcp
                default_backend pxc-back

        frontend stats-front
                bind 0.0.0.0:22002
                mode http
                default_backend stats-back

        backend pxc-back
                mode tcp
                balance leastconn
                option httpchk
                server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

        backend stats-back
                mode http
                balance roundrobin
                stats uri /haproxy/stats

    MySQL server /etc/xinetd.d/mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
                # this is a config for xinetd, place it in /etc/xinetd.d/
                disable = no
                flags = REUSE
                socket_type = stream
                port = 9200
                wait = no
                user = nobody
                server = /usr/bin/clustercheck
                log_on_failure += USERID
                #only_from = 0.0.0.0/0
                # recommended to put the IPs that need
                # to connect exclusively (security purposes)
                per_source = UNLIMITED
        }

    MySQL server /etc/services (added line):

        mysqlchk 9200/tcp # mysqlchk

    MySQL server /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="testuser"
        MYSQL_PASSWORD=""
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
        fi

    Read the article

  • Help replace this SQL cursor with better code

    - by user318573
    Can anyone give me a hand improving the performance of this cursor logic from SQL 2000? It runs great in SQL 2005 and SQL 2008, but takes at least 20 minutes to run in SQL 2000. BTW, I would never choose to use a cursor, and I didn't write this code; I'm just trying to get it to run faster. Upgrading this client to 2005/2008 is not an option in the immediate future.

        -------------------------------------------------------------------------------
        ------- Rollup totals in the chart of accounts hierarchy
        -------------------------------------------------------------------------------
        DECLARE @B_SubTotalAccountID int,
                @B_Debits money,
                @B_Credits money,
                @B_YTDDebits money,
                @B_YTDCredits money

        DECLARE Bal CURSOR FAST_FORWARD FOR
            SELECT SubTotalAccountID, Debits, Credits, YTDDebits, YTDCredits
            FROM xxx
            WHERE AccountType = 0
              AND SubTotalAccountID Is Not Null
              AND (abs(credits) + abs(debits) + abs(ytdcredits) + abs(ytddebits) <> 0)

        OPEN Bal
        FETCH NEXT FROM Bal
            INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits

        --For Each Active Account
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --Loop Until end of subtotal chain is reached
            WHILE @B_SubTotalAccountID Is Not Null
            BEGIN
                UPDATE xxx2
                SET Debits = Debits + @B_Debits,
                    Credits = Credits + @B_Credits,
                    YTDDebits = YTDDebits + @B_YTDDebits,
                    YTDCredits = YTDCredits + @B_YTDCredits
                WHERE GLAccountID = @B_SubTotalAccountID

                SET @B_SubTotalAccountID = (SELECT SubTotalAccountID
                                            FROM xxx2
                                            WHERE GLAccountID = @B_SubTotalAccountID)
            END

            FETCH NEXT FROM Bal
                INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits
        END

        CLOSE Bal
        DEALLOCATE Bal
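
    One set-based rework to try (a sketch, assuming the subtotal chain is acyclic and reusing the question's xxx/xxx2 placeholder names): aggregate the detail amounts per immediate parent once, then climb the hierarchy one level per pass, so each pass is a single joined UPDATE instead of a per-row chain walk.

        CREATE TABLE #lvl (ParentID int, Debits money, Credits money, YTDDebits money, YTDCredits money)
        CREATE TABLE #nxt (ParentID int, Debits money, Credits money, YTDDebits money, YTDCredits money)

        -- amounts each immediate parent should receive from its detail accounts
        INSERT #lvl
        SELECT SubTotalAccountID, SUM(Debits), SUM(Credits), SUM(YTDDebits), SUM(YTDCredits)
        FROM xxx
        WHERE AccountType = 0 AND SubTotalAccountID IS NOT NULL
        GROUP BY SubTotalAccountID

        WHILE EXISTS (SELECT 1 FROM #lvl)
        BEGIN
            -- apply the pending amounts to this level of parents in one statement
            UPDATE x
            SET Debits = x.Debits + l.Debits,
                Credits = x.Credits + l.Credits,
                YTDDebits = x.YTDDebits + l.YTDDebits,
                YTDCredits = x.YTDCredits + l.YTDCredits
            FROM xxx2 x
            JOIN #lvl l ON x.GLAccountID = l.ParentID

            -- roll the same amounts one level up the subtotal chain
            DELETE #nxt
            INSERT #nxt
            SELECT x.SubTotalAccountID, SUM(l.Debits), SUM(l.Credits), SUM(l.YTDDebits), SUM(l.YTDCredits)
            FROM #lvl l
            JOIN xxx2 x ON x.GLAccountID = l.ParentID
            WHERE x.SubTotalAccountID IS NOT NULL
            GROUP BY x.SubTotalAccountID

            DELETE #lvl
            INSERT #lvl SELECT * FROM #nxt
        END

    The number of passes equals the depth of the hierarchy rather than the number of accounts, which is where the 20 minutes should go.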

    Read the article

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code. (I implemented L-BFGS for multivariate optimization.) I traced the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing, close to native code. How can they do that? They said: Q. Is NMath "pure" .NET? A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap. Can someone explain more to me? Thanks!
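
    The general recipe is to keep the public API managed and route the hot inner loops to native code. A sketch of the same idea via P/Invoke (library and entry-point names are assumptions; NMath itself uses a Managed C++ layer instead):

        using System.Runtime.InteropServices;

        static class Blas
        {
            // cblas_ddot from a native BLAS such as Intel MKL
            [DllImport("mkl_rt.dll", EntryPoint = "cblas_ddot")]
            public static extern double Dot(int n, double[] x, int incx,
                                            double[] y, int incy);
        }

        // double d = Blas.Dot(x.Length, x, 1, y, 1); // native speed, managed arrays

    The marshalling cost is amortized because one call does O(n) work, which is exactly why BLAS-level granularity suits this managed/native split.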

    Read the article

  • How to Obtain Data to Pre-Populate Forms.

    - by Stan
    The objective is to have a form reflect the user's defined constraints on a search. At first I relied entirely upon server-side scripting to achieve this; recently I tried to shift the functionality to JavaScript. On the server side, the search parameters are stored in a ColdFusion struct, which makes it particularly convenient to have the data JSON'ed and sent to the client. Then it's just a matter of separately iterating over 'checkable' and text fields to reflect the user's search parameters; jQuery proved to be exceptionally effective in simplifying the workload. One observable difference lies in performance. The second method appeared to be somewhat slower and didn't work in IE8: evidently, the returned JSON'ed struct was seen as an empty object. I'm sure it can be fixed, but before spending any more time with it, I'm curious to hear how others would approach the task. I'd gladly appreciate any suggestions. --Stan
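
    For the client-side half, a sketch of applying the JSON'ed struct with jQuery (field names and structure are illustrative):

        // params = { text: { city: "Boston" }, checked: { inStock: true } }
        function applyParams(params) {
            $.each(params.text || {}, function (name, value) {
                $('input[name="' + name + '"]').val(value);
            });
            $.each(params.checked || {}, function (name, on) {
                // prop() needs jQuery 1.6+; older versions use attr()
                $('input[name="' + name + '"]').prop('checked', !!on);
            });
        }

    For the IE8 empty-object symptom, the first things to check are the response Content-Type header and whether the JSON is parsed with a strict parser rather than relying on eval quirks.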

    Read the article

  • How reliable is HTTP compression using gzip?

    - by Liam
    YSlow has suggested that I use HTTP compression to improve the performance of my site. However, as noted by Yahoo, there are some problems: "There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically." I understand that the most common problem occurs with IE6 behind a proxy. But how common are these problems today? To quantify it, roughly what percentage of web users experience bugs with HTTP compression?
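
    For context, a typical Apache mod_deflate setup (adapted from the Apache documentation's example, not from the question) that handles exactly those legacy quirks:

        <IfModule mod_deflate.c>
            # compress common text types; the module emits "Vary: Accept-Encoding"
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
            # Netscape 4.x can only cope with compressed HTML
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            # Netscape 4.06-4.08 mishandle compression entirely
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            # IE masquerades as Mozilla/4 but handles gzip fine
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
        </IfModule>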

    Read the article
