Search Results

Search found 14924 results on 597 pages for 'selector performance'.

Page 464/597 | < Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >

  • How can I combine sequential expression trees into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.
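    One possible approach, sketched below under stated assumptions: rebind each lambda's parameters onto a shared pair, sequence the rewritten bodies with Expression.Block (available from .NET 4), and compile the result once. The helper names here are illustrative, not from the question.

        // Assumes: using System; using System.Linq; using System.Linq.Expressions; using System.Text;
        static class ExpressionCombiner
        {
            // Compiles the given lambdas into one delegate that runs their
            // bodies in sequence against a single shared (t, sb) parameter pair.
            public static Action<T, StringBuilder> Combine<T>(
                params Expression<Action<T, StringBuilder>>[] exprs)
            {
                var t = Expression.Parameter(typeof(T), "t");
                var sb = Expression.Parameter(typeof(StringBuilder), "sb");
                var bodies = exprs.Select(e =>
                    new Rebinder(e.Parameters[0], t, e.Parameters[1], sb).Visit(e.Body));
                var block = Expression.Block(bodies);
                return Expression.Lambda<Action<T, StringBuilder>>(block, t, sb).Compile();
            }

            // Replaces each lambda's own parameters with the shared ones.
            private class Rebinder : ExpressionVisitor
            {
                private readonly ParameterExpression _oldT, _newT, _oldSb, _newSb;
                public Rebinder(ParameterExpression oldT, ParameterExpression newT,
                                ParameterExpression oldSb, ParameterExpression newSb)
                { _oldT = oldT; _newT = newT; _oldSb = oldSb; _newSb = newSb; }
                protected override Expression VisitParameter(ParameterExpression node)
                {
                    if (node == _oldT) return _newT;
                    if (node == _oldSb) return _newSb;
                    return base.VisitParameter(node);
                }
            }
        }

    Once compiled, the delegate should run at roughly the speed of the hand-written Method, since all the expression machinery is paid for up front rather than per call.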

    Read the article

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project is using a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done, but the project is not yet at a large stage and the functions involved are fairly average. At this stage I can see it is not going to scale well. I noticed things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency when using JS frameworks? How do I minimise their effect on page performance? Are there best practices for implementation using JS frameworks? Thanks

    Read the article

  • Google Analytics API - Tying Behavior to Specific Dates

    - by DavidS
    I am using the API to understand the performance of AdWords ad campaigns. I need to know how to attribute metrics back to the date dimension. For instance, for a given date, if I have 20 clicks, 18 visits, and 3 goal completions, does it mean that:

    1) All of these actions happened on the day in question and are otherwise independent (meaning that the 3 goals could have been for people that clicked any time in the past 30 days, not who clicked on that day), or

    2) The on-site actions are a subset of the click activity on that day (i.e. on that day, 20 people clicked, 18 registered a real visit, and 3 completed a goal)?

    If it is scenario 2, does that mean there is a need to refresh old rows every day? Thanks!

    Read the article

  • Subclassing UIScrollView for drawing w/o views

    - by David Dunham
    I'm contemplating subclassing UIScrollView (the way UITextView does) to draw a fairly large amount of text (formatted in ways that NSTextView can't). So far the view won't actually scroll. I'm setting contentSize, and when I drag, I see the scroll indicator, but nothing changes (and I don't get a drawRect: message). An alternate approach is to use a child view, and I've done this. The view can be over 5000 pixels high, however, and I'm a bit concerned about performance on an actual device. (The other approach, being like UITableView, would be a huge pain: I'm "porting" Mac Cocoa code, and a collection of views would be a huge architecture change.) I've done some searching, but haven't found anyone who is using UIScrollView itself to do the drawing. Has anyone done this, and do you know of any pitfalls?

    Read the article

  • Situations to prefer Apache Lucene over Solr?

    - by Karussell
    There are several advantages to using Solr (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even if I embed search functionality in my Java application I could use SolrJ to avoid the HTTP trade-off when using Solr. So, when would you recommend using "pure" Lucene? Does it have better performance or require less RAM? Is it easier to unit-test? PS: I am aware of this question.

    Read the article

  • DbDataReader with DbTransactions

    - by Gustavo Paulillo
    Is it the wrong way, or a performance problem, to use DbDataReader combined with DbTransactions? An example of the code:

        public DbDataReader ExecuteReader()
        {
            try
            {
                if (this._baseConnection.State == ConnectionState.Closed)
                    this._baseConnection.Open();

                if (this._baseCommand.Transaction != null)
                    return this._baseCommand.ExecuteReader();

                return this._baseCommand.ExecuteReader(CommandBehavior.CloseConnection);
            }
            catch (Exception excp)
            {
                if (this._baseCommand.Transaction != null)
                    this._baseCommand.Transaction.Rollback();
                this._baseCommand.CommandText = string.Empty;
                this._baseConnection.Close();
                throw new Exception(excp.Message);
            }
        }

    Some methods call this operation, sometimes opening a DbTransaction. It uses DbConnection and DbCommand. The real problem is that in the production environment (around 5,000 accesses/day) the ADO operations start throwing exceptions.
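    A minimal sketch of one alternative pattern, under stated assumptions (illustrative query and method names, not the poster's API): scope the connection, transaction, command, and reader in using blocks so everything is disposed even on failure, and let exceptions propagate rather than wrapping the message in a new Exception, which discards the stack trace.

        // Assumes: using System.Collections.Generic; using System.Data.Common;
        public static List<string> ReadNames(DbProviderFactory factory, string connectionString)
        {
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = connectionString;
                conn.Open();
                using (DbTransaction tx = conn.BeginTransaction())
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = "SELECT Name FROM Customers"; // hypothetical query
                    var names = new List<string>();
                    using (DbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            names.Add(reader.GetString(0));
                    }
                    tx.Commit(); // most providers roll back on dispose if we never get here
                    return names;
                }
            }
        }

    Under load, leaked readers and connections are a common cause of connection pools running dry, which would match exceptions that appear only at production volumes.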

    Read the article

  • Variable as numeric sent to stored procedure (SQL Server 2005)

    - by TimCarrett
    I see that with SQL Server 2005 you can declare a parameter as numeric with no precision or scale, e.g.:

        create procedure dbo.TestSP @Param1 numeric as

    But what does this equate to? E.g. numeric(10,0), numeric(9,2), etc.? We have some developers here who are using this instead of the definition that matches the field the parameter is going to be used against, e.g. plain numeric instead of numeric(10,0) for the parameter @Param1. Also, are there any underlying performance issues with using numeric instead of the data type defined on the field in the table? Many thanks.

    Read the article

  • Separate Database for Integration Testing

    - by john doe
    I am performing integration testing, where I fire up the ASPX pages using WatiN, fill the fields, and insert into the database. There are a couple of problems that I am facing. 1) Should I use a completely separate database for integration testing? I already have db_test and db_dev: db_test is for unit testing and is cleared after each test; db_dev is for developers. 2) The WatiN tests are contained in a separate assembly (now separate from the unit test assembly, which should be better since WatiN tests take so much time to run). The WatiN tests fire up the WebApps project and use its web.config, which points to the dev database. Is there any way I can tell WatiN to use a separate web.config which contains a different database name?

    Read the article

  • Symfony app - how to add calculated fields to Propel objects?

    - by Thomas Kohl
    What is the best way of working with calculated fields of Propel objects? Say I have an object "Customer" that has a corresponding table "customers" and each column corresponds to an attribute of my object. What I would like to do is: add a calculated attribute "Number of completed orders" to my object when using it on View A but not on Views B and C. The calculated attribute is a COUNT() of "Order" objects linked to my "Customer" object via ID. What I can do now is to first select all Customer objects, then iteratively count Orders for all of them, but I'd think doing it in a single query would improve performance. But I cannot properly "hydrate" my Propel object since it does not contain the definition of the calculated field(s). How would you approach it?

    Read the article

  • Interspire to Magento migration

    - by patrikas
    Hello, I recently started with Magento and decided to migrate to it an Interspire shopping cart I made some time ago. At first look Magento seems a very huge beast: lots of options, and maybe a lack of simplicity resulting in some performance loss. I've got the user guide, from which I am not getting much benefit, since it just describes very ordinary tasks that I could easily discover myself by poking around the frontend/backend. So my first tasks are category and product export. Interspire seems to export ONLY products, in three available formats: Default, MYOB, and Peachtree accounting. I did some searching on Magento's product importing and found a blog post which says that I should create a few sample products with all the necessary attributes myself and then start the import. But what should I do with categories? Is it possible to import them, or to instruct Magento to automatically create categories when importing a product file if an unknown category is encountered? Thanks

    Read the article

  • Faking a dynamic schema in Core Data?

    - by Gouldsc
    From reading the Apple Docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to provide the user the ability to create their own properties, in a core data model would it work if I created some "dummy" attributes like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2" etc that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!

    Read the article

  • Help replace this SQL cursor with better code

    - by user318573
    Can anyone give me a hand improving the performance of this cursor logic from SQL 2000? It runs great in SQL 2005 and SQL 2008, but takes at least 20 minutes to run in SQL 2000. BTW, I would never choose to use a cursor, and I didn't write this code; I'm just trying to get it to run faster. Upgrading this client to 2005/2008 is not an option in the immediate future.

        -------------------------------------------------------------------------------
        ------- Rollup totals in the chart of accounts hierarchy
        -------------------------------------------------------------------------------
        DECLARE @B_SubTotalAccountID int,
                @B_Debits money, @B_Credits money,
                @B_YTDDebits money, @B_YTDCredits money

        DECLARE Bal CURSOR FAST_FORWARD FOR
            SELECT SubTotalAccountID, Debits, Credits, YTDDebits, YTDCredits
            FROM xxx
            WHERE AccountType = 0
              AND SubTotalAccountID Is Not Null
              AND (abs(credits) + abs(debits) + abs(ytdcredits) + abs(ytddebits) <> 0)

        OPEN Bal
        FETCH NEXT FROM Bal INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits

        --For each active account
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --Loop until the end of the subtotal chain is reached
            WHILE @B_SubTotalAccountID Is Not Null
            BEGIN
                UPDATE xxx2
                SET Debits     = Debits     + @B_Debits,
                    Credits    = Credits    + @B_Credits,
                    YTDDebits  = YTDDebits  + @B_YTDDebits,
                    YTDCredits = YTDCredits + @B_YTDCredits
                WHERE GLAccountID = @B_SubTotalAccountID

                SET @B_SubTotalAccountID = (SELECT SubTotalAccountID
                                            FROM xxx2
                                            WHERE GLAccountID = @B_SubTotalAccountID)
            END

            FETCH NEXT FROM Bal INTO @B_SubTotalAccountID, @B_Debits, @B_Credits, @B_YTDDebits, @B_YTDCredits
        END

        CLOSE Bal
        DEALLOCATE Bal

    Read the article

  • SQL - query inside NOT IN takes longer than the complete query ??

    - by Aleksandar Tomic
    Hi everyone, I'm using NOT IN inside my SQL query. For example:

        select columnA from table1 where columnA not in (select columnB from table2)

    How is it possible that this part of the query

        select columnB from table2

    takes 30 sec to complete, but the whole query above takes 0.1 sec to complete? Shouldn't the complete query take 30 sec or more? BTW, both queries return valid results. Thanks!

    Answers to comments:

    "Is it because the second query hasn't actually completed but has only returned the first 'x' rows (out of a very large table)?" No, the query completes after 30 seconds, and not many rows are returned (e.g. 50).

    "But @Aleksandar wondered why the query containing the performance killer was so fast." My point exactly.

    "Also, how long does select distinct columnB from table2 take to execute?" Actually, the original query is "select distinct...

    Read the article

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python. I was starting to investigate writing it as a multithreaded script, with one pool of threads downloading and one pool processing results. Due to the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc.? Basically I'm asking: is doing a multi-threaded crawler in Python really going to buy me much performance vs single threaded? Thanks!

    Read the article

  • How to Obtain Data to Pre-Populate Forms.

    - by Stan
    The objective is to have a form reflect user's defined constraints on a search. At first, I relied entirely upon server-side scripting to achieve this; recently I tried to shift the functionality to JavaScript. On the server side, the search parameters are stored in a ColdFusion struct which makes it particularly convenient to have the data JSON'ed and sent to the client. Then it's just a matter of separately iterating over 'checkable' and text fields to reflect the user's search parameters; jQuery proved to be exceptionally effective in simplifying the workload. One observable difference lies in performance. The second method appeared to be somewhat slower and didn't work in IE8. Evidently, the returned JSON'ed struct was seen as an empty object. I'm sure it can be fixed, though before spending any more time with it, I'm curious to hear how others would approach the task. I'd gladly appreciate any suggestions. --Stan

    Read the article

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated via the XML file. Once running, the application will receive a data stream that I hope to use to update the file and the datagrid. So I'm curious what you would suggest for the workflow (i.e., split the data from the data stream and simultaneously populate the file and grid, or would you suggest populating the XML file first and setting up a timer to have the grid read the file?). I'm really looking for optimal performance.
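    A hedged sketch of one such arrangement (WinForms and illustrative names assumed, not the poster's code): bind the grid to a DataTable loaded from the XML file, apply each stream record to the table so the grid refreshes through data binding, and flush the table back to disk on a timer rather than on every message.

        // Assumes: using System.Data; using System.Windows.Forms;
        DataTable table = new DataTable("Items");
        table.ReadXml("data.xml");                 // file previously written with an inline schema
        table.PrimaryKey = new[] { table.Columns["Id"] };
        dataGridView1.DataSource = table;          // grid now tracks the table

        var saveTimer = new Timer { Interval = 5000 };
        saveTimer.Tick += (s, e) => table.WriteXml("data.xml", XmlWriteMode.WriteSchema);
        saveTimer.Start();

        // Called once per record arriving on the data stream (hypothetical shape).
        // If the stream arrives on a background thread, marshal to the UI thread first.
        void OnStreamRecord(string id, decimal value)
        {
            DataRow row = table.Rows.Find(id);
            if (row == null)
            {
                row = table.NewRow();
                row["Id"] = id;
                table.Rows.Add(row);
            }
            row["Value"] = value;
        }

    Batching the disk writes keeps file I/O off the per-message path, which tends to matter more for perceived performance than the grid updates themselves.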

    Read the article

  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as the database.

        ab -n 10000 -c 100: 285.69 [#/sec] (mean)

    I read that JRuby should be the fastest solution, so I installed jruby-1.5.0.rc2 together with the JDBC postgres adapter and GlassFish. As the performance was really poor, I also tried running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Either way, I only get

        ab -n 10000 -c 100: 43.88 [#/sec] (mean)

    threadsafe! is activated in my Rails config. Java seems to use all cores; CPU usage is around 350% (top).

        ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]

    I wonder what I'm doing wrong and how to get better performance with JRuby than with REE? Thanks, Corin

    Read the article

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
    I have some javascript code that includes an ANTLR-generated lexer and parser, and some associated syntax tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets which process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser. I would prefer not to have to re-implement this stuff in C# and have multiple bits of code that did the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my javascript code from C#? And how can I format my script so that it plays nicely with my .NET web app?
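    One commonly suggested approach (a hedged sketch, not necessarily the best fit here): host a JavaScript engine inside the .NET process and run the same generated parser script on the server. The example below assumes the open-source Jint engine; the script file name and entry-point function are hypothetical.

        // Assumes: using Jint; (the open-source .NET JavaScript interpreter)
        // "antlr-parser.js" and "parseSnippet" are illustrative names.
        var engine = new Engine();

        // Load the exact script the browser uses, so both sides share one parser.
        engine.Execute(System.IO.File.ReadAllText("antlr-parser.js"));

        // Call a function the script defines and bring the result back into .NET.
        object result = engine.Invoke("parseSnippet", "mean(series) * 2").ToObject();

    Since this runs as a background process, the interpreter overhead is acceptable, and the duplication problem disappears because there is only one parser source file.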

    Read the article

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in five separate series, ~50k points all up). I have found the existing options produced by Dundas, Visifire, Microsoft etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls are caused by the heavy use of vector graphics. Ergo, one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel re-invention rabbit hole: has anyone found a third party or OSS control that will manage large numbers of data points on a sparkline?
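    For what it's worth, a minimal sketch of the raster idea (Silverlight's WriteableBitmap assumed; sizes, colors, and element names illustrative): plot each sample directly into the pixel buffer instead of creating per-point vector elements.

        // Assumes a Silverlight page with an <Image x:Name="chartImage" /> element
        // and a samples array normalized to 0..1. Names are illustrative.
        const int W = 800, H = 200;
        var bmp = new WriteableBitmap(W, H);

        for (int i = 0; i < samples.Length; i++)
        {
            int x = i * W / samples.Length;
            int y = (H - 1) - (int)(samples[i] * (H - 1));
            bmp.Pixels[y * W + x] = unchecked((int)0xFF336699); // premultiplied ARGB
        }

        bmp.Invalidate();          // push the pixel buffer to the screen
        chartImage.Source = bmp;

    Drawing 50k points this way is a handful of array writes per redraw, which sidesteps the per-element overhead that vector-based chart controls pay.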

    Read the article

  • How to populate List<string> with Datarow values from single columns...

    - by James
    Hi, I'm still learning (baby steps). I'm messing about with a function and hoping to find a tidier way to deal with my datatables. For the more commonly used tables throughout the life of the program, I'll dump them to datatables and query those instead. What I'm hoping to do is query a datatable for, say, columnx = "this", and convert the values of column "y" directly to a List to return to the caller:

        private List<string> LookupColumnY(string hex)
        {
            List<string> stringlist = new List<string>();
            DataRow[] rows = tblDataTable.Select("Columnx = '" + hex + "'", "Columny ASC");
            foreach (DataRow row in rows)
            {
                stringlist.Add(row["Columny"].ToString());
            }
            return stringlist;
        }

    Does anyone know a slightly simpler method? I guess this is easy enough, but I'm wondering, if I do enough of these, whether iterating via a foreach loop will be a performance hit. TIA!
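    A hedged sketch of a slightly tidier version (assumes .NET 3.5+ with using System.Linq; same hypothetical table and column names): the filtering still happens in DataTable.Select, and LINQ just replaces the hand-written loop.

        private List<string> LookupColumnY(string hex)
        {
            // Same filter and sort as before; Enumerable.Select then projects
            // each row to its Columny value and ToList materializes the result.
            return tblDataTable
                .Select("Columnx = '" + hex + "'", "Columny ASC")
                .Select(row => row["Columny"].ToString())
                .ToList();
        }

    The loop is not the bottleneck in either version; both are O(n) over the matched rows, so this is a readability change rather than a performance one.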

    Read the article

  • How to modularize a b2b webservice transformation application

    - by hstoerr
    How would you modularize a large application that has some incoming (SOAP) webservices, some outgoing webservices, transformations between them and internal formats, internal logging services, accesses external archiving webservices, delays stuff and works on this asynchronously and so forth? One way is to split the functionality into a collection of WAR, deploy all of them on one application server and have them communicate with internal webservices. This has some overhead, especially if the messages are large, and you might run into performance problems due to thread count restrictions and so forth. Another way would be to put everything into a giant WAR, such that you can communicate directly. Not exactly modularization. What would you do?

    Read the article

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code. (I implemented L-BFGS for multivariate optimization.) I have traced the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing, close to native code. How can they do that? They said that: Q. Is NMath "pure" .NET? A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap. Can someone explain more to me? Thanks!

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs) but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and it seems 2NF appears to be a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields, which could have up to 50 keywords each. Would a flat structure of

        keyfield, keyword1, keyword2, ..., keyword50

    be the best solution, or would two tables, one holding (keyid, keyfield) and a child table holding (keyid, keyword) with one row per keyword (a 1:M relationship), be a better idea if my queries are mostly going to be looking for results that have one or more keywords?

    Read the article

  • improving conversions to binary and back in C#

    - by Saad Imran.
    I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods utilize the .NET BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs: This is the main conversion class. It adds extension methods to the main data types to convert them to the binary format, using the BitConverter class for the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type.

        public static class GSBitConverter
        {
            public static byte[] ToGSBinary(this short value)
            {
                return BitConverter.GetBytes(value);
            }

            public static byte[] ToGSBinary(this IEnumerable<short> value)
            {
                List<byte> bytes = new List<byte>();
                short length = (short)value.Count();
                bytes.AddRange(length.ToGSBinary());
                for (int i = 0; i < length; i++)
                    bytes.AddRange(value.ElementAt(i).ToGSBinary());
                return bytes.ToArray();
            }

            public static byte[] ToGSBinary(this bool value);
            public static byte[] ToGSBinary(this IEnumerable<bool> value);
            public static byte[] ToGSBinary(this IEnumerable<byte> value);
            public static byte[] ToGSBinary(this int value);
            public static byte[] ToGSBinary(this IEnumerable<int> value);
            public static byte[] ToGSBinary(this long value);
            public static byte[] ToGSBinary(this IEnumerable<long> value);
            public static byte[] ToGSBinary(this float value);
            public static byte[] ToGSBinary(this IEnumerable<float> value);
            public static byte[] ToGSBinary(this double value);
            public static byte[] ToGSBinary(this IEnumerable<double> value);
            public static byte[] ToGSBinary(this string value);
            public static byte[] ToGSBinary(this IEnumerable<string> value);
            public static string GetHexDump(this IEnumerable<byte> value);
        }

    Program.cs: Here's the object that I'm converting to binary in a loop.

        class Program
        {
            static void Main(string[] args)
            {
                GSObject obj = new GSObject();
                obj.AttachShort("smallInt", 15);
                obj.AttachInt("medInt", 120700);
                obj.AttachLong("bigInt", 10900800700);
                obj.AttachDouble("doubleVal", Math.PI);
                obj.AttachStringArray("muppetNames",
                    new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

                GSObject apple = new GSObject();
                apple.AttachString("name", "Apple");
                apple.AttachString("color", "red");
                apple.AttachBool("inStock", true);
                apple.AttachFloat("price", (float)1.5);

                GSObject lemon = new GSObject();
                lemon.AttachString("name", "Lemon");
                lemon.AttachString("color", "yellow");
                lemon.AttachBool("inStock", false);
                lemon.AttachFloat("price", (float)0.8);

                GSObject apricoat = new GSObject();
                apricoat.AttachString("name", "Apricoat");
                apricoat.AttachString("color", "orange");
                apricoat.AttachBool("inStock", true);
                apricoat.AttachFloat("price", (float)1.9);

                GSObject kiwi = new GSObject();
                kiwi.AttachString("name", "Kiwi");
                kiwi.AttachString("color", "green");
                kiwi.AttachBool("inStock", true);
                kiwi.AttachFloat("price", (float)2.3);

                GSArray fruits = new GSArray();
                fruits.AddGSObject(apple);
                fruits.AddGSObject(lemon);
                fruits.AddGSObject(apricoat);
                fruits.AddGSObject(kiwi);

                obj.AttachGSArray("fruits", fruits);

                Stopwatch w1 = Stopwatch.StartNew();
                for (int i = 0; i < 50000; i++)
                {
                    byte[] b = obj.ToGSBinary();
                }
                w1.Stop();

                Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
                Console.WriteLine(w1.ElapsedMilliseconds + "ms");
            }
        }

    (Note: the original snippet attached the lemon/apricoat/kiwi properties to apple; corrected above.) Here's the code for some of my other classes that are used in the code above. Most of it is repetitive: GSObject, GSArray, GSWrappedObject.
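    Two likely hotspots in the converter, with a hedged sketch of an alternative (illustrative method shape, not the poster's full API): ElementAt(i) on an IEnumerable restarts enumeration on every call, turning the array loop into O(n^2), and every field allocates its own List<byte> plus intermediate byte[] arrays. Writing into a single stream avoids both.

        // A minimal sketch: one MemoryStream/BinaryWriter per serialization,
        // foreach instead of ElementAt, no per-element byte[] allocations.
        // Assumes: using System.Collections.Generic; using System.IO; using System.Linq;
        public static byte[] ToGSBinary(IEnumerable<short> values)
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                short length = (short)values.Count();
                writer.Write(length);            // same length prefix as before
                foreach (short v in values)
                    writer.Write(v);             // BinaryWriter writes little-endian
                return ms.ToArray();
            }
        }

    In the full converter the stream would be created once at the top-level ToGSBinary call and threaded through the nested objects, so a 192-byte object costs one buffer rather than dozens of small ones.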

    Read the article

  • Subsonic SQLite Multiple Files

    - by Marcus Vinicius de LIma
    Hi, I have an application that must be accessed by many users. To optimize performance I intend to store each user's profile information in an independent database file. Every time a user logs in to the application, I need to set up a new provider linked to his own database. All databases have the same structure. So while querying, the common generated DAL classes must switch to the database file belonging to that user. Is there a way to configure SubSonic to do that switch at runtime? Thanks.

    Read the article

< Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >