Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 403/527 | < Previous Page | 399 400 401 402 403 404 405 406 407 408 409 410  | Next Page >

  • Which can handle a huge surge of queries: SQL Server 2008 Fulltext or Lucene

    - by Luke101
    I am creating a widget that will be installed on several websites and blogs. The widget will analyse the remote webpage's title and content, then return relevant articles/links from my website. We expect very heavy traffic, roughly 500K queries a day and growing from there. I need the queries to be answered very quickly, so the candidate has to be high performance, similar to Google AdSense. The remote title can be from 5 to 50 words, and the description we use will be no more than 3000 words. Which of these two do you think can handle the load?
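
    For scale, the Lucene side of the comparison amounts to parsing the remote page text into a query and searching a prebuilt index. Below is a minimal sketch against a Lucene.NET 3.x-style API; the "content" and "url" field names, the index path, and the sample text are assumptions, and a real widget would add caching and analyzer tuning.

        using System;
        using System.IO;
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.QueryParsers;
        using Lucene.Net.Search;
        using Lucene.Net.Store;

        class RelatedArticleSearcher
        {
            static void Main()
            {
                // open the existing index once and reuse the searcher across requests
                var directory = FSDirectory.Open(new DirectoryInfo(@"C:\index"));
                var searcher = new IndexSearcher(directory, true); // read-only
                var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
                var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_30, "content", analyzer);

                // treat the remote title + description as a bag of terms
                string pageText = "retro game iphone opengl palette";
                var query = parser.Parse(QueryParser.Escape(pageText));

                // top 10 related articles
                var hits = searcher.Search(query, 10);
                foreach (var scoreDoc in hits.ScoreDocs)
                    Console.WriteLine(searcher.Doc(scoreDoc.Doc).Get("url"));
            }
        }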

    Read the article

  • How to achieve a palette effect on iPhone using OpenGL

    - by Joe
    I'm porting a 2d retro game to iPhone that has the following properties:
    - targets OpenGL ES 1.1
    - entire screen is filled with tiles (textured triangle strip)
    - tile textured using a single 256x256 RGBA texture image
    - the texture is passed to OpenGL once at the start of the game
    - only 4 displayed colours are used
    - one of the displayed colours is black
    The original game flashed the screen when time starts to run out by toggling the black pixels to white using an indexed palette. What is the best (i.e. most efficient) way to achieve this in OpenGL ES 1.1? My thoughts so far:
    - Generate an alternative texture with white instead of black pixels, and pass to OpenGL when the screen is flashing
    - Render a white poly underneath the background and render the texture with alpha on to display it
    - Try and render a poly on top with some blending that achieves the effect (not sure this is possible)
    I'm fairly new to OpenGL so I'm not sure what the performance drawbacks of each of these are, or if there's a better way of doing this.

    Read the article

  • Building a case for solr

    - by Midhat
    Our product consists of multiple applications, all using Lucene. Two of the applications I am involved with have Lucene indexes of about 3 GB and 12 GB. Another team is building an application for which they estimate the Lucene index size to be close to 1 terabyte. New documents are added to the indexes approximately every 15 days. We do not have any apparent performance issues with the current applications. So my questions are: Should we be using Solr now? When should one stop using Lucene and graduate to Solr? Are there any disadvantages/problems with using Solr? The client applications are written in ASP.NET, but I assume they will be able to use a Solr server via SolrNet.

    Read the article

  • Validation Rules in Webtesting using VS2010

    - by Lexipain
    I'm creating a simple web test (recorded web performance test) that makes sure a correct error message is displayed if I try to log in with a username that does not exist. However, there are two types of error messages that handle incorrect login info. One is for all the usernames that do not exist and therefore are not allowed, and the other is for usernames that start with the letter 'Q' (which is not allowed for a few reasons). Now what I want to do is use the 'Find Text' validation rule, and the test should pass if ONE of the 'Find Text' parameters is found; in that case I want the second 'Find Text' rule to be ignored so it doesn't fail the test. In other words, the test should always pass if one of the 'Find Text' rules matches. How can I achieve that? Is there some if/else statement that I can use for this?
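
    One way to get OR semantics is to drop the two built-in rules and attach a single custom rule that checks for both strings itself. Below is a minimal sketch against the Microsoft.VisualStudio.TestTools.WebTesting API; the class and property names are made up, and the two strings would be set on the rule in the test's validation rule properties.

        using Microsoft.VisualStudio.TestTools.WebTesting;

        // passes when either of the two expected error messages appears in the response body
        public class FindEitherTextRule : ValidationRule
        {
            public string FirstText { get; set; }
            public string SecondText { get; set; }

            public override void Validate(object sender, ValidationEventArgs e)
            {
                string body = e.Response.BodyString ?? string.Empty;
                e.IsValid = body.Contains(FirstText) || body.Contains(SecondText);
                e.Message = e.IsValid
                    ? "One of the expected error messages was found."
                    : "Neither expected error message was found.";
            }
        }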

    Read the article

  • How does NTFS actually work with a B-tree?

    - by bakra
    To improve performance, NTFS directories use a special data management structure called a B-tree. The "B-tree" concept here refers to a "tree of storage units" that holds the contents of an individual directory. What I don't understand is: where on the disk is this tree stored? It's surely not created every time we reboot... that would take a lot of time. And since it's a tree (a dynamic data structure), unlike arrays it will grow, so space needs to be allocated every time it grows. So how is this "dynamic metadata" stored?

    Read the article

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour:
        INSERT INTO TemporaryTable (SELECT FROM HighlyContentiousTableInInnoDb WHERE allKindsOfComplexConditions are true)
    Now this query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb even though it was just reading from it, which was making some other very simple queries take ~25 seconds (that's how long that other query takes). Then I discovered that InnoDB tables in such a case are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in the article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it in this way during the copy? Then I could just copy the HighlyContentiousTable to another table and do the query there.

    Read the article

  • Using Windows media foundation

    - by Martin Beckett
    OK, so my new gig is high performance video (think Google Street View but movies) - the hard work is all embedded capture and image processing, but: I was looking at the new MS video offerings to display content = Windows Media Foundation. Is anyone actually using this? There are no books on the topic. The only documentation is a developer team blog with a single entry 9 months old. I thought we had got past having to learn an MS API by spying on the COM control messages! Is it just another wrapper around the same old ActiveX control?

    Read the article

  • improving conversions to binary and back in C#

    - by Saad Imran.
    I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods utilize the .NET BitConverter class to convert the basic data types to binary. Anyways, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs
    This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type.

        public static class GSBitConverter
        {
            public static byte[] ToGSBinary(this short value)
            {
                return BitConverter.GetBytes(value);
            }

            public static byte[] ToGSBinary(this IEnumerable<short> value)
            {
                List<byte> bytes = new List<byte>();
                short length = (short)value.Count();
                bytes.AddRange(length.ToGSBinary());
                for (int i = 0; i < length; i++)
                    bytes.AddRange(value.ElementAt(i).ToGSBinary());
                return bytes.ToArray();
            }

            public static byte[] ToGSBinary(this bool value);
            public static byte[] ToGSBinary(this IEnumerable<bool> value);
            public static byte[] ToGSBinary(this IEnumerable<byte> value);
            public static byte[] ToGSBinary(this int value);
            public static byte[] ToGSBinary(this IEnumerable<int> value);
            public static byte[] ToGSBinary(this long value);
            public static byte[] ToGSBinary(this IEnumerable<long> value);
            public static byte[] ToGSBinary(this float value);
            public static byte[] ToGSBinary(this IEnumerable<float> value);
            public static byte[] ToGSBinary(this double value);
            public static byte[] ToGSBinary(this IEnumerable<double> value);
            public static byte[] ToGSBinary(this string value);
            public static byte[] ToGSBinary(this IEnumerable<string> value);
            public static string GetHexDump(this IEnumerable<byte> value);
        }

    Program.cs
    Here's the object that I'm converting to binary in a loop.

        class Program
        {
            static void Main(string[] args)
            {
                GSObject obj = new GSObject();
                obj.AttachShort("smallInt", 15);
                obj.AttachInt("medInt", 120700);
                obj.AttachLong("bigInt", 10900800700);
                obj.AttachDouble("doubleVal", Math.PI);
                obj.AttachStringArray("muppetNames",
                    new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

                GSObject apple = new GSObject();
                apple.AttachString("name", "Apple");
                apple.AttachString("color", "red");
                apple.AttachBool("inStock", true);
                apple.AttachFloat("price", (float)1.5);

                GSObject lemon = new GSObject();
                lemon.AttachString("name", "Lemon");
                lemon.AttachString("color", "yellow");
                lemon.AttachBool("inStock", false);
                lemon.AttachFloat("price", (float)0.8);

                GSObject apricoat = new GSObject();
                apricoat.AttachString("name", "Apricoat");
                apricoat.AttachString("color", "orange");
                apricoat.AttachBool("inStock", true);
                apricoat.AttachFloat("price", (float)1.9);

                GSObject kiwi = new GSObject();
                kiwi.AttachString("name", "Kiwi");
                kiwi.AttachString("color", "green");
                kiwi.AttachBool("inStock", true);
                kiwi.AttachFloat("price", (float)2.3);

                GSArray fruits = new GSArray();
                fruits.AddGSObject(apple);
                fruits.AddGSObject(lemon);
                fruits.AddGSObject(apricoat);
                fruits.AddGSObject(kiwi);

                obj.AttachGSArray("fruits", fruits);

                Stopwatch w1 = Stopwatch.StartNew();
                for (int i = 0; i < 50000; i++)
                {
                    byte[] b = obj.ToGSBinary();
                }
                w1.Stop();

                Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
                Console.WriteLine(w1.ElapsedMilliseconds + "ms");
            }
        }

    Here's the code for some of my other classes that are used in the code above. Most of it is repetitive: GSObject, GSArray, GSWrappedObject.
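
    A likely hot spot in the code above is the array overload: ElementAt(i) on a plain IEnumerable<T> re-enumerates from the start on every call (making the loop O(n squared)), and each element allocates a temporary byte[] that is then copied into a List<byte>. A hedged alternative, assuming the GS types can be adapted, is to write every field straight into one reusable stream; the helper below is hypothetical and only meant to show the shape of that approach.

        using System;
        using System.Collections.Generic;
        using System.IO;

        public static class GSStreamExtensions
        {
            // length-prefixed array written directly into the writer: no ElementAt, no List<byte>
            public static void WriteGSArray(this BinaryWriter writer, IList<short> values)
            {
                writer.Write((short)values.Count);
                for (int i = 0; i < values.Count; i++)
                    writer.Write(values[i]);   // BinaryWriter emits little-endian, like BitConverter
            }

            public static byte[] ToGSBinary(IList<short> values)
            {
                using (var stream = new MemoryStream())
                using (var writer = new BinaryWriter(stream))
                {
                    writer.WriteGSArray(values);
                    return stream.ToArray();
                }
            }
        }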

    Read the article

  • Java Swing: how to add an image to a JPanel ?

    - by Leonel
    I have a JPanel to which I'd like to add JPEG and PNG images that I generate on the fly. All the examples I've seen so far in the Swing tutorials, especially in the Swing examples, use ImageIcons. I'm generating these images as byte arrays, and they are usually larger than the common icons used in the examples, at 640x480. Is there any (performance or other) problem in using the ImageIcon class to display an image that size in a JPanel? What's the usual way of doing it? How do you add an image to a JPanel without using the ImageIcon class? Edit: A more careful examination of the tutorials and the API shows that you cannot add an ImageIcon directly to a JPanel. Instead, they achieve the same effect by setting the image as the icon of a JLabel. This just doesn't feel right...

    Read the article

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLE DB connections, which I assume LINQ to SQL is using. However, I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Server Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context where I can specify that I want this option ON? Or somewhere in code I can do it? I have managed a clumsy workaround, but I don't like it: I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words, the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.
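
    One approach that is sometimes suggested is to open the DataContext's connection yourself and issue the SET before doing any work, so every statement on that connection inherits it. This is only a sketch, and it assumes the context keeps using the connection you opened rather than opening its own; MyDataContext stands in for the generated LINQ to SQL context.

        using (var ctx = new MyDataContext())
        {
            ctx.Connection.Open();                       // pin one connection for the context's lifetime
            ctx.ExecuteCommand("SET ARITHABORT ON");     // applies to every statement on this connection

            // inserts/updates/deletes against tables behind the indexed view go here
            ctx.SubmitChanges();
        }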

    Read the article

  • Compress components with gzip - J2EE

    - by Venkata Sirish
    I am looking to improve the front-end performance of my application, so I used the YSlow tool in Firefox. When I ran this tool for my app, the YSlow grade tab showed an issue: 'Grade F on Compress components with gzip'. It seems that we need to compress the files (JS, CSS) while sending them from the server to the client to improve the response time. My app is a Struts Java application. Can anyone let me know how to compress and send the front-end UI files (JS, CSS) from the server so that the response time improves and my pages load faster? What do I need to do to compress these files in Java on the server?

    Read the article

  • Choosing between ExtJS and YUI based on application parameters

    - by Kabeer
    Hello. I need help making the call between the Ext JS and YUI libraries. Here are the key factors I have derived from my application requirements and development process:
    - Complex, Windows-Forms-like controls
    - Widgets, layouts, utilities
    - Inter-widget communication
    - Easy to extend
    - Easy to learn
    - Intuitive and concise coding
    - Strong exception handling
    - Active support / community
    - Keeps up with upcoming technologies (HTML5, etc.)
    - Skins and themes that are easy to change
    - Skins and themes that support variety (a text box appearing differently in different contexts)
    - Support and utilities for standard protocols (XmlHttp, JSON)
    - Good performance (responsive)
    Cost is not crucial, but I don't mind saving :)

    Read the article

  • Looking for the most painless non-RDBMS storage method in C#

    - by NateD
    I'm writing a simple program that will run entirely client-side. (Desktop programming? Do people still do that?) I need a simple way to store trivial amounts of data in a structured form, but I really don't see any need to use a database system. What's more, some of the data needs to be serialized and passed around to different users, like some kind of "file" or perhaps a "document". (Has anyone ever done that before?) So, I've looked at using .NET DataSets, LINQ, and direct XML manipulation, and they all seem like they would get the job done, but I would like to know, before I dive into any of them, if there's one method that is generally regarded as easier to code than the others. As I said, the amount of data to be stored is trivial - even if one hundred people all used the same machine we're not talking about more than 10 MB - so performance is not as large a concern as codability/maintainability. Thank you all in advance!
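
    Of the options mentioned, plain XML serialization is often the least ceremony for a small, passed-around "document". Here is a minimal sketch using XmlSerializer; the Document shape and file paths are made up for illustration.

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class Document
        {
            public string Owner { get; set; }
            public List<string> Notes { get; set; }
        }

        public static class DocumentStore
        {
            static readonly XmlSerializer Serializer = new XmlSerializer(typeof(Document));

            public static void Save(Document doc, string path)
            {
                using (var stream = File.Create(path))
                    Serializer.Serialize(stream, doc);   // human-readable XML on disk
            }

            public static Document Load(string path)
            {
                using (var stream = File.OpenRead(path))
                    return (Document)Serializer.Deserialize(stream);
            }
        }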

    Read the article

  • C#, WinForms: Which view type for periodically updated list?

    - by rdoubleui
    I have an application that periodically polls a web service (about every 10 seconds). In my application logic I have a List<Message> holding the messages. All messages have an id and might be received out of order; therefore the class implements IComparable. What WinForms control would be a good fit for being regularly updated (with the items in order)? I plan to hold the last 500 messages. Should I sort the list and then update the whole form? Or is data binding appropriate (performance-wise)?
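
    Data binding usually copes fine at this size. Below is a minimal sketch with a BindingList<Message> behind a DataGridView; the Message shape and the merge rule are assumptions - the point is that only the backing list is touched and the grid refreshes itself.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Windows.Forms;

        public class Message : IComparable<Message>
        {
            public int Id { get; set; }
            public string Text { get; set; }
            public int CompareTo(Message other) { return Id.CompareTo(other.Id); }
        }

        public class MessagesForm : Form
        {
            readonly DataGridView grid = new DataGridView { Dock = DockStyle.Fill };
            readonly BindingList<Message> view = new BindingList<Message>();
            readonly List<Message> all = new List<Message>();

            public MessagesForm()
            {
                Controls.Add(grid);
                grid.DataSource = view;   // grid tracks ListChanged events from the BindingList
            }

            // call from the UI thread after each poll
            public void MergeMessages(IEnumerable<Message> latest)
            {
                all.AddRange(latest);
                all.Sort();                                  // order by id via IComparable
                if (all.Count > 500)
                    all.RemoveRange(0, all.Count - 500);     // keep the newest 500

                view.RaiseListChangedEvents = false;         // avoid hundreds of repaints
                view.Clear();
                foreach (var m in all) view.Add(m);
                view.RaiseListChangedEvents = true;
                view.ResetBindings();
            }
        }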

    Read the article

  • Python: split a list based on a condition?

    - by Parand
    What's the best way, both aesthetically and from a performance perspective, to split a list of items into multiple lists based on a conditional? The equivalent of:
        good = [x for x in mylist if x in goodvals]
        bad = [x for x in mylist if x not in goodvals]
    Is there a more elegant way to do this? Update: here's the actual use case, to better explain what I'm trying to do:
        # files looks like: [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ... ]
        IMAGE_TYPES = ('.jpg', '.jpeg', '.gif', '.bmp', '.png')
        images = [f for f in files if f[2].lower() in IMAGE_TYPES]
        anims = [f for f in files if f[2].lower() not in IMAGE_TYPES]

    Read the article

  • Precompile Lambda Expression Tree conversions as constants?

    - by Nathan
    It is fairly common to take an Expression tree and convert it to some other form, such as a string representation (for example this question and this question, and I suspect Linq2Sql does something similar). In many cases, perhaps even most cases, the Expression tree conversion will always be the same, i.e. if I have a function
        public string GenerateSomeSql(Expression<Func<TResult, TProperty>> expression)
    then any call with the same argument will always return the same result. For example:
        GenerateSomeSql(x => x.Age)  // suppose this will always return "select Age from Person"
        GenerateSomeSql(x => x.Ssn)  // suppose this will always return "select Ssn from Person"
    So, in essence, the function call with a particular argument is really just a constant, except time is wasted at runtime re-computing it continuously. Assuming, for the sake of argument, that the conversion is sufficiently complex to cause a noticeable performance hit, is there any way to pre-compile the function call into an actual constant?
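
    Short of a build step, the usual compromise is to memoize rather than truly pre-compile: cache the generated string keyed by the expression. A rough sketch is below; it assumes GenerateSomeSql is pure, uses Expression.ToString() as the cache key (a simplification), and the stand-in conversion body is invented for illustration.

        using System;
        using System.Collections.Concurrent;
        using System.Linq.Expressions;

        public class SqlGenerator<TResult>
        {
            readonly ConcurrentDictionary<string, string> cache =
                new ConcurrentDictionary<string, string>();

            public string GenerateSomeSqlCached<TProperty>(Expression<Func<TResult, TProperty>> expression)
            {
                // first call pays the conversion cost; later calls with the same shape are a dictionary lookup
                return cache.GetOrAdd(expression.ToString(), _ => GenerateSomeSql(expression));
            }

            string GenerateSomeSql<TProperty>(Expression<Func<TResult, TProperty>> expression)
            {
                // stand-in for the real (expensive) expression-tree-to-SQL conversion
                var member = (MemberExpression)expression.Body;
                return "select " + member.Member.Name + " from " + typeof(TResult).Name;
            }
        }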

    Read the article

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    Hi, I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example: A B C D E. There is no such string in the database, so I was thinking:
    - Store this string as a primary key in a MySQL database (it could be maybe around 50-100k of text, but usually probably less than 200 words)
    - Generate the PDF file, and create a link to it in the database
    - When the next user requests A B C D E, I can just serve the file instead of recreating it each time (a simple cache)
    The PDF is CPU-intensive to generate, so I am trying to cache as much as I can... My questions are: Does anyone have any alternative ideas to my approach? What will the database performance be like? Is there a better way to design the schema than using the input string as the primary key?

    Read the article

  • Add Core Data Index to certain Attributes via migration

    - by steipete
    For performance reasons, I want to set the Indexed attribute on some of my entities' attributes. I created a new Core Data model version to perform the changes. Core Data detects the changes and migrates my model to the new version; however, NO INDEXES ARE GENERATED. If I recreate the database from scratch, the indexes are there. I checked with SQLite Browser both on the iPhone and in the Simulator. The problem only occurs if a database in the prior format is already there. Is there a way to manually add the indexes? Write some SQL for that? Or am I missing something? I have already done some more critical migrations with no problems there. But those missing indexes are bugging me. Thanks for helping!

    Read the article

  • Cloud Agnostic Architecture?

    - by Dave
    Hi, I'm doing some architecture work on a new solution which will initially run in Windows Azure. However, I'd like the solution (or at least the architecture/design) to be cloud agnostic (to whatever extent is realistic). Has anyone done any work on this front or seen any good white papers/blog posts? Our high-level architecture will consist of a payload being sent to a web service (WCF for instance); this will be dumped on a queue (for argument's sake) and a worker process will grab messages off this queue and process them. There will be a database of customer information which we'd ideally like to keep out of the cloud, however there are obvious performance considerations. Keen to hear others' thoughts. Cheers Dave
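
    A common tactic for staying agnostic is to code the web service and worker against small interfaces and push the Azure-specific pieces into adapters. A rough sketch is below; the interface shape and the in-memory default are illustrative only, and an Azure Queue storage (or SQS) adapter would implement the same interface.

        using System.Collections.Concurrent;

        // the only queue contract the web service and worker ever see
        public interface IMessageQueue
        {
            void Enqueue(string payload);
            bool TryDequeue(out string payload);
        }

        // portable default used for local runs and tests; a cloud-specific adapter
        // would be swapped in via configuration
        public class InMemoryMessageQueue : IMessageQueue
        {
            readonly ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
            public void Enqueue(string payload) { queue.Enqueue(payload); }
            public bool TryDequeue(out string payload) { return queue.TryDequeue(out payload); }
        }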

    Read the article

  • arbitrary typed data in django model

    - by Dmitry Shevchenko
    I have a model, say, Item. I want to store an arbitrary number of attributes on it, like title, description, release_date. And I want them to be not just strings but to have a Python type - string, boolean, datetime, etc. What are my options here? The EAV pattern with a separate name-value table won't work because of the same DB type across all values. A JSONField can probably help, but it doesn't know about datetime, for example. Also, I was looking at PickleField; it fits perfectly, but I'm a bit concerned about performance.

    Read the article

  • Portable C++ library for IPC (processes and shared memory), Boost vs ACE vs Poco?

    - by user363778
    Hi, I need a portable C++ library for doing IPC. I used fork() and SysV shared memory until now, but this limits me to Linux/Unix. I found out that there are 3 major C++ libraries that offer a portable solution (including Windows and Mac OS X). I really like Boost and would like to use it, but I need processes, and it seems that this has only been an experimental branch until now!? I had never heard of ACE or POCO before, and thus I am stuck: I do not know which one to choose. I need fork(), sleep() (usleep() would be great) and shared memory of course. Performance and documentation are also important criteria. Thanks for your help!

    Read the article

  • Concatenation Operator - PHP

    - by Chaitanya
    This might be a silly question, but it struck me, and here I ask.
        <?php
        $x = "Hi";
        $y = " There";
        $z = $x.$y;
        $a = "$x$y";
        echo "$z"."<br />"."$a";
        ?>
    $z uses the traditional concatenation operator provided by PHP and concatenates; conversely, $a doesn't. My questions:
    a. By not using the concatenation operator, does it affect performance?
    b. If it doesn't, why have the concatenation operator at all?
    c. Why have 2 modes of implementation when one does the work?

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?

    Read the article

  • Can I get the stack traces of all threads in my c# app?

    - by Drew Shafer
    I'm debugging an apparent concurrency issue in a largish app that I hack on at work. The bug in question only manifests on certain lower-performance machines after running for many (12+) hours, and I have never reproduced it in the debugger. Because of this, my debugging tools are basically limited to analyzing log files. C# makes it easy to get the stack trace of the thread throwing the exception, but I'd like to additionally get the stack traces of every other thread currently executing in my AppDomain at the time the exception was thrown. Is this possible?

    Read the article

  • Dispelling the UIImage imageNamed: FUD

    - by Roger Nolan
    I see a lot of people saying imageNamed is bad but equal numbers of people saying the performance is good - especially when rendering UITableViews. See this SO question for example or this article on iPhoneDeveloperTips.com UIImage's imageNamed method used to leak so it was best avoided but has been fixed in recent releases. I'd like to understand the caching algorithm better in order to make a reasoned decision about where I can trust the system to cache my images and where I need to go the extra mile and do it myself. My current basic understanding is that it's a simple NSMutableDictionary of UIImages referenced by filename. It gets bigger and when memory runs out it gets a lot smaller. For example, does anyone know for sure that the image cache behind imageNamed does not respond to didReceiveMemoryWarning? It seems unlikely that Apple would not do this. If you have any insight into the caching algorithm, please post it here.

    Read the article
