Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 247 of 1148

  • Smartest way to draw 100+ images on screen

    - by Henrik
    Hello all, first of all I'm a newbie when it comes to Android programming, so if this question is totally stupid please delete it ASAP :-). Question: I'm going to draw a grid of 10x10 PNG images. Each image is 32x32 px. All of the images are unique. I'm thinking that the easiest way would be to put each image in an ImageView. If I add all 100 ImageViews to the layout, will this give me some kind of performance hit? Would there be a smarter way to draw these images? Thanks. / Henrik

    Read the article

  • DBnull in Linq query causing problems

    - by nat
    Hi there, I am doing a query like this: int numberInInterval = (from dsStatistics.J2RespondentRow item in jStats.J2Respondent where item.EndTime > dtIntervalLower && item.EndTime <= dtIntervalUpper select item).Count(); There appear to be some DBNulls in the EndTime column. Is there any way I can avoid these? I tried adding && item.EndTime != null to the where clause, and even != DBNull.Value, with no luck. Do I have to do a second (well, first) query to grab all the rows that aren't null and then run the one above? I'm sure it's a super simple fix, but I'm still missing it. Thanks, Nat

    Read the article

  • MySQL slow query

    - by andrhamm
    SELECT items.item_id, items.category_id, items.title, items.description, items.quality, items.type, items.status, items.price, items.posted, items.modified, zip_code.state_prefix, zip_code.city, books.isbn13, books.isbn10, books.authors, books.publisher FROM ( ( items LEFT JOIN bookitems ON items.item_id = bookitems.item_id ) LEFT JOIN books ON books.isbn13 = bookitems.isbn13 ) LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip WHERE items.rid = $rid I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?
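
    One thing worth checking for a query shaped like this is whether every join and filter column is indexed. A hedged sketch of the indexes such a query usually needs (column names are taken from the query above; skip any that already exist as primary keys):

      -- Filter on the driving table
      CREATE INDEX idx_items_rid ON items (rid);
      -- Join columns on the joined tables
      CREATE INDEX idx_bookitems_item_id ON bookitems (item_id);
      CREATE INDEX idx_books_isbn13 ON books (isbn13);
      CREATE INDEX idx_zip_code_zip_code ON zip_code (zip_code);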

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried was to define an interface for a function which gets memoized automatically upon execution:

      class EMAFunction : IFunction
      {
          Dictionary<List<object>, List<object>> map;

          class EMAComparer : IEqualityComparer<List<object>>
          {
              private int _multiplier = 97;

              public bool Equals(List<object> a, List<object> b)
              {
                  List<object> aVals = (List<object>)a[0];
                  int aPeriod = (int)a[1];
                  List<object> bVals = (List<object>)b[0];
                  int bPeriod = (int)b[1];
                  return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
              }

              public int GetHashCode(List<object> obj)
              {
                  // Don't compute hash code on null object.
                  if (obj == null) { return 0; }
                  // Get length.
                  int length = obj.Count;
                  List<object> vals = (List<object>)obj[0];
                  int period = (int)obj[1];
                  return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
              }
          }

          public EMAFunction()
          {
              NumParams = 2;
              Name = "EMA";
              map = new Dictionary<List<object>, List<object>>(new EMAComparer());
          }

          #region IFunction Members

          public int NumParams { get; set; }
          public string Name { get; set; }

          public object Execute(List<object> parameters)
          {
              if (parameters.Count != NumParams)
                  throw new ArgumentException("The num params doesn't match!");

              if (!map.ContainsKey(parameters))
              {
                  //map.Add(parameters,
                  List<double> values = new List<double>();
                  List<object> asObj = (List<object>)parameters[0];
                  foreach (object val in asObj)
                  {
                      values.Add((double)val);
                  }
                  int period = (int)parameters[1];
                  asObj.Clear();
                  List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                  foreach (double val in ema)
                  {
                      asObj.Add(val);
                  }
                  map.Add(parameters, asObj);
              }
              return map[parameters];
          }

          public void ClearMap() { map.Clear(); }

          #endregion
      }

    Here are my tests of the function:

      private void MemoizeTest()
      {
          DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
          List<String> labels = dataSet.DataLabels;
          Stopwatch sw = new Stopwatch();
          IFunction emaFunc = new EMAFunction();
          List<object> parameters = new List<object>();
          int numRuns = 1000;
          long sumTicks = 0;
          parameters.Add(dataSet.GetValues("open"));
          parameters.Add(12);

          // First call
          for (int i = 0; i < numRuns; ++i)
          {
              emaFunc.ClearMap(); // remove any memoization mappings
              sw.Start();
              emaFunc.Execute(parameters);
              sw.Stop();
              sumTicks += sw.ElapsedTicks;
          }
          Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

          sumTicks = 0;
          // Repeat call
          for (int i = 0; i < numRuns; ++i)
          {
              sw.Start();
              emaFunc.Execute(parameters);
              sw.Stop();
              sumTicks += sw.ElapsedTicks;
          }
          Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
      }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

      Average ticks not-memoized 106,182
      Average ticks memoized     198,854

    I tried doubling the data instances to 2048, but the results were about the same:

      Average ticks not-memoized 232,579
      Average ticks memoized     446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the map, but the performance was still slow... I'm open either to troubleshooting help with this example or, if you have a better solution to the problem, please let me know what it is.

    Read the article

  • Create a SQL query to retrieve most recent records

    - by mattruma
    I am creating a status board module for my project team. The status board allows the user to set their status as in or out, and they can also provide a note. I was planning on storing all the information in a single table ... an example of the data follows:

      Date              User       Status   Notes
      -------------------------------------------------------
      1/8/2009 12:00pm  B.Sisko    In       Out to lunch
      1/8/2009 8:00am   B.Sisko    In
      1/7/2009 5:00pm   B.Sisko    In
      1/7/2009 8:00am   B.Sisko    In
      1/7/2009 8:00am   K.Janeway  In
      1/5/2009 8:00am   K.Janeway  In
      1/1/2009 8:00am   J.Picard   Out      Vacation

    I would like to query the data and return the most recent status for each user; in this case, my query would return the following results:

      Date              User       Status   Notes
      -------------------------------------------------------
      1/8/2009 12:00pm  B.Sisko    In       Out to lunch
      1/7/2009 8:00am   K.Janeway  In
      1/1/2009 8:00am   J.Picard   Out      Vacation

    I am trying to figure out the Transact-SQL to make this happen. Any help would be appreciated.
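
    A hedged sketch of one common Transact-SQL approach, a correlated subquery that keeps only each user's latest row (the table name StatusBoard is an assumption; the column names come from the example above):

      SELECT s.[Date], s.[User], s.Status, s.Notes
      FROM StatusBoard s
      WHERE s.[Date] = (SELECT MAX(s2.[Date])
                        FROM StatusBoard s2
                        WHERE s2.[User] = s.[User]);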

    Read the article

  • Optimizing a Soundex Query for finding similar names

    - by xkingpin
    My application will offer a list of suggestions for English names that "sound like" a given typed name. The query will need to be optimized and return results as quickly as possible. Which option would be best for returning results quickly? (Or offer your own suggestion if you have one.) A. Generate the Soundex hash and store it in the "Names" table, then do something like the following (this at least saves generating the Soundex hash for every row in my db per query, right?): select name from names where NameSoundex = Soundex('Ann') B. Use the Difference function (this must generate the Soundex for every name in the table?): select name from names where Difference(name, 'Ann') = 3 C. Simple comparison: select name from names where Soundex(name) = Soundex('Ann') Option A seems to me like it would be the fastest to return results, because it only generates the Soundex for one string and then compares it to an indexed column, "NameSoundex". Option B should give more results than option A, because the name does not have to be an exact match of the Soundex, but it could be slower. Assuming my table could contain millions of rows, what would yield the best results?
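
    A hedged sketch of option A on SQL Server, using a persisted computed column so the hash is stored once and can be indexed (the Names table and NameSoundex column come from the question; the computed-column approach itself is an assumption):

      ALTER TABLE Names ADD NameSoundex AS SOUNDEX(name) PERSISTED;
      CREATE INDEX IX_Names_NameSoundex ON Names (NameSoundex);

      -- The lookup then computes SOUNDEX only for the search term:
      SELECT name FROM Names WHERE NameSoundex = SOUNDEX('Ann');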

    Read the article

  • Get multiple records with one query

    - by Lewy
    User table:

      name   lastname
      Bob    Presley
      Jamie  Cox
      Lucy   Bush

    Find users:

      q = Query.new("Bob Presley, Cox, Lucy")
      q.find_users
      => {0=>{:name=>"Bob", :lastname=>"Presley"}, 1=>{:lastname=>"Cox"}, 2=>{:name=>"Lucy"}}

    Question: I've got a hash with a few names and last names. I need to build an ActiveRecord query to fetch all users from that hash. I can do

      object = []
      hash = q.find_users
      hash.each do |data|
        # check if data[:lastname] and data[:name] exist
        # object << User.where(:name => ..., :lastname => ...)
      end

    But I think this is highly inefficient. How should I do this?
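
    For reference, a hedged sketch of the single SQL statement such a query would ultimately need to generate, with one OR'd condition per entry in the hash (the users table and column names are assumed from the model above):

      SELECT * FROM users
      WHERE (name = 'Bob' AND lastname = 'Presley')
         OR (lastname = 'Cox')
         OR (name = 'Lucy');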

    Read the article

  • HQL query for entity with max value

    - by Rob
    I have a Hibernate entity that looks like this (accessors omitted for brevity): @Entity @Table(name="FeatureList_Version") @SecondaryTable(name="FeatureList", pkJoinColumns=@PrimaryKeyJoinColumn(name="FeatureList_Key")) public class FeatureList implements Serializable { @Id @Column(name="FeatureList_Version_Key") private String key; @Column(name="Name",table="FeatureList") private String name; @Column(name="VERSION") private Integer version; } I want to craft an HQL query that retrieves the most up-to-date version of a FeatureList. The following query sort of works: Select f.name, max(f.version) from FeatureList f group by f.name The trouble is that this won't populate the key field, which I need to contain the key of the record with the highest version number for the given FeatureList. If I add f.key in the select it won't work, because it's not in the group by or an aggregate, and if I put it in the group by the whole thing stops working and it just gives me every version as a separate entity. So, can anybody help?
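
    A hedged plain-SQL sketch of the usual "greatest version per group" pattern, which translates to a correlated subquery in HQL against the entity's version and name properties; the physical join between the two tables below is guessed from the @SecondaryTable mapping and should be checked against the real schema:

      SELECT fv.FeatureList_Version_Key, fl.Name, fv.VERSION
      FROM FeatureList_Version fv
      JOIN FeatureList fl ON fl.FeatureList_Key = fv.FeatureList_Version_Key
      WHERE fv.VERSION = (SELECT MAX(fv2.VERSION)
                          FROM FeatureList_Version fv2
                          JOIN FeatureList fl2 ON fl2.FeatureList_Key = fv2.FeatureList_Version_Key
                          WHERE fl2.Name = fl.Name);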

    Read the article

  • How to cache and store objects and set an expiry policy in Android?

    - by virsir
    I have an app that fetches data from the internet, and for better performance and bandwidth I need to implement a cache layer. There are two different kinds of data coming from the internet: one changes every hour and the other basically never changes. So for the first type of data, I need to implement an expiry policy that deletes it one hour after it was created, and when the user requests that data I will check the storage first and then go to the internet if nothing is found. I thought about using SharedPreferences or a SQL database to store the JSON data or serialized object string. My questions are: 1) What should I use, SharedPreferences, a SQL database, or anything else? A piece of data is not big, but there may be many instances of that data. 2) How do I implement that expiry system?
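
    If a database is used, a hedged sketch of the SQLite side of such an expiry policy (the table and column names here are assumptions for illustration):

      CREATE TABLE IF NOT EXISTS api_cache (
          cache_key  TEXT PRIMARY KEY,
          json_data  TEXT NOT NULL,
          created_at INTEGER NOT NULL  -- unix epoch seconds
      );

      -- Purge entries older than one hour before reading:
      DELETE FROM api_cache WHERE created_at < strftime('%s', 'now') - 3600;

      -- Then check for a cached copy before going to the network:
      SELECT json_data FROM api_cache WHERE cache_key = ?;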

    Read the article

  • NHibernate criteria query inserts an extra order by expression when using JoinType.LeftOuterJoin and Projections

    - by Aaron Palmer
    Why would this NHibernate criteria query produce the SQL query below? return Session.CreateCriteria(typeof(FundingCategory), "fc") .CreateCriteria("FundingPrograms", "fp") .CreateCriteria("Projects", "p", JoinType.LeftOuterJoin) .Add(Restrictions.Disjunction() .Add(Restrictions.Eq("fp.Recipient.Id", recipientId)) .Add(Restrictions.Eq("p.Recipient.Id", recipientId)) ) .SetProjection(Projections.ProjectionList() .Add(Projections.GroupProperty("fc.Name"), "fcn") .Add(Projections.Sum("fp.ObligatedAmount"), "fpo") .Add(Projections.Sum("p.ObligatedAmount"), "po") ) .AddOrder(Order.Desc("fpo")) .AddOrder(Order.Desc("po")) .AddOrder(Order.Asc("fcn")) .List<object[]>(); SELECT this_.Name as y0_, sum(fp1_.ObligatedAmount) as y1_, sum(p2_.ObligatedAmount) as y2_ FROM fundingCategories this_ inner join fundingPrograms fp1_ on this_.fundingCategoryId = fp1_.fundingCategoryId left outer join projects p2_ on fp1_.fundingProgramId = p2_.fundingProgramId WHERE (fp1_.recipientId = 6 /* @p0 */ or p2_.recipientId = 6 /* @p1 */) GROUP BY this_.Name ORDER BY p2_.name asc, y1_ desc, y2_ desc, y0_ asc It is incorrectly putting the p2_.name asc into the ORDER BY clause, and causing it to crash. This only happens when I use JoinType.LeftOuterJoin on my Projects criteria. Is this a known NHibernate bug? I'm using NHibernate 2.0.1.4000. Thanks for any insight.

    Read the article

  • MySQL query: SELECT DISTINCT column1, GROUP BY column2

    - by Adam
    Right now I have the following query: SELECT name, COUNT(name), time, price, ip, SUM(price) FROM tablename WHERE time >= $yesterday AND time < $today GROUP BY name And what I'd like to do is add a DISTINCT by column 'ip', i.e. SELECT DISTINCT ip FROM tablename So my final output would be all the columns, from all the rows where time is today, grouped by name (with a name count for each repeating name) and no duplicate ip addresses. What should my query look like? (Or alternatively, how can I add the missing filter to the output with PHP?) Thanks in advance.
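
    A hedged sketch of one reading of this requirement: de-duplicate by ip in a derived table first, then group by name. Which row to keep per duplicate ip is left unspecified in the question, so this relies on MySQL's permissive GROUP BY picking an arbitrary one:

      SELECT name, COUNT(name), time, price, ip, SUM(price)
      FROM (
          SELECT name, time, price, ip
          FROM tablename
          WHERE time >= $yesterday AND time < $today
          GROUP BY ip
      ) AS deduped
      GROUP BY name;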

    Read the article

  • Open source / commercial alternative for jiffy.js?

    - by Marcel
    Hi, we'd like to measure "client-side web site performance", i.e. we would like to have answers to the following questions: how long did it take to load and render a web page (including all graphics, scripts, etc.), bundled with additional information like user agent, operating system, etc.; a graphical tool to analyze the data would be perfect. I know Jiffy ( http://code.google.com/p/jiffy-web/wiki/Jiffy_js ) but that isn't maintained anymore. I would prefer either a hosted solution (like Google Analytics) or a Java-based solution that we deploy ourselves. Do you know something like that? Thanks, Marc

    Read the article

  • Import Namespace System.Query

    - by GateKiller
    I am trying to load LINQ on my .NET 3.5 enabled web server by adding the following to my .aspx page: <%@ Import Namespace="System.Query" %> However, this fails and tells me it cannot find the namespace: The type or namespace name 'Query' does not exist in the namespace 'System' I have also tried the following, with no luck: System.Data.Linq System.Linq System.Xml.Linq I believe that .NET 3.5 is working because var hello = "Hello World" seems to work. Can anyone help please? Cheers, Stephen PS: I just want to clarify that I don't use Visual Studio; I simply have a text editor and write my code directly into .aspx files.

    Read the article

  • Complex MySQL query problem

    - by Scarface
    Hey guys, I have a query that selects data and organizes it, but not in the correct order. What I want to do is select all the comments for a user in that week, group them by topic, and then sort the clusters by the latest comment timestamp in each cluster. My current query selects the right data, but in seemingly random order. Does anyone have any ideas? select * from ( SELECT topic.topic_title, topic.topic_id FROM comments JOIN topic ON topic.topic_id=comments.topic_id WHERE comments.user='$user' AND comments.timestamp>$week order by comments.timestamp desc) derived_table group by topic_id
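
    A hedged sketch of the same query with the ordering done on an aggregate of each group, so that every topic cluster sorts by its most recent comment (column names are taken from the query above):

      SELECT topic.topic_title,
             topic.topic_id,
             MAX(comments.timestamp) AS latest_comment
      FROM comments
      JOIN topic ON topic.topic_id = comments.topic_id
      WHERE comments.user = '$user'
        AND comments.timestamp > $week
      GROUP BY topic.topic_id, topic.topic_title
      ORDER BY latest_comment DESC;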

    Read the article

  • Django: query with ManyToManyField count?

    - by AP257
    In Django, how do I construct a COUNT query for a ManyToManyField? My models are as follows, and I want to get all the people whose name starts with A and who are the lord or overlord of at least one Place, and order the results by name. class Manor(models.Model): lord = models.ManyToManyField(Person, null=True, related_name="lord") overlord = models.ManyToManyField(Person, null=True, related_name="overlord") class Person(models.Model): name = models.CharField(max_length=100) So my query should look something like this... but how do I construct the third line? people = Person.objects.filter( Q(name__istartswith='a'), Q(lord.count > 0) | Q(overlord.count > 0) # pseudocode ).order_by('name')

    Read the article

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two operations into 2 independent databases, so reports could run off one database and all the transactions could occur in the other one. This would improve performance for the OLTP SQL database. I have gone over a few options like mirroring, log shipping, replication, snapshots and clustering, but would like to discuss the best possible strategy for the desired result. Please advise on the best solution to implement this strategy, or share any other thoughts/suggestions you may have.

    Read the article

  • Using PHP to place database rows into an array?

    - by Hamed Szilazi
    I was just wondering how I would be able to perform an SQL query and then place each row into a new array. For example, let's say a table looked like the following: $people = mysql_query("SELECT * FROM friends") Output:

      | ID | Name | Age |
      |  1 | tom  |  32 |
      |  2 | dan  |  22 |
      |  3 | pat  |  52 |
      |  4 | nik  |  32 |
      |  5 | dre  |  65 |

    How could I create a multidimensional array that works in the following way: the first row's second column could be accessed using $people[0][1] and the fifth row's third column could be accessed using $people[4][2]? How would I go about constructing this type of array? Sorry if this is a strange question, it's just that I am new to PHP and SQL and would like to know how to directly access data. Performance and speed are not an issue, as I am just writing small test scripts to get to grips with the language.

    Read the article

  • Browser caching issue on an HTTPS site when pressing F5

    - by sushil bharwani
    I am working on a website where I have a content entry form. This form contains a TinyMCE control. The control is composed of some 40-50 files. Testing reported that the entry form loads slowly, and every time it shows some 50 files loading to completely load the page. Is there a way I can decrease this time? I have made use of browser caching by setting the Expires header of static content to a far-future date. When I access the form through its link a second or later time, it loads fast without the 40 files remaining, but when I press F5 it reloads the entire page. I am confused as to how F5 is different from clicking on the link. Just to add, my URL is HTTPS. Any suggestion to improve the performance of this form would be great.

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project, I have PostgreSQL as my master DB and Redis as kind of a slave; e.g., when some user adds another as a friend, first the relationship will be stored in PostgreSQL and then a friend list in Redis will be updated. When some user's friend list is requested, it will be pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the userid into the list? The latter is of course best for performance, but intuitively the former does a better job of keeping the data integrity? And if something like Celery is used, is the second method worth the risk?

    Read the article

  • How to efficiently SELECT rows from a database table based on a selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" to keep the customer's ID. There are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table. The user may select an arbitrary number of customers for rendering. I use the IN operator first and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly run into problems if I select a few thousand customers. There is a limitation on using the IN operator. An alternative way is to create a temporary table with only one field, CODE, and inject the selected customer codes into the temporary table using INSERT statements. I may then use SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERT injection and the dropping of the temporary table. Are you aware of a better solution to handle this situation?
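
    A hedged sketch of the temporary-table variant with multi-row inserts, which cuts down the per-statement INSERT overhead mentioned above (MySQL-flavoured syntax; the customer codes shown are placeholders):

      CREATE TEMPORARY TABLE temp_codes (code VARCHAR(20) PRIMARY KEY);

      -- Inject the selected codes in batches rather than one INSERT per code:
      INSERT INTO temp_codes (code) VALUES ('C0001'), ('C0002'), ('C0003');

      SELECT A.*
      FROM TRANS_TABLE A
      INNER JOIN temp_codes B ON A.CODE = B.CODE;

      DROP TEMPORARY TABLE temp_codes;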

    Read the article

  • How to determine CPU and RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress testing tools like Tsung, but how do I determine, for example, that I need X more RAM, or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for me to decide whether to upgrade one machine, or get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues, and how to anticipate them. Thanks in advance for the help!

    Read the article

  • SQL Server 2005 - query with case statement

    - by user329266
    Trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to: 1) Pull in all distinct records for values in the "eventid" column for a time frame - this seems to work. 2) For each eventid referenced above, I need to search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work. This is where things get complicated: 3) For each record where TaskName is like review1, I need to see if another record exists with the same eventid and where TaskName='End'. Ultimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND TaskName='End'. I would think this could be accomplished by setting a new value for each record: for the eventid, if a record exists with TaskName='End', set it to 1, and if not, set it to 0. The query below seems to accomplish item #1 above: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))) AS T WHERE seq = 1 order by eventid And the query below seems to accomplish #2: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 order by eventid This will bring back the eventids that also have a TaskName='End': SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and eventid in (Select eventid from eventrecords where TaskName = 'End') order by eventid So I've tried the following to TRY to accomplish #3: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and case when (eventid in (Select eventid from eventrecords where TaskName = 'End') then 1 else 0) as bit end order by eventid When I try to run this, I get: "Incorrect syntax near the keyword 'then'." Not sure what I'm doing wrong. Haven't seen any examples anywhere quite like this. I should mention that eventrecords has a primary key, but it doesn't seem to help anything when I include it, and I am not permitted to change the table. (ugh) I've received one suggestion to use a cursor and temporary table, but am not sure how badly that would bog down performance when the report is running. Thanks in advance.
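
    A hedged sketch of moving the CASE expression out of the WHERE clause and into the SELECT list, which matches the "set it to 1 if an 'End' record exists, else 0" idea (built directly on the last query above; the HasEnd alias is an assumption):

      SELECT eventid, TimeStamp, TaskName, filepath,
             CASE WHEN eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
                  THEN 1 ELSE 0 END AS HasEnd
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE TimeStamp >= '2010-4-1 00:00:00.000'
              AND TimeStamp <= '2010-4-21 00:00:00.000'
              AND TaskName LIKE 'Review1%') AS T
      WHERE seq = 1
      ORDER BY eventid;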

    Read the article

  • Would watching a file for changes or redundantly querying that file be more efficient?

    - by badpanda
    I am wondering whether watching a file/directory for changes using the FileSystemWatcher class is extremely memory intensive. I am developing a desktop application in C# that will be running behind the scenes continuously on low-performance computers, and I need some way of checking to see if various files have changed. I can think of a few solutions: Watch the directories using FileSystemWatcher. Run a timed thread on an interval that goes through and manually checks this. Check manually every time the actionhandler thread runs (the program will occasionally do something, on an action). Any suggestions? Thanks! badPanda

    Read the article
