Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 251/1148

  • Query a column and a calculation of columns at the same time in PostgreSQL

    - by pablo
    Hi, I have two tables, Products and BundleProducts, that each have a one-to-one (o2o) relation with BaseProducts. A BundleProduct is a collection of Products, using a many-to-many (m2m) relation to the Products table. Products has a price column, and the price of a BundleProduct is calculated as the sum of the prices of its Products. BaseProducts has columns like name and description, so I can query it to get both Products and BundleProducts. Is it possible to query and sort by price across both the price column of the Products and the calculated price of the BundleProducts? Thanks
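
    One possible shape for this, sketched only from the table names in the question (the base_id foreign keys and the bundle_items link table are assumptions, not confirmed schema): compute each kind of price in its own branch of a UNION ALL and sort the combined result.

        -- Plain products: price comes straight from the column.
        SELECT b.id, b.name, p.price
        FROM BaseProducts b
        JOIN Products p ON p.base_id = b.id

        UNION ALL

        -- Bundles: price is the sum of the member products' prices.
        SELECT b.id, b.name, SUM(p.price) AS price
        FROM BaseProducts b
        JOIN BundleProducts bp ON bp.base_id = b.id
        JOIN bundle_items m ON m.bundle_id = bp.id   -- hypothetical m2m link table
        JOIN Products p ON p.id = m.product_id
        GROUP BY b.id, b.name

        ORDER BY price;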

    Read the article

  • What's wrong with my SQL query?

    - by William
    I'm trying to set up a query that shows the first post of each thread, ordered by the date of the last post in each thread. I got the first part down with this query:

        SELECT * FROM ( SELECT Min( ID ) AS MinID FROM test_posts GROUP BY Thread ) tmin
        JOIN test_posts ON test_posts.ID = tmin.MinID

    Now I need to figure out how to get the last post of each thread into a table, then use that table to order the first table's results. So far I have this, but it doesn't work:

        SELECT * FROM ( SELECT Min( ID ) AS MinID FROM test_posts GROUP BY Thread ) tmin
        JOIN test_posts ON test_posts.ID = tmin.MinID
        ORDER BY (SELECT MAX( ID ) AS MaxID, Thread, MAX( Date ) FROM test_posts GROUP BY Thread ) tmax tmax.Date
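
    A sketch of one way the second step could look, built only from the table and column names quoted above (untested): join a MAX()-per-thread derived table alongside the MIN() one and order by its date.

        SELECT first_post.*
        FROM ( SELECT Thread, MIN( ID ) AS MinID FROM test_posts GROUP BY Thread ) tmin
        JOIN test_posts AS first_post ON first_post.ID = tmin.MinID
        -- second derived table: the latest post date per thread
        JOIN ( SELECT Thread, MAX( Date ) AS LastDate FROM test_posts GROUP BY Thread ) tmax
          ON tmax.Thread = tmin.Thread
        ORDER BY tmax.LastDate DESC;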

    Read the article

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to get in place earlier rather than trying to work it in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle or mitigate this.

    Read the article

  • In .NET, which loop runs faster: for or foreach?

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster, for or foreach? Ever since I read, a long time ago, that a for loop works faster than foreach, I assumed it held true for all collections: generic collections, arrays, etc. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open-ended. What would be ideal is to have each scenario listed along with the best solution for it, e.g. (just an example of how it should look): for iterating an array of 1000+ strings, for is better than foreach; for iterating over an IList (non-generic) of strings, foreach is better than for. A few references found on the web for the same: the original grand old article by Emmanuel Schanzer; CodeProject: FOREACH Vs. FOR; Blog: To foreach or not to foreach, that is the question; asp.net forum: .NET 1.1 C# for vs foreach. [Edit] Apart from the readability aspect of it, I am really interested in facts and figures; there are applications where the last mile of squeezed-out performance optimization does matter.

    Read the article

  • C#: read last "n" lines of log file

    - by frictionlesspulley
    I need a snippet of code that reads out the last "n" lines of a log file. I came up with the following code from the net; I am kinda new to C#. Since the log file might be quite large, I want to avoid the overhead of reading the entire file. Can someone suggest any performance enhancement? I do not really want to read each character and change position.

        var reader = new StreamReader(filePath, Encoding.ASCII);
        reader.BaseStream.Seek(0, SeekOrigin.End);
        var count = 0;
        while (count <= tailCount)
        {
            if (reader.BaseStream.Position <= 0) break;
            reader.BaseStream.Position--;
            int c = reader.Read();
            if (reader.BaseStream.Position <= 0) break;
            reader.BaseStream.Position--;
            if (c == '\n')
            {
                ++count;
            }
        }
        var str = reader.ReadToEnd();

    Read the article

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 & 8 under quite heavy load, i.e. 8k established connections during peak hours. Without depending too much on the application provider's expertise, I want to achieve maximum output from Linux. With respect to that I have the following questions: How do I find out whether there is scope for further TCP fine-tuning (without exhausting available resources), since the benchmark values given by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me this? If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter? And after tuning, how should I clearly measure the performance enhancement or degradation?

    Read the article

  • MongoDB efficient dealing with embedded documents

    - by Sebastian Nowak
    I have serious trouble finding anything useful in the Mongo documentation about dealing with embedded documents. Let's say I have the following schema:

        {
          _id: ObjectId,
          ...
          data: [
            { _childId: ObjectId, // let's use a custom name so we can distinguish them
              ... }
          ]
        }

    What's the most efficient way to remove everything inside data for a particular _id? What's the most efficient way to remove an embedded document with a particular _childId inside a given _id? What's the performance here; can _childId be indexed in order to achieve logarithmic (or similar) complexity instead of a linear lookup? If so, how? What's the most efficient way to insert a lot of (let's say 1000) documents into data for a given _id? And, as above, can we get O(n log n) or similar complexity with proper indexing? What's the most efficient way to get the count of documents inside data for a given _id?

    Read the article

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to db administration, so any input or suggestions are welcome.
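
    MySQL of this vintage does not appear to expose a per-table read-only switch in SQL, but one commonly suggested mitigation (a hedged sketch, table name illustrative) is to refresh the table's layout and statistics right after the monthly load and pre-warm its indexes, since nothing changes until the next load:

        -- Run once after each monthly archive load:
        OPTIMIZE TABLE archive_data;        -- defragments the data file and sorts the indexes
        ANALYZE TABLE archive_data;         -- refreshes the key distribution statistics
        LOAD INDEX INTO CACHE archive_data; -- preloads the MyISAM indexes into the key cache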

    Read the article

  • Problem with sqlite query when using the wrapper

    - by user285096
        - (IBAction)EnterButtonPressed:(id)sender {
            Sqlite *sqlite = [[Sqlite alloc] init];
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *writableDBPath = [documentsDirectory stringByAppendingPathComponent:@"test.sqlite"];
            if (![sqlite open:writableDBPath])
                return;
            NSArray *query = [sqlite executeQuery:@"SELECT AccessCode FROM UserAccess"];
            NSLog(@"%@", query);

    I am getting the output as: { ( AccessCode=abcd; ) } whereas I want it as: abcd. I am using the wrapper from: http://th30z.netsons.org/2008/11/objective-c-sqlite-wrapper/ Please help.

    Read the article

  • Query multiple currencies

    - by TiuTalk
    I need to store multiple currencies in my database... Here's the problem. Example tables:

        [ Products ]
        id (INT, PK)
        name (VARCHAR)
        price (DECIMAL)
        currency (INT, FK)

        [ Currencies ]
        id (INT, PK)
        name (VARCHAR)
        conversion (DECIMAL)  # to U$

    I'll store the product price with the currency selected by the user... Later I need to search the products using a price interval, like "Search products with price from U$ 50 to U$ 100", and I need the system to convert these values "on the fly" to run the SQL query and filter the products. And I really don't know how to make this query... :/
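
    A minimal sketch of one way to do the on-the-fly conversion, assuming conversion is the multiplier that turns a price into U$ (as the schema comment suggests); the bounds 50 and 100 stand in for user input:

        SELECT p.*
        FROM Products p
        JOIN Currencies c ON c.id = p.currency
        -- convert each product's price to U$ before applying the U$ interval
        WHERE p.price * c.conversion BETWEEN 50 AND 100
        ORDER BY p.price * c.conversion;

    Note that the expression on the price column defeats a plain index on price; if the catalogue is large, storing a normalized U$ price alongside the original may be worth considering.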

    Read the article

  • SHA1 Password returns as cleartext after DB query

    - by Code Sherpa
    Hi. I have a SHA1 password and PasswordSalt in my aspnet_Membership table, but when I run a query from the server (a SQL query), the reader reveals that the password has come back as its cleartext equivalent. I am wondering if my web.config configuration is causing this?

        <membership defaultProvider="CustomMembershipProvider" userIsOnlineTimeWindow="20" hashAlgorithmType="SHA1">
          <providers>
            <clear/>
            <add name="CustomMembershipProvider"
                 type="Custom.Utility.CustomMembershipProvider"
                 connectionStringName="MembershipDB"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresUniqueEmail="false"
                 requiresQuestionAndAnswer="false"
                 passwordStrengthRegularExpression=""
                 minRequiredPasswordLength="1"
                 minRequiredNonalphanumericCharacters="0"
                 passwordFormat="Hashed"

    thanks in advance...

    Read the article

  • Temporary intermediate table

    - by user289429
    In our project to generate massive reports in Oracle, we use permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table, and at the final step we join the intermediate table with huge application tables. These intermediate tables are cleared before the next report run. We have a few performance concerns: these intermediate tables are transactional and don't have statistics. Is it a good idea to join them with application tables which are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, hence we are not in a position to use Oracle-provided temporary tables. Any thoughts on what could be done would be appreciated.
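
    One mitigation sketch (not the only option): gather optimizer statistics on the intermediate table right after it is populated, so the final join against the partitioned application tables is planned with realistic cardinalities. The table name below is illustrative; a DYNAMIC_SAMPLING hint on the final query is a commonly cited alternative when gathering stats every run is too expensive.

        -- Hedged sketch: refresh stats once the intermediate table is loaded,
        -- then run the final report join in the same session.
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'REPORT_INTERMEDIATE');
        END;
        /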

    Read the article

  • Combining query rows in a loop

    - by icemanind
    I have the following ColdFusion 9 code:

        <cfloop from="1" to="#arrayLen(tagArray)#" index="i">
          <cfquery name="qryGetSPFAQs" datasource="#application.datasource#">
            EXEC searchFAQ '#tagArray[i]#'
          </cfquery>
        </cfloop>

    The EXEC executes a stored procedure on the database server, which returns rows of data depending on what the parameter is. What I am trying to do is combine the queries into one query object. In other words, if it loops 3 times and each loop returns 4 rows, I want a query object that has all 12 rows in one object. How do I achieve this?

    Read the article

  • In Java, which is better: three arrays of booleans or one array of bytes?

    - by joe_shmoe
    I know the question sounds silly, but consider this: I have an array of items and a labelling algorithm. At any point each item is in one of three states. The current version holds these states in a byte array, where 0, 1 and 2 represent the three states. Alternatively, I could have three arrays of booleans, one for each state. Which is better (consumes less memory) depends on how the JVM (Sun's version) stores the arrays: is a boolean represented by 1 bit? (P.S. Don't start with all that "this is not the way OO/Java works"; I know, but here performance comes first. Plus the algorithm is simple and perfectly readable even in this form.) Thanks a lot

    Read the article

  • Wordpress Custom Query

    - by InnateDev
    I have posts that use custom fields for a start date and an end date. query_posts returns an array of posts that exist in the category I'm filtering. How do I query posts using this custom field, which holds a date such as 03/11/2010, rather than getting the full array? Pagination works on the full array, so it returns all posts; if I use an if/else to show only the posts newer than today, pagination no longer works. Would I have to build a custom MySQL query?
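
    A hedged sketch of what such a raw query could look like, assuming the start date lives in wp_postmeta under a hypothetical meta_key of 'start_date' stored as mm/dd/yyyy text (adjust the key, the date format, and the table prefix to the actual setup):

        SELECT p.*
        FROM wp_posts p
        JOIN wp_postmeta m ON m.post_id = p.ID AND m.meta_key = 'start_date'
        WHERE p.post_status = 'publish'
          -- keep only posts whose start date is today or later
          AND STR_TO_DATE(m.meta_value, '%m/%d/%Y') >= CURDATE()
        ORDER BY STR_TO_DATE(m.meta_value, '%m/%d/%Y');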

    Read the article

  • Performing complex query with Dynamics CRM 4.0

    - by dub
    Hi, I have two custom entities, Product and ProductType, linked together in a many-to-one relationship. Product has a lookup field to ProductType. I'm trying to write a query to fetch Type1 products with a price over 100, and Type2 products with a price lower than 100. Here's how I would do it in SQL:

        select *
        from Product P
        inner join ProductType T on T.Id = P.TypeId
        where (T.Code = 'Type1' and P.Price >= 100)
           or (T.Code = 'Type2' and P.Price < 100)

    I can't figure out a way to build a QueryExpression to do exactly that. I know I could do it with two queries, but I'd like to minimize roundtrips to the server. Is there a way to perform that query in only one operation? Thanks!

    Read the article

  • Why does a small number of adds/deletes take several seconds in EF4?

    - by TomWij
    I'm using Entity Framework 4. I created a Code First database in the past, and a piece of code needs to delete and add 16 objects; this takes 6 seconds each. That's 300+ ms for each query! The deletes/adds occur in a foreach scope and there is a SaveChanges() outside the foreach. In the image (not reproduced here) you can see that each takes 6 seconds, which is 34% of the time, for 16 calls. That doesn't sound normal to me... Why is this, and how can I improve the performance? If there is no solution: are there any workarounds I can use? It would be a pain to rewrite my project...

    Read the article

  • MySQL Query with 2 columns in Table A related to 1 column in Table B

    - by CYREX
    I have 2 tables, User and Mail. In the User table I have 2 columns that I will use: the ID column, which is the index of the User table and makes the relation with the Mail table, and the Name column. In the Mail table I have a Receiver column and a Sender column. Both columns, Receiver and Sender, hold a number that relates to the ID column in the User table. The User table is where the Name column resides, and I want to make a query that shows me the Receiver and Sender columns but with the name of the user, not the ID. Up to this point I have this:

        SELECT name AS Send, name AS Receive FROM mail,user WHERE sender=guid;

    I know there is still a part of the query missing, but I cannot figure out what else to put to tell it to show, in the Send output column, the name of the sender and, in the Receive output column, the name of the receiver.
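
    A sketch of the usual pattern for this: join the user table twice, once per side of the message, under two different aliases. The column names below follow the question (swap ID for guid if that is the actual key column).

        SELECT s.Name AS Send,
               r.Name AS Receive
        FROM mail m
        JOIN user s ON s.ID = m.Sender    -- user row for the sender
        JOIN user r ON r.ID = m.Receiver; -- user row for the receiver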

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays and the performance is good, but the memory usage is high (as expected). I would like the array implementation not to waste space on values that are not used, and to allow an index offset other than zero. As an example, if my numbers start at 1,000,000 I would like to be able to index my array starting at 1,000,000 and not be required to waste memory on a million unused values. Array reads and writes need to be fast. Expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do it? Thanks!

    Read the article

  • SQL Not Exists in this Query - is it possible

    - by jason barry
    This is my script; it simply looks for the image file associated with a person record. The error below is displayed if there is no .jpg present when the query runs:

        Msg 4860, Level 16, State 1, Line 1
        Cannot bulk load. The file "C:\Dev\ClientServices\Defence\RAN\Shore\Config\Photos\002054.2009469432270600.001.jpg" does not exist.

    Is there a way to write this query as 'IF not exists then set id_number = 002054.2009469432270427.001', so it will always display this photo for any records without a picture?

        ALTER procedure [dbo].[as_ngn_sp_REP_PH108_photo] (@PMKEYS nvarchar(50))
        AS
        ---exec [as_ngn_sp_REP_PH108_photo] '8550733'
        SET NOCOUNT ON
        DECLARE @PATH AS NVARCHAR(255)
        DECLARE @ID_NUMBER NVARCHAR(27)
        DECLARE @SQL AS NVARCHAR(MAX)
        EXEC DB_GET_DB_SETTING 'STAFF PICTURE FILE LOCATION', 0, @PATH OUTPUT
        IF RIGHT(@PATH,1) <> '\' SET @PATH = @PATH + '\'
        SELECT @ID_NUMBER = ID_NUMBER FROM aView_person WHERE EXTRA_CODE_1 = @PMKEYS
        SET @PATH = @PATH + @ID_NUMBER + '.jpg'
        SET @SQL = 'SELECT ''Picture1'' [Picture], BulkColumn FROM OPENROWSET(Bulk ''' + REPLACE(@PATH,'''','''''') + ''', SINGLE_BLOB) AS RAN'
        EXEC SP_EXECUTESQL @SQL
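
    One hedged sketch of a fallback (untested): the undocumented but widely used master.dbo.xp_fileexist procedure can report whether the file is on disk, so the path can be rebuilt with the default id number before OPENROWSET runs. The base folder would need to be kept in a separate variable so the path can be reassembled; @BASE_PATH below is hypothetical.

        DECLARE @Exists INT
        EXEC master.dbo.xp_fileexist @PATH, @Exists OUTPUT
        IF @Exists = 0
        BEGIN
            -- fall back to the default photo quoted in the question
            SET @ID_NUMBER = '002054.2009469432270427.001'
            SET @PATH = @BASE_PATH + @ID_NUMBER + '.jpg'  -- @BASE_PATH: hypothetical copy of the folder returned by DB_GET_DB_SETTING
        END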

    Read the article

  • Is memcached a dinosaur in comparison to Redis?

    - by Industrial
    Hi everyone, I have worked quite a bit with memcached over the last few weeks and just found out about Redis. When I read this part of their readme, I suddenly got a warm, cozy feeling in my stomach: "Redis can be used as a memcached on steroids because is as fast as memcached but with a number of features more. Like memcached, Redis also supports setting timeouts to keys so that this key will be automatically removed when a given amount of time passes." This sounds amazing. I also found this page with benchmarks: http://www.ruturaj.net/redis-memcached-tokyo-tyrant-mysql-comparison So, honestly, is memcached really that old dinosaur that is a bad choice from a performance perspective when compared to this newcomer called Redis? I hadn't heard a lot about Redis previously, hence the question!

    Read the article

  • Adding a Third Table to a Two-Table Join Query

    - by John
    Hello, the query below works just fine. It pulls fields from two MySQL tables, "comment" and "login". It does this for rows where "username" in the table "login" equals the variable "$profile". It also pulls fields for rows where "loginid" in the table "comment" equals the "loginid" that is also being pulled from "login". I would like to pull data from a third table called "submission", which has the following fields: submissionid, loginid, title, url, displayurl, datesubmitted. I would like to pull fields from rows in "submission" where "loginid" equals the "loginid" that is already being pulled from the other two tables, "login" and "comment". How can I do this? Thanks in advance, John

    Query:

        $sqlStrc = "SELECT l.username, l.loginid, c.loginid, c.commentid, c.submissionid, c.comment, c.datecommented
                    FROM comment AS c
                    INNER JOIN login AS l ON c.loginid = l.loginid
                    WHERE l.username = '$profile'
                    ORDER BY c.datecommented DESC
                    LIMIT 10";
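
    A sketch of the straightforward extension, using the same join key (untested): add a second INNER JOIN on submission.loginid. Note that if a user has several submissions, every comment row is repeated once per submission, so depending on what the page displays, a separate query for the submissions may be the cleaner choice.

        SELECT l.username, l.loginid,
               c.commentid, c.comment, c.datecommented,
               s.submissionid, s.title, s.url, s.displayurl, s.datesubmitted
        FROM comment AS c
        INNER JOIN login AS l ON c.loginid = l.loginid
        INNER JOIN submission AS s ON s.loginid = l.loginid
        WHERE l.username = '$profile'
        ORDER BY c.datecommented DESC
        LIMIT 10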

    Read the article

  • django - order query set by postgres function

    - by thebiglife
    My initial question was here and was related to the Postgres backend: http://stackoverflow.com/questions/2408965/postgres-subquery-ordering-by-subquery Now my problem has moved on to the Django ORM layer. I essentially want to order a query by a Postgres function ('idx', taken from the above Stack Overflow work). I've tried using model.objects.extra(order_by=...) or simply order_by, but I believe both of these need the order_by parameter to be an attribute or a field known to Django. I'm trying to think how to solve this without having to revert to an entirely raw SQL query through a model manager.
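
    For what it's worth, the SQL the ORM ultimately needs to emit is just an ORDER BY on the function call, along the lines of the sketch below (the table name, column, and array argument are placeholders, not the poster's schema). One way to get Django to produce it without going fully raw is extra(select={'ordering': '...the idx(...) expression...'}, order_by=['ordering']), since extra's order_by can reference aliases defined in its select dict.

        -- Hedged sketch of the target SQL; myapp_item and the id list are illustrative.
        SELECT *
        FROM myapp_item
        ORDER BY idx(ARRAY[42, 7, 19], myapp_item.id);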

    Read the article
