Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • Rows dropping when I try to join data from two tables

    - by blcArmadillo
    I have a fairly simple query I'm trying to write. If I run the following query: SELECT parts.id, parts.type_id FROM parts WHERE parts.type_id=1 OR parts.type_id=2 OR parts.type_id=4 ORDER BY parts.type_id; I get all the rows I expect to be returned. Now when I try to grab the parent_unit from another table with the following query, six rows suddenly drop out of the result: SELECT parts.id, parts.type_id, sp.parent_unit FROM parts, serialized_parts sp WHERE (parts.type_id=1 OR parts.type_id=2 OR parts.type_id=4) AND sp.parts_id = parts.id ORDER BY parts.type_id In the past I've never really dealt with ORs in my queries, so maybe I'm just doing it wrong. That said, I'm guessing it's just a simple mistake. Let me know if you need sample data and I'll post some. Thanks.
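
    A likely explanation is that the six missing parts simply have no matching row in serialized_parts, so the implicit inner join filters them out. A minimal sketch of a LEFT JOIN that keeps those rows (using the table and column names from the question; unmatched rows return NULL for parent_unit):

      SELECT p.id, p.type_id, sp.parent_unit
      FROM parts p
      LEFT JOIN serialized_parts sp ON sp.parts_id = p.id   -- keep parts with no serialized row
      WHERE p.type_id IN (1, 2, 4)
      ORDER BY p.type_id;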

    Read the article

  • Entity framework and many to many queries unusable?

    - by John Landheer
    I'm trying EF out and I do a lot of filtering based on many to many relationships. For instance I have persons, locations and a personlocation table to link the two. I also have a role and personrole table. EDIT: Tables: Person (personid, name) Personlocation (personid, locationid) Location (locationid, description) Personrole (personid, roleid) Role (roleid, description) EF will give me persons, roles and location entities. EDIT: Since EF will NOT generate the personlocation and personrole entity types, they cannot be used in the query. How do I create a query to give me all the persons of a given location with a given role? In SQL the query would be select p.* from persons as p join personlocations as pl on p.personid=pl.personid join locations as l on pl.locationid=l.locationid join personroles as pr on p.personid=pr.personid join roles as r on pr.roleid=r.roleid where r.description='Student' and l.description='Amsterdam' I've looked, but I can't seem to find a simple solution.

    Read the article

  • SQL: Need to SUM on results that meet a HAVING statement

    - by Wasauce
    I have a table where we record per-user values like money_spent, money_spent_on_candy and the date. So the columns in this table (let's call it MoneyTable) would be: UserId, Money_Spent, Money_Spent_On_Candy, Date. My goal is to SUM the total amount of money_spent -- but only for those users who have spent more than 10% of their total spend for the date range on candy. What would that query be? I know how to select the users that qualify -- and then I can output the data and sum it by hand -- but I would like to do this in one single query. Here is the query to pull the per-user sum of spend for only the users that have spent 10% of their money on candy: SELECT UserId, SUM(Money_Spent), SUM(Money_Spent_On_Candy) / SUM(Money_Spent) AS PercentCandySpend FROM MoneyTable WHERE Date >= '2010-01-01' GROUP BY UserId HAVING PercentCandySpend > 0.1;
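
    One way to get the single number is to aggregate per user first and then sum only the qualifying users in an outer query. A sketch using the MoneyTable columns named above (it assumes Money_Spent never sums to zero for a user, otherwise the division needs guarding):

      SELECT SUM(user_total) AS total_spent_by_candy_lovers
      FROM (
          SELECT UserId, SUM(Money_Spent) AS user_total
          FROM MoneyTable
          WHERE Date >= '2010-01-01'
          GROUP BY UserId
          HAVING SUM(Money_Spent_On_Candy) / SUM(Money_Spent) > 0.1
      ) AS qualifying_users;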

    Read the article

  • Can a PL/pgSQL function contain a dynamic subquery?

    - by morpheous
    I am writing a PL/pgSQL function. The function has input parameters which specify (indirectly) which tables to read filtering information from. The function embeds business logic which allows it to select data from different tables based on the input arguments. The function dynamically builds a subquery which returns filtering data, which is then used to run the main query. My questions are: Is it 'legal' to use a dynamic subquery in a PL/pgSQL function? I can't see why not - but this question is related to the next one. AFAIK, PL/pgSQL functions are cached or precompiled by the query engine. How does having a function that generates dynamic subqueries impact the work of the query engine?
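
    For reference, dynamic SQL is legal inside PL/pgSQL via EXECUTE; statements run through EXECUTE are planned at call time rather than using the cached plans that static statements in the function body get. A minimal sketch (the table main_data and the columns filter_id, id and category are hypothetical, and format() with %I needs PostgreSQL 9.1 or later; older releases can build the string with quote_ident()):

      CREATE OR REPLACE FUNCTION count_filtered(p_filter_table text, p_filter_value integer)
      RETURNS bigint
      LANGUAGE plpgsql
      AS $$
      DECLARE
          v_count bigint;
      BEGIN
          -- EXECUTE plans this statement at call time; %I quotes the identifier safely
          EXECUTE format(
              'SELECT count(*)
                 FROM main_data m
                WHERE m.filter_id IN (SELECT f.id FROM %I f WHERE f.category = $1)',
              p_filter_table)
          INTO v_count
          USING p_filter_value;
          RETURN v_count;
      END;
      $$;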

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand and personally when companies do this I immediately think what are they hiding, luckily in FusionIO’s case the product is proven though is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000IOps at peak times, because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard SQL Server; write activity requires synchronous IO to the storage media specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off until the thread can continue execution, the requirement has stated that 30% of the system activity will be write so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
From the result it is clear random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the work load is random in nature looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required but the latency is way out. Luckily I have tested 6 x 15Krpm 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology, for the same test above on a 130 GiB for each drive added the performance boost is near linear, for each drive added throughput goes up by 5 MiB/sec, IOps by 700 IOps and latency reducing nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives 130 / 1, 130 / 2, 130 / 3 etc. so implicit short stroking is occurring because there is less file on each drive so less head movement required. The best latency is still 30 ms but we have the IOps required now, but that’s on a 130GiB file and not the 400GiB we need. Some reality check here: a) the drive randomness is more likely to be 50/50 and not a full 100% but the above has highlighted the effect randomness has on the drive and the more a drive fills with data the worse the effect. For argument sake let us assume that for the given workload we need 8 disks to do the job, for resilience reasons we will need 16 because we need to RAID 1+0 them in order to get the throughput and the resilience, RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60 For the hard drives we will need disk controllers and a separate external disk array because the likelihood is that the server itself won’t take the drives, a quick spec off DELL for a PowerVault MD1220 which gives the dual pathing with 16 disks 146GB 15Krpm 2.5” disks is priced at £7,438.00, note its probably more once we had two controller cards to sit in the server in, racking etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} £7,438.60 / 400 = £18.595 per GB £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146 disks in RAID 10 (therefore 8 usable) giving an effective usable storage availability of 1168GB but the actual storage requirement is only 400 and the extra disks have had to be purchased to get the  IOps up. Solid State Drive solution A single card significantly exceeds the IOps and latency required, for resilience two will be required. ( £2,316.54 * 2 ) / 400 = £11.58 per GB With the SSD solution only two PCIe sockets are required, no external disk units, no additional controllers, no redundant controllers etc. Conclusion I hope by showing you an example that the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for Hard Disk. I’ve not even touched on the running costs, compare the costs of running 18 hard disks, that’s a lot of heat and power compared to two PCIe cards!Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :)I'll also deal with the myth that SSD's wear out at a later date as well - that's just way over done still, yes, 5 years ago, but now - no.

    Read the article

  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application that has an n+1 query problem, but when I implemented a way to load the data eagerly, I found absolutely no performance gain. I do use an identity map, so objects are only created once. Here's a benchmark of ~3000 objects. first query + first object creation: 0.00636100769043 sec. memory usage: 190008 bytes iterate through all objects (queries + objects creation): 1.98003697395 sec. memory usage: 7717116 bytes And here's one when I use eager loading. query: 0.0881109237671 sec. memory usage: 6948004 bytes object creation: 1.91053009033 sec. memory usage: 12650368 bytes iterate through all objects: 1.96605396271 sec. memory usage: 12686836 bytes So my questions are Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.) Does this just seem wrong to anyone? Shouldn't eager loading have given much better performance?
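
    For comparison, the two query shapes look roughly like the sketch below (parent_rows, child_rows and parent_id are hypothetical names, since the post does not show its schema). Given that the measured query time is small next to the object-creation time, collapsing the queries mainly saves round trips rather than construction cost:

      -- N+1 shape: one of these per parent row, roughly 3000 round trips
      SELECT * FROM child_rows WHERE parent_id = ?;

      -- Eager shape: one statement, rows regrouped by parent_id in application code
      SELECT p.id AS parent_id, c.*
      FROM parent_rows p
      LEFT JOIN child_rows c ON c.parent_id = p.id;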

    Read the article

  • Graph API - Get events by owner/creator

    - by jwynveen
    Is there a way with the Facebook Graph API to get a list of all events created by a single profile? Our client creates a bunch of events and we want to pull a list of them all. I said that they would just have to make sure they set themselves to be attending the event, because then I can easily pull the list of events that profileId is attending, but I'm curious if there's another way. Maybe an FQL query? They look to require a query on the primary key though. And what would that FQL query look like if that's the way to do it??

    Read the article

  • strange SqlAlchemy update behaviour

    - by Max
    I'm new to SqlAlchemy and Elixir, so I've started from the tutorial and tried to create a table, insert a record, and then update it as follows: #'elixir_test.py' from elixir import * metadata.bind = "postgresql://myuser:mypwd@localhost:5432/dbname" metadata.bind.echo = True class Movie(Entity): title = Field(Unicode(30)) year = Field(Integer) description = Field(UnicodeText) def __repr__(self): return '<Movie "%s" (%d)>' % (self.title, self.year) and in another file in the same directory: from elixir_test import * setup_all() #create table create_all() Movie(title=u"Blade Runner", year=1982) #add record session.commit() #get records Movie.query.all() #trying to update record and commit changes, BUT... movie = Movie.query.first() movie.year = 1983 session.commit() #now we have two records in our table, one #with year=1982 and one with year=1983 Movie.query.all() What did I miss?

    Read the article

  • Rails eager loading

    - by Dimitar Vouldjeff
    Hi, I have a Test model, which has_many questions, and Question, which has_many answers... When I make a query for a Test with :include => [:questions, {:questions => :answers}] ActiveRecord makes two more queries to fetch the questions and then to fetch the answers - it doesn't join them!!! When I do the query with :joins ActiveRecord makes the query, but later when I need Test.questions or Test.questions.answers ActiveRecord again makes those 2 extra queries!!! And later when I enumerate the questions or answers, in the log I see other queries for each object, but they have a Cache tag... Is this normal?

    Read the article

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows and a reporting application that needs to display paged datasets of that view. We're using nhibernate and recently noticed that its paging mechanism looks like this: select * from (select rownumber() over() as rownum, this_.COL1 as COL1_20_0_, this_.COL2 as COL2_20_0_ FROM SomeSchema.SomeView this_ WHERE this_.COL1 = 'SomeValue') as tempresult where rownum between 10 and 20 The query brings the db server to its knees. I think what's happening is that the nested query is assigning a row number to every row satisfied by the where clause before selecting the subset (rows 10 - 20). Since the nested query will return a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms but I'm struggling to find a DB2 solution. In fact an article on IBM's own site recommends the approach that nhibernate has taken. Is there a better way?
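
    One commonly suggested alternative is keyset ("seek") paging, which avoids numbering every qualifying row; this is only a sketch and assumes the view exposes an indexed, unique ordering column (KEYCOL below is hypothetical):

      SELECT COL1, COL2, KEYCOL
      FROM SomeSchema.SomeView
      WHERE COL1 = 'SomeValue'
        AND KEYCOL > ?                 -- last KEYCOL value seen on the previous page
      ORDER BY KEYCOL
      FETCH FIRST 10 ROWS ONLY
      OPTIMIZE FOR 10 ROWS;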

    Read the article

  • Using sub filters/queries in Google App Engine

    - by fredrik
    Hi, I'm trying to figure out how to sub-query a query that uses a filter. From what I've figured out so far, .filter() changes the original query, which means a second .filter() would also have to match the first filter. I would like to make something like this: modules = data.Modules.all().filter('page = ', page.key()) modules.filter('name = ', 'Test') modules.filter('name = ', 'Test2') I can't get the "Test2" filter to work. The only solution I have at the moment is to make all-new queries: data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test").get() data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test2").get() Or write the same as GQL. But to me that seems like quite a stupid way to go. I've looked at using ancestors, but I don't quite understand them and honestly don't know if that's the way to go. Any ideas? ..fredrik

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar. Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL: - On-Demand webinar (coming soon!) - Slides used during the webinar - Guide to MySQL and NoSQL whitepaper - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc. So here is the Q&A from the event. Q. Where does MySQL Cluster fit in to the CAP theorem? A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post. Q. Can you configure the number of replicas? (the slide used a replication factor of 1) Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy etc. Usually there's no benefit in setting it >2. Q. Interestingly most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor). Yes, with configurable quorum based Reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need > 2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper which described a 3 replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability, and linearly increased cost. Q. Can you have cross node group JOINS? Wouldn't that run into the risk of flooding the network? MySQL Cluster 7.2 supports cross nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join. Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx.
350k ops/sec per processor which is the largest number I've seen lately The details are linked from Mikael Ronstrom's blog The benchmark uses a benchmarking tool we call flexAsynch which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free minimal copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables, there is no need to read anything from disk. Q. Is access control (like table) planned to be supported for NoSQL access mode? Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn-on connection-level access control. But specifically table level controls are not planned. Q. How is the performance of memcached APi with MySQL against memcached+MySQL or any other Object Cache like Ecache with MySQL DB? With the memcache API we generally see a memcached response in less than 1 ms. and a small cluster with one memcached server can handle tens of thousands of operations per second. Q. Can .NET can access MemcachedAPI? Yes, just use a .Net memcache client such as the enyim or BeIT memcache libraries. Q. Is the row level locking applicable when you update a column through memcached API? An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip. Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct? Not that I'm aware of but absolutely you can use it with php or any of the other drivers Q. For beginner we need more examples. Take a look here for a fully worked example Q. Can I access MySQL using Cobol (Open Cobol) or C and if so where can I find the coding libraries etc? A. There is a cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every mysql distribution Q. Is there a place to go to find help when testing and/implementing the NoSQL access? If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum Q. Are there any white papers on this?  Yes - there is more detail in the MySQL Guide to NoSQL whitepaper If you have further questions, please don’t hesitate to use the comments below!

    Read the article

  • NSStream sockets missing data

    - by Chris T.
    I am trying to pull some sample data from FreeDB as a proof of concept, but I am having a tough time retrieving all of the data off the incoming stream (I am only getting the last bits for the final query listed here (if handshakeCode = 3) I think this may be something with the threading on the main runloop, but I am not sure. Odd thing is when the buffer size is larger than 1-2 bytes (which works as expected), I seem to be losing access to the data programmatically (the totalOutput variable on the first set of data is incomplete). I set up a packet capture, and it looks like those 1024 bytes are coming across the wire, but the app just isn't working with it. It looks like the next event is coming through and basically taking over. I tried using an NSLock to no avail as well. If I drop the buffer size down to 1 or 2, things seem to be reading just fine. This is probably obvious to someone who does this all the time, but this is my first foray into this with something I am familiar with, technology wise in other languages / platforms. The following code will show you what is happening. Run with the buffer set to 1024, and you will see a short final string, but once you set it to 1, you will see the amount of data I was expecting (I was even expecting it to be split, so that's not a big worry) #import <Foundation/Foundation.h> #import <Cocoa/Cocoa.h> //STACK OVERFLOW CODE: @interface stackoverflow : NSObject <NSStreamDelegate> { NSInputStream *iStream; NSOutputStream *oStream; int handshakeCode; NSString *selectedDiscId; NSString *selectedGenre; } -(void)getMatchesFromFreeDB; -(void)sendToOutputStream:(NSString*)command; @end @implementation stackoverflow -(void)getMatchesFromFreeDB { NSHost *host = [NSHost hostWithName:@"freedb.freedb.org"]; [NSStream getStreamsToHost:host port:8880 inputStream:&iStream outputStream:&oStream]; [iStream retain]; [oStream retain]; [iStream setDelegate:self]; [oStream setDelegate:self]; [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [iStream open]; [oStream open]; handshakeCode = 0; //not done any processing } -(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode { switch(eventCode) { case NSStreamEventOpenCompleted: { NSLog(@"Stream open completed"); break; } case NSStreamEventHasBytesAvailable: { NSLog(@"Stream has bytes available"); if (aStream == iStream) { NSMutableString *totalOutput = [NSMutableString stringWithString:@""]; //read data uint8_t buffer[1024]; int len; while ([iStream hasBytesAvailable]) { len = [iStream read:buffer maxLength:sizeof(buffer)]; if (len 0) { NSString *output = [[NSString alloc] initWithBytes:buffer length:len encoding:NSUTF8StringEncoding]; //this could have also been put into an NSData object if (nil != output) { //append to the total output [totalOutput appendString:output]; } } } NSLog(@"OUTPUT , %i:\n\n%@", [totalOutput lengthOfBytesUsingEncoding:NSUTF8StringEncoding], totalOutput); NSArray *outputComponents = [totalOutput componentsSeparatedByString:@" "]; //Attempt to get handshake code, since we haven't done it yet: if (handshakeCode == 1) { //we are just getting the sign-on banner: //let's move on: handshakeCode = 2; } else if (handshakeCode == 2) { handshakeCode = [[outputComponents objectAtIndex:0] intValue]; if (handshakeCode == 200) { NSLog(@"---Handshake OK %i", handshakeCode); NSMutableString *query = [NSMutableString stringWithString:@"cddb query f3114b11 17 225 19915 
36489 54850 69425 87025 103948 123242 136075 152817 178335 192850 211677 235104 262090 284882 308658 4430\n"]; handshakeCode = 3; [self sendToOutputStream:query]; } } else if (handshakeCode == 3) { //now, we are reading out the matches: if ([[outputComponents objectAtIndex:0] intValue] == 200) //found exact match: { NSLog(@"Found exact match"); selectedGenre = [outputComponents objectAtIndex:1] ; selectedDiscId = [outputComponents objectAtIndex:2]; if (selectedGenre && selectedDiscId) { //send off the request to get the entry: NSString *query = [NSString stringWithFormat:@"cddb read %@ %@\n", selectedGenre, selectedDiscId]; [self sendToOutputStream:query]; handshakeCode = 4; } } } } break; } case NSStreamEventEndEncountered: { NSLog(@"Stream event end encountered"); break; } case NSStreamEventErrorOccurred: { NSLog(@"Stream error occurred"); break; } case NSStreamEventHasSpaceAvailable: { NSLog(@"Stream has space available"); if (aStream == oStream) { if (handshakeCode == 0) { handshakeCode = 1; [self sendToOutputStream:@"cddb hello stackoverflow localhost.localdomain test .01BETA\n"]; } } break; } } } -(void)sendToOutputStream:(NSString*)command { const uint8_t *rawCommand = (const uint8_t *)[command UTF8String]; [oStream write:rawCommand maxLength:strlen(rawCommand)]; NSLog(@"Sent command: %@",command); } @end int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; stackoverflow *test = [[stackoverflow alloc] init]; [test getMatchesFromFreeDB]; NSRunLoop *runLoop = [NSRunLoop currentRunLoop]; [runLoop run]; [pool drain]; return 0; } Any help is much appreciated! Thanks

    Read the article

  • android content provider robustness on provider crash

    - by user1298992
    On Android platforms (confirmed on ICS), if a content provider dies while a client is in the middle of a query (i.e. has an open cursor), the framework decides to kill the client processes holding an open cursor. Here is a logcat output from when I tried this with a download manager query that sleeps after doing a query (the "sleep" was to reproduce the problem; you can imagine it happening in a regular use case when the provider dies at the right/wrong time) and then killed com.android.media (which hosts the DownloadProvider): "Killing com.example (pid 12234) because provider com.android.providers.downloads.DownloadProvider is in dying process android.process.media" I tracked the code for this to ActivityManagerService::removeDyingProviderLocked. Is this a policy decision, or is cursor access unsafe after the provider has died? It looks like the client cursor is holding an fd for an ashmem location populated by the CP. Is this the reason the clients are killed instead of throwing an exception the way Binders do when the server (provider) dies?

    Read the article

  • Disabling scoring in Lucene(.NET)

    - by user72185
    Hi, When searching, is there a way to disable scoring for any query? The scenario is that the user refines his query by trying different combinations of words, phrases etc., and needs realtime (well, reasonably fast at least) responses on the number of hits. Search time slows down a lot when there are millions of hits due to scoring, but the user really doesn't care about all these documents. As soon as he sees there are 1M+ hits he will start adding additional words to the query. A "Sort by relevance" option would allow him to do this quickly, while turning scoring back on when the number of hits is reasonable. Is this possible? I'm using Lucene.NET 2.9.2 but AFAIK it is identical to the Java version.

    Read the article

  • Fetch data from multiple MySQL tables

    - by Jon McIntosh
    My two tables look like this: TABLE1 TABLE2 +--------------------+ +--------------------+ |field1|field2|field3| and |field2|field4|field5| +--------------------+ +--------------------+ I am already running a SELECT query for TABLE1, and assorting all of the data into variables: $query = "SELECT * FROM TABLE1 WHERE field2 = 2"; $result = mysql_query($query); $num_rows = mysql_num_rows($result); if((!is_bool($result) || $result) && $num_rows) { while($row = mysql_fetch_array($result)) { $field1 = $row['field1']; $field2 = $row['field2']; $field3 = $row['field3']; } } What I want to do is get the data from 'field4' on TABLE2 and add it to my variables. I would want to get field4 WHERE field2 = 2
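
    If the goal is simply to have field4 available in the same loop, a single join on field2 should do it (a sketch using the table and column names above; the LEFT JOIN keeps the TABLE1 row even when TABLE2 has no match). The extra column then shows up as $row['field4'] in the existing while loop:

      SELECT t1.field1, t1.field2, t1.field3, t2.field4
      FROM TABLE1 t1
      LEFT JOIN TABLE2 t2 ON t2.field2 = t1.field2
      WHERE t1.field2 = 2;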

    Read the article

  • Detect if a table contains a column in Android/sqlite

    - by sandis
    So I have an app on the market, and with an update I want to add some columns to the database. No problems so far. But I want to detect if the database in use is missing these columns, and add them if this is the case. I need this to be done dynamically and not just after the update to the new version, because the application is supposed to still be able to import older databases. Normally I would be able to use the PRAGMA query, but I'm not sure how to do this with Android. I can't use execSQL since it is a query, and I can't figure out how to use PRAGMA with the query()-function. Of course I could just catch exceptions and then add the column, or always add the columns to each table before I start to work with it, but that is not a neat solution. Cheers,
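
    On the SQL side the check is just the pragma itself, which returns rows like any other query (in Android it would typically be issued through rawQuery rather than execSQL, though that detail is outside this sketch). The table and column names below are placeholders:

      -- One row per existing column: cid, name, type, notnull, dflt_value, pk
      PRAGMA table_info(my_table);

      -- If no returned row has name = 'new_column', the column can then be added
      ALTER TABLE my_table ADD COLUMN new_column INTEGER DEFAULT 0;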

    Read the article

  • Problem with order by in LINQ

    - by vikitor
    Hi, I'm passing from the controller an array generated by the following code: public ActionResult GetClasses(bool ajax, string kingdom) { int _kingdom = _taxon.getKingdom(kingdom); var query = (from c in vwAnimalsTaxon.All() orderby c.ClaName select new { taxRecID = c.ClaRecID, taxName = c.ClaName }).Distinct(); return Json(query, JsonRequestBehavior.AllowGet); } The query list should be ordered, but it doesn't work: I get the names of the classes ordered wrong in the array, and I've confirmed while debugging that the names are not ordered. The view is just a dropdown box loaded automatically, so I'm almost sure the problem is with the action. Do you see anything wrong? Am I missing something?

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview
Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems, not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next. Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:
1. Identify an Executive Champion
2. Put together a winning team
3. Assign ownership
4. Centralize publishing
5. Establish the Document Maintenance Process Up Front
6. Document critical activities only
7. Document actual practice
8. Minimize documentation
9. Support continuous improvement
10. Keep it simple
1. Identify an Executive Champion
Appoint a top down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when known to express the thinking for the chief executive officer and / or the president and to have his or her full support.
2. Put Together a Winning Team
Choose a strong Project Management Leader and staff the procedure planning team with management members from cross functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.
3. Assign Ownership
It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.
4. Centralize Publishing
Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gate keeper) to manage the updates and distribution of the procedures library.
5.
Establish a Document Maintenance Process Up Front (and stick to it) Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated. 6. Document Critical Activities Only The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes, it is the interaction between co-workers that necessitates documentation; sometimes, it is the complexity or the diversity of the activity.7. Document Actual Practices The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten.Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.8. Minimize Documentation One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary.When writing a document, you should ask yourself, What is the purpose of this document? That is, what problem will it solve?By focusing on this question, you can target the critical information.• What questions are the end users likely to have?• What level of detail is required?• Is any of this information extraneous to the document's purpose? Short, concise documents are user friendly and they are easier to keep up to date. 9. 
Support Continuous Improvement Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees.Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee. 10. Keep it Simple Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simply by:• Minimizing skills and training required• Following the established Tutor document format and layout• Avoiding technology just for technology's sake No other rule has as major an impact on the success of your internal documentation as -- keep it simple. Learn More For more information about Tutor, visit Oracle.Com or the Tutor Blog. Post your questions at the Tutor Forum.   Emily Chorba Principle Product Manager Oracle Tutor & BPM 

    Read the article

  • MySQLNonTransientConnectionException in jdbc program at run time

    - by Sunil Kumar Sahoo
    Hi, I have created a JDBC MySQL connection. My program works fine for simple execution of queries, but if I run the same program for more than 10 hours and execute a query, I receive the following MySQL exceptions. I have not used the close() method anywhere. I created the database connection, opened it forever, and always execute queries on it; there is nowhere that I explicitly set a timeout for the connection. I am unable to identify the problem. com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Connection.close() has already been called. Invalid operation in this state. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after statement closed. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) Sample code for the database connection: String driver = PropertyReader.getDriver(); String url = dbURLPath; Class.forName(driver); connectToServerDB = DriverManager.getConnection(url); connectToServerDB.setAutoCommit(false);

    Read the article

  • Count Distinct With IF in MySQL?

    - by user1600801
    I need to do a query with COUNT DISTINCT and IF, but the results are always 0. What I need to do is count the distinct users from a table in different months, using IF. My individual query for one month is this: SELECT COUNT(DISTINCT(idUsers)) AS num_usuarios FROM table01 WHERE date1='201207' But I need to get the results for different months in the same query. What I'm trying to do is this: SELECT IF(date1=(201207), count(distinct(idUsers)), 0) as user30, IF(fecha1=(201206), count(distinct(idUsers)), 0) as user60, IF(fecha1=(201205), count(distinct(idUsers)), 0) as user90, IF(fecha1=(201204), count(distinct(idUsers)), 0) as user120, IF(fecha1=(201203), count(distinct(idUsers)), 0) as user150 FROM table01 But all the results are always 0.
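
    A sketch of the usual fix: put the month test inside the aggregate instead of around it, so each COUNT(DISTINCT ...) only counts rows from its own month (this assumes the column is date1 throughout; the posted query mixes in fecha1, presumably the same column):

      SELECT
          COUNT(DISTINCT IF(date1 = '201207', idUsers, NULL)) AS user30,
          COUNT(DISTINCT IF(date1 = '201206', idUsers, NULL)) AS user60,
          COUNT(DISTINCT IF(date1 = '201205', idUsers, NULL)) AS user90,
          COUNT(DISTINCT IF(date1 = '201204', idUsers, NULL)) AS user120,
          COUNT(DISTINCT IF(date1 = '201203', idUsers, NULL)) AS user150
      FROM table01;

    COUNT(DISTINCT ...) ignores the NULLs produced for non-matching rows, so each column counts only its month.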

    Read the article

  • How to implement a left outer join in the Entity Framework.

    - by user206736
    I have the following SQL query:- select distinct * from dbo.Profiles profiles left join ProfileSettings pSet on pSet.ProfileKey = profiles.ProfileKey left join PlatformIdentities pId on pId.ProfileKey = profiles.Profilekey I need to convert it to a LinqToEntities expression. I have tried the following:- from profiles in _dbContext.ProfileSet let leftOuter = (from pSet in _dbContext.ProfileSettingSet select new { pSet.isInternal }).FirstOrDefault() select new { profiles.ProfileKey, Internal = leftOuter.isInternal, profiles.FirstName, profiles.LastName, profiles.EmailAddress, profiles.DateCreated, profiles.LastLoggedIn, }; The above query works fine because I haven't considered the third table "PlatformIdentities". Single left outer join works with what I have done above. How do I include PlatformIdentities (the 3rd table) ? I basically want to translate the SQL query I specified at the beginning of this post (which gives me exactly what I need) in to LinqToEntities. Thanks

    Read the article

  • Searching for Records

    - by 47
    I've come up with a simple search view to search for records in my app. The user just enters all parameters in the search box, this is matched against the database, and the results are returned. One of these fields is the phone number... in the database it's stored in the format XXX-XXX-XXXX. A search, for example, for "765-4321" pulls up only "416-765-4321"; however, I want it to return both "416-765-4321" and "4167654321". My view is as below: def search(request, page_by=None): query = request.GET.get('q', '') if query: term_list = query.split(' ') q = Q(first_name__icontains=term_list[0]) | Q(last_name__icontains=term_list[0]) | Q(email_address__icontains=term_list[0]) | Q(phone_number__icontains=term_list[0]) for term in term_list[1:]: q.add((Q(first_name__icontains=term) | Q(last_name__icontains=term) | Q(email_address__icontains=term) | Q(phone_number__icontains=term)), q.connector) results = Customer.objects.filter(q).distinct() all = results.count() else: results = [] if 'page_by' in request.GET: page_by = int(request.REQUEST['page_by']) else: page_by = 50 return render_to_response('customers/customers-all.html', locals(), context_instance=RequestContext(request))
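
    The underlying idea is to strip the separator on both sides before comparing. A sketch of the SQL predicate (the table name is a placeholder for wherever the Customer model is stored, and REPLACE/CONCAT as written are MySQL-flavoured); in Django this kind of expression could be attached with QuerySet.extra() or a raw query:

      SELECT *
      FROM customer_table
      WHERE REPLACE(phone_number, '-', '') LIKE CONCAT('%', REPLACE('765-4321', '-', ''), '%');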

    Read the article

  • How to avoid SQLiteException locking errors

    - by TheArchedOne
    I'm developing an Android app. It has multiple threads reading from and writing to the Android SQLite db. I am receiving the following error: SQLiteException: error code 5: database is locked I understand that SQLite locks the entire db on inserting/updating, but these errors only seem to happen when inserting/updating while I'm running a select query. The select query returns a cursor which is being left open quite a while (a few seconds sometimes) while I iterate over it. If the select query is not running, I never get the locks. I'm surprised that the select could be locking the db.... is this possible, or is something else going on? What's the best way to avoid such locks? Thanks TAO
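
    One mitigation worth testing, assuming the device's SQLite build supports it, is write-ahead logging, which lets readers keep cursors open while a writer commits (newer Android versions also expose this as SQLiteDatabase.enableWriteAheadLogging()); reading the result set into memory and closing the cursor quickly helps either way:

      -- WAL mode: readers no longer block the single writer
      PRAGMA journal_mode = WAL;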

    Read the article

  • Issue Calculating from Rows and Columns(Summing two columns with the third of a different row)

    - by vstsdev
    With reference to my previous question Adding columns resulting from GROUP BY clause SELECT AcctId,Date, Sum(CASE WHEN DC = 'C' THEN TrnAmt ELSE 0 END) AS C, Sum(CASE WHEN DC = 'D' THEN TrnAmt ELSE 0 END) AS D FROM Table1 where AcctId = '51' GROUP BY AcctId,Date ORDER BY AcctId,Date I executed the above query and got my desired result.. AcctId Date C D 51 2012-12-04 15000 0 51 2012-12-05 150000 160596 51 2012-12-06 600 0 now I have a another operation to do on the same query i.e. I need the result to be like this AcctId Date Result 51 2012-12-04 (15000-0)-> 15000 51 2012-12-05 (150000-160596) + (15000->The first value) 4404 51 2012-12-06 600-0 +(4404 ->The calculated 2nd value) 5004 Is it possible with the same query??.

    Read the article
