Search Results

Search found 33029 results on 1322 pages for 'database queries'.

Page 307 of 1322

  • Can I make these two LINQ queries into one query only?

    - by Holli
    From a List of builtAgents I need all items with OptimPriority == 1 and only 5 items with OptimPriority == 0. I do this with two separate queries, but I wonder if I could do it with only one query. IEnumerable<Agent> priorityAgents = from pri in builtAgents where pri.OptimPriority == 1 select pri; IEnumerable<Agent> otherAgents = (from oth in builtAgents where oth.OptimPriority == 0 select oth).Take(5);

    Read the article

  • What is the difference between these 2 XML LINQ Queries?

    - by Jon
    I have the 2 following LINQ queries, and I haven't quite got my head around LINQ, so what is the difference between the 2 approaches below? Is there a circumstance where one approach is better than the other? ChequeDocument.Descendants("ERRORS").Where(x => (string)x.Attribute("D") == "").Count(); (from x in ChequeDocument.Descendants("ERRORS") where (string)x.Attribute("D") == "" select x).Count()

    Read the article

  • How do I reload data in a tableView when the tableView accesses data from a database?

    - by Ajeet Kumar Yadav
    I am new to iPhone development and am building an application that takes values from a database and displays them in a tableView. The application copies data from one database table to another; this works the first time, but the second time the application crashes. I don't understand how to solve this problem. My app delegate code that inserts values from one table into the other is given below:
      -(void)sopinglist {
          //////databaseName= @"SanjeevKapoor.sqlite";
          databaseName = @"AjeetTest.sqlite";
          NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString *documentsDir = [documentPaths objectAtIndex:0];
          databasePath = [documentsDir stringByAppendingPathComponent:databaseName];
          [self checkAndCreateDatabase];
          list1 = [[NSMutableArray alloc] init];
          sqlite3 *database;
          if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
              if (addStmt == nil) {
                  ///////////const char *sql = "insert into Dataa(item) Values(?)";
                  const char *sql = "insert into Slist select * from alootikki";
                  ///////////// const char *sql = "Update Slist ( Incredients, Recipename, foodtype) Values(?,?,?)";
                  if (sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK)
                      NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database));
              }
              /////for( NSString * j in k)
              sqlite3_bind_text(addStmt, 1, [k UTF8String], -1, SQLITE_TRANSIENT);
              //sqlite3_bind_int(addStmt, 1, i);
              // sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT);
              // sqlite3_bind_double(addStmt, 2, [price doubleValue]);
              if (SQLITE_DONE != sqlite3_step(addStmt))
                  NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
              else
                  // SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid
                  coffeeID = sqlite3_last_insert_rowid(database);
              // Reset the add statement.
              sqlite3_reset(addStmt);
              // sqlite3_clear_bindings(detailStmt);
              //}
          }
          sqlite3_finalize(addStmt);
          addStmt = nil;
          sqlite3_close(database);
      }
    And the tableView code that reads data from the database is given below:
      SanjeevKapoorAppDelegate *appDelegate = (SanjeevKapoorAppDelegate *)[[UIApplication sharedApplication] delegate];
      [appDelegate sopinglist];
      ////[appDelegate recpies];
      /// NSArray *a = [[appDelegate list1] componentsJoinedByString:@","];
      k = [[appDelegate list1] componentsJoinedByString:@","];

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: Each input file contains a single run of the spectrometer; each run comprises a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file consists of arrays of 32- or 64-bit ints or floats.

    Host system:
    | OS             | Windows 2008 64-bit           |
    | MySQL version  | 5.5.24 (x86_64)               |
    | CPU            | 2x Xeon E5420 (8 cores total) |
    | RAM            | 8GB                           |
    | SSD filesystem | 500 GiB                       |
    | HDD RAID       | 12 TiB                        |
    There are some other services running on the server using negligible processor time.

    File statistics:
    | number of files  | ~16,000      |
    | total size       | 1.3 TiB      |
    | min size         | 0 bytes      |
    | max size         | 12 GiB       |
    | mean             | 800 MiB      |
    | median           | 500 MiB      |
    | total datapoints | ~200 billion |
    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table
    | column name | type        |
    |-------------+-------------|
    | id          | PRIMARY KEY |
    | start_time  | TIMESTAMP   |
    | name        | VARCHAR     |

    spectra table
    | column name    | type        |
    |----------------+-------------|
    | id             | PRIMARY KEY |
    | name           | VARCHAR     |
    | index          | INT         |
    | spectrum_type  | INT         |
    | representation | INT         |
    | run_id         | FOREIGN KEY |

    datapoints table
    | column name | type        |
    |-------------+-------------|
    | id          | PRIMARY KEY |
    | spectrum_id | FOREIGN KEY |
    | mz          | DOUBLE      |
    | num_counts  | DOUBLE      |
    | index       | INT         |

    Is this reasonable?
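
    Rendered as DDL, the proposed schema might look like the sketch below. This is only an illustration, not part of the original question: the InnoDB engine, the column sizes, and the explicit secondary indexes on the foreign keys are assumptions.

      -- Sketch of the proposed schema as MySQL DDL (engine, sizes and FK indexes are assumptions).
      CREATE TABLE runs (
          id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          start_time TIMESTAMP NOT NULL,
          name       VARCHAR(255) NOT NULL
      ) ENGINE=InnoDB;

      CREATE TABLE spectra (
          id             BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          name           VARCHAR(255) NOT NULL,
          `index`        INT NOT NULL,
          spectrum_type  INT NOT NULL,
          representation INT NOT NULL,
          run_id         BIGINT UNSIGNED NOT NULL,
          KEY idx_spectra_run (run_id),
          CONSTRAINT fk_spectra_run FOREIGN KEY (run_id) REFERENCES runs (id)
      ) ENGINE=InnoDB;

      CREATE TABLE datapoints (
          id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          spectrum_id BIGINT UNSIGNED NOT NULL,
          mz          DOUBLE NOT NULL,
          num_counts  DOUBLE NOT NULL,
          `index`     INT NOT NULL,
          KEY idx_datapoints_spectrum (spectrum_id),
          CONSTRAINT fk_datapoints_spectrum FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
      ) ENGINE=InnoDB;

    Even as a sketch, it makes the scale problem visible: nearly all of the ~200 billion rows land in the datapoints table.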

    Read the article

  • Need help with 2 MySQL queries: Join vs Subqueries

    - by BugBusterX
    I have 2 tables: user (id, name) and message (sender_id, receiver_id, message, read_at, created_at). There are 2 results I need to retrieve, and I'm trying to find the best solution; I have included the queries I'm using at the very end. (1) I need to retrieve a list of users and, for each user, whether there are any unread messages from that user (they are the sender, I am the receiver) and whether there are any read messages between us (they send and I receive, or I send and they receive). (2) I need the same as above, but only those members where there has been any messaging between us, sorted by unread first, then by last message received. Can you advise, please? Should this be done with joins or subqueries? In the first case I do not need the count; I just need to know whether or not there is at least one unread message. I'm posting my code and my current queries, please have a look when you get a chance. BTW, everything is the way I want in the first query. My concern is: in the second query I would like to order by message.created_at, but I don't think I can do that with grouping? And I also don't know whether this approach is the most optimized and fastest.
    CREATE TABLE `user` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, PRIMARY KEY (`id`) )
    INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5');
    CREATE TABLE `message` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `sender_id` bigint(20) DEFAULT NULL, `receiver_id` bigint(20) DEFAULT NULL, `message` text, `read_at` datetime DEFAULT NULL, `created_at` datetime NOT NULL, PRIMARY KEY (`id`) )
    INSERT INTO `message` VALUES (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),(2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),(3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),(4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),(5,3,1,'Hiii',NULL,'2010-10-10 10:10:21');
    SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) GROUP BY u.id
    SELECT u.*, m_new.id as have_new, m.id as have_any FROM user u LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL) LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1)) where m.id IS NOT NULL GROUP BY u.id
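
    For the second list, one single-pass alternative is conditional aggregation, sketched below. This is not the poster's code: the literal current-user id 1, the alias names, and the final ordering are assumptions carried over from the example queries.

      -- Sketch: requirement 2 in one query, using conditional aggregation.
      -- Assumes the schema above and that the current user has id = 1.
      SELECT u.*,
             MAX(m.sender_id = u.id AND m.receiver_id = 1 AND m.read_at IS NULL) AS have_new,
             MAX(m.created_at)                                                   AS last_message_at
      FROM user u
      JOIN message m
        ON (m.sender_id = u.id AND m.receiver_id = 1)
        OR (m.receiver_id = u.id AND m.sender_id = 1)
      GROUP BY u.id
      ORDER BY have_new DESC, last_message_at DESC;

    The inner join keeps only members with some messaging between us, and the MAX() flags replace the second join against the message table.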

    Read the article

  • How to combine these three SQL queries into one?

    - by lam3r4370
    How can I combine these SQL queries into one?
    SELECT DISTINCT * FROM rss WHERE MATCH(content,title) AGAINST ('$filter')
    SELECT COUNT(content) FROM rss WHERE MATCH(content,title) AGAINST ('$filters')
    And, if the result of the above query is 0:
    SELECT DISTINCT * FROM rss WHERE content LIKE '%$filters%' OR title LIKE '%$filters%';
    The variables are built as $filter .= $row['filter']; and $filters = $row['filter']; ($filters may be more than one keyword).
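
    One way to fold the row query and the count into a single statement is a scalar subquery, sketched below. This is only an illustration, not the asker's code; it assumes MySQL with a FULLTEXT index on (content, title) and that $filter/$filters are safely escaped before being interpolated. The LIKE fallback would still need its own query.

      -- Sketch: rows plus the count in one round trip.
      SELECT DISTINCT r.*,
             (SELECT COUNT(content)
                FROM rss
               WHERE MATCH(content, title) AGAINST ('$filters')) AS match_count
      FROM rss AS r
      WHERE MATCH(r.content, r.title) AGAINST ('$filter');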

    Read the article

  • Entity Framework: Need an easy, clean database migration solution

    - by user469652
    I'm using Entity Framework model-first development, and I need to do database migrations often. The EF Database Generation Power Pack doesn't help much, because its data migration has never worked for me. By database migration I mean: change the model, then update the existing database from the model rather than creating a new one. Is there a free tool for this yet, or is this going to be a new feature of the next EF release? PS: I love Django's ORM.

    Read the article

  • Which is more efficient in MySQL: a big join or multiple queries of a single table?

    - by Tom Greenpoint
    I have a MySQL database like this: Post – 500,000 rows (Postid, Userid); Photo – 200,000 rows (Photoid, Postid). About 50,000 posts have photos, with an average of 4 each; most posts do not have photos. I need to get a feed of all posts with photos for a userid, averaging 50 posts each. Which approach would be more efficient?
    1: Big join
    select * from post left join photo on post.postid=photo.postid where post.userid=123
    2: Multiple queries
    select * from post where userid=123
    while (loop through rows) { select * from photo where postid=row[postid] }
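
    If the feed only needs posts that actually have photos, a single inner join returns each matching post/photo pair in one round trip and avoids the per-row loop of option 2. A sketch using the column names from the question (the ORDER BY clause is an assumption):

      -- Sketch: only posts that have at least one photo, one row per post/photo pair.
      SELECT p.Postid, p.Userid, ph.Photoid
      FROM Post AS p
      JOIN Photo AS ph ON ph.Postid = p.Postid
      WHERE p.Userid = 123
      ORDER BY p.Postid;

    Either way, indexes on Post(Userid) and Photo(Postid) are what keep the lookups cheap.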

    Read the article

  • How to write native SQL queries in Hibernate without hardcoding table names and fields?

    - by serg555
    Sometimes you have to write some of your queries in native SQL rather than Hibernate HQL. Is there a nice way to avoid hardcoding table names and fields and to get this data from the existing mapping? For example, instead of: String sql = "select user_name from tbl_user where user_id = :id"; something like: String sql = "select " + Hibernate.getFieldName("user.name") + " from " + Hibernate.getTableName(User.class) + " where " + Hibernate.getFieldName("user.id") + " = :id";

    Read the article

  • How could I send an HTML5 SQLite database from an iPhone?

    - by Edoc
    Hi everybody, I'm developing a web app for iPhone/iPod and using the client-side database. I want my web app to work mainly offline and allow users to fill the database, and at a certain point I'd like to send my database to a server to process and store all the information. The problem is I can't find a proper way to send my tables to a server in order to fill its database with the one sent from the iPhone. Does anyone have a clue?

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. After all, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort.

    Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement, but they deliver substantial value for your most critical applications where downtime and data loss are not an option.

    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System.

    Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at:
    Oracle Maximum Availability Architecture - Webcast
    Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following URL: Harnessing the Power of the New Release of Oracle GoldenGate 11g. I would highly recommend watching the webcast to see the many new features of the new release and hear the product management team respond to the questions from the audience in a nice, long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle talking about Oracle GoldenGate’s heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool. As mentioned in last week’s blog, we received over 150 questions from the audience, and in this blog I'd like to continue to post some of the frequently asked questions and their answers:
    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.
    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.
    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.
    Question: On compression and GoldenGate, if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.
    Question: Does GG support Advanced Security Option on the source database?
    Answer: Yes it does.
    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g and Zero-Downtime Database Upgrades Using Oracle GoldenGate.
    Question: Does GoldenGate create any triggers in the source database at the table or row level for real-time data integration?
    Answer: No, GoldenGate does not create triggers.
    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database.
    For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder); this category may have lots of sub-categories, and each of the sub-categories could have 1000s of media containers (think file reference objects) which contain information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationships between these categories are stored in a database together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazily loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to the URL of the video file, they can then watch the video. The number of users could grow into the 1000s, and the sub-categories and videos could be in the 10000s as the system grows. The question is: should we carry on the way it is currently working, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.

    Read the article

  • SQL SERVER – Identifying guest User using Policy Based Management

    - by pinaldave
    If you are following my recent blog posts, you may have noticed that I’ve been writing a lot about the guest user in SQL Server. Here are all the blog posts which I have written on this subject:
    SQL SERVER – Disable Guest Account – Serious Security Issue
    SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in
    SQL SERVER – Detecting guest User Permissions – guest User Access Status
    SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database
    One of the requests I received was whether we could create a policy that would prevent users from enabling the guest user in user databases. Well, here is a quick tutorial to answer this. Let us see how quickly we can do it.

    Requirements
    Check if the guest user is disabled in all the user-created databases, and exclude the master, tempdb and msdb databases from guest user validation. We will create the following conditions based on these two requirements: (1) the name of the user is ‘guest’; (2) the user has connect (@hasDBAccess) permission in the database; (3) check in all user databases, except master, tempdb and msdb. Once we create these conditions, we will create a policy which will validate them.

    Condition 1: Is the User Guest?
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions, and click on “New Condition…”. First we will create a condition where we validate whether the user name is ‘guest’; if it is, we will further validate whether it has DB access. Check the image for the necessary configuration for the condition: Facet: User; Expression: @Name = ‘guest’

    Condition 2: Does the User have DBAccess?
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions and click on “New Condition…”. Now we will validate whether the user has DB access. Check the image for the necessary configuration for the condition: Facet: User; Expression: @hasDBAccess = False

    Condition 3: Exclude Databases
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions and click on “New Condition…”. Now we will create a condition that checks whether the database name is master, tempdb or msdb; if it is any of them, we will not validate the first condition against it. Check the image for the necessary configuration for the condition: Facet: Database; Expression: @Name != ‘msdb’ AND @Name != ‘tempdb’ AND @Name != ‘master’
    The next step will be creating a policy which will enforce these conditions.

    Creating a Policy
    Right click on Policies and click “New Policy…”. Here we specify which condition we want to validate and what the target is. Condition: Has User DBAccess; Target Database: Every Database except (master, tempdb and msdb); Target User: Every User in Target Database with name ‘guest’. Now we have two evaluation modes: 1) On Demand and 2) On Schedule. We will select On Demand in this example; however, you can change the mode to On Schedule through the drop down menu and select the interval for the evaluation of the policy.

    Evaluate the Policies
    We have selected On Demand as our policy evaluation mode, so we will now evaluate the policy by executing Evaluate. Click on Evaluate and it will give the following result: The result demonstrates that one of the databases has a policy violation. The user guest is enabled in the AdventureWorks database. You can disable the guest user by running the following code in the AdventureWorks database.
    USE AdventureWorks;
    REVOKE CONNECT FROM guest;
    Once you run the above query, you can evaluate the policy again. Notice that the policy violation is now fixed. You can change the evaluation mode of the policy to On Schedule and validate the policy at an interval, and you can check the history of the policy to detect violations.

    Quiz
    I have created three conditions to check if the guest user has database access or not. Now I want to ask you: Is it possible to do the same with 2 conditions? If yes, HOW? If no, WHY NOT?

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Best Practices, CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Policy Management
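
    As a related aside that is not part of the original post, the same condition the policy evaluates can also be checked ad hoc with T-SQL. The sketch below simply asks whether guest holds CONNECT permission in the current database:

      -- Sketch only: run in a user database to see whether guest can connect there.
      SELECT DB_NAME() AS database_name,
             dp.name   AS user_name,
             pe.permission_name,
             pe.state_desc
      FROM sys.database_principals AS dp
      JOIN sys.database_permissions AS pe
        ON pe.grantee_principal_id = dp.principal_id
      WHERE dp.name = 'guest'
        AND pe.permission_name = 'CONNECT';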

    Read the article
