Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • sql: DELETE + INSERT vs UPDATE + INSERT

    - by user93422
    A similar question has been asked, but since the answer always depends, I'm asking about my specific situation separately. I have a website page that shows data coming from a database, and generating that data requires some fairly complex multi-join queries. The data is updated once a day (nightly). I would like to pre-generate the data for that view to speed up page access. For that I am creating a table that contains exactly the data I need. Question: for my situation, is it reasonable to do a complete table wipe followed by an insert, or should I do an update + insert? SQL-wise, DELETE + INSERT seems easier (it is a single SQL expression). EDIT: RDBMS: MS SQL Server 2008 Ent
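
    As a rough illustration of the wipe-and-reload option, here is a minimal T-SQL sketch assuming a hypothetical reporting table dbo.PageData and a stand-in for the complex multi-join query (all names are illustrative, not from the question):

      BEGIN TRANSACTION;

      TRUNCATE TABLE dbo.PageData;            -- cheaper than a fully logged DELETE for a complete wipe

      INSERT INTO dbo.PageData (CustomerId, Total)
      SELECT c.CustomerId, SUM(o.Amount)      -- stand-in for the complex multi-join query
      FROM dbo.Customers AS c
      JOIN dbo.Orders    AS o ON o.CustomerId = c.CustomerId
      GROUP BY c.CustomerId;

      COMMIT TRANSACTION;

    On SQL Server, TRUNCATE TABLE takes part in the transaction, so readers never see a half-empty table.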

    Read the article

  • How to return relationships in a custom un-typed dataservice provider

    - by monkey_p
    I have a custom .NET DataService and can't figure out how to return the data for relationships. The database has 2 tables (Customer, Address). A Customer can have multiple addresses, but each address can only have one customer. I'm using Dictionary<string,object> as my data type. My question: for the following 2 URLs, how do I return the data? http://localhost/DataService/Customer(1)/Address http://localhost/DataService/Address(1)/Customer For the non-relational queries I return a List<Dictionary<string,object>>, so I imagined that for the relations I should just populate the element with either a Dictionary<string,object> for the single-valued ones or a List<Dictionary<string,object>> for the many-valued relationships. But this just gives me a NullReferenceException. So what am I doing wrong?

    Read the article

  • Using a custom URL parameter in Wordpress (with permalinks)?

    - by kiko
    Hello - Is there a way, perhaps by editing .htaccess, to add a custom URL parameter to Wordpress so that the parameter is not stripped out by permalinks? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I pass the URL parameter "pubID" to display individual books and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63 But the individual books are not showing up properly in Google - maybe because they all have the same auto-generated "canonical" URL meta tag, one with the "pubID" parameter stripped out. Thank you for any help.

    Read the article

  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table of ZIP codes with the following columns: id (PRIMARY KEY), code (NONCLUSTERED INDEX), city. When I execute the query SELECT TOP 10 * FROM ZIPCodes I get the results sorted by the id column. But when I change the query to SELECT TOP 10 id FROM ZIPCodes I get the results sorted by the code column. Again, when I change the query to SELECT TOP 10 code FROM ZIPCodes I get the results sorted by the code column. And finally, when I change it to SELECT TOP 10 id, code FROM ZIPCodes I get the results sorted by the id column. My question is in the title: I know which indexes are used in these queries, but why are those indexes used? In the second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the clustered index were used? How does the query engine choose which index to use?
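
    A minimal sketch for comparing the optimizer's choice against forced indexes, assuming the clustered primary key is named PK_ZIPCodes and the nonclustered index IX_ZIPCodes_Code (names are illustrative):

      SET STATISTICS IO ON;
      SELECT TOP 10 id FROM ZIPCodes;                                 -- optimizer's own choice
      SELECT TOP 10 id FROM ZIPCodes WITH (INDEX(PK_ZIPCodes));       -- force the clustered index
      SELECT TOP 10 id FROM ZIPCodes WITH (INDEX(IX_ZIPCodes_Code));  -- force the nonclustered index

    Because a nonclustered index also carries the clustering key (id), it covers SELECT id with fewer pages to read than scanning the full clustered index, which is usually why it wins when no ORDER BY is given.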

    Read the article

  • Is there a good open source search engine including indexing bot which can be used to make up speci

    - by Skuta
    Hello, our application (C#/.NET) needs to run a lot of search queries. Google's 50,000-per-day policy is not enough. We need something that would crawl Internet websites by specific rules we set (e.g. country domains) and gather URLs, text, keywords and website names to build our own internal catalogue, so we wouldn't be limited to any massive external search engine like Google or Yahoo. Is there any free open source solution we could install on our server? No point in re-inventing the wheel.

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database and need to work on a subset, roughly 1% of the data, to dump into an Excel spreadsheet and make a graph. Ideally, I could select out the subset of data once and then run multiple select queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

      SELECT (
        SELECT (
          SELECT (long list of requirements)
          UNION
          SELECT (slightly different long list of requirements)
        )
      )

    and it would be nice if I could factor out the commonalities of the two long requirement lists and keep only the simple differences between the two SELECT statements being UNIONed.
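
    One way to express this in Oracle is subquery factoring (the WITH clause): the shared subset is defined once and referenced by each branch of the UNION. A minimal sketch with illustrative table and column names:

      WITH subset AS (
          SELECT id, category, amount
          FROM   big_table
          WHERE  sample_date >= DATE '2010-01-01'   -- the shared ~1% restriction
      )
      SELECT id, amount FROM subset WHERE category = 'A'
      UNION
      SELECT id, amount FROM subset WHERE category = 'B';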

    Read the article

  • create temporary table from cursor

    - by Claudiu
    Is there any way, in PostgreSQL accessed from Python using SQLObject, to create a temporary table from the results of a cursor? Previously I had a query, and I created the temporary table directly from that query. I then had many other queries interacting with that temporary table. Now I have much more data, so I want to process only about 1000 rows at a time. However, I can't do CREATE TEMP TABLE ... AS ... from a cursor, as far as I can see. Is the only thing to do something like:

      rows = cur.fetchmany(1000)
      cur2 = conn.cursor()
      cur2.execute("CREATE TEMP TABLE foobar (id INTEGER)")
      for row in rows:
          cur2.execute("INSERT INTO foobar (id) VALUES (%d)" % row)

    or is there a better way? This seems awfully inefficient.
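
    On the SQL side, a minimal sketch of the batched version in PostgreSQL: create the temp table once, then insert each fetched batch with a single multi-row INSERT rather than one statement per row (the values shown are placeholders):

      CREATE TEMP TABLE foobar (id INTEGER);

      -- one statement per ~1000-row batch
      INSERT INTO foobar (id) VALUES (101), (102), (103);

    From Python, the same effect usually comes from binding the whole batch to a single INSERT instead of executing it row by row.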

    Read the article

  • Some help needed with a SQL query

    - by Psyche
    Hello, I need some help with a MySQL query. I have two tables, one with offers and one with statuses. An offer can have one or more statuses. What I would like to do is get all the offers together with their latest status. Each status has a field named 'added' which can be used for sorting. I know this could easily be done with two queries, but I need to do it with only one because I also have to apply some filters later in the project. Here's my setup:

      CREATE TABLE `test`.`offers` (
        `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `client` TEXT NOT NULL,
        `products` TEXT NOT NULL,
        `contact` TEXT NOT NULL
      ) ENGINE = MYISAM;

      CREATE TABLE `statuses` (
        `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
        `offer_id` int(11) NOT NULL,
        `options` text NOT NULL,
        `deadline` date NOT NULL,
        `added` datetime NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
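
    A minimal sketch of the single-query version (each offer joined to its latest status), assuming ties on `added` are not a concern; this is one common pattern rather than the only one:

      SELECT o.*, s.*
      FROM offers o
      LEFT JOIN statuses s
             ON s.offer_id = o.id
            AND s.added = (SELECT MAX(s2.added)
                           FROM statuses s2
                           WHERE s2.offer_id = o.id);

    Offers with no status yet still appear, with NULLs in the status columns, thanks to the LEFT JOIN.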

    Read the article

  • Passing GPS speed via the Eclipse Emulator Control

    - by Nick
    Hi, my app queries the GPS speed using .getSpeed() on the location received by a LocationListener. Is there a way to set this speed using the Eclipse Emulator Control or the command line? I tried feeding multiple sets of coordinates to the emulator via the manual GPS control, but it didn't derive a speed from that. Playing a pre-defined GPX file doesn't work for me either. I would like to test my app without having to take it on a test-drive in my car every time ;)! Thanks!

    Read the article

  • Create a complex SQL query?

    - by mazzzzz
    Hey guys, I have a program that allows me to run queries against a large database. I have two tables that are important right now: Deposits and Withdraws. Each contains a history for every user. I need to take each table, add up the deposits and the withdrawals (per user), then subtract the withdrawals from the deposits. I then need to return every user whose result is negative (i.e. they withdrew more than they deposited). Is this possible in one query? Example:

      Deposit table:
      |ID|UserName|Amount|
      |1 | Use1   |100.00|
      |2 | Use1   | 50.00|
      |3 | Use2   | 25.00|
      |4 | Use1   |  5.00|

      WithDraw table:
      |ID|UserName|Amount|
      |2 | Use2   |  5.00|
      |1 | Use1   |100.00|
      |4 | Use1   |  5.00|
      |3 | Use2   | 25.00|

    So then the result would output:

      |OverWithdrawers|
      |     Use2      |

    Is this possible (I sure don't know how to do it)? Thanks for any help, Max
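
    A minimal single-query sketch using the table and column names from the example (users who never deposited at all would need a UNION-based variant):

      SELECT d.UserName AS OverWithdrawers
      FROM (SELECT UserName, SUM(Amount) AS dep FROM Deposit  GROUP BY UserName) d
      JOIN (SELECT UserName, SUM(Amount) AS wit FROM WithDraw GROUP BY UserName) w
           ON w.UserName = d.UserName
      WHERE d.dep - w.wit < 0;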

    Read the article

  • MySQL inner join different results

    - by Darryl at NetHosted
    I am trying to work out why the following two queries return different results:

      SELECT DISTINCT i.id, i.date
      FROM `tblinvoices` i
      INNER JOIN `tblinvoiceitems` it ON it.userid = i.userid
      INNER JOIN `tblcustomfieldsvalues` cf ON it.relid = cf.relid
      WHERE i.`tax` = 0
        AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    and

      SELECT DISTINCT i.id, i.date
      FROM `tblinvoices` i
      WHERE i.`tax` = 0
        AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    Obviously the difference is the inner joins, but I don't understand why the query with the inner joins returns fewer results than the one without them; I would have thought that since I didn't add any cross-table conditions they should return the same results. The final query I am working towards is:

      SELECT DISTINCT i.id, i.date
      FROM `tblinvoices` i
      INNER JOIN `tblinvoiceitems` it ON it.userid = i.userid
      INNER JOIN `tblcustomfieldsvalues` cf ON it.relid = cf.relid
      WHERE cf.`fieldid` = 5
        AND cf.`value` REGEXP '[A-Za-z]'
        AND i.`tax` = 0
        AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    But because the results seem incorrect when I add the inner joins (they remove some results that should be valid), it's not working at present. Thanks.
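
    For comparison, a minimal sketch of the first query rewritten with LEFT JOINs, which keeps invoices that have no matching rows in the joined tables; an INNER JOIN silently drops such invoices, which is the usual cause of the missing rows (filters on the joined tables would then need to move into the ON clauses to preserve that behaviour):

      SELECT DISTINCT i.id, i.date
      FROM `tblinvoices` i
      LEFT JOIN `tblinvoiceitems` it ON it.userid = i.userid
      LEFT JOIN `tblcustomfieldsvalues` cf ON cf.relid = it.relid
      WHERE i.`tax` = 0
        AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'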

    Read the article

  • getting rid of repeated customer id's in mysql query

    - by bsandrabr
    I originally started by selecting customers from a group of customers and then, for each customer, querying the records for the past few days and presenting them in a table row. All working fine, but I think I might have got too ambitious, as I tried to pull in all the records at once, having heard that multiple queries are a big no-no. Here is the MySQL query I came up with to pull in all the records at once:

      SELECT morning, afternoon, date, date2, fname, lname, customers.customerid
      FROM customers
      LEFT OUTER JOIN attend ON ( customers.customerid = attend.customerid )
      RIGHT OUTER JOIN noattend ON ( noattend.date2 = attend.date )
      WHERE noattend.date2 BETWEEN '$date2' AND '$date3'
        AND DayOfWeek( date2 ) %7 > 1
        AND group = {$_GET['group']}
      ORDER BY lname ASC, fname ASC, date2 DESC

    The tables are: customers (customerid, fname, lname), attend (customerid, morning, afternoon, date), and noattend (date2 - a table of all the days, used to fill in the blanks). Now the problem I have is how to start a new row in the HTML table when the customer id changes. My query above pulls in

      customer 1  morning 2
      customer 1  morning 1
      customer 2  morning 2
      customer 2  morning 1

    whereas I'm trying to get

      customer 1  morning 2  morning 1
      customer 2  morning 2  morning 1

    I don't know whether this is possible in the SQL or more likely in the PHP.
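
    A minimal sketch of collapsing the per-day rows into one row per customer with GROUP_CONCAT, so each customer id appears only once; it reuses the column names from the question but deliberately simplifies the joins and filters:

      SELECT c.customerid, c.fname, c.lname,
             GROUP_CONCAT(a.morning   ORDER BY a.date DESC) AS mornings,
             GROUP_CONCAT(a.afternoon ORDER BY a.date DESC) AS afternoons
      FROM customers c
      LEFT JOIN attend a ON a.customerid = c.customerid
      GROUP BY c.customerid, c.fname, c.lname
      ORDER BY c.lname, c.fname;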

    Read the article

  • How should I set up these tables for searching?

    - by thewebguy
    My PHP site is an online store with about 5k products. Products belong to a vendor, a category, and possibly a subcategory. Each of those items has a name, and the products have descriptions. The search queries we've set up work wonderfully but tend to run pretty slowly; they range between 0.20s and 30s (yes, 30 seconds). We've optimized like crazy and I'm starting to think we're out of room to improve on that front, so we're caching them and that's making life a lot easier. But when they do run they still kill the server, because of what appears to be the table-level locking that comes with MyISAM. So on to my question: is there a way for us to use InnoDB (row-level locking) and still keep FULLTEXT? Should we move our DB offsite and use a service like DB2? Is there some other search-engine-type software we should use instead? Any help is greatly appreciated :)
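
    For reference, InnoDB has supported FULLTEXT indexes since MySQL 5.6, so on a recent server row-level locking and full-text search can coexist. A minimal sketch, assuming a hypothetical products table with name and description columns:

      ALTER TABLE products ENGINE = InnoDB;
      ALTER TABLE products ADD FULLTEXT INDEX ft_name_desc (name, description);

      SELECT id, name
      FROM   products
      WHERE  MATCH(name, description) AGAINST ('blue widget' IN NATURAL LANGUAGE MODE);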

    Read the article

  • Database design: one huge table or separate tables?

    - by littlegreen
    Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers. Recently, I have become worried about the fact that one table in particular will be getting very big. Each customer has approximately 20,000,000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be run against this table. I am already noticing performance issues and users being temporarily locked out. My question: will we be able to handle this table in the future, or is it better to split it up into smaller tables, one per customer?
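
    A middle ground between one huge table and per-customer tables is table partitioning (Enterprise edition in SQL Server 2008). A minimal sketch assuming an integer CustomerId column; the boundary values and names are illustrative only:

      CREATE PARTITION FUNCTION pfCustomer (int)
          AS RANGE RIGHT FOR VALUES (10, 20, 30);

      CREATE PARTITION SCHEME psCustomer
          AS PARTITION pfCustomer ALL TO ([PRIMARY]);

      CREATE TABLE dbo.Measurements (
          CustomerId  int      NOT NULL,
          MeasuredAt  datetime NOT NULL,
          Value       float    NOT NULL
      ) ON psCustomer (CustomerId);

    Queries that filter on CustomerId then touch only the relevant partition, while the table still looks like a single table to the application.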

    Read the article

  • Why Does TFS Allow Orphaned Content and How Do I Get Rid of It?

    - by Chad
    My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS Destroy on a folder tree that should have cleared up at least 10 GB, but instead it seemed to have no effect. When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries, I see that there is some orphaning going on: tbl_Content has 13.9 GB of records that don't have a related tbl_File record, and tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record. The cleanup job (prc_DeleteUnusedContent) seems to run nightly, and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, the morning after I destroyed the large amount of data; the error was due to a full transaction log. Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?

    Read the article

  • advanced search with mysql

    - by Arsenal
    I'm creating a search function for my website where the user can put anything he likes into a text field. It gets matched against anything (name, title, job, car brand, ... you name it). I initially wrote the query with an INNER JOIN on every table that needed to be searched:

      SELECT column1, column2, ...
      FROM person
      INNER JOIN person_car ON ...
      INNER JOIN car ...

    This ended up as a query with 6 or 8 INNER JOINs and a whole lot of WHERE ... LIKE '%searchvalue%' conditions. Now this query seems to cause a timeout in MySQL, and I even got a warning from my hosting provider that the queries are taking up too many resources. Obviously I'm doing this very wrong, but I was wondering what the correct approach to this kind of search function is. Thanks!
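
    One common alternative is a denormalized search table: all searchable text for an entity is concatenated into a single indexed column, so one lookup replaces the 6-8 way join. A minimal sketch with hypothetical table and column names:

      CREATE TABLE search_index (
          person_id INT NOT NULL PRIMARY KEY,
          haystack  TEXT NOT NULL,
          FULLTEXT KEY ft_haystack (haystack)
      ) ENGINE = MyISAM;

      INSERT INTO search_index (person_id, haystack)
      SELECT p.id,
             CONCAT_WS(' ', p.name, p.title, p.job,
                       GROUP_CONCAT(c.brand SEPARATOR ' '))
      FROM person p
      LEFT JOIN person_car pc ON pc.person_id = p.id
      LEFT JOIN car        c  ON c.id = pc.car_id
      GROUP BY p.id, p.name, p.title, p.job;

      SELECT person_id
      FROM   search_index
      WHERE  MATCH(haystack) AGAINST ('searchvalue');

    The index table is rebuilt or updated whenever the source rows change, trading some duplication for a single fast indexed search.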

    Read the article

  • Selecting previous and next row using sp

    - by davor
    I want to select the previous and next row from a table based on the current id. I send the current id to a stored procedure and use these queries:

      -- previous
      SELECT TOP 1 id FROM [table] WHERE id < @currentId ORDER BY id DESC

      -- next
      SELECT TOP 1 id FROM [table] WHERE id > @currentId ORDER BY id ASC

    The problem is when I send a currentId which is the last id in the table and want to select the next row: then nothing is selected. The same problem occurs for the previous row when I send a currentId which is the first id in the table. Is it possible to solve this in SQL without additional queries?
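
    A minimal sketch of one way to handle the edges without extra round trips, assuming wrap-around (the last row's "next" is the first row, and vice versa) is the desired behaviour:

      -- next: wraps to the smallest id when @currentId is already the last one
      SELECT TOP 1 id
      FROM [table]
      WHERE id > @currentId
         OR @currentId >= (SELECT MAX(id) FROM [table])
      ORDER BY id ASC;

      -- previous: wraps to the largest id when @currentId is already the first one
      SELECT TOP 1 id
      FROM [table]
      WHERE id < @currentId
         OR @currentId <= (SELECT MIN(id) FROM [table])
      ORDER BY id DESC;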

    Read the article

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and only show the relevant columns for that type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can I find that out with queries? Is there something like a SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]? I know I COMPLETELY made up that syntax... I'm just trying to get the idea across. Thanks in advance!
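
    A minimal sketch of the usual trick: COUNT(column) counts only non-null values, so a result of zero means the column is entirely null for that type (table and column names are illustrative):

      SELECT COUNT(col_a) AS col_a_values,
             COUNT(col_b) AS col_b_values,
             COUNT(col_c) AS col_c_values
      FROM   my_table
      WHERE  type_column = 'SOME_TYPE';
      -- any column whose count comes back as 0 can be left out of the view for that type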

    Read the article

  • What will happen if I change the type of a column from int to year?

    - by MachinationX
    I have a table in MySQL 4.0 which currently has a year field of type smallint(6). What will happen if I convert it directly to a YEAR type with a query like the following:

      ALTER TABLE t MODIFY y YEAR(4) NOT NULL DEFAULT CURRENT_TIMESTAMP;

    when the current members of column y have values like 2010? I assume that because the YEAR type is technically stored as values from 1-255, values above that will be truncated or broken. So if MySQL isn't smart enough to realize that 2010 (int) = 110 (year), what would be the simplest query or queries to convert the values? Thanks for your help!
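
    For what it's worth, MySQL documents YEAR(4) as accepting the range 1901-2155 (plus 0000), so four-digit values like 2010 survive the conversion as-is. A minimal pre-flight check for values that would not, using the column name from the question:

      SELECT COUNT(*) AS out_of_range
      FROM   t
      WHERE  y < 1901 OR y > 2155;

    Any rows counted here would need to be cleaned up (or the column left as smallint) before running the ALTER.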

    Read the article

  • MySQLi String comparisons using keys

    - by asdasd
    I have a table with, let's say, 2 columns: id (a number) and value (a varchar). Let's say I have a number x and a list of numbers a1, a2, a3, a4, a5..., where x is not in the list. All of these numbers correspond to a unique row in the table. I want to know if the string value for x in the table is contained in one of the string values for any table entry for a1, a2, a3, a4... Let's say I have these rows:

      x,  aaa
      a1, bbb
      a2, ccc
      a3, ddd
      a4, aaabbbcc

    Then I want some kind of confirmation that yes, the value for x is included in one of the values in my list of numbers (a4's value contains the value of x). I know I can do this in a couple of queries and shove it through some PHP to get my answer. But can I do this with one query?
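
    A minimal single-query sketch, assuming a hypothetical table name and that the ids for x and the list are supplied by the application; it also assumes x's value contains no LIKE wildcard characters:

      SELECT COUNT(*) > 0 AS x_is_contained
      FROM   mytable a
      JOIN   mytable x ON x.id = 42               -- the row for x
      WHERE  a.id IN (1, 2, 3, 4)                 -- the rows for a1..a4
        AND  a.value LIKE CONCAT('%', x.value, '%');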

    Read the article

  • I want to exchange the Value of a column in two different rows in Microsoft SQL server

    - by Silmaril89
    Hi, I want to run the following two SQL queries in Microsoft SQL Server:

      UPDATE Partnerships SET sortOrder = 2 WHERE sortOrder = 1;
      UPDATE Partnerships SET sortOrder = 1 WHERE sortOrder = 2;

    The only problem is that I don't allow sortOrder to contain the same value twice; it is a unique key. How can I get around this, given that the first query violates the unique key rule and terminates? Or will I have to get rid of the unique key rule? Thanks!
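
    A minimal sketch of swapping the two values in a single statement; SQL Server evaluates the unique constraint for the statement as a whole, so no intermediate duplicate ever exists:

      UPDATE Partnerships
      SET    sortOrder = CASE sortOrder WHEN 1 THEN 2 ELSE 1 END
      WHERE  sortOrder IN (1, 2);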

    Read the article

  • db optimization - have a total field or query table?

    - by Dorian Fife
    I have an app where users get points for actions they perform: either 1 point for an easy action or 2 for a difficult one. I wish to display to the user the total number of points he has earned in my app and the points obtained this week (since Monday at midnight). I have a table that records all actions, along with their time and number of points. I have two alternatives and I'm not sure which is better: (1) every time the user views the report, run a query and sum the points the user got; or (2) add two fields to each user that record the number of points obtained so far (total and weekly), with the weekly value reset to 0 every Monday at midnight. The first option is easier, but I'm afraid that as I get many users and actions, the queries will take a long time. The second option bears the risk of inconsistency between the table of actions and the summary values. I'm very interested in what you think is the best alternative here. Thanks, Dorian
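
    A minimal sketch of the query-on-demand option, assuming a hypothetical actions(user_id, performed_at, points) table; with an index on (user_id, performed_at) both sums stay cheap per user:

      SELECT SUM(points) AS total_points,
             SUM(CASE WHEN performed_at >= :week_start   -- Monday 00:00, bound by the app
                      THEN points ELSE 0 END) AS weekly_points
      FROM   actions
      WHERE  user_id = :user_id;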

    Read the article

  • How can an SQL query return data from multiple tables

    - by Fluffeh
    I would like to know how to get data from multiple tables in my database, what types of methods there are to do this, and what joins and unions are and how they differ from one another. When should I use each one compared to the others? I am planning to use this in my (for example, PHP) application, but I don't want to run multiple queries against the database; what options do I have to get data from multiple tables in a single query? Note: I am writing this because I would like to be able to link to a well-written guide on the numerous questions that I constantly come across in the PHP queue, so I can link to this for further detail when I post an answer. The answers cover the following: Part 1 - Joins and Unions; Part 2 - Subqueries; Part 3 - Tricks and Efficient Code.
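
    As a tiny illustration of the difference, using two hypothetical tables: a join widens rows by matching them across tables, while a union stacks compatible result sets on top of each other:

      -- JOIN: one row per matching (customer, order) pair
      SELECT c.name, o.total
      FROM   customers c
      JOIN   orders    o ON o.customer_id = c.id;

      -- UNION: one combined list of names from two tables, duplicates removed
      SELECT name FROM customers
      UNION
      SELECT name FROM suppliers;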

    Read the article

  • How to improve the speed of a loop containing a sqlalchemy query statement as conditional

    - by LtPinback
    This loop checks whether a record is already in the SQLite database, builds a list of dictionaries for the records that are missing, and then executes a multi-row insert with that list. This works, but it is very slow (at least I think it is slow), as it takes 5 minutes to loop over 3500 queries. I am a complete newbie in Python, SQLite and SQLAlchemy, so I wonder if there is a faster way of doing this.

      list_dict = []
      session = Session()
      for data in data_list:
          if session.query(Class_object)\
                    .filter(Class_object.column_name_01 == data[2])\
                    .filter(Class_object.column_name_00 == an_id)\
                    .count() == 0:
              list_dict.append({'column_name_00': a_id, 'column_name_01': data[2]})
      conn = engine.connect()
      conn.execute(prices.insert(), list_dict)
      conn.close()
      session.close()

    edit: I moved session = Session() outside the loop. It did not make a difference.
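
    On the database side, a minimal sketch of pushing the existence check into SQLite itself: stage the candidate rows once, then insert only the missing ones in a single statement instead of issuing one COUNT query per row (the table and column names echo the question but are otherwise assumptions):

      CREATE TEMP TABLE candidates (column_name_00 INTEGER, column_name_01 TEXT);

      -- (bulk-load the 3500 candidate rows into candidates here)

      INSERT INTO prices (column_name_00, column_name_01)
      SELECT c.column_name_00, c.column_name_01
      FROM   candidates c
      WHERE  NOT EXISTS (SELECT 1
                         FROM   prices p
                         WHERE  p.column_name_00 = c.column_name_00
                           AND  p.column_name_01 = c.column_name_01);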

    Read the article

  • Which approach to create the data access layer has the highest performance?

    - by pooyakhamooshi
    I have to create a very high-performance application. Currently I am using Entity Framework for my data access layer. My application has to insert some communication data almost every second, and I found that Entity Framework is slow: there is about a 2-second delay for the SaveChanges() method to finish. I was thinking I have the following options:

      1. Create the data access layer myself using ADO.NET, with stored procedures or ad-hoc queries
      2. Use the Enterprise Library Data Access block
      3. Use NHibernate
      4. Use Repository Factory: http://pooyakhamooshi.blogspot.com/search?q=repository

    What do you think? Which one is quicker for inserting data? Which one is quicker to set up?
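
    As a rough illustration of option 1, a minimal sketch of the stored-procedure route, assuming a hypothetical CommunicationLog table; a thin ADO.NET command calling this avoids the per-save change-tracking overhead:

      CREATE PROCEDURE dbo.InsertCommunication
          @DeviceId   int,
          @Payload    nvarchar(max),
          @ReceivedAt datetime2
      AS
      BEGIN
          SET NOCOUNT ON;
          INSERT INTO dbo.CommunicationLog (DeviceId, Payload, ReceivedAt)
          VALUES (@DeviceId, @Payload, @ReceivedAt);
      END;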

    Read the article
