Search Results

Search found 28930 results on 1158 pages for 'sql ce'.

  • Fetch posts with attachments in a certain category?

    - by TiuTalk
    I need to retrieve a list of posts that have (at least) one attachment belonging to a certain category in WordPress. The relation between attachments and categories is one I built myself using the default WordPress method. Here's the query I'm running right now:

      SELECT p.*
      FROM `wp_posts` AS p                      # The post
      INNER JOIN `wp_posts` AS a                # The attachment
          ON p.`ID` = a.`post_parent` AND a.`post_type` = 'attachment'
      INNER JOIN `wp_term_relationships` AS ra
          ON a.`ID` = ra.`object_id` AND ra.`term_taxonomy_id` IN (3)  # The category ID list
      WHERE p.`post_type` = 'post'
      ORDER BY p.`post_date` DESC
      LIMIT 15

    The problem is that the query only considers the first attachment it finds, and if that one doesn't belong to the category, the post isn't returned.
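    A sketch of one way to express "at least one attachment in the category" directly, using EXISTS so a post qualifies as soon as any of its attachments matches (term_taxonomy_id 3 carried over from the query above):

      SELECT p.*
      FROM `wp_posts` AS p
      WHERE p.`post_type` = 'post'
        AND EXISTS (
            SELECT 1
            FROM `wp_posts` AS a
            INNER JOIN `wp_term_relationships` AS ra ON a.`ID` = ra.`object_id`
            WHERE a.`post_parent` = p.`ID`
              AND a.`post_type` = 'attachment'
              AND ra.`term_taxonomy_id` IN (3)
        )
      ORDER BY p.`post_date` DESC
      LIMIT 15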

  • Using PHP variables inside SQL statements?

    - by Homer
    For some reason I can't pass a variable inside a MySQL statement. I have a function that can be used for multiple tables, so instead of repeating the code I want to change which table is selected from, like so:

      function show_all_records($table_name) {
          mysql_query("SELECT * FROM $table_name");
          // etc, etc...
      }

    To call the function I use show_all_records("some_table") or show_all_records("some_other_table"), depending on which table I want to select from at the moment. But it's not working - is this because variables can't be used inside MySQL statements?

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances:

      InstanceA - Production server
      InstanceB - Test server

    My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the schema relationship between instances looks like this:

      InstanceA - Schema Version 1.5
      InstanceB - Schema Version 1.6 (new version being tested)

    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To do that, I take database backups of InstanceA and restore them to InstanceB. My question is: how does the schema version affect the restore process? I know I can back up InstanceA (Schema Version 1.5) and restore to InstanceB while it is also at Schema Version 1.5. But can I back up InstanceA (Schema Version 1.5) and restore to InstanceB at Schema Version 1.6 (the new version being tested)? If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 only by an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
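    For reference, a minimal sketch of the workflow in T-SQL (paths and database names assumed). Note that a full restore swaps in the entire backed-up database, schema and all, rather than merging data into the target's existing schema:

      -- On InstanceA (production)
      BACKUP DATABASE MyDb TO DISK = 'C:\backups\MyDb.bak' WITH INIT;

      -- On InstanceB (test): the restored copy carries InstanceA's schema (v1.5),
      -- overwriting whatever schema version InstanceB had before
      RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb.bak' WITH REPLACE;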

  • Recursive SQL giving ORA-01790

    - by PenFold
    Using Oracle 11g Release 2, the following query raises ORA-01790: expression must have same datatype as corresponding expression:

      with intervals(time_interval) AS (
          select trunc(systimestamp) from dual
          union all
          select (time_interval + numtodsinterval(10, 'Minute'))
          from intervals
          where time_interval < systimestamp
      )
      select time_interval from intervals;

    The error suggests that the two branches of the UNION ALL return different datatypes. Even if I cast to TIMESTAMP in each of the branches, I get the same error. What am I missing?
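    One fix commonly suggested for this error, sketched below: trunc(systimestamp) returns a DATE rather than a TIMESTAMP, and for a recursive CTE the column type is fixed by the anchor member, so casting the anchor (specifically the anchor, not the recursive branch) may reconcile the two datatypes:

      with intervals(time_interval) as (
          select cast(trunc(systimestamp) as timestamp) from dual
          union all
          select time_interval + numtodsinterval(10, 'Minute')
          from intervals
          where time_interval < systimestamp
      )
      select time_interval from intervals;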

  • How to do multiple replaces throughout a table in MS Access

    - by silverkid
    I am a little confused about the best way to replace all occurrences of (1) blanks, (2) "-" and (3) "NA" in all columns of TableA with a question mark (?) character.

    Sample row in the original TableA:

      444586 RAUR <blank> 8 570 NA - 13 - SCHS299 MP 339 70 EN <blank>

    Same row in the expected TableA:

      444586 RAUR ? 8 570 ? ? 13 ? SCHS299 MP 339 70 EN ?

    Please help me out - I can't use the Find/Replace toolbar of Access.
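    A sketch of one way to do it in Access SQL, with one UPDATE per column (Col1 is a placeholder - repeat the SET/WHERE pair for each column of TableA):

      UPDATE TableA
      SET Col1 = '?'
      WHERE Col1 IS NULL OR Trim(Col1) IN ('-', 'NA', '');

    Running one such statement per column (or generating them in a short VBA loop over the table's fields) covers the whole table without touching the Find/Replace toolbar.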

  • mysql result set joining existing table

    - by Yang
    Is there any way to avoid using a temp table? I am using a query with an aggregate function (SUM) to total the quantity of each product. The result looks like this:

      product_name | sum(qty)
      product_1    | 100
      product_2    | 200
      product_5    | 300

    Now I want to join the above result to another table called products, so that I get a summary like this:

      product_name | sum(qty)
      product_1    | 100
      product_2    | 200
      product_3    | 0
      product_4    | 0
      product_5    | 300

    I know one way of doing this is to dump the first query's result into a temp table, then join it with the products table. Is there a better way?
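    A sketch of one temp-table-free approach: put the aggregate in a derived table and LEFT JOIN from products, so products with no sales still appear (the sales table name and its columns are assumptions):

      SELECT p.product_name, COALESCE(s.total_qty, 0) AS total_qty
      FROM products AS p
      LEFT JOIN (
          SELECT product_name, SUM(qty) AS total_qty
          FROM sales
          GROUP BY product_name
      ) AS s ON s.product_name = p.product_name
      ORDER BY p.product_name;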

  • Error: The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value

    - by CPM
    I know there are similar questions like this on the forum, but I am still having problems updating a datetime field in the database. I don't get any problems when inserting, but I do when updating, and I am formatting the value the same way in both cases, like this:

      e.Values.Item("SelectionStartDate") = Format(startdate, "yyyy-MM-dd") + " " + startTime1 + ".000"

    startTime1 is of type String. I have tried different solutions that I came across on the internet but still get this error. Please help. Thanks in advance.
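    One thing worth ruling out, sketched in plain T-SQL with assumed table and column names: for the datetime type, 'yyyy-MM-dd hh:mm:ss' with a space is still interpreted according to the session's language/dateformat settings, while the ISO 8601 form with a 'T' separator is not, so a value built as below can't be re-ordered into an out-of-range date:

      UPDATE MyTable
      SET SelectionStartDate = '2010-05-20T14:30:00.000'
      WHERE Id = 1;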

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs, rather than build a huge SELECT statement, I create a temp table, insert the IDs into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction, to ensure the temp table is deleted when I'm finished. My question is: what happens when I commit such a transaction vs. letting it roll back? Performance-wise, does the DB engine have to do more work to roll back the transaction than to commit it? Is there even a difference, given that the only modifications are to a temp table? A related question is here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
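    For reference, a sketch of the pattern being asked about (T-SQL; table and column names assumed):

      BEGIN TRAN;

      CREATE TABLE #ids (id INT PRIMARY KEY);
      INSERT INTO #ids (id) VALUES (101), (102), (103);  -- hundreds of IDs in practice

      SELECT t.*
      FROM trans_table AS t
      INNER JOIN #ids AS i ON i.id = t.id;

      DROP TABLE #ids;

      COMMIT;  -- vs. ROLLBACK: the question is whether the engine cares, given only #ids was modified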

  • MySQL Stored Procedures not working with SELECT (basic question)

    - by TMG
    Hello, I am using a platform (PerfectForms) that requires me to use stored procedures for most of my queries, and having never used stored procedures, I can't figure out what I'm doing wrong. The following statement executes without error:

      DELIMITER //
      DROP PROCEDURE IF EXISTS test_db.test_proc//
      CREATE PROCEDURE test_db.test_proc()
      SELECT 'foo';
      //
      DELIMITER ;

    But when I try to call it using:

      CALL test_proc();

    I get the following error:

      #1312 - PROCEDURE test_db.test_proc can't return a result set in the given context

    I am executing these statements from within phpMyAdmin 3.2.4, PHP version 5.2.12, and the MySQL server version is 5.0.89-community. When I write a stored procedure that returns an OUT parameter and then select it, things work fine, e.g.:

      DELIMITER //
      DROP PROCEDURE IF EXISTS test_db.get_sum//
      CREATE PROCEDURE test_db.get_sum(out total int)
      BEGIN
          SELECT SUM(field1) INTO total FROM test_db.test_table;
      END //
      DELIMITER ;

    works fine, and when I call it:

      CALL get_sum(@t);
      SELECT @t;

    I get the sum, no problem. Ultimately, what I need to do is wrap a fancy SELECT statement in a stored procedure, so I can call it and return multiple rows of multiple fields. For now I'm just trying to get any SELECT working. Any help is greatly appreciated.

  • Full Text Search in MSSQL2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with MSSQL 2008. My task is to investigate an issue where FTS cannot find the right results for Thai. First, I have a table with FTS enabled on the column ItemName, which is nvarchar. The catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so '????', '???' and '????' are written like this in a sentence: '???????????'. In the table, there are many rows that include the word '????'; for example, row #1 (ItemName: '???????????'). On the webpage I try to search for '????', but SQL Server cannot find it. So I try to investigate by running the following query in SQL Server:

      select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0)

    ...to see how the words are broken. The first argument is the text to be broken; the second specifies Thai (for the word breaker, and so on). Here is the result:

      row#1 (display_item: '????', source_item: '???????????')
      row#2 (display_item: '????', source_item: '???????????')
      row#3 (display_item: '??',   source_item: '???????????')

    Notice that the first and second rows display the wrong display_item: the characters shown there aren't even Thai characters. So the question is, where did those alien characters come from? I guess this is why I cannot search for '????': the word breaker is broken and is keeping the wrong characters in the indexes. Please help!

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named Code that holds the customer's ID, and there are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers:

      SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly ran into a problem when I selected a few thousand customers: there is a limit to the IN operator. An alternative is to create a temporary table with a single CODE field and inject the selected customer codes into it with INSERT statements. I can then use:

      SELECT A.*
      FROM TRANS_TABLE A
      INNER JOIN TEMP B ON (A.CODE = B.CODE)

    This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERT injection, and the dropping of the temporary table. Are you aware of a better solution for this situation?
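    If the temp-table route stays, two details that may reduce its overhead, sketched below in SQL Server-flavored syntax (names assumed): declare a primary key on CODE so the join is indexed, and load the codes with multi-row inserts rather than one INSERT per code:

      CREATE TABLE #codes (CODE VARCHAR(20) PRIMARY KEY);

      INSERT INTO #codes (CODE)
      VALUES ('C0001'), ('C0002'), ('C0003');  -- a few hundred values per statement

      SELECT A.*
      FROM TRANS_TABLE AS A
      INNER JOIN #codes AS B ON A.CODE = B.CODE;

      DROP TABLE #codes;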

  • A better way to build this MySQL statement with subselects

    - by Corey Maass
    I have five tables in my database: members, items, comments, votes and countries. I want to get 10 items, with the count of comments and votes for each item, plus the member that submitted each item and the country they are from. After posting here and elsewhere, I started using subselects to get the counts, but this query is taking 10 seconds or more!

      SELECT `items_2`.*,
          (SELECT COUNT(*) FROM `comments`
           WHERE (comments.Script = items_2.Id) AND (comments.Active = 1)) AS `Comments`,
          (SELECT COUNT(votes.Member) FROM `votes`
           WHERE (votes.Script = items_2.Id) AND (votes.Active = 1)) AS `votes`,
          `countrys`.`Name` AS `Country`
      FROM `items` AS `items_2`
      INNER JOIN `members` ON items_2.Member = members.Id AND members.Active = 1
      INNER JOIN `members` AS `members_2` ON items_2.Member = members.Id
      LEFT JOIN `countrys` ON countrys.Id = members.Country
      GROUP BY `items_2`.`Id`
      ORDER BY `Created` DESC
      LIMIT 10

    My question is whether this is the right way to do this, whether there's a better way to write this statement, or whether there's a whole different approach that would be better. Should I run the subselects separately and aggregate the information?
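    One common rewrite worth trying, sketched here: replace the correlated subselects with LEFT JOINs and grouped counts. COUNT(DISTINCT ...) guards against the row multiplication that joining both comments and votes would otherwise cause (a comments.Id primary key is assumed):

      SELECT i.*, c.Name AS Country,
             COUNT(DISTINCT cm.Id) AS Comments,
             COUNT(DISTINCT v.Member) AS Votes
      FROM items AS i
      INNER JOIN members AS m ON i.Member = m.Id AND m.Active = 1
      LEFT JOIN countrys AS c ON c.Id = m.Country
      LEFT JOIN comments AS cm ON cm.Script = i.Id AND cm.Active = 1
      LEFT JOIN votes AS v ON v.Script = i.Id AND v.Active = 1
      GROUP BY i.Id
      ORDER BY i.Created DESC
      LIMIT 10;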

  • Stop the user entering ' char

    - by Phil
    I have a search page where I would like to stop the user from entering a ' into textboxes, or replace it with a suitable character. Can anyone help me achieve this in ASP.NET/VB? For example, if a user searches for O'Reilly, the search crashes with the error:

      Line 1: Incorrect syntax near 'Reilly'. Unclosed quotation mark before the character string ' '.

    Thanks!
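    For context on the crash itself: inside a SQL string literal a single quote is escaped by doubling it, so the statement the page would need to generate looks like this sketch (table and column names assumed):

      SELECT * FROM authors WHERE last_name = 'O''Reilly';

    That said, the usual advice is to pass the search term as a query parameter instead of escaping quotes by hand - the error also signals a SQL injection hole, since user input is being concatenated straight into the statement.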

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id. Some properties of the data:

      - There are multiple users (distinct usr_id's).
      - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
      - trans_id is unique only for very small time ranges: over time it repeats.
      - remaining_lives (for a given user) can both increase and decrease over time.

    Example:

      time_stamp | lives_remaining | usr_id | trans_id
      ------------------------------------------------
      07:00      |        1        |   1    |    1
      09:00      |        4        |   2    |    2
      10:00      |        2        |   3    |    3
      10:00      |        1        |   2    |    4
      11:00      |        4        |   1    |    5
      11:00      |        3        |   1    |    6
      13:00      |        3        |   3    |    1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

      time_stamp | lives_remaining | usr_id | trans_id
      ------------------------------------------------
      11:00      |        3        |   1    |    6
      10:00      |        1        |   2    |    4
      13:00      |        3        |   3    |    1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

      SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
      FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
            FROM lives GROUP BY usr_id ORDER BY usr_id) a
      JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row, and then pass that information from the subquery to the main query that provides the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

      SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
      FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
            FROM lives GROUP BY usr_id ORDER BY usr_id) a
      JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
      ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler to grab the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and to Postgres in particular, so I know I need to make effective use of the proper indexes, but I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), on creating indexes, and on writing better queries would be much appreciated!

    P.S. You can use the following to create my example case:

      create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);
      insert into lives values ('2000-01-01 07:00', 1, 1, 1);
      insert into lives values ('2000-01-01 09:00', 4, 2, 2);
      insert into lives values ('2000-01-01 10:00', 2, 3, 3);
      insert into lives values ('2000-01-01 10:00', 1, 2, 4);
      insert into lives values ('2000-01-01 11:00', 4, 1, 5);
      insert into lives values ('2000-01-01 11:00', 3, 1, 6);
      insert into lives values ('2000-01-01 13:00', 3, 3, 1);
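    Postgres does have equivalents to Oracle's analytics: window functions (8.4+) and the Postgres-specific DISTINCT ON shortcut. A sketch of the latter, which keeps exactly one row per usr_id - the one that sorts first within its group - and pairs naturally with an index on (usr_id, time_stamp DESC, trans_id DESC):

      SELECT DISTINCT ON (usr_id)
             time_stamp, lives_remaining, usr_id, trans_id
      FROM lives
      ORDER BY usr_id, time_stamp DESC, trans_id DESC;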

  • postgresql: Fast way to update the latest inserted row

    - by Anonymous
    What is the best way to modify the latest added row without using a temporary table? E.g. the table structure is:

      id | text | date

    My current approach would be an insert using the PostgreSQL-specific clause RETURNING id, so that I can update the table afterwards with:

      update myTable set date = '2013-11-11' where id = lastRow

    However, I have the feeling that PostgreSQL is not simply picking the last row but iterating through millions of entries until "id = lastRow" is found. How can I directly access the last added row?
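    For what it's worth, a sketch of that round trip (column types assumed): as long as id is the primary key, the WHERE id = ... lookup is an index search rather than a scan over millions of rows:

      INSERT INTO myTable (text, date)
      VALUES ('hello', now())
      RETURNING id;   -- suppose this returns 42

      UPDATE myTable
      SET date = '2013-11-11'
      WHERE id = 42;  -- primary-key index lookup, not a sequential scan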

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous SP uses the inserted data to check several conditions and update other tables. It ran fine for the last month, with the SP executing within 2-3 seconds of a record's insertion, but now it takes more than 90 minutes. At present sys.conversation_endpoints has too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (average 60% CPU utilization). Now where do I need to look? I could re-create the database without any problem, but I don't think that is a good way to resolve the problem.
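    One hedged guess at the cleanup side: conversations that are never ended accumulate in sys.conversation_endpoints, and explicitly ending them drains the backlog (WITH CLEANUP removes an endpoint immediately, skipping the normal end-dialog exchange). A sketch that walks the closed endpoints:

      DECLARE @handle uniqueidentifier;

      DECLARE cur CURSOR FOR
          SELECT conversation_handle
          FROM sys.conversation_endpoints
          WHERE state_desc = 'CLOSED';  -- adjust the filter to whichever state is piling up

      OPEN cur;
      FETCH NEXT FROM cur INTO @handle;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          END CONVERSATION @handle WITH CLEANUP;
          FETCH NEXT FROM cur INTO @handle;
      END;
      CLOSE cur;
      DEALLOCATE cur;

    The longer-term fix is usually to make the activation procedure END CONVERSATION on both sides when a dialog is done, so the table never grows unbounded.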

  • Help with query

    - by hdoe123
    Hi, I'm trying to make a query that looks at a single table and checks whether a student is in both the CMHT team and the Medic team - if they are, I don't want to see them in the result. I only want students who are in CMHT or Medic, not both. Would the right direction be using a subquery to filter them out? I've done a search on NOT IN, but how do you check whether a student is in 2 or more teams?

      Student | Team  | ref
      ---------------------
      1       | CMHT  | 1
      1       | Medic | 2
      2       | Medic | 3    <- this would be in the result
      3       | CMHT  | 5    <- this would be in the result

    So far I've done the following; would I need to use a subquery, or do a self join and filter it that way?

      SELECT Table1.Student, Table1.Team, Table1.refnumber
      FROM Table1
      WHERE (((Table1.Team) In ('Medics','CMHT')))
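    A sketch of the self-join-in-a-subquery variant (team spellings taken from the sample data): the inner query finds students who appear in both teams, and NOT IN throws them out:

      SELECT t.Student, t.Team, t.refnumber
      FROM Table1 AS t
      WHERE t.Team IN ('Medic', 'CMHT')
        AND t.Student NOT IN (
            SELECT a.Student
            FROM Table1 AS a
            INNER JOIN Table1 AS b ON a.Student = b.Student
            WHERE a.Team = 'Medic' AND b.Team = 'CMHT'
        );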

  • Can this be done with the ORM? - Django

    - by RadiantHex
    Hi folks, I have a few items listed in a database, ordered by Reddit's ranking algorithm. This is it:

      import time
      from math import log

      def reddit_ranking(post):
          t = time.mktime(post.created_on.timetuple()) - 1134000000
          x = post.score
          if x > 0:
              y = 1
          elif x == 0:
              y = 0
          else:
              y = -1
          if x <= 0:
              z = 1  # log(0) is undefined, so scores <= 0 fall back to z = 1
          else:
              z = x
          return log(z) + y * t / 45000

    I'm wondering if there is any clever way of using Django's ORM to UPDATE the models in bulk, without doing this:

      items = Item.objects.filter(created_on__gte=datetime.now() - timedelta(days=7))
      for item in items:
          item.reddit_rank = reddit_ranking(item)
          item.save()

    I know about the F() object, but I can't figure out whether this function can be performed inside the ORM. Any ideas? Help would be very much appreciated!

  • MS Access: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query, given below:

      SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent,
          Training.TImpliment_Partner, Training.TVenue, Training.TStartDate,
          Training.TEndDate, Evaluation.EDate, Answer.QCode, Answer.Answer,
          Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question
      FROM ((Evaluation
      INNER JOIN Training ON Evaluation.ETCode = Training.TCode)
      INNER JOIN Answer ON Evaluation.ECode = Answer.ECode)
      INNER JOIN Questions ON Answer.QCode = Questions.QCode
      GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle, Training.Tcomponent,
          Training.TImpliment_Partner, Training.Tvenue, Answer.Answer, Questions.Question,
          Training.TStartDate, Training.TEndDate, Evaluation.EDate, Questions.SL
      ORDER BY Answer.QCode, Answer.Answer;

    There is another column, Training.TCode. I need to count distinct Training.TCode values - can anybody help me? If you need more information, please let me know.
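    Access SQL has no COUNT(DISTINCT ...), so the usual workaround is to count over a DISTINCT subquery. A sketch of the idea in isolation:

      SELECT COUNT(*) AS DistinctTCodes
      FROM (SELECT DISTINCT Training.TCode FROM Training) AS t;

    To fold it into the grouped query above, the same subquery would be joined in as a derived table (at whatever grain is needed) rather than added as another aggregate column.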

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so:

      `MyDB`.`MyTable` (
          `id` bigint(20) NOT NULL,
          `run_id` int(11) NOT NULL,
          `element_name` varchar(255) NOT NULL,
          `value` text,
          `line_order` int(11) default NULL,
          `column_order` int(11) default NULL
      )

    It is used to store data generated by a Java program that used to output this in CSV format, hence the line_order and column_order. Let's say I have two entries (following the table description):

      1, 1, 'ELEMENT 1', 'A', 0, 0
      2, 1, 'ELEMENT 2', 'B', 0, 1

    I want to pivot this data in a view for reporting, so that the output would look more like the CSV did:

      ---------------------
      |ELEMENT 1|ELEMENT 2|
      ---------------------
      |    A    |    B    |
      ---------------------

    The data coming in is extremely dynamic: it can arrive in any order, can include any of over 900 different elements, and the value could be anything. The run ID ties the rows together, and the line and column order basically tell me where the user wants the data to appear in the output.
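    The standard MySQL pivot for a known set of elements is conditional aggregation; a sketch for the two sample elements (with 900 dynamic element names, the realistic route is to generate this statement on the fly, e.g. via a prepared statement built from SELECT DISTINCT element_name):

      SELECT run_id,
             MAX(CASE WHEN element_name = 'ELEMENT 1' THEN value END) AS `ELEMENT 1`,
             MAX(CASE WHEN element_name = 'ELEMENT 2' THEN value END) AS `ELEMENT 2`
      FROM `MyDB`.`MyTable`
      GROUP BY run_id;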

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find the time spans between value changes. I tried a simple GROUP BY clause, but it collapses each (code, class) pair into a single range, so consecutive runs of the same class are lost. Consider the following example:

      create table #items (
          code varchar(4),
          class varchar(4),
          txdate datetime
      )

      insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01');
      insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02');
      insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03');
      insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04');
      insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05');
      insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06');
      insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07');
      insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08');
      insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09');

      select code, class, min(txdate) mindate, max(txdate) maxdate
      from #items
      group by code, class

    This returns the following results (notice the overlapping date ranges):

      |code|class|mindate   |maxdate   |
      ----------------------------------
      |A   |C    |2010-01-01|2010-01-07|
      |A   |D    |2010-01-04|2010-01-09|

    I would like the query to return the following:

      |code|class|mindate   |maxdate   |
      ----------------------------------
      |A   |C    |2010-01-01|2010-01-03|
      |A   |D    |2010-01-04|2010-01-05|
      |A   |C    |2010-01-06|2010-01-07|
      |A   |D    |2010-01-08|2010-01-09|

    Any ideas and suggestions?
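    This is the classic gaps-and-islands problem. A sketch of the row-number-difference technique (SQL Server 2005+): the difference between the two ROW_NUMBER sequences is constant within each consecutive run of the same class, so it can serve as a grouping key:

      select code, class, min(txdate) as mindate, max(txdate) as maxdate
      from (
          select code, class, txdate,
                 row_number() over (partition by code order by txdate)
               - row_number() over (partition by code, class order by txdate) as grp
          from #items
      ) as runs
      group by code, class, grp
      order by mindate;

    Against the sample data this yields exactly the four desired ranges.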

  • How to define a query on an n-m table

    - by user559889
    Hi, I'm having trouble defining a query. I have a Product and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category, but if the user does not provide a category, I want all products. I tried to create the query using a join, but this results in a product being selected multiple times if it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
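    A sketch using EXISTS instead of a join, which avoids the duplicates because each product is tested once rather than multiplied per matching category row (table and column names assumed; @categoryId is NULL when the user picks no category):

      SELECT p.*
      FROM Product AS p
      WHERE @categoryId IS NULL
         OR EXISTS (SELECT 1
                    FROM ProductCategory AS pc
                    WHERE pc.ProductId = p.Id
                      AND pc.CategoryId = @categoryId);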

  • How to move from untyped DataSets to POCO\LINQ2SQL in legacy application

    - by artvolk
    Good day! I have a legacy application whose data access layer consists of classes where queries are done using SqlConnection/SqlCommand and results are passed to upper layers wrapped in untyped DataSets/DataTables. Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do so), but I'd like to have the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables. The basic idea is to wrap the old data access code in a nice-looking "Model" from ASP.NET MVC's point of view. What is the fastest/easiest way of doing this?

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about the "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

      DataSource
          Id (PK)
          Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages", for audio "bit rate", and for video "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

      DataSourceAttribute
          Id (PK)
          DataSourceId (FK)
          Name
          Value

    Thus, I would have records like these:

      DataSource:          (1, 'mydoc.pdf'), (2, 'mysong.mp3'), (3, 'myvideo.avi')
      DataSourceAttribute: (1, 1, 'TotalPages', '10'), (2, 2, 'BitRate', '16'), (3, 3, 'Duration', '1:32')

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

      Filename,         TotalPages
      'mydoc.pdf',      '10'
      'myotherdoc.pdf', '23'
      ...

    The JOINs needed to produce that result are just too costly. How should I address this problem?
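    For reference, a sketch of the query being described - in this entity-attribute-value layout, every attribute displayed costs one join against DataSourceAttribute:

      SELECT ds.Filename, a.Value AS TotalPages
      FROM DataSource AS ds
      INNER JOIN DataSourceAttribute AS a
          ON a.DataSourceId = ds.Id
         AND a.Name = 'TotalPages';

    A composite index on DataSourceAttribute (Name, DataSourceId, Value) is the usual first thing to try before restructuring the model.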
