Search Results

Search found 28220 results on 1129 pages for 'sql clr'.

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table: DataSource Id (PK) Filename For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages." An example for audio would be "bit rate." An example for video would be "duration." Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way: DataSourceAttribute Id (PK) DataSourceId (FK) Name Value Thus, I would have records like these: DataSource->Id = 1 DataSource->Filename = 'mydoc.pdf' DataSource->Id = 2 DataSource->Filename = 'mysong.mp3' DataSource->Id = 3 DataSource->Filename = 'myvideo.avi' DataSourceAttribute->Id = 1 DataSourceAttribute->DataSourceId = 1 DataSourceAttribute->Name = 'TotalPages' DataSourceAttribute->Value = '10' DataSourceAttribute->Id = 2 DataSourceAttribute->DataSourceId = 2 DataSourceAttribute->Name = 'BitRate' DataSourceAttribute->Value = '16' DataSourceAttribute->Id = 3 DataSourceAttribute->DataSourceId = 3 DataSourceAttribute->Name = 'Duration' DataSourceAttribute->Value = '1:32' My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages: Filename, TotalPages 'mydoc.pdf', '10' 'myotherdoc.pdf', '23' ... The JOINs needed to produce the above result are just too costly. How should I address this problem?
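
    One commonly suggested alternative when the attribute set per type is known is a subtype table per source type, so typed attributes become real columns and the report needs only one join on the key. A minimal sketch using the names from the question (PdfAttribute is an assumed table name):

      -- Assumed subtype table: one row per PDF data source
      CREATE TABLE PdfAttribute (
          DataSourceId INT PRIMARY KEY,   -- FK to DataSource.Id
          TotalPages   INT
      );

      -- The report is then a single key join instead of one join per attribute
      SELECT ds.Filename, pdf.TotalPages
      FROM DataSource ds
      JOIN PdfAttribute pdf ON pdf.DataSourceId = ds.Id;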

  • Fetch posts with attachments in a certain category?

    - by TiuTalk
    I need to retrieve a list of posts that have (at least) one attachment that belongs to a given category in WordPress. I created the relation between attachments and categories myself using the default WordPress method. Here's the query that I'm running right now: SELECT p.* FROM `wp_posts` AS p # The post INNER JOIN `wp_posts` AS a # The attachment ON p.`ID` = a.`post_parent` AND a.`post_type` = 'attachment' INNER JOIN `wp_term_relationships` AS ra ON a.`ID` = ra.`object_id` AND ra.`term_taxonomy_id` IN (3) # The category ID list WHERE p.`post_type` = 'post' ORDER BY p.`post_date` DESC LIMIT 15 The problem is that the query only uses the first attachment found, and if it doesn't belong to the category, the result isn't returned.
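
    One way to express "at least one attachment in the category" is an EXISTS subquery, which also avoids returning the same parent post once per matching attachment. A sketch against the same tables (untested):

      SELECT p.*
      FROM wp_posts AS p
      WHERE p.post_type = 'post'
        AND EXISTS (
              SELECT 1
              FROM wp_posts AS a
              JOIN wp_term_relationships AS ra ON ra.object_id = a.ID
              WHERE a.post_parent = p.ID
                AND a.post_type = 'attachment'
                AND ra.term_taxonomy_id IN (3)   -- the category ID list
            )
      ORDER BY p.post_date DESC
      LIMIT 15;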

  • How to switch from MSSQL to MySQL for use with an Excel pivot

    - by Sim
    Hi all, I have a table that contains sales transactions (~20 million rows). Previously I used MSSQL and exported it to an Excel pivot. A data refresh took 10-15 minutes, but that was still doable. However, after I migrated to MySQL (using XamppLite & ODBC), it takes forever to refresh the data in the Excel pivot. Maybe I didn't optimize MySQL well enough? Can anyone share some thoughts about this? Thanks
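
    Without the table definition this is only a guess, but the usual first suspects are missing indexes on the columns the pivot filters and groups on, plus stale optimizer statistics after the bulk migration. A sketch with assumed table and column names:

      -- Column names here are assumptions; index whatever the pivot filters and groups on
      CREATE INDEX idx_sales_date_product ON sales_transactions (tx_date, product_id);

      -- Refresh the optimizer statistics after the bulk load
      ANALYZE TABLE sales_transactions;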

  • How do I save an object that contains an EntitySet?

    - by Pete
    Say I have a User (mapped to a User table) and the Edit view (in MVC) displays a multiselectlist of Modules (mapped to a Modules table) that the user can access, with the Modules pre-selected based on the User's EntitySet of Modules. I have tried saving the User, then deleting all User_Modules manually and adding them back based on what's selected on submit, but the User has a null EntitySet for User.User_Modules. I cannot find the correct way to handle this scenario anywhere online. Can anyone help?

  • Recursive SQL giving ORA-01790

    - by PenFold
    Using Oracle 11g Release 2, the following query gives an ORA-01790: expression must have same datatype as corresponding expression: with intervals(time_interval) AS (select trunc(systimestamp) from dual union all select (time_interval + numtodsinterval(10, 'Minute')) from intervals where time_interval < systimestamp) select time_interval from intervals; The error suggests that the two subqueries of the UNION ALL are returning different datatypes. Even if I cast to TIMESTAMP in each of the subqueries, I get the same error. What am I missing?
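
    One workaround that is often reported for this error is to CAST the anchor member so both branches resolve to the same type - TRUNC on a timestamp yields a DATE, while the recursive branch can end up typed differently. A sketch, not verified on every 11gR2 patch level:

      WITH intervals (time_interval) AS (
          SELECT CAST(TRUNC(systimestamp) AS TIMESTAMP) FROM dual   -- anchor forced to TIMESTAMP
          UNION ALL
          SELECT time_interval + NUMTODSINTERVAL(10, 'Minute')
          FROM intervals
          WHERE time_interval < systimestamp
      )
      SELECT time_interval FROM intervals;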

  • How to do multiple replaces throughout a table in MS Access

    - by silverkid
    I am a little confused about the best way to replace all occurrences of 1. blanks 2. - 3. NA in all columns of TableA with the question mark (?) character. Sample row in the original TableA: 444586 RAUR <blank> 8 570 NA - 13 - SCHS299 MP 339 70 EN <blank> Same row in the expected TableA: 444586 RAUR ? 8 570 ? ? 13 ? SCHS299 MP 339 70 EN ? Please help me out; I can't use the Find and Replace toolbar of Access.
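
    In Access SQL the closest equivalent is one UPDATE per column (or a short VBA loop over the table's Fields collection running the same statement for each column). A sketch with an assumed column name:

      -- Repeat for each column; Col3 is an assumed name
      UPDATE TableA
      SET Col3 = '?'
      WHERE Col3 IS NULL
         OR Trim(Col3 & '') = ''
         OR Col3 IN ('-', 'NA');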

  • Help with query

    - by hdoe123
    Hi, I'm trying to make a query that looks at a single table and checks whether a student is in a team called CMHT and in a Medic team - if they are in both, I don't want to see the result. I only want to see them if they are in CMHT or Medic, not both. Would the right direction be using a subquery to filter it out? I've done a search on NOT IN, but how could you check whether a student is in more than one of the teams or not? Student Team ref 1 CMHT 1 1 Medic 2 2 Medic 3 this would be in the result 3 CMHT 5 this would be in the result So far I've done the following code - would I need to use a subquery, or do a self join and filter it that way? SELECT Table1.Student, Table1.Team, Table1.refnumber FROM Table1 WHERE (((Table1.Team) In ('Medics','CMHT')))
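
    One way to keep only students who appear under exactly one of the two teams is to exclude anyone who also has a row for the other team. A sketch using the sample values from the question (column names follow the query above):

      SELECT t.Student, t.Team, t.refnumber
      FROM Table1 AS t
      WHERE t.Team IN ('Medic', 'CMHT')
        AND NOT EXISTS (
              SELECT 1
              FROM Table1 AS o
              WHERE o.Student = t.Student
                AND o.Team IN ('Medic', 'CMHT')
                AND o.Team <> t.Team
            );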

  • Stop the user from entering the ' character

    - by Phil
    I have a search page where I would like to stop the user from entering a ' into the textboxes, or replace it with a suitable character. Can anyone help me achieve this in ASP.NET (VB)? For example, if a user searches for O'Reilly the search crashes with the error: Line 1: Incorrect syntax near 'Reilly'. Unclosed quotation mark before the character string ' '. Thanks!
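
    The error is a sign that the search text is being concatenated straight into the SQL string; the robust fix is a parameterised query rather than filtering characters. For what it's worth, at the SQL level a literal apostrophe is written by doubling it - a sketch with assumed table and column names:

      -- In a SQL string literal, an apostrophe is escaped by doubling it
      SELECT *
      FROM Authors                  -- assumed table
      WHERE LastName = 'O''Reilly'; -- assumed column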

  • Full Text Search in MSSQL2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with MSSQL2008. My task is to investigate an issue where FTS cannot find the right results for Thai. First, I have a table with FTS enabled on the column 'ItemName', which is nvarchar. The catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so '????' '???' '????' are written like this in a sentence: '???????????' In the table, there are many rows that include the word (????); for example row#1 (ItemName: '???????????') On the webpage, I try to search for '????' but SQL Server cannot find it. So I try to investigate it by running the following query in SQL Server: select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0) ...to see how the words are broken. The first argument is the text to be broken. The second parameter specifies that we're using Thai (word breaker, and so on). Here is the result: row#1 (display_item: '????', source_item: '???????????') row#2 (display_item: '????', source_item: '???????????') row#3 (display_item: '??', source_item: '???????????') Notice that the first and second rows display the wrong display_item: the '?' in '????' isn't even a Thai character, and the '?' in '????' is not a Thai character either. So the question is, where did those alien characters come from? I guess this is why I cannot search for '????': the word breaker is broken and is keeping the wrong characters in the indexes. Please help!

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances: InstanceA - Production server InstanceB - Test server My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance schema relationship looks like this: InstanceA - Schema Version 1.5 InstanceB - Schema Version 1.6 (new version being tested) An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To fulfill this, I am taking the database backups of InstanceA and applying them (restoring them) to InstanceB. My question is, how does the schema version affect the restore process? I know I can do this: Backup InstanceA - Schema Version 1.5 Restore to InstanceB - Schema Version 1.5 But can I do this? Backup InstanceA - Schema Version 1.5 Restore to InstanceB - Schema Version 1.6 (new version being tested) If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 by just having an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by having a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
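
    For what it's worth, RESTORE replaces the whole database, schema included, so after restoring an InstanceA backup onto InstanceB the test server is back on Schema Version 1.5 regardless of what was deployed there; the 1.6 changes would have to be re-applied afterwards. A sketch of the restore itself (the path is an assumption):

      RESTORE DATABASE MyDb
      FROM DISK = N'C:\Backups\MyDb_InstanceA.bak'  -- assumed backup path
      WITH REPLACE, RECOVERY;                       -- overwrite the existing InstanceB database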

  • How to efficiently SELECT rows from a database table based on a selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that holds the customer's ID. There are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table. The user may select an arbitrary number of customers for rendering. I used the IN operator first and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly run into a problem if I select a few thousand customers. There is a limitation on the IN operator. An alternative is to create a temporary table with only one field, CODE, and inject the selected customer codes into the temporary table using INSERT statements. I may then use SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERTs and the dropping of the temporary table. Are you aware of a better solution to handle this situation?
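
    One variant of the temporary-table approach that avoids the create/drop overhead on every report run is a permanent, indexed selection table keyed by a request or session id. A sketch (table and column names other than TRANS_TABLE/CODE are assumptions):

      -- Filled once per report request, then joined and cleared
      CREATE TABLE report_selection (
          session_id INT         NOT NULL,
          code       VARCHAR(20) NOT NULL,
          PRIMARY KEY (session_id, code)
      );

      SELECT a.*
      FROM TRANS_TABLE a
      INNER JOIN report_selection b
              ON b.code = a.CODE
             AND b.session_id = 42;   -- the current request's id (placeholder)

      DELETE FROM report_selection WHERE session_id = 42;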

  • A better way to build this MySQL statement with subselects

    - by Corey Maass
    I have five tables in my database: members, items, comments, votes and countries. I want to get 10 items. I want to get the count of comments and votes for each item. I also want the member that submitted each item, and the country they are from. After posting here and elsewhere, I started using subselects to get the counts, but this query is taking 10 seconds or more! SELECT `items_2`.*, (SELECT COUNT(*) FROM `comments` WHERE (comments.Script = items_2.Id) AND (comments.Active = 1)) AS `Comments`, (SELECT COUNT(votes.Member) FROM `votes` WHERE (votes.Script = items_2.Id) AND (votes.Active = 1)) AS `votes`, `countrys`.`Name` AS `Country` FROM `items` AS `items_2` INNER JOIN `members` ON items_2.Member=members.Id AND members.Active = 1 INNER JOIN `members` AS `members_2` ON items_2.Member=members.Id LEFT JOIN `countrys` ON countrys.Id = members.Country GROUP BY `items_2`.`Id` ORDER BY `Created` DESC LIMIT 10 My question is whether this is the right way to do this, whether there's a better way to write this statement, or whether there's a whole different approach that would be better. Should I run the subselects separately and aggregate the information?
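
    One alternative worth timing is to pre-aggregate the counts in derived tables and join them once, instead of running a correlated subselect per item; whether it wins depends on the indexes (comments and votes both want an index covering (Script, Active)). A sketch, untested:

      SELECT i.*,
             COALESCE(c.cnt, 0) AS Comments,
             COALESCE(v.cnt, 0) AS votes,
             co.Name            AS Country
      FROM items AS i
      INNER JOIN members AS m ON m.Id = i.Member AND m.Active = 1
      LEFT JOIN countrys AS co ON co.Id = m.Country
      LEFT JOIN (SELECT Script, COUNT(*) AS cnt
                 FROM comments WHERE Active = 1 GROUP BY Script) AS c ON c.Script = i.Id
      LEFT JOIN (SELECT Script, COUNT(Member) AS cnt
                 FROM votes WHERE Active = 1 GROUP BY Script) AS v ON v.Script = i.Id
      ORDER BY i.Created DESC
      LIMIT 10;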

  • MySQL Stored Procedures not working with SELECT (basic question)

    - by TMG
    Hello, I am using a platform (perfectforms) that requires me to use stored procedures for most of my queries, and having never used stored procedures, I can't figure out what I'm doing wrong. The following statement executes without error: DELIMITER // DROP PROCEDURE IF EXISTS test_db.test_proc// CREATE PROCEDURE test_db.test_proc() SELECT 'foo'; // DELIMITER ; But when I try to call it using: CALL test_proc(); I get the following error: #1312 - PROCEDURE test_db.test_proc can't return a result set in the given context I am executing these statements from within phpmyadmin 3.2.4, PHP Version 5.2.12 and the mysql server version is 5.0.89-community. When I write a stored procedure that returns a parameter, and then select it, things work fine (e.g.): DELIMITER // DROP PROCEDURE IF EXISTS test_db.get_sum// CREATE PROCEDURE test_db.get_sum(out total int) BEGIN SELECT SUM(field1) INTO total FROM test_db.test_table; END // DELIMITER ; works fine, and when I call it: CALL get_sum(@t); SELECT @t; I get the sum no problem. Ultimately, what I need to do is have a fancy SELECT statement wrapped up in a stored procedure, so I can call it, and return multiple rows of multiple fields. For now I'm just trying to get any select working. Any help is greatly appreciated.

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But, I can't seem to get a particular query working right now (each row contains NULL when it should contain a created JSON object with the values of multiple JOINs). Here's the general idea: SELECT CONCAT( "{", "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],", "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],", "}" ) cool_json FROM table_name tn INNER JOIN ( some_table st ) ON st.some_id = tn.id LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 ) ON st.id = at.some_id AND at.different_id = ao.different_id AND ao.different_id = t1.id LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 ) ON st.id = at2.some_id AND at2.different_id = ao2.different_id AND ao2.different_id = t2.id GROUP BY tn.id ORDER BY tn.name Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing 1 LEFT JOIN & GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs from the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
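
    CONCAT returns NULL as soon as any argument is NULL, and GROUP_CONCAT over a LEFT JOIN that matched nothing is NULL - so a single empty list nulls out the whole cool_json value. Wrapping each list in IFNULL (and dropping the trailing comma, which is invalid JSON anyway) is the usual guard; a sketch of just the SELECT list:

      SELECT CONCAT(
               '{',
               '"some_list":[',  IFNULL(GROUP_CONCAT(DISTINCT t1.id), ''), '],',
               '"other_list":[', IFNULL(GROUP_CONCAT(DISTINCT t2.id), ''), ']',
               '}'
             ) AS cool_json
      -- FROM / JOIN / GROUP BY / ORDER BY clauses unchanged from the query above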

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous SP is used to check several conditions against the inserted data and update other tables. It ran fine for the last month, with the SP executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints has far too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (average 60% CPU utilization). Where do I need to look? I can re-create the database without any problem, but I don't think that is a good way to resolve the problem.
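
    The endpoint table usually grows like this when conversations are never ended on both sides, so the lasting fix is for the activated procedure (and the initiating side) to issue END CONVERSATION when it is done. The existing backlog can be closed off manually; a sketch that assumes the stuck rows show up as closed or disconnected:

      DECLARE @handle UNIQUEIDENTIFIER;

      DECLARE stale CURSOR FAST_FORWARD FOR
          SELECT conversation_handle
          FROM sys.conversation_endpoints
          WHERE state IN ('CD', 'DI');            -- closed / disconnected; adjust to what you actually see

      OPEN stale;
      FETCH NEXT FROM stale INTO @handle;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          END CONVERSATION @handle WITH CLEANUP;  -- CLEANUP drops it without the normal end-dialog handshake
          FETCH NEXT FROM stale INTO @handle;
      END
      CLOSE stale;
      DEALLOCATE stale;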

  • MS ACCESS: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query: SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.TVenue, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Answer.QCode, Answer.Answer, Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question FROM ((Evaluation INNER JOIN Training ON Evaluation.ETCode=Training.TCode) INNER JOIN Answer ON Evaluation.ECode=Answer.ECode) INNER JOIN Questions ON Answer.QCode=Questions.QCode GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner, Training.Tvenue, Answer.Answer, Questions.Question, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Questions.SL ORDER BY Answer.QCode, Answer.Answer; There is another column, Training.TCode. I need to count distinct Training.TCode values - can anybody help me? If you need more information please let me know.
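
    Access SQL has no COUNT(DISTINCT ...), so the usual workaround is to de-duplicate in a subquery and count the result, then join that back into the larger query if needed. A minimal sketch of the pattern:

      -- Count distinct TCode values by counting a de-duplicated derived table
      SELECT COUNT(*) AS DistinctTCodes
      FROM (SELECT DISTINCT Training.TCode FROM Training) AS d;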

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find time spans between value changes. I tried a simple group by clause but it eliminates overlapping changes. Consider the following example:

    create table #items ( code varchar(4), class varchar(4), txdate datetime )
    insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01');
    insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02');
    insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03');
    insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04');
    insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05');
    insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06');
    insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07');
    insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08');
    insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09');

    select code, class, min(txdate) mindate, max(txdate) maxdate
    from #items
    group by code, class

    This returns the following results (notice the overlapping date ranges):

    ----------------------------------
    |code|class|mindate   |maxdate   |
    ----------------------------------
    |A   |C    |2010-01-01|2010-01-07|
    |A   |D    |2010-01-04|2010-01-09|

    I would like to have the query return the following:

    ----------------------------------
    |code|class|mindate   |maxdate   |
    ----------------------------------
    |A   |C    |2010-01-01|2010-01-03|
    |A   |D    |2010-01-04|2010-01-05|
    |A   |C    |2010-01-06|2010-01-07|
    |A   |D    |2010-01-08|2010-01-09|

    Any ideas and suggestions?
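
    This is the classic gaps-and-islands problem; on SQL Server 2005 or later one way is to tag each run of identical class values with the difference of two ROW_NUMBER sequences and group on that. A sketch against the #items table above:

      -- Rows keep the same (rn_by_code - rn_by_code_and_class) value while class is unchanged
      SELECT code, class, MIN(txdate) AS mindate, MAX(txdate) AS maxdate
      FROM (
          SELECT code, class, txdate,
                 ROW_NUMBER() OVER (PARTITION BY code        ORDER BY txdate)
               - ROW_NUMBER() OVER (PARTITION BY code, class ORDER BY txdate) AS grp
          FROM #items
      ) AS x
      GROUP BY code, class, grp
      ORDER BY mindate;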

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so: id (int), run_id (int), element_name (varchar), value (text), line_order, column_order `MyDB`.`MyTable` ( `id` bigint(20) NOT NULL, `run_id` int(11) NOT NULL, `element_name` varchar(255) NOT NULL, `value` text, `line_order` int(11) default NULL, `column_order` int(11) default NULL It is used to store data generated by a Java program that used to output this in CSV format, hence the line_order and column_order. Let's say I have 2 entries (according to the table description): 1,1,'ELEMENT 1','A',0,0 2,1,'ELEMENT 2','B',0,1 I want to pivot this data in a view for reporting so that it looks more like the CSV would, where the output would look like this:

    ---------------------
    |ELEMENT 1|ELEMENT 2|
    ---------------------
    |    A    |    B    |
    ---------------------

    The data coming in is extremely dynamic; it can be in any order, can be any of over 900 different elements, and the value could be anything. The Run ID ties them all together, and the line and column order basically let me know where the user wants that data to come back in order.
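
    With a fixed (or generated) list of element names, conditional aggregation gives one output row per run and one column per element; with 900+ possible elements the column list would have to be built dynamically with a prepared statement or assembled in the application. A sketch for the two sample rows:

      -- One row per run_id, one column per element_name
      SELECT run_id,
             MAX(CASE WHEN element_name = 'ELEMENT 1' THEN value END) AS `ELEMENT 1`,
             MAX(CASE WHEN element_name = 'ELEMENT 2' THEN value END) AS `ELEMENT 2`
      FROM MyTable
      GROUP BY run_id;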

  • Replace into equivalent for postgresql and then autoincrementing an int

    - by Mohamed Ikal Al-Jabir
    Okay, no seriously, if a PostgreSQL guru can help out - I'm just getting started. Basically what I want is a simple table like this: CREATE TABLE schema.searches ( search_id serial NOT NULL, search_query character varying(255), search_count integer DEFAULT 1, CONSTRAINT pkey_search_id PRIMARY KEY (search_id) ) WITH ( OIDS=FALSE ); I need something like REPLACE INTO from MySQL. I don't know if I have to write my own procedure or something? Basically: check if the query already exists; if so, just add 1 to the count; if not, add it to the db. I can do this in my PHP code, but I'd rather all that be done inside Postgres. Thanks for helping
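
    On PostgreSQL 9.5 and later the built-in equivalent is INSERT ... ON CONFLICT; it needs a unique constraint on search_query (an assumption about the intended semantics here). On older versions the usual pattern is an UPDATE followed by an INSERT when no row was touched, typically wrapped in a plpgsql function. A sketch of the 9.5+ form:

      -- Requires a unique constraint, e.g.:
      -- ALTER TABLE schema.searches ADD CONSTRAINT uq_search_query UNIQUE (search_query);
      INSERT INTO schema.searches (search_query)
      VALUES ('some term')
      ON CONFLICT (search_query)
      DO UPDATE SET search_count = searches.search_count + 1;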

  • Best practices for querying an entire row in a database table? (MySQL / CodeIgniter)

    - by Walker
    Sorry for the novice question! I have a table called cities with fields called id, name, xpos, ypos. I'm trying to use the data from each row to set a div's position and name. What I'm wondering is: what's the best practice for dynamically querying an unknown number of rows (I don't know how many cities there might be; I want to pull the information from all of them), passing the variables from the model into the view, and then setting attributes with them? Right now I've 'hacked' a solution where I run a different function for each field, each of which pulls its values using a query ('SELECT id FROM cities;'), then I store that in a global array variable and pass it into the view. I do this for each field, so I have arrays called city_idVar, city_nameVar, city_xposVar, city_yposVar, and I know that city_nameVar[0] matches up with city_xposVar[0], etc. Is there a better way?
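
    For what it's worth, a single query can return whole rows, so each row's id, name, xpos and ypos stay together and the model can pass one result set to the view instead of four parallel arrays. The SQL itself is just:

      -- One round trip; every returned row already pairs id, name, xpos and ypos
      SELECT id, name, xpos, ypos
      FROM cities;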

  • Can this be done with the ORM? - Django

    - by RadiantHex
    Hi folks, I have a few items listed in a database, ordered through Reddit's algorithm. This is it: def reddit_ranking(post): t = time.mktime(post.created_on.timetuple()) - 1134000000 x = post.score if x>0: y=1 elif x==0: y=-0 else: y=-1 if x<0: z=1 else: z=x return (log(z) + y * t/45000) I'm wondering if there is any clever way of using Django's ORM to UPDATE the models in bulk, without doing this: items = Item.objects.filter(created_on__gte=datetime.now()-timedelta(days=7)) for item in items: item.reddit_rank = reddit_rank(item) item.save() I know about the F() object, but I can't figure out if this function can be performed inside the ORM. Any ideas? Help would be very much appreciated!

  • SQLce Select query problem

    - by DieHard
    I wrote a truck show contest voting app (financial, etc.) using SQLite, then decided to write a backup app for show day using CE 3.5. I created the db, moved it to the data directory, created the tables and configured the datagridviews - all is well. I entered some test data, started Management Studio 08, ran a select query against the table and got null returns. I then started the app from VS Studio and found that the test data is gone. I re-entered the data and ran the query in Management Studio - data gone again. If I use VS Studio I can start the app and enter data, close the app, restart, and the data is still there. It seems that only when an outside tool runs a select query does the data get deleted. I don't know CE that well, but this cannot be right. select * from votes = delete * from votes??
