Search Results

Search found 31421 results on 1257 pages for 'entity sql'.


  • Unit testing a SQL code generator

    - by Tom H.
    The team I'm on is currently writing TSQL code that generates TSQL code, which is saved as scripts and run later. We're having a little difficulty separating our unit tests between testing the code generator parts and testing the code they generate. I've read through another similar question, but I was hoping to get some specific examples of what kinds of unit test cases we might have. As an example, say I have a bit of code that simply generates a DROP statement for a view, given the view's schema and name. Do I just test that the generated code matches some expected outcome using string comparisons, and then in a later integration or system test make sure that the DROP actually drops the view if it exists, does nothing if the view doesn't exist, or raises an error if the view is one we've marked as not allowing a drop? Thanks for any advice!
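    A minimal sketch of the string-comparison style of test (the generator function, view name, and error convention here are all hypothetical):

      -- Hypothetical generator under test: builds a DROP statement for a view.
      CREATE FUNCTION dbo.GenerateDropView (@schema sysname, @view sysname)
      RETURNS nvarchar(max)
      AS
      BEGIN
          RETURN N'DROP VIEW ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@view) + N';';
      END;
      GO

      -- Unit test: pure string comparison, no view needs to exist.
      DECLARE @actual nvarchar(400), @expected nvarchar(400);
      SET @actual = dbo.GenerateDropView(N'dbo', N'vw_Orders');
      SET @expected = N'DROP VIEW [dbo].[vw_Orders];';
      IF @actual <> @expected
          RAISERROR (N'GenerateDropView failed: got "%s"', 16, 1, @actual);

    Whether the generated DROP behaves correctly against a live database is then exactly the integration/system-test concern the question describes.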

    Read the article

  • What to monitor on MSSQL Server

    - by user361434
    Hi all. I have been asked to monitor MSSQL Server (2005 & 2008) and am wondering what good metrics to look at are. I can access WMI counters, but am slightly lost as to how much depth is going to be useful. Currently I have on my list:

      - user connections
      - logins per second
      - latch waits per second
      - total latch wait time
      - deadlocks per second
      - errors per second
      - log and data file sizes

    I am looking to monitor values that will indicate a degradation of performance on the machine or a potential serious issue. To that end I am also wondering what values for some of these would be considered normal vs. problematic. As I reckon this would be a really good question to have answered for the general community, I thought I'd court some of you DBA experts out there (I am certainly not one of them!). Apologies if it's a rather open-ended question. Ry
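    For what it's worth, a sketch of pulling several of these from SQL Server's own counter DMV rather than WMI (counter names are as they appear on 2005/2008; treat the exact list as an assumption to verify against sys.dm_os_performance_counters on your instance):

      SELECT object_name, counter_name, cntr_value
      FROM sys.dm_os_performance_counters
      WHERE counter_name IN (N'User Connections',
                             N'Logins/sec',
                             N'Latch Waits/sec',
                             N'Total Latch Wait Time (ms)',
                             N'Number of Deadlocks/sec',
                             N'Errors/sec');

    Note the /sec counters are cumulative totals: sample twice and take the delta over the interval to get an actual per-second rate.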

    Read the article

  • MySQL - speeding up a regex query

    - by Uwe
    I have a table:

      +--------+------------------+------+-----+---------+----------------+
      | Field  | Type             | Null | Key | Default | Extra          |
      +--------+------------------+------+-----+---------+----------------+
      | idurl  | int(11)          | NO   | PRI | NULL    | auto_increment |
      | idsite | int(10) unsigned | NO   | MUL | NULL    |                |
      | url    | varchar(2048)    | NO   |     | NULL    |                |
      +--------+------------------+------+-----+---------+----------------+

    The select statement is:

      SELECT idurl, url
      FROM URL
      WHERE idsite = 34
        AND url REGEXP '^https\\://www\\.domain\\.com/checkout/step_one\\.php.*'

    The query needs 5 seconds on a table with 1000000 rows. Can I achieve a speedup with indexes or something else?
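    Since the pattern is anchored at the start and contains no real alternation, a sketch of the usual rewrite: turn it into a LIKE prefix match so a composite index can be range-scanned (the index name and the 255-character prefix length are assumptions):

      ALTER TABLE URL ADD INDEX idx_idsite_url (idsite, url(255));

      SELECT idurl, url
      FROM URL
      WHERE idsite = 34
        AND url LIKE 'https://www.domain.com/checkout/step_one.php%';

    REGEXP can never use an index on the column, whereas LIKE with a constant prefix can.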

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

      - user_id: primary key
      - first_name
      - last_name
      - user_type_id: foreign key to the UserType table

    and another table UserType with just two fields:

      - user_type_id: primary key
      - name: the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help. Current state of thought: for now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts:

      http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
      http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
      Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.
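    For concreteness, a minimal sketch of the UserType example above as DDL (dialect details are incidental):

      CREATE TABLE UserType (
          user_type_id INT PRIMARY KEY,
          name         VARCHAR(50) NOT NULL UNIQUE
      );

      INSERT INTO UserType (user_type_id, name)
      VALUES (1, 'Administrator'), (2, 'PowerUser'), (3, 'Normal');

    The UNIQUE constraint on name is what makes it a controlled vocabulary rather than free text.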

    Read the article

  • Why can't I save a long text in my MySQL database?

    - by DomingoSL
    I'm trying to save a long text (about 2500 chars), input by my users via a web form and passed to the server using PHP, to my database. When I look in phpMyAdmin, the text gets cropped. How can I configure my table to store the complete text? This is my table config:

      CREATE TABLE `extra_879` (
        `id` bigint(20) NOT NULL auto_increment,
        `id_user` bigint(20) NOT NULL,
        `title` varchar(300) NOT NULL,
        `content` varchar(3000) NOT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `id_user` (`id_user`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

    Note the field content, which has a limit of 3000 chars, yet the text always gets cropped at 690 chars. Thanks for any help!

    EDIT: I found the problem but I don't know how to solve it. The query always gets cropped at the same character, a special char: ù

    EDIT 2: This is the cropped query:

      INSERT INTO extra_879 (id,id_user,title,content) VALUES (NULL,'1','Informazione Extra',' Riconoscimenti Laurea di ingegneria presa a le 22 anni e in il terso posto della promozione Diploma analista di sistemi ottenuto il rating massimo 20/20, primo posto della promozione. Borsa di Studio (offerta dal Ministero Esteri Italiano) vinta nel 2010 (Valutazione del territorio attraverso le nueve tecnologie) Pubblicazione di paper; Stima del RCS della nave CCGS radar sulla base dei risultati di H. Leong e H. Wilson. http://www.ing.uc.edu.vek-azozayalarchivospdf/PAPER-Sarmiento.pdf Tesi di laurea: PROGETTAZIONE E REALIZZAZIONE DI UN SIS-TEMA DI TELEMETRIA GSM PER IL CONTROLLO DELLO STATO DI TRANSITO VEICOLARE E CLIMA (ottenuto il punteggio pi')

    It gets cropped right at the (ottenuto il punteggio più alto) phrase, just when ù appears...

    EDIT 3: I am using jQuery + AJAX to send the query:

      $.ajax({type: "POST",
              url: "handler.php",
              data: "e_text="+ $('#e_text').val() + "&e_title="+ $('#extra_title').val(),
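    Truncation at exactly the first non-ASCII character usually means the bytes arriving over the connection are not in the character set the connection claims (here, UTF-8 form data going into a latin1 connection/table). A sketch of the usual fix, assuming the app should be UTF-8 end to end:

      -- Convert the table's charset:
      ALTER TABLE extra_879 CONVERT TO CHARACTER SET utf8;

      -- And declare the connection charset before running queries:
      SET NAMES utf8;

    In PHP the SET NAMES step is typically done via mysql_set_charset() / mysqli_set_charset() right after connecting. Separately, the value is being concatenated into the query string unescaped, which breaks on quotes and invites SQL injection; it should at least pass through the driver's escaping function.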

    Read the article

  • LINQ Query returns nothing.

    - by gtas
    Why does this query return 0 rows? There is a record matching the arguments.

      Deafkaw.Where(p => (p.ImerominiaKataxorisis >= aDate && p.ImerominiaKataxorisis <= DateTime.Now)
                      && (p.Year == etos && p.IsYpodeigma == false)
                  ).ToList();

    Am I missing something?
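    Two things worth checking in a sketch like the following (variable names as in the question): Where does not filter in place, it returns a new sequence that has to be captured; and >= on a DateTime compares the time component too, so a record from the same day as aDate but with an earlier time-of-day fails the lower bound.

      var results = Deafkaw
          .Where(p => p.ImerominiaKataxorisis >= aDate.Date   // strip time-of-day
                   && p.ImerominiaKataxorisis <= DateTime.Now
                   && p.Year == etos
                   && !p.IsYpodeigma)
          .ToList();

    If results is still empty, testing each predicate separately usually isolates the clause that eliminates the record.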

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that'll reduce duplication, make processing faster and give me some flexibility with how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

      insert into stream(entry_id, user_id)
      select entries.id, subscriptions_users.user_id
      from entries
      inner join subscriptions_users
          on subscriptions_users.subscription_id = entries.subscription_id
      where subscriptions_users.user_id = 1
        and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
        and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions will keep on adding a different set of 100 entries that do not already exist in the table (with each successful query taking longer and longer). This is close but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
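    A sketch of the same statement with the NOT IN subqueries rewritten as anti-joins, which MySQL's optimizer of that era handled far better, plus the cap the buffer wants (the suggested indexes are assumptions):

      -- Supporting indexes:
      --   metadata(user_id, entry_id), stream(user_id, entry_id),
      --   entries(subscription_id), subscriptions_users(user_id, subscription_id)
      insert into stream (entry_id, user_id)
      select e.id, su.user_id
      from entries e
      inner join subscriptions_users su on su.subscription_id = e.subscription_id
      left join metadata m on m.user_id = su.user_id and m.entry_id = e.id
      left join stream s on s.user_id = su.user_id and s.entry_id = e.id
      where su.user_id = 1
        and m.entry_id is null
        and s.entry_id is null
      limit 100;

    Because the "already in stream" check is part of the join, re-running the statement only tops the buffer back up with rows that are genuinely new.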

    Read the article

  • Range partition skip check

    - by user289429
    We have a large amount of data in Oracle, range-partitioned on a year value so that each partition contains data for exactly one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks that the year is what we specified. Since this year column is not part of the index, it fetches the year from the table and compares it, and we have seen that any time the query has to fetch table data it gets too slow. Can we somehow stop Oracle comparing the year values, since we know for sure that each partition contains information for only one year?
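    One sketch of a workaround: append the partition key to the index, so the year predicate is answered from index entries and the table access disappears (table, column and index names here are hypothetical):

      CREATE INDEX idx_orders_cust_year
          ON orders (customer_id, order_year) LOCAL;

      -- Partition pruning still picks the single year partition, and the
      -- filter on order_year is now evaluated against the index alone:
      SELECT /*+ INDEX(o idx_orders_cust_year) */ COUNT(*)
      FROM orders o
      WHERE o.customer_id = 42
        AND o.order_year = 2009;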

    Read the article

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the Freebase query methodology in theory; I'm just having a little trouble getting it to actually work for me :) Does anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data... so for example, pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours, actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it, because my OLTP brain doesn't comprehend what it's seeing.

    Note that the two "types" I'm working with above are siblings, not parent/child. Does Freebase even allow joining data between sibling nodes? I see examples of queries pulling from parent/child, but not from siblings, I don't think (or I've overlooked them).

    Read the article

  • max count with joins

    - by trixet
    I have 3 tables:

      users:
        Id  Login
        1   John
        2   Bill
        3   Jim

      computers:
        Id  Name
        1   Computer1
        2   Computer2
        3   Computer3
        4   Computer4
        5   Computer5

      sessions:
        UserId  ComputerId  Minutes
        1       2           47
        2       1           32
        1       4           15
        2       5           5
        1       2           7
        1       1           40
        2       5           31

    I would like to display this resulting table:

      Login  Total_sess  Total_min  Most_freq_computer  Sess_on_most_freq  Min_on_most_freq
      John   4           109        Computer2           2                  54
      Bill   3           68         Computer5           2                  36
      Jim    -           -          -                   -                  -

    Myself I can only cover the first 3 columns, with:

      SELECT Login, COUNT(sessions.UserId), SUM(Minutes)
      FROM users
      LEFT JOIN sessions ON users.Id = sessions.UserId
      GROUP BY users.Id

    And some of the other columns with:

      SELECT main.*
      FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes)
            FROM sessions
            GROUP BY UserId, ComputerId) AS main
      INNER JOIN (SELECT ComputerId, MAX(cnt) AS maxCnt
                  FROM (SELECT ComputerId, UserId, COUNT(*) AS cnt
                        FROM sessions
                        GROUP BY ComputerId, UserId) AS Counts
                  GROUP BY ComputerId) AS maxes
          ON main.ComputerId = maxes.ComputerId AND main.cnt = maxes.maxCnt

    But I need to get the whole resulting table in one query. I feel I'm doing something completely wrong. Need help.
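    A sketch of one way to stitch the pieces together in a single statement (pre-8.0 MySQL, so no window functions; it assumes each user has a single most-frequent computer, since ties would need an extra tie-breaker):

      SELECT u.Login,
             COUNT(s.UserId)  AS Total_sess,
             SUM(s.Minutes)   AS Total_min,
             c.Name           AS Most_freq_computer,
             mf.cnt           AS Sess_on_most_freq,
             mf.mins          AS Min_on_most_freq
      FROM users u
      LEFT JOIN sessions s ON s.UserId = u.Id
      LEFT JOIN (
          SELECT t.UserId, t.ComputerId, t.cnt, t.mins
          FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes) AS mins
                FROM sessions
                GROUP BY UserId, ComputerId) AS t
          JOIN (SELECT UserId, MAX(cnt) AS maxCnt
                FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt
                      FROM sessions
                      GROUP BY UserId, ComputerId) AS x
                GROUP BY UserId) AS m
              ON m.UserId = t.UserId AND m.maxCnt = t.cnt
      ) AS mf ON mf.UserId = u.Id
      LEFT JOIN computers c ON c.Id = mf.ComputerId
      GROUP BY u.Id, u.Login, c.Name, mf.cnt, mf.mins;

    Note the per-user maximum is grouped by UserId; the second query in the question groups by ComputerId, which finds each computer's most frequent user instead. Users with no sessions, like Jim, come through the LEFT JOINs as NULLs rather than dashes.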

    Read the article

  • Stored Procedure for Multi-Table Insert Error: Cannot Insert the Value Null into Column

    - by SidC
    Good evening all. I've created the following stored procedure:

      CREATE PROCEDURE AddQuote
      -- Add the parameters for the stored procedure here
      AS
      BEGIN
          -- SET NOCOUNT ON added to prevent extra result sets from
          -- interfering with SELECT statements.
          SET NOCOUNT ON;

          Declare @CompanyName nvarchar(50),
                  @Addr nvarchar(50),
                  @City nvarchar(50),
                  @State nvarchar(2),
                  @Zip nvarchar(5),
                  @NeedDate datetime,
                  @PartNumber float,
                  @Qty int

          -- Insert statements for procedure here
          Insert into dbo.Customers (CompanyName, Address, City, State, ZipCode)
          Values (@CompanyName, @Addr, @City, @State, @Zip)

          Insert into dbo.Orders (NeedbyDate)
          Values (@NeedDate)

          Insert into dbo.OrderDetail (fkPartNumber, Qty)
          Values (@PartNumber, @Qty)
      END
      GO

    When I execute AddQuote, I receive an error stating:

      Msg 515, Level 16, State 2, Procedure AddQuote, Line 31
      Cannot insert the value NULL into column 'ID', table 'Diel_inventory.dbo.OrderDetail';
      column does not allow nulls. INSERT fails. The statement has been terminated.

    I understand that I've set the Qty field to not allow nulls and want to continue doing so. However, are there other syntax changes I should make to ensure that this sproc works correctly? Thanks, Sid
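    The error is about OrderDetail.ID, not Qty: the insert supplies no value for a NOT NULL column that has no default and no IDENTITY. A sketch of the usual shape, assuming ID should be auto-assigned and assuming hypothetical CustomerID/OrderID link columns (the question doesn't show the table definitions):

      -- OrderDetail.ID (and similarly Orders.ID) defined as, e.g.:
      --   ID int IDENTITY(1,1) NOT NULL PRIMARY KEY
      DECLARE @CustomerID int, @OrderID int;

      INSERT INTO dbo.Customers (CompanyName, Address, City, State, ZipCode)
      VALUES (@CompanyName, @Addr, @City, @State, @Zip);
      SET @CustomerID = SCOPE_IDENTITY();

      INSERT INTO dbo.Orders (CustomerID, NeedbyDate)
      VALUES (@CustomerID, @NeedDate);
      SET @OrderID = SCOPE_IDENTITY();

      INSERT INTO dbo.OrderDetail (OrderID, fkPartNumber, Qty)
      VALUES (@OrderID, @PartNumber, @Qty);

    Separately, the procedure declares local variables rather than parameters, so every insert currently writes NULLs; those DECLAREs likely belong in the parameter list after CREATE PROCEDURE AddQuote.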

    Read the article

  • Why is index_merge not used here in MySQL?

    - by user198729
    Setup:

      mysql> create table t(a integer unsigned, b integer unsigned);
      mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
      mysql> create index i_t_a on t(a);
      mysql> create index i_t_b on t(b);
      mysql> explain select * from t where a=1 or b=4;
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
      | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
      |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?

    Update:

      mysql> explain select * from t where a=1 or b=4;
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
      | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
      |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL | 1863 | Using where |
      +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Version:

      mysql> select version();
      +----------------------+
      | version()            |
      +----------------------+
      | 5.1.36-community-log |
      +----------------------+
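    The optimizer only picks index_merge union when it estimates both index scans plus the merge to be cheaper than a single pass over the table, which is rarely true for tiny or low-cardinality tables like this one. A sketch of the standard workaround, rewriting the OR as a UNION so each branch is independently indexable:

      SELECT * FROM t WHERE a = 1
      UNION
      SELECT * FROM t WHERE b = 4;

    Running ANALYZE TABLE t first is also worth a try, so the row estimates the plan is based on are current.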

    Read the article

  • LINQ join query

    - by SamB09
    Hi, I'm trying to do a join in LINQ; however, for some reason I can't access the primary key of a table. It's the h.ProjectId that doesn't seem to be accepted. The following error is given:

      CW1.SearchWebService.Bid does not contain a definition for 'ProjectId' and no extension
      method 'ProjectId' accepting a first argument of type 'CW1.SearchWebService.Bid'

      var allProjects = ctxt.Project.ToList();
      var allBids = ctxt.Bid.ToArray(); // return all bids
      var projects = (from project in allProjects
                      join h in allBids on project.ProjectId equals h.ProjectId
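    The compiler is saying the Bid type itself has no ProjectId member, which with a generated service/entity model usually means the foreign key is exposed differently. A sketch of two things to try (the member names are assumptions, to be checked against the generated Bid class):

      // 1. The FK may be exposed through a navigation property:
      var projects = from project in allProjects
                     join h in allBids on project.ProjectId equals h.Project.ProjectId
                     select project;

      // 2. Or the generated column may simply have another name (e.g. Project_Id):
      var projects2 = from project in allProjects
                      join h in allBids on project.ProjectId equals h.Project_Id
                      select project;

    IntelliSense on an instance of Bid (or the generated designer file) shows which member actually carries the key.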

    Read the article

  • SCD2 + Merge Statement + MSSQL

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert/update a Type 2 SCD dimension table. My source is a table variable to merge with the dimension table. My MERGE statement is throwing this error:

      The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on
      either side of a (primary key, foreign key) relationship when the FROM clause contains
      a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint
      'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'.

    My MERGE statement:

      DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION]

      INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
      SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING,
             ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE,
             ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG,
             CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK
      FROM (MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
            USING @DATAERROROBJECT OBJ
                ON (ED.ERROR_CODE = OBJ.ERROR_CODE
                    AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
            WHEN NOT MATCHED THEN
                INSERT VALUES (OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID, OBJ.[ERROR_SEVERITY],
                               OBJ.DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION,
                               OBJ.ERROR_DESCRIPTION, OBJ.VALIDATION_RULE, OBJ.ERROR_TYPE,
                               OBJ.ERROR_CLASS, GETDATE(), '9999-12-13', 'Y', 1, 1)
            WHEN MATCHED AND ED.CURR_FLAG = 'Y'
                 AND (ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY]
                      OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING]
                      OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION]
                      OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION]
                      OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE]
                      OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE]
                      OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS]) THEN
                UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
            OUTPUT $ACTION ACTION_OUT,
                   OBJ.ERROR_CODE ERROR_CODE,
                   OBJ.DATA_STREAM_ID DATA_STREAM_ID,
                   OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY],
                   OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING,
                   OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION,
                   OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION,
                   OBJ.VALIDATION_RULE VALIDATION_RULE,
                   OBJ.ERROR_TYPE ERROR_TYPE,
                   OBJ.ERROR_CLASS ERROR_CLASS,
                   GETDATE() VALID_FROM,
                   '9999-12-31' VALID_TO,
                   'Y' CURR_FLAG,
                   555 CREATED_BY_AUDIT_SK,
                   555 UPDATED_BY_AUDIT_SK
           ) AS MERGE_OUT
      WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

    What am I doing wrong?
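    The error is SQL Server's restriction on composable DML: when INSERT ... SELECT wraps a nested MERGE, the outer insert target may not participate in any foreign-key relationship (here FK_ERROR_DIMENSION_to_AUDIT_CreatedBy). A sketch of the usual workaround, landing the OUTPUT rows in a table variable first, abbreviated to the two key columns with assumed int types (plain OUTPUT ... INTO has no such FK restriction):

      DECLARE @changes TABLE (ACTION_OUT nvarchar(10), ERROR_CODE int, DATA_STREAM_ID int);

      MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
      USING @DATAERROROBJECT OBJ
          ON ED.ERROR_CODE = OBJ.ERROR_CODE
         AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID
      WHEN MATCHED AND ED.CURR_FLAG = 'Y' THEN
          UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
      OUTPUT $action, OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID INTO @changes;

      -- Second pass: insert the fresh current-version rows for everything expired above.
      INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
             (ERROR_CODE, DATA_STREAM_ID, VALID_FROM, VALID_TO, CURR_FLAG,
              CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK)  -- remaining columns as in the original
      SELECT c.ERROR_CODE, c.DATA_STREAM_ID, GETDATE(), '9999-12-31', 'Y', 555, 555
      FROM @changes c
      WHERE c.ACTION_OUT = 'UPDATE';

    The WHEN NOT MATCHED THEN INSERT branch stays inside the MERGE as before; only the "expire and re-insert" half moves out of the composed form.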

    Read the article

  • Is it possible to have a tableless select with multiple rows?

    - by outis
    A SELECT without a FROM clause gets us multiple columns without querying a table:

      SELECT 17+23, REPLACE('bannanna', 'nn', 'n'), RAND(), CURRENT_TIMESTAMP;

    How can we write a query that results in multiple rows without referring to a table? Basically, abuse SELECT to turn it into a data definition statement. The result could have a single column or multiple columns. I'm most interested in a DBMS-neutral answer, but others (e.g. based on UNPIVOT) are welcome. There's no particular application behind this question; it's more theoretical than practical.
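    The most portable sketch is a UNION ALL of single-row tableless selects (on DBMSs that insist on a table, the same idea works with FROM DUAL):

      SELECT 1 AS n, 'one' AS word
      UNION ALL
      SELECT 2, 'two'
      UNION ALL
      SELECT 3, 'three';

    Where the SQL-standard row-constructor syntax is supported, VALUES (1,'one'), (2,'two'), (3,'three') expresses the same thing directly.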

    Read the article

  • Upgraded activerecord-sqlserver-adapter from 2.2.22 to 2.3.8 and now getting an ODBC error

    - by stuartc
    I have been using MSSQL 2005 with Rails for quite a while now, and decided to bump up my gems on one of my projects and ran into a problem. I moved from 2.2.22 to 2.3.8 (latest as of writing) and all of a sudden I got this:

      ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length

    I'm using a DSN connection with FreeTDS, and my database.yml looks like this:

      adapter: sqlserver
      mode: ODBC
      dsn: 'DRIVER=FreeTDS;TDSVER=7.0;SERVER=10.0.0.5;DATABASE=db;Port=1433;UID=user;PWD=pwd;'

    In the meantime I have moved back to 2.2.22, where there are no deprecation warnings and everything seems fine, but obviously for the sake of being up to date: any ideas what could have changed in the adapter that could cause this?

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships:

      - has many Contributions
      - has many Payments

    In my result set, I need the following aggregate values:

      - number of unique contributors (DonorID on the Contribution table)
      - total contributed (SUM of Amount on the Contribution table)
      - total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

      SELECT Project.ID AS PROJECT_ID,
             (SELECT SUM(PaymentAmount) FROM Payment
              WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
             (SELECT COUNT(DISTINCT DonorID) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS ContributorCount,
             (SELECT SUM(Amount) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS TotalReceived
      FROM Project;

    Using a temporary table:

      DROP TABLE IF EXISTS Project_Temp;
      CREATE TEMPORARY TABLE Project_Temp (
          project_id INT NOT NULL,
          total_payments INT,
          total_donors INT,
          total_received INT,
          PRIMARY KEY (project_id)
      ) ENGINE=MEMORY;

      INSERT INTO Project_Temp (project_id, total_payments)
      SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
      FROM `Project`
      LEFT JOIN `Payment` ON ProjectID = `Project`.ID
      GROUP BY 1;

      INSERT INTO Project_Temp (project_id, total_donors, total_received)
      SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
      FROM `Project`
      LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
      GROUP BY 1
      ON DUPLICATE KEY UPDATE
          total_donors = VALUES(total_donors),
          total_received = VALUES(total_received);

      SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
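    A third shape worth benchmarking (a sketch): aggregate each child table once in a derived table and join the results, which avoids both the per-row correlated subqueries and the temp-table bookkeeping:

      SELECT p.ID AS PROJECT_ID,
             IFNULL(pay.TotalPaidBack, 0)    AS TotalPaidBack,
             IFNULL(con.ContributorCount, 0) AS ContributorCount,
             IFNULL(con.TotalReceived, 0)    AS TotalReceived
      FROM Project p
      LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS TotalPaidBack
                 FROM Payment
                 GROUP BY ProjectID) AS pay ON pay.ProjectID = p.ID
      LEFT JOIN (SELECT RecipientID,
                        COUNT(DISTINCT DonorID) AS ContributorCount,
                        SUM(Amount) AS TotalReceived
                 FROM Contribution
                 GROUP BY RecipientID) AS con ON con.RecipientID = p.ID;

    Each child table is scanned exactly once regardless of how many projects there are, and the result is still a plain SELECT you can sort and filter.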

    Read the article

  • MySQL stored procedure WHERE clause

    - by Mneva skoko
    I am having a problem with this stored procedure:

      DELIMITER //
      CREATE PROCEDURE get_employee_by_email(IN eml VARCHAR(50))
      BEGIN
          SELECT * FROM employees WHERE email = eml;
      END//
      DELIMITER ;

    I don't get errors when I run this procedure, but when I call it in my PHP script it returns nothing.
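    A quick way to rule PHP out is to call the procedure from the MySQL client first (the procedure name above was missing in the original post and is hypothetical; the email value is a placeholder):

      CALL get_employee_by_email('someone@example.com');

    If that returns rows but PHP sees nothing, the usual suspects are an unmatched email value being passed, or the client library needing multi-result handling, since a stored procedure call returns an extra status result set (mysqli handles this; the old mysql extension needs the CLIENT_MULTI_RESULTS flag).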

    Read the article

  • LINQ LIKE or other construction

    - by Yauhen Kavalenka
    I have an Oracle DB in my solution, and I want this query to return some results. Query example:

      select * from doctor where doctor.name like '%IVANOV_A%';

    But if I do it in LINQ I cannot get any result:

      from p in repository.Doctor.Where(x => x.Name.ToLower().Contains(name))
      select p;

    where name is a variable holding the string parameter. The web layer may send the string as "Ivanov a" or "A Ivanov", and I'd like the user's choice of pattern to work either way. How can I get a doctor by name if the name consists of a first name and a last name, but the user doesn't know the doctor's full name?
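    A sketch of one approach: split the input on whitespace and add one Where per token, so "Ivanov A" and "A Ivanov" match the same rows (chained Where calls tend to translate better through LINQ providers than an all-tokens predicate; the case handling is an assumption):

      var tokens = name.ToLower().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

      IQueryable<Doctor> query = repository.Doctor;
      foreach (var t in tokens)
      {
          var token = t; // local copy: avoid capturing the loop variable in the closure
          query = query.Where(d => d.Name.ToLower().Contains(token));
      }
      var doctors = query.ToList();

    Each token narrows the result, so the order of first and last name in the input no longer matters.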

    Read the article

  • Delete backup files older than 7 days

    - by rockbala
    Using the osql command, an MSSQL backup of a database is created and saved to disk, then renamed to match the date of the day the backup was taken. All these files are saved in a single folder. For example, Batch1.bat does the following:

      1) Creates backup.bak
      2) Renames it to backup 12-13-2009.bak (done with a combination of %, ~, -, etc. to get the date parameter)

    This is now automated to take a backup every day via Task Scheduler in Windows. Can the batch file also be modified to delete backup files older than 7 days? If so, how? If it is not possible via batch file, are there any alternatives, other than manually deleting the files, to automate the deletion job? Thanks in advance, Balaji S
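    A sketch of one line that can be appended to the same batch file, using forfiles (ships with Windows Server 2003 and later; the folder path and file mask are assumptions):

      forfiles /p "D:\Backups" /m *.bak /d -7 /c "cmd /c del @path"

    /d -7 selects files whose last-modified date is 7 or more days in the past, and the /c command runs once per matching file.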

    Read the article

  • MSSQL stored proc selecting from a field in one table, using LIKE to create more than one column in a DataGrid

    - by djshortbus
    I have an ASP.NET DataGrid, and I'm trying to use a SELECT ... LIKE 'X%' against a table that has one field called LOCATION. I'm trying to display the locations that start with a certain letter (for example wxxx, axxx, fxxx) in different columns of my data grid:

      SELECT DISTINCT LM.LOCATION AS '0 LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
      FROM SOH S WITH (NOLOCK)
      JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
      WHERE LM.COUNTLEVEL = 1
        AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
        AND LM.LOCATION LIKE '0%'

      SELECT DISTINCT LM.LOCATION AS 'A LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
      FROM SOH S WITH (NOLOCK)
      JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
      WHERE LM.COUNTLEVEL = 1
        AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
        AND LM.LOCATION LIKE 'A%'
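    Rather than one SELECT per leading letter, a sketch of a single query that tags each row with its first character, which the grid (or a pivot step in the code-behind) can then split into columns:

      SELECT DISTINCT
             LEFT(LM.LOCATION, 1) AS LOCATION_GROUP,
             LM.LOCATION,
             LM.COUNTLEVEL
      FROM SOH S WITH (NOLOCK)
      JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
      WHERE LM.COUNTLEVEL = 1
        AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
      ORDER BY LOCATION_GROUP, LM.LOCATION;

    One result set with a grouping column is usually easier to bind than N parallel result sets of different lengths.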

    Read the article

  • Repository vs Data Access

    - by vdh_ant
    Hi guys. In the context of an n-tier application, is there a difference between what you would consider your data access classes to be and your repositories? I tend to think yes, but I just wanted to see what others thought. My thinking is that the job of the repository is just to contain and execute the raw query itself, whereas the data access class would create the context, execute the repository (passing in the context), handle mapping the data model to the domain model, and return the result back up... What do you guys think? Also, do you see any of this changing in a LINQ to XML scenario (assuming that you change the context for the relevant XDocument)? Cheers, Anthony
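    For what it's worth, a sketch of the split as described (all type names hypothetical):

      using System.Collections.Generic;
      using System.Linq;

      // Stand-in data-layer and domain types.
      public class UserRecord { public int Id; public string Name; }
      public class User { public int Id; public string Name; }
      public class DataContext : System.IDisposable
      {
          public IQueryable<UserRecord> Users
              => new List<UserRecord>().AsQueryable();
          public void Dispose() { }
      }

      // Repository: contains and executes the raw query, given a context.
      public interface IUserRepository
      {
          IQueryable<UserRecord> Query(DataContext ctx);
      }
      public class UserRepository : IUserRepository
      {
          public IQueryable<UserRecord> Query(DataContext ctx) => ctx.Users;
      }

      // Data access class: owns the context lifetime, runs the repository,
      // and maps data-model rows to domain objects on the way back up.
      public class UserDataAccess
      {
          private readonly IUserRepository _repo;
          public UserDataAccess(IUserRepository repo) { _repo = repo; }

          public IList<User> GetUsers()
          {
              using (var ctx = new DataContext())
              {
                  return _repo.Query(ctx)
                              .Select(r => new User { Id = r.Id, Name = r.Name })
                              .ToList();
              }
          }
      }

    Swapping the context for an XDocument keeps the same shape; only the repository's query expression changes.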

    Read the article
