Search Results

Search found 27691 results on 1108 pages for 'multi select'.


  • is putting N in front of strings in scripts considered a "best practice"?

    - by jcollum
    Let's say I have a table that has a varchar field. If I do an insert like this: INSERT MyTable SELECT N'the string goes here' Is there any fundamental difference between that and: INSERT MyTable SELECT 'the string goes here' My understanding was that you'd only have a problem if the string contained a Unicode character and the target column wasn't Unicode. Other than that, SQL Server deals with it just fine and converts the string with the N'' into a varchar field (basically ignores the N). I was under the impression that N in front of strings was a good practice, but I'm unable to find any discussion of it that I'd consider definitive. Title may need improvement, feel free.
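    A minimal sketch of the behavior described above, using a made-up table with one varchar and one nvarchar column (the table and column names are illustrative, not from the question):

        -- Hypothetical table for illustration only.
        CREATE TABLE MyStrings (v varchar(50), n nvarchar(50));

        -- Both succeed: the N'' literal is implicitly converted to varchar for column v.
        INSERT MyStrings (v) SELECT N'the string goes here';
        INSERT MyStrings (v) SELECT 'the string goes here';

        -- The prefix only matters when the literal holds characters outside the
        -- varchar code page: without N the literal is already varchar, so the
        -- character is lost before it ever reaches the nvarchar column.
        INSERT MyStrings (n) SELECT N'Ω';  -- stored intact
        INSERT MyStrings (n) SELECT 'Ω';   -- may arrive as '?' depending on collation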


  • Which workaround to use for the following SQL deadlock?

    - by Marko
    I found a SQL deadlock scenario in my application during concurrency. I believe that the two statements that cause the deadlock are (note - I'm using LINQ2SQL and DataContext.ExecuteCommand(), that's where this.studioId.ToString() comes into play): exec sp_executesql N'INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId]) SELECT @p0, [t0].[Id] FROM [dbo].[UpdatedRows] AS [t0] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [dbo].[ReceivedUpdatedRows] AS [t1] WHERE ([t1].[StudioId] = @p0) AND ([t1].[UpdatedRowId] = [t0].[Id]) ))',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; and exec sp_executesql N'INSERT INTO HQ.dbo.ReceivedUpdatedRows ([UpdatedRowId], [StudioId], [ReceiveDateTime]) SELECT [t0].[UpdatedRowId], @p0, GETDATE() FROM [dbo].[SynchronizingRows] AS [t0] WHERE ([t0].[StudioId] = @p0)',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; The basic logic of my (client-server) application is this: Every time someone inserts or updates a row on the server side, I also insert a row into the table UpdatedRows, specifying the RowId of the modified row. When a client tries to synchronize data, it first copies all of the rows in the UpdatedRows table that don't have a reference row for the specific client in the table ReceivedUpdatedRows to the table SynchronizingRows (the first statement taking part in the deadlock). Afterwards, during the synchronization I look for modified rows via lookup of the SynchronizingRows table. This step is required, otherwise if someone inserts new rows or modifies rows on the server side during synchronization I will miss them and won't get them during the next synchronization (the explanation scenario is too long to write here...). Once synchronization is complete, I insert rows into the ReceivedUpdatedRows table specifying that this client has received the UpdatedRows contained in the SynchronizingRows table (the second statement taking part in the deadlock). Finally I delete all rows from the SynchronizingRows table that belong to the current client. The way I see it, the deadlock is occurring on tables SynchronizingRows (abbreviation SR) and ReceivedUpdatedRows (abbreviation RUR) during steps 2 and 3 (one client is in step 2 and is inserting into SR and selecting from RUR; while another client is in step 3 inserting into RUR and selecting from SR). I googled a bit about SQL deadlocks and came to the conclusion that I have three options. In order to make a decision I need more input about each option/workaround: Workaround 1: The first advice given on the web about SQL deadlocks - restructure tables/queries so that deadlocks don't happen in the first place. The only problem with this is that with my IQ I don't see a way to do the synchronization logic any differently. If someone wishes to delve deeper into my current synchronization logic, how and why it is set up the way it is, I'll post a link for the explanation. Perhaps, with the help of someone smarter than me, it's possible to create a logic that is deadlock free. Workaround 2: The second most common advice seems to be the use of the WITH(NOLOCK) hint. The problem with this is that NOLOCK might miss or duplicate some rows. Duplication is not a problem, but missing rows is catastrophic! Another option is the WITH(READPAST) hint. On the face of it, this seems to be a perfect solution. I really don't care about rows that other clients are inserting/modifying, because each row belongs only to a specific client, so I may very well skip locked rows.
    But the MSDN documentation makes me a bit worried - "When READPAST is specified, both row-level and page-level locks are skipped". As I said, row-level locks would not be a problem, but page-level locks may very well be, since a page might contain rows that belong to multiple clients (including the current one). While there are lots of blog posts specifically mentioning that NOLOCK might miss rows, there seem to be none about READPAST (never) missing rows. This makes me skeptical and nervous to implement it, since there is no easy way to test it (implementing would be a piece of cake, just pop WITH(READPAST) into both statements' SELECT clauses and job done). Can someone confirm whether the READPAST hint can miss rows? Workaround 3: The final option is to use ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT. This would seem to be the only option that works 100% - at least I can't find any information that would contradict it. But it is a little bit trickier to set up (I don't care much about the performance hit), because I'm using LINQ. Off the top of my head I probably need to manually open a SQL connection and pass it to the LINQ2SQL DataContext, etc... I haven't looked into the specifics very deeply. Mostly I would prefer option 2 if someone could only reassure me that READPAST will never miss rows concerning the current client (as I said before, each client has and only ever deals with its own set of rows). Otherwise I'll likely have to implement option 3, since option 1 is probably impossible... I'll post the table definitions for the three tables as well, just in case: CREATE TABLE [dbo].[UpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED, [RowId] [uniqueidentifier] NOT NULL, [UpdateDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX IX_RowId ON dbo.UpdatedRows ([RowId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[ReceivedUpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY NONCLUSTERED, [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]), [StudioId] [uniqueidentifier] NOT NULL REFERENCES, [ReceiveDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE CLUSTERED INDEX IX_Studios ON dbo.ReceivedUpdatedRows ([StudioId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[SynchronizingRows]( [StudioId] [uniqueidentifier] NOT NULL [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]) PRIMARY KEY CLUSTERED ([StudioId], [UpdatedRowId]) ) ON [PRIMARY] GO PS! Studio = Client. PS2! I just noticed that the index definitions have ALLOW_PAGE_LOCKS=ON. If I were to turn it off, would that make any difference to READPAST? Are there any downsides to turning it off?
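    For reference, a hedged sketch of what workarounds 2 and 3 could look like in T-SQL. The READPAST variant only shows where the hint would go (whether it can skip the current client's rows under page-level locks is exactly the open question above), and the ALTER DATABASE statements are the standard way to enable row versioning, assuming exclusive access to the HQ database while switching it on:

        -- Workaround 2 (sketch): READPAST on the tables being read.
        DECLARE @p0 uniqueidentifier;     -- the client's studio id
        SET @p0 = NEWID();                -- placeholder value for illustration
        INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId])
        SELECT @p0, [t0].[Id]
        FROM [dbo].[UpdatedRows] AS [t0] WITH (READPAST)
        WHERE NOT EXISTS (SELECT NULL
                          FROM [dbo].[ReceivedUpdatedRows] AS [t1] WITH (READPAST)
                          WHERE [t1].[StudioId] = @p0
                            AND [t1].[UpdatedRowId] = [t0].[Id]);

        -- Workaround 3 (sketch): row versioning at the database level.
        ALTER DATABASE HQ SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE HQ SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;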


  • Is it better to use a database or a data structure for a network stack?

    - by poly
    I've built a multi-threaded messaging application in C and I'm currently using a MySQL Memory table to save the session ID, but I'm not sure whether this was a good decision or not. It works like this: the application sends a message and saves the source session ID in the MySQL table. When the application gets the success response it will remove the session's ID from the MySQL table, or if it received an error response then it will keep the ID to be retried later. I've built it this way so that I don't need to care about building a data structure by myself, and the database provides flexibility when it comes to querying it. Do you think this is appropriate, or do I need to use something else? Please note that the application is expected to handle a large number of transactions/sec.


  • group by with 3 different

    - by NN
    I have 2 tables and I want a query with a 3-column result: two of the columns with the view count and the title name, and the other column with type_. I want to group by type_ with max(view count) and show the corresponding title, but I don't have any idea about the grouping expression. I think we can solve it by using a subquery, but I don't know which column to use in the group by. The 2 tables join with this expression: class pk = resource key. I tried this query: SELECT t.title,j.type_ FROM tags asset t,journal article j where type_ in (select type_ from journal article,tags asset where class pk=resource key group by type_) but the answer was wrong
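    A hedged sketch of the usual "row with the maximum value per group" pattern this seems to be asking for, using made-up table and column names rather than the ones above (group the rows by type, find each group's maximum view count, then join back to pick up the matching title):

        -- Hypothetical table for illustration: article(type_, title, view_count).
        SELECT a.type_, a.title, a.view_count
        FROM article a
        JOIN (SELECT type_, MAX(view_count) AS max_views
              FROM article
              GROUP BY type_) m
          ON m.type_ = a.type_
         AND m.max_views = a.view_count;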


  • SAPPHIRE 2012: SAP wants to assert its new image and simplify IT with its tools for Big Data, mobility and the Cloud

    SAPPHIRE 2012: SAP wants to assert its new image and simplify IT with tools for Big Data, mobility and the Cloud. After several years of radical changes that took it from a single-product software vendor to a multi-service provider (BI, DBMS, Cloud, mobility, in-memory computing), SAP is entering a new phase: consolidation. This evolution in continuity shows even in the arrival of new slogans (such as "SAP runs like never before"), inspired by the traditional "Runs Better with SAP", plastered on the walls of SAPPHIRE, the vendor's big annual gathering, which opened its doors today in Madrid. The evolution is...


  • How to combine several movieclips into one scene?

    - by NKelly
    Hi everyone, I'm creating a website that allows kids to design a t-shirt. I will have four sections: colour, graphic, text and print. I have created these sections as demos and they are all working. I now need to properly create them all in one movie clip. I'm having problems with it: when I select the chosen t-shirt colour and move onto the graphic section, the shirt is white again and hasn't come through as blue. It's the same for every section - when I select a graphic it doesn't come through either, and when I click the next button it refreshes the page. Does anyone know how to create this kind of design in one movie clip using different frames, so that the colour etc. transfers onto each new page? PLEASE HELP!!!


  • HTML Dynamic Number of Dropdowns

    - by Evilsithgirl
    I have a form on which I would like to create a dynamic number of dropdowns. I have a list of categorized applications, and I would like each application to have its own dropdown that submits its own data. The dropdown options will be the same for each. Here is my code. I am not sure how to pass the unique data to the server. As you can see, I currently iterate over a list of applications, and I would like each select in that iteration to be its own dropdown. Thanks in advance. <html:form action="/CategorizeApps.do"> <h3>Uncategorized</h3> <br/> Categorize each application using the dropdown menu then click categorize.<br/> <table class="list"> <thead> <tr class="controls"> <td><input type="submit" name="btnAction" value="Categorize"/></td> </tr> <tr class="fields"> <td>ID</td> <td>Name</td> <td></td> </tr> </thead> <tbody> <logic:iterate id="uncat" name="appsUncat" scope="session"> <tr class="hlist"> <td><bean:write property="id" name="uncat" scope="page"/></td> <td><bean:write property="name" name="uncat" scope="page"/></td> <td><select id="category" name="category"> <logic:iterate id="categories" name="Categories" scope="session"> <option value="<bean:write name="categories" property="id" scope="page"/>"><bean:write name="categories" property="name" scope="page"/></option> </logic:iterate> </select></td> </tr> </logic:iterate> </tbody> </table> </html:form>


  • Calculation of derived field in SQL

    - by Matt
    Taking SQL this quarter and not having any luck with the following question: The height of players in feet (inches/12). Include their name. Hint: Calculation or derived field. Be sure to give the calculated field a column header. We're learning the basic SELECT statement and I didn't find any reference on how to make custom data at w3schools. I'm using Microsoft SQL Server Management Express. Here's my statement so far: select nameLast, nameFirst, height from Master where height (I assume it's something like 'Player_Height' = height/12) order by nameLast, nameFirst, height Thanks for the help
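    A hedged sketch of a derived column with an alias, reusing the table and column names from the statement above (the alias name itself is just illustrative, and height is assumed to be stored in inches as the assignment implies):

        SELECT nameLast,
               nameFirst,
               height / 12.0 AS height_in_feet   -- calculated field with a column header
        FROM   Master
        ORDER  BY nameLast, nameFirst;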


  • Performing LINQ Self Join

    - by senfo
    I'm not getting the results I want for a query I'm writing in LINQ using the following: var config = (from ic in repository.Fetch() join oc in repository.Fetch() on ic.Slot equals oc.Slot where ic.Description == "Input" && oc.Description == "Output" select new Config { InputOid = ic.Oid, OutputOid = oc.Oid }).Distinct(); The following SQL returns 53 rows (which is correct), but the above LINQ returns 96 rows: SELECT DISTINCT ic.Oid AS InputOid, oc.Oid AS OutputOid FROM dbo.Config AS ic INNER JOIN dbo.Config AS oc ON ic.Slot = oc.Slot WHERE ic.Description = 'Input' AND oc.Description = 'Output' How would I replicate the above SQL in a LINQ query? Update: I don't think it matters, but I'm working with LINQ to Entities 4.0.


  • Access is re-writing - and breaking - my query!

    - by FrustratedWithFormsDesigner
    I have a query in MS Access (2003) that makes use of a subquery. The subquery part looks like this: ...FROM (SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123") AS q1,... And when I switch to Table View to verify the results, all is OK. Then, I wanted the result of this query to be printed on the page header for a report (the query returns a single row that is page-header stuff). I get an error because the query is suddenly re-written as: ...FROM [SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123"; ] AS q1,... So it's OK that the round brackets are automatically replaced by square brackets - Access feels it needs to do that, fine! But why is it adding the ; into the subquery, which causes it to fail? I suppose I could just create new query objects for these subqueries, but it seems a little silly that I should have to do that.


  • jQuery live 'change' not working in IE6, IE7

    - by fabian
    The code below works as expected in FF but not in IEs... $(document).ready(function() { $('div.facet_dropdown select').live('change', function() { var changed_facet = $(this).attr('id'); var facets = $('select', $(this).closest('form')); var args = window.location.href.split('?')[0] + '?ajax=1'; var clear = false; for(var i = 0; i < facets.length; i++) { var ob = $(facets[i]); var val = ob.val(); if(clear) { val = ''; } args += '&' + ob.attr('id') + '=' + val; if(ob.attr('id') == changed_facet) { clear = true; } } $.getJSON(args, function(json) { for(widget_id in json) { var sel = '#field-' + widget_id + ' div.widget'; $(sel).html(json[widget_id]); } }); }); });


  • Oracle function always returning null

    - by JavaRocky
    I can't get this function to behave as I desire. Can anyone point out why it always returns null instead of CURRENT_TIMESTAMP? CREATE OR REPLACE FUNCTION nowts RETURN TIMESTAMP IS vTimestamp TIMESTAMP; BEGIN SELECT type_date INTO vTimestamp FROM param_table WHERE param_table = 3 AND exists ( SELECT * FROM param_table WHERE param_table = 2 ); IF vTimestamp IS NULL THEN vTimestamp := CURRENT_TIMESTAMP; END IF; return vTimestamp; END nowts; Right now there is nothing in the table named param_table.


  • Ubuntu 12.04 mdadm inactive

    - by user32274
    For a while now, my RAID 5 has ceased to work. Every time I tried "mdadm --detail /dev/md127", it stated all the drives and drive info, but said that two of the drives had been removed. After some restarts, doing the same thing, I am getting "/dev/md127 does not appear to be active". When I go into Disk Utility, I can see all 6 hard drives healthy and present, and I can see the RAID 5 at the bottom under Multi-disk Devices. However, the RAID says 0.0 kB and is not active. Please help and let me know how to proceed from here. I would really like to avoid rebuilding the RAID, especially because all 6 drives seem to be healthy and present. Thanks so much.


  • MySQL: how to ignore a table in a 3-table query if it doesn't satisfy the statement

    - by user165242
    I am trying to have information displayed for this query: SELECT o.sub_number,o.unique_id,o.period_from,o.period_to,o.total_amt,i.paid_amt,i.dated,i.payment,i.paid_by,i.entered_date,i.paid_for_unique,j.cheque_num,j.drawn_on,j.dated AS cheque_dated FROM paid_details o, payment_details i,cheque j WHERE o.unique_id=i.unique_id AND o.unique_id=j.unique_id AND o.sub_number IN(SELECT sub_number FROM paid_details WHERE unique_id LIKE '%1271437707%'); it flops. Well, the problem is that sometimes the cheque table might not have any information in it. So how do I get MySQL to ignore that table and still display the rest of the information? Thanks!
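    If the intent is for the result rows to survive even when there is no matching cheque, the standard technique is an outer join; a hedged sketch of the same query rewritten with explicit joins (the cheque columns simply come back as NULL when absent):

        SELECT o.sub_number, o.unique_id, o.period_from, o.period_to, o.total_amt,
               i.paid_amt, i.dated, i.payment, i.paid_by, i.entered_date, i.paid_for_unique,
               j.cheque_num, j.drawn_on, j.dated AS cheque_dated
        FROM paid_details o
        JOIN payment_details i ON i.unique_id = o.unique_id
        LEFT JOIN cheque j     ON j.unique_id = o.unique_id
        WHERE o.sub_number IN (SELECT sub_number
                               FROM paid_details
                               WHERE unique_id LIKE '%1271437707%');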


  • Single SQL Server Result Set from Query

    - by JamesC
    Hi, please advise on how to merge two results into one using SQL Server 2005. I have a situation where an Account can have up to two Settlement Instructions, and this has been modeled like so - the slimmed-down schema: Account --------------------- Id AccountName PrimarySettlementId (nullable) AlternateSettlementId (nullable) SettlementInstruction ---------------------- Id Name The output I want is a single result set from a select statement along the lines of the following, which will allow me to construct some Java objects in my Spring row mapper: select Account.Id as accountId, Account.AccountName as accountName, s1.Id as primarySettlementId, s1.Name as primarySettlementName, s2.Id as alternateSettlementId, s2.Name as alternateSettlementName I've tried various things but cannot find a way to get the result set merged into one where the primary and alternate FKs are not null. Finally, I have searched the forum, but nothing quite seems to fit what I need.
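    Since both settlement references are nullable, one reading of this is two outer self-references to SettlementInstruction; a hedged sketch matching the desired select list above:

        SELECT a.Id          AS accountId,
               a.AccountName AS accountName,
               s1.Id         AS primarySettlementId,
               s1.Name       AS primarySettlementName,
               s2.Id         AS alternateSettlementId,
               s2.Name       AS alternateSettlementName
        FROM Account a
        LEFT JOIN SettlementInstruction s1 ON s1.Id = a.PrimarySettlementId
        LEFT JOIN SettlementInstruction s2 ON s2.Id = a.AlternateSettlementId;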


  • Format MySQL code inside PHP string

    - by JohnA
    Is there any program (IDE or not) that can format MySQL code inside a PHP string? For example, I use the PhpStorm IDE and it cannot do it - it does that for PHP and MySQL, but not for MySQL inside a PHP string. I am ready to use a new IDE, because right now I have to manually format hundreds of database requests that are on one line and not readable. The only criterion for my choice is that the IDE can do it automatically. <?php ... $request1 = "select * from tbl_admin where admin_id= {$_SESSION['admin_id']} and active= 1 order By admin_id Asc"; ... ?> should become <?php ... $request1 = "SELECT * FROM tbl_admin WHERE admin_id = {$_SESSION['admin_id']} AND active = 1 ORDER BY admin_id ASC"; ... ?>


  • Is this an injection attempt or a normal request?

    - by CheeseConQueso
    In cPanel's Analog Stats statistics module, I've noticed countless requests to connect to the following example: /?x=19&y=15 The numbers are random, but it's always setting x and y variables. Another category of mysterious requests: /?id=http://nic.bupt.edu.cn/media/j1.txt?? There are other attempts at injections in the request log that have straight SQL written into them as well. Example: /jobs/jobinfo.php?id=-999.9 UNION ALL SELECT 1,(SELECT concat(0x7e,0x27,count(table_name),0x27,0x7e) FROM information_schema.tables WHERE table_schema=0x73636363726F6F745F7075626C6963),3,4,5,6,7,8,9,10,11,12,13-- It looks like they are all reaching a 404, but I'm still wondering about the intent behind these. I know this is vague, but maybe someone knows whether this is normal while using cPanel & phpMyAdmin services. Also, there was a search box installed on the site, which could be the reason. Any suggestions as to what all these are?


  • XSLT 2.0 Header Leaks into Transformed XML

    - by user1303797
    First, a thank you in advance. Second, this is my first post, so apologies for any errors or wrongdoings. I am a noob with XML and XSLT, and can't seem to figure this out. When I transform some XML using XSLT 2.0, some of the headers from the XSLT leak into the new XML. It doesn't seem to happen in XSLT 1.0 (granted, the XSLT is a little different). Here is the XML: <?xml version="1.0" encoding="ISO-8859-1" ?> <xml_content> <feed_name>feed</feed_name> <feed_info> <entry_1> <id>1</id> <pub_date>1320814800</pub_date> </entry_1> </feed_info> </xml_content> Here is the XSLT: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/TR/xhtml1/strict"> <xsl:output method="xml" indent="yes" /> <xsl:template match="xml_content"> <Records> <xsl:for-each select="feed_info/entry_1"> <Record> <ID><xsl:value-of select="id" /></ID> <PublicationDate><xsl:value-of select='xs:dateTime("1970-01-01T00:00:00") + xs:integer(pub_date) * xs:dayTimeDuration("PT1S")'/></PublicationDate> </Record> </xsl:for-each> </Records> </xsl:template> </xsl:stylesheet> Here is the new XML. Look specifically at the first "Records" element. <?xml version="1.0" encoding="UTF-8"?> <Records xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/TR/xhtml1/strict"> <Record> <ID>1</ID> <PublicationDate>2011-11-09T05:00:00</PublicationDate> </Record> </Records>


  • Add params before submitting a form (RoR)

    - by Jorge Najera T
    Is it possible to add some parameters before submitting a form? My problem is that I need to send the ticket id to my payment controller. One possibility is to send it through a hidden input field, but is there any other, more secure way to achieve this? Any help will be appreciated. Thanks. The process of buying a ticket: 0) Select the event 1) User selects the kind of ticket he wants to buy 2) User adds his personal information 3) Finally the checkout (payment controller)


  • MySQL: limit column value repetition to N times

    - by Paper-bat
    Hi all, this is my first question here, so be patient ^^ I'll go directly to the problem. I have two tables: Customer (idCustomer, etc.) and Comment (idCustomer, idComment, etc.). Obviously the two tables are joined together, for example: SELECT * FROM Comment AS co JOIN Customer AS cu ON cu.idCustomer = co.idCustomer With this I select all comments from that table associated with their Customer, but now I want to limit the number of comments to at most 2 per Customer. The first thing I tried is 'GROUP BY cu.idCustomer', but it keeps only 1 comment per Customer, and I want 2 comments per Customer. How do I proceed?
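    A hedged sketch of one common MySQL pattern for capping rows per group: keep a comment only if fewer than 2 comments of the same customer come before it (this assumes idComment defines which two comments should win):

        SELECT cu.*, co.*
        FROM Customer cu
        JOIN Comment  co ON co.idCustomer = cu.idCustomer
        WHERE (SELECT COUNT(*)
               FROM Comment c2
               WHERE c2.idCustomer = co.idCustomer
                 AND c2.idComment  < co.idComment) < 2;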


  • How do I stream rows from an MSSQL table into a .NET app in 10000-row chunks?

    - by Gravitas
    I have a table with 300 million rows in Microsoft SQL Server 2008 R2. There is a clustered index on the date column [DataDate], which means that the entire table is ordered by the date column. How do I stream the data out of this table, into my .NET application, in 10000-row chunks? Environment: using C#. I have to be able to pause the data stream at any point, to allow the client to process the rows. Unfortunately, I cannot use a SELECT * FROM, as this would select the entire table (it's 50 GB - it won't fit into memory).
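    A hedged sketch of keyset paging against the clustered date index, which is one way to fetch the table in bounded chunks (the table name is made up, and since [DataDate] is not necessarily unique, a real version would add a unique tie-breaker column to the filter and ORDER BY to avoid skipping rows that share the boundary date):

        -- @lastDate is whatever the client has already processed; start before all data.
        DECLARE @lastDate datetime = '19000101';

        SELECT TOP (10000) *
        FROM   dbo.BigTable              -- hypothetical name for the 300M-row table
        WHERE  [DataDate] > @lastDate
        ORDER  BY [DataDate];

        -- The client processes the chunk, sets @lastDate to the largest [DataDate]
        -- returned, and repeats until fewer than 10000 rows come back.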


  • Oracle 10g multiple DELETE statements

    - by bmw0128
    I'm building a DML file that first deletes records that may be in the table, then inserts records. Example: DELETE from foo where field1='bar'; DELETE from foo where fields1='bazz'; INSERT ALL INTO foo(field1, field2) values ('bar', 'x') INTO foo(field1, field2) values ('bazz', 'y') SELECT * from DUAL; When I run the insert statement by itself, it runs fine. When I run the deletes, only the last delete runs. Also, it seems to be necessary to end the multi-table insert with the SELECT - is that so? If so, why is that necessary? In the past, when using MySQL, I could just list multiple delete and insert statements, each ending with a semicolon, and it would run fine.


  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this by abstracting all possible confidential info: I've been put in charge of a 10-year-old transactional system of which the majority of the business logic is implemented at database level (triggers, stored procedures etc). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters, let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth: the reason is the deadlocks and specifically the deadlock wait times. The deadlock is on the Employee table. I have discovered: in sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask); the select is done on a field that is NOT the primary key; no index exists on this field, thus a table scan is performed - 7 times per transaction. So the reason for deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good, almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations; I'd need to simulate 'realistic' activity on those stations, so visualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements/extra wait time etc) in creating this field index on the production database while the transactions are still taking place? (Although I can select a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (Not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)


  • Eliminate duplicates in SQL query

    - by ewdef
    I have a table with 6 fields. The columns are ID, new_id, price, title, Img, Active. I have data which is duplicated for the price column. When I do a select I want to show only distinct rows where new_id is not the same. e.g.:

        ID   New_ID   Price   Title    Img      Active
        1    1        20.00   PA-1     0X4...   1
        2    1        10.00   PA-10    0X4...   1
        3    3        20.00   PA-11    0X4...   1
        4    4        30.00   PA-5     0X4...   1
        5    9        20.00   PA-99A   0X4...   1
        6    3        50.00   PA-55    0X4...   1

    When the select statement runs, only rows with ID (1,4,9,6) should show. The reason being that, within each new_ID, the row with the higher price should show up. How can I do this?
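    A hedged sketch of keeping, for each New_ID, only the row with the higher Price (the table name is made up; ties between equal prices would need an extra rule):

        SELECT t.*
        FROM Items t                      -- hypothetical table name
        WHERE t.Price = (SELECT MAX(t2.Price)
                         FROM Items t2
                         WHERE t2.New_ID = t.New_ID);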


  • Excluding a specific substring from a regex

    - by Matt S
    I'm attempting to mangle a SQL query via regex. My goal is essentially to grab what is between FROM and ORDER BY, if ORDER BY exists. So, for example, for the query: SELECT * FROM TableA WHERE ColumnA=42 ORDER BY ColumnB it should capture TableA WHERE ColumnA=42, and it should also capture the text when the ORDER BY expression isn't there. The closest I've been able to come is SELECT (.*) FROM (.*)(?=(ORDER BY)) which fails without the ORDER BY. Hopefully I'm missing something obvious. I've been hammering away in Expresso for the past hour trying to get this.

