Search Results

Search found 87891 results on 3516 pages for 'server migration'.

  • MSSQL 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, like one would expect when including an ORDER BY clause, but MS SQL does not support the ORDER BY clause in UPDATE queries. I have checked out this question, which supplied a nice solution, but my query is a bit more complicated than the one specified there.

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA
                                         FROM TableA AS Child
                                         WHERE Child.ParentColB = Parent.ColB
                                         ORDER BY Child.Priority)
        ORDER BY Parent.Depth DESC;

    What I'm hoping you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order, from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concatenates with one child, but that is for the sake of simplicity; the example here does not need any more verbosity), therefore the query must update from the bottom up. The solution suggested in the question I noted above is as follows:

        UPDATE messages
        SET status = 10
        WHERE ID IN (SELECT TOP (10) Id
                     FROM Table
                     WHERE status = 0
                     ORDER BY priority DESC);

    The reason I don't think I can use this solution is that I am referencing column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each other's data. So far I have only found one way to merge that suggested solution with my current problem, and I don't think it works:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA + (SELECT TOP 1 Child.ColA
                                         FROM TableA AS Child
                                         WHERE Child.ParentColB = Parent.ColB
                                         ORDER BY Child.Priority)
        WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC);

    The WHERE..IN subquery will not actually return a subset of the rows; it will just return the full list of IDs in the order that I want. However (I don't know for sure, so please tell me if I'm wrong), I think the WHERE..IN clause will not care about the order of IDs within the parentheses. It will just check the ID of the row it currently wants to update to see if it's in that list (which they all are), in whatever order it was already going to update, which would be a total waste of cycles because it wouldn't change anything.

    So, in conclusion, I have looked around and can't seem to figure out a way to update in a specified order (and I have included the reason I need to update in that order, because I am sure I would otherwise get the ever-so-useful "why?" answers), and I am now asking Stack Overflow whether anyone who knows more about SQL than I do (which isn't saying much) knows an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you as much info to work from as possible. :) Any thoughts?
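
    One pattern sometimes used for this kind of bottom-up update is to drive the UPDATE level by level instead of ordering a single statement. The following is only a minimal sketch, reusing the table and column names from the question; the @CurrentDepth variable is hypothetical, and it trades the single-statement requirement for a short loop.

        DECLARE @CurrentDepth int;
        SELECT @CurrentDepth = MAX(Depth) FROM TableA;   -- start at the deepest level

        WHILE @CurrentDepth >= 0
        BEGIN
            -- All rows at one depth level can be updated together: within a level the order does not
            -- matter, and every deeper level has already been processed.
            UPDATE Parent
            SET Parent.ColA = Parent.ColA + ISNULL((SELECT TOP 1 Child.ColA
                                                    FROM TableA AS Child
                                                    WHERE Child.ParentColB = Parent.ColB
                                                    ORDER BY Child.Priority), '')
            FROM TableA AS Parent
            WHERE Parent.Depth = @CurrentDepth;

            SET @CurrentDepth = @CurrentDepth - 1;
        END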

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that:

    1. are not using anycast
    2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
    3. will respond to ICMP requests
    4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
    5. won't consider an automated request every 10 minutes to be abuse
    6. are accessible from anywhere in the world
    7. are not local to the ISP (consider this an investigation of a hostile party)

    Thanks in advance. Edit: added #6 and #7 above.

    More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these) and refuse to escalate the problem, even though two of these businesses have ISP-provided modems and all of us have completely different routers/services running. I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, to systems one hop away from the affected node, and to systems completely outside the network. Unfortunately, all of the systems I currently have are either administered directly by myself or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable/unbiased control group. These requirements are straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.

    In summary: I need to be able to show TCP/UDP/ICMP as three separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those two nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook/etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA maybe?) set up for this type of testing. Thanks again.

  • TSQL: Variable scope and EXEC()

    - by Joel
        declare @test varchar(20)
        set @test = 'VALUE'
        exec(' select '+@test+' ')

    This returns: Invalid column name 'VALUE'. Is there an alternate method to display the variable value in the select statement?
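
    The error happens because the variable's value is spliced into the dynamic SQL as a bare identifier rather than a string. A minimal sketch of two common workarounds: the first embeds the value as a quoted literal, the second passes it as a parameter to sp_executesql.

        DECLARE @test varchar(20);
        SET @test = 'VALUE';

        -- Option 1: double up the quotes so the value arrives as a string literal
        EXEC('SELECT ''' + @test + '''');

        -- Option 2: pass the value as a parameter, which also avoids quoting/injection problems
        EXEC sp_executesql N'SELECT @val', N'@val varchar(20)', @val = @test;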

  • Dynamic table design (common lookup table), need a nice query to get the values

    - by Swoosh
    sql2005. This is my simplified example (in reality there are 40+ tables in here, I only showed 2). I have a table called tb_modules, with 3 columns (id, description, tablename as varchar):

        1, UserType, tb_usertype
        2, Religion, tb_religion

    (The last column is actually the name of a different table.) I have another table that looks like this:

        tb_value (columns: id, tb_modules_ID, usertype_OR_religion_ID)

    with values:

        1111, 1, 45
        1112, 1, 55
        1113, 2, 123
        1114, 2, 234

    So 45, 55, 123, 234 are usertype or religion IDs (45 and 55 are usertype IDs; 123 and 234 are religion IDs). Don't judge, I didn't design the database.

    Question: how can I make a select showing * from tb_value, plus one column? That one column would be TITLE from the tb_usertype table or RELIGIONNAME from the tb_religion table. I would like to make this general. I was initially thinking about a SQL function that returns a string, but I think I would need dynamic SQL, which is not allowed in a function. Anyone have a better idea?
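
    For just the two modules shown, one approach is a LEFT JOIN per lookup table combined with COALESCE. This is only a sketch: it assumes each lookup table has an id key column, and the TITLE/RELIGIONNAME column names are taken from the question.

        SELECT  v.*,
                COALESCE(ut.TITLE, r.RELIGIONNAME) AS LookupValue
        FROM    tb_value AS v
        LEFT JOIN tb_usertype AS ut
               ON v.tb_modules_ID = 1 AND ut.id = v.usertype_OR_religion_ID
        LEFT JOIN tb_religion AS r
               ON v.tb_modules_ID = 2 AND r.id = v.usertype_OR_religion_ID;

    Extending this to 40+ module tables would mean 40+ LEFT JOINs, or dynamic SQL generated from tb_modules.tablename inside a stored procedure (dynamic SQL is allowed there, unlike in a function).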

  • How to consolidate multiple LOG files into one .LDF file in SQL2000

    - by John Galt
    Here is what sp_helpfile says about my current database (recovery model is Simple) in SQL2000:

        name                     fileid  filename                                size        maxsize    growth      usage
        MasterScratchPad_Data    1       C:\SQLDATA\MasterScratchPad_Data.MDF    6041600 KB  Unlimited  5120000 KB  data only
        MasterScratchPad_Log     2       C:\SQLDATA\MasterScratchPad_Log.LDF     2111304 KB  Unlimited  10%         log only
        MasterScratchPad_X1_Log  3       E:\SQLDATA\MasterScratchPad_X1_Log.LDF  191944 KB   Unlimited  10%         log only

    I'm trying to prepare this for a detach then an attach to a SQL2008 instance, but I don't want to have the 2nd .LDF file (I'd like to have just one file for the log). I have backed up the database. I have issued BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY. I have run multiple DBCC SHRINKFILE commands on both of the LOG files. How can I accomplish this goal of having just one .LDF? I cannot find anything on how to delete the one with fileid of 3 and/or how to consolidate multiple files into one log file.
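
    A minimal sketch of the usual approach (file and database names taken from the question): shrink the secondary log file and then drop it. The REMOVE FILE step only succeeds once no active log records remain in that file, so it may need to be retried after further log backups or truncation.

        USE MasterScratchPad;

        -- Shrink the secondary log file as far as possible
        DBCC SHRINKFILE (MasterScratchPad_X1_Log);

        -- Drop it once it is empty, leaving MasterScratchPad_Log.LDF as the only log file
        ALTER DATABASE MasterScratchPad
        REMOVE FILE MasterScratchPad_X1_Log;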

  • Unable to add Solution to TFS 2010 due to existing (invisible) binding

    - by Refracted Paladin
    I have a smallish utility library that I had created in TFS Beta 2 to test out TFS. I now have TFS RC1 installed (and Beta 2 uninstalled) and am trying to add my solution to TFS. I get an error saying that it is already bound to my old TFS, which was on a different system than this one. Strangely, when I go into Source Control and look at the bindings, it says there aren't any. Also, I manually deleted the .vss and .vsc files and it still does it. I looked through the numerous other SO topics related to this, but unless I missed one, none of them deal with my issue. Ideas?

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title. I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here but I'm not sure about the speed. My .bak file is about 12 gigs at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!

  • Retrieving multiple rows in MS SQL but distinct filtering only on one

    - by Nicklas
    I have this:

        SELECT Product.ProductID, Product.Name, Product.GroupID, Product.GradeID, AVG(tblReview.Grade) AS Grade
        FROM Product
        LEFT JOIN tblReview ON Product.GroupID = tblReview.GroupID
        WHERE (Product.CategoryID = @CategoryID)
        GROUP BY Product.ProductID, Product.Name, Product.GroupID, Product.GradeID

    I would like to return only the rows where Product.Name is unique. If I use SELECT DISTINCT, the ProductID is different on every row, so all the rows are unique. Thanks in advance
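
    A sketch of one way to keep a single row per Product.Name, assuming SQL Server 2005 or later: number the grouped rows per name and keep the first one. Which row survives is arbitrary here (ordered by ProductID); change the ORDER BY inside ROW_NUMBER() if a particular row should win.

        ;WITH Ranked AS
        (
            SELECT  Product.ProductID, Product.Name, Product.GroupID, Product.GradeID,
                    AVG(tblReview.Grade) AS Grade,
                    ROW_NUMBER() OVER (PARTITION BY Product.Name ORDER BY Product.ProductID) AS rn
            FROM    Product
            LEFT JOIN tblReview ON Product.GroupID = tblReview.GroupID
            WHERE   Product.CategoryID = @CategoryID
            GROUP BY Product.ProductID, Product.Name, Product.GroupID, Product.GradeID
        )
        SELECT ProductID, Name, GroupID, GradeID, Grade
        FROM Ranked
        WHERE rn = 1;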

  • TSQL: grouping customer orders by week

    - by fishhead
    I have a table with a collection of orders. The fields are: customerName (text) and DateOfOrder (datetime). I would like to show totals of orders per week per customer, arranged by the Friday of each week, so that it looks like this (all dates follow mm/dd/yyyy):

        "bobs pizza", 3/5/2010, 10
        "the phone co", 3/5/2010, 5
        "bobs pizza", 3/12/2010, 3
        "the phone co", 3/12/2010, 11

    Could somebody please show me how to do this? Thanks
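
    A sketch of one way to bucket rows by a week that closes on Friday (the table name Orders is assumed; only the two columns come from the question). It leans on the fact that 1900-01-05 was a Friday, so integer day arithmetic from that anchor rolls every date forward to its week-ending Friday. If the intended week runs Friday through Thursday instead, drop the "+ 6".

        SELECT  customerName,
                DATEADD(day,
                        ((DATEDIFF(day, '19000105', DateOfOrder) + 6) / 7) * 7,
                        CAST('19000105' AS datetime)) AS WeekEndingFriday,
                COUNT(*) AS TotalOrders
        FROM    Orders
        GROUP BY customerName,
                 DATEADD(day,
                         ((DATEDIFF(day, '19000105', DateOfOrder) + 6) / 7) * 7,
                         CAST('19000105' AS datetime))
        ORDER BY WeekEndingFriday, customerName;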

  • Aggregate Functions and Group By Problems

    - by David Stein
    If we start with the following simple SQL statement, which works:

        SELECT sor.FPARTNO, sum(sor.FUNETPRICE)
        FROM sorels sor
        GROUP BY sor.FPARTNO

    FPARTNO is the part number and FUNETPRICE is obviously the net price. The user also wants the description, and this causes a problem if I follow up with this:

        SELECT sor.FPARTNO, sor.fdesc, sum(sor.FUNETPRICE)
        FROM sorels sor
        GROUP BY sor.FPARTNO, sor.fdesc

    If there are multiple variations of the description for that part number, typically very small variations in the text, then I don't actually aggregate on the part number. Make sense? I'm sure this must be simple. How can I return the first fdesc that corresponds to the part number? Any of the description variations would suffice, as they are almost entirely identical. Edit: the description is a text field.
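
    A minimal sketch of one way to do this: group only by the part number and take MIN() of the description, so each part keeps a single (arbitrary, alphabetically first) description. Because fdesc is a text column, it has to be cast to varchar first, since MIN/MAX cannot be applied to text directly; the length of 8000 is an assumption.

        SELECT  sor.FPARTNO,
                MIN(CAST(sor.fdesc AS varchar(8000))) AS fdesc,
                SUM(sor.FUNETPRICE) AS FUNETPRICE
        FROM    sorels AS sor
        GROUP BY sor.FPARTNO;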

  • Linq to sql Incorrect varchar length

    - by scott
    I have a table with a nullable varchar(50) column in it. When I update the value through LINQ to SQL and trace the call in Profiler, it defines the parameter as varchar(36). This is obviously causing some minor issues when we try to insert data that is between 37 and 50 characters long. I have tried removing the table and re-adding it to the design surface, but the same thing happens. I also tried removing that property and adding it manually; same issue. When I look at the designer.cs code it shows the attribute properly:

        [Column(Storage="_Name", DbType="VarChar(50)")]

    I am out of ideas, has anybody seen this before? Every other column is correct.

  • is there a downside to putting N in front of strings in scripts? Is it considered a "best practice"?

    - by jcollum
    Let's say I have a table that has a varchar field. If I do an insert like this:

        INSERT MyTable SELECT N'the string goes here'

    is there any fundamental difference between that and:

        INSERT MyTable SELECT 'the string goes here'

    My understanding was that you'd only have a problem if the string contained a Unicode character and the target column wasn't Unicode. Other than that, SQL deals with it just fine and converts the string with the N'' into a varchar field (basically ignores the N). I was under the impression that N in front of strings was a good practice, but I'm unable to find any discussion of it that I'd consider definitive. Title may need improvement, feel free.
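
    A small sketch that shows where the prefix matters (the temp table and its contents are hypothetical). The literal is evaluated before the insert, so without N any character outside the varchar code page is already lost even when the target column is nvarchar; with N, the nvarchar column keeps it, while a varchar column in a typical Latin1 collation turns it into '?' either way.

        CREATE TABLE #Demo (plain varchar(50), uni nvarchar(50));

        -- Without N: the literal itself is varchar, so 'Ψ' is replaced before it reaches either column
        INSERT #Demo (plain, uni) VALUES ('abcΨ', 'abcΨ');

        -- With N: the literal is nvarchar, so the nvarchar column preserves 'Ψ'
        INSERT #Demo (plain, uni) VALUES (N'abcΨ', N'abcΨ');

        SELECT plain, uni FROM #Demo;
        DROP TABLE #Demo;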

  • How Can I Generate Random Unique Numbers in C#

    - by peace
        public int GenPurchaseOrderNum()
        {
            Random random = new Random();
            _uniqueNum = random.Next(13287, 21439);
            return UniqueNum;
        }

    I removed the unique constraint from the PONumber column in the db because an employee should only generate a P.O. # when the deal is set; otherwise, the P.O. # should stay 0. P.O. Number used to have a unique constraint, but that forces employees to generate a P.O. in all cases so the db doesn't throw a unique constraint error. Since I removed the unique constraint, any quote that doesn't have a P.O. carries the value 0; otherwise, a unique value is generated for the P.O. #. However, without a unique constraint in the db it is hard for me to know whether the application-generated P.O. # is unique or not. What should I do? I hope my question is clear enough.
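
    If the database is SQL Server 2008 or later, one sketch of a middle ground is a filtered unique index, which allows any number of 0 rows while keeping the generated numbers unique (the Quote table name is an assumption based on the question):

        CREATE UNIQUE NONCLUSTERED INDEX UX_Quote_PONumber
            ON Quote (PONumber)
            WHERE PONumber <> 0;

    With that in place, the application can simply retry GenPurchaseOrderNum() whenever an insert or update hits a duplicate-key error.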

  • change string in mssql to abbreviate

    - by jeff
    How do I return everything in a string from a SQL query before a certain character? My data looks like this:

        HD TV HM45VM - HDTV widescreen television set with 45" lcd

    I want to limit or truncate the string to include everything before the dash, so the final result would be "HD TV HM45VM".
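
    A minimal sketch using CHARINDEX and LEFT (the @s variable is only for illustration); the CASE guards against values that contain no dash, returning them unchanged.

        DECLARE @s varchar(200);
        SET @s = 'HD TV HM45VM - HDTV widescreen television set with 45" lcd';

        SELECT CASE
                   WHEN CHARINDEX(' - ', @s) > 0 THEN LEFT(@s, CHARINDEX(' - ', @s) - 1)
                   ELSE @s
               END AS ShortName;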

  • Efficient paging with large tables in sql 2008

    - by Kumar
    This is for tables with 1,000,000 rows and possibly many, many more. I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I have looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
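
    For reference, a sketch of the two patterns most often compared on SQL Server 2008 (the Orders table and its columns are assumptions). ROW_NUMBER() has to number every row up to the requested page, while the "seek" form filters past the last key of the previous page, which tends to stay cheap even deep into a large table but requires paging by a stable key.

        DECLARE @PageSize int = 50, @PageNumber int = 200, @LastSeenOrderID int = 10000;

        -- Pattern 1: ROW_NUMBER() paging
        ;WITH Numbered AS
        (
            SELECT  OrderID, CustomerID, OrderDate,
                    ROW_NUMBER() OVER (ORDER BY OrderID) AS rn
            FROM    Orders
        )
        SELECT OrderID, CustomerID, OrderDate
        FROM   Numbered
        WHERE  rn BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize;

        -- Pattern 2: "seek" paging from the last key the client saw
        SELECT TOP (@PageSize) OrderID, CustomerID, OrderDate
        FROM   Orders
        WHERE  OrderID > @LastSeenOrderID
        ORDER BY OrderID;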

  • SQL dynamic date but fixed time query

    - by Marko Lombardi
    I am trying to write a SQL query like the example below; however, I need it to always filter the DateEntered field between the current day's date at 8:00am and the current day's date at 4:00pm. I am not sure how to go about this. Can someone please help?

        SELECT OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation,
               COUNT(Operation) AS [Pieces Out of Tolerance]
        FROM Alerts
        WHERE (Mill = 3) AND (DateEntered BETWEEN GetDate '08:00' AND GetDate '16:00')
        GROUP BY OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation
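
    A sketch of one way to build those two boundaries: strip the time from GETDATE() with the day-0 DATEDIFF/DATEADD idiom, then add 8 and 16 hours. Only the table and column names from the question are used.

        DECLARE @TodayStart datetime, @TodayEnd datetime;
        SET @TodayStart = DATEADD(hour,  8, DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0));  -- today 08:00
        SET @TodayEnd   = DATEADD(hour, 16, DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0));  -- today 16:00

        SELECT OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation,
               COUNT(Operation) AS [Pieces Out of Tolerance]
        FROM Alerts
        WHERE Mill = 3
          AND DateEntered BETWEEN @TodayStart AND @TodayEnd
        GROUP BY OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation;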

  • return 0 with sql query instead of nothing

    - by user1202606
    How do I return a 0 as Responses along with the PossibleAnswerText when the count is 0? Right now it won't return anything.

        select COUNT(sr.Id) AS 'Responses', qpa.PossibleAnswerText
        from CaresPlusParticipantSurvey.QuestionPossibleAnswer as qpa
        join CaresPlusParticipantSurvey.SurveyResponse as sr on sr.QuestionPossibleAnswerId = qpa.Id
        where sr.QuestionPossibleAnswerId = 116
        GROUP BY qpa.PossibleAnswerText
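
    A sketch of the usual fix: make the join to SurveyResponse a LEFT JOIN and move the id filter onto the possible-answer table, so an answer with no responses still produces one row whose COUNT(sr.Id) is 0.

        SELECT COUNT(sr.Id) AS Responses,
               qpa.PossibleAnswerText
        FROM   CaresPlusParticipantSurvey.QuestionPossibleAnswer AS qpa
        LEFT JOIN CaresPlusParticipantSurvey.SurveyResponse AS sr
               ON sr.QuestionPossibleAnswerId = qpa.Id
        WHERE  qpa.Id = 116
        GROUP BY qpa.PossibleAnswerText;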

  • In TFS 2010 how do you actually create a "ChangeSet"

    - by Mastro
    I've been reading about changesets in TFS: how you can build from or leave out particular changesets, check a bunch of files into one changeset, and so on. But how do you actually do it? I see "Shelve changes", which I understand, but I don't understand how you create a changeset called "New Feature A" and check in all the files associated with it.

  • INNER JOIN Returns Too Many Results

    - by Alon
    I have the following SQL:

        SELECT *
        FROM [Database].dbo.[TagsPerItem]
        INNER JOIN [Database].dbo.[Tag] ON [Tag].Id = [TagsPerItem].TagId
        WHERE [Tag].Name IN ('home', 'car')

    and it returns:

        Id  TagId  ItemId  ItemTable  Id  Name  SiteId
        1   1      1       Content    1   home  1
        2   1      2       Content    1   home  1
        3   1      3       Content    1   home  1
        4   2      4       Content    2   car   1
        5   2      5       Content    2   car   1
        6   2      12      Content    2   car   1

    instead of just the two records whose names are "home" and "car". How can I fix it? Thanks.
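
    If the goal is one row per matching tag rather than one row per tagged item, a sketch of one option is to select from the Tag table alone and use EXISTS to require that the tag is actually used:

        SELECT t.Id, t.Name, t.SiteId
        FROM   [Database].dbo.[Tag] AS t
        WHERE  t.Name IN ('home', 'car')
          AND  EXISTS (SELECT 1
                       FROM [Database].dbo.[TagsPerItem] AS tpi
                       WHERE tpi.TagId = t.Id);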

  • One or more rows contain values violating non-null, unique, or foreign-key constraints in SQL Script

    - by Musikero31
    Need help on this. I'm just wondering why this error occurred. Below is the script concerned.

        SELECT loc.ID
              ,loc.LocCode
              ,loc.LocName
              ,st.StateName
              ,reg.RegionName
              ,ctry.CountryName
              ,ISNULL(CONVERT(DATE, loc.UpdatedDate), CONVERT(DATE, loc.CreatedDate)) AS [ModifiedDate]
              ,stf.Name AS [ModifiedBy]
        FROM Spkr_Country AS ctry WITH (NOLOCK)
        INNER JOIN Spkr_Location AS loc WITH (NOLOCK) ON ctry.ID = loc.CountryID
        INNER JOIN Spkr_State AS st WITH (NOLOCK) ON loc.StateID = st.ID
        INNER JOIN Spkr_Region AS reg WITH (NOLOCK) ON loc.RegionID = reg.ID
        INNER JOIN Staff AS stf ON ISNULL(loc.UpdatedBy, loc.CreatedBy) = stf.StaffId
        WHERE (loc.IsActive = 1)
          AND ((@LocCode = '') OR (@LocCode <> '' AND loc.LocCode LIKE @LocCode + '%'))
          AND ((@RegionID < 1) OR (@RegionID > 0 AND loc.RegionID = @RegionID))
          AND ((@StateID < 1) OR (@StateID > 0 AND loc.StateID = @StateID))
          AND ((@CountryID < 1) OR (@CountryID > 0 AND loc.CountryID = @CountryID))

    The error probably occurred here:

        INNER JOIN Staff AS stf ON ISNULL(loc.UpdatedBy, loc.CreatedBy) = stf.StaffId

    The requirement I want is that if loc.UpdatedBy is null, the join should use the loc.CreatedBy column instead. However, when I used this, it generated the error mentioned. In the database, loc.CreatedBy is not nullable while loc.UpdatedBy is nullable. When I check it by running the script directly, it works fine. How do I deal with this? What's wrong with my code? Please help.

  • database setup for web application

    - by vbNewbie
    I have an application that requires a database, and I have already set up tables but am not sure they match the requirements of the app. The app is a crawler which fetches web URLs, then crawls and stores appropriate URLs and posts, all based on client requests, which are stored as projects. For each URL stored there is one post; each client has many projects, and each project has many types of requests. So we get a client with a request, assign them a project name, and then use the request to search for content and store the URL and post. A request could already exist and should not be duplicated, but it should be associated with the right client, project, post, etc. Here is my schema now:

        url table:     urlId (PK), queryId (FK), url
        post table:    postId (PK), urlId (FK), post, date
        request table: queryId (PK), request
        client table:  clientId (PK), clientName, projectId (FK)
        project table: projectID (PK), queryID (FK), project

    Does this look right, or does anyone have suggestions? Of course my stored procedures and insert statements will have to be in depth.
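
    To make the proposed structure concrete, here is a DDL sketch of the schema exactly as described; the data types, lengths, and IDENTITY keys are assumptions, and only the tables, columns, and PK/FK relationships come from the question (nvarchar(max) assumes SQL Server 2005 or later).

        CREATE TABLE request (
            queryId   int IDENTITY(1,1) PRIMARY KEY,
            request   nvarchar(400) NOT NULL
        );

        CREATE TABLE project (
            projectID int IDENTITY(1,1) PRIMARY KEY,
            queryID   int NOT NULL REFERENCES request(queryId),
            project   nvarchar(200) NOT NULL
        );

        CREATE TABLE client (
            clientId   int IDENTITY(1,1) PRIMARY KEY,
            clientName nvarchar(200) NOT NULL,
            projectId  int NOT NULL REFERENCES project(projectID)
        );

        CREATE TABLE url (
            urlId   int IDENTITY(1,1) PRIMARY KEY,
            queryId int NOT NULL REFERENCES request(queryId),
            url     nvarchar(2000) NOT NULL
        );

        CREATE TABLE post (
            postId  int IDENTITY(1,1) PRIMARY KEY,
            urlId   int NOT NULL REFERENCES url(urlId),
            post    nvarchar(max) NOT NULL,
            [date]  datetime NOT NULL DEFAULT GETDATE()
        );

    Written out this way, one thing stands out: client references project while project references request, which ties each client to a single project and each project to a single request. That may conflict with "for client there are many projects and for each project there are many types of requests" and is worth revisiting.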
