Search Results


  • SQL exclude a column using SELECT * [except columnA] FROM tableA?

    - by uu?????s
    We all know that to select all columns from a table, we can use SELECT * FROM tableA. Is there a way to exclude column(s) without specifying all the columns, something like SELECT * [except columnA] FROM tableA? The only way I know is to manually list all the columns and leave out the unwanted one. This is really time consuming, so I'm looking for ways to save time and effort on this, as well as on future maintenance should the table gain or lose columns. Thanks!
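
    There is no [except columnA] syntax in T-SQL; a common workaround, sketched below and not part of the original post, is to generate the column list from the sys.columns catalog view and run it as dynamic SQL. tableA and columnA are just the placeholders from the question.

        -- Build a SELECT list that omits one column, then execute it (SQL Server).
        DECLARE @cols nvarchar(max), @sql nvarchar(max)

        SELECT @cols = STUFF((
            SELECT ', ' + QUOTENAME(c.name)
            FROM sys.columns c
            WHERE c.object_id = OBJECT_ID('tableA')
              AND c.name <> 'columnA'
            ORDER BY c.column_id
            FOR XML PATH('')), 1, 2, '')

        SET @sql = 'SELECT ' + @cols + ' FROM tableA'
        EXEC sp_executesql @sql

    The generated list can also simply be copied into the query once, which avoids dynamic SQL at the cost of updating it by hand when the table changes.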


  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a DataSet after a query has completed. I have a database relationship of the following:

            Employees
            /       \
        Groups    EmployeeGroups

    Employees holds all the data for that individual; I'll name its unique key UserID. Groups holds all the groups an employee can be a part of, i.e. Super User, Admin, User, etc.; I will name its unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID). What I am trying to accomplish: after querying for all users, I want to loop through each user and add which groups that user is a part of, by adding a new column to the DataSet named 'Groups' (a string) to hold the values from the next query that gets all the groups the user is a part of. Then, by use of databinding, populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the DataSet.

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);
        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);
        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code will create the new column, but when I assign the data to that column nothing is assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.


  • Maven + SSDM Build and Runtime Environment Automation

    - by Randy
    Preface: My company, like most, has several run-time environments and several release versions, which are themselves composed of different versions of various jars. For example, consider release versions 1.1, 1.2, and 1.3 of Software X, each of which may be deployed to a developer computer, testing, or production. Software-x-1.1 is itself composed of jarA-0.9.1 and jarB-0.7.5, but software-x-1.3 is composed of jarA-1.7.31 and jarB-0.8.1. Currently we use Spring's PropertyPlaceholderConfigurer to configure run-time variables (such as database credentials); however, properties also change with release versions. We also use Maven 2 POM version 4 to specify which versions of our code need to be used. We place the version numbers of our jars as properties within profiles (dev, test, prod) inside the parent pom and then reference those version numbers in all project poms. As of right now, we have no way to specify which project versions pertain to a given release other than the most current one. Moreover, we deploy our run-time configurations to the SSDM pickup, which then configures and creates the services defined by the built versions of our software.

    Questions:

    1. Is there any procedure/tool we can use to build our product by merely providing the run-time environment and version number, i.e. "build 1.1 dev"?
    2. Is there any way we can store the required jar versions for each release build? We are currently versioning all files, including the parent pom, but merely versioning the parent pom does not record which release version pertains to it.
    3. What else can we do to further automate the build process? For example, if we could manage run-time configurations within the parent pom that would be a step in the right direction, but that seems like a violation of scope. Any tool outside of our framework is inconceivable at this point, but not in the far future.

    Summary: How can we automate our build process to the fullest extent without being error prone?


  • Problem adding a WHERE clause to a T-SQL LEFT OUTER JOIN query

    - by Nickson
        SELECT TOP (100) PERCENT
               dbo.EmployeeInfo.id,
               MIN(dbo.EmployeeInfo.EmpNo) AS EmpNo,
               SUM(dbo.LeaveApplications.DaysAuthorised) AS DaysTaken
        FROM dbo.EmployeeInfo
        LEFT OUTER JOIN dbo.LeaveApplications
               ON dbo.EmployeeInfo.id = dbo.LeaveApplications.EmployeeID
        WHERE (YEAR(dbo.LeaveApplications.ApplicationDate) = YEAR(GETDATE()))
        GROUP BY dbo.EmployeeInfo.id, dbo.EmployeeMaster.EmpNo
        ORDER BY DaysTaken DESC

    The basic functionality I want is to retrieve all records in table dbo.EmployeeInfo irrespective of whether a corresponding record exists in table dbo.LeaveApplications. If a row in EmployeeInfo has no related row in LeaveApplications, I want its SUM(dbo.LeaveApplications.DaysAuthorised) AS DaysTaken column returned as NULL, or maybe even 0. With the above query, if I remove the WHERE condition I am able to achieve what I want, but the problem is I also want to return related rows from LeaveApplications only if ApplicationDate is in the current year. With the WHERE condition added, I only get rows from EmployeeInfo if they have corresponding rows in LeaveApplications, yet I wanted all rows in EmployeeInfo.
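
    One common fix, sketched here as a suggestion rather than the poster's own solution: filtering the outer table in the WHERE clause effectively turns the LEFT JOIN back into an INNER JOIN, so the date test can be moved into the ON clause (table and column names as in the question).

        SELECT  e.id,
                MIN(e.EmpNo) AS EmpNo,
                SUM(la.DaysAuthorised) AS DaysTaken    -- stays NULL when no leave rows match; wrap in ISNULL(..., 0) for a zero
        FROM    dbo.EmployeeInfo e
        LEFT OUTER JOIN dbo.LeaveApplications la
                ON  e.id = la.EmployeeID
                AND YEAR(la.ApplicationDate) = YEAR(GETDATE())   -- filter moved from WHERE into the join condition
        GROUP BY e.id
        ORDER BY DaysTaken DESC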


  • Execute SQL Task error

    - by sharadov
    I get the following error when I call a stored proc within an Execute SQL Task in SSIS: "Description: Executing the query "Exec up_CallXXX" failed with the following error: "Incorrect syntax near '13'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly." The stored proc runs fine when I execute it through SSMS. Does anyone have an idea what is going on?


  • Getting max. screen resolution with Group By

    - by Quandary
    Question: I have a website where I gather browser statistics. I have an SQL table (T_Visits) with the following columns:

        uniqueidentifier Visit_UID,
        uniqueidentifier User_UID,
        datetime Visit_DateTime,
        float Screen_w,
        float Screen_h,
        float Resolution = Screen_w * Screen_h,
        varchar resolutionstring = screen_w + ' x ' + screen_h

    Now I want to get the maximum/minimum resolution each user had:

        SELECT User_UID, MAX(Resolution) FROM T_Visits GROUP BY User_UID

    How can I get the corresponding resolution string? I mean, I can get MAX(Screen_w) and MAX(Screen_h), but there's no guarantee that the corresponding resolutionstring would be MAX(Screen_w) + ' x ' + MAX(Screen_h).
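
    One way to do this, offered as a sketch rather than the poster's answer: rank each user's visits by Resolution and keep the top row, so resolutionstring comes from the same row as the maximum (names as in the question).

        WITH ranked AS (
            SELECT User_UID, Resolution, resolutionstring,
                   ROW_NUMBER() OVER (PARTITION BY User_UID
                                      ORDER BY Resolution DESC) AS rn
            FROM T_Visits
        )
        SELECT User_UID, Resolution, resolutionstring
        FROM ranked
        WHERE rn = 1   -- rn = 1 is the largest resolution per user; ORDER BY Resolution ASC gives the smallest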


  • MSSql Query solution cum Suggestion Required

    - by Nirmal
    Hello All... I have the following scenario in my MSSQL 2005 database. The zipcodes table has the following fields and values (just a sample):

        zipcode   latitude   longitude
        -------   --------   ---------
        65201     123.456    456.789
        65203     126.546    444.444

    and the place table has the following fields and values:

        id   name   zip     latitude   longitude
        --   ----   -----   --------   ---------
        1    abc    65201   NULL       NULL
        2    def    65202   NULL       NULL
        3    ghi    65203   NULL       NULL
        4    jkl    65204   NULL       NULL

    Now, my requirement is to compare the zip codes in the place table and fill in its latitude and longitude fields from the zipcodes table where a match exists. Some zip codes have no entry in the zipcodes table, and those should remain NULL. The major issue is that I have more than 50,00,000 records in my db, so the query needs to cope with that volume. I have tried some solutions but unfortunately am not getting the proper output. Any help would be appreciated...
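
    The usual shape of this is an UPDATE ... FROM join; the sketch below is a suggestion, not the poster's code, and uses zipcodes as the lookup table name and the column names shown above. Rows with no matching zip code are never touched, so they keep their NULLs.

        UPDATE p
        SET    p.latitude  = z.latitude,
               p.longitude = z.longitude
        FROM   dbo.place p
        INNER JOIN dbo.zipcodes z
               ON p.zip = z.zipcode
        WHERE  p.latitude IS NULL        -- optional: only touch rows not yet populated, useful when re-running over millions of rows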


  • Is adding a bit mask to all tables in a database useful?

    - by Tom
    A colleague is adding a bit mask to all our database tables. In theory this is so we can track certain properties of each row across the entire system. For example:

    - Is the row shipped with the system or added by the client once they've started using the system?
    - Has the row been deleted from the table (soft delete)?
    - Is the row a default value within a set of rows?

    Is this a good idea? Are there other uses where this approach would be beneficial? My preference is that, since these properties are obviously important, having a dedicated column for each property is justified to make what is happening clearer to fellow developers.
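
    For illustration only (the table, column, and flag values below are hypothetical, not from the post), this is roughly what the two designs being compared look like in queries:

        -- Bit-mask design: one int column packs several yes/no properties.
        -- Hypothetical flags: 1 = shipped with system, 2 = soft-deleted, 4 = default row
        SELECT *
        FROM   dbo.Orders
        WHERE  (RowFlags & 2) = 0        -- exclude soft-deleted rows via bitwise AND

        -- Dedicated-column design: one bit column per property.
        SELECT *
        FROM   dbo.Orders
        WHERE  IsDeleted = 0             -- same filter, self-describing and easier to index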


  • SQL SELECT with time range

    - by nLL
    Hi, I have the click_log table below, logging hits for some urls:

        site   ip          ua   direction   hit_time
        ----   ---------   --   ---------   -------------------
        1      127.0.0.1        1           2010/01/01 00:00:00
        2      127.0.0.1        1           2010/01/01 00:01:00
        3      127.0.0.1        0           2010/01/01 00:10:00
        ...    .........

    I want to select incoming hits (direction: 1), grouped by site, that are:

    - from the same ip and browser,
    - logged within 10 minutes of each other, and
    - occurred more than 4 times in those 10 minutes.

    I'm not sure if the above was clear enough; English is not my first language, so let me try to explain with an example. If site 1 gets 5 hits from the same ip and browser within 10 minutes after getting the first unique hit from that ip and browser, I want it to be included in the selection. Basically I am trying to find abusers.
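
    One possible reading of this, sketched as a suggestion and not the poster's query: anchor on each (site, ip, ua) group's first incoming hit and count the hits that fall in the following 10 minutes; "more than 4 times" is taken as a count of 5 or more, matching the example.

        SELECT  c.site, c.ip, c.ua,
                MIN(c.hit_time) AS first_hit,
                COUNT(*)        AS hits_in_first_10_minutes
        FROM    click_log c
        INNER JOIN (
                -- first incoming hit per site / ip / browser
                SELECT site, ip, ua, MIN(hit_time) AS first_hit
                FROM   click_log
                WHERE  direction = 1
                GROUP BY site, ip, ua
        ) f ON f.site = c.site AND f.ip = c.ip AND f.ua = c.ua
        WHERE   c.direction = 1
          AND   c.hit_time >= f.first_hit
          AND   c.hit_time <  DATEADD(minute, 10, f.first_hit)
        GROUP BY c.site, c.ip, c.ua
        HAVING  COUNT(*) >= 5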


  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit into the countries a certain user chose. The question is what would be the most efficient approach to this problem. One option is to save the user selection as a delimited list like Canada,USA,France... in a single varchar(max) field, but the problem with that is the check itself: when, say, a user from Germany enters a page I perform this check on, searching for Germany would mean fetching all the items and un-delimiting each field to compare against the value, or using SQL 'LIKE', which again is pretty damn slow. If you have a better solution or some tips I would be glad to hear them. Just to make sure: many users will each have their own selection of countries from which (and only from which) they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks
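
    The usual normalized alternative, sketched here with made-up table and column names (UserCountries, UserID, Country are not from the post): store one row per user/country pair so the check becomes an indexed lookup instead of string parsing.

        CREATE TABLE dbo.UserCountries (
            UserID   int         NOT NULL,
            Country  varchar(50) NOT NULL,
            CONSTRAINT PK_UserCountries PRIMARY KEY (UserID, Country)
        )

        CREATE INDEX IX_UserCountries_Country ON dbo.UserCountries (Country)   -- supports lookups by country

        -- "Which users allow visitors from Germany on their page?"
        SELECT uc.UserID
        FROM   dbo.UserCountries uc
        WHERE  uc.Country = 'Germany'    -- index seek instead of un-delimiting or LIKE over varchar(max)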


  • Multiple conditions on select query

    - by stats101
    I have a SELECT statement in which I calculate the cubic volume based on other values within the table. However, I want to check beforehand that none of pr.Length_mm, pr.Width_mm or pr.Height_mm is NULL. I've looked at CASE statements, but they only seem to evaluate one column at a time.

        SELECT sa.OrderName, sa.OrderType, pr.Volume_UOM,
               pr.Length_mm * pr.Width_mm * pr.Height_mm AS Volume_Cubic,
               pr.Length_mm * pr.Width_mm AS Volume_Floor,
               pr.Length_mm, pr.Height_mm, pr.Width_mm
        FROM CostToServe_MCB.staging.Sale sa
        LEFT JOIN staging.Product pr ON sa.ID = pr.ID
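
    A possible sketch, not the poster's solution: a searched CASE can test several columns in a single WHEN, so the calculation only happens when all the dimensions are present (column names as in the question).

        SELECT sa.OrderName, sa.OrderType, pr.Volume_UOM,
               CASE WHEN pr.Length_mm IS NOT NULL
                     AND pr.Width_mm  IS NOT NULL
                     AND pr.Height_mm IS NOT NULL
                    THEN pr.Length_mm * pr.Width_mm * pr.Height_mm
               END AS Volume_Cubic,                  -- NULL whenever any dimension is missing
               CASE WHEN pr.Length_mm IS NOT NULL
                     AND pr.Width_mm  IS NOT NULL
                    THEN pr.Length_mm * pr.Width_mm
               END AS Volume_Floor,
               pr.Length_mm, pr.Height_mm, pr.Width_mm
        FROM CostToServe_MCB.staging.Sale sa
        LEFT JOIN staging.Product pr ON sa.ID = pr.ID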


  • Order by is not working

    - by coure06
        WITH Results AS
        (
            SELECT TOP (100) PERCENT
                   ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS RowNumber,
                   Ad.Date, Title
            FROM Ad
            INNER JOIN Job ON Ad.Id = Job.AdId
            ORDER BY
                CASE WHEN @sortCol = 'Date' AND @sortDir = 'ASC'  THEN Date END ASC,
                CASE WHEN @sortCol = 'Date' AND @sortDir = 'DESC' THEN Date END DESC
        )
        SELECT * FROM Results
        WHERE RowNumber BETWEEN @FirstRow AND @LastRow
        END

    Whatever is passed in @sortDir and @sortCol, it does not work. What am I doing wrong?
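
    A common explanation and fix, sketched as a suggestion: an ORDER BY inside a CTE or view (even with TOP (100) PERCENT) does not guarantee the order of the final result. Moving the sort into the ROW_NUMBER() window and ordering the outer query by RowNumber keeps the paging and the dynamic sort consistent.

        WITH Results AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY
                       CASE WHEN @sortCol = 'Date' AND @sortDir = 'ASC'  THEN Ad.Date END ASC,
                       CASE WHEN @sortCol = 'Date' AND @sortDir = 'DESC' THEN Ad.Date END DESC
                   ) AS RowNumber,
                   Ad.Date, Title
            FROM Ad
            INNER JOIN Job ON Ad.Id = Job.AdId
        )
        SELECT *
        FROM Results
        WHERE RowNumber BETWEEN @FirstRow AND @LastRow
        ORDER BY RowNumber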


  • Bizarre WHERE col = NULL behavior

    - by Kenneth
    This is a problem one of our developers brought to me. He stumbled across an old stored procedure which used 'WHERE col = NULL' several times. When the stored procedure is executed it returns data. If the query inside the stored procedure is executed manually it will not return data unless the 'WHERE col = NULL' references are changed to 'WHERE col IS NULL'. Can anyone explain this behavior?
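
    The usual explanation, illustrated below with a made-up table and procedure (dbo.Demo and dbo.GetNullRows are not from the post): WHERE col = NULL only matches rows when ANSI_NULLS is OFF, and a stored procedure runs with the ANSI_NULLS setting that was in effect when it was created rather than the session's current setting, which is how an old procedure can behave differently from the same query run by hand.

        CREATE TABLE dbo.Demo (col int NULL)
        INSERT INTO dbo.Demo VALUES (NULL)
        GO

        SET ANSI_NULLS OFF       -- captured by the procedure at CREATE time
        GO
        CREATE PROCEDURE dbo.GetNullRows AS
            SELECT * FROM dbo.Demo WHERE col = NULL
        GO
        SET ANSI_NULLS ON
        GO

        EXEC dbo.GetNullRows                       -- returns the row: the proc executes with ANSI_NULLS OFF
        SELECT * FROM dbo.Demo WHERE col = NULL    -- returns nothing under ANSI_NULLS ON
        SELECT * FROM dbo.Demo WHERE col IS NULL   -- returns the row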


  • how to insert based on the date

    - by Gaolai Peng
    I have a table table1 (account, last_contact_date, insert_date); account and last_contact_date together are the primary key. The insert_date is set to the time the record is added, by calling getdate(). I also have a temporary table #temp(account, last_contact_date) which I use to update table1. Here is sample data:

        table1
        account   last_contact_date   insert_date
        1         2012-09-01          2012-09-28
        2         2012-09-01          2012-09-28
        3         2012-09-01          2012-09-28

        #temp
        account   last_contact_date
        1         2012-09-27
        2         2012-09-27
        3         2012-08-01

    The resulting table depends on the date of the insert. If the date is 2012-09-28, the result will be:

        table1
        account   last_contact_date   insert_date
        1         2012-09-27          2012-09-28
        2         2012-09-27          2012-09-28
        3         2012-09-01          2012-09-28

    If the date is 2012-09-29, the result will be:

        table1
        account   last_contact_date   insert_date
        1         2012-09-01          2012-09-28
        2         2012-09-01          2012-09-28
        3         2012-09-01          2012-09-28
        1         2012-09-27          2012-09-29
        2         2012-09-27          2012-09-29

    Basically the rules are: (1) if inserting on the same day as an existing row, I keep the latest last_contact_date on that row; otherwise (2) if the incoming last_contact_date is later than the current last_contact_date, I insert a new row. How do I write a query for this insert?
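
    One way to express those two rules, sketched as a suggestion rather than the poster's solution, is an UPDATE followed by an INSERT (assuming SQL Server 2008+ for the date type used to compare days):

        -- Rule 1: a row already inserted today just gets the newer last_contact_date
        UPDATE t
        SET    t.last_contact_date = s.last_contact_date
        FROM   table1 t
        INNER JOIN #temp s ON s.account = t.account
        WHERE  CONVERT(date, t.insert_date) = CONVERT(date, GETDATE())
          AND  s.last_contact_date > t.last_contact_date

        -- Rule 2: otherwise add a new row when the incoming date is newer than anything on file
        INSERT INTO table1 (account, last_contact_date, insert_date)
        SELECT s.account, s.last_contact_date, GETDATE()
        FROM   #temp s
        WHERE  NOT EXISTS (SELECT 1 FROM table1 t
                           WHERE  t.account = s.account
                             AND  CONVERT(date, t.insert_date) = CONVERT(date, GETDATE()))
          AND  s.last_contact_date > (SELECT MAX(t2.last_contact_date)
                                      FROM   table1 t2
                                      WHERE  t2.account = s.account)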


  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import/export (or backup) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database). Exports: in single-database designs I have seen XML exports and HTML (basic generated sites) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in / restored? Your comments appreciated.


  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-Tree: sequential writes cause a maximum number of tree balances to be performed. I know very few databases use binary trees, but rather use n-order balanced trees, and I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[T1](
            [ID]  [int] NOT NULL,
            [Pad] [char](300) NULL,
            CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        CREATE TABLE [dbo].[T2](
            [ID]  [uniqueidentifier] NOT NULL,
            [Pad] [char](300) NULL,
            CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        DECLARE @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        SET @t1 = GETDATE()
        SET @i = 1
        WHILE @i < 2000
        BEGIN
            INSERT INTO T2 VALUES (NEWID(), @c)
            SET @i = @i + 1
        END
        SET @t2 = GETDATE()

        WAITFOR DELAY '0:0:10'

        SET @t3 = GETDATE()
        SET @i = 1
        WHILE @i < 2000
        BEGIN
            INSERT INTO T1 VALUES (@i, @c)
            SET @i = @i + 1
        END

        SELECT DATEDIFF(ms, @t1, @t2) AS [Int],
               DATEDIFF(ms, @t3, GETDATE()) AS [GUID]

        DROP TABLE T1
        DROP TABLE T2

    Note that I am not subtracting any time for the creation of the GUID nor for the considerably extra size of the row. The results on my machine were as follows:

        Int:  17,340 ms
        GUID:  6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? P.S. I get that this isn't a question. It's an invite to discussion, and that is relevant to learning optimum programming.


  • Insert multiple rows into temp table with one command in SQL2005

    - by Adam Haile
    I've got some data in the following format:

        -1,-1,-1,-1,701,-1,-1,-1,-1,-1,304,390,403,435,438,439,442,455

    I need to insert it into a temp table like this:

        CREATE TABLE #TEMP
        (
            Node int
        )

    so that I can use it in a comparison with data in another table. The data above represents separate rows of the "Node" column. Is there an easy way to insert this data, all in one command? Also, the data will actually be coming in as seen, as a string... so I need to be able to just concat it into the SQL query string. I can obviously modify it first if needed.
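
    One SQL Server 2005-compatible approach, sketched as a suggestion rather than the poster's solution: turn the comma list into an XML fragment and shred it into rows with a single INSERT ... SELECT.

        DECLARE @list varchar(max), @x xml
        SET @list = '-1,-1,-1,-1,701,-1,-1,-1,-1,-1,304,390,403,435,438,439,442,455'
        SET @x = CAST('<n>' + REPLACE(@list, ',', '</n><n>') + '</n>' AS xml)

        INSERT INTO #TEMP (Node)
        SELECT n.value('.', 'int')        -- one row per comma-separated value
        FROM   @x.nodes('/n') AS t(n)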


  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records there would be some attributes with the same domain_code but with different sub_domain_code, which need further pivoting on the subdomain code. For example: if I have domain codes 01, 02, 03, those are a straight pivot on domain code; I would also have domain code 10 with subdomain code / version 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code and the other for domain_code + version. So far so good. When it comes to loading the Dim table: as per the design specs for dimensions (originally written by a third party), what they want is that for every single change in the EDW (attribute), it should assemble all the related records for that NK (meaning the new one plus the other attribute values which are current), process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert/update the dim table. How good is this approach? The other way I tried is to just do a lookup into the dim table for that NK, get the values of the recent record (the attributes which did not change), insert the new row, and update the current one. Which would be the better approach: assembling records at the source side for one attribute change, or looking at the dim table's recent record and processing it? If this doesn't make sense, I would be happy to elaborate further. Thanks


  • Help understand difference in sql query

    - by Anil Namde
        SELECT user_name [User Name], first_name [First Name], last_name [Last Name]
        FROM tab_user
        ORDER BY user_name

        SELECT user_name [User Name], first_name [First Name], last_name [Last Name]
        FROM tab_user
        ORDER BY User Name

    Above are the two queries. Is there any difference because user_name is used in one ORDER BY and User Name in the other? Is there something that should be taken care of or worried about when using something like this?


  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight "delete from y where x = z", and I've also tried running it in batches of 1000 with transactions, but it is still slow and times out in a stored proc. The majority of the records in the table will not be deleted each time (it's not just a once-off query but will be run other times). The ntext columns are not used in the WHERE clause and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks

