Search Results

Search found 24516 results on 981 pages for 'visual c 2008'.


  • Computed column: calculating a value based on a different table

    - by adnan
    I have a table c_const:

        code | nvalue
        --------------
        1    | 10000
        2    | 20000

    and another table t_anytable:

        rec_id | s_id | n_code
        ----------------------
        2      | x    | 1

    Now I want to calculate the x value with a computed column, based on:

        rec_id * (SELECT nvalue FROM c_const WHERE code = n_code)

    but I get the error "Subqueries are not allowed in this context. Only scalar expressions are allowed." How can I calculate the value in this computed column? Thanks.
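
    A computed column definition can't contain a subquery, but it can call a scalar user-defined function that performs the lookup. A minimal sketch, assuming the table and column names above (the function name fn_ConstValue is hypothetical):

        CREATE FUNCTION dbo.fn_ConstValue (@code INT)
        RETURNS INT
        AS
        BEGIN
            -- scalar lookup that a computed column is allowed to call
            RETURN (SELECT nvalue FROM c_const WHERE code = @code);
        END;
        GO

        -- replace s_id with a computed column that multiplies rec_id by the looked-up value
        ALTER TABLE t_anytable DROP COLUMN s_id;
        ALTER TABLE t_anytable ADD s_id AS rec_id * dbo.fn_ConstValue(n_code);

    Because the function does data access, the column can't be PERSISTED or indexed, and the UDF runs once per row, so this can be slow on large scans.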

    Read the article

  • Adding more OR searches with CONTAINS brings query to a crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slow when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 columns in one, 21 in the other; maybe 10 columns are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to solve the performance problem by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan (guess I need more reputation before I can post the image):
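
    For reference, a minimal sketch of the UNION rewrite described above (same tables and predicates; the exact shape of each branch is an assumption):

        SELECT a.CollectionID
        FROM collections a
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT b.CollectionID
        FROM determinations b
        WHERE CONTAINS(b.*, '"*fa*"');

    Each branch can use its own full-text index in isolation, and UNION removes duplicates, which is why this tends to plan much better than ORed CONTAINS predicates spanning two tables.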

    Read the article

  • T-SQL help with recursive sort of query

    - by stackoverflowuser
    Hi, based on the following table:

        ID  Path
        ---------------------------------------
        1   \\Root
        2   \\Root\Node0
        3   \\Root\Node0\Node1
        4   \\Root\Node0\Node2
        5   \\Root\Node3
        6   \\Root\Node3\Node4
        7   \\Root\Node5
        ...
        N   \\Root\Node5\Node6\Node7\Node8\Node9\Node10

    and so on. There are around 1000 rows in this table. I want to display each node in a separate column, to a maximum depth of 5 levels. The output should look like this:

        ID  Path                                          Level 0  Level 1  Level 2  Level 3  Level 4  Level 5
        ------------------------------------------------------------------------------------------------------
        1   \\Root                                        Root     Null     Null     Null     Null     Null
        2   \\Root\Node0                                  Root     Node0    Null     Null     Null     Null
        3   \\Root\Node0\Node1                            Root     Node0    Node1    Null     Null     Null
        4   \\Root\Node0\Node2                            Root     Node0    Node2    Null     Null     Null
        5   \\Root\Node3                                  Root     Node3    Null     Null     Null     Null
        6   \\Root\Node3\Node4                            Root     Node3    Node4    Null     Null     Null
        7   \\Root\Node5                                  Root     Node5    Null     Null     Null     Null
        N   \\Root\Node5\Node6\Node7\Node8\Node9\Node10   Root     Node5    Node6    Node7    Node8    Node9

    The only way I can think of is to open a cursor, loop through each row, split the string, fetch the first 5 nodes, and insert them into a temp table. Please suggest an alternative. Thanks.
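
    A set-based alternative, sketched under the assumption that the table is named PathTable and that node names contain no XML-special characters: convert each path into an XML fragment and pull out the first six elements.

        ;WITH Parsed AS (
            SELECT ID, Path,
                   -- strip the leading \\ and turn the remaining \-separated parts into <n> elements
                   CAST('<n>' + REPLACE(SUBSTRING(Path, 3, LEN(Path)), '\', '</n><n>') + '</n>' AS XML) AS px
            FROM PathTable
        )
        SELECT ID, Path,
               px.value('(/n)[1]', 'nvarchar(100)') AS [Level 0],
               px.value('(/n)[2]', 'nvarchar(100)') AS [Level 1],
               px.value('(/n)[3]', 'nvarchar(100)') AS [Level 2],
               px.value('(/n)[4]', 'nvarchar(100)') AS [Level 3],
               px.value('(/n)[5]', 'nvarchar(100)') AS [Level 4],
               px.value('(/n)[6]', 'nvarchar(100)') AS [Level 5]
        FROM Parsed;

    Missing levels come back as NULL automatically, so no cursor or temp table is needed.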

    Read the article

  • Store latitudes and longitudes in database for proximity/radius search using Google Maps API, .NET a

    - by poojad
    What is the best approach for storing the latitudes and longitudes of multiple addresses as a one-time setup? I need to find nearby stores using Google Maps, so I have to get the latitudes and longitudes of all the available stores. Since the data is large and may grow or change in the future, can anyone suggest an approach that takes performance and maintenance into consideration? Thank you.
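
    One option in SQL Server 2008 is to geocode each address once (e.g. via the Google Maps API) and store the result in a geography column, which can then be spatially indexed for radius searches. A minimal sketch; the table, column, and parameter names are illustrative:

        CREATE TABLE Stores (
            StoreID   INT IDENTITY PRIMARY KEY,
            Name      NVARCHAR(100) NOT NULL,
            Latitude  DECIMAL(9,6)  NOT NULL,
            Longitude DECIMAL(9,6)  NOT NULL,
            GeoPoint  GEOGRAPHY     NULL
        );

        -- populate GeoPoint once per store from the geocoded coordinates
        UPDATE Stores
        SET GeoPoint = geography::Point(Latitude, Longitude, 4326);

        -- index the point column so radius queries stay fast as the data grows
        CREATE SPATIAL INDEX SIdx_Stores_GeoPoint ON Stores (GeoPoint);

        -- find stores within @radiusMeters of a given point
        DECLARE @lat FLOAT = 40.7128, @lng FLOAT = -74.0060, @radiusMeters FLOAT = 5000;
        SELECT StoreID, Name
        FROM Stores
        WHERE GeoPoint.STDistance(geography::Point(@lat, @lng, 4326)) <= @radiusMeters;

    Re-geocoding is then only needed when an address changes, which keeps maintenance to a minimum.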

    Read the article

  • Can't install .NET application on client's PC

    - by Niraj Doshi
    Hello all, my client's PC runs Windows 7 Ultimate with .NET Framework 4 Client Profile. I am unable to install my application, which was developed in VS2008. I tried uninstalling .NET Framework 4 from his PC and running the clean-up tool provided by Microsoft, but I am still unable to install it; it gives Error 1001. I tried running the program as administrator. I also tried turning on the .NET 3.5 feature from Add or Remove Programs. Thanks in advance. :)

    Edit: The error I get is shown here. Furthermore, I have confirmed that it is a 32-bit processor, and I run the x86 release version of the setup. The application was developed on Windows 7 with .NET Framework 3.5. I have installed this application on 7 PCs that have .NET 3.5 installed, running Windows XP, Vista, and Windows 7, and all are working fine. On the client's PC, when I try to install .NET 3.5 again, the installer starts but then disappears suddenly without doing anything. I have tried turning on the .NET 3.5 feature from Control Panel > Programs and Features, running the program as administrator, and setting the application setup to Windows XP and Vista compatibility mode. But still the issue persists. Thanks :)

    Read the article

  • Would like help with LOGON Trigger

    - by Risho
    I've created a logon trigger in MS SQL that needs to check dm_exec_sessions for a login. This login is the user listed in the connection string and has owner rights to the database. If the login is verified, I need the trigger to update a specific table and send an email. So far I've written just the following piece, and it disabled my web site. The error I get is: "Logon failed for login 'dev' due to trigger execution. Changed database context to 'mydatabase'. Changed language setting to us_english." Any idea what I did wrong? Thanks, Risho

        CREATE TRIGGER TDY_Assets_Notification
        ON ALL SERVER
        WITH EXECUTE AS 'dev'
        FOR LOGON
        AS
        BEGIN
            IF ORIGINAL_LOGIN() = 'dev'
               AND (SELECT COUNT(*) FROM sys.dm_exec_sessions
                    WHERE is_user_process = 1 AND original_login_name = 'dev') > 1
                UPDATE Assets_TDY
                SET Suspense = 1, Warning = 1
                WHERE (Date_Returned IS NULL)
                  AND (GETDATE() >= DATEADD(day, 3, Date_Return))
        END
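
    Two things commonly break a trigger like this: a logon trigger executes in the context of master, so the unqualified reference to Assets_TDY likely fails with an invalid-object error, and any unhandled error inside a logon trigger denies the logon, which is what locked the site out. Also, under EXECUTE AS 'dev', seeing other sessions in sys.dm_exec_sessions requires VIEW SERVER STATE. A hedged sketch of a safer version (the database name and the GRANT are assumptions about this environment):

        -- let the impersonated login see all sessions, not just its own
        GRANT VIEW SERVER STATE TO dev;
        GO
        CREATE TRIGGER TDY_Assets_Notification
        ON ALL SERVER
        WITH EXECUTE AS 'dev'
        FOR LOGON
        AS
        BEGIN
            BEGIN TRY
                IF ORIGINAL_LOGIN() = 'dev'
                   AND (SELECT COUNT(*) FROM sys.dm_exec_sessions
                        WHERE is_user_process = 1
                          AND original_login_name = 'dev') > 1
                    -- fully qualify the table: logon triggers run in master
                    UPDATE mydatabase.dbo.Assets_TDY
                    SET Suspense = 1, Warning = 1
                    WHERE Date_Returned IS NULL
                      AND GETDATE() >= DATEADD(day, 3, Date_Return);
            END TRY
            BEGIN CATCH
                -- swallow the error: an unhandled error here blocks every logon
                DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
            END CATCH
        END;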

    Read the article

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application, containing a WCF service with a console host and a client, both on a single USB device. On the same device I will also have the client application which will connect to this service. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy, and they work well from USB, but getting the service to connect to the database doesn't. I've found this site, which suggests that I should modify machine.config, but that kills the XCopy deployment. This project cannot change any setting of the PC, so this suggestion is no good. I cannot create a deployment setup either; the whole thing just needs to run from the USB disk. So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)

    Read the article

  • How can I join these 2 queries? (A SELECT query with a join and an UNPIVOT query)

    - by MANG KANOR
    Here are my two queries:

        SELECT EWND.Position,
               NKey = CASE WHEN ISNULL(Translation.Name, '') = '' THEN EWND.Name ELSE Translation.Name END,
               Unit = EW_N_DEF.Units
        FROM EWND
        INNER JOIN EW_N_DEF ON EW_N_DEF.Nutr_No = EWND.Nutr_No
        LEFT JOIN Translation ON Translation.CodeMain = EWND.Nutr_no
        WHERE Translation.CodeTrans = 1
        ORDER BY EWND.Position

    And this is the UNPIVOT one:

        SELECT *
        FROM (
            SELECT N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15, N16, N17,
                   N18, N19, N20, N21, N22, N23, N24, N25, N26, N27, N28, N29, N30, N31, N32, N33, N34
            FROM EWNVal
            WHERE Code = 6035
        ) Test
        UNPIVOT (
            Value FOR NUTCODE IN (N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15,
                                  N16, N17, N18, N19, N20, N21, N22, N23, N24, N25, N26, N27, N28,
                                  N29, N30, N31, N32, N33, N34)
        ) AS test

    Both queries put out the same number of rows but not columns. Is it possible to join these two? I tried UNION, but it has problems that I can't solve. Thanks in advance!
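
    A join needs a common key. If EWND.Position corresponds to the number in the N-column name (an assumption), each query can be wrapped as a derived table and joined, as in this sketch:

        SELECT q1.NKey, q1.Unit, q2.Value
        FROM (
            SELECT EWND.Position,
                   NKey = CASE WHEN ISNULL(Translation.Name, '') = '' THEN EWND.Name ELSE Translation.Name END,
                   Unit = EW_N_DEF.Units
            FROM EWND
            INNER JOIN EW_N_DEF ON EW_N_DEF.Nutr_No = EWND.Nutr_No
            LEFT JOIN Translation ON Translation.CodeMain = EWND.Nutr_no
            WHERE Translation.CodeTrans = 1
        ) q1
        INNER JOIN (
            SELECT NUTCODE, Value
            FROM (
                SELECT N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15, N16, N17,
                       N18, N19, N20, N21, N22, N23, N24, N25, N26, N27, N28, N29, N30, N31, N32, N33, N34
                FROM EWNVal
                WHERE Code = 6035
            ) Test
            UNPIVOT (
                Value FOR NUTCODE IN (N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N11, N12, N13, N14, N15,
                                      N16, N17, N18, N19, N20, N21, N22, N23, N24, N25, N26, N27, N28,
                                      N29, N30, N31, N32, N33, N34)
            ) AS u
        ) q2
            ON q2.NUTCODE = 'N' + CAST(q1.Position AS VARCHAR(10))
        ORDER BY q1.Position;

    If Position doesn't map to the N-number, substitute whatever column does (e.g. Nutr_No) in the ON clause.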

    Read the article

  • Copy a Table's data from a Stored Procedure

    - by Niike2
    I am learning how to use SQL and stored procedures. I know the syntax below is incorrect: I want to copy data from one table into a table on another database with a stored procedure. The problem is that I don't know in advance which table or which database to copy to, so I want the procedure to use parameters rather than specifying the columns explicitly. I have 2 databases (Master_db and Master_copy) with the same table structure in each. I want to quickly select a table in Master_db and copy that table's data into the Master_copy table with the same name. I have come up with something like this:

        USE Master_DB

        CREATE PROCEDURE TransferData
            DEFINE @tableFrom, @tableTo, @databaseTo;
            INSERT INTO @databaseTo.dbo.@databaseTo
            SELECT * FROM Master_DB.dbo.@tableFrom
        GO;
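
    Object names can't be parameterized directly in T-SQL; the usual approach is dynamic SQL built with QUOTENAME and run through sp_executesql. A sketch under the assumptions above (the Customers table in the example call is hypothetical):

        CREATE PROCEDURE TransferData
            @tableFrom  SYSNAME,
            @tableTo    SYSNAME,
            @databaseTo SYSNAME
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX);
            -- QUOTENAME guards against injection through the name parameters
            SET @sql = N'INSERT INTO ' + QUOTENAME(@databaseTo) + N'.dbo.' + QUOTENAME(@tableTo)
                     + N' SELECT * FROM Master_db.dbo.' + QUOTENAME(@tableFrom) + N';';
            EXEC sp_executesql @sql;
        END
        GO

        -- example: copy Customers to the same-named table in Master_copy
        EXEC TransferData @tableFrom = 'Customers', @tableTo = 'Customers', @databaseTo = 'Master_copy';

    INSERT ... SELECT * relies on both tables having identical column order, which holds here since the two databases share the same schema.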

    Read the article

  • SQL Server Composite Primary Keys

    - by Colin
    I am attempting to replace all records for a given day in a certain table. The table has a composite primary key comprised of 7 fields, one of which is date. I have deleted all records which have a date value of 2/8/2010. When I try to then insert records into the table for 2/8/2010, I get a primary key violation. The records I am attempting to insert are only for 2/8/2010. Since date is a component of the PK, shouldn't it be impossible to violate the constraint as long as the date I'm inserting is not already in the table? Thanks in advance.
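
    Not quite: the key only guarantees uniqueness across all 7 columns together, so duplicates within the batch being inserted will still violate it, as will datetime values whose time portion caused the earlier DELETE to miss rows. A quick diagnostic sketch (table and column names are placeholders):

        -- find duplicates inside the data being inserted
        SELECT Col1, Col2, Col3, Col4, Col5, Col6, DateCol, COUNT(*) AS Cnt
        FROM StagingTable
        GROUP BY Col1, Col2, Col3, Col4, Col5, Col6, DateCol
        HAVING COUNT(*) > 1;

        -- check for leftover rows whose time-of-day kept them out of the delete
        SELECT *
        FROM TargetTable
        WHERE DateCol >= '2010-02-08' AND DateCol < '2010-02-09';

    If either query returns rows, that explains the violation.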

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, with the following IO statistics:

        SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
        FROM dbo.TRANS AS T
        JOIN dbo.TRANS AS T2
            ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
        JOIN TASK_REQUESTS AS T3
            ON T3.CD = T.CD AND T3.RGN = T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
        GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

        (4447 row(s) affected)
        Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms.

    If I instead introduce a temporary table, the query returns quickly and performs far fewer logical reads:

        CREATE TABLE #MyTable (
            RGN VARCHAR(20) NOT NULL,
            CD  VARCHAR(20) NOT NULL,
            PRIMARY KEY ([RGN], [CD])
        );

        INSERT INTO #MyTable (RGN, CD)
        SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK = 'UPDATE_MEM_BAL';

        SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
        FROM dbo.TRANS AS T
        JOIN dbo.TRANS AS T2
            ON T2.RGN = T.RGN AND T2.CD = T.CD AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
        JOIN #MyTable AS T3
            ON T3.CD = T.CD AND T3.RGN = T.RGN
        GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

        (4447 row(s) affected)
        Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms.

    The interesting thing for me is that TASK_REQUESTS is a small table (3 rows at present) and its statistics are up to date. Any idea why such different execution plans and execution times would be occurring? And ideally, how can I change things so that I don't need the temp table to get decent performance? The only real difference between the execution plans is that the temp-table version introduces an index spool (eager spool) operation.

    Read the article

  • How to synchronize two (or n) replication processes for MS SQL databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map an entity across both read-only databases; let's say that database A contains orders and database B contains order lines. The problem is that replication to one database can lag behind replication to the other, so at the moment of mapping the read-only databases may hold inconsistent data. For example: we store 2 orders with lines at 19:00 and 19:03, and the mapping process starts at 19:05, but by that moment replication has processed all changes up to 19:03 in database A and only up to 19:00 in database B. After mapping, we would have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed. :) In my particular case both databases have a temporal model, so it is possible to fetch data for any time slice; the problem is identifying the time of the latest replicated change. Question: how do I synchronize the replication processes for several databases to avoid the situation described above?

    Read the article

  • Background worker in ASP.NET

    - by vbNewbie
    I am migrating my WinForms crawler app to an ASP.NET web app and would like to know how to implement the background worker thread that I currently use for very long searches. Another posting mentioned asynchronous pages, but I am not sure whether that would work here or how to apply it. The search function can sometimes run for a few days, and I would like the user to still be able to perform other functions while it runs. Can this happen?

    Read the article

  • How to find the end of a quarter given a date in the quarter

    - by Ramy
    If I'm given a date (say @d = '11-25-2010'), how can I determine the end of the quarter containing that date? I'd like a timestamp one second before midnight. I can get the quarter start:

        SELECT DATEADD(qq, DATEDIFF(qq, 0, GETDATE()), 0) AS quarterStart

    which gives me '10-1-2010', and I use this for one second before midnight of a given day:

        SELECT DATEADD(second, -1, DATEADD(day, DATEDIFF(day, 0, @d) + 1, 0)) AS DayEnd

    In the end, a quarterEnd expression would give me '12-31-2010 23:59:59'.
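
    Combining the two patterns above: jump to the start of the next quarter, then step back one second. A minimal sketch:

        DECLARE @d DATETIME = '2010-11-25';

        SELECT DATEADD(second, -1, DATEADD(qq, DATEDIFF(qq, 0, @d) + 1, 0)) AS quarterEnd;
        -- returns 2010-12-31 23:59:59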

    Read the article

  • Chaining Many-To-Many Dimensional Relationships in SSAS

    - by Ray Saltrelli
    I'm developing a cube in SSAS and attempting to model the following relationships:

        - Many facts to 1 customer
        - Many customers to many sales reps
        - Many sales reps (subordinates) to sales reps (managers)

    Each M2M relationship is facilitated by a bridge table which also acts as a fact table in the cube. I have most of this working: I can slice facts by customer and by sales rep (subordinate), but when I add sales rep (manager) to the query, it appears to return every subordinate/manager combination regardless of whether or not that relationship exists in the bridge table. Any ideas as to what I might be doing wrong?

    Read the article

  • How to store data to Excel from DataSet without going cell-by-cell?

    - by Jason Barnwell
    Duplicate of: What's the simplest way to import a System.Data.DataSet into Excel? Using C# under VS2008, we can create an Excel app, workbook, and then worksheet fine by doing this:

        Application excelApp = new Application();
        Workbook excelWb = excelApp.Workbooks.Add(template);
        Worksheet excelWs = (Worksheet)excelApp.ActiveSheet;

    Then we can access each cell via excelWs.Cells[i, j] and write/save without problems. However, with large numbers of rows/columns, we are expecting a loss in efficiency. Is there a way to "data bind" from a DataSet object into the worksheet without using the cell-by-cell approach? Most of the methods we have seen at some point revert to the cell-by-cell approach. Thanks for any suggestions.

    Read the article

  • SQL Database dilemma: Optimize for Querying or Writing?

    - by Harry
    I'm working on a personal project (a search engine) and have a bit of a dilemma. At the moment it is optimized for writing data to the search index and is significantly slow for search queries. The DTA (Database Engine Tuning Advisor) recommends adding a couple of indexed views in order to speed up search queries, but this is to the detriment of writing new data to the DB. It seems I can't have one without the other! This is obviously not a new problem. What is a good strategy for this issue?

    Read the article

  • How should I do this (business logic) in SQL Server? A constraint?

    - by Pure.Krome
    Hi folks, I wish to add some kind of business-logic constraint to a table, but I'm not sure how or where. I have a table with the following fields:

        ID INTEGER IDENTITY
        HubId INTEGER
        CategoryId INTEGER
        IsFeatured BIT
        Foo NVARCHAR(200)
        etc.

    What I want is that you can only have one featured row per HubId + CategoryId. E.g.:

        1, 1, 1, 1, 'blah'      -- OK
        2, 1, 2, 1, 'more blah' -- also OK
        3, 1, 1, 1, 'aaa'       -- constraint error
        4, 1, 1, 0, 'asdasdad'  -- OK
        5, 1, 1, 0, 'bbbb'      -- OK

    So the third row to be inserted would fail because that hub and category already have a featured row. Is this possible?
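
    In SQL Server 2008 this can be enforced declaratively with a filtered unique index. A sketch assuming the column names above (the table name MyTable is a placeholder):

        CREATE UNIQUE NONCLUSTERED INDEX UX_MyTable_OneFeatured
        ON MyTable (HubId, CategoryId)
        WHERE IsFeatured = 1;

    Any number of non-featured rows may share a HubId/CategoryId pair, but a second row with IsFeatured = 1 for the same pair fails with a duplicate-key error.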

    Read the article

  • Profiling help required

    - by Mick
    I have a profiling issue. Imagine I have the following code:

        void main()
        {
            well_written_function();
            badly_written_function();
        }

        void well_written_function()
        {
            for (a small number)
            {
                highly_optimised_subroutine();
            }
        }

        void badly_written_function()
        {
            for (a wastefully and unnecessarily large number)
            {
                highly_optimised_subroutine();
            }
        }

        void highly_optimised_subroutine()
        {
            // lots of code
        }

    If I run this under VTune (or other profilers) it is very hard to spot that anything is wrong. All the hotspots appear in the section marked "// lots of code", which is already optimised. badly_written_function() is not highlighted in any way, even though it is the cause of all the trouble. Is there some feature of VTune that will help me find the problem? Is there some sort of mode whereby I can find the time taken by badly_written_function() and all of its sub-functions?

    Read the article

  • MS Sync framework - Identity crisis resolution by partitioning the primary key.

    - by user326136
    Hello, we are implementing an offline feature for an existing application. We have implemented sync with SQL Server internal change tracking over WCF using MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default.aspx). All of our tables have integer primary keys, and we cannot move to GUIDs. So, as you're thinking, we will have identity crises between applications. We decided to partition the primary-key ranges the way merge replication does (http://msdn.microsoft.com/en-us/library/aa179416(SQL.80).aspx). Below is the example scenario:

        Server   Table A - ID range 0 to 100
        Client 1 Table A - ID range 101 to 200
        Client 2 Table A - ID range 201 to 300

    How do I implement this? I know we can use DBCC CHECKIDENT (yourtable, reseed, value) and CHECK (([ID]<=(100))), but this does not solve the issue. Merge replication provides a "Not For Replication" option (http://msdn.microsoft.com/en-us/library/aa237102(SQL.80).aspx) to achieve inserts from clients while still maintaining the set range. Can I use that somehow here? Please help...
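
    A sketch of the range-partition idea for one client, with illustrative names. Each replica's identity seed starts its own range, and rows arriving via sync are applied with IDENTITY_INSERT so they keep the IDs assigned where they originated. (Note that NOT FOR REPLICATION only exempts replication agents, so it likely doesn't help a Sync Framework apply.)

        -- Client 1: locally inserted rows draw IDs from the 101-200 range
        CREATE TABLE TableA (
            ID      INT IDENTITY(101, 1) NOT NULL PRIMARY KEY,
            Payload NVARCHAR(100) NULL
        );

        -- applying a server-originated row during sync, keeping its original ID
        SET IDENTITY_INSERT TableA ON;
        INSERT INTO TableA (ID, Payload) VALUES (42, N'row created on the server');
        SET IDENTITY_INSERT TableA OFF;

    A plain CHECK constraint on the range would also reject synced rows from other replicas, which is why CHECK (([ID]<=(100))) alone didn't solve it; range enforcement has to apply only to locally originated inserts, for example via the application layer or a trigger that ignores the sync login.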

    Read the article

  • updating a column in a table only if after the update it won't be negative, and identifying all updated rows

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects as outlined below. I have four relevant tables:

        Table A has the price per unit for all resources (looked up by resource id).
        Table B has the funds available to a given user.
        Table C has the resource production information for a given user (including the number of units to produce each day).
        Table D has the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a nightly batch job to do the following:

        a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A).
        b. Start the process to produce the resources, and after production is complete, update table D using values from table C.

    I figure the second part can be done with an UPDATE and a subquery. However, I'm not sure how I should go about part a. I can only think of using a cursor to fetch each row, examine it, and update. Is there a single SQL statement that will let me avoid processing each row manually? Additionally, if any rows weren't updated, the part b SQL should not produce resources for that user. Basically, I'm attempting to rewrite the SQL in an existing stored procedure into something that runs a lot faster (and doesn't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
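
    Part a can be done set-based with a joined UPDATE plus an OUTPUT clause that records which users were actually charged; part b then produces only for those users. A sketch with hypothetical table and column names:

        DECLARE @Charged TABLE (UserID INT PRIMARY KEY);

        -- total daily cost per user = sum over their resources of units * price
        ;WITH Cost AS (
            SELECT c.UserID, SUM(c.UnitsPerDay * a.PricePerUnit) AS DailyCost
            FROM TableC AS c
            JOIN TableA AS a ON a.ResourceID = c.ResourceID
            GROUP BY c.UserID
        )
        UPDATE b
        SET b.Funds = b.Funds - k.DailyCost
        OUTPUT inserted.UserID INTO @Charged    -- remember who actually paid
        FROM TableB AS b
        JOIN Cost AS k ON k.UserID = b.UserID
        WHERE b.Funds >= k.DailyCost;           -- skip users who can't afford today's run

        -- part b: credit production only for the users captured above
        UPDATE d
        SET d.UnitsProduced = d.UnitsProduced + c.UnitsPerDay
        FROM TableD AS d
        JOIN TableC AS c ON c.UserID = d.UserID AND c.ResourceID = d.ResourceID
        JOIN @Charged AS ch ON ch.UserID = d.UserID;

    The WHERE clause makes the funds check and the deduction a single atomic statement, and the OUTPUT table replaces the cursor's row-by-row bookkeeping.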

    Read the article

  • How to calculate the longest streak in SQL?

    - by VJ
    I have EMPLOYEE-ID, DATE, IsPresent. I want to calculate the longest streak of presence for an employee. The IsPresent bit is false for days he didn't come in, so I want the longest run of consecutive dates on which he came to the office. The Date column is unique. I tried this:

        SELECT Id, COUNT(*) FROM Employee WHERE IsPresent = 1

    But the above doesn't work. Can anyone guide me on how to calculate this streak? I am sure people have come across this; I tried searching online but didn't understand it well. Please help me out.
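
    This is the classic "gaps and islands" problem: along a run of consecutive dates, the date minus a running row number is constant, so that difference identifies each streak and can be grouped and counted. A sketch, assuming the table is named Attendance:

        ;WITH Runs AS (
            SELECT EmployeeID, [Date],
                   -- consecutive dates share the same (date - row_number) value
                   DATEDIFF(day, 0, [Date])
                     - ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY [Date]) AS grp
            FROM Attendance
            WHERE IsPresent = 1
        )
        SELECT EmployeeID, MAX(StreakLength) AS LongestStreak
        FROM (
            SELECT EmployeeID, grp, COUNT(*) AS StreakLength
            FROM Runs
            GROUP BY EmployeeID, grp
        ) AS s
        GROUP BY EmployeeID;

    Dropping the outer MAX and keeping the inner query shows every streak per employee, which is handy for sanity-checking.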

    Read the article

  • Can I set ignore_dup_key on for a primary key?

    - by Mr. Flibble
    I have a two-column primary key on a table. I have attempted to alter it to set ignore_dup_key to on with this command:

        ALTER INDEX PK_mypk ON MyTable SET (IGNORE_DUP_KEY = ON);

    But I get this error:

        Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint.

    How else should I set IGNORE_DUP_KEY to ON?
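
    The option can't be changed in place on an index that backs a constraint, but it can be specified when the constraint is (re)created. A sketch, assuming the key columns are named ColA and ColB:

        ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;

        ALTER TABLE MyTable
            ADD CONSTRAINT PK_mypk PRIMARY KEY (ColA, ColB)
            WITH (IGNORE_DUP_KEY = ON);

    With the option on, inserts that would duplicate the key are silently discarded with a warning instead of failing the whole statement.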

    Read the article
