Search Results

Search found 14179 results on 568 pages for 'reportingservices 2008'.

  • Project setup for an ADO.NET/WCF DataService

    - by Slauma
    I'd like to implement an ADO.NET/WCF DataService and I am wondering what the best way is to set up a project for this purpose in VS2008 SP1. Currently I have an ASP.NET web application project (not the "WebSite" project type). The data access layer is an Entity model (EF version 1) with a SQL Server database. The Entity Model lives in a separate DLL project, and the web application project references that assembly for all data access.

    The ADO.NET/WCF DataService needs to communicate with the Entity model/database as well, and it has to be hosted on the same web server (IIS 7.5) as the web application. Since the DataService is not directly related to that specific web application (though it will provide and modify data from/in the same database the web application uses), my basic idea was to separate the DataService into its own new project, which also references the Entity Model DLL.

    Now I have seen that there is no "ADO.NET/WCF DataService" project type in VS2008 SP1. It seems only possible to add a DataService as an item to other existing projects, for instance Web Application projects. Why isn't there a separate DataService project type? Does this mean I have to add the DataService as an item to my Web Application project? Or should I create a new Web Application project and add a DataService to it? (I could delete the pre-generated default.aspx since I do not need any web pages in this project.) What's the best way? Thank you for suggestions in advance!
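
    Not part of the original question: if the service does end up in its own (otherwise empty) Web Application project, the code-behind of the added .svc item is typically just a small class deriving from DataService<T> over the entity container. The names below (WidgetEntities, WidgetDataService) are placeholders.

        using System.Data.Services;

        // WidgetEntities stands in for the EF v1 ObjectContext from the separate model DLL.
        public class WidgetDataService : DataService<WidgetEntities>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // Start read-only for every entity set; widen rights per set as needed.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }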

  • Can LINQ-to-SQL omit unspecified columns on insert so a database default value is used?

    - by Todd Ropog
    I have a non-nullable database column which has a default value set. When inserting a row, sometimes a value is specified for the column and sometimes not. This works fine in T-SQL when the column is omitted. For example, given the following table:

        CREATE TABLE [dbo].[Table1](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [col1] [nvarchar](50) NOT NULL,
            [col2] [nvarchar](50) NULL,
            CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO
        ALTER TABLE [dbo].[Table1] ADD CONSTRAINT [DF_Table1_col1] DEFAULT ('DB default') FOR [col1]

    The following two statements will work:

        INSERT INTO Table1 (col1, col2) VALUES ('test value', '')
        INSERT INTO Table1 (col2) VALUES ('')

    In the second statement, the default value is used for col1. The problem I have is when using LINQ-to-SQL (L2S) with a table like this. I want to produce the same behavior, but I can't figure out how to make L2S do that. I want to be able to run the following code and have the first row get the value I specify and the second row get the default value from the database:

        var context = new DataClasses1DataContext();

        var row1 = new Table1 { col1 = "test value", col2 = "" };
        context.Table1s.InsertOnSubmit(row1);
        context.SubmitChanges();

        var row2 = new Table1 { col2 = "" };
        context.Table1s.InsertOnSubmit(row2);
        context.SubmitChanges();

    If the Auto Generated Value property of col1 is False, the first row is created as desired, but the second row fails with a null error on col1. If Auto Generated Value is True, both rows are created with the default value from the database. I've tried various combinations of Auto Generated Value, Auto-Sync and Nullable, but nothing I've tried gives the behavior I want. L2S does not omit the column from the insert statement when no value is specified. Instead it does something like this:

        INSERT INTO Table1 (col1, col2) VALUES (null, '')

    ...which of course causes a null error on col1. Is there some way to get L2S to omit a column from the insert statement if no value is given? Or is there some other way to get the behavior I want? I need the default value at the database level because not all row inserts are done via L2S, and in some cases the default value is a little more complex than a hard-coded value (e.g. creating the default based on another field), so I'd rather avoid duplicating that logic.
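
    Not from the original question: one client-side workaround, at the cost of duplicating the simple default that the post would prefer to keep only in the database, is to seed the property in the entity's OnCreated partial method so L2S never sends NULL. A sketch (assuming the designer-generated backing field is named _col1):

        // Extends the L2S designer-generated partial class for Table1.
        public partial class Table1
        {
            // Called from the generated constructor. Mirrors DF_Table1_col1 so that
            // an unset col1 is never sent to the database as NULL.
            partial void OnCreated()
            {
                _col1 = "DB default";
            }
        }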

  • SQL ORDER BY / CASE issue: order direction fails

    - by Joris
    I have a stored procedure that delivers a table of students, and it needs to order by surname, name, etc. It also needs to sort ascending or descending, depending on the parameter @orderby... code:

        ORDER BY
            CASE WHEN @orderby = 'studentkey' THEN Studentkey END ASC,
            CASE WHEN @orderby = 'studentkey' and @desc = 1 THEN Studentkey END DESC,
            CASE WHEN @orderby = 'initials' THEN Initials END ASC,
            CASE WHEN @orderby = 'initials' and @desc = 1 THEN Initials END DESC,
            CASE WHEN @orderby = 'firstname' THEN Firstname END ASC,
            CASE WHEN @orderby = 'firstname' and @desc = 1 THEN Firstname END DESC,
            CASE WHEN @orderby = 'nickname' THEN Nickname END ASC,
            CASE WHEN @orderby = 'nickname' and @desc = 1 THEN Nickname END DESC,
            CASE WHEN @orderby = 'insertion' THEN Insertion END ASC,
            CASE WHEN @orderby = 'insertion' and @desc = 1 THEN Insertion END DESC,
            CASE WHEN @orderby = 'surname' THEN Surname END ASC,
            CASE WHEN @orderby = 'surname' and @desc = 1 THEN Surname END DESC

    There is a difference in output between @desc = 1 and @desc = 0, but not what I desire... Does anyone have a solution?
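
    Not part of the original question: if the sort column and direction are ultimately chosen by the calling application anyway, an alternative sometimes used instead of the CASE ladder is to whitelist the column name and build the ORDER BY clause in C#. A rough sketch (table and column names assumed):

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class StudentQueries
        {
            // Builds "SELECT ... ORDER BY <column> <direction>" from a whitelisted column name.
            public static SqlCommand BuildStudentQuery(SqlConnection conn, string orderBy, bool descending)
            {
                // Whitelist prevents SQL injection: only known column names are ever concatenated.
                var allowed = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
                    { "Studentkey", "Initials", "Firstname", "Nickname", "Insertion", "Surname" };
                if (!allowed.Contains(orderBy))
                    orderBy = "Surname"; // safe fallback

                string sql = "SELECT * FROM dbo.Students ORDER BY " + orderBy
                             + (descending ? " DESC" : " ASC");
                return new SqlCommand(sql, conn);
            }
        }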

  • Setting Sql server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application with a secret password, and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication. So now the SQL Server logins must be replaced.

    As a result, I am now reviewing all of our logins, but I am not sure how it is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so the user cannot log in to the database and delete data randomly. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Some answers can also be found on Server Fault: http://serverfault.com/questions/138763/setting-sql-server-security-rights-for-multiple-situations

  • "Illegal characters in path." Visual Studio WinForm Design View

    - by jacksonakj
    I am putting together a lightweight MVP pattern for a WinForms project. Everything compiles and runs fine. However, when I attempt to open the WinForm in design mode in Visual Studio I get an "Illegal characters in path" error. My WinForm uses generics and inherits from a base Form class. Is there a problem with using generics in a WinForm? Here are the WinForm and the base Form class.

        public partial class TapsForm : MvpForm<TapsPresenter, TapsFormModel>, ITapsView
        {
            public TapsForm()
            {
                InitializeComponent();
            }

            public TapsForm(TapsPresenter presenter)
                : base(presenter)
            {
                InitializeComponent();
                UpdateModel();
            }

            public IList<Taps> Taps
            {
                set { gridTaps.DataSource = value; }
            }

            private void UpdateModel()
            {
                Model.RideId = Int32.Parse(cboRide.Text);
                Model.Latitude = Double.Parse(txtLatitude.Text);
                Model.Longitude = Double.Parse(txtLongitude.Text);
            }
        }

    Base form MvpForm:

        public class MvpForm<TPresenter, TModel> : Form, IView
            where TPresenter : class, IPresenter
            where TModel : class, new()
        {
            private readonly TPresenter presenter;
            private TModel model;

            public MvpForm()
            {
            }

            public MvpForm(TPresenter presenter)
            {
                this.presenter = presenter;
                this.presenter.RegisterView(this);
            }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                if (presenter != null)
                    presenter.IntializeView();
            }

            public TModel Model
            {
                get
                {
                    if (model == null)
                        throw new InvalidOperationException("The Model property is currently null, however it should have been automatically initialized by the presenter. This most likely indicates that no presenter was bound to the control. Check your presenter bindings.");
                    return model;
                }
                set { model = value; }
            }
        }
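
    Not from the original question: one workaround often used for this designer limitation is to insert a non-generic intermediate class between the form and the generic base, so the designer only has to instantiate a closed, non-generic type. A sketch:

        // Non-generic shim: the Windows Forms designer can load this closed type,
        // while the generic plumbing stays in MvpForm<TPresenter, TModel>.
        public class TapsFormDesignBase : MvpForm<TapsPresenter, TapsFormModel>
        {
            public TapsFormDesignBase() { }

            public TapsFormDesignBase(TapsPresenter presenter) : base(presenter) { }
        }

        // TapsForm then inherits the shim instead of the generic base;
        // the rest of the form stays as in the question.
        public partial class TapsForm : TapsFormDesignBase, ITapsView
        {
            public TapsForm()
            {
                InitializeComponent();
            }
        }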

  • SQL Server schema comparison - leaves behind obsolete files on synchronization

    - by b0x0rz
    Directly related to http://stackoverflow.com/questions/2768489/visual-studio-2010-database-project-is-there-a-visual-way/2772205#2772205

    I have a problem synchronizing between a database project and a database in Visual Studio. Usually I synchronize FROM the database TO the database project (using Visual Studio's Data > Schema Compare > New Schema Comparison). The synchronization works, BUT when, for example, I corrected the spelling of a key in the database and synchronized it, the file with the WRONG spelling of the key remains (albeit with its contents commented out); the new one is correctly added. This file happens to be in [project name]/Schema Objects/Schemas/dbo/Tables/Keys, but for sure there are others elsewhere. How can I automatically remove such obsolete files when synchronizing? Thanks.

  • SQLConnection.Open(); throwing exception

    - by flavour404
    Hi, I'm updating an old piece of software, but in order to maintain backward compatibility I need to connect to a .mdb (Access) database. I am using the following connection code but keep getting an exception. Why? I have validated the path, the existence of the database, etc. and that is all correct.

        string Server = "localhost";
        string Database = drive + "\\btc2\\state\\states.mdb";
        string Username = "";
        string Password = "Lhotse";

        string ConnectionString = "Data Source = " + Server + ";" +
                                  "Initial Catalog = " + Database + ";" +
                                  "User Id = '';" +
                                  "Password = " + Password + ";";

        SqlConnection SQLConnection = new SqlConnection();

        try
        {
            SQLConnection.ConnectionString = ConnectionString;
            SQLConnection.Open();
        }
        catch (Exception Ex)
        {
            // Try to close the connection
            if (SQLConnection != null)
                SQLConnection.Dispose();

            // can't connect
            // Stop here
            return false;
        }
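
    Not part of the original question: SqlConnection only talks to SQL Server, while a .mdb file is normally opened through OLE DB. A sketch of what that could look like (the provider name is the standard Jet provider for .mdb files; the path and password are taken from the question and hard-coded here for brevity):

        using System.Data.OleDb;

        // Jet 4.0 is the usual OLE DB provider for classic .mdb files (32-bit only).
        string connectionString =
            @"Provider=Microsoft.Jet.OLEDB.4.0;" +
            @"Data Source=C:\btc2\state\states.mdb;" +
            "Jet OLEDB:Database Password=Lhotse;";

        using (OleDbConnection connection = new OleDbConnection(connectionString))
        {
            connection.Open();
            // ... run OleDbCommand queries against the Access database here
        }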

  • Crystal Reports Failed Database Login

    - by Marlon
    Hello. After spending a good 3 to 4 hours on Google trying to find any solution to my problem, I haven't had much luck. Basically, we use Crystal Reports for our .NET applications with a SQL Server back end. We have many clients, each with their own server, so our reports need to have their connections set dynamically. Up until a week ago this worked fine.

    However, a few days ago a client reported they were getting a database login prompt for a report (only one report; the rest worked fine). We were quite stumped, but we managed to reproduce it on a netbook which didn't have Visual Studio or SQL Server installed. In the end the dev decided to reproduce the report in the hope it was just an oddity in that particular report. Unfortunately a new client today also experienced the same problem, but this time for every Crystal report they had - and also the reports worked on the netbook, so we are really quite lost here.

    Below is a screenshot of what our clients get presented with - and here is the code that I use to set the connection information in the report:

        cI.ServerName = (string)builder["Data Source"];
        cI.DatabaseName = (string)builder["Initial Catalog"];
        cI.UserID = (string)builder["User ID"];
        cI.Password = (string)builder["Password"];

        foreach (IConnectionInfo info in cryRpt.DataSourceConnections)
        {
            info.SetConnection(cI.ServerName, cI.DatabaseName, cI.UserID, cI.Password);
        }

        foreach (ReportDocument sub in cryRpt.Subreports)
        {
            foreach (IConnectionInfo info in sub.DataSourceConnections)
            {
                info.SetConnection(cI.ServerName, cI.DatabaseName, cI.UserID, cI.Password);
            }
        }

    As always, any help is much appreciated.
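
    Not from the original post: when setting DataSourceConnections alone does not stick, a variant often tried is to apply the logon info to each table of the report and of its subreports as well. A sketch using the Crystal Reports object model (cI is the same ConnectionInfo as above):

        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        static class ReportLogOnHelper
        {
            public static void ApplyLogOn(ReportDocument report, ConnectionInfo cI)
            {
                foreach (Table table in report.Database.Tables)
                {
                    // Each table carries its own TableLogOnInfo; overwrite its connection.
                    TableLogOnInfo logOnInfo = table.LogOnInfo;
                    logOnInfo.ConnectionInfo = cI;
                    table.ApplyLogOnInfo(logOnInfo);
                }

                foreach (ReportDocument sub in report.Subreports)
                {
                    foreach (Table table in sub.Database.Tables)
                    {
                        TableLogOnInfo logOnInfo = table.LogOnInfo;
                        logOnInfo.ConnectionInfo = cI;
                        table.ApplyLogOnInfo(logOnInfo);
                    }
                }
            }
        }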

  • 2 large databases - worth merging into 1?

    - by Ardman
    I have 2 large databases that were sharded before. I have now removed the sharding and created a new database with all of the data except for the tables that were originally sharded. Is it worth importing this data into the new database, or should I keep them as separate entities that I can just scan through? We are talking around 60 million records in each sharded table, of which there are 2 tables. Also, whilst I have an empty table, should I be adding indexes which weren't thought of when the database was originally constructed and which would now be too expensive to add to the full tables?

  • Dynamically creating a #temp table and then using it to insert data

    - by praveen
    Hi All, I am importing an intermediate result of a query into a temp table for further use, so I used a #temp table to maintain the same schema, as in:

        SELECT * INTO #temp FROM schema.tableName WHERE 1<1;
        INSERT INTO #temp EXEC(table)

    While I am doing the first statement through a variable, so I can pass in different table names, it's not working:

        SET @TEMPSCHEMA = 'SELECT * INTO #temp FROM ' + @PKSchema + '.dbo.' + @PKTableName + ' WHERE 1<1;'
        EXEC(@TEMPSCHEMA)

        INSERT INTO #temp EXEC ( @SELECTSTATEMENT )

    However, after the EXEC statement it is not taking the values into the temp table; it reports an invalid object error for #temp.

  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine.

    It was suggested by a teammate that I add data validation, so that after the initial response validator runs I check the database and compare the data with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test the data validation always fails, even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is, when I step through it in debug with the breakpoint, it does validate successfully.

    Before I get too much more into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test and the response validation is as far as I should go?
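
    For context (an editor's sketch, not from the original post): the usual hook for this kind of check in a Visual Studio web test is a custom validation rule. The connection string, query, and expected-value handling below are invented placeholders.

        using System.Data.SqlClient;
        using Microsoft.VisualStudio.TestTools.WebTesting;

        public class DatabaseStateValidationRule : ValidationRule
        {
            // Placeholder properties; set them from the web test or a plug-in.
            public string ConnectionString { get; set; }
            public string ExpectedValue { get; set; }

            public override void Validate(object sender, ValidationEventArgs e)
            {
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand(
                    "SELECT TOP 1 SomeColumn FROM dbo.SomeTable ORDER BY UpdatedAt DESC",
                    connection))
                {
                    connection.Open();
                    object actual = command.ExecuteScalar();
                    e.IsValid = Equals(ExpectedValue, actual as string);
                    e.Message = e.IsValid
                        ? "Database state matches the request."
                        : "Database state does not match the request.";
                }
            }
        }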

  • Don't see Clean solution option in Visual Studio

    - by user102533
    For one of my solutions, I don't see the Clean Solution option, either in the context menu when I right-click the solution name in Solution Explorer or in the Build menu. When I make any changes to the project and debug, VS never hits the breakpoint and I get the message "The breakpoint will not currently be hit. The source code is different from the original version." My understanding is that I need to clean the solution. For other solutions, I do see Clean Solution and I don't have the same issue.

  • Why does /MANIFESTUAC:NO work?

    - by Eamon
    Windows 7, C++, VS2008. I have a COM DLL that needs to be registered using "Run as administrator" (it is a legacy app that writes to the registry). The DLL is used by a reports app which instantiates it using CoCreateInstance. This failed unless I also ran the reports app as administrator, until I changed the linker setting from /MANIFESTUAC to /MANIFESTUAC:NO. Can anyone tell me why this works? Does it mean that I can write apps that bypass UAC using this setting?

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: It seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option.

    Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally, for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data!

    After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data.

    Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets.

    Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships.

    So, my questions are:

    Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)

    If not, is there another explanation for the behaviour I'm seeing?

    If it is a known issue, why is it an issue, and are there better workarounds I could be using?

  • Visual Studio - precompile - dotless

    - by Pickels
    I wonder if there is a way to precompile *.less files (http://www.dotlesscss.com/) with Visual Studio. The site gives me a dotless.compiler.exe, but I am not sure how to hook this up to Visual Studio. I am looking for a solution for both WebForms and MVC. Friendly greetings, Pickels
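
    Not from the original question: besides the command-line compiler, dotless also ships a dotless.Core library. Assuming its documented Less.Parse helper, a small console tool could compile .less files to .css and be wired in via a project pre-build event. A sketch (paths and tool name are placeholders):

        using System.IO;
        using dotless.Core;

        // Tiny pre-build helper: compiles every .less file in a folder to .css.
        // Hook it up via a project pre-build event, for example:
        //   LessCompiler.exe "$(ProjectDir)Content"
        class LessCompiler
        {
            static void Main(string[] args)
            {
                string folder = args.Length > 0 ? args[0] : ".";
                foreach (string lessFile in Directory.GetFiles(folder, "*.less"))
                {
                    string css = Less.Parse(File.ReadAllText(lessFile));
                    File.WriteAllText(Path.ChangeExtension(lessFile, ".css"), css);
                }
            }
        }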

  • Anyone using Moles / Pex in production?

    - by dferraro
    Hi all, I did search the forum and did not find a similar question. I'm looking to make a final decision on our mocking framework of choice moving forward as a best practice. I've decided on Moq... until I just recently discovered MS has finally created a mocking framework called Moles, which seems to work similarly to TypeMock via the profiler API sexiness etc. There are a million "NMock vs Moq vs TypeMock vs Rhino..." threads on here, but I never see Moles involved. In fact, I did not even know of its existence until a short time ago. Anyone using it? In production? Anyone dump their old mocking framework for it, and if so, which one? How did it compare to other mocking frameworks you've used? Thanks. PS: we are using VS2008 and are moving to 2010 shortly.
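
    Not part of the original question: for readers who have not seen Moles, a detour looks roughly like the canonical DateTime example below (a sketch based on the standard Moles documentation pattern; the moled mscorlib assembly must be generated first and the test run under the Moles host):

        using System;
        using System.Moles;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MolesSmokeTest
        {
            [TestMethod]
            [HostType("Moles")] // Moles detours only take effect under the Moles host
            public void DateTimeNow_CanBeDetoured()
            {
                // Redirect every call to DateTime.Now for the duration of this test.
                MDateTime.NowGet = () => new DateTime(2000, 1, 1);

                Assert.AreEqual(2000, DateTime.Now.Year);
            }
        }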

  • Windows update breaks DLLs?

    - by shoosh
    I'm compiling a project which uses multiple DLLs and compiles with VS2008. After a recent Windows update, DLLs compiled on my computer stopped working on other computers. After some investigation it turned out that the update changed the CRT redistributable library I'm compiling with from version "9.0.21022.8" to version "9.0.30729.4148". This is evident from the manifest file of the EXE I'm compiling; it contains the following:

        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8"
                              processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
          </dependentAssembly>
        </dependency>
        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148"
                              processorArchitecture="amd64" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
          </dependentAssembly>
        </dependency>

    Meaning it wants to use two different versions of the CRT at the same time. The second version is needed by the code I'm compiling right now, and the first version is needed by older DLLs which were compiled a few weeks ago.

    On the computers where the application is deployed this becomes a problem, since they get their CRT DLL from a local folder called Microsoft.VC90.CRT and not from WinSxS. This folder can't contain two different versions of the DLL. Is there a known solution to this issue, or do I need to start compiling all of the other DLLs with the new CRT?

  • How to protect code from crashes in this situation

    - by Davit Siradeghyan
        int arr[ 5 ] = { 0 };
        int i = 8; // out of bounds
        arr[ i ] = 8;

    I know that I can just check i like this:

        if( i < 0 || i >= 5 )
            ....

    I also know about SEH in Visual Studio, but it does not look like a working solution:

        __try
        {
            /* code */
        }
        __except(GetExceptionCode() == EXCEPTION_ARRAY_BOUNDS_EXCEEDED)

    This is not working. As far as I can see, SEH works in situations like division by 0, access to protected pages... How can I protect my program from crashes?
