Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • When to use RDLC over RDL reports?

    - by Daan
    I have been studying SSRS 2005/2008 in the past weeks and have created some server-side reports. For one application, a colleague suggested that I look into RDLC for that particular situation. I am now trying to get my head around the main difference between RDL and RDLC. Searching for this information yields fragmented information at best. I have learned that:

    - RDLC reports do not store information about how to get data.
    - RDLC reports can be executed directly by the ReportViewer control.

    But I still don't fully understand the relation between the RDLC file and the other related systems (the Reporting Server, the source database, the client). In order to get a good grasp on RDLC files, I would like to know how their use differs from RDL files and in what situation one would choose RDLC over RDL. Links to resources are also welcome.

    Update: A thread on the ASP.NET forums discusses this same issue. From it, I have gained a better understanding of the issue. A feature of RDLC is that it can run completely client-side in the ReportViewer control. This removes the need for a Reporting Services instance, and even removes the need for any database connection whatsoever, but it adds the requirement that the data the report needs has to be provided manually. Whether this is an advantage or a disadvantage depends on the particular application. In my application, an instance of Reporting Services is available anyway and the required data for the reports can easily be pulled from a database. Is there any reason left for me to consider RDLC, or should I simply stick with RDL?
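
    Not from the thread itself, but a minimal local-mode sketch of what "providing the data manually" looks like with an RDLC file; the report path, dataset name, and ordersTable are all hypothetical:

        using System.Data;
        using Microsoft.Reporting.WinForms; // a WebForms variant also exists

        void ShowLocalReport(ReportViewer viewer, DataTable ordersTable)
        {
            // Local mode renders entirely inside the control -- no Reporting
            // Services instance and no live database connection needed.
            viewer.ProcessingMode = ProcessingMode.Local;
            viewer.LocalReport.ReportPath = "Report1.rdlc";
            viewer.LocalReport.DataSources.Clear();
            // The name must match the dataset name defined inside the RDLC.
            viewer.LocalReport.DataSources.Add(
                new ReportDataSource("OrdersDataSet", ordersTable));
            viewer.RefreshReport();
        }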

    Read the article

  • How do you host multiple public facing websites on a VPS?

    - by Petras
    We host about 30 websites using typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites using our own virtual private server such as http://www.crystaltech.com/vps.aspx. This is clearly cheaper but comes with a lot of questions I need answers to:

    Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.

    When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?

    The control panel we have for shared hosting comes with DNS management. What software would I need to install to provide this for each site we host on a VPS?

    The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily. Is this something that can be easily set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?

    Read the article

  • "Unable To Load Client Print Control" - SSRS Printing problems again

    - by mamorgan1
    Please forgive me as my head is spinning. I have tried so many solutions to this issue that I'm almost not sure where I am at this point. At this point in time I have these issues in my Production, Test, and Dev environments. For simplicity's sake, I will just try to get it working in Dev first. Here is my setup:

    Database/Reporting Server (same server):
    - Windows Server 2003 SP2
    - SQL Server 2005 SP3

    Development Box:
    - Windows 7
    - Visual Studio 2008 SP1
    - SQL Server 2008 SP1 (not being used in this case, but wanted to include it in case it is relevant)
    - Internet Explorer 8

    Details:
    - I have a custom ASP.NET application that is using ReportViewer to access reports on my Database/Reporting Server.
    - I am able to connect directly to Report Manager and print with no trouble.
    - When I view source on the page with ReportViewer, it says I am using version 9.0.30729.4402.
    - The classid of the rsclientprint.dll that keeps getting installed to my c:\windows\downloaded program files directory is {41861299-EAB2-4DCC-986C-802AE12AC499}.
    - I have tried taking the rsclientprint.cab file from my Database/Reporting Server and installing it directly to my Development Box and had no success. I made sure to unregister the previously installed dll first.

    I feel like I have read as many solutions as I can, and so I turn to you for some assistance. Please let me know if I can provide further details that would be helpful. Thanks

    Read the article

  • Running ASP.NET 3.5 and ASP.NET 2.0 in the same site

    - by cori
    We're running ASP.NET 2.0 on our corporate web site, and I'd like to get it up to ASP.NET 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.NET 2.0 web project and a .NET 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution they seemed to be converted to .NET 3.5 with a minimum of fuss - they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly as I would expect given that .NET 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced dlls are now the 3.5 versions.

    What I would like to do is to update the site piecemeal; as I make modifications to a given page, send the 3.5 version of that page over to our webserver and not update the whole site at once. In testing on our dev box this approach seems to be working fine - the site code is interacting with the .NET 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean that they're running assemblies built in VS 2008 - the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great. Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?
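
    For reference, the 3.5 web.config change mostly amounts to referencing the 3.5 assemblies and compiler. A trimmed sketch of the relevant sections as VS 2008 typically generates them - verify against your own converted config rather than copying this verbatim:

        <!-- Trimmed sketch of the usual VS 2008 conversion additions. -->
        <compilation debug="false">
          <assemblies>
            <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </assemblies>
        </compilation>
        <system.codedom>
          <compilers>
            <!-- Same 2.0 compiler binary, told to accept C# 3 syntax. -->
            <compiler language="c#;cs;csharp" extension=".cs"
                      type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v3.5"/>
            </compiler>
          </compilers>
        </system.codedom>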

    Read the article

  • Upgrading a .NET application from MapPoint 2004 to 2009...

    - by Joshua
    I am in the process of upgrading a Visual Studio 2005 .NET (C#) application from its integration with MapPoint 2004 to supporting MapPoint 2009. After a bit of searching and fiddling, I've generated new DLLs using "tlbimp" and "aximp" and now have Interop.MapPoint.dll and AxInterop.MapPoint.dll, and the namespaces seem to line up with the previous ones, so all the object definitions are available. However, I have lots of errors telling me that various properties do not exist, even though I go into the Object Browser and they do seem to exist. Here is an example (there are dozens of similar errors):

        axMappointControl1.ActiveMap.Altitude = 1000;

    That object initializes fine as a MapPoint.Map object. When I browse to it in the Object Browser, I go to MapPoint and Map, and under Map there are no properties, but when I look deeper there are _Map80 and _Map90, and EACH of these has an Altitude property. Under Map it also lists "Base Types", which has _Map in it, which also has all the referenced properties! Yet I am getting the error:

        'MapPoint.Map' does not contain a definition for 'Altitude'

    Pretty much all the properties of both MapPoint.Map and MapPoint.Toolbars are doing this. Any ideas? Thank you! Joshua
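
    An untested workaround sketch based purely on the Object Browser observation above: if the Map coclass interface no longer surfaces the members, casting to the versioned or base dispatch interface that does declare them may compile. The interface names here are the ones visible in the Object Browser, not verified against the MapPoint 2009 type library:

        // Untested sketch: cast to the interface that actually declares Altitude.
        ((MapPoint._Map90)axMappointControl1.ActiveMap).Altitude = 1000;

        // or, via the base interface listed under "Base Types":
        ((MapPoint._Map)axMappointControl1.ActiveMap).Altitude = 1000;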

    Read the article

  • Cannot get assembly version for footer

    - by Jaxidian
    I'm using the automatic build versioning mentioned in this question (not the selected answer, but the answer that uses the [assembly: AssemblyVersion("1.0.*")] technique). I'm doing this in the footer of my Site.Master file in MVC 2. My code for doing this is as follows:

        <div id="footer">
            <a href="mailto:[email protected]">[email protected]</a>
            - Copyright © 2005-<%= DateTime.Today.Year.ToString() %>, foo LLC. All Rights Reserved.
            - Version: <%= Assembly.GetEntryAssembly().GetName().Version.ToString() %>
        </div>

    The exception I get is an "Object reference not set to an instance of an object" because GetEntryAssembly() returns null. My other options don't work either: GetCallingAssembly() always returns "4.0.0.0" and GetExecutingAssembly() always returns "0.0.0.0". When I go look at my DLLs, everything is versioned as I would expect. But I cannot figure out how to access it to display in my footer!!
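
    One common workaround sketch (my own, not from the question): under ASP.NET there is no entry assembly, so resolve the version through a type known to live in the project's own assembly. "MvcApplication" (the type declared in Global.asax.cs) is a hypothetical anchor here:

        <%-- Sketch: GetEntryAssembly() is null under ASP.NET; resolving through a
             type defined in your web project's assembly works instead. --%>
        - Version: <%= typeof(MvcApplication).Assembly.GetName().Version.ToString() %>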

    Read the article

  • Unable to convert from Julian INT date to regular TSQL Datetime

    - by Bluehiro
    Help me Stack Overflow, I'm close to going all "HULK SMASH" on my keyboard over this issue. I have researched carefully but I'm obviously not getting something right. I am working with Julian dates referenced from a proprietary tool (Platinum SQL?), though I'm working in SQL 2005. I can convert their "special" version of Julian into datetime when I run a select statement. Unfortunately it will not insert into a datetime column; I get the following error when I try:

        The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.

    So I can't set up datetime criteria for running a report off of the stored procedure.

        Original value: 733416
        Equivalent calendar value: 01-09-2009

    Below is my code. I'm so close, but I can't quite see what's wrong; I need my convert statement to actually convert the Julian value (733416) into a compatible TSQL DATETIME value.

        SELECT org_id,
            CASE WHEN date_applied = 0 THEN '00-00-00'
                 ELSE convert(char(50), dateadd(day, date_applied - 729960, convert(datetime, '07-25-99')), 101)
            END AS date_applied,
            CASE WHEN date_posted = 0 THEN '00-00-00'
                 ELSE convert(char(50), dateadd(day, date_posted - 729960, convert(datetime, '07-25-99')), 101)
            END AS date_posted
        FROM general_vw

    Read the article

  • How does Visual Studio decide the order in which stack variables should be allocated?

    - by Jason
    I'm trying to turn some of the programs in gera's Insecure Programming by Example into client/server applications that could be used in capture-the-flag scenarios to teach exploit development. The problem I'm having is that I'm not sure how Visual Studio (I'm using 2005 Professional Edition) decides where to allocate variables on the stack. When I compile and run example 1:

        int main() {
            int cookie;
            char buf[80];

            printf("buf: %08x cookie: %08x\n", &buf, &cookie);
            gets(buf);

            if (cookie == 0x41424344)
                printf("you win!\n");
        }

    I get the following result:

        buf: 0012ff14 cookie: 0012ff64

    buf starts at an address eighty bytes lower than cookie, and any four bytes that are copied into buf after the first eighty will appear in cookie. The problem I'm having is when I place this code in some other function. When I compile and run the following code, I get a different result: buf appears at an address greater than cookie's.

        void ClientSocketHandler(SOCKET cs) {
            int cookie;
            char buf[80];
            char stringToSend[160];
            int numBytesRecved;
            int totalNumBytes;

            sprintf(stringToSend, "buf: %08x cookie: %08x\n", &buf, &cookie);
            send(cs, stringToSend, strlen(stringToSend), NULL);

    The result is:

        buf: 0012fd00 cookie: 0012fcfc

    Now there is no way to set cookie to arbitrary data via overwriting buf. Is there any way to tell Visual Studio to allocate cookie before buf? Is there any way to tell beforehand how the variables will be allocated? Thanks, Jason

    Read the article

  • get property from XML using PHP

    - by Adnan
    Hello, I am using PHP's SimpleXML to get some values out of the following XML:

        <entry>
          <id>http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/base/0</id>
          <updated>2010-01-14T22:06:26.565Z</updated>
          <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/contact/2008#contact" />
          <title type="text">Customer Name</title>
          <link rel="http://schemas.google.com/contacts/2008/rel#edit-photo" type="image/*" href="http://www.google.com/m8/feeds/photos/media/email_address%40gmail.com/0/34h5jh34j5kj3444" />
          <link rel="self" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/full/0" />
          <link rel="edit" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/full/0/5555" />
          <gd:email rel="http://schemas.google.com/g/2005#other" address="[email protected]" primary="true" />
        </entry>

    I can get the title with:

        $xml = new SimpleXMLElement($response_h1);
        foreach ($xml->entry as $entry) {
            echo $entry->title, '<br />';
        }

    But how do I get the address="[email protected]" property?

    Read the article

  • Why is TransactionScope operation is not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call which initiates the loop is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error:

        The operation is not valid for the state of the transaction

    I am running this in debug with VS 2008 on Windows 7 and have the MSDTC running to enable distributed transactions. The code below isn't my production code but is set out exactly the same. AddItemToDatabase is a method on a class I cannot modify and uses a standard ExecuteNonQuery() which creates a connection then closes and disposes it once completed. I have looked at other postings on here and the internet and still cannot resolve this issue. Any help would be much appreciated.

        using (TransactionScope processItem = new TransactionScope())
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

        private void ProcessItem(Item myItem)
        {
            AddItemToDatabase(myItem);
            ProcessItemLinks(myItem);
            ProcessItemComments(myItem);
        }

        private void ProcessItemLinks(Item myItem)
        {
            foreach (Item link in myItem.Links)
            {
                ProcessItem(link);
            }
        }

        private void ProcessItemComments(Item myItem)
        {
            foreach (Item comment in myItem.Comments)
            {
                ProcessItem(comment);
            }
        }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point as it contains company-sensitive information which I cannot disclose. Hope this is enough information.

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
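
    For what it's worth, the trace shows a freshly opened pooled connection trying to enlist and promote the transaction to MSDTC. One hedged mitigation sketch (my own, under the assumption that each AddItemToDatabase call opens and closes its own connection) is to keep a single connection open for the scope's lifetime so SQL Server 2005 never needs to promote the transaction; ProcessItem(conn, myItem) is a hypothetical overload that reuses the connection:

        // Sketch only -- requires System.Transactions and System.Data.SqlClient.
        // A connection opened inside the scope auto-enlists once, and every
        // command on it runs in the same local (non-distributed) transaction.
        using (TransactionScope scope = new TransactionScope())
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (Item myItem in itemsList)
            {
                ProcessItem(conn, myItem); // hypothetical overload reusing conn
            }
            scope.Complete();
        }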

    Read the article

  • How do I get the value of a custom soap header in WCF

    - by Jason Coyne
    I have created a custom soap header, and added it into my message via IClientMessageInspector:

        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
        {
            var header = new MessageHeader<AuthHeader>();
            header.Content = new AuthHeader(Key);
            header.Actor = "Anyone";
            var header2 = header.GetUntypedHeader("Auth", "xWow");
            request.Headers.Add(header2);
            return null;
        }

        [DataContract(Name = "Auth")]
        public class AuthHeader
        {
            public AuthHeader(string key)
            {
                this.Key = key;
            }

            [DataMember]
            public string Key { get; set; }
        }

    I also have an IDispatchMessageInspector, and I can find the correct header in the list. However, there is no value. I know that the value went across the wire correctly, because the message string is correct:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
          <s:Header>
            <Auth s:actor="Anyone" xmlns="xWow" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
              <Key xmlns="http://schemas.datacontract.org/2004/07/xWow.Lib">HERE IS MY KEY VALUE!!!!</Key>
            </Auth>
            <To s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none">http://localhost:26443/AuthService.svc</To>
            <Action s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none">http://tempuri.org/IAuthService/GetPayload</Action>
          </s:Header>
          <s:Body>
            <GetPayload xmlns="http://tempuri.org/" />
          </s:Body>
        </s:Envelope>

    But there does not seem to be any property to retrieve this value. The MessageHeaderInfo class has Actor, etc., but nothing else useful I can find. On the client side I had to convert between Header and Untyped header; is there an equivalent operation on the server?
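
    There is a typed accessor on the server side that mirrors the client-side conversion; a sketch, assuming it runs inside an IDispatchMessageInspector (the "Auth"/"xWow" name and namespace are taken from the message above):

        // Sketch: MessageHeaders.GetHeader<T> deserializes the typed content
        // back out of the untyped header, using the same name/namespace pair
        // the client passed to GetUntypedHeader.
        int index = request.Headers.FindHeader("Auth", "xWow");
        if (index >= 0)
        {
            AuthHeader auth = request.Headers.GetHeader<AuthHeader>("Auth", "xWow");
            string key = auth.Key;
        }

    Inside the service operation itself, the same headers are reachable via OperationContext.Current.IncomingMessageHeaders.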

    Read the article

  • Crosstab/Cube/Pivot Components for Delphi

    - by Anagoge
    I'm looking for a Delphi VCL crosstab/cube/pivotcube/OLAP grid component for Delphi 2009, 2010, or XE. I'm willing to sacrifice advanced features to get something open/free (or very cheap if I must) to make it easier to collaborate with any future developers without anyone having to purchase more components than I already use, since this will just be used in one screen. If there isn't anything appropriate out there, I may try to implement something simple on my own. I can live with some fairly basic features: drag and drop to configure dimensions, sort by a column, allow totals/min/max for a column, and (optionally) expand/collapse or drill down to sub-categories. Blazing performance and enterprise scalability are not required, since there should be less than 2000 source rows.

    There appear to be several decent options in the commercial space (ExpressPivotCube, FastCube, HierCube), but they are all a few hundred dollars. This project already uses existing installations of Excel 2007 and SQL Server 2005/2008, so I might consider leveraging those, though I'd prefer a native Delphi component, if possible.

    There are also the very old Decision Cube components included in Delphi's Source\xtab directory, but they apparently no longer support Unicode compilers (Delphi 2009+), since I got dozens of Unicode-related compilation errors while test-compiling that source in Delphi XE. Those components also still link to the long-deprecated BDE! Has anyone modified Decision Cube to support Unicode/pure TDataSet? The online tutorials I found were incomplete and silent on the dozens of BDE/Unicode compilation errors I see, so I might have to tackle that on my own. Does anyone have suggestions on where to start for a free/cheap basic crosstab/pivot grid component?

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely-used algorithm that has time complexity worse than that of another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in the form: "There are algorithms A and B that have O(N**2) and O(N) time complexity correspondingly, but B has such a big constant that it has no advantages over A for inputs less than the number of atoms in the Universe."

    Example highlights from the answers:

    - Simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems.
    - A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm.
    - Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines.

    All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst-case and average-case scenarios?

    Related: The Rise of "Worse is Better". (For the purpose of this question the "Worse is Better" phrase is used in a narrower (namely, algorithmic time-complexity) sense than in the article.)

    Python's Design Philosophy: "The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections)." This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case).

    The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

    Read the article

  • RSpec and stubbing parameters for a named scope

    - by Andy Waite
    I'm trying to write a spec for a named scope which is date-dependent. The spec:

        it "should return 6 months of documents" do
          Date.stub!(:today).and_return(Date.new(2005, 03, 03))
          doc_1 = Factory.create(:document, :date => '2005-01-01')
          Document.past_six_months.should == [doc_1]
        end

    The named scope in the Document model:

        named_scope :past_six_months,
          :conditions => ['date > ? AND date < ?', Date.today - 6.months, Date.today]

    The spec fails with an empty array, and the query in test.log shows why:

        SELECT * FROM "documents" WHERE (date > '2009-11-11' AND date < '2010-05-11')

    i.e. it appears to be ignoring my stubbed Date method. However, if I use a class method instead of a named scope then it passes:

        def self.past_six_months
          find(:all, :conditions => ['date > ? AND date < ?', Date.today - 6.months, Date.today])
        end

    I would rather use the named scope approach but I don't understand why it isn't working.

    Read the article

  • Computer science textbooks

    - by Barrett Conrad
    I would like to try the book question a little bit differently. My goal is to know what the community thinks are the quintessential computer science textbooks.

    <beginsadstory>I lost all of my computer science and math books from college in Hurricane Katrina in 2005. I greatly miss having my familiar tomes to refer to when topics and problems come up, so I am looking to rebuild my library.<endsadstory>

    What are your recommendations for the best examples of academic-caliber books for the field of computer science and its associated mathematics? I am looking for books on subjects like computational theory, programming languages, compilers, operating systems, algorithms and so on. Don't limit your suggestions to your textbooks only. If there is a book you have read that covers computer science or a related math in a formal way, but is not strictly a textbook, I would love to hear about it as well.

    Finally, for the sake of creating a good reference for all of us, may I suggest posting one book per answer so they can be rated individually.

    Read the article

  • WCF RIA Silverlight deployment issues

    - by Handleman
    It seems the world is awash with people having problems deploying RIA WCF services, and now I'm one too. I've already tried a bunch of things, but to no avail. I need WCF RIA to support a Silverlight 3 application I've built. The short story:

    - Using the new WCF RIA Services (Nov 09?) I open VS 2008 and create a new project (Silverlight application), enabling ".NET RIA services".
    - I add a new item to the web project - a LINQ to SQL .dbml file (from a SQL 2005 DB prepared earlier) - and compile.
    - I add a new item to the web project - a domain service (linking the tables I need) - and compile.
    - Using the domain context I "Load" data with a standard RIA get query in the MainPage and add a TextBlock to display the returned data.
    - Build & run (Cassini) - success.
    - Using VS to publish to IIS on my local PC - success.
    - Using VS to publish to the test server (IIS6) - browse to the location and the Silverlight app loads, but Fiddler tells me I get a 404 on all the WCF .svc requests. Using Fiddler to "launch IE" on the service request confirms it - 404.

    I have already run aspnet_regiis, ServiceModelReg and added MIME types for .xap, .xaml, .xbap and .svc. I have included the System.Web.Ria and System.Web.DomainServices DLLs with Copy Local set to true. I need help with either a) a solution or b) an approach to find a solution.

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large-graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication.

    I am primarily looking for scalable, high-performance graph libraries that could be used in either single- or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:

    - C++ -- The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C -- igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java -- I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python -- igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, in 2005, looks stale now.

    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article

  • Perl OO frameworks and program design - Moose and Conway's inside-out objects (Class::Std)

    - by Emmel
    This is more of a use-case type of question... but also generic enough to be more broadly applicable:

    In short, I'm working on a module that's more or less a command-line wrapper; OO, naturally. Without going into too many details (unless someone wants them), there isn't a crazy amount of complexity to the system, but it did feel natural to have three or four objects in this framework. Finally, it's an open source thing I'll put out there, rather than a module with a few developers in the same firm working on it.

    First I implemented the OO using Class::Std, because Perl Best Practices (Conway, 2005) made a good argument for why to use inside-out objects: full control over what attributes get accessed, proper encapsulation, and so on. Also his design is surprisingly simple and clever. I liked it, but then noticed that no one really uses this; in fact it seems Conway himself doesn't really recommend this anymore?

    So I moved to everyone's favorite, Moose. It's easy to use, although way, way overkill feature-wise for what I want to do. The big, major downside is: it's got a slew of module dependencies that force users of my module to download them all. A minor downside is it's got way more functionality than I really need.

    What are the recommendations? Inconvenience fellow developers by forcing them to use a possibly obsolete module, or force every user of the module to download Moose and all its dependencies? Is there a third option for a proper Perl OO framework that's popular but neither of these two?

    Read the article

  • How to launch correct version of Msbuild

    - by Rory Becker
    When I type "msbuild" at the command prompt, I get:

        Microsoft (R) Build Engine Version 2.0.50727.4927
        [Microsoft .NET Framework, Version 2.0.50727.4927]
        Copyright (C) Microsoft Corporation 2005. All rights reserved.

    This is all very well and good, except that when I run this against a VS 2010 .sln file, the error message indicates:

        MyProject.sln(2): Solution file error MSB5014: File format version is not recognized. MSBuild can only read solution files between versions 7.0 and 9.0, inclusive.
            0 Warning(s)
            1 Error(s)

    It would appear that the version of MSBuild being called is not capable of understanding my solution file. I figured that I would check out my path and see where MSBuild is being picked up from. However, it seems that no part of my path points at a location where MSBuild is to be found. How is the command line finding the copy of MSBuild that it is using, and how can I change this so that the latest version is used?
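
    For context, each .NET Framework version ships its own MSBuild under %WINDIR%\Microsoft.NET\Framework. A sketch of invoking the 4.0 build engine (which understands VS 2010 solutions) directly - verify the exact v4.0.* folder name on your machine first:

        rem Sketch: call the .NET 4.0 MSBuild explicitly.
        %WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe MyProject.sln

        rem Or put it ahead of the 2.0 copy for the current session:
        set PATH=%WINDIR%\Microsoft.NET\Framework\v4.0.30319;%PATH%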

    Read the article

  • Padding is invalid and cannot be removed

    - by Ajay
    I have hosted an ASP.NET 2.0 site. Every day, I am getting the error "Padding is invalid and cannot be removed" 2-3 times. The backend used is SQL Server 2005, and the site is controlled via a Plesk 9.2 control panel. Pooling is enabled with a timeout of 120 minutes in IIS. Can that be the reason for this? I have not used any encryption except for the stored passwords (MD5). The error message is:

        Base Exception: Padding is invalid and cannot be removed.
        Source: System.Web
        TargetSite: Void ThrowError(System.Exception, System.String, System.String, Boolean)
        Message: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    And the system log (Application) says:

        Event code: 4009
        Event message: Viewstate verification failed. Reason: The viewstate supplied failed integrity check.
        Event detail code: 50203
        ViewStateException information:
            Exception message: Invalid viewstate.
            Port: 31235
            User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

    I have hosted it on a dedicated server and not in a web farm. Will keeping a static machine key help resolve this issue? Please guide me on this.
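
    A static key is set in web.config; a hedged sketch of the shape it takes - the key values below are placeholders, so generate your own random hex keys rather than copying any example:

        <!-- Sketch: pin the machine key so viewstate MACs survive app-pool
             recycles. Key values are placeholders; use real random hex. -->
        <system.web>
          <machineKey
              validationKey="0123456789ABCDEF...(64+ hex chars)..."
              decryptionKey="0123456789ABCDEF...(48 hex chars)..."
              validation="SHA1"
              decryption="AES" />
        </system.web>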

    Read the article

  • Large Reports for MSRS

    - by Greg Lorenz
    I have a report that needs to be able to render a very large number of pages (about 4500 in this instance) in a web browser. The total time needed to finish on the report server, from start time to end time, is about 30 minutes for the instance I am looking at. Does anyone know what options exist for handling the rendering of such a large report in a web browser?

    In terms of looking into how this can be resolved, I have already performed the following tasks. The report gets its data from a database table that already has the data flattened, to the point that the TimeDataRetrieval on the report server is 17812 ms, or about 18 seconds. The report itself has been reformatted to include the least expensive report objects it can while still rendering the data in the correct format; it basically consists of a table with about 4 nested tables and that's it.

    We were trying to accomplish this on a 2005 report server but continued to run into memory issues that were not acceptable for our clients. In response, we moved this onto a 2008 report server to take advantage of the fact that it uses the file system instead of memory, and we were finally able to get this to work without running out of available memory - but of course it takes much longer.

    Read the article

  • Problem with libcurl cookie engine

    - by Seb Rose
    [Cross-posted from the libcurl mailing list]

    I have a single-threaded app (MSVC C++ 2005) built against a static libcurl 7.19.4. A test application connects to an in-house server and performs a bespoke authentication process that includes posting a couple of forms, and when this succeeds it creates a new resource (POST) and then updates the resource (PUT) using If-Match.

    - I only use a single connection to libcurl (i.e. only one CURL*).
    - The cookie engine is enabled from the start using curl_easy_setopt(CURLOPT_COOKIEFILE, "").
    - The cookie cache is cleared at the end of the authentication process using curl_easy_setopt(CURLOPT_COOKIELIST, "SESS"). This is required by the authentication process.

    The next call, which completes a successful authentication, results in a couple of security cookies being returned from the server - they have no expiry date set. The server (and I) expect the security cookies to then be sent with all subsequent requests to the server. The problem is that sometimes they are sent and sometimes they aren't. I'm not a CURL expert, so I'm probably doing something wrong, but I can't figure out what. Running the test app in a loop shows a random distribution of correct cookie handling.

    As a workaround I've disabled the cookie engine and am doing basic manual cookie handling. Like this it works as expected, but I'd prefer to use the library if possible. Does anyone have any ideas? Thanks, Seb

    Read the article

  • How to deploy RSWebParts.cab manually?

    - by denni
    I'm using the SSRS 2005 Web parts to display my reports in a MOSS 2007 SP1 portal. I have successfully installed the Web parts on my development, testing, and UAT servers using the following command:

        stsadm -o addwppack -filename path/to/RSWebParts.cab

    But when I tried running the same command on the production server, it gave me the following error:

        This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    I know I will usually get this kind of error message when I try to deploy a custom solution that has no Web application resources (such as web.config entries) to a specific Web application. But this is not my custom solution; it is an out-of-the-box SSRS Web part, and it does have resources scoped to a Web application. I tried using different combinations of the command by providing the -url, -globalinstall, and -force switches, but it still gives the same error.

    The configuration of the 4 servers is exactly the same, from both software and hardware perspectives. All other features are working properly on the production server. I even tried extracting the cab file manually to the bin folder of my Web application, then modifying the web.config manually to include the SafeControl element (copied from the manifest.xml inside the cab file). But it gave me an error saying it couldn't find the resource file, even though I extracted the whole file, including the resource files, into the bin folder.

    Is there anyone who can help me resolve the problem? Thanks a lot.

    Read the article

  • Accessing deleted rows from a DataTable

    - by Ken
    Hello: I have a parent WinForm that has a MyDataTable _dt as a member. The MyDataTable type was created in the "typed dataset" designer tool in Visual Studio 2005 (MyDataTable inherits from DataTable). _dt gets populated from a db via ADO.NET. Based on changes from user interaction in the form, I delete a row from the table like so:

        _dt.FindBySomeKey(_someKey).Delete();

    Later on, _dt is passed by value to a dialog form. From there, I need to scan through all the rows to build a string:

        foreach (MyDataTableRow row in _dt)
        {
            sbFilter.Append("'" + row.info + "',");
        }

    The problem is that upon doing this after a delete, the following exception is thrown:

        DeletedRowInaccessibleException: Deleted row information cannot be accessed through the row.

    The workaround that I am currently using (which feels like a hack) is the following:

        foreach (MyDataTableRow row in _dt)
        {
            if (row.RowState != DataRowState.Deleted && row.RowState != DataRowState.Detached)
            {
                sbFilter.Append("'" + row.info + "',");
            }
        }

    My question: is this the proper way to do this? Why would the foreach loop access rows that have been tagged via the Delete() method?
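
    An alternative sketch (my own, not from the poster): DataTable.Select can hand back only the current rows up front, which avoids the per-row state checks; the filter and sort arguments are left empty here, and the result can be cast back to MyDataTableRow if the typed members are needed:

        // Sketch: ask the table for non-deleted rows instead of filtering
        // on RowState inside the loop (System.Data).
        foreach (DataRow row in _dt.Select("", "", DataViewRowState.CurrentRows))
        {
            sbFilter.Append("'" + row["info"] + "',");
        }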

    Read the article

  • In Seam, what's the difference between an injected EntityManager and getEntityManager from EntityHome?

    - by Navi
    I am testing a Seam application using the Needle test API. In my code I am using the getEntityManager() method from EntityHome. When I run the unit tests against an in-memory database I get the following exception:

        java.lang.IllegalStateException: No application context active
            at org.jboss.seam.Component.forName(Component.java:1945)
            at org.jboss.seam.Component.getInstance(Component.java:2005)
            at org.jboss.seam.Component.getInstance(Component.java:1983)
            at org.jboss.seam.Component.getInstance(Component.java:1977)
            at org.jboss.seam.Component.getInstance(Component.java:1972)
            at org.jboss.seam.framework.Controller.getComponentInstance(Controller.java:272)
            at org.jboss.seam.framework.PersistenceController.getPersistenceContext(PersistenceController.java:20)
            at org.jboss.seam.framework.EntityHome.getEntityManager(EntityHome.java:177)
            ...

    I can resolve some of these errors by injecting the EntityManager with:

        @In
        EntityManager entityManager;

    Unfortunately, the persist method of EntityHome also calls getEntityManager. This means a lot of mocks or rewriting the code somehow. Is there any workaround, and why is this exception thrown anyway? I am using Seam 2.2.0 GA, by the way.

    There is nothing special about the components; they are generated by seam-gen. The test is performed with an in-memory database - I followed the examples in http://jbosscc-needle.sourceforge.net/jbosscc-needle/1.0/db-util.html.

    Read the article
