Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.

  • How to have Excel data validation display different data in drop down than is actually validated

    - by Memitim
    How can I provide a user with a drop-down menu in a cell that displays the contents of one column but actually writes the value from a different column to the cell, and validates against the values from that second column? I have a bit of code that very nearly does this (credit: DV0005 from the Contextures site):

        Private Sub Worksheet_Change(ByVal Target As Range)
            On Error GoTo errHandler
            If Target.Cells.Count > 1 Then GoTo exitHandler
            If Target.Column = 10 Then
                If Target.Value = "" Then GoTo exitHandler
                Application.EnableEvents = False
                Target.Value = Worksheets("Measures").Range("B1") _
                    .Offset(Application.WorksheetFunction _
                    .Match(Target.Value, Worksheets("Measures").Range("Measures"), 0) - 1, 1)
            End If
        exitHandler:
            ' Closing handlers restored here so the snippet is complete and runnable;
            ' they follow the standard Contextures DV0005 pattern.
            Application.EnableEvents = True
            Exit Sub
        errHandler:
            Resume exitHandler
        End Sub

    The drop-down displays the values from one column, for example Column B, but when a value is selected the code actually writes the value on the same row from Column C to the cell. However, data validation is still validating against Column B, so if I manually enter something from Column C in the cell and try to move to another cell, data validation throws an error.

    Read the article

  • Recover data from SD card

    - by Paul Tarjan
    I have a 2GB Kingston microSD card which is about 3 years old. I put it in a reader today on my Windows Vista computer, wrote a 32MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back in Vista, it now says "You need to format the disk in drive F: before you can use it." What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a # dd if=/dev/sdg of=~/tmp/sd.bin gives me a 2 gig file, and grepping that file suggests lots of my data is still there, so how can I put it back together?

    Read the article

  • Recovering data from a corrupted disk

    - by r_honey
    I use an external hard disk to back up my data (it had 3 partitions). Last week when I plugged it in, the OS (Win 7) hung and I had to force-reboot the machine. When I turned it back on, the system just did not detect the hard disk. That was last Sunday, and I had to give up after some time. Now I return the next Sunday (today), and when I plug it back in to the machine, the OS detects the disk as well as all 3 partitions on it. But it says all 3 are unformatted and I can't access any of them. Is there any way to recover data from the 3 partitions? (I tried PC File Recovery and Recuva from Piriform, but neither detected these partitions.)

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, but rather to the history of the data: most of it was migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that data troubles may draw the attention of managers on the customer's side, and that some processes at the customer may stop working because they have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software that somewhat works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts that belong to this topic?

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that claim to be MVC also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (an oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is appealing, but I also understand that, as usual, it comes at a price. In particular: 1. Live binding is quite heavy, either with dirty checking (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. A quite usual time split: 90% of features are built in 10% of project time, but the remaining 10% take 90% of project time. I suspect (a.k.a. an educated guess) that those MVC things are not helping to implement more functionality in less time... and if so, the motivation for using them is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same thing in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself... So my question is: what benefits does that MVC stuff, or data binding, give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, what exactly?

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, and an entire code repository as all belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only. TIP: It is not necessary to highlight and copy the row or column descriptions. Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into continuous text. From now on, for any subsequent updates of the data shown in your documents you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again. As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. Also, you have seen in the example where only a single data cell was copied that no member names or row/column descriptions are copied, which are usually required in an ad-hoc report in order to define exactly where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell. So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. By selecting the Manage POV option from the Smart View menu in Word or Power Point, the POV Manager – Queries window opens. You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right-hand side of the window.
    After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP: Build your original report from the start in such a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn't like to miss mentioning: using Dynamic Data Points in the way described above, you will never again lose or need to search for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even one single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers from within your Word or Power Point document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected]. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series covers: asynchronous functions, TPL Dataflow, and the task-based asynchronous pattern. So, let's continue with Part II: TPL Dataflow. Definition (quoting the Async CTP documentation): "TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#." This means: data manipulation processed asynchronously. "TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications." Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? Etc. Dataflow Blocks: TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let's review the data processing blocks available. Blocks are categorized into three groups: buffering blocks, executor blocks, and joining blocks. Think of them as electronic circuitry components :). 1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value, and once it has been set, it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (the degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock, but it produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs can be of any types you want (T1, T2, etc.). 9. BatchedJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft's Async CTP documentation.
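
    In the meantime, here is a minimal sketch of a two-block pipeline (my own illustration, not from the CTP documentation; it assumes a reference to the System.Threading.Tasks.Dataflow assembly that ships with the Async CTP):

        using System;
        using System.Threading.Tasks.Dataflow;

        class PipelineSketch
        {
            static void Main()
            {
                // TransformBlock: an executor block mapping each input to an output.
                var square = new TransformBlock<int, int>(n => n * n);

                // ActionBlock: an executor block consuming each value it receives.
                var print = new ActionBlock<int>(n => Console.WriteLine(n));

                // Wire the pipeline: outputs of 'square' flow into 'print'.
                square.LinkTo(print);

                // Propagate completion by hand once 'square' has drained.
                square.Completion.ContinueWith(t => print.Complete());

                for (var i = 1; i <= 5; i++)
                    square.Post(i);      // Post feeds data into the head block.

                square.Complete();       // Signal that no more input is coming.
                print.Completion.Wait(); // Wait until the tail block finishes.
            }
        }

    The manual ContinueWith hand-off is the completion-propagation idiom of the CTP era; later releases can express it as a link option instead.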

    Read the article

  • Are you reporting Visual Studio 2012 issues to Microsoft correctly?

    - by Tarun Arora
    Issues you may run into while using Visual Studio need to be reported to the Microsoft product team via the Microsoft Connect site. The Microsoft team then tries to reproduce the issue using the details provided by you. If the information you provide isn't sufficient to reproduce the issue, the team tries to contact you for specifics; this not only increases the cycle time to resolution, but the lack of communication can also result in issues never being resolved. So, when I report an issue, one part of me says to include as much detail about the issue as I can, bundling screenshots, repro steps, system information, Visual Studio version information... while the other half says this is so time-consuming, leave it for now and come back to fill in all these details later. Reporting a bug without including the supporting information is an invitation to excuses. Microsoft has absolutely changed this experience for VS 2012. The Microsoft Visual Studio Feedback tool is designed to simplify the process of providing feedback and reporting issues to Microsoft that you may encounter while using Microsoft Visual Studio 2012. Note – the Microsoft Visual Studio 2012 Feedback client currently only works with VS 2012 and not any other version of Visual Studio. Setting up the Microsoft Visual Studio 2012 Feedback client: open Visual Studio and, from the Tools menu, select Extensions and Updates. In the Extensions and Updates window, click Online in the left pane, search for 'feedback', and download and install the Microsoft Visual Studio 2012 Feedback Tool by following the instructions in the wizard. Note - restarting Visual Studio after the install is a must! How to report a bug for Visual Studio 2012: click the Help menu and choose Report a Bug. You should see a Microsoft Visual Studio 2012 Feedback Tool icon come up in the system tray icon area. You'll need to accept the privacy statement. You have the option of reporting the feedback as private or public. Microsoft works with several partners, MVPs and vendors who get access to early bits of Microsoft products for evaluation; this is where it becomes essential to report the feedback privately. I would choose the Public option otherwise; after all, if it's out there in public, others can discover and add to it easily. You now have the option to report a new issue or add to an existing issue. Should you choose to add to an existing issue, you should have the feedback ID of the issue available; this can be obtained from the Microsoft Connect site. For now I am going to focus on reporting a new feedback item privately. Filling out the feedback details: you will notice that VsInfo.xml and DxDiagOutput.txt are automatically attached as you enter this screen (more on that later). Feedback Type: choose the feedback type from Performance, Hang, Crash or Other. Note – the Record button will only be enabled once you have chosen the feedback type, and bug-repro recording is not available on Windows Server 2008. Effective Title and Description: enter a title that helps us differentiate the bug when it appears in a list, so that we can group it with any related bugs, assign it to a developer more effectively, and resolve it more quickly. Example: imagine that you are submitting a bug because you tried to install Service Pack 1 and got a message that Visual Studio is not installed even though it is. Helpful: "Installed Visual Studio version not detected during Service Pack 1 setup." Not helpful: "Service Pack 1 problem."
    Tip: Write the problem description first, and then distil it to create a title. Example description (helpful): "When I run Service Pack 1 Setup, I get the message 'No Visual Studio version is detected' even though I have Visual Studio 2010 Ultimate and Visual C++ 2010 Express installed on my machine. Even though I uninstalled both editions, and then first reinstalled Ultimate and then Express, I still get the message." Record: becoming a first-class citizen. Often a repro recording is invaluable for describing and deciphering the issue, so please use this feature to send actionable feedback. The record-repro feature works differently depending on the feedback type you selected; details for each recording option follow. You can start recording simply by selecting a feedback type and clicking the "Record" button. When "Performance" is the bug type: when the Microsoft Visual Studio trace recorder starts, perform the actions that show the performance problem you want to report, and then click the "Stop Recording" button as soon as you experience the performance problem. Because the tool optimizes trace collection, you can run it for as long as it takes to show the problem, up to two hours. Note that you need to stop recording as soon as the performance issue occurs, because the tool captures only the last couple of minutes of your actions to optimize the trace collection. After you stop the recording, the tool takes up to two minutes to assemble the data and attach an ETLTrace.zip file to your bug report. The data includes information about Windows events and the Visual Studio code path. Note that running the Microsoft Visual Studio trace recorder requires elevated user privileges. When "Crash" is the bug type: when the dialog box appears, select the running Visual Studio instance for which you want to show the steps that cause a crash. When the crash occurs, click the "Stop Record" button. After you do this, two files are attached to your bug report: an AutomaticCrashDump.zip file that contains information about the crash, and a ReproSteps.zip file that shows the repro steps. Repro steps are captured by the Windows Problem Steps Recorder. Note that you can pause the recording and resume later, or add additional comments for a specific step. When "Hang" is the bug type: the process for recording the steps that cause a hang resembles the one for crashes. The difference is that you can collect a dump file even after VS hangs; start the VSFT either from the system tray or by starting a new instance of VS, select "Hang" as the feedback type and click the "Record" button. You will be prompted for the VS instance to collect a dump about; select the VS instance that hanged. VSFT collects a dump file regarding the hang, called MiniDump.zip, and attaches it to your bug report. When "Other" is the bug type: when the problem steps recorder starts, perform the actions that show the issue you want to report and then choose the "Stop" button. You can pause the recording and resume later, or add additional comments for a specific step. Once you're done, ReproSteps.zip is added to your bug report. Pre-attached files: it is essential for Microsoft to know what version of the product you are currently using and the current configuration of your system. Note – the total size of all attachments in a bug report cannot exceed 2 GB, and every uncompressed attachment must be smaller than 512 MB.
    We recommend that you assemble all of your attachments, compress them together into a .zip file, and then attach the .zip file. Taking a screenshot: associate a screenshot by clicking the Take Screenshot button and choose either the entire desktop, a specific monitor (useful if you are working in a multi-monitor configuration), or the specific window in question. And finally... click Submit. If you need further help, more details can be found here. You can view your feedback online by using the following URL: https://connect.microsoft.com/VisualStudio/SearchResults.aspx?SearchQuery=<feedbackId>. Happy bug logging!

    Read the article

  • Visual Studio 2010 and CrystalReports

    - by LukePet
    I have a DLL built with target framework 3.5 that manages reports; this DLL uses version 10.5.3700.0 of CrystalDecisions.CrystalReports.Engine. Now I have created a new WPF application based on .NET Framework 4.0, and I added the report DLL reference to the project. I had to install the Crystal Reports for Visual Studio 2010 library (http://www.businessobjects.com/jump/xi/crvs2010/default.asp) to build the application without errors... now it builds successfully, but printing the report doesn't work. It generates an error when setting the data source... the message is: Unknown Query Engine Error. Error in File C:\DOCUME~1\oli15\IMPOST~1\Temp\MyReport {4E514D0E-FC2C-4440-9B3C-11D2CA74895A}.rpt: ... Source=Analysis Server ErrorCode=-2147482942 StackTrace: at CrystalDecisions.ReportAppServer.Controllers.DatabaseControllerClass.ReplaceConnection(Object oldConnection, Object newConnection, Object parameterFields, Object crDBOptionUseDefault) at CrystalDecisions.CrystalReports.Engine.Table.SetDataSource(Object val, Type type) at CrystalDecisions.CrystalReports.Engine.ReportDocument.SetDataSourceInternal(Object val, Type type). I think it may be using a different version of CrystalDecisions.CrystalReports.Engine; is that possible? How can I tell it to use the 10.5.3700.0 version?
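
    One avenue worth checking (a sketch under assumptions, not a verified fix) is an assembly binding redirect in the WPF application's app.config, forcing whatever newer Engine version the runtime resolves back down to 10.5.3700.0. The publicKeyToken and the oldVersion range below are assumptions that must be verified against the assemblies actually in your GAC:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <!-- Token assumed for CrystalDecisions assemblies; verify in your GAC -->
                <assemblyIdentity name="CrystalDecisions.CrystalReports.Engine"
                                  publicKeyToken="692fbea5521e1304" culture="neutral" />
                <!-- Redirect any 13.x reference pulled in by the VS 2010 runtime back to 10.5 -->
                <bindingRedirect oldVersion="13.0.0.0-13.9.9.9" newVersion="10.5.3700.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    This can only work if the 10.5 runtime is still installed on the machine alongside the 13.x one.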

    Read the article

  • Visual Studio 2010 remote debugging is very slow (across domains, over VPN)

    - by alex
    Overall debugging works, but each step through code takes dozens of seconds. I've already closed all additional windows like stack trace, watches and autos, and deleted all breakpoints. The server and dev machine are located in different domains, so I set up a local user on both, with matching passwords. The remote debugger is running as a service. Looking at the security log, I found quite a lot of entries about the remote debugging account logging in (a record about every minute). Any suggestions on how I can speed up remote debugging? Dev computer: quad core, 8 GB mem, Win 7 x64, Visual Studio 2010 Ultimate. Target server: ASP.NET website, 2x dual-core Xeon, 2 GB mem, Remote Debugger 2010. Communication channel: VPN, 5 Mbit, latency about 20 ms (it seems that debugging never uses more than 20 kB/s).

    Read the article

  • Global ignore pattern for TortoiseSVN / Visual Studio 2010

    - by Chris Simmons
    After installing and using Visual Studio 2010, I'm seeing some newer file types (at least with C++ projects... I don't know about the other types) compared to 2008, e.g. .sdf and .opensdf, which I guess are the replacement for .ncb files, with IntelliSense info now stored in SQL Server Compact files? I also notice .log files are generated, which appear to be build logs. Given this, what's safe to add to my global ignore pattern? Off the bat, I'd assume .sdf and .opensdf, but what else?
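
    For reference, a plausible starting point for TortoiseSVN's space-separated global ignore pattern (Settings > General) might look like the following. This is a guess to be trimmed to taste, and the *.log entry is the risky one, since it matches any log file in the working copy, not just build logs:

        *.sdf *.opensdf *.suo *.user *.ncb *.log ipch bin obj Debug Release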

    Read the article

  • Missing help files for Microsoft.WindowsMobile in Visual Studio 2008 help system

    - by Johann Gerell
    I've just installed the Windows Mobile 6.5.3 DTK, both Standard and Professional. Before that I had the Standard and Professional Windows Mobile 6 SDKs. All Windows Mobile help pages are missing from Visual Studio 2008's help system, in particular everything in the Microsoft.WindowsMobile namespace. Microsoft.WindowsMobile.DirectX is there, but it's not part of the Windows Mobile 6 SDK or the 6.5.3 DTK. If I open the WM6 docs from the freshly created program group, then it's all fine and dandy, but there doesn't seem to have been proper integration with VS during installation. Any ideas what's gone wrong and how to fix it?

    Read the article

  • Visual Studio 2008: Don't deploy SQL Server Compact 3.5 when debugging

    - by Thorsten Dittmar
    Hi, I'm using VS2008 to create a Compact Framework application for a Windows CE 5.0 device (Datalogic Kyman). I'm using SQL Server Compact 3.5 in my application. However, I'm debugging on a Kyman that still has Windows CE 4.2 installed (attached via USB using Mobile Device Center). My problem: VS2008 does not recognize that SQL Server Compact is already installed on the device and asks me to install SQL Server Compact every time I run my application from the IDE. The installer shows me a warning about the SQL Server Compact CAB file not being suitable for this device, but the installation completes without errors, and the application also works without errors. I've unchecked the box "Always deploy latest .NET version" (I don't know exactly what it's called in English; I'm using the German VS2008), but that doesn't help. How can I tell Visual Studio not to install SQL Server Compact every time before launching my application?

    Read the article

  • Installing Visual Studio 2003 on Windows 7 64-bit

    - by Cole Shelton
    My team is currently supporting a .NET 1.1 app and we are installing VS.NET 2003 on Windows 7. We haven't had any issues on the 32-bit machines, but FrontPage Server Extensions are failing to install on my 64-bit machine. Others on the Interwebs say that they have done this successfully, so I wanted to know if anyone here has, and if they know of a solution. The specific issue is that FPSE (to clarify, I'm installing "FrontPage 2002 Server Extensions for IIS 7.0") fails to install correctly. In Event Viewer I get the error: Microsoft FrontPage Server Extensions: Error #3004f Message: Unable to read configuration information for Microsoft Internet Information Server: ImpersonateLoggedOnUser Error. I've looked for errors with ImpersonateLoggedOnUser on 64-bit and did find a case where it fails on 64-bit when UAC is turned off (which I had). I turned UAC back on, ran the command prompt as administrator, and ran msiexec on the FPSE package. Still no dice. I have followed this tutorial (and the others it points to) for installing: http://frankbuchan.blogspot.com/2009/08/visual-studio-2003-under-windows-7.html

    Read the article

  • VS2010 "An item with the same key has already been added"

    - by fneep
    I recently installed Visual Studio 2010 and copied and converted an old VS2005 solution to VS2010. When I edit this solution and try to change a control's .Image property, VS2010 shows a message box telling me that "An item with the same key has already been added", and won't let me browse for an image. I can add images in any other solution, even others ported from VS2005, but not this one. Any idea what I'm doing wrong?

    Read the article

  • Building cURL & libcurl with Visual Studio 2010

    - by KTC
    With the help of question #197444, I have managed to build cURL & libcurl from source on Windows from within the Visual Studio 2010 IDE, with OpenSSL 1.0.0 and zlib 1.2.5. The problem I see is that at the moment, if I run the resulting curl.exe with the argument -V, the version it reports is: curl 7.20.1 (i386-pc-win32) libcurl/7.20.1 OpenSSL/0.9.8d zlib/1.2.3 Protocols: dict file ftp ftps http https imap imaps ldap pop3 pop3s rtsp smtp smtps telnet tftp Features: AsynchDNS Largefile NTLM SSL libz. Note that the versions reported for both OpenSSL & zlib don't match what I actually used. Any idea how to fix this? P.S. Is there a clear list of the optional components that can be compiled into libcurl and which options/preprocessor directives to use? (e.g. SSPI, libidn, ...?)

    Read the article

  • Problems trying to use Visual Studio 2010 and AJAX extensions

    - by Adrian
    Why can I not add a ScriptManager control or UpdatePanel to a page in Visual Studio 2010? The drag (or double-click) just fails; it seems like there is an incompatibility somewhere. UPDATE: This is a 'default' install of 2010 Ultimate on Windows 7. I create a web application and cannot drag a ScriptManager or UpdatePanel onto the designer, though typing the declarations works. The cursor changes to the [+] icon when I drag the control to the right place, but nothing appears to happen on 'drop'; briefly the document's name has * appended, then it quickly changes back to normal, so it either saves or does an undo. I'm assuming something is going wrong so it undoes... but what is going wrong?

    Read the article

  • XML/XSD intellisense not working in Visual Studio 2010

    - by Jason
    I am working on XML and XSD files in VS 2010, but IntelliSense isn't working. IntelliSense is working for the same files in VS 2008, however. When I type '<xs:', options like "attribute", "complexType", "simpleType", or "element" do not appear. Is there some difference between VS 2008 and VS 2010 that I'm missing? I add an XSD file to my solution, and all the proper namespaces are generated automatically, as such: <?xml version="1.0" encoding="utf-8"?> <xs:schema id="XMLSchema2" targetNamespace="http://tempuri.org/XMLSchema2.xsd" elementFormDefault="qualified" xmlns="http://tempuri.org/XMLSchema2.xsd" xmlns:mstns="http://tempuri.org/XMLSchema2.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema"> </xs:schema> The "xsdschema.xsd" file is in the "C:\Program Files\Microsoft Visual Studio 10.0\xml\Schemas" directory, and there is a check mark in the "Use" column in the XML Schemas dialog box.

    Read the article

  • Visual Studio and SQL Server - correct installation sequence?

    - by cdonner
    I am rebuilding my development machine. This issue is not new to me, but I don't remember the solution. I started with VS 2008 Pro, then SQL Server 2008 Developer, then the SQL SP1, then VS SP1. The result is that I cannot open SSIS projects (see the error below). What is the correct installation order so that I can avoid the installation of SQL Server Express and still have all the features working? --------------------------- Microsoft Visual Studio --------------------------- Package Load Failure: Package 'DataWarehouse VSIntegration layer' has failed to load properly (GUID = {4A0C6509-BF90-43DA-ABEE-0ABA3A8527F1}). Please contact the package vendor for assistance. Application restart is recommended, due to possible environment corruption. Would you like to disable loading this package in the future? You may use 'devenv /resetskippkgs' to re-enable package loading. --------------------------- Yes No ---------------------------

    Read the article

  • Unit Tests in Visual Studio 2010

    - by Ben
    Hi, I am trying to create a unit test for a WinForm in a Visual Studio 2010 project. I add a new "Coded UI Test" to my project, open up the code file, then right-click and select "Generate Code for Coded UI Test" > "Use Coded UI Test Builder". I then start my application and select "Record" on the UIMap control. I run through my steps (in this case, simply selecting a textbox, typing in a random value, then clicking a button). I then select "Generate Code" from the UIMap control, which generates the code the test will use. When running this test, I get the error: Test method HelloWorldTest.CodedUITest1.CodedUITestMethod1 threw exception: Microsoft.VisualStudio.TestTools.UITest.Extension.UITestControlNotFoundException: The playback failed to find the control with the given search properties. Additional Details: TechnologyName: 'MSAA' ControlType: 'Window' Name: 'Form1' ClassName: 'WindowsForms10.Window' --- System.Runtime.InteropServices.COMException: Error HRESULT E_FAIL has been returned from a call to a COM component. Does anyone know where I am going wrong? Thanks

    Read the article

  • Passing parameter to SOAP Web Service Requests Using Visual Studio Team System 2008

    - by Nicholas
    How can I pass the current datetime as a parameter to a SOAP request? I know you can pass in parameters by adding a data source to the web test project and referencing it like this: {{DataSource.TableName.FieldName}}. But I want to pass in the current datetime as a dynamic value (something like DateTime.Now). How do I go about doing this? Below is a sample SOAP request that I put into the String Body: <soap:body> <MyQuery xmlns="http://something.com"> <req> <QueryType>{{DataSource.Table.QueryType}}</QueryType> <Name>{{DataSource.Table.Name}}</Name> <RequestDateTime>{{insert DateTime.Now here}}</RequestDateTime> </req> </MyQuery> </soap:body> P.S. I'm running the web test by adding a Web Service Request in Visual Studio Team System 2008.
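
    One way this is commonly handled (a sketch; the class name and the RequestDateTime parameter name are mine) is a web test plugin that puts DateTime.Now into the test context before each run, so the String Body can reference it just like a data source binding:

        using System;
        using Microsoft.VisualStudio.TestTools.WebTesting;

        // Attach this plugin to the web test, then reference {{RequestDateTime}}
        // in the String Body instead of a DataSource binding.
        public class CurrentDateTimePlugin : WebTestPlugin
        {
            public override void PreWebTest(object sender, PreWebTestEventArgs e)
            {
                // The format string is an assumption; match what the service expects.
                e.WebTest.Context["RequestDateTime"] =
                    DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss");
            }
        }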

    Read the article
