Search Results

Search found 16056 results on 643 pages for 'visual studio dbpro'.


  • How can I change the VisualState in a View from the ViewModel?

    - by Decker
    I'm new to WPF and MVVM. I think this is a simple question. My ViewModel is performing an async call to obtain data for a DataGrid which is bound to an ObservableCollection in the ViewModel. When the data is loaded, I set the proper ViewModel property and the DataGrid displays the data with no problem. However, I want to introduce a visual cue for the user that the data is loading. So, using Blend, I added this to my markup: <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="LoadingStateGroup"> <VisualState x:Name="HistoryLoading"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="HistoryGrid"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Hidden}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="HistoryLoaded"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="WorkingStackPanel"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Hidden}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> </VisualStateGroup> </VisualStateManager.VisualStateGroups> I think I know how to change the state in my code-behind (something similar to this): VisualStateManager.GoToElementState(LayoutRoot, "HistoryLoaded", true); However, the place where I want to do this is in the I/O completion method of my ViewModel, which does not have a reference to its corresponding View. How would I accomplish this using the MVVM pattern?
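    A common MVVM approach is to expose the state as a plain string property on the ViewModel and let a small attached behavior on the View translate it into a VisualStateManager call. The sketch below is one way to do that; VisualStateBehavior and the bound property name are hypothetical, not part of any framework.

    ```csharp
    using System.Windows;

    // Hypothetical attached property: bind it on the view's root element, e.g.
    //   local:VisualStateBehavior.State="{Binding LoadState}"
    // and have the ViewModel set LoadState to "HistoryLoading" / "HistoryLoaded".
    public static class VisualStateBehavior
    {
        public static readonly DependencyProperty StateProperty =
            DependencyProperty.RegisterAttached(
                "State", typeof(string), typeof(VisualStateBehavior),
                new PropertyMetadata(null, OnStateChanged));

        public static void SetState(DependencyObject d, string value) { d.SetValue(StateProperty, value); }
        public static string GetState(DependencyObject d) { return (string)d.GetValue(StateProperty); }

        private static void OnStateChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var element = d as FrameworkElement;
            var state = e.NewValue as string;
            if (element != null && !string.IsNullOrEmpty(state))
                VisualStateManager.GoToElementState(element, state, true);
        }
    }
    ```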

    Read the article

  • Linq causes collection to disappear when trying to use OrderByDescending

    - by Jeremy B.
    For background, I am using MongoDB and Rob Conery's LINQ driver. The code I am attempting is thus: using (var session = new Session<ContentItem>()) { var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).OrderByDescending(y => y.CreatedOn).ToList(); ViewData.Model = contentCollection; } This will work on one machine, but on another machine I get back no results. To get results I have to do using (var session = new Session<ContentItem>()) { var contentCollection = session.QueryCollection.Where(x => x.CreatedOn < DateTime.Now).ToList(); ViewData.Model = contentCollection.OrderByDescending(y => y.CreatedOn).ToList(); } I have to do ToList() on both lines, or I get no results. If I try to chain anything it breaks. This is the same project, all DLLs are loaded locally. Both machines have the same framework and the same versions of Visual Studio and add-ons; the only difference is one has VisualSVN, the other AnkhSVN. I can't see those causing the problem. Also, while debugging, on the machine that does not work you can see the items in the collection, and if you remove ordering altogether it will work. This has got me completely stumped.

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF, .LDF pair) to SQL Server, using Sql Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this). NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise version. So "User Instances" are not an option.
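    SQL Server itself does not force databases into the default DATA directory; that path is just what SSMS pre-fills. As long as the service account has rights to the per-customer folder, SMO can attach the files from wherever they live. A rough sketch (the folder layout and the "_log.ldf" naming convention are assumptions):

    ```csharp
    using System.Collections.Specialized;
    using System.IO;
    using Microsoft.SqlServer.Management.Smo;

    public static class CustomerDatabases
    {
        // Attach a customer's copy of the prototype database from its own folder.
        public static void Attach(string serverName, string databaseName, string customerFolder)
        {
            var server = new Server(serverName);

            var files = new StringCollection();
            files.Add(Path.Combine(customerFolder, databaseName + ".mdf"));
            files.Add(Path.Combine(customerFolder, databaseName + "_log.ldf"));  // assumed log file name

            server.AttachDatabase(databaseName, files);
        }
    }
    ```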

    Read the article

  • UITableView UITableViewCell dynamic UILabel Height storyboard

    - by Mikel Nelson
    This isn't a question, just a results log on an issue I had with Xcode 4.5 storyboards and a dynamic-height UITableViewCell containing a UILabel. The issue was: the initial display of a cell would only show part of the resized UILabel contents, and the visible UILabel was not resized. It would only display correctly after scrolling it off the top of the table and back down. I did the calculations in heightForRowAtIndexPath and called sizeToFit on the UILabel in cellForRowAtIndexPath. The sizes were coming up OK in the debugger, but the device was not updating the display with the correct size and UILabel.text value. I had created the dynamic UITableViewCell in a storyboard. However, I had set the width and height to a nominal value (290x44). It turns out this was causing my issues. I set the width and height to zero (0) in the storyboard, and everything started working correctly (i.e. the UILabels displayed at the correct size with full content). I was unable to find anything online on this issue, except for some references to creating the custom table cell with a frame of zero. Turns out, that was really the answer (for me).

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database from on-premise SQL Server 2012 to SQL Azure. Easy, right? Methods After a fair bit of research I've come across the following methods: Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source. Processes Using whichever method, the process could be either: Export from on-premise SQL Server 2012 to local BACPAC Upload BACPAC to blob storage Import BACPAC to SQL Azure via Hosted DAC Or: Export from on-premise SQL Server 2012 to local BACPAC Import BACPAC to SQL Azure via Client DAC Question All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right click on local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
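    For what it's worth, the "Client DAC" route can be reduced to a few lines against the DacFx client library (Microsoft.SqlServer.Dac). This is only a sketch, under the assumption that the DacFx 3.0 assemblies are installed on the build machine; the connection strings, database names and path are placeholders.

    ```csharp
    using Microsoft.SqlServer.Dac;   // DacFx 3.0 client library

    public static class AzureCopy
    {
        public static void CopyDatabaseToAzure(string sourceConnection, string sourceDb,
                                               string azureConnection, string targetDb,
                                               string bacpacPath)
        {
            // Export from on-premise SQL Server 2012 to a local .bacpac
            var source = new DacServices(sourceConnection);
            source.ExportBacpac(bacpacPath, sourceDb);

            // Import the .bacpac directly into SQL Azure (no blob storage hop)
            var target = new DacServices(azureConnection);
            using (var package = BacPackage.Load(bacpacPath))
            {
                target.ImportBacpac(package, targetDb);
            }
        }
    }
    ```

    Compiled into a small console exe, this could then be invoked from the CI script with an ordinary <Exec Command="" /> step.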

    Read the article

  • Loading .dll/.exe from file into temporary AppDomain throws Exception

    Hi Gang, I am trying to make a Visual Studio AddIn that removes unused references from the projects in the current solution (I know this can be done, Resharper does it, but my client doesn't want to pay for 300 licences). Anyhoo, I use DTE to loop through the projects, compile their assemblies, then reflect over those assemblies to get their referenced assemblies and cross-examine the .csproj file. Problem: since the .dll/.exe I loaded up with Reflection doesn't unload until the app domain unloads, it is now locked and the projects can't be built again because VS tries to re-create the files (all standard stuff). I have tried creating temporary files, then reflecting over them...no worky, still have locked original files (I totally don't understand that BTW). Now I am going down the path of creating a temporary AppDomain to load the files into and then destroy. I am having problems loading the files though: The way I understand AppDomain.Load is that I should create and send a byte array of the assembly to it. I do that: FileStream fs = new FileStream(assemblyFile, FileMode.Open); byte[] assemblyFileBuffer = new byte[(int)fs.Length]; fs.Read(assemblyFileBuffer, 0, assemblyFileBuffer.Length); fs.Close(); AppDomainSetup domainSetup = new AppDomainSetup(); domainSetup.ApplicationBase = assemblyFileInfo.Directory.FullName; AppDomain tempAppDomain = AppDomain.CreateDomain("TempAppDomain", null, domainSetup); Assembly projectAssembly = tempAppDomain.Load(assemblyFileBuffer); The last line throws an exception: "Could not load file or assembly 'WindowsFormsApplication1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.":"WindowsFormsApplication3, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"}" Any help or thoughts would be greatly appreciated. My head is lopsided from beating it against the wall... Thanks, Dan
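    One pattern that side-steps both the file lock and the dependency probing is to keep the calling domain clean and do the reflection inside the temporary AppDomain through a MarshalByRefObject proxy. The sketch below is illustrative only (the class and method names are made up), and assumes all that needs to come back across the domain boundary is the list of referenced assembly names.

    ```csharp
    using System;
    using System.IO;
    using System.Reflection;

    // Runs inside the temporary AppDomain, so whatever it loads is released
    // when that domain is unloaded; loading from a byte array avoids locking
    // the .dll/.exe on disk.
    public class ReferenceInspector : MarshalByRefObject
    {
        public string[] GetReferencedAssemblyNames(string assemblyFile)
        {
            byte[] raw = File.ReadAllBytes(assemblyFile);
            Assembly assembly = Assembly.Load(raw);   // GetReferencedAssemblies reads metadata only
            return Array.ConvertAll(assembly.GetReferencedAssemblies(), r => r.FullName);
        }
    }

    public static class TempDomainHelper
    {
        public static string[] InspectReferences(string assemblyFile)
        {
            var setup = new AppDomainSetup
            {
                // Base the new domain where the add-in lives, so the proxy type resolves.
                ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
            };

            AppDomain temp = AppDomain.CreateDomain("TempAppDomain", null, setup);
            try
            {
                var inspector = (ReferenceInspector)temp.CreateInstanceAndUnwrap(
                    typeof(ReferenceInspector).Assembly.FullName,
                    typeof(ReferenceInspector).FullName);

                return inspector.GetReferencedAssemblyNames(assemblyFile);
            }
            finally
            {
                AppDomain.Unload(temp);   // releases everything loaded in the temporary domain
            }
        }
    }
    ```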

    Read the article

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I have a question about an SQL query. I'm building a prototype webshop in ASP.NET Visual Studio. Now I'm looking for a solution to view my products. I've built a database in MS Access; it consists of multiple tables. The tables which are important for my question are: Product Productfoto Foto Below you'll see the relations between the tables. For me it is important to get three pieces of data: product title, price and image. The product title and the price are in the Product table. The images are in the Foto table. Because a product can have more than one picture, there is an N-M relation between them. So I have to split it up, which I did in the Productfoto table. So the connection between them is: product.artikelnummer -> productfoto.artikelnummer productfoto.foto_id -> foto.foto_id Then I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works: SELECT titel, prijs, foto_id FROM Product INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer But I need another INNER JOIN; how could I create that? I guess something like this (this one will give me an error): SELECT titel, prijs, bestandnaam FROM Product (( INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer ) INNER JOIN foto ON productfoto.foto_id = foto.foto_id) Can anyone help me?
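    Not an authoritative answer, but two things usually trip this up in Access: the Jet SQL dialect wants multi-table joins nested in parentheses, and the second query above spells the column "artikkelnummer". A hedged sketch of how the corrected query might be issued from the ASP.NET side (the connection string and file path are placeholders):

    ```csharp
    using System.Data.OleDb;

    public static class ProductListing
    {
        public static void ListProducts()
        {
            // Placeholder connection string for an Access .accdb file.
            const string connectionString =
                @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\webshop.accdb";

            // Access/Jet syntax: nest the first join in parentheses.
            const string query =
                "SELECT titel, prijs, bestandnaam " +
                "FROM (Product INNER JOIN Productfoto " +
                "      ON Product.artikelnummer = Productfoto.artikelnummer) " +
                "INNER JOIN Foto ON Productfoto.foto_id = Foto.foto_id";

            using (var connection = new OleDbConnection(connectionString))
            using (var command = new OleDbCommand(query, connection))
            {
                connection.Open();
                using (OleDbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // reader["titel"], reader["prijs"], reader["bestandnaam"]
                    }
                }
            }
        }
    }
    ```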

    Read the article

  • Debugging an XBAP application with 64-bit browser

    - by Anne Schuessler
    We have an XBAP application that fails when opened in Internet Explorer 8 64-bit. We only get a pretty generic error which makes it hard to determine where the error is coming from. I'm trying to find a way to debug the application with IE 8 64-bit, but I haven't figured out how to do this. I can't set the 64-bit version as the default browser, and overwriting the browser path in browsers.xml for Visual Studio doesn't work either. It just gets overwritten to point back to the 32-bit IE as soon as I hit F5 to debug. I have figured out how to start the application from Debug with the 64-bit browser by changing the Debug options from "Start browser with URL" to "Start external program" and setting the command line arguments to point to the bin folder. Unfortunately then the XBAP is looking for its config.deploy file, which doesn't seem to be generated during a regular debug run. This doesn't happen when using "Start browser with URL", and the application doesn't seem to care about this file then. Does anybody know why there's a difference between "Start browser with URL" and "Start external program" in the Debug options which might cause this difference in behavior when Debug is started? Also, does anybody know how to successfully debug an XBAP with a 64-bit browser?

    Read the article

  • jquery change event not working with IE6

    - by manivineet
    It is indeed quite unfortunate that my client still uses IE6. I'm using jQuery 1.4.2. The problem is that I open a window using a click event and do some edit operation in the new window. I have a 'change' event attached to the row of a table which has input fields. Now when the window loads for the first time and I make a change in the input for the FIRST time, the change event does not fire. However, on a second try it starts working. I have noticed that if I run a dummy page, i.e. create a new page (I work with Visual Studio) and run that page individually, the 'change' event works just fine. What is going on? And what can I do, besides going back to 1.3.2 (which, by the way, doesn't work either, but I haven't fully tested it yet)? <!--HTML--> <table id="tbReadData"> <tr class="nenDataRow" id="nenDr2"> <td> <input type="text" class="nenMeterRegister" value="1234" /> </td> </tr> </table> <script type="text/javascript"> $(document).ready(function(){ $('#tbReadData').find('tr').change(function() { alert('this works'); }); }); </script>

    Read the article

  • Gacutil.exe successfully adds assembly, but assembly missing from GAC. Why?

    - by Ben McCormack
    I'm running GacUtil.exe from within Visual Studio Command Prompt 2010 to register a dll (CatalogPromotion.dll) to the GAC. After running the utility, it says Assembly successfully added to the cache, and running gacutil /l CatalogPromotionDll shows that the GAC contains the assembly, but I can't see the assembly when I navigate to C:\WINDOWS\assembly from Windows Explorer. Why can't I see the assembly in WINDOWS\assembly from Windows Explorer but I can see it using gacutil.exe? Background: Here's what I typed into the command prompt for VS Tools: C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /i CatalogPromotionDll.dll Microsoft (R) .NET Global Assembly Cache Utility. Version 4.0.30319.1 Copyright (c) Microsoft Corporation. All rights reserved. Assembly successfully added to the cache C:\_Dev Projects\VS Projects\bmccormack\CatalogPromotion\CatalogPromotionDll\bin\Debug>gacutil /l CatalogPromotionDll Microsoft (R) .NET Global Assembly Cache Utility. Version 4.0.30319.1 Copyright (c) Microsoft Corporation. All rights reserved. The Global Assembly Cache contains the following assemblies: CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9188a175f199de4a, processorArchitecture=MSIL Number of items = 1 However, the assembly doesn't show up in C:\WINDOWS\assembly.
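    A likely explanation worth verifying: gacutil 4.0.30319 is the .NET 4 tool, and the .NET 4 GAC lives under C:\Windows\Microsoft.NET\assembly, while the C:\WINDOWS\assembly view in Explorer only shows the older 2.0/3.5 cache. A quick sanity check from code is to load the assembly by its full name, with no path, and see where it resolves from:

    ```csharp
    using System;
    using System.Reflection;

    class GacCheck
    {
        static void Main()
        {
            // If this succeeds without any file path, the assembly really is
            // resolvable from the GAC of the runtime this program targets.
            Assembly assembly = Assembly.Load(
                "CatalogPromotionDll, Version=1.0.0.0, Culture=neutral, " +
                "PublicKeyToken=9188a175f199de4a");

            Console.WriteLine(assembly.GlobalAssemblyCache);  // True when loaded from the GAC
            Console.WriteLine(assembly.Location);             // the physical GAC path it came from
        }
    }
    ```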

    Read the article

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2: template <typename T> struct dummy { static T foo(void) { return T(); } }; int main(void) { typedef dummy<bool> dummy_type; auto x = [](void){ bool b = dummy_type::foo(); }; // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works } The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases: // crashes the compiler, credit to Tarydon int main(void) { struct dummy {}; auto x = [](void){ dummy d; }; } // works as expected int main(void) { typedef int integer; auto x = [](void){ integer i = 0; }; } I don't have g++ 4.5 available to test it, right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. Though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using the scope resolution on locally defined scopes. (Spotted by dvide.) And the crash bug has to do with... who knows. :) Update According to the bug reports, they have both been fixed for the next release of Visual Studio 2010.

    Read the article

  • Push TFS 2008 code to remote VSS over VPN?

    - by drovani
    We have a local Team Foundation Server 2008 on which we keep our code under version control. However, we also have a paranoid client with their own Visual Source Safe installation who wants us to keep a running copy of the code on their server as well. As such, I'm hoping there is a way I can just do a nightly push from our TFS repository to their VSS repository. I'm not concerned about keeping each changeset on TFS as a different changeset on the VSS, just a once-nightly push that creates a new changeset on the VSS and uploads the latest changeset from TFS. I guess the first part is whether it is even possible for TFS to push an update to VSS. I've noticed that most replies to this question have been something to the tune of "don't do it", but I can't find anything that specifically states that it cannot be done. The second part would then be automating the process by having the TFS server connect to the client's VPN, then push the code changes. I have full control over the TFS server and I can customize the VSS install if there are settings that need changing, but I'm limited in what I can do about firewall settings or server-specific settings on the client's VSS server.

    Read the article

  • Unhandled Exception error message

    - by Joshua Green
    Does anyone know why including a term such as: t = PL_new_term_ref(); would cause an Unhandled Exception error message: 0xC0000005: Access violation reading location 0x0000000c. (Visual Studio 2008) I have a header file: class UserTaskProlog : public ArAction { public: UserTaskProlog( const char* name = " sth " ); ~UserTaskProlog( ); AREXPORT virtual ArActionDesired *fire( ArActionDesired currentDesired ); private: term_t t; }; and a cpp file: UserTaskProlog::UserTaskProlog( const char* name ) : ArAction( name, " sth " ) { char** argv; argv[ 0 ] = "libpl.dll"; PL_initialise( 1, argv ); PlCall( "consult( 'myProg.pl' )" ); } UserTaskProlog::~UserTaskProlog( ) { } ArActionDesired *UserTaskProlog::fire( ArActionDesired currentDesired ) { cout << " something " << endl; t = PL_new_term_ref( ); } Without t=PL_new_term_ref() everything works fine, but when I start adding my Prolog code (declarations first, such as t=PL_new_term_ref), I get this Access Violation error message. I'd appreciate any help. Thanks,

    Read the article

  • How do I put an ASP.NET website project and class library projects in one .sln file on Subversion

    - by JustinP8
    My company has several class libraries we use in multiple website projects (not web application projects). Website projects don't have .sln files, but I'm sure I've read in my past research that you can make a blank solution and put your website and class library projects in it. After answers to my previous questions, this is the direction that I'm going (based slightly on http://amadiere.com/blog/2009/06/multiple-subversion-projects-in-one-visual-studio-solution-using-svnexternals/): /websites /website1 /trunk /website1 /libraries /library1 /trunk /library1 /library2 /trunk /library2 /etc... Then I planned on using svn:externals to copy /library1, /library2, and so on into the working_copy/websites/website1/ folder. I want my team members to be able to check out the /trunk folder for website1 and get a .sln file, /library1 external, /library2 external, etc. I want that .sln file to contain the website1 website project, and all of the library external projects. Hopefully that would look something like: /working_copy /websites /website1 /trunk /website1 /library1 (svn:external of libraries/library1/trunk/library1) /library2 (svn:external of libraries/library2/trunk/library2) /etc. website1.sln So, at the end of all of this, the goal is that my teammates check out the trunk, open the solution, and everyone has the exact same solution. When we commit, everything is committed appropriately to Subversion (the website code, and the libraries are committed to their appropriate place on the repo). How have others solved these issues? How can I make a .sln file that my team members and I can share in this manner?

    Read the article

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa) I get this message when I run queries against my database. SELECT * FROM [Testing].[dbo].[AuditingReport] gives: Msg 18456, Level 14, State 1, Line 1 Login failed for user 'auditor'. I'm logged into the server from SQL Server Management Studio as 'auditor' and I don't see anything in the error log about the login failure. I've already run: Use Testing; Grant All to auditor; Go And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to each individual user.

    Read the article

  • Avoid existing files being overwritten when a newer version is installed

    - by constant learner
    Hello, I have a VS2008 Windows application project (WinProject) which is deployed by an installation project (InstallationProject), which in turn has the property RemovePreviousVersions set to True. In my app, for each configuration made by a user, the application writes the configuration into an XML file (stored in the C:\Application Name\Files\ folder) which also includes the path where the config was saved. Now when I build new versions of the installer, this folder and the files are overwritten, since the flag AlwaysCreate is set to True. My question is: how can I keep these older files from being overwritten and at the same time still get the updated files from the installer? Ex: Contents of the file <PriceFiles> <Name>arr</Name> <Path>C:\NewTool\arr.xml</Path> <UserDefined>true</UserDefined> </PriceFiles> <ReferenceProjects> <Name>studio</Name> <Path>C:\NewTool\ReferenceProjects\6cd3a9e9-ad65-475e-953b-128915a496cd.xml</Path> <UserDefined>true</UserDefined> <CreatedBy>Admin</CreatedBy> </ReferenceProjects> Thanks in advance
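    One workaround, offered only as a sketch and assuming the application itself can choose where it writes these files: keep user-generated configuration outside the installer-managed folder entirely, e.g. under the user's ApplicationData folder, so neither AlwaysCreate nor RemovePreviousVersions ever touches it. The "NewTool" folder name below is just an example.

    ```csharp
    using System;
    using System.IO;

    public static class UserConfigLocation
    {
        // Returns a per-user path such as C:\Users\<name>\AppData\Roaming\NewTool\arr.xml
        // that installer upgrades never overwrite.
        public static string GetPath(string fileName)
        {
            string folder = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "NewTool");

            Directory.CreateDirectory(folder);   // no-op if the folder already exists
            return Path.Combine(folder, fileName);
        }
    }
    ```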

    Read the article

  • Pros and Cons of using SqlCommand Prepare in C#?

    - by MadBoy
    When I was reading books to learn C# (they might have been some old Visual Studio 2005 books) I encountered advice to always use SqlCommand.Prepare every time I execute a SQL call (whether it's a SELECT/UPDATE or INSERT on SQL Server 2005/2008) and pass parameters to it. But is it really so? Should it be done every time? Or just sometimes? Does it matter whether it's one parameter being passed or five or twenty? What boost should it give, if any? Would it be noticeable at all (I've been using SqlCommand.Prepare here and skipped it there and never had any problems or noticeable differences)? For the sake of the question this is my usual code, but this is more of a general question. public static decimal pobierzBenchmarkKolejny(string varPortfelID, DateTime data, decimal varBenchmarkPoprzedni, decimal varStopaOdniesienia) { const string preparedCommand = @"SELECT [dbo].[ufn_BenchmarkKolejny](@varPortfelID, @data, @varBenchmarkPoprzedni, @varStopaOdniesienia) AS 'Benchmark'"; using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetailsDZP)) //if (varConnection != null) { using (var sqlQuery = new SqlCommand(preparedCommand, varConnection)) { sqlQuery.Prepare(); sqlQuery.Parameters.AddWithValue("@varPortfelID", varPortfelID); sqlQuery.Parameters.AddWithValue("@varStopaOdniesienia", varStopaOdniesienia); sqlQuery.Parameters.AddWithValue("@data", data); sqlQuery.Parameters.AddWithValue("@varBenchmarkPoprzedni", varBenchmarkPoprzedni); using (var sqlQueryResult = sqlQuery.ExecuteReader()) if (sqlQueryResult != null) { while (sqlQueryResult.Read()) { //sqlQueryResult["Benchmark"]; } } } }
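    For what it's worth, the usual guidance is that Prepare() only pays off when the same command object is executed repeatedly on one open connection, and that it should be called after the parameters have been added with explicit types (and sizes for variable-length types). A hedged sketch of that ordering, reusing the call above (the NVARCHAR(50) size and the non-NULL result are assumptions about the function):

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class Benchmarks
    {
        public static decimal GetBenchmark(SqlConnection connection, string portfelId,
                                           DateTime data, decimal poprzedni, decimal stopa)
        {
            const string sql =
                "SELECT dbo.ufn_BenchmarkKolejny(@varPortfelID, @data, " +
                "@varBenchmarkPoprzedni, @varStopaOdniesienia)";

            using (var command = new SqlCommand(sql, connection))
            {
                // Describe every parameter fully before preparing the command.
                command.Parameters.Add("@varPortfelID", SqlDbType.NVarChar, 50).Value = portfelId;
                command.Parameters.Add("@data", SqlDbType.SmallDateTime).Value = data;
                command.Parameters.Add("@varBenchmarkPoprzedni", SqlDbType.Decimal).Value = poprzedni;
                command.Parameters.Add("@varStopaOdniesienia", SqlDbType.Decimal).Value = stopa;

                command.Prepare();   // only now, once the parameters are fully typed

                return (decimal)command.ExecuteScalar();   // assumes the function never returns NULL
            }
        }
    }
    ```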

    Read the article

  • ASP.NET page with base class with dynamic master page not firing events

    - by Kangkan
    Hi guys! I am feeling that I have gone terribly wrong somewhere. I was working on a small ASP.NET app. I have some dynamic themes in the \theme folder and have implemented a page base class to load the master page on the fly. The master has a ContentPlaceHolder like: <asp:ContentPlaceHolder ID="cphBody" runat="server" /> Now I am adding pages that are derived from my base class and adding the form elements. I know Visual Studio has problems showing the page in design mode. I have a dropdown box and wish to add the onselectedindexchange event, but it is not working. The page is like this: <%@ Page Language="C#" AutoEventWireup="true" Inherits="trigon.web.Pages.MIS.JobStatus" Title="Job Status" AspCompat="true" CodeBehind="JobStatus.aspx.cs" %> <asp:Content ID="Content1" ContentPlaceHolderID="cphBody" runat="Server"> <div id="divError" runat="server" /> <asp:DropDownList runat="server" id="jobType" onselectedindexchange="On_jobTypeSelection_Change"></asp:DropDownList> </asp:Content> I have also tried adding the event in the code-behind like: protected void Page_Load(object sender, EventArgs e) { jobType.SelectedIndexChanged += new System.EventHandler(this.On_jobTypeSelection_Change); if (!IsPostBack) { JobStatus_DA da = new JobStatus_DA(); jobType.DataSource = da.getJobTypes(); jobType.DataBind(); } } protected void On_jobTypeSelection_Change(Object sender, EventArgs e) { //do something here } Can anybody help? Regards,
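    Two things stand out, offered as a guess rather than a verified fix: the markup attribute has to be spelled OnSelectedIndexChanged (the attribute used above won't map to the event), and a DropDownList only raises that event on a postback, so AutoPostBack must be true. A minimal code-behind sketch of the same wiring (in reality the class would derive from your page base class):

    ```csharp
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class JobStatus : Page
    {
        protected DropDownList jobType;   // normally declared in the designer file

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);

            jobType.AutoPostBack = true;  // without a postback the event never fires
            jobType.SelectedIndexChanged += On_jobTypeSelection_Change;
        }

        protected void On_jobTypeSelection_Change(object sender, EventArgs e)
        {
            // react to the new selection here
        }
    }
    ```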

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check if they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance, SameSign(1.0, 2.0) should return true SameSign(-1.0, -2.0) should return true SameSign(-1.0, 2.0) should return false SameSign(0.0, 1.0) should return true SameSign(0.0, -1.0) should return true SameSign(-0.0, 1.0) should return true SameSign(-0.0, -1.0) should return true A naive but correct implementation of SameSign in C++ would be: bool SameSign(float a, float b) { if (fabs(a) == 0.0f || fabs(b) == 0.0f) return true; return (a >= 0.0f) == (b >= 0.0f); } Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008): bool SameSign(float a, float b) { int ia = binary_cast<int>(a); int ib = binary_cast<int>(b); int az = (ia & 0x7FFFFFFF) == 0; int bz = (ib & 0x7FFFFFFF) == 0; int ab = (ia ^ ib) >= 0; return (az | bz | ab) != 0; } with binary_cast defined as follows: template <typename Target, typename Source> inline Target binary_cast(Source s) { union { Source m_source; Target m_target; } u; u.m_source = s; return u.m_target; } I'm looking for two things: A faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics. An efficient extension of SameSign to three values.

    Read the article

  • How do I install websocket module for Node.js on Debian VPS?

    - by Ollie Shaw
    I currently am renting a VPS from Dreamhost which runs Debian. I am still learning command line on this OS, but fast! I have successfully installed Node.js, now I want to install the websocket module found here: https://github.com/Worlize/WebSocket-Node From the root user, I have run the following command: npm install websocket The error thrown is: [websocket v1.0.7] Native code compile failed!! On Windows, native extensions require Visual Studio and Python. On Unix, native extensions require Python, make and a C++ compiler. Start npm with --websocket:verbose to show compilation output (if any). What commands should I issue to install this websocket module and its requirements? Thanks very much! Edit: When I run sudo apt-get install gcc make I get this message: Reading package lists... Done Building dependency tree Reading state information... Done gcc is already the newest version. gcc set to manually installed. make is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 44 not upgraded. And the same error when trying to install WebSocket.

    Read the article

  • implementing stretchable dialog borders in iphone sdk

    - by Joey
    Hi, I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better, more conventional name for this sort of thing. If there is and someone would edit the title, that'd be great. Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wildly disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions. 1) Should I make a containing view object that basically overloads its drawRect method and draws the images where they should be at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, like in an animation. 1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()? 2) Is it generally better to create a) a 3 x 3 image where the segments are in their appropriate 1x1 grid of the image? If so, is it simple to draw from a portion of this image onto my target view in drawRect (if the former assumption is correct that I should use drawRect)? b) The pieces separately in 8 different files?

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's pc, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our user's workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array: ** When Locked: Device Name : DISPLAY Primary : True Bits Per Pixel : 0 Bounds : {X=-1280,Y=0,Width=2560,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=1024} ** When Unlocked: Device Name : \\.\DISPLAY1 Primary : True Bits Per Pixel : 32 Bounds : {X=0,Y=0,Width=1280,Height=1024} Working Area : {X=0,Y=0,Width=1280,Height=994} Device Name : \\.\DISPLAY2 Primary : False Bits Per Pixel : 32 Bounds : {X=-1280,Y=0,Width=1280,Height=1024} Working Area : {X=-1280,Y=0,Width=1280,Height=964}
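    If the spin-and-wait loop is the part that feels wrong, one event-driven alternative (a sketch, not a drop-in) is to subscribe to the session-switch notification and defer building the AppBar until the desktop is unlocked, at which point Screen.AllScreens reports real metrics again:

    ```csharp
    using System;
    using Microsoft.Win32;

    public class AppBarLauncher
    {
        public void Start()
        {
            // Windows raises this when the session is locked/unlocked.
            SystemEvents.SessionSwitch += OnSessionSwitch;
        }

        private void OnSessionSwitch(object sender, SessionSwitchEventArgs e)
        {
            if (e.Reason == SessionSwitchReason.SessionUnlock)
            {
                // Screen.AllScreens has valid bounds again here, so the AppBar
                // can be (re)built with correct display metrics.
                CreateAppBar();
            }
        }

        private void CreateAppBar()
        {
            // existing AppBar construction code goes here
        }
    }
    ```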

    Read the article

  • INSERT stored procedure does not work?

    - by vikitor
    Hello, I'm trying to make an insertion from one database called suspension into the table called Notification in the Animals database. My stored procedure is this: ALTER PROCEDURE [dbo].[spCreateNotification] -- Add the parameters for the stored procedure here @notRecID int, @notName nvarchar(50), @notRecStatus nvarchar(1), @notAdded smalldatetime, @notByWho int AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here INSERT INTO Animals.dbo.Notification values (@notRecID, @notName, @notRecStatus, null, @notAdded, @notByWho); END The NULL insert is there to fill one column that otherwise would not get a value. I've tried different approaches, such as listing the column names after the table name and then only supplying the fields I have in the VALUES clause. I know it is not a problem with the stored procedure, because I executed it from SQL Server Management Studio and it works when I enter the parameters. Then I guess the problem must be in the repository where I call the stored procedure: public void createNotification(Notification not) { try { DB.spCreateNotification(not.NotRecID, not.NotName, not.NotRecStatus, (DateTime)not.NotAdded, (int)not.NotByWho); } catch { return; } } It does not record the value in the database. I've been debugging and going mad over this, because it works when I execute it manually, but not when I automate the process in my application. Does anyone see anything wrong with my code? Thank you
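    Hard to diagnose from here, but the empty catch block in createNotification is very likely hiding the real error (permissions, a connection string pointing at a different server or database, etc.). A hedged sketch that calls the procedure with plain ADO.NET and lets the exception surface, so the actual failure becomes visible:

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;

    public class NotificationRepository
    {
        public void CreateNotification(Notification not, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.spCreateNotification", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@notRecID", not.NotRecID);
                command.Parameters.AddWithValue("@notName", not.NotName);
                command.Parameters.AddWithValue("@notRecStatus", not.NotRecStatus);
                command.Parameters.AddWithValue("@notAdded", (DateTime)not.NotAdded);
                command.Parameters.AddWithValue("@notByWho", (int)not.NotByWho);

                connection.Open();
                command.ExecuteNonQuery();   // let exceptions surface so the cause is visible
            }
        }
    }
    ```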

    Read the article

  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several table and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns. Question: What is the best way to programatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better. Criteria: Here are some of my expectations and self-imposed rules: Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed. Existing data in the table must be preserved, so the table cannot simply be dropped and recreated. In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values. Example: Here is a sample table (because visual examples help!): id sensor_name sensor_status x1 x2 x3 x4 1 na019 OK 0.01 0.21 1.41 1.22 Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
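    The iterate-and-ALTER plan described above is workable. A hedged sketch of what one step of it could look like (table, column, and type names are examples), checking INFORMATION_SCHEMA and adding any missing column as nullable, in line with the no-drop / allow-NULL rules stated above:

    ```csharp
    using System.Data.SqlClient;

    public static class SchemaUpgrader
    {
        // Assumes the connection is already open.
        public static void EnsureColumn(SqlConnection connection, string table,
                                        string column, string sqlType)
        {
            const string existsSql =
                "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
                "WHERE TABLE_NAME = @table AND COLUMN_NAME = @column";

            using (var check = new SqlCommand(existsSql, connection))
            {
                check.Parameters.AddWithValue("@table", table);
                check.Parameters.AddWithValue("@column", column);

                if ((int)check.ExecuteScalar() > 0)
                    return;   // column already exists, nothing to do
            }

            // Identifiers cannot be parameterized; they come from the application's
            // own schema definition, not from user input.
            string alterSql = string.Format(
                "ALTER TABLE [{0}] ADD [{1}] {2} NULL", table, column, sqlType);

            using (var alter = new SqlCommand(alterSql, connection))
                alter.ExecuteNonQuery();
        }
    }

    // e.g. SchemaUpgrader.EnsureColumn(conn, "SensorReadings", "x5", "float");
    ```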

    Read the article

  • How do I get the screen to pause?

    - by Dakota
    So I'm learning C++ and I was given this example and wanted to run it, but I cannot get the console window to stay up unless I change the code. How do I get Microsoft Visual Studio 2010 to keep the screen up when it gets to the end of the program after I release it? #include <iostream> using namespace std; int area(int length, int width); /* function declaration */ /* MAIN PROGRAM: */ int main() { int this_length, this_width; cout << "Enter the length: "; /* <--- line 9 */ cin >> this_length; cout << "Enter the width: "; cin >> this_width; cout << "\n"; /* <--- line 13 */ cout << "The area of a " << this_length << "x" << this_width; cout << " rectangle is " << area(this_length, this_width); return 0; } /* END OF MAIN PROGRAM */ /* FUNCTION TO CALCULATE AREA: */ int area(int length, int width) /* start of function definition */ { int number; number = length * width; return number; } /* end of function definition */ /* END OF FUNCTION */

    Read the article
