Search Results

Search found 23539 results on 942 pages for 'invoke command'.

  • Git over SSH Server in Windows, cannot find shared libraries.

    - by Roy Marco Aruta
    I was trying to set up an SSH server to host my Git repository on my local area network. I followed this tutorial by Tim Davis, hoping that I would be able to make a secured Git repository. I tested my connection using PuTTY and it was successful. My only problem was that I could not run the "git" command in the console. When I tried cloning my repository, this was the error that was output:

        /usr/bin/git-upload-pack.exe: error while loading shared libraries: libiconv2.dll: cannot open shared object file: No such file or directory

    Also, when I ran the "git" command in the PuTTY bash session connected to the SSH server, this was the error I encountered:

        /usr/bin/git.exe: error while loading shared libraries: pthreadGC2.dll: cannot open shared object file: No such file or directory

    It seems that my whole problem is about the missing libraries, but I don't know how to solve it. I am using Windows 7 as the operating system. Thanks
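    The errors suggest that the DLLs shipped alongside the Git binaries are not visible in the non-interactive SSH environment. A hedged sketch of one possible fix, assuming an msysgit-style install whose support libraries live under the Git bin directory (the source paths below are assumptions; adjust to the actual install):

        # run from the SSH (Cygwin/MSYS) shell; source paths are guesses
        cp "/c/Program Files/Git/bin/libiconv2.dll" /usr/bin/
        cp "/c/Program Files/Git/bin/pthreadGC2.dll" /usr/bin/

    Alternatively, adding that bin directory to the PATH that the SSH daemon hands to sessions should achieve the same result without copying files around.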

  • How to Integrate ILMerge into C#/VB.NET (MSBuild) Projects to Merge Assemblies?

    - by AMissico
    I want to merge one .NET DLL assembly and one C# class library project, both referenced by a VB.NET console application project, into one command-line console executable. I can do this with ILMerge from the command line, but I want to integrate this merging of referenced assemblies and projects into the Visual Studio project. From my reading, I understand that I can do this through an MSBuild Task or a Target and just add it to a C#/VB.NET project file, but I can find no specific example, since MSBuild is a large topic. How do I integrate ILMerge into a Visual Studio (C#/VB.NET) project, which is just an MSBuild project, to merge all referenced assemblies (copy-local=true) into one assembly? How does this tie into a possible ILMerge.Targets file?
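    A minimal sketch of such a target, written directly into the project file; ILMergeExePath is a hypothetical property that must point at the ILMerge install, and the output location is illustrative only:

        <!-- merge all copy-local references after a Release build (a sketch) -->
        <Target Name="AfterBuild" Condition="'$(Configuration)' == 'Release'">
          <ItemGroup>
            <MergeAssemblies Include="@(ReferenceCopyLocalPaths)"
                             Condition="'%(Extension)' == '.dll'" />
          </ItemGroup>
          <MakeDir Directories="$(OutDir)Merged" />
          <Exec Command="&quot;$(ILMergeExePath)&quot; /out:&quot;$(OutDir)Merged\$(TargetFileName)&quot; &quot;$(TargetPath)&quot; @(MergeAssemblies->'&quot;%(FullPath)&quot;', ' ')" />
        </Target>

    An ILMerge.Targets file would simply hold this same Target so it can be shared across projects, pulled into each one with an <Import Project="ILMerge.Targets" /> element.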

  • Problems upgrading VB.Net 2008 project into VS2010

    - by Brett Rigby
    Hi there, I have been upgrading several different VS2008 projects to VS2010 and have found a problem with VB.NET projects when they are converted. Once converted, the .vbproj files have changed from this in VS2008:

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <DebugSymbols>true</DebugSymbols>
          <DebugType>full</DebugType>
          <DefineDebug>true</DefineDebug>
          <DefineTrace>true</DefineTrace>
          <OutputPath>bin\Debug\</OutputPath>
          <DocumentationFile>CustomerManager.xml</DocumentationFile>
          <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors>
        </PropertyGroup>

    to this in VS2010:

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <DebugSymbols>true</DebugSymbols>
          <DebugType>full</DebugType>
          <DefineDebug>true</DefineDebug>
          <DefineTrace>true</DefineTrace>
          <OutputPath>bin\Debug\</OutputPath>
          <DocumentationFile>CustomerManager.xml</DocumentationFile>
          <NoWarn>42353,42354,42355</NoWarn>
          <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors>
        </PropertyGroup>

    The main difference is that in the VS2010 version, the <NoWarn>42353,42354,42355</NoWarn> element has been added. Inside the IDE, this manifests itself in the Project Properties | Compile section as the setting "Function returning intrinsic value type without return value" = None. This isn't a problem when building the code inside Visual Studio 2010, but when trying to build the code through our continuous integration scripts, it fails with the following errors:

        [msbuild] vbc : Command line error BC2026: warning number '42353' for the option 'nowarn' is either not configurable or not valid
        [msbuild] vbc : Command line error BC2026: warning number '42354' for the option 'nowarn' is either not configurable or not valid
        [msbuild] vbc : Command line error BC2026: warning number '42355' for the option 'nowarn' is either not configurable or not valid

    I couldn't find anything on Google for these messages, which is strange, as I am trying to find out why this is happening. Any suggestions as to why Visual Studio 2010's conversion wizard is doing this?
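    A hedged guess at the cause: warning IDs 42353-42355 only exist in the VB 10 compiler, so a CI script that still invokes the .NET 3.5 toolchain hands them to an older vbc that does not recognise them. If that is the case, pointing the build at the 4.0 MSBuild should clear the errors (solution name hypothetical):

        %WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe MySolution.sln

    Deleting the <NoWarn> element from the converted .vbproj is the other obvious workaround, at the price of seeing those warnings again.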

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through:
    - What is Cloud Based Load Testing?
    - How have I been using this feature? - Success story!
    - Where can you find more resources on this feature?

    What is Cloud Based Load Testing?
    It goes without saying that performance testing your application not only gives you the confidence that it will work under heavy levels of stress, but also lets you test how scalable its architecture is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to load testing and performance testing adoption are:
    - the high infrastructure and administration cost that comes with this phase of testing
    - the time taken to procure and set up the test infrastructure
    - finding a use for this infrastructure investment after completion of testing

    Is cloud the answer?
    - 100% Visual Studio compatible
    - scalable and realistic
    - start testing in < 2 minutes
    - intuitive
    - pay only for what you need
    - use existing on-premise tests on the cloud

    There are a lot of vendors out there offering cloud based load testing; to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz and others. The question you may want to ask is why you should go with Microsoft's cloud based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If you have your assets already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost from the generated traffic, since the traffic comes from the same data centre. The licensing or pricing information on Microsoft's cloud based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition.

    The only additional configuration required for running load tests on Microsoft's cloud based load test service is to select the test run location "Run tests using Visual Studio Team Foundation Service".

    How have I been using Microsoft's Cloud Based Load Test Service?
    I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service was an insurance quote generation engine. This insurance quote generation engine is:
    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)

    There was no way I could simulate that kind of load from on-premise without standing up additional hardware, but Microsoft's cloud based load test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load (approach to devising a load pattern): my approach has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. For example, generating 1 quote from the quote engine takes 0.5 seconds, i.e. 1 user generates 2 quotes per second. Now if you do the math:
    - quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800
    - quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, then you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance testing: I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test Service does not support metrics from the machine under test; I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold testing: to figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing the user load with different variations of incremental user load per minute until the application crashed out and forced an IIS reset.

    Testing for latency: currently the Microsoft Load Test Service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on Microsoft Cloud Based Load Test Service
    A few important links to get you started:
    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)

    There are a few limits on the usage of the Microsoft cloud based load test service that you can read about here. If you have any feedback on the service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

  • Unable to find standard libraries when compiling Objective-C using GNUstep on Windows

    - by Jason Roberts
    I just installed GNUstep on my Windows XP machine and I'm attempting to compile the following Objective-C Hello World program from the command line:

        #import <Foundation/Foundation.h>

        int main(int argc, const char *argv[])
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSLog(@"Hello world\n");
            [pool drain];
            return 0;
        }

    When I try to compile the program from the command line like so:

        gcc hello.m -o hello

    I end up getting the following error:

        hello.m:1:34: Foundation/Foundation.h: No such file or directory

    Is there something I need to do in order to inform the compiler of where the standard Objective-C libraries are located?
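    A plain gcc invocation knows nothing about the GNUstep headers and libraries, so the flags have to be supplied. A sketch, assuming the GNUstep command shell is in use and the gnustep-config helper is present in the install:

        gcc `gnustep-config --objc-flags` hello.m -o hello -lgnustep-base -lobjc

    On installs without gnustep-config, sourcing GNUstep.sh from the GNUstep System directory first, then passing the include and library paths manually, is the usual fallback.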

  • Error reading in date using Zoo in R

    - by SASnewby
    I am trying to use the zoo package to read in daily observations which do not occur on every day. The date format, in the first column of the dataset, looks like this: 25-May-07. The command I use is:

        z <- read.zoo("multiplier7.csv", sep = ";", header = TRUE, format="%d-%b-%y")

    I get this error:

        Error in strptime(x, format, tz = "GMT") : input string is too long

    My questions are:
    1. Is zoo the right package to use if I want to run an lm regression with a spline time dummy? The bs command does not work because I need to convert the data into a time series.
    2. Why is this error occurring, and how do I fix it?
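    A hedged guess at question 2: the file is a .csv, so if its fields are actually comma-separated, sep = ";" makes read.zoo treat each whole line as a single field, and strptime is then asked to parse the entire line as a date, which would explain "input string is too long". A sketch of the corrected call, plus an illustrative spline regression on the resulting series (the column layout and df value are hypothetical):

        z <- read.zoo("multiplier7.csv", sep = ",", header = TRUE, format = "%d-%b-%y")

        # bs() works on a numeric time index extracted from the zoo object
        library(splines)
        fit <- lm(coredata(z) ~ bs(as.numeric(index(z)), df = 4))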

  • rails paperclip and passenger `is not recognized by the 'identify' command`

    - by Joseph Silvashy
    When I upload a photo, my model fails validation; in fact, even without any validations, I'm returned this error:

        /tmp/stream20100103-13830-ywmerx-0 is not recognized by the 'identify' command.

    I'm confident this is not related to ImageMagick itself, because I've removed any image processing from the uploading, and I've also tried uploading different MIME types, like .txt files and such. Additionally, I found something that may work: a blog post claims the fix is putting the following in my environment (in this case development.rb):

        Paperclip.options[:command_path] = "/opt/local/bin"
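    That blog suggestion matches the usual diagnosis: Paperclip shells out to ImageMagick's identify, and under Passenger the PATH often lacks the directory it lives in. A sketch for locating the right value (the output below is only an example; the path varies by install):

        $ which identify
        /opt/local/bin/identify

    Whatever directory that prints is what command_path should point at; after changing it, Passenger needs a restart to pick the setting up.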

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

        DropFolder/
            Foo/
                foo.exe
            Bar/
                bar.dll
            Baz/
                baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it to MSBuild as a command-line argument (/p:CustomizableOutDir=true), but it seems MSBuild just ignores it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command-line args to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or if there's a better way to achieve this?

  • Improving WIF’s Claims-based Authorization - Part 2

    - by Your DisplayName here!
    In the last post I showed you how to take control over the invocation of ClaimsAuthorizationManager. Then you have complete freedom over the claim types, the number of claims and the values. In addition, I added two attributes that invoke the authorization manager using an "application claim type". This way it is very easy to distinguish between authorization calls that originate from WIF's per-request authorization and the ones from "within" your application. The attribute comes in two flavours: a CAS attribute (invoked by the CLR) and an ASP.NET MVC attribute (for MVC controllers, invoked by the MVC plumbing). Both also feature static methods to easily call them using the application claim types. The CAS attribute is part of Thinktecture.IdentityModel on Codeplex (or via NuGet: Install-Package Thinktecture.IdentityModel), if you really want to see that code ;) There is also a sample included in the Codeplex download. The MVC attribute is currently used in Thinktecture.IdentityServer, and I don't currently plan to make it part of the library project since I don't want to add a dependency on MVC for now. You can find the code below, and I will write about its usage in a follow-up post.

        public class ClaimsAuthorize : AuthorizeAttribute
        {
            private string _resource;
            private string _action;
            private string[] _additionalResources;

            /// <summary>
            /// Default action claim type.
            /// </summary>
            public const string ActionType = "http://application/claims/authorization/action";

            /// <summary>
            /// Default resource claim type
            /// </summary>
            public const string ResourceType = "http://application/claims/authorization/resource";

            /// <summary>
            /// Additional resource claim type
            /// </summary>
            public const string AdditionalResourceType = "http://application/claims/authorization/additionalresource";

            public ClaimsAuthorize(string action, string resource, params string[] additionalResources)
            {
                _action = action;
                _resource = resource;
                _additionalResources = additionalResources;
            }

            public static bool CheckAccess(
                string action, string resource, params string[] additionalResources)
            {
                return CheckAccess(
                    Thread.CurrentPrincipal as IClaimsPrincipal,
                    action,
                    resource,
                    additionalResources);
            }

            public static bool CheckAccess(
                IClaimsPrincipal principal, string action, string resource, params string[] additionalResources)
            {
                var context = CreateAuthorizationContext(
                    principal,
                    action,
                    resource,
                    additionalResources);

                return ClaimsAuthorization.CheckAccess(context);
            }

            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                return CheckAccess(_action, _resource, _additionalResources);
            }

            private static WIF.AuthorizationContext CreateAuthorizationContext(
                IClaimsPrincipal principal, string action, string resource, params string[] additionalResources)
            {
                var actionClaims = new Collection<Claim>
                {
                    new Claim(ActionType, action)
                };

                var resourceClaims = new Collection<Claim>
                {
                    new Claim(ResourceType, resource)
                };

                if (additionalResources != null && additionalResources.Length > 0)
                {
                    additionalResources.ToList().ForEach(ar => resourceClaims.Add(
                        new Claim(AdditionalResourceType, ar)));
                }

                return new WIF.AuthorizationContext(
                    principal,
                    resourceClaims,
                    actionClaims);
            }
        }

  • install play-framework in Ubuntu 9.10

    - by Shekhar
    I downloaded the zipped framework from the playframework.org website and unzipped it to a location. I added that location to the PATH environment variable in my .bashrc. But still, the play command is not accessible from anywhere, and even in the framework's install directory, the play file does not run by itself: I have to prefix python before any play command to run it. Am I making a mistake somewhere? Please help me.
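    Needing the python prefix suggests the play script lost its execute bit when it was unzipped. A sketch of the usual repair, with /opt/play standing in as a hypothetical install path:

        chmod +x /opt/play/play
        export PATH=$PATH:/opt/play    # goes in ~/.bashrc; re-source it afterwards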

  • F# PowerPack 2.0.0.0 issue: The task ..."…\fslex.exe" is invalid

    - by Roman Kuzmin
    I have upgraded F# PowerPack today to the latest 2.0.0.0 and tried to rebuild the MiniCalc sample from here: http://achrissmith.blogspot.com/2010/04/fslex-and-fsyacc-examples-updated.html. If I build it in VS 2010, it fails with the message:

        C:\Program Files\MSBuild\FSharp\1.0\FSharp.PowerPack.targets(32,3): error MSB6004: The specified task executable location "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\fslex.exe" is invalid.

    If I build it from the command line with MSBuild, it complains about a missing C:\Windows\Microsoft.NET\Framework\v4.0.30319\fslex.exe. The problem is kind of "fixed" if I copy fslex and fsyacc to both directories; after that I can build from the command line and from VS 2010. But that does not look like the right way to solve the problem. What is the right way?

    EDIT: The same issue is true for the PowerPack sample from sources: May2010\workyard\tests\LexAndYaccMiniProject. Now (after the trick I have done) it is built fine, too.

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR stored procedure

    - by user193655
    I ask this question as a follow-up of this question. A solution that uses bcp and xp_cmdshell, which is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry, I cannot post a second hyperlink since my reputation is less than 10). I am new to C# (I am a Delphi developer), but I was able to create a simple CLR stored procedure by following a tutorial. My task is to move a file from the client file system to the server file system (the server can be accessed using a remote IP, so I cannot use a shared folder as the destination; this is why I need a CLR stored procedure). So I plan to:

    1) store, from Delphi, the file in a varbinary(max) column of a temporary table
    2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field

    Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a hard drive on the local system and Z: is a hard drive on the server. Can someone modify the code below (not working) to make it work? Here I suppose I have a table called MyTable with two fields: ID (int) and DATA (varbinary(max)). Please note it doesn't make a difference whether the table is a real temporary table or just a table where I temporarily store the data. I would appreciate it if some exception handling code were there (so that I can manage an "impossible to save file" exception). I would like to be able to write a new file, or overwrite the file if it already exists.

        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void VarbinaryToFile(int TableId)
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand("select data from mytable where ID = @TableId", connection);
                command.Parameters.AddWithValue("@TableId", TableId);

                // This was the sample code I found to run a query
                //SqlContext.Pipe.ExecuteAndSend(command);

                // instead I need something like this (THIS IS META-SYNTAX!!!):
                SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf');
            }
        }

    (One sub-question is: is this approach correct, or is there a way to directly pass the data to the CLR stored procedure so I don't need to use a temp table? If the sub-question's answer is no, could you describe the approach of avoiding a temp table? So, is there a better way than the one I describe above (temp table + stored procedure)? A way to directly pass the data stream from the client application to the CLR stored procedure? My files can be any size, but also very big.)
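    A minimal sketch of the missing piece, replacing the meta-syntax line; it assumes the assembly is deployed with EXTERNAL_ACCESS (or UNSAFE) permission so the CLR code may touch the file system, and it buffers the whole document in memory, which is fine for modest files but worth replacing with a chunked SqlDataReader/FileStream loop for very big ones:

        try
        {
            using (SqlDataReader reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    byte[] data = (byte[])reader["data"];
                    // WriteAllBytes creates the file or overwrites an existing one
                    System.IO.File.WriteAllBytes(@"z:\MyFile.pdf", data);
                }
            }
        }
        catch (Exception ex)
        {
            SqlContext.Pipe.Send("Impossible to save file: " + ex.Message);
        }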

  • Defining a ContextMenu in a DataGridRow style

    - by Brent
    I'm trying to clean up some of the XAML in my views by moving a lot of the DataGrid styles into a ResourceDictionary. One of the things I'd like to move is the ContextMenu that is bound to some commands in the ViewModel. However, when I move the context menu to the ResourceDictionary, the commands never fire anymore, and I can't figure out why. I've defined the ContextMenu in the DataGridRow style so that when the user right-clicks on the column header, no ContextMenu is shown; it will only be shown when they right-click on a row. Am I doing something wrong here? FYI, I'm using VS 2010 RTM if that makes a difference.

        <Style x:Key="DataGridRowStyle" TargetType="{x:Type DataGridRow}">
            <Setter Property="Height" Value="20"/>
            <Setter Property="ContextMenu">
                <Setter.Value>
                    <ContextMenu>
                        <MenuItem Header="New" Command="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=DataContext.NewCommand}">
                            <MenuItem.Icon>
                                <Image Source="/Images/DocumentWhite(32N).png" Width="16" Height="16"/>
                            </MenuItem.Icon>
                        </MenuItem>
                        <MenuItem Header="Open" Command="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=DataContext.OpenCommand}">
                            <MenuItem.Icon>
                                <Image Source="/Images/FolderOpenYellow(32N).png" Width="16" Height="16"/>
                            </MenuItem.Icon>
                        </MenuItem>
                        <MenuItem Header="Delete" Command="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=DataContext.DeleteCommand}">
                            <MenuItem.Icon>
                                <Image Source="/Images/Delete(32N).png" Width="16" Height="16"/>
                            </MenuItem.Icon>
                        </MenuItem>
                    </ContextMenu>
                </Setter.Value>
            </Setter>
            <Style.Triggers>
                <Trigger Property="IsMouseOver" Value="True">
                    <Setter Property="Background" Value="{StaticResource hoverGradient}"/>
                </Trigger>
                <Trigger Property="IsSelected" Value="True">
                    <Setter Property="Background" Value="{StaticResource BtnOverFill}"/>
                </Trigger>
            </Style.Triggers>
        </Style>
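    One hedged explanation: a ContextMenu opens in its own window, outside the DataGrid's visual tree, so the RelativeSource AncestorType=DataGrid lookup finds nothing and the Command bindings resolve to null. A commonly suggested rework (a sketch, not verified against this exact project) stashes the grid's DataContext on the row's Tag, which the menu can still reach through its PlacementTarget:

        <Style x:Key="DataGridRowStyle" TargetType="{x:Type DataGridRow}">
            <Setter Property="Tag"
                    Value="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=DataContext}"/>
            <!-- ... rest of the style as before ... -->
        </Style>

        <MenuItem Header="New"
                  Command="{Binding RelativeSource={RelativeSource AncestorType=ContextMenu},
                                    Path=PlacementTarget.Tag.NewCommand}"/>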

  • Best programming aids for a quadriplegic programmer

    - by Peter Rowell
    Before you jump to conclusions, yes, this is programming related. It covers a situation that comes under the heading of, "There, but for the grace of God, go you or I." This is brand new territory for me, so I'm asking for some serious help here. A young man, Honza Ripa, in a nearby town did the classic Dumb Thing two weeks after graduating from High School -- he dove into shallow water in the Russian River and had a C-4/C-5 break, sometimes called a Swimming Pool break. In a matter of seconds he went from an exceptional golfer and wrestler to a quadriplegic. (Read the story ... all of us should have been so lucky as to have a girlfriend like Brianna.) That was 10 months ago, and he has regained only tiny amounts of control of his right index finger and a couple of other hand/foot motions, none of them fine-grained. His total control of his computer (currently running Win7, but we can change that as needed) is via voice command. Honza's not dumb. He had a 3.7 GPA with AP math and physics.

    The Problems:
    - Since all of his input is via voice command, he is concerned that the predominance of special characters in programming will require vastly verbose commands. Does anyone know of any well-done voice input system specifically designed for programmers? I'm thinking about something that might be modal -- e.g. you say "Python input" and it goes into a macro mode for doing class definitions, etc. Given all of the RSI in programmer-land, there's got to be something out there. What OS(es) does it run on?
    - I am planning on teaching him Python, which is my preferred language for programming and teaching. Are there any applications / whatevers that are written in Python and would be a particularly good match for engaging him mentally while supporting his disability? One of his expressed interests is in stock investing, but that might not be a good starting point for a brand-new programmer.
    - There are a lot of environments (Flash, JavaScript, etc.) that are not particularly friendly to people with accessibility challenges. I vaguely remember (but cannot find) a research project that basically created an overlay system on top of a screen environment and then allowed macro command construction on top of the screen image. If we can get/train this system, we may be able to remove many hurdles to using the net.
    - I am particularly interested in finding open source Python-based robotics and robotic prostheses projects so that he can simultaneously learn advanced programming concepts while learning to solve some of his own immediate problems.

    I've done a ton of googling on this, but I know there are things I'm missing. I'm asking the SO community to step up to the plate here. I know this group has the answers, so let me hear them! Overwhelm me with the opportunities that any of us might have/need to still program after such a life-changing event.

  • Overwriting the content from one MOSS content database to another

    - by 78lro
    We have a content database on our live MOSS server. It contains one site collection with several sub-sites. I'm using the stsadm export command to produce a .cmp file, then moving this to our test server in a different farm. I then want to import this content into the content database on our test farm, but using the stsadm import command results in me being left with all the existing test data as well as the live data. I tried detaching the existing content database from test in Central Admin and creating a new empty one, to then run the import against that, but the import failed, as obviously there's no root site in the empty database. The aim is to have the data on test look like live, clearing out all the test data. Can anyone suggest a good approach to this type of problem?
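    A hedged sketch of the usual sequence: give the fresh content database a root site collection first, then import over it (the URLs, accounts and file names below are hypothetical):

        stsadm -o createsite -url http://testserver -owneremail admin@example.com -ownerlogin DOMAIN\admin
        stsadm -o import -url http://testserver -filename livebackup.cmp -includeusersecurity

    Deleting the old test site collection (or detaching its database, as attempted) before creating the fresh root site is what actually clears the test data; the import alone only merges into what is already there.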

  • nmake makefile, linking objects files in a subfolder

    - by Gauthier
    My makefile defines a link command:

        prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) $(PROD_OBJS)

    where $(PROD_OBJS) is a list of object files of the form:

        PROD_OBJS = objfile1.obj objfile2.obj objfile3.obj ... objfileN.obj

    Now, the makefile itself is at the root of my project directory. It gets messy to have object and listing files at the root, so I'd like to put them in a subfolder. Building and outputting the .obj files to a subfolder works; I'm doing it with suffixes and inference:

        .s.obj:
            $(ASSEMBLY) $(FLAGS) $*.s -o Objects\$*.obj

    The problem is to pass the Objects folder to the link command. I tried:

        prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) Objects\$(PROD_OBJS)

    but only the first file in the list of object files gets the folder's name. How can I apply the Objects subfolder to all files of my list $(PROD_OBJS)?

    EDIT: I also tried

        PROD_OBJS = $(patsubst %.ss,Object\%.obj, $(PROD_SRC))

    but got:

        makefile(51) : fatal error U1000: syntax error : ')' missing in macro invocation
        Stop.

    This is quite strange...
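    The U1000 error is expected here: patsubst is GNU make syntax, and NMAKE has no equivalent function, so it parses $(patsubst ...) as a malformed macro invocation. A hedged workaround is simply to carry the directory in the list itself via a macro (names hypothetical):

        OBJDIR = Objects
        PROD_OBJS = $(OBJDIR)\objfile1.obj $(OBJDIR)\objfile2.obj $(OBJDIR)\objfile3.obj

        prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) $(PROD_OBJS)

    NMAKE's built-in substitution, $(MACRO:from=to), can rewrite substrings within a list, but it cannot prepend a prefix to every word, so defining the list pre-prefixed is the plainer fix.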

  • how to run existing ant script from groovy

    - by Omnipresent
    My build is a 3-step process: run ant to build, transfer the war to the server, touch the reload file. I have converted the last two steps to Groovy, using AntBuilder. However, I am not able to run my existing Ant script from Groovy. Usually I run it using the following command in a DOS prompt:

        ant -Dsystem=mysystem -DsomeotherOption=true

    From Groovy, when I try to do

        "ant -Dsystem=mysystem -DsomeotherOption=true".execute()

    it gives an error saying ant is not a recognized command. How can I utilize my existing Ant script in Groovy?
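    A hedged guess: on Windows, ant is really ant.bat, which String.execute() cannot launch directly; routing through cmd.exe usually fixes this. A sketch:

        // launch the batch file via the shell and echo its output
        def proc = ["cmd", "/c", "ant", "-Dsystem=mysystem", "-DsomeotherOption=true"].execute()
        proc.waitForProcessOutput(System.out, System.err)

    Since AntBuilder is already in play, another option is invoking the existing build file through Ant's own <ant> task, e.g. new AntBuilder().ant(antfile: "build.xml"), though that runs the build in-process rather than as a separate command.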

  • How to automatically backup TFS 2010

    - by Julien Ferraro
    Hello, I'm evaluating Team Foundation Server 2010. I would like to know if there is some command line I can use to back up my TFS data. I currently have a folder that is sent to the cloud; this backup contains all the data I need to back up (like MySQL databases, Word documents, ...). What I want is a way to automatically back up my TFS collections (and any other important TFS data) into one (or more) files in this directory. A command line would be perfect. Many thanks, Julien
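    TFS 2010 stores everything in SQL Server, so one hedged approach is a scheduled sqlcmd backup of the configuration database and each collection database (the server, database and path names below are hypothetical):

        sqlcmd -S TFSSQLSERVER -Q "BACKUP DATABASE Tfs_Configuration TO DISK='D:\Backups\Tfs_Configuration.bak' WITH INIT"
        sqlcmd -S TFSSQLSERVER -Q "BACKUP DATABASE Tfs_DefaultCollection TO DISK='D:\Backups\Tfs_DefaultCollection.bak' WITH INIT"

    Dropping the .bak files into the cloud-synced folder then automates the off-site copy. The TFS 2010 Power Tools also ship a Backup Plan wizard that schedules essentially the same database backups.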

  • pros and cons with server management gui tools to manage linux web servers

    - by ajsie
    I have stumbled upon these GUI tools that help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel, etc. I wonder what the pros and cons of these solutions are. A lot of people say that they are not as good as using the pure command line (SSH) to manage your server, but I think that's yet another "Linux is for advanced users" talk. I agree that some things may only be done with the command line, by editing the configuration files directly, but I don't really want to do that every time and for everything. It's like not having phpMyAdmin for managing MySQL; that would be a pain, right? So if one wants to bring up a web server serving a PHP site one developed oneself, and wants all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV, etc.), are these tools the right way to go?

  • CommandBinding CanExecute always null

    - by developer
    Hi All, I am using a CommandBinding to control the visibility of a button. Below is my code:

        <UserControl.CommandBindings>
            <CommandBinding Command="{x:Static local:AttendeePanel.LaunchAttEditor}"
                            Executed="LaunchAttEditor_Executed"
                            CanExecute="CanCreateProfile"/>
        </UserControl.CommandBindings>

        <Button Content="Create Profile"
                Command="local:LaunchEditor"
                CommandParameter="{Binding Profile}"
                Name="BtnCreate">

    My problem is that the CanExecute method always gets null as its parameter, even though I am binding the parameter to Profile. Is there a way I can set the DataContext? Or is this happening because CanExecute runs before the data loads?
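    The timing theory is plausible: CanExecute is typically queried before the Profile binding has resolved. A hedged sketch of the handler simply treats a null parameter as "not yet executable" and lets the command be re-queried once data arrives:

        // guards against the early query; names mirror the question's handler
        private void CanCreateProfile(object sender, CanExecuteRoutedEventArgs e)
        {
            e.CanExecute = e.Parameter != null;
            e.Handled = true;
        }

    After the data load completes, calling CommandManager.InvalidateRequerySuggested() forces WPF to run CanExecute again with the now-populated parameter.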

  • How do I unescape HTML entities in a string in Python 3.1?

    - by Sho Minamimoto
    I have looked all around and only found solutions for Python 2.6 and earlier, NOTHING on how to do this in Python 3.x. (I only have access to a Win7 box.) I HAVE to be able to do this in 3.1, and preferably without external libraries. Currently, I have httplib2 installed and access to command-prompt curl (that's how I'm getting the source code for pages). Unfortunately, curl does not decode HTML entities, as far as I know; I couldn't find a command to decode them in the documentation. YES, I've tried to get Beautiful Soup to work, MANY TIMES, without success in 3.x. If you could provide EXPLICIT instructions on how to get it to work in Python 3 in an MS Windows environment, I would be very grateful. So, to be clear, I need to turn strings like this: Suzy &amp; John into a string like this: "Suzy & John".
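    A sketch that should work on a stock 3.1 install, with the caveat that unescape is an undocumented helper on HTMLParser (the documented html.unescape function only arrived in 3.4), so it is worth a quick check on the target box:

        from html.parser import HTMLParser

        print(HTMLParser().unescape("Suzy &amp; John"))  # -> Suzy & John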

  • Error setting up thrift modules for python

    - by MMRUser
    Hi, I'm trying to set up Thrift in order to use it with Cassandra, but when I run setup.py it outputs this message on the command line:

        running build
        running build_py
        running build_ext
        building 'thrift.protocol.fastbinary' extension
        C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Python26\PC -c src/protocol/fastbinary.c -o build\temp.win32-2.6\Release\src\protocol\fastbinary.o
        src/protocol/fastbinary.c:24:24: netinet/in.h: No such file or directory
        src/protocol/fastbinary.c:85:4: #error "Cannot determine endianness"
        src/protocol/fastbinary.c: In function `writeI16':
        src/protocol/fastbinary.c:295: warning: implicit declaration of function `htons'
        src/protocol/fastbinary.c: In function `writeI32':
        src/protocol/fastbinary.c:300: warning: implicit declaration of function `htonl'
        src/protocol/fastbinary.c: In function `readI16':
        src/protocol/fastbinary.c:688: warning: implicit declaration of function `ntohs'
        src/protocol/fastbinary.c: In function `readI32':
        src/protocol/fastbinary.c:696: warning: implicit declaration of function `ntohl'
        error: command 'gcc' failed with exit status 1

    Need some help on this issue. I have already installed MinGW32. Thanks.
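    The failure is the POSIX-only header: MinGW has no netinet/in.h, and the byte-order macros it would supply never get defined. One hedged, untested patch to src/protocol/fastbinary.c swaps in the Winsock header and declares the byte order explicitly (x86 Windows is little-endian); linking may additionally need ws2_32:

        /* around line 24 of fastbinary.c; a sketch, not a verified fix */
        #ifdef _WIN32
        #include <winsock2.h>            /* htons/htonl/ntohs/ntohl live here */
        #define __LITTLE_ENDIAN 1234
        #define __BIG_ENDIAN    4321
        #define __BYTE_ORDER    __LITTLE_ENDIAN
        #else
        #include <netinet/in.h>
        #endif

    Failing that, fastbinary is only an accelerator; the pure-Python protocol implementation in the Thrift library works without compiling the extension at all.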

  • Watir with IronRuby!

    - by azamsharp
    Has anyone used Watir with IronRuby successfully? I am getting an error that the required file 'watir' was not found. What path do I need to set to get this file to work in IronRuby? For some reason my igem command is not working:

        C:\DevTools\IronRuby\ironruby\Merlin\Main\Languages\Ruby\Scripts\bin\igem install watir
        '"C:\DevTools\IronRuby\ironruby\Merlin\Main\Languages\Ruby\Scripts\bin\ir.exe"' is not recognized as an internal or external command, operable program or batch file.

    I am using version 0.9 of IronRuby. I remember that in 0.9 you have to invoke things through the ir tool, so I used the following and got another error:

        C:\DevTools\IronRuby\ironruby\Merlin\Main\Languages\Ruby\Scripts\bin\ir igem install watir
        ERROR: While executing gem ... (RangeError)
            bignum too big to convert into Fixnum

    The current version of RubyGems is 1.3.5:

        C:\DevTools\IronRuby\ironruby\Merlin\Main\Languages\Ruby\Scripts\bin\ir igem -v
        1.3.5

    I even tried using the full path:

        require File.dirname(__FILE__) + "C:/ruby/lib/ruby/gems/1.8/gems/commonwatir-1.6.2/lib/watir.rb"

  • Including configuration files while compiling a Flex application with MXMLC

    - by Daniel
    Hello there, I'm using:
    - Flex SDK 3.5.0
    - Parsley 2.2.2
    - Flash Builder 4

    Down in my src folder (which is configured as part of the source path in Flash Builder), I have a logging.xml which I configure via Parsley:

        FlexLoggingXmlSupport.initialize();
        XmlContextBuilder.build("com/company/product/util/log/logging.xml");

    When I run my application through Flash Builder, the XmlContextBuilder locates the logging.xml (the implementation is a regular URLLoader one). When I compile my application using MXMLC (whether in Ant or on the command line) and then run the SWF, I get the following error:

        Cause(0): Error loading com/company/product/util/log/logging.xml: Error in URLLoader - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml

    Here is the MXMLC tag in Ant:

        <mxmlc file="${product.src.dir}/com/company/product/view/Main.mxml"
               output="${product.target.dir}/${product.release.filename}"
               keep-generated-actionscript="false">
            <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" />
            <!-- source paths -->
            <source-path path-element="${FLEX_HOME}/frameworks" />
            <compiler.source-path path-element="${product.src.dir}" />
            <compiler.source-path path-element="${product.locale.dir}/{locale}" />
            <compiler.library-path dir="${product.basedir}" append="true">
                <include name="libs" />
            </compiler.library-path>
            <warnings>false</warnings>
            <debug>false</debug>
        </mxmlc>

    And here is the command line:

        \mxmlc.exe -output "C:\temp\Rap.swf" -load-config "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks\flex-config.xml" -source-path "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks" C:\workspace\folder01\product\src C:\workspace\folder01\product\locale\en_US -library-path+=C:\workspace\folder01\product\libs -file-specs C:\workspace\folder01\product\src\com\company\product\view\main.mxml

    Now perhaps I don't understand this correctly, but as far as I know the SWF should be compiled with all of the resources in the paths I give MXMLC as source paths. For some reason it seems that the XML file is not compiled into the SWF, hence the relative path the XmlContextBuilder uses isn't located successfully. I could not find any argument to give MXMLC that might solve this. I tried using the -dump-config option with Flash Builder's compiler, then giving that configuration to MXMLC, but it didn't work either. I tried providing the XmlContextBuilder with an absolute path; that worked fine when I compiled with MXMLC via Ant, but still didn't work when I used MXMLC on the command line. I'd be happy to be enlightened here regarding all subjects: using MXMLC, accessing resources with relative paths, configuring logging in Parsley, etc. Many thanks in advance, Daniel
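    A hedged explanation: mxmlc only compiles classes and explicitly embedded assets into the SWF; a file fetched at runtime through URLLoader is never baked in, and it works inside Flash Builder only because the IDE copies non-embedded resources from the source folders to the output folder. Under that assumption, the Ant build just needs to deploy the XML next to the SWF (property names reuse the build file's own):

        <copy todir="${product.target.dir}/com/company/product/util/log">
            <fileset dir="${product.src.dir}/com/company/product/util/log" includes="logging.xml"/>
        </copy>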

  • Python (pdb) - Queueing up commands to execute

    - by kpatelPro
    I am implementing a "breakpoint" system for use in my Python development that will allow me to call a function that, in essence, calls pdb.set_trace(). Some of the functionality that I would like to implement requires me to control pdb from code while I am within a set_trace context. Example:

        disableList = []

        def breakpoint(name=None):
            def d():
                disableList.append(name)
                #****
                #issue 'run' command to pdb so user
                #does not have to type 'c'
                #****
            if name in disableList:
                return
            print "Use d() to disable breakpoint, 'c' to continue"
            pdb.set_trace()

    In the above example, how do I implement the comments demarked by the #**** ? In other parts of this system, I would like to issue an 'up' command, or two sequential 'up' commands, without leaving the pdb session (so the user ends up at a pdb prompt but up two levels on the call stack). Thanks!
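    A sketch of one way in, resting on the (undocumented but long-standing) fact that pdb.Pdb inherits from cmd.Cmd, whose cmdqueue list holds commands that get executed before the prompt is shown:

        import sys
        import pdb

        def breakpoint_continue():
            """Enter the debugger but immediately run 'c' on the user's behalf."""
            p = pdb.Pdb()
            p.cmdqueue.append('c')                 # consumed before any prompt
            p.set_trace(sys._getframe().f_back)    # start in the caller's frame

        def breakpoint_up_two():
            """Enter the debugger two levels up the call stack."""
            p = pdb.Pdb()
            p.cmdqueue.extend(['up', 'up'])        # queued 'up' commands
            p.set_trace(sys._getframe().f_back)

    Because cmdqueue comes from the standard cmd module rather than pdb's public API, it is worth a quick test on the target Python version.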
