Search Results

Search found 9596 results on 384 pages for 'remote assistance'.

  • Git clone repo across the local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git; I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and haven't got them working), but all seems OK there. I now want to be able to push or pull between the laptop and my main desktop. The reason is that the laptop is handy on the train - I spend 2 hours a day travelling and can get some good work done - but my main machine at home is great for development. So I want to be able to push/pull between the laptop and the main computer when I get home. I thought the simplest way of doing this would be to share the code folder across the LAN and do:

        git clone file://192.168.10.51/code

    Unfortunately this doesn't seem to be working for me. I open a Git Bash prompt in C:\code (the shared folder on both machines) and type the above command; this is what I get back:

        Initialized empty Git repository in C:/code/code/.git/
        fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    How can I share the repository between the two machines in the simplest way? There will be other locations that act as official storage points, which the other devs, the CI server, etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks!
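
    One likely culprit: on Windows, Git does not parse file://192.168.10.51/code as a network path, so it falls back to a path relative to its install directory - hence the "C:/Program Files (x86)/Git/code" in the error. A sketch of two commonly used alternatives, assuming the folder is shared as \\192.168.10.51\code (the share name is an assumption):

        # plain UNC path with forward slashes, no file:// scheme
        git clone //192.168.10.51/code

        # or map the share to a drive letter first
        net use Z: \\192.168.10.51\code
        git clone Z:/code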

  • Maven build fails on an Ant FTP task failure

    - by fraido
    I'm using the FTP Ant task with maven-antrun-plugin:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <executions>
            <execution>
              <id>ftp</id>
              <phase>generate-resources</phase>
              <configuration>
                <tasks>
                  <ftp action="get" server="${ftp.server.ip}" userid="${ftp.server.userid}"
                       password="${ftp.server.password}" remotedir="${ftp.server.remotedir}"
                       depends="yes" verbose="yes" skipFailedTransfers="true"
                       ignoreNoncriticalErrors="true">
                    <fileset dir="target/test-classes/testdata">
                      <include name="**/*.html" />
                    </fileset>
                  </ftp>
                </tasks>
              </configuration>
              <goals>
                <goal>run</goal>
              </goals>
            </execution>
          </executions>
          ...

    The problem is that my build fails when the folder ${ftp.server.remotedir} doesn't exist. I tried specifying skipFailedTransfers="true" and ignoreNoncriticalErrors="true", but these don't fix the problem and the build keeps failing:

        An Ant BuildException has occured: could not change remote directory:
        550 /myBadDir: The system cannot find the file specified.

    Do you know how to tell my Maven build not to care about this Ant task error?
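
    skipFailedTransfers only covers individual failed file transfers; the 550 happens while changing into the remote directory, which the task treats as fatal. One common workaround, as a sketch, is to wrap the task in ant-contrib's <trycatch> so the whole step is ignored on failure (this assumes ant-contrib is available to the antrun plugin as a dependency):

        <tasks>
          <taskdef resource="net/sf/antcontrib/antcontrib.properties" />
          <trycatch>
            <try>
              <ftp action="get" server="${ftp.server.ip}" ... />
            </try>
            <catch>
              <echo>Remote directory missing; continuing without the files.</echo>
            </catch>
          </trycatch>
        </tasks>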

  • Can't connect to SQL Server 2005 Express from an ASP.NET C# page

    - by Aviv
    Hey guys, I have an MS SQL Server 2005 Express instance running on a VPS. I'm using pymssql in Python to connect to my server with the following code, and it works perfectly:

        conn = pymssql.connect(host='host:port', user='me', password='pwd', database='db')

    When I try to connect to the server from an ASP.NET C# page with the following code:

        SqlConnection myConnection = new SqlConnection(
            "Data Source=host,port;Network Library=DBMSSOCN;" +
            "Initial Catalog=db;User ID=me;Password=pwd;");
        myConnection.Open();

    I get the following exception at myConnection.Open():

        A network-related or instance-specific error occurred while establishing a
        connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured to
        allow remote connections. (provider: TCP Provider, error: 0 - A connection
        attempt failed because the connected party did not properly respond after a
        period of time, or established connection failed because connected host has
        failed to respond.)

    I tried restarting the SQL Server, but I had no luck. Can anyone point out what I'm missing here? Thanks!
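
    Since the Python connection works, TCP/IP is evidently enabled on the server, which narrows things down to the connection string itself (a literal "host,port" placeholder left in, or a colon instead of a comma, are the usual mistakes). A sketch using SqlConnectionStringBuilder so each part is set explicitly - the address and port here are placeholders:

        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "203.0.113.10,1433",  // host,port - comma, not colon
            InitialCatalog = "db",
            UserID = "me",
            Password = "pwd",
            NetworkLibrary = "DBMSSOCN"        // force the TCP/IP network library
        };
        using (var myConnection = new SqlConnection(builder.ConnectionString))
        {
            myConnection.Open();
        }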

  • .NET Remoting Connecting to Wrong Host

    - by Dark Falcon
    I have an application I wrote which has been running well for 4 years. Yesterday they moved all their servers around and installed about 60 pending Windows updates, and now it is broken. The application uses remoting to update some information on another server (10.0.5.230), but when I try to create my remote object, I get an exception - note that it is trying to connect to 127.0.0.1, not the proper server. The server (10.0.5.230) is listening on port 9091 as it should. This same error happens on all three terminal servers where this application is installed. Here is the code which registers the remoted object:

        public static void RegisterClient()
        {
            string lServer;
            RegistryKey lKey = Registry.CurrentUser.OpenSubKey(
                "SOFTWARE\\Shoreline Teleworks\\ShoreWare Client");
            if (lKey == null)
                throw new InvalidOperationException("Could not find Shoretel Call Manager");
            object lVal = lKey.GetValue("Server");
            if (lVal == null)
                throw new InvalidOperationException("Shoretel Call Manager did not specify a server name");
            lServer = lVal.ToString();

            IDictionary props = new Hashtable();
            props["port"] = 0;
            string s = System.Guid.NewGuid().ToString();
            props["name"] = s;
            ChannelServices.RegisterChannel(new TcpClientChannel(props, null), false);
            RemotingConfiguration.RegisterActivatedClientType(
                typeof(UpdateClient), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
            RemotingConfiguration.RegisterActivatedClientType(
                typeof(Playback), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
        }

    Here is the code which calls the remoted object:

        UpdateClient lUpdater = new UpdateClient(Settings.CurrentSettings.Extension.ToString());
        lUpdater.SetAgentState(false);

    I have verified that the following URI is passed to RegisterActivatedClientType: "tcp://10.0.5.230:9091/". Why does this application try to connect to the wrong server?
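
    With client-activated objects, the initial connection goes to the URI you register, but the server then hands back an ObjRef containing its own idea of its address; if the server's hostname now resolves to 127.0.0.1 (a hosts-file or DNS change slipped in with the updates would do it), every subsequent call targets loopback. A hedged sketch of the server-side fix - advertising an explicit address on the server channel, assuming you control the server's channel registration:

        // Server side: make the ObjRefs advertise the routable address
        IDictionary serverProps = new Hashtable();
        serverProps["port"] = 9091;
        serverProps["machineName"] = "10.0.5.230";
        ChannelServices.RegisterChannel(new TcpServerChannel(serverProps, null), false);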

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good Git workflow using Capistrano. I've found a few good articles, but I'm either not completely grasping what they're suggesting (likely) or they're somewhat lacking. Here's roughly what I had in mind so far, but I get caught up on when to merge back into the master branch (i.e. before moving to stage? after?) and on hooking it into Capistrano for deployments:

        # Make sure you're up to date with all the changes made
        # on the remote master branch by other developers
        git checkout master
        git pull

        # Create a new branch for the particular bug you're trying to fix
        git checkout -b bug-fix-branch

        # Make your changes
        git status
        git add .
        git commit -m "Friendly message about the commit"

    This is usually where I get stuck. At this point, I have a master branch that is healthy and a new bug-fix-branch that contains my (untested, other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge them back into the master branch first? I'd prefer not to, since it seems like master should be kept free of untested code. Do I even deploy from master, or should I be tagging a release first and then modifying my production.rb file to deploy from that tag? git-deployment seems to address some of these workflow issues, but I can't find out how on earth it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's a canonical way to do this, but I either can't find it or I'm too new to Git to recognize that I have found it. Help!
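
    One common pattern keeps master clean: deploy staging straight from the topic branch, merge to master only after it passes on stage, then tag and deploy production from the tag. A sketch of the Capistrano side under the capistrano-ext multistage convention (the file names follow that convention; the ENV fallbacks and tag names are assumptions):

        # config/deploy/staging.rb
        set :branch, ENV['BRANCH'] || 'bug-fix-branch'

        # config/deploy/production.rb
        set :branch, ENV['TAG'] || 'v1.0.0'

    invoked as `BRANCH=bug-fix-branch cap staging deploy` and, after merging and tagging, `TAG=v1.0.1 cap production deploy`.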

  • Two-way sync with rsync

    - by mwm
    I have a folder a/ and a remote folder A/. I run something like this from a Makefile:

        get-music:
                rsync -avzru server:/media/10001/music/ /media/Incoming/music/
        put-music:
                rsync -avzru /media/Incoming/music/ server:/media/10001/music/
        sync-music: get-music put-music

    When I make sync-music, it first gets all the diffs from server to local, and then does the opposite, sending all the diffs from local to server. This works very well, but only for updates and newly added files. If there are deletions, it doesn't do anything. rsync has the --delete and --delete-after options to help accomplish what I want, but they don't work for a two-way sync. Deleting server files on a sync when the local files have been deleted works; but if, for some reason (explained below), files exist locally that are not on the server because they were deleted elsewhere, I want them removed locally, not copied back to the server (which is what happens). The thing is, I have 3 machines in play: desktop, notebook and home-server. So sometimes the server will have had files deleted by a sync from the notebook, for example, and then when I run a sync from my desktop (where those deleted files still exist) I want those files to be deleted locally, not copied to the server again. I guess this is only possible with a database and a log of operations :P Any simple solutions? Thank you.
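
    True two-way sync with deletion tracking needs per-replica state, which rsync does not keep - but Unison does exactly that: it records what each replica looked like after the last sync, so it can tell a deletion apart from a new file. A sketch using the paths from the Makefile above:

        # one command replaces both get-music and put-music
        unison /media/Incoming/music ssh://server//media/10001/music -batch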

  • How to correctly wait for saveScreenshot() to finish executing

    - by Alain
    Here is my first full working test:

        var expect = require('chai').expect;
        var assert = require('assert');
        var webdriverjs = require('webdriverjs');
        var client = {};
        var webdriverOptions = {
            desiredCapabilities: { browserName: 'phantomjs' },
            logLevel: 'verbose'
        };

        describe('Test mysite', function(){
            before(function() {
                client = webdriverjs.remote( webdriverOptions );
                client.init();
            });

            var selector = "#mybodybody";

            it('should see the correct title', function(done) {
                client.url('http://localhost/mysite/')
                    .getTitle( function(err, title){
                        expect(err).to.be.null;
                        assert.strictEqual(title, 'My title page' );
                    })
                    .waitFor( selector, 2000, function(){
                        client.saveScreenshot( "./ExtractScreen.png" );
                    })
                    .waitFor( selector, 7000, function(){ })
                    .call(done);
            });

            after(function(done) {
                client.end(done);
            });
        });

    OK, it does not do much, but after many hours spent getting the environment set up correctly, it passes. Now, the only way I got it working was by playing with the waitFor() calls and adjusting the delays. It works, but I still do not understand how to reliably wait for the PNG file to be saved to disk. As I start dealing with test ordering, I will eventually move on from this test script before the file is securely saved. How can I improve this screen-save sequence and avoid losing my screenshot? Thanks.
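
    A more direct signal than padding with a second waitFor is to call done from saveScreenshot's own completion callback, so the test only ends once the file has been written. A sketch - the callback signature is an assumption about the webdriverjs version used above:

        .waitFor(selector, 2000, function () {
            client.saveScreenshot("./ExtractScreen.png", function (err) {
                expect(err).to.be.null;   // the PNG is on disk once this fires
                done();                   // end the test only now
            });
        });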

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    I have created a very simple web service (.asmx) in Visual Studio 2010 Professional and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

        The web site could not be configured correctly; getting ASP.NET process
        information failed. Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd
        returned an error: The remote server returned an error: (500) Internal Server Error.

    http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test

    I have tried:

    1. Running the tests on IIS rather than the ASP.NET Development Server
    2. Adding and then removing the XML fragment in my web service's .config file
    3. Giving the MACHINE\ASPNET account full control over the local folder

    My current questions:

    1. Why am I being bothered with this instrumentation / code coverage DLL, when it doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
    2. I'm placing the node under in Web.config - is that the correct node?
    3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advise making the web service as lightweight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes, Ben
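
    On question 3: an .asmx service is just a class, so the ASP.NET test host (and the failing VSEnterpriseHelper.axd probe) can be skipped entirely by instantiating the service in a plain unit test, without the [HostType]/[UrlToTest] attributes. A sketch - the service and method names here are hypothetical:

        [TestMethod]
        public void GetQuote_ReturnsSomething()
        {
            var service = new ZfpService();          // hypothetical .asmx code-behind class
            var result = service.GetQuote("MSFT");   // hypothetical [WebMethod]
            Assert.IsNotNull(result);
        }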

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. As a start, I would like to be able to pull and query data from our DB into Excel. Eventually I would like Excel to run queries and pull data over HTTP from a remote client, including authentication, etc. So far I've set up a working (rickety) prototype using the ADO.NET Entity Data Model wizard in Visual Studio, plus VSTO to create a test Excel worksheet with a button that pulls from that ADO.NET interface. This works OK, and I can query the DB using LINQ through the entities/objects that the ADO.NET EDM wizard creates. However, I have started to run into some problems with this approach. I've found the Entity Framework difficult to work with - and also difficult to research solutions for, as there's a lot of chaff out there about it and its older versions. One example is being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question is: if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen NHibernate mentioned as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!
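
    As an aside on the timeout sub-problem: the client-side DataServiceContext only knows about the HTTP request timeout; the SQL command timeout belongs to the Entity Framework ObjectContext on the service side. A sketch of where it would be set in a WCF Data Services class over an EF model (the type names are assumptions):

        public class MyService : DataService<MyEntities>
        {
            protected override MyEntities CreateDataSource()
            {
                var context = new MyEntities();
                context.CommandTimeout = 120;   // seconds, for store commands
                return context;
            }
        }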

  • Starting and stopping Firefox from C#

    - by Lucas Meijer
    When I start /Applications/Firefox.app/Contents/MacOS/firefox-bin on Mac OS X with Process.Start() under Mono, the id of the process that gets returned does not match the process that Firefox ends up running under. It looks like Firefox quickly decides to start another process and kill the current one, which makes it difficult to stop Firefox and to detect whether it is still running. I've tried starting Firefox with the -no-remote flag, to no avail. Is there a way to start Firefox such that it doesn't do this "I'll quickly make a new process for you" dance? The situation can somewhat be detected by making sure Firefox keeps running for at least 3 seconds after its start and, when it does not, scanning for other Firefox processes. However, this technique is shaky at best: on slow days it might take a bit more than 3 seconds, and then all tests depending on this behaviour fail. It turns out that this behaviour only happens when asking Firefox to start a specific profile using -P MyProfile (which I need to do, as I need to start Firefox with specific proxy-server settings). If I start Firefox "normally", it does stick to its process.
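
    Absent a way to stop the relaunch, a workaround is to treat the launcher PID as provisional: if the first process exits shortly after start, rescan for the surviving process by name rather than treating it as a failure. A sketch - the process name and the 5-second window are assumptions:

        using System.Diagnostics;
        using System.Linq;

        Process p = Process.Start("/Applications/Firefox.app/Contents/MacOS/firefox-bin",
                                  "-P MyProfile -no-remote");
        if (p.WaitForExit(5000))   // bootstrap process handed off and quit
        {
            p = Process.GetProcessesByName("firefox-bin").FirstOrDefault();
        }
        // p is now the real browser process (or null if startup actually failed)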

  • Can I delay the keyup event for jQuery?

    - by Paul
    I'm using the Rotten Tomatoes movie API in conjunction with Twitter's typeahead plugin from Bootstrap 2.0. I've been able to integrate the API, but the issue I'm having is that the API gets called after every keyup event. This is all fine and dandy, but I would rather make the call after a small pause, allowing the user to type in several characters first. Here is my current code, which calls the API after each keyup event:

        var autocomplete = $('#searchinput').typeahead()
            .on('keyup', function(ev){
                ev.stopPropagation();
                ev.preventDefault();

                // filter out up/down, tab, enter, and escape keys
                if( $.inArray(ev.keyCode, [40,38,9,13,27]) === -1 ){
                    var self = $(this);

                    // set typeahead source to empty
                    self.data('typeahead').source = [];

                    // 'active' is used so we aren't triggering duplicate keyup events
                    if( !self.data('active') && self.val().length > 0 ){
                        self.data('active', true);

                        // Do data request. Insert your own API logic here.
                        $.getJSON("http://api.rottentomatoes.com/api/public/v1.0/movies.json?callback=?&apikey=MY_API_KEY&page_limit=5", {
                            q: encodeURI($(this).val())
                        }, function(data) {
                            // set this to true when your callback executes
                            self.data('active', true);

                            // Populate the titles into an array, since that is
                            // what typeahead's source requires
                            var arr = [], i = 0;
                            var movies = data.movies;
                            $.each(movies, function(index, movie) {
                                arr[i] = movie.title;
                                i++;
                            });

                            // set your results into the typeahead's source
                            self.data('typeahead').source = arr;

                            // trigger keyup on the typeahead to make it search
                            self.trigger('keyup');

                            // All done; set to false to prepare for the next remote query.
                            self.data('active', false);
                        });
                    }
                }
            });

    Is it possible to set a small delay and avoid calling the API after every keyup?
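
    The standard fix is to debounce: reset a timer on every keystroke and only call the API once the user has paused. A minimal sketch of the pattern - the 300 ms pause is arbitrary, and fetchMovies is a hypothetical wrapper around the $.getJSON logic above:

        var searchTimer = null;
        $('#searchinput').typeahead().on('keyup', function (ev) {
            var self = $(this);
            clearTimeout(searchTimer);          // user is still typing - cancel the pending call
            searchTimer = setTimeout(function () {
                fetchMovies(self);              // hypothetical wrapper around the API request
            }, 300);                            // fire only after 300 ms of inactivity
        });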

  • Qt QSslError being signaled with the error code set to NoError

    - by Nantucket
    My problem: I compiled OpenSSL into Qt to enable OpenSSL support. Everything appeared to go correctly in the compile. However, when I use the official HTTP example application and try to download an https page, it signals two QSslErrors every time, each with the code NoError. The QSslError codes, including NoError, are documented only poorly: there is no explanation of why a NoError type is even included, or what it means. Bizarrely, the NoError code seems to be accurate, as the remote https document downloads perfectly even while the error is being signaled. Does anyone have any idea what this means and what could possibly be causing it?

    Optional background reading - here is the relevant part of the example app (this is connected to the network connection's sslErrors signal by the constructor):

        void HttpWindow::sslErrors(QNetworkReply*, const QList<QSslError> &errors)
        {
            QString errorString;
            foreach (const QSslError &error, errors) {
                if (!errorString.isEmpty())
                    errorString += ", ";
                errorString += error.errorString();
            }

            if (QMessageBox::warning(this, tr("HTTP"),
                                     tr("One or more SSL errors has occurred: %1").arg(errorString),
                                     QMessageBox::Ignore | QMessageBox::Abort) == QMessageBox::Ignore) {
                reply->ignoreSslErrors();
            }
        }

    I have tried the old version of this example, and it produced the same result. I have tried OpenSSL 1.0.0a and 0.9.8o. I have tried compiling OpenSSL myself, and I have tried pre-compiled versions of OpenSSL from the net. All produce the same result. If this were my first time using Qt with SSL, I would almost think this is the intended result (even though the example application pops up error warning windows), if not for the fact that the last time I played with Qt - an older version, with an older version of OpenSSL - I distinctly remember everything working fine with no error windows. My system is running Windows 7 x64.
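
    Until the root cause is found, one pragmatic guard is to filter the list and only raise the dialog for entries whose error() is something other than QSslError::NoError, so spurious entries are ignored while real failures still surface. A sketch:

        void HttpWindow::sslErrors(QNetworkReply *reply, const QList<QSslError> &errors)
        {
            QStringList realErrors;
            foreach (const QSslError &error, errors) {
                if (error.error() != QSslError::NoError)     // drop the spurious entries
                    realErrors << error.errorString();
            }
            if (realErrors.isEmpty()) {
                reply->ignoreSslErrors();                    // nothing genuinely wrong
                return;
            }
            // ... show the warning dialog for realErrors as before ...
        }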

  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing the gems from my Gemfile into the shared path, as Capistrano does by default. When I run:

        bundle exec rake assets:precompile RAILS_ENV=production

    I get:

        You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4.
        Using bundle exec may solve this.

    Note that:

        cat Gemfile.lock | grep rake

    returns:

        rake (>= 0.8.7)
        rake (10.0.4)

    This is my gem environment output:

        - RUBYGEMS VERSION: 1.8.24
        - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux]
        - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby
        - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86_64-linux
        - GEM PATHS:
          - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
          - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"
          - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"]
        - REMOTE SOURCES:
          - http://rubygems.org/

    Update:

        which -a rake
        /opt/bitnami/rvm/bin/rake
        /opt/bitnami/ruby/bin/rake

    Update 2: I tried giving the full path to rake, but same problem.
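
    The message usually means the rake that started the process is the system one (activated before Bundler could pin 10.0.4) - plausible here, given the two rakes on the PATH in the update above. Two things commonly tried, as a sketch using the paths from the gem environment:

        # regenerate Gemfile.lock against one consistent rake version
        bundle update rake

        # or bypass any stale rake binstub on the PATH explicitly
        /opt/bitnami/ruby/bin/ruby -S bundle exec rake assets:precompile RAILS_ENV=production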

  • Rails Rake and MySQL SSH port forwarding

    - by rube_noob
    Hello, I need to create a rake task that does some ActiveRecord operations through an SSH tunnel. The rake task is run on a remote Windows machine, so I would like to keep things in Ruby. This is my latest attempt:

        desc "Synchronizes the tablet DB with the server"
        task(:sync => :environment) do
          require 'rubygems'
          require 'net/ssh'
          begin
            Thread.abort_on_exception = true
            tunnel_thread = Thread.new do
              Thread.current[:ready] = false
              hostname = 'host'
              username = 'tunneluser'
              Net::SSH.start(hostname, username) do |ssh|
                ssh.forward.local(3333, "mysqlhost.com", 3306)
                Thread.current[:ready] = true
                puts "ready thread"
                ssh.loop(0) { true }
              end
            end

            until tunnel_thread[:ready] == true do
            end

            puts "tunnel ready"
            Importer.sync
          rescue StandardError => e
            puts "The Database Sync Failed."
          end
        end

    The task seems to hang at "tunnel ready" and never attempts the sync. I have had success when running one rake task to create the tunnel and then running the sync in a different terminal. I want to combine these, however, so that if there is an error with the tunnel, the sync is not attempted. This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here. Any ideas? Thanks
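
    One suspect is the empty `until` loop: it busy-waits without yielding, so the SSH thread can be starved before it ever sets :ready. A sketch that blocks on a Queue instead, so the main thread sleeps until the tunnel is genuinely up (abort_on_exception still surfaces tunnel failures before the sync starts):

        require 'net/ssh'
        require 'thread'

        ready = Queue.new
        Thread.abort_on_exception = true
        Thread.new do
          Net::SSH.start('host', 'tunneluser') do |ssh|
            ssh.forward.local(3333, 'mysqlhost.com', 3306)
            ready << :ok          # wake the main thread
            ssh.loop { true }     # keep the tunnel serving
          end
        end
        ready.pop                 # blocks here until the forward is registered
        Importer.sync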

  • Git as a Mercurial client? Why no git-hg?

    - by aapeli
    This is a question that's been bothering me for a while. I've done my homework and checked Stack Overflow, and found at least these two topics about my question: "Git for Mercurial like git-svn" and "Git interoperability with a Mercurial repository". I've done some serious googling to solve this issue, but so far with no luck. I've also read the Git Internals book and Mercurial's behind-the-scenes documentation to try to figure this out. I'm still a bit puzzled that I haven't been able to find any suitable git-hg type of tool. From my perspective, git-svn is one of the main reasons I've chosen to use git over Mercurial at work: it allows me to use a workflow I like, and nobody else needs to bother if they don't care. I just don't see the point in using an intermediate hg repo to convert back and forth, as suggested in one of the threads. Anyway, from what I've read, hg and git seem very similar in conceptual design. There are differences under the hood, but none of them should prevent creating a git client for hg. As it seems to me, remote-tracking branches and octopus merges make git even more powerful than hg is. So, the real question: is there any real reason why git-hg does not exist (or at least is very hard to find)? Is there some animosity from git users (and developers) towards their hg counterparts that has caused the lack of a git-hg tool? Do any of you have plans to develop something like this and go public with it? I could volunteer (although with very feeble C skills) to participate in getting this done; I just don't possess the full knowledge to start it up myself. Could this be the tool to end all DVCS wars for good?

  • Why am I getting "(304) Not Modified" error on some links when using HttpWebRequest?

    - by Greg
    Hi, any ideas why, for some links that I try to access using HttpWebRequest, I get "The remote server returned an error: (304) Not Modified." in the code? The code I'm using is from Jeff's post here. Note that the code is, in concept, a simple proxy server: I point my browser at this locally running code, which takes my browser's request and proxies it on by creating a new HttpWebRequest, as you'll see in the code. It works great for most sites/links, but for some this error comes up. One key bit of the code copies the HTTP header settings from the browser's request into the request it sends out to the site, including this header attribute:

        case "If-Modified-Since":
            request.IfModifiedSince = DateTime.Parse(listenerContext.Request.Headers[key]);
            break;

    I'm not sure if the issue is something to do with how it mimics this aspect of the request, and then what happens when the result comes back. I get the issue, for example, from http://en.wikipedia.org/wiki/Main_Page. Thanks
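
    A 304 is not a failure - it is the server saying the browser's cached copy (per that forwarded If-Modified-Since header) is still valid - but HttpWebRequest raises a WebException for any non-2xx status. A proxy should catch that and relay the response instead of aborting; a sketch:

        HttpWebResponse response;
        try
        {
            response = (HttpWebResponse)request.GetResponse();
        }
        catch (WebException ex)
        {
            if (ex.Response == null) throw;              // a real network failure
            response = (HttpWebResponse)ex.Response;     // 304 etc. still carry a usable response
        }
        // copy response.StatusCode and headers back to listenerContext as usual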

  • How to do parallel processing in a Unix shell script?

    - by Bikram Agarwal
    I have a shell script that transfers a build.xml file to a remote Unix machine (devrsp02) and executes the Ant task wldeploy on that machine. Now, this wldeploy task takes around 15 minutes to complete, and while it is running, the last line at the Unix console is "task {some digit} initialized". Once the task is complete, we get a "task Completed" message, and only then is the next task in the script executed. But sometimes there might be a problem with the WebLogic domain and the deployment might be failing internally, with no effect on the status of the wldeploy task: the Unix console will still be stuck at "task {some digit} initialized". The error of the deployment gets logged in a file called output.a. So, what I want now is to start a time counter before running wldeploy; if wldeploy runs for more than 15 minutes, the following command should be run:

        tail -f output.a    # without terminating the wldeploy

    or

        cat output.a        # after terminating the wldeploy forcefully

    The point to note here is that I can't simply run the wldeploy task in the background, because then the user won't get to know when the task is complete, which is crucial for this script. Could you please suggest anything to achieve this?
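
    The backgrounding can be kept as an internal detail: the script still blocks in the foreground, polling the background PID, so the user sees a definite outcome either way. A sketch (900 s = 15 minutes; the exact wldeploy invocation is assumed from the description):

        ant wldeploy > wldeploy.out 2>&1 &
        pid=$!
        waited=0
        while kill -0 $pid 2>/dev/null; do
            if [ $waited -ge 900 ]; then
                kill $pid            # the deployment is stuck - stop it
                cat output.a         # show the real deployment error
                exit 1
            fi
            sleep 10
            waited=`expr $waited + 10`
        done
        wait $pid                    # propagate wldeploy's exit status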

  • Running migrations on the server when deploying with Capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a Git repository. What is the easiest way to do this?

    Update - here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError,
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
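
    Capistrano runs deploy:migrate on whichever host holds the :db role, so role :db, "localhost" tells it to SSH to the machine running cap - hence the connection refused. Pointing the role at the server makes the migration run on the box that can actually reach MySQL; a sketch:

        role :db, domain, :primary => true    # migrations run on the server itself
        set :rails_env, "production"
        after "deploy", "deploy:migrate"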

  • How do I generate a connection reset programmatically?

    - by Brock Adams
    Hi, I'm sure you've seen the "the connection was reset" message displayed when trying to browse web pages. (The text is from Firefox; other browsers differ.) I need to generate that message/error/condition on demand, to test workarounds. So, how do I generate that condition programmatically - that is, how do I generate a TCP RST from PHP, or one of the other web-app languages? Caveats and conditions:

    - It cannot be a general IP block; the test client must still be able to see the test server when not triggering the condition.
    - Ideally, it would be done at the web-application level (Python, PHP, ColdFusion, JavaScript, etc.). Access to routers is problematic, and access to the Apache config is a pain.
    - Ideally, it would be triggered by fetching a specific web page.
    - Bonus if it works on a standard, commercial web host.

    Update: sending an RST is not enough to cause this condition; see my partial answer below. I have a solution that works on a local machine and now need to get it working on a remote host.
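
    For reference, the usual building block (quite possibly behind the partial answer mentioned above) is SO_LINGER with a zero linger time: close() then aborts the connection with an RST instead of a FIN. Whether the browser actually reports "connection was reset" also depends on when in the exchange the abort happens, which matches the note that RST alone is not enough. A sketch of a minimal test server that resets after reading the request (the port is arbitrary):

        import socket, struct

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(('', 8080))          # arbitrary test port
        srv.listen(1)
        conn, addr = srv.accept()
        conn.recv(1024)               # read the browser's request
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack('ii', 1, 0))   # l_onoff=1, l_linger=0
        conn.close()                  # emits RST instead of a graceful FIN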

  • SubSonic woes in a VS2008 add-in

    - by Michael Smit
    Hi, I am writing a VS2008 add-in that connects to a remote database, blah blah. I am having a problem with the app.config in this project. When I use SubSonic in my code, it complains that it cannot find the SubSonicServer section. This is because the .config file cannot be found, which appears to be a path problem: the add-in is a DLL running in the context of VS2008, and the working directory is C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. Is there a way to get the app.config to deploy properly with the application so my add-in (and SubSonic) can find what it needs in the .config file, or is there a way to get SubSonic to work without the .config? I am very experienced with SubSonic projects now, but only WinForms, web, web service and WPF applications; this is the first time I have tried to use SubSonic in a VS2008 add-in project. I also have AppSettings in the config file which the ConfigurationManager cannot read, because it cannot see the .config file. It's 2 AM now and my brain is tired of trying to figure this one out. Hopefully there is an answer when I wake up :) TIA
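
    An add-in DLL runs inside devenv.exe, so ConfigurationManager reads devenv.exe.config rather than your file. One workaround is to open the DLL's own .config by path and pull the values out explicitly; a sketch (how the connection string is then handed to SubSonic varies by version, so that last step is an assumption):

        var map = new ExeConfigurationFileMap
        {
            ExeConfigFilename = Assembly.GetExecutingAssembly().Location + ".config"
        };
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
            map, ConfigurationUserLevel.None);
        string conn = config.ConnectionStrings
                            .ConnectionStrings["SubSonicServer"].ConnectionString;
        // feed 'conn' to SubSonic programmatically rather than via app.config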

  • Java SSH2 libraries in depth: Trilead/Ganymed/Orion [/other?]

    - by Bernd Haug
    I have been searching for a pure Java SSH library to use for a project. The single most important feature is that it has to work with command-line git, but remote-controlling other command-line tools is also important. A pretty common choice, e.g. used in the IntelliJ IDEA git integration (which works very well), seems to be Trilead SSH2; looking at their website, though, it is not being maintained any more. Trilead seems to have been a fork of Ganymed SSH2, an ETH Zurich project that didn't see releases for a while but recently had a release by its new owner, Christian Plattner. There is another actively maintained fork of that code base, Orion SSH, with an even more recent release, but it seems to get mentioned online much less than the other two forks. Has anybody here worked with either (or, if possible, both) of Ganymed and Orion, and could you kindly describe the development experience? Accuracy of documentation (existence of documentation?), stability, bugginess - all of these would be highly interesting to me. Performance is not so important for my current project. If there is another pure-Java SSH implementation that should be used instead, please feel free to mention it, but please don't just drop a name - describe your judgment from actual experience. Sorry if this question may seem a bit "do my homework"-y, but I've really searched for reviews; everything out there seems to be either a listing of implementations or short "use this! it's great!" snippets.

  • svnsync loses revision properties although hook is installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob and svnsync (it needs to go from inside to outside of a firewall, so no post-commit hook is possible). We installed a pre-revprop-hook just as told. Everything seems to work fine, except that it doesn't - e.g. when manually executing the sync:

        # svnsync --non-interactive sync file://<path-to-mirror> \
              --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking for the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
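
    Things worth verifying on the mirror, as a sketch: the hook must be named exactly pre-revprop-change (a file called pre-revprop-hook would never be invoked), must be executable, and must exit 0 - and note that revision properties only show up when queried with --revprop:

        ls -l <path-to-mirror>/hooks/pre-revprop-change    # exists? executable?
        svn proplist --revprop -r 19817 file://<path-to-mirror>
        svn propget svn:author --revprop -r 19817 file://<path-to-mirror>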

  • PHP cURL login not working

    - by Massimo Zampieri
    Hi, I have a problem with cURL. I looked at an old post (Remote Login not Working With Curl), but it did not work for me. I followed baba's advice, but the code still enters the if statement. Sorry for my bad English. Can anyone help me? This is the code:

        $url = "http://hipfile.com/";
        $urllog = "http://hipfile.com/login.html";
        $postdata = "login=bnnoor&password=########&op=login";

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_USERAGENT,
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_REFERER, $urllog);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata);
        curl_setopt($ch, CURLOPT_POST, 1);

        $result = curl_exec($ch);

        if (!$result) {
            $http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch); // make sure we close any current curl sessions
            die($http_code . ' Unable to connect to server. Please come back later.');
        }

        echo $result;
        curl_close($ch);
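
    One gap compared with a real browser: there is no cookie handling, so any session cookie the site sets at login is never stored or sent back, and every follow-up request looks logged out. A sketch of the options usually added (the jar path is arbitrary; POSTing to the login URL rather than the homepage is also worth trying):

        curl_setopt($ch, CURLOPT_COOKIEJAR,  '/tmp/hipfile_cookies.txt'); // store cookies on close
        curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/hipfile_cookies.txt'); // send them on requests
        curl_setopt($ch, CURLOPT_URL, $urllog);                           // post to the login page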

  • 401 Unauthorized returned on GET request (https) with correct credentials

    - by Johnny Grass
    I am trying to log in to my web app using HttpWebRequest, but I keep getting the following error:

        System.Net.WebException: The remote server returned an error: (401) Unauthorized.

    Fiddler has the following output:

        Result  Protocol  Host          URL
        200     HTTP      CONNECT       mysite.com:443
        302     HTTPS     mysite.com    /auth
        401     HTTP      mysite.com    /auth

    This is what I'm doing:

        // to ignore SSL certificate errors
        public bool AcceptAllCertifications(object sender,
            System.Security.Cryptography.X509Certificates.X509Certificate certification,
            System.Security.Cryptography.X509Certificates.X509Chain chain,
            System.Net.Security.SslPolicyErrors sslPolicyErrors)
        {
            return true;
        }

        try
        {
            // request
            Uri uri = new Uri("https://mysite.com/auth");
            HttpWebRequest request = WebRequest.Create(uri) as HttpWebRequest;
            request.Accept = "application/xml";

            // authentication
            string user = "user";
            string pwd = "secret";
            string auth = "Basic " + Convert.ToBase64String(
                System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
            request.Headers.Add("Authorization", auth);
            ServicePointManager.ServerCertificateValidationCallback =
                new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);

            // response
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            // display
            Stream dataStream = response.GetResponseStream();
            StreamReader reader = new StreamReader(dataStream);
            string responseFromServer = reader.ReadToEnd();
            Console.WriteLine(responseFromServer);

            // cleanup
            reader.Close();
            dataStream.Close();
            response.Close();
        }
        catch (WebException webEx)
        {
            Console.Write(webEx.ToString());
        }

    I am able to log in to the same site with no problem using ASIHTTPRequest in a Mac app, like this:

        NSURL *login_url = [NSURL URLWithString:@"https://mysite.com/auth"];
        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:login_url];
        [request setDelegate:self];
        [request setUsername:name];
        [request setPassword:pwd];
        [request setRequestMethod:@"GET"];
        [request addRequestHeader:@"Accept" value:@"application/xml"];
        [request startAsynchronous];
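
    The Fiddler trace is the clue: the credentialed request comes back 302, and the follow-up request to the redirect target then fails with 401 - a hand-built Authorization header is not replayed across redirects. Attaching real credentials instead lets HttpWebRequest answer the challenge on every hop; a sketch:

        var credentials = new CredentialCache();
        credentials.Add(new Uri("https://mysite.com/"), "Basic",
                        new NetworkCredential(user, pwd));
        request.Credentials = credentials;
        request.PreAuthenticate = true;   // send Basic up front, not only after a 401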

  • How to set up Continuous Integration and Continuous Deployment for Django projects?

    - by ycseattle
    Hello, I am researching how to set up CI and continuous deployment for a small team project: a Django-based web application. Here are the needs:

    1. Developers check code into a hosted SVN server (unfuddle.com).
    2. A CI server detects the new checkin, checks out the source, builds, and runs functional tests.
    3. If all tests pass, it deploys the code to the web server on Amazon EC2.

    For now, the CI server is also responsible for running the functional tests. I have figured out that I can use Hudson as the CI server, Selenium to run functional tests, and Fabric to deploy the build to the remote web server in the Amazon cloud. I am new to Django development and not very familiar with open-source tools. My questions:

    1. I can find some information on integrating Hudson with Selenium, but I couldn't find much on integrating Fabric with Hudson. Is this setup viable? Do you see problems?
    2. How do I integrate and deploy database changes? Most likely, in the early stage we will change the database schema very often along with code changes. I used to use Visual Studio, whose database project made deployment very simple, and I wonder if there is an established, well-supported way to do that here.

    Thanks!!
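
    On question 1: Hudson needs no dedicated Fabric plugin - a free-style job with an "Execute shell" build step stops at the first non-zero exit, so the deploy only runs if the tests pass. A sketch (the host and task names are assumptions):

        # Hudson "Execute shell" build step
        python manage.py test
        fab -H ec2-xx-xx.compute-1.amazonaws.com deploy

    On question 2: the usual Django answer of that era is South, so schema changes live in versioned migration files and are applied during the deploy task, e.g.

        # fabfile.py (sketch; paths are assumptions)
        def deploy():
            run('cd /srv/app && svn up')                      # pull the tested revision
            run('cd /srv/app && python manage.py migrate')    # apply South migrations
            run('touch /srv/app/apache/django.wsgi')          # reload the app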
