Search Results

Search found 18765 results on 751 pages for 'custom commands'.

Page 612/751

  • Tomcat deploy: make included scripts executable

    - by AlexS
    I'm developing a web application (for Tomcat) using NetBeans on Windows 7. For the application to run I need to run an install script once. This script (*.bat for Windows and *.sh for Linux) is included in my WAR file under WEB-INF. Every time I deploy the WAR file and want to run the script on Linux, I first have to call chmod +x install.sh. Is there a way to make this script executable by default? I don't want to have to execute extra commands after the deploy just to make the script executable. For clarification: I'm not new to Linux and I know how to set executable rights on files. That's not the problem. My problem is: what do I have to do so that this script is executable right after Tomcat has deployed (unpacked) my *.war file? If I were using Linux for development as well, I would try to set the rights accordingly in my sources (maybe I'll try it when I have a little more spare time). But I am using Windows and NetBeans. Are there any attributes I can set to achieve my goal, or is it possible to achieve this using Ant? By the way: are there any security-related issues with this approach? The script looks for the java executable and calls a Java-based GUI installer...
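    One possible approach, offered as a minimal sketch rather than anything from the original question: a ServletContextListener that marks the script executable when the webapp starts, so no manual chmod is needed after Tomcat unpacks the WAR. The package name, class name, and script path below are assumptions.

        package com.example.deploy;  // hypothetical package

        import java.io.File;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // Registered in web.xml (or via @WebListener on Servlet 3.0+); runs once per deployment.
        public class InstallScriptPermissionListener implements ServletContextListener {

            public void contextInitialized(ServletContextEvent sce) {
                // Resolve the script inside the unpacked WAR; the exact path is an assumption.
                String path = sce.getServletContext().getRealPath("/WEB-INF/install.sh");
                if (path != null) {
                    File script = new File(path);
                    // setExecutable is a no-op on Windows and sets the x bit on Linux (Java 6+).
                    if (script.exists() && !script.setExecutable(true)) {
                        sce.getServletContext().log("Could not mark " + path + " executable");
                    }
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // nothing to clean up
            }
        }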

    Read the article

  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me all of the bugs removed between two revisions of the system that I am working on. Unfortunately, the resulting report does not show any bug details. It appears that convertXmlToText throws away all bugs that are dead (aka inactive)... the exact set of bugs that I'd like to see. Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory
        seq  version  time           classes  NCSS   added  newCode  fixed  removed  retained  dead  active
        0    r39764   1271169398000  438      74069  0      64       0      0        0         0     64
        1    r39921   1271186932000  441      74333  0      0        22     0        42        0     42
        2    r40149   1271185876000  449      74636  0      0        3      0        39        22    39
        3    r40344   1271180332000  452      74789  0      0        7      0        32        25    32
        4    r40558   1271179612000  452      74806  0      0        1      0        31        32    31
        5    r40793   1271178818000  464      75610  0      0        20     0        11        33    11
        6    r41016   1271176154000  467      75712  0      0        4      0        7         53    7
        7    r41303   1271175616000  481      76931  0      0        7      0        0         57    0
        8    r41558   1271175026000  486      77793  0      0        0      0        0         64    0

    What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence # 0). Below is the command line that I am using to generate the HTML report:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html

    Read the article

  • iTextSharp Overlay Image

    - by pennylane
    Hi guys, I have an instance where I have a logo image as part of some artwork. If a user uploads a new logo, I have a form field which is larger than the default logo, and I use that form field to position the new image. The problem is that I need to set the background colour of that form field to white so that it covers the old logo in the event that the new image is smaller than the old logo. What I have done is:

        foreach (var imageField in imageReplacements)
        {
            fields.SetFieldProperty(imageField.Key, "bgcolor", iTextSharp.text.Color.WHITE, null);
            fields.RegenerateField(imageField.Key);
            PdfContentByte overContent = stamper.GetOverContent(imageField.Value.PageNumber);
            float[] logoArea = fields.GetFieldPositions(imageField.Key);
            if (logoArea != null)
            {
                iTextSharp.text.Rectangle logoRect = new iTextSharp.text.Rectangle(logoArea[1], logoArea[2], logoArea[3], logoArea[4]);
                var logo = iTextSharp.text.Image.GetInstance(imageField.Value.Location);
                if (logo.Width >= logoRect.Width || logo.Height >= logoRect.Height)
                {
                    logo.ScaleToFit(logoRect.Width, logoRect.Height);
                }
                logo.Alignment = iTextSharp.text.Image.ALIGN_LEFT;
                logo.SetAbsolutePosition(logoRect.Left, logoArea[2] + (logoRect.Height - logo.ScaledHeight) / 2);
                // left: logoArea[3] - logo.ScaledWidth + (logoRect.Width - logo.ScaledWidth) / 2
                overContent.AddImage(logo);
            }
        }

    The problem with this is that the background colour of the field is set to white and the image then doesn't appear. If I remove the SetFieldProperty and RegenerateField calls, the image replacement works fine. Is there a way to set a stacking order on layers?
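    One way around the field regeneration, offered only as a hedged sketch and not as the original poster's solution: skip SetFieldProperty/RegenerateField and instead paint a white rectangle on the over-content just before adding the image, so the old logo is covered and the new image still sits on top. The names overContent, logoRect and logo are the ones used in the loop above.

        // Hedged sketch: fill the field area white, then add the image; content added
        // later to the same PdfContentByte is drawn on top of earlier content.
        overContent.SaveState();
        overContent.SetColorFill(iTextSharp.text.Color.WHITE);
        overContent.Rectangle(logoRect.Left, logoRect.Bottom, logoRect.Width, logoRect.Height);
        overContent.Fill();
        overContent.RestoreState();
        overContent.AddImage(logo);   // added after the fill, so the image stacks above it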

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the view framebuffer on each pass through my program and then rendering the background image first, followed by the rest of my scene. I was wondering how I would go about rendering the background image only once, and only wiping and re-rendering the rest of the scene. I tried using a separate buffer but I'm not sure how to present this new buffer to the render buffer.

        // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the
        // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport
        // as well as the dimensions etc and so I'm setting it for each frame in case we want to change it
        glViewport(0, 0, screenBounds.size.width, screenBounds.size.height);

        // Clear the screen. If we are going to draw a background image then this clear is not necessary
        // as drawing the background image will destroy the previous image
        glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Setup how the images are to be blended when rendered. This could be changed at different points during your
        // render process if you wanted to apply different effects
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        switch (currentViewInt) {
            case 1: {
                [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"];
                // Other Rendering Code
            }
        }

        // Bind to the renderbuffer and then present this image to the current context
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.
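    One detail worth checking, offered as an assumption about the setup rather than something stated in the question: by default a CAEAGLLayer's backbuffer is not guaranteed to survive presentRenderbuffer:, so drawing the background only once can only work if retained backing is enabled when the layer is configured, for example:

        // Hedged sketch: enable retained backing on the CAEAGLLayer so the previous frame's
        // contents persist across presentRenderbuffer: calls (at some memory/performance cost).
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8,         kEAGLDrawablePropertyColorFormat,
            nil];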

    Read the article

  • git changes modification time of files

    - by tanascius
    In the Git FAQ I read that Git sets the current time as the timestamp on every file it modifies, but only those. However, I tried this command sequence (EDIT: added the complete command sequence):

        $ git init test && cd test
        Initialized empty Git repository in d:/test/.git/

        $ touch filea fileb
        $ git add .
        $ git commit -m "first commit"
        [master (root-commit) fcaf171] first commit
         0 files changed, 0 insertions(+), 0 deletions(-)
         create mode 100644 filea
         create mode 100644 fileb

        $ ls -l > filea
        $ touch fileb -t 200912301000
        $ ls -l
        total 1
        -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

        $ git status -a
        warning: LF will be replaced by CRLF in filea
        # On branch master
        warning: LF will be replaced by CRLF in filea
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   filea
        #

        $ git checkout .
        $ ls -l
        total 0
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect that timestamp to be unchanged. Are my commands causing a problem? Maybe it is possible to do something like a git checkout . --modified instead? I am using git version 1.6.5.1.1367.gcd48 under mingw32/Windows XP.

    Read the article

  • How to enter text into the "Write something..." box on Facebook and click Submit

    - by Peter Payne
    Hello, I am trying to manipulate Facebook pages in various ways, using JavaScript browser elements. I need to be able to insert some text into the top "Type something..." box that shows on my site's fan page (or alternately, "click into" the field so I can type the text using GUI scripting), then click the "submit" button as if I'd done it by hand. It's tricky since the page is very Ajax-heavy and I can't find the names of the elements I need to manipulate, let alone how to manipulate them, as they're not the traditional form fields I'm used to. Can anyone help me figure out how to do this with JavaScript commands, which I'd be calling from AppleScript on the Mac? Many thanks in advance.

    UPDATE: Thanks for the comments below. Believe me, I am not trying to do anything spammy or douchey; I'm mainly posting links to products that have gone live on the page's Facebook page, but doing it during the business day when people are on rather than at strange hours of the day. I am located in Japan, so my sleep period is right when people are using FB. The solution I came up with for clicking the button came from using UI Browser, an outstanding tool if you're trying to script on the Mac. The script that clicked the button for me was:

        tell application "Safari"
            activate
            set thename to name of (get current tab of window 1)
            delay 3
            tell application "System Events"
                tell process "Safari"
                    try
                        click button "Share" of group 1 of group 2 of list 3 of group 9 of UI element 1 of scroll area 1 of group 3 of window thename -- this one works on the mini?
                    on error
                        click button "Share" of group 1 of group 2 of list 3 of group 9 of UI element 1 of scroll area 1 of group 2 of window thename -- did not work
                    end try
                end tell
            end tell
        end tell

    Hope this is useful to anyone.

    Read the article

  • How to debug a Gruntfile with breakpoints using node-inspector?

    - by Kris Hollenbeck
    So I have spent the past couple of days trying to get this to work, with no luck. Most of the solutions I have found seem to work okay for debugging node applications, but I haven't had much luck debugging Grunt standalone. I would like to be able to set breakpoints in my Gruntfile and step through the code with either the browser or an IDE. I have tried the following:

    - Debugging using the IntelliJ IDE
    - Using the Grunt console (process finished with exit code 6)
    - Debugging with Nodeclipse (this sort of works, but it doesn't hit the breakpoints set in Eclipse and is not very intuitive)
    - Debugging using node-inspector (this one also sort of works: I can step through a little way using F11 and F10 in Chrome, but eventually it just crashes, and using F8 to skip to the breakpoint never works)

    ERROR MESSAGE USING NODE-INSPECTOR

    So currently node-inspector feels like it has gotten me the closest to what I want. To get there I ran the following commands from my Grunt directory:

        grunt node-inspector
        node --debug-brk Gruntfile.js

    And then from there I went to localhost:8080/debug?port=5858 to debug my Gruntfile.js. But as mentioned above, as soon as I hit F8 to skip to the breakpoint it crashes with the above error. Has anybody had any success using this method to debug a Gruntfile? So far my search efforts have not found a very well documented way of doing this, so hopefully this will be useful or beneficial information for future users. Also, I am using Windows 7, by the way. Thanks in advance.
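    A hedged suggestion, not part of the original question: a Gruntfile is not a standalone script, so instead of running node --debug-brk Gruntfile.js, it sometimes works better to run the grunt CLI script itself under the debugger with a task name and then attach node-inspector. The paths below are typical global-install locations and are assumptions.

        # Hedged sketch: debug the grunt CLI rather than the Gruntfile directly.
        # Check your own grunt path with `which grunt` (or `where grunt` on Windows).

        # Terminal 1: start grunt under the debugger, paused on the first line.
        node --debug-brk "$(which grunt)" mytask

        # On Windows the global grunt-cli script typically lives here (path assumed):
        #   node --debug-brk "%APPDATA%\npm\node_modules\grunt-cli\bin\grunt" mytask

        # Terminal 2: attach the inspector, then open http://127.0.0.1:8080/debug?port=5858
        node-inspector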

    Read the article

  • not able to run c/cpp execs in eclipse cdt

    - by user1658323
    I installed Eclipse and then CDT on an Ubuntu system recently and was trying to create my first runnable C/C++ project. I installed g++ as well, and then created the first executable C++ 'Hello World' project; some files are created... then some issues:

    1) Even though Build Automatically is selected, I have to go to the project and do a Build Project to build it manually, and I have to do this every time I make a change.

    2) After building manually, there are some new folders created with Binaries and Debug files, and I can see g++ commands being executed in the console. The project binary is output to both the Debug and Binaries folders, but I am not able to run it through the green play button or any other way in Eclipse. Even Run Configuration is not showing any option for a C/C++ project, though I can go to a terminal and run the binary myself with ./. But I want to be able to run and debug this through Eclipse. Please help me fix this problem, as I really love Eclipse and have some C/C++ assignments coming up soon.

    Console info on doing a manual project build:

        **** Build of configuration Debug for project qwe ****
        make all
        Building file: ../src/qwe.cpp
        Invoking: GCC C++ Compiler
        g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/qwe.d" -MT"src/qwe.d" -o "src/qwe.o" "../src/qwe.cpp"
        Finished building: ../src/qwe.cpp
        Building target: qwe
        Invoking: GCC C++ Linker
        g++ -o "qwe" ./src/qwe.o
        Finished building target: qwe
        Build Finished

    Read the article

  • Forcing Kernel::method_name to be called in Ruby

    - by Peter
    I want to add a foo method to Ruby's Kernel module, so I can write foo(obj) anywhere and have it do something to obj. Sometimes I want a class to override foo, so I do this:

        module Kernel
          private  # important; this is what Ruby does for commands like 'puts', etc.

          def foo x
            if x.respond_to? :foo
              x.foo  # use overwritten method.
            else
              # do something to x.
            end
          end
        end

    This is good, and works. But what if I want to use the default Kernel::foo in some other object that overwrites foo? Since I've got an instance method foo, I've lost the original binding to Kernel::foo.

        class Bar
          def foo
            # override behaviour of Kernel::foo for Bar objects.
            foo(3)          # calls Bar::foo, not the desired call of Kernel::foo.
            Kernel::foo(3)  # can't call Kernel::foo because it's private.
            # question: how do I call Kernel::foo on 3?
          end
        end

    Is there any clean way to get around this? I'd rather not have two different names, and I definitely don't want to make Kernel::foo public.
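    One approach that might work, offered as a sketch rather than something from the original question: grab Kernel's implementation as an UnboundMethod and bind it to the receiver, which sidesteps the private-call restriction. foo here is the method defined above.

        class Bar
          def foo
            # Hedged sketch: call Kernel's original foo on 3 from inside Bar#foo.
            # Module#instance_method also retrieves private instance methods, and
            # calling the bound Method object ignores visibility.
            Kernel.instance_method(:foo).bind(self).call(3)
          end
        end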

    Read the article

  • Artisan unable to access environment variables from $_ENV

    - by hansn
    Any artisan command I enter into the command line throws this error:

        $ php artisan
        <? return array( 'DB_HOSTNAME' => 'localhost', 'DB_USERNAME' => 'root', 'DB_NAME' => 'pc_booking', 'DB_PASSWORD' => 'secret', );
        PHP Warning:  Invalid argument supplied for foreach() in /home/martin/code/www/pc_backend/vendor/laravel/framework/src/Illuminate/Config/EnvironmentVariables.php on line 35
        {"error":{"type":"ErrorException","message":"Undefined index: DB_HOSTNAME","file":"\/home\/martin\/code\/www\/pc_backend\/app\/config\/database.php","line":57}}

    This happens only on my local development system, where I recently installed Apache and PHP. On my production system on a shared host, artisan commands work just fine. The prod system has its own .env.php, but other than that the code should be identical. Relevant files:

    .env.local.php:

        <? return array(
            'DB_HOSTNAME' => 'localhost',
            'DB_USERNAME' => 'root',
            'DB_NAME'     => 'pc_booking',
            'DB_PASSWORD' => 'secret',
        );

    app/config/database.php:

        <?php
        return array(
            'fetch'   => PDO::FETCH_CLASS,
            'default' => 'mysql',
            'connections' => array(
                'mysql' => array(
                    'driver'    => 'mysql',
                    'host'      => $_ENV['DB_HOSTNAME'],
                    'database'  => $_ENV['DB_NAME'],
                    'username'  => $_ENV['DB_USERNAME'],
                    'password'  => $_ENV['DB_PASSWORD'],
                    'charset'   => 'utf8',
                    'collation' => 'utf8_unicode_ci',
                    'prefix'    => '',
                ),
            ),
            'migrations' => 'migrations',
        );

    The $_ENV array is populated as expected on the website; the problem appears to be with artisan only.
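    One thing that stands out, offered as a guess rather than a confirmed diagnosis: the error output begins with the raw contents of .env.local.php, which suggests the CLI PHP on the dev box has short_open_tag disabled, so the leading <? is not executed and the array is never returned. A hedged fix would be to use the full open tag:

        <?php
        // .env.local.php -- full "<?php" tag so the file parses even when short_open_tag = Off
        return array(
            'DB_HOSTNAME' => 'localhost',
            'DB_USERNAME' => 'root',
            'DB_NAME'     => 'pc_booking',
            'DB_PASSWORD' => 'secret',
        );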

    Read the article

  • Pass Memory in GB Using Import-CSV Powershell to New-VM in Hyper-V Version 3

    - by PowerShell
    I created the function below to pass memory from a CSV file to create a VM in Hyper-V version 3:

        Function Install-VM
        {
            param
            (
                [Parameter(Mandatory=$true)]
                [int64]$Memory = 512MB
            )
            $VMName = "dv.VMWIN2K8R2-3.Hng"
            $vmpath = "c:\2012vms"
            New-VM -MemoryStartupBytes ([int64]$memory*1024) -Name $VMName -Path $VMPath -Verbose
        }

        Import-Csv "C:\2012vms\Vminfo1.csv" | ForEach-Object {
            Install-VM -Memory ([int64]$_.Memory)
        }

    But when I try to create the VM there is a mismatch in the memory parameter passed from Import-Csv, and I receive the error below:

        VERBOSE: New-VM will create a new virtual machine "dv.VMWIN2K8R2-3.Hng".
        New-VM : 'dv.VMWIN2K8R2-3.Hng' failed to modify device 'Memory'. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF)
        Invalid startup memory amount assigned for 'dv.VMWIN2K8R2-3.Hng'. The minimum amount of memory you can assign to a virtual machine
        is '8' MB. (Virtual machine ID CE8D36CA-C8C6-42E6-B5C6-2AA8FA15B4AF)
        A parameter that is not valid was passed to the operation.
        At line:48 char:9
        +         New-VM -ComputerName $HyperVHost -MemoryStartupBytes ([int64]$memory*10 ...
        +         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : InvalidArgument: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [New-VM], VirtualizationOperationFailedException
            + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.NewVMCommand

    Also, please note that in the CSV file I'm passing memory as 1, 2, 4, etc. (as shown below) and converting it to MB by multiplying by 1024 later:

        Memory
        1

    Can anyone help me out on how to format and pass the memory details to the function?
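    A hedged observation, not from the original post: if the CSV column holds gigabytes (1, 2, 4), multiplying by 1024 yields only about a kilobyte, far below Hyper-V's 8 MB minimum, which matches the error text. A sketch of the conversion using PowerShell's byte-unit literals follows; the renamed parameter is just for clarity.

        # Hedged sketch: treat the CSV value as GB and convert to bytes with the 1GB literal.
        Function Install-VM
        {
            param
            (
                [Parameter(Mandatory=$true)]
                [int64]$MemoryGB
            )
            $VMName = "dv.VMWIN2K8R2-3.Hng"
            $VMPath = "c:\2012vms"
            New-VM -MemoryStartupBytes ($MemoryGB * 1GB) -Name $VMName -Path $VMPath -Verbose
        }

        Import-Csv "C:\2012vms\Vminfo1.csv" | ForEach-Object {
            Install-VM -MemoryGB ([int64]$_.Memory)
        }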

    Read the article

  • ssh & script problem

    - by Nishanth
    I am having a strange problem while doing ssh. I am not sure where the term Unmatched ` is coming from. What I need to do is run script, which logs what I am doing on the terminal to a text file. After ssh:

        Sun Microsystems Inc.   SunOS 5.8       Generic Patch   October 2001
        This is /etc/motd, last updated 3 Feb 2003.

        To learn about the UCS system and other aspects of computing at UL-Lafayette
        visit our home page http://helpdesk.louisiana.edu/ .

        For more information about system use, contact the Help Desk, Stephens Hall,
        Room 201, 482-5516 (x25516), during normal UL office hours; or send e-mail
        to [email protected].

        ATTENTION: Unsecure Telnet and FTP will be turned off soon. Please make
        arrange to use ssh or sftp. Putty(telnet) and WinSCP(ftp) would be a good
        replacement.

        Unmatched `
        d13.ucs.louisiana.edu% bash
        bash-2.04$ script -a myInformation.txt
        Script started, file is myInformation.txt
        Unmatched `
        d13.ucs.louisiana.edu%

    When I start script with the name myInformation.txt, you can see the message I am getting: "Script started, file is myInformation.txt". But again I get that Unmatched ` message and it drops out of bash, as you can notice. What is the problem? Any insights would be greatly appreciated.

    Note: the file myInformation.txt is being created, but nothing goes into it. I have even tried running certain commands like ls and then exiting the script with Ctrl+D, but when I open the file, nothing is there.

    Read the article

  • Low Level Console Input

    - by Soulseekah
    I'm trying to send commands to the input of a cmd.exe application using the low-level read/write console functions. I have no trouble reading the text (scraping) using the ReadConsole...() and WriteConsole() functions after attaching to the process console, but I haven't figured out how to write, for example, "dir" and have the console interpret it as a sent command. Here's a bit of my code:

        CreateProcess(NULL, "cmd.exe", NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
        AttachConsole(pi.dwProcessId);
        strcpy(buffer, "dir");
        WriteConsole(GetStdHandle(STD_INPUT_HANDLE), buffer, strlen(buffer), &charRead, NULL);

    The STARTUPINFO attributes of the process are all set to zero, except, of course, the .cb attribute. Nothing changes on the screen; however, I'm getting an Error 6: Invalid Handle returned from WriteConsole on STD_INPUT_HANDLE. If I write to STD_OUTPUT_HANDLE I do get my "dir" written on the screen, but of course nothing happens. I'm guessing SetConsoleMode() might be of help, but I've tried many mode combinations and nothing helped. I've also created a quick console application that waits for input (scanf()) and echoes back whatever goes in; that didn't work either. I've also tried typing into the scanf() prompt and then peeking into the input buffer using PeekConsoleInput(); it returns 0, but the INPUT_RECORD array is empty. I'm aware that there is another way around this, using WriteConsoleInput() to directly inject INPUT_RECORD structured events into the console, but this would be way too long; I'd have to send each keypress into it. I hope the question is clear. Please let me know if you need any further information. Thanks for your help.
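    A hedged sketch of one alternative, not from the original post: after AttachConsole, open the attached console's input buffer explicitly via CONIN$ rather than relying on GetStdHandle, whose standard handles still refer to the calling process's own stdin. Keystrokes still have to go in as INPUT_RECORDs, so this only replaces the handle part of the problem; the helper names are made up for illustration.

        #include <windows.h>
        #include <string.h>

        /* Hedged sketch: open the attached console's input buffer directly. */
        static HANDLE OpenAttachedConsoleInput(void)
        {
            return CreateFileA("CONIN$", GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
        }

        /* Queue one line of text as key-down events followed by Enter. WriteConsole only
           writes to the screen buffer, never to the input queue, hence WriteConsoleInput. */
        static void SendLine(HANDLE hConIn, const char *text)
        {
            INPUT_RECORD rec;
            DWORD written;
            size_t i, len = strlen(text);

            ZeroMemory(&rec, sizeof(rec));
            rec.EventType = KEY_EVENT;
            rec.Event.KeyEvent.bKeyDown = TRUE;
            rec.Event.KeyEvent.wRepeatCount = 1;

            for (i = 0; i <= len; ++i) {
                char c = (i < len) ? text[i] : '\r';            /* trailing Enter */
                rec.Event.KeyEvent.uChar.AsciiChar = c;
                rec.Event.KeyEvent.wVirtualKeyCode = (c == '\r') ? VK_RETURN : 0;
                WriteConsoleInputA(hConIn, &rec, 1, &written);
            }
        }

        /* usage, after AttachConsole(pi.dwProcessId):  SendLine(OpenAttachedConsoleInput(), "dir"); */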

    Read the article

  • committing to a branch that's not checked out

    - by intuited
    I'm using git to version my home directories on a couple different machines. I'd like for them to each use separate branches and both pull from a common branch. So most commits should be made to that common branch, unless something specific to that machine is being committed, in which case the commit should go to the checked out, machine-specific branch. Switching branches is clearly not a very good option in this case. It's mentioned in this post that what I want to do is impossible, but I found that answer to be rather blunt and to perhaps not take into account the possibility of using the plumbing commands. Unfortunately I don't have enough reputation to comment on that thread. I rather suspect that there is some way to do this and am hoping to save myself an hour or few of questing for the answer by just asking you good folk. So is it possible to commit to a different branch without checking that branch out first? Ideally I'd like to use the index in the same way that git commit normally does.
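    For what it's worth, here is a sketch of how the plumbing commands can do this; the branch name, file name, and commit message are made up for illustration. It builds a tree from a throwaway index and attaches the commit to the other branch with update-ref, without touching the checked-out branch or the working tree.

        # Hedged sketch: commit 'shared.conf' to branch 'common' without checking it out.
        export GIT_INDEX_FILE=.git/tmp-index
        git read-tree common                                  # start from that branch's current tree
        git update-index --add shared.conf                    # stage the working-tree file into the temp index
        tree=$(git write-tree)
        parent=$(git rev-parse common)
        commit=$(echo "update shared.conf" | git commit-tree "$tree" -p "$parent")
        git update-ref refs/heads/common "$commit"
        unset GIT_INDEX_FILE
        rm -f .git/tmp-index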

    Read the article

  • Gem Load Error about whois command and removed cache

    - by Puru puru rin..
    Hello, I'm having terrible trouble with gem. After executing this command:

        rm -f /usr/local/lib/ruby/gems/1.9.1/cache/*

    I cannot do anything. If I try, for instance:

        gem cleanup

    I get this kind of answer:

        /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `require': no such file to load -- rubygems/commands/whois (LoadError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `<top (required)>'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `require'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `<top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `load'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `block in <top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `each'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `<top (required)>'
            from <internal:gem_prelude>:235:in `require'
            from <internal:gem_prelude>:235:in `load_full_rubygems_library'
            from <internal:gem_prelude>:334:in `const_missing'
            from /usr/local/bin/gem:12:in `<main>'

    It's the same for gem -v, or just the gem command... I'm working on Snow Leopard. What do you think the best solution would be? Thanks a lot!

    Read the article

  • MySQLPython is ignoring my my.cnf file. Where does it get its information?

    - by ?????
    When I try to use MySQLPython (via SQLAlchemy) I get the error:

        File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-x86_64.egg/MySQLdb/connections.py", line 188, in __init__
          super(Connection, self).__init__(*args, **kwargs2)
        sqlalchemy.exc.OperationalError: (OperationalError) (2002, "Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)") None None

    yet no other MySQL client on my machine has this problem! My my.cnf file states:

        [client]
        port   = 3306
        socket = /tmp/mysql/mysql.sock

        [safe_mysqld]
        socket = /tmp/mysql/mysql.sock

        [mysqld_safe]
        socket = /tmp/mysql/mysql.sock

        [mysqld]
        socket = /tmp/mysql/mysql.sock
        port   = 3306

    and the mysql.sock file is, indeed, located in /tmp/mysql. I verified that ~/.my.cnf and /var/lib/mysql/my.cnf aren't overriding it. The mysql5 client program, etc., has no trouble connecting, and neither does a Groovy/Grails installation on the same machine using JDBC/MySQL:

        thrilllap-2:~ swirsky$ mysql5
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 6
        Server version: 5.1.47 Source distribution

        Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
        This software comes with ABSOLUTELY NO WARRANTY. This is free software,
        and you are welcome to modify and redistribute it under the GPL v2 license

        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+
        2 rows in set (0.00 sec)

        mysql>

    Why can't MySQLdb for Python figure this out? Where would it look if not the my.cnf files?
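    A hedged sketch of two things that sometimes resolve this, not taken from the original question: MySQLdb only consults an option file when you explicitly ask it to, and SQLAlchemy can also be pointed straight at the socket. The database name and the my.cnf path below are assumptions.

        # Hedged sketch (Python 2.6-era SQLAlchemy + MySQLdb).
        from sqlalchemy import create_engine

        # Option 1: point the driver straight at the socket you know is correct.
        engine = create_engine(
            "mysql://root@localhost/test?unix_socket=/tmp/mysql/mysql.sock")

        # Option 2: make MySQLdb actually read your my.cnf [client] section; it does
        # not read option files at all unless read_default_file is passed.
        engine = create_engine(
            "mysql://root@localhost/test",
            connect_args={"read_default_file": "/opt/local/etc/mysql5/my.cnf",
                          "read_default_group": "client"})

        print engine.execute("SELECT 1").scalar()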

    Read the article

  • git: remove 2nd commit

    - by cwolves
    I'm trying to remove the 2nd commit to a repo. At this point I could just blow away the .git dir and re-do it, but I'm curious how to do this... I've deleted commits before, but apparently never the 2nd one :)

        > git log
        commit c39019e4b08497406c53ceb532f99801793205ca
        Author: Me
        Date:   Thu Mar 22 14:02:41 2012 -0700

            Initializing registry directories

        commit 535dce28f1c68e8af9d22bc653aca426fb7825d8
        Author: Me
        Date:   Tue Jan 31 21:04:13 2012 -0800

            First Commit

        > git rebase -i HEAD~2
        fatal: Needed a single revision
        invalid upstream HEAD~2

        > git rebase -i HEAD~1

    at which point I get this in my editor:

        pick c39019e Initializing registry directories

        # Rebase 535dce2..c39019e onto 535dce2
        #
        # Commands:
        #  p, pick = use commit
        #  r, reword = use commit, but edit the commit message
        #  e, edit = use commit, but stop for amending
        #  s, squash = use commit, but meld into previous commit
        #  f, fixup = like "squash", but discard this commit's log message
        #  x, exec = run command (the rest of the line) using shell
        #
        # If you remove a line here THAT COMMIT WILL BE LOST.
        # However, if you remove everything, the rebase will be aborted.
        #

    Now my problem is that I can't just blow away this 2nd commit, since "if you remove everything, the rebase will be aborted".
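    A couple of hedged options, not from the original post, depending on whether the commit should simply disappear from the branch tip or be dropped from an editable todo list:

        # Hedged sketch. Here the 2nd commit (c39019e) is also the current HEAD.

        # Option 1: move the branch back one commit (this also discards its changes
        # from the index and working tree).
        git reset --hard HEAD~1

        # Option 2: interactive rebase that includes the root commit (needs a
        # reasonably recent git); delete the "pick c39019e ..." line in the editor
        # and the rebase proceeds with only the remaining commits.
        git rebase -i --root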

    Read the article

  • How can I unit test a PHP class method that executes a command-line program?

    - by acoulton
    For a PHP application I'm developing, I need to read the current git revision SHA, which of course I can get easily by using shell_exec or backticks to execute the git command-line client. I have put this call into a method of its very own, so that I can easily isolate and mock it for the rest of my unit tests. So my class looks a bit like this:

        class Task_Bundle
        {
            public function execute()
            {
                // Do things
                $revision = $this->git_sha();
                // Do more things
            }

            protected function git_sha()
            {
                return `git rev-parse --short HEAD`;
            }
        }

    Of course, although I can test most of the class by mocking git_sha, I'm struggling to see how to test the actual git_sha() method, because I don't see a way to create a known state for it. I don't think there's any real value in a unit test that also calls git rev-parse to compare the results? I was wondering about at least asserting that the command had been run, but I can't see any way to get a history of shell commands executed by PHP; even if I specify that PHP should use bash rather than sh, the history list comes up empty, I presume because the separate backticks executions are separate terminal sessions. I'd love to hear any suggestions for how I might test this. Or is it OK to just leave that method untested and be careful with it when the app is being maintained in future?
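    One common way to give that last method a known state, offered as a hedged sketch rather than the author's own approach: run the test inside a throwaway git repository created in setUp, so the SHA being read is fully under the test's control. The test class, helper names, and the assumption that PHPUnit and a git binary are available are all illustrative.

        <?php
        // Hedged sketch, assuming a PHPUnit 3.x-era setup and git on the PATH.
        class TaskBundleGitShaTest extends PHPUnit_Framework_TestCase
        {
            private $repo;

            protected function setUp()
            {
                // Build a throwaway repository with exactly one known commit.
                $this->repo = sys_get_temp_dir() . '/git-sha-test-' . uniqid();
                mkdir($this->repo);
                chdir($this->repo);
                shell_exec('git init -q && git config user.email t@example.com'
                    . ' && git config user.name tester'
                    . ' && touch a && git add a && git commit -q -m init');
            }

            public function testGitShaMatchesTheHeadOfTheCurrentRepository()
            {
                $full = trim(shell_exec('git rev-parse HEAD'));

                $task = new Task_Bundle();
                // git_sha() is protected, so expose it via reflection for this one test.
                $method = new ReflectionMethod('Task_Bundle', 'git_sha');
                $method->setAccessible(true);
                $short = trim($method->invoke($task));

                $this->assertRegExp('/^[0-9a-f]{4,40}$/', $short);
                $this->assertStringStartsWith($short, $full);
            }

            protected function tearDown()
            {
                chdir(sys_get_temp_dir());
                shell_exec('rm -rf ' . escapeshellarg($this->repo));
            }
        }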

    Read the article

  • Extract history from Korn shell

    - by Luc
    I am not happy about the Korn shell's history file being in a binary format. I like to "collect" some of my command lines, many of them actually, and for a long time; I'm talking about years. That doesn't seem easy in Korn because the history file is not plain text, so I can't edit it, and a lot of junk piles up in it. By "junk" I mean lines that I don't want to keep, like 'cat' or 'man'. So I added these lines to my .profile:

        fc -ln 1 9999 > ~/khistory.txt
        source ~/loghistory.sh ~/khistory.txt

    loghistory.sh contains a handful of sed and sort commands that get rid of a lot of the junk. But apparently it is forbidden to run fc in the .profile file: I can't log in whenever I do, because the shell exits right away with signal 11. So I removed that 'fc -l' line from my .profile file and added it to the loghistory.sh script, but the shell still crashes. I also tried this in my .profile:

        strings ~/.sh_history > ~/khistory.txt
        source ~/loghistory.sh

    That doesn't crash, but the output is printed with an additional, random character at the beginning of many lines. I can run 'fc -l' on the command line, but that's no good; I need to automate it. But how? How can I extract my ksh history as plain text? TIA

    Read the article

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suid the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)" and someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work. Thanks, Saimon

    Read the article

  • Use of putty in command line

    - by kij
    Hi, I'm trying to use PuTTY on the command line from a Hudson job. The command is the following one:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build just keeps running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?

    ps: I tried the SSH plugin but... not a really good plugin (pre/post build only, fail status of the launched commands not caught by Hudson, etc.)

    Thanks in advance for your help. Best regards. kij
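    A hedged suggestion, not part of the original question: putty itself is a GUI program, so when run under a service account with no interactive desktop it tends to hang exactly like this. Its command-line sibling plink accepts very similar options and writes to stdout/stderr, which Hudson can capture. A sketch of the equivalent build step:

        rem Hedged sketch: plink ships with PuTTY and is built for non-interactive use.
        rem -batch makes it fail instead of waiting for input (e.g. an unknown host key prompt).
        plink -batch -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt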

    Read the article

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP with multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:

        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }

    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:

        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();

            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck)
                {
                    cons.Add(this);
                }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line)) break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }

    The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?

    Read the article

  • A C# Refactoring Question...

    - by james lewis
    I came across the following code today and I didn't like it. It's fairly obvious what it's doing, but I'll add a little explanation here anyway: basically it reads all the settings for an app from the DB, then iterates through all of them looking for the DB version and the app version, and sets some variables to the values in the DB (to be used later). I looked at it and thought it was a bit ugly; I don't like switch statements and I hate things that carry on iterating through a list once they're finished. So I decided to refactor it. My question to all of you is: how would you refactor it? Or do you think it even needs refactoring at all? Here's the code:

        using (var sqlConnection = new SqlConnection(Lfepa.Itrs.Framework.Configuration.ConnectionString))
        {
            sqlConnection.Open();
            var dataTable = new DataTable("Settings");
            var selectCommand = new SqlCommand(Lfepa.Itrs.Data.Database.Commands.dbo.SettingsSelAll, sqlConnection);
            var reader = selectCommand.ExecuteReader();
            while (reader.Read())
            {
                switch (reader[SettingKeyColumnName].ToString().ToUpper())
                {
                    case DatabaseVersionKey:
                        DatabaseVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    case ApplicationVersionKey:
                        ApplicationVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    default:
                        break;
                }
            }

            if (DatabaseVersion == null)
                throw new ApplicationException("Colud not load Database Version Setting from the database.");

            if (ApplicationVersion == null)
                throw new ApplicationException("Colud not load Application Version Setting from the database.");
        }
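    One possible refactoring, offered purely as an illustration rather than the original author's answer: map setting keys to setter delegates so the switch disappears, and stop reading once both values have been found. The key and column-name constants are the ones referenced in the snippet above.

        // Hedged sketch: dictionary of key -> setter, with an early exit when both settings are loaded.
        var setters = new Dictionary<string, Action<Version>>(StringComparer.OrdinalIgnoreCase)
        {
            { DatabaseVersionKey,    v => DatabaseVersion    = v },
            { ApplicationVersionKey, v => ApplicationVersion = v },
        };

        using (var sqlConnection = new SqlConnection(Lfepa.Itrs.Framework.Configuration.ConnectionString))
        using (var selectCommand = new SqlCommand(Lfepa.Itrs.Data.Database.Commands.dbo.SettingsSelAll, sqlConnection))
        {
            sqlConnection.Open();
            using (var reader = selectCommand.ExecuteReader())
            {
                int remaining = setters.Count;
                while (remaining > 0 && reader.Read())
                {
                    Action<Version> set;
                    if (setters.TryGetValue(reader[SettingKeyColumnName].ToString(), out set))
                    {
                        set(new Version(reader[SettingValueColumneName].ToString()));
                        remaining--;
                    }
                }
            }
        }

        if (DatabaseVersion == null)
            throw new ApplicationException("Could not load Database Version Setting from the database.");
        if (ApplicationVersion == null)
            throw new ApplicationException("Could not load Application Version Setting from the database.");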

    Read the article

  • How to configure C# Typed Datasets when calling OracleDataAdapter.Update() on Oracle Stored Procedur

    - by John_D
    I am writing a C# Windows Forms application which calls Oracle stored procedures. I chose to use typed datasets in the application; these correctly populate various datagrids, but I am having trouble when invoking the UpdateCommand or the InsertCommand. I have manually coded these commands because a) I am using Oracle stored procedures and b) I don't trust CommandBuilder ;) I am using VS2008 and Oracle 9i. I don't have trouble executing stored procedures in SQL Server or Oracle when simply calling them with .ExecuteNonQuery; neither do I have problems executing SQL statements directly and updating the database. The problems only arise when executing the changed rows with OracleDataAdapter.Update(), and I am specifying the correct set of rows (added, changed, etc.). The main error I am getting (after a lot of experimentation with increasingly simpler SPs, finishing with just one int parameter) is "PLS-00306: wrong number or type of arguments in call to 'PROCNAME'". I have tried prefixing the Oracle parameter both with ':' and without. Suffice to say I am losing the will to live. Has anyone any more ideas I could try next? Thanks

    Read the article

  • wpf command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window and in a separate thread from the main application. As the user performs actions in the main window, the side window is updated. (For example, if you clicked page-down in the main window, a listbox in the side window would page down.) Right now the architecture for this application feels very messy and I'm sure there is a cleaner way to do it. It looks like this: the main window contains a singleton SideWindowControl which communicates with an instance of the SideWindowDisplay using events, so, for example, the page-down button works like this:

    1) the event handler of the button on the main window calls SideWindowControl.PageDown();

    2) in the PageDown() function an event is created and thrown;

    3) finally the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down. Note that because it is in a different thread, it has to do that by running the command via Dispatcher.Invoke().

    This just seems like a very messy way to do this and there must be a clearer way (the only part that can't change is that the main window and the side window must be on different threads). Perhaps using WPF commands? I'd really appreciate any suggestions!! Thanks

    Read the article
