Search Results

Search found 62069 results on 2483 pages for 'unix time'.

Page 114/2483

  • sed/awk or other: increment a number by 1 keeping spacing characters

    - by WizardOfOdds
    I've got a string: (notice the spacing) eh oh 37 and I want it to become: eh oh 36 (so I want to keep the spacing) Using awk I don't find how to do it, so far I have: echo "eh oh 37" | awk '$3>=0&&$3<=99 {$3--} {print}' But this gives: eh oh 36 (the spacing characters were lost, because the field separator is ' ') Is there a way to ask awk something like "print the output using the exact same field separators as the input had"? Then I tried with sed, but got stuck after this: echo "eh oh 37" | sed -e 's/\([0-9][0-9]\)/.../' Can I do arithmetic from sed using a reference to the matching digits and have the output not modify the number of spacing characters? Note that it's related to my question concerning Emacs and how to apply this to some (big) Emacs region (using a replace region with Emacs's shell-command-on-region) but it's not an identical question: this one is specifically about how to "keep spaces" when working with awk/sed/etc.
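
    A minimal sketch of one way to keep the whitespace intact (not from the article, and assuming Perl is acceptable): replace only the digits and do the arithmetic inside the substitution, so the surrounding spacing is never touched. The spacing shown here is only illustrative:

        echo "eh    oh    37" | perl -pe 's/(\d+)/$1 - 1/e'

    The /e modifier evaluates the replacement as an expression; change - to + for an increment.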

    Read the article

  • Keyboard input: how to separate keycodes received from user

    - by Iulian Serbanoiu
    Hello, I am writing an application involving user input from the keyboard. For doing it I use this way of reading the input: #include <stdio.h> #include <termios.h> #include <unistd.h> int mygetch( ) { struct termios oldt, newt; int ch; tcgetattr( STDIN_FILENO, &oldt ); newt = oldt; newt.c_lflag &= ~( ICANON | ECHO ); tcsetattr( STDIN_FILENO, TCSANOW, &newt ); ch = getchar(); tcsetattr( STDIN_FILENO, TCSANOW, &oldt ); return ch; } int main(void) { int c; do{ c = mygetch(); printf("%d\n",c); }while(c!='q'); return 0; } Everything works fine for letters, digits and tabs, but when hitting DEL, LEFT, CTRL+LEFT, F8 (and others) I receive not one but 3, 4, 5 or even 6 characters. The question is: is it possible to separate these characters, so I actually know that I hit only one key or key combination? What I would like is a function that returns a single integer value for any type of input (letter, digit, F1-F12, DEL, PGUP, PGDOWN, CTRL+A, CTRL+ALT+A, ALT+LEFT, etc). Is this possible? I'm interested in an idea of how to do this; the language doesn't matter much, though I'd prefer Perl or C. Thanks, Iulian
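
    One common approach (a sketch, not from the article) is to let ncurses parse the escape sequences that keys like the arrows and the F-keys generate, so each keypress arrives as a single integer code:

        #include <curses.h>

        int main(void)
        {
            initscr();
            raw();                      /* no line buffering, pass Ctrl keys through */
            noecho();
            keypad(stdscr, TRUE);       /* translate escape sequences into single KEY_* codes */
            int ch;
            while ((ch = getch()) != 'q') {
                printw("%d\n", ch);     /* arrows, DEL, F-keys etc. arrive as one value each */
                refresh();
            }
            endwin();
            return 0;
        }

    Compile with -lncurses. Combinations the terminal does not report distinctly (some CTRL+ALT chords, for example) still cannot be told apart this way.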

    Read the article

  • Why is the value of this string, in a bash script, being executed?

    - by Ross
    Hello, why is this script executing the string in the if statement: #!/bin/bash FILES="*" STRING='' for f in $FILES do if ["$STRING" = ""] then echo first STRING='hello' else STRING="$STRING hello" fi done echo $STRING When I run it with sh script.sh it outputs: first lesscd.sh: line 7: [hello: command not found lesscd.sh: line 7: [hello hello: command not found lesscd.sh: line 7: [hello hello hello: command not found lesscd.sh: line 7: [hello hello hello hello: command not found lesscd.sh: line 7: [hello hello hello hello hello: command not found hello hello hello hello hello hello P.S. This is my first attempt at a shell script, thanks.
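
    For reference, a sketch of the corrected test (assuming the rest of the script is unchanged): [ is itself a command, so it needs spaces around it; without them bash looks for a command literally named [hello, which is exactly the error shown:

        if [ "$STRING" = "" ]      # or: if [ -z "$STRING" ]
        then
            echo first
            STRING='hello'
        else
            STRING="$STRING hello"
        fi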

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands goes to a logfile, in case I need it later --- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
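
    A sketch of one way to get both behaviours at once (assuming bash, since it relies on process substitution): send STDOUT straight to the log, and route STDERR through tee so it lands in the log and on the original STDERR:

        { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2)

    The exit status of the group is still that of the commands, so appending || mailx -s "There was an error" ... keeps working; the one caveat is that lines written through tee may interleave into the log slightly out of order because of buffering.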

    Read the article

  • naming a screen session in linux

    - by Aly
    Hi, I am running multiple screens from one SSH connection. When I list all of the screens via screen -ls, the names are not very descriptive, and with multiple screens it becomes hard to remember what is running in each. Does anyone know how to name these sessions (preferably when creating the screen)? Thanks
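
    A short sketch of the usual options (assuming a reasonably recent GNU screen; the session name used here is just an example):

        screen -S buildbox          # start a new session named "buildbox"
        screen -ls                  # now lists 12345.buildbox instead of 12345.pts-0.host
        screen -r buildbox          # reattach by name
        # from inside a running session:  Ctrl-a :sessionname newname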

    Read the article

  • Why can't I pipe the output of uuencode to mailx in a single Perl open statement?

    - by CheeseConQueso
    Here's my code that is not working: print "To: "; my $to=<>; chomp $to; print "From: "; my $from=<>; chomp $from; print "Attach: "; my $attach=<>; chomp $attach; print "Subject: "; my $subject=<>; chomp $subject; print "Message: "; my $message=<>; chomp $message; my $mail_fh = \*MAIL; open $mail_fh, "uuencode $attach $attach |mailx -m -s \"$subject\" -r $from $to"; print $mail_fh $message; close($mail_fh); The mailx command works fine off the command line, but not in this Perl script context. Any idea what I'm missing? I suspect that this line's format/syntax: open $mail_fh, "uuencode $attach $attach |mailx -m -s \"$subject\" -r $from $to"; is the culprit.
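
    A sketch of the likely fix (treat it as an assumption, not a verified answer): to write into a pipeline, the open must say so, either by starting the command string with | or by using the three-argument '|-' form, and the open should be checked:

        open(my $mail_fh, '|-', "uuencode $attach $attach | mailx -m -s \"$subject\" -r $from $to")
            or die "cannot start mailx pipeline: $!";
        print $mail_fh $message, "\n";
        close($mail_fh) or warn "mailx pipeline exited with status $?";

    As written in the question, the string has no leading |, so the two-argument open treats the whole command line as a filename to read from rather than a pipeline to write to.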

    Read the article

  • Bash script "read" not pausing for user input when executed from SSH shell

    - by Aaron Hancock
    I'm new to Bash scripting, so please be gentle. I'm connected to an Ubuntu server via SSH (PuTTY) and when I run this command, I expect the bash script that it downloads and executes to pause for user input and then echo that input. It seems to just write out the echo label for the input request and terminate. wget -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh | bash Any clue what I might be doing wrong? UPDATE: This bash command does exactly what I wanted: bash <(wget -q -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh)
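
    For context, a small sketch of why this happens and of another workaround (an assumption, not taken from the linked script): when the script itself arrives on stdin through the pipe, read has no terminal left to read from and gets EOF immediately; forcing the read from /dev/tty sidesteps that:

        read -p "Enter a value: " answer < /dev/tty
        echo "You typed: $answer"

    The process-substitution form in the update works for the same reason: the script comes in on a separate file descriptor, so stdin stays connected to the terminal.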

    Read the article

  • Core dump of a multithreaded program

    - by benjamin button
    Hi, I have regularly worked with single-threaded programs, and I have never seen a multithreaded program crash since I haven't worked on any. Is there any difference between the two core dumps? Is there any additional information provided in the core dump of a multithreaded program compared to that of a single-threaded program?
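
    For what it's worth, a small sketch of how the extra information typically shows up when the core is opened in gdb (program and core file names below are placeholders):

        gdb ./myprogram core
        (gdb) info threads          # one entry per thread that existed at crash time
        (gdb) thread apply all bt   # a backtrace for every thread, not just the crashing one
        (gdb) thread 3              # switch to a particular thread's stack

    A core from a single-threaded program only ever shows one thread here; otherwise the dump format is the same.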

    Read the article

  • Which tool do you use for tracking daily/weekly progress on your long-term goals?

    - by NoCatharsis
    I read through this post and this post to get an idea of short-term task management applications out there. However, now I'm having a problem tracking my longer-term goals pertaining to things like career, education, and exercise. I just discovered Disciplanner which might meet my needs but haven't had time to set it up just yet. Not to mention, it took me about 2 weeks of playing with every online to-do list app I could find before I came to a decision on which to use. I'm very particular about my tasks and goal-setting, so I would prefer to get some advice from other superusers before diving into the web again to find another life-management website.

    Read the article

  • Renaming and Moving Files in Bash or Perl

    - by Katie
    Hi, I'm completely new to Bash and StackOverflow. I need to move a set of files (all contained in the same folder) to a target folder where files with the same name could already exist. In case a specific file exists, I need to rename the file before moving it, by appending for example an incremental integer to the file name. The extensions should be preserved (in other words, that appended incremental integer should go before the extension). The file names could contain dots in the middle. Originally, I was thinking about comparing the two folders to have a list of the existing files (I did this with "comm"), but then I got a bit stuck. I think I'm just trying to do things in the most complicated possible way. Any hint to do this in the "bash way"? It's OK if it is done in a script other than bash script.
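
    A minimal bash sketch of the rename-on-collision idea (assuming every file has an extension; src and dst are placeholder directory names):

        src=source_dir
        dst=target_dir
        for f in "$src"/*; do
            base=$(basename "$f")
            name="${base%.*}"                 # everything before the last dot
            ext="${base##*.}"                 # last extension only
            target="$dst/$base"
            i=1
            while [ -e "$target" ]; do        # name taken: append an incrementing integer
                target="$dst/${name}_$i.$ext"
                i=$((i+1))
            done
            mv -- "$f" "$target"
        done

    Dots in the middle of a name are fine, since only the last extension is split off; files without any extension would need an extra check.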

    Read the article

  • How to get the time value from a datetime picker

    - by Ranjana
    How do I get the time value in a stored procedure from a datetime picker value? I.e. I have datetime picker values stored in my table as '2010-04-06 09:00:00.000' and '2010-04-07 14:30:00.000'. I have got the date separately using convert(varchar,Date,3) as Date, and the time separately using convert(time,Time), but that shows the values as 9:00 and 14:30. I need it to show as 9:00 AM and 2:30 PM instead of 14:30. How can I achieve this? Any help?
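
    A hedged T-SQL sketch (assuming SQL Server, since CONVERT styles are already used in the question; the column name is a placeholder): style 100 renders the time with an AM/PM marker, so taking the right-hand end of it gives the desired form:

        SELECT RIGHT(CONVERT(varchar(20), YourDateTimeColumn, 100), 7) AS TimeOfDay
        -- '2010-04-07 14:30:00.000'  ->  ' 2:30PM'

    Wrapping the result in LTRIM() removes the leading space if needed.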

    Read the article

  • About fork system call and global variables

    - by lurks
    I have this program in C++ that forks two new processes: #include <pthread.h> #include <iostream> #include <unistd.h> #include <sys/types.h> #include <sys/wait.h> #include <cstdlib> using namespace std; int shared; void func(){ extern int shared; for (int i=0; i<10;i++) shared++; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } int main(){ extern int shared; pid_t p1,p2; int status; shared=0; if ((p1=fork())==0) {func();exit(0);}; if ((p2=fork())==0) {func();exit(0);}; for(int i=0;i<10;i++) shared++; waitpid(p1,&status,0); waitpid(p2,&status,0);; cout<<"shared variable is: "<<shared<<endl; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } The two forked processes each increment the shared variable and the parent process does the same. As the variable belongs to the data segment of each process, the final value is 10 because the increments are independent. However, the memory address of the shared variable is the same in every process; you can try compiling and watching the output of the program. How can that be explained? I cannot understand it. I thought I knew how fork() works, but this seems very odd. I need an explanation of why the address is the same, although they are separate variables.
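
    As a side note (a sketch, not part of the question's code): if the goal were a counter that parent and children genuinely share, the variable has to live in memory mapped as shared before the fork, for example:

        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            /* one int in a MAP_SHARED|MAP_ANONYMOUS mapping stays the same physical
               page across fork(), unlike an ordinary global (which is copy-on-write);
               error checks omitted for brevity */
            int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            *shared = 0;
            pid_t p = fork();
            for (int i = 0; i < 10; i++)
                (*shared)++;                  /* not atomic; fine only as a demo */
            if (p == 0)
                return 0;                     /* child done */
            wait(NULL);
            printf("shared = %d\n", *shared); /* usually 20 with one child */
            munmap(shared, sizeof *shared);
            return 0;
        }

    The identical addresses in the original program are virtual addresses: after fork each process gets its own copy of the page at the same virtual location, which is why the values diverge even though &shared prints the same.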

    Read the article

  • Why does Samba/CIFS suck so badly? [closed]

    - by sean
    Seriously, machines refusing to save data because files THEY HAVE OPEN are locked BY THEMSELVES. Getting 200+ connections simultaneously takes it out despite a plethora of available disk and network bandwidth. You can't turn off CUPS you have to COMPILE WITHOUT IT. DFS support is completely broken and pretty much useless in the current state (as in DFS for load balancing, not replication). We should just move to NFS and find a DFS like namespace aggregator.

    Read the article

  • Bash: Extract Range with Regular Expressioin (maybe sed?)

    - by sixtyfootersdude
    I have a file that is similar to this: <many lines of stuff> SUMMARY: <some lines of stuff> END OF SUMMARY I want to extract just the stuff between SUMMARY and END OF SUMMARY. I suspect I can do this with sed but I am not sure how. I know I can modify the stuff in between with this: sed "/SUMMARY/,/END OF SUMMARY/ s/replace/with/" fileName (But not sure how to just extract that stuff). I am using Bash on Solaris.
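
    Two hedged sketches (standard sed/awk only, so they should also behave on Solaris):

        # everything from the SUMMARY: line through END OF SUMMARY, inclusive
        sed -n '/SUMMARY:/,/END OF SUMMARY/p' fileName

        # same range, but without the two delimiter lines themselves
        awk '/END OF SUMMARY/{show=0} show {print} /SUMMARY:/{show=1}' fileName

    On Solaris, /usr/xpg4/bin/sed or nawk may behave more predictably than the /usr/bin versions.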

    Read the article

  • How do I determine if a terminal is color-capable?

    - by asjo
    I would like to change a program to automatically detect whether a terminal is color-capable or not, so when I run said program from within a non-color capable terminal (say M-x shell in (X)Emacs), color is automatically turned off. I don't want to hardcode the program to detect TERM={emacs,dumb}. I am thinking that termcap/terminfo should be able to help with this, but so far I've only managed to cobble together this (n)curses-using snippet of code, which fails badly when it can't find the terminal: #include <stdlib.h> #include <curses.h> int main(void) { int colors=0; initscr(); start_color(); colors=has_colors() ? 1 : 0; endwin(); printf(colors ? "YES\n" : "NO\n"); exit(0); } I.e. I get this: $ gcc -Wall -lncurses -o hep hep.c $ echo $TERM xterm $ ./hep YES $ export TERM=dumb $ ./hep NO $ export TERM=emacs $ ./hep Error opening terminal: emacs. $ which is... suboptimal.
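
    A sketch of a lower-level alternative (an assumption on my part, not a verified fix): setupterm() with an error pointer reads the terminfo entry without taking over the screen and without exiting on failure, and tigetnum("colors") then reports the colour count:

        #include <stdio.h>
        #include <unistd.h>
        #include <curses.h>
        #include <term.h>

        int main(void)
        {
            int err = 0;
            if (setupterm(NULL, STDOUT_FILENO, &err) != OK) {
                puts("NO");       /* unknown terminal such as TERM=emacs: no banner, just "NO" */
                return 1;
            }
            int colors = tigetnum("colors");
            puts(colors > 0 ? "YES" : "NO");
            return 0;
        }

    Link with -lncurses (or -ltinfo); the rough shell equivalent is tput colors.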

    Read the article

  • Crazy interview question

    - by benjamin button
    I was asked this crazy question. I was out of my wits. Can a method in base class which is declared as virtual be called using the base class pointer which is pointing to a derived class object? Is this possible?
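
    For illustration, a minimal sketch of both behaviours (not from the question itself):

        #include <iostream>

        struct Base {
            virtual void hello() { std::cout << "Base\n"; }
            virtual ~Base() {}
        };

        struct Derived : Base {
            void hello() { std::cout << "Derived\n"; }
        };

        int main()
        {
            Base *p = new Derived;
            p->hello();          // virtual dispatch: prints "Derived"
            p->Base::hello();    // explicit qualification: prints "Base"
            delete p;
            return 0;
        }

    So the answer hinges on what "called" means: through the pointer alone the derived override runs, but the base version is still reachable with an explicit Base:: qualification.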

    Read the article

  • Help on Website response time KPI parameters

    - by geeth
    I am working on improving website performance. Here is the list of key performance indicators I am looking at for each page: total bytes downloaded, number of requests, DNS lookup time, first-byte time, download time, DOM content load time, and total load time. Is there an optimum value for each KPI that indicates good website performance? Please help me in this regard.

    Read the article

  • How to extract a paragraph and selected lines with Perl

    - by neversaint
    I have a text that looks like this. What I want to do is to (1) extract the whole paragraph under the section "AceView summary" until the line that starts with "Please quote", and (2) extract the line that starts with "The closest human gene", and store them in an array with two elements. However I am stuck with the following script logic. What's the right way to achieve that? #!/usr/bin/perl -w my $INFILE_file_name = $file; # input file name open ( INFILE, '<', $INFILE_file_name ) or croak "$0 : failed to open input file $INFILE_file_name : $!\n"; my @allsum; while ( <INFILE> ) { chomp; my $line = $_; my @temp1 = (); if ( $line =~ /^ AceView summary/ ) { print "$line\n"; push @temp1, $line; } elsif( $line =~ /Please quote/) { push @allsum, [@temp1]; @temp1 = (); } } close ( INFILE ); # close input file
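
    A hedged rewrite of the logic (a sketch; the marker strings follow the regexes already in the question): the trick is to keep a flag set between the two markers instead of re-initialising @temp1 on every line:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $file = shift or die "usage: $0 file\n";
        open my $in, '<', $file or die "failed to open $file: $!\n";

        my @allsum;                 # [0] = summary paragraph, [1] = closest-gene line
        my @para;
        my $in_summary = 0;
        while ( my $line = <$in> ) {
            chomp $line;
            $in_summary = 1 if $line =~ /^\s*AceView summary/;
            if ($in_summary) {
                if ( $line =~ /^\s*Please quote/ ) {   # end of the summary block
                    $in_summary = 0;
                    push @allsum, join "\n", @para;
                } else {
                    push @para, $line;
                }
            }
            push @allsum, $line if $line =~ /^\s*The closest human gene/;
        }
        close $in;

    Whether the "closest human gene" line sits inside or outside the summary block does not matter here, since it is captured by its own pattern.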

    Read the article

  • Cisco ASA 5510 Time of Day Based Policing

    - by minamhere
    I have a Cisco ASA 5510 setup at a boarding school. I determined that many (most?) of the students were downloading files, watching movies, etc., during the day and this was causing the academic side of our network to suffer. The students should not even be in their rooms during the day, so I configured the ASA to police their network segment and limit their outbound bandwidth. This resolved all of our academic issues, and everyone was happy. Except the resident students. I have been asked to change/remove the policing policy at the end of the day, to allow the residents access to the unused bandwidth at night. There's no reason to let bandwidth sit unused at night just because it would be abused during the day. Is there a way to set up time-of-day-based policies on the ASA? Ideally I'd like to be able to open up the network at night and all day during weekends. If I can't set time-based policies, is it possible to schedule the ASA to load a set of commands at a specific time? I suppose I could just set up a scheduled task on one of our servers to log in and make the changes with a simple script, but this seems like a hack, and I'm hoping there is a better or more standard way to accomplish this. Thanks. Edit: If there is a totally different solution that would accomplish a similar goal, I'd be interested in that as well. Free/cheap would be ideal, but if a separate internet connection is my only other option, it might be worth fighting for money for hardware or software to do this better or more efficiently.

    Read the article

  • Hudson: triggering builds remotely gives a 403 Forbidden error

    - by Ritesh M Nayak
    I have a shell script on the same machine that Hudson is deployed on, and upon executing it, it calls wget on a Hudson build-trigger URL. Since it's the same machine, I access it as http://localhost:8080/hudson/job/jobname/build?token=sometoken Typically, this is supposed to trigger a build of the project. But I get a 403 Forbidden when I do this. Does anybody have any idea why? I have tried this using a browser and it triggers the build, but via the command line it doesn't seem to work. Any ideas?
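
    A hedged sketch of the usual cause and workaround (assuming Hudson's security is enabled): the browser session is authenticated but the plain wget call is not, so it gets 403; passing credentials (preferably the user's API token) with preemptive auth usually fixes it. The user name and token below are placeholders:

        wget --auth-no-challenge \
             --http-user=builduser \
             --http-password=APITOKEN \
             -O /dev/null \
             "http://localhost:8080/hudson/job/jobname/build?token=sometoken"

    Alternatively, granting the anonymous user the Job/Build permission makes the original unauthenticated call work.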

    Read the article
