Search Results

Search found 4311 results on 173 pages for 'unix utils'.


  • Running PL/SQL in Korn Shell (AIX)

    - by learner135
    I have a ksh file, written by someone else, that runs a set of commands in sqlplus. It starts with

        sqlplus -s $UP <<- END

    followed by a set of DDL commands such as create, drop, etc. When I execute the file in the shell, I get an error on that first line. I understand that "-s" starts sqlplus in silent mode and that $UP is the connection string with username/password, but I couldn't make heads or tails of the "<<- END" part (many sites found via Google say input redirection is "<<", not "<<-"). So I presumed the error was in that part and removed it, leaving just

        sqlplus -s $UP

    But once I execute the file, it waits for input from the shell instead of reading the rest of the lines from the file. How can I make sqlplus execute the DDL commands in the rest of the file? Thanks in advance.
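
    For reference, a minimal sketch of the usual here-document pattern, assuming $UP holds the connection string; the table name and statements are illustrative only. "<<-" is an ordinary here-document like "<<", except that leading tabs are stripped from the body and from the terminating word, so the block may be indented with tabs. A frequent cause of an error on that first line is the terminating END being indented with spaces (or missing entirely).

	sqlplus -s $UP <<- END
		whenever sqlerror exit failure
		create table demo_tab (id number);
		drop table demo_tab;
		exit
	END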

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals I could use in my Perl program to do event-driven programming? Currently I have a multi-process program whose processes can communicate with one another, but my parent thread is only able to listen to one child at a time:

        foreach (@proc) {
            sysread(${$_}{'read'}, my $line, 100);   # problem here
            chomp($line);
            print "Parent hears: $line\n";
        }

    The problem is that the parent sits in a continual wait state until it receives something from the first child before it can continue on. I am relying on 'pipe' for my inter-process communication; my current solution is very similar to http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl. If possible I would like to rely on a $SIG{...} event or some other non-CPAN solution.

    Update: as Jonathan Leffler mentioned, kill can be used to send a signal:

        kill USR1 => $$;   # send myself a SIGUSR1

    My solution will be to have the child process send a USR1 signal; this event tells the parent to listen to that particular child.

    Child:

        kill USR1 => $parentPID if ($customEvent);
        syswrite($parentPipe, $msg, $buffer);
        # select $parentPipe; print $parentPipe $msg;

    Parent:

        $SIG{USR1} = {
            # get child pid?
            sysread($array[$pid]{'childPipe'}, $msg, $buffer);
        };

    But how do I get the pid of the child that signaled the parent? Have the child identify itself in its message? And what happens if two children signal USR1 at the same time?
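
    One non-CPAN approach that sidesteps both closing questions (which child signaled, and what if two signal at once) is to skip signals entirely and multiplex the pipes with the core IO::Select module. A minimal sketch, reusing the @proc structure from the question (each element a hashref holding the read end of that child's pipe):

        use strict;
        use warnings;
        use IO::Select;

        my $sel = IO::Select->new(map { $_->{'read'} } @proc);

        while ($sel->count) {                            # until every pipe is closed
            for my $fh ($sel->can_read) {                # blocks until any child has data
                my $n = sysread($fh, my $line, 100);
                if (!$n) { $sel->remove($fh); next }     # child closed its end of the pipe
                chomp $line;
                print "Parent hears: $line\n";
            }
        }

    The loop ends once every child has closed its pipe and been removed from the set, so no per-child signalling or pid bookkeeping is needed.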

    Read the article

  • Custom installation directories

    - by Alex Farber
    Let's say I am writing an installation script for a program that consists of an executable file and a shared library. By default, the script places the executable in /usr/local/bin and the shared library in /usr/local/lib, so the program can be run by any user just by typing its name on the command line. Suppose the user selects a custom installation directory such as ~/myprogram/. Is it the user's responsibility to make sure the program can still be executed, or must my installation script take care of that?
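
    For illustration only, a minimal sketch of what an install script might do when the chosen prefix is not one of the standard locations; the prefix handling and the file names (myprogram, libmyprogram.so) are assumptions, not part of the question:

        #!/bin/sh
        PREFIX="${1:-/usr/local}"

        install -d "$PREFIX/bin" "$PREFIX/lib"
        install -m 755 myprogram "$PREFIX/bin/"
        install -m 644 libmyprogram.so "$PREFIX/lib/"

        case ":$PATH:" in
            *":$PREFIX/bin:"*)
                ;;  # executable is already reachable via PATH
            *)
                echo "Note: add $PREFIX/bin to PATH and $PREFIX/lib to LD_LIBRARY_PATH"
                echo "(or register $PREFIX/lib with ldconfig) so the program can find its library."
                ;;
        esac

    Whether the script edits the user's shell profile itself or only prints a note like the above is exactly the judgement call the question is asking about.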

    Read the article

  • Setting variables in shell script by running commands

    - by rajya vardhan
        > cat /tmp/list1
        john
        jack
        > cat /tmp/list2
        smith
        taylor

    It is guaranteed that list1 and list2 will have an equal number of lines.

        f(){
            i=1
            while read line
            do
                var1 = `sed -n '$ip' /tmp/list1`
                var2 = `sed -n '$ip' /tmp/list2`
                echo $i,$var1,$var2
                i=`expr $i+1`
                echo $i,$var1,$var2
            done < $INFILE
        }

    So the output of f() should be:

        1,john,smith
        2,jack,taylor

    But I am getting:

        1,p,p
        1+1,p,p

    If I replace the following:

        var1 = `sed -n '$ip' /tmp/list1`
        var2 = `sed -n '$ip' /tmp/list2`

    with this:

        var1=`head -$i /tmp/vip_list|tail -1`
        var2=`head -$i /tmp/lb_list|tail -1`

    then the output is:

        1,john,smith
        1,john,smith

    I am not an expert in shell, so please excuse me if this sounds childish :)
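
    For reference, a minimal sketch of the usual fixes, keeping the question's variable names: no spaces around the = in assignments, double quotes so $i is expanded inside the sed address (single quotes pass the literal script $ip to sed), and spaces around + so expr does arithmetic instead of echoing the string "1+1" back:

        f() {
            i=1
            while read line
            do
                var1=`sed -n "${i}p" /tmp/list1`
                var2=`sed -n "${i}p" /tmp/list2`
                echo "$i,$var1,$var2"
                i=`expr $i + 1`
            done < "$INFILE"
        }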

    Read the article

  • Logging from multiple apps/processes to a single log file

    - by Andrew
    Our app servers (WebLogic) all use log4j to log to the same file on a network share. On top of this, all the web apps in a managed server log errors to a common error.log. I can't imagine this is a good idea, but I wanted to hear from some pros. I know that each web app has its own classloader, so any thread synchronization only occurs within the app. So what happens when multiple processes start converging on a single log file? Can we expect interspersed log statements? Performance problems? What about multiple web apps logging to a common log file? The environment is Solaris.
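
    One common alternative, sketched here purely for illustration: let a single process own the file and have every JVM ship its events there over the network, e.g. with log4j 1.x's SocketAppender (the host, port and appender name below are placeholders):

        log4j.rootLogger=INFO, central
        log4j.appender.central=org.apache.log4j.net.SocketAppender
        log4j.appender.central.RemoteHost=loghost.example.com
        log4j.appender.central.Port=4712
        log4j.appender.central.ReconnectionDelay=10000

    On the receiving host, org.apache.log4j.net.SimpleSocketServer (shipped with log4j) can accept the events and write them through one ordinary FileAppender, so only one process ever has the log file open.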

    Read the article

  • aumix problem - how can I make the changes permanent?

    - by thillai-selvan
    Hi everyone. How can I make changes to the aumix settings permanent? I am launching aumix with the command

        thinapplaunch aumix

    and increasing the volume manually, then saving and quitting. When I launch the application again with the same command, all the changes I made are still there: all the volumes are at full. But when I log out and log in again, the changes are gone and the volumes are no longer full. How can I make the changes permanent? Any help much appreciated. Thanks in advance!
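
    A minimal sketch of the usual approach, assuming a standard aumix build: -S saves the current mixer levels to ~/.aumixrc and -L loads them back, so the restore just has to run at login (which startup file to use depends on the session and is an assumption here):

        # after adjusting the mixer, save the current levels
        aumix -S

        # restore them at login, e.g. from ~/.profile or the session's startup script
        aumix -L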

    Read the article

  • Why Does Piping Binary Text to the Screen often Horck a Terminal

    - by Alan Storm
    Imaginary situation: you've used mysqldump to create a backup of a MySQL database. This database has columns that are BLOBs, which means your "text" dump file contains both strings and binary data (binary data stored as strings?). If you cat this file to the screen,

        $ cat dump.mysql

    you'll often get unexpected results. The terminal will start beeping, and when the output finishes scrolling by you'll often find garbage characters entered on your terminal as though you'd typed them; sometimes your prompt and anything you type afterwards are garbage characters too. Why does this happen? Put another way, I think I'm looking for an overview of what's actually happening when you store binary strings in a file, when you cat that file, when the result of the cat is written to the terminal, and any other steps I'm missing.
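
    For reference, a couple of standard ways to inspect such a file without sending raw control bytes and escape sequences to the terminal, and to recover a terminal that has already been scrambled:

        # show control bytes as printable ^X / M-X notation instead of emitting them raw
        cat -v dump.mysql | less

        # or look at the raw bytes directly
        od -c dump.mysql | less

        # if the terminal is already garbled, reinitialise it
        reset          # or: stty sane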

    Read the article

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an Ant task in the background and checking at 60-second intervals whether that task has completed. If it has not, a message should be displayed on screen every 60 seconds - "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 and so on. There is a limit of 1200 seconds, after which the script shows the log via the 'ant log' command and asks the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code I am using for this is:

        ant deploy &
        limit=1200

        deploy_check()
        {
            while [ ${slept:-0} -le $limit ]; do
                sleep 60 && slept=`expr ${slept:-0} + 60`
                if [ $$ = "`ps -o ppid= -p $!`" ]; then
                    echo "Deploy process is still running. $slept seconds since deploy started."
                else
                    wait $! && echo "Application ${New_App_Name} deployed successfully" || echo "Deployment of ${New_App_Name} failed"
                    break
                fi
            done
        }

        deploy_check

        if [ $$ = "`ps -o ppid= -p $!`" ]; then
            echo "Deploy process did not finish in $slept seconds. Here's the log."
            ant log
            echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue."
            read log
            limit=`expr ${limit} + 300`
            deploy_check
        fi

    Now, the problem is that this code is not working. It looks like perfectly good code to me, and yet it does not work. Can anyone point out what is wrong with it, please?
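
    For comparison, a minimal sketch of the same monitor built around kill -0 (which simply tests whether the background pid is still alive) instead of inspecting ps output; the ant targets and messages are taken from the question:

        ant deploy &
        pid=$!
        limit=1200
        slept=0

        while kill -0 "$pid" 2>/dev/null; do
            if [ "$slept" -ge "$limit" ]; then
                echo "Deploy process did not finish in $slept seconds. Here's the log."
                ant log
                echo "Press Ctrl+C to kill the deploy, or Enter to wait another 300 seconds."
                read answer
                limit=`expr $limit + 300`
            fi
            sleep 60
            slept=`expr $slept + 60`
            echo "Deploy process is still running. $slept seconds since deploy started."
        done

        wait "$pid" && echo "Deploy finished successfully" || echo "Deploy failed"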

    Read the article

  • iteratively creating graphs

    - by Andrei
    I have a bunch of files containing x and y coordinates representing time and value (space-separated, but that can be amended). For example:

        15:06:59 0.0140
        .......

    I want to create a Word file (or some equivalent) to show all of these as graphs. Right now I am using Excel, which is a pretty daunting task, as I have to paste the numbers into two rows for each graph, and I have many of them. Thanks
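
    One scriptable alternative, shown purely as an illustration (the .dat extension and PNG output are assumptions): gnuplot can read the time in the first column directly and write one image per data file, which can then be inserted into a document in one go.

        for f in *.dat; do
            out="${f%.dat}.png"
            gnuplot -e "set xdata time; set timefmt '%H:%M:%S'; set terminal png size 800,400; set output '$out'; plot '$f' using 1:2 with lines title '$f'"
        done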

    Read the article

  • Exit SSH from the script

    - by Kimi
    I want to exit ssh. Does the line below work:

        ssh -f -T ${USAGE_2_USER}@${USAGE_2_HOST}

    or do I need to write it some other way? Please tell me whether I should use exit with ssh, and how.
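
    For illustration, two common patterns; the variables come from the question and remote_command is a placeholder for whatever should run on the far side:

        # run one remote command and return as soon as it finishes - no explicit exit needed
        ssh -T "${USAGE_2_USER}@${USAGE_2_HOST}" 'remote_command'

        # run the remote command, drop ssh into the background locally, and continue the script
        ssh -f "${USAGE_2_USER}@${USAGE_2_HOST}" 'remote_command'

    Without a command, ssh starts an interactive session and only returns when that session ends, which is usually why a script seems to hang on an ssh line.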

    Read the article

  • awk output to a file

    - by Harish
    I need help moving the contents printed by awk into a text file. This is a continuation of a previous question. I have to move all the contents into the same file, so it has to append. To be specific:

        nawk -v file="$FILE" 'BEGIN{RS=";"}
        /select/{ gsub(/.*select/,"select");gsub(/\n+/,"");print file,$0;}
        /update/{ gsub(/.*update/,"update");gsub(/\n+/,"");print file,$0;}
        /insert/{ gsub(/.*insert/,"insert");gsub(/\n+/,"");print file,$0;}
        ' "$FILE"

    How do I get the printed results appended one after the other into the same text file?
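
    A minimal sketch of the usual approach (the output file name is an assumption): redirect the whole command's standard output with >>, which appends across runs, so repeated invocations for different $FILE values keep adding to the same file:

        nawk -v file="$FILE" '... same program as above ...' "$FILE" >> /tmp/statements.txt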

    Read the article

  • How to use parallel execution in a shell script?

    - by eSKay
    I have a C shell script that does something like this:

        #!/bin/csh
        gcc example.c -o ex
        gcc combine.c -o combine
        ex file1 r1    # <-- 1
        ex file2 r2    # <-- 2
        ex file3 r3    # <-- 3
        # ... many more like the above
        combine r1 r2 r3 final
        \rm r1 r2 r3

    Is there some way I can make lines 1, 2 and 3 run in parallel instead of one after another?
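
    For illustration, the usual pattern is to put each of those jobs in the background with & and then wait for all of them to finish before combining; the names are taken from the question:

        #!/bin/csh
        gcc example.c -o ex
        gcc combine.c -o combine
        ex file1 r1 &
        ex file2 r2 &
        ex file3 r3 &
        wait                  # block until every background job has finished
        combine r1 r2 r3 final
        \rm r1 r2 r3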

    Read the article

  • Bash: using commands as parameters (specifically cd, dirname and find)

    - by sixtyfootersdude
    This command and output:

        % find . -name file.xml 2> /dev/null
        ./a/d/file.xml
        %

    So does this command and output:

        % dirname `find . -name file.xml 2> /dev/null`
        ./a/d
        %

    So you would expect that this command:

        % cd `dirname `find . -name file.xml 2> /dev/null``

    would change the current directory to ./a/d. Strangely, this does not work. When I type cd ./a/d the directory change works, however I cannot figure out why the command above does not.
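
    For reference: backquotes do not nest - the shell pairs the first backtick with the next one it sees, so the inner find is never run as a command substitution. $( ) nests cleanly, and the backtick form works if the inner pair is escaped:

        cd "$(dirname "$(find . -name file.xml 2> /dev/null)")"

        # backtick equivalent, with the inner pair escaped
        cd `dirname \`find . -name file.xml 2> /dev/null\``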

    Read the article

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch and about 4 private feature branches (which I keep rebased on top of master). I've corrected the files in my master branch by running

        find -type f -exec chmod 644 {} \;

    and committing the changes. I then rebased my feature branches onto master. The problem is that there are newly created files in the feature branches that exist only in those branches, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit on each feature branch that does the same thing as the commit I made on master, so I decided it would be best to go back through each commit where a file was created and set the permissions there. This is what I tried:

        git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master..

    It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like:

        diff --git a b
        old mode 100644
        new mode 100755

    I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works.

    My solution: I've managed to fix my problem using this command:

        git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development..

    I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that, since I fixed the mode of a file when it was first created, it would keep that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought this would work comes from my understanding of rebase: if I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.
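
    For comparison, a minimal sketch that avoids the per-commit bookkeeping by normalising every file in every rewritten tree (the branch range is illustrative, and it assumes nothing in the repository should actually be executable). Because each commit records the mode of every file in its own tree, fixing each tree independently means no later commit can reintroduce the old mode:

        git filter-branch -f --tree-filter 'find . -type f -exec chmod 644 {} +' master..feature-branch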

    Read the article

  • Unknown error in Producer/Consumer program, believe it to be an infinite loop.

    - by ray2k
    Hello, I am writing a program that solves the producer/consumer problem, specifically the bounded-buffer version (I believe they mean the same thing). The producer will be generating x random numbers, where x is a command-line parameter to my program. At the moment I believe my program is entering an infinite loop, but I'm not sure why it is occurring. I believe I am using the semaphores correctly. You compile it like this:

        gcc -o prodcon prodcon.cpp -lpthread -lrt

    Then to run:

        ./prodcon 100    (the number of random numbers to produce)

    This is my code:

        typedef int buffer_item;

        #include <stdlib.h>
        #include <stdio.h>
        #include <pthread.h>
        #include <semaphore.h>
        #include <unistd.h>

        #define BUFF_SIZE 10
        #define RAND_DIVISOR 100000000
        #define TRUE 1

        //two threads
        void *Producer(void *param);
        void *Consumer(void *param);
        int insert_item(buffer_item item);
        int remove_item(buffer_item *item);
        int returnRandom();

        //the global semaphores
        sem_t empty, full, mutex;
        //the buffer
        buffer_item buf[BUFF_SIZE];
        //buffer counter
        int counter;
        //number of random numbers to produce
        int numRand;

        int main(int argc, char** argv)
        {
            /* thread ids and attributes */
            pthread_t pid, cid;
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
            numRand = atoi(argv[1]);

            sem_init(&empty, 0, BUFF_SIZE);
            sem_init(&full, 0, 0);
            sem_init(&mutex, 0, 0);

            printf("main started\n");
            pthread_create(&pid, &attr, Producer, NULL);
            pthread_create(&cid, &attr, Consumer, NULL);
            printf("main gets here");
            pthread_join(pid, NULL);
            pthread_join(cid, NULL);
            printf("main done\n");
            return 0;
        }

        //generates a random number between 1 and 100
        int returnRandom()
        {
            int num;
            srand(time(NULL));
            num = rand() % 100 + 1;
            return num;
        }

        //begin producing items
        void *Producer(void *param)
        {
            buffer_item item;
            int i;
            for (i = 0; i < numRand; i++) {
                //sleep for a random period of time
                int rNum = rand() / RAND_DIVISOR;
                sleep(rNum);
                //generate a random number
                item = returnRandom();
                //acquire the empty lock
                sem_wait(&empty);
                //acquire the mutex lock
                sem_wait(&mutex);
                if (insert_item(item)) {
                    fprintf(stderr, " Producer report error condition\n");
                } else {
                    printf("producer produced %d\n", item);
                }
                /* release the mutex lock */
                sem_post(&mutex);
                /* signal full */
                sem_post(&full);
            }
            return NULL;
        }

        /* Consumer Thread */
        void *Consumer(void *param)
        {
            buffer_item item;
            int i;
            for (i = 0; i < numRand; i++) {
                /* sleep for a random period of time */
                int rNum = rand() / RAND_DIVISOR;
                sleep(rNum);
                /* acquire the full lock */
                sem_wait(&full);
                /* acquire the mutex lock */
                sem_wait(&mutex);
                if (remove_item(&item)) {
                    fprintf(stderr, "Consumer report error condition\n");
                } else {
                    printf("consumer consumed %d\n", item);
                }
                /* release the mutex lock */
                sem_post(&mutex);
                /* signal empty */
                sem_post(&empty);
            }
            return NULL;
        }

        /* Add an item to the buffer */
        int insert_item(buffer_item item)
        {
            /* When the buffer is not full, add the item and increment the counter */
            if (counter < BUFF_SIZE) {
                buf[counter] = item;
                counter++;
                return 0;
            } else {
                /* Error: the buffer is full */
                return -1;
            }
        }

        /* Remove an item from the buffer */
        int remove_item(buffer_item *item)
        {
            /* When the buffer is not empty, remove the item and decrement the counter */
            if (counter > 0) {
                *item = buf[counter - 1];
                counter--;
                return 0;
            } else {
                /* Error: buffer empty */
                return -1;
            }
        }
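
    One observation, offered as a likely culprit rather than a certain diagnosis: a binary semaphore used as a mutex has to start with a value of 1. With sem_init(&mutex, 0, 0) the very first sem_wait(&mutex) in either thread blocks forever, so the program hangs (which looks like an infinite loop) before producing anything. A minimal sketch of the initialisation:

        sem_init(&empty, 0, BUFF_SIZE);  /* BUFF_SIZE free slots to start with   */
        sem_init(&full,  0, 0);          /* no items available yet               */
        sem_init(&mutex, 0, 1);          /* 1 = unlocked; 0 would mean "held"    */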

    Read the article

  • Too many open files in one of my Java routines

    - by Irfan Zulfiqar
    I have multithreaded code that generates a set of objects and writes them to a file. When I run it I sometimes get a "Too many open files" message in an exception. I have checked the code to make sure that all the file streams are being closed properly. Here is the stack trace. When I do ulimit -a, the number of open files allowed is set to 1024; we think increasing this number is not a viable option/solution.

        [java] java.io.FileNotFoundException: /export/event_1_0.dtd (Too many open files)
        [java]     at java.io.FileInputStream.open(Native Method)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:106)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:66)
        [java]     at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        [java]     at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        [java]     at java.net.URL.openStream(URL.java:1010)

    What we have identified so far, by looking closely at the list of open files, is that the VM is opening the same class files multiple times:

        /export/BaseEvent.class             236
        /export/EventType1BaseEvent.class    60
        /export/EventType2BaseEvent.class    48
        /export/EventType2.class             30
        /export/EventType1.class             14

    where BaseEvent is the parent of all the classes, and EventType1 and EventType2 inherit from EventType1BaseEvent and EventType2BaseEvent respectively. Why would a class loader load the same class file 200+ times? It seems to open the base class as many times as it creates child instances. Is this normal? Can it be handled some other way, apart from increasing the number of open files?
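
    One detail worth noting, hedged since only the stack trace is visible: the failing open is on /export/event_1_0.dtd via URL.openStream, which is what an XML parser does each time it resolves a document's DTD. If the objects are being written or validated as XML, caching the DTD with an EntityResolver removes one file open per parse; a minimal sketch (the class name and the setEntityResolver call site are illustrative, the DTD path is taken from the stack trace):

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import org.xml.sax.EntityResolver;
        import org.xml.sax.InputSource;

        /** Serves the DTD from memory so each parse does not reopen the file. */
        public class CachedDtdResolver implements EntityResolver {
            private final byte[] dtd;

            public CachedDtdResolver() throws IOException {
                dtd = Files.readAllBytes(Paths.get("/export/event_1_0.dtd"));
            }

            public InputSource resolveEntity(String publicId, String systemId) {
                if (systemId != null && systemId.endsWith("event_1_0.dtd")) {
                    return new InputSource(new ByteArrayInputStream(dtd));
                }
                return null;  // fall back to the parser's default resolution
            }
        }

        // usage with a SAX or DOM reader, e.g.: reader.setEntityResolver(new CachedDtdResolver());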

    Read the article

  • How to feed data over STDIN to multiple external commands in Ruby

    - by Erik
    This question is a bit like my previous (answered) question: How to run multiple external commands in the background in Ruby. But in this case I am looking for a way to feed Ruby strings over STDIN to external processes, something like this (the code below is not valid but illustrates my goal):

        #!/usr/bin/ruby

        str1 = 'In reality a relatively large string.....'
        str2 = 'Another large string'
        str3 = 'etc..'

        spawn 'some_command.sh', :stdin => str1
        spawn 'some_command.sh', :stdin => str2
        spawn 'some_command.sh', :stdin => str3

        Process.waitall
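
    A minimal sketch of one way to do this with a pipe per child, assuming Ruby 1.9+ (for Kernel#spawn) and that some_command.sh reads its standard input to EOF:

        #!/usr/bin/ruby

        strings = ['In reality a relatively large string.....',
                   'Another large string',
                   'etc..']

        strings.each do |str|
          rd, wr = IO.pipe
          spawn('some_command.sh', :in => rd)  # the child reads its STDIN from the pipe
          rd.close                             # the parent only needs the write end
          wr.write(str)
          wr.close                             # closing the write end is EOF for the child
        end

        Process.waitall

    Note that the parent writes each string before moving on to the next child, so a string larger than the pipe buffer makes the parent wait until that child starts reading.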

    Read the article

  • In terminal, merging multiple folders into one.

    - by Josh Pinter
    I have a backup directory created by WDBackup (the Western Digital external HD backup utility) that contains a directory for each day a backup ran, holding the incremental contents of just what was backed up that day. The hierarchy looks like this:

        20100101
            My Documents
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My song.mp3        # modified 20100101
        20100102
            My Documents
                Important Docs
                    Taxes.doc
            My Music
                My Song.mp3        # modified 20100102
        ...etc...

    Only what has changed is backed up, and the first backup that was ever made contains all the files selected for backup. What I'm trying to do now is incrementally copy each of these dated folders, from oldest to newest, into a 'merged' folder, keeping the folder structure, so that newer content overrides older content. Using just the two example folders above, the final merged folder would look like this:

        Merged
            My Documents
                Important Docs
                    Taxes.doc
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My Song.mp3        # modified 20100102

    Hope that makes sense. Thanks, Josh
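
    For illustration, a minimal sketch with rsync, assuming the dated folders all sit in one directory and sort correctly by name (which YYYYMMDD names do):

        mkdir -p Merged
        for d in 2010*/; do            # the glob expands in lexical order, i.e. oldest first
            rsync -a "$d" Merged/      # later days overwrite what earlier days copied in
        done

    rsync is used here because -a preserves timestamps and only copies what differs; a plain cp -R of each folder's contents applied in the same order would also work.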

    Read the article
