I have data that looks like this:
3
2
1
5
What I want to get is the "decreasing" cumulative of this data, yielding:
11
8
6
5
0
What is a compact way of doing that in Perl?
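For reference, a minimal Perl sketch of one possible approach (reading the numbers from standard input, one per line, is my assumption about how the data arrives):
#!/usr/bin/perl
use strict;
use warnings;

my @data = <STDIN>;           # e.g. 3, 2, 1, 5 (one per line)
chomp @data;

my $remaining = 0;
$remaining += $_ for @data;   # total: 11

for my $value (@data) {
    print "$remaining\n";     # 11, 8, 6, 5
    $remaining -= $value;
}
print "$remaining\n";         # final 0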
I have an SQL file which will give me an output like below:
10|1
10|2
10|3
11|2
11|4
.
.
.
I am using this in a Perl script like below:
my @tmp_cycledef = `sqlplus -s $connstr \@DLCycleState.sql`;
After this statement, @tmp_cycledef has all the output of the SQL query.
I want to show the output as:
10 1,2,3
11 2,4
How could I do this using Perl?
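For reference, a minimal sketch that continues from the @tmp_cycledef above (%by_key is just an illustrative name, and it assumes each element is a key|value line):
my %by_key;
for my $line (@tmp_cycledef) {
    chomp $line;
    next unless $line =~ /\|/;                  # skip blank or non-data lines
    my ($key, $value) = split /\|/, $line;
    push @{ $by_key{$key} }, $value;
}
for my $key (sort { $a <=> $b } keys %by_key) {
    print "$key ", join(',', @{ $by_key{$key} }), "\n";
}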
Hello,
I work on shared Linux machines with between 4 and 24 cores. To make the best use of them, I use the following code to detect the number of processors from my Ruby scripts:
return `cat /proc/cpuinfo | grep processor | wc -l`.to_i
(perhaps there is a pure-ruby way of doing this?)
But sometimes a colleague is using six or eight of the 24 cores (as seen via top). How can I get an estimate of the number of currently unused processors that I can use without making anyone upset?
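For reference, a rough sketch of one heuristic (my assumption, Linux-only): read the 1-minute load average, which roughly counts busy cores, and subtract it from the total; counting /proc/cpuinfo entries directly also avoids shelling out:
def total_cores
  File.read('/proc/cpuinfo').scan(/^processor/).size
end

def estimated_free_cores
  busy = File.read('/proc/loadavg').split.first.to_f
  [total_cores - busy.ceil, 0].max
end

puts estimated_free_cores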
Thanks!
remsh remoteserverhostname -l remoteusername find /tmp/a1/ | cpio -o > /tmp/paketr.cpio
rcp remoteserverhostname:/tmp/paketr.cpio /tmp/aaa
cpio -idmv < /tmp/paketr.cpio
I'm trying to copy a directory structure from a remote server to the local server. I can do this with the list of commands above, but I wonder if I can do it with just one command by running cpio in pass-through mode:
remsh remoteserverhostname find /tmp/a1 | cpio -pd /tmp
current </tmp/tmp/a1/b1/y1> newer
current </tmp/tmp/a1/b1/z1> newer
current </tmp/tmp/a1/b2/l2smc> newer
"/tmp/a1/b3": No such file or directory
Cannot stat </tmp/a1/b3>.
0 blocks
So when I try the cpio -pd option I expect it to create the directories for me, but it does not.
What can I do?
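For reference, a hedged single-command alternative (assuming remsh hands the quoted pipeline to the remote shell): pass-through mode copies local files named on stdin, so a remote find listing cannot feed it, but creating the archive remotely and unpacking it from the same pipe avoids the intermediate file:
remsh remoteserverhostname -l remoteusername "find /tmp/a1 | cpio -o" | cpio -idmv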
I am on Mac OS X and I am using the built-in PHP and Apache2. I have been setting up MySQL, and when I finally got MySQL working my local site stopped displaying. Do note that I did have the web server running and delivering PHP-enabled pages, just with no database connection. But my question is not about MySQL.
I have changed various settings in the 'httpd.conf' file, and I have the line '127.0.0.1 localhost' in my hosts file. I also have other aliases pointing to 127.0.0.1.
I have checked everything I could about Apache and I have made sure that every message in the error_log is OK. I currently have the log level set to debug, so I get all the messages.
At this point (HOURS of self-fixing) I think I need help.
What can I provide for someone to figure this out with me?
Thanks.
Our ksh environment defines several functions, which can be listed using the "functions" ksh command. Is it possible to see the definition (i.e. source code) for these functions?
This seems like an obvious question, but I've tried all manner of parameters to the "functions" and "function" functions with no luck.
Thanks,
Steve
Hi,
I am doing a find and getting a list of files. How do I pipe it to another utility like cat (so that cat displays the contents of all those files)? Basically I need to grep something from these files.
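For reference, a minimal sketch (the -name filter and the pattern are placeholders for whatever you are actually looking for):
find . -type f -name '*.log' | xargs cat | grep 'pattern'
# or let grep read the files itself and report which file matched:
find . -type f -name '*.log' -exec grep 'pattern' {} +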
I have a requirement where I have to set environment variables by calling a script file, e.g. set_env.sh.
set_env.sh contains all the environment variables.
export SCRIPT_DIR=/e/scripts/
...
When I call set_env.sh from my code, the variables are available in that file itself; they are not available in the file from which I called the script.
What should be done so that the environment variables are retained and can be used in the file which calls set_env.sh?
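For reference, a minimal sketch of the usual approach (assuming set_env.sh is in the current directory): source the script with the . builtin so it runs in the calling shell rather than in a child process:
. ./set_env.sh          # or: source set_env.sh (bash/ksh)
echo "$SCRIPT_DIR"      # the variable is now visible in the caller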
Thanks,
Sandeep M.
I am new to Perl coding.
I am facing a problem while executing a small script I have: open is not able to find the file which I am giving as an argument. Please see below.
The file is available:
ls -l DLmissing_months.sql
-rwxr-xr-x 1 tlmwrk61 aimsys 2842 May 16 09:44 DLmissing_months.sql
My perl script:
#!/usr/local/bin/perl
use strict;
use warnings;
my $this_line = "";
my $do_next = 0;
my $file_name = $ARGV[0];
open( my $fh, '<', '$file_name')
or die "Error opening file - $!\n";
close($fh);
Executing the Perl script:
> new.pl DLmissing_months.sql
Error opening file - No such file or directory
What is the problem with my Perl script?
I have the following text file
Eif2ak1.aSep07
Eif2ak1.aSep07
LOC100042862.aSep07-unspliced
NADH5_C.0.aSep07-unspliced
LOC100042862.aSep07-unspliced
NADH5_C.0.aSep07-unspliced
What I want to do is remove all the text from the period (.) to the end of the line.
But why doesn't this command do it?
sed 's/\.*//g' myfile.txt
What's the right way to do it?
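For reference, a sketch of one form that should work: \.* matches zero or more literal dots (so it happily matches an empty string at every position), whereas a literal dot followed by .* consumes everything from the first period to the end of the line:
sed 's/\..*//' myfile.txt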
I have two matrices containing only ones, and each has 500 rows and 500 columns. So the resulting matrix should have every element equal to 500, but I am getting res_mat[0][0] = 5000; the other elements are 5000 as well. Why?
#include<stdio.h>
#include<pthread.h>
#include<unistd.h>
#include<stdlib.h>
#define ROWS 500
#define COLUMNS 500
#define N_THREADS 10
int mat1[ROWS][COLUMNS],mat2[ROWS][COLUMNS],res_mat[ROWS][COLUMNS];
void *mult_thread(void *t)
{
    /* This function calculates 50 ROWS of the matrix */
    int starting_row;
    starting_row = *((int *)t);
    starting_row = 50 * starting_row;
    int i,j,k;
    for (i = starting_row; i < starting_row+50; i++)
        for (j = 0; j < COLUMNS; j++)
            for (k = 0; k < ROWS; k++)
                res_mat[i][j] += (mat1[i][k] * mat2[k][j]);
    return;
}

void fill_matrix(int mat[ROWS][COLUMNS])
{
    int i,j;
    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLUMNS; j++)
            mat[i][j] = 1;
}

int main()
{
    int n_threads = 10; // 10 threads created because we have 500 rows and one thread calculates 50 rows
    int j = 0;
    pthread_t p[n_threads];
    fill_matrix(mat1);
    fill_matrix(mat2);
    for (j = 0; j < 10; j++)
        pthread_create(&p[j], NULL, mult_thread, &j);
    for (j = 0; j < 10; j++)
        pthread_join(p[j], NULL);
    printf("%d\n", res_mat[0][0]);
    return 0;
}
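For reference, a minimal sketch of one likely culprit and a way around it (idx is just an illustrative name): every pthread_create above passes the address of the same variable j, which keeps changing while the threads start up, so several threads can read the same value and compute (and re-add) the same rows. Giving each thread its own slot avoids that:
int idx[N_THREADS];
for (j = 0; j < N_THREADS; j++) {
    idx[j] = j;                                     /* private copy per thread */
    pthread_create(&p[j], NULL, mult_thread, &idx[j]);
}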
I have three similar files; they all look like this:
ID1 Value1a
ID2 Value2a
.
.
.
IDN ValueNa
and I want an output like this:
ID1 Value1a Value1b Value1c
ID2 Value2a Value2b Value2c
.....
IDN ValueNa ValueNb ValueNc
Looking at the first line, I want Value1a to be the value of ID1 in fileA, Value1b the value of ID1 in fileB, etc. I think of it like a nice SQL join. I've tried several things but none of them were even close.
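For reference, a minimal sketch using the standard join utility (fileA, fileB and fileC are placeholder names; join needs the files sorted on the ID column, so run sort -k1,1 on each file first if they are not):
join fileA fileB | join - fileC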
I've run into the need to be able to refer to a directory by path, given its file descriptor, in Linux. The path doesn't have to be canonical; it just has to be functional, so that I can pass it to other functions. So, taking the same parameters as are passed to a function like fstatat(), I need to be able to call a function like getxattr(), which doesn't have an f-XYZ-at() variant.
So far I've come up with these solutions; though none are particularly elegant.
The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation. So another method is needed to fill the gaps.
The next solution involves looking up the information in proc:
if (!access("/proc/self/fd", X_OK)) {
    sprintf(path, "/proc/self/fd/%i/", fd);
}
This, of course, totally breaks on systems without proc, including some chroot environments.
The last option, a more portable but potentially-race-condition-prone solution, looks like this:
DIR* save = opendir(".");
fchdir(fd);
getcwd(path,PATH_MAX);
fchdir(dirfd(save));
closedir(save);
The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects.
However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly, with an fgetcwd() or something? Clearly the kernel is tracking the necessary information.
So how do I get to it?
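For reference, a sketch of a variant of the /proc approach that resolves the symlink into an actual path (fd_to_path is a hypothetical helper name; it still depends on /proc being mounted):
#include <stdio.h>
#include <unistd.h>

static int fd_to_path(int fd, char *buf, size_t size)
{
    char link[64];
    ssize_t n;

    snprintf(link, sizeof link, "/proc/self/fd/%d", fd);
    n = readlink(link, buf, size - 1);
    if (n < 0)
        return -1;              /* no /proc, stale fd, ... */
    buf[n] = '\0';              /* readlink() does not NUL-terminate */
    return 0;
}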
I have data that always comes in blocks of four lines, in the following format (called FASTQ):
@SRR018006.2016 GA2:6:1:20:650 length=36
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN
+SRR018006.2016 GA2:6:1:20:650 length=36
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!+!
@SRR018006.19405469 GA2:6:100:1793:611 length=36
ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
+SRR018006.19405469 GA2:6:100:1793:611 length=36
7);;).;);;/;*.2>/@@7;@77<..;)58)5/>/
Is there a simple sed/awk/bash way to convert them into
this format (called FASTA):
>SRR018006.2016 GA2:6:1:20:650 length=36
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN
>SRR018006.19405469 GA2:6:100:1793:611 length=36
ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
In principle we want to extract the first two lines in each block-of-4
and replace @ with >.
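For reference, a minimal awk sketch (input.fastq and output.fasta are placeholder names; it assumes every record really is exactly four lines with the header starting with @):
awk 'NR % 4 == 1 { sub(/^@/, ">"); print; next } NR % 4 == 2 { print }' input.fastq > output.fasta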
I am playing with flock, a shell command for file locking, to prevent two different instances of the code from running at the same time.
I am using this testing code:
( ( flock -x 200 ; sleep 10 ; echo "original finished" ; ) 200>./test.lock ) &
( sleep 2 ; ( flock -x -w 2 200 ; echo "a finished" ) 200>./test.lock ) &
I am running 2 subshells (backgrounded). The (flock NUM; ...) NUM>FILE syntax is from flock's man page.
I expect that the first subshell will get an exclusive lock on test.lock, then wait 10 seconds, then print "original finished", all the time holding the lock. The second subshell will start at more or less the same time, wait 2 seconds, then try to get a lock on test.lock, but time out after 2 seconds. If it gets a lock, then it'll print "a finished". If it doesn't get the lock, that subshell should stop, and nothing should be printed.
Since the first subshell is waiting longer, it will keep the lock for 10 seconds, so the second subshell should not get the lock, and shouldn't finish. i.e. one should see "original finished" printed and not both.
What actually happens is that "a finished" is printed, then "original finished" is printed.
This implies that the second subshell is either (a) not using the same lock as the first subshell, (b) failing to get the lock but continuing to execute, or (c) something else.
Why don't those locks work?
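One thing that may be worth checking (my guess at case (b)): flock -w exits with a non-zero status when it times out, but it does not terminate the subshell, so the echo runs regardless unless the exit status is acted on, e.g.:
( sleep 2 ; ( flock -x -w 2 200 || exit 1 ; echo "a finished" ) 200>./test.lock ) &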
I want to know how I can send the command(s) spawned by xargs to the background.
For example, consider
find . -type f -mtime +7 | tee compressedP.list | xargs compress
I tried
find . -type f -mtime +7 | tee compressedP.list | xargs -i{} compress {} &
.. and, not unexpectedly, it seems to send xargs itself to the background instead.
How do I make each instance of the compress command go to the background?
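For reference, a minimal sketch of two possibilities (the 4-way parallelism is an arbitrary choice, and -P is only available where xargs supports it, e.g. GNU or BSD xargs):
# background each compress from a small wrapper shell
find . -type f -mtime +7 | tee compressedP.list | xargs -I{} sh -c 'compress "$1" &' sh {}
# or let xargs itself run several compress processes at once
find . -type f -mtime +7 | tee compressedP.list | xargs -n1 -P4 compress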
I need to configure one SMTP server (sendmail) to send mail from two interfaces with different IPs, depending on the sender.
For example: on the same machine with two IPs, 1.1.1.1 and 2.2.2.2, I need to send email for [email protected] via 1.1.1.1 and for [email protected] via 2.2.2.2.
I don't know if I can configure this in sendmail, or whether to use iptables. Any ideas?
Thanks.
Hello
I have defined a custom file type with these lines:
syn region SubSubtitle start=+=+ end=+=+
highlight SubSubtitle ctermbg=black ctermfg=DarkGrey
syn region Subtitle start=+==+ end=+==+
highlight Subtitle ctermbg=black ctermfg=DarkMagenta
syn region Title start=+===+ end=+===+
highlight Title ctermbg=black ctermfg=yellow
syn region MasterTitle start=+====+ end=+====+
highlight MasterTitle cterm=bold term=bold ctermbg=black ctermfg=LightBlue
I enclose all of my headings in this kind of document like this:
==== Biggest Heading ==== // this will be bold and light blue
===Sub heading === // this will be yellow
bla bla bla // this will be normally formatted
However, right now whenever I use an equals sign in my code it thinks that it is a title. Is there any way that I can force a match to be only on one line?
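One idea that might work: syn region accepts a oneline argument, which keeps a region from spanning lines, so a stray = can no longer start a match that runs on into later lines. For example (the same would be applied to each of the four regions):
syn region Title start=+===+ end=+===+ oneline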
I've got a string (notice the spacing):
eh oh 37
and I want it to become:
eh oh 36
(so I want to keep the spacing)
Using awk I can't figure out how to do it; so far I have:
echo "eh oh 37" | awk '$3>=0&&$3<=99 {$3--} {print}'
But this gives:
eh oh 36
(the spacing characters were lost, because the field separator is ' ')
Is there a way to ask awk something like "print the output using the exact same field separators as the input had"?
Then I tried with sed, but got stuck after this:
echo "eh oh 37" | sed -e 's/\([0-9][0-9]\)/.../'
Can I do arithmetic from sed using a reference to the matching digits and have the output not modify the number of spacing characters?
Note that it's related to my question concerning Emacs and how to apply this to some (big) Emacs region (using a replace region with Emacs's shell-command-on-region), but it's not an identical question: this one is specifically about how to "keep spaces" when working with awk/sed/etc.
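For the awk side, a minimal sketch (my assumption): instead of letting awk split and rejoin the fields, locate the digits with match() and splice the decremented number back into the otherwise untouched line:
echo "eh oh 37" | awk '{ if (match($0, /[0-9]+/)) { n = substr($0, RSTART, RLENGTH) - 1; print substr($0, 1, RSTART - 1) n substr($0, RSTART + RLENGTH) } else print }'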
Hello,
I am writing an application involving user input from the keyboard. To read the input I use this:
#include <stdio.h>
#include <termios.h>
#include <unistd.h>
int mygetch( ) {
struct termios oldt,
newt;
int ch;
tcgetattr( STDIN_FILENO, &oldt );
newt = oldt;
newt.c_lflag &= ~( ICANON | ECHO );
tcsetattr( STDIN_FILENO, TCSANOW, &newt );
ch = getchar();
tcsetattr( STDIN_FILENO, TCSANOW, &oldt );
return ch;
}
int main(void)
{
    int c;

    do {
        c = mygetch();
        printf("%d\n", c);
    } while (c != 'q');
    return 0;
}
Everything works fine for letters, digits and tabs, but when hitting DEL, LEFT, CTRL+LEFT, F8 (and others) I receive not one but 3, 4, 5 or even 6 characters.
The question is: is it possible to separate these characters (to actually know that I hit only one key or key combination)?
What I would like is a function that returns a single integer value for any type of input (letter, digit, F1-F12, DEL, PGUP, PGDOWN, CTRL+A, CTRL+ALT+A, ALT+LEFT, etc.). Is this possible?
I'm interested in any idea of how to do this; the language doesn't matter much, though I'd prefer Perl or C.
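One possible direction, sketched in C under the assumption that the special keys arrive as multi-byte escape sequences: after reading the first byte, poll stdin with a zero timeout (while still in non-canonical mode) and drain whatever arrived with it, so one physical key press can be treated as one unit. byte_pending is a hypothetical helper, not part of the code above, and the exact sequences are terminal-dependent:
#include <sys/select.h>
#include <unistd.h>

/* Return non-zero if another byte is already waiting on stdin. */
static int byte_pending(void)
{
    fd_set set;
    struct timeval tv = { 0, 0 };          /* poll, do not wait */

    FD_ZERO(&set);
    FD_SET(STDIN_FILENO, &set);
    return select(STDIN_FILENO + 1, &set, NULL, NULL, &tv) > 0;
}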
Thanks,
Iulian