Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.


  • Perl open call failing.

    - by benjamin button
    I am new to Perl coding. I am facing a problem while executing a small script I have: open is not able to find the file which I am giving as an argument. Please see below. The file is available: ls -l DLmissing_months.sql -rwxr-xr-x 1 tlmwrk61 aimsys 2842 May 16 09:44 DLmissing_months.sql My Perl script: #!/usr/local/bin/perl use strict; use warnings; my $this_line = ""; my $do_next = 0; my $file_name = $ARGV[0]; open( my $fh, '<', '$file_name') or die "Error opening file - $!\n"; close($fh); Executing the Perl script: > new.pl DLmissing_months.sql Error opening file - No such file or directory What is the problem with my Perl script?

    Read the article

  • Applying Domain Model on top of Linq2Sql entities

    - by Thomas
    I am trying to practice the model-first approach and I am putting together a domain model. My requirement is pretty simple: UserSession can have multiple ShoppingCartItems. I should start off by saying that I am going to apply the domain model interfaces to Linq2Sql-generated entities (using partial classes). My requirement translates into three database tables (UserSession, Product, ShoppingCartItem, where ProductId and UserSessionId are foreign keys in the ShoppingCartItem table). Linq2Sql generates these entities for me. I know I shouldn't even be dealing with the database at this point, but I think it is important to mention. The aggregate root is UserSession, as a ShoppingCartItem cannot exist without a UserSession, but I am unclear on the rest. What about Product? It is definitely an entity, but should it be associated with ShoppingCartItem? Here are a few suggestions (they might all be incorrect implementations): public interface IUserSession { public Guid Id { get; set; } public IList<IShoppingCartItem> ShoppingCartItems{ get; set; } } public interface IShoppingCartItem { public Guid UserSessionId { get; set; } public int ProductId { get; set; } } Another one would be: public interface IUserSession { public Guid Id { get; set; } public IList<IShoppingCartItem> ShoppingCartItems{ get; set; } } public interface IShoppingCartItem { public Guid UserSessionId { get; set; } public IProduct Product { get; set; } } A third one is: public interface IUserSession { public Guid Id { get; set; } public IList<IShoppingCartItemColletion> ShoppingCartItems{ get; set; } } public interface IShoppingCartItemColletion { public IUserSession UserSession { get; set; } public IProduct Product { get; set; } } public interface IProduct { public int ProductId { get; set; } } I have a feeling my mind is too tightly coupled with database models and tables, which is making this hard to grasp. Anyone care to decouple?

    Read the article

  • Converting FASTQ to FASTA with SED/AWK

    - by neversaint
    I have data that always comes in blocks of four lines in the following format (called FASTQ): @SRR018006.2016 GA2:6:1:20:650 length=36 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN +SRR018006.2016 GA2:6:1:20:650 length=36 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!+! @SRR018006.19405469 GA2:6:100:1793:611 length=36 ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC +SRR018006.19405469 GA2:6:100:1793:611 length=36 7);;).;);;/;*.2>/@@7;@77<..;)58)5/>/ Is there a simple sed/awk/bash way to convert them into this format (called FASTA): >SRR018006.2016 GA2:6:1:20:650 length=36 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN >SRR018006.19405469 GA2:6:100:1793:611 length=36 ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC In principle we want to extract the first two lines in each block-of-4 and replace @ with >.
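
    One possible approach (a sketch, not from the original post, assuming the records always arrive as strict four-line blocks; the file names are placeholders): let awk key off the line number within each record, rewrite the leading @ on header lines, and print the sequence line as-is.

        # FASTQ -> FASTA: keep lines 1 and 2 of every 4-line record,
        # turning the leading '@' of the header line into '>'
        awk 'NR % 4 == 1 { sub(/^@/, ">"); print }
             NR % 4 == 2 { print }' input.fastq > output.fasta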

    Read the article

  • Replacing Part of Text Using Sed

    - by neversaint
    I have the following text file: Eif2ak1.aSep07 Eif2ak1.aSep07 LOC100042862.aSep07-unspliced NADH5_C.0.aSep07-unspliced LOC100042862.aSep07-unspliced NADH5_C.0.aSep07-unspliced What I want to do is remove all the text from the first period (.) to the end of each line. But why doesn't this command do it? sed 's/\.*//g' myfile.txt What's the right way to do it?
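
    A sketch of one likely fix (not from the original post): \.* matches zero or more literal dots, so it only deletes runs of dots; to delete from the first dot to the end of the line, the escaped dot has to be followed by .* (any trailing characters).

        # remove everything from the first '.' to the end of each line
        sed 's/\..*//' myfile.txt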

    Read the article

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have a f-XYZ-at() variant. So far I've come up with these solutions; though none are particularly elegant. The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation. So another method is needed to fill the gaps. The next solution involves looking up the information in proc: if (!access("/proc/self/fd",X_OK)) { sprintf(path,"/proc/self/fd/%i/",fd); } This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially-race-condition-prone solution, looks like this: DIR* save = opendir("."); fchdir(fd); getcwd(path,PATH_MAX); fchdir(dirfd(save)); closedir(save); The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly: fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?

    Read the article

  • What is a .NET app domain?

    - by Luke
    In particular, what are the implications of running code in two different app domains? How is data normally passed across the app domain boundary? Is it the same as passing data across the process boundary? I'm curious to know more about this abstraction and what it is useful for. EDIT: Good existing coverage of AppDomains in general at http://stackoverflow.com/questions/622516/i-dont-understand-appdomains

    Read the article

  • bash and flock (file lock) - Doesn't seem to be locking....

    - by Rory
    I am playing with using flock, a bash command for file locks, to prevent 2 different instances of the code from running more than once. I am using this testing code: ( ( flock -x 200 ; sleep 10 ; echo "original finished" ; ) 200>./test.lock ) & ( sleep 2 ; ( flock -x -w 2 200 ; echo "a finished" ) 200>./test.lock ) & I am running 2 subshells (backgrounded). The (flock NUM; ...) NUM>FILE syntax is from flock's man page. I expect that the first subshell will get an exclusive lock on test.lock, then wait 10 seconds, then print "original finished", all the time holding the lock. The second subshell will start at more or less the same time, wait 2 seconds, then try to get a lock on test.lock, but timeout after 2 seconds. If it gets a lock, then it'll print "a finished". If it doesn't get the lock, that subshell should stop, and nothing should be printed. Since the first subshell is waiting longer, it will keep the lock for 10 seconds, so the second subshell should not get the lock, and shouldn't finish. I.e., one should see "original finished" printed and not both. What actually happens is that "a finished" is printed, then "original finished" is printed. This implies that the second subshell is either (a) not using the same lock as the first subshell, or (b) failing to get the lock but continuing to execute, or (c) something else. Why don't those locks work?
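
    A sketch of one likely explanation, matching option (b) above (not from the original post): when flock -w 2 times out it exits non-zero, but because the commands are joined with ';' the subshell carries on and still runs the echo. Aborting the subshell when the lock is not acquired makes the output match the expectation:

        (
            # give up after 2 seconds; leave the subshell if the lock was not
            # acquired, so "a finished" is only printed while holding the lock
            flock -x -w 2 200 || exit 1
            echo "a finished"
        ) 200>./test.lock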

    Read the article

  • sed/awk or other: increment a number by 1 keeping spacing characters

    - by WizardOfOdds
    I've got a string: (notice the spacing) eh oh 37 and I want it to become: eh oh 36 (so I want to keep the spacing) Using awk I can't figure out how to do it; so far I have: echo "eh oh 37" | awk '$3>=0&&$3<=99 {$3--} {print}' But this gives: eh oh 36 (the spacing characters were lost, because the field separator is ' ') Is there a way to ask awk something like "print the output using the exact same field separators as the input had"? Then I tried with sed, but got stuck after this: echo "eh oh 37" | sed -e 's/\([0-9][0-9]\)/.../' Can I do arithmetic from sed using a reference to the matching digits and have the output not modify the number of spacing characters? Note that it's related to my question concerning Emacs and how to apply this to some (big) Emacs region (using a replace region with Emacs's shell-command-on-region) but it's not an identical question: this one is specifically about how to "keep spaces" when working with awk/sed/etc.
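
    One possible awk approach (a sketch, not from the original post; the input string here uses representative extra spaces): locate the number with match() and splice the decremented value back into the otherwise untouched line, so the original runs of spaces survive as long as the new number has the same width.

        echo "eh oh        37" | awk '{
            if (match($0, /[0-9]+/)) {
                n = substr($0, RSTART, RLENGTH) - 1
                # text before the number + new value + text after the number
                print substr($0, 1, RSTART - 1) n substr($0, RSTART + RLENGTH)
            } else {
                print
            }
        }'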

    Read the article

  • Make two servers talk to each other

    - by Maksim
    I have an application written in GWT and hosted on Google AppEngine/Java. In this application the user will have an option to upload a video/audio/text file to the server. Those files could be big, up to 1 GB or so, and because GAE/J does not support large files I have to use another server to store them. This would be easy to implement if there were no cross-domain security feature in browsers. So, what I'm thinking is to make the GAE server talk to my server (Glassfish, or any other Java server if needed) to tell it the URL of the file and, if possible, send the status of the uploaded file (what percentage was uploaded) so I can show the status on the client's screen. Here is what I'm thinking of doing. When the user loads the GWT page that is stored on GAE/J, he/she will upload the file to my server, then my server will send a response back to GAE and GAE will send a response to the client. If this scenario is possible, what would be the best way to implement the GAE-to-Glassfish conversation?

    Read the article

  • Sending the command(s) spawned by xargs to background

    - by PoorLuzer
    I want to know how I can send the command(s) spawned by xargs to the background. For example, consider find . -type f -mtime +7 | tee compressedP.list | xargs compress I tried find . -type f -mtime +7 | tee compressedP.list | xargs -i{} compress {} & .. and, unexpectedly, it seems to send xargs itself to the background instead. How do I make each instance of the compress command go to the background?
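
    One possible approach (a sketch, not from the original post; assumes GNU xargs, whose -P option runs invocations in parallel): instead of backgrounding the whole pipeline, let xargs itself keep several compress processes running at once.

        # run up to 4 compress jobs at a time, one file per invocation
        find . -type f -mtime +7 | tee compressedP.list | xargs -n 1 -P 4 compress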

    Read the article

  • Vim syntax highlighting: make region only match on one line

    - by sixtyfootersdude
    Hello, I have defined a custom file type with these lines: syn region SubSubtitle start=+=+ end=+=+ highlight SubSubtitle ctermbg=black ctermfg=DarkGrey syn region Subtitle start=+==+ end=+==+ highlight Subtitle ctermbg=black ctermfg=DarkMagenta syn region Title start=+===+ end=+===+ highlight Title ctermbg=black ctermfg=yellow syn region MasterTitle start=+====+ end=+====+ highlight MasterTitle cterm=bold term=bold ctermbg=black ctermfg=LightBlue I enclose all of my headings in this kind of document like this: ==== Biggest Heading ==== // this will be bold and light blue ===Sub heading === // this will be yellow bla bla bla // this will be normally formatted However, right now whenever I use an equals sign in my code it thinks that it is a title. Is there any way I can force a match to be only on one line?

    Read the article

  • Keyboard input: how to separate keycodes received from user

    - by Iulian Serbanoiu
    Hello, I am writing an application involving user input from the keyboard. To read the input I use this approach: #include <stdio.h> #include <termios.h> #include <unistd.h> int mygetch( ) { struct termios oldt, newt; int ch; tcgetattr( STDIN_FILENO, &oldt ); newt = oldt; newt.c_lflag &= ~( ICANON | ECHO ); tcsetattr( STDIN_FILENO, TCSANOW, &newt ); ch = getchar(); tcsetattr( STDIN_FILENO, TCSANOW, &oldt ); return ch; } int main(void) { int c; do{ c = mygetch(); printf("%d\n",c); }while(c!='q'); return 0; } Everything works fine for letters, digits and tabs, but when hitting DEL, LEFT, CTRL+LEFT, F8 (and others) I receive not one but 3, 4, 5 or even 6 characters. The question is: is it possible to separate these characters (to actually know that I only hit one key or key combination)? What I would like is to have a function that returns a single integer value for any type of input (letter, digit, F1-F12, DEL, PGUP, PGDOWN, CTRL+A, CTRL+ALT+A, ALT+LEFT, etc). Is this possible? I'm interested in an idea of how to do this; the language doesn't matter much, though I'd prefer Perl or C. Thanks, Iulian

    Read the article

  • Removing the default pages when adding a domain via Plesk

    - by ChrisS
    Hi, whenever I add a new domain into my new Plesk control panel on my dedicated server it creates a whole bunch of test files in the cgi-bin, httpdocs and httpsdocs. There must be some setting somewhere where I can tell Plesk not to do this? I've done a good Google search but must now turn to the StackOverflow masses :) Yours, Chris

    Read the article

  • Bash script "read" not pausing for user input when executed from SSH shell

    - by Aaron Hancock
    I'm new to Bash scripting, so please be gentle. I'm connected to an Ubuntu server via SSH (PuTTY), and when I run this command I expect the downloaded bash script to pause for user input and then echo that input. Instead it just prints the prompt for the input and terminates. wget -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh | bash Any clue what I might be doing wrong? UPDATE: This bash command does exactly what I wanted: bash <(wget -q -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh)
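
    A sketch of one likely explanation (not from the original post): when the script is piped into bash, the script text itself occupies stdin, so read has nothing left to consume from the terminal. Besides the process-substitution form above, read can also be pointed at the terminal explicitly:

        #!/bin/bash
        # read from the controlling terminal even when the script body
        # arrives on stdin via a pipe
        read -p "Enter a value: " value < /dev/tty
        echo "You entered: $value"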

    Read the article

  • Pulling a timestamp from an XML feed with PHP, but there seem to be too many digits

    - by Craig Ward
    I am pulling a timestamp from a feed and it gives 13 digits (1269088723811). When I convert it, it comes out as 1901-12-13 20:45:52, but if I put the timestamp into http://www.epochconverter.com/ it comes out as Sat, 20 Mar 2010 12:38:43 GMT, which is the correct time. epochconverter.com mentions that it may be in milliseconds, so I have amended the script to take care of that using $mil = $timestamp; $seconds = $mil / 1000; $date = date('Y-m-d H:i:s', date($seconds)); but it still converts the date wrong, giving 1970-01-25 20:31:23. What am I doing wrong?

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands go to a logfile, in case I need it later; I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
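
    One possible approach (a sketch, not from the original post; requires bash for the process substitution, and the mail address is a placeholder): send STDOUT straight to the logfile and route STDERR through tee, which appends to the same logfile and echoes to the screen. The exit status of the command group is preserved, so the || mailx pattern still works; lines inside the log may interleave slightly.

        # STDOUT -> logfile only; STDERR -> logfile AND the screen
        { command1 && command2 && command3 ; } \
            > logfile.log 2> >(tee -a logfile.log >&2) \
            || mailx -s "There was an error" someone@example.com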

    Read the article

  • Why is the value of this string, in a bash script, being executing?

    - by Ross
    Hello. Why is this script executing the string in the if statement? #!/bin/bash FILES="*" STRING='' for f in $FILES do if ["$STRING" = ""] then echo first STRING='hello' else STRING="$STRING hello" fi done echo $STRING When I run it with sh script.sh it outputs: first lesscd.sh: line 7: [hello: command not found lesscd.sh: line 7: [hello hello: command not found lesscd.sh: line 7: [hello hello hello: command not found lesscd.sh: line 7: [hello hello hello hello: command not found lesscd.sh: line 7: [hello hello hello hello hello: command not found hello hello hello hello hello hello P.S. This is my first attempt at a shell script. Thanks
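
    A sketch of one likely fix (not from the original post): [ is an ordinary command, so it needs spaces around it and its arguments; without them the shell looks for a command literally named [hello, which is exactly the error shown. With the spaces in place the comparison behaves as intended:

        if [ "$STRING" = "" ]    # or: if [ -z "$STRING" ]
        then
            echo first
            STRING='hello'
        else
            STRING="$STRING hello"
        fi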

    Read the article

  • Core dump of a multithreaded program

    - by benjamin button
    Hi, I have regularly worked with single-threaded programs, but I have never seen a multithreaded program crash, since I haven't worked on any. Is there any difference between the two core dumps? Is there any additional information provided in the core dump of a multithreaded program compared to that of a single-threaded program?

    Read the article

  • naming a screen session in linux

    - by Aly
    Hi, I am running multiple screens from one SSH connection. When I list all of the screens via screen -ls, the names are not very descriptive, and when I have multiple screens it becomes hard to remember what is running in each. Does anyone know how to name these sessions (preferably when creating the screen)? Thanks
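
    One possible approach (a sketch, not from the original post; the session name is just an example): screen accepts a name at creation time with -S, shows it in screen -ls, and lets you reattach by that name.

        screen -S db-backup      # start a session named "db-backup"
        screen -ls               # the name now shows up in the session list
        screen -r db-backup      # reattach by name later
        # rename the current session from inside screen: Ctrl-a then :sessionname new-name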

    Read the article

  • Why can't I pipe the output of uuencode to mailx in a single Perl open statement?

    - by CheeseConQueso
    Here's my code that is not working: print "To: "; my $to=<>; chomp $to; print "From: "; my $from=<>; chomp $from; print "Attach: "; my $attach=<>; chomp $attach; print "Subject: "; my $subject=<>; chomp $subject; print "Message: "; my $message=<>; chomp $message; my $mail_fh = \*MAIL; open $mail_fh, "uuencode $attach $attach |mailx -m -s \"$subject\" -r $from $to"; print $mail_fh $message; close($mail_fh); The mailx command works fine off the command line, but not in this Perl script context. Any idea what I'm missing? I suspect that this line's format/syntax: open $mail_fh, "uuencode $attach $attach |mailx -m -s \"$subject\" -r $from $to"; is the culprit.

    Read the article

  • About fork system call and global variables

    - by lurks
    I have this program in C++ that forks two new processes: #include <pthread.h> #include <iostream> #include <unistd.h> #include <sys/types.h> #include <sys/wait.h> #include <cstdlib> using namespace std; int shared; void func(){ extern int shared; for (int i=0; i<10;i++) shared++; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } int main(){ extern int shared; pid_t p1,p2; int status; shared=0; if ((p1=fork())==0) {func();exit(0);}; if ((p2=fork())==0) {func();exit(0);}; for(int i=0;i<10;i++) shared++; waitpid(p1,&status,0); waitpid(p2,&status,0);; cout<<"shared variable is: "<<shared<<endl; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } The two forked processes each increment the shared variable, and the parent process does the same. As the variable belongs to the data segment of each process, the final value is 10 because the increments are independent. However, the memory address of the shared variable is the same in every process; you can try compiling the program and watching the output. How can that be explained? I cannot understand it; I thought I knew how fork() works, but this seems very odd. I need an explanation of why the address is the same, although they are separate variables.

    Read the article
