Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

Page 524/1051 | < Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment work, which in one build step runs certain shell commands. That script has become complicated enough that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file. My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others, and the script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command-line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't get the correctly substituted values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" contains both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line is its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, the Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
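
    For what it's worth, a sketch of how this is often handled, assuming the external script is launched from the same "Execute shell" build step so it inherits Hudson's environment; the script name is a placeholder:

        #!/bin/sh
        # Hudson/Jenkins usually exports build parameters as environment variables,
        # and child processes inherit the environment, so the script can read
        # $AMI_ID and $TYPE directly without any re-export:
        ./external-script.sh

        # Or pass them explicitly and read them as $1, $2 inside the script:
        ./external-script.sh "$AMI_ID" "$TYPE"

        # Bonus question: -x makes the shell print each command before running it
        # (a rough equivalent of DOS "echo on"); "set -x" inside the script works too.
        sh -x ./external-script.sh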

    Read the article

  • Too many open files in one of my Java routines

    - by Irfan Zulfiqar
    I have some multithreaded code that has to generate a set of objects and write them to a file. When I run it I sometimes get a "Too many open files" message in an exception. I have checked the code to make sure that all the file streams are being closed properly. Here is the stack trace. When I do ulimit -a, the open files limit is set to 1024; we think increasing this number is not a viable option / solution.

        [java] java.io.FileNotFoundException: /export/event_1_0.dtd (Too many open files)
        [java]     at java.io.FileInputStream.open(Native Method)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:106)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:66)
        [java]     at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        [java]     at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        [java]     at java.net.URL.openStream(URL.java:1010)

    What we have identified so far, by looking closely at the list of open files, is that the VM is opening the same class file multiple times:

        /export/BaseEvent.class            236
        /export/EventType1BaseEvent.class   60
        /export/EventType2BaseEvent.class   48
        /export/EventType2.class            30
        /export/EventType1.class            14

    BaseEvent is the parent of all the classes, and EventType1 and EventType2 inherit from EventType1BaseEvent and EventType2BaseEvent respectively. Why would a class loader load the same class file 200+ times? It seems to open the base class as many times as it creates any child instance. Is this normal? Can it be handled any other way apart from increasing the number of open files?
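
    In the meantime, a small shell sketch of how the duplicated descriptors can be counted on the running JVM, to confirm which files are held open repeatedly (the pid is a placeholder):

        # List the target of every file descriptor the JVM holds open,
        # grouped and counted, to see which files are opened repeatedly.
        pid=12345   # placeholder: the java process id
        ls -l /proc/$pid/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head

        # lsof gives a similar view and also shows sockets and jars.
        lsof -p $pid | awk '{print $NF}' | sort | uniq -c | sort -rn | head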

    Read the article

  • How to include all objects of an archive in a shared object?

    - by Didier Trosset
    When compiling our project, we create several archives (static libraries), say liby.a and libz.a, each containing an object file that defines a function, y_function() and z_function(). These archives are then joined into a shared object, say libyz.so, which is one of our main distributable targets.

        g++ -fPIC -c -o y.o y.cpp
        ar cr liby.a y.o
        g++ -fPIC -c -o z.o z.cpp
        ar cr libz.a z.o
        g++ -shared -L. -ly -lz -o libyz.so

    When using this shared object in an example program, say x.c, the link fails because of undefined references to the functions y_function() and z_function():

        g++ x.o -L. -lyz -o xyz

    It works, however, when I link the final executable directly against the archives (static libraries):

        g++ x.o -L. -ly -lz -o xyz

    My guess is that the object files contained in the archives are not linked into the shared library because they are not used in it. How can I force their inclusion?

    Edit: Inclusion can be forced using the --whole-archive ld option, but it results in link errors:

        g++ -shared '-Wl,--whole-archive' -L. -ly -lz -o libyz.so
        /usr/lib/libc_nonshared.a(elf-init.oS): In function `__libc_csu_init':
        (.text+0x1d): undefined reference to `__init_array_end'
        /usr/bin/ld: /usr/lib/libc_nonshared.a(elf-init.oS): relocation R_X86_64_PC32 against undefined hidden symbol `__init_array_end' can not be used when making a shared object
        /usr/bin/ld: final link failed: Bad value

    Any idea where this comes from?
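
    For reference, a hedged sketch of the usual way to scope --whole-archive so it only applies to the intended archives and is switched off again before the implicit system libraries (same library names as above):

        # Only liby.a and libz.a are wrapped; --no-whole-archive restores the
        # default behaviour before libc_nonshared.a and friends reach the linker.
        g++ -shared -o libyz.so \
            -Wl,--whole-archive -L. -ly -lz -Wl,--no-whole-archive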

    Read the article

  • Perl: Edit hyperlinks in nested tags that aren't on separate lines

    - by user305801
    I have an interesting problem. I wrote the following Perl script to recursively loop through a directory and, in all HTML files, for img/script/a tags: convert the entire URL to lowercase, and replace spaces and %20 with underscores. The script works great except when an image tag is wrapped with an anchor tag. Is there a way to modify the current script so it can also manipulate the links in nested tags that are not on separate lines? Basically, given <a href="..."><img src="..."></a>, the script will only change the link in the anchor tag and skip the img tag.

        #!/usr/bin/perl
        use File::Find;

        $input="/var/www/tecnew/";

        sub process {
            if (-T and m/.+\.(htm|html)/i) {
                #print "htm/html: $_\n";
                open(FILE,"+<$_") or die "couldn't open file $!\n";
                $out = '';
                while(<FILE>) {
                    $cur_line = $_;
                    if($cur_line =~ m/<a.*>/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* href=\")(.+?)(\".*$)/i;
                        $beg = $1;
                        $link = html_clean($2);
                        $end = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    if($cur_line =~ m/(<img.*>|<script.*>)/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* src=\")(.+?)(\".*$)/i;
                        $beg = $1;
                        $link = html_clean($2);
                        $end = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    $out .= $cur_line;
                }
                seek(FILE, 0, 0) or die "can't seek to start of file: $!";
                print FILE $out or die "can't print to file: $!";
                truncate(FILE, tell(FILE)) or die "can't truncate file: $!";
                close(FILE) or die "can't close file: $!";
            }
        }

        find(\&process, $input);

        sub html_clean {
            my($input_string) = @_;
            $input_string = lc($input_string);
            $input_string =~ s/%20|\s/_/g;
            return $input_string;
        }

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged; I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: we make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.
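
    As a sketch of the personality-based route, assuming the setarch/linux32 utility from util-linux is available on the Red Hat build machines (the build command is a placeholder):

        # The 64-bit kernel reports itself normally...
        uname -m                  # x86_64

        # ...but anything run under linux32 (or "setarch i386") sees a 32-bit
        # personality, so builds keyed off uname -m keep producing i386 binaries.
        linux32 uname -m          # i686 or i386, depending on the system
        linux32 make all          # placeholder for the real build command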

    Read the article

  • ssh-keygen accepting stdin

    - by Ryan
    I am trying to call ssh-keygen from bash using a variable as input, instead of a file, to get the fingerprint of a public key. This method does not work; it says the key file is invalid (the key is correct for sure):

        echo $pubkey | ssh-keygen -lf /dev/stdin

    This does work:

        ssh-keygen -lf /dev/stdin < alpha.pub

    This does not work, because I get an ambiguous redirect:

        ssh-keygen -lf /dev/stdin < $(echo $pubkey)

    I would appreciate some insight into how to get ssh-keygen to read a public key from a variable and, if possible, an explanation as to why the redirects aren't doing what I think they should be doing. I searched online but many of the redirect tutorials didn't seem to answer my questions.
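
    For what it's worth, a hedged sketch of two variants that tend to behave better, assuming $pubkey holds the complete one-line public key:

        # Here-string: feed the variable's contents to stdin; on many bash versions
        # this is backed by a temporary file, which ssh-keygen handles more reliably
        # than a pipe behind /dev/stdin.
        ssh-keygen -lf /dev/stdin <<< "$pubkey"

        # If piping, quote the variable so the key reaches ssh-keygen intact,
        # without word splitting rewriting its whitespace.
        echo "$pubkey" | ssh-keygen -lf /dev/stdin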

    Read the article

  • Intermittent "No Database Selected" in PHP/MySQL?

    - by ANE
    I have a PHP/MySQL form with a dropdown box containing a list of 350 names. When any random name is selected, sometimes it works and displays info about that name from the database, and sometimes the form gives the error "No Database Selected". Here's what I've tried, pretty much grasping at straws as I'm not a programmer:

        - Increasing max_connections in /etc/my.cnf from 200 to 2000 (even though only 4-5 connections are made and it's a lightly used server)
        - Changing mysql_pconnect to mysql_connect
        - Adding the word true to this connection string: $mysql = mysql_pconnect($hostname_mysql, $username_mysql, $password_mysql, true) or trigger_error(mysql_error(),E_USER_ERROR);
        - Changing the word require_once to require on this line: <?php require('/home/user/Connections/mysql.php'); ?>
        - Enabling MySQL & PHP query & error logging (no errors logged)

    Here is the code: [removed old bad code]

    Update: Working answer from Rob Apodaca below.

    Read the article

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently. One important issue is that some hosts are grouped together on single fast LANs in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central server system where multiple hosts might each fetch the same bundle over a slow link, rather than once via the slow link and then between each other quickly. A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform but maybe the same mechanism can be used). That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.
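
    A rough sketch of the two-tier idea in shell; the gateway list, directory layout and rsync-over-ssh transport are all assumptions, not part of the question:

        #!/bin/sh
        # Stage 1: copy the bundle once over each slow link, to one "gateway"
        # host per LAN; stage 2: every gateway fans it out to its local peers.
        BUNDLE=/srv/bundles/bundle-20100601.tar.gz     # placeholder bundle path

        for gw in $(cat gateways.txt); do              # gateways.txt: one host per LAN
            rsync -a "$BUNDLE" "$gw:/srv/bundles/" &
        done
        wait

        for gw in $(cat gateways.txt); do
            ssh -n "$gw" 'for peer in $(cat /etc/bundle-peers); do
                              rsync -a /srv/bundles/ "$peer:/srv/bundles/" &
                          done; wait' &
        done
        wait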

    Read the article

  • Does O2Micro Oz776 Smartcard reader support SLE5528 cards?

    - by Proton
    Well, the title may make me sound like a lazy guy, but that's not the truth... I've been messing with this weird problem for a whole day. My laptop is a Dell Latitude D630 which has an Oz776 (USB idVendor == 0x0b97, idProduct == 0x7772) smartcard reader, but I'm not sure it is functioning well. It can successfully recognize my Gemplus GPK4000 smartcard and my SIM card, but not the SLE5528. This is my pcscd log when I insert the GPK4000:

        06039271 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000100 -> 000000 62 00 00 00 00 00 14 01 00 00
        00967744 <- 000000 80 0A 00 00 00 00 14 00 00 00 3B 27 00 80 65 A2 0C 01 01 37
        00000048 ATR: 3B 27 00 80 65 A2 0C 01 01 37
        00000013 atrhandler.c:102:ATRDecodeAtr() Conv: 01, Y1: 02, K: 07
        00000011 atrhandler.c:120:ATRDecodeAtr() TA1: FFFFFFFF, TB1: 00, TC1: FFFFFFFF, TD1: FFFFFFFF
        00000011 atrhandler.c:248:ATRDecodeAtr() CurrentProtocol: 1, AvailableProtocols: 1
        00000062 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000014 Card ATR: 3B 27 00 80 65 A2 0C 01 01 37
        29016873 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    This is the log when I insert a SLE5528 card:

        99999999 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000048 -> 000000 62 00 00 00 00 00 11 01 00 00

    ** Then it chokes here; when I remove the card, the log continues **

        04741980 <- 000000 80 00 00 00 00 00 11 42 FE 00
        00000044 commands.c:225:CmdPowerOn Card absent or mute
        00000017 ifdhandler.c:1096:IFDHPowerICC() PowerUp failed
        00000082 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000021 eventhandler.c:443:EHStatusHandlerThread() Error powering up card.
        00402818 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    I found that the SLE5528 is ISO 7816 compatible and should have an ATR, but it just chokes at PowerUp. With any PVC card with no chip, or an AT24C01 card, it does not choke but reports an immediate PowerUp failure. When I tried it on Windows (Windows 7, "runas other user", smartcard login), it chokes too, while PVC cards and AT24C01 report immediate failure.

    Read the article

  • Sending an array of arbitrary length through a socket. Endianness.

    - by Negai
    Hi everyone, I'm fighting with socket programming now and I've encountered a problem which I don't know how to solve in a portable way. The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application and parse it. I know there are functions like htonl, htons and so on to use with uint16 and uint32. But what should I do with chunks of data greater than that? Thank you.

    Read the article

  • Building both devel and normal versions of an RPM package

    - by ereOn
    Hi, I have a library from which I'd like to create two RPM packages. While I found several links on how to create a basic RPM package, I can't find how to create a devel package (see this question if you wonder what a devel package is). What do I have to do to generate both devel and non-devel versions of my RPM package? Thanks.
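
    A hedged sketch of the usual spec-file arrangement: a %package devel section makes rpmbuild emit a -devel subpackage next to the main one. Names and file lists below are placeholders, and a real spec also needs %description, %prep, %build and %install sections:

        # mylib.spec (minimal fragment, placeholder names)
        Name:    mylib
        Version: 1.0
        Release: 1
        Summary: Example shared library
        License: MIT

        %package devel
        Summary:  Headers and development files for %{name}
        Requires: %{name} = %{version}-%{release}

        %description devel
        Development files for %{name}.

        %files
        %{_libdir}/libmylib.so.*

        %files devel
        %{_includedir}/mylib/*.h
        %{_libdir}/libmylib.so

        # Building the spec then produces both mylib and mylib-devel RPMs:
        #   rpmbuild -ba mylib.spec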

    Read the article

  • PHP Extension using libtidy compiles, but does not load

    - by ewokker
    I wrote an extension in C++ that uses libtidy, and it runs perfectly under PHP when I compile PHP with --with-tidy. However, it would be nice to have the extension run on a vanilla PHP. When I try to use the extension, I get something like:

        PHP Warning:  PHP Startup: Unable to load dynamic library 'extension.so': undefined symbol: tidyCleanAndRepair in Unknown on line 0

    and the extension is not loaded. Obviously, the official tidy extension works fine. I have the relevant libtidy development packages installed on the system, and it compiles and links without a problem. I have tried to look through the code of the tidy extension, but it is a huge mass of macros - copying pieces at random felt like cargo-cult coding. Besides linking to the library with

        PHP_ADD_LIBRARY_WITH_PATH(tidy, $TIDY_LIBDIR, TIDY_SHARED_LIBADD)

    is there a PHP extension or C statement that fixes this error? Thanks in advance!!
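
    A few shell checks that can show whether the built extension.so actually records its dependency on libtidy; an undefined symbol at startup usually means the library was never linked in (paths are placeholders):

        # Does the extension list libtidy as a needed shared library?
        readelf -d extension.so | grep NEEDED

        # Which symbols are still undefined, and is tidyCleanAndRepair among them?
        nm -D --undefined-only extension.so | grep -i tidy

        # Does the dynamic loader resolve everything on this system?
        ldd -r extension.so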

    Read the article

  • Free utility which runs in Linux to create a UML class diagram from Java source files

    - by DeletedAccount
    I prefer to jot down UML diagrams on paper and then implement them using Java. It would be nice to have a utility which could create UML diagrams for me, which I may share online and include in the digital documentation. In other words: I want to create UML diagrams from Java source code. The utility must be able to:

        - Run in Linux.
        - Handle generics, i.e. show List<Foo> correctly in parameters and return types.
        - Show class inheritance and interface implementations.

    It's nice if the utility is also able to:

        - Run in Windows and Mac OS X.
        - Display enums in some nice manner.
        - Generate output in a diagram format which I may modify using some other utility.
        - Run from the command line.
        - Restrict the UML generation to a set of packages which I may specify.
        - Handle classes/interfaces which are not part of my source code. It could include the first external class/interface in the UML diagram, perhaps in another color to indicate that it is a library/framework created by someone else.
        - Focus on this task and not try to solve the whole issue of documentation.

    Read the article

  • Python Script to check website for a tag

    - by LinuxGnut
    Hello all. I'm trying to figure out how to go about writing a website monitoring script (a cron job in the end) that opens a given URL, checks to see whether a tag exists, and, if the tag does not exist or doesn't contain the expected data, writes something to a log file or sends an e-mail. The tag would be something like or something relatively similar. Anyone have any ideas?

    Read the article

  • Trap SIGPIPE when trying to write without reader

    - by Matt
    I am trying to implement a named-pipe communication solution in bash between two processes. The first process runs a script which echoes something into a named pipe:

        send(){
            echo 'something' > $NAMEDPIPE
        }

    The second process is supposed to read the named pipe via another script which contains:

        while true; do
            if read line < $NAMEDPIPE; then
                someCommands
            fi
        done

    Note that the named pipe has been previously created using the traditional command mkfifo $NAMEDPIPE. My problem is that the reader script is not always running, so if the writer script tries to write to the named pipe it stays blocked until a reader opens the pipe. I want to avoid this behavior, and one solution would be to trap a SIGPIPE signal. Indeed, according to man 7 signal, SIGPIPE is supposed to be sent when trying to write to a pipe with no reader. So I changed my send function to:

        send(){
            trap 'echo "SIGPIPE received"' SIGPIPE
            echo 'something' > $NAMEDPIPE
        }

    But when I run the writer script, it stays blocked, and no "SIGPIPE received" message appears... Am I mistaken about the signal mechanism, or is there a better solution to my problem? Thank you for your help.
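
    For what it's worth, a hedged sketch of one workaround: the block happens in open(), before any write takes place, so SIGPIPE never fires; bounding the open with coreutils timeout (an assumption about what's installed) sidesteps that:

        send() {
            # Opening a FIFO for writing blocks until a reader appears, so no
            # write ever happens and no SIGPIPE is raised. Bound the attempt:
            if timeout 1 sh -c "echo 'something' > $NAMEDPIPE"; then
                echo "message delivered"
            else
                echo "no reader on $NAMEDPIPE, giving up" >&2
            fi
        }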

    Read the article

  • Shell script to name videos on device

    - by Jordan
    I have a .sh script that automounts any USB device that is plugged in. I also need it to find out whether there are videos in a certain location on the device that was plugged in and, if so, write them to a videos.txt file along with the mount point. Here's what I have, and it's not working (${MOUNTPOINT}$count is the path to the mounted device):

        VIDEOS=ls ${MOUNTPOINT}$count/dcim/100Video | grep mp4
        if [ "$VIDEOS" -ne "" ] ; then
            "${MOUNTPOINT}$count" > ${MOUNTPOINT}$count/videos.txt;
            "$VIDEOS" >> ${MOUNTPOINT}$count/videos.txt;
        fi

    What am I doing wrong?
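
    A hedged sketch of how the same steps are usually written: command substitution to capture the listing, a string test instead of -ne, and echo to produce the lines (same paths as above):

        # Capture the listing with command substitution; 2>/dev/null hides the
        # error if the directory doesn't exist on this particular device.
        VIDEOS=$(ls "${MOUNTPOINT}$count/dcim/100Video" 2>/dev/null | grep mp4)

        # -n tests for a non-empty string (-ne is for integer comparisons).
        if [ -n "$VIDEOS" ]; then
            echo "${MOUNTPOINT}$count"  >  "${MOUNTPOINT}$count/videos.txt"
            echo "$VIDEOS"              >> "${MOUNTPOINT}$count/videos.txt"
        fi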

    Read the article

  • How can I link to a specific glibc version

    - by falstaff
    When I compile something on my Ubuntu Lucid 10.04 PC it gets linked against glibc. Lucid uses version 2.11 of glibc. When I run this binary on another PC with an older glibc, the command fails saying there's no glibc 2.11... As far as I know, glibc uses symbol versioning. Can I force gcc to link against a specific symbol version? In my concrete case, I am trying to compile a gcc cross-toolchain for ARM.
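
    A small diagnostic sketch (binary name is a placeholder); in practice the required versions come from whichever glibc the binary was built against, so building on or against the older glibc (chroot, container, or a crosstool-style sysroot) is the usual route rather than forcing symbol versions by hand:

        # Which glibc symbol versions does the binary ask for?
        objdump -T ./mybinary | grep -o 'GLIBC_[0-9.]*' | sort -u

        # The version-dependency section gives the same view per shared library:
        readelf -V ./mybinary | grep -A 5 '.gnu.version_r'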

    Read the article

  • Updating a single file in a compressed tar

    - by Phil
    Given a compressed archive file such as application.tar.gz, which contains a folder application/x/y/z.jar among others, I'd like to be able to take my most recent version of z.jar and update/refresh the archive with it. Is there a way to do this other than something like the following?

        tar -xzf application.tar.gz
        cp ~/myupdatedfolder/z.jar application/x/y
        tar -czf application.tar.gz application

    I understand the -u switch in tar may be of use to avoid having to untar the whole thing, but I'm unsure how to use it exactly.
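
    A hedged sketch of the -u route: tar only updates uncompressed archives, so the gzip layer has to come off and go back on, and -u appends a newer copy rather than replacing in place, so the archive grows slightly each time (paths as in the question):

        # tar refuses to update (-u) through -z, so peel off the gzip layer first.
        gunzip application.tar.gz                      # leaves application.tar

        # Put the new jar at the relative path the archive uses, then append it;
        # -u adds the file only if it is newer than the copy already archived.
        mkdir -p application/x/y
        cp ~/myupdatedfolder/z.jar application/x/y/
        tar -uf application.tar application/x/y/z.jar

        gzip application.tar                           # back to application.tar.gz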

    Read the article

  • What is the fastest way to display an image in QT on X11 without OpenGL?

    - by msh
    I need to display a raw image in a Qt widget. I'm running X11 on a framebuffer, so OpenGL is not available. Both the image and the framebuffer are in the same format - RGB565, but I can change it to any other format if needed. I don't need blending or scaling; I just need to display the pixels as they are. I'm using QPainter::drawImage, but it converts QImage to QPixmap and this conversion seems to be very slow. Also, it is backed by XRender, and I think there is unnecessary overhead required to support blending in XRender which I don't really need. Is there any better way? If it is not available in Qt, I can use Xlib or any other library or protocol. I can modify the driver, X server or anything else.

    Read the article

  • _dl_runtime_resolve -- When do the shared objects get loaded in to memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as a message goes through our system, much of it done by way of external libs. I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline of comparison). The main difference I see is the function do_lookup_x taking up a huge amount of time. Looking at the various calls to this function, they all seem to be called by the common function _dl_runtime_resolve. I'm not sure what this function does, but to me it looks like this is the first time the various shared libraries are being used, and they are then being loaded into memory by the dynamic loader. Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, and therefore we will see a massive slowdown on the first message, but on none of the subsequent ones? How do we go about avoiding this? Note: we operate on the microsecond scale.
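
    For context, _dl_runtime_resolve is the dynamic loader's lazy-binding resolver: the libraries are mapped at startup, but each symbol is only looked up on its first call, which matches the slow first message. A short sketch of how that is usually confirmed or disabled (binary and object names are placeholders):

        # Resolve every PLT entry at startup instead of at first call:
        LD_BIND_NOW=1 ./message_processor           # placeholder binary name

        # Or bake the same behaviour in at link time:
        g++ main.o -o message_processor -Wl,-z,now  # placeholder object files

        # Watch the lazy resolver at work to see where the first-call cost goes:
        LD_DEBUG=bindings ./message_processor 2> bindings.log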

    Read the article

  • Swapping columns in a file and removing duplicates

    - by LucaB
    Hi all, I have a file like this:

        term1 term2
        term3 term4
        term2 term1
        term5 term3
        .....
        .....

    What I need to do is remove duplicates in whatever order they appear, so that term1 term2 and term2 term1 count as duplicates of each other. It is a really long file, so I'm not sure what would be faster. Does anyone have an idea on how to do this? awk perhaps?
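
    A hedged awk one-liner along the lines the question hints at: build an order-independent key per line and keep only the first line that produces it (file names are placeholders):

        # Order the two fields inside the key so "term1 term2" and "term2 term1"
        # collide; !seen[key]++ prints a line only the first time its key appears.
        awk '{ key = ($1 < $2) ? $1 FS $2 : $2 FS $1 } !seen[key]++' input.txt > deduped.txt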

    Read the article

< Previous Page | 520 521 522 523 524 525 526 527 528 529 530 531  | Next Page >