Search Results

Search found 26947 results on 1078 pages for 'util linux'.

Page 352 of 1078

  • Why does the Java VM process eat up more RAM than specified in the -Xmx parameter?

    - by evilpenguin
    I have multiple servers running CentOS 5.4 and only one application running on a Java VM. I've configured the Java VM with the following arguments: java -Xmx4500M -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1024m -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true The machines I'm running the VM on have 6 GB of RAM and no other applications running. After a while, the java process starts to hit the swap space really hard; I get this info out of the top command: 7658 root 25 0 11.7g 3.9g 4796 S 39.4 67.3 543:54.17 java On the other hand, if I connect via JConsole, it reports the Java VM has 2.6 GB used, 4.6 GB committed and 4.6 GB max. java -version returns: java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode) Why is the Java VM expanding so far past its allocated heap size? And where does that memory go, if it's not reported in JConsole?
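
    The gap between RSS and -Xmx is usually native memory that the heap settings do not cover: thread stacks, the code cache, direct NIO buffers and GC bookkeeping all live outside the Java heap. A diagnostic sketch (not from the original post; PID 7658 is taken from the top output above) to compare the kernel's view with HotSpot's:

      pmap -x 7658 | tail -n 1    # total virtual/resident size as the kernel sees it
      jmap -heap 7658             # heap commitment as HotSpot itself reports it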

    Read the article

  • Recursively move files in sub-dirs to new sub-dirs of the same name

    - by Gabriel
    I have a batch of files all ending with the same string, i.e. *_ext.dat, located in several sub-dirs along with several other files, in a given main dir. This is the structure: /main_dir/subdir1/file11_ext.dat /main_dir/subdir1/file12_ext.dat /main_dir/subdir1/file13_ext.dat /main_dir/subdir1/file14_other.dat /main_dir/subdir1/file15_other.dat /main_dir/subdir2/file21_ext.dat /main_dir/subdir2/file22_ext.dat /main_dir/subdir2/file23_ext.dat /main_dir/subdir2/file24_other.dat /main_dir/subdir2/file25_other.dat /main_dir/subdir3/file31_ext.dat /main_dir/subdir3/file32_ext.dat /main_dir/subdir3/file33_ext.dat /main_dir/subdir3/file34_other.dat /main_dir/subdir3/file35_other.dat I need to recursively move only the files ending in *_ext.dat into a new main dir, new_dir, respecting the sub-dir structure, so the files end up in an equivalent dir structure like this: /new_dir/subdir1/file11_ext.dat /new_dir/subdir1/file12_ext.dat /new_dir/subdir1/file13_ext.dat /new_dir/subdir2/file21_ext.dat /new_dir/subdir2/file22_ext.dat /new_dir/subdir2/file23_ext.dat /new_dir/subdir3/file31_ext.dat /new_dir/subdir3/file32_ext.dat /new_dir/subdir3/file33_ext.dat Because of this, the command should also create those sub-dirs with their corresponding names. I know that with a line like this one: find . -name "*_ext.dat" -print0 | xargs -0 rm -rf I can delete all those files, but I don't know how to modify it to do what I need (or if it is even possible).
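
    One possible approach, shown as a sketch rather than taken from the original post (it assumes GNU find and that /new_dir may not exist yet): recreate each sub-dir on demand and move only the matching files, preserving their relative paths.

      cd /main_dir
      find . -name "*_ext.dat" -print0 | while IFS= read -r -d '' f; do
          mkdir -p "/new_dir/$(dirname "$f")"   # create the matching sub-dir if needed
          mv "$f" "/new_dir/$f"                 # move the file, keeping its relative path
      done

    GNU cp's --parents option, or rsync with include/exclude filters plus --remove-source-files, are other common ways to get the same result.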

    Read the article

  • Virtual machine on Ubuntu

    - by MITHIYA MOIZ
    I have configured a virtual machine on Ubuntu with the help of the article below: https://help.ubuntu.com/9.04/serverguide/C/libvirt.html I managed to finish everything except the major part: getting the virtual guest to talk to the real network, which I guess should be done via a bridge interface. In Virtual Machine Manager, whichever interface I try to choose, it tells me the interface is not bridged. When I try to bridge the interface eth0 as below: auto br0 iface br0 inet static address 192.168.0.223 network 192.168.0.0 netmask 255.255.255.0 broadcast 192.168.0.255 gateway 192.168.0.1 bridge_ports eth0 bridge_fd 9 bridge_hello 2 bridge_maxage 12 bridge_stp off I cannot reach the network through this interface, and the host server loses all communication with the network. But when I remove the bridge interface from /etc/network/interfaces and configure eth0 as below, it works fine: # The primary network interface auto eth0 iface eth0 inet static address 192.168.0.223 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 dns-nameservers 62.215.6.51 gateway 192.168.0.1 How can I set up the bridge interface correctly, and what should my /etc/network/interfaces file look like?
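
    A minimal sketch of how the interfaces file could look (an assumption, not from the original post; it keeps the same static addressing and assumes the bridge-utils package is installed). A common cause of the host losing connectivity is eth0 keeping its own address stanza while also being enslaved to the bridge, so here only br0 carries the IP:

      # /etc/network/interfaces (sketch)
      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet manual

      auto br0
      iface br0 inet static
          address 192.168.0.223
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 62.215.6.51
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0

    After restarting networking, the guest can then be attached to br0 as a shared physical device in Virtual Machine Manager.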

    Read the article

  • SSHing through an HTTP proxy

    - by Siler
    Typical scenario: I'm trying to SSH through a corporate HTTP proxy to a remote machine using corkscrew, and I get: ssh_exchange_identification: Connection closed by remote host Obviously, there are a lot of reasons this might be happening - the proxy might not allow this, the remote box might not be running sshd, etc. So, I tried to tunnel manually via telnet: $ telnet proxy.evilcorporation.com 82 Trying XX.XX.XX.XX... Connected to proxy.evilcorporation.com. Escape character is '^]'. CONNECT myremotehost.com:22 HTTP/1.1 HTTP/1.1 200 Connection established So, unless I'm mistaken... it looks like the connection is working. So why, then, doesn't it work via corkscrew? ssh -vvv [email protected] -p 22 -o "ProxyCommand corkscrew proxy.evilcorporation.com 82 myremotehost.com 22" OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Executing proxy command: exec corkscrew proxy.evilcorporation.com 82 myremotehost.com 22 debug1: permanently_set_uid: 0/0 debug1: permanently_drop_suid: 0 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: identity file /root/.ssh/id_ed25519 type -1 debug1: identity file /root/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1 ssh_exchange_identification: Connection closed by remote host
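
    A quick check worth running (a sketch, not a confirmed diagnosis from the thread): invoke corkscrew on its own and see whether the remote SSH banner ever comes back through the proxy. Many corporate proxies accept the CONNECT but then drop or time out anything that does not look like HTTPS, in which case the usual workaround is an sshd listening on port 443.

      corkscrew proxy.evilcorporation.com 82 myremotehost.com 22
      # a working tunnel prints a banner such as: SSH-2.0-OpenSSH_...
      # if nothing appears before the connection closes, the proxy is interfering after CONNECT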

    Read the article

  • Setting up a Dante SOCKS server

    - by skerit
    I want to tunnel all my internet traffic through my VPS, so I'm trying to install a proxy server. However, I can't seem to browse the internet through Dante: I get the ERR_EMPTY_RESPONSE error. This is my config: logoutput: stderr /home/user/dantelog internal: eth1 port=1080 external: eth1 method: username pam user.privileged: proxy user.notprivileged: nobody user.libwrap: nobody client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 } Do I really have to run two proxy servers, one for HTTP and one for SOCKS, or is there something else I can do?
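
    One thing the posted config appears to lack is a SOCKS-level rule: Dante needs a socks pass block (plain pass in older releases) in addition to client pass, otherwise every request is rejected after the client handshake. A sketch, with syntax that may need adjusting to the installed Dante version; note that a browser pointed at a SOCKS proxy carries HTTP as well, so a second proxy should not be needed:

      socks pass {
              from: 10.0.0.0/8 to: 0.0.0.0/0
              command: bind connect udpassociate
              log: error connect disconnect
      }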

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:- #!/bin/sh # cherrypy_server.sh PROCESSES=10 THREADS=1 # threads per process BASE_PORT=3035 # the first port used # you need to make the PIDFILE dir and insure it has the right permissions PIDFILE="/var/run/cherrypy/myproject.pid" WORKDIR=`dirname "$0"` cd "$WORKDIR" cp_start_proc() { N=$1 P=$(( $BASE_PORT + $N - 1 )) ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0 } cp_start() { for N in `seq 1 $PROCESSES`; do cp_start_proc $N done } cp_stop_proc() { N=$1 #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"` [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop rm -f "$PIDFILE-$N" } cp_stop() { for N in `seq 1 $PROCESSES`; do cp_stop_proc $N done } cp_restart_proc() { N=$1 cp_stop_proc $N #sleep 1 cp_start_proc $N } cp_restart() { for N in `seq 1 $PROCESSES`; do cp_restart_proc $N done } case "$1" in "start") cp_start ;; "stop") cp_stop ;; "restart") cp_restart ;; *) "$@" ;; esac From the bash script, we can essentially do 3 things: start the cherrypy server by calling ./cherrypy_server.sh start stop the cherrypy server by calling ./cherrypy_server.sh stop restart the cherrypy server by calling ./cherrypy_server.sh restart How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when a machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
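
    A sketch of a wrapper unit (the script path is an assumption; the script itself stays unchanged). Because the script launches several daemons and then exits, the usual pattern is a oneshot service that systemd keeps marked as active:

      # /etc/systemd/system/cherrypy.service (sketch)
      [Unit]
      Description=CherryPy server pool
      After=network.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/path/to/cherrypy_server.sh start
      ExecStop=/path/to/cherrypy_server.sh stop
      ExecReload=/path/to/cherrypy_server.sh restart

      [Install]
      WantedBy=multi-user.target

    Enable it at boot with systemctl enable cherrypy.service, then drive it with systemctl start/stop/reload cherrypy.service.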

    Read the article

  • Strange port forwarding problem

    - by rAyt
    I've got a strange port forwarding problem. The port forwarding to my internal webserver (10.0.0.10 on port 80) works without a problem, but the port forwarding to a Windows server (10.0.0.15) on port 3389 doesn't work. Port 3389 is open. Any ideas? Thanks! #!/bin/sh IPTABLES="/sbin/iptables" $IPTABLES --flush $IPTABLES --table nat --flush $IPTABLES --delete-chain $IPTABLES --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE $IPTABLES -t nat -A PREROUTING -p tcp -i eth0 -d 188.40.XXX.XXX --dport 3389 -j DNAT --to 10.0.0.15:3389 $IPTABLES -t nat -A PREROUTING -p tcp -i eth0 -d 188.40.XXX.XXX --dport 80 -j DNAT --to 10.0.0.10:80 $IPTABLES -t nat -A PREROUTING -p tcp -i eth0 -d 188.40.XXX.XXX --dport 222 -j DNAT --to 10.0.0.10:22 $IPTABLES --append FORWARD --in-interface eth1 -j ACCEPT
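
    Two things worth checking, shown as a sketch with assumptions rather than a confirmed fix: the FORWARD chain above only explicitly accepts traffic arriving on eth1, so the DNAT-ed RDP packets may be relying on the default policy, and the Windows host has to route its replies back through this box or the NAT never completes.

      # explicitly allow the translated RDP traffic through the FORWARD chain
      $IPTABLES --append FORWARD --in-interface eth0 -p tcp -d 10.0.0.15 --dport 3389 -j ACCEPT
      # make sure forwarding is enabled at all
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # on 10.0.0.15: the default gateway must be this machine's internal address,
      # and the Windows firewall must allow 3389 from outside the local subnet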

    Read the article

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated hosting) and to host the websites at our office's datacentre. We're running 7 websites, with about 900 unique hits per day on average at the moment. We have 2 servers set aside for this - one is a Dell PowerEdge 1850 (Intel Xeon 3 GHz x2, 4 GB RAM, 73 GB HDD) and the other is an HP DL 380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD). a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS. b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go. c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office. About the websites - of the 7 websites, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP wherein end users can view products and order them online. I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess it gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.

    Read the article

  • Accessing webserver behind cheap router

    - by malfist
    I have a TRENDnet wireless/wired router, and inside the LAN I have a webserver on 192.168.10.103:80. Does anyone know how I can access the webserver from outside the LAN? I set up a "Virtual Server" entry to port-forward publicIP:8080 to 192.168.10.103:80, but it never loads. Port scanning the external IP shows the port as "filtered" on the router, and from the inside it shows 192.168.10.103:80 as open. Does anyone know how I can make this work?

    Read the article

  • Anonymous user with ProFTPD on Fedora

    - by stukerr
    Hi there, I am trying to set up an anonymous user account on our server to enable people to download technical manuals for our products etc., and I would like this to be as secure as possible! I was just wondering if anyone knew a series of steps that will allow me to create an anonymous FTP account, linked to a directory on the server, that allows download only? Also, how could I make a corresponding FTP account with write privileges to the same directory so that people within our company can upload new files? Sorry, I'm a bit new to all this! Many thanks, Stuart
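
    A sketch of the anonymous, download-only part of /etc/proftpd.conf (the directory path is an assumption; Fedora's proftpd package already ships the ftp system user this relies on):

      <Anonymous /var/ftp/manuals>
          User                  ftp
          Group                 ftp
          UserAlias             anonymous ftp
          RequireValidShell     off
          MaxClients            10
          # anonymous visitors may only download
          <Limit WRITE>
              DenyAll
          </Limit>
      </Anonymous>

    For internal uploads, a regular system account with filesystem write permission on the same directory is usually simpler and safer than giving the anonymous tree any write access.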

    Read the article

  • OpenSSL tutorial not fully working - can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework assignment. We have a bunch of PDF files; I'm the CA and each one should send me a signed PDF for me to verify. I've told them to do the following (and have tried it myself): Request and obtain a certificate (I'll skip this part) Create a MIME message with the PDF file in it makemime -c "text/pdf" -a "Content-Disposition: attachment; filename="Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg Sign with OpenSSL openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt Verify with openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt Extract the attachment with munpack Elaborato-verified.msg View with Acrobat Reader The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
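
    One likely culprit, offered as an assumption rather than a confirmed diagnosis: without -binary, the S/MIME code canonicalises line endings, which quietly damages binary payloads such as PDFs and would explain the extracted file being slightly smaller. A sketch that skips makemime and signs the PDF directly in binary mode:

      openssl smime -sign -binary -nodetach -in Elaborato.pdf -out Elaborato.pdf.p7m \
          -inkey nomegruppo.key -signer nomegruppo.crt -certfile ca.pem
      openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem -out Elaborato-verified.pdf

    If md5sum shows Elaborato-verified.pdf matching the original, the rest of the procedure is fine and only the canonicalisation step was at fault.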

    Read the article

  • Weird noise while scanning, using scanimage and a Canon LiDE 35

    - by Manu
    I'm trying to scan a bunch of images using SANE's scanimage: scanimage --format=tiff --batch --batch-prompt This command scans the first picture perfectly, but as soon as I press enter, the scanner makes a weird noise and the scanning "arm" moves very, very slowly. If I stop scanimage and start again, it scans normally again. Is there another scanimage option that I need to add? I've checked the man page, but can't see what I'm missing. Edit: the problem seems to be that the scanning "arm" doesn't go back to its original position after the first scan.

    Read the article

  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on a x86_64 Ubuntu 12.04 host. I've set up a bridge using instructions here. However the ping from the container only goes through to the host and not to other machines on the local network. Similarly only the host and not the other machines see the container OS. The host's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback iface eth0 inet manual auto br0 iface br0 inet dhcp bridge_ports eth0 bridge_fd 0 bridge_maxwait 0 The container's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp And here's the relevant part of the container's config: lxc.network.type=veth lxc.network.link=br0 lxc.network.flags=up Any ideas what I'm doing wrong? Additional info: The output of iptables-save on host: $ sudo iptables-save # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *filter :INPUT ACCEPT [6854:721708] :FORWARD ACCEPT [4067:538895] :OUTPUT ACCEPT [4967:522405] COMMIT # Completed on Sat Oct 26 06:06:48 2013 # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *nat :PREROUTING ACCEPT [82235:21547307] :INPUT ACCEPT [16:1070] :OUTPUT ACCEPT [9386:583359] :POSTROUTING ACCEPT [14693:1291952] -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE COMMIT # Completed on Sat Oct 26 06:06:48 2013 The output of brctl show on host: $ brctl show bridge name bridge id STP enabled interfaces br0 8000.080027409684 no eth0 vethBkwWyV The output of ifconfig br0 on host: $ ifconfig br0 br0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet addr:192.168.1.11 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:232863 errors:0 dropped:0 overruns:0 frame:0 TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:34437354 (34.4 MB) TX bytes:198492871 (198.4 MB) The output of ifconfig eth0 on host: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:299419 errors:0 dropped:0 overruns:0 frame:0 TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:59077446 (59.0 MB) TX bytes:372056540 (372.0 MB) The output of ifconfig eth0 on container: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:16:3e:74:08:2b inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:81 errors:0 dropped:0 overruns:0 frame:0 TX packets:113 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8506 (8.5 KB) TX bytes:9021 (9.0 KB)

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. The system has 16 GB of memory, and 8x1 TB drives running behind a 3ware RAID card. The storage is managed via LVM. Short version: Running a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk that it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation: Stop the KVM guest. Extend the size of the LVM partition: "lvextend -L+500Gb ..." Check the filesystem: "e2fsck -f /dev/mapper/..." Resize the filesystem: "resize2fs /dev/mapper/" Start the guest. The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again; given the new size of the filesystem we expected this to take a while, however it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that there are a lot of block input/output operations occurring. So now my question is threefold: Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. What is the most likely cause? (The 3ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted and nothing in dmesg looks important/unusual.) Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions in a different filesystem in the future to avoid either this corruption (whatever the cause) or reduce the fsck time? XFS seems like the obvious candidate?
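
    Not a fix, but a way to tell a stalled check from a merely slow one (a sketch; the device name is a placeholder): e2fsck can report its progress while running, and keeping the log helps if the check has to be repeated.

      e2fsck -f -C 0 /dev/mapper/VOLGROUP-GUESTLV 2>&1 | tee /root/fsck-guest.log
      # -C 0 prints a completion bar for each pass, so progress (or the lack of it) is visible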

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short I am working with Tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of Tar to decode PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple cat $input-file | base64 -d >$output-file will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I desire to be hooked into various moments in Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu version 13.04 system. Background Information not included in the Long Story Short I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (haven't pushed that to GitHub yet). However, the images present in said Tar archive cannot be manipulated unless they are decoded from base64 encoding.
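
    A sketch that sidesteps the backup-specs machinery entirely and uses GNU tar's --to-command hook instead (the archive name is an assumption): tar pipes each member's contents to the helper on stdin and sets TAR_FILENAME for it, so the helper can write the decoded file into place.

      #!/bin/sh
      # decode-member.sh: run by tar once per extracted member;
      # stdin carries the member's base64 contents, TAR_FILENAME its path in the archive.
      mkdir -p "$(dirname "$TAR_FILENAME")"
      base64 -d > "$TAR_FILENAME"

    With the helper saved and made executable (chmod +x decode-member.sh), extraction becomes a single command:

      tar -xf effect.tar --to-command=./decode-member.sh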

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server onto my local device, but when I combine it with a cron job my ssh times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server. The backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal like this: rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP but when I combine it with a cron job like this: 10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP the ssh connection times out. When the cron job executes, it sends a mail to the root user with output like this: From local.xx.xx.xx Tue Jul 2 11:20:17 2013 X-Original-To: username Delivered-To: [email protected] From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=username> X-Cron-Env: <USER=username> X-Cron-Env: <HOME=/Users/username> Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST) ssh: connect to host IP_ADDRESS port XX: Operation timed out rsync: connection unexpectedly closed (0 bytes received so far) [receiver] rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9] So the rsync command works when typed in a terminal but not when used by a cron job. Can anybody explain this?
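
    A sketch of one way to make the job self-contained (paths, port and key file are assumptions): cron runs with a minimal environment, so spell out the ssh transport explicitly, log the output, and keep the crontab line itself trivial. Note also that there should be no space after the remote ":" in the rsync source argument.

      #!/bin/sh
      # /Users/username/bin/rsync-backup.sh -- called from cron
      /usr/bin/rsync -chavzP --stats \
          -e "/usr/bin/ssh -p 22 -i /Users/username/.ssh/id_rsa" \
          USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP \
          >> /Users/username/rsync-backup.log 2>&1

      # crontab entry:
      # 10 11 * * * /Users/username/bin/rsync-backup.sh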

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this: IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc" As the error remained, I read that I should rebuild the module dependencies on the virtual machine: depmod -a but this returned an error: WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory So I read about creating the directory empty and re-running "depmod -a". I now don't get the dependencies error, but get this instead, and I don't have a clue how to proceed: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Module ip_tables not found. iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
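
    A sketch of the usual approach, run on the OpenVZ hardware node rather than inside the container (CTID and the module list are assumptions, and the names accepted vary with the node kernel): the container has no kernel or modules of its own, so depmod inside it cannot help; the modules must be loaded on the node and the container granted access to them.

      # on the hardware node
      for m in ip_tables iptable_filter iptable_mangle iptable_nat \
               ipt_state ipt_REJECT ipt_LOG ip_conntrack ip_conntrack_ftp; do
          modprobe "$m"
      done
      vzctl set CTID --iptables "ip_tables iptable_filter iptable_mangle iptable_nat ipt_state ipt_REJECT ipt_LOG" --save
      vzctl restart CTID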

    Read the article

  • Installing the GNU Scientific Library and linking it to a program

    - by jack
    I am trying to install a statistical program which requires the GNU Scientific Library (GSL). I have successfully installed GSL through the yum command, but my statistical program gives an error when I try to run make. I think there is a linking problem. How can I solve it? $ sudo yum install gsl.x86_64 Installed: gsl.x86_64 0:1.15-3.fc16 Dependency Installed: atlas.x86_64 0:3.8.4-1.fc16 $ tar -xvzf prog.tgz $ cd prog $ make $ gcc -O3 -Wall -Wshadow -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVER32 -I/opt/local/include/ -L/opt/local/lib/ -c -o prog.o prog.c In file included from prog.c:16:0: prog.h:7:30: fatal error: gsl/gsl_sf_gamma.h: No such file or directory compilation terminated. make: *** [prog.o] Error 1
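
    A sketch of the usual fix (package and flags assumed for Fedora 16): the gsl package only ships the runtime library, while the missing gsl/gsl_sf_gamma.h header lives in gsl-devel; gsl-config then reports the right compile and link flags.

      sudo yum install gsl-devel.x86_64
      gcc -O3 -Wall $(gsl-config --cflags) -c -o prog.o prog.c   # compile against the GSL headers
      gcc -o prog prog.o $(gsl-config --libs)                    # link against -lgsl -lgslcblas -lm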

    Read the article

  • unable to run OpenOffice from text mode

    - by dilip
    I have installed OpenOffice on Red Hat 5 in text mode. When I tried to start OpenOffice using the command: sudo /usr/lib/openoffice/program/soffice "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" -nologo -headless -nofirststartwizard it showed an error saying: javaldx: Could not find a Java Runtime Environment! So I installed a JRE on the system, and then I did not get any error, but OpenOffice does not start. I also checked for any process related to OpenOffice, but I haven't seen one. Can anyone help me fix this issue?

    Read the article

  • Problems using ssh from cron

    - by Travis
    I am attempting to automate a script that executes commands on remote machines via ssh. I have public key authentication set up between the machines using ssh-agent. The script runs fine when executed from the command prompt. I suspect my problem is that cron isn't starting the ssh-agent due to its minimalist environment. Here is the output when I add the -v flag to ssh: debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Next authentication method: gssapi-with-mic debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering public key: /home/<user>/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 149 debug1: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> debug1: Trying private key: /home/<user>/.ssh/id_dsa debug1: Next authentication method: password debug1: Authentications that can continue: publickey,gssapi-with-mic,password Permission denied, please try again. debug1: Authentications that can continue: publickey,gssapi-with-mic,password Permission denied, please try again. debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: No more authentication methods to try. Permission denied (publickey,gssapi-with-mic,password). How can I make this work? Thanks!
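
    One common workaround, sketched with keychain (a third-party helper that must be installed; file names are assumptions): keep one long-running agent and let non-interactive shells pick up its environment. The PEM_read_PrivateKey failure above suggests the key has a passphrase, which is exactly what the agent is meant to supply.

      # in ~/.bash_profile (interactive logins): start or reuse an agent and load the key
      keychain ~/.ssh/id_rsa

      # at the top of the script that cron runs: import the agent's environment
      . "$HOME/.keychain/$(hostname)-sh"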

    Read the article
