Search Results

Search found 24965 results on 999 pages for 'linux kvm'.

  • How to configure OpenVPN server to use custom default gateway?

    - by Arenim
    I have a VPN server at address 10.1.0.2, and the server has another IP in its subnet -- 10.0.0.2 (it's a tun2socks router). But the server's default gateway is NOT 10.0.0.2 (and that's OK) but another, external IP. I want all the clients' traffic to be forwarded through this IP address -- 10.0.0.2. Here is part of my server's config:

        dev tap0
        server-bridge 10.1.0.1 255.255.255.0 10.1.0.50 10.1.0.100
        push "route 10.0.0.0 255.255.255.0" ; now clients can ping 10.0.0.2
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 10.1.0.1"
        push "dhcp-option WINS 10.1.0.1"

    In fact I want something like:

        push "redirect-gateway 10.0.0.2"

    How can I achieve this?
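
    One directive worth testing here (a sketch -- not verified against this exact topology) is OpenVPN's route-gateway, which sets the gateway that a subsequently pushed redirect-gateway will use:

        ; make clients install their new default route via 10.0.0.2
        ; instead of via the VPN server itself
        push "route-gateway 10.0.0.2"
        push "redirect-gateway def1 bypass-dhcp"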

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote:, nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both.

    EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big concern for me.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formulate the question formally: it all began with the question of how to rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under these conditions:

    1. The root account is locked on both systems, i.e., there are no root passwords; the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
    2. I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
    3. The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
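
    A common workaround (a sketch, not verified on this exact setup) is to let the unprivileged account run only rsync as root, via a narrow NOPASSWD sudoers rule on the remote side, and to have the local rsync invoke "sudo rsync" there:

        # on the remote system (visudo): let 'me' run rsync as root
        # without a password; the path is an assumption (check `which rsync`)
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # on the local system: one interactive sudo covers local root reads;
        # -E keeps SSH_AUTH_SOCK visible so the agent still supplies the key
        sudo -E rsync -av --rsync-path="sudo rsync" /root/ me@remote:/root/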

  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect. I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cronjob which would periodically check for existing sftp-server or sshd processes and launch the script when no such process is running. Is there any easier way to do this?
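
    If the cron route turns out to be the only option, a minimal sketch could look like the following (the marker-file approach and all paths are assumptions):

        #!/bin/sh
        # run from cron every minute; fires the cleanup script once,
        # right after the last sftp-server session disappears
        if pgrep -x sftp-server >/dev/null; then
            touch /tmp/sftp_was_active
        elif [ -f /tmp/sftp_was_active ]; then
            rm -f /tmp/sftp_was_active
            /home/mates/cleanup.sh
        fi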

  • Two distinct mount points with one device

    - by user1761555
    After being disappointed with Ubuntu's release-update feature, I finally decided to have separate mount points for / and /home. Towards this, I reformatted my HDD, giving most of the drive to sda1 (meant to be /home) and allocating about 40 GB to the rootfs (/). Unfortunately, I would also like to have a /projects, which is to be located on sda1. Currently, sda1 is being mounted as:

        /dev/sda1 on /home type ext4 (rw)

    I've tried looking online for a solution to this problem, however I'm not sure what to look for! Is it possible to mount the 'home' directory of sda1 as /home and the 'projects' directory of sda1 as /projects?
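
    Yes -- a bind mount does exactly this: it exposes a directory of an already-mounted filesystem at a second mount point. A minimal /etc/fstab sketch (device and type taken from the question; the projects directory is assumed to live on sda1 under /home):

        /dev/sda1        /home       ext4   defaults   0   2
        /home/projects   /projects   none   bind       0   0

    After this, /home/projects and /projects are the same directory on sda1.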

  • Directory permissions on Ubuntu Server 10.04 LTS

    - by SebastianOpperman
    I have set up a second drive on my Ubuntu Server. The directory displays correctly, but Windows users cannot write or create files in the directory. I have Samba set up so Windows can access the drives. Here is the last bit of my /etc/samba/smb.conf:

        [personeel]
        path = /media/windows
        browsable = yes
        guest ok = yes
        writable = yes
        read only = no
        create mask = 0775
        directory mask = 0775

    I want the directory to be shared with writable permissions for everyone who can access the Ubuntu Server. I have tried sudo chmod, but with no success. Any help would be appreciated.
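
    Samba can only grant what the filesystem itself allows, so the guest account (typically nobody) must be able to write to the underlying directory. A sketch under that assumption:

        # give the shared directory guest-writable ownership and modes
        sudo chown -R nobody:nogroup /media/windows
        sudo chmod -R 0775 /media/windows

    Alternatively, add force user = someuser to the [personeel] share, make that user the directory's owner, and restart Samba.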

  • The Linux kernel 3.15 is released, enabling faster suspend and resume

    As is customary, Linus Torvalds, the father of the Linux kernel, has announced the release of the stable version of Linux 3.15. Performance improvements are at the heart of the changes in this third release of the famous open-source kernel since the beginning of the year. The new kernel considerably reduces system suspend and resume times for laptops. The new...

  • Netbeans automatically changes the file owner when updating files

    - by Alon_A
    We use the NetBeans IDE 7.2 to edit our PHP files. In the Run Configuration it is configured as a Remote Web Site, to automatically save the changes to our web server (CentOS 6.3). The problem is that every time it updates the files, the owner of the file is changed from apache:apache to userThatUploadedTheFile:users. This causes us problems with SOAP cache files that are configured with apache:apache ownership, and we need to manually chown them back to apache:apache. We've checked the "Preserve Remote File Permissions" checkbox, so the permissions are not changed, only the owner. Is there any solution to preserve the ownership?
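
    A blunt workaround (a sketch only -- the cache path and schedule are assumptions, and it doesn't stop the ownership change itself) is a root cron job that periodically hands the cache back to apache:

        # root's crontab (sudo crontab -e): re-own the SOAP cache every minute
        * * * * * chown -R apache:apache /var/www/html/soap-cache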

  • yum security update - message indicating kernel version not up to date

    - by JMC
    Running yum --security check-update returns this message:

        Security: kernel-3.x.x-x.63 is an installed security update
        Security: kernel-3.x.x-x.29 is the currently running version

    I already ran the yum security update on the kernel, but it looks like it didn't change the version running on the system. What needs to be done to make it run the new kernel? Are there any concerns about why it didn't change during the installation process? The yum log just shows "Installed" for the new kernel, with no error messages.
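
    This is expected: yum installs a new kernel alongside the running one and updates the bootloader, but the new kernel only takes effect after a reboot. A quick check (standard commands):

        uname -r                          # the kernel running right now
        rpm -q kernel --last | head -1    # the newest kernel installed
        sudo reboot                       # boot into the new kernel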

  • Improving sound quality with remote ESD server

    - by cuu508
    Hi, I'm investigating low-budget ways to get audio from my PC (Ubuntu) to my HiFi without wires. I'm currently testing a setup where an Asus WL-500gP wireless router runs the ESD daemon and has an attached USB sound card which is plugged into the HiFi. I'm testing playback on the PC with mpg123-esd and Spotify under Wine. The sound is there and latency is unexpectedly low, but I also hear occasional clicks and some distortion from time to time. I suppose that's because of the low latency and the wireless streaming of uncompressed audio -- any packet drops, the CPU temporarily being busy, etc. will cause clicks in the sound output. Is there a way around this problem, perhaps by increasing the latency / buffer size somehow? Streaming using the Shoutcast protocol seems to be a way out, but I have a feeling that would be a complex and brittle setup.
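
    One knob to try before switching protocols (a sketch -- how much it helps depends on where the drops happen) is mpg123's output buffer, which decouples decoding from the network sink at the cost of extra latency:

        # -b takes a buffer size in KiB; a couple of MiB of slack can
        # absorb short Wi-Fi stalls
        mpg123 -b 2048 track.mp3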

  • Creating full, global clang+llvm environment

    - by Griwes
    What is the easiest way to set up full Clang, libc++ and LLVM as the default global toolchain? All of my attempts to build it, in most of the configurations I could think of, resulted in a working Clang -- but one that didn't use the libc++ headers, only GCC's default libstdc++ ones, resulting in numerous faults from incompatible pieces of library code. I would like it to work out of the box, without having to do magic in .bashrc or pass all those -stdlib=libc++ and -lc++ flags to the compiler and linker.
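
    When building LLVM/Clang from source, newer releases can bake these defaults in at configure time, so no per-invocation flags or shell exports are needed (the option names come from LLVM's CMake cache and may vary between versions -- treat this as a sketch):

        cmake ../llvm -DCMAKE_BUILD_TYPE=Release \
              -DCMAKE_INSTALL_PREFIX=/usr/local \
              -DCLANG_DEFAULT_CXX_STDLIB=libc++

    A clang++ built this way picks the libc++ headers and links against libc++ by default, with no -stdlib or -lc++ on the command line.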

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 pm on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now, this could be an FTP hack or some weakness in code somewhere; I'm not sure till I get there. Does anyone have any tips on how I can track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help appreciated.
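
    For first response, a handful of read-only commands (standard tools -- run them before changing anything, so evidence isn't destroyed) usually narrows down the entry point quickly:

        w                                    # who is logged in right now
        last -a | head -20                   # recent logins and source hosts
        netstat -tnp                         # live connections and owning processes
        ps auxf                              # process tree; look for odd daemons
        find / -mtime -1 -type f 2>/dev/null # files modified in the last day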

  • Debian: Unable to mount a second drive as a subdirectory inside another partition

    - by jkndrkn
    Hello. I have the following /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point>     <type>      <options>                  <dump> <pass>
        proc            /proc             proc        defaults                   0      0
        /dev/md1        /                 ext3        defaults,errors=remount-ro 0      1
        /dev/md0        /boot             ext3        defaults                   0      2
        /dev/md5        /home             ext3        defaults                   0      2
        /dev/md3        /opt              ext3        defaults                   0      2
        /dev/md6        /tmp              ext3        defaults                   0      2
        /dev/md2        /usr              ext3        defaults                   0      2
        /dev/md4        /var              ext3        defaults                   0      2
        /dev/md7        none              swap        sw                         0      0
        /dev/sdc        /home/httpd       ext3        defaults                   0      2
        /dev/hda        /media/cdrom0     udf,iso9660 user,noauto                0      0
        /dev/sdc1       /mnt/usb/backup-1 auto        defaults                   0      0

    I am unable to get /dev/sdc to mount at /home/httpd on reboot. The /home/httpd directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message:

        mount: you must specify the filesystem type

    This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
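
    Note that the same disk shows up twice -- whole (/dev/sdc) and partitioned (/dev/sdc1) -- and only one of the two nodes can actually carry the filesystem, which would explain why auto-detection fails at boot. A hedged way to check and fix (device names from the question; verify before editing):

        # which node really has an ext3 signature?
        blkid /dev/sdc /dev/sdc1
        # keep only the matching node in fstab, with an explicit type, e.g.:
        #   /dev/sdc   /home/httpd   ext3   defaults   0   2
        # and drop or correct the conflicting line for the other node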

  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet on my Android; the problem is they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on my Android to force all applications to connect through it. The problem is that I would need to root the phone to make it work, and I don't want to do that because it's not really my phone and rooting is a little risky. Another solution, which is safer, is to make my computer act as a gateway, so I put my Ubuntu machine's IP in the gateway setting of the phone. I'm running a small proxy on my Ubuntu box (cntlm), so I redirect the Android traffic to it. I did it with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (web site not found, Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google) I get an error message from the school's proxy (so it worked in some way -- my PC redirected the phone's traffic to the Squid proxy). The error message is "The requested URL could not be retrieved", while trying to process the request:

        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...
        ...

    I think the problem is in the request line: it should be GET http://74.125.143.101/ HTTP/1.1. But I don't understand what's happening, and I'm a certified CCNA.
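
    That diagnosis fits how HTTP proxies work: cntlm expects proxy-form requests (GET http://host/ HTTP/1.1), while an iptables REDIRECT delivers origin-form ones (GET / HTTP/1.1). A common bridge is redsocks, which accepts transparently redirected connections and rewrites them into proxy form. A config fragment under that assumption (untested here; ports match the question, and only the relay section is shown):

        redsocks {
            local_ip = 0.0.0.0;
            local_port = 8123;
            ip = 127.0.0.1;
            port = 8888;
            type = http-relay;
        }

    with the two REDIRECT rules pointed at port 8123 instead of 8888.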

  • Migrating a running production server to Xen, unmodified as a second HDD?

    - by DaveCol
    I have a production server which I am looking to virtualize via Xen. For this purpose I have purchased a new SATA HDD, on which I have promptly installed CentOS 5.5 x64 with Xen server. Now I have two drives: /dev/sda1, running as the host with Xen server installed; and /dev/sda2, which is the drive where the original server is installed. Is it possible to use /dev/sda2 as a guest OS in a Xen server? Would I have to modify its kernel? Thank you for any input
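
    Whether the kernel needs modifying depends on the virtualization mode: paravirtualized (PV) guests need a Xen-aware kernel, while full virtualization (HVM, requiring VT-x/AMD-V) boots an unmodified system as-is. A sketch of an HVM guest config under that assumption (name, sizes and the bridge are placeholders):

        # /etc/xen/oldserver.cfg -- started with: xm create oldserver.cfg
        name    = "oldserver"
        builder = "hvm"
        memory  = 2048
        vcpus   = 2
        disk    = [ 'phy:/dev/sda2,hda,w' ]
        vif     = [ 'bridge=xenbr0' ]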

  • What's the default username and password for an Ubuntu live CD?

    - by Rory McCann
    What's the username and password for an Ubuntu live CD image? I ask because I've recently copied the contents of an Ubuntu-based live ISO (EasyPeasy, the distro for netbooks) onto a hard disk, but the squashfs is corrupt -- most likely because I copied it live. :) So it's not auto-logging in. Is there a username/password for this? Update: I tried username ubuntu and a blank password; it didn't work.

  • Debian Lenny: problem modifying a static IP

    - by supertiti
    Hello all, I'm trying to change the static IP assigned to a Debian VM. I modified the /etc/network/interfaces file, but Debian doesn't seem to like the new settings. Currently the machine's IP is set to 192.168.1.136, and I want it to be 192.168.1.8. Here's my modified /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        allow-hotplug eth0
        auto eth0
        iface eth0 inet static
            address 192.168.1.8
            gateway 192.168.1.1
            netmask 255.255.255.0
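
    The stanza itself looks fine; note that editing the file does not touch the running interface, so the change has to be applied afterwards:

        sudo ifdown eth0 && sudo ifup eth0      # re-read the eth0 stanza
        ip addr show eth0                       # confirm 192.168.1.8 took
        # or restart networking wholesale on lenny:
        sudo /etc/init.d/networking restart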

  • Open application in background without losing current window focus. Fedora 17, Gnome 3

    - by Ishan
    I'm running a script in the background which loads an image with feh depending on which application is currently in focus. However, whenever the script opens the image, window focus is lost to feh. I was able to circumvent this by using xdotool to switch back to the application that was originally in focus, but this introduces a short annoying period of time where the focus is switched from feh to the application. My question is this: is there any way to launch feh in the background such that window focus is NOT lost? System: Fedora 17, Gnome 3, Bash Thanks a ton!
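
    Short of patching feh, the existing workaround can at least be tightened so that focus is handed back as soon as feh's window appears, shrinking the visible switch -- a sketch (the window-matching details are assumptions):

        #!/bin/bash
        active=$(xdotool getactivewindow)             # remember current focus
        feh "$1" &
        xdotool search --sync --pid $! . >/dev/null   # block until feh maps a window
        xdotool windowactivate "$active"              # hand focus straight back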

  • Solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --download-only so that the package and its dependencies are downloaded into my VM, and my server can use the VM as the source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I was working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
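
    Plan A can work without any proxy at all, because apt keeps the packages it downloads on disk. A sketch:

        # on the connected VM: fetch nginx and its dependencies without
        # installing anything -- the .debs land in /var/cache/apt/archives
        sudo apt-get install --download-only nginx

        # ship the cached packages to the offline server and install them
        scp /var/cache/apt/archives/*.deb user@devserver:/tmp/debs/
        ssh user@devserver 'sudo dpkg -i /tmp/debs/*.deb'

    Tools like apt-cacher (mentioned in the update) automate the same idea as a shared package cache.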

  • E: Internal Error, Could not perform immediate configuration (2) on libattr1? in Ubuntu

    I am working with the latest version of Ubuntu. While installing via apt-get install, I tried to abort by pressing Ctrl+Z. It terminated successfully ;). But the next time I tried to use apt-get I got some "lock" / "temporarily unavailable" error, and, unfortunately, I deleted the /var/lib/dpkg folder. After that I can't install anything with apt-get; I get the error:

        E: Internal Error, Could not perform immediate configuration (2) on libattr1

    So how can I solve this issue?
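
    Recovery sketches for a deleted /var/lib/dpkg usually start from the nightly backups Debian/Ubuntu keep under /var/backups -- assuming yours still exist, so check first:

        ls /var/backups/dpkg.status*           # is there a recent backup?
        sudo mkdir -p /var/lib/dpkg/updates /var/lib/dpkg/info /var/lib/dpkg/triggers
        sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
        sudo apt-get update
        sudo apt-get check                     # sanity-check the package database

    If no backup survives, reinstalling may honestly be faster.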
