Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets: a virtual private server (Debian Lenny) with root access for myself, running SSH, apache2, and mysql; some unused disk space; and some friends in need of hosting. The problem: I would now like to host one or several domains per friend; my friends should have full access to their domains, including running PHP scripts, for example; my friends should not be able to poke around in other directories; and the security of my server should not be compromised by faulty PHP scripts. To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario? Partial solution: I already came up with the following plan: add chrooted SSH users for my friends, and add Apache vhosts per user (pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.). But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
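    A minimal sketch of one way to approximate that confinement on the PHP side, assuming mod_php and purely hypothetical per-user paths: restrict each vhost with PHP's open_basedir (stronger isolation would need suexec/FastCGI or a chrooted php-fpm pool per user).

        # Hypothetical per-user vhost, e.g. /etc/apache2/sites-available/alice
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com
            # Keep Alice's scripts inside her own tree plus a private tmp dir
            php_admin_value open_basedir "/home/alice/example.com:/home/alice/tmp"
            php_admin_value upload_tmp_dir "/home/alice/tmp"
        </VirtualHost>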

    Read the article

  • Partitioning of Ubuntu server which will use OpenVZ and encrypted partitions (unlocked through SSH l

    - by DeletedAccount
    Hi, I'm about to install a server. Some context: my HDD is 1 TB, I have 2 GB of RAM, and the OS will be Ubuntu Server Lucid Lynx (AMD64). I will use OpenVZ and have most functionality separated into containers. To support disk quotas I need to use ext3 (not ext4) for the container partition. Each time I reboot the server I want to be forced to log in through SSH and mount the encrypted partitions by typing my password (if someone steals the server, no critical data should be available). I want to have as much as possible encrypted, yet I want to be able to log in through SSH, as I don't have a monitor or keyboard at the server. I am not sure how big I need my partitions to be. Being able to resize them later would be nice; I guess that implies using LVM? But the manual partition mount using SSH is also very important (in fact it's more important, if I have to pick one). How do you recommend that I partition the HDD? If I have daemons which need the encrypted partitions, will they fail, and can I just restart them after mounting the needed partitions?
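    A minimal sketch of the unlock-over-SSH step, assuming LUKS (cryptsetup) on top of LVM and hypothetical volume, mount point, and service names; the encrypted volumes are kept out of the automatic boot (no /etc/crypttab entry, noauto in /etc/fstab) and opened by hand after logging in.

        # After SSH-ing in following a reboot (names below are placeholders)
        sudo cryptsetup luksOpen /dev/vg0/containers containers_crypt   # prompts for the passphrase
        sudo mount /dev/mapper/containers_crypt /var/lib/vz             # ext3 container area
        sudo service some-daemon restart   # restart whichever daemons needed the volume and failed at boot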

    Read the article

  • Connection closed by remote host followed by Connection refused

    - by Khosrow
    All of a sudden my SSH connection to the server has broken. Here is what happened: $ ssh -vvv -p <PORT> -l <USER> <HOST> OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /home/khosrow/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to <HOST> [<IP>] port <PORT>. debug1: Connection established. debug1: identity file /home/khosrow/.ssh/identity type -1 debug1: identity file /home/khosrow/.ssh/id_rsa type -1 debug1: identity file /home/khosrow/.ssh/id_dsa type -1 ssh_exchange_identification: Connection closed by remote host I recently updated the box with yum update, and sshd got updated as well. I honestly don't know whether this caused any damage, but the update did report that /etc/ssh/sshd_config was saved as /etc/ssh/sshd_config.rpmnew, which is quite normal. I've seen similar posts while googling, but almost all of them suggest that I should check /etc/hosts.allow and /etc/hosts.deny, which in my case I can't: I cannot connect to the box to see what's going on there. I rebooted the box through the web interface of the server provider, and it got even worse. I'm now getting this: $ ssh -vvv -p <PORT> -l <USER> <HOST> OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /home/khosrow/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to <HOST> [<IP>] <PORT>. debug1: connect to address <IP> port <PORT>: Connection refused ssh: connect to host <HOST> port <PORT>: Connection refused with both <CUSTOM_PORT> and the default port 22. I would really appreciate it if anyone could help me with this.
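    If the provider offers a rescue mode or an out-of-band console, a sketch of the first things worth checking from there (assuming a RHEL/CentOS-style box, since the update came through yum):

        # From a rescue/OOB console, not over SSH:
        diff /etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew   # what the packaged config would change
        sshd -t                                                  # syntax-check the active sshd_config
        grep -i ssh /etc/hosts.allow /etc/hosts.deny             # TCP wrappers rules
        service sshd status && service sshd restart              # is it running / will it start?
        iptables -L -n | grep <PORT>                             # is the custom port still open?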

    Read the article

  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection, and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops and sits at --.-K/s while the process remains alive. I thought about just sending some KILL signals to the process, but as I read in the wget manual's section on signals, that doesn't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back, I want to see it downloading, not waiting at --.-K/s.
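    A sketch using wget's own retry knobs, which make a stalled transfer time out, reconnect, and resume where it stopped (the URL is a placeholder):

        # -c resumes a partial file; --tries=0 retries forever; --read-timeout aborts a
        # stalled connection so the retry logic kicks in instead of hanging at --.-K/s
        wget -c --tries=0 --retry-connrefused --read-timeout=30 --waitretry=10 http://example.com/big-file.iso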

    Read the article

  • Cross-platform centralized desktop password manager

    - by Dave
    I have been using KeePass as a desktop password manager on Windows for many years. Love it! However, I now need to work on different platforms much of my day (Windows 7, Windows XP, Mac OS X, Ubuntu, and openSUSE). I'm looking for a password manager I can share across all these platforms. My ideal solution would: Run natively (not in a virtual machine) on all platforms. Store the "official" copy of the password data on a local network so I can get to it from any and all machines. It is OK if it locks (or becomes read-only) when one client is accessing it. Keep a local cached copy (read-only is fine) so I can still get to my passwords when disconnected from the network. Does any such beast exist?

    Read the article

  • How to restrict all services to single domain in Ubuntu?

    - by harold
    Someone has pointed an unknown domain at my server's IP address, most likely via an A record. I would like to reject access to ALL services (httpd, ssh, mail, etc.) through that domain and only allow requests made via my own domain. I want any connection made through that domain to be completely rejected by my server. I can disallow access over HTTP by changing my web server settings, but I want to do this for every single type of connection. How can I do this?
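    One caveat worth noting: of the services listed, only HTTP(S) ever learns which domain name the client used (via the Host header); SSH, mail, and the rest see only the destination IP, which is the same either way, so there is nothing for a firewall to filter on. For the web side, a sketch of a catch-all default vhost (Apache 2.2-style syntax) that refuses every hostname not explicitly configured:

        # Must be the first vhost defined so it becomes the default for unknown Host: values
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>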

    Read the article

  • Keeping a folder of files in sync across 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them, which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system? This is for web servers, so we don't want a slow / IO-hungry process. 3 servers... user content could be added to one and needs to be moved to the other two.
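    On the delete problem specifically: rsync can propagate deletions with --delete, but that is only safe when changes originate on a single source machine; since content here can be added on any of the three, a true two-way synchronizer such as unison, run pairwise, is the more usual fit. A sketch, with hostnames and paths as placeholders:

        # rsync, single-master style: pushes new files AND deletions out to the other two
        rsync -az --delete /srv/content/ web2:/srv/content/
        rsync -az --delete /srv/content/ web3:/srv/content/

        # unison, two-way: reconciles adds, changes, and deletes between pairs of hosts
        unison /srv/content ssh://web2//srv/content -batch
        unison /srv/content ssh://web3//srv/content -batch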

    Read the article

  • Why does the /proc filesystem show this information?

    - by liutaihua
    Running lsof | grep deleted finds processes holding open file descriptors that the system reports as deleted: mingetty 2031 root txt REG 8,2 15256 49021039 /sbin/mingetty (deleted) Looking at the /proc filesystem with ls -l /proc/[pid] shows lrwxrwxrwx 1 root root 0 Sep 17 16:12 exe -> /sbin/mingetty (deleted) but the executable (/sbin/mingetty) is actually present, as normal, at the /sbin/mingetty path. Some sockets are in a similar situation: ls -l /proc/[pid]/fd shows 82 -> socket:[23716953] but running netstat -ae | grep [socket id] can find it. Why does the OS display this information?
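    What a listing like this usually means (an assumption, but it is the common case after a package update) is that the process still holds an inode whose directory entry was replaced; the /sbin/mingetty on disk now is a different inode from the one the running mingetty was started from. A small sketch for checking that:

        lsof +L1                       # open files whose link count is 0, i.e. deleted but still held open
        ls -li /sbin/mingetty          # inode number of the file currently on disk
        sudo stat -L /proc/2031/exe    # inode of the (deleted) image the process is actually running
        # socket:[23716953] works the same way: the number is the socket's inode,
        # which can be looked up directly
        ss -a -e | grep 23716953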

    Read the article

  • How can I use '{}' to redirect the output of a command run through find's -exec option?

    - by pkaeding
    I am trying to automate an svnadmin dump command for a backup script, and I want to do something like this: find /var/svn/* \( ! -name dir -prune \) -type d -exec svnadmin dump {} > {}.svn \; This seems to work, in that it looks through each svn repository in /var/svn and runs svnadmin dump on it. However, the second {} in the exec command doesn't get substituted with the name of the directory being processed. It basically just results in a single file named {}.svn. I suspect that this is because the shell interprets > as ending the find command, and it redirects stdout from that command to the file named {}.svn. Any ideas?
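    That reading is right: the redirection is handled by the outer shell before find ever runs, so only one literal {}.svn gets opened. A common fix (a sketch, keeping the original find expression) is to let a small sh -c perform the redirection, so the substituted path is what the inner shell sees:

        find /var/svn/* \( ! -name dir -prune \) -type d \
            -exec sh -c 'svnadmin dump "$1" > "$1.svn"' _ {} \;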

    Read the article

  • Securing communication between two Linux servers on a local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the external world, but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server presenting the DMZ is compromised, the attacker on the DMZ server could reach every port over the cross-connect cable). Any conventional wisdom on how to make the communication secure between two servers for ports only these servers need access to?
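    A middle ground between the two options is to keep iptables enabled but scope the rules to the cross-connect interface and the peer's address, so the service ports are reachable only over that cable and only from that one host. A sketch with hypothetical interface, address, and port values:

        # On the database server: eth1 is the cross-connect NIC, 10.0.0.1 is the DMZ host
        iptables -A INPUT -i eth1 -s 10.0.0.1 -p tcp --dport 5432 -j ACCEPT   # only the peer, only the DB port
        iptables -A INPUT -i eth1 -j DROP                                     # everything else on that NIC is dropped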

    Read the article

  • Getting Thunderbird to rescan an IMAP folder

    - by asdmin
    Since I started using an external program (imapfilter) that modifies my IMAP folders, Thunderbird keeps losing track of new messages. Messages are moved into sub-folders upon arrival, and Thunderbird can't keep track of them, so I have no clue which folders to check for new messages; newly created folders (even if I subscribe to them after creation) do not show up until I restart the mail client. Is there any extension or setting for Thunderbird which I could use to trigger a re-scan of my folder tree? Please don't waste time on advice like restarting Thunderbird (it takes a great amount of time), "use Evolution (or any other mail client)", internal mail filters (they are not sophisticated enough), or procmail/fetchmail (I'm keeping the remote IMAP server for good). Addendum: folders can even be created in the background without Thunderbird knowing they have been created.

    Read the article

  • Rescue data from damaged hard disk

    - by Lexsys
    Hello. I have a 500 GB hard drive with one NTFS partition on it. I can mount it with Ubuntu and view the contents, but when I try to copy something, I get an I/O error. OK, I tried to make an image of it with dd: I/O error as soon as it starts. I have installed ddrescue, but its manual page says not to use it with drives failing on I/O. Can I still recover some data from this drive, and if so, how?
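    For what it's worth, GNU ddrescue (the gddrescue package on Ubuntu; easily confused with the older dd_rescue tool) is normally used precisely on drives that return I/O errors. A sketch, assuming a second disk with enough free space and placeholder paths:

        # Grab everything readable first, then go back for the bad areas; the logfile
        # lets the run be interrupted and resumed at any point.
        sudo ddrescue -n /dev/sdb /mnt/backup/disk.img /mnt/backup/disk.log     # first pass, skip bad areas
        sudo ddrescue -d -r3 /dev/sdb /mnt/backup/disk.img /mnt/backup/disk.log # retry bad sectors, direct I/O
        # Then work on the image, never the failing disk (the NTFS partition inside the
        # whole-disk image needs an offset=... mount option or a tool like kpartx).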

    Read the article

  • Two distinct mount points with one device

    - by user1761555
    After being disappointed with Ubuntu's release-upgrade feature, I finally decided to have separate mount points for / and /home. Towards this, I reformatted my HDD, giving most of the drive to sda1 (meant to be /home) and allocating about 40 GB to the root filesystem (/). Unfortunately, I would also like to have a /projects, which is to be located on sda1. Currently, sda1 is being mounted as /dev/sda1 on /home type ext4 (rw). I've tried looking online for a solution to this problem; however, I'm not sure what to look for! Is it possible to mount the 'home' directory of sda1 as /home and the 'projects' directory of sda1 as /projects?
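    Yes; the usual tool for this is a bind mount, which exposes a directory of an already-mounted filesystem at a second path. A sketch, assuming the data lives in a directory such as /home/projects on sda1:

        sudo mkdir -p /projects
        sudo mount --bind /home/projects /projects

        # To make it permanent, add an /etc/fstab line like:
        #   /home/projects  /projects  none  bind  0  0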

    Read the article

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2. W1: $ cd ~/test $ ls sample $ In W2 I run "make" from a parent directory that recreates file test/sample: $ make project . . $ cd test $ ls sample $ Now, returning to W1: $ ls $ cd ../test $ ls sample $ In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the 2nd window until I cd ../test back into the directory whereupon it reappears. I can give more details if required, but just wondering if this is a well-known behavior.

    Read the article

  • Raid1+0: create stripe over two /dev/mdx on partition or not?

    - by Chris
    Given that I hadn't found a way to define how a RAID10 is created with mdadm, I went with the RAID1+0 solution (see: How to display/define Mirror/Stripping pairs with mdadm): mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1 mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1 mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1 My question is about the stripe. For the mirrors I create a primary partition over the full HDD and set the partition type to FD. So, should I do the same for the stripe? Create a partition on /dev/md0 and /dev/md1 (primary over the full 'HDD', partition type set correctly) and then build the stripe on the partitions? Is there a correct way here, or are there advantages/disadvantages to either solution? Thank you
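    For reference, mdadm can also build this as a single native RAID10 array, which sidesteps the layering question; and when striping over /dev/md0 and /dev/md1, partitioning the md devices first is not strictly required, since md members can be used whole. A sketch of the native variant (device names mirror the ones above):

        # Native mdadm RAID10 over the four partitions; the near-2 layout is the classic 1+0 arrangement
        mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
              /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1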

    Read the article

  • Eject LiveCD + Reboot

    - by JPerkSter
    We use LiveCDs a lot in my line of work, whether it be fscking file systems, recovering data for a customer who rm'd his server, etc. I'm looking for a quick way to eject the CD-ROM and reboot the server. Does anyone have any one-liners to do this, or any other suggestions? Using 'eject' doesn't work most of the time, from what I've tested / used. We're using RHEL / CentOS on most of our servers, if that helps :D

    Read the article

  • Tail and wildcard characters

    - by Mitch
    I want to get the last 10 lines of multiple files. I know they all end with "-access_log". So I tried: tail -10 *-access_log But this gives me an error, whereas: tail -10 file-* gives me the output I'd expect. I would think this probably has more to do with bash than with tail. However, commands like: cat *-access_log work fine. Any suggestions?
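    One thing worth ruling out first: -10 is the old, obsolescent way of passing the line count, and its handling can vary between tail versions and when several file operands are given; the unambiguous spelling uses -n. A sketch:

        # Last 10 lines of every matching log; tail prints a ==> filename <== header before each file
        tail -n 10 *-access_log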

    Read the article

  • Ubuntu reset network configuration

    - by user1103294
    When I boot up my Ubuntu server, it can no longer connect to my wireless network. It says "waiting for network configuration" for 60 seconds, boots up, but there is no wireless. I suspect it's because of the following. I used to connect to a wireless network named 2WIRE555 with password 123abc. But then I upgraded my connection, and my new wireless network was named 2WIRE444 with password 111111. Being lazy, I simply renamed 2WIRE555 to 2WIRE444 in my configuration and changed the password accordingly. I was hoping this would work, but ever since then my network configuration has been messed up. So, back to the issue: how do I reset the network configuration on my Ubuntu 11.10 server?
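    On an 11.10 server (no NetworkManager by default), the wireless settings normally live in /etc/network/interfaces, so "resetting" largely means rewriting that stanza and bringing the interface back up; the 60-second "waiting for network configuration" delay is typically an interface marked auto in that file failing to come up. A sketch, with the interface name and credentials as placeholders:

        # /etc/network/interfaces (sketch; requires the wpasupplicant package)
        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid 2WIRE444
            wpa-psk  111111

    Then sudo ifdown wlan0; sudo ifup wlan0 (or a reboot) applies it.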

    Read the article

  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended characters such as åäö by default using the terminus font; in Gentoo, however, I can't get it to work so far. Nothing happens when I hit those keys, as in this thread: Missing glyphs in Terminus font, how to setup a fallback font? But in this case I know terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see, and choose, which of the many different terminus font files is being used. I set the font the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
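    Since the font is requested through Xft, fontconfig decides which terminus file actually gets used; a quick way to see the resolved file and its glyph coverage, and to list every installed variant (a sketch, assuming fontconfig is in play):

        fc-match -v terminus | grep -E 'file|charset'   # the exact font file Xft resolves, and its coverage
        fc-list | grep -i terminus                      # every terminus variant installed on the system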

    Read the article

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have an image in my head where a link is too slow to allow real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <- master setup where, when I write a file to Server A, the metadata transfers to Server B immediately and the file itself transfers when the link is idle, or immediately if Server B's client tries to read the file before Server A has sent it. It seems that there are many filesystems which perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.

    Read the article

  • Apache on Ubuntu very slow on initial calls, very fast afterwards

    - by papakost
    I own an Ubuntu 10 VPS Server with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 sec, while the subsequent hits from the same client take 0-1 sec. I suppose it doesn't have to do with Magento caching, because this happens also when the first call is on a very light page and the next calls are on heavy ones. Does anyone have an idea on what is going wrong here?

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I'm noticing a drop in performance after subsequent load tests. Although our cpu and ram numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, the performance gets back to about 1,000 ms, but if you apply load every 3 minutes or so, it starts to degrade to a point where it takes 12,000 ms. None of the application servers are showing lingering apache processes and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work and doing the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always gets their packages from online repositories somewhere. I know I could probably download the source and then compile them myself but that's going to be too much of a hassle for me. I'm thinking about two solutions: Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough though I haven't tried it yet. I'm not sure if this is possible but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies will be downloaded into my VM and my server can use the VM as the source repo for apt-get? Plan B: Run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should just make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup but subsequent updates will take too long (cloning the VM image). If I were working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated. Update: From Michael Hampton's answer, I found a possible solution which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
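    On Plan A: apt can do the download-only part directly (the flag is --download-only, or -d, rather than --downloadonly), while apt-proxy / apt-cacher / apt-cacher-ng automate the same idea as a local repository cache. A manual sketch, with host and path names as placeholders:

        # On the internet-connected VM: fetch nginx plus any dependencies the VM is missing, without installing
        sudo apt-get install --download-only nginx
        ls /var/cache/apt/archives/*.deb              # the fetched packages land here

        # Copy the .debs to the offline server and install them there
        scp /var/cache/apt/archives/*.deb devserver:/tmp/debs/
        ssh devserver 'sudo dpkg -i /tmp/debs/*.deb'

    One caveat: --download-only only fetches dependencies the VM itself lacks, which may not match what the offline server needs.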

    Read the article

  • Ubuntu wired network (ethernet) does not work

    - by user36777
    It was working just fine until the other day, when I yanked the cable out. The wireless works just fine on the same router. If I log in to a Windows 7 instance on this dual-boot laptop, the ethernet works just fine, so it's not a hardware, cable, or router issue. The card even gets an IP, but I can't connect to the internet. Here are the details from route, iptables, ifconfig, ping, etc. Any ideas? I have been struggling with this for days; no one seems to have an answer. http://pastie.org/954816
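    Given that the card gets an address but nothing routes, a quick sketch of the usual checks (the default gateway and DNS are the common casualties; the router address below is a placeholder):

        ip route                   # is there a "default via ..." entry for eth0?
        ping -c3 192.168.1.254     # can the router itself be reached?
        ping -c3 8.8.8.8           # raw IP connectivity past the router
        cat /etc/resolv.conf       # if the IP ping works but names don't, it's DNS
        sudo dhclient -v eth0      # re-run DHCP to repopulate the gateway and DNS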

    Read the article
