Search Results

Search found 24646 results on 986 pages for 'linux vserver'.

Page 439/986

  • Iptables port mapping from two PCs to one

    - by Anton
    We have 3 PCs; two of them are connected to the internet (each of those has 2 NICs):
    PC1: eth0 - 1.0.0.1 (external IP), eth1 - 172.16.0.1 (internal IP)
    PC2: eth0 - 1.0.0.2 (external IP), eth1 - 172.16.0.2 (internal IP)
    PC3: eth0 - 172.16.0.3 (internal IP)
    Now we want to map port 80 from PC1 and PC2 to PC3. The problem: iptables port forwarding works fine from PC1 or PC2, but only if PC3 has that machine set as its gateway. So the question is: can we have port mapping from both PC1 and PC2 regardless of the gateway settings on PC3? Thank you in advance.
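
    A sketch of one commonly suggested approach (not from the original post): combine the DNAT with an SNAT on the forwarding host, so PC3's replies return through whichever machine forwarded the request, regardless of PC3's default gateway. Addresses follow the layout above.

        # On PC1 (repeat on PC2, substituting its own addresses)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 80 \
                 -j DNAT --to-destination 172.16.0.3:80
        # Rewrite the source so PC3 answers back to PC1, not to its default gateway
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 172.16.0.3 --dport 80 \
                 -j SNAT --to-source 172.16.0.1
        iptables -A FORWARD -p tcp -d 172.16.0.3 --dport 80 -j ACCEPT

    The trade-off is that PC3's web logs will then show PC1/PC2 as the client address rather than the real visitor.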

    Read the article

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved. I have had a look at the dedicated server I want to buy, and for storage it shows: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all
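
    For context (not from the post itself): RAID1 mirrors the two disks, so the operating system sees a single ~1TB device rather than two separate drives to fill up one after the other. A quick way to confirm this on a Fedora box, assuming Linux software RAID (hardware RAID would simply present one disk in the first place):

        cat /proc/mdstat    # lists the md arrays and their member disks (software RAID only)
        lsblk               # shows how block devices, partitions and mount points are laid out
        df -h               # the mirrored pair appears as one filesystem, not two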

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server, as the original fileserver is no longer able to handle the filesystem writes. To make this quick, I made symlinks on the target filesystem pointing to the original filesystem.

        Initially:
        /company/release      (mountpoint of the original filesystem)

        After migration:
        /company/release.old  (points to the original filesystem after automount map update)
        /company/release      (points to the new fileserver/filesystem after automount map update)

    In /company/release there are symlinks like the following:

        /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz
        /company/release/product-1.0        -> /company/release.old/product-1.0  (this is a tree of files)

    Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files, rsync doesn't see any difference, so it doesn't actually copy the files or directories and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?

    Read the article

  • yum security update - message indicating kernel version not up to date

    - by JMC
    Running yum --security check-update returns this message:

        Security: kernel-3.x.x-x.63 is an installed security update
        Security: kernel-3.x.x-x.29 is the currently running version

    I already ran the yum security update on the kernel, but it looks like it didn't change the version running on the system. What needs to be done to make it run the new kernel? Are there any concerns about why it didn't change during the installation process? The yum log just shows "Installed" for the new kernel, with no error messages.
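
    For what it's worth (not from the post): installing a new kernel package only stages it for the next boot; the running kernel cannot be swapped in place, so a reboot is required before the new version shows up. Two commands that make the situation visible:

        uname -r         # the kernel the system is running right now
        rpm -q kernel    # all installed kernel packages; the new one should be in this list
        # After a reboot, uname -r should report the newer version.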

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation.
    Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each using one more shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts, or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using that singleton. Compression only happens in the two cases mentioned above. The user/loader executable can specify only the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so all behave almost the same: each has a class that declares an object of the handler, which gets the instance of the singleton in libE.so and uses it from there. All of these libraries are linked into both executable binaries. If only one of the two executables runs, everything is fine. But if both executables run one after the other, the file of the first started executable gets compressed.
    Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives: glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***. Running the binaries under valgrind shows that the above error is caused by the destructor of the singleton object. Thanks

    Read the article

  • Debian: Unable to mount a second drive as a subdirectory inside of another partition.

    - by jkndrkn
    Hello. I have the following /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system>  <mount point>      <type>        <options>                   <dump>  <pass>
        proc             /proc              proc          defaults                    0       0
        /dev/md1         /                  ext3          defaults,errors=remount-ro  0       1
        /dev/md0         /boot              ext3          defaults                    0       2
        /dev/md5         /home              ext3          defaults                    0       2
        /dev/md3         /opt               ext3          defaults                    0       2
        /dev/md6         /tmp               ext3          defaults                    0       2
        /dev/md2         /usr               ext3          defaults                    0       2
        /dev/md4         /var               ext3          defaults                    0       2
        /dev/md7         none               swap          sw                          0       0
        /dev/sdc         /home/httpd        ext3          defaults                    0       2
        /dev/hda         /media/cdrom0      udf,iso9660   user,noauto                 0       0
        /dev/sdc1        /mnt/usb/backup-1  auto          defaults                    0       0

    I am unable to get /dev/sdc to mount at /home/httpd/ on reboot. The /home/httpd/ directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message: "mount: you must specify the filesystem type". This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
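
    Worth noting (an observation, not part of the original post): the fstab above references both the whole disk /dev/sdc and a partition /dev/sdc1 on it; mount -a processes both lines, and whichever device does not actually carry the ext3 filesystem will produce the "you must specify the filesystem type" error. A quick way to see which device really holds the filesystem:

        blkid /dev/sdc /dev/sdc1   # shows which of the two carries a filesystem signature (TYPE="ext3")
        # Whichever device blkid reports is the one that belongs in the /home/httpd line;
        # the other /dev/sdc* entry would then need to be corrected or removed.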

    Read the article

  • Is a 10/100 ethernet LAN transfer faster than USB 1.0?

    - by dag729
    I have an old laptop (PIII 800MHz, with 256MB RAM) that I wish to use as my home server: it'll have to serve just two people, so I think I'll be more than OK as far as RAM and CPU go. The issue is the data, because the internal hard disk is 12GB, which is... ridiculous! I have more than 60GB of mixed storage and counting (images, videos and music) on an external USB hard disk. I could either put that disk in my desktop PC and serve the big files over ethernet, or leave it in its USB enclosure attached to the laptop. The question is: which of these solutions will be faster? USB 1.0 attached to the server (laptop), or the disk in the desktop serving files to the laptop on demand via 10/100 ethernet?

    Read the article

  • Why does my CentOS logrotate run at random times?

    - by Mike Pennington
    I put a logrotate configuration file in /etc/logrotate.d/ and expected the logs to rotate at a consistent time; however, they do not... log rotation times are seemingly random +/- one hour. Why are the log rotation start times random, and how can I change this? Informational: my logrotate config file looks like this...

        /opt/backups/network/*.conf {
            copytruncate
            rotate 30
            daily
            create 644 root root
            dateext
            maxage 30
            missingok
            notifempty
            compress
            delaycompress
            postrotate
                ## Create symbolic links in daily/
                PATH=`/usr/bin/dirname $1`;
                FILE=`/bin/basename $1`;
                /bin/ln -s $1 $PATH/daily/$FILE
            endscript
        }
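
    A likely explanation (an assumption based on stock CentOS behavior, not taken from the post): logrotate is run from /etc/cron.daily, and on CentOS 6 and later cron.daily is driven by anacron, which adds a random delay to each run. The relevant settings can be inspected, and pinned down, in /etc/anacrontab:

        grep -E 'RANDOM_DELAY|START_HOURS_RANGE' /etc/anacrontab
        # RANDOM_DELAY=45        -> up to 45 minutes of random delay before each job
        # START_HOURS_RANGE=3-22 -> jobs may start anywhere inside this window
        # Setting RANDOM_DELAY=0, or calling logrotate from a fixed crontab entry instead,
        # makes the rotation start at a predictable time.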

    Read the article

  • How do I minimize Evolution to the system tray in Ubuntu?

    - by Jephir
    In Ubuntu some applications can be set to minimize instead of exit on close. For example, Empathy minimizes to the system tray (mail icon) when the close button is pressed in the application window. How do I make Evolution do this as well? Essentially I would like to have Evolution hidden in the system tray instead of having to re-launch it every ten minutes to check for new messages (or leave it open and clutter the taskbar).

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always fetch their packages from online repositories somewhere. I know I could probably download the source and compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as the source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I was working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
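
    Regarding the question inside Plan A (a sketch, not from the post; the flag is actually spelled --download-only, or -d for short):

        # On the internet-connected machine: fetch the package and any missing dependencies
        # into the apt cache without installing them.
        apt-get install --download-only nginx
        # The .deb files land in /var/cache/apt/archives/ and can be copied to the offline server,
        # then installed there with:  dpkg -i *.deb
        # Tools like apt-cacher / apt-cacher-ng automate this by acting as a caching package proxy.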

    Read the article

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection to limit the number of requests sent to the backend HTTP server(s) to a given rate (e.g. 100 requests per second)? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, as one connection can carry many requests. And yes, improving the servers is not an option here. If this is possible, a configuration example to achieve it would be excellent. Thank you in advance.
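
    A minimal sketch of one way to do this (my own assumption, not a quote from the post; it relies on the fe_req_rate fetch and the deny_status keyword, so HAProxy 1.6 or newer is assumed, and the names web, app and the errorfile path are placeholders):

        frontend web
            bind :80
            # fe_req_rate = HTTP requests per second currently hitting this frontend
            acl overloaded fe_req_rate(web) gt 100
            # Excess requests get a canned 503 (the "default response") instead of reaching the backend
            errorfile 503 /etc/haproxy/errors/503-overload.http
            http-request deny deny_status 503 if overloaded
            default_backend app

        backend app
            server app1 127.0.0.1:8080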

    Read the article

  • Hardware needed for receiving and recording videocalls in Asterisk

    - by jneves
    I'm planning an Asterisk configuration that should record videocalls and then feed them to an application. From what I've researched, it seems like app_h324m is the way to go (http://www.voip-info.org/wiki/view/Asterisk+app_h324m+compatibility). But it's not clear to me what the hardware requirements for this are. Can someone enlighten me?

    Read the article

  • puppet onlyif specified nodes

    - by Valintinr
    I'm trying to write a puppet template. I have a puppet master and a few puppet agents, and they all must be treated separately; I think it's good to do this by the node's hostname. But when I tried, I got the error "puppet-agent[169037]: (/Stage[main]//Exec[adduser]) Could not evaluate: Could not find command 'ru1'". See the code below:

        exec { 'adduser':
          command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel',
          path    => [ '/bin', '/usr/bin' ],
          onlyif  => "$hostname == ru1"
        }

    I need this task to apply only to the one node with the hostname ru1. So how can I do this? Thanks.

    Read the article

  • Website & Forum sharing the same login credentials ?

    - by Brian
    I am going to be running a small site (100 hits a week, maybe) and I am looking for a quick and easy way to share login information between the main website, a control panel (webmin, cpanel, or something), and the forum: one login needed to access any of the three. The website won't have much use for the login per se, but it will display "logged in" when you are on the website. Any custom solutions, thoughts, logic, or examples?

    Read the article

  • Two distinct mount points with one device

    - by user1761555
    After being disappointed with Ubuntu's release update feature, I finally decided to have separate mount points for / and /home. Towards this, I reformatted my HDD, giving most of the drive to sda1 (meant to be /home) and allocating about 40GB to rootfs (/). Unfortunately, I would also like to have a /projects which is to be located on sda1. Currently, sda1 is being mounted as: /dev/sda1 on /home type ext4 (rw). I've tried looking online for a solution to this problem; however, I'm not sure what to look for! Is it possible to mount the 'home' directory of sda1 as /home and the 'projects' directory of sda1 as /projects?
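
    One approach that matches this layout is a bind mount. A sketch under the assumption that sda1 is mounted once at a neutral location (here /mnt/data, a placeholder) and contains home/ and projects/ directories:

        # /etc/fstab
        /dev/sda1           /mnt/data   ext4  defaults  0  2
        /mnt/data/home      /home       none  bind      0  0
        /mnt/data/projects  /projects   none  bind      0  0
        # Equivalent one-off command:  mount --bind /mnt/data/projects /projects

    The same trick also works without moving the main mount: leave sda1 on /home and bind /home/projects to /projects.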

    Read the article

  • Snort monitoring of spanning interface

    - by aHunter
    I have configured a Cisco 3500 switch with a SPAN port and have my snort node (Fedora 13) plugged into it. I am running snort as a daemon and have configured a rule to log all TCP traffic, but I am only seeing traffic destined for the snort node itself. I know that the SPAN port is working, so is there a specific option I need to start snort with in order for it to pick up all the traffic? Or is there something I have missed here? Many thanks.
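
    A few checks that are commonly suggested for this situation (assumptions on my part; eth1 stands in for whichever NIC is actually cabled to the SPAN port):

        ip link set eth1 promisc on                 # the monitoring NIC must be in promiscuous mode
        tcpdump -ni eth1                            # confirm the mirrored traffic really reaches that NIC
        snort -D -i eth1 -c /etc/snort/snort.conf   # make sure snort listens on the SPAN NIC, not the management one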

    Read the article

  • mutt isn't sending large messages

    - by Guy
    I'm using mutt in the following way: echo <MESSAGE> | mutt -s <SUBJECT> -- <TO-ADDR> This usually works when I try small messages (ones with ~10 lines in the body), but when I try a very large message (~200 lines), the email just isn't received. Any ideas?

    Read the article

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2.

    W1:

        $ cd ~/test
        $ ls
        sample
        $

    In W2 I run "make" from a parent directory that recreates file test/sample:

        $ make project
        .
        .
        $ cd test
        $ ls
        sample
        $

    Now, returning to W1:

        $ ls
        $ cd ../test
        $ ls
        sample
        $

    In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the 2nd window until I cd ../test back into the directory, whereupon it reappears. I can give more details if required, but just wondering if this is a well-known behavior.

    Read the article

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

        [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
        [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
        [34542.836306] MMC: killing requests for dead queue
        [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
        [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
        [34542.837062] MMC: killing requests for dead queue
        [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
        [34542.837074] FAT: bread failed in fat_clusters_flush
        [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My systems are older, and the SD card is new (16GB Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
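
    As for a definitive test (a suggestion, not from the post): badblocks can scan the whole card; note that the write test destroys everything on it, and /dev/mmcblk0 here assumes the card appears under the same device name as in the dmesg output above:

        badblocks -sv /dev/mmcblk0    # read-only surface scan, non-destructive
        badblocks -wsv /dev/mmcblk0   # full write+verify test - WIPES the card
        # A genuinely bad card reports bad blocks (or drops off the bus mid-test); if both
        # machines and both tests fail the same way, the card itself is the likely culprit.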

    Read the article

  • Is disabling password login for SSH the same as deleting the password for all users?

    - by Arsham Skrenes
    I have a cloud server with only a root user. I SSH to it using RSA keys only. To make it more secure, I wanted to disable the password feature. I know this can be done by editing the /etc/ssh/sshd_config file and changing PermitRootLogin yes to PermitRootLogin without-password. I was wondering if simply deleting the root password via passwd -d root would be the equivalent (assuming I do not create more users, or that new users have their passwords deleted too). Are there any security issues with one approach versus the other?
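
    For reference, the sshd-side settings usually combined to turn off password logins entirely look like this (a sketch of /etc/ssh/sshd_config; prohibit-password is the newer spelling of without-password in recent OpenSSH):

        # /etc/ssh/sshd_config
        PermitRootLogin prohibit-password    # root may log in, but only with keys
        PasswordAuthentication no            # no password logins for any user
        ChallengeResponseAuthentication no   # close the keyboard-interactive path as well
        # Reload sshd afterwards, e.g.:  service sshd reload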

    Read the article

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++, and I am thinking of using ccache to speed up the process. ccache howtos have examples which suggest creating symlinks named gcc, g++ etc. and making sure they appear in PATH before the original gcc binaries, so that ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the app is compiled, but that looks like filesystem abuse to me. Are there better ways to use ccache selectively rather than always? For compilation of a single source file I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
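
    One widely used alternative to the symlink trick (a sketch, assuming the app's build system honors the usual CC/CXX variables):

        # Per-invocation, without touching PATH or creating symlinks:
        make CC="ccache gcc" CXX="ccache g++"

        # For configure-based builds:
        ./configure CC="ccache gcc" CXX="ccache g++" && make

        # For CMake (3.4 or newer):
        cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..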

    Read the article

  • Regex working in RedHat is not giving any result in Ubuntu

    - by Supratik
    My goal is to match specific files from specific subdirectories. I have the following folder structure:

        `-- data
            |-- a
            |-- a.txt
            |-- b
            |-- b.txt
            |-- c
            |-- c.txt
            |-- d
            |-- d.txt
            |-- e
            |-- e.txt
            |-- org-1
            |   |-- a.org
            |   |-- b.org
            |   |-- org.txt
            |   |-- user-0
            |   |   |-- a.txt
            |   |   |-- b.txt

    I am trying to list the files only inside the data directory. I am able to get the correct result using the following command in RHEL:

        find ./testdir/ -iwholename "*/data/[!/].txt"
        a.txt
        b.txt
        c.txt
        d.txt
        e.txt

    If I run the same command in Ubuntu, it does not work. Can anyone please tell me why it is not working in Ubuntu?

    Read the article

  • How can I make zsh completion behave like Bash completion?

    - by Nate
    I switched to zsh, but I dislike the completion. If I have 20 files, each with a shared prefix, on pressing tab, zsh will fully complete the first file, then continue going through the list with each press of tab. If I want one near the end, I would have to press tab many times. In bash, this was simple - press tab and I would get the prefix. If I continued typing (and pressing tab), bash would complete as far as it could be certain of. I find this behavior to be much more intuitive but prefer the other features of zsh to bash. Is there a way to get this style of completion? Google suggested setopt bash_autolist, but this had no effect for me (and no error message was printed upon starting my shell). Thanks.
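
    A combination of options that is often suggested for bash-style completion (my assumption; the option names are from the zsh manual, and the effect would still need testing against your particular setup):

        # In ~/.zshrc
        setopt BASH_AUTO_LIST   # second <Tab> lists the choices, as bash does
        setopt NO_AUTO_MENU     # never start cycling through matches on further <Tab> presses
        # (bash_autolist alone is not enough: AUTO_MENU, which is on by default, takes over
        #  from the third <Tab> onward and resumes the cycling behavior.)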

    Read the article
