Search Results

Search found 24624 results on 985 pages for 'linux rrt'.


  • Hosting websites in our workplace's custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated hosting) and host the websites in our office's datacentre. We're running 7 websites, which currently average about 900 unique hits per day in total. We have 2 servers set aside for this: one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.

    b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go.

    c) Should I be trying to use cloud stacks like OpenStack, to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7, 4 are basic static websites that give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, where end users can view products and order them online.

    I am taking this as a learning experience where I can build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic?). I am a bit lost here and would appreciate all the help I can get. Thanks in advance.
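
    If you go the KVM route for (b), a minimal sketch of carving the hardware into separate DB and frontend guests with virt-install might look like the following. Every name, size and path here is a made-up example, and flag spellings vary between virt-install versions, so treat this as a starting point rather than a recipe:

        # assumes KVM, libvirt and virt-install are already set up on the host
        virt-install --name db01 --ram 2048 \
            --disk path=/var/lib/libvirt/images/db01.img,size=20 \
            --os-variant rhel5 --network bridge=br0 \
            --location http://mirror.centos.org/centos/5/os/i386/

        virt-install --name web01 --ram 1536 \
            --disk path=/var/lib/libvirt/images/web01.img,size=15 \
            --os-variant rhel5 --network bridge=br0 \
            --location http://mirror.centos.org/centos/5/os/i386/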

    Read the article

  • Cannot boot CentOS VM using VirtIO in KVM

    - by Jake
    I converted a qcow2 image to raw and changed the I/O bus to VirtIO for a VM. Now I can't boot that VM. I installed the VirtIO driver with the following command:

        mkinitrd --with virtio_pci --with virtio_blk -f /boot/initrd-$(uname -r).img $(uname -r)

    These are the related kernel modules:

        virtio_balloon 11329 0
        virtio_blk     11593 3
        virtio_pci     11845 0
        virtio_ring     8513 1 virtio_pci
        virtio          9541 3 virtio_balloon,virtio_blk,virtio_pci

    and this is what happens during boot-up. I also changed /boot/grub/device.map from "(hd0) /dev/sda" to "(hd0) /dev/vda", but the problem still exists. Any ideas how to fix this? This is my default boot entry:

        title CentOS (2.6.18-308.13.1.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-308.13.1.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-308.13.1.el5.img
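
    One quick check, sketched with the kernel version from the grub entry above: on CentOS 5 the initrd is a gzipped cpio archive, so you can list its contents and confirm the virtio drivers really made it in:

        # list the initrd contents and look for the virtio drivers
        zcat /boot/initrd-2.6.18-308.13.1.el5.img | cpio -t | grep virtio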

    Read the article

  • md5sum repeatedly gives different checksum for same file on same machine

    - by Joel
    I have a very small and quite old hard disk drive, about 32G. Onto this disk I have copied a largish tar file, about 5G. When I run md5sum to generate a checksum of this file, I repeatedly get different results (on the same machine, for the same file). This obviously should not happen. If I repeat the experiment with a much smaller file, the checksum is the same each time, as expected. I can only assume that because the large file spans most of the disk, and it is an old drive, I am experiencing a lot of read errors, and the drive needs replacing? Could there be any other good reason for this? Is there something I can do to fix the problem other than buying a new disk?

    Update: sha1sum also produces inconsistent results.
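
    A quick way to separate a failing drive from anything else is sketched below; the device name /dev/sda and the tar file path are placeholders, so adjust them to your system:

        # run the checksum a few times and compare the results
        for i in 1 2 3; do md5sum /path/to/large.tar; done

        # read errors on the disk normally leave traces in the kernel log
        dmesg | grep -iE 'error|ata|i/o'

        # ask the drive for its SMART health data (needs smartmontools)
        smartctl -a /dev/sda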

    Read the article

  • unable to run OpenOffice from text mode

    - by dilip
    I have installed OpenOffice on RedHat 5 in text mode. When I tried to start OpenOffice using the command:

        sudo /usr/lib/openoffice/program/soffice "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" -nologo -headless -nofirststartwizard

    it showed an error saying:

        javaldx: Could not find a Java Runtime Environment!

    So I installed a JRE on my system, and then I did not get any error, but OpenOffice still does not start. I also checked the process list for OpenOffice, but I haven't seen any process related to it. Can anyone help me fix this issue?
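
    A minimal way to see whether the headless process actually comes up, assuming the command above is used unchanged: start it in the background, then look for the process and for a listener on the configured port:

        /usr/lib/openoffice/program/soffice \
            "-accept=socket,host=localhost,port=8100;urp;StarOffice.ServiceManager" \
            -nologo -headless -nofirststartwizard &

        ps aux | grep -i [s]office      # did it survive startup?
        netstat -nlt | grep 8100        # is anything listening?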

    Read the article

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen.

    For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true.

    Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display and simply discard the extra pixels? Relevant screenshots (windowed, maximized, and full-screen, respectively): you can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

    Read the article

  • How to upgrade XBMC Live from 9.04.1 to 9.11?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive; every time, it fails at the Install System step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what is the next set of commands to run?
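
    One hedged approach, assuming XBMC Live 9.04.x is built on a stock Ubuntu base: from the Ctrl+Shift+F2 console, retarget apt at the newer release and dist-upgrade. The release codenames below are assumptions, so check /etc/apt/sources.list for the real ones before editing:

        # back up, then retarget the release (jaunty -> karmic is an assumption)
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i 's/jaunty/karmic/g' /etc/apt/sources.list

        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo reboot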

    Read the article

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2; I have two guest machines. One is an SME server; I had several crashes during a backup session with rsync to another PC, while normal activities run regularly. The other guest has been up without problems for 25 days. I found in the log a lot of rows like these:

        Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
        Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
        Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)

    When the virtual machine crashes, the log reports the following (I paste here only some parts, to limit the length of the message):

        Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
        Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
        Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
        Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
        Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
        Dec 27 01:48:05.762: Worker#2| SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.779: Worker#2| Msg_Post: Error
        Dec 27 01:48:05.780: Worker#2| http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
        Dec 27 01:48:05.780: Worker#2| Unexpected signal: 6.

    I have no idea how to solve the problem with this installation. I am thinking of downgrading the host to a version more compatible with VMware Server 2; I have read a lot of posts about installation difficulties, and I think the compilation problems during install could be related to my problem. Excuse me if the post isn't very clear; it's my first post here. Any help or suggestions will be appreciated. Thanks

    Read the article

  • Why is the /home folder missing from my backup archive created by tar?

    - by Konstantin Pereyaslov
    So I'm doing a full backup of my VPS using the following command (as root, of course):

        tar czvf 20120604.tar.gz /

    Everything seems to be fine; all files seem to appear in the list. The size of the archive is 6 GB, and the gunzipped version is 11 GB, which should include /home, because I have 11 GB of data on the VPS in total. But when I actually try to unpack the archive, or open it using mc or WinRAR, there's no /home folder. And WinRAR says:

        20120604.tar.gz - TAR+GZIP archive, unpacked size 894 841 346 bytes

    It can't be WinRAR's bug, because when I type tar xzvf 20120604.tar.gz, the /home folder isn't unpacked either. Why is the /home folder missing from my archive? And what can I do to include it? tar --version outputs the following:

        tar (GNU tar) 1.15.1
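
    One way to narrow this down, sketched with the filenames from the question: list the archive members directly with tar and check whether any home/ entries were written at all (tar strips the leading /). If they are listed but don't extract, the problem is on the unpacking side; if they were never written, it's on the archiving side:

        tar tzf 20120604.tar.gz | grep '^home/' | head

        # rough sanity check on the member count
        tar tzf 20120604.tar.gz | wc -l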

    Read the article

  • Recursively move files in sub-dirs to new sub-dirs of same name

    - by Gabriel
    I have a batch of files all ending with the same string, i.e. *_ext.dat, located in several sub-dirs along with several other files, in a given main dir. This is the structure:

        /main_dir/subdir1/file11_ext.dat
        /main_dir/subdir1/file12_ext.dat
        /main_dir/subdir1/file13_ext.dat
        /main_dir/subdir1/file14_other.dat
        /main_dir/subdir1/file15_other.dat
        /main_dir/subdir2/file21_ext.dat
        /main_dir/subdir2/file22_ext.dat
        /main_dir/subdir2/file23_ext.dat
        /main_dir/subdir2/file24_other.dat
        /main_dir/subdir2/file25_other.dat
        /main_dir/subdir3/file31_ext.dat
        /main_dir/subdir3/file32_ext.dat
        /main_dir/subdir3/file33_ext.dat
        /main_dir/subdir3/file34_other.dat
        /main_dir/subdir3/file35_other.dat

    I need to recursively move only the files ending in *_ext.dat into a new main dir, new_dir, respecting the sub-dir structure, so the files end up in an equivalent dir structure like this:

        /new_dir/subdir1/file11_ext.dat
        /new_dir/subdir1/file12_ext.dat
        /new_dir/subdir1/file13_ext.dat
        /new_dir/subdir2/file21_ext.dat
        /new_dir/subdir2/file22_ext.dat
        /new_dir/subdir2/file23_ext.dat
        /new_dir/subdir3/file31_ext.dat
        /new_dir/subdir3/file32_ext.dat
        /new_dir/subdir3/file33_ext.dat

    Because of this, the command should also create those sub-dirs with their corresponding names. I know that with a line like this one:

        find . -name "*_ext.dat" -print0 | xargs -0 rm -rf

    I can delete all those files, but I don't know how to modify it to do what I need (or if it is even possible).
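
    One hedged way to do this in a single pass is rsync's filter rules, which recreate only the sub-dirs that end up containing matches and can remove the sources as they go (a sketch; test without --remove-source-files first):

        # copy only *_ext.dat, recreating the matching sub-dirs under /new_dir
        rsync -avm --include='*/' --include='*_ext.dat' --exclude='*' \
            --remove-source-files /main_dir/ /new_dir/

    An alternative that builds on the find pipeline from the question uses cpio in pass-through mode to preserve the directory structure:

        cd /main_dir
        find . -name '*_ext.dat' -print0 | cpio -0pdm /new_dir
        find . -name '*_ext.dat' -print0 | xargs -0 rm -f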

    Read the article

  • Millions of files in PHP's tmp folder - how to delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; commands, but that didn't work; ls 'sess_0*' | xargs rm didn't either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem Size Used Avail Use% Mounted on
        /dev/md0   457G 126G 308G  29%  /
        tmpfs      1.8G    0 1.8G   0%  /lib/init/rw
        udev        10M 664K 9.4M   7%  /dev
        tmpfs      1.8G    0 1.8G   0%  /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, SuEXEC and FastCGI.
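
    At this scale, anything that sorts the directory or forks once per file (ls, shell globs, find -exec rm {} \;) struggles; the "cannot fork" error above shows find's exec route running out of memory. A lighter-weight sketch, assuming GNU findutils:

        # -delete unlinks in place: no per-file fork, no argument list to overflow
        find /var/www/clients/client1/web1/tmp -type f -name 'sess_*' -delete

        # on an older find without -delete, batch the removals through xargs
        find /var/www/clients/client1/web1/tmp -type f -name 'sess_*' -print0 \
            | xargs -0 rm -f

    As a side note, "No space left on device" on a disk that df shows as 29% used usually means exhausted inodes or, as the syslog warning here says, a full ext3 directory index; df -i shows the inode side.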

    Read the article

  • How can I install Sun Java on Fedora 8

    - by Tushar Ahirrao
    Hi, I want to install Java on my Fedora 8 server. I got to step 9 as described in http://fedorasolved.org/browser-solutions/java-i386, but at step 10, when I enter the command:

        ln -s /opt/jre1.6.0_18/plugin/i386/ns7/libjavaplugin_oji.so /usr/lib/mozilla/plugins/libjavaplugin_oji.so

    it gives the following error:

        ln: creating symbolic link `/usr/lib/mozilla/plugins/libjavaplugin_oji.so': No such file or directory

    Can you help me, please?
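
    That ln error usually means the target directory itself doesn't exist yet. A hedged sketch, with the paths taken from the question: create the plugins directory first, then retry the symlink:

        sudo mkdir -p /usr/lib/mozilla/plugins
        sudo ln -s /opt/jre1.6.0_18/plugin/i386/ns7/libjavaplugin_oji.so \
            /usr/lib/mozilla/plugins/libjavaplugin_oji.so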

    Read the article

  • Some HTTPS connections via NAT fail, but work on firewall itself.

    - by hnxn
    Hi, I am having trouble establishing some HTTPS connections from internal machines, even though these same connections work if initiated on the firewall itself. The firewall machine is running Ubuntu 10.04.1 and shorewall 4.4.6. The internet connection is Bell PPPoE DSL (in Canada). I have tried various MTU settings; it doesn't seem to make any difference. Other protocols (HTTP, FTP, etc.) generally work. The problem seems to be limited to certain sites; this one never works from an internal machine, but always works from the firewall itself.

    From an internal machine:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:51:31-- https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        ^C

    From the firewall:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:58:28-- https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 840 [image/gif]
        Saving to: `corp_logo.gif'
        2011-01-13 20:58:28 (149 MB/s) - `corp_logo.gif' saved [840/840]

    This URL always works from both an internal machine and the firewall: https://encrypted.google.com/images/logos/ssl_logo_lg.gif

    Any troubleshooting tips would be greatly appreciated!
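
    A classic cause of "connects, then hangs" behind PPPoE is a path-MTU blackhole that only bites certain sites; HTTPS is hit first because the TLS handshake sends full-size packets immediately, while the firewall itself is unaffected because locally generated traffic uses the PPPoE interface MTU directly. A hedged mitigation sketch (shorewall can also do this via CLAMPMSS=Yes in shorewall.conf):

        # clamp the TCP MSS of forwarded connections to the path MTU
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu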

    Read the article

  • bash script with permanent ssh connection

    - by samuelf
    Hi, I use a bash script which runs:

        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected]

    However, when I run the bash script, it waits. I see the connection coming up, but the script doesn't exit; it's like it's waiting for the SSH process to finish, because when I manually kill it, the bash script finishes as well. Any ideas how to resolve this?

    UPDATE: I have cronned this script, and the cron process is the one that becomes a zombie; the actual script runs just fine, sorry about that. With ps -auxf I get:

        root 597  0.0 0.7 2372  912 ? Ss Jul12 0:00 cron
        root 2595 0.0 0.8 2552 1064 ? S  02:09 0:00 \_ CRON
        1001 2597 0.0 0.0    0    0 ? Zs 02:09 0:00     \_ [sh] <defunct>
        1001 2603 0.0 0.0    0    0 ? Z  02:09 0:00         \_ [cron] <defunct>

    and when I kill the ssh, the defuncts disappear. Why would they become defunct?
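
    A hedged explanation, sketched below: ssh -f forks into the background but keeps the stdout/stderr it inherited from cron open, so cron's pipe never sees EOF and the finished shell lingers as <defunct> until ssh exits. Fully detaching the descriptors in the cron job usually clears this (user@host stands in for the redacted remote address above):

        # in the cron script: detach ssh from cron's pipes entirely
        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 user@host \
            < /dev/null > /dev/null 2>&1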

    Read the article

  • CentOS 5.5 [Read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server running CentOS 5.5 (hosted by a Japanese company called Sakura). Since yesterday, a connection through ssh couldn't be established, so I contacted the support center, who told me to restart the VS from the control panel. After restarting, I got the messages below:

        Connected to domain wwwxxxxxx.sakura.ne.jp
        Escape character is ^]
        [ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        @@cat: /proc/self/attr/current: Invalid argument
        Welcome to CentOS
        Starting udev: @[ OK ]
        Setting hostname localhost.localdomain: [ OK ]
        Setting up Logical Volume Management: No volume groups found [ OK ]
        Checking filesystems
        Checking all file systems.
        [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3
        / contains a file system with errors, check forced.
        /: Inodes that were part of a corrupted orphan linked list found.
        /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
        [FAILED]
        *** An error occurred during the file system check.
        *** Dropping you to a shell; the system will reboot
        *** when you leave the shell.
        *** Warning -- SELinux is active
        *** Disabling security enforcement for system recovery.
        *** Run 'setenforce 1' to reenable.
        /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system
        Give root password for maintenance
        (or type Control-D to continue):
        bash: cannot set terminal process group (-1): Inappropriate ioctl for device
        bash: no job control in this shell
        bash: cannot create temp file for here-document: Read-only file system
        (the line above repeats several times)
        (Repair filesystem) 1 # setenforce 1
        setenforce: SELinux is disabled
        (Repair filesystem) 2 # echo 1
        (Repair filesystem) 4 # /etc/init.d/sshd status
        openssh-daemon is stopped
        (Repair filesystem) 5 # /etc/init.d/sshd start
        Starting sshd: NET: Registered protocol family 10
        lo: Disabled Privacy Extensions
        touch: cannot touch `/var/lock/subsys/sshd': Read-only file system
        (Repair filesystem) 6 # sudo /etc/init.d/sshd start
        sudo: sorry, you must have a tty to run sudo
        (Repair filesystem) 7 #

    I have 4 sites in production and I need to get the server back up quickly (SSH + HTTPD, ...). Thank you for your time.
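
    The console output above already points at the standard recovery, sketched here with the device name taken from the messages: give the root password at the maintenance prompt, run fsck manually against the root filesystem, and reboot. Answering yes to all repairs can discard data in the damaged inodes, so this is the documented procedure rather than a guarantee:

        # at the 'Give root password for maintenance' prompt:
        fsck.ext4 -y /dev/vda3
        reboot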

    Read the article

  • Certain users cannot get to my server

    - by Zeno
    I am finding more and more users who report that they cannot reach my server (website or services). A tracert from such a user looks like this:

        Tracing route to domain.com [*.*.*.255]
        over a maximum of 30 hops:
        1 * * * Request timed out.

    The server is up and functional, and everyone else reports it is fine, but there are various users who cannot get to it. I have no firewall or anything that would block anyone. Yes, the last octet of the server IP is 255. Could this be causing it? http://www.dslreports.com/forum/r18539206-Last-octet-255-bug-on-Windows Or would a certain ISP be denying traffic to my server? Or something at their router level?

    Read the article

  • Change the foreground color of statusbars in irssi?

    - by Marlun
    Hello, I've changed my terminal colors, and now irssi's statusbars have white text on a light blue background. I would like to change the foreground color of the statusbars to black, but I can't figure out how to do it. I don't want to download a whole theme; I only want to change this one color. Any ideas? Thanks in advance! -Martin
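
    A hedged sketch of the theme-file route, which avoids downloading anything new: irssi takes the statusbar colors from abstracts in the active theme, so copying the stock default.theme into ~/.irssi and editing its sb_background abstract should be enough. The path and abstract name below are from a typical irssi install and may differ on your system:

        cp /usr/share/irssi/themes/default.theme ~/.irssi/
        # in ~/.irssi/default.theme, find the statusbar abstract, e.g.:
        #     sb_background = "%4%w";
        # %4 is the blue background, %w the white foreground; change the
        # foreground code (e.g. %k for black), then inside irssi run:
        #     /set theme default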

    Read the article

  • Grepping grep output fails

    - by viraptor
    I'm trying to grep the output of ngrep. Unfortunately, when I add another grep to the pipeline, I get no output at all. It can be some other command too; cat / grep / tee -- everything breaks the chain. Example:

        # this works:
        $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180'
        --
        U +1.469535 xxx:5060 -> xxx:5060
        SIP/2.0 180 Ringing.
        --
        U +0.001384 xxx:5060 -> xxx:2048
        SIP/2.0 180 Ringing.

        # but these don't:
        $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | egrep '^U'
        $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | cat
        $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | tee test

    If I use cat somefile instead of ngrep at the start, everything works as expected. Any ideas what could go wrong here?
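
    A hedged explanation: when grep writes to a pipe instead of a terminal, it switches to block buffering, so the slow trickle from ngrep sits in a buffer that may never fill, and the downstream command prints nothing. GNU grep has a flag for exactly this case:

        ngrep -l -q -T -Wbyline -d any udp and port 5060 \
            | egrep --line-buffered -B1 '^SIP/2.0 180' \
            | egrep '^U'

        # more generally, stdbuf from GNU coreutils can force line buffering
        # on most commands in the middle of a pipeline:
        ngrep ... | stdbuf -oL egrep -B1 '^SIP/2.0 180' | tee test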

    Read the article

  • FreeNX Server w/ nxagent 3.5 not able to create shadow sessions

    - by Jenna Whitehouse
    I am running a FreeNX server on Ubuntu 11.10 and am unable to do session shadowing. I get the authorization prompt, but the shadow client crashes afterwards. The NX server log in the user's .nx directory is as follows:

        Error: Aborting session with 'Server is already active for display 3000
        If this server is no longer running, remove /tmp/.X3000-lock and start again'.
        Session: Aborting session at 'Mon Oct 1 14:26:44 2012'.
        Session: Session aborted at 'Mon Oct 1 14:26:44 2012'.

    This then deletes the lock file, which is the lock file for the initial Unix session, and crashes out. Everything works for a normal session, and shadowing works up to the authorization prompt. I am using this software:

        Ubuntu 11.10
        freenx-server 0.7.3.zgit.120322.977c28d-0~ppa11
        nx-common 0.7.3.zgit.120322.977c28d-0~ppa11
        nxagent 1:3.5.0-1-2-0ubuntu1ppa8
        nxlibs 1:3.5.0-1-2-0ubuntu1ppa8

    Any help is appreciated, thanks!

    Read the article

  • Set up routing and iptables for a new VPN connection to redirect only ports 80 and 443

    - by Steve
    I have a new VPN connection (using OpenVPN) to let me route around some ISP restrictions. While it is working fine, it is taking all the traffic over the VPN. This causes me issues for downloading (my internet connection is a lot faster than the VPN allows) and for remote access: I run an ssh server, and have a daemon running that allows me to schedule downloads via my phone.

    I have my existing ethernet connection on eth0 and the new VPN connection on tun0. I believe I need to set up the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it! My current routing table is:

        Kernel IP routing table
        Destination  Gateway      Genmask          Flags Metric Ref Use Iface MSS Window irtt
        0.0.0.0      10.51.0.169  0.0.0.0          UG    0      0   0   tun0  0   0      0
        10.51.0.1    10.51.0.169  255.255.255.255  UGH   0      0   0   tun0  0   0      0
        10.51.0.169  0.0.0.0      255.255.255.255  UH    0      0   0   tun0  0   0      0
        85.25.147.49 192.168.0.1  255.255.255.255  UGH   0      0   0   eth0  0   0      0
        169.254.0.0  0.0.0.0      255.255.0.0      U     1000   0   0   eth0  0   0      0
        192.168.0.0  0.0.0.0      255.255.255.0    U     1      0   0   eth0  0   0      0

    After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything with destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and sorting the wood from the trees is proving difficult. Any help would be much appreciated.

    UPDATE: So far, from various sources, I've cobbled together the following:

        #!/bin/sh
        DEV1=eth0
        IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.`
        GW1=192.168.0.1
        TABLE1=internet
        TABLE2=vpn
        DEV2=tun0
        IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'`

        ip route flush table $TABLE1
        ip route flush table $TABLE2

        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table $TABLE1 $ROUTE
            ip route add table $TABLE2 $ROUTE
        done

        ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1
        ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2
        ip route add table $TABLE1 default via $GW1
        ip route add table $TABLE2 default via $GW2

        echo "1" > /proc/sys/net/ipv4/ip_forward
        echo "1" > /proc/sys/net/ipv4/ip_dynaddr

        ip rule add from $IP1 lookup $TABLE1
        ip rule add from $IP2 lookup $TABLE2
        ip rule add fwmark 1 lookup $TABLE1
        ip rule add fwmark 2 lookup $TABLE2

        iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1
        iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2
        iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1
        iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1
        iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2
        iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2

        route del default
        route add default gw 192.168.0.1 eth0

    Now this seems to be working. Except it isn't! Connections to the blocked websites are going through, and connections not on ports 80 and 443 are using the non-VPN connection. However, port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas?

    For reference, I now have 3 routing tables: main, internet, and vpn. The listing of them is as follows.

    Main:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Internet:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
        192.168.0.1 dev eth0 scope link src 192.168.0.73

    VPN:

        default via 10.38.0.205 dev tun0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Read the article

  • Strategy to allow emergency access to colocation crew

    - by itsadok
    I'm setting up a server at a new colocation center half way around the world. They installed the OS for me and sent me the root password, so there's obviously a great amount of trust in them. However, I'm pretty sure I don't want them to have my root password on a regular basis, and anyway, I intend to only allow key-based login. In some cases, though, it might be useful to let their technical support log in through a physical terminal, for example if I somehow mess up the firewall settings. Should I even bother worrying about that? Should I set up a sudoer account with a one-time password that will change if I ever use it? Is there a common strategy for handling something like this?

    Read the article

  • Setting up dante socks server

    - by skerit
    I want to tunnel all my internet traffic through my VPS, so I'm trying to install a proxy server. However, I can't seem to browse the internet through Dante; I get the ERR_EMPTY_RESPONSE error. This is my config:

        logoutput: stderr /home/user/dantelog
        internal: eth1 port=1080
        external: eth1
        method: username pam
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody

        client pass {
            from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }

    Do I really have to run 2 proxy servers, one for HTTP and one for SOCKS? Or is there something else I can do?
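
    One thing that stands out in the config above: Dante needs both a client pass rule (who may connect to the proxy port) and a socks pass rule (what connected clients may then do). With no socks rules at all, every request is denied after the handshake, which a browser can surface as an empty response. A hedged sketch of the missing block:

        # allow authenticated clients to actually use the proxy
        socks pass {
            from: 10.0.0.0/8 to: 0.0.0.0/0
            command: bind connect udpassociate
            log: error
        }

    As for running two servers: Dante speaks SOCKS only, so a single instance is enough as long as the browser is configured to use it as a SOCKS proxy rather than an HTTP proxy.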

    Read the article

  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website where I want to add some restricted access to a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder to root:root, since I wanted to jail the user in there, and everything else by the way. Also, in the magento folder, I chowned everything to sio2104:magento so they can access what they want. Finally, I added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
            ChrootDirectory /usr/share/nginx/www/magento
            ForceCommand internal-sftp
            AllowTCPForwarding no
            X11Forwarding no
            PasswordAuthentication yes
            #UsePAM yes

    And the result is... well, I can enter my login and password, and it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp with my personal user, everything works fine.
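
    A hedged pointer: OpenSSH refuses to chroot into a directory unless that directory and every path component above it are owned by root and writable only by root, and a failure there surfaces to the client as exactly this kind of broken pipe. Separately, magento being mode 700 locks the jailed user out even when the chroot succeeds. A sketch using the paths from the question (the "files" subdirectory is an example name):

        # chroot target: root-owned, 755, not group/other-writable
        chown root:root /usr/share/nginx/www/magento
        chmod 755 /usr/share/nginx/www/magento

        # every parent must pass the same ownership/permission test
        ls -ld /usr /usr/share /usr/share/nginx /usr/share/nginx/www

        # writable content lives in a user-owned subdirectory inside the jail
        mkdir /usr/share/nginx/www/magento/files
        chown sio2104:magento /usr/share/nginx/www/magento/files

        # sshd logs the precise refusal reason; watch it while connecting
        tail -f /var/log/auth.log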

    Read the article

  • Tiling window manager where dual heads share common workspace

    - by mikero
    I want to use a tiling window manager with my dual monitor setup, but almost all wms seem to treat each monitor as an independent workspace. This means that I can change the workspace of monitor 0 without affecting the workspace of monitor 1. This is not what I want -- I want a workspace to span both monitors, where each monitor is essentially a separate column for tiling (my monitors are oriented vertically, so they are well-suited as tiling columns). When I switch workspaces, say with Mod-[0-9], I want both monitors to change contents. So far the only wm I have found to support this is wmii, but I'd love to try some other options. Have I missed this capability from other tiling wms?

    Read the article

  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin". It's basically the primary account that is used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. So, I can do this the easy way and just chown their domains folder to the ville user and voilà, they have permission to do whatever they need, and so forth. However, let's say I also want to give the admin user access to the files (modifying and all). How can I put both users into the same group and give them both permission? I've tried doing:

        sudo usermod -a -G admin ville

    to add ville into the admin group, but ville still cannot edit files owned by admin. Permissions for the primary directory of the ville user are read/write for both owner and group, and the current ownership of the files is admin:admin, but ville still can't write into the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
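
    A hedged checklist for this setup, with an example path standing in for the real domains folder: group membership is only picked up at login, group write has to exist on the files themselves, and the setgid bit keeps newly created files in the shared group:

        # ville must log out and back in (or run newgrp admin) after usermod
        id ville                      # 'admin' should appear in the group list

        # give the group write access on the shared tree (path is an example)
        sudo chgrp -R admin /var/www/shared-site
        sudo chmod -R g+w /var/www/shared-site

        # setgid on directories so new files inherit the admin group
        sudo find /var/www/shared-site -type d -exec chmod g+s {} \;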

    Read the article
