Search Results

Search found 25088 results on 1004 pages for 'dsl linux'.

Page 411/1004

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my Apache server from the IP 188.241.114.22, which eventually causes Apache to hang. After I restart the service everything works fine. I tried adding a DROP rule in iptables, but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this:

        iptables -A INPUT -s 188.241.114.22 -j DROP

    Right after that I save it using:

        service iptables save

    Here is the output of iptables -L -v:

        Chain INPUT (policy ACCEPT 120K packets, 16M bytes)
         pkts bytes target prot opt in  out source                               destination
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any c-98-210-5-174.hsd1.ca.comcast.net   anywhere
            0     0 DROP   all  --  any any c-98-201-5-174.hsd1.tx.comcast.net   anywhere
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any www.dabacus2.com                     anywhere
            0     0 DROP   all  --  any any 116.255.163.100                      anywhere
            0     0 DROP   all  --  any any 94.23.119.11                         anywhere
            0     0 DROP   all  --  any any 164.bajanet.mx                       anywhere
            0     0 DROP   all  --  any any 173-203-71-136.static.cloud-ips.com  anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any 74.122.177.12                        anywhere
            0     0 DROP   all  --  any any 58.83.227.150                        anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in  out source                               destination

        Chain OUTPUT (policy ACCEPT 186K packets, 224M bytes)
         pkts bytes target prot opt in  out source                               destination
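
    The listing above does not show the 188.241.114.22 rule at all, which suggests the rule is being lost (for example, another firewall script reloading /etc/sysconfig/iptables) rather than ignored; and even when present, -A appends it after any rule that accepts ESTABLISHED traffic, so existing connections are never re-checked. A hedged sketch of re-adding it at the top and clearing the existing connection state:

        # insert at position 1 so it is evaluated before any ACCEPT rules
        iptables -I INPUT 1 -s 188.241.114.22 -j DROP
        # drop the already-established conntrack entries for that source (needs conntrack-tools)
        conntrack -D -s 188.241.114.22
        # persist on CentOS
        service iptables save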

    Read the article

  • SQUID Transparent SSL proxy (no intercept)

    - by user974896
    I know how to have Squid work as a transparent proxy: you put it into transparent mode, then use your router or iptables to forward port 80 to the Squid port. I would like to do the same for SSL. Every guide I see mentions setting up keys on the Squid server. I do not want Squid to actually decrypt the SSL traffic and then establish a connection with the server; rather, I would like Squid to simply forward the SSL traffic as is. The only thing I would like to do is check the SSL request for any offending IPs and drop the packets if the destination is one of them.
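
    One way to do this without decrypting (an assumption: Squid 3.5 or newer, built with SSL-Bump support) is peek-and-splice, which reads only the TLS ClientHello before passing the connection through untouched. A rough squid.conf sketch; the blocked-IP file name is hypothetical:

        https_port 3129 intercept ssl-bump cert=/etc/squid/dummy.pem   # a cert is still required to open the port
        acl blocked_dst dst "/etc/squid/blocked_ips.txt"               # hypothetical list of offending IPs
        acl step1 at_step SslBump1
        ssl_bump peek step1            # inspect SNI/destination, no decryption
        ssl_bump terminate blocked_dst
        ssl_bump splice all            # forward everything else as-is
        # port 443 still has to be redirected to 3129 with iptables, as for port 80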

    Read the article

  • Recovering from bad ownership

    - by Christian Sciberras
    I was going to change the ownership of a directory to apache:apache, but I ended up running:

        chown -R apache:apache /

    Bad! Very bad! I knew what was going on when it started saying:

        chown: changing ownership of `/proc/2694/fd/48': Permission denied

    That's when I stopped everything (Ctrl+C). The current system is a server running VirtualBox running CentOS 5; this problem happened inside the VM. Currently everything seems to be working, but I have not restarted the system yet, and to be honest I'm afraid that something will break if I do. I do not know in what order chown walked the filesystem. Should I be concerned and assume something will break after a reboot? Is there a way to recover from this problem without having to rely on backups? I do have a daily one, but I thought there may be a simpler way out.
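
    On an RPM-based system such as CentOS, ownership of package-managed files can be restored from the package database; a hedged sketch (it will not cover files outside any package, e.g. /home or hand-made configs):

        for pkg in $(rpm -qa); do rpm --setugids "$pkg"; done
        # optionally also reset permissions:
        for pkg in $(rpm -qa); do rpm --setperms "$pkg"; done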

    Read the article

  • Advertise a subnet route with radvd

    - by Thomas Berger
    We have set up a small IPv6 testing network. The setup looks like this:

        ::/0
        +----------+
        | Firewall |   Router to the public net
        +----------+
             |  2001:...::/106
             |      +----------+
             +------| SIT GW   |   SIT tunnel gateway for some test users
             |      +----------+
             |
        +----------+
        | Test Sys |   Test system
        +----------+

    The idea is to advertise the default route from the firewall and the route for the SIT subnets from the SIT gateway. The configurations for radvd are:

        # Firewall
        interface eth0 {
            AdvSendAdvert on;
            route ::/0 {
            };
        };

        # SIT Gateway
        interface eth0 {
            AdvSendAdvert on;
            route 2001:...::/106 {
            };
        };

    We have captured the advertisement packets with tcpdump and they look good: we see a default route from the firewall and the subnet route from the SIT gateway. But if we look on the test system there are two default routes, one over each gateway, and no subnet route. The routing does not work, of course. Here are the routes we get:

        2001:.....::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        default via fe80::baac:6fff:fe8e:XXXX dev eth0  proto kernel  metric 1024  expires 0sec mtu 1500 advmss 1440 hoplimit 64
        default via fe80::e415:aeff:fe12:XXXX dev eth0  proto kernel  metric 1024  expires 0sec mtu 1500 advmss 1440 hoplimit 64

    Any ideas?
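
    A frequent reason the Route Information Option is ignored (an assumption, not verified against this setup) is that a Linux client only honours RFC 4191 route options when the kernel has CONFIG_IPV6_ROUTE_INFO and the matching sysctl allows the advertised prefix length; a quick check on the test system:

        sysctl net.ipv6.conf.eth0.accept_ra_rt_info_max_plen      # 0 means all route options are dropped
        sysctl -w net.ipv6.conf.eth0.accept_ra_rt_info_max_plen=128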

    Read the article

  • Creating rescue / install USB flash disk for CentOS

    - by wwwpanda
    With the CentOS installation CDs you can install the OS, as well as boot into "rescue" mode so that you can do a chroot mount on the system partition for problem solving, even if the system is installed on hardware RAID drives. How can we create a similar thing on a USB flash drive? I tried to do it with UNetbootin, but when booting from the USB the CentOS setup eventually still requires the presence of the CDs. Ultimately, I want to use this USB flash drive for remote disaster recovery through, say, the HP iLO remote console / Dell iDRAC, etc.
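
    One sketch, assuming a hybrid ISO image (CentOS 6.5 and later ISOs boot directly from USB): write the installer ISO to the stick with dd, and its boot menu then offers the same "Rescue installed system" entry without asking for a CD. The ISO and device names below are illustrative.

        dd if=CentOS-6.5-x86_64-bin-DVD1.iso of=/dev/sdX bs=4M conv=fsync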

    Read the article

  • vim: remove previous code indentation and convert to another

    - by ramgorur
    I have a C project with multiple files (more than 100). The code is written in Whitesmiths style, but I want to change it to K&R style indentation. Is it possible to do this with vim in an automated way? For example, I have an Emacs Lisp script to achieve this:

        (progn
          (find-file "{}")
          (mark-whole-buffer)
          (setq indent-tabs-mode nil)
          (untabify (point-min) (point-max))
          (indent-region (point-min) (point-max) nil)
          (save-buffer))

    I was wondering if there is a similar trick that could be done with vim.
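
    A rough vim batch-mode equivalent (an assumption: vim's built-in cindent defaults are close enough to K&R for this code base; a dedicated formatter such as indent -kr is usually more reliable for a true style conversion):

        for f in $(find . -name '*.c' -o -name '*.h'); do
            vim -es -u NONE "$f" \
                -c 'set cindent shiftwidth=4 expandtab' \
                -c 'normal! gg=G' \
                -c 'retab | wq'
        done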

    Read the article

  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf:

        ifconfig_rl0="DHCP"
        ifconfig_xl="inet 192.168.1.11 netmask 255.255.255."
        defaultrouter="192.168.1.1"

    I added the following lines:

        firewall_enable="YES"
        firewall_type="SIMPLE"
        firewall_logging="YES"
        sshd_enable="YES"
        apache_enable="YES"
        mysql_enable="YES"

    My httpd.conf includes:

        NameVirtualHost 192.168.1.11
        <VirtualHost 192.168.1.11>
        ...
        </VirtualHost>

    Now Apache and the SSH server are down. I changed rc.conf back to the last working configuration and still have no SSH or Apache.

        apachectl start
        # --> /usr/local/sbin/apachectl start: httpd could not be started
        apachectl status
        # --> Looking up localhost
        #     Making http connection to localhost
        #     Alert!: Unable to connect to remote host.
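
    A first thing to check (an assumption: the newly enabled ipfw "SIMPLE" ruleset is blocking traffic, and something else keeps httpd from starting) is to stop the firewall temporarily and read Apache's own logs:

        /etc/rc.d/ipfw stop           # or: service ipfw onestop on newer FreeBSD
        apachectl configtest          # confirm httpd.conf (e.g. the new VirtualHost block) parses
        tail /var/log/httpd-error.log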

    Read the article

  • How to set up virtual users in vsftpd?

    - by ares94
    I've read this tutorial: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ My configuration is as follows:

        --- vsftpd.conf ---
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        virtual_use_local_privs=YES
        write_enable=YES
        connect_from_port_20=YES
        pam_service_name=vsftpd
        guest_enable=YES
        user_sub_token=$USER
        local_root=/var/www/sites/$USER
        chroot_local_user=YES
        hide_ids=YES

        --- /etc/pam.d/vsftpd ---
        auth    required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so

    I created the file /etc/vsftpd/passwd and added users with htpasswd. I tried to log in but it didn't work:

        ftp 127.0.0.1
        Connected to 127.0.0.1 (127.0.0.1).
        220 vsFTPd 2.3.5+ (ext.1) ready...
        Name (127.0.0.1:root): user1
        331 Please specify the password.
        Password:
        530 Permission denied.
        Login failed.

    Everything seems fine except the permission denied thing. How can I fix this?
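
    Two common causes, offered as assumptions rather than a diagnosis: pam_pwdfile needs crypt/MD5-style hashes in /etc/vsftpd/passwd, and guest_enable maps every virtual user onto one real system account that must exist and own the local_root directories. A sketch (the guest account name is hypothetical):

        htpasswd -c -d /etc/vsftpd/passwd user1                         # -d forces crypt() hashes that pam_pwdfile accepts
        useradd -d /var/www/sites -s /usr/sbin/nologin vsftpd_guest
        # and in vsftpd.conf:
        # guest_username=vsftpd_guest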

    Read the article

  • where are deleted files kept?

    - by ant2009
    Hello. Ubuntu 9.10. I recently deleted some files. I would like to know: are the files kept in a directory, like the Windows Recycle Bin? Where are these files? Many thanks for any suggestions.
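
    Assuming the files were deleted through the GNOME file manager (not with rm), they follow the freedesktop.org trash specification and sit in the user's home directory:

        ls ~/.local/share/Trash/files/
        # files removed with rm are not kept anywhere and need dedicated recovery tools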

    Read the article

  • SSH rsa key works with external IP not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have two servers behind a load balancer. Each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B, using the external IP of server B, without being prompted for a password. If I try to do the same but use the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
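
    A quick way to see what differs between the two paths (an assumption: the same sshd answers on both addresses) is a verbose client run plus the server's auth log; the internal address shown is hypothetical:

        ssh -v -i ~/.ssh/id_rsa user@10.x.x.x 2>&1 | grep -iE 'offer|denied|identity'
        # on server B while connecting:
        tail -f /var/log/secure       # /var/log/auth.log on Debian/Ubuntu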

    Read the article

  • amplified reflected attack on dns

    - by Mike Janson
    The term is new to me, so I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know if your servers can be used as a victim? This is a configuration issue, right? My named.conf file:

        include "/etc/rndc.key";
        controls {
            inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
        };
        options {
            /* make named use port 53 for the source of all queries, to allow
             * firewalls to block all ports except 53:
             */
            // query-source port 53;
            /* We no longer enable this by default as the dns posion exploit has forced
               many providers to open up their firewalls a bit */
            // Put files that named is allowed to write in the data/ directory:
            directory "/var/named"; // the default
            pid-file "/var/run/named/named.pid";
            dump-file "data/cache_dump.db";
            statistics-file "data/named_stats.txt";
            /* memstatistics-file "data/named_mem_stats.txt"; */
            allow-transfer {"none";};
        };
        logging {
            /* If you want to enable debugging, eg. using the 'rndc trace' command,
             * named will try to write the 'named.run' file in the $directory (/var/named").
             * By default, SELinux policy does not allow named to modify the /var/named" directory,
             * so put the default debug log file in data/ :
             */
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        view "localhost_resolver" {
            /* This view sets up named to be a localhost resolver ( caching only nameserver ).
             * If all you want is a caching-only nameserver, then you need only define this view:
             */
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            /* these are zones that contain definitions for all the localhost
             * names and addresses, as recommended in RFC1912 - these names should
             * ONLY be served to localhost clients:
             */
            include "/var/named/named.rfc1912.zones";
        };
        view "internal" {
            /* This view will contain zones you want to serve only to "internal" clients
               that connect via your directly attached LAN interfaces - "localnets" .
             */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.
            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above :
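
    The usual hardening, sketched under the assumption that this server only needs to recurse for its own networks: restrict recursion in the global options block and, on BIND builds with response rate limiting (9.9.4+), cap how many identical answers a spoofed source can trigger.

        allow-recursion { localhost; localnets; };
        rate-limit { responses-per-second 10; };
        # from an outside host, check whether you still answer recursive queries at all:
        # dig @<server-ip> example.com +recurse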

    Read the article

  • Ubuntu server loses network connection after ADSL2+ modem reset

    - by squashbuff
    I am using an Ubuntu 10.04 server (running on a Lenovo ThinkPad notebook) as my webserver. It is performing well in terms of handling the traffic etc. However, my internet connection is ADSL2+ (using a Thomson TG782T modem-router), and if the modem is reset, the server loses its network connection. The NetworkManager icon shows a red exclamation mark indicating that it has no connection. But as soon as I click on it and tell it to connect to eth0, the connection is back. It must be something that NetworkManager is failing to do, and because of this the reliability of my webserver is suffering. Any advice on how this can be fixed?
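
    One commonly suggested workaround (an assumption: a headless webserver does not really need NetworkManager for this interface) is to hand eth0 to ifupdown so it is brought back automatically, e.g. in /etc/network/interfaces:

        auto eth0
        iface eth0 inet dhcp

    followed by restarting networking and NetworkManager so the latter stops managing eth0.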

    Read the article

  • How can I upgrade Ubuntu from 9.10 to 10.04 on a netbook with a 4GB root partition?

    - by Blorgbeard
    I have an Asus EeePC 901, running Ubuntu 9.10. I'd like to upgrade it to 10.04. I don't want to reinstall, since I have a bunch of scripts and programs all set up. However, when I attempt to upgrade using sudo apt-get dist-upgrade, I get an error asking me to free up another ~600MB on /. My / is mounted on sda0, which is a 4GB SSD. I do not have 600MB worth of deletable stuff on /. I've emptied my trash, and done apt-get autoremove and apt-get clean. I do have plenty of space in /home, mounted on sda1 (a 16GB SSD). Is there some way I can tell apt-get to use a different download/temp directory?
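
    apt can be pointed at a cache directory on the larger /home partition for the download step; a sketch (the path is arbitrary):

        sudo mkdir -p /home/apt-archives/partial
        sudo apt-get -o Dir::Cache::archives=/home/apt-archives dist-upgrade
        # or bind-mount over the usual location:
        # sudo mount --bind /home/apt-archives /var/cache/apt/archives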

    Read the article

  • Groups and Symlinks, is this safe?

    - by sjohns
    Hi, I'm trying to serve similar content over two websites, but don't want to have two copies of each file, especially when they are growing. The basics: I'm running CentOS with cPanel. Is it safe to do the following? I have the folder downloads1 in /home/user1/www/downloads1/ and I have user2. Can I make a group:

        groupadd sharedfiles

    add both users to the group:

        useradd -g sharedfiles user1
        useradd -g sharedfiles user2

    then:

        chown -R -v user1:sharedfiles downloads1/

    For user2 I want to have /home/user2/www/downloads1, but I want it to be a symlink like:

        ln "downloads1" "/home/user1/www/downloads1/"
        lrwxrwxrwx 1 user2 sharedfiles 11 May  9 14:20 downloads1 -> /home/user1/www/downloads1/

    Is this a safe practice? Or is there a better way to do this if I want them both to be able to share the files for distribution over Apache? Are there any drawbacks to this? Thanks in advance for any light shed on this. I'm not 100% sure whether this should have gone here or on Server Fault.
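
    A sketch of the usual pattern, using the names from the question (note that usermod -a -G adds a supplementary group without replacing the users' primary groups, and Apache must be allowed to follow the link):

        groupadd sharedfiles
        usermod -a -G sharedfiles user1
        usermod -a -G sharedfiles user2
        chown -R user1:sharedfiles /home/user1/www/downloads1
        chmod -R g+rX /home/user1/www/downloads1
        ln -s /home/user1/www/downloads1 /home/user2/www/downloads1
        # the vhost needs "Options FollowSymLinks" (or SymLinksIfOwnerMatch) for the link to be served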

    Read the article

  • Trying to change an Ubuntu user's password, authentication token manipulation error

    - by beagleguy
    I'm trying to create a local user on a new Ubuntu box. I'm unable to change the password, and I keep getting the error below. The user gets added to the shadow file, but I can't get it to set a password. How can this be fixed?

        admin@theserver:~$ sudo useradd jamz
        [sudo] password for admin:
        admin@theserver:~$ sudo passwd jamz
        passwd: Authentication token manipulation error
        passwd: password unchanged
        admin@theserver:~$
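
    Some usual suspects for this exact message, listed as things to check rather than a diagnosis:

        mount | grep ' / '               # a root filesystem remounted read-only produces this error
        ls -l /etc/passwd /etc/shadow    # shadow is normally root:shadow mode 640
        sudo pwck && sudo grpck          # report corrupt passwd/shadow entries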

    Read the article

  • Why isn't this smbmount attempt working?

    - by Max Williams
    I can successfully access one of our local Samba shares, which is on a Windows PC (called marina), as follows:

        $ sudo /usr/bin/smbclient \\\\marina\\resource_library <my password>
        Domain=[MARINA] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        smb: \>

    So, that works. I'm now trying to mount the above location (the resource_library folder on marina) to /mnt/resource_library (as a read-only folder), but it keeps failing. I've tried a few variations of specifying the location:

        $ sudo smbmount \\\\marina\\resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and

        $ sudo smbmount //marina/resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and both of the above with MARINA instead of marina. It's bound to be some dumb mistake I'm making, can anyone see it? Cheers, Max
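
    The likely difference (an assumption) is that smbclient can resolve "marina" via NetBIOS broadcast, while smbmount/mount.cifs only uses normal hostname resolution. Mounting by IP, or making the name resolvable, usually fixes it; the address below is hypothetical:

        sudo mount -t cifs //192.168.1.50/resource_library /mnt/resource_library -o username=max,ro
        # or add "wins" to the hosts: line in /etc/nsswitch.conf (needs winbind installed),
        # or add a static entry to /etc/hosts:   192.168.1.50   marina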

    Read the article

  • Entire filesystem restore from rdiff-backup snapshot

    - by atmosx
    I'm trying to make a complete system restore from an rdiff-backup. The command line for the backup was:

        rdiff-backup --exclude-special-files --exclude /tmp --exclude /mnt --exclude /proc --exclude /sys / /mnt/backup/ebox/

    I created a new partition, mounted it at /mnt/gentoo and did:

        rdiff-backup -r /mnt/vol2 /mnt/gentoo

    However, when I try to chroot into this system (following Gentoo's manual, which means mounting /dev/ and /proc) I get the following error:

        chroot: failed to run command `/bin/bash': No such file or directory

    All this takes place on a Parallels (virtual machine) Debian installation. Any ideas on how to proceed in order to fully restore the system? Best regards.

    PS: /mnt/gentoo/bin/bash works fine if I execute it. All files and permissions are in place and rdiff-backup seems to work just fine. However, the system can neither boot (it exits with a kernel panic - cannot find init) nor be chrooted.
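
    "No such file or directory" for a binary that is demonstrably there usually means its ELF interpreter is missing inside the chroot; note also that --exclude-special-files skips symbolic links (so e.g. a /lib64 -> /lib link would not have been backed up). A couple of checks, as a sketch:

        readelf -l /mnt/gentoo/bin/bash | grep interpreter    # which loader bash expects
        ls -l /mnt/gentoo/lib64 /mnt/gentoo/lib/ld-*          # is that loader (and any lib64 symlink) present?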

    Read the article

  • How can I make the NetworkManager work?

    - by Yang Jy
    I am running RHEL 6 on my laptop, and lately I've been trying various things with network configuration from the command line. Last night I removed NetworkManager with "yum remove NetworkManager" so that I could have more control over the network from the command line. But the result is that I didn't manage to configure the wireless connection through wpa_supplicant, and I need a wireless connection during my travel to another place. So I need the wireless function back as soon as possible. I typed "yum install NetworkManager" and some version was installed, but I don't get an icon on the taskbar and, of course, the network doesn't work. The package I previously removed (about 24 MB) was much larger than the one I just installed (about 2 MB), so I think some dependencies must be missing. How can I install all these dependencies? Please help!
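
    The panel icon comes from a separate applet package, and yum keeps a record of what the original remove transaction dragged out with it; a sketch (assuming a GNOME desktop):

        yum install NetworkManager NetworkManager-gnome
        yum history list NetworkManager      # find the id of the "erase" transaction
        yum history undo <id>                # reinstall everything that transaction removed
        service NetworkManager start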

    Read the article

  • NTP configuration not recognized?

    - by Eugene S
    I'm trying to configure NTP on my machine, but it seems that the parameters I set are not being read by the system. Below is my /etc/ntp.conf file. (I applied the most basic configuration to eliminate other issues.)

        server 10.45.68.47
        server 127.0.0.1

    After I set the above configuration, I restart the ntpd process:

        service ntpd restart

    And then I get the following output:

        Shutting down ntpd:                      [  OK  ]
        ntpd: Synchronizing with time server:    [FAILED]
        Starting ntpd:                           [  OK  ]

    Moreover, I can see the following in /var/log/messages:

        Apr  2 10:54:07 hsystem1a ntpd[21067]: ntpd exiting on signal 15
        Apr  2 10:54:07 hsystem1a ntpdate[21537]: can't find host ntpServer1
        Apr  2 10:54:07 hsystem1a ntpdate[21537]: can't find host ntpServer2
        Apr  2 10:54:07 hsystem1a ntpdate[21537]: no servers can be used, exiting

    So it seems that ntpServer1 and ntpServer2 are being read from somewhere other than the IPs I configured in /etc/ntp.conf. NOTE: I have done an init 6 on the machine just in case. Thanks!
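
    On RHEL/CentOS the "Synchronizing with time server" step is ntpdate, which reads /etc/ntp/step-tickers (and /etc/ntp/ntpservers on older releases) rather than /etc/ntp.conf, so ntpServer1/ntpServer2 most likely live there; a sketch:

        grep -r ntpServer /etc/ntp/step-tickers /etc/ntp/ntpservers /etc/sysconfig/ntpd* 2>/dev/null
        echo "10.45.68.47" > /etc/ntp/step-tickers
        service ntpd restart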

    Read the article

  • Hung Java JVM failing to respond to kill -3

    - by Hans
    I have a Java VM that is hanging "randomly". I quote the randomly bit because there is obviously a reason for the VM hanging, but the hang does not occur periodically. We have the same software running in different customer environments, and in those environments the JVM is not hanging. While attempting to troubleshoot the hang, the process is still present but shows zero CPU utilization. I then attempt to execute kill -3, and the kill command hangs. No JVM thread dump is produced. I have spent time instrumenting the code to periodically log the thread stack traces, hoping to catch the JVM in a state that would indicate where the issue lies, but so far this attempt has not borne much fruit. Unfortunately I have not been able to reproduce this issue in my lab environment, so I am limited by what can be done at the customer site. The OSes in question are Red Hat Enterprise 5.4 and SUSE 10, running Java version 1.6.0_05-b13. Has anyone had this problem? Any ideas on why kill -3 is failing to produce a Java thread dump? Thanks!
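
    When kill -3 yields nothing, the forced variants of the JDK tools can often still get a stack dump (assuming a full JDK is installed on the host):

        jstack -F <pid>                         # uses the debugger interface instead of the signal
        gcore <pid>                             # snapshot a core image, then inspect it offline:
        jstack $JAVA_HOME/bin/java core.<pid>
        # strace -p <pid> shows whether the process is parked in an uninterruptible kernel call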

    Read the article

  • How to properly edit hosts, hostname and resolv.conf?

    - by Firewall
    I've been searching the internet for a real beginner's tutorial on the subject but could not find any direct information on how to edit these files the proper way. I've got a Debian internet server that I use to host some personal domains and that runs Squid and rTorrent. The server is up and running with no problems, but I am confused about a few things. Let's say that I named my server "foo", my domain is "example.com" and my public IP is 95.211.133.200. Now, should /etc/hostname contain:

        foo.example.com

    or just the server name:

        foo

    Should /etc/hosts contain:

        127.0.0.1       localhost.localdomain localhost
        95.211.133.200  foo.example.com foo

    Should /etc/resolv.conf contain (along with the nameservers) both:

        domain example.com
        search example.com

    or just the first one? Are there any other files that I should edit in order to make things right? Last thing: the command domainname returns "(none)". I believe it should return "example.com". What should I do to correct that?
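
    The conventional Debian layout, sketched with the names from the question (the nameserver is a placeholder): the short name goes in /etc/hostname, the FQDN is defined in /etc/hosts, and in resolv.conf "search" supersedes "domain", so one of the two suffices.

        # /etc/hostname
        foo
        # /etc/hosts
        127.0.0.1        localhost.localdomain localhost
        95.211.133.200   foo.example.com foo
        # /etc/resolv.conf
        search example.com
        nameserver 8.8.8.8
        # plain `domainname` prints the NIS domain and is normally "(none)";
        # `hostname -f` and `hostname -d` are what should show foo.example.com and example.com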

    Read the article

  • ~/.profile does not run on startup

    - by pocoa
    I want to run some scripts at system startup, so in my ~/.profile file I've added:

        WORKSPACE="~/Development/workspace"
        alias workspace="cd $WORKSPACE"

    I want this "workspace" alias to be available after startup. Maybe this is not the right place to define these variables.
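
    Two likely issues, stated as assumptions: ~/.profile is only read by login shells (not by every terminal or by non-interactive scripts), and "~" does not expand inside double quotes. Putting the alias in ~/.bashrc with $HOME behaves as expected for interactive shells:

        WORKSPACE="$HOME/Development/workspace"
        alias workspace='cd "$WORKSPACE"'
        # test in the current shell:
        source ~/.bashrc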

    Read the article

  • Remote Program (via ssh) suspends when leaving client computer

    - by Philipp F
    I'm working with MATLAB on a remote computer, logging in via ssh -X remotepc and running MATLAB with matlab &. When I start a long-running process and leave the computer, the process seems to get suspended (after about 30 minutes of being away), such that there is nearly no progress overnight. As soon as I come back and wake up the client, the remote process continues with the calculation. I can see this from the load-average values (uptime). Why is that, and how can I change this behaviour?
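
    A common workaround (assuming the stall comes from the client sleeping and freezing the forwarded X display / SSH session) is to run the job detached from both, e.g. inside screen without a GUI; the script name is illustrative:

        screen -S matlabjob
        matlab -nodisplay -nosplash -r "run('myscript.m'); exit" > job.log 2>&1
        # detach with Ctrl-a d, reattach later with: screen -r matlabjob
        # ServerAliveInterval 60 in ~/.ssh/config also helps sessions survive short client sleeps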

    Read the article

  • High frequency, kernel bypass vs tuning kernels?

    - by Keith
    I often hear tales about high-frequency trading shops using network cards that do kernel bypass. However, I also often hear about them using operating systems where they "tune" the kernel. If they are bypassing the kernel, do they need to tune the kernel? Is it a case of doing both because, while the network packets bypass the kernel thanks to the card, there is still everything else going on for which kernel tuning would help? In other words, do they use both approaches, one just to speed up network activity and the other to make the OS generally more responsive/faster? I ask because a friend of mine who works in this industry once said they don't really bother with kernel tuning anymore, because they use kernel-bypass network cards. This didn't make much sense to me, as I thought you would always want a faster kernel for all the calculations that still run on the CPU.
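
    For illustration only (a sketch of the kind of tuning that stays relevant even with kernel-bypass NICs, not a recommendation): isolating cores from kernel housekeeping and pinning the application to them, with frequency scaling disabled.

        # kernel boot parameters
        isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7
        # pin the (hypothetical) application and lock the governor
        taskset -c 2 ./trading_app
        cpupower frequency-set -g performance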

    Read the article
