Search Results

Search found 12281 results on 492 pages for 'ip blocking'.

Page 425 of 492

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful p2v migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a try to a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found but I was stuck with the error: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this: IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc" As the error remained, I read that I should rebuild dependencies on the virtual machine with depmod -a, but this returned an error: WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory So I read again about creating the directory empty and redoing "depmod -a". I now don't get the dependencies error, but I get this and I don't have a clue how to proceed: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Module ip_tables not found. iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
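
    For what it's worth, a rough sketch of the usual approach, assuming the modules can only come from the hardware node (a container has no kernel or /lib/modules of its own). The CTID 101 below is just a placeholder and vzctl option names can vary between versions:

        # on the OpenVZ hardware node, not inside the container (sketch)
        for m in ip_tables iptable_filter iptable_mangle ipt_state ipt_REJECT \
                 ipt_LOG ip_conntrack iptable_nat; do
            modprobe "$m"
        done
        # grant those modules to the container and restart it
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_state ipt_REJECT ipt_LOG iptable_nat" --save
        vzctl restart 101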


  • duplicity fail: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration). The short version: Duplicity isn't even asking for my password, but immediately says "invalid SSH password" (but I can ssh into the other server). Why? The long version: When I run duplicity /home/me scp://root@x.x.x.x//root/backup I get Invalid SSH password Running 'sftp root@x.x.x.x' failed (attempt #1) Invalid SSH password Running 'sftp root@x.x.x.x' failed (attempt #2) Invalid SSH password Running 'sftp root@x.x.x.x' failed (attempt #3) And it says Invalid SSH password immediately with no opportunity for me to actually type the password. When I type duplicity full -v9 --num-retries 4 /home/me scp://root@x.x.x.x//root/backup I get Main action: full Running 'sftp root@x.x.x.x' (attempt #1) State = sftp, Before = 'Connecting to 97.107.129.67... root@x.x.x.x's' State = sftp, Before = '' Invalid SSH password Running 'sftp root@x.x.x.x' failed (attempt #1) I can ssh into root@x.x.x.x fine, and in fact have the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't Duplicity supposed to prompt for a password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that could make this not work (I used Linode's default Ubuntu 8 setup and barely )?
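
    For reference, a minimal sketch of the usual workaround if the password prompting stays broken: set up key-based SSH auth so the sftp backend never has to ask (host and paths are the ones from the question, with root@x.x.x.x assumed):

        # a sketch: key-based auth so sftp never prompts
        ssh-keygen -t rsa                 # accept the defaults
        ssh-copy-id root@x.x.x.x
        ssh root@x.x.x.x true             # should now log in without a password
        duplicity /home/me scp://root@x.x.x.x//root/backup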


  • Route all traffic via OpenVPN client

    - by Ilya
    I've got an OpenVPN client running on 192.168.0.3. What I'd like to do is route all the traffic from the second computer, 192.168.0.100, via the OpenVPN client that's running on the first computer. My router IP is 192.168.0.1. Network topology: Windows computer with OpenVPN client: 192.168.0.3; Windows computer whose traffic has to be rerouted: 192.168.0.100; Router: 192.168.0.1. I want it to work in the following way: 192.168.0.100 computer => 192.168.0.3 computer => OpenVPN => 192.168.0.1. How can I achieve that by only modifying Windows' routing table? I've tried entering the following into the Windows shell (on the computer without VPN), which didn't work (it just dropped my internet connection): route delete 0.0.0.0 mask 255.255.255.255 192.168.0.1 route add 0.0.0.0 mask 255.255.255.255 192.168.0.3 Should I also set up the computer that has the OpenVPN client running? Does it have anything to do with Windows TCP forwarding? Thanks!
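
    For what it's worth, a rough sketch of what the routing change usually looks like, assuming an elevated command prompt on 192.168.0.100: a default route uses mask 0.0.0.0 (not 255.255.255.255), and the 192.168.0.3 machine still has to forward or NAT whatever arrives:

        :: on 192.168.0.100 (elevated cmd) - sketch only
        route delete 0.0.0.0
        route add 0.0.0.0 mask 0.0.0.0 192.168.0.3 metric 1
        :: on 192.168.0.3, enable IP forwarding, e.g. set the registry value
        :: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter to 1
        :: (reboot required), or share/NAT the VPN interface with ICS.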


  • OpenVPN bad source address from client

    - by Bogdan
    I have one problem with OpenVPN. There are a lot of drop records in the OpenVPN log file on the server: Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped grep -E "^[a-z]" server.conf ----- port 1194 proto udp dev tun ca data/ca.crt cert data/server.crt key data/server.key dh data/dh1024.pem tls-server tls-auth data/ta.key 0 remote-cert-tls client cipher AES-256-CBC tun-mtu 1200 server 10.10.10.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 8.8.8.8" client-to-client client-config-dir /etc/openvpn/ccd route 10.10.10.0 255.255.255.0 keepalive 10 120 comp-lzo persist-key persist-tun max-clients 5 status /var/log/status-openvpn.log log /var/log/openvpn.log verb 4 auth-user-pass-verify /etc/openvpn/verify.sh via-file tmp-dir /tmp script-security 2 ----- cat ccd/laptop ----- iroute 10.10.10.0 255.255.255.0 ----- cat client.conf ----- remote server ip 1194 client dev tun ping 10 comp-lzo proto udp tls-client tls-auth data/ta.key 1 pkcs12 data/vpn.laptop.p12 remote-cert-tls server #ns-cert-type server persist-key persist-tun cipher AES-256-CBC verb 3 pull auth-user-pass /home/user/.openvpn/users.db ----- According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options (see the screenshot). But, as you see, my config has such options. Could you please help me solve this problem? @week: verb level = 6; client log: Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500 Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1 Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5 Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5 Mon Oct 22 16:06:02 2012 Initialization Sequence Completed cat ccd/latop iroute 10.10.10.0 255.255.255.0 ifconfig-push 10.10.10.3 10.10.10.5
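
    A minimal sketch of the configuration that message usually points to, assuming 192.168.1.0/24 is the client's LAN (the dropped packets carry 192.168.1.107 as their source address) and that the ccd filename exactly matches the certificate common name:

        # server.conf - tell the server that this LAN sits behind a client
        route 192.168.1.0 255.255.255.0
        # ccd/<client common name>, e.g. ccd/laptop
        iroute 192.168.1.0 255.255.255.0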


  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting vhost configuration to work correctly. Below is the vhost conf: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /hosting/Client/site.com/www ServerName site.com ServerAlias www.site.com <Directory "/hosting/Client/site.com/www"> Options +Indexes +FollowSymLinks Order allow,deny Allow from all </Directory> DirectoryIndex index.html </VirtualHost> There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 forbidden error. The www-data group is the group on the www folder, which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default, "It works!" page. So I know that it's working. The error log says "client denied by server configuration". apache2ctl -S dump: nick@server:~$ apache2ctl -S /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted) VirtualHost configuration: *:80 is a NameVirtualHost default server site.com (/etc/apache2/sites-enabled/site.com.conf:1) port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1) alias www.site.com ServerRoot: "/etc/apache2" Main DocumentRoot: "/var/www" Main ErrorLog: "/var/log/apache2/error.log" Mutex watchdog-callback: using_defaults Mutex default: dir="/var/lock/apache2" mechanism=fcntl Mutex mpm-accept: using_defaults PidFile: "/var/run/apache2.pid" Define: DUMP_VHOSTS Define: DUMP_RUN_CFG Define: ENALBLE_USR_LIB_CGI_BIN User: name="www-data" id=33 not_used Group: name="www-data" id=33 not_used Output of namei -mo /hosting/Client/site/www/index.html f: /hosting/Client/site.com/www/index.html drwxr-xr-x root root / drwxr-xr-x root root hosting drwxr-xr-x root root Client drwxr-xr-x nick www-data site.com drwxr-xr-x nick www-data www -rw-rwxr-x nick www-data index.html
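
    One thing worth noting: Apache 2.4 replaced Order/Allow/Deny with the Require directives, and "client denied by server configuration" is the classic symptom when the old 2.2 syntax is used without mod_access_compat. A minimal sketch of the 2.4-style block, using the same path as above:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            AllowOverride None
            Require all granted
        </Directory>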


  • Fast way to find which user's computer on the domain is hogging all WAN bandwidth

    - by dasko
    I have a network of about 40 domain users and I have huge latency WAN issues, like 1400ms for google.com pings. I have noticed that the problem goes away after everyone goes home for the day. I would like to know if I should use something like a hub with Wireshark on the router or modem to see if there is any irregular activity. I am open to suggestions, but I need to isolate which user has the bug. I am assuming it is either downloads or someone spamming out heavily without knowing. It would be best to trace it to an IP number so I can just look in DNS and find the PC hostname with the problem. This is the first client I have this problem with, so I never really needed to address it before, but I'm not surprised, as users don't actually listen to any best practices that we have suggested. Please help, thanks. Just to update: PCs to routers and other computers have ping latency of 1ms, so it is right after I hit the WAN, using tracert to a (random) web site, that I get the massive delay in the responses. As well, this is a DSL line with 5Mb down and 650Kb up (maybe upload saturation?). Thanks.
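
    A rough sketch of the quick-and-dirty approach, assuming you can put a Linux box (via a hub or a switch mirror port) in line with the WAN link; iftop shows live per-host bandwidth, and a short tcpdump capture can rank internal IPs by packet count:

        # live per-host bandwidth on the LAN-facing interface (sketch)
        sudo iftop -i eth0 -n
        # or: capture briefly and rank source IPs by packet count
        sudo tcpdump -i eth0 -n -q -c 20000 2>/dev/null \
          | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head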


  • postfix concurrency limit with round robin dns

    - by goose
    Take the following internal round robin dns setup mymta.com. IN A 172.31.1.1 mymta.com. IN A 172.31.1.2 mymta.com. IN A 172.31.1.3 mymta.com. IN A 172.31.1.4 mymta.com. IN A 172.31.1.5 mymta.com. IN A 172.31.1.6 mymta.com. IN A 172.31.1.7 mymta.com. IN A 172.31.1.8 mymta.com. IN A 172.31.1.9 mymta.com. IN A 172.31.1.10 Now assume the following postfix setup (assume these are the only tweaks from defaults in debian package) main.cf: smtp_connection_cache_destinations = mymta.com smtp_connection_cache_reuse_limit = 750 smtp_destination_concurrency_limit = 75 transport * :[mymta.com] I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However I'm seeing more than a few hundred connections to mymta.com and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?
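
    Not an answer, but a quick way to see how the connections are actually being spread while the queue is busy, assuming delivery goes out on port 25; a sketch that counts established outbound SMTP connections per destination IP:

        # count outbound SMTP connections per destination address (sketch)
        netstat -tn | awk '$6 == "ESTABLISHED" && $5 ~ /:25$/ {split($5, a, ":"); print a[1]}' \
          | sort | uniq -c | sort -rn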


  • Apache reverse proxy access control

    - by Steven
    I have an Apache reverse proxy that is currently reverse proxying for a few sites. However I am now going to be adding a new site (let's call it newsite.com) that should only be accessible by certain IPs. Is this doable using Apache as a reverse proxy? I use VirtualHosts for the sites that are being proxied. I have tried using the Allow/Deny directives in combination with the Location statements. For example: <VirtualHost *:80> Servername newsite.com <Location http://newsite.com> Order Deny,Allow Deny from all Allow from x.x.x.x </Location> <IfModule rewrite_module> RewriteRule ^/$ http://newsite.internal.com [proxy] </IfModule> I have also tried configuring allow/deny specifically for the site in the Proxy directives, for example <Proxy http://newsite.com/> Order deny,allow Deny from all Allow from x.x.x.x </Proxy> I still have this definition for the rest of the proxied sites however. <Proxy *> Order deny,allow Allow from all </Proxy> No matter what I do it seems to be accessible from anywhere. Is this because of the definition for all other proxied sites? Is there an order in which it applies Proxy directives? I have had the newsite one both before and after the * one, and also within the VirtualHost statement.
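
    One detail that may matter here: Location matches a URL path, not a full http:// URL, so a block like <Location http://newsite.com> never matches anything. A minimal sketch of the usual shape inside the newsite.com vhost (x.x.x.x as in the question, with ProxyPass in place of the RewriteRule):

        <VirtualHost *:80>
            ServerName newsite.com
            <Location />
                Order Deny,Allow
                Deny from all
                Allow from x.x.x.x
            </Location>
            ProxyPass        / http://newsite.internal.com/
            ProxyPassReverse / http://newsite.internal.com/
        </VirtualHost>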


  • openvpn & iptables -- port forwarding and gateway

    - by Smith.Lai
    The problem is similar to this scenario: "iptables rule still takes effect after being deleted". Scenario: there are several clients (C1~C10) providing some services, such as SSH, HTTP, etc. The clients are actually personal computers behind NAT. Their IPs might be 192.168.0.x. To easily access these machines through the internet, I built an OpenVPN server (S1). All of C1~C10 connect to S1 with VPN addresses 10.8.0.x. If a user (U1) wants to access C1's SSH through the internet, he can connect to S1 on port "55555", and S1 port forwards 55555 to 10.8.0.6:22: echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 55555 -j DNAT --to-destination 10.8.0.6:22 It works well until I comment out the following in the OpenVPN server.conf (I commented it because I think it will make all connections go through S1): ;push "redirect-gateway" (C1 sits behind NAT; U1 reaches it over the internet through the VPN to S1.) C1~C10 have their own paths to access internet resources through NAT. The server load would be heavy if all C1~C10 connections went through S1 (for example, C1 sending data to C2, or C1 downloading data from an FTP site). Is there a way to solve this quandary?
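
    A sketch of the piece that is usually missing in this kind of setup: without redirect-gateway, C1's replies to U1 leave through C1's own NAT path instead of back through the tunnel, so S1 also has to source-NAT the forwarded connections (interface and addresses as in the rules above):

        # on S1, in addition to the DNAT rule above (sketch)
        iptables -t nat -A POSTROUTING -o tun0 -d 10.8.0.6 -p tcp --dport 22 \
                 -j MASQUERADE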


  • Servers in DMZ will not communicate with each other

    - by Tukaro
    (Full disclosure: I rate barely above "noob" when it comes to networking.) My workplace recently got a new web server. Since we're nearing the end of an overhaul of our website, we're doing a slooooow migration between the old web server and the new one. The old webserver (we'll call it SERVOLD) is Windows Server 2008 with IIS 7. It does not have SQL Server installed. The new server (SERVNEW) is Windows Server 2008 R2, IIS 7.5, with the same version of SQL Server installed. Both are located in the DMZ for our network, and both have their own outward-facing IP address (.3 and .4, respectively). Each server can communicate fine with computers within the domain (not in the DMZ), and those same computers have no trouble communicating with either server. Both servers are also accessible from the internet just fine. However, no matter what, these two servers just refuse to recognize each other. They have the same Workgroup name listed (WORKGROUP), and I thought that would be enough for them to recognize each other. What needs to happen such that I can get these two servers to communicate with each other? We want to do a gradual roll-over to the new website (new one uses ASP.NET, old one uses CFMX), so being able to use one database between both servers is a necessity. Thanks!
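
    A rough sketch of how this is usually narrowed down, assuming the blocker is the Windows firewall on the DMZ hosts and that SQL Server listens on its default TCP port 1433 (an assumption; check SQL Server Configuration Manager, and note the telnet client may need to be added as a feature first):

        :: from SERVOLD, test reachability to SERVNEW's DMZ address (substitute it)
        ping <servnew-ip>
        telnet <servnew-ip> 1433
        :: on SERVNEW, allow the SQL Server port through the firewall (sketch)
        netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433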


  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have a lot of resources for processing and handling web requests, and never exceed 10% CPU usage. They also have plenty of RAM. The problem is though that they both have RAID 1 160Gb SAS hard disk drives in them that are 75% full, and growing by the day. I didn't think that the amount of disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are: - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went :) - Any recommendations on networking? Is Gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Anyone used a TOE key? - Anyone benchmarked or had any performance issues with SAN over NAS? Any help greatly appreciated. Scott
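
    For what it's worth, a minimal sketch of the NFS flavour of this, with placeholder names (storage1, web1/web2 and /export/home are made up); cPanel mostly just needs /home to look like an ordinary local filesystem:

        # on the storage server: /etc/exports (sketch)
        /export/home  web1(rw,sync,no_root_squash) web2(rw,sync,no_root_squash)
        # on each cPanel server: /etc/fstab
        storage1:/export/home  /home  nfs  rw,hard,intr,noatime  0  0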


  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards. Is there a small business router that displays traffic graphs in the router administration web interface? The router needs to support DHCP and basic firewalling. No other features are required. Further, the reports just need to show overall trends. It is not necessary to show traffic by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG/tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home office routers, but if there's a commercial product that can be purchased that would be significantly simpler. Also the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.


  • Reason for perpetual dynamic DNS updates?

    - by mad_vs
    I'm using dynamic DNS (the "adult" version from RFC 2136, not à la DynDNS), and for a while now I've been seeing my laptops with MacOS 10.6.x churning out updates about every 10 seconds. And seemingly redundant updates at that, as the IP is more or less stable (consumer broadband). I don't remember seeing that frequency in the (distant...) past. The lowest time-to-live that MacOS pushes on the entries is 2 minutes, so I have no clue what's going on. ... Jan 12 13:17:18 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': deleting rrset at 'rCosinus._afpovertcp._tcp.dynamic.foldr.org' SRV Jan 12 13:17:18 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': adding an RR at 'rCosinus._afpovertcp._tcp.dynamic.foldr.org' SRV Jan 12 13:17:26 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': deleting rrset at 'rcosinus.dynamic.foldr.org' AAAA ... Additionally, I can't find out what triggers the updates on the laptop-side. Is this a known problem, and how would I go about debugging it? One of the machines is freshly purchased and installed. The only "major" change was installation of the Miredo client for IPv6/Teredo, but even disabling it didn't make a change (except that AAAA records are no longer published).
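
    A rough sketch of how to catch the trigger from the laptop side, assuming the updates go straight to port 53 on the master (en1 is a guess for the AirPort interface; substitute the real master address):

        # watch outgoing DNS traffic from the laptop (sketch)
        sudo tcpdump -i en1 -n -v 'port 53 and host <master-ip>'
        # see which registration domains mDNSResponder is using
        dns-sd -E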


  • Why do I have 55 local area connections in ipconfig?

    - by RMorrisey
    Windows Vista Home Premium. I should mention that I am having no problem whatever getting an internet connection. When I type "ipconfig" in the console, I get (55!) messages of 3 lines each, listing a ton of disconnected network connections. My PC only has one network card. Each message looks like this: Tunnel adapter Local Area Connection* 55: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : These don't cause a major problem; they make it a pain, though, to fish upward and find my IP address. How can I get rid of them? Edit: Actually, a few connection numbers are randomly missing from the sequence; so, it's really more like 30 or 40 connection messages, rather than all 55. Not sure why that is, either.
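
    Those tunnel adapters are normally IPv6 transition interfaces (Teredo/ISATAP/6to4) rather than real NICs. A sketch of the usual cleanup from an elevated command prompt; they are harmless to re-enable later if something needs them:

        netsh interface teredo set state disabled
        netsh interface isatap set state disabled
        netsh interface 6to4 set state disabled
        :: quicker way to find the address in the meantime
        ipconfig | findstr /i "IPv4"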


  • DNS zone file SPF configuration to support sending mail from multiple servers and gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from: the x.com website server (x.com and www.x.com - both at the same IP) its MX servers (smtp.x.com, mx.x.com, mail.x.com) another server that isn't listed as an MX server (somehost.x.com) via gmail using an account that has authenticated use of [email protected] Will this zone file work? If not, what are the problems with it? $ttl 38400 @ IN SOA ns1.x.com. hostmaster.x.com. ( 201003092 ; serial 8H ; refresh 15M ; retry 1W ; expire 1H ) ; minimum @ NS ns1.x.com. @ NS ns2.x.com. @ MX 10 mx.x.com. @ MX 20 smtp.x.com. @ MX 30 mailhost.x.com. ; SPF records @ IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all" mx IN TXT "v=spf1 a -all" smtp IN TXT "v=spf1 a -all" mailhost IN TXT "v=spf1 a -all" Questions: Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If only for Google Apps, what should I include to send from gmail.com? If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all? Does it make sense to add specific SPF records for each of the mail servers? Any other problems with the zone file? I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same now, just without google.com and somehost, but I want to make sure I won't break things when I change it.
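
    Not a review of the records themselves, but a quick sanity check once the zone is published; a sketch using dig (swap x.com for the real domain):

        # what the world will actually see (sketch)
        dig +short TXT x.com
        dig +short TXT smtp.x.com
        # expand what the Google include covers
        dig +short TXT _spf.google.com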


  • Why should one have a secondary DNS server?

    - by Sam Levin
    I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web-server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server. Hypothetical scenario: my (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too, considering that even if I had DNS up (it wasn't on the crashed server), it wouldn't be able to forward requests anyway since the server would be down? Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so if your webserver was down, you could redirect traffic to a backup? How would you switch to the secondary provider, in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on, and provide some guidance. Thanks


  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all of them interconnected by an OpenVPN bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So, the actual question is: Is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we can reach it by static IP regardless of its physical location? BTW, we can't add a dedicated NIC to each server. Thanks in advance. EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x, and now I have two interfaces at each server: SRV1: openvpnbr1 - 172.16.13.1 vmbr0 - 172.16.1.1 SRV2: openvpnbr1 - 172.16.13.2 vmbr0 - 172.16.2.1 But now there is no connection between those ifaces: SRV1: ping 172.16.13.2 From 172.16.1.1 icmp_seq=2 Destination Host Unreachable SRV2: ping 172.16.13.1 From 172.16.2.1 icmp_seq=2 Destination Host Unreachable If I shut down the vmbr0 interfaces, then there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?


  • How to have multiple web servers and IPs on the same physical network

    - by jsigned
    I do web development out of a small office and need to have multiple physical and virtual servers that can be accessed from the internet. I also have a number of devices (computers, laptops, tablets, printers, etc) that need connections as well. I have gotten a subnet of 8 IPs from my ISP and while that is adequate for the web servers it's far too small for everything that needs access to the network. My router is an ASUS RT-N16 running DD-WRT. I'm just smart enough about this routing topic to be dangerous, think 2-year-old with a magic marker. I would like to keep my internal network NAT'ed on the 192.168.x.x network and route the 68.69.x.x 255.255.255.248 traffic directly to the servers. The physical network consists of the 4-port DD-WRT router and an unmanaged gig switch. I have a fiber connection to the office that works as an Ethernet port. In other words I can plug my laptop directly into it and have access to the internet. There is no login or password and the router is set up to get DHCP from the ISP, and to provide DHCP addresses for the internal network. What I've done so far is google and try different configurations with little success. In the end I decided I didn't even know how to ask the questions needed. My questions are: Is this the best way to configure the network? How do you do it? VLANs? Multiple routers? I've never had to configure a router using anything more than the GUI so if this is command line stuff be gentle.


  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on 1 single EC2 instance. One website, "abc", was down for a few hours; it sometimes threw database connection errors and sometimes just took too long to respond. One website, "def", was incredibly slow but still up and running; the rest of the websites had the same symptoms as "abc". I can afford 15 min or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it and associate my elastic IP to the new instance, or "launch more like this"? Background on what may have happened to my EC2: The last time I made changes was 21 hours ago. A cronjob to create snapshots ran around 19 hours ago and it has been running for a long time. Google Analytics shows traffic to my websites such as kidlander.sg has been nothing exceptional. Are there any other actions I should take or better options I could have? (I have already contacted AWS support but their turnaround is 12 hours so I appreciate all the help I could get.) Update: I got everything back up and running and CPU utilisation is back to normal, around 30%. There is 1 difference between "def" and "abc" as well as my other websites: "def"'s database is hosted on RDS; "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.


  • who has files open on a linux server

    - by Robert
    I have the fairly common task of finding who has files open on our Linux (Ubuntu) file server in our Windows environment. We use Samba on the network and I use Putty from my workstation to establish a shell window to run bash scripts. I have been using something like this to find what files are open: (this returns a list of process IDs with each open file) Robert:$ sudo lsof | grep "/srv/office/some/folder" Then, I follow up with something like this to show who owns the process: (this returns the name of the machine on the network using the IPv4 protocol that owns the process) Robert:$ sudo lsof -p 27295 | grep "IPv4" Now I know the Windows client that has a file open and can take action from there. As you can tell this is not difficult but time-consuming. I would prefer to have a Windows application I can run that would just give me what I want. So, I have been thinking about creating some process I can run on Linux that listens on a port and then returns a clean list of all open files with the IP address of the host who has the file open. Then, a small Windows client application that can send the request on the port. It seems like this should be a very common need but I cannot find anything like this that has been done before. Any suggestions?
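
    Since the clients come in over Samba anyway, smbstatus already correlates locked files with the connecting machine, which may save the lsof round trip; a rough sketch (output columns can differ slightly between Samba versions):

        # locked files plus the pid holding them
        sudo smbstatus -L
        # connected sessions: pid, username, client machine name and IP
        sudo smbstatus -b
        # quick filter for one folder
        sudo smbstatus -L | grep "/srv/office/some/folder"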


  • Web Server Routing Based On Location

    - by Eric
    I have a website that has users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong are going to suffer latency problems. Traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users using the .hk TLD are going to be redirected to the Hong Kong web server. It shares the same database server with the Australian server, but due to aggressive SQL query caching, the performance impact of SQL query latency is negligible. But users accustomed to the Hong Kong website who have since traveled to Australia suffer additional latency, because they go to the .hk site, which redirects to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong so this is a significant issue for me. Instead of redirecting users to the closest web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, postgres and Django. Say I know how to estimate users' location based on users' IP addresses, what is my next step? At what level would I work? What topic should I read up on?
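
    A minimal sketch of the GeoIP approach at the nginx level, assuming nginx was built with the geoip module and a country database is installed (the hostnames are placeholders, and a real setup would skip the redirect when the visitor is already on the right server):

        # http {} context - sketch, needs ngx_http_geoip_module + GeoIP.dat
        geoip_country /usr/share/GeoIP/GeoIP.dat;
        map $geoip_country_code $nearest {
            default  au.example.com;
            HK       hk.example.com;
        }
        server {
            listen 80;
            return 302 $scheme://$nearest$request_uri;
        }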


  • SSL in IIS 7 on a subdomain in a web farm

    - by justjoshingyou
    I have been having one of the most frustrating days in my entire IT career. I am trying to install an SSL certificate on a subdomain in a web farm. http://shop.mydomain.com needs to ALWAYS be forced to https://shop.mydomain.com. I have a temporary cert issued from Verisign on shop.mydomain.com and have installed the cert on the server. The website for shop.mydomain.com is set as a host header in IIS with the DNS entry pointed to the same IP as mydomain.com - which is our load balancer. I actually have 2 load balancers (as needed by our ISP). One redirects all traffic on port 80 out to the different servers on port 80. The other pushes out port 443 to the servers on port 443. shop.mydomain.com is to be the only site protected by SSL at this time. When I add the binding and I navigate to https://shop.mydomain.com it pops up with a warning about the cert being invalid (assumed because this is a test cert), and then it sends the user to http. So, I checked the "Require SSL" box and it redirects to http://shop.mydomain.com/default.aspx and displays an ASP.NET 404 error message (not the IIS 404 error). I tried removing the binding on the site to port 80 as well with no luck. I am nearly ready to crawl under my desk into the fetal position. How on earth do I make this work? I can't even get it to work on one machine, let alone in the load-balanced environment.
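
    For the forced-redirect part, "Require SSL" on its own only returns a 403.4 error, it never redirects; the usual approach is a rewrite rule, which assumes the URL Rewrite module is installed on each node behind the balancer. A sketch of the web.config fragment:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Force HTTPS" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>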


  • Configuration for Router Behind Uverse 2Wire

    - by Nori
    I have a 2Wire Uverse router (RG). I'm not particularly fond of it and want to use it as a modem only. I have a Linksys router with Tomato firmware on it and want that to be configured as the router. Most of the "guides" I've seen in my searches have been to enable DMZ Plus mode, but I don't see a way to make that work while my router has a static IP address. I played with it for quite some time yesterday and didn't see a way to get it to work. Then I ran across a setting in broadband for configuring another network. I played with that for a little bit but ran out of time and couldn't get it to work. So my question is for anyone out there who has Uverse and successfully set up a Tomato-based firmware router behind the RG. How did you get it configured? I'm sure if I continued playing with it I could get it to work, but if someone out there already has it working then that would make my life easier. Thoughts? Thanks.


  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one. I used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and it works; I can see the web client. The problem is, I can't ssh in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them gives me Samba access. Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install, so is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,


  • Extending home network using multiple access points originating from another access point

    - by cyberjar09
    I have a home network with the current setup (RJ45 running from the main router to the access point): ISP -> modem (192.168.1.1) -> router (192.168.2.1) -> access point 1 (192.168.2.2). However, I would like to extend the network to two more floors in the house via the existing access point (the router is too far away and not reachable using a network cable, hence I need to extend using the current access point), so the target topology is: ISP -> modem (192.168.1.1) -> router (192.168.2.1) -> access point 1 (192.168.2.2), with access point 1 feeding both access point 2 (192.168.2.3) and access point 3 (192.168.2.4). Q1: Is this setup possible? Q2: If possible, will I have to do anything different from what I did to set up access point 1? Edit 1: I am trying to study the DD-WRT "Linking Routers" documentation to see which would be the correct mode of operation for me, but I'm confused because I don't see any info on how to use an existing access point to extend the signal of the SSID. If anyone could point me to the correct wiki for how I should set up AP2 and AP3 based on AP1, it would be very helpful. For AP1, I did the following: use a static IP and set up the same SSID as the primary wireless router, use the same security as the primary wireless router, and make AP1 point to 192.168.2.1 (primary router) for DHCP. Thanks in advance.

