Search Results

Search found 26947 results on 1078 pages for 'util linux'.


  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:
    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up MySQL connections so that they run sequentially, so that slow queries won't bog down the system
    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. ps: Notes about monitoring for DDoS would also be welcomed - perhaps with nagios? ;)
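
    One hedged way to implement the per-IP rate limit mentioned in the first point is iptables' "recent" match. This is a minimal sketch, not a vetted DDoS defence; the port, list name and thresholds (20 new connections per 60 seconds) are assumptions to tune:

      # Track each source IP that opens a new HTTP connection
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
      # Drop sources that open more than 20 new connections within 60 seconds
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP

    For monitoring, the tracked addresses can be inspected under /proc/net/xt_recent/HTTP on recent kernels (older ones use /proc/net/ipt_recent), which a Nagios check could parse.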

  • Ubuntu 10.04 on VirtualBox gives error: Target filesystem doesn't have /sbin/init. No init found. Try passing init= bootarg

    - by Philip
    I'm a linux newbie and the only reason I have it installed is so I can stop having Windows incompatibility issues with Ruby on Rails. Having said that, it sure has been nice, and much faster, and I don't think I'll be doing any Winrails stuff anytime soon. So I created a virtual machine using VirtualBox and have had Ubuntu on it for the last 3 weeks. Recently Ubuntu asked if it could update a few things, and I clicked 'ok'. Now it won't boot and I get this error:
      mount: mounting /dev on /root/dev failed: No such file or directory
      mount: mounting /sys on /root/sys failed: No such file or directory
      ...
      Target filesystem doesn't have /sbin/init.
      No init found. Try passing init= bootarg
      BusyBox v1.13.3... (initramfs) _
    So I cruised the forums and there are a variety of solutions, but they all have to do with booting from the live CD (which I assume is the ISO image I used to install Ubuntu in the first place). But when I boot from that CD, it just hangs on the Ubuntu screen, and the little dots keep cycling white to red; it hung there for an hour, so I think it was stuck. Not sure what I can do; can I do anything from the BusyBox shell (or whatever that is) to fix things? The thing is, it took about 10 hours to get everything the way I needed with all the gems and whatnot. And I didn't really write down what I tweaked, and I'm middle aged, so all that information has leaked out by now and I don't want to do it again. I'd really like to repair my existing install. One question you might have is: is there something wrong with the ISO? I don't think so, because I made a new virtual machine and used that same ISO file to install a fresh Ubuntu. Any help much appreciated. Phil
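
    If the live CD can be made to boot (perhaps from a freshly downloaded ISO, since the hang may itself be a bad read), the usual repair path is to chroot into the installed system and rebuild the initramfs. A minimal sketch, assuming the guest's root partition is /dev/sda1 (yours may differ):

      sudo mount /dev/sda1 /mnt            # assumption: root filesystem lives on /dev/sda1
      sudo mount --bind /dev /mnt/dev
      sudo mount --bind /proc /mnt/proc
      sudo mount --bind /sys /mnt/sys
      sudo chroot /mnt
      update-initramfs -u -k all           # regenerate the initramfs the update may have broken
      update-grub
      exit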

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1, e.g.:
      #> ping authserver.mojang.com
      PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
      64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms
      --- authserver.mojang.com ping statistics ---
      3 packets transmitted, 3 received, 0% packet loss, time 2000ms
      rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms
    dig, however, correctly returns the following:
      # dig authserver.mojang.com
      ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 512
      ;; QUESTION SECTION:
      ;authserver.mojang.com.        IN      A
      ;; ANSWER SECTION:
      authserver.mojang.com.  5      IN      A       54.235.119.47
      ;; Query time: 14 msec
      ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
      ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
      ;; MSG SIZE rcvd: 66
    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
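
    Since ping resolves names through the system resolver (/etc/hosts first, per nsswitch.conf) while dig queries a DNS server directly, a hedged first check is whether a local entry is shadowing the name:

      getent hosts authserver.mojang.com   # shows what ping's lookup path returns
      grep mojang /etc/hosts               # look for a stray local override
      grep '^hosts:' /etc/nsswitch.conf    # confirm the lookup order, e.g. "files dns"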

  • postfix test and configuration problem

    - by Woho87
    Hi guys! I installed Postfix using sudo yum install postfix postfix-mysql. I'm a newbie to mail systems, but I have one Amazon EC2 instance with a public DNS name. I used the public DNS name in most cases when I configured the file main.cf. The public DNS name I have is from Amazon and it is a long string (ec2-123-34-234-677.....amazon.com). I configured this in main.cf, replacing example.com with ec2-123-.......amazon.com:
      myhostname = mail.example.com
      mydomain = example.com
      myorigin = $mydomain
      mydestination = example.com, $transport_maps
      local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname
      home_mailbox = Maildir/
    How do I test Postfix? I just want it to send emails for my web application. I tried to test it with telnet localhost 25 after typing sudo postfix start in SSH, but I receive the message that the telnet command cannot be found. I also use the Amazon Linux distribution, if you want to know; I use it because it is free. What have I done wrong? Are there any more configurations required? Please help!
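
    The "telnet: command not found" message just means the telnet client isn't installed on the instance; it says nothing about Postfix itself. A hedged test sequence (package name assumed for Amazon Linux/yum):

      sudo yum install telnet
      sudo postfix start
      telnet localhost 25
      # then speak SMTP by hand:
      #   EHLO localhost
      #   MAIL FROM:<me@example.com>
      #   RCPT TO:<someone@example.com>
      #   DATA   (message text, finish with a line containing only ".")
      #   QUIT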

  • when to upgrade server to include more cores, versus more processors, versus additional server?

    - by gkdsp
    The server hosting market is separated into single, double, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return the result to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching...
    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or may only one client's request be processed at any one time, with that client's request divided across up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores impacts how fast any one client request completes)? I'm confused about what the cores mean to the application here.
    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?
    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:
      kaefert@blechmobil:~$ lsusb -s 2:3
      Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC
    As can be seen in this dmesg output, there are some problems that prevent that disk from being mounted:
      kaefert@blechmobil:~$ dmesg | grep sdb
      [  114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.475089] sd 5:0:0:0: [sdb] Write Protect is off
      [  114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
      [  114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      [  114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.501649] sdb: sdb1
      [  114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
      [  116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
      [  116.804413] EXT4-fs (sdb1): group descriptors corrupted!
    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):
      e2fsck -f -y -v /dev/sdb1
    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925). I started this whole thing on Sunday evening (2012-11-04 22:00), so about 48 hours ago; this is what htop says about it now (2012-11-06 19:00):
      PID  USER PRI NI  VIRT   RES SHR S CPU% MEM%    TIME+ Command
      3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1
    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that it's a good idea to see if the disk is just that slow, because maybe it's damaged. I think these outputs tell me that this is not the case here:
      kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
      /dev/sdb:
       Timing cached reads:   3562 MB in  2.00 seconds = 1783.29 MB/sec
       Timing buffered disk reads:  82 MB in  3.01 seconds =  27.26 MB/sec
      kaefert@blechmobil:~$ sudo hdparm /dev/sdb
      /dev/sdb:
       multcount  = 0 (off)
       readonly   = 0 (off)
       readahead  = 256 (on)
       geometry   = 364801/255/63, sectors = 5860533160, start = 0
    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop, or this:
      kaefert@blechmobil:~$ iostat -x
      Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)
      avg-cpu: %user %nice %system %iowait %steal %idle
               14,24 47,81   14,63    0,95   0,00 22,37
      Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
      sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
      sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04
    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:
      kaefert@blechmobil:~$ sudo strace -p3704
      lseek(4, 41026998272, SEEK_SET) = 41026998272
      write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
      lseek(4, 48404766720, SEEK_SET) = 48404766720
      read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
      ...
      ^CProcess 3704 detached
    with around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: Will this process ever finish? If those numbers from lseek (48404766720) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer getting this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and since I can't see any I/O errors in the dmesg output.
    UPDATE: I just looked at the strace output again (2012-11-06 23:00); now it looks like this:
      lseek(4, 1419860611072, SEEK_SET) = 1419860611072
      read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
      lseek(4, 43018145792, SEEK_SET) = 43018145792
      write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
      ...
    So the numbers in the lseeks before the reads, like 1419860611072, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale; maybe there are only some areas that need work that have big gaps in between them. (Times are in CET.)
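
    For what it's worth, e2fsck can report its own progress, which would answer the "will it ever finish" question directly on a future run. A hedged invocation (the -C flag is documented in e2fsck's man page; 0 means "write a progress bar to stdout"):

      e2fsck -C 0 -f -y -v /dev/sdb1

    Run from a terminal rather than through gparted, this also sidesteps the hidden-output bug mentioned above.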

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of the sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:
    - I have "strange logs" in my Apache web server log: error: sh: fetch: command not found
    - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    - I have some WordPress sites with plugins getting errors like:
      PHP Warning: pack(): Type H: illegal hex digit G in...
      PHP Fatal error: Cannot use object of type stdClass as array in...
      PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
      PHP Fatal error: Call to undefined function file_exists() in...
      PHP Parse error: syntax error, unexpected '<'
    I know that's a lot, but I really am at my wits' end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction, I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:
      RAM: 2048MB
      CPU Shares: 40
      Primary Disk: 50GB
      Data Transfer: 75GB
      Port Speed: 5Mbps
      Type: Linux

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:
    - our main file server is a Linux machine.
    - on the LAN, users simply access the files using SMB.
    - each user has an account on the file server and his/her own access rights.
    - user accounts are simple passwd/group security accounts, not NIS/LDAP.
    The problem: We want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must of course, and users would be expected to log in. The connection to the server should also be encrypted. Anyone has some pointers to neat solutions? Any experiences?
    Edit: The client machines are Windows only.

  • how to get the IP address of a PPP (Point-to-Point Protocol) network interface?

    - by Xsmael
    I have a Linux machine with two network interfaces, and I'd like to get the IP address of the PPP interface w1g1, but it doesn't show up in ifconfig. There is a public IP on the PPP interface, but there is no internet connection; I'm trying to troubleshoot, but I need to get the IP address of the interface and I can't. ifconfig:
      eth0  Link encap:Ethernet  HWaddr 00:30:48:8D:F0:2C
            inet addr:192.168.2.254  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: fe80::230:48ff:fe8d:f02c/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:9970 errors:0 dropped:567 overruns:0 frame:0
            TX packets:4338 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:1441024 (1.3 MiB)  TX bytes:915814 (894.3 KiB)
      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:675 errors:0 dropped:0 overruns:0 frame:0
            TX packets:675 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:50659 (49.4 KiB)  TX bytes:50659 (49.4 KiB)
      w1g1  Link encap:Point-to-Point Protocol
            UP POINTOPOINT RUNNING NOARP  MTU:240  Metric:1
            RX packets:748994 errors:0 dropped:0 overruns:0 frame:0
            TX packets:748992 errors:0 dropped:0 overruns:0 carrier:3
            collisions:0 txqueuelen:100
            RX bytes:179758560 (171.4 MiB)  TX bytes:179758080 (171.4 MiB)
            Interrupt:177 Memory:f881c400-f881e3ff
    w1g1 is connected to a modem by an RJ45-to-serial cable, and the modem is connected to the phone line. The modem is a Nokia DNT2Mi; you can see it here.
    Routing table:
      192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.254
      169.254.0.0/16 dev eth0  scope link
      default via 192.168.2.180 dev eth0
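
    Besides ifconfig, the iproute2 tools will usually print an address even for point-to-point links; a hedged check (interface name taken from the question):

      ip addr show w1g1     # look for "inet ... peer ..." on the PPP link
      ip route show         # confirm whether any route actually uses w1g1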

  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall; the firewall is configured to be zone-based. We want to set up IPA for centralized authentication (LDAP+Kerberos). IPA requires resolvable host names, and I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important. We have multiple zones where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.

  • Adjust iptables

    - by madunix
    cat /etc/sysconfig/iptables:
      # Firewall configuration written by system-config-securitylevel
      # Manual customization of this file is not recommended.
      *filter
      :INPUT ACCEPT [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      :RH-Firewall-1-INPUT - [0:0]
      -A INPUT -j RH-Firewall-1-INPUT
      -A FORWARD -j RH-Firewall-1-INPUT
      -A RH-Firewall-1-INPUT -i lo -j ACCEPT
      -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
      -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
      -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
      -A RH-Firewall-1-INPUT -p udp --dport 5353 -d X.0.0.Y -j ACCEPT
      -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
      -A RH-Firewall-1-INPUT -p tcp -m tcp -s X.Y.Z.W --dport 3306 -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s M.M.M.M --dport 3306 -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
      -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
      -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
      COMMIT
    I have the above iptables rules on my Linux web server (Apache/MySQL), and I want to have the following:
    - Block any traffic from multiple IPs to my web server: IP1: 1.2.3.4.5, IP2: 6.7.8.9, etc.
    - Limit one host to 20 connections to port 80, which should not affect non-malicious users but would render slowloris unusable from one host.
    - Limit MySQL port 3306 access on my server to only the following IP range: A.B.C.D/255.255.255.240
    - Block any ICMP traffic.
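
    A hedged sketch of rules for those four goals, using the connlimit match for the per-host cap; the blacklist address and the MySQL range are the placeholders from the question, and the rules would need to sit before the final REJECT:

      # 1. Drop traffic from blacklisted sources (placeholder IP from the question)
      -A RH-Firewall-1-INPUT -s 6.7.8.9 -j DROP
      # 2. Cap concurrent port-80 connections per source at 20 (slowloris mitigation)
      -A RH-Firewall-1-INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT
      # 3. Allow MySQL only from the given range, then drop all other 3306 traffic
      -A RH-Firewall-1-INPUT -p tcp --dport 3306 -s A.B.C.D/255.255.255.240 -j ACCEPT
      -A RH-Firewall-1-INPUT -p tcp --dport 3306 -j DROP
      # 4. Drop ICMP (this would replace the existing "--icmp-type any -j ACCEPT" line)
      -A RH-Firewall-1-INPUT -p icmp -j DROP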

  • Sendmail Configuration for Exchange Server

    - by user119720
    I need help with the sendmail configuration on our Linux machine. Here is the thing: I want to send email to the outside by using our Exchange server as the mail relay. But when sending email through the server, it responds with "user unknown"; to make it worse, it bounces all the sent messages back to my localhost. I already tested our configuration against an external mail server such as Gmail and Yahoo; that configuration works without any issue and the email reaches the recipient. Most of my sendmail configuration is based on here.
    authinfo file:
      AuthInfo:my_exchange_server "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"
      AuthInfo:my_exchange_server:587 "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"
    sendmail.mc:
      FEATURE(`authinfo', `hash /etc/mail/authinfo.db')dnl
      define(`SMART_HOST', `my_exchange_server')dnl
      define(`RELAY_MAILER_ARGS', `TCP $h 587')
      define(`ESMTP_MAILER_ARGS', `TCP $h 587')
      define(`confCACERT_PATH', `/usr/share/ssl/certs')
      define(`confCACERT', `/usr/share/ssl/certs/ca-bundle.crt')
      define(`confSERVER_CERT', `/usr/share/ssl/certs/sendmail.pem')
      define(`confSERVER_KEY', `/usr/share/ssl/certs/sendmail.pem')
      define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')
      TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')
      define(`confAUTH_OPTIONS', `A')dnl
    My first assumption was that the problem occurs due to authentication, as the Exchange server needs encrypted authentication (DIGEST-MD5). I have already changed this in the authinfo file (from plain login to digest-md5 login), but it is still not working. I can also telnet to our Exchange server, so the port is not being blocked by a firewall. Can someone help me out with this problem? I'm really at my wits' end. Thanks.

  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is led to collapse by a huge number of connections to the IP 74.125.79.118, departing from PHP scripts of the hosted web sites. After an in-depth analysis of the files, I found that no malware infections are present. IP 74.125.79.118 belongs to Google. I realized after a Google search that the connections to this IP are generated by YouTube videos embedded on web sites, among other Google features like safe search. But I don't understand how this type of behavior can lead to the collapse of the server, and the uniqueness of the situation leads me to think that the situation is far from being attributable only to Google and YouTube. Also, I've found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.
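
    One hedged way to see exactly which processes hold those connections (standard tools, assuming net-tools and lsof are installed):

      netstat -tnp | grep 74.125.79.118    # PID/program name per connection
      lsof -nPi @74.125.79.118             # which processes own the sockets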

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD Phenom X3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS.
    The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document.
    I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally.
    I have also considered that the problem is hard-drive-related; however, diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated.
    Requested information:
    Output of 'free': hxxp://pastebin.com/mfsJS8HS (stupid spam filter)
    The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump, I get this error:
      # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
      mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)
    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; then it disappeared somehow without me doing anything to resolve the issue. Now the problem is back and I'm simply unable to create a backup of this database.
    Used software:
      Debian 6.0.7 x64
      MySQL 5.1.66-0
    MySQL version:
      mysql> SHOW VARIABLES LIKE "%version%";
      +-------------------------+-------------------+
      | Variable_name           | Value             |
      +-------------------------+-------------------+
      | protocol_version        | 10                |
      | version                 | 5.1.66-0+squeeze1 |
      | version_comment         | (Debian)          |
      | version_compile_machine | x86_64            |
      | version_compile_os      | debian-linux-gnu  |
      +-------------------------+-------------------+
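
    Since plain SELECTs work but SHOW TABLE STATUS kills the connection, a hedged next step is to check the table and watch the MySQL error log while reproducing the failure (the log path is the Debian default and may differ):

      mysqlcheck -u root -p --check databasename apps    # integrity check of the "apps" table
      tail -f /var/log/mysql/error.log &
      mysql -u root -p databasename -e "SHOW TABLE STATUS LIKE 'apps'"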

  • No sound out of headphone port

    - by Thanatos
    I cannot get sound out of the headphone port. Headphones are plugged in, and sound comes out of the internal speakers. Windows behaves normally (sound switches to the headphones when they are inserted). It did work in Linux at one point, but something changed, and we're just not sure what. Rebooting doesn't fix it. This appears to occur whether or not PulseAudio is running.
    Things I've tried:
    - Rebooting. No effect.
    - Booting into Windows. It works properly, so probably not a hardware issue.
    - All of alsamixer. My only controls are: "Master" (volume bar, mutable, unmuted, controls volume); "PCM" (volume bar only, 100%); "S/PDIF" (mutable only, currently muted, has no effect); "S/PDIF Default PCM" (mutable only, currently unmuted, has no effect).
    - Killing PulseAudio. No effect. (It also won't stay dead! Something appears to be restarting it, and I can't tell what, but it is annoying as fuck.)
    - alsactl init 0. No effect.
    - sudo rm -f /var/lib/alsa/asound.state. No effect.
    General system info:
      Ubuntu 10.04 LTS
      Toshiba Satellite T135D-S1324
    lspci says I have:
      00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
      01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application code is entirely static, though some writable disk storage is needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
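
    A minimal sketch of the startup-script idea using tmpfs; the size, mount point and source path are placeholders:

      # Create a RAM-backed filesystem (sized to hold the application with headroom)
      mount -t tmpfs -o size=512m tmpfs /var/www/ramdisk
      # Copy the application from its permanent home into RAM
      cp -a /var/www/app/. /var/www/ramdisk/
      # Then point Apache's DocumentRoot at /var/www/ramdisk

    Note that the kernel's page cache often achieves much the same effect after the first few requests, so measuring before and after would be prudent.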

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault?
    EDIT 1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is because the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference.
    EDIT 2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazily inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.

  • Recover file from NTFS after it was formatted twice

    - by Phil
    I'm running Linux Mint and have a 2TB drive that I formatted as NTFS. I copied ~120GB of files from another computer to the 2TB drive, removing the files from the other computer as I did so. When they were all on the 2TB drive, I zipped them up as the file "Gold.tar.gz". Then I reformatted the 2TB drive as ext3 in a moment of absolute stupidity. I formatted the 2TB drive back to NTFS, but of course everything is gone. Here is what I have tried:
    - TestDisk: won't find any lost partitions or undelete files, just the current empty one.
    - PhotoRec: seems to only find some broken text files and misidentify their extensions. It never finds the hundreds of AVI files I had (before the 120GB copy, I already had 750GB on the drive, full of AVI files) or anything else that would show me it's working properly.
    - Using dd, I recovered the first 512MB of the drive and went hunting through it. I found all of the files as MFT entries, including the file "Gold.tar.gz" in a 2048-byte MFT record.
    I'm looking now for some way of either (1) telling PhotoRec to look at that record, or (2) analyzing the MFT record myself to discover the sectors holding the data; I can piece it all together using dd and join the binary output if it's fragmented. One last thing: from the moment I got this drive a few days ago until the incident, there were only file copies made to it and no deletes. I formatted as NTFS, then copied thousands of files, then made a tar.gz, then reformatted to ext3, then reformatted to NTFS again. I'm hoping that the size of the drive and the fact that there was no file modification/deleting happening makes for minimal file fragmentation.

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial
    In the tutorial I'm finding the wording around migrating kernel threads from having access to all CPUs to running only in a certain cpuset a bit ambiguous. The tutorial says the following:
      Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs.
    The tutorial then goes on to say:
      However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command.
      [zuul:cpuset-trunk]# cset shield -k on
      cset: --> activating kthread shielding
      cset: kthread shield activated, moving 70 tasks into system cpuset...
      [==================================================]%
      cset: done
    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test if these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge them, so there shouldn't be any trace of them left, right?
    EDIT: After using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove:
      /usr/bin/ipython2.6
      /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
      /usr/lib/python2.6/dist-packages/easy_install.py
      /usr/lib/python2.6/dist-packages/IPython
      /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
      /usr/lib/python2.6/dist-packages/setuptools
      /usr/lib/python2.6/dist-packages/setuptools.egg-info
      /usr/lib/python2.6/dist-packages/setuptools.pth
      /usr/lib/python2.6/dist-packages/site.py
      /usr/lib/python2.6/dist-packages/wx.pth
      /usr/local/lib/python2.6
      /usr/local/lib/python2.6/dist-packages
      /usr/local/lib/python2.6/site-packages
      /usr/share/man/man1/ipython2.6.1.gz
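
    A hedged version of the "quick script" mentioned in the edit, for anyone repeating the check: ask dpkg which package owns each file; anything unowned is a candidate for removal:

      # For every python2.6 file, report the owning package (or flag it as unowned)
      find /usr/bin/ipython2.6 /usr/lib/python2.6 /usr/local/lib/python2.6 -maxdepth 2 2>/dev/null |
      while read -r f; do
          dpkg -S "$f" >/dev/null 2>&1 || echo "unowned: $f"
      done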

  • How can I see what processes make my server slow?

    - by Steven
    All the websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see what processes are making my server slow?
    My environment looks like this:
      Server: VPS running Linux 2.8.x
      OS: CentOS 5
      Management interface: Plesk 9.x
      Memory: 1024MB
      CPU: 2.2GHz
    My websites run on PHP and MySQL. I finally managed to telnet (PuTTY + SSH) into my server. Running top did not show any processes using more than 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what can be wrong.
    Here are some answers to Sean Kimball:
    - I don't run mail services on my server yet.
    - There are no specific bandwidth peaks. Prefork looks like this:
      <IfModule prefork.c>
      StartServers 8
      MinSpareServers 5
      MaxSpareServers 20
      ServerLimit 256
      MaxClients 256
      MaxRequestsPerChild 4000
      </IfModule>
    - Not sure what you mean with the DNS question, but I think it's up and running.
    - There are no processes running wild.
    - Where can I find average load? Telnet is disabled and I have to log in using SSH :)
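
    A hedged toolbox for exactly this question (all standard on CentOS 5):

      uptime                       # the three trailing numbers are 1/5/15-minute load averages
      top                          # press M to sort by memory, P to sort by CPU
      vmstat 5                     # a persistently high "wa" column means the box is I/O-bound
      ps aux --sort=-%cpu | head   # one-shot list of the hungriest processes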

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40GB) on a USB drive. Each file has a unique int id (e.g. 45000).
    Option one is to put all files in a single folder:
      root/
      root/1.pdf
      root/2.pdf
      root/3.pdf
      ...
      root/567.pdf
      root/568.pdf
      root/569.pdf
      ...
      root/10001.pdf
      root/10002.pdf
      root/10003.pdf
      ...
      root/99998.pdf
      root/99999.pdf
      root/100000.pdf
    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:
      root/
      root/1/file.pdf
      root/2/file.pdf
      root/3/file.pdf
      ...
      root/5/6/7/file.pdf
      root/5/6/8/file.pdf
      root/5/6/9/file.pdf
      ...
      root/1/0/0/0/1/file.pdf
      root/1/0/0/0/2/file.pdf
      root/1/0/0/0/3/file.pdf
      ...
      root/9/9/9/9/8/file.pdf
      root/9/9/9/9/9/file.pdf
      root/1/0/0/0/0/0/file.pdf
    Which option will scale better? I can understand that the second option will require tons of folders, but each folder will contain at most 10 folders and 1 file. Maintenance will not be an issue, since everything will be controlled by an application. Note that this is a USB drive on Linux, and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
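
    For what it's worth, the id-to-path mapping in option two is a one-liner in the shell; a sketch using the digit-per-level scheme from the question:

      id=45000
      path="root/$(echo "$id" | sed 's|.|&/|g')file.pdf"   # each digit becomes one directory level
      echo "$path"                                          # -> root/4/5/0/0/0/file.pdf
      mkdir -p "$(dirname "$path")"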

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on my physical machine. It is not available, even though I have enabled it in my BIOS, forced it in GRUB, and triple-checked my kernel to include HPET in its compilation.
      Motherboard: Supermicro X9DRW
      Processor: 2x Intel(R) Xeon(R) CPU E5-2640
      SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)
      Distro: CentOS 6.3
      Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux
      Grub: hpet=force clocksource=hpet
    .config file:
      CONFIG_HPET_TIMER=y
      CONFIG_HPET_EMULATE_RTC=y
      CONFIG_HPET=y
    dmesg | grep hpet:
      Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
      Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource:
      tsc
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource:
      tsc jiffies
    What is even more confusing is that I have about a dozen other machines that utilize the same kernel .config and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help with getting HPET available. Thanks in advance!

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?
    EDIT: I conducted some experiments.
    Write performance on the laptop: The laptop has an xfs filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with 256-bit key length. Disk write performance is 58.8 MB/s.
      iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
      1073741824 bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s
    Read performance on the workstation: The files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).
      iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
      10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s
    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.
      iblue@raven $ iperf -c 94.135.XXX
      ------------------------------------------------------------
      Client connecting to 94.135.XXX, TCP port 5001
      TCP window size: 23.2 KByte (default)
      ------------------------------------------------------------
      [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
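
    Given the 58.8 MB/s write ceiling on the encrypted laptop disk, the receiving end rather than the network looks like the likely limit. A hedged way to isolate the layers is to repeat the copy while skipping one of them at a time (hostname and file are placeholders):

      rsync -P bigfile laptop:/dev/shm/       # network + ssh, but RAM-backed writes on the target
      scp -c aes128-ctr bigfile laptop:/tmp/  # cheaper ssh cipher, if the server supports it
      dd if=/dev/zero of=test.img bs=1M count=1024  # raw local write speed (measured above)

    If the /dev/shm transfer approaches wire speed, the encrypted filesystem on the laptop is the bottleneck rather than rsync or the switch.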
