Search Results

Search found 19061 results on 763 pages for 'load factor'.

  • Encrypted off-site data storage

    - by Dan
    My business has a rather unique problem. We work in China and we want to implement a file server paradigm which does not store any files locally, but rather on a server overseas. Applications would be saved onto our local machines, but data would be loaded directly into memory from the cloud, e.g. I load a .docx into Word at the beginning of the day, saving periodically to the cloud as I work on it, and turn off my computer at night, with nothing saved locally. Considering recent events, we worry about being raided by the Chinese authorities, and although all our data is encrypted, it would not be hard for the authorities to force us to give up the keys. So the goal is not to have anything compromising physically in China. We have about 20 computers, and we need an authenticated, encrypted connection with this overseas file server. A system with Active-Directory-like permissions would be best, so that only management can read or write certain files, workers can only access files that relate to their projects, and all access can be cut off should the need arise. The file server itself would also need to be encrypted. And for convenience, it would be nice if this system were integrated with each computer's file explorer (like SkyDrive or Dropbox do, but, again, without saving a copy locally), rather than through a browser. I can't find any solution online. Does anyone know of a service that does this? Otherwise I'll have to do it myself (which kinda sounds fun, but I don't really have the time), and I'm not sure where to start. Amazon maybe. But the protocols that offices would use on their intranet typically aren't encrypted; we need all traffic securely tunneled out of the country. Each computer already has a VPN to a server in California, but I'm unsure whether it would be efficient to pipe file transfers through it. Let me know if anyone has any ideas. And this is my first post; feel free to say whether this question is inappropriate or needs to be posted elsewhere.

    Read the article

  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed is the LAMP stack, Joomla, VirtualBox, phpvirtualbox, Webmin and ProFTPD. DDClient keeps the IP address resolved so I can access it remotely (either the Apache webserver or the FTP). Any packages installed have been installed using apt-get. Webmin, although discouraged on Ubuntu Server, is used mostly to administer the webserver aspect. This issue also appeared when I was using Ubuntu Server 10.10. After periods of heavy network traffic, whether local or remote, the connection drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh. I can't FTP to the server nor can I load the website. There are times when the server has been on for a few days and everything runs fine because I haven't accessed it much, if at all (thus not much network traffic). I've gone through a few hardware changes, although I don't believe this has caused the issue: this has been happening since long before I made any changes. At first I thought it was my ISP-provided router blocking traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack). I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I've run memtest via the GRUB2 menu at boot, and once it turned up 4 errors; I ran it again with individual sticks of RAM in various slots and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc). Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that I've only started playing with servers as a hobby, so my knowledge wouldn't be the most refined. I'm comfortable with the command line and have the initiative to look up whatever I can't do. Unfortunately I can't seem to find any issues like this. Additionally: if a solution can't be found, I'd appreciate some assistance writing a script that will reboot the server automatically if, after x minutes, it gets no response pinging somewhere like Google. Admittedly that's not the cleanest solution should my internet end up going down, but I can't think of what else to do.
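    On the reboot-script idea, here is a minimal cron-driven sketch; the probe host, interval and failure threshold are assumptions to adjust, and it deserves careful testing since a false positive means a reboot:

        #!/bin/bash
        # Hypothetical watchdog: reboot after MAX_FAILS consecutive failed checks.
        # Run from root's crontab, e.g.: */5 * * * * /usr/local/sbin/net-watchdog.sh
        TARGET=8.8.8.8      # assumed probe host; any reliably reachable IP works
        MAX_FAILS=3
        STATE=/var/run/net-watchdog.fails
        fails=$(cat "$STATE" 2>/dev/null || echo 0)
        if ping -c 3 -W 5 "$TARGET" > /dev/null 2>&1; then
            echo 0 > "$STATE"                  # network is up; reset the counter
        else
            fails=$((fails + 1))
            echo "$fails" > "$STATE"
            if [ "$fails" -ge "$MAX_FAILS" ]; then
                logger "net-watchdog: $fails failed checks, rebooting"
                /sbin/reboot
            fi
        fi

    As the asker notes, this is a band-aid rather than a fix, and it will also fire if the internet connection itself goes down.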

    Read the article

  • Troubleshooting an unstable internet connection

    - by Konrad Rudolph
    My MacBook Pro running OS X (10.9, but I had the same problem before) is connected to a Belkin router via WiFi and, using Virgin Media as the ISP, to the internet. The connection is extremely unstable – on some days, I get a ping timeout every few seconds. In addition, some domains seem to suffer general connectivity issues. For instance, I often find that while the youtube.com website loads, none of the videos (which are hosted on a separate domain) do. At other times, videos load but always fail to buffer, even though the actual connection speed is OK and I've disabled DASH playback. Since I'm living in a rented room and the ISP contract isn't actually mine, I've got only limited possibilities of addressing the problem. In particular, I have no access to the router configuration, and my non-tech-savvy landlady, while sympathetic, is not in a great hurry to hand the problem over to the ISP's customer support. What's more, I seem to be the only person in the house experiencing these problems – but I can imagine that this is simply because I'm the only one who's using the internet continuously. I'm searching for specific tests that might be able to pinpoint – and ideally solve – the problem. So far all I've managed to do is establish that Virgin is routing my traffic in mysterious ways. Here's an excerpt from traceroute google.co.uk; it's worth mentioning that the host name doesn't seem to matter a lot, the trace route is always the same.

        traceroute: Warning: google.co.uk has multiple addresses; using 62.254.36.148
        traceroute to google.co.uk (62.254.36.148), 64 hops max, 52 byte packets
         1  (192.168.2.1)  1.112 ms  1.300 ms  2.359 ms
         2  10.100.32.1 (10.100.32.1)  11.926 ms  10.217 ms  24.987 ms
         3  cmbg-core-1a-ae3-610.network.virginmedia.net (80.1.202.93)  28.809 ms  *  66.653 ms
         4  popl-bb-1b-ae16-0.network.virginmedia.net (212.43.163.141)  13.759 ms  126.504 ms  20.472 ms
         5  nrth-bb-1b-et-010-0.network.virginmedia.net (62.253.175.57)  28.357 ms  16.398 ms  42.387 ms
         6  nrth-bb-1c-ae1-0.network.virginmedia.net (62.253.174.110)  27.441 ms  15.622 ms  12.044 ms
         7  lutn-icdn-1-ae0-0.network.virginmedia.net (62.253.175.82)  16.678 ms  28.463 ms  28.253 ms
         8  * * *
         9  * * *
        10  * * *
        ^C

    If I let it, this goes on until the end of time; it never seems to reach a destination. Is this normal? A friend living in the same town who is also with Virgin Media has a more conventional traceroute output: 7 hops to google.co.uk, all of which send the ICMP TIME_EXCEEDED response. The obvious fix – rebooting the router – doesn't seem to help. As far as I can tell, the WiFi connection is stable (I can always ping the router), so the problem is further downstream. I've tried using an alternative DNS before (OpenDNS) but, if anything, this made things worse. In fact, it made all Google services nigh unreachable.
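    For pinpointing where along the path packets are dropped, mtr (which combines ping and traceroute and shows per-hop loss over many probes) is usually more telling than a single traceroute; a sketch, assuming the mtr package is installed:

        # 100 probes per hop. Loss that first appears at one hop and persists
        # all the way to the destination implicates that hop; loss that shows
        # only at a middle hop is usually just ICMP rate-limiting.
        sudo mtr --report --report-cycles 100 google.co.uk

    Running it once over WiFi and once over an Ethernet cable to the Belkin would also separate a wireless problem from an ISP one.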

    Read the article

  • MySQL Execution Time Spikes

    - by Brett
    I am having issues with MySQL all of a sudden today. Details:

        OS: CentOS release 5.7
        Server type: Parallels Virtuozzo container running on a Media Temple DV 4.0 package
        Average total memory usage: <500MB (1GB allowed; the extra is a shared emergency pool, users are only guaranteed 500MB)
        Processor: 1GHz
        Main database sizes with most usage: 275MB and 107MB
        Server stack: nginx 1.0.10, MySQL 5.1.54, PHP 5.3.8 with php-fpm
        innodb_buffer_pool_size = 100M
        php-fpm max children: 5
        Webapps: custom PHP-based sites, Magento and Drupal
        Slow query timeout: 1 second

    Steps I completed towards diagnosis: I cannot restart the container yet - I will try later tonight when our domestic traffic has dropped. I enabled the MySQL and php-fpm slowlogs; functions that did DB queries in the php-fpm slowlog were taking over 1s to complete at times, and some simple queries in the MySQL slowlog that should take less than 1s were taking well over 1s to complete. Most interesting - execution time seems to spike at times. A query will take .2s a couple of times, then one time it will take 8s to run the same query. These results were verified by running raw SQL queries through the MySQL command line. Top does not reveal anything too interesting; the only resource-related thing I can see is load averages much higher than normal. Up until today, MySQL has been fine, and there have been no major changes to the DB since yesterday. Sometimes things are so bad that I am seeing bad gateway errors after 60s of execution time. InnoDB is doing on average 300-1400 reads/sec, MySQL is doing 3-10 queries/sec, and the slow query count in 2 hours of uptime is 171 (with the slow timeout at 1 second). I have tried restarting MySQL, nginx and php-fpm multiple times. For example:

        UPDATE `catalogsearch_query`
        SET `query_text` = 'EW 90', `num_results` = '7532', `popularity` = '99180',
            `redirect` = NULL, `synonym_for` = NULL, `store_id` = '1',
            `display_in_terms` = '1', `is_active` = '1', `is_processed` = '1',
            `updated_at` = '2012-05-08 21:38:31'
        WHERE (query_id='31');

    This query took 17s to complete one time, and the rest of the time around .079s - but it varies, sometimes 1s, sometimes .004s. This is running the same query over and over, with a couple of seconds between each run. Most tables are InnoDB, and sometimes I noticed the lock time taking 90% of the query execution time, but most of the time lock time is insignificant. Any idea what's going on here?
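    Given that the same statement swings between .004s and 17s and that lock time occasionally dominates, catching the server in the act is the useful next step; a sketch of the standard checks, run from the MySQL command line while a spike is happening (all stock commands, no extra tooling assumed):

        SHOW FULL PROCESSLIST;        -- anything stuck in "Locked" or "Waiting for ..."?
        SHOW ENGINE INNODB STATUS\G   -- TRANSACTIONS section shows lock waits;
                                      -- LOG section shows if flushing is the choke point
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
                                      -- reads vs. read_requests: misses go to disk

    With a 100M buffer pool against a 275MB main database, a high miss rate would line up with the 300-1400 reads/sec observed, and on a shared Virtuozzo node that disk I/O is contended with the neighbours, which would explain why the spikes look random.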

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far it's clear that running if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple server blocks. But isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks to accomplish this would be somewhat slower than a single one when under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            #other code
        }

    What I want to do is redirect certain HTTP requests to HTTPS. For example, I want /login/ and /my-account/ to always be forced to use SSL. If you're on /help/ though, I want that served over plain HTTP. Is there a way to accomplish this within a single server block? Or is there no downside to using two server blocks to get this working? nginx seems to be under pretty active development, and a lot of the older guides I followed were from times when you couldn't listen for port 80 and 443 within the same server block. But now that nginx supports that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated. EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }

        server {
            listen 10.0.0.60:80;   ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
        }

    It's not working though. When I change the default to https, everything is redirected to SSL, so it does somewhat work. But the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
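    One detail worth flagging: the # fragment of a URL is purely client-side and is never sent to the server, so a map pattern like ~^/#/user/login can never match: nginx only ever sees / for that address. Against real paths, a sketch of the same map approach (the paths and hostname here are assumptions):

        map $uri $preferred_proto {
            default         "http";
            ~^/login/       "https";
            ~^/my-account/  "https";
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            # only hop protocols when the one in use differs from the preferred one
            if ($preferred_proto != $scheme) {
                return 301 $preferred_proto://mysite.com$request_uri;
            }
            # ...rest of the server block unchanged
        }

    For a URL that only exists behind a client-side #-route, the redirect has to happen in the JavaScript router instead, since the server never learns which fragment is in use.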

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases. I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server backend. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server. About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this: (1) "I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most to the server anyway" (or "no, it doesn't outweigh the benefit, keep them in Access"); (2) "actually the server will be faster because of such and such". I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. OK sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.
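    For what it's worth, the mechanical part of the move is small: a saved Access query typically becomes a server-side view (or a stored procedure if it takes parameters), which the front end then links to like a table. A sketch, with made-up table and column names:

        -- Hypothetical replacement for a saved Access query:
        CREATE VIEW dbo.vwOpenOrders AS
        SELECT o.OrderID, o.OrderDate, c.CustomerName
        FROM dbo.Orders AS o
        JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
        WHERE o.Status = 'Open';

    The usual argument for doing this even on a slower server is that the work moves to where the data lives: Access executing a query locally can drag whole tables across the network before filtering, while the view returns only the result rows.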

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a Rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with Ruby on Rails, so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process, such as: Do I copy over the entire folder defined as the application root in the Mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of Ruby and the various gems? Any general gotchas to look out for in the process. Current server information:

        root@webnode001:/# cat /proc/version
        Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006
        root@webnode001:/# rails -v
        Rails 1.2.3
        root@webnode001:/# mongrel_rails cluster::configure --version
        Version 1.0.1
        root@webnode001:/# gem -v
        0.9.0
        root@webnode001:/# gem list -l

        *** LOCAL GEMS ***

        actionmailer (1.3.3, 1.2.5) - Service layer for easy email delivery and testing.
        actionpack (1.13.3, 1.12.5) - Web-flow and rendering framework putting the VC in MVC.
        actionwebservice (1.2.3, 1.1.6) - Web service support for Action Pack.
        activerecord (1.15.3, 1.15.2, 1.14.4) - Implements the ActiveRecord pattern for ORM.
        activesupport (1.4.2, 1.4.1, 1.3.1) - Support and utility classes used by the Rails framework.
        cgi_multipart_eof_fix (2.1) - Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string.
        daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) - A toolkit to create and control daemons in different ways
        eventmachine (0.7.2, 0.7.0) - Ruby/EventMachine socket engine library
        fastercsv (1.2.0, 1.1.0) - FasterCSV is CSV, but faster, smaller, and cleaner.
        fastthread (1.0) - Optimized replacement for thread.rb primitives
        ferret (0.11.4) - Ruby indexing library.
        gem_plugin (0.2.2, 0.2.1) - A plugin system based only on rubygems that uses dependencies only
        mongrel (1.0.1, 0.3.13.4) - A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps.
        mongrel_cluster (0.2.1) - Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes.
        mysql (2.7) - MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs.
        piston (1.3.3) - Piston is a utility that enables merge tracking of remote repositories.
        rails (1.2.3, 1.1.6) - Web-application framework with template engine, control-flow layer, and ORM.
        rake (0.7.3, 0.7.1) - Ruby based make-like utility.
        sources (0.0.1) - This package provides download sources for remote gem installation
        swiftiply (0.5.1) - A fast clustering proxy for web applications.
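    On the version question: a Rails 1.2.3 app will generally not run on the latest Rails or modern gems without code changes, so the safer migration is to mirror the old environment. A rough sketch, assuming /u/apps/myapp really is the whole deployment tree (a Capistrano-style current/ plus shared/ for logs, config and uploads):

        # copy the full app tree, preserving the shared/ directory Capistrano links into
        rsync -az root@webnode001:/u/apps/myapp/ /u/apps/myapp/
        # pin the gem versions reported by the old box
        gem install rails -v 1.2.3
        gem install mongrel -v 1.0.1
        gem install mongrel_cluster -v 0.2.1
        gem install mysql -v 2.7

    The database contents and config/database.yml have to come across too, and the remaining gems from the list above can be installed the same way if the app complains about missing ones at startup.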

    Read the article

  • MySQL Extremely High Disk Activity (Read Operations)

    - by Jake Schoermer
    I have a 1GB Linode VPS with a standard LAMP stack. Apache is tuned fine, but for some reason MySQL's disk usage is high. This is causing really slow site load times. RAM and CPU usage are fine. Can anyone give me any pointers on tuning MySQL's disk performance? I'm using InnoDB. iotop output is below.

        Total DISK READ: 38.50 M/s | Total DISK WRITE: 27.20 K/s
          TID  PRIO  USER      DISK READ>  DISK WRITE   SWAPIN     IO    COMMAND
         9808  be/4  mysql      22.40 M/s    0.00 B/s   0.00 %  63.75 %  mysqld
        10045  be/4  mysql       2.06 M/s    0.00 B/s   0.00 %  26.65 %  mysqld
         9987  be/4  mysql    1694.38 K/s    0.00 B/s   0.00 %  18.33 %  mysqld
        10015  be/4  mysql    1554.47 K/s    0.00 B/s   0.00 %  12.71 %  mysqld
        10019  be/4  mysql    1461.21 K/s    0.00 B/s   0.00 %   5.58 %  mysqld
         9839  be/4  mysql    1383.48 K/s    0.00 B/s   0.00 %  25.69 %  mysqld
        10031  be/4  mysql    1243.58 K/s    0.00 B/s   0.00 %   5.68 %  mysqld
        10023  be/4  mysql    1057.04 K/s    0.00 B/s   0.00 %   2.02 %  mysqld
        10020  be/4  mysql    1025.95 K/s    0.00 B/s   0.00 %   7.05 %  mysqld
        10001  be/4  mysql     808.33 K/s  683.97 K/s   0.00 %   1.16 %  mysqld
        10025  be/4  mysql     746.15 K/s    0.00 B/s   0.00 %   3.28 %  mysqld
        10043  be/4  mysql     715.06 K/s    0.00 B/s   0.00 %   0.48 %  mysqld
        10044  be/4  mysql     672.31 K/s    0.00 B/s   0.00 %   5.25 %  mysqld
        10034  be/4  mysql     668.42 K/s 1989.73 K/s   0.00 %   5.31 %  mysqld
         9985  be/4  mysql     450.80 K/s  124.36 K/s   0.00 %   8.83 %  mysqld
         9989  be/4  mysql     357.53 K/s    0.00 B/s   0.00 %   5.21 %  mysqld
        10033  be/4  mysql     186.54 K/s    0.00 B/s   0.00 %   1.59 %  mysqld
        10021  be/4  mysql     155.45 K/s  435.25 K/s   0.00 %   1.23 %  mysqld
        10007  be/4  mysql     124.36 K/s    0.00 B/s   0.00 %   0.53 %  mysqld
         9763  be/4  www-data   38.86 K/s    0.00 B/s   0.00 %   4.56 %  apache2 -k start
        10027  be/4  mysql      31.09 K/s    0.00 B/s   0.00 %   4.24 %  mysqld
            1  be/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  init
            2  be/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [kthreadd]
            3  be/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [ksoftirqd/0]
            4  be/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [kworker/0:0]
            5  be/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [kworker/u:0]
            6  rt/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [migration/0]
            7  rt/4  root        0.00 B/s    0.00 B/s   0.00 %   0.00 %  [migration/1]
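    38 MB/s of sustained reads with almost no writes is the classic signature of an InnoDB buffer pool too small for the working set, so queries keep re-reading pages from disk. A first-pass sketch (the size is an assumption for a 1GB box also running Apache and PHP; measure before and after):

        # /etc/my.cnf (or /etc/mysql/my.cnf on Debian/Ubuntu)
        [mysqld]
        innodb_buffer_pool_size = 512M      # the stock default is tiny; leave headroom for the rest of LAMP
        innodb_flush_method     = O_DIRECT  # skip double-buffering through the OS page cache

    After a mysqld restart, comparing SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads' (actual disk reads) against 'Innodb_buffer_pool_read_requests' over time shows whether the hit rate improved; unindexed queries doing full table scans produce the same symptom, so the slow query log is the other place to look.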

    Read the article

  • The Koyal Group Info Mag News | Charged building material could make the renewable grid a reality

    - by Chyler Tilton
    What if your cell phone didn’t come with a battery? Imagine, instead, if the material from which your phone was built was a battery. The promise of strong load-bearing materials that can also work as batteries represents something of a holy grail for engineers. And in a letter published online in Nano Letters last week, a team of researchers from Vanderbilt University describes what it says is a breakthrough in turning that dream into an electrocharged reality. The researchers etched nanopores into silicon layers, which were infused with a polyethylene oxide-ionic liquid composite and coated with an atomically thin layer of carbon. In doing so, they created small but strong supercapacitor battery systems, which stored electricity in a solid electrolyte, instead of using corrosive chemical liquids found in traditional batteries. These supercapacitors could store and release about 98 percent of the energy that was used to charge them, and they held onto their charges even as they were squashed and stretched at pressures up to 44 pounds per square inch. Small pieces of them were even strong enough to hang a laptop from—a big, fat Dell, no less. Although the supercapacitors resemble small charcoal wafers, they could theoretically be molded into just about any shape, including a cell phone’s casing or the chassis of a sedan. They could also be charged—and evacuated of their charge—in less time than is the case for traditional batteries. “We’ve demonstrated, for the first time, the simple proof-of-concept that this can be done,” says Cary Pint, an assistant professor in the university’s mechanical engineering department and one of the authors of the new paper. “Now we can extend this to all kinds of different materials systems to make practical composites with materials specifically tailored to a host of different types of applications. We see this as being just the tip of a very massive iceberg.” Pint says potential applications for such materials would go well beyond “neat tech gadgets,” eventually becoming a “transformational technology” in everything from rocket ships to sedans to home building materials. “These types of systems could range in size from electric powered aircraft all the way down to little tiny flying robots, where adding an extra on-board battery inhibits the potential capability of the system,” Pint says. And they could help the world shift to the intermittencies of renewable energy power grids, where powerful batteries are needed to help keep the lights on when the sun is down or when the wind is not blowing. “Using the materials that make up a home as the native platform for energy storage to complement intermittent resources could also open the door to improve the prospects for solar energy on the U.S. grid,” Pint says. “I personally believe that these types of multifunctional materials are critical to a sustainable electric grid system that integrates solar energy as a key power source.”

    Read the article

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this.

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem:  2042776 total, 1347916 used,  694860 free,  249396 buffers
        KiB Swap: 3976080 total,   30552 used, 3945528 free,  574164 cached

          PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
        17445 bind  20   0  244m  42m 3124 S  99.4  2.2  2345:03 named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
        65869 QUERY
        ++ Incoming Queries ++
        31809 A
        241 NS
        3 CNAME
        27455 SOA
        276 PTR
        123 MX
        462 TXT
        5400 AAAA
        7 A6
        1 DS
        14 DNSKEY
        15 SPF
        55 AXFR
        8 ANY
        ++ Outgoing Queries ++
        [View: internal]
        22206 A
        509 NS
        10 SOA
        25 PTR
        12 MX
        524 TXT
        4851 AAAA
        62 DNSKEY
        19 SPF
        3157 DLV
        [View: external]
        87 A
        2 NS
        80 AAAA
        120 DNSKEY
        7 DLV
        [View: _bind]
        ++ Name Server Statistics ++
        65869 IPv4 requests received
        27670 requests with EDNS(0) received
        112 TCP requests received
        65652 responses sent
        20 truncated responses sent
        27670 responses with EDNS(0) sent
        62920 queries resulted in successful answer
        37117 queries resulted in authoritative answer
        28482 queries resulted in non authoritative answer
        7 queries resulted in referral answer
        591 queries resulted in nxrrset
        53 queries resulted in SERVFAIL
        2081 queries resulted in NXDOMAIN
        14530 queries caused recursion
        162 duplicate queries received
        55 requested transfers completed
        ++ Zone Maintenance Statistics ++
        109536 IPv4 notifies sent
        ++ Resolver Statistics ++
        [Common]
        [View: internal]
        29362 IPv4 queries sent
        2013 IPv6 queries sent
        28531 IPv4 responses received
        4209 NXDOMAIN received
        6 SERVFAIL received
        31 FORMERR received
        32 EDNS(0) query failures
        3359 query retries
        836 query timeouts
        5348 IPv4 NS address fetches
        3271 IPv6 NS address fetches
        83 IPv4 NS address fetch failed
        2779 IPv6 NS address fetch failed
        17421 DNSSEC validation attempted
        12731 DNSSEC validation succeeded
        4690 DNSSEC NX validation succeeded
        21104 queries with RTT 10-100ms
        7418 queries with RTT 100-500ms
        3 queries with RTT 500-800ms
        1 queries with RTT 800-1600ms
        [View: external]
        192 IPv4 queries sent
        104 IPv6 queries sent
        192 IPv4 responses received
        2 NXDOMAIN received
        104 query retries
        44 IPv4 NS address fetches
        44 IPv6 NS address fetches
        1 IPv4 NS address fetch failed
        1 IPv6 NS address fetch failed
        4 DNSSEC validation attempted
        3 DNSSEC validation succeeded
        1 DNSSEC NX validation succeeded
        152 queries with RTT 10-100ms
        40 queries with RTT 100-500ms
        [View: _bind]
        ++ Cache DB RRsets ++
        [View: internal (Cache: internal)]
        2007 A
        652 NS
        131 CNAME
        1 MX
        32 TXT
        421 AAAA
        28 DS
        244 RRSIG
        110 NSEC
        3 DNSKEY
        2 !A
        2 !TXT
        89 !AAAA
        2 !SPF
        14 !DLV
        148 NXDOMAIN
        [View: external (Cache: external)]
        55 A
        12 NS
        34 AAAA
        2 DS
        10 RRSIG
        1 DNSKEY
        [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
        82958 UDP/IPv4 sockets opened
        2118 UDP/IPv6 sockets opened
        4 TCP/IPv4 sockets opened
        1 TCP/IPv6 sockets opened
        82956 UDP/IPv4 sockets closed
        2117 UDP/IPv6 sockets closed
        58 TCP/IPv4 sockets closed
        15 UDP/IPv4 socket bind failures
        2117 UDP/IPv6 socket connect failures
        29554 UDP/IPv4 connections established
        59 TCP/IPv4 connections accepted
        2117 UDP/IPv6 send errors
        5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)
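    A couple of standard BIND knobs help narrow down what named is busy with; a sketch of a debugging session (stock rndc subcommands in 9.8):

        rndc querylog              # toggle query logging to syslog; watch for floods or loops
        rndc trace 3               # raise the debug level; output goes to named.run
        rndc recursing             # dump in-flight recursive queries to named.recursing
        top -H -p $(pidof named)   # per-thread CPU, to see whether one worker is spinning

    One number in the dump above stands out: 109536 IPv4 notifies sent against only ~66k incoming requests. If two servers are notifying each other in a loop, or a zone serial keeps bumping, that alone can keep named busy, so the zone-transfer/notify configuration is worth a look.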

    Read the article

  • Active directory integration not working properly with winbind and samba

    - by tubaguy50035
    I'm trying to get my Linux box to use Active Directory authentication. I believe I have almost everything set up correctly. I'm able to issue wbinfo -g and wbinfo -u and see all the groups and users respectively. Brief intro to my setup: the username I use on my Linux box to do admin things is nick. My Active Directory username is nwalke. They have two different passwords. I am able to log in to the box as nick with that user's password, and I'm also able to log in as nwalke with nwalke's password. The curious bit: upon creating the Active Directory user's home directory, I run a script that requires root access. This is to set up some system-wide things, like a Samba share for them. When I log in as nwalke, I enter my nwalke password and it succeeds. I'm then greeted with [sudo] password for nick:. If I enter my nwalke password here, it says Sorry, try again. If I enter nick's password, it says Sorry, user nick is not allowed to execute scriptname as root. If I do groups as nwalke, I see that magically my user has been given the group nick. Now, I accidentally thought that nick had a UID of 100, not 1000. So originally in my smb.conf I had idmap uid = 1000-10000. The only thing I can think of is that I logged in as nwalke while that was still set, and now I'm just being presented with a UID of 1000, forcing Linux to think I'm nick. I'm not really sure where to go from here. Like I said, I'm fairly certain Active Directory is communicating with my server properly, but something must not be mapped right on the Linux side. Any thoughts? Here is my smb.conf:

        [global]
        security = ads
        netbios name = hostname
        realm = COMPANY.COM
        password server = adshost.company.com
        workgroup = COMPANY
        idmap uid = 10000-90000
        idmap gid = 10000-90000
        winbind separator = +
        winbind enum users = no
        winbind enum groups = no
        winbind use default domain = yes
        template homedir = /home/%D/%U
        template shell = /bin/bash
        client use spnego = yes
        domain master = no
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes

    Let me know if more information about something is required.
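    If a stale mapping from the old 1000-10000 range is the culprit, winbind will keep serving it from its idmap cache until that cache is cleared. A sketch of flushing it (the paths are the usual Debian/Ubuntu ones and vary by distro; note that clearing the idmap tdb hands out fresh UIDs, so file ownership may need fixing afterwards):

        service winbind stop
        net cache flush
        rm /var/lib/samba/winbindd_cache.tdb
        rm /var/lib/samba/winbindd_idmap.tdb   # the persistent SID<->UID map
        service winbind start
        wbinfo -i nwalke                       # verify the new UID falls inside 10000-90000

    Running id nwalke afterwards should no longer show the borrowed nick group.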

    Read the article

  • Impossible to create/format/install Windows 7 on an unpartitioned HD

    - by fra pet
    Hi guys, I'm going nuts reinstalling Windows 7 on one of those Acer Aspire all-in-ones. The original OS (Windows 7 Professional x64) was not starting; after the initial screen the BIOS was prompted. So:

    Step 1: I tried to access the recovery partition and reinstall everything, but could not get to that point.
    Step 2: I set the BIOS to native IDE and tried a clean installation from my original copy of Windows 7 Professional, but it did not allow me to format or create partitions from the installer's partitioning screen.
    Step 3: my bad: I tried to install Ubuntu and wiped the whole hard drive; I was getting an error at some point during installation, so I decided to go back to Windows.
    Step 4: Windows 7 again. At the disk screen of the Windows installation I opened the command prompt and worked with DISKPART. I listed the disks and the HD was disk 0; I selected disk 0; CLEAN on disk 0 succeeded; CREATE PARTITION PRIMARY failed with "cache corrupt" and "disk not up-to-date" errors (after trying to create a partition, disk 0 disappears from LIST DISK and I have to restart before it is listed again; RESCAN did not help). CLEAN ALL (2 hours) succeeded, but creating a primary partition failed again with the same errors. I then tried to install my old copy of Windows XP Pro and it seemed to work: it created a partition, formatted it (only quick format worked; slow mode was at 0% after an hour so I stopped it), started installing, and at around 90% said it could not copy a file and stopped. Back in Windows 7 again, the installer says the hard drive has 490+ GB unpartitioned, but it won't create a partition or format. I also thought I might have messed up the MBR when I installed Ubuntu, so I followed all of the instructions at http://www.sevenforums.com/tutorials/20864-mbr-restore-windows-7-master-boot-record.html and http://support.microsoft.com/kb/927392. The errors were, for bootsect: "the system partition was not found" and "Data error (cyclic redundancy check)"; for bootrec /FixMbr: "A device attached to the system is not functioning". It did not work, and I still cannot partition/format/install on a blank HD. I also tried some bootable disk-wiping tools and they loop forever on the same errors. The BIOS settings are: SATA: native IDE (if I set AHCI or similar, it does not load the HD and the DVD); quick boot/quiet boot: disabled. Are there any other options or tools I can try before I replace the HD? That is my last option. Thanks to everyone.
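    For the record, here is the DISKPART sequence with one extra step worth trying before giving up on the drive: clearing the read-only attribute, which can survive a botched install (run from the installer's Shift+F10 prompt; these are all standard DISKPART commands):

        diskpart
        list disk
        select disk 0
        attributes disk clear readonly
        clean
        create partition primary
        format fs=ntfs quick
        active
        exit

    That said, a "Data error (cyclic redundancy check)" during bootsect is usually the disk itself reporting unreadable sectors, so if the sequence above still fails, the manufacturer's surface-test utility (or the drive's SMART data) would likely confirm a failing drive.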

    Read the article

  • .dll Solidworks Add-in not registering in COM

    - by Abhijit
    I am trying to register this .dll in COM as an add-in to the SolidWorks software. The DLL builds without any errors or warnings, but the add-in is not appearing in the Windows Registry Editor as it should. Kindly suggest a solution. Thanks in advance. Below is my code (note the registry path needs a backslash before the GUID, i.e. AddIns\{0:b}, which the original was missing):

        using System;
        using System.Collections;
        using System.Reflection;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using SolidWorks.Interop.sldworks;
        using SolidWorks.Interop.swcommands;
        using SolidWorks.Interop.swconst;
        using SolidWorks.Interop.swpublished;
        using SolidWorksTools;
        using SolidWorksTools.File;
        using System.Runtime.InteropServices;
        using System.Diagnostics;

        namespace SWADDIN_Test
        {
            [ComVisible(true)]
            [Guid("C380F7A6-771A-41EE-807A-1689C8E97720")]
            [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
            interface ISWIntegration
            {
                void DoSWIntegration();
            } // end of dummy interface ISWIntegration

            [Guid("5EE80911-9567-4734-8E55-C347EA4635B5")]
            [ClassInterface(ClassInterfaceType.None)]
            [ProgId("SWADDIN_Test.SWIntegration")]
            [ComVisible(true)]
            public class SWIntegration : ISwAddin, ISWIntegration
            {
                public SldWorks mSWApplication;
                private int mSWCookie;

                public SWIntegration()
                {
                    mSWApplication = null;
                    mSWCookie = 0;
                } // end of parameterless constructor

                public void DoSWIntegration()
                {
                } // end of dummy method DoSWIntegration

                public bool ConnectToSW(object ThisSW, int Cookie)
                {
                    mSWApplication = (SldWorks)ThisSW;
                    mSWCookie = Cookie;
                    // Set up add-in callback info
                    bool result = mSWApplication.SetAddinCallbackInfo(0, this, Cookie);
                    this.UISetup();
                    return true;
                } // end of method ConnectToSW()

                public bool DisconnectFromSW()
                {
                    return UITeardown();
                } // end of method DisconnectFromSW()

                public void UISetup()
                {
                } // end of method UISetup()

                public bool UITeardown()
                {
                    return true;
                } // end of method UITeardown()

                [ComRegisterFunction()]
                private static void ComRegister(Type t)
                {
                    string keyPath = String.Format(@"SOFTWARE\SolidWorks\AddIns\{0:b}", t.GUID);
                    using (Microsoft.Win32.RegistryKey rk =
                           Microsoft.Win32.Registry.LocalMachine.CreateSubKey(keyPath))
                    {
                        rk.SetValue(null, 1);                  // load at startup
                        rk.SetValue("Title", "Abhijit_SwAddin");
                        rk.SetValue("Description", "All your pixels now belong to us");
                    } // end of using statement
                } // end of method ComRegister()

                [ComUnregisterFunction()]
                private static void ComUnregister(Type t)
                {
                    string keyPath = String.Format(@"SOFTWARE\SolidWorks\AddIns\{0:b}", t.GUID);
                    Microsoft.Win32.Registry.LocalMachine.DeleteSubKeyTree(keyPath);
                } // end of method ComUnregister()
            } // end of class SWIntegration
        } // end of namespace SWADDIN_Test
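    One thing to check: the [ComRegisterFunction] only runs when the assembly is actually registered for COM, which building alone does not do unless "Register for COM interop" is enabled in the project (and Visual Studio runs elevated). Registering by hand with the regasm that matches the SolidWorks bitness is the reliable test; a sketch with assumed paths:

        rem Run from an elevated prompt; use Framework64 for 64-bit SolidWorks
        rem (or Framework\v2.0.50727 for older .NET targets).
        rem /codebase records the DLL's location so COM can find it outside the GAC.
        C:\Windows\Microsoft.NET\Framework64\v4.0.30319\regasm.exe /codebase SWADDIN_Test.dll

    If the add-in still does not appear, note that on 64-bit Windows a 32-bit registration lands under HKLM\SOFTWARE\Wow6432Node\..., which is another common reason the expected key "isn't there" in Registry Editor.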

    Read the article

  • ffmpeg not using all cores

    - by user2783132
    I just got my new server with two Intel E5-2695s, but I was shocked to see that ffmpeg (or Ubuntu) doesn't utilize all cores. Here is top while ffmpeg was running:

        top - 23:35:25 up 2:41, 2 users, load average: 5.35, 4.37, 3.12
        Tasks: 333 total, 2 running, 331 sleeping, 0 stopped, 0 zombie
        %Cpu0  :  0.0 us,  1.0 sy, 35.6 ni, 63.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu1  :  0.0 us,  0.7 sy, 35.5 ni, 63.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu2  :  0.0 us,  0.7 sy, 33.4 ni, 65.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu3  :  0.0 us,  0.0 sy, 32.7 ni, 67.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu4  :  0.0 us,  0.3 sy, 32.3 ni, 67.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu5  :  0.0 us,  0.3 sy, 33.0 ni, 66.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu6  :  0.0 us,  0.0 sy, 32.6 ni, 67.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu7  :  0.0 us,  0.3 sy, 32.7 ni, 67.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu8  :  0.0 us,  0.7 sy, 32.6 ni, 66.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu9  :  0.0 us,  0.3 sy, 33.9 ni, 65.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu10 :  0.0 us,  0.0 sy, 35.0 ni, 65.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu11 :  0.0 us,  0.7 sy, 30.0 ni, 69.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu12 : 21.1 us,  0.0 sy,  0.0 ni, 78.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu13 :  0.7 us,  0.0 sy,  4.3 ni, 95.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu14 :  0.3 us,  0.0 sy,  5.0 ni, 94.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu15 : 24.9 us,  0.0 sy,  0.0 ni, 75.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu16 :  0.3 us,  0.0 sy,  3.7 ni, 96.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu17 :  0.7 us,  0.3 sy,  4.9 ni, 94.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu18 :  1.0 us,  0.0 sy,  4.6 ni, 94.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu19 :  0.7 us,  0.0 sy,  4.7 ni, 94.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu20 : 11.1 us,  0.0 sy,  0.0 ni, 88.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu21 :  1.3 us,  0.0 sy,  4.6 ni, 94.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu22 :  2.0 us,  0.3 sy,  4.3 ni, 93.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu23 : 96.7 us,  1.0 sy,  0.0 ni,  2.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu24 :  0.0 us,  0.0 sy,  0.7 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu25 :  0.0 us,  0.0 sy,  3.0 ni, 97.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu26 :  0.0 us,  0.0 sy,  1.3 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu27 :  0.0 us,  0.0 sy,  4.0 ni, 96.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu28 :  0.0 us,  0.0 sy,  1.7 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu29 :  0.0 us,  0.0 sy,  1.7 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu30 :  0.0 us,  0.0 sy,  1.7 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu31 :  0.0 us,  0.0 sy,  1.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu32 :  0.0 us,  0.0 sy,  0.7 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu33 :  0.0 us,  0.0 sy,  1.7 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu34 :  0.0 us,  0.0 sy,  2.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu35 :  0.0 us,  0.0 sy,  1.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        %Cpu36 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

    ffmpeg was run with -threads 0. I also tried running ffmpeg with -threads 500; no difference.
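    Two facts worth knowing here: -threads 0 just means "let the encoder pick", and how far that scales depends entirely on the codec. Some encoders and most filters are effectively single-threaded, which matches the picture above of one hot core plus a dozen lightly-used ones. A sketch of making the threading explicit for an x264 encode (filenames and preset are made up):

        # -threads placed after -i applies to the output encoder; libx264 scales
        # well to roughly 1.5x the core count per job, but not to 36+ threads.
        ffmpeg -i input.mkv -c:v libx264 -preset fast -threads 16 output.mp4

    On a machine this wide, running several independent ffmpeg jobs in parallel (one per input file) usually beats trying to make a single encode saturate every core.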

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons being given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read. So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are: (1) Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile? (2) Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile. (3) Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • HP F2180 driver installation fails on 64-bit Windows 7

    - by Noam Gal
    Hello; I am trying to install the HP Deskjet AIO (non-network) driver on my machine, which is running the 64-bit version of Windows 7. Before installing it, Windows detected my printer just fine, but I wanted to use the HP scanning application because it allows me to scan several photos at once. I ran the DJ_AIO_NonNetwork_ENU_NB file I got from their site, and the installation went almost without a problem. However, at the part where it should have detected the printer, it didn't, so I skipped it, telling the installer I'd connect the printer later. After it was finished I was able to use it regularly, and also to scan using the HP application. However, the installer kept popping up at random intervals, giving me an error message. Yesterday I tried removing all the installed HP applications and installing from scratch. Running the same installer setup, it now insists that it does not support my operating system, and that 64-bit Vista is the highest it can go. I just don't understand why this is occurring all of a sudden. Has anybody here successfully installed the AIO driver on the 64-bit version of Windows 7? UPDATE: I've been chatting with HP support over the weekend and managed to really mess up my Windows. At first, they told me to uninstall using an "uninstall_l3" batch file inside their installer package, and then reinstall. That didn't work, and the "l4" batch didn't make any difference either. Afterwards I was told to install "Windows Install Clean Up" and remove many HP entries (most of which were not listed on my computer), and I also removed many other HP entries I came across. Then my Office 2007 started failing. I searched around the web and ran a system restore, so now my Office works, but my Windows Explorer is all buggy: it hangs while trying to load my hard drives, or completely ignores them and just shows my libraries. Does anyone here have any idea how I can restore my Win7 to normal, with or without the annoying scanner? UPDATE 2: OK, Explorer is back to normal. I guess I just had to wait until it finished searching when opening Windows Explorer for the first time after the restore. The scanner is still not working though.

    Read the article

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both Apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said Apache was not running. So I started Apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding. From /var/log/unattended-upgrades/unattended-upgrades.log:

        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2
            apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport
            apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv
            isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4
            libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls
            libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1
            libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27
            libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8
            libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual
            linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules
            python-apport python-crypto python-keyring python-problem-report
            python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows. /etc/apt/apt.conf.d/50unattended-upgrades:

        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        //  "${distro_id}:${distro_codename}-updates";
        //  "${distro_id}:${distro_codename}-proposed";
        //  "${distro_id}:${distro_codename}-backports";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

    /etc/apt/apt.conf.d/20auto-upgrades:

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade. Is this expected behaviour, or should unattended-upgrades restart Apache after upgrading it? What can I do to ensure Apache is restarted correctly? Should I just blacklist the apache2 package?
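    On the blacklist question: the stanza already present in 50unattended-upgrades is the supported way to keep Apache out of automatic upgrades (its security updates then have to be applied by hand); a sketch using the package names from the log above:

        // /etc/apt/apt.conf.d/50unattended-upgrades
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
            "apache2-mpm-prefork";
            "apache2.2-bin";
            "apache2.2-common";
        };

    Normally the apache2 postinst does restart the service, but this run also upgraded libc6, after which services can be left needing a restart; a simple alternative to blacklisting is a post-upgrade check, e.g. a cron job shortly after the upgrade window that runs "service apache2 status || service apache2 start".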

    Read the article

  • Download/upload is too slow on CentOS

    - by Mehdi
    My download/upload speed to and from the server is far too slow (around 50 KB/s!). Did I miss some configuration? Some information:

        CentOS release 6.3

        # uptime
        load average: 0.17, 0.32, 0.37

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:         24009      21988       2021          0        806      18098
        -/+ buffers/cache:       3083      20926
        Swap:         4095         28       4067

        # lshw -C network
        *-network
             description: Ethernet interface
             product: 82574L Gigabit Network Connection
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 00
             serial: 00:25:90:70:17:4a
             size: 100MB/s
             capacity: 1GB/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=off broadcast=yes driver=e1000e driverversion=1.9.5-k duplex=full firmware=2.1-2 ip=108.175.8.123 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
             resources: irq:16 memory:fb900000-fb91ffff ioport:e000(size=32) memory:fb920000-fb923fff

        # ethtool eth0
        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes:  10baseT/Half 10baseT/Full
                                   100baseT/Half 100baseT/Full
                                   1000baseT/Full
            Supports auto-negotiation: Yes
            Advertised link modes: Not reported
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 100Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 1
            Transceiver: internal
            Auto-negotiation: off
            MDI-X: off
            Supports Wake-on: pumbg
            Wake-on: g
            Current message level: 0x00000001 (1)
            Link detected: yes

        # dmesg | grep e1000e
        e1000e: Intel(R) PRO/1000 Network Driver - 1.9.5-k
        e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
        e1000e 0000:02:00.0: Disabling ASPM L0s
        e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        e1000e 0000:02:00.0: setting latency timer to 64
        e1000e 0000:02:00.0: irq 33 for MSI/MSI-X
        e1000e 0000:02:00.0: irq 34 for MSI/MSI-X
        e1000e 0000:02:00.0: irq 35 for MSI/MSI-X
        e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:25:90:70:17:4a
        e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection
        e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
        (the "Link is Up 100 Mbps Full Duplex" / "10/100 speed: disabling TSO" pair repeats many more times)
        e1000e 0000:02:00.0: eth0: Unsupported Speed/Duplex configuration
        e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: Disabling ASPM L1
        e1000e 0000:02:00.0: eth0: changing MTU from 1500 to 9000
        e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
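    The output above shows the most likely root of the problem: the NIC is gigabit-capable but the link is forced (autonegotiation off) and running at 100 Mb/s, and the log even shows one drop to 10 Mb/s plus an "Unsupported Speed/Duplex configuration" complaint. A forced-speed port talking to an autonegotiating switch produces a duplex mismatch, and 50 KB/s with heavy packet loss is exactly what that looks like. A sketch of the usual fix (assuming the switch port is left on auto):

        # re-enable autonegotiation so the link can come up at 1000/full
        ethtool -s eth0 autoneg on
        # after the link bounces, confirm what was negotiated
        ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'

    To make it persist across reboots on CentOS 6, the same setting goes in /etc/sysconfig/network-scripts/ifcfg-eth0 as ETHTOOL_OPTS="autoneg on".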

    Read the article

  • TeamSpeak 3 Disconnects

    - by ArchUser
    I've recently had a few random TS3 mass disconnects, and I'm curious to know where I may find any applications that can help me determine the cause of these disconnections, as we plan on having many more users in the future. I run an almost empty VPS (OpenVZ) server with an Arch Linux template on it. I have 1.5/2GB of RAM, 2GHz of CPU and plenty of hard drive space to run, for the most part, just my TS3 and a low-traffic Apache web server. This is what I am investigating:

        2011-02-04 06:07:05.130343|INFO |VirtualServer | 1| client disconnected 'Valamoor'(id:224) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.131338|INFO |VirtualServer | 1| client disconnected 'Kevrow'(id:198) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.191849|INFO |VirtualServer | 1| client disconnected 'scuba'(id:200) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.192633|INFO |VirtualServer | 1| client disconnected '[Ash] Setna'(id:75) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.193350|INFO |VirtualServer | 1| client disconnected 'Akiris'(id:254) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.194047|INFO |VirtualServer | 1| client disconnected 'Marcus'(id:258) reason 'reasonmsg=connection lost'
        2011-02-04 06:07:05.194726|INFO |VirtualServer | 1| client disconnected 'Guthry'(id:275) reason 'reasonmsg=connection lost'
        2011-02-04 07:18:50.327071|INFO |VirtualServer | 1| client disconnected 'Valamoor'(id:224) reason 'reasonmsg=connection lost'
        2011-02-04 07:18:51.339018|INFO |VirtualServer | 1| client disconnected 'Marcus'(id:258) reason 'reasonmsg=connection lost'
        2011-02-04 07:18:51.339870|INFO |VirtualServer | 1| client disconnected '[Ash] Setna'(id:75) reason 'reasonmsg=connection lost'
        2011-02-04 07:18:51.340515|INFO |VirtualServer | 1| client disconnected 'Guthry'(id:275) reason 'reasonmsg=connection lost'
        2011-02-05 04:55:20.797353|INFO |VirtualServer | 1| client disconnected 'JohnyRingo'(id:240) reason 'reasonmsg=connection lost'
        2011-02-05 04:55:20.798517|INFO |VirtualServer | 1| client disconnected 'Maloo roots'(id:196) reason 'reasonmsg=connection lost'
        2011-02-05 04:55:20.799314|INFO |VirtualServer | 1| client disconnected 'Cpt dravyn'(id:234) reason 'reasonmsg=connection lost'
        2011-02-05 04:55:20.839254|INFO |VirtualServer | 1| client disconnected 'scuba'(id:200) reason 'reasonmsg=connection lost'
        etc...

    I need to determine whether it is my hosting provider or my server, and what tools I can use to determine the issues. My VPS host has told me this: "I checked out the node that your VPS runs on and there is no abnormal system load, or I/O wait from the drive. I also checked the bandwidth history from the server and there have been no spikes or outages."
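    Since whole groups of clients drop at the same instant, the quickest way to split "my server" from "the network" is to log packet loss continuously and line the spikes up against the disconnect timestamps. A minimal sketch to run from cron every minute (the probe target is an assumption: the default gateway tests the hosting side, an external IP tests the wider path):

        #!/bin/bash
        # Append a timestamped packet-loss sample to a log.
        TARGET=$(ip route | awk '/^default/ {print $3; exit}')
        LOSS=$(ping -c 10 -i 0.2 "$TARGET" | awk -F'[ %]' '/packet loss/ {print $6}')
        echo "$(date '+%F %T') target=$TARGET loss=${LOSS}%" >> /var/log/pingwatch.log

    If the log shows loss at the exact moments TS3 reports 'connection lost', the problem is upstream of the VPS; if it stays clean, the OpenVZ container itself (e.g. UDP buffer limits or a noisy neighbour) becomes the suspect.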

    Read the article

  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    I get this error when I start nginx with the Passenger module configured and the Puppet master set up to run through Rack:

        [anadi@bangda ~]# tail -f /var/log/nginx/error.log
        [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]:
        *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner
        (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>):
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from config.ru:13
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
            from config.ru:1:in `new'
            from config.ru:1

    Here is the config.ru:

        [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
        # a config.ru, for use with every rack-compatible webserver.
        # SSL needs to be handled outside this, though.

        # if puppet is not in your RUBYLIB:
        #$:.unshift('/usr/share/puppet/lib')

        $0 = "master"

        # if you want debugging:
        # ARGV << "--debug"

        ARGV << "--rack"
        require 'puppet/application/master'

        # we're usually running inside a Rack::Builder.new {} block,
        # therefore we need to call run *here*.
        run Puppet::Application[:master].run

    And the nginx configuration for the Puppet master is as follows:

        [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
        server {
            listen 8140 ssl;
            server_name bangda.mycompany.com;

            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;

            access_log /var/log/nginx/puppet/master.access.log;
            error_log /var/log/nginx/puppet/master.error.log;

            root /etc/puppet/rack/public;

            ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    However, when I run Puppet through the usual puppetmasterd daemon it works perfectly, with no errors. Somehow the nginx+Passenger+Rack setup fails to initialize, while the same works when running the native puppetmaster daemon. Is there any configuration that I am missing?
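    The LoadError says the Ruby that Passenger spawns can't find puppet/application/master on its load path, which is exactly what the commented-out line at the top of the config.ru is for. A sketch of the usual fix (the path is the common package location; verify where your Puppet install actually put its lib directory):

        # config.ru -- uncomment and adjust:
        $:.unshift('/usr/share/puppet/lib')

    It's also worth confirming that the Ruby Passenger was compiled against is the same one puppetmasterd runs under (compare the passenger_ruby setting in nginx.conf with the interpreter line of the puppetmasterd script); if Puppet was installed under one Ruby and Passenger uses another, the require fails in just this way.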

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests to continue to map to 1.2.3.4, so I'm trying to configure a view in BIND. Unfortunately, BIND fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is.

        options {
            directory "/var/cache/bind";
            forwarders { 8.8.8.8; 8.8.4.4; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

        zone "." { type hint; file "/etc/bind/db.root"; };
        zone "localhost" { type master; file "/etc/bind/db.local"; };
        zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
        zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
        zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };

        view "internal" {
            zone "example.com" {
                type master;
                notify no;
                file "/etc/bind/db.example.com";
            };
        };

        zone "example.corp" {
            type master;
            file "/etc/bind/db.example.corp";
        };

        zone "100.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
        };

    I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?
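    One BIND rule matches this symptom exactly: as soon as a single view is declared, every zone (including the root hint and the localhost/reverse boilerplate zones) must live inside a view, so the zones left at file scope above make the reload fail. A sketch of the minimal restructuring (the match-clients networks are assumptions):

        view "internal" {
            match-clients { 127.0.0.0/8; 192.168.100.0/24; };
            zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; };
            zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };
            zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; };
            zone "." { type hint; file "/etc/bind/db.root"; };
            zone "localhost" { type master; file "/etc/bind/db.local"; };
            // ...the remaining reverse zones move in here the same way
        };

    Running named-checkconf before a reload prints the actual parse error ("when using 'view' statements, all zones must be in views"), which is quicker than watching the daemon fail.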

    Read the article

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools; the config for each looks like this ([www2] and [www3] are identical apart from the pool name and socket path, /tmp/php-fpm2.sock and /tmp/php-fpm3.sock):

        ;;;;;;;;;;;;;;;;;;;;;;;
        ; Pool 1              ;
        ;;;;;;;;;;;;;;;;;;;;;;;
        [www1]
        user = www
        group = www
        listen = /tmp/php-fpm1.sock
        listen.backlog = -1
        listen.owner = www
        listen.group = www
        listen.mode = 0666
        pm = dynamic
        pm.max_children = 40
        pm.start_servers = 6
        pm.min_spare_servers = 6
        pm.max_spare_servers = 12
        pm.max_requests = 250
        slowlog = /var/log/php/$pool.log.slow
        request_slowlog_timeout = 5s
        request_terminate_timeout = 120s
        rlimit_files = 131072

    I calculated the pm.max_children processes according to some example calculations on the web, like 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP; according to the calculations, there are 40 child processes per socket, and I have a total of 3 sockets in my nginx and FPM configuration. My question is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf hog and siege, but I could not calculate the accurate memory usage by all the PHP processes (other processes like MySQL and nginx were also running), and all the sockets were in use. So I seek guidance from anyone who has done this before or knows how exactly pm.max_children works. Since I have 3 pools/sockets with 40 child processes each, does that amount to 3 x 40 x 40 MB of memory usage? Or is it 40 max child processes sharing the 3 sockets (with total memory usage of just 40 x 40 MB)?
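    To the core question: pm.max_children is per pool, not global, so this configuration can reach 3 x 40 = 120 children in the worst case, and the memory budget is pools x max_children x average-per-child. The per-child number is best measured rather than assumed; a sketch under load (the process name may be php-fpm or php5-fpm depending on the build):

        # average resident size of the FPM children, in MB
        ps --no-headers -o rss -C php-fpm \
          | awk '{sum+=$1; n++} END {printf "%d procs, avg %.1f MB each\n", n, sum/n/1024}'

    RSS counts shared pages (opcode caches, shared libraries) once per process, so the true marginal cost per child is somewhat lower, but as a sizing ceiling, 120 x the measured average is the safe figure against the 4 GB budget.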

    Read the article

  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    Recently, one of our servers has been going down network-wise while keeping its uptime (so the server doesn't appear to be losing power). I asked my hosting company to investigate and was told that Apache and MySQL were using 80% of the memory at all times, peaking at 95%, and that I might need to add more RAM to the server. One of their justifications for adding more RAM was that I was using the default max-connections settings (125 for MySQL and 150 for Apache), and that to handle those 150 simultaneous connections I would need at least 3 GB of memory instead of the 1 GB I have at the moment.

    Now, I understand that tweaking the max connections might be better than leaving the defaults, although I didn't feel it was a concern at the moment: I've had servers with the same configuration handle more traffic than the current 1 or 2 visitors before the launch, and I told myself I'd tune it later based on visit patterns. I've also always known Apache to be more memory-hungry under default settings than competitors such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          1000        944         56          0        148        725
        -/+ buffers/cache:         71        929
        Swap:         1953          0       1953

    which I guess means that yes, the server is reserving around 95% of its memory at the moment, but I also thought the buffers/cache row meant that only 71 of the 1000 MB total were really used by applications. Also, I don't see any swapping:

        # vmstat 60
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0  57612 151704 742596    0    0     1     1    3   11  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     1    1   24  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     2    1   18  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     0    1   13  0  0 100  0

    And finally, while requesting a page:

        top - 08:33:19 up 3 days, 13:11,  2 users,  load average: 0.06, 0.02, 0.00
        Tasks:  81 total,   1 running,  80 sleeping,   0 stopped,   0 zombie
        Cpu(s):  1.3%us,  0.3%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   1024616k total,   976744k used,    47872k free,   151716k buffers
        Swap:  2000052k total,        0k used,  2000052k free,   742596k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        24914 www-data  20   0 26296 8640 3724 S    2  0.8   0:00.06 apache2
        23785 mysql     20   0  125m  18m 5268 S    1  1.9   0:04.54 mysqld
        24491 www-data  20   0 25828 7488 3180 S    1  0.7   0:00.02 apache2
            1 root      20   0  2844 1688  544 S    0  0.2   0:01.30 init
        ...

    So, I'd like to know, experts of Server Fault: do I really need more RAM at the moment? How do they calculate that I'd need 3 GB for 150 simultaneous connections? Thanks for your help!
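    On the second question, a sketch of the arithmetic the host may have used (the per-child figure is an assumption; they didn't show their work): worst-case memory is MaxClients times the per-child resident size, plus MySQL and the operating system.

        # Hypothetical reconstruction of the host's 3 GB estimate (figures assumed).
        max_clients = 150       # Apache's default MaxClients, as quoted
        per_child_mb = 20       # assumed per-child RSS; the top output above
                                # shows ~8 MB resident per apache2 process
        mysql_mb = 125          # mysqld's VIRT in the top output (RES is ~18 MB)
        os_misc_mb = 100        # assumed headroom for the OS and other daemons

        worst_case_mb = max_clients * per_child_mb + mysql_mb + os_misc_mb
        print(f"worst case: ~{worst_case_mb} MB")   # ~3225 MB, in the 3 GB ballpark

    The free reading in the question is right, though: the "-/+ buffers/cache" row shows roughly 71 MB genuinely used by applications, and the rest is page cache the kernel gives back under pressure, so the 95% figure on its own doesn't prove a shortage.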

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal.

    Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions:

    1. When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X" or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

    2. What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that gives me a color-coded indicator of how Apache is performing and then shows me a list of the processes or pages that are sucking right now. That way, I'd know when performance is bad and what's causing it to be so bad.

    3. Why does PHP memory matter? My PHP apparently has a 30 MB memory footprint. Will the site run faster if I bring that number down?

    Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo, but I'm now resigned to the fact that I might not have a choice.
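    On the first two questions, a couple of hedged pointers: after a crash, Apache's own error log (on Ubuntu, typically /var/log/apache2/error.log) usually records the last thing that went wrong, and out-of-memory kills show up in syslog or kern.log. As a gentler alternative to raw top, here is a small sketch (assuming a Linux host with the standard procps tools) that lists the ten processes using the most memory:

        #!/usr/bin/env python3
        """Snapshot the top memory consumers; a sketch, not a monitoring system."""
        import subprocess

        # ps is present on any Linux box; sort by resident set size, largest first
        out = subprocess.check_output(
            ["ps", "-eo", "rss,pmem,comm", "--sort=-rss"], text=True
        )
        lines = out.splitlines()
        print(lines[0])              # column header
        for line in lines[1:11]:     # the ten biggest processes
            print(line)

    For the color-coded dashboard described in question 2, packaged monitors such as Munin or Monit are much closer to that ideal than top.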

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response grows. For example, I ran four different tests; here are the results:

        Response 15 kB through HAProxy:
            Avg. response time:  .34 secs
            Transaction rate:    763 trans/sec
            Throughput:          11.08 MB/sec

        Response 2 kB through HAProxy:
            Avg. response time:  .08 secs
            Transaction rate:    1171 trans/sec
            Throughput:          2.51 MB/sec

        Response 15 kB directly to server:
            Avg. response time:  .11 secs
            Transaction rate:    1046 trans/sec
            Throughput:          15.20 MB/sec

        Response 2 kB directly to server:
            Avg. response time:  .05 secs
            Transaction rate:    1158 trans/sec
            Throughput:          2.48 MB/sec

    All transactions are HTTP requests. As you can see, the gap in response times is much bigger when the response is large than when it is small. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the tests were run using siege, and during the tests there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in this file that would help my problem. I have heard suggestions that I may need to adjust a few of my sysctl settings, but I could not find much information on that: most documentation covers Linux 2.4 and 2.6, while I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a note: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec across many servers, so any information that helps me reach that goal would be much appreciated. Thank you very much for any information you can provide, and if you need any more information from me, please let me know.
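    One possible contributor, offered as an assumption since it depends on the HAProxy version in use: option httpclose pushes every request onto a fresh TCP connection, so each transfer pays connection setup and starts with a cold TCP congestion window; a 15 kB body then needs several round trips where a 2 kB body fits in the first few packets. On HAProxy 1.4 or newer (which Ubuntu 12.04 ships), option http-server-close keeps the client side on keep-alive while still closing toward the backend. A sketch of the changed defaults section:

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            # keep-alive toward clients; connections to the backend are
            # still closed after each request (replaces "option httpclose")
            option http-server-close
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

    Re-running the siege tests with and without this option would show whether connection handling, rather than proxying overhead itself, accounts for the size-dependent gap.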

    Read the article
