Search Results

Search found 10496 results on 420 pages for 'real yang'.


  • External USB HD issues with a twist (works on Windows7 but not XP)

    - by Eruditass
    I have this older external USB HD, 160 GB. I was using it to copy my Steam games to another computer. On the source computer, Windows 7 64-bit, everything worked fine. Drive reported no errors, had no hiccups, etc. Plugging it into the Windows XP 32-bit computer, it worked fine for looking through the files, moving files around on it (no real reading/writing, just modifying the filesystem table). However, when copying files from it to my internal HD, after a couple seconds to tens of minutes (seemingly random times), the USB device becomes unrecognized and it reports a delayed write error. Events in system log go like this, chronologically: (number times displayed)xSource (Event ID): "message" 2xdisk (51): An error was detected on device \Device\Harddisk1\D during a paging operation. 1xftdisk (57): The system failed to flush data to the transaction log. Corruption may occur. 1xApplication popup (26): Windows - Delayed Write Failed : Windows was unable to save all the data for the file E:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. 1xntfs (50): {Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. These repeat for a while, then there is 10+ disk messages or ftdisk messages. Other notes: This occurs on random files at random times. This problem cannot be replicated on the Windows 7 source machine when copying from the HD to a different location on its local disk chkdsk /f was run and found no errors. chkdsk /f/r has the delayed write issue. drive was set to quick removal. Setting to performance in device manager yielded same result I am not writing anything to the USB external drive, so I am not sure why there is even a delayed write error (writing file access times?) local Windows XP was chkdsk'd without problems Windows XP machine has no problems with other USB HD's Various USB ports were attempted Rebooting did not help Occurs with SyncToy as well as windows explorer SMART status is good on both local drive and the external one Lack of gaming is making me cranky


  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone who knows BSD a bit better than I do, regarding my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my LAN throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a gigabit HP unmanaged switch. I host a large WISP here and have an OC-3 connection at home/work, and have no issues downloading/uploading from/to the net. My problem is with throughput. When I try to transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the maximum throughput I can achieve, say between a Win 7 or a Linux box, is ~65Mbit/sec. All machines run Intel Pro 1000 gigabit NICs and all cable is CAT6. Each is set to auto-negotiation and each shows 1500 MTU, full duplex @ 1GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those).

    My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a LAN server. I might be wrong and just wanted to find out from someone who knows BSD better than I do whether that is okay or whether something is out of tune. Are there other approaches that would work better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70MB/sec) is too low for what it could be. I have thought about adding another NIC in the FreeNAS box as well as the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see whether that might be worth it. I don't know if doing that would bypass the HP gigabit switch and allow for a machine-to-machine transfer anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
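
    A quick way to tell whether the limit is the network stack or the disk/FTP layer is to measure raw TCP throughput in isolation. The sketch below assumes iperf3 is (or can be made) available on both the NAS and a client, which is an assumption about the setup; if the raw numbers come out near wire speed (~940 Mbit/s), the sysctl profile above is probably fine and the bottleneck is elsewhere (disks, ProFTPD, or the client side).

        # on the FreeNAS/FreeBSD box
        iperf3 -s

        # on the Windows 7 or Linux client (substitute the NAS address)
        iperf3 -c 192.168.0.10
        iperf3 -c 192.168.0.10 -R      # reverse direction
        iperf3 -c 192.168.0.10 -P 4    # four parallel streams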


  • Setting up nginx as proxy to apache; All good, but nginx doesn't serve media

    - by becomingGuru
    I have set it up such that nginx proxies request and sends django requests to apache and serves media itself. Following documents my setup: Nginx Configuration: /etc/nginx/nginx.conf user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; sendfile on; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; include /etc/nginx/sites-enabled/*; } ===== ngnix proxy /etc/nginx/proxy.conf ============ proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; =========== Nginx server file: /etc/nginx/sites-enabled/some-name.txt ========== server { listen 208.109.252.110:80; server_name netconf; autoindex on; access_log /home/site/server_logs/nginx_access.log; error_log /home/site/server_logs/nginx_error.log; location / { proxy_pass http://127.0.0.1:80/; include /etc/nginx/proxy.conf; } location /site_media/ { root /home/site/folder/static; } } ========== Nginx very well proxies the request and passes to apache, the required requests, but doesn't serve the media. In the last server file, location site_media is not served, at all. :( Everything seems perfect to me. What is wrong? Thanks in advance.
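
    For what it's worth, the usual culprit with this layout is the root directive inside the location block: with root /home/site/folder/static, a request for /site_media/logo.png is looked up at /home/site/folder/static/site_media/logo.png. A hedged sketch of the alias-based alternative follows (the path is taken from the config above and assumes the files live directly under that static directory):

        # replace the /site_media/ location in /etc/nginx/sites-enabled/some-name.txt with:
        #
        #     location /site_media/ {
        #         alias /home/site/folder/static/;   # trailing slash matters with alias
        #     }
        #
        # then verify and reload
        sudo nginx -t && sudo service nginx reload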


  • WinPE, Startnet.CMD and passing variables to second batch file not working

    - by user140892
    I don't know scripting or PowerShell (yes I need to learn something). I'm not an expert batch file maker either. I have a WinPE flash drive which I used to deploy OS images. I have the WIM, drivers and anything needed else outside the WinPE environment to ensure that Updates, changes are easier for me to make. I use the "STARTNET.CMD" batch file which is part of the WinPE. The reason to go through the letter drives is that the WinPE always gets the X letter drive assigned. The flash drive itself can receive a random letter which always changes. My deployment menu is located on the flash drive it self and not inside the WinPE. This is so that if I need to make a change I don't have to re-do the WinPE. I am able to locate the "menu.bat" batch file and launch it. I use a variable to capture the letter drive. I call the second batch file named "menu.bat" and pass the variable to it. When the second batch file loads, I believe that I am calling the variable correctly. If I break out of the batch file I can echo the variable and see the expected reply. The issue is that I can't use the variable to work with anything on the second batch file. In my test, I can get this to work over and over. When it runs from the real USB flash drive it does not work. I removed comments from the second batch file to make it smaller. My issue is that files below all get a message stating that the system cannot find the path specified. Diskpart Imagex.exe bcdboot.exe Why can't I get the varible to properly function when I try to using example "ImageX.exe"? Contents of the Startnet.cmd @echo off for %%p in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%p:\Tools\ set w=%%p Set execpatch=%w%\Tools\ call %w%:\Menu.bat \Tools\ Contents of the Menu.BAT @echo off set SecondPath=%1 cls :Start cls Echo. Echo.============================================================== Echo. Windows 7 64 Bit Ent Basic Desktops Echo.============================================================== Echo. Echo A. 790 Windows 7 - Basic Echo. Echo. Echo I. Exit Echo. Echo. set /p choice=Choose your option = if not '%choice%'=='' set choice=%choice:~0,1% if '%choice%'=='a' goto 790_Windows_7_Basic echo "%choice%" is not a valid (answer/command) echo. goto start :790_Windows_7_Basic REM DISKPART /s %SecondPath%BatchFiles\Make-Partition.txt %SecondPath%imagex.exe /apply %SecondPath%Images\Win7-64b-Ent-Basic-SysPreped.wim 1 o:\ /verify %SecondPath%bcdboot.exe o:\Windows /s S: Copy %SecondPath%Unattended\unattend.XML o:\Windows\System32\sysprep\unattend.XML /y xcopy %SecondPath%Drivers\790\*.* o:\Windows\INF\790\ /E /Q /Y MD o:\Windows\Setup\Scripts\ Copy %SecondPath%BatchFiles\SetupComplete.cmd o:\Windows\Setup\Scripts\ /y Goto Done :Done Exit


  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof): server { listen 3000; server_name .example.com; root /Users/website/public; passenger_enabled on; rails_env development; } server { listen 3443; root /Users/website/public; rails_env development; passenger_enabled on; ssl on; #ssl_verify_client on; ssl_certificate /Users/website/ssl/server.crt; ssl_certificate_key /Users/website/ssl/server.key; #ssl_client_certificate /Users/website/ssl/CA.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-SSL-Subject $ssl_client_s_dn; #proxy_set_header X-SSL-Issuer $ssl_client_i_dn; proxy_redirect off; proxy_max_temp_file_size 0; } and Rails code in the controller like this: request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" } other headers appear, but not those set in nginx, e.g.: Header rack.multithread Val false Header REQUEST_URI Val /login/new Header REMOTE_PORT Val 64021 Header rack.multiprocess Val true Header PASSENGER_USE_GLOBAL_QUEUE Val false Header PASSENGER_APP_TYPE Val rails Header SCGI Val 1 Header SERVER_PORT Val 3443 Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7 Header rack.request.query_hash Val Header DOCUMENT_ROOT Val /Users/website/public I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e., headers, input = parse_request(client) if headers if headers[REQUEST_METHOD] == PING process_ping(headers, input, client) else headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" } process_request(headers, input, client) end end only to find that the supposedly added headers do not exist there either: abstract_request_handler: HTTP_KEEP_ALIVE = 300 abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5 abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2 abstract_request_handler: CONTENT_LENGTH = 0 abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad" abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5 abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1 abstract_request_handler: HTTPS = on abstract_request_handler: REMOTE_ADDR = 127.0.0.1 abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61 abstract_request_handler: SERVER_ADDR = 127.0.0.1 abstract_request_handler: SCRIPT_NAME = abstract_request_handler: PASSENGER_ENVIRONMENT = development abstract_request_handler: REMOTE_PORT = 64021 abstract_request_handler: REQUEST_URI = /login/new abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7 abstract_request_handler: SERVER_PORT = 3443 abstract_request_handler: SCGI = 1 abstract_request_handler: PASSENGER_APP_TYPE = rails abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
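
    If I recall correctly, proxy_set_header only affects requests that nginx proxies onward; with passenger_enabled on, the request is handed straight to Passenger and those directives never fire. The Passenger nginx module has its own directive for injecting values into the CGI environment Rails sees (passenger_set_cgi_param in the 2.x series, if memory serves). A hedged sketch, assuming that directive exists in your build and that Passenger was installed under the default /opt/nginx prefix:

        # in the HTTPS server block, instead of the proxy_set_header lines:
        #
        #     passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
        #     passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        #
        # then test and reload the custom-built nginx
        sudo /opt/nginx/sbin/nginx -t && sudo /opt/nginx/sbin/nginx -s reload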


  • Postfix: How to apply header_checks only for specific Domains?

    - by Lukas
    Basically what I want to do is rewriting the From: Header, using header_checks, but only if the mail goes to a certain domain. The problem with header_check is, that I can't check for a combination of To: and From: Headers. Now I was wondering if it was possible to use the header_checks in combination with smtpd_restriction_classes or something similar. I've found a lot information about header_checks and multiple header fields, when searching the net. All of them basically telling me, that one can't combine two header for checking. But I didn't find any information if it was possible to only do a header check if a condition (eg. mail goes to example.com) was met. Edit: While doing some more Research I've found the following article which suggests to add a Service in postfix master.cf, use a transportmap to pass mails for the Domain to that service and have a separate header_check defined with -o. The thing is that I can't get it to work... What I did so far is adding the Service to the master.cf: example unix - - n - - smtpd -o header_checks=regexp:/etc/postfix/check_headers_example Adding the followin Line to the transportmap: example.com example: Last but not least I have two regexp-files for header checks, one for the newly added service, and one to redirect answers to the rewritten domain. check_headers_example: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 Obviously if someone answers, the mail would go to nirvana, so I have the following check_headers defined in the main postfix process: /To:(.*)<(.*)@mydomain.example.com>(.*)/ REDIRECT [email protected]$2 Somehow the Transport is ignored. Any help is appreciated. Edit 2: I'm still stuck... I did try the following: smtpd_restriction_classes = header_rewrite header_rewrite = regexp:/etc/postfix/rewrite_headers_domain smtpd_recipient_restrictions = (some checks) check_recipient_access hash:/etc/postfix/rewrite_table, (more checks) In the rewrite_table the following entries exist: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 All it gets me is a NOQUEUE: reject: 451 4.3.5 Server configuration error. I couldn't find any resources on how you would do that but some people saying it wasn't possible. Edit 3: The reason I asked this question was, that we have a customer (lets say customer.com) who uses some aliases that will forward mail to a domain, let's say example.com. The mailserver at example.com does not accept any mail from an external server that come from a sender @example.com. So all mails that are written from example.com to [email protected] will be rejected in the end. An exception on example.com's mailserver is not possible. We didn't really solve this problem, but will try to work around it by using lists (mailman) instead of aliases. This is not really nice though, nor a real solution. I'd appreciate all suggestions how this could be done in a proper way.
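
    One pattern that matches the master.cf idea above is to route mail for the target domain through a dedicated smtp delivery agent (not smtpd) and hang the rewrite off that transport with smtp_header_checks; transport_maps is only consulted at delivery time, which is why attaching -o options to an smtpd listener has no effect here. A rough, hedged sketch, with the domain, addresses and service name as placeholders:

        # /etc/postfix/master.cf -- dedicated outbound transport
        #   headerfix unix  -       -       n       -       -       smtp
        #     -o smtp_header_checks=regexp:/etc/postfix/check_headers_example

        # /etc/postfix/transport
        #   example.com    headerfix:

        # /etc/postfix/check_headers_example
        #   /^From:(.*)@mydomain\.example$/   REPLACE From: [email protected]

        postmap /etc/postfix/transport
        postconf -e 'transport_maps = hash:/etc/postfix/transport'
        postfix reload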


  • My computer's dying :(

    - by j-t-s
    Hi All I have a sweet computer and have never had any real problems with it until now. I recently formatted my computer and all was well. But now my monitor randomly flickers. The monitor will just go black for about a second or two, then it'll return back to its normal state with all the stuff still on the screen. This is getting progressively worse. Another problem that has started is my computer randomly restarts. (I've managed to prevent it from restarting by removing the check in Automatically Restart from the Startup and Recovery Dialog. But I know this doesn't solve the problem). It also completely freezes up on me. One last thing, this morning I got a big blue screen. I can't remember what it said, but if it happens again I will take note of it and repost. Or if I can find some kind of a log file containing that bluescreen error I will post. I have checked all cords, and they're all fine. Nothing's loose. My computer isn't overheating either. I've taken the case off for 3 whole days and haven't used the computer for those 3 days, which has had no effect. I've checked the connections inside and nothing's loose there either. I know there's nothing wrong with my monitor because my friend has 2 computers and it works perfectly on those computers. I don't understand how my computer could suddenly become so unstable. I'm almost certain that I have no viruses; I full-scan my pc everyday for nastys, and have strict firewall settings. Anyway, I don't see how it could be a virus anyway simply due to the fact that I had these problems right after I formatted, and before I even had the chance to install or copy anything or even connect to the internet. I know it has nothing to do with the way I formatted the computer too, I've been fixing computers for years. Sorry for rambling on, just making sure I don't leave anything out. Has anybody else had this problem? Can somebody please try to help me with this? Thank you :)


  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are: Using snmptrapd to accept the traps, and have it pass them off to a custom written perl handler script to rewrite the trap and send it to the proper processing box Using some sort of load balancing software running on a Linux box to handle this (having some difficulty finding many load balancing programs that will handle UDP) Using a Load Balancing Appliance (F5, etc) Using IPTables on a Linux box to route the SNMP traps with NATing We've currently implemented and are testing the last solution, with a Linux box with IPTables configured to receive the traps, and then depending on the source address of the trap, rewrite it with a destination nat (DNAT) so the packet gets sent to the proper server. For example: # Range: 10.0.0.0/19 Site: abc01 Destination: foo01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3 # Range: 10.0.33.0/21 Site: abc01 Destination: foo01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3 # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01 iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1 This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can mach and filter on with IPTables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have" is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP traps (or Netflow, general UDP, etc) load balancing? Or can anyone think of any other alternatives to solve this?
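
    For the duplicate/mirror wish at the end, iptables has a TEE target (xt_TEE, mangle table, on reasonably recent kernels) that clones matching packets to an additional gateway on the local segment, and it can sit alongside the DNAT rules above. A hedged sketch reusing the addressing from the example (the mirror collector 10.1.2.4 is a made-up address):

        # clone traps from the abc01 range to a second collector before the DNAT is applied
        iptables -t mangle -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 \
            -j TEE --gateway 10.1.2.4

        # the existing nat rule still delivers the original packet
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 \
            -j DNAT --to-destination 10.1.2.3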


  • What presentation software suits my needs?

    - by claws
    Background: I'm teaching biology to 12th grade students. The syllabus I'm teaching is huge. I mean literally, very huge. There is a lot for students to remember. There are no less than 1000 facts (weird names, dates etc) for students to remember. They'll have to remember all of them, they don't have a choice. The notes I compiled for their learning itself is upto 80 printed pages(Just the bullet outline & facts). That's just one chapter. We have 34 chapters. Also my students are very hardworking, they study upto 8-10hrs per day (Yeah! we are from India :). So, I want to ensure maximum retaining by the students at each and every stage (Teaching & Learning). I'm trying to as many memory training techniques as possible. I'm trying to incorporate, mnemonics, strong visual aids (pictures, 3D-animations, real videos etc.), spaced repetition etc. I think MS powerpoint is not suitable for my needs: There are about 200 slides per chapter. Its very easy for students to get lost while teaching. Because the problem with powerpoint is that it gives facts (as bullets) but it doesn't exploit the association & organization (Concept Map) of the content, which helps students learn quickly. I found an amazing software called XMind. You can see the screenshot here. Problem is that it is not as powerpoint in terms of powerpoint. This software can be used for just for concept maps. In the above screenshot, each topic occupies a single slide. I have an Image/picture(Detailed huge picture) and about 5-10 bullet points and probably a video or an animation of somethings. And this XMind is not good at presenting, in terms that it doesn't allow me to set what to present after what. I want to present a top down view, with a slide for each topic. PS: I Don't like prezi.com. I tried but it simply is too confusing for my students. It zooms here and there. I didn't tried it but I've seen few presentations.


  • APC UPS ES 700 randomly overloads

    - by Matteo Mosca
    First of all, I live in Italy, Europe, so keep this in mind for volt/watt considerations. Standard voltage in Italian apartments is 220V. In my living room I have 2 APC UPS units, one an ES-550 and the other an ES-700. They each have 4 outlets for surge protection only, and 4 outlets for surge protection + battery backup. Just to give all the information, both had their battery replaced less than one month ago. The ES-550 works fine, without any problem. On its battery outlets I have connected:

    PC
    Monitor
    Sony Bravia 46''
    4th outlet is empty

    The ES-700 has the following on battery:

    Xbox 360
    PS3 (standby when not used)
    Wii (standby when not used)
    Netgear 8-port switch (always on)

    Here's what happens: the ES-700, randomly, but mostly at night when I'm sleeping, goes into "overload", with the constant beep. If I try to shut it off by keeping the power button pressed, nothing happens. The only thing that works is unplugging random stuff (sometimes unplugging 1 console works, other times I have to unplug all 4 devices). Every time this happens the problem is "real", meaning the 4 devices become unpowered, so it's not just an "alarm not working properly" problem. While I'm sleeping, of course, the power usage is what's described in the list: 2 devices on standby, 1 off and 1 on. Today it happened again while I was playing on my PS3. I unplugged it and the problem went away. I plugged it in again, and it kept working fine. I just can't figure out what the problem is. The only additional info I can provide is that this behaviour started after a big power outage last December 26 (a blackout that lasted almost all day), but the "surge protection" part of those UPS units should be there for exactly those problems, to handle peaks when power goes away or gets restored. Another funny thing, although it might not be related: for a couple of days after that event the Wii was unable to power on. I thought its power transformer was broken, but then it suddenly started working again. I can be sure it's not the Wii overloading the UPS because the overload happens even if I leave the Wii unplugged. Any suggestion is really appreciated, and I can provide any additional info, if needed, that didn't come to mind right now. Thanks.


  • Network use of Gaming PC

    - by Matthew Patrick Cashatt
    Background After YEARS of waiting, I built the custom gaming PC of my dreams: Intel i7 - 975 Extreme Edition 3.3ghz (overclocked to 4.0) ATI Radeon 5970 2gb Corsair 256 gb SSD Drive 2 TB Sata II 3.0 7200rpm data drive 12 GB Kingston Hyper-X (1600mhz) DDR3 Windows 7 Ultra 64 bit And so on. . . Problem I hooked this beast up to our home theater and settled in for a great gaming season only to realize a couple of drawbacks: It's hard to accurately wax bad guys using a keyboard in your lap whilst reclined on your couch (and using a wireless keyboard). It's hard to read the text on the screen (i.e. menus, etc). I find that a 1:1 ratio (screen diagonal inch to inch away from screen) is optimum, but using the home theater, it's more like 1:3 which has me squinting unless I sit on the coffee table. The wife always seems to want the TV the same time I do and, unfortunately "Real Housewives of Beverly Hills" and Battlefield BC don't mix. I am losing the battle in the home theater room, but the PC has to stay there (long story). So, this leaves me with the option of playing in my home office which is about 30 feet away from the home theater. I am a software developer so I have a pretty decent set up in my office--multiple 1080p monitors, HP Envy 17 which can run games like Crysis in 720p with out stammering too much. Also, I can game very comfortably at my desk in the office. Still, even though the set up in my office can run games well enough, I don't want to regress to that when I have worked YEARS for an awesome gaming PC that can run everything on ultra high settings. My Question What are my options for running my games on the beastly desktop in the Home Theater, but physically playing in my office about 30 feet away? A really long HDMI cable? LAN/RDC? Details that May Help We have an open crawlspace so running cable from HT room to office is no problem. I already have networked the house with a LAN Any help is GREATLY appreciated. Thanks, Matt


  • Chrome Problems on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, though all of this happened before the update as well) on Windows 8. Problems and explanations are listed below:

    Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and keeps displaying it even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select-all (or scroll). One thing to note is that the hover and select happen on the real page, not on the retained image-like rendering of the older webpage.

    Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy in Chrome on Windows 8. When I was using Windows 7, I never experienced any lag. Also, on HTML5 websites, transitions (-webkit-transition in CSS) run extremely slowly at times.

    Plugins crash: Plugins like Flash Player, Shockwave Player, and others that are built into Chrome crash a lot, even during simple tasks like playing YouTube videos or displaying ads.

    Chrome crashes: Chrome has crashed over 100 times in the past month. It just crashes randomly, or at least I don't know the reason.

    Random page crashes: Chrome shows chrome://crash/ (copy-paste this into the address bar) on random pages even when the page has just loaded. I understand this can happen on heavy HTML5 or JS websites, but what about HTML-only websites? Most of the above also happens on Super User, and Super User never had any problem in Chrome on Windows 7.

    UPDATE 1: @magicandre1981 suggested disabling hardware acceleration. I tried it; it somewhat helped but didn't fix the problem. I am still experiencing all the above issues, but less frequently (maybe because Chrome restarted completely).

    UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a fresh copy of Chrome and I haven't used it much).

    Is anybody else experiencing such issues? Does anybody have a solution to any of these? Any help is appreciated! Thank you!


  • Building an SSL server farm

    - by dan
    I'm interested in building the the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, that 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer 7 load balancer to be able to route. However for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using apache with mod-ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
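
    In case it helps to see the shape of the software route, a single node of an nginx SSL-terminating tier in front of the layer-7 balancer can be sketched roughly as below. This is only an illustration, not a sizing recommendation; the addresses and certificate paths are assumptions:

        # one node of the termination tier (nginx): decrypt, forward plain HTTP to the L7 balancer
        #
        #     upstream l7_balancer {
        #         server 10.0.0.20:80;    # hypothetical internal balancer VIP
        #     }
        #     server {
        #         listen 443 ssl;
        #         ssl_certificate     /etc/ssl/example.com.crt;
        #         ssl_certificate_key /etc/ssl/example.com.key;
        #         location / {
        #             proxy_set_header Host $host;
        #             proxy_set_header X-Forwarded-Proto https;
        #             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #             proxy_pass http://l7_balancer;
        #         }
        #     }
        #
        nginx -t && nginx -s reload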


  • UAE and the mysteries of unreachable websites

    - by 0plus1
    I write here because I'm really lost; please stay with me because it's not easy to explain. A company asked me to set up a private server. I'm a programmer, so I got a solution with technical support and cPanel, which helped me set everything up, and it's working smoothly. I'm by no means a professional sysadmin, but I have a fair knowledge of server configuration; this problem, however, is way over my knowledge, and apparently way over the knowledge of most sysadmins. I really hope that here I'll find someone with enough experience to help me or at least give me more insight.

    Now, this company for which I'm consulting operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with the nameservers not registering in the UAE; after a week that sorted itself out, and now the site is indeed reachable, but it takes almost 2 minutes to load a webpage with one line of text. Emails time out. The domain currently parked there was bought specifically for tests; the main one that was supposed to go there, after a catastrophic week, has been transferred to a shared hosting solution in the UK, and from there it works like a charm.

    After doing some research I discovered that I'm not alone in this: there are several reports of webmasters discovering that their website is not reachable inside the UAE. Mind that this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears; this seems to be related to the infrastructure of the UAE, which apparently reroutes everything through their own "fake" internet. Apparently new servers with their own IP are not recognized (yet?) by the UAE infrastructure, while shared hosting solutions, given that they host tons of other websites, are more likely to be part of the UAE network.

    Now my questions are:

    1) Does someone have a real explanation for this? The only thing I can think of is that the server is on a new IP that is not yet recognized by the UAE, but that doesn't explain why it loads (even if after 2 minutes). I don't have any help from within the UAE, as the only "experts" there are questionable companies that simply try to sell their own services.

    2) If there really is some kind of block on new servers, is it possible to know in advance whether a server is reachable from within the UAE? Currently this is not a nameserver problem, as even accessing the server by its IP results in a 2-minute wait.

    3) Could the problem lie somewhere else? Are there some tests that I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server (mind that the site works EVERYWHERE else in the world)?

    Thank you for ANY kind of help.
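
    On question 3: a few tests run from a machine inside the UAE (over TeamViewer) and repeated from somewhere else would narrow this down considerably. The sketch below assumes a Linux or macOS shell with standard tools; example.com and 203.0.113.10 stand in for the real hostname and server IP:

        # where along the path do packets stall or get rerouted?
        traceroute example.com
        mtr --report --report-cycles 30 example.com

        # separate DNS time from connection time, and compare hostname vs. raw IP
        dig example.com
        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' http://example.com/
        curl -o /dev/null -s -w 'connect:%{time_connect}s total:%{time_total}s\n' http://203.0.113.10/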


  • Unencrypted Image of a TrueCrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with truecrypt) other than with dd or similar sector-by-sector copy tools. These files however are very impractical given their size (and are obviously incompressible) which makes keeping multiple states/backups of your system partition rather expensive (..especially considering current hdd prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) Truecrypt partitions, or that there is no recovery environment for restoring the backup, that would allow to run truecrypt before doing any actual restoring. After some trial and error however, I believe I have found a very simple way. Since Truecrypt (running in Linux) creates a virtual block device, that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did: boot up a linux live disk mount/open the drive/device/partition in truecrypt unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigend by truecrypt) use partclone (this is what clonezilla would do too, except that clonezilla only offers you to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile for restoring steps 1-3 remain the same, and step 4 is partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup-state). The backup file is ~40 GB (and compressible down to <8GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one that wants to create images of encrypted drives, but doesn't want to waste 100GB on the backup of one single system state. So my question now is, given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to still remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing however is that you won't need to backup any partition tables, etc. since the reinstall will effectively take care of that. Is there anything else? This is imho still a lot better than having sector-by-sector images..
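
    For completeness, the four steps collapse into a small script. This is only a sketch of the procedure already described (device mapping and file names are placeholders), with the 7zip compression mentioned above bolted on afterwards:

        # assumes the volume was already opened in TrueCrypt as slot 1 (/dev/mapper/truecrypt1)
        umount /media/truecrypt1                      # unmount the filesystem, keep the mapping open
        partclone.ntfs -c -s /dev/mapper/truecrypt1 -o sysbackup.img
        7z a -mx=3 sysbackup.7z sysbackup.img         # "fast" setting, as in the text above

        # restoring is the mirror image:
        #   7z x sysbackup.7z
        #   partclone.ntfs -r -s sysbackup.img -o /dev/mapper/truecrypt1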


  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
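
    As a concrete rendering of the apolicy example, the hook is wired in with a single main.cf change; the sketch below uses postconf and keeps conventional relay protection around the new check (the surrounding restrictions and the port are assumptions):

        postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, check_policy_service inet:127.0.0.1:10001, reject_unauth_destination'
        postfix reload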


  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to setup Nginx to forward requests to several backend services using proxy_pass however several pages load with 404s The links on the pages have https:// in front, but result in a http request - which ends in a 404 - I only want these services to be available through https. I've tried with varied trailing forward slashes appended to the proxypath and location in proxy.conf, I've also tried commenting out www.conf (just incase its location blocks could have caused any conflicts) to no effect. So if a link is too https://example.com/sickbeard/errorlogs in a browser when loaded https://example.com/sickbeard/errorlogs gives a 404 in a browser https://example.com/sickbeard/errorlogs/ loads nginx error log; 2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com" Config files; proxy.conf location /sickbeard { proxy_pass http://localhost:8081/sickbeard; include proxy.inc; } .... more entries .... sites-enabled/main server { listen 80; include www.conf; } server { listen 443; include proxy.conf; include www.conf; ssl on; } www.conf root /var/www; server_name example.com; location / { autoindex off; allow all; rewrite ^/$ /mainsite last; location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { expires max; } location ~ \.php$ { fastcgi_index index.php; include fastcgi_params; if (-f $request_filename) { fastcgi_pass 127.0.0.1:9000; } } } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
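
    One thing that commonly produces exactly this symptom is the backend writing absolute http:// links and redirects because it never learns the original scheme. A hedged sketch of two changes to proxy.inc (whether Sick Beard and the other apps honour X-Forwarded-Proto is an assumption):

        # in proxy.inc:
        #
        #     proxy_set_header X-Forwarded-Proto $scheme;   # tell the app the request was https
        #     proxy_redirect   http://  https://;           # rewrite Location: headers on the way back
        #                                                   # (replaces the existing "proxy_redirect off;")
        #
        nginx -t && service nginx reload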


  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If while an application is running one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case but I assume this is true on Linux and other *nix as well) is smart enough to not delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes, it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important the the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones so that they're not overwritten or truncated. Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), but still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work but I'm not sure what entirety of commands need to be aliased. I imagine something like truss/strace except before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far: Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option. Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode. "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files
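
    To make the "move aside instead of overwrite" idea concrete, a small wrapper along these lines keeps the old inode alive for any process that still has the library open. This is only a sketch; the .deleted naming is an assumption to fit the cron-cleanup idea above, and safe_install is a made-up helper name:

        # rename the existing file first (same filesystem, so open inodes survive),
        # then install the new copy under the original name
        safe_install() {
            src=$1 dst=$2
            if [ -e "$dst" ]; then
                mv "$dst" "$dst.deleted.$$"    # rename keeps the open inode intact
            fi
            cp "$src" "$dst"
        }

        safe_install ./libfoo.so /opt/app/lib/libfoo.so   # hypothetical usage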


  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question: Nginx upstream subversion_hosts { server 127.0.0.1:80; } server { listen x.x.x.x:80; server_name hostname; access_log /srv/log/nginx/http.access_log main; error_log /srv/log/nginx/http.error_log info; # redirect all requests to https rewrite ^/(.*)$ https://hostname/$1 redirect; } # HTTPS server server { listen x.x.x.x:443; server_name hostname; passenger_enabled on; root /path/to/rails/root; access_log /srv/log/nginx/ssl.access_log main; error_log /srv/log/nginx/ssl.error_log info; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; add_header Front-End-Https on; location /svn { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } proxy_set_header Destination $fixed_destination; proxy_pass http://subversion_hosts; } } Apache Listen 127.0.0.1:80 <VirtualHost *:80> # in order to support COPY and MOVE, etc - over https (443), # ServerName _must_ be the same as the nginx servername # http://trac.edgewall.org/wiki/TracNginxRecipe ServerName hostname UseCanonicalName on <Location /svn> DAV svn SVNParentPath "/srv/svn" Order deny,allow Deny from all Satisfy any # Some config omitted ... </Location> ErrorLog /var/log/apache2/subversion_error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/subversion_access.log combined </VirtualHost> From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.


  • http://localhost/ always gives 502 unknown host

    - by Nitesh Panchal
    My service for World Wide Web Publishing Service started successfully but whenever I browse to http://localhost/ I always get 502 Unknown host. Also, wampapache is installed side by side but when I stop IIS service and start wampapache from services.msc I get error and when I view it in System event log I get this error: - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> - <System> <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" /> <EventID Qualifiers="49152">7024</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2011-06-12T17:43:28.223498400Z" /> <EventRecordID>346799</EventRecordID> <Correlation /> <Execution ProcessID="456" ThreadID="3936" /> <Channel>System</Channel> <Computer>MACHINENAME</Computer> <Security /> </System> - <EventData> <Data Name="param1">wampapache</Data> <Data Name="param2">%%1</Data> </EventData> </Event> I am fed up of this error and it is driving me nuts. I feel like banging my head against the laptop. I am really serious. Without concentrating on my real application I am trying to solve this issue since 3 hours. I google various threads and few of them said that there could be issue of Reporting Services or Skype. But I have uninstalled Skype and Reporting Services are disabled. What more should I do? I have hosts file present in etc directory and it does have mapping for localhost to 127.0.0.1. What more could I do?


  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching for no good reason by actively setting all the anti-cache headers: Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type: text/html; charset=UTF-8 Date: Thu, 22 May 2014 08:43:53 GMT Expires: Thu, 19 Nov 1981 08:52:00 GMT Last-Modified: Pragma: no-cache Set-Cookie: ECSESSID=...; path=/ Vary: User-Agent,Accept-Encoding Server: Apache/2.4.6 (Ubuntu) X-Powered-By: PHP/5.5.3-1ubuntu2.3 If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively to only very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information. CacheDefaultExpire 600 CacheMinExpire 600 CacheMaxExpire 1800 CacheHeader On CacheDetailHeader On CacheIgnoreHeaders Set-Cookie CacheIgnoreCacheControl On CacheIgnoreNoLastMod On CacheStoreExpired On CacheStoreNoStore On CacheLock On CacheEnable disk /the/script.php Apache is caching the page alright: [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php? [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php? [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php? [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers. [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php [cache:debug] AH00769: cache: Caching url: /the/script.php [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter. [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached. However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straight forward solution for how to cache this obnoxious piece of code?


  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external firewire drive attached to a Mac Mini running OS X Server 10.6, which provides filesharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind the scenes solution, but for people's day to day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has fibre channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has thunderbolt which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.
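
    Since ZFS snapshot sends are already part of the plan, here is roughly what the offsite loop looks like; only a sketch, with the pool/dataset names and the offsite host made up:

        # snapshot the dataset holding the sparse bundles, then send the increment offsite
        zfs snapshot tank/agency@2012-10-02
        zfs send -i tank/agency@2012-10-01 tank/agency@2012-10-02 | \
            ssh backup.example.com zfs receive -F offsite/agency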


  • Plesk Postfix Mail Server 9.5.4 very heavy load, 1000s of processes

    - by Eugene van der Merwe
    Our Plesk Linux Ubuntu 64-bit mail server has extremely high load and we don't know how to isolate it. The load was okay until two weeks ago, but in the last two weeks it has seriously deteriorated. The mail server has been running for years and we have had sporadic performance issues. Normally we reduce the load by turning off all SPAM checks until the problem is sorted (which sometimes resolves itself). Currently we have turned off real-time block lists and SPF checking, and we have attempted to turn off SpamAssassin. No matter what we do, the SpamAssassin check box stays ticked in the GUI. Out of desperation we have done /etc/init.d/psa-spamassassin stop. For years we haven't been able to run SpamAssassin because it kills the server. We would like to use it, but performance is more important for now.

    We cannot turn off greylisting. The moment we turn off greylisting our help desk is inundated with calls. Out of desperation we investigated truncating the greylisting database, which is now 2.5 GB, but we abandoned this after noticing that turning off greylisting doesn't improve the performance at all. We have no anti-virus; it's just more load, and Dr. Web never really worked that well for us, but we'll try it if it will make a difference. We have implemented Postfix Anvil. This seems to have made the situation worse, so we disabled it; we're not sure whether it was actually the cause.

    Our current mail server is configured to forward all SMTP to a relay server. We did so to reduce the load. This helped a lot because outgoing queues are generally empty. We are running in an Expand configuration. The mail server has about 12,000 accounts of which maybe half are active. We have read through this document: http://www.postfix.org/STRESS_README.html but there are too many settings and we don't know which ones to choose.

    Please assist urgently. We need advice on how to fix this problem before all our clients abandon us. The only clue we have is that there are hundreds of these processes:

        30 13205 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13207 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13208 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13209 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13213 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
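
    On the STRESS_README question: the handful of overrides that usually matter for a box being hammered by inbound connections are the ones that shed slow or misbehaving clients quickly. A hedged selection from memory; check the document for the exact semantics before applying:

        postconf -e 'smtpd_timeout = 10s'
        postconf -e 'smtpd_hard_error_limit = 1'
        postconf -e 'smtpd_junk_command_limit = 1'
        postfix reload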

    Read the article

  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing a couple of timeouts once in a while from the webservers trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options for sendmail in the sendmail tuning guide. What I am focusing on now is the delivery mode:

    Delivery Mode

    There are a number of delivery modes that sendmail can operate in, set by the DeliveryMode (d) configuration option. These modes specify how quickly mail will be delivered. Legal modes are:

        i   deliver interactively (synchronously)
        b   deliver in background (asynchronously)
        q   queue only (don't deliver)
        d   defer delivery attempts (don't deliver)

    There are tradeoffs. Mode i gives the sender the quickest feedback, but may slow down some mailers and is hardly ever necessary. Mode b delivers promptly but can cause large numbers of processes if you have a mailer that takes a long time to deliver a message. Mode q minimizes the load on your machine, but means that delivery may be delayed for up to the queue interval. Mode d is identical to mode q except that it also prevents lookups in maps including the -D flag from working during the initial queue phase; it is intended for "dial on demand" sites where DNS lookups might cost real money. Some simple error messages (e.g., host unknown during the SMTP protocol) will be delayed using this mode. Mode b is the usual default. If you run in mode q (queue only), d (defer), or b (deliver in background) sendmail will not expand aliases and follow .forward files upon initial receipt of the mail. This speeds up the response to RCPT commands. Mode i should not be used by the SMTP server.

    I currently have the CentOS default modes:

        sendmail.cf: DeliveryMode=background
        submit.cf:   DeliveryMode=i

    sendmail.cf/.mc is for outgoing email from the relay (to the intertubes) and submit.cf/.mc is for incoming email (from my webservers). Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
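    As a rough illustration only (not something lifted from the tuning guide, and the 15-minute interval is an arbitrary example), switching the relay to queue-only delivery and leaning on a periodic queue runner might look like this on CentOS:

        # Hedged sketch: accept fast, relay later.
        # 1) In /etc/mail/sendmail.mc on the relay, set queue-only delivery:
        #      define(`confDELIVERY_MODE', `q')dnl
        # 2) Rebuild sendmail.cf and restart (assumes the sendmail-cf package):
        make -C /etc/mail
        service sendmail restart
        # 3) Make sure a queue runner fires often enough to actually relay the
        #    queued mail, e.g. QUEUE=15m in /etc/sysconfig/sendmail (default 1h).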

    Read the article

  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a Drupal 7 site with the following setup: a VMware ESXi 4.1 host server running a web VM and an NFS VM. The web VM is using Apache and mod_php. The site is still in development, so we have to turn off all forms of caching due to the frequently-updated files. Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of the time (normally over 90%) is taken by all the is_dir() and is_file() calls that load up the modules. I've increased PHP's realpath cache size to several megs, and an strace shows that the lstat calls then drop from over 200 to around 6 and stat() decreases a bit (around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10-second-per-request barrier. Is there a way to get better performance out of this setup that doesn't involve caching?

    Configs and stats:

    VMs:
        web - CentOS 6 64-bit, 2.5GB RAM, normal CPU/HD prioritisation
        nfs - CentOS 6 64-bit, 2GB RAM, normal CPU priority, high HD priority

    PHP: 32M realpath cache size (it's this high for testing purposes)

    NFS:
        ~]# egrep -v '#|^$' /etc/nfsmount.conf
        [ NFSMount_Global_Options ]
        Defaultvers=4
        Ac=False
        Rsize=32k
        Wsize=32k
        Bsize=32k

    Reading speeds via NFS are not an issue; a dd of a 100M test file using 32k blocks returns:

        3200+0 records in
        3200+0 records out
        104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s
        real    0m1.857s
        user    0m0.007s
        sys     0m0.330s

    Strace on an Apache process with an empty realpath cache:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         50.78    1.157452         337      3434        28 stat
         32.58    0.742656         628      1182       425 open
          9.29    0.211788         762       278         1 lstat
          3.17    0.072322           0    237865           write
          2.45    0.055839         490       114        13 access
          0.45    0.010262          43       237           brk
          0.34    0.007725          10       811        74 read
          0.28    0.006340           9       679           fstat
          0.22    0.005069          18       281           poll
          0.20    0.004533           6       698           getdents
          0.09    0.001960          10       190           mmap
          0.05    0.001065          14        74           accept4
          0.04    0.001000         333         3           chdir
          0.03    0.000750           4       190           munmap
          0.01    0.000339           0       836           close
          0.01    0.000247           3        75           writev
          0.00    0.000068           0       611           fcntl
          0.00    0.000063           1        77           shutdown
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         5           socket
          0.00    0.000000           0         5         5 connect
          0.00    0.000000           0        74           getsockname
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         1           futex
        ------ ----------- ----------- --------- --------- ----------------

    Strace after realpaths are cached:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         60.14    1.371006         484      2831        28 stat
         31.79    0.724705         627      1155       425 open
          3.53    0.080354           0    237865           write
          2.65    0.060433         530       114        13 access
          0.43    0.009913          99       100           brk
          0.38    0.008730          11       804        74 read
          0.35    0.007910          12       675           fstat
          0.30    0.006775          10       654           getdents
          0.13    0.003065          11       281           poll
          0.09    0.002000         333         6         1 lstat
          0.07    0.001545           2       807           close
          0.05    0.001063          14        74           accept4
          0.04    0.001000           6       179           mmap
          0.02    0.000404           2       179           munmap
          0.01    0.000271           4        75           writev
          0.01    0.000212           0       611           fcntl
          0.01    0.000129           2        77           shutdown
          0.00    0.000022           0        74           getsockname
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         3           socket
          0.00    0.000000           0         3         3 connect
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         3           chdir
        ------ ----------- ----------- --------- --------- ----------------

    Mount:
        nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs (rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx)

    Any help is, naturally, appreciated.
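    One possibly relevant detail (an observation, not from the original post): the noac option in that mount line disables NFS attribute caching entirely, so every stat()/open() has to round-trip to the NFS server. If a few seconds of attribute staleness is tolerable during development, a hedged experiment is to remount with a short attribute cache and client-side dentry caching; the option values below are guesses, not a tested recommendation:

        # Hedged experiment: drop noac, allow a 3-second attribute cache and
        # dentry caching so repeated is_dir()/is_file() checks can be answered
        # from the client cache instead of hitting the NFS server every time.
        umount /path/to/website/files
        mount -t nfs -o rw,hard,intr,vers=4,actimeo=3,lookupcache=all \
            nfs.xxx.xxx.xxx:/path/to/website/files /path/to/website/files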

    Read the article

< Previous Page | 362 363 364 365 366 367 368 369 370 371 372 373  | Next Page >