Search Results

Search found 13180 results on 528 pages for 'non interactive'.


  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that provides file-sharing services to four Windows 7 32-bit clients. The server is an AMD Phenom X3 with two Western Digital 10EADS (1 TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS.

    The client machines take an extremely long time to access or transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, and one program waiting on a file can prevent other software from accessing network resources at all. Other examples include a single image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document.

    I initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was further confirmed when shutting down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system showed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally.

    I have also considered that the problem is hard-drive-related; however, diagnostic programs reveal no drive problems. Western Digital DLG, SpinRite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned.

    I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated. Requested information: output of 'free': hxxp://pastebin.com/mfsJS8HS (stupid spam filter). The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).
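    Since the load is high even with no clients attached, a first step is to find out what is actually generating the I/O during those 15 minutes. A minimal diagnostic sketch, assuming the iotop and sysstat packages are installed (both are in the Ubuntu 10.04 repositories):

        # show which processes are doing disk I/O, worst offenders first
        sudo iotop -oPa

        # per-process disk statistics, sampled every 5 seconds
        pidstat -d 5

        # check whether an md/RAID array is resyncing in the background
        cat /proc/mdstat

        # watch overall device utilisation (last column, %util)
        iostat -x 5

    If %util sits near 100% while throughput stays low, that points at seek-bound activity (for example a scheduled job such as updatedb walking the disks) rather than a failing drive.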


  • FTP account ownership on vhost directory makes Apache not run website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need to create an FTP account that isn't that sudo-able account, so I created a no-login account just for that purpose. I've set up vsftpd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership to the second account. To make it easier to understand: the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser, and I created a directory /home/ubuntu/mywebsite owned by ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache has no privileges to run the website. How can I make the website run properly, and which approach is the most secure? Should I run that virtual host as another user (if that's possible)? Should I put the FTP user in the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated; I'd like to keep the default permissions so that updates won't break anything (because of permissions being reset).
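    One common arrangement (a sketch, assuming the stock Ubuntu www-data user and the paths named above) is to keep ftpuser as the owner, make www-data the group, and let the setgid bit keep new uploads in that group:

        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        sudo find /home/ubuntu/mywebsite -type d -exec chmod 2775 {} \;
        sudo find /home/ubuntu/mywebsite -type f -exec chmod 664 {} \;

    That leaves files writable by both the FTP account and Apache's group without making anything world-writable. Whether 775/664 or the tighter 755/644 (with group write only where WordPress needs it, e.g. wp-content) is appropriate depends on how much you trust other local users.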


  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal, non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall; the firewall is configured to be zone-based. We want to set up IPA for centralized authentication (LDAP + Kerberos), and IPA requires resolvable host names. I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important: we have multiple zones, where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.
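    For what it's worth, FreeIPA can run its own integrated DNS (a BIND instance it manages for you), which removes most of the hand-editing. A sketch, where the domain and forwarder address are placeholders for your own values:

        # set up the IPA server with integrated DNS, forwarding external queries upstream
        sudo ipa-server-install --setup-dns --forwarder=203.0.113.53

        # enrolling a host with an address creates its DNS records automatically
        ipa host-add fileserver.example.lan --ip-address=10.0.1.20

    Clients enrolled with ipa-client-install then resolve every IPA-managed host without any manual zone edits.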


  • How to create video file from DVD

    - by pkaeding
    I want to take the DVD video slideshow that I got from my wedding photographer and put the video on YouTube (I have permission; I made sure to get non-exclusive rights to use anything she created while working for me in any way I wanted when we worked out the terms beforehand). Can anyone suggest the best way to get a video file that can be uploaded to YouTube from this DVD? The source DVD is not encrypted, so I don't need to worry about that. I am using a Mac, so Mac-friendly suggestions are preferred. Thanks!

    EDIT: So, I tried HandBrake, and it looks very promising. However, when I select the title in HandBrake, it says it is about 12 minutes long. The resulting file ends up only being about 4 minutes long, and the music is messed up. It seems to be going normally through the photo transitions, but removing the time that the slideshow stays on each photo. I believe the DVD was created using iDVD. Does it do anything weird to save space by varying the framerate, or anything like that? Are there special settings in HandBrake I need to use?
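    A slideshow DVD is often authored with long still frames, which some rippers fast-forward through. One alternative worth sketching, assuming ffmpeg is installed (e.g. via MacPorts or Homebrew) and that the title lives in the VTS_01 set: concatenate the VOB files and re-encode to an upload-friendly MP4:

        cat /Volumes/DVD_NAME/VIDEO_TS/VTS_01_*.VOB > slideshow.vob
        ffmpeg -i slideshow.vob -c:v libx264 -crf 20 -c:a aac -b:a 192k slideshow.mp4

    The volume name and title-set number here are placeholders; a DVD player app or HandBrake's title list will show which VTS_xx set actually holds the slideshow.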


  • Optimize mod_rewrite in .htaccess

    - by clarkk
    I've got some mod_rewrite conditions in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the file. When requesting https:// I get a 404 error: The requested URL /_secure/_secure/ was not found on this server. Somehow it adds an extra _secure to the path?

    .htaccess:

        # set language
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
        RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]

        # put 'www' as subdomain if none is given
        RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]

        # rewrite subdomain
        RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
        RewriteRule ^(.*)$ /_%1/$1 [L]

        # redirect to subdomain
        RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

        # rewrite 'secure' subdomain
        RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ /_secure/$1 [L]

        # rewrite 'api' subdomain
        RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_api/ [NC]
        RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]

        # redirect non-active subdomain to 'www'
        RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com [L,R=301]

        # hide file extensions
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
        RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
        RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]
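    One plausible source of the doubled prefix (an assumption based on how per-directory rewrites restart after [L]): once a request has been internally rewritten to /_secure/..., the rule set runs again, and if the HTTPS redirect fires on that later pass it builds its target from the already-rewritten path, which then gets prefixed a second time. Guarding the redirect so it never fires on an already-rewritten path is a common defensive fix:

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

    The same !^/_secure/ guard already protects the internal rewrite rule, which is why that one doesn't loop on its own.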


  • MySQL stopped asking for passwords

    - by BlaM
    I'm currently experiencing a weird problem with one of my MySQL database servers: it stopped asking for passwords when I try to access the database from local with the mysql command-line tool. I still need a valid admin username, I still need a password for remote access (i.e. from another IP), and I need a password when I - for example - access the database from a PHP script. But when I try to access the database from localhost on the command line, it lets me straight into the data with my administrative users. They (the admin users) have passwords set - and, as I mentioned, I still need to specify those when I access the data via PHP. Changing the password didn't help. Non-administrative users need to specify their password, but that doesn't really help if anyone can get in with "mysql -u root" (or another admin user account name). (System: Debian Linux Lenny, MySQL 5.0.51a.) Any ideas? Anything that explains this behaviour? I don't understand how this can happen.
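    Two things worth ruling out (a diagnostic sketch, not a diagnosis): a client options file quietly supplying the password, and the server running with the grant tables disabled:

        # does the client pick up credentials from ~/.my.cnf or /etc/mysql/my.cnf?
        mysql --print-defaults

        # is mysqld running with --skip-grant-tables?
        ps aux | grep mysqld | grep -i skip-grant

        # from inside mysql: which account did you actually authenticate as?
        SELECT USER(), CURRENT_USER();

    If --print-defaults shows a password, a ~/.my.cnf left over from maintenance is the usual culprit; if skip-grant-tables is present, the server is accepting everyone until it is restarted without it.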


  • Computer restarts in a loop while Windows loads

    - by Robinson G.
    My computer restarts just after or during Windows loading (the four squares coming together), or after some time on the desktop if I'm idle. In order to start the computer, I have to swap my RAM (24 GB) between different DIMM slots each boot - for example (I have 4 slots): 1-2-4 or 2-3-4 or 1-3-4, never the same positions as the previous boot - or it restarts in a loop. Alternatively, I can change the RAM voltage from 1.5 to 1.6 V and it starts; at the next reboot I have to change it back to 1.5 or it won't boot. And if, after all that, I succeed in booting, I have to use the computer for about 10 minutes and then it's OK; if I don't, it restarts by itself after a few minutes. If I stay in the BIOS, everything is fine - I could stay there a whole year without a restart.

    I have checked my RAM with Memtest (4 x 8 GB DDR3 SDRAM): OK. I have checked without the graphics card: still the same. I tried to reset the BIOS by removing and reinserting the CMOS battery: still the same. I tried to reset my BIOS, and many settings related to the RAM in the menu: still the same. CPU temperatures are just fine (around 35°C). I was of course thinking about the motherboard, but I want to be sure.

    Motherboard: MSI ZH77A-G43. RAM: 4 x 8 GB, dual-channel mode or not (depends on the slots I use, of course). PSU: 600 W (enough to run the whole config). CPU: i7 3770 (non-K).


  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 MacBook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this means running my LCD at non-native resolution and getting a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution. I am aware of closed-lid mode, but the MBP was disassembled while in storage for about 6 months (I took it apart to pull the HDD) and the cable to the touchpad, which controls the sleep sensor, was damaged, meaning closed-lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer, as it also has a completely dead battery and a few other minor problems. I've found the app SwitchResX, which appears to allow me to do what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)


  • Ways to parse NCSA combined based log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text-qualification challenge was addressed. That question also broadens the scope to any platform and any program.

    I've got Squid or Apache logs based on the NCSA combined format. By "based" I mean the first n columns in the file follow the NCSA combined standard; there might be more columns with custom stuff. Here is an example line from a Squid combined log:

        1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE

    I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, what makes it a little tricky, and why I feel this question hasn't yet been asked or answered, is the text-qualification conundrum: fields can contain spaces when they are quoted or bracketed. When I spotted asql in the grep/awk question I was very excited, but then realised that it didn't support combined format out of the box - something I'll look at extending, I guess.

    Looking forward to answers and learning new stuff! Answers don't have to be limited by platform or program/language. For the context of this question, the platforms I use most are Linux and OS X. Cheers.
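    For the field-splitting itself, GNU awk's FPAT feature (gawk 4+) handles the qualification directly: define a field as a bracketed timestamp, a quoted string, or a bare token. A minimal sketch:

        gawk 'BEGIN { FPAT = "(\\[[^]]*\\])|(\"[^\"]*\")|([^ ]+)" }
              { print $4, $5, $9 }' access.log

    With that pattern, $4 is the whole [timestamp], $5 the whole quoted request line, and $9 the quoted user agent, regardless of embedded spaces; extra custom columns simply become $10, $11, and so on.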


  • How do I access the web server on my stationary PC from my laptop?

    - by Steven
    I'm running Apache on my stationary PC and I would like to access a website on it from my laptop. This is some of the Apache config:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>

        ServerName localhost:80
        <Directory />
            Options FollowSymLinks
            AllowOverride all
            Order deny,allow
            Deny from all
        </Directory>

    On my laptop I've added the following to the HOSTS file:

        10.0.0.3 mysite.com

    But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update: I'm running WampServer 2.1 (Apache 2.2.17). Apache is up and running. I can ping 10.0.0.3 from the laptop, but I'm not able to reach http://mysite.com from it. IE gives me "403 Forbidden - The website declined to show this webpage". The only log that gets entries when trying to access the website from my laptop is access.log:

        10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202

    apache_error.log:

        [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    Update 2: My Apache config has the following entry:

        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1

    Could it be that this Allow from is stopping other computers from accessing the page?
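    That Allow from 127.0.0.1 would indeed only admit the local machine. A sketch of the two changes that would normally be needed (assuming the laptop sits on 10.0.0.x): bind the vhost to all interfaces instead of loopback, and allow the LAN in the directory block:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
            <Directory I:/wamp/www/mysite/>
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1 10.0.0.0/24
            </Directory>
        </VirtualHost>

    Binding to 127.0.0.1:80 also explains why only loopback requests matched the vhost as originally configured; after changing it, the Windows firewall on the WAMP machine still has to permit inbound port 80.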


  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
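    The bare word "Killed" at a prompt is what the shell prints after a SIGKILL, and on Linux the usual non-human sender is the kernel's OOM killer. A quick check (a sketch; the log paths assume a stock Ubuntu syslog setup, and yourscript.pl stands in for the actual script name):

        # kernel messages mentioning oom/killed processes
        dmesg | grep -iE 'killed process|out of memory'
        grep -i 'oom\|killed process' /var/log/syslog /var/log/kern.log

        # while it runs, watch the script's memory growth (yourscript.pl is a placeholder)
        top -p $(pgrep -f yourscript.pl)

    If the OOM killer is the culprit, the log lines name the victim and its memory usage at the time; a slow leak that only bites after 2-8 hours under the server's memory pressure (but never on a desktop Mac with more headroom) would fit the pattern described.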


  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this: I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which run a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks (see the sketch after this list):

        - add a non-privileged user to the target system for running the daemon without unnecessary root privileges
        - package the binary files themselves up from the final installation location on a separate build machine (probably under /opt/package for sanity's sake); no source is available
        - add a firewall hole in order for the end-users to be able to communicate with the "cluster" nodes
        - add a cron task which can start the daemon on @reboot

    I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Anyone have any good resources they can share for achieving this goal? Thanks!
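    A minimal binary-repackaging spec sketch covering those four tasks (the package name, paths, port and daemon path are all placeholders; Source0 is assumed to be a tarball of the reference install's /opt tree, built with rpmbuild rather than from source):

        Name:           vendorapp
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Repackaged vendor binaries
        License:        Proprietary
        Source0:        vendorapp-1.0.tar.gz
        BuildRoot:      %{_tmppath}/%{name}-%{version}-root

        %description
        Vendor binaries captured from a reference install.

        %prep
        %setup -q

        %install
        rm -rf %{buildroot}
        mkdir -p %{buildroot}/opt/vendorapp
        cp -a . %{buildroot}/opt/vendorapp

        %pre
        getent passwd vendorapp >/dev/null || useradd -r -d /opt/vendorapp -s /sbin/nologin vendorapp

        %post
        # CentOS 5: punch the firewall hole (not persistent across reboots as written)
        /sbin/iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
        # schedule the daemon at boot via a cron.d file
        echo '@reboot vendorapp /opt/vendorapp/bin/daemon' > /etc/cron.d/vendorapp

        %files
        %defattr(-,vendorapp,vendorapp,-)
        /opt/vendorapp

    The %pre/%post scriptlets carry the user, firewall and cron steps so every node converges on the same state when the RPM lands from the yum repository.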


  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a Dropbox folder to my C: drive by just running a portable file?

    Extra background information, because I know you guys hate it when you don't get the entire situation: I go back to university in fall and I need a new storage solution. I decided to use Dropbox to sync my tiny university files (< 5 MB). I need to access these files from 4 machines:

        1. Windows 7 home machine
        2. Windows 7 university A machine
        3. Windows 7 university B machine
        4. Android tablet

    1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3, but those machines are not mine. There is an easy fix: run a portable version of the Dropbox syncer on a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable Dropbox syncer off the internet. But where will it store the files? A temporary directory on the C: drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable Dropbox syncer will notice that the temporary directory is empty on each new PC I use, and instead of downloading my Dropbox folder to the machine, it syncs the other way around, i.e. it deletes my entire Dropbox. The solution is to mirror my Dropbox onto the temporary directory before running the Dropbox syncer.


  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine and I have to migrate to a new server in a different country. The old server is 11.10, the new server is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install: it is Apache with FastCGI, suEXEC, a GD library and the worker MPM, all sitting on top of an mhddfs file system. There are also other configs I've changed and I cannot recall what they are. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permissions errors, CGI problems, etc. Therefore my question is: is there a sane way to simply tar a full backup of the current web server installation - including MySQL, PHP and the Apache server with all configs - and then move it to the new machine? I shall be forever thankful for any advice. So far none of those I found here gave me an answer. Thanks!
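    A wholesale tar of a running system rarely transplants cleanly across releases (11.10's Apache/PHP builds differ from 12.04's), but capturing the configuration and data is straightforward. A sketch of the usual inventory, run on the old server:

        # configs, web roots and databases
        sudo tar czpf etc-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql /etc/fstab
        sudo tar czpf www-backup.tar.gz /var/www
        mysqldump -u root -p --all-databases > all-databases.sql

        # the list of installed packages, replayable on the new box
        dpkg --get-selections > packages.list

    On the 12.04 machine, reinstalling from packages.list (dpkg --set-selections < packages.list, then apt-get dselect-upgrade) and diffing the old /etc trees against the fresh ones tends to surface the forgotten customisations (suEXEC docroot, FastCGI wrappers, the mhddfs mount) one by one.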


  • cPanel ModSec2 not working with SecFilterSelective

    - by jfreak53
    OK, I have the latest cPanel/WHM on a dedicated server. Here are the specs on Apache:

        Server version: Apache/2.2.23 (Unix)
        Server built: Oct 13 2012 19:33:23
        Cpanel::Easy::Apache v3.14.13 rev9999

    I just ran a recompile using EasyApache, as you can see by the date. When running it I made sure that ModSec was selected, and it stated, in big bold letters, something to the effect of "If you install Apache 2.2.x you get ModSec 2". So I believed it :) I recompiled, then ran:

        grep -i release /home/cpeasyapache/src/modsecurity-apache_2.6.8/apache2/mod_security2.c

    Hmm, the file is there, but grep doesn't output anything. If I run:

        grep -i release /home/cpeasyapache/src/modsecurity-apache_1.9.5/apache2/mod_security.c

    I of course get the ModSec 1 version output. But the thing is that ModSec 2 must be installed, since the .c file is there. So I continued and put the following in modsec2.user.conf:

        SecFilterScanOutput On
        SecFilterSelective OUTPUT "text"

    Now when I restart Apache I get this error:

        Syntax error on line 1087 of /usr/local/apache/conf/modsec2.user.conf: Invalid command 'SecFilterScanOutput', perhaps misspelled or defined by a module not included in the server configuration

    Now supposedly this is supposed to work; I even have it running in ModSec 2 on a non-cPanel server set up manually. So I know ModSec 2 supports it. Anyone have any ideas? I have asked this question over at the cPanel forum and it got nowhere.
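    For what it's worth, the SecFilter* directives are ModSecurity 1.x syntax, and the 2.x module dropped them, which would explain the "Invalid command" error here. A sketch of the 2.x equivalent of scanning response bodies for "text":

        SecRuleEngine On
        SecResponseBodyAccess On
        SecRule RESPONSE_BODY "text" "phase:4,t:none,deny,log,id:100001"

    SecResponseBodyAccess replaces SecFilterScanOutput, and SecRule RESPONSE_BODY replaces SecFilterSelective OUTPUT. If the old syntax really works on the non-cPanel box, that box is most likely loading ModSecurity 1.x rather than 2.x.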


  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when accessing one of my file-based hidden TrueCrypt volumes shortly thereafter: some of the files in the volume - NOT all - seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume are still intact. Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the mobo. I really don't know how the TrueCrypt encryption works well enough to know what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!


  • Locked out of our Comcast Business Gateway for seemingly no reason

    - by Tyler
    A little backstory... Last fall we migrated our ISP from AT&T to Comcast at one of our offices. At that time, we received a new modem/router from Comcast and we configured everything to our liking. We've never really had very many issues with the router aside from having to restart it every once in a while. Here's the problem... About three months ago I changed the password on the router from the default. After that, I logged into the router several times to make changes with no issue. During May I logged into the router to add two new static routes, no problems. A week ago, I tried to log into the router and could not. I tried the non-default password that I changed it to, the default, anything and everything I could think of and no luck. I restarted the router on Monday thinking it may just be locked up, but after the restart it would still not let me log in. This router is at our other office about 2 hours from here and I want to avoid having to drive down there and reset to factory defaults, reconfigure, etc. Any ideas?


  • Is there a remote desktop or VNC app for the iPad that properly handles Bluetooth keyboard shortcuts?

    - by Steve Bison
    I've tried 4 or 5 remote desktop apps, the most notable being Jump Desktop and Splashtop Streamer. Most of these remote desktop apps have some sort of on-screen keyboard for typing with the iPad, including special keys like Shift, Control, Alt. The special keys act like "sticky keys", meaning they stay depressed until another key is pressed, to make it easier to do key combinations. Even non-standard keyboard combinations like Shift+Enter work, in this sticky sense. When using a Bluetooth keyboard with the remote desktop apps, both Jump and Splashtop Streamer recognize the Shift+letter combination for doing capital letters. However, generically pressing Shift, Ctrl, or Alt does not depress the sticky on-screen shift buttons or do anything at all. Only a few combinations are recognized (again, like Shift+letter, Ctrl+C); most combinations do not work (Shift+Enter, Alt+Tab). Even having the keyboard shortcuts work like sticky keys (press Shift then Enter, not both at once) would be much better than the limited functionality they have now. Is there an app, jailbreak app, or workaround that lets me use a Bluetooth keyboard properly with remote desktop on the iPad?


  • Serve web application error messages from the HTTP server

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the following resource:

        https://domain.com/resource

    which doesn't exist. If I [GET] that URL, then I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error status), nginx itself sends a page to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple, just:

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
        }

    And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL:

        https://domain.com/customer/<non-existent-customer-name>/ [GET]

    will always return 404 (or another error status), while:

        https://domain.com/customer/<existent-customer>/ [GET]

    will return anything other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? To check the status sent back through the proxy_pass directive and act upon it?
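    This is what nginx's proxy_intercept_errors directive is for: with it enabled, any upstream response with a status of 300 or higher is handed to the matching error_page directive instead of being relayed to the client. A sketch (the /errors path and file names are placeholders):

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
            proxy_intercept_errors on;
            error_page 404 /errors/404.html;
            error_page 500 502 503 504 /errors/50x.html;
        }

        location /errors/ {
            internal;
            root /srv/www/static;
        }

    Because the interception keys on the upstream status code, the /customer/<name>/ URLs still behave as described: an existing customer returns a non-error status and passes through untouched, while a missing one triggers the 404 page served by nginx.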


  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. It seems an efficient and flexible solution for data storage and for occasional data processing, too. However, it might be very inefficient to mix two data centres, transferring data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange looks expensive, and the delay in moving the data back to the hoster introduces a lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log-file storage, to run Urchin analysis and/or port the log-file data into a BigTable for exhaustive analysis there; and, after working with the data, how to bring it back to the hoster and use it with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, and only if the transfer/exchange does not lead to increased cost.


  • Disk write-cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image):

        Turn off Windows write-cache buffer flushing on the device
        To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure.

    Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk?

    The setting has different descriptions in different versions of Windows:

        Windows XP: "Enable write caching on the disk. This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption."

        Windows Server 2003: "Enable write caching on the disk. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."

        Windows Vista: "Enable advanced performance. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."

        Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure."

    This article by Raymond Chen has some more detailed information about what the setting does.


  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation: Chromium v7 + Ubuntu 10.04 + Sun Java 6 + webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:

        1) Installing the Sun (I refuse to say Oracle, sob) provided Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.

        2) Somehow enabling Facebook as a permitted site to access my webcam using Flash settings. I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7.

    Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!


  • Nginx Config - I can't access WordPress admin area

    - by WebDevDude
    I am a complete noob when it comes to nginx, but I'm trying to make the switch over for my WordPress site. Everything works, even the permalinks, but I can't access my WordPress admin directory (I get a 403 error). I have my WordPress install in a subfolder, so that complicates things a bit for me. Here is my nginx config file:

        server {
            server_name mydomain.com;
            access_log /srv/www/mydomain.com/logs/access.log;
            error_log /srv/www/mydomain.com/logs/error.log;
            root /srv/www/mydomain.com/public_html;

            location / {
                index index.php;
                # This is cool because no php is touched for static content.
                # include the "?$args" part so non-default permalinks doesn't break when using query string
                try_files $uri $uri/ /index.php?$args;
            }

            location /myWordpressDir {
                try_files $uri $uri/ /myWordpressDir/index.php?$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/mydomain.com/public_html$fastcgi_script_name;
                fastcgi_split_path_info ^(/myWordpressDir)(/.*)$;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }
        }
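    A likely culprit (an assumption from the config as posted): a request for /myWordpressDir/wp-admin/ resolves to a real directory, so the $uri/ branch of try_files wins, and with no index directive in that location nginx refuses to generate a directory listing - hence the 403. A sketch of the usual fix:

        location /myWordpressDir {
            index index.php;
            try_files $uri $uri/ /myWordpressDir/index.php?$args;
        }

    Declaring index index.php once at server level (instead of only inside location /) would cover both locations at the same time.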


  • Drive letter not appearing after heat-related crash

    - by NickAldwin
    I recently had my old PC (it has 3 physical hard drives partitioned into 6 partitions) off while on vacation. When I came back, I turned it on. I hadn't realized the room was warmer than usual due to hot weather while I was away. The computer was extremely slow to start up, then it crashed. When I rebooted, it got halfway through chkdsk on one of the non-system partitions, then crashed again. I opened it up, felt the hard drives, and immediately shut down the computer and moved it to my basement to cool down, because it was so hot. I left it there for a while until I reinstalled the A/C. I have now turned it on again. It is working fine, and every drive except the one with the partition that was being checked has appeared in Windows. I scheduled chkdsk for all of the other partitions anyway, just in case, but I'm worried about that drive. I'm pretty sure the drive itself isn't broken, but crashing in the middle of a chkdsk repair may have corrupted the data. What would you do in this situation? Most of the data on that drive was backed up, so it's not a huge deal if I lost it, but I'd like to get it back if I could. I also would love to regain usability of the drive, even if I have to wipe it - but that's a last-resort sort of thing. What do you suggest I do?
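    Before anything destructive, it's worth checking whether the volume is merely missing its drive letter rather than unreadable. A sketch from an elevated command prompt (X: and the volume number are placeholders for whatever Disk Management actually shows):

        diskpart
        DISKPART> list volume        # does the partition show up, with a file system?
        DISKPART> select volume 5    # the volume number shown for it
        DISKPART> assign letter=X
        DISKPART> exit

        chkdsk X: /f

    If list volume shows the partition as RAW, or not at all, stop there and image the drive first; an interrupted chkdsk repair on a damaged filesystem is one of the cases where read-only recovery tools beat another repair pass.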


  • Debian grub2 update removed Windows boot option.

    - by Wrikken
    Since I updated GRUB to GRUB 2, I no longer get the option to boot into Windows (which is unfortunately sometimes necessary for proprietary MSIE browser plugins I need to use for work). The relevant /boot/grub/menu.lst portion was:

        ### END DEBIAN AUTOMAGIC KERNELS LIST

        # This is a divider, added to separate the menu items below from the Debian
        # ones.
        title           Other operating systems:
        root

        # This entry automatically added by the Debian installer for a non-linux OS
        # on /dev/hda1
        title           Windows NT/2000/XP
        root            (hd0,0)
        savedefault
        makeactive
        chainloader     +1

    This entry, however, no longer appears. I do have entries in /boot/grub/grub.cfg like this one:

        menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
            insmod part_msdos
            insmod ext2
            set root='(hd1,msdos1)'
            search --no-floppy --fs-uuid --set e638c434-4884-412f-a141-2c194f881fae
            echo 'Loading Linux 2.6.32-5-amd64 ...'
            linux /boot/vmlinuz-2.6.32-5-amd64 root=UUID=e638c434-4884-412f-a141-2c194f881fae ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-2.6.32-5-amd64
        }

    Do I have to alter that file? If so, what is the correct syntax for a Windows boot entry? If not, what could be the problem?
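    grub.cfg is generated, so direct edits get overwritten on the next kernel or GRUB update. Two options on Debian, sketched below: let os-prober detect Windows, or add a custom chainloader entry to /etc/grub.d/40_custom and regenerate the config:

        # option 1: have update-grub detect Windows automatically
        sudo apt-get install os-prober
        sudo update-grub

        # option 2: append to /etc/grub.d/40_custom (hd0,msdos1 matching the old (hd0,0)),
        # then run update-grub
        menuentry "Windows NT/2000/XP" {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            chainloader +1
        }

    Note the numbering change: in GRUB 2, partitions count from 1, so GRUB Legacy's (hd0,0) becomes (hd0,msdos1).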

    Read the article
