Search Results

Search found 15651 results on 627 pages for 'setup'.

  • DNS caching server config problem

    - by Alex
    I have a working BIND caching-only DNS server setup. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leads me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none for the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        // forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
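
    A quick way to narrow this down (a sketch, using the addresses above) is to query both servers directly, and to force the zone to use its forwarder only, so that "forward first" cannot fall back to root-hint recursion for a .local name the roots will never resolve:

        # compare answers from the DC and from the caching server
        dig @192.168.1.21 dc1.mydomain.local
        dig @192.168.0.14 dc1.mydomain.local

        # in named.conf, forbid fallback for this zone
        zone "mydomain.local" {
            type forward;
            forward only;
            forwarders { 192.168.1.21; };
        };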

  • I can't connect to the internet via LAN cable because the CR2032 battery died and my BIOS info is now empty [closed]

    - by Rand Om Guy
    I have a Compaq CQ61-112SL that is about 5 years old. The main battery is almost dead and doesn't last more than 10 minutes. Anyway, my problem is that the motherboard battery ran out of energy a few days ago, and since then I can't access the internet through a LAN cable, only via WiFi. I need the cable though. I saw that on my BIOS setup page a bunch of parameters were missing, like the serial number, UUID, product number and stuff like that. Also, when I start the notebook it prints something like: No serial found. I don't really know if the empty BIOS is the reason my LAN cable doesn't work, but I assume it is; if it's not, please enlighten me. Either way, tell me how to update the serial number and product number to the real ones (instead of the 0000000000000 that is in my BIOS now). I downloaded HP DMI, which should make it possible to set these variables in the BIOS, but I'm on Windows 8 64-bit, and the executable I need to run for my laptop model says it can't run on 64-bit.

  • How should I configure my Apache virtual host files to serve a different site for localhost than for my domain/public IP?

    - by rofls
    I'm trying to test out a LAMP setup (with PHP5, specifically) alongside Django, which is already serving a website. I want to do the PHP stuff on localhost for now, so that when I do something like curl http://localhost/database/script.php?var=1, I get a response from the PHP server. Right now I'm getting a Django error. I tried something like this in the default file in sites-available:

        Listen 80
        <VirtualHost aaa.bbb.ccc.ddd>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

    where aaa.bbb.ccc.ddd is the local IP address, and changing my actual site's settings to specify the public IP, like this:

        Listen 80
        <VirtualHost www.xxx.yyy.zzz>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    but then I start getting all kinds of errors when I start Apache, such as "port ::[80] is already in use". I noticed that the hosts file located in /etc/apache2/ apparently points everything to mysite.com, including my local IP as well as 127.0.0.1 and 127.0.1.1. Do I need to change the configuration there too?
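
    One common culprit for the "port 80 already in use" error (a sketch, assuming both sites run on the same Apache instance) is declaring Listen 80 twice. Declare it once, in ports.conf on Debian/Ubuntu, and let name-based virtual hosting pick the site by ServerName:

        # ports.conf (or equivalent): exactly one Listen directive
        Listen 80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>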

  • How to fix Truecrypt MBR using Command Prompt or Linux live USB?

    - by Michal Stefanow
    I was playing with TrueCrypt and decided to make a fresh installation of Windows 7 from a USB stick. Unfortunately, the Windows 7 installer says: "setup was unable to create a new system partition". My entire HDD has been formatted and is visible as 320GB of unallocated space, but neither fdisk nor the Windows 7 installer nor the Windows XP installer could help. (Windows XP doesn't even see the HDD; it sees only the USB stick and says there is not enough space to install.) It may be related to TrueCrypt's pre-boot authentication, boot loader and/or MBR. As I don't have an optical drive, I could not create a rescue disk. Right now I need a rescue of some kind, presumably by erasing/fixing the MBR using a Linux live USB or the Command Prompt. Another approach is to click "Repair your computer" in the Windows 7 installer menu, then "Restore your computer", then click OK upon the error and get access to the Command Prompt. When I start the computer without the Linux USB I receive this:

        error: unknown filesystem.
        grub rescue>

    Any help would be greatly appreciated, as my laptop is kind of not fully operational now. UPDATE: This was asked a long time ago; I ended up formatting everything (eventually it worked using a different bootable USB)...
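
    For reference, a sketch of the two repair routes mentioned above, assuming the disk is /dev/sda; both overwrite the MBR boot code, so they are last-resort steps:

        :: from the Windows 7 installer's Command Prompt
        bootrec /fixmbr
        bootrec /fixboot

        # or from a Linux live USB: zero only the boot-code area
        # (first 446 bytes; the partition table is left intact)
        sudo dd if=/dev/zero of=/dev/sda bs=446 count=1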

  • HP/IBM alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reticent to go that way as it's a bit too "entry level" for my liking. In particular, I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any one of them is all the hardware I will need. Some options appear to need separate controllers, disk enclosures, etc.

  • How do I install Photoshop CS2 in Wine w/ Creative Suite Installer?

    - by kellishaver
    I'm running Ubuntu 9.10 and want to install Photoshop CS2 in Wine (wine1.2). From what I've read, the Photoshop installer and application should both run fine. However, I don't have a specific installer for Photoshop; the setup program on the CD is for the entire Creative Suite 2 bundle. When I try to run it, I get through the splash screen, license agreement, and language selection screens, but when I click the button to start/customize the installation, the installer dies. The Photoshop CS2 folder on the CD has two exe files, instmsia.exe and instmsiw.exe, and I tried those, hoping to find a stand-alone Photoshop installer, but neither works. I tried downloading a trial, but my license key is apparently for the entire bundle, because it didn't work. Does anyone know of a workaround for this, or a way to make the Creative Suite installer work? I'm currently running Photoshop under a Windows XP VM, but it would be nice to have the option of using it via Wine, so I don't have to boot the VM every time I want to edit an image (reading/writing to my Ubuntu shares is also really slow in VirtualBox). Thanks!
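
    One thing worth trying (a sketch, not a confirmed fix; WINEARCH needs a Wine build with WoW64 support, which may not apply to every wine1.2 package) is a fresh 32-bit prefix, running the installer from a terminal so its crash output is visible:

        # create a clean prefix just for CS2 (paths are examples)
        WINEARCH=win32 WINEPREFIX="$HOME/.wine-cs2" winecfg

        # run the bundle installer and keep a log of the failure
        WINEPREFIX="$HOME/.wine-cs2" wine /media/cdrom/Setup.exe 2>&1 | tee cs2-install.log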

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I don't have the choice of using my software and instead need a router to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to me, and the customer's router/firewall (a Netgear SRX5308) has a dynamic IP from an ISP that can't offer static IPs, so it needs dynamic DNS for the tunnel to work. In this case there really isn't a client to run the software on, since it's a router/firewall. Unfortunately, it seems that most routers are rather unfriendly towards custom DDNS solutions: most offer only dyndns.com or similar templates, which was the case with this router too, leaving me with no way to use my own dynamic DNS server. I have the option of switching out the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site might have been in a similar situation or might know of some router/firewall that is friendlier towards custom DDNS solutions. Thanks in advance for any help or guidance!
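
    For what it's worth, if the DDNS server can be made to speak the dyndns2 update protocol, many routers' built-in "dyndns.com" client can be pointed at it on firmware that allows overriding the server host. A sketch of the request such an endpoint would need to accept (ddns.example.com is a hypothetical placeholder for your server):

        GET /nic/update?hostname=site.example.com&myip=203.0.113.10 HTTP/1.1
        Host: ddns.example.com
        Authorization: Basic <base64 of user:password>

        # expected reply body on success: "good 203.0.113.10"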

  • mod_rewrite redirect subdomain to folder

    - by kitensei
    I have a WordPress blog at the URL http://www.orpheecole.com. I would like to set up 3 subdomains (cycle1, cycle2, cycle3), each redirected to its own folder (1 subdomain = 1 WP blog, no multisite enabled). The file tree looks like this:

        /var/www/orpheecole.com/
        /var/www/cycle1.orpheecole.com/
        /var/www/cycle2.orpheecole.com/
        /var/www/cycle3.orpheecole.com/

    The following .htaccess tries to redirect to /var/www/orpheecole.com/cycleX instead of its own directory, but if it's possible I'd rather redirect every subdomain to its own www folder. My sites-enabled file for the main site is:

        # blog orpheecole
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName orpheecole.com
            ServerAlias *.orpheecole.com
            DocumentRoot /var/www/orpheecole.com/
            <Directory /var/www/orpheecole.com/>
                Options -Indexes FollowSymLinks MultiViews
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/orpheecole.com-error_log
            TransferLog /var/log/apache2/orpheecole.com-access_log
        </VirtualHost>

    and the .htaccess located in /var/www/orpheecole.com/ looks like this:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} !^www.* [NC]
            RewriteCond %{HTTP_HOST} ^([^\.]+)\.orpheecole\.com$
            RewriteCond /var/www/orpheecole.com/%1 -d
            RewriteRule ^(.*) www\.orpheecole\.com/%1/$1 [L]
            # BEGIN WordPress
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
            # END WordPress
        </IfModule>

    I tried removing the WordPress directives, but nothing changed, and the rewrite module is enabled and working.
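
    Since each subdomain already has its own directory, a simpler route (a sketch based on the file tree above) is to drop the rewrite rules and give each subdomain its own VirtualHost, narrowing the catch-all alias on the main site so it no longer swallows the subdomains:

        <VirtualHost *:80>
            ServerName cycle1.orpheecole.com
            DocumentRoot /var/www/cycle1.orpheecole.com/
        </VirtualHost>
        # repeat for cycle2 and cycle3, and replace
        # "ServerAlias *.orpheecole.com" in the main vhost with
        # "ServerAlias www.orpheecole.com"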

  • Apache /server-status/ gives a 404 not found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, where my skill set is lacking. First, system info for the monitored server:

        CentOS 5.3, kernel 2.6.18-128.1.1.el5, Apache/2.2.3

    The "server-status" directive in httpd.conf (I've cross-compared this with another system where a parallel install of Munin correctly shows Apache stats, and the directive below is the same on both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership of httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www; not certain that this matters? It's got to be a permissions issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status gives me a 404?
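
    Two checks worth running (a sketch; paths follow the CentOS layout above) to confirm which configuration the running Apache actually loaded and whether another directive shadows /server-status:

        # is mod_status actually loaded into the running config?
        httpd -t -D DUMP_MODULES | grep status

        # is any other Alias, handler, or rewrite matching the URL?
        grep -R "server-status" /etc/httpd/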

  • Take a regular Windows 7 clone with Clonezilla (device-to-image)

    - by Mario De Schaepmeester
    I am inexperienced with cloning software and have decided to use Clonezilla, as it seemed the best freeware option. I chose device-to-image and left most options at their defaults. I chose expert mode anyway to see what I could configure, and decided to try the lzop algorithm instead of the default one for compression. The rest was left at default. When Clonezilla asked me which partitions to clone (I chose parts-to-image), I chose the C:\ drive, but Windows 7 also creates a 100MB partition during setup for system files (the actual boot partition?). I copied that into the image as well. The reason I didn't choose disk-to-image is that I also have a data partition that needs to stay intact. Now I'm simply not sure this is the way to go, should I ever need to restore my disk image. Will Clonezilla know what to do with both partitions, and will Windows 7 work perfectly after restoring? Edit: apparently a similar question has been asked before. The link to the first article in the answer is not relevant to me, since it covers a device-to-device clone. It appears the Windows installation disk can repair the 100MB partition, and Clonezilla copies "hidden data after the MBR" by default too. I don't know; I feel I'll be all right whether I restore the partition with Clonezilla or repair it with the Windows 7 disk.

  • Periodic internet connection drops

    - by user9647
    My setup is a DSL modem and a D-Link DI-524M router. I'm also using a Witopia VPN, which runs through OpenVPN. I've been having trouble with the internet connection dropping very frequently; it comes back shortly, without even a router/modem/computer restart. This happens as often as every ten minutes, though occasionally (not often) it will last an hour or two without dropping. When it drops, I can get it back almost immediately by clicking Reconnect in the OpenVPN GUI and letting that do its thing. It's worth noting that I'm in China, so calling support is a bit difficult. Also, I don't really understand all of the router's software, although I've got it generally figured out. I've tried a bunch of things to diagnose and/or fix the problem, with no success from any of the following: power-cycling both the modem and the router; an ethernet connection to the router; connecting without the VPN; disabling IEEE authentication on all connections; checking for viruses; and lifting the router off the ground to prevent overheating.

  • Nvidia RAID 1 Problem. Degraded drives...

    - by Vedat Kursun
    I had a RAID 1 on my system, which has a Gigabyte GA-8N SLI motherboard with an Nvidia chipset (Nvidia RAID IDE ROM BIOS 4.84). When the system was working properly, there used to be an icon in the system tray showing my two RAID disks. But after my friend accidentally clicked on the "Safely Remove Hardware" icon while trying to disconnect her USB drive, I noticed that the RAID system wasn't working. After a reboot there was suddenly a failure message during the boot screen. When I enter the Nvidia RAID setup utility (F10), I can see that both drives are degraded, and that won't change even if I select them and press R for Rebuild; the only other options are Delete and Exit. When I boot into Windows (XP Pro 32-bit), I can see both my disks with the same data on each of them, but my RAID 1 is broken. It's a relief to see that at least my RAID 1 was active, but it's annoying not being able to rebuild it. Is there a way I can rebuild my RAID 1 without having to delete the array and build it again? I don't want to back up 400 gigs of data and then recopy it to my drives... (Disks: 2 x Seagate ST3500418AS SATA drives)

  • Why is my computer randomly restarting? How can I fix it?

    - by kinglime
    I have a custom-built desktop computer that I've been using for about a year. The main specs are:

        ASUS P8Z68-V LE motherboard
        Intel Core i7 2600K
        Corsair Vengeance 16GB RAM
        ASUS ENGTX570 GPU
        Corsair TX650M PSU

    I was running an overclock of 4.4 GHz on my CPU (I have a Hyper 212 EVO) and 1600 MHz on my RAM, but currently have it turned off due to my issues. I am running Windows 8, but this problem occurred in Windows 7 too. Basically, my issue is that seemingly at random, with no pattern, my PC will reset itself and ASUS Anti-Surge will alert me that something went wrong. This issue is not related to system stress: I can run fine for an hour maxed out in Prime95, then later I can be watching a mere YouTube video when it randomly resets. This has been occurring for about the last two weeks, and it seems to be getting worse. I believe it might be related to the power supply, but when I monitor it in the BIOS and in Windows it appears to be putting out the proper voltages. Also, possibly related or not, my Nvidia drivers frequently fail temporarily and then warn me of some kind of kernel error. If I have to buy a new power supply, that is what I'll do, but I want to make damn sure that's the only issue at hand. Thank you everyone in advance; please help me diagnose the issue and tell me what I can do to fix it. If you need any additional info about my setup, please ask.
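
    One place to start (a sketch using stock Windows tools from an elevated prompt): unexpected resets are logged as Kernel-Power event 41, and GPU driver resets as Display event 4101, and either record may name the component at fault:

        :: the last few unexpected-shutdown events
        wevtutil qe System /q:"*[System[(EventID=41)]]" /f:text /c:5 /rd:true

        :: the last few display-driver reset events
        wevtutil qe System /q:"*[System[(EventID=4101)]]" /f:text /c:5 /rd:true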

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + arrow keys for 13 years to switch virtual desktops in X, I've recently been convinced to switch to the Super keys instead (the Windows key and the context-menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events these keys send, as if they were normal alphanumeric keys. For example, I first noticed in a Google Docs spreadsheet that if I press the Windows key alone over a cell, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees, and editing starts. This caused problems on a collaborative document I was working on: the way Google Docs works, it led to me accidentally erasing the data in a few fields before I realised what was going on. I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that each is a modifier key and doesn't send anything until a second key is pressed. My setup is the following:

        Ubuntu 10.10
        XFCE 4
        Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a press and release of my Super_L (left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
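
    For a key to behave as a pure modifier it must be attached to one of the X modifier buckets; a sketch of checking and fixing that with xmodmap, using the keycodes reported above:

        # show which keysyms currently occupy each modifier
        xmodmap -pm

        # if Super_L/Super_R are missing from mod4, attach them
        xmodmap -e "add mod4 = Super_L Super_R"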

  • Apache forwarding to Tomcat shows a blank page

    - by MNS
    I have an application running on Tomcat at http://www.example.com:9090/mycontext. The host name in server.xml points to www.example.com; I do not have localhost anymore. I am using Apache to forward requests to Tomcat with mod_proxy. Things work fine as long as the proxy path is /mycontext: the ServerName set up in the virtual host is www.abc.com, and http://www.abc.com/mycontext works fine. However, I would like to ignore the context path and simply use http://www.abc.com/ to forward requests to http://www.example.com:9090/mycontext. When I do this, Apache shows me a blank page. What am I missing here? I have not changed anything in server.xml except the default host, which is www.example.com.

        <VirtualHost *:80>
            ServerName www.abc.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://www.example.com:9090/mycontext
            ProxyPassReverse / http://www.example.com:9090/mycontext
        </VirtualHost>

    Thanks
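
    One detail that produces exactly this symptom (a sketch, not a confirmed fix): without a trailing slash, the mapping above turns a request for /foo into /mycontextfoo on the backend. Keeping both sides of the mapping slash-terminated, and rewriting the app's cookie path, usually helps:

        ProxyPass        / http://www.example.com:9090/mycontext/
        ProxyPassReverse / http://www.example.com:9090/mycontext/
        # if the application scopes its cookies to /mycontext:
        ProxyPassReverseCookiePath /mycontext /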

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers that share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However, when trying to run the software, it tries to create lock files and a bunch of other things that their support team can't quite explain, and the ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file lock issues, etc). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software: basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single points of failure, as we would be with a dedicated NFS server going down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
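
    A sketch of the wedge layer described above, with hypothetical paths and subnet; note that mounting an NFS export back from localhost is known to risk deadlock under memory pressure, so this wants careful testing first:

        # /etc/exports on each cluster node (GFS2 mounted at /mnt/gfs2)
        /mnt/gfs2  192.168.0.0/24(rw,sync,no_root_squash)

        # each node mounts its own export for the application to use
        mount -t nfs -o vers=3,hard localhost:/mnt/gfs2 /opt/splunk-pool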

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

        2 tables
        lots of INSERTs
        no UPDATEs
        some DELETEs, once a day at most
        some user-generated SELECTs with JOINs
        a huge data set

    What would an optimal server configuration (software and hardware; I assume for example that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to complete in reasonably little time. I can provide more information about the current setup (tables, indexes, ...) if needed.
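
    As a starting point only (a sketch; exact values depend on available RAM and the PostgreSQL version, and the figures below assume a dedicated 16GB machine), the settings that usually matter for an insert-heavy, read-mostly workload are:

        # postgresql.conf (illustrative values)
        shared_buffers = 4GB              # PostgreSQL's own cache
        effective_cache_size = 12GB       # planner hint: OS cache size
        checkpoint_segments = 32          # spread out checkpoint I/O (pre-9.5)
        wal_buffers = 16MB
        maintenance_work_mem = 512MB      # faster index builds and vacuums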

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access; its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet, in a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
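
    One avenue worth testing (a sketch; FS-Cache exists in the 10.04 kernel, but its interaction with mmap() should be verified before relying on it) is to back the mount with a local disk cache via cachefilesd and the fsc mount option:

        # install and enable the local cache daemon
        sudo apt-get install cachefilesd
        sudo sed -i 's/^#RUN=yes/RUN=yes/' /etc/default/cachefilesd
        sudo service cachefilesd start

        # then add 'fsc' to the NFS options in the autofs map, e.g.:
        #   home  -fstype=nfs,rw,hard,proto=tcp,fsc  192.168.11.52:/home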

  • W2003StdR2 server: DNS dysfunctional!

    - by Tor
    I hate to have to do this, but I feel up that creek with no... well, some of you might know. At the moment my one and only DNS server refuses to do forwarding. The story: this site had 2 servers, one W2003 SBS and one W2003 Std R2. The SBS degraded over a short period of time, and to avoid going down with it I decided to move all data over to the other server. This was of course an AD-integrated site. The move went OK, the Std server was removed from the domain, and the SBS was put to rest. For the time being we decided to run the Std box as a stand-alone server with no AD. We renamed the internal domain to xxx.local, and set the server up with DNS and DHCP, and installed WINS (not activated). DNS forwarding goes to our ISP through a Netgear firewall, with the same address setup as before. So: the DNS server started and all went OK, clients were reconfigured and hooked up, and then, after about a day, internet name resolution stopped working on the server! Nothing had changed, been altered, modified, nothing! What I now get when doing NSLOOKUP is just a 2-second timeout response. I have checked and looked, but to no avail. Anybody seen this behaviour before? And yes, ALL service packs have been applied to the server. I would be much obliged if anyone in here could lend an ear and give advice. Thanks... from Tor in Norway. Today is the 14th, and I still have no resolution to this nagging problem. Anybody else got any advice in the matter? Please?
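
    A few things worth trying at the server's command line (a sketch; dnscmd comes with the Windows 2003 Support Tools, and the forwarder address below is a placeholder for the ISP's resolver):

        :: flush the DNS server's cache
        dnscmd /clearcache

        :: re-register the forwarder
        dnscmd /resetforwarders 203.0.113.53

        :: then query the local service directly
        nslookup www.example.com 127.0.0.1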

  • Can't ping IP on LAN; port forwarding works fine though

    - by Anoop
    I have a Solaris 11 machine running inside the LAN, a default install. I can access the machine and ping it if I SSH into my router (if it matters, the router is running DD-WRT). I cannot ping the Solaris machine by IP address from any other machine inside the LAN, but if I set up port forwarding, everything works perfectly fine. I can also use the port forward from outside the LAN (from my office), which is good and how I want it to be. I can SSH, ping and do pretty much everything else from outside as well as inside, but only as long as I have the port forwarded from my router. Why would I not be able to ping, SSH into, or even access the Solaris 11 machine from within the LAN? I have checked and couldn't find any firewall running on the Solaris 11 box. I even tried disabling every known firewall on the router (DD-WRT; it had something like an SPI firewall running). I even tried setting a static IP for my Solaris box, but all in vain! Please help me understand how and why this happens! Thanks.
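
    Two quick Solaris-side checks (a sketch; net0 is the default interface name on a stock Solaris 11 install and may differ) to rule out IP Filter and to see whether the pings arrive at all:

        # is the IP Filter service online, and are any rules active?
        svcs -a | grep ipfilter
        ipfstat -io

        # watch for ICMP from another LAN host while it pings this box
        snoop -d net0 icmp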

  • What's Keeping My Computer Awake?

    - by phantomdata
    First, the question: how do I figure out what is preventing my Windows 7 computer from going into sleep mode? Second, some background... I've been struggling with this for a few days and am utterly perplexed. I set up sleep mode on my Windows 7 PC a few weeks ago, and all was well: the PC would sleep as expected, and I rested easy knowing that my computer was saving power and some wear and tear on the components (we'll leave the 'is it better to sleep' debate for another thread/day, please don't start it). Well, I noticed the other night that my system stopped ever going to sleep. I set the sleep time down to 1 minute and wandered away from the PC (ensuring that no errant mouse or keyboard movements would occur), and the PC never went to sleep. I've also observed this over longer intervals, such as overnight. I have sleep mode enabled, and of course "Multimedia settings - When sharing media" is set to allow the computer to sleep. "powercfg -lastwake" shows nothing of interest; since the machine never goes to sleep, it can't wake up. "powercfg /requests" shows 3 entries, all "[DRIVER] ?". I assume that 2 of these are my mouse and keyboard, as I've recently used them to run the powercfg command, but I'm at a loss for the third. I've unhooked all USB peripherals save for my keyboard and mouse. Wake-on-LAN is disabled in my BIOS. I know that you can stop all apps from waking/preventing sleep, but I want that ability to remain for the apps that legitimately need to keep the system awake. So: does anyone know of a way to figure out what the third phantom "[DRIVER] ?" in powercfg /requests is?
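
    Two built-in reports that sometimes name the driver that /requests leaves blank (a sketch; run from an elevated prompt):

        :: 60-second sleep/energy diagnostic, written as an HTML report
        powercfg /energy

        :: devices currently armed to wake the machine
        powercfg -devicequery wake_armed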

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for administration; PHP runs over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few permissions problems when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder, but that folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for it. That way, there would be only one user, with all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, and we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach here? Or have I misunderstood something?
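
    For reference, the one-main-domain layout under discussion looks roughly like this in plain Apache terms (a sketch; ISPConfig generates its own vhost files, so this is illustrative only):

        <VirtualHost *:80>
            ServerName my_site.com
            ServerAlias my_site.net my_site.org
            DocumentRoot /var/www/clients/client1/web1/web/my_application
        </VirtualHost>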

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent some considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Up until last week DirectAccess was fully functional and had no issues. This is a recent problem, led up to by a number of consistent crashes on the DA machine when access was attempted; upon reboot all seemed well, until recently. I am not certain whether it is relevant, but before this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Active Directory Lightweight Directory Services (AD LDS) instance, which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service. For good measure I re-bound the network adapters (virtualised through Hyper-V), and DirectAccess claimed to be all happy again. However, as it stands, the machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark on the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, but the problem persists. The intranet side of the VM (CorpNet) seems to be as fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

  • Hyper-V and attaching physical disks [migrated]

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:

        Windows 7 Ultimate
        1TB boot drive (my smallest drive)
        Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB

    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a HomeGroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:

        Server Core 2008 R2 (Hyper-V)
        1TB boot drive (storing VHDs for the boot drives of VMs), possibly in a RAID 1 with my other 1TB drive
        5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM, so HomeGroup can be used for this array

    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs:

        Intel Core 2 Quad Q9550 2.83GHz
        Asus Maximus II Formula (PCI-E x16)
        8GB DDR2 RAM PC2-6400 (yes, I know it's a bit out of date)
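
    For context, attaching a physical disk to a Hyper-V VM as a pass-through disk requires the disk to be offline on the host first; a sketch of the usual steps, with the disk number as a placeholder:

        rem on the Hyper-V host
        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> offline disk

        rem then add the disk to the VM in Hyper-V Manager as a
        rem "Physical hard disk" on its IDE/SCSI controller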
