Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.

  • Find out what fields are available to IIS 7 Advanced Logging from Modules

    - by Grummle
    You can install the Advanced Logging module for IIS 7. Once installed, you have the option to define new fields from several different sources, one of which is other modules. What I am unable to figure out is how to get a list of the fields that those other modules 'publish'. There are a boatload of modules installed by default, and I have to imagine they publish some data I would care to know about (hopefully UrlRoutingModule publishes what I'm specifically looking for). Also, as an aside: if you know how to write .NET HttpModules that publish custom fields, or know where good documentation on that lives, I'd love to see/hear about it.
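
    On the aside about publishing custom fields: I don't know of a way to enumerate what other modules publish, but one low-tech approach (a sketch, not the Advanced Logging SDK) is to have your own HttpModule surface a per-request value as a response header, since Advanced Logging can define fields sourced from headers. The header name X-App-Data and the class name here are made up:

        using System;
        using System.Web;

        // Sketch: expose a per-request value as a response header so that
        // Advanced Logging can capture it as a header-sourced custom field.
        public class LoggingFieldModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PreSendRequestHeaders += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    // Any value you want logged; a timestamp as a stand-in.
                    ctx.Response.AppendHeader("X-App-Data", DateTime.UtcNow.ToString("o"));
                };
            }

            public void Dispose() { }
        }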

  • "setpci: command not found" in CentOS

    - by spoon16
    I'm trying to configure my Mac Mini running CentOS 5.5 to start automatically when power is restored after a power loss. I understand the following command has to be executed:

        setpci -s 0:1f.0 0xa4.b=0

    When I run that command on my machine, though, I get bash: setpci: command not found. Is there a package I need to install via yum or something? I'm not seeing a clear answer via Google, and the man page for setpci doesn't mention anything. Also, does this command need to be run every time the machine starts, or just once?
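
    For what it's worth, setpci ships in the pciutils package (the same package as lspci), so a sketch of the fix would be:

        yum install pciutils
        setpci -s 0:1f.0 0xa4.b=0

    If the setting turns out not to survive a power cycle, appending the setpci line to /etc/rc.d/rc.local would re-apply it on every boot.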

  • Grant users on a separate domain access to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain access to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am seeing the following:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Object Access
        Event ID:       560
        Date:           3/18/2010
        Time:           11:11:49 AM
        User:           COMPANY\ThisUser
        Computer:       SHAREPOINT
        Description:    Object Open:
          Object Server:        Security Account Manager
          Object Type:          SAM_ALIAS
          Object Name:          DOMAINS\Account\Aliases\00000404
          Handle ID:            -
          Operation ID:         {0,1719489}
          Process ID:           416
          Image File Name:      C:\WINDOWS\system32\lsass.exe
          Primary User Name:    SHAREPOINT$
          Primary Domain:       COMPANY
          Primary Logon ID:     (0x0,0x3E7)
          Client User Name:     ThisUser
          Client Domain:        PRINTRON
          Client Logon ID:      (0x0,0x1A3BC2)
          Accesses:             AddMember RemoveMember ListMembers ReadInformation
          Privileges:           -
          Restricted Sid Count: 0
          Access Mask:          0xF

    Then, four of these in a row:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Object Access
        Event ID:       560
        Date:           3/18/2010
        Time:           11:12:08 AM
        User:           NT AUTHORITY\NETWORK SERVICE
        Computer:       SHAREPOINT
        Description:    Object Open:
          Object Server:        SC Manager
          Object Type:          SERVICE OBJECT
          Object Name:          WinHttpAutoProxySvc
          Handle ID:            -
          Operation ID:         {0,1727132}
          Process ID:           404
          Image File Name:      C:\WINDOWS\system32\services.exe
          Primary User Name:    SHAREPOINT$
          Primary Domain:       COMPANY
          Primary Logon ID:     (0x0,0x3E7)
          Client User Name:     NETWORK SERVICE
          Client Domain:        NT AUTHORITY
          Client Logon ID:      (0x0,0x3E4)
          Accesses:             Query status of service Start the service Query information from service
          Privileges:           -
          Restricted Sid Count: 0
          Access Mask:          0x94

    Any ideas what permissions I need to grant to the user to get them access to SharePoint?

  • IIS6 Multiple SSL websites to a single HTTP website?

    - by docflabby
    Running an IIS6 server on Windows 2003; all the websites use ASP.NET. I have a number of separate HTTP websites: www.domain1.com, www.domain2.com, www.domain3.com. I also have a separate HTTPS website, www.secure.com. These websites all run on the same server. I now wish to integrate the content of www.secure.com into each of the domains in a transparent way, such that each website, despite having its own SSL connection, displays the same secure site. The complication is that www.secure.com needs to know which website the connection came from, in order to apply the appropriate branding. The idea behind this is to have only one secure website, in one location, while keeping each core website's brand; https://domain1.com looks a lot better from a marketing point of view (and avoids users getting confused about what our secure website is):

        SSL www.domain1.com/secure - displays www.secure.com (branded domain1)
        SSL www.domain2.com/secure - displays www.secure.com (branded domain2)
        SSL www.domain3.com/secure - displays www.secure.com (branded domain3)

    What would be the best way of achieving this? I'm open to using additional software if necessary. Would a reverse proxy be suitable for this situation?
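
    If a reverse proxy in front of www.secure.com is used, one way the branding could work is for the proxy to forward the original host name in a header, which the ASP.NET code then inspects. A minimal sketch, assuming an X-Forwarded-Host header (the header name depends entirely on how the proxy is configured):

        // Sketch (ASP.NET): pick branding from the forwarded host, falling
        // back to the host header for direct requests.
        string host = Request.Headers["X-Forwarded-Host"] ?? Request.Url.Host;
        string brand = host.Contains("domain2") ? "domain2"
                     : host.Contains("domain3") ? "domain3"
                     : "domain1";  // default branding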

  • Windows XP SP3 client over NAT to a Windows 2008 R2 SP1 file server disconnection

    - by Patrick Pellegrino
    We just transferred a pilot group from our old(!!) Netware infrastructure to a Microsoft infrastructure. Since then, our users have had problems accessing their files: they all experience disconnections from their mapped drives. The file server is accessed over a WAN connection through a firewall (SonicWall) that sits between the two networks and does NAT. All clients run Windows XP SP3 and the file server is Windows 2008 R2 SP1. On the file server I see many Event ID 2012 entries. Many posts around the Internet suggest a problem between the SMB protocol and NAT. We need a short-term fix so we can continue transferring users from Netware to Microsoft; after that we'll do the network work to remove the NATing. I found this MS KB, http://support.microsoft.com/kb/2444558, which suggests a kind of workaround for Windows 7 clients, but I can't find anything for Windows XP. Can anyone help me with this? We don't want to stop the project and do a network job before migrating. Regards. Update: our few Windows 7 computers don't seem to have this issue.

  • SpamAssassin not working

    - by John
    I set the threshold to 7.5, but this mail still gets automatically marked as spam. Any idea? Thanks.

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on
        X-Spam-Level: *
        X-Spam-Status: Yes, score=5.9 required=5.0 tests=DNS_FROM_OPENWHOIS,
            FH_DATE_PAST_20XX,HTML_MESSAGE,RDNS_NONE autolearn=no version=3.2.
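
    Note that the X-Spam-Status line says required=5.0, so whatever set the 7.5 threshold isn't being read by the instance that scanned this mail. Assuming a stock setup, the site-wide threshold normally lives in local.cf, roughly like this (the path can differ per distro):

        # /etc/mail/spamassassin/local.cf
        required_score 7.5

    After changing it, whatever invokes SpamAssassin (spamd, amavisd, etc.) has to be restarted; also check that it isn't reading a different configuration, such as a per-user ~/.spamassassin/user_prefs with its own required_score.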

  • varnish daemon error: libvarnish.so.1 not found

    - by Max
    In order to try out Varnish for an upcoming project, I installed it on an Ubuntu server using this tutorial: http://varnish-cache.org/wiki/InstallationOnUbuntuDapper. The build process worked without any errors, but I can't start the Varnish daemon. I always get the error message:

        varnishd: error while loading shared libraries: libvarnish.so.1: cannot open shared object file: No such file or directory

    But /usr/local/lib/libvarnish.so.1 clearly exists. How can I tell Varnish to look in that directory and load the library?
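
    Since the build installed into /usr/local/lib, the usual fix is to let the dynamic linker know about that directory. A sketch (the .conf filename is arbitrary):

        # one-off, for the current shell only
        export LD_LIBRARY_PATH=/usr/local/lib

        # or system-wide
        echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
        ldconfig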

  • Virus that duplicates Word documents as .exe

    - by Bob Rivers
    Hi, we are facing a virus problem on our network, but I'm unable to identify it, so we can't deal with it properly. The symptoms are that the virus duplicates a Word document (.doc), generating a new file with the same name but an .exe extension, and after that the virus hides the original file. So when the user clicks the file, it propagates itself. Symantec AV seems to be able to block it: every time the virus tries to generate the .exe, Symantec blocks it, but at that point the original file has already been marked hidden, so the user thinks the file has been deleted. Symantec identifies it as a simple trojan horse. I already started a full scan, but it didn't find anything. I'm trying to learn the virus's name in order to fight it. Does anyone have any kind of information? TIA, Bob
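
    As a hedged aside on the hidden originals: once the AV has blocked the dropper, the .doc files are usually still on disk, just flagged hidden (and often system). Something like this, run from the root of the affected share, should make them visible again (test on a copy first):

        attrib -h -s "*.doc" /s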

  • Locate rogue DHCP server

    - by Farseeker
    I know this is a serious long shot, but here we go. In the past week or so, users connected to a particular switch in our network (there are four dumb switches all connected, and it only affects SOME, not all, users on the one switch) have been getting DHCP addresses from a rogue DHCP server. I have physically checked every cable plugged into the switch in question to make sure that none of them has a router or wifi point attached. I know the IP of the DHCP server, but I cannot ping it, and it does not have a web interface. Does anyone have any suggestions on what I can do to locate it or shut it down? Unfortunately all the switches are unmanaged, and as mentioned, there's no physical device (that I can find) plugged in to anything. It's getting critical, because it's screwing up the PXE boot of a whole bunch of thin clients.
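
    One avenue, sketched for a Windows client that just pulled a bad lease (the IP below is a placeholder): even if the rogue server won't answer pings, the client that spoke DHCP with it should still have its MAC address cached, and the MAC's OUI prefix often identifies the vendor (a stray ADSL router, a VM, etc.):

        ipconfig /all          (shows the rogue "DHCP Server" address)
        arp -a 192.168.1.99    (lists the cached MAC for that address)

    Then look up the first three octets of the MAC in an OUI database to narrow down what kind of box you're hunting, and pull cables one at a time while a client loops ipconfig /release and /renew.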

  • How to set up squid to cache only specific domains?

    - by ???
    For example, I want squid to cache HTTP content only for *.archive.ubuntu.com (which is blocked by the firewall) and not cache any other domains. Also, only LAN (192.168.0.0/16) users may access the cached content, but all users are allowed to access non-cached content.

        User-IP           Dest-Domain              acl     Expect
        ----------------  -----------------------  ------  -------------------------
        192.168.0.0/16    *.archive.ubuntu.com     allow   Cache Proxy, Fast
        192.168.0.0/16    *.other                  allow   Pass Proxy, Slow
        Other             *                        allow   Pass Proxy, Slow
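
    A sketch of the squid.conf pieces that control what gets cached (the acl names are made up, and the LAN-only restriction on cached objects may need more than this):

        acl lan src 192.168.0.0/16
        acl ubuntu_mirror dstdomain .archive.ubuntu.com

        # cache only the Ubuntu mirror, pass everything else through uncached
        cache allow ubuntu_mirror
        cache deny all

        # who may use the proxy at all
        http_access allow lan
        http_access allow all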

  • Invalidating unused ssh keys

    - by JH
    I am using one ssh account for all my Subversion users. They send me their public keys and I put them in the .ssh/authorized_keys file of the svn account; they can then check out code from Subversion over an ssh tunnel. So far everything works fine. The problem, though, is that I want to invalidate keys that have not been used for some time (say, one month). Does anyone know a way to make sshd log the public key when a user signs in? Thanks.
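
    For the logging part: raising sshd's log level makes it record the fingerprint of the key that authenticated each login, which you can then match against the fingerprints of the keys on file. A sketch:

        # /etc/ssh/sshd_config
        LogLevel VERBOSE

    After restarting sshd, successful key logins show up in the auth log (e.g. /var/log/auth.log or /var/log/secure) with a line like "Found matching RSA key: <fingerprint>"; compare that against the output of ssh-keygen -lf user_key.pub to see whose key was used and when.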

  • Dell printer goes offline after second print job

    - by Ac0ua
    The Dell printer goes offline (server connection) after the second print job, although the printer's display says it is ready. If you turn the printer off and on again, you can send one job before it goes back offline (server connection) on the next print job. We have multiple Dell 2330dn printers installed through a print server; only one of them is experiencing this problem. Two different users, two different machines, two different operating systems (Win7 and Vista). The computers have been reset. The Dell printers have a web interface (via their IP address), if that helps. Thanks for any help!

  • IIS7 failover cluster across datacenters

    - by Scott
    Hello, I have servers in two different datacenters, with each datacenter getting static IPs. What I would like to do is set the servers up as IIS7 servers and allow them to fail over from datacenter to datacenter with little (or preferably no) interruption. Servers on both sides run Windows Server 2008 x64 with IIS7 (or 7.5 if needed). I am interested in how to point DNS traffic to the new datacenter without manual human intervention. For example:

        Datacenter A:
          IP: 192.168.1.115
          Servers: Server 2008 x64 w/ IIS 7

        Datacenter B:
          IP: 192.168.1.220
          Servers: Server 2008 x64 w/ IIS 7

        Other information:
          Domain Name: Example.org
          Domain DNS: 192.168.1.115

    If Datacenter A's connectivity went down (broken service line, etc.), how does the traffic know to route to Datacenter B on 192.168.1.220? Thanks, Scott

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration, not the client configuration. I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted. Three questions:

    1. What is the 90-second grace period, and what is it there for?
    2. How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node?
    3. Is it actually possible to migrate the NFS server without having large file uploads drop?
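
    On the first question: the grace period is the window after an NFSv4 server (re)starts during which it only accepts lock/state reclaims, so clients of the previous instance can re-establish their state before new operations are admitted. On kernels that expose them, the lease and grace windows can be shortened before nfsd starts, which makes failover stalls shorter; a sketch (paths and availability vary by kernel and distro, so treat this as a direction to investigate rather than a known-good recipe):

        # on the node about to run the NFS server, before nfsd starts
        echo 10 > /proc/fs/nfsd/nfsv4leasetime
        echo 10 > /proc/fs/nfsd/nfsv4gracetime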

  • Fresh Red Hat Enterprise Linux fails to install httpd using yum

    - by Julian
    I'm trying to install a LAMP stack on a fresh Red Hat server, but yum is misbehaving. Being Linux illiterate, I'm at a loss.

        $ yum install httpd
        Loaded plugins: security
        Setting up Install Process
        No package httpd available.
        Nothing to do

    My yum config:

        $ cat /etc/yum.conf
        [main]
        cachedir=/var/cache/yum
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        distroverpkg=redhat-release
        tolerant=1
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        # Note: yum-RHN-plugin doesn't honor this.
        metadata_expire=1h

        # Default.
        # installonly_limit = 3

        # PUT YOUR REPOS HERE OR IN separate files named file.repo
        # in /etc/yum.repos.d

    Other stuff in the yum.repos.d dir:

        $ ls -lah /etc/yum.repos.d/
        total 12K
        drwxr-xr-x  2 root root 4.0K Feb  4 01:15 .
        drwxr-xr-x 59 root root 4.0K Feb  4 01:28 ..
        -rw-r--r--  1 root root  561 Mar 10  2010 rhel-debuginfo.repo

    What could be going on? I thought "out of the box" RHEL 5.5 would be friendlier :)
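
    A hedged observation: the only repo present is rhel-debuginfo, which usually means the system isn't subscribed to a base channel, so yum has nowhere to find httpd. On a RHEL 5.5 box the RHN-era registration flow looks roughly like this:

        rhn_register          # register the system with RHN (or a satellite)
        yum repolist          # the base rhel-* channel should now appear
        yum install httpd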

  • VirtualHosts pulling from the same site?

    - by Matt
    I have my httpd.conf on Fedora 8, in which I am setting up the virtual hosts. Here is what I have:

        DocumentRoot "/var/www/html"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory "/var/www/html">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    Then below that I am trying to set up vhosts to have multiple sites on the server:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName kadence.tv
            DocumentRoot /var/www/html/
        </VirtualHost>
        <VirtualHost *:80>
            ServerName nacc.biz
            DocumentRoot /var/www/html/nacc/
        </VirtualHost>

    Also, in the /var/www/html/ directory I have the index.php file for the kadence site. When I go to either site I get the index for the kadence site. Any ideas what I am doing wrong? Edit: the full contents of my httpd configuration file are here.

  • Set Error-Pages for all Applications in Tomcat

    - by user38511
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, nothing dynamic like JSP. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried adding the following fragment to the global web.xml (in conf), but no matter what I put under location, it does not show:

        <error-page>
            <error-code>404</error-code>
            <location>/404.html</location>
        </error-page>

    What do I need to do to globally define custom error pages? Thanks!

  • Two domains, two servers, one dynamic IP address

    - by giantman
    As I said, I have two domains, hi.org and bye.net, one dynamic IP address, and two servers. I want to attach one domain, bye.net, to server 1 and hi.org to server 2, using Apache (WampServer 2.0i). I hope someone will be able to answer.

    httpd.conf file additions:

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    vhost file additions:

        NameVirtualHost *:80

        # default
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www/fallback"
        </VirtualHost>

        # Server 1
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www"
            ServerName http://bye.net
            ServerAlias bye.net
        </VirtualHost>

        # Server 2
        <VirtualHost *:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.119/
            DocumentRoot "g:/wamp/www"
            ServerName http://hi.org
            ServerAlias hi.org
        </VirtualHost>

    After doing all this, I fall back to server 1 only: I don't get the page for hi.org, I only get the page for bye.net. I don't even get the default fallback page, which should be served when a person enters the IP address rather than a domain name. I use Windows 7 (server 2) and Windows XP (server 1).

  • Apache 2.4.3, php-fpm, mod_fastcgi and mod_cache

    - by Anjia
    Has anybody successfully configured mod_cache in Apache 2.4 with php-fpm and fastcgi? My cgi config:

        <IfModule mod_fastcgi.c>
            Alias /php5.fastcgi /var/www/fastcgi/php5.fastcgi
            AddHandler php-script .php
            FastCGIExternalServer /var/www/fastcgi/php5.fastcgi -socket /mnt/tmp/fast/php-fpm.sock -idle-timeout 1600 -pass-header Authorization
            Action php-script /php5.fastcgi virtual
        </IfModule>

    My php-fpm config is standard, and I am loading mod_cache and mod_disk_cache in Apache. However, Apache does not seem to cache any content. The debug log file:

        [Fri Sep 07 23:22:59.691333 2012] [cache:debug] [pid 35623:tid 123613201929984] mod_cache.c(161): [client 10.0.0.22:21938] AH00750: Adding CACHE_SAVE filter for /index.html
        [Fri Sep 07 23:22:59.691345 2012] [cache:debug] [pid 35623:tid 123613201929984] mod_cache.c(171): [client 10.0.0.22:21938] AH00751: Adding CACHE_REMOVE_URL filter for /index.html
        [Fri Sep 07 23:23:01.326598 2012] [cache:debug] [pid 35623:tid 123613185144576] cache_storage.c(626): [client 10.0.0.110:5414] AH00698: cache: Key for entity /index.html?(null) is `http://10.0.1.16:8080/index.html?`
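
    One thing worth checking (a sketch; the paths are assumptions): mod_cache only caches URL spaces you explicitly enable, and by default it refuses responses without freshness validators, which dynamic PHP responses often lack:

        CacheEnable disk /
        CacheRoot /var/cache/apache2
        CacheIgnoreNoLastMod On

    Even then, the PHP responses themselves need to be cacheable (appropriate Cache-Control/Expires headers, and no Set-Cookie unless mod_cache is told to ignore it).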

  • What is the difference between "Network install" and "Network Boot" options in virt-manager when installing a new virtual machine

    - by Marwan
    From my understanding of PXE (Preboot Execution Environment), I know that there must first be some negotiation between the booting client and a DHCP server to obtain network parameters (IP address, etc.) so the client can fetch the boot loader and kernel image from the boot server. In other words, aside from being a "virtual" machine, we're effectively talking about a "bare metal" machine, so there must be some pre-boot mechanism for those negotiations to take place, and this is exactly what PXE is all about. When I think about the "Network install" option, I can't figure out how the new VM would be able to fetch the boot images (bootloader and kernel) without the previously mentioned mechanism. So here is a short version of the question: when provisioning a new virtual machine, how do you expect the "Network install" option in virt-manager to work behind the scenes? Many thanks.

  • HTTP Error 503. The service is unavailable

    - by user1671639
    I'm struggling to set up the environment in IIS8. I searched a lot but couldn't find the right solution. I checked the error logs, but got no ideas from them.

    C:\Windows\System32\LogFiles\HTTPERR:

        2013-10-09 09:28:39 192.168.43.205 60172 192.168.43.205 80 HTTP/1.1 GET / 503 2 AppOffline qa.hti.local
        2013-10-09 09:28:39 192.168.43.205 60192 192.168.43.205 80 HTTP/1.1 GET /favicon.ico 503 2 AppOffline qa.hti.local

    Then in Event Viewer, warnings:

        A listener channel for protocol 'http' in worker process '11188' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.

    (the same warning repeats for worker processes '7492', '9088', '9964', and '7716'). I don't understand what the warning means. Then an error:

        Application pool 'qa.hti.local' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    Note: I learned that five consecutive failures cause the app pool to be disabled, and that this limit can be increased. I tried increasing it, but with no success.

    OS: Windows Server 2012
    IIS version: 8

    Please share your thoughts.
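
    A hedged sketch of the rapid-fail protection knobs in play (by default a pool is disabled after 5 crashes in 5 minutes); raising or disabling them only buys debugging time, since the worker process is evidently failing to start:

        %windir%\system32\inetsrv\appcmd set apppool "qa.hti.local" /failure.rapidFailProtectionMaxCrashes:10
        %windir%\system32\inetsrv\appcmd set apppool "qa.hti.local" /failure.rapidFailProtection:false

    The more useful lead is usually the paired error in the Application/System event logs explaining why each worker process died (bitness mismatch, a module that fails to load, a bad applicationHost.config section, etc.).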

  • Perforce Restore From Multiple Checkpoint Files?

    - by AJ
    Hi all, I am working with a very large (~11GB) checkpoint file and trying to do a -jr (journal restore) operation. About halfway through the file, I hit an entry which causes an error to occur. I've been unable to come up with a conventional way to print, edit, and save changes to the offending line, so right now I'm splitting the checkpoint into files of 500k lines each... up to 47 files and counting. My question is, once I have these separate files:

    1. Can I run a journal restore on each one separately to check for errors?
    2. Once fixed, is it necessary to merge them back together again to do my full journal restore?

    Any other ideas on how to tackle this problem would be appreciated. Thanks in advance, -aj
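
    For reference, the splitting step and the standard single-file restore look like this (filenames and paths are placeholders; whether the parts can be replayed independently is exactly the open question above):

        # split the checkpoint into 500k-line pieces
        split -l 500000 checkpoint.ckp ckp.part.

        # the normal full restore, for comparison
        p4d -r /p4/root -jr checkpoint.ckp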

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6), and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible. It needs to write images to that server that are uploaded via the app. Background processes on the db server need to read and write to the same directory, as they perform resizing operations on the uploaded files. Right now none of this works, because the user ids differ between the two systems. I only need this to work for this one application, so it is far too much overhead to put an LDAP system in place. Can I simply change the user id of this one user on one of the systems, or will that cause mass chaos? UPDATE: The fix worked, at least on local devices. Unfortunately the device I have mounted on the main db server still thinks my user id is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
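
    For anyone following along, the usual recipe for aligning the uid (the numbers are taken from the update above; the username is a placeholder) is:

        # change the account's uid to match the other box
        usermod -u 506 appuser

        # fix up local files still owned by the old numeric uid
        find / -xdev -user 502 -exec chown appuser {} \;

    NFS itself carries only the numeric uid, so after the change, remounting the export (or restarting autofs, if it manages the mount) is typically what refreshes what the client reports.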

  • CPU Limits for Application Pools in IIS 7.5

    - by Kyle Brandt
    I see that in IIS 7.5 I can set a CPU % utilization limit, measured over a specified interval, for an application pool. I can also have it kill the worker process if this limit is violated. If I tell it to do this, will the worker process automatically restart after it is killed, or is manual intervention required? Over at Stack Overflow there is a mention that it can be restarted at the completion of the interval...
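
    For concreteness, a sketch of the settings in question via appcmd (the pool name is a placeholder; cpu.limit is expressed in 1/1000ths of a percent, so 50000 = 50%):

        %windir%\system32\inetsrv\appcmd set apppool "MyPool" /cpu.limit:50000 /cpu.action:KillW3wp /cpu.resetInterval:00:05:00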
