Search Results

Search found 20799 results on 832 pages for 'long integer'.


  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying:

        SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status

    then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads indeed take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might then receive even more requests.

    QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full?

    Below is my server.xml in any case:

        <?xml version='1.0' encoding='utf-8'?>
        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <GlobalNamingResources>
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>
          <Service name="Catalina">
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                       enableLookups="false" useBodyEncodingForURI="true"
                       backlog="150" maxThreads="150" executor="tomcatThreadPool"
                       keepAliveTimeout="5000" connectionTimeout="300000" />
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
              </Host>
            </Engine>
          </Service>
        </Server>
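
    A hedged sketch of one way to get refuse-instead-of-queue behaviour (the attribute names are from the Tomcat 6 connector docs; the sizes are illustrative, not a recommendation): keep maxThreads fixed and shrink the accept queue, so that once every worker is busy and the queue is full, the OS refuses new connections outright instead of letting them pile up:

        <!-- HTTP connector: acceptCount is the size of the OS accept
             queue; when all maxThreads workers are busy and the queue
             is full, further connections are refused, not queued -->
        <Connector port="8080" protocol="HTTP/1.1"
                   maxThreads="150"
                   acceptCount="10"
                   connectionTimeout="20000" />

    On the AJP connector the Tomcat 6 equivalent is the backlog attribute already present above; lowering it from 150 has the same effect on the AJP side. One aside, also hedged: exhausting the thread pool by itself should log the SEVERE message but not bring the JVM down, so a shutdown right after it may point to something else (an OutOfMemoryError in catalina.out, for example).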

    Read the article

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Bare-bones data:
    - Virtualization: VMware Workstation 6.5 (latest)
    - Host: Windows Server 2008 x64
    - Guest: Windows Server 2008 x86
    - Host network adapter: Ethernet (see comment)
    - Host network adapter: Wireless (see comment)
    - Guest ethernet network adapter 1: Bridged VMNet (automatic)
    - Guest ethernet network adapter 2: Host-only VMNet

    Comment: my host has LAN and WiFi, but only one at a time. I'm either wired or wireless, never both. So the bridged connection on the VM goes either via wire or air.

    Problem: When I'm wirelessly connected on the host and I access the internet within the VM, my connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever, it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. It works fine when the host is connected via Ethernet. But when the host is connected via WiFi, the connection on the guest behaves as described above. It connects fine. It gets a valid IP from DHCP... Everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2 MB file). In that case it starts downloading and stops after a while. Speed just drops to 0 B/s... Sometimes it picks up again, sometimes it doesn't. The connection still stays up and works. I can ping around with no problem.

    Read the article

  • Rundeck: get verbose output of a command executing on a node

    - by Leon Stafford
    I have Rundeck executing a remote script, written in Python, which uses print statements to report progress, normally like this:

        $ python mytest.py
        PASS: Condition 1 passed
        PASS: Condition 2 passed
        PASS: and so on...

    When I run this via Rundeck, however, it doesn't show me the same print-generated output as above. In Rundeck's most detailed debug output mode, I only receive the following:

        06:31:12 Permanently added 'myremotenode.com' (RSA) to the list of known hosts.
        06:31:12 SSH_MSG_NEWKEYS sent
        06:31:12 SSH_MSG_NEWKEYS received
        06:31:12 SSH_MSG_SERVICE_REQUEST sent
        06:31:13 SSH_MSG_SERVICE_ACCEPT received
        06:31:13 Authentications that can continue: publickey,password,keyboard-interactive
        06:31:13 Next authentication method: publickey
        06:31:13 Authentication succeeded (publickey).
        06:31:13 /cygdrive/c/Program Files (x86)/Mozil...
        06:32:06 Adding reference: ant.PropertyHelper
        06:32:06 Setting project property: sshexec.output -> /cygdrive/c/Prog...

    I know that the remote script is actually executing as usual, because I'm receiving other emails generated by the ~30 min long script. Obviously, I don't want to have to wait 30 minutes to see the result of each print statement within the Python script. How can I get the same level of output in Rundeck as I do in the bash shell directly?
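
    A hedged guess at the cause, with a sketch: when Python's stdout is not a terminal (as under Rundeck's SSH executor) it is block-buffered, so print output only appears when the buffer fills or the script exits. Running the interpreter unbuffered usually restores line-by-line output:

        # force unbuffered stdout/stderr so each print reaches the
        # SSH channel (and Rundeck's log view) immediately
        python -u mytest.py

        # equivalently, via the environment
        PYTHONUNBUFFERED=1 python mytest.py

    If the script itself can be edited, calling sys.stdout.flush() after each print achieves the same thing without changing how it is launched.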

    Read the article

  • Where can I find a useful multi-language Unicode font for Mac OS X?

    - by Stephen Jennings
    On every browser I've tried (Firefox, Safari, Chrome, and Omniweb), when I go to a web page containing somewhat less-common characters, I can't see the glyphs. For example, on the Wikipedia page for the Bengali Language, the very first line contains a string of squares; on Windows, I can see the Bengali writing. Firefox does display code points on the Coptic Language article, but not Bengali. I'm not sure why. On Windows, as long as I have the Arial Unicode MS font installed, these characters fall back to that font and display properly. Mac OS X doesn't seem to ship with a font containing these Unicode characters (it has Arial Unicode MS, but it must be a subset of the Windows version because Bengali doesn't display in that font). I checked on my Snow Leopard DVD and I installed "Additional Fonts" from the Optional Installs package, but I'm still missing many languages. Is there any good, free font that contains a large collection of languages? I know creating fonts is difficult and time-consuming, but it seems like including at least one font like this with operating systems should be standard by now.

    Read the article

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives rip the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy? But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g. if you go from ARCserve to Backup Exec to CommVault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware. Old tapes might not be barcoded. Might not be compatible with the new library, etc. etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or... when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

    Read the article

  • Protocol to mount a FAT32 network filesystem on Linux with the ability to lock files (not advisory locks)

    - by nagul
    I have a FAT32 filesystem sitting on a NAS storage device (NSLU2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but neither seems to support proper locking. More specifically, I am unable to save files to the mounted drive through GnuCash, KeePassX etc., which makes the share fairly useless. Is there a protocol that allows me to achieve this? Note that the NAS storage device is running a Linux OS, so I can run pretty much any protocol that has a Linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a FAT32 system over the network using Samba? Or is advisory locking the best you get with a network-mounted FAT32 file system? I've thought of trying sshfs, but I've not found any indication that this will solve my problem.

    Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" NSLU2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to NTFS, HFS etc. is fine, as long as I can mount it on Linux and lock files.
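
    One avenue worth testing, sketched under the assumption that the applications are failing on byte-range lock requests (a common symptom with GnuCash and KeePass on CIFS shares); the share and mount-point names below are illustrative:

        # 'nobrl' tells the CIFS client not to send byte-range lock
        # requests to the server, so applications that insist on
        # locking can still save files to the FAT32-backed share
        sudo mount -t cifs //nslu2/share /mnt/nas \
             -o user=me,uid=$(id -u),gid=$(id -g),nobrl

    This trades real locking for the appearance of it, so it is only safe if files are not edited from two machines at once.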

    Read the article

  • EEE Box "dropbox" server 24/7

    - by microspino
    I'd like to create a mini dropbox and print server on a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better).

    Dropbox mini server: I mean I will have a 90 GB dropbox on every computer on my LAN syncing with it, and the one on it syncing to the web.

    Print server: I have a Samsung A4 small laser printer, an HP 500 DesignJet plotter, a Samsung multifunction machine (fax/print/scan/copy), a modern HP color A3 DeskJet printer and an HP LaserJet A4 color printer. All of them need to be connected to this mini server.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to let people use it from/to their computers through the mini server.

    I thought of a recent EEE Box machine because I heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be:
    - if you have something similar running for a long time;
    - if you disagree with this hardware choice and would suggest some other device;
    - if you see any issues with my printing setup;
    - anything else ;)

    My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • Java max heap size: how much is too much?

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

    I'm on a small EC2 instance which has 1.7 GB RAM. I have the following JAVA_OPTS:

        -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. In the last one I did (which took 25 s to complete) I had GC log lines such as:

        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    About 300 of them on a single request!!! Now, I don't totally understand why it's always GC'ing from ~28 MB down to 0 over and over.
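
    A hedged starting point rather than a definitive tuning (the numbers are illustrative for a 1.7 GB instance): leave real headroom for the OS, pin the heap size, and actually enable CMS, since -XX:+CMSClassUnloadingEnabled is a no-op without it and the DefNew entries in the log show the serial collector is in use:

        # sketch of revised JAVA_OPTS -- all flags exist in Sun/Oracle
        # JDK 6, but the sizes need validating against the app's
        # actual live set
        JAVA_OPTS="-Xms768m -Xmx768m \
                   -XX:MaxPermSize=256m \
                   -XX:+UseConcMarkSweepGC \
                   -XX:+CMSClassUnloadingEnabled \
                   -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

    The hundreds of tiny DefNew collections of a ~30 MB young generation also suggest trying an explicit young-generation size (e.g. -Xmn128m) so each request's short-lived objects don't force constant small collections.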

    Read the article

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded, and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup", rather than "Local" as it should be. How can I change this to make my user profile accessible?

    The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users and then used mklink.exe to create a directory symbolic link c:\Users -> d:\Users. Worked like a charm since I did it.

    Today, I made a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and then rebooted again. Same issue.
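
    A commonly cited repair path, sketched here on the assumption that the profile directory itself is intact: a "Backup" status corresponds to the profile's registry entry having been sidelined with a .bak suffix. Under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, find the SID key whose ProfileImagePath points at the affected profile, remove the .bak suffix from the key name (renaming away any duplicate key without the suffix), then reset its flags (<SID> is a placeholder):

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v State /t REG_DWORD /d 0 /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /v RefCount /t REG_DWORD /d 0 /f

    A reboot afterwards should make Windows load the real profile again; given the mklink redirection in play here, it is also worth confirming that the c:\Users link still resolves before logging on.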

    Read the article

  • Windows, never "lose focus" of the current window

    - by Mazura
    I want the task bar to light up orange and never lose focus to anything. If I'm installing something and then go play a game, at some point the game will drop out to the finished installation. Also, if I'm installing multiple programs at once, my 'next' button can all of a sudden become the "click here to install this crappy toolbar" button of another program's installer. Of course there are settings for some programs to not "lose focus" or "stay on top", but I really want Windows to handle it. If it's somehow an exe called, say, Taskswtich.exe, I could possibly use Process Blocker, but I'm assuming it's part of a function call or some such. For XP I found this: How to disable auto focus of opened Windows applications? But what about Windows 7? And this old post, Preventing applications from stealing focus, with a bunch of long answers that say "no". I'd appreciate this not being merged with a 4-year-old question. I'd like to avoid 3rd-party software. This is 2014, don't we know how to hack windows yet?
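
    For what it's worth, a hedged sketch of the registry knob usually suggested for this; it raises the interval during which other processes may not steal the foreground, though Windows honours it inconsistently (SetForegroundWindow has documented exceptions, and installers that use always-on-top windows bypass it entirely):

        REM per-user setting, value in milliseconds; requires logoff/logon
        reg add "HKCU\Control Panel\Desktop" /v ForegroundLockTimeout /t REG_DWORD /d 200000 /f

    When focus stealing is blocked this way, the blocked window's taskbar button flashes orange instead, which matches the behaviour asked for here.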

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a set up is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
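
    On the primary question, a hedged summary plus a sketch: ZFS checksums let it detect corruption even on a pool built from a single md device, but it can only repair what it has redundancy for. With one "device" it keeps duplicate copies of metadata by default; data blocks can opt in per dataset (the pool and dataset names below are illustrative):

        # single-vdev pool on top of the md RAID5 array
        zpool create tank /dev/md0

        # store two copies of every data block in this dataset, so
        # scrubs can repair checksum failures (at ~2x space cost)
        zfs create -o copies=2 tank/archive

        # periodic integrity pass
        zpool scrub tank

    Note the write hole itself still lives below ZFS in the md layer; copies=2 only lets ZFS repair blocks the array hands back corrupted, and md's own crash-consistency measures remain a separate concern.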

    Read the article

  • Need help diagnosing my machine

    - by Tom Collins
    I have something that just slows my computer to a crawl sometimes. Not running anything big. Yesterday all I had running (besides background apps) were Firefox and Windows Explorer, and I could barely even switch screens. Nothing shows up in the task manager as hogging CPUs. I have all non-essential services stopped (MySQL and MSSQL) unless I need them. I made some restore points not long ago, but they disappeared. This is a development machine with a LOT of apps installed, so I really, really do not want to re-install Windows. So, what I'm looking for are ideas or tools I can use to help diagnose this problem. The only clues I have are that this started right after I: installed Office 2013 (with Office 2010 still installed as well), installed Visual Studio 2012 (also keeping 2010 as a co-install), and installed MSSQL 2012 (upgrade from 2008, no co-install). Also, the computer runs fine in Safe Mode. I've just run out of ideas of what to check. Any help / suggestions would be much appreciated. Thanks. P.S. I'm running Win 7 Pro (x64). Office is also 64-bit. Visual Studio & MSSQL are 64-bit if that option was available (not sure).

    Read the article

  • Windows 8.1 does not boot after CHKDSK command

    - by sepehr
    I had a problem with Windows 8.1 high disk usage, so I searched forums to solve it. A voted answer suggested using CHKDSK as follows: run a command prompt as administrator and type the code snippet below:

        CHKDSK /f /v /b C:   (I can not remember accurately)

    CHKDSK printed: "Cannot run CHKDSK right now. Would you like to schedule drive C to be checked the next time Windows starts? (Y/N)" My response was "Y", please!

    Following this suggestion not only didn't solve my problem as expected but also added another one! The next time the system booted, after Windows authentication I could only see a black screen and the mouse pointer. So I force-shut-down the system and tried to start Windows again. This time Windows got stuck in "Scanning and repairing" at 22% for around 3 hours, so I got tired and forced a shutdown again.

    - Is CHKDSK the source of the "Scanning and repairing" problem, or are the two unrelated?
    - Is there any hope of overcoming this problem without re-installing Windows?
    - Can anyone else run CHKDSK on newer versions of Windows without problems?
    - Is CHKDSK effective but inefficient (it finally gets to the end but takes a long time)? If yes, how much time does it take?

    I also have Linux Ubuntu 14.04 installed alongside Windows.
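
    A hedged way out, assuming the machine can still reach a command prompt (for example via the Windows recovery environment's "Command Prompt" option): the boot-time check is just a scheduled autochk pass, and it can be cancelled before Windows attempts it again:

        REM exclude C: from the check scheduled at next boot
        chkntfs /x C:

        REM later, to restore the default boot-time checking behaviour
        chkntfs /d

    That only skips the check; if the volume really is dirty, letting a full "chkdsk /f C:" run to completion (which can legitimately take many hours on a large or damaged volume) is still the safer long-term fix.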

    Read the article

  • Name resolution not working with IPv6 on CentOS

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because it's trying to use IPv6, since ping google.com works, curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, because they might be using another tool for name resolution than the one used by curl.

    I managed to fix this on workstations by completely disabling IPv6 following tutorials like this one / hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would like not to mess up, understand what is going on, and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed to help understand what is going on.

    The current network configuration is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one is working fine for both of them, so if I remove A from my resolv.conf everything works fine!

    Regards, Olivier
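
    Two hedged things to try, given the symptoms (ping resolving while curl stalls is a classic signature of the parallel A/AAAA lookups glibc performs); both are standard options on CentOS 6:

        # /etc/resolv.conf -- send the A and AAAA queries one after
        # the other; some DNS servers and firewalls silently drop one
        # of the parallel pair, which stalls getaddrinfo-based clients
        options single-request

        # disable IPv6 system-wide without rebuilding anything
        # (persist via /etc/sysctl.conf if it helps)
        sysctl -w net.ipv6.conf.all.disable_ipv6=1

    Since a misbehaving first nameserver is already implicated here, simply removing A from /etc/resolv.conf on this server, as was done on the workstations, is also a legitimate fix rather than a workaround.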

    Read the article

  • Firefox does not upgrade properly (infinite installation loop without success)

    - by Mehper C. Palavuzlar
    I've been using Firefox for a long time, but now I have a problem with the upgrade process. As you know, Firefox automatically checks for the latest version and downloads it, and when it is ready to be installed it starts the installation process as usual. Last month, when I clicked on "OK, install v3.6.2", the process started but couldn't finish properly. FF gave a message like this: "Installation cannot be completed, please restart Firefox to retry". When I restarted FF, the same thing happened, and the process entered an infinite loop. I restarted my PC and clicked on the FF icon, and again the installation process began and went into the infinite loop. My solution to this problem was to download FF 3.6.2 and manually install it after killing firefox.exe. Today, FF proposed that I install v3.6.3, and again the same problem occurred. I manually downloaded FF 3.6.3 and installed it successfully. Anyone experiencing this problem? In case you want to know, I did not install any new addons in the last few months, and my FF operates without problems. I'm using Windows 7 Home Pro x64.

    Read the article

  • Set up a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN using my user account on the new Amazon EC2 machine. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good.

    The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the services we also run (like IIS) give 404 errors until the 6-8 minutes have passed. It's almost like the domain controller is attempting to reach the other domain controllers first, and after 6-8 minutes it falls back to the one located on the local machine. I don't think that's what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN when VPN was enabled.

    In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.
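
    Two hedged diagnostics before changing anything (both tools ship with the Server 2008 R2 AD DS role), to confirm whether startup really is blocking on the unreachable colo DCs, e.g. waiting on replication partners or Group Policy:

        REM summarize replication health and which partners are unreachable
        repadmin /replsummary

        REM full verbose health pass on this DC
        dcdiag /v

    If the colo DCs will eventually be decommissioned rather than revisited, the usual sequence is a proper dcpromo demotion while the VPN is up; forcibly removing their leftover entries with ntdsutil's metadata cleanup is the documented fallback, but as noted above it is premature while the VPN link is still needed.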

    Read the article

  • Curious: What makes some CPUs better than others? [closed]

    - by Zizma
    I have been wondering about this for a long while now and was hoping someone here could answer it pretty easily. If I was looking for the most powerful CPU, what should I really be looking at? There are so many different parameters of a CPU, and I want to know what each thing does and what really matters. Basically this:

    - What is the deal with cores? If I take optimized applications out of the mix, would it theoretically be better to get a quad-core 1.0 GHz CPU or a single-core 4 GHz CPU?
    - What is the difference between, say, a Sandy Bridge CPU and an Ivy Bridge CPU? If they both had the same clock speed and number of cores, would the Ivy Bridge perform better? Does an older Xeon with a clock speed and core count equal to a new i7 really perform worse/slower?
    - Does size matter? Why would I go with a 22nm CPU over a 32nm one when the size difference is so trivial?
    - What about the cache? When does the cache come into play with performance?

    Read the article

  • EEE PC dropbox server running 24/7

    - by microspino
    I'd like to create a mini dropbox and print server on a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better).

    Dropbox mini server: I mean I will have a 90 GB dropbox on every computer on my LAN syncing with it, and the one on it syncing to the web.

    Print server: I have a Samsung SCX 4521F (fax/print/scan/copy), Samsung ML2010, HP LaserJet P1006, HP Color LaserJet CP1215, HP OfficeJet Pro K8600, and HP DesignJet 500. All of them are now connected using little print servers, and I want to get rid of those by hooking everything to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC to let me use the software that comes with it; the mini server would save me on this too.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to let people use it from/to their computers through the mini server.

    I thought of a recent EEE Box machine because I heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be:
    - if you have something similar running for a long time;
    - if you disagree with this hardware choice and would suggest some other device;
    - if you see any issues with my printing setup;
    - anything else ;)

    My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • Windows Vista claims wireless key is the wrong length

    - by humble coffee
    A family member of mine is house sitting and has been given the details of their wifi. The access point is an Airport Express, it has WEP encryption (I think) and they've been given a passphrase to use. I know it's a passphrase and not the encrypted key as it's an English word. The passphrase is 10 characters long. The problem is that Vista complains that it's not a valid key as it must be a 5 or 13 character non-hex key or a 10 or 26 character hex key. (From what I've read this suggests the encryption is WEP?) I've found a couple of suggested solutions, but I'm not actually at the house at the moment so I wanted to make sure I have a good chance of getting it to work when I'm there but have no internets to ask. Solution 1: Vista needs to be told explicitly what kind of encryption and key is being used. Specify in the connection settings that you are using WEP and that it is a "shared key". Solution2: Try converting the passphrase to hexadecimal using an ASCII-hex converter and entering that.
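
    A worked check of the length arithmetic, plus a hedged pointer: in a hex WEP key, every character of the real ASCII key is written as two hex digits, so a 5-character ASCII key corresponds to a 10-digit hex key (40-bit WEP) and a 13-character ASCII key to a 26-digit one (104-bit WEP). For example:

        # ASCII -> hex, two hex digits per character
        printf '%s' "hello" | xxd -p     # prints 68656c6c6f

    A 10-character passphrase therefore converts to 20 hex digits, which is not a valid WEP key length either, so solution 2 cannot work as written. That suggests the 10-character word is an Apple-style passphrase that the AirPort hashes into the real key; if so, the hex key to type into Vista would have to be read out of the AirPort admin/utility software on a Mac (it can display the network's equivalent password) rather than derived by hand.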

    Read the article

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via Webmin. Users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1 GB or exceed 2-3 GB, which is way out of range for any HTTP-based uploader. So we turn to Java or Flash - can they do it? From within the web browser, establish an FTP connection and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (and write?) the local disk, that is delivered in a web browser, and can move single files of 2 GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable", covers at least Win+Mac, and can be easily configured by lay users in a hurry.

    Read the article

  • mod_disk_cache: permanently caching images and disabling recurring header updates

    - by user135532
    I am trying to get mod_disk_cache to permanently cache images retrieved from an image server on the webserver using ProxyPass. The image is retrieved correctly from the server and is served from the cache on further requests, but the webserver still calls the image server and updates the cached headers. Because of load concerns, I need it to never call the image server again for a specific URL once it has been cached, or to extend the refresh time for as long as possible. The webserver is IHS 7.0; the modules are mod_disk_cache.so, mod_cache.so, mod_proxy.so, version 2.2.8.0.

    The following is from my httpd.conf:

        ProxyPass /webserver/media/images/ http://imageserver.com/ws/media/images/

        # Caching pictures
        <IfModule mod_cache.c>
          <IfModule mod_disk_cache.c>
            CacheDefaultExpire 2628000
            #CacheDisable
            CacheEnable disk /webserver/media/images/
            CacheIgnoreCacheControl On
            CacheIgnoreHeaders Cookie Referer User-Agent X-Forwarded-For X-Forwarded-Host X-Forwarded-Server Accept-Language Accept Host
            CacheIgnoreNoLastMod On
            CacheIgnoreQueryString Off
            #CacheIgnoreURLSessionIdentifiers
            CacheLastModifiedFactor 10000000.1
            #CacheLock on
            #CacheLockMaxAge 5
            #CacheLockPath
            CacheMaxExpire 1576800
            CacheStoreNoStore On
            CacheStorePrivate On
            CacheDirLength 2
            CacheDirLevels 3
            CacheMaxFileSize 1000000
            CacheMinFileSize 1
            CacheRoot c:/cacheroot2
          </IfModule>
        </IfModule>
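
    A hedged observation with a sketch: mod_cache revalidates against the backend whenever a cached entity is no longer "fresh", and freshness is capped by CacheMaxExpire (1576800 s, about 18 days, here) no matter what the other directives say. So one way to approach "cache forever" is to raise that cap and give the images an explicit far-future lifetime, e.g. via mod_expires (directive names are standard Apache 2.2; the one-year figure is illustrative):

        # allow entities to stay fresh for up to a year
        CacheMaxExpire 31536000

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/jpeg "access plus 1 year"
            ExpiresByType image/png  "access plus 1 year"
            ExpiresByType image/gif  "access plus 1 year"
        </IfModule>

    With a long freshness lifetime in hand, mod_cache should serve from disk without contacting the image server until that lifetime expires; a literal "never again" isn't something the 2.2 module offers, as disk_cache has no unconditional serve-stale switch.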

    Read the article

  • System32 files can be deleted in Windows 2008 but not in Windows 2003 [closed]

    - by Harvey Kwok
    I have been using Windows 2003 for a long time. It has a wonderful feature; I don't know the name of it, but it works like this: you can rename or delete some important files inside C:\windows\system32, e.g. kerberos.dll, and after a while the deleted files are automatically recovered. I think this is because those files are critical enough that Windows cannot survive without them. However, in Windows 2008 this feature is gone. Instead, all the files in System32 are owned by TrustedInstaller. But as an administrator, I can still take ownership of the files and then delete them. Windows 2008 won't recover the deleted files, and hence the system is screwed the next time it reboots. So I wonder why Windows 2008 dropped that wonderful feature. Did that auto-recovery feature also suffer from some issues? Does Windows 2008 have some other features that can prevent this type of disaster from happening?
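
    For context, hedged but reasonably well documented: the Windows 2003 behaviour described is Windows File Protection (WFP), which silently restored protected files from a cache; Vista/2008 replaced it with Windows Resource Protection (WRP), which prevents modification up front via the TrustedInstaller ACLs instead of repairing after the fact, so taking ownership deliberately defeats it. Manual repair is still available on both systems:

        REM scan protected system files and restore any that were
        REM deleted or replaced (the on-demand equivalent of the
        REM automatic restore WFP performed on Windows 2003)
        sfc /scannow

    So the feature wasn't dropped so much as inverted: block-by-default plus on-demand repair, instead of allow-then-auto-restore.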

    Read the article

  • How do I connect a 2008 server to a 2003 server active directory?

    - by Matt
    Our DC is running Windows Server 2003. I've just set up Windows Server 2008 and have terminal server running on it. When setting the terminal server permissions, it was able to allow a group name that was read from the domain. In the DC, the new terminal server shows up as a computer in the domain. I can also log in as a user within the domain, even though that user doesn't exist locally on the new server. However, when I go to set sharing permissions on the new machine, it doesn't show my domain as a location. Instead it is only looking at location "machinename" and not allowing the domain to be seen or added. Is there something I'm missing?

    OK, lots of errors in the event log. We have this:

        The winlogon notification subscriber is taking long time to handle the notification event (Logon).

    Followed by this:

        The winlogon notification subscriber took 121 second(s) to handle the notification event (Logon).

    Followed by:

        The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has succesfully processed. If you do not see a success message for several hours, then contact your administrator.

    I think this might be the same problem I'm having: http://serverfault.com/questions/24420/primary-domain-controller-slow

    Solved. The issue was that I had changed from DHCP to static and put in the wrong DNS server IP, i.e. the firewall instead of the DC/DNS server.
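
    The self-diagnosis above (a member server pointing at a non-DNS host for DNS) is the classic cause of this symptom set. A hedged one-liner for verifying the fix, usable from any domain member (the domain name is a placeholder):

        REM the DC's DNS should answer with the domain's LDAP service records
        nslookup -type=SRV _ldap._tcp.dc._msdcs.example.local

    If that query returns the 2003 DC, domain locations should appear in the share-permissions picker; if it fails, the machine is resolving against something that doesn't host the AD zone.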

    Read the article

  • Outlook 2007 font sizes

    - by Flack
    Hello. Something really strange seems to have happened to my Outlook 2007. Everything had been working fine for a long time, but at the end of today, all of a sudden, all of the fonts in Outlook are messed up. The font size of mails I write is huge (I am not zoomed in), and the font sizes of the buttons are big too, specifically the "Send", "To", and "Cc" buttons. I tried changing the font sizes through Outlook, but some of the buttons on the "Mail Format" tab in Options are not working, mainly the "Stationery and Fonts" button: I hit it, but no window opens. This is all happening on my x64 machine. I took a look at my x32 machine, which also has Outlook 2007 installed, and everything is OK there. Below is a link to an image comparing the broken, large-font Outlook (top of picture) and the normal, working Outlook; the text in the mails I compose is also abnormally large in the broken Outlook. Big font Outlook buttons. Any ideas? This came out of nowhere after a few months of no problems. Thanks.

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20 KB/s for local network file transfers. If I transfer a very small file (less than 100 KB), it starts quickly and then slows down to <20 KB/s; all subsequent network file transfers are slow, and a reboot is needed to reset this. If I transfer a large file, it gets stuck on "calculating" for a long time and then begins at <20 KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1, with Realtek gigabit LAN from the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both through Ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction, outbound from the desktop, regardless of whether I initiate the file transfer action from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8 MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). Inbound network file transfer I can get around 10-15 MB/s. I am hoping this community has some insight for me to troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated, thank you in advance.
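
    Two hedged, reversible things to test for asymmetric slow sends on Windows 7 with Realtek NICs (both are frequently implicated in exactly this outbound-only pattern):

        REM disable TCP window auto-tuning for a test
        netsh interface tcp set global autotuninglevel=disabled

        REM revert if it makes no difference
        netsh interface tcp set global autotuninglevel=normal

    The second candidate is Large Send Offload: in Device Manager, under the Realtek adapter's Advanced properties, disabling "Large Send Offload (IPv4/IPv6)" makes the CPU segment outgoing packets itself, which often cures broken-offload stalls on these chipsets at negligible cost on a desktop.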

    Read the article
