Search Results

Search found 8379 results on 336 pages for 'article'.


  • glassfish timeout

    - by Stefano
    Environment: Windows 2008 Server Edition, NetBeans 6.7.1, GlassFish 2.1, Apache 2.2.15 for win32.

    Original problem (almost fixed): an HTTP/1.1 GET request that sends data fails if I wait for more than 30 seconds. What I did: I added these lines to Apache's httpd.conf:

        # Timeout: The number of seconds before receives and sends time out.
        Timeout 9000
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        KeepAlive On

    Then I went to the GlassFish admin console (localhost:4848), under Configuration > HTTP Service, and set the request timeout to 9000 seconds (it was 30) and the standby time to -1 (it was 30 seconds).

    Problem: I cannot get GlassFish to accept a timeout longer than 2 minutes for a GET request. I found an article about GlassFish settings, but I can't find WHERE those parameters should go, or whether they would work at all. Can anybody help me raise this timeout? Maybe it's a different setting entirely?

    New attempted solution: in the admin console, under Configuration > Thread Pools ("thread-pool-name"), I changed the idle timeout from 120 seconds to 1200 seconds, then restarted the GlassFish service (both from Administrative Tools and with asadmin). It still goes idle after 120 seconds; even restarting the whole server changed nothing. Could it be a PostgreSQL setting, or the NetBeans connection to PostgreSQL through GlassFish?
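
    For reference, this is the kind of asadmin change I was expecting to make (the dotted attribute names are my assumption from the GlassFish v2 domain.xml schema; please correct me if they are wrong):

        # inspect the current request timeout (default 30s in GlassFish v2)
        asadmin get server.http-service.request-processing.request-timeout-in-seconds

        # raise the request and keep-alive timeouts, then restart the domain
        asadmin set server.http-service.request-processing.request-timeout-in-seconds=9000
        asadmin set server.http-service.keep-alive.timeout-in-seconds=1200
        asadmin stop-domain domain1
        asadmin start-domain domain1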

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange on our source SBS 2003 server, I'm getting these warnings:

    Event 5020: "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]"

    Event 5006: "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]", for the Public folder, First storage group, and Recovery storage group.

    I followed the TechNet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions. If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails give an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so nothing is really lost. Can I just remove these folders and let them be created anew?
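
    In case it helps, this is roughly how I was planning to inspect and remove the folders from the Exchange 2010 Management Shell (a sketch; \Archived mails is our own folder, and I would only delete folders we created ourselves, not the system tree):

        # list public folders and where they think their replicas live
        Get-PublicFolder -Identity "\" -Recurse | Format-List Name, ParentPath, Replicas
        Get-PublicFolder -Identity "\NON_IPM_SUBTREE" -Recurse | Format-List Name, ParentPath

        # remove our own, unused folder tree (not the system folders)
        Remove-PublicFolder -Identity "\Archived mails" -Recurse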

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update - I found that doing a manual detach/attach, following the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)", got around this issue. I'll just create a script to detach and reattach, and do the file copies manually. Any info on how to overcome the problem with the wizard would still be helpful for the future.

    I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot be copied if they contain views into other databases that have not yet been copied to the target system. The generated log file says the step "failed with the following error: 'Invalid object name'", referring to the database named in the view. If I first copy just the database referenced by the view, and then copy the database containing the view in a separate step, it succeeds. However, some databases have views into each other, so I can't fix this just by adjusting the order in which the copies occur. Is there any way to ignore this error and just let everything copy?
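
    The manual detach/attach script I ended up sketching looks like this (the database name and file paths are placeholders for our real ones):

        USE master;
        GO
        EXEC sp_detach_db @dbname = N'SalesDb';
        GO
        -- copy SalesDb.mdf / SalesDb_log.ldf to the new server by hand, then:
        CREATE DATABASE SalesDb
            ON (FILENAME = N'D:\Data\SalesDb.mdf'),
               (FILENAME = N'D:\Logs\SalesDb_log.ldf')
            FOR ATTACH;
        GO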

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow TechNet instructions for deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where you request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority, because I don't have an OID and don't think I should need one for a simple SSTP deployment. The certificates produced by the Certification Authority's "Issue" command only have a 1-year validity period; I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text area on the Advanced Certificate Request page, which appears to be generated by an old ActiveX control. That means I can only do this using the workarounds in the article that I linked at the top, and only in Internet Explorer.

    Update: this question may be moot, since self-signed keys do not appear to work when I try them with Windows 8 as the VPN client. The certificates self-created by the technique shown here carry no Certificate Revocation Server URLs, so the client reports "The revocation function was unable to check revocation" and the VPN connection fails.
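
    The closest thing I have found so far is changing the validity period on the CA itself with certutil (from a Microsoft KB article, as I understand it; five years is just the value I want):

        certutil -setreg ca\ValidityPeriodUnits 5
        certutil -setreg ca\ValidityPeriod "Years"
        net stop certsvc
        net start certsvc

    This should apply to certificates issued from then on, not to ones already issued, if I've read it right.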

  • SQL Server Replication Backup

    - by user18039
    We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. To protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database: one-way replication. The solution has worked well in test.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN's "Strategies for Backing Up and Restoring Snapshot and Transactional Replication", but I think they are overkill for my solution. In fact, my current thinking is that we simply continue backing up the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, it's no big deal: we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article are only the way to go for a more complex replication topology than mine, e.g. multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob
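
    Concretely, my current thinking is nothing fancier than this, plus possibly the 'sync with backup' option I've read about on the publication database (a sketch; names and paths are placeholders):

        -- keep backing up the OLTP publisher exactly as before
        BACKUP DATABASE OltpDb TO DISK = N'\\backupserver\sql\OltpDb_full.bak' WITH CHECKSUM;
        BACKUP LOG OltpDb TO DISK = N'\\backupserver\sql\OltpDb_log.trn';

        -- optionally make the log reader wait for log backups, so the backups
        -- and the replicated stream can never disagree
        EXEC sp_replicationdboption @dbname = N'OltpDb',
             @optname = N'sync with backup', @value = N'true';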

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our CMS system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." While you do have to log into the CMS system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also, it's a 500 error rather than a 403. I'm sure I must be missing something obvious here: will the virtual directory use the permissions defined in the parent application, or those of the sub-application to which it is pointing? Or is there some other permission I may have missed? Sorry, that was a bit long; thanks for reading all the way down here! (I must also point out that I'm pretty new to server management.)

    Edit: we also have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.
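
    If it helps in diagnosing, my next step was going to be turning on detailed errors so the real cause of the 500 shows up (a sketch; I'm assuming this goes in a web.config placed under the stuff virtual directory):

        <configuration>
          <system.webServer>
            <!-- show the real error text instead of the generic 500 page -->
            <httpErrors errorMode="Detailed" />
          </system.webServer>
        </configuration>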

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it looks like an I/O subsystem limitation, but I'm skeptical. Perfmon I/O stats on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that, surprisingly enough, it is not specifically a wait on I/O. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means, and/or what I can do to diagnose or eliminate it?
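
    For the record, this is how I've been watching the wait, via the wait-stats DMV:

        -- snapshot of backup-related waits accumulated since the last restart
        SELECT wait_type, waiting_tasks_count,
               wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE N'BACKUP%'
        ORDER BY wait_time_ms DESC;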

  • Why does windows (file) explorer try to connect to port 80 (http) instead just using smb?

    - by Erik
    Background: on an almost freshly installed PC I get a message along the lines of "Windows cannot find some-file-server-name. Check the spelling and try again..." when trying to access any file share.

    Troubleshooting so far: pinging works, both by IP and by name; the almost identical PC next to this one can access the file server; everyone else can access the file server; the PC in question cannot access other open file shares either, but it can connect to the internet.

    And now for what I think is the interesting part: running Wireshark with "ip.addr == local.ip.add.ress and ip.addr == server.ip.add.ress" tells me that it tries to connect over HTTP. The server replies, but after a few messages back and forth it stops. The other machine, of course, just uses SMB. I guess port 80 means it is defaulting to WebDAV, but I haven't been able to find anything that could cause this. Googling it, the closest thing I found was this: http://www.techrepublic.com/article/get-vista-and-samba-to-work/6353849 - but then again, that was an XP PC, and I wasn't able to connect to other native Windows shares (and I tried the solution anyway, and it didn't work).
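
    On a hunch that the WebDAV redirector is answering before SMB gets a chance, I'm considering stopping the WebClient service and retesting; this is just a guess on my part:

        sc query WebClient
        sc stop WebClient
        sc config WebClient start= disabled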

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.

    Directories - Permission: 771, Owner: user:uploaders. Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w.

    Files within the web root - Permission: 664, Owner: user:uploaders. Explanation: they will be uploaded and changed by different users, so both owner and group need w+r. The web server only needs to read files, so r only.

    Upload directories - Permission: 771, Owner: user:webserver. Explanation: when files are uploaded, Apache needs to be able to write to the directory. I figure it is safer to change the group to webserver, giving Apache sufficient privileges (all uploaders also belong to this group and keep the same permissions) while preventing "others" from writing to the folder.

    Uploaded files - Permission: 664, Owner: user:webserver. Explanation: after uploading, Apache might need to delete files, but that is no problem because it has w+r on the folder. So there is no need to make the file itself any more accessible than r for the group.

    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
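
    To make it concrete, this is the shell script I'd run to apply the policy (the paths and the owning user are examples from our layout, not fixed names):

        # web root: uploaders own everything; Apache only needs r/x via "others"
        find /var/www/site -type d -exec chown someuser:uploaders {} \; \
                           -exec chmod 771 {} \;
        find /var/www/site -type f -exec chown someuser:uploaders {} \; \
                           -exec chmod 664 {} \;

        # upload tree: group webserver, so Apache (and all uploaders, who are
        # also members of webserver) can write
        chown -R someuser:webserver /var/www/site/uploads
        find /var/www/site/uploads -type d -exec chmod 771 {} \;
        find /var/www/site/uploads -type f -exec chmod 664 {} \;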

  • How to prevent the command prompt from closing after execution?

    - by Sk8erPeter
    My problem is that in Windows, there are command-line windows that close immediately after execution. To solve this, I want the default behavior to be that the window is kept open. Normally, closing can be avoided with three methods that come to mind:

    1. Putting a pause line after batch programs, prompting the user to press a key before exiting
    2. Running the batch files or other command-line tools (even service starting/restarting etc., with net start xy or similar) from within cmd.exe (Start - Run - cmd.exe)
    3. Running the programs with cmd /k, like this: cmd /k myprogram.bat

    But there are other cases, in which the user: runs the program for the first time and doesn't know that it will run in the Command Prompt (Windows Command Processor), e.g. when running a shortcut from the Start menu (or from somewhere else); or finds it a little uncomfortable to run cmd.exe every time, and doesn't have the time/opportunity to rewrite the commands everywhere to put a pause after them or otherwise avoid exiting.

    I've read an article about changing the default behavior of cmd.exe when opening it explicitly, by creating an AutoRun entry and manipulating its content in these locations (the AutoRun items are String values):

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor\AutoRun
        HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor\AutoRun

    I put "cmd /d /k" in as a value to give it a try, but this didn't change the behavior of the cases mentioned above at all; it only changed the behavior of the command-line window when opening it explicitly (Start - Run - cmd.exe). So how does this work? Can you give me any ideas to solve the problem?
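
    For a single shortcut, the only workaround I know is to edit its Target so the window is started with /k (the script path here is hypothetical):

        %ComSpec% /k "C:\scripts\myprogram.bat"

    What I'm after is making that behavior the default everywhere, without touching every shortcut.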

  • Power surge PC damage: How can I test all components of my PC without access to a second computer?

    - by Doug T.
    Ever since we had some crazy power surges last week, my 64-bit Windows 7 PC has been acting strange. My USB network adapter disconnects from the wireless and can't detect the signal; I have to disable/re-enable the adapter to detect it again. My wife has also reported that the PC has rebooted a few times while I'm not sitting at it. Today I finally caught the reboot while I was using the PC, and got a blue screen of death with stop code 0x00000109: "Modification of system code or a critical data structure was detected." I followed the advice at the linked article and ran a memory test. I used memtest86, and it has already found around 300,000 errors in 8 GB of RAM. Now I'm worried: what are the odds this is isolated to just my memory and not a system-wide problem? Isn't there a good chance that many other components are fried? More importantly, how can I test those other components? Are there tools similar to memtest that I can use to test my motherboard/video card/power supply? If these are vendor-specific, is it typical for vendors to provide testing tools?

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I've found this link from AMD with the drivers for Windows 7: http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx I also know that AMD has stopped supporting the Radeon HD 4000 series and older: http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

    Now to the problem. If I install the 12.6 driver, Windows sticks with the "basic display adapter", which is bad for 3D games like Minecraft - it now runs really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver might fix that, so I follow these steps:

    1. Extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql)
    2. Right-click the "basic display adapter" in Device Manager, then "Update driver"
    3. Browse my PC for the driver, and choose "Have Disk" with "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF"
    4. A big list of drivers appears, and the nearest one to the HD 2100 is "Radeon HD 2350 Series"

    My questions: why isn't "Radeon HD 2100 Series" listed (or where is it listed)? In theory it should be: the first link above states "This article applies to the following configuration(s): (...) AMD Radeon HD 2000 Series". Am I doing something wrong?
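
    One check I thought of: grep the extracted INFs to see whether the HD 2100 is mentioned at all; if no INF carries its hardware ID, the Have Disk list can never show it (just my assumption about how the list is built):

        rem list which legacy INFs mention the HD 2100 at all
        findstr /i /m "2100" "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF\*.inf"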

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C drive) to at least two other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but that seems rather more tedious. I want to move the two Program Files folders to one partition on the same HDD, and Users/ProgramData to yet another partition on the same HDD.

    I have done a bit of research on this: booting into Audit Mode; using the RoboCopy command to copy folders after booting from my Windows 8 USB drive; creating NTFS junctions/symbolic links; registry edits; and doing it all automatically with an autounattend file that Windows Setup processes before the user is ever booted in for the first time. I tried this morning, and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found/opened even if I click on them directly (Regedit, for example). Also, I can't run the Command Prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only RoboCopy-ed Program Files and Program Files (x86), created junction points for them, and edited the registry in the relevant locations; this is what I'm left with. I also found a blog article which describes how to do this for Windows 7.

    So, where should I go from here, and where can I find more information? How can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once everything is moved, where do I install programs to - do I point them at C:\Program Files / C:\Program Files (x86), or at the junctioned/symbolic-linked partition? I plan to test in VMware virtual machines from here on, while keeping a default install for daily tasks, until things work correctly.
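
    For reference, the per-folder recipe I've been following (run from the Windows 8 setup USB's command prompt, so nothing is in use; the drive letters are how they appear in that environment, which may differ from the installed system's):

        robocopy "C:\Program Files" "D:\Program Files" /E /COPYALL /XJ
        rmdir /S /Q "C:\Program Files"
        mklink /J "C:\Program Files" "D:\Program Files"

    The /XJ switch skips existing junction points, so the copy doesn't recurse into itself.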

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server, with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or to be out of date. And every article keeps linking back to a Howto from rubyonrails.org that no longer exists. The sense I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway.

    So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache and Mongrel/Passenger, using ARR and UrlRewrite. Is anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
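
    This is the UrlRewrite rule I have sketched so far for the site's web.config (assuming ARR's proxy mode is enabled and Mongrel listens on port 3000; both of those are assumptions on my part):

        <rewrite>
          <rules>
            <rule name="RailsReverseProxy" stopProcessing="true">
              <match url="(.*)" />
              <action type="Rewrite" url="http://localhost:3000/{R:1}" />
            </rule>
          </rules>
        </rewrite>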

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup, and I'm a bit confused about the different memory types. The advantage of ECC is clear: single-bit error correction. When it comes to registered memory it seems vaguer, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However, I can't find any quantification of this. What I'm wondering is: is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If yes, at what point (number of modules or GB of memory) do the improvements become noticeable? For a specific example, the HP ProLiant DL120 G6 server manual states that the maximum supported memory configuration is 16 GB unbuffered (4x4 GB) or 12 GB registered (6x2 GB). In this case I'd rather have the extra 4 GB of memory if the reliability difference is negligible.

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other?

    UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to this Microsoft TechNet article, which states: "It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single label names or unregistered suffixes, such as .local, is not recommended."

    Combining this with mrdenny's advice, I think the right approach is to use either: a registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.), or a subdomain of an existing public domain name that will never be used publicly (e.g. corp.mycompany.com). The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains; you don't want to use a registered name or subdomain that the marketing department later wants for some public campaign.

  • How do I re-configure grub options

    - by Ash
    I've searched this topic and I can't find an article with a plausible solution to my problem. I installed Windows 7 first, with 100 GB of disk space, and created the necessary partitions via Windows. Then I installed Ubuntu 12.04 on the remaining 400 GB. During the Ubuntu installation I put the Ubuntu boot loader on /dev/sda3, and Windows (as expected) never gave me a pre-boot choice of which OS to start. So I re-installed Ubuntu on that /dev/sda3 partition, overriding the Windows 7 loader. Now when I boot Windows 7, it runs GNU GRUB again - an infinite loop. How can I reconfigure GRUB to say: /dev/sda holds the boot loader, /dev/sda2 is Windows, /dev/sda3 is Ubuntu? Re-installing Windows and my partitions isn't an option, and purchasing software for Windows isn't an option (there's a reason I use Linux; it's not because it's free, it's because you don't have to install lots of shit to get stuff working, and overall it's a robust OS).
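
    From a 12.04 live CD, this is what I was planning to try (assuming /dev/sda3 really is the Ubuntu root partition; corrections welcome):

        sudo mount /dev/sda3 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        # after rebooting into Ubuntu, let os-prober pick up Windows:
        sudo update-grub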

  • Changing farm account in Sharepoint 2010

    - by user55709
    After changing the farm account to a domain user account, I got the following error when trying to access the Central Administration page: "Cannot connect to configuration database". Once I realized the headache might not be worth it, I decided on a reinstall, using the SP user-account guidelines here: http://technet.microsoft.com/en-us/library/ee662513.aspx

    After getting everything up, I get an "Access denied" error when using the designated farm account under Central Admin > Manage Service Accounts. If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account, it lists me as "System Account", not the domain account I logged in with. Am I missing something, or is this normal behavior? Am I not supposed to log in with the farm account? When I log in with the setup account (also a domain account) I can access everything on the site with no errors. The only difference between the two accounts is that the setup account has local admin privileges on the SharePoint farm server; according to the article cited, those privileges are not necessary for the farm account.
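
    For the original change, the only documented-looking path I found was stsadm's updatefarmcredentials operation (I'm assuming it still applies to 2010; run from the 14-hive bin directory, placeholders for our account and password):

        stsadm -o updatefarmcredentials -userlogin DOMAIN\spfarm -password <newpassword>
        iisreset /noforce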

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    I have a Windows Server 2008 system with 12 GB of RAM. If I list all processes in Task Manager and sum the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5 GB that should be "in use". However, Task Manager's Performance tab reports that this server has 11 GB in use. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the applications, but I'm struggling to find out which one. The server's memory keeps filling up, eventually forcing us to reboot the machine to clear it. I've been reading up on how RAM assignment works on Windows Server:

    RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
    What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
    Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html

    But I fear I'm stuck for ideas at the moment.
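
    In case my manual summing is the problem, this is how I've been totalling process memory in PowerShell:

        # total working set of all processes, in GB
        Get-Process |
            Measure-Object -Property WorkingSet64 -Sum |
            ForEach-Object { "{0:N2} GB" -f ($_.Sum / 1GB) }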

  • Profile not loaded correctly (Cannot access registry)

    - by xaav
    Every so often I log on and get the following message: "User profile was not loaded correctly. You have been logged on with a temporary profile. Changes you make to this profile will be lost when you log off. Please see the event log for details or contact your administrator." This almost always happens when somebody else has been on the computer for a while and then I log on. It never used to happen, but now it happens pretty often. My profile is not permanently corrupted - all I have to do is restart the computer - but it annoys me, and I would like to fix it.

    I was curious about the cause, so I looked in the Event Log and found that the root of the problem is that the ntuser.dat file in the profile I am logging on to is locked at logon time. This results in the current user's registry hive not being loaded, and so the profile fails to load. I found a Microsoft article that mentions this exact issue: http://support.microsoft.com/kb/960464/ The problem is that I do not want to delete the profile, and the issue does not come up every time I log on - only when somebody else has been on for a long time before me. What could be locking this file? Is there any way to get a process list without logging on, so that I can identify which process has the file locked? Any other suggestions?
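
    To find the locker, my current plan is Sysinternals handle.exe from an elevated prompt right after the failed logon (assuming the lock is still held at that point):

        handle.exe -u ntuser.dat

    The -u switch shows the owning user alongside each process holding a matching handle.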

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case1. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though, not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turn it on... and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send it back now and get the quietest card in my price range?

    1 One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I only mention it because I suspect it could drop the overall output a few decibels, but obviously not that many.

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
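
    For completeness, the registry hack the article describes boils down to this (unsupported as far as I can tell, which is exactly why I'm hoping there's a real setting):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f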

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer, from my workstation (A). Both VM and A run Windows 7 Enterprise. Apparently I need to start the Remote Debugger Service (RDS) on VM as an administrator, and apparently I also need RDS to run as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the Remote Desktop session, with an account VMUSER in a domain called VMDOMAIN. I can start RDS as administrator, but then the RDS process is owned by VMUSER, and that's not good enough. I can also run RDS as DOM\Tewr, but then not as administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough either, because the process is still not run elevated. How can I run the RDS process as DOM\Tewr and "as administrator" while logged on to Windows as VMDOMAIN\VM? (Note: I have tried creating an account with the same credentials/password as VMUSER, as hinted in the MS article above, but with no luck...)
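
    The nearest workaround I've found so far is Sysinternals PsExec, whose -h switch is documented to run the process with the account's elevated token; the msvsmon path below is from my VS 2010 install and may differ on yours:

        psexec.exe -u DOM\Tewr -h -i "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger\x64\msvsmon.exe"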

  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile, and RequireJS, with Apache, ColdFusion 8, and MySQL 5.0.88 server-side. The app works fine until I try to run it in fullscreen mode on iOS (add an icon to the home screen, then launch the app from there with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag breaks jQuery Mobile's AJAX navigation: the AJAX request fails, and the requested page is loaded as a new page, thereby restarting the app on every page change.

    I have chased this through the whole front end, starting from RequireJS, through jQuery Mobile's AJAX navigation, down to the AJAX request being made in jQuery: xhr.send( ( s.hasContent && s.data ) || null ); In a regular browser this works, no problem. In fullscreen mode it fails (readyState = 0, empty response). I have found this article, which argues that fullscreen mode behaves like a separate browser instance with different HTTP strings; on ASP.NET this results in the browser not being identified by the server, and only basic browser capabilities being assumed (e.g. no JavaScript).

    I'm a little lost as to where to start looking for causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something handled out of the box by ColdFusion or Apache? Question: where should I start looking for probable causes of fullscreen mode breaking AJAX navigation, if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for inputs!
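
    Client-side I can at least detect the mode; I've been branching on navigator.standalone like this while debugging (jQuery Mobile's ajaxEnabled flag is the only knob I know of, so disabling it is just my fallback idea, not a fix):

        // true only when launched from a home-screen icon on iOS
        var standalone = ("standalone" in window.navigator) && window.navigator.standalone;
        if (standalone) {
            console.log("fullscreen web-app mode: cookies/session are not shared with Safari");
            // $.mobile.ajaxEnabled = false; // fall back to full page loads if needed
        }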
