Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 519/880 | < Previous Page | 515 516 517 518 519 520 521 522 523 524 525 526  | Next Page >

  • Creating a vm using Hyper-V causes the host to BSOD

    - by Arcass
    Hi, Problem description: when I try to create a virtual machine, the host BSODs partway through the process. From the logs it looks to fail/hang on the "Creating new VirtualDisckDriver with new VHD" step. The BSOD error code is SYSTEM_SERVICE_EXCEPTION: STOP: 0x0000003B. When the machine has finished restarting, it looks to have created the VHD and XML files for the VM, but it isn't accessible. I have two servers both behaving in exactly the same way, so I don't believe it's a hardware fault. Has anyone had a similar experience? How did you resolve the problem? NOTES Hardware: HP DL380 G6 BIOS: 2010.03.30 (14 Apr 2010) [latest from HP website] Intel Hyperthreading: Disabled Intel Virtualization Technology: Enabled No-Execute Memory Protection: Enabled Memory check reports no errors OS: Windows 2008 SP2 x64, fully updated Regards, Arcass

    Read the article

  • non-GUI connection to local Hyper-V VM without network

    - by sandro
    I have a virtual machine on Hyper-V Manager (Windows 2008 R2) without a network configured on the VM. From a PowerShell script running on the host Windows server, I would like to query into the OS of that local VM for certain information (e.g. whether a given process has finished). I am using CodePlex's pshyperv module (https://pshyperv.codeplex.com/) to interact with Hyper-V Manager, but the only cmdlet that connects to the VM is 'New-VMConnectSession', which launches a 'vmconnect.exe' connection to the VM. Since vmconnect.exe is essentially RDP, this is not very script-friendly. From within a host's PowerShell script, is there any way to send a command to a local virtual machine's OS and receive output, if no network is configured on the VM? (I believe VMware's 'vmrun' utility has this capability.) Another way to ask this question: does Hyper-V have a non-GUI-based form of vmconnect.exe? (P.S. Not sure if this was more Stack Overflow or Server Fault.)
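
    For context, the main built-in host/guest data channel on 2008 R2 that works with no network in the VM is the Key-Value Pair (KVP) exchange over the VMBus; it exposes the guest's self-reported data (OS name, IP addresses, etc.) rather than arbitrary process queries. A rough C# sketch of reading it (an illustration, not from the post; assumes the root\virtualization WMI namespace of 2008 R2 and Integration Services installed in the guest; "MyVM" is a placeholder):

        // Sketch: read the guest's KVP data over the VMBus (no guest network needed).
        // Assumes a 2008 R2 host (root\virtualization; 2012+ uses root\virtualization\v2)
        // and Integration Services installed in the guest. "MyVM" is a placeholder.
        using System;
        using System.Management;
        using System.Xml;

        class GuestKvpReader
        {
            static void Main()
            {
                var scope = new ManagementScope(@"\\.\root\virtualization");
                var query = new ObjectQuery(
                    "SELECT * FROM Msvm_ComputerSystem WHERE ElementName = 'MyVM'");

                using (var searcher = new ManagementObjectSearcher(scope, query))
                foreach (ManagementObject vm in searcher.Get())
                foreach (ManagementObject kvp in vm.GetRelated("Msvm_KvpExchangeComponent"))
                {
                    var items = (string[])kvp["GuestIntrinsicExchangeItems"];
                    if (items == null) continue;  // guest is not reporting (no Integration Services)

                    // Each item is a small XML fragment describing one name/value pair.
                    foreach (string item in items)
                    {
                        var doc = new XmlDocument();
                        doc.LoadXml(item);
                        string name = doc.SelectSingleNode(
                            "/INSTANCE/PROPERTY[@NAME='Name']/VALUE").InnerText;
                        string data = doc.SelectSingleNode(
                            "/INSTANCE/PROPERTY[@NAME='Data']/VALUE").InnerText;
                        Console.WriteLine("{0} = {1}", name, data);
                    }
                }
            }
        }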

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will form a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a Content Driven Portal requires the development team to consider. The current plan is to deliver the following subjects, each topic enclosed in a separate blog post: Introduction – Introduction to the series of posts and what to expect at the end of the series; Components, part 1 – UCM, Site Studio and a high-level introduction to content templates; Components, part 2 – Page Templates and Navigation model; Components, part 3 – Applied Customization Framework for Content Presenter Taskflows; Scenario 1 – Enable a Portal for runtime administration; Scenario 2 – Enable a Portal for Internationalization; Scenario 3 – Enable a Portal for Content Workflows. Background: This post series has been issued to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most portal installations Oracle WebCenter Portal is involved in consist of a vast majority of content-based pages. Many of these portal projects have run, or will run, into challenges; to mitigate them, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios; they will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a Content Driven Portal. The scenarios were selected based on the most common use cases that we have identified to date.

    Read the article

  • Rails: Law of Demeter Confusion

    - by user2158382
    I am reading a book called Rails AntiPatterns and they talk about using delegation to avoid breaking the Law of Demeter. Here is their prime example: They believe that calling something like this in the controller is bad (and I agree) @street = @invoice.customer.address.street Their proposed solution is to do the following: class Customer has_one :address belongs_to :invoice def street address.street end end class Invoice has_one :customer def customer_street customer.street end end @street = @invoice.customer_street They are stating that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37 In the blog post the prime example is class Wallet attr_accessor :cash end class Customer has_one :wallet # attribute delegation def cash @wallet.cash end end class Paperboy def collect_money(customer, due_amount) if customer.cash < due_amount raise InsufficientFundsError else customer.cash -= due_amount @collected_amount += due_amount end end end The blog post states that although there is only one dot customer.cash instead of customer.wallet.cash, this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out." EDIT: I completely understand and agree that this is still a violation and that I need to create a method in Wallet called withdraw that handles the payment for me and that I should call that method inside the Customer class. What I don't get is that, according to this reasoning, my first example still violates the Law of Demeter because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up the confusion? I have been searching for the past 2 days trying to let this topic sink in, but it is still confusing.

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
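
    A minimal sketch of the chained-upgrade idea described above (the record type, field names and step bodies are illustrative, not from the post):

        // Sketch of the v(n) -> v(n+1) upgrade chain: a v3 record runs 3->4,
        // then 4->5, until it reaches the current version. Names are illustrative.
        using System;
        using System.Collections.Generic;

        class DataRecord
        {
            public int Version;
            public Dictionary<string, string> Fields = new Dictionary<string, string>();
        }

        static class Migrator
        {
            const int CurrentVersion = 5;

            // One step per version bump, registered by the version it upgrades *from*.
            static readonly Dictionary<int, Action<DataRecord>> Steps =
                new Dictionary<int, Action<DataRecord>>
                {
                    { 3, r => r.Fields["unit"] = "default" },      // v3 -> v4: add a field
                    { 4, r => r.Fields.Remove("legacy_flag") },    // v4 -> v5: drop a field
                };

            public static void Upgrade(DataRecord record)
            {
                while (record.Version < CurrentVersion)
                {
                    Action<DataRecord> step;
                    if (!Steps.TryGetValue(record.Version, out step))
                        throw new InvalidOperationException(
                            "No upgrade step from version " + record.Version);
                    step(record);
                    record.Version++;
                }
            }
        }

    Keeping one small step per version bump means each step only has to know about the two shapes on either side of its own boundary, which limits how much of the old class definitions have to stay around.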

    Read the article

  • Automatic server redirect that passes through as the referring site

    - by nsearle
    Hey all, as always, I would like to thank you for your help and assistance in advance. I am looking for a way to automatically redirect a user to a site while maintaining the redirecting site as the referral site. Let me explain the process and what is needed in a step-by-step format. The user clicks on a desired link in an email. The user is navigated to testdomain.com. The user is automatically redirected to test.com/landing. test.com sees testdomain.com as the referral site. Data is gathered via Google Analytics. I am unsure whether this PHP code will take care of it or not - header("Location: http://google.com", true, 303); I could test it, and that is most likely what I am going to do, but I would like to understand a little more as to WHY this would or wouldn't work.

    Read the article

  • HTTP Protocol

    I have worked with the HTTP protocol for about ten years now and I have found it to be incredibly useful for transferring data, especially to remote systems and regardless of the network environment. Prior to the existence of web services, developers used HTTP to screen scrape data off of web pages in order to interact with remote systems, and then processed the data as they needed. I used the HttpWebRequest and HttpWebResponse classes to screen scrape data from various sites that had information I needed when no web service was available. This allowed me to call just about any webpage and grab all of the content on the page. Below is a piece of a web spider that I built about 5-7 years ago. The spider uses the HTTP protocol to request webpages and then parses the data that is returned. At the time of writing the spider I wanted to create a searchable index of sites I frequented.

        // C# 2.0 Framework
        // Creating a request for a specific webpage
        HttpWebRequest webreq = (HttpWebRequest)WebRequest.Create(_Url);

        // Storing the response of the web request
        webresp = (HttpWebResponse)webreq.GetResponse();
        StreamReader loResponseStream = new StreamReader(webresp.GetResponseStream());
        _Content = loResponseStream.ReadToEnd();

        // Adjust the encoding of the response
        string charset = "";
        EncodeString(ref _Content, ref charset);
        loResponseStream.Close();

        // Parse data from the response
        _Content = _Content.Replace("\n", "");
        _Head = GetTagByName("Head", _Content);
        _Title = GetTagByName("title", _Content);
        _Body = GetTagByName("body", _Content);
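
    The GetTagByName helper isn't shown above; one possible shape for it, using a regular expression (an assumption, not the author's original code):

        // A guess at the GetTagByName helper referenced above: returns the inner
        // text of the first matching tag, or an empty string if it isn't found.
        using System.Text.RegularExpressions;

        static class HtmlScraping
        {
            public static string GetTagByName(string tagName, string content)
            {
                Match m = Regex.Match(
                    content,
                    "<" + Regex.Escape(tagName) + @"[^>]*>(.*?)</" + Regex.Escape(tagName) + ">",
                    RegexOptions.IgnoreCase | RegexOptions.Singleline);
                return m.Success ? m.Groups[1].Value : string.Empty;
            }
        }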

    Read the article

  • JavaOne + Develop Registration is Open!

    - by justin.kestelyn
    Welcome to "The Zone". Here's what the new JavaOne + Develop registration Website says: The world's most important developer conferences are creating the world's coolest neighborhood for the developer community. Having been intimately involved in the planning process, I can vouch for that statement. Remember, if either co-located conference - JavaOne or Oracle Develop - are the confines of your interest, you can experience either one in standalone mode, if you like (although there are some areas of common interest, of course). Or, considering that a single Full Conference Pass gives you access to both of them, you can partake in any measure that you like. It's up to you. Either way, you will get access not only to session content and keynotes, but also to the massive OTN Night party on Monday night, to open unconference sessions, and to the legendary Appreciate Night concert (acts TBD) on Wednesday. Furthermore, as is customary, the Oracle Technology Network team will offer a full slate of community-focused activities and goodies while the conferences are running - more details on those as we have them. A GOOD time is ensured for all; I look forward to seeing you there!

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in an XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0 and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls... ComponentManager::Draw(GameTime, SpriteBatch) which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch) The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked: public void Draw(GameTime gameTime, SpriteBatch spriteBatch) { spriteBatch.End(); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, EffectShader, ViewMatrix); // Draw things here that are shaded by the "EffectShader." spriteBatch.End(); spriteBatch.Begin(/* same settings that were set by SceneProvider to ensure the rest of the scene is rendered normally */); } My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
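
    One common workaround is to batch by effect rather than per component, so the scene needs only one Begin/End pair per distinct shader. A sketch (DrawnComponent and EffectShader follow the post; the grouping itself is an illustrative assumption, not the engine's actual code):

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public partial class SceneProvider
        {
            // One Begin/End per distinct effect instead of one per component.
            // A null EffectShader falls back to the default SpriteBatch shader.
            public void DrawComponents(GameTime gameTime, SpriteBatch spriteBatch,
                                       IEnumerable<DrawnComponent> components, Matrix viewMatrix)
            {
                foreach (var group in components.GroupBy(c => c.EffectShader))
                {
                    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                                      null, null, null, group.Key, viewMatrix);
                    foreach (var component in group)
                        component.Draw(gameTime, spriteBatch);
                    spriteBatch.End();
                }
            }
        }

    In XNA 4.0 a custom Effect passed to Begin works with the deferred sort mode, so Immediate mode is no longer required just to apply a shader.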

    Read the article

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly. No matter what I do - cleaning the apt cache, manually installing using dpkg, etc. - I just can't get them to install. What is happening here, and how do I fix it? If I had thought 12.10 would be this much of a hassle I would never have upgraded..... Here is a sample of the output returned from "apt-get -f install": Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ... /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error dpkg: warning: subprocess old pre-removal script returned error exit status 2 dpkg: trying script from the new package instead ... /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 2 /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb /var/cache/apt/archives/pcmciautils_018-8_i386.deb /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb /var/cache/apt/archives/whoopsie_0.2.7_i386.deb /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) It is also just these 6 packages; no other packages have given me this kind of trouble - well, I should say as of now. It was just 5, but then I got an update for Unity, and now unity-common has been added to the troublemakers, which prevents me from further upgrading the actual unity package, as this package is a dependency.....

    Read the article

  • What's the fastest automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually pop it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading (syncing) 2 GB; it could take hours. Copying 2 GB over the network is also slow because we're dealing with 10,000 little files that total 2 GB, not just one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
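
    Since one big file moves far faster than 10,000 little ones, one option is to pack the folder into a single archive, copy that, and unpack it on the other PC. A sketch (paths are placeholders; assumes .NET 4.5+ for System.IO.Compression and a share on PC #2 that PC #1 can reach):

        // Sketch: pack the small files into one archive, move the single file,
        // and unpack on the target. Paths are placeholders; assumes .NET 4.5+
        // (System.IO.Compression.FileSystem) and a reachable share on PC #2.
        using System;
        using System.IO;
        using System.IO.Compression;

        class NightlySync
        {
            static void Main()
            {
                string sourceDir = @"C:\data";                  // on PC #1
                string archive   = @"C:\temp\data.zip";
                string remote    = @"\\PC2\incoming\data.zip";  // share on PC #2

                if (File.Exists(archive)) File.Delete(archive);
                ZipFile.CreateFromDirectory(sourceDir, archive,
                                            CompressionLevel.Fastest, false);
                File.Copy(archive, remote, true);               // one large network copy
                Console.WriteLine("Copied {0:N0} MB",
                    new FileInfo(archive).Length / (1024 * 1024));

                // On PC #2, a scheduled task can then run something like:
                //   ZipFile.ExtractToDirectory(@"C:\incoming\data.zip", @"C:\data");
            }
        }

    Both ends can be wired into Task Scheduler so the whole round trip runs unattended each night.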

    Read the article

  • What is the safest way to remove a swap partition?

    - by user212062
    I am running Ubuntu 12.04 on a 64-bit HP laptop with a 16 GB flash drive. I do not have a working hard drive right now. When I installed Ubuntu, I created a 2 GB swap partition on sdb1. I have since learned that swap partitions are generally a bad idea on flash drives, so I would like to use my swap space for my other partitions. You can see my partition scheme in the link below. I have read that I just have to comment sdb1 out of the fstab file, boot from a GParted live CD, select swapoff for sdb1, delete/merge it with the other partition, and everything's good. But I've also read that messing with sdb1 can change the UUID of sdb2 or sdb3 and cause problems. Is this true? Does initramfs use swap at all? Also, when I get Ubuntu running on my laptop with an internal hard drive, does the swap partition help that much? I have 6 GB of DDR3. Does the rule of 1.5x actual RAM still apply? It seems like quite a bit to me. Thanks for the help! UPDATE: I have removed swap. The process I followed is: right-clicked the swap partition in GParted and selected swapoff; used # to comment the swap partition out of fstab (I tried to boot from a live GParted CD, but I kept getting an error, so I ran GParted in Ubuntu); deleted the swap partition in GParted; unmounted /windows; expanded /windows to take the remaining space; mounted /windows. The / and /windows partitions each kept their own names and UUIDs, and everything is running fine. I have never seen any swap space being used before, and I don't intend to use the hibernate function, so I think removing swap was a good idea.

    Read the article

  • How to use launchd to ensure an application is running?

    - by zneak
    Hello guys, I use Quicksilver, which is a nifty program that really speeds up my work. I've set it to launch when I log in. However, since Snow Leopard, it's become a bit less stable, and once in a while it crashes. And obviously, it won't restart by itself. I'm pretty sure I can use launchd to ensure it's always running when I'm logged in. Is there a good guide/example of how to make launchd restart a process when it's killed, terminated or crashes?
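
    For reference, KeepAlive is the launchd key that relaunches a job whenever it exits. A minimal LaunchAgent sketch (the executable path is an assumption - check where your copy lives), saved to ~/Library/LaunchAgents and loaded with launchctl load:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Minimal LaunchAgent sketch: launchd relaunches the program whenever it
             exits. The executable path is an assumption; adjust for your install.
             Save as ~/Library/LaunchAgents/com.example.quicksilver.plist and run
             "launchctl load ~/Library/LaunchAgents/com.example.quicksilver.plist". -->
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.example.quicksilver</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Applications/Quicksilver.app/Contents/MacOS/Quicksilver</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
            <key>KeepAlive</key>
            <true/>
        </dict>
        </plist>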

    Read the article

  • Logging with Resource Monitor?

    - by Jay White
    I am having sudden spikes in disk read activity, which can tie up my system for a few seconds at a time. I would like to figure out the cause of this before I set my machine to go live. With Performance Monitor I know I can log activity, but this does not show me individual processes that cause a spike. Resource Monitor allows me to see processes, but I have no way to keep logs. It seems unless I have Resource Monitor open at the time of a spike, I will not be able to identify the process causing the spike. Can someone suggest a way to log with Resource Monitor, or an alternative tool that can?

    Read the article

  • How do I enable a disabled Event Notification?

    - by Derick Mayberry
    I have a scenario where I am using external notification to process documents being sent in from the entire Navy fleet. Normally I have no problems, but just a few days ago an administrator changed passwords, my queue processing failed, and I rolled back the transaction with this C# code: catch (Exception) { TransporterService.WriteEventToWindowsLog(AppName, "Rolling Back Transaction:", ERROR); broker.Tran.Rollback(); break; } After which my target queue would continue to fill up, but nothing reached the external activation queue. Does the Event Notification get disabled once a transaction is rolled back? Should I have done a broker.EndDialog here when catching my exception? Also, after my event notification is disabled (if that is actually what's happening), how do I re-engage it? Do I have to drop it and recreate it? Thanks in advance for any help; I love Service Broker and it's working wonderfully except for this bug that I hope to fix soon.

    Read the article

  • needs updated glibc package version 3.4.15 or later for RHEL6

    - by Tejas
    I want to upgrade my currently running applications to the latest versions, but due to a package issue I am unable to install them. I get a common error in them: /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found. When I tried to update the glibc package I got the following output: [root@agastya ~]# yum install glibc Loaded plugins: refresh-packagekit, rhnplugin epel/metalink | 3.8 kB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 5.0 MB 01:33 epel-testing/metalink | 3.8 kB 00:00 epel-testing | 4.3 kB 00:00 epel-testing/primary_db | 295 kB 00:03 rhel-x86_64-server-6 | 1.8 kB 00:00 rhel-x86_64-server-6/primary | 11 MB 02:02 rhel-x86_64-server-6 8816/8816 Setting up Install Process Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version Nothing to do [root@agastya ~]# Do I need to add some more repositories? If yes, how?

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don’t they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple - just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus - and this is a big plus - it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • What are the benefits of running an app server in user space, like Unicorn, as opposed to as root?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space. You just set up Nginx to proxy_pass requests to a unix socket that is connected to Unicorn processes that you fire up under a normal user account. Is there anything to be said about the advantages and disadvantages of these two alternative approaches to running a web app? I mean in terms of ease of administration, stability, simplicity, etc.

    Read the article

  • Installed over 4G RAM on 32-bit OS? [closed]

    - by kai
    Possible Duplicate: 32-bit Windows Server address > 4GB RAM - How? I know that for 32-bit OS, the addressable memory space for each process is "4G" (maybe just 3G in user space...). If I have a 8G RAM, is it correct that all of the processes can still utilize (shared) these 8G memory but each of them are limited to a maximum 4G? Or the whole system only can see and utilize 4G out of 8G and thus having 8G RAM on a 32-bit OS is the same as having 4G RAM on it?

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar into an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know if any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements by the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google’s search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them. (4) Ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Apache configuration to accept all data

    - by ServerDown
    Hi, I have Apache running on port 7979 to talk with a device that sends data to the web server; PHP scripts will later process the data and send reply XML. The problem is that the device sends data like: POST HTTP/1.1 Content-Type:text/xml Content-Length:369 followed by the XML. When Apache sees this it returns a 400 error. Since the device cannot be changed, is there any way to accept the full data sent from the device and write it to a log? Currently Apache simply keeps sending 400 errors back. If there were a way to log the entire XML, or to create a custom handler for the 400 error, then the XML could be read by a PHP script. Looking forward to solutions.

    Read the article

  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows: as a user, I want to take the text file and add a new value at the end of each line which contains the concatenated value of two of the columns. For example, if the file read text_one text_two I need to output the following (preferably to a .txt file): text_one text_two text_onetext_two. My first approach was to ask the vendor supplying the file to do the concatenation before providing the file - the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused. I've looked at building a simple JavaScript application that does this client-side, so a non-technical user could select the file using a file selector. This approach has a few problems: the file could be over a GB in size and so can't be loaded straight into memory (I've tried, and the browser crashes), and there is no means to write a file from JavaScript, so I'd need to output the content to the screen and have the user save it (somehow). I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file; however, I think there is a better way than using JavaScript that will still accommodate the user's lack of technical know-how. Please consider this question to be stack-agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely. Updates: The file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
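
    Since the file is too large for Excel or a browser, a tiny console program that streams the file one line at a time stays within memory limits and could be handed to a non-technical user as a double-clickable .exe. A sketch (column positions and file names are assumptions):

        // Sketch: stream the tab-delimited file one line at a time (so a multi-GB
        // file never has to fit in memory) and append the concatenation of two
        // columns to each row. Column indexes and file names are assumptions.
        using System.IO;

        class AppendConcatColumn
        {
            static void Main()
            {
                const int colA = 0, colB = 1;                    // columns to concatenate
                using (var reader = new StreamReader("input.txt"))
                using (var writer = new StreamWriter("output.txt"))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] cols = line.Split('\t');
                        string extra = cols.Length > colB ? cols[colA] + cols[colB] : "";
                        writer.WriteLine(line + "\t" + extra);
                    }
                }
            }
        }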

    Read the article

  • MSDN / TechNet Key Importer for KeePass 2

    - by Stacy Vicknair
    If you have an MSDN account then, like me, you probably claim keys systematically and just as systematically forget which keys you've used in which test environments! In a meager attempt to help myself track my keys, I created an importer for KeePass 2 that takes in the XML document that you can export from MSDN and TechNet. The source is available at https://github.com/svickn/MicrosoftKeyImporterPlugin.   How do I get my KeysExport.xml from MSDN or TechNet? Easy! First, in MSDN, go to your product keys. From there, at the top right, select Export to XML. This will let you download an XML file full of your Microsoft keys.   How do I import it into KeePass 2? The instructions are simple and available in the GitHub ReadMe.md, so I won't repeat them. Here is a screenshot of what the imported result looks like:   As you can see, the import process creates a group called Microsoft Product Keys and creates a subgroup for each product. The individual entries each represent an individual key, stored in the password field. The importer decides whether a key is new based on the key stored in the password, so you can edit the notes or title for the individual entries however you please without worrying about them being overwritten or duplicated if you re-import an updated KeysExport.xml from MSDN! This lets you keep track of where those pesky keys are in use and have the keys available anywhere you can access your KeePass database!

    Read the article

  • Why is IIS 7.5 flushing file cache very often?

    - by Steffen
    We're running a Win 2008 R2 server with IIS 7.5 for serving image files. It's only used for static content, and file caching has been set up to cache files for 10 minutes. However, IIS frequently flushes the cache completely (seen by using Perfmon). It's not application pool recycling, and it's not because the TTL has expired, so now I'm at a loss :-( I've included a screenshot of the perfmon graph where you can clearly see the issue. Is there anywhere I can see WHY it's doing these flushes? (Note: I'm aware I could maybe detect it by attaching a debugger to the process, but that's not an option because it's a production server, and it cannot handle the slowdown a debugger would cause.)

    Read the article

  • Installed 4GB memory but Windows XP 32 bit only reporting 2GB?

    - by AnthonyWJones
    I've just taken an existing XP Pro 32-bit system that had only 0.5 GB of memory installed and maxed it out to 4 GB. The BIOS reports the 4 GB of RAM; however, when XP is booted and I look at the computer properties, only 2 GB of RAM is reported. Can anyone explain this? Before we go up any blind alleys: the /3GB switch is not the answer here, as I have no need for a single process to use more than 2 GB of memory. I'm wondering if 32-bit XP Pro is deliberately limited to 2 GB. I seem to remember seeing an excellent table on a Microsoft site listing all the various SKUs of Windows and what each one was limited to. However, I can't seem to find that table now.

    Read the article
