Search Results

Search found 4 results on 1 page for 'mhenry1384'.


  • Why are 32-bit application pools more efficient in IIS? [closed]

    - by mhenry1384
    I've been running load tests with two different ASP.NET web applications in IIS. The tests are run with 5, 10, 25, and 250 user agents on a box with 8 GB RAM running Windows 7 Ultimate x64; the same box runs both IIS and the load test project. I did many runs, and the data is very consistent. For every load, I see a lower "Avg. Page Time (sec)" and a lower "Avg. Response Time (sec)" if I have "Enable 32-bit Applications" set to True in the Application Pools. The difference gets more pronounced the higher the load. At very high loads, the web applications start to throw errors (503) if the application pools are 64-bit, but they can keep up if set to 32-bit. Why are 32-bit app pools so much more efficient? Why isn't the default for application pools 32-bit?
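
    For reference, the "Enable 32-bit Applications" switch corresponds to the enable32BitAppOnWin64 attribute of the application pool and can also be toggled from the command line. A minimal sketch, assuming a pool named "DefaultAppPool" (the pool name here is only an example):

        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true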

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object from standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, can I have Release() call a MyDispose() method on my COM object?

    My code to declare the object (C++/CLI):

        [Guid("57ED5388-blahblah")]
        [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)]
        [ComVisible(true)]
        public interface class IFoo
        {
            void Doit();
        };

        [Guid("417E5293-blahblah")]
        [ClassInterface(ClassInterfaceType::None)]
        [ComVisible(true)]
        public ref class Foo : IFoo
        {
        public:
            void MyDispose();
            ~Foo() { MyDispose(); }                    // This is never called
            !Foo() { MyDispose(); }                    // This is called by the garbage collector.
            virtual ULONG Release() { MyDispose(); }   // This is never called
            virtual void Doit();
        };

    My code to use the object (native C++):

        #import "..\\Debug\\Foo.tlb"
        ...
        Bar::IFoo setup(__uuidof(Bar::Foo));  // This object comes from the .tlb.
        setup.Doit();
        setup->Release();  // Explicit release; not really necessary since Bar::IFoo's destructor will call Release().

    If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. Even when I explicitly call Release(), my Release() override is never invoked. I would really like it so that when my native Bar::IFoo object goes out of scope, it automatically calls my .NET object's dispose code. I would think I could do it by overriding Release() and calling MyDispose() when the reference count reaches 0, but apparently I'm not overriding Release() correctly, because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to Dispose of a .NET object.
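
    For context, a minimal sketch of the explicit-dispose workaround the question already mentions (adding MyDispose() to the IFoo interface and having the native caller invoke it before Release()). The FooGuard wrapper below is purely illustrative, not part of the original code, and assumes MyDispose() is exposed through the generated .tlb wrapper:

        // Hypothetical native RAII guard: disposes the managed object, then releases it.
        class FooGuard
        {
        public:
            explicit FooGuard(Bar::IFoo *p) : m_p(p) {}
            ~FooGuard()
            {
                if (m_p)
                {
                    m_p->MyDispose();  // run managed cleanup deterministically
                    m_p->Release();    // then drop the COM reference
                }
            }
            Bar::IFoo *operator->() const { return m_p; }
        private:
            Bar::IFoo *m_p;
            FooGuard(const FooGuard &);             // non-copyable
            FooGuard &operator=(const FooGuard &);
        };

        // Usage: cleanup happens automatically when the guard goes out of scope.
        // FooGuard setup(rawFooPointer);
        // setup->Doit();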

    Read the article

  • Prevent OCUnit tests from running when compilation fails

    - by mhenry1384
    I'm using Xcode 3.2.2 and the built-in OCUnit test support. One problem I'm running into is that every time I do a build, my unit tests are run, even if the build failed. Let's say I make a syntax error in one of my tests: the test fails to compile, and the last successfully compiled version of the unit tests is run. The same thing happens if one of the dependent targets fails to build - the tests are still run, which is obviously not what I want. How can I prevent the tests from running if the build fails? If this is not possible, then I'd rather have the tests never run automatically; is that possible? Sorry if this is obvious, I'm an Xcode noob. Should I be using a better unit testing framework?

    Read the article

  • How to track down a Blue Screen of Death triggered by a (usermode) application

    - by mhenry1384
    We have a .NET application consisting of mixed managed and unmanaged code. We have a number of reports of users getting a BSOD while using our application. These blue screens happen on different versions of Windows - mostly XP, but one user claims it happens on Windows 7. Some users see it happening when doing one thing, others see it happening when doing something completely different. Of course, we cannot reproduce the problem. Needless to say, I'm stumped. A user-mode application shouldn't be able to blue-screen the OS, so presumably we are hitting a bug in some kernel-mode component these machines have in common - buggy antivirus software, perhaps? Does anyone have any tips on how to track something like this down? We don't have access to a computer where this is happening, so we wouldn't be able to hook up a kernel debugger or anything like that.

    Read the article
