Search Results

Search found 37294 results on 1492 pages for 'oracle general support topics'.


  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or classic ASP, and that is the topic of this blog post.

    Native and Managed Modules
    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there is a host of native modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more involved, because both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which - unlike the core HttpRuntime classes in ASP.NET - receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility: because every request passes through the ASP.NET pipeline when managed modules (or handlers) are attached, there are possible performance implications, since running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules
    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true" >
          </modules>
        </system.webServer>

    The interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether managed modules fire for every request. Realistically, though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition attribute on a particular module. With the flag set to the default of true, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is hit even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You would probably expect that non-ASP.NET requests immediately stop being funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I register a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even with runAllManagedModulesForAllRequests="false", the module is fired. Not quite what you'd expect.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all IIS requests that are passed through from IIS. The moral of the story is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events
    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. Although those events are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that with runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean?
    Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it.
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully at whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC's extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this handler registration is already present in Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with classic ASP requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix for ExtensionlessUrlHandler to work properly with MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is no longer required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests
    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and a full Response object for you even for non-ASP.NET content. So even for static files, and even for classic ASP for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
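
    Here's a minimal sketch of what that looks like in practice (the type name follows the SharewareMessageModule registration shown earlier; the event and the content-type filter are illustrative choices, not the article's own code):

        using System;
        using System.Web;

        public class SharewareMessageModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Post-handler event: Request/Response properties are meaningful here
                // even for static files and classic ASP content.
                app.PostRequestHandlerExecute += OnPostRequestHandlerExecute;
            }

            private void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                HttpContext context = ((HttpApplication)sender).Context;

                // Filter first and exit early: with runAllManagedModulesForAllRequests="true"
                // this fires for images, css, js and every other request as well.
                if (!string.Equals(context.Response.ContentType, "text/html",
                                   StringComparison.OrdinalIgnoreCase))
                    return;

                context.Response.Write("<!-- processed by SharewareMessageModule -->");
            }

            public void Dispose() { }
        }
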
    As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs when your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET.

    Read the article

  • I need access control within the same network/VLAN

    - by Sadiq ali
    Hi, I have a single network/VLAN and I want to block some traffic and allow other traffic within that network. Is this possible using an L2 or L3 switch? If so, which switches support this feature and what are the commands to configure it? I have already tried access lists applied to an Ethernet port, but an ACL applied to a port acts on the traffic coming in on that port, whereas I need it to act only on outgoing traffic as defined in my ACL. Do you have any suggestions?
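
    One hedged sketch, assuming a Cisco Catalyst switch that supports VLAN access maps (VACLs); the addresses and VLAN number are placeholders. Unlike a port ACL, a VACL is applied to the VLAN itself, so it filters traffic bridged between hosts inside the VLAN regardless of which port it enters or leaves:

        ip access-list extended BLOCK-HOSTS
         permit ip host 192.168.1.10 host 192.168.1.20
        !
        vlan access-map INTRA-VLAN 10
         match ip address BLOCK-HOSTS
         action drop
        vlan access-map INTRA-VLAN 20
         action forward
        !
        vlan filter INTRA-VLAN vlan-list 10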

    Read the article

  • Connect to localdb using Sql Server management studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc. but had no luck. The following led me to just connect to it - kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx High-Level Overview After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties: LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it and operates on data using the same T-SQL language as provided by SQL Express. LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location. LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down. LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
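
    In Management Studio, the same name goes into the Server name box: (localdb)\v11.0. A minimal connection sketch from code (the file path and database name are made up for illustration):

        using System;
        using System.Data.SqlClient;

        class LocalDbDemo
        {
            static void Main()
            {
                // LocalDB spins up a child sqlservr.exe on first connect and attaches the file.
                var connectionString =
                    @"Data Source=(localdb)\v11.0;" +
                    @"AttachDbFileName=C:\Data\MyApp.mdf;" +
                    "Integrated Security=True";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    Console.WriteLine(conn.ServerVersion);
                }
            }
        }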

    Read the article

  • Gesture Detector not firing

    - by Tyler
    So I'm trying to create an input class (InputHandler) that implements both GestureListener and InputProcessor in order to support both Android and desktop. The problem is that not all the methods are being called properly. Here is the input class definition and a couple of the methods:

        public class InputHandler implements GestureListener, InputProcessor {
            ...
            public InputHandler(OrthographicCamera camera, Map m, Player play, Vector2 maxPos) {
            ...
            @Override
            public boolean zoom(float originalDistance, float currentDistance) {
                //this.zoom = true;
                this.zoomRatio = originalDistance / currentDistance;
                cam.zoom = cam.zoom * zoomRatio;
                Gdx.app.log("GestureDetector", "Zoom - ratio: " + zoomRatio);
                return true;
            }

            @Override
            public boolean touchDown(int x, int y, int pointerNum, int button) {
                booleanConditions[TOUCH_EVENT] = true;
                this.inputButton = button;
                this.inputFingerNum = pointerNum;
                this.lastTouchEventLoc.set(x,y);
                this.currentCursorPos.set(x,y);
                if(pointerNum == 1) {
                    //this.fingerOne = true;
                    this.fOnePosition.set(x, y);
                } else if(pointerNum == 2) {
                    //this.fingerTwo = true;
                    this.fTwoPosition.set(x,y);
                }
                Gdx.app.log("GestureDetector", "touch down at " + x + ", " + y + ", pointer: " + pointerNum);
                return true;
            }

    The touchDown event always occurs, but I can never trigger zoom (or pan, among others...). The following is where I create and register the input handler in the "Game Screen":

        public class GameScreen implements Screen {
            ...
            this.inputHandler = new InputHandler(this.cam, this.map, this.player, this.map.maxCamPos);
            Gdx.input.setInputProcessor(this.inputHandler);

    Anyone have any ideas why zoom, pan, etc. are not triggering? Thanks!
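
    For reference, a hedged sketch of the usual wiring (field names follow the question's code): GestureListener callbacks such as zoom and pan only fire when the listener is wrapped in a libGDX GestureDetector, so both the detector and the raw InputProcessor are typically registered through an InputMultiplexer:

        InputHandler inputHandler = new InputHandler(this.cam, this.map, this.player, this.map.maxCamPos);
        GestureDetector gestureDetector = new GestureDetector(inputHandler);
        InputMultiplexer multiplexer = new InputMultiplexer(gestureDetector, inputHandler);
        Gdx.input.setInputProcessor(multiplexer);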

    Read the article

  • With Monit, how do I restart a process when a directory timestamp check fails?

    - by Alterscape
    In my /etc/monit/monitrc I have the following lines: check process foo_server with pidfile /var/run/bwam_server.pid start program = "/Users/foo/foo_server.sh start" stop program = "/Users/foo/foo_server.sh stop" check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data" if timestamp > 1 minute then alert #if timestamp > 1 minute then restart foo_server I know I shouldn't have some of this stuff in my home directory, but this aside: if I uncomment the last line, Monit tells me syntax error on foo_server -- but I am, as far as I understand, correctly defining the process -- how else do I reference it?
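
    For what it's worth, Monit's restart action only applies to the service being checked, so naming another service after "then restart" is rejected as a syntax error. One hedged workaround (assuming the monit binary lives at /usr/bin/monit) is to shell out to monit itself from the directory check:

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            if timestamp > 1 minute then exec "/usr/bin/monit restart foo_server"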

    Read the article

  • Munq is for web, Unity is for Enterprise

    - by oazabir
    The Unity Application Block (Unity) is a lightweight extensible dependency injection container with support for constructor, property, and method call injection. It's a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it's CPU hungry - so CPU hungry, in fact, that it makes it impossible to make it work at Internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day and found unusually high CPU. Here's why: I did some CPU profiling on my open source project Dropthings and found that the highest CPU is consumed by Unity's Resolve<>(). There's no funky use of Unity in the project - straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() is consuming significantly high CPU even after the site is warm and has been running for a while. Then I tried Munq, which is a basic Dependency Injection container. It has everything you will usually need in a regular project, and it boasts being the fastest DI container out there. So I converted all Unity code to Munq in Dropthings, did a CPU profile and voila! There's no trace of any Munq calls anywhere. That proves Munq is a lot faster than Unity.
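
    For context, a minimal sketch of the plain Register/Resolve pattern the profile numbers refer to, using Unity's API (the interface and class names are made-up placeholders):

        using Microsoft.Practices.Unity;

        public interface IWidgetService { }
        public class WidgetService : IWidgetService { }

        class Program
        {
            static void Main()
            {
                var container = new UnityContainer();
                container.RegisterType<IWidgetService, WidgetService>();

                // Resolve<T>() is the call that dominated the CPU profile in the post.
                IWidgetService service = container.Resolve<IWidgetService>();
            }
        }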

    Read the article

  • First time application where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to have to go online all the time. Unfortunately there are not many applications left that are capable of doing this, and none of them is maintained to this day. I decided to make my own and at the same time learn something new. The idea is quite simple: the application should sit in the system tray and wait until the input language is changed, for example to Russian. If Russian is activated, the application should start listening for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian etc.) and customise the dictionary for any of them. So my questions are: Which language should I choose for this task - C++, C#, or maybe something hardcore like Assembler - given that the application should work natively with Windows XP/Vista/7 or possibly Mac? (Cross-platform support is good, but my main target is Windows.) Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a "key logger" and basically not a virus? Where should I start and what should I be aware of? P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.
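
    A minimal C# sketch of the dictionary-replacement core only (the mappings are illustrative; hooking the actual keystrokes on Windows, e.g. via a low-level keyboard hook, is a separate problem):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        class Transliterator
        {
            // Longest keys must win so that "SH" becomes Ш before "S" is considered.
            static readonly Dictionary<string, string> Map = new Dictionary<string, string>
            {
                { "SH", "Ш" }, { "CH", "Ч" }, { "R", "Р" }, { "S", "С" }, { "A", "А" }
            };

            static string Convert(string latin)
            {
                var result = new StringBuilder();
                int i = 0;
                while (i < latin.Length)
                {
                    string key = Map.Keys.OrderByDescending(k => k.Length)
                                         .FirstOrDefault(k => latin.Substring(i).StartsWith(k));
                    if (key != null) { result.Append(Map[key]); i += key.Length; }
                    else { result.Append(latin[i]); i++; }
                }
                return result.ToString();
            }

            static void Main()
            {
                Console.WriteLine(Convert("SHAR")); // ШАР
            }
        }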

    Read the article

  • Is Ubuntu Netbook Remix actually lighter than Ubuntu Desktop?

    - by D Connors
    I'm running Ubuntu on my work notebook. While it's not a netbook, it is a relatively weak machine, so even though I'm not a big fan of the Ubuntu netbook interface, I would be willing to use it if it's lighter (on RAM and CPU) than the desktop version. I know there are some very light Linux distros available out there, but I'd like to stick with Ubuntu both for its community and for the support I have available here at my workplace. So, is Ubuntu Netbook Remix lighter on system resources than Ubuntu Desktop?

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply), and the Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents and download small files. When a client attempts to download a large file (e.g. a download that runs for 3 minutes - still well under 20 minutes, but relatively long), the server sends all the data, then the connection times out, the client issues REST / RETR commands attempting to resume the download from just after the last byte (which I believe should succeed and receive exactly 0 bytes), and the server behaves as if the client tried to restart after byte 0, that is, it sends the entire file all over again. Any ideas on how to fix this?

    Read the article

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain with wildcard subdomains setup already. username.maindomain.com and maindomain.com I want to provide my users with additional domains that they can select. additional1.com, additional2.com, additional3.com... These additional domains would also need to support wildcard subdomains (as the subdomains route to a username). Anyone know how to properly configure this in DNS and VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard subdomain A record for each as well). In my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
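
    A hedged sketch of the VirtualHost side (the port, DocumentRoot and domain names are placeholders): ServerAlias accepts wildcards, so each additional domain and its subdomains can be listed on the single vhost alongside the primary domain.

        <VirtualHost *:80>
            ServerName maindomain.com
            ServerAlias *.maindomain.com additional1.com *.additional1.com additional2.com *.additional2.com additional3.com *.additional3.com
            DocumentRoot /var/www/app
        </VirtualHost>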

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used Region Growing and FCM for liver and tumor segmentation respectively. Then I used the Gray Level Co-occurrence Matrix for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program this in Matlab? To the GLCM program I gave the tumor-segmented image as input. Was that correct? If so, I think my output will also be correct. My GLCM code, as far as I have got, is:

        I = imread('fzliver3.jpg');
        GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
        stats = graycoprops(GLCM,'all')
        t1= struct2array(stats)

        I2 = imread('fzliver4.jpg');
        GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
        stats2 = graycoprops(GLCM2,'all')
        t2= struct2array(stats2)

        I3 = imread('fzliver5.jpg');
        GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
        stats3 = graycoprops(GLCM3,'all')
        t3= struct2array(stats3)

        t=[t1;t2;t3]
        xmin = min(t);
        xmax = max(t);
        scale = xmax-xmin;
        tf=(x-xmin)/scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

        stats  = Contrast: [0.0510 0.0503]  Correlation: [0.9513 0.9519]  Energy: [0.8988 0.8988]  Homogeneity: [0.9930 0.9935]
        t1     = Columns 1 through 6: 0.0510 0.0503 0.9513 0.9519 0.8988 0.8988   Columns 7 through 8: 0.9930 0.9935
        stats2 = Contrast: [0.0345 0.0339]  Correlation: [0.8223 0.8255]  Energy: [0.9616 0.9617]  Homogeneity: [0.9957 0.9957]
        t2     = Columns 1 through 6: 0.0345 0.0339 0.8223 0.8255 0.9616 0.9617   Columns 7 through 8: 0.9957 0.9957
        stats3 = Contrast: [0.0230 0.0246]  Correlation: [0.7450 0.7270]  Energy: [0.9815 0.9813]  Homogeneity: [0.9971 0.9970]
        t3     = Columns 1 through 6: 0.0230 0.0246 0.7450 0.7270 0.9815 0.9813   Columns 7 through 8: 0.9971 0.9970

        t = Columns 1 through 6:
            0.0510 0.0503 0.9513 0.9519 0.8988 0.8988
            0.0345 0.0339 0.8223 0.8255 0.9616 0.9617
            0.0230 0.0246 0.7450 0.7270 0.9815 0.9813
            Columns 7 through 8:
            0.9930 0.9935
            0.9957 0.9957
            0.9971 0.9970

        ??? Error using ==> minus
        Matrix dimensions must agree.

    The images are:
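
    On the error itself, a hedged guess (variable names follow the code above): the last line uses x, which is not the feature matrix, and the row vectors xmin/scale need to be expanded to the size of t before the elementwise subtraction and division:

        xmin  = min(t);
        xmax  = max(t);
        scale = xmax - xmin;
        tf = (t - repmat(xmin, size(t,1), 1)) ./ repmat(scale, size(t,1), 1);
        % or, equivalently:
        % tf = bsxfun(@rdivide, bsxfun(@minus, t, xmin), scale);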

    Read the article

  • Alternate Client for Cisco Unified Personal Communicator protocol

    - by Jason M
    At work we have an in-house chat system using CUPC. Does anyone else out there use this? There are a few things I do not like about this client: Where's the chat log? If I close the window, I have no way of getting my conversation back. A tabbed interface would be nice; I hate having multiple chat windows up and having to arrange them around my desktop as more people start talking to me. And I don't like having to use this one-off application for this protocol when other chat clients handle 99% of the other protocols I use. Tell me: Is the protocol an open standard for which other applications have support (Pidgin, Adium, Digsby, etc.)? If not, can I overcome these issues from within CUPC? Perhaps there are newer versions of the client that address them.

    Read the article

  • when to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary objective is to fetch data from the database, display it on the UI, take in user input and write it back to the database. The application is not going to be doing any industrial-strength algorithm crunching, but it will receive a very high number of hits at peak times (described below), and the load will change throughout the day. The layers are your typical Presentation, Business, Data. The data layer is taken care of by the database server. The business layer will contain the DAL component to access the database server over TCP. The choices I have for separating these layers into tiers are: 1) keep the presentation and business layers on the same tier, or 2) put the presentation layer on a tier by itself and the business layer on a separate tier by itself. In the case of choice 2, the business layer would be accessed by the presentation layer using a WCF service, either over HTTP or TCP. I don't see any heavy processing being done in the business layer, so I am leaning towards option 1 above. For the same reason, I also feel that adding a new tier would only introduce network latency. However, in terms of scalability, in case I need to scale up or scale out, which is the better way to go? This application will need to be able to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing the user's preferences and other details. I will be using page-level caching as well.

    Read the article

  • How to list all my packages from command line which can show package name, license, source url, etc?

    - by YumYumYum
    How do I get a list of all installed packages with their license and source URL? For example, the following shows only the package names:

        $ dpkg --get-selections
        acpi-support          install
        acpid                 install
        adduser               install
        adium-theme-ubuntu    install
        aisleriot             install
        alacarte              install

    For comparison, in Fedora/CentOS (Red Hat Linux branch) you can see all of that:

        $ yum info busybox
        Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
        Available Packages
        Name        : busybox
        Arch        : i686
        Epoch       : 1
        Version     : 1.18.2
        Release     : 5.fc15
        Size        : 615 k
        Repo        : updates
        Summary     : Statically linked binary providing simplified versions of system commands
        URL         : http://www.busybox.net
        License     : GPLv2
        Description : Busybox is a single binary which includes versions of a large number
                    : of system commands, including a shell. This package can be very
                    : useful for recovering from certain types of system failures,
                    : particularly those involving broken shared libraries.

    Follow up: /var/lib/apt/lists$ ls
        extras.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages extras.ubuntu.com_ubuntu_dists_natty_main_source_Sources extras.ubuntu.com_ubuntu_dists_natty_Release extras.ubuntu.com_ubuntu_dists_natty_Release.gpg lock partial
        security.ubuntu.com_ubuntu_dists_natty-security_main_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_main_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_multiverse_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_multiverse_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_Release security.ubuntu.com_ubuntu_dists_natty-security_Release.gpg security.ubuntu.com_ubuntu_dists_natty-security_restricted_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_restricted_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_universe_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_universe_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_Release us.archive.ubuntu.com_ubuntu_dists_natty_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_universe_source_Sources
        us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_source_Sources
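
    There is no exact yum info equivalent on Debian/Ubuntu, because the license is not stored in the package metadata; it lives in /usr/share/doc/<package>/copyright. A hedged sketch of the closest commands (busybox is just an example package):

        dpkg-query -W -f='${Package}\t${Version}\t${Homepage}\n'   # name, version and upstream URL for every installed package
        apt-cache show busybox                                     # per-package details, the closest thing to "yum info"
        less /usr/share/doc/busybox/copyright                      # the license text for an installed package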

    Read the article

  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 MPM worker and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache? The log shows the following error whenever the crash happens: [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting Here's my virtual host's configuration: <VirtualHost *:8080> ServerName example.com ErrorLog /var/log/apache2/error.log WSGIScriptAlias / <path to django.wsgi> WSGIPassAuthorization on WSGIDaemonProcess example.com WSGIProcessGroup example.com XSendFileAllowAbove on XSendFile on </VirtualHost>
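
    Not an answer to the buffering question itself, but one hedged guardrail while investigating (the 64 MB value is an assumption about what your payloads need): Apache's core LimitRequestBody directive caps how large a single request body the server will accept at all.

        # added inside the existing <VirtualHost *:8080> block; value is in bytes
        LimitRequestBody 67108864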

    Read the article

  • Other games that employ mechanics like the game "Diplomacy"

    - by Kevin Peno
    I'm doing a little bit of research and I'm hoping you can help me track down any games, other than Diplomacy (online version here), that employ all or some of the mechanics in Diplomacy (rules, short form). Examples of what I'm looking for: Simultaneous orders given prior to execution - in Diplomacy, players "write down" their moves and execute them "at the same time". Support, in the sense of supporting an attacker or defender to "take" a territory - in Diplomacy, no single unit is stronger than another; you need to combine the strength of multiple units to attack other territories. Rules for how move conflicts are resolved - for example, two units move into a space but only one is allowed; what happens? I may add to this list later, but these are the primary things I'm looking for. If you need clarification on anything just let me know. Note: I tried asking this on GamingSE, but it was shot down, so I am unsure where else I could post this. Since I am researching this for game development purposes, I assume this post is on topic; please let me know if this is not the case, and feel free to re-categorize it. Thanks!

    Read the article

  • Wildcard 'A' record overriding CNAME record

    - by user116890
    I have a Wildcard * A record for self-registration of subdomains by users on our web app. All works fine. I now need to set up an alias for support.mydomain.com to point to mydomain.freshdesk.com. I created a CNAME record as per instructions however it appears my Wildcard A record is overriding the CNAME entry. Any thoughts on how to resolve this? I need the wildcard so creating an A record for each user subdomain is not possible. Thanks.
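
    For reference, a hedged zone-file sketch (BIND-style syntax, placeholder IP): an explicit record at a name takes precedence over the wildcard, so this layout normally works; if it does not, the usual culprit is that the DNS control panel has also auto-created an explicit A record for support, which shadows the CNAME.

        *.mydomain.com.        IN  A      203.0.113.10
        support.mydomain.com.  IN  CNAME  mydomain.freshdesk.com.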

    Read the article

  • steam won't open after install

    - by Dan Cooper
    I've looked all over the place for a solution but no one seems to be getting the same error codes as me. When I try to run Steam through terminal I get the following error: Running Steam on ubuntu 13.04 64-bit STEAM_RUNTIME is enabled automatically Installing breakpad exception handler for appid(steam)/version(1367621987_client) Installing breakpad exception handler for appid(steam)/version(1367621987_client) unlinked 0 orphaned pipes Gtk-Message: Failed to load module "overlay-scrollbar" Installing breakpad exception handler for appid(steam)/version(1367621987_client) [1013/104817:WARNING:proxy_service.cc(646)] PAC support disabled because there is no system implementation /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp (281) : Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed. Assert( Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed. ):/home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp:281 Installing breakpad exception handler for appid(steam)/version(1367621987_client) Uploading dump (out-of-process) [proxy ''] /tmp/dumps/assert_20131013104817_1.dmp /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/SteamStartup.cpp (627) : Assertion Failed: ! "There was a problem with your Steam installation.\n" "Please reinstall steam.\n" unlinked 2 orphaned pipes CAsyncIOManager: 0 threads terminating. 0 reads, 0 writes, 0 deferrals. CAsyncIOManager: 75 single object sleeps, 0 multi object sleeps CAsyncIOManager: 0 single object alertable sleeps, 1 multi object alertable sleeps [2013-10-13 10:48:16] Startup - updater built May 3 2013 15:08:27 [2013-10-13 10:48:16] Verifying installation... [2013-10-13 10:48:16] Verification complete Shutting down. . . [2013-10-13 10:48:17] Shutdown Finished uploading minidump (out-of-process): success = yes response: CrashID=bp-d172a742-b7dd-419c-b235-d60c32131013 I've tried sudo apt-get purge and terminal tries to tell me I don't have Steam installed. I've tried reinstalling with software center but that doesn't help either.
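
    A hedged cleanup sequence that is often suggested for this assert (the package name varies depending on how Steam was installed, hence the dpkg lookup first; the rm wipes the per-user runtime that a purge leaves behind):

        dpkg -l | grep -i steam                # find the actual package name (steam, steam-launcher, ...)
        sudo apt-get remove --purge steam      # substitute whatever the previous line reported
        rm -rf ~/.steam ~/.local/share/Steam   # remove the per-user install
        sudo apt-get install steam             # reinstall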

    Read the article

  • Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?

    - by Don MacAskill
    We use RAID1+0 with md on Linux (currently 2.6.37) to create an md device, then use LVM to provide volume management on top of the device, and then use ext4 as our filesystem on the LVM volume groups. With SSDs as the drives, we'd like to see the TRIM commands propagate through the layers (ext4 - LVM - md - SSD) to the devices. It looks like recent 2.6.3x kernels have had a lot of new SSD-related TRIM support added, including lots more coverage of Device Mapper scenarios, but we still can't seem to get it to cascade down properly. Is this possible yet? If so, how? If not, is any progress being made?
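
    For experimenting, a hedged set of checks (device and volume names are placeholders; whether discard actually reaches the SSD through md and device-mapper is exactly the kernel support in question):

        cat /sys/block/sda/queue/discard_granularity   # non-zero means the underlying SSD advertises TRIM
        mount -o remount,discard /dev/vg0/lvroot /      # ext4 online discard, if the whole stack passes it down
        fstrim -v /                                     # batched TRIM via the FITRIM ioctl, an alternative to the mount option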

    Read the article

  • Can Acer Aspire Revo (Atom 330) be used with two monitors simultaneously?

    - by LeeD
    I'm very attracted to the Acer Revo for the price and the look. As long as I can work on two monitors simultaneously, I'll be happy. I'm not planning to do heavy video editing or gaming; occasional movie streaming would be fine. I will mainly use it for trading, lots of word processing, some photo editing, and keeping in touch with friends. Does anyone have experience using the Revo with two or more monitors? The spec says it has VGA and HDMI output, but an Acer salesperson over the phone told me it can support one monitor only..?

    Read the article

  • off-the-shelf HDD in Dell Optiplex 755 - Invalid Replacement - Press F1 when rebooting

    - by Eric Liprandi
    Hi, I recently put an SSD into my work Optiplex 755. Since then, every time the system boots or reboots I get prompted to hit F1 to continue, with a message along the lines of "HDD replacement is not valid". The system works just fine. What I gather from Dell's website is that the issue is that I left the original drive in as a data drive, and apparently the Optiplex does not support two drives in the configuration we purchased originally, so it complains. Any suggestions on how to get rid of this message? Some suggested the MEBx (or whatever it's called) from Intel and turning it off, but I did not succeed with that earlier. It's not a big deal 80% of the time, but I regularly work from home and occasionally need to fire off a reboot, and well, you don't get the BIOS screen remotely :) Regards, Eric.

    Read the article

  • Any way to stop VMWare workstation from dropping SSH connections?

    - by oljones
    I have VMware Workstation 8 with a few Linux guests. I have had problems maintaining an active SSH connection to my VMs when they are in bridged mode. I first read that the onboard Realtek network cards were not well supported, so I bought an Intel Pro/1000 GT card, which supposedly is supported. But this made no difference. Connections via SSH are active for about the first 3 minutes, then hang and die. I have changed the TCP checksum offload setting on the Intel and Realtek NICs, but this only works some of the time, and even then not for very long. The best I could do was about 20 minutes before the connection was dropped. Any ideas?
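
    Not a VMware fix, but a hedged client-side workaround (host pattern and numbers are placeholders, in ~/.ssh/config): OpenSSH application-level keep-alives can stop an otherwise idle connection from being dropped by whatever is timing out along the path.

        Host 192.168.*
            ServerAliveInterval 30
            ServerAliveCountMax 4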

    Read the article

  • How to dismiss the Windows 8.1 Upgrade banner?

    - by user81430
    As of today, my Windows 8 system is displaying a large banner covering about a quarter of the screen, demanding that I upgrade to Windows 8.1 for free. The banner helpfully informs me that I can continue to use the computer while the upgrade is downloading. There's a button to go to the Store for the upgrade, but no other choices, such as No Thanks, Maybe Later, or Absolutely Not. I can't find a way to dismiss the banner, and while I can start programs, I can't give them focus while the banner is present, so I'm dead in the water. How can I fix this? Bringing up Task Manager doesn't give me any options. My employer doesn't yet support Windows 8.1 (which is also an issue, but beyond my control), so I won't be able to connect to work if I install the upgrade. So there is no way on God's green earth that I'm going to install this upgrade now.

    Read the article

  • Looking for an open source email archiving application

    - by Joel
    I'm looking for an open source application that will archive my email. It might do this by logging in to my POP3 account on a regular basis and copying the emails across, or it might just read my Unix mbox/maildir file/directory directly on the mail server. It must be open and it must run on Linux (or any open OS, actually). Ideally it would have a web interface, but this is not a major requirement. MXsense (http://www.mxsense.com/mxsense.html) seems to be pretty much what I want, except it's not open. I have no requirement for MS Exchange support. Any suggestions? The rationale (maybe a bit silly) is that I run Linux exclusively and it still doesn't have an email client that comes anywhere close to MS Outlook in terms of awesomeness, so I find myself switching between mail clients often. I would feel better about this if I had an archive of my emails, so it wouldn't matter which mail client I was using this month.
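
    Not a full archiver, but one hedged building block (assuming getmail, which is open source, with made-up account details): poll the POP3 account on a schedule, e.g. from cron, and append every message to a local Maildir that never gets pruned.

        # ~/.getmail/getmailrc
        [retriever]
        type = SimplePOP3SSLRetriever
        server = pop.example.com
        username = joel
        password = secret

        [destination]
        type = Maildir
        path = ~/mail/archive/

        [options]
        delete = false
        read_all = false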

    Read the article

  • Is there a test to see if hardware virtualization (vmx / xvm) are presently enabled within a Linux session?

    - by Dr. Edward Morbius
    I'm writing procedures for configuring VirtualBox support for 64-bit SMP guests, which requires hardware virtualization support (VT-x on Intel, AMD-V on AMD). I have successfully configured this myself; however, I'd like the procedure to be clear.

        sed -ne '/^flags/s/^.*: //p' /proc/cpuinfo | egrep -q '(vmx|svm)' && echo Has hardware virt || echo No HW virt

    ... shows whether the CPU is capable. I've still got to go enable the feature in the BIOS. Is there any way to test from within Linux whether this is currently enabled or not? Thanks.
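
    A few hedged checks that go one step further and report whether the feature is actually switched on (Ubuntu ships kvm-ok in the cpu-checker package; on an AMD box substitute kvm_amd):

        sudo modprobe kvm_intel            # refuses to initialise when VT-x is off in the BIOS
        dmesg | grep -i "disabled by bios" # the kvm module logs this line in that case
        kvm-ok                             # summarises both checks in one command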

    Read the article
