Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.


  • Async task ASP.NET HttpContext.Current.Items is empty - How do I handle this?

    - by GuruC
    We are running a very large web application in ASP.NET MVC, .NET 4.0. Recently we had an audit done, and the performance team reported a lot of null reference exceptions. So I started investigating from the dumps and the event viewer. My understanding is as follows: we are using async tasks in our controllers, and we rely on the HttpContext.Current.Items hashtable to store a lot of application-level values.

      Task<Articles>.Factory.StartNew(() =>
      {
          System.Web.HttpContext.Current = ControllerContext.HttpContext.ApplicationInstance.Context;
          var service = new ArticlesService(page);
          return service.GetArticles();
      }).ContinueWith(t => SetResult(t, "articles"));

    So we are copying the context object onto the new thread spawned by the task factory. This Context.Items is then used in the thread wherever necessary. For example:

      public class SomeClass
      {
          internal static int StreamID
          {
              get
              {
                  if (HttpContext.Current != null)
                  {
                      return (int)HttpContext.Current.Items["StreamID"];
                  }
                  else
                  {
                      return DEFAULT_STREAM_ID;
                  }
              }
          }
      }

    This runs fine as long as the number of parallel requests is moderate. My questions are as follows: 1. When the load is higher and there are too many parallel requests, I notice that HttpContext.Current.Items is empty. I am not able to figure out a reason for this, and it causes all the null reference exceptions. 2. How do we make sure it is not empty? Is there any workaround? NOTE: I read through StackOverflow, and people have questions like "HttpContext.Current is null" - but in my case it is not null, it is empty. I was also reading an article where the author says that sometimes the request object is terminated, which may cause problems since Dispose has already been called on its objects. The copy I make of the context object is just a shallow copy, not a deep copy.
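    One common way to sidestep this - HttpContext belongs to the request thread and can be torn down while the task is still running - is to copy the values the worker needs out of Items before starting the task, so the worker never touches HttpContext.Current at all. A minimal sketch (passing StreamID through the service constructor is an assumption, not the poster's actual API):

      // captured on the request thread, while the context is still valid
      int streamId = (int)HttpContext.Current.Items["StreamID"];

      Task<Articles>.Factory.StartNew(() =>
      {
          // the worker uses only captured locals and never reads HttpContext.Current
          var service = new ArticlesService(page, streamId);
          return service.GetArticles();
      }).ContinueWith(t => SetResult(t, "articles"));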

    Read the article

  • MVC2 Json request not actually hitting the controller

    - by SlackerCoder
    I have a JSON request, but it seems that it is not hitting the controller. Here's the jQuery code:

      $("#ddlAdminLogsSelectLog").change(function() {
          globalLogSelection = $("#ddlAdminLogsSelectLog").val();
          alert(globalLogSelection);
          $.getJSON("/Administrative/AdminLogsChangeLogSelection",
              { NewSelection: globalLogSelection },
              function(data) {
                  if (data.Message == "Success") {
                      globalCurrentPage = 1;
                  } else if (data.Message == "Error") {
                      //Do Something
                  }
              });
      });

    The alert is there to show me whether the change event actually fired, which it does. Here's the method in the controller:

      public ActionResult AdminLogsChangeLogSelection(String NewSelection)
      {
          String sMessage = String.Empty;
          StringBuilder sbDataReturn = new StringBuilder();
          try
          {
              if (NewSelection.Equals("Application Log"))
              {
                  int i = 0;
              }
              else if (NewSelection.Equals("Email Log"))
              {
                  int l = 0;
              }
          }
          catch (Exception e)
          {
              //Do Something
              sMessage = "Error";
          }
          return Json(new { Message = sMessage, DataReturn = sbDataReturn.ToString() }, JsonRequestBehavior.AllowGet);
      }

    I have a bunch of JSON requests in my application, and this seems to happen only in this area. This is a separate area (I have 6 "areas" in the app, 5 of which work fine with JSON requests). The controller is named "AdministrativeController", if that matters. Does anything jump out as being incorrect, or any reason why the request would not reach the server side?
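    For context, MVC2 areas only receive requests that match the routes declared in their AreaRegistration, so a hard-coded URL like "/Administrative/AdminLogsChangeLogSelection" silently misses the controller if the area's route pattern differs. A hedged sketch of the kind of registration that would match that URL (names inferred from the question, not the poster's actual code):

      public class AdministrativeAreaRegistration : AreaRegistration
      {
          public override string AreaName
          {
              get { return "Administrative"; }
          }

          public override void RegisterArea(AreaRegistrationContext context)
          {
              // "/Administrative/AdminLogsChangeLogSelection" is area prefix + action,
              // so the controller has to be supplied as a route default here
              context.MapRoute(
                  "Administrative_default",
                  "Administrative/{action}/{id}",
                  new { controller = "Administrative", action = "Index", id = UrlParameter.Optional });
          }
      }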

    Read the article

  • What is the most common way to use middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a node.js web project using express and connect, which is growing at the moment. Of course there are middlewares that have to pass or extend requests globally, but in many cases there are special jobs like preparing incoming data, and in those cases the middleware would only apply to a certain set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer, which can implement handlers for the requests that component can handle. On app startup any required component is loaded and prepared. Is it a good idea to bind the middleware code execution to URLs to keep CPU load lower, or is it better to use middlewares only for global purposes? Here's a dummy of how a URL-related middleware looks:

      app.use(function(req, res, next) {
          // Check if the requested route is part of the current component
          // or if the middleware should be applied to any request
          if (APP.controller.groups.Component.isExpectedRoute(req) ||
              APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
              // Execute the middleware code here
              console.log('This is a route which should be affected by middleware');
              // ...
              next();
          } else {
              next();
          }
      });
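    Worth noting: connect and express can do this route scoping themselves, since app.use() accepts a mount path and runs the middleware only for requests under that prefix. A minimal sketch (the '/component' prefix and prepareInput are illustrative names, not part of the poster's code):

      var express = require('express');
      var app = express();

      function prepareInput(req, res, next) {
          // runs only for requests whose URL starts with /component
          next();
      }

      app.use('/component', prepareInput);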

    Read the article

  • Why Is Apache Giving 403?

    - by ThinkCL
    I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs via a desktop app I am building in Xcode / Objective-C. The 12 POST requests are just a few KB each and go out instantly, one after the other, and the Apache error log shows:

      client denied by server configuration: /the-path/the-file.php

    This is Apache 2.0 with PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by 1 second) into the PHP file, and that fixed it. This makes me think I am breaking some limit on requests from a single IP in a certain amount of time. I have googled and combed the PHP ini and Apache configs, but I cannot find what that directive/setting might be. I should mention that, although it varies, the first 4 or 5 POSTs usually work, and then it starts returning the 403 error intermittently after that. It really acts like it's bogging down. Any ideas?
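    For what it's worth, "client denied by server configuration" is the message Apache's access control emits, and intermittent 403s that appear only under rapid-fire requests are characteristic of a rate-limiting module such as mod_evasive, which some hosts enable by default. A hedged example of the kind of host-side config that would produce exactly this behavior (the values are illustrative, not the poster's actual settings):

      <IfModule mod_evasive20.c>
          DOSPageCount      5    # max requests for the same URI per interval
          DOSPageInterval   1    # interval in seconds
          DOSBlockingPeriod 10   # how long the client gets 403s once flagged
      </IfModule>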

    Read the article

  • Enterprise Platform in Python, Design Advice

    - by Jason Miesionczek
    I am starting the design of a somewhat large enterprise platform in Python, and was wondering if you can give me some advice on how to organize the various components, and which packages would help achieve the goals of scalability, maintainability, and reliability.

    The system is basically a service that collects data from various outside sources, with each outside source having its own separate application. These applications would poll a central database and get any requests that have been submitted to perform on the external source. There will be a main website and a REST/SOAP API that should also have access to the central data service.

    My initial thought was to use Django for the website, web service, and data access layer (using its built-in ORM), so the outside-source applications can use the web service(s) to get the information they need to process a request and save the results. Using this method would allow me to have multiple instances of the service applications running on the same or different machines to balance out the load. Are there more elegant means of accomplishing this? I've heard of messaging systems such as MQ; would something like that be beneficial in this scenario?

    My other thought was to use a completely separate data service not based on Django, and use some kind of remoting or remote objects (if they exist in Python) to interact with the data model. The downside there is the website, which would become much slower if it had to push all of its data requests through a second layer. I would love to hear what other developers have come up with to achieve these goals in the most flexible way possible.
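    To make the polling variant concrete, here is a minimal Python sketch of the loop each outside-source application would run against the central service; the URLs, field names, and process() body are assumptions for illustration, not a real API:

      import json
      import time
      import urllib2  # Python 2 stdlib, contemporary with this kind of stack

      SERVICE = "http://central.example.com/api"

      def process(request):
          # placeholder for the source-specific work
          return {"id": request["id"], "status": "done"}

      while True:
          pending = json.load(urllib2.urlopen(SERVICE + "/requests/pending"))
          for request in pending:
              result = process(request)
              # passing a data argument makes urlopen POST the result back
              urllib2.urlopen(SERVICE + "/results", json.dumps(result))
          time.sleep(30)  # a message queue (e.g. RabbitMQ) would push instead of polling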

    Read the article

  • Socket left in TIME_WAIT after file transfer via netcat

    - by com
    Using the approach from "Copying by NetCat", I am trying to copy files over the network with netcat. From the console it works pretty well: first I run a listening netcat on the destination machine, and then I run the sending side on the source machine. The problem is that it doesn't work from a script on the source machine:

      ssh -f user@$desthost 'nc -l 1234 | tar xvf - /dev/null &'  # listening on destination host
      tar cv /tmp/file | nc $desthost 1234                        # sending to destination host

    I saw that after running this, port 1234 was still open and the status of the socket was TIME_WAIT. If you know what the problem is, please help me out. And by the way, after copying, how can I validate that the content is identical? Thanks!

    Addendum: I found one very strange thing. The same setup with screen on the destination works, but not reliably - sometimes it doesn't copy the file:

      ssh user@$desthost screen -dm -S test 'nc -l 1234 | tar xvf - '  # listening on destination host

    Maybe there is an issue with a timeout?
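    For the embedded validation question, comparing checksums on both hosts is a simple check. A sketch using the path from the example above (assuming the file lands at the same path after extraction; md5sum could be swapped for sha1sum or cksum):

      md5sum /tmp/file                        # on the source host
      ssh user@$desthost 'md5sum /tmp/file'   # on the destination host, after extraction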

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

      1) The user's browser requests http://www.example.com/files/file1.zip
      2) The request goes to server A, based on the DNS A record for example.com.
      3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
      4) Server A forwards the request to server B.
      5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy.

    For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • PHP Array saved to Text file

    - by coffeemonitor
    I've saved a response from an outside server to a text file, so I don't need to keep running connection requests. Instead, perhaps I can use the text file for my manipulation purposes until I'm ready to reconnect again. (Also, my connection requests to this outside server are limited.) Here is what I've saved to the text file, records.txt:

      Array
      (
          [0] => stdClass Object
              (
                  [id] => 552
                  [date_created] => 2012-02-23 10:30:56
                  [date_modified] => 2012-03-09 18:55:26
                  [date_deleted] => 2012-03-09 18:55:26
                  [first_name] => Test
                  [middle_name] =>
                  [last_name] => Test
                  [home_phone] => (123) 123-1234
                  [email] => [email protected]
              )
          [1] => stdClass Object
              (
                  [id] => 553
                  [date_created] => 2012-02-23 10:30:56
                  [date_modified] => 2012-03-09 18:55:26
                  [date_deleted] => 2012-03-09 18:55:26
                  [first_name] => Test
                  [middle_name] =>
                  [last_name] => Test
                  [home_phone] => (325) 558-1234
                  [email] => [email protected]
              )
      )

    There's actually more in the array, but I'm sure two entries are fine. Since this is a text file, and I want to pretend it is the actual outside server (sending me the same info), how do I make it a real array again? I know I need to open the file first:

      <?php
      $fh = fopen('records.txt', "r"); // open the file
      $theData = fread($fh, filesize('records.txt'));
      fclose($fh);
      echo $theData;
      ?>

    So far $theData is a string value. Is there a way to convert it back to the array it originally came in as?
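    For background, print_r output is formatted for humans and PHP ships no parser for it, so the usual approach is to store the data with serialize() (or json_encode()) in the first place and read it straight back. A minimal sketch:

      <?php
      // when saving the server response:
      file_put_contents('records.txt', serialize($records));

      // when loading it later - a real array of stdClass objects again:
      $records = unserialize(file_get_contents('records.txt'));
      print_r($records);
      ?>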

    Read the article

  • What's the key difference between data management and data governance?

    - by Sid Xing
    I just read some articles about these two concepts, and I thought they have similar goals, but DG seems to be more about process management by following best practices. So my first question is about the difference between DG and DM; I'm confused. There are so many concepts around data management: data quality, data security, data governance, data profiling, data integration, master data management, metadata management... It seems like none of them is exactly separate; they all run together. My second question is a request for suggestions to help me better understand the relationships between these concepts. I appreciate your help.

    Read the article

  • How can I make sure only a single record is inserted when multiple Apache threads are trying to access the table?

    - by Ed Gl
    I have a web service (an XML-RPC service, to be exact) that handles, among other things, writing data into the database. Here's the scenario: I often receive requests to either update or insert a record. What I do is this:

      1. If the record already exists, append to it.
      2. If not, create a new record.

    The issue is that at certain times I get a 'burst' of requests, which spawns several Apache threads to handle them. These bursts arrive within less than a millisecond of each other. I now have several threads performing #1 and #2. Often two threads 'pass' check #1 at the same time and actually create two duplicate records (except for the primary key). I'd like to use some locking mechanism to prevent other threads from accessing the table while the first thread finishes its work. I'm just afraid of using locks because, if something goes wrong, I don't want to leave the table locked. Is there a solid way of handling this? I'm open to using locks if I can do it properly. Thanks,
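    One way to make the existence check and the insert atomic without table locks is to let the database enforce it: put a unique key on whatever identifies the record, then use an upsert so concurrent threads cannot both insert. A sketch assuming MySQL and illustrative table/column names:

      ALTER TABLE records ADD UNIQUE KEY uq_record (record_key);

      INSERT INTO records (record_key, data)
      VALUES ('abc', 'first chunk')
      ON DUPLICATE KEY UPDATE data = CONCAT(data, ', appended chunk');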

    Read the article

  • Create a model that switches between two different states using Temporal Logic?

    - by NLed
    I'm trying to design a model that can manage different requests for different water sources. Platform: Mac OS X, using the latest Python with the TuLiP module installed.

    Definitions:
      Two water sources: w1 and w2
      Three different requests: r1, r2, and r3

    Specifications:
      Water 1 (w1) is preferred, but w2 will be used if w1 is unavailable. Water 2 is only used if w1 is depleted.
      r1 has the maximum priority. If all entities request simultaneously, r1's supply must not fall below 50%.

    The water sources are not discrete but continuous, which increases the difficulty of creating the model. I can do a crude discretization of the water levels, but I prefer finding a model for the continuous state first. So how do I start doing that? Some of my thoughts: create a matrix W where w1, w2 ∈ W; create a matrix R where r1, r2, r3 ∈ R; or leave all variables singular without putting them in a matrix. I'm not an expert in coding, so that's why I need help. I'm not sure what the best way is to start tackling this problem. I am only interested in the model, or a code sample of how this can be put together.

    Edit: Now imagine I do a crude discretization of the water sources to have w1 = [0...4] and w2 = [0...4] for 0, 25, 50, 75, 100 percent respectively (below, == means implies). Usage of the water sources:

      if w1[0] == w2[4]  # if water source 1 has 0%, then use 100% of water source 2, etc.
      if w1[1] == w2[3]
      if w1[2] == w2[2]
      if w1[3] == w2[1]
      if w1[4] == w2[0]

      r1 = r2 = r3 = [0,1]  # 0 means request OFF and 1 means request ON

    Now what model can be designed that will give each request 100% water depending on the values of w1 and w2? (The w1 and w2 values are uncontrollable, so I cannot fix a specific value, but 0...4 is used for simplicity.)
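    As a starting point, the discretized state space from the edit can be written down directly in plain Python. This is only the state model (with the complementary w2 = 4 - w1 coupling stated above), not the temporal-logic controller TuLiP would synthesize:

      # levels 0..4 stand for 0, 25, 50, 75, 100 percent
      W_LEVELS = range(5)

      def w2_for(w1):
          # "if w1[0] == w2[4] ... if w1[4] == w2[0]" collapses to this relation
          return 4 - w1

      # every discrete state: (w1 level, w2 level, r1, r2, r3)
      states = [(w1, w2_for(w1), r1, r2, r3)
                for w1 in W_LEVELS
                for r1 in (0, 1)
                for r2 in (0, 1)
                for r3 in (0, 1)]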

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

      header("HTTP/1.1 301 Moved Permanently");
      header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:

      header("Location: http://newsite.com");

    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, the browser decides there's no reason to request anything on future visits. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that would do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
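    The diagnosis matches how 301s behave: browsers may cache a permanent redirect and skip the script entirely on repeat visits. One hedged workaround sketch is to do the insert first, then send the 301 with caching discouraged ($pdo, the clicks table, and $slug are illustrative names, not the poster's code):

      <?php
      // record the hit before any redirect headers go out
      $stmt = $pdo->prepare("INSERT INTO clicks (slug, ts) VALUES (?, NOW())");
      $stmt->execute(array($slug));

      header("HTTP/1.1 301 Moved Permanently");
      header("Cache-Control: no-store, no-cache, must-revalidate");
      header("Location: http://newsite.com");
      exit;
      ?>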

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions.

    For this request, the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database along with parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request.

    This last step is the problem: the command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
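    One standard pattern here is optimistic concurrency: restate the preconditions inside the final UPDATE itself and check the affected row count, so the command only succeeds if the state is still what the business layer saw. A sketch with illustrative table and column names:

      UPDATE accounts
      SET    balance = balance - 100
      WHERE  id = 42
        AND  balance >= 100;   -- the precondition, re-checked atomically
      -- if 0 rows were affected, the state changed since the read:
      -- re-run the read/evaluate step instead of applying the update blindly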

    Read the article

  • Rails: RESTful Find, Initialize, or Create

    - by Andrew
    I have an app that has cities in it. I'm looking for suggestions on how to RESTfully structure a controller so that I can look up, initialize, and create city records via AJAX requests. For instance, given a text field city_name:

      1. A user enters the name of a city, like "Paris, France".
      2. The app checks this location to see if there is such a city in the database already.
      3. If there is, it returns the city object.
      4. If there is not, it returns a new record initialized with the name "Paris" and the country "France", and prompts the user to confirm they want to add this city to the database.
      5. If the user says "Yes", the record is saved. If not, the record is discarded and the form is cleared.

    Now, my first approach was to change the create action to use find_or_create, so that an AJAX post to cities_path would either return the existing city or create and return it. That works okay. However, it would be better to set up controller actions that take a string input, then find, or else initialize and return, and only create once the user confirms the generated record is correct. The ideal scenario would put this all in one action, so AJAX requests can go to that URL, the server responds with JSON objects, and JavaScript can handle things from there. I'd like to keep all the user-interaction logic client-side and also minimize the number of requests it takes to achieve this. Any suggestions on the cleanest, most RESTful way to accomplish this?
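    A sketch of the one-action shape this could take: a lookup that finds or initializes (without saving) and tells the client which case it hit, leaving the confirmed save to the normal create action. The dynamic-finder name follows older Rails conventions and the comma parsing is deliberately naive; both are assumptions, not the poster's code:

      class CitiesController < ApplicationController
        def lookup
          name, country = params[:city_name].to_s.split(",").map(&:strip)
          city = City.find_or_initialize_by_name_and_country(name, country)
          render :json => { :city => city, :persisted => !city.new_record? }
        end
      end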

    Read the article

  • Periodically iterating over a collection that's constantly changing

    - by rwmnau
    I have a collection of objects that's constantly changing, and I want to display some information about what's currently in it. My application is multi-threaded, and different threads are constantly submitting requests to modify objects in the collection, so the timing is unpredictable. If I lock the collection, I can iterate over it and get my information without any problems - however, this causes problems for the other threads, which could have submitted multiple requests to modify the collection in the meantime and will be stalled. I've thought of a couple of ways around this, and I'm looking for any advice:

      1. Make a copy of the collection and iterate over it, allowing the original to continue updating in the background. The collection can get large, so this isn't ideal, but it's safe.
      2. Iterate over it using a For...Next loop, and catch an IndexOutOfBounds exception if an item is removed from the collection while we're iterating. This may occasionally cause duplicates to appear in my snapshot, so it's not ideal either.

    Any other ideas? I'm only concerned about a moment-in-time snapshot, so I'm not concerned about reflecting later changes in my application. My main concern is that the collection be able to be updated with minimal latency, and that updates never be lost.
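    Option 1 is usually cheaper than it sounds, because the lock only needs to be held for the O(n) copy, not for the whole display pass. A minimal C# sketch of that shape (the string list stands in for the real collection type):

      using System;
      using System.Collections.Generic;

      class SnapshotDemo
      {
          static readonly object syncRoot = new object();
          static readonly List<string> items = new List<string> { "a", "b" };

          static void Main()
          {
              List<string> snapshot;
              lock (syncRoot)
              {
                  snapshot = new List<string>(items); // writers stall only for the copy
              }
              foreach (var item in snapshot)
              {
                  Console.WriteLine(item); // moment-in-time view of the collection
              }
          }
      }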

    Read the article

  • Threading in Android

    - by virsir
    I am currently developing an Android app that needs to download content from the internet. I use a thread to do that and then call the runOnUiThread method to update the GUI. I placed a refresh menu on it; if the user refreshes the content, a new download thread is created and started. The problem is how to control the thread order: I need to accept the latest request's response and abandon earlier threads if other requests are still running, because the request parameters may have been changed by the user. Currently I use a threadId for this: when a thread finishes, it checks its threadId, and if it is the latest recorded one, it takes control and renders the response.

    My questions: Is there a better solution for this? Do I need to stop the threads when the user exits the app? I remember a book saying that not stopping a thread manually, but waiting for it to finish on its own, is good practice - is that true? Should I stop them by calling the "stop" or "interrupt" method? I also read some documents about threading in Android and found the class HandlerThread. What is it, and in what kind of situation would I need to use it?
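    For reference, the "latest request wins" check the poster describes can be written with an AtomicInteger generation counter, which avoids hand-rolled id bookkeeping. A minimal Java sketch (download() and render() are stand-ins; render() would be dispatched via runOnUiThread in a real Activity):

      import java.util.concurrent.atomic.AtomicInteger;

      class RefreshManager {
          private final AtomicInteger latest = new AtomicInteger();

          void refresh(final String params) {
              final int id = latest.incrementAndGet(); // this call is now the newest
              new Thread(new Runnable() {
                  public void run() {
                      String response = download(params);
                      if (id == latest.get()) {  // stale responses are simply dropped
                          render(response);
                      }
                  }
              }).start();
          }

          private String download(String params) { return "response for " + params; }
          private void render(String response) { /* update the UI here */ }
      }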

    Read the article

  • ASP.NET problem - Firebug shows odd behaviour

    - by Brandi
    I have an ASP.NET application that does a large database read. It loads up a GridView inside an UpdatePanel. In VS2008, just running on my local machine, it runs fantastically. In production (identical code, just published and put on one of our network servers), it runs slow as dirt. Debug is set to false, so this is not the cause of the slowdown. I'm not an experienced web developer, so besides that, feel free to suggest the obvious. I have been using Firebug to determine what's going on, and here is what that has turned up:

      On production, there are around 500 requests. The timeline bar is very short. The size column varies from run to run, but is always the same for the duration of the run.
      Locally, there are about 30 requests. The timeline bar takes up the entire space.

    Can anyone shed some light on why this is happening and what I can do to fix it? Also, I can't find much of anything on the web about this, so any references are helpful too.

    Read the article

  • Selective replication with CouchDB

    - by FRotthowe
    I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision of who else can read or modify those elements may only be made by the owner. To complicate the situation even more, each element (and each element revision) must have a unique identifier that is equal for all clients.

    While the latter sounds like a perfect task for CouchDB (and a document-based data model would fit my needs perfectly), I'm not sure if the authentication/authorization subsystem of CouchDB can handle these requirements: while it should be possible to restrict write access using validation functions, there doesn't seem to be a way to authorize read access. All solutions I've found for this problem propose routing all CouchDB requests through a proxy (or an application layer) that handles authorization.

    So, the question is: is it possible to implement an authorization layer that filters requests to the database, so that access is granted only to documents the requesting client has read access to, and still use the replication mechanism of CouchDB? Simplified, this would be some kind of "selective replication" where only some of the documents, not the whole database, are replicated. I would also be thankful for directions to some detailed information about how replication works. The CouchDB wiki and even the "Definitive Guide" book are not too specific about that.
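    For what it's worth, CouchDB's replicator does support filtered replication: a filter function in a design document decides per document whether it replicates. The function(doc, req) signature and query_params are real CouchDB features; the doc.readers field is an assumed convention for illustration:

      // in a design document on the source database:
      {
        "_id": "_design/auth",
        "filters": {
          "visible_to": "function(doc, req) { return doc.readers && doc.readers.indexOf(req.query.user) !== -1; }"
        }
      }

      // a replication request naming the filter and the user:
      POST /_replicate
      {
        "source": "master",
        "target": "client_db",
        "filter": "auth/visible_to",
        "query_params": { "user": "alice" }
      }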

    Read the article

  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all requests to my index.php, like this:

      RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]

    The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather, rewrite them to the URL without a trailing slash via a 301 redirect; I'm doing this to avoid duplicate content, and because I like URLs without trailing slashes better):

      RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
      RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]

    This works well, except that I now get an infinite loop when trying to access a (real) directory: the rewrite rule removes the trailing slash, the server adds it again, and so on. I solved this by setting the DirectorySlash directive to Off:

      DirectorySlash Off

    I don't know how good this solution is; I don't feel too confident about it, to be honest. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and I only use pretty URLs with "virtual" files/directories anyway. This would also allow me to avoid the DirectorySlash workaround/hack. Is this possible? Thanks!
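    If literally every URL is virtual, one sketch of that setup is to stop consulting the filesystem at all and send everything except the front controller itself to index.php (this replaces, rather than extends, the rules above, and would still be combined with the trailing-slash redirect if canonical URLs matter):

      DirectorySlash Off
      RewriteEngine On
      RewriteCond %{REQUEST_URI} !^/index\.php
      RewriteRule ^ /index.php [L]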

    Read the article

  • Problems with system() calls in Linux

    - by Thomas
    I'm working on an init for an initramfs in C++ for Linux. This init is used to unlock the DM-Crypt with LUKS encrypted drive and to make the LVM drives available. Since I don't want to reimplement the functionality of cryptsetup and gpg, I am using system calls to invoke their executables. Calling gpg via a system call works fine if the system is already fully brought up (I have an older bash-based initramfs that works fine, and I use GRUB to edit the command line to boot with it). However, in the new initramfs it never even acts like it gets called. Even commands like system("echo BLAH"); fail. So, does anyone have any input?
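    One thing worth checking: system() runs its command through /bin/sh, so if the shell (or a library it links against) isn't packed into the initramfs, even system("echo BLAH") fails, typically returning -1 or an exit status of 127. A sketch of diagnosing that, plus a fork/exec fallback that needs no shell:

      #include <cstdio>
      #include <cstdlib>
      #include <sys/wait.h>
      #include <unistd.h>

      static int run(const char* path, char* const argv[]) {
          pid_t pid = fork();
          if (pid == 0) {
              execv(path, argv);  // executes the binary directly, no shell involved
              _exit(127);         // only reached if execv itself failed
          }
          int status = 0;
          waitpid(pid, &status, 0);
          return status;
      }

      int main() {
          int rc = system("echo BLAH");
          if (rc == -1 || (WIFEXITED(rc) && WEXITSTATUS(rc) == 127))
              std::printf("shell missing or exec failed\n");  // classic initramfs symptom

          char* const argv[] = { (char*)"/bin/echo", (char*)"BLAH", NULL };
          run("/bin/echo", argv);
          return 0;
      }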

    Read the article

  • Strange DataMapper (0.10.2) error. Please help!

    - by Joel M.
    See the full error here: http://notesapp.heroku.com/ I'm using DataMapper and dm-validations 0.10.2. No matter how much I tweak my models, I get the same error, or another one. Here's what my model looks like:

      class User
        include DataMapper::Resource
        attr_accessor :password, :password_confirmation

        property :id,               Serial,   :required => true
        property :email,            String,   :required => true, :format => :email_address, :unique => true
        property :hashed_password,  String
        property :salt,             String,   :required => true
        property :created_at,       DateTime, :default => Time.now
        property :permission_level, Integer,  :default => 1

        validates_present :password_confirmation, :unless => Proc.new { |t| t.hashed_password }
        validates_present :password, :unless => Proc.new { |t| t.hashed_password }
        validates_is_confirmed :password
      end

    Read the article

  • Simulating a 2-level If-Else using RewriteCond

    - by hlissner
    Hi! I'm trying to get my head around RewriteCond, and want to rewrite any request either to a static HTML page (if it exists), or to a specific index.php (so long as the requested file doesn't exist). To illustrate the logic:

      if HTTP_HOST is '(www\.)?mydomain.com'
          if file exists: "/default/static/{REQUEST_URI}.html"
              rewrite .* to /default/static/{REQUEST_URI}.html
          else if file exists: {REQUEST_FILENAME}
              do not rewrite
          else
              rewrite .* to /default/index.php

    I don't seem to have much trouble doing it when I don't need to test for the HTTP_HOST. Ultimately, this one .htaccess file will be handling requests for several domains. I know I could get around this with vhosts, but I'd like to figure out how to do it this way. Here's where I am at now:

      RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
      RewriteCond /default/static/%{REQUEST_URI}.html -f
      RewriteRule . /default/static/%{REQUEST_URI}.html [L,NC]

      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule . /default/index.php [L,QSA]

    I'm not too familiar with some of the other flags; will any of them be of use here (like chain|C, next|N, or skip|S)? Thanks in advance!

    UPDATE: I've managed to do it, but would appreciate alternatives:

      RewriteEngine On
      RewriteRule ^(.+)/$ /$1 [L]

      RewriteCond %{HTTP_HOST} ^(domainA|domainB)\.com [NC]
      RewriteCond %{DOCUMENT_ROOT}/%1/static/%{REQUEST_URI}.html -f
      RewriteRule (.*)? /%1/static/$1.html [NC,L]

      RewriteCond %{HTTP_HOST} ^(domainA|domainB)\.com [NC]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule .* /%1/index.php [L,QSA]

    Read the article

  • HTTP 401.3 when PUT, DELETE to ADO.NET Data Service (.svc)

    - by Nate
    I have an ADO.NET Data Service (we'll call it service.svc). When I deploy it to an IIS 6 site with Integrated Windows Authentication turned on, all requests (GET, POST, PUT, and DELETE) work fine for me, because I am an administrator on the box. However, when a non-admin user hits the service, only GET and POST requests work. When they try a PUT or DELETE request, they get an HTTP 401.3 "Access is Denied" error: "Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the web server's administrator to give you access to '...\service.svc'." If I give the "Authenticated Users" local group write access to the .svc file, everything works as it should, but I really don't want to do this (and don't think I should have to do this to get this to work). In fact, I'm confused as to why changing the file permissions would affect this at all, but it definitely seems to be the problem. I've found a couple of different suggestions to fix somewhat similar problems in the Microsoft forums (Here, and I would post more links, but am being told that new users can only post one link in a post), but none of the solutions help. Any help is much appreciated. I am certainly no IIS expert, and this one has got me stumped.

    Read the article

  • Semaphore - What is the use of initial count?

    - by Sandbox
    http://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim.aspx

    To create a semaphore, I need to provide an initial count and a maximum count. MSDN states that the initial count is "the initial number of requests for the semaphore that can be granted concurrently", while the maximum count is "the maximum number of requests for the semaphore that can be granted concurrently". I can understand that the maximum count is the maximum number of threads that can access a resource concurrently. But what is the use of the initial count? If I create a semaphore with an initial count of 0 and a maximum count of 2, none of my threadpool threads can access the resource. If I set the initial count to 1 and the maximum count to 2, only one threadpool thread can access the resource. Only when I set both the initial count and the maximum count to 2 can two threads access the resource concurrently. So I am really confused about the significance of the initial count:

      SemaphoreSlim semaphoreSlim = new SemaphoreSlim(0, 2); // all threadpool threads wait
      SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1, 2); // only one thread has access to the resource at a time
      SemaphoreSlim semaphoreSlim = new SemaphoreSlim(2, 2); // two threadpool threads can access the resource concurrently
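    For context, the point of a low initial count is that permits can be released later: starting at 0 creates the semaphore "empty", so threads block until something signals readiness, while maxCount only caps how many permits can ever be outstanding. A small C# sketch of that signaling use:

      using System;
      using System.Threading;
      using System.Threading.Tasks;

      class InitialCountDemo
      {
          static void Main()
          {
              var gate = new SemaphoreSlim(0, 2);  // no permits yet

              Task worker = Task.Factory.StartNew(() =>
              {
                  gate.Wait();                     // blocks until a permit exists
                  Console.WriteLine("resource ready, proceeding");
              });

              Thread.Sleep(500);                   // simulate initialization work
              gate.Release(2);                     // now up to two threads may enter
              worker.Wait();
          }
      }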

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008, and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. The goal of the load test is to find the resource utilization on the web, app, and DB tiers of my farm for a given load scenario. An example of a load scenario:

      Usage profile: Average collaboration (as defined by SCCP)
      User load: 500 (step load pattern: a step of 50 every 2 minutes, with a warm-up time of 2 minutes for every step)
      Think time: 0
      Load duration: 8 hours

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources under the above load profile.

    I am asking because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So if I ran the load test for more than 8 hours, % processor might go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify? How is it different from tests/sec? Another question: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.

    Read the article
