Search Results

Search found 51708 results on 2069 pages for 'http caching'.


  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

        Shared Memory Consistency Models: A Tutorial (hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf)
        Memory Barriers: a Hardware View for Software Hackers (rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf)
        Linux Kernel Memory Barriers (kernel.org/doc/Documentation/memory-barriers.txt)
        Computer Architecture: A Quantitative Approach (amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk)
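
    For concreteness, here is the same scenario as a C# sketch (an editorial illustration, not part of the original question). The strict answer the author is after ultimately comes from language- and API-level memory-model rules rather than from the hardware alone: the Java memory model explicitly makes Thread.start() a happens-before point, and .NET code generally relies on the equivalent guarantee for Thread.Start, so the sketch below is expected to print 1.

        using System;
        using System.Threading;

        // A C# rendering of the pseudo-code above (illustrative sketch only).
        // Thread.Start is relied upon to publish earlier writes to the new thread,
        // which hides the cache-invalidation timing the question worries about.
        class SharedVarDemo
        {
            static int sharedVar = 0;

            static void MyThread()
            {
                Console.WriteLine(sharedVar); // expected to print 1
            }

            static void Main()
            {
                sharedVar = 1;                // write happens before the thread is started
                var t = new Thread(MyThread);
                t.Start();
                t.Join();
            }
        }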


  • Getting Session in an HTTP handler (.ashx)

    - by prakash
    Hi all, I am using an HTTP handler (.ashx file) to serve images. I was using the Session object to get the image and return it in the response. The problem is that I now need to use a custom session object, which is nothing but a wrapper over HttpSessionState. But when I try to get the existing custom session object, a new one is created instead ... it doesn't contain the session data, and I checked the session ID, which is also different. Please advise: how can I get hold of the existing session in an .ashx file? Note: when I use the ASP.NET Session directly, it works fine.

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class GetImage : IHttpHandler, System.Web.SessionState.IRequiresSessionState
        {
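
    For reference, a minimal sketch of a session-aware image handler (my illustration; the handler name and the "ImageBytes" session key are made-up examples). The point it demonstrates is that an .ashx handler only shares the site's session when it implements IRequiresSessionState and reads from the same underlying HttpContext session that the custom wrapper is supposed to wrap.

        using System.Web;
        using System.Web.SessionState;

        // Illustrative session-aware handler; "ImageBytes" is a hypothetical key.
        public class GetImageHandler : IHttpHandler, IRequiresSessionState
        {
            public void ProcessRequest(HttpContext context)
            {
                var bytes = context.Session["ImageBytes"] as byte[];
                if (bytes != null)
                {
                    context.Response.ContentType = "image/jpeg";
                    context.Response.BinaryWrite(bytes);
                }
                else
                {
                    context.Response.StatusCode = 404;
                }
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }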


  • C#: Is it possible to use expressions or functions as keys in a dictionary?

    - by Svish
    Would it work to use Expression<Func<T>> or Func<T> as keys in a dictionary? For example, to cache the results of heavy calculations. Here is my very basic cache from a different question of mine, changed a bit:

        public static class Cache<T>
        {
            // Alternatively using Expression<Func<T>> instead
            private static Dictionary<Func<T>, T> cache;

            static Cache()
            {
                cache = new Dictionary<Func<T>, T>();
            }

            public static T GetResult(Func<T> f)
            {
                if (cache.ContainsKey(f))
                    return cache[f];
                return cache[f] = f();
            }
        }

    Would this even work?

    Edit: After a quick test, it seems like it actually works. But I discovered that it could probably be more generic, since right now there is one cache per return type... not sure how to change it so that wouldn't happen, though.

    Edit 2: No, wait... it actually doesn't. Well, for regular methods it does, but not for lambdas. They get various random method names even if they look the same. Oh well c",)
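
    The behaviour described in Edit 2 can be reproduced with a small test (my sketch, not from the question): two method-group delegates for the same named method compare equal and therefore hit the cache, while two separately written but identical-looking lambdas are distinct delegate instances and miss.

        using System;
        using System.Collections.Generic;

        // Illustrative sketch of the delegate-equality behaviour mentioned in Edit 2.
        class DelegateKeyDemo
        {
            static int Heavy() { return 42; }

            static void Main()
            {
                var cache = new Dictionary<Func<int>, int>();

                Func<int> a = Heavy;
                Func<int> b = Heavy;
                cache[a] = a();
                // Same target method, so the second delegate finds the cached entry.
                Console.WriteLine(cache.ContainsKey(b));   // True

                Func<int> x = () => 42;
                Func<int> y = () => 42;
                cache[x] = x();
                // Identical-looking lambdas compile to different methods, so this misses.
                Console.WriteLine(cache.ContainsKey(y));   // False
            }
        }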


  • Multiple key/value pairs in HTTP POST where key is the same name

    - by randombits
    I'm working on an API that accepts data from remote clients, and in some requests a key in the HTTP POST almost functions as an array. In plain English: say I have a resource on my server called "class". A class in this sense is the kind a student sits in and a teacher educates in. When the user submits an HTTP POST to create a new class for their application, a lot of the key/value pairs look like:

        student_name: Bob Smith
        student_name: Jane Smith
        student_name: Chris Smith

    What's the best way to handle this on the client side (let's say the client is cURL or ActiveResource, whatever), and what's a decent way of handling it on the server side if my server is a Ruby on Rails app? I need a way to allow multiple keys with the same name without any namespace clashing or loss of data.
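
    On the client side, here is a hedged sketch in C# (my illustration; the URL and the Rails-style bracketed parameter names are assumptions). Form encoders generally accept repeated keys, and Rails collects parameters named student_name[] into an array in params, which avoids the clashing the question mentions.

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Illustrative client-side sketch (the URL is made up). FormUrlEncodedContent
        // accepts repeated keys; the Rails convention "student_name[]" makes
        // params[:student_name] arrive as an array on the server.
        class CreateClassClient
        {
            static async Task Main()
            {
                var pairs = new List<KeyValuePair<string, string>>
                {
                    new KeyValuePair<string, string>("student_name[]", "Bob Smith"),
                    new KeyValuePair<string, string>("student_name[]", "Jane Smith"),
                    new KeyValuePair<string, string>("student_name[]", "Chris Smith"),
                };

                using (var client = new HttpClient())
                {
                    var response = await client.PostAsync(
                        "http://api.example.com/classes",
                        new FormUrlEncodedContent(pairs));
                    Console.WriteLine(response.StatusCode);
                }
            }
        }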


  • Cached Schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform", and after it sunk in I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought: why not use a hash to cache the results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books and in general better circulated? On first glance, I think the cached version should be more efficient.
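
    The same idea carries over to other languages. As an editorial illustration (not part of the original question), here is a C# sketch of memoizing an expensive sort key in a dictionary so repeated values only pay for the computation once; ExpensiveKey and the sample data are stand-ins.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative sketch: cache an expensive sort key per distinct value
        // (ExpensiveKey is a made-up placeholder for the costly transformation).
        class MemoizedSortDemo
        {
            static int ExpensiveKey(string s)
            {
                return s.Length;   // pretend this is costly
            }

            static void Main()
            {
                var unsorted = new[] { "pear", "fig", "pear", "banana", "fig" };
                var cache = new Dictionary<string, int>();

                int CachedKey(string s)
                {
                    int key;
                    if (!cache.TryGetValue(s, out key))
                        cache[s] = key = ExpensiveKey(s);
                    return key;
                }

                var sorted = unsorted.OrderBy(CachedKey).ToList();
                Console.WriteLine(string.Join(", ", sorted));
            }
        }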


  • Can widgets be cached?

    - by movieuser
    Is there a way to cache widgets? For example, if you place your widgets on high-volume websites, then each time someone accesses one of those sites, a call is made to your server to fetch the widget code. This way my server can get badly overloaded just to display the widget. Can I cache the widget HTML and place it on a CDN such as Akamai? Any suggestions or tips are highly appreciated. Thanks in advance.
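
    One common ingredient, sketched here as an illustration (my addition, assuming the widget is served by an ASP.NET handler): send cacheable HTTP headers so that a CDN or the browser can keep a copy instead of hitting the origin server for every page view.

        using System;
        using System.Web;

        // Illustrative sketch: mark the widget response as publicly cacheable so a CDN
        // (or the browser) can serve it without contacting the origin for ten minutes.
        public class WidgetHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/html";
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
                context.Response.Write("<div class=\"my-widget\">widget markup here</div>");
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }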


  • I can't change HTTP request header Content-Type value using jQuery

    - by Matt
    Hi, I tried to override the HTTP request's Content-Type header using jQuery's AJAX function. It looks like this:

        $.ajax({
            type: "POST",
            url: url,
            data: data,
            contentType: "application/x-www-form-urlencoded;charset=big5",
            beforeSend: function(xhr) {
                xhr.setRequestHeader("Accept-Charset", "big5");
                xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded;charset=big5");
            },
            success: function(rs) {
                target.html(rs);
            }
        });

    The Content-Type header defaults to "application/x-www-form-urlencoded; charset=UTF-8", but apparently I can't override its value no matter whether I use the 'contentType' or the 'beforeSend' approach. Could anyone give me a hint: how do I (and can I) change the HTTP request's Content-Type value? Thanks a lot. By the way, is there any good documentation where I can study XMLHttpRequest's encoding handling?


  • Simulate Incorrect Content-Length Headers for HTTP in C#

    - by cfeduke
    We are building a comprehensive integration test framework in C# for our application, which sits on top of HTTP and is hosted in IIS7. As part of our integration tests we want to exercise incoming requests that result in EndOfStreamExceptions ("Unable to read beyond end of stream"), which occur when a client sends an HTTP header indicating a larger body size than it actually transmits in the body. We want to test our error-recovery code for this condition, so we need to simulate these sorts of requests. I am looking for a .NET Framework-based socket library or a custom HttpWebRequest replacement that specifically allows developers to simulate such conditions, to add to our integration test suite. Does anyone know of any such libraries? A scriptable solution would work as well.
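
    At the raw-socket level the condition is easy to provoke without a dedicated library. Here is a hedged C# sketch (my illustration, pointed at localhost) that declares a Content-Length of 100 but sends only a few body bytes before closing the connection.

        using System;
        using System.Net.Sockets;
        using System.Text;

        // Illustrative sketch: hand-craft a request whose Content-Length overstates the
        // body so the server's read of the entity body runs off the end of the stream.
        class ShortBodyRequest
        {
            static void Main()
            {
                using (var client = new TcpClient("localhost", 80))
                using (var stream = client.GetStream())
                {
                    string request =
                        "POST /test HTTP/1.1\r\n" +
                        "Host: localhost\r\n" +
                        "Content-Type: application/x-www-form-urlencoded\r\n" +
                        "Content-Length: 100\r\n" +     // promises 100 bytes...
                        "\r\n" +
                        "a=1";                          // ...but sends only 3
                    byte[] bytes = Encoding.ASCII.GetBytes(request);
                    stream.Write(bytes, 0, bytes.Length);
                }   // closing the connection leaves the body short
            }
        }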


  • CSS cache problem in Google Chrome

    - by Daniel Garcia
    I'm having problems with caching, I think. I have a Joomla website with several .css files (layout.css, position.css, ....), and on the home page of the site there are 3 buttons. I tested on localhost, and once the home page looked right I uploaded everything to production on my server. Now I've just edited some styles for these buttons, for example the width, to make them look better... but I'm having a problem with the cache, because sometimes I see the new changes and other times I still see the old styles. I've noticed this happens especially with Chrome. Could you help me, please? Best regards, Daniel


  • Securely persist session between https://secure.yourname.com and http://www.yourname.com on a Rails app

    - by Matt
    My Rails site posts to a secure host (e.g. 'https://secure.yourname.com') when the user logs into the site. Session data is stored in the database, with the cookie containing only the session ID. The problem is that when the user returns to a non-HTTPS page, such as the home page (e.g. 'http://www.yourname.com'), the user appears to have been logged out. I believe the reason for this is that a separate cookie is stored for each host (www vs. secure). Is this correct? What is the best secure way to persist the session between both the HTTP and HTTPS sections of the site? Does anyone know of any plugins that address this problem? The site runs on Heroku.


  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Built a brand new server with a fresh copy of Windows Server 2003 Enterprise x86 Edition, then:

        - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0.
        - Added the "Domain Controller" and "Application Server" roles.
        - Created a new website and pointed it to a local directory: C:\Inetpub\angryoctopus.net\
        - Added the appropriate headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs.
        - Moved the website content into the local directory.
        - Configured the default document in IIS: Default.aspx
        - Enabled ASP.NET for this website and set it to the correct version: 2.0.50727
        - Configured the zone angryoctopus.net in DNS and tested a DNS lookup to ensure DNS was functional.
        - Opened the website in VS 2008 and re-built (and debugged) it to ensure the content was functional.

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since that does not use the angryoctopus header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?


  • ASP.NET cached aspx page & IIS logs

    - by Vishal Seth
    Hi guys, is there any way to find out whether the ASP.NET runtime served a cached copy of an ASPX page or actually went through the page life cycle? Here is my problem: I'm seeing many entries in my IIS log files that were served successfully (200 OK). I have corresponding logging code (using the Log4Net API) in the Session_Start and Application_BeginRequest() events that logs every request to my DB with additional details. For some requests that the Log4Net code should have recorded, I'm not seeing any corresponding entries in my SQL DB. Are there any logs available to find out whether a cached copy was served by the .NET worker process? Moreover, if my logging code threw an exception, wouldn't that show up as a 500 in the IIS logs? The code runs on Windows Server 2008 with IIS 7.


  • How to cache pages using background jobs?

    - by Alexandre
    Definitions: resource = a collection of database records; regeneration = processing these records and outputting the corresponding HTML.

    Current flow:

        - Receive client request
        - Check for resource in cache
        - If not in cache or cache expired, regenerate
        - Return result

    The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that could result in a couple of processes regenerating the exact same resource simultaneously, each taking 10-15 seconds. Wouldn't it be preferable to have the frontend signal some background process saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable, so all resources would have to be in the cache ahead of time. That could be a problem, as the database would almost be duplicated on the filesystem (it's too big to fit in memory). Is there a way to avoid this? It's not ideal, but it seems like the only way out. And then there's one more problem: how do I keep two processes from requesting the regeneration of a resource at the same time? The background process could be regenerating the resource when a frontend asks for the regeneration of the same resource. I'm using PHP and the Zend Framework, in case someone wants to offer a platform-specific solution. Not that it matters, though - I think this problem applies to any language/framework. Thanks!
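
    Since the question says the problem applies to any language, here is one common pattern for it, sketched in C# purely as an illustration (Regenerate, the string key, and the five-minute TTL are stand-ins I chose): only the first caller past the expiry triggers a rebuild, and everyone else keeps getting the stale copy until the fresh one is swapped in, so no user ever sees "Rebuilding".

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        // Illustrative sketch of stampede protection: serve the stale entry while at most
        // one background task per key rebuilds it (Regenerate is a made-up placeholder).
        class StaleWhileRegenerate
        {
            class Entry
            {
                public string Html;
                public DateTime Expires;
                public int Rebuilding;   // 0 = idle, 1 = a rebuild is in flight
            }

            static readonly ConcurrentDictionary<string, Entry> cache =
                new ConcurrentDictionary<string, Entry>();

            static string Regenerate(string key)
            {
                // expensive 10-15 second job in the real system
                return "<html>" + key + "</html>";
            }

            public static string Get(string key)
            {
                var entry = cache.GetOrAdd(key, _ => new Entry
                {
                    Html = Regenerate(key),             // first-ever request pays the cost
                    Expires = DateTime.UtcNow.AddMinutes(5)
                });

                if (DateTime.UtcNow > entry.Expires &&
                    Interlocked.CompareExchange(ref entry.Rebuilding, 1, 0) == 0)
                {
                    // only one caller wins the right to rebuild; others keep the stale copy
                    Task.Run(() =>
                    {
                        entry.Html = Regenerate(key);
                        entry.Expires = DateTime.UtcNow.AddMinutes(5);
                        entry.Rebuilding = 0;
                    });
                }

                return entry.Html;   // stale or fresh, but never "Rebuilding"
            }
        }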


  • PHP thumbnail image generator works, but consistently spends 90% of its load time waiting! Why?

    - by Sam
    Hi folks, this is the follow-up to my general page-load-speed question. This question focuses only, entirely and specifically on the thumbnail generator's PHP code, with one question: what's causing the 97% delays on these tiny thumbnails? Measuring only 3-5 kB each, this is strange indeed! Even after 7 or so bounty rounds and 100 upvotes, the images are still doing nothing for over 97% of their load time. It works all right, but it could be so much faster! ZAM, my pet robot, was so annoyed by not knowing why, he just pulled out some cords! Silly metal box... (I have asked permission from the original author to temporarily put online the segments of the code that I think could be the problem, after which I will remove all the code except the few lines that turn out to be the direct cause, to match the answer. This will be a free opportunity to perfect and speed up his otherwise already, in my opinion, very versatile and user-friendly thumbnail generator.)


  • How to clone a mercurial repository over an ssh connection initiated by fabric when http authorization is required

    - by Monika Sulik
    I'm attempting to use Fabric for the first time and I really like it so far, but at a certain point in my deployment script I want to clone a Mercurial repository. When I get to that point I get an error:

        err: abort: http authorization required

    My repository requires HTTP authorization, and Fabric doesn't prompt me for the user and password. I can get around this by changing my repository address from:

        https://hostname/repository

    to:

        https://user:password@hostname/repository

    But for various reasons I would prefer not to go this route. Are there any other ways I could get around this problem?


  • Subversion: svn protocol with HTTP/HTTPS proxy

    - by Neeraj
    Hi all, I need to do an svn checkout, say svn checkout svn://XYZ.com/trunk. I am using the svn client from behind a proxy. I have accessed other repositories using the http protocol in the past, but with the svn protocol it fails with "Connection refused"; the reason, I think, is that the port is not allowed by the proxy. Unfortunately, the HTTP protocol is not supported on the server. svn+ssh does connect, but it prompts for an account on that server, which I don't have. Is there any way out other than requesting an account? Note that I can't change the settings of the proxy server.


  • Why does stored procedure invalidate SQL Cache Dependency?

    - by Fabio Milheiro
    After many hours, I have finally realized that I am working correctly with the Cache object in my ASP.NET application, but my stored procedure stops it from working correctly. This stored procedure works correctly:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC'
        AS
        BEGIN
            SELECT ID, [Name], Flag, IsDefault FROM dbo.Languages
        END

    But this one (the one I wanted) doesn't:

        CREATE PROCEDURE [dbo].[ListLanguages]
            @Page INT = 1,
            @ItemsPerPage INT = 10,
            @OrderBy NVARCHAR (100) = 'ID',
            @OrderDirection NVARCHAR(4) = 'DESC',
            @TotalRecords INT OUTPUT
        AS
        BEGIN
            SET @TotalRecords = 10

            EXEC('SELECT ID, Name, Flag, IsDefault FROM (
                      SELECT ROW_NUMBER() OVER (ORDER BY ' + @OrderBy + ' ' + @OrderDirection + ') as Row,
                             ID, Name, Flag, IsDefault
                      FROM dbo.Languages) results
                  WHERE Row BETWEEN ((' + @Page + '-1)*' + @ItemsPerPage + '+1) AND (' + @Page + '*' + @ItemsPerPage + ')')
        END

    I gave the @TotalRecords parameter the value 10 so you can be sure that the problem is not coming from the COUNT(*) function, which I know is not supported well. Also, when I run it from SQL Server Management Studio, it does exactly what it should do. In the ASP.NET application the results are retrieved correctly; only the cache is somehow unable to work! Can you please help?

    Maybe a hint: I believe the reason the dependency's HasChanged property is always set to true is related to the fact that the Row column generated by ROW_NUMBER is only temporary, and therefore SQL Server is not able to say whether the results have changed or not. Does anyone know how to paginate results from SQL Server without using the COUNT or ROW_NUMBER functions?
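
    For context, this is roughly how such a query-notification dependency is wired up on the ASP.NET side; a hedged sketch of my own, where connectionString is assumed and SqlDependency.Start(connectionString) is assumed to have been called at application start-up. Query notifications require explicit column lists and two-part table names, which is why the command text below mirrors the first stored procedure.

        using System.Data;
        using System.Data.SqlClient;
        using System.Web;
        using System.Web.Caching;

        // Illustrative sketch: attach a SqlCacheDependency (query-notification based)
        // to a cached copy of the Languages table.
        public static class LanguageCache
        {
            public static DataTable GetLanguages(string connectionString)
            {
                var cached = HttpRuntime.Cache["Languages"] as DataTable;
                if (cached != null)
                    return cached;

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT ID, Name, Flag, IsDefault FROM dbo.Languages", connection))
                {
                    // Notifications need explicit columns and two-part table names.
                    var dependency = new SqlCacheDependency(command);

                    connection.Open();
                    var table = new DataTable();
                    table.Load(command.ExecuteReader());

                    HttpRuntime.Cache.Insert("Languages", table, dependency);
                    return table;
                }
            }
        }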


  • ColdFusion's cfquery failing silently

    - by johnthexiii
    I have a query that retrieves a large amount of data:

        <cfsetting requesttimeout="9999999" >

        <cfquery name="randomething" datasource="ds" timeout="9999999" >
            SELECT col1, col2
            FROM table
        </cfquery>

        <cfdump var="#randomething.recordCount#" /> <!--- should be about 5 million rows --->

    I can successfully retrieve the data with Python's cx_Oracle, and using sys.getsizeof on the Python list returns 22621060, so about 21 megabytes. ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows?

    Additional information: The reason for doing it this way is that I have about 8000 smaller queries to run against the randomething query. In other words, when I run those 8000 queries against the database it takes hours for the process to complete. I suspect this is because I am competing with several other database users, and the database is getting bogged down. The 8000 smaller queries get counts of col1 over a period of col2:

        SELECT count(col1) as count
        FROM table
        WHERE col2 < 20121109
          AND col2 > 20121108

    Following Adam Cameron's suggestions: cflog suggests that the query isn't finishing. I tried changing the query's timeout both in the code and in the CFIDE administrator; apparently CF9 no longer respects the timeout attribute, and regardless of what I tried I couldn't get the query to time out. I also started playing around with the maxrows attribute to see if I could discern any information that way:

        - when maxrows is set to 1300000, everything works fine
        - when maxrows is 1400000 or greater, I get this error
        - when maxrows is 2000000, I observe my original problem

    Update: So this isn't a limit of cfquery. By using QueryNew and then looping over it to add data, I can get well past the 2 million mark without any problems. I also created a Thin Client datasource using the information in this question; I didn't observe any change in behavior. The messages on the database end are "SQL*Net message from client" and "SQL*Net more data to client". I just discovered that by using the thin client along with blockfactor="100" I can retrieve more rows (approx. 3,000,000).


  • Problem getting an HTTP response in Chrome

    - by Bhaskasr
    Hi, I am trying to get an HTTP response from a PHP web service in JavaScript, but I am getting null in both Firefox and Chrome. Please tell me where I am making a mistake. Here is my code:

        function fetch_details() {
            var xhttp;
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
                alert("first");
            } else {
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
                alert("sec");
            }
            alert("hi.");
            xhttp.open("GET", "http://122.166.97.94:8080/proxim_live/phpsend.php?UserID=881&DeviceID=Imm123&LastSyncDate=", false);
            xhttp.send("");
            var xmlDoc = xhttp.responseXML;
            alert(xmlDoc);
            alert(xmlDoc.getElementsByTagName("Inbox")[0].childNodes[0].nodeValue);
        }


  • Role provider and Role management

    - by AspOnMyNet
    "When the CacheRolesInCookie property is set to true in the Web.config file, role information for each user is stored in a cookie. When role management checks to see whether a user is in a particular role, the roles cookie is checked before the role provider is called to check the list of roles at the data source. The cookie is dynamically updated to cache the most recently validated role names."

    a) As far as I understand the text above, even though role management checks the roles cookie, does the role provider still check the list of roles at the data source?

    b) The text above talks about role management, which is invoked before the role provider is called. Which class acts as the role management? Thanks


  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components that would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available to a number of frontends. Those "frontends" just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second.

    The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached, however this particular one doesn't fulfil all our requirements, which are listed below:

        - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
        - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
        - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients when a node goes down.
        - Ideally it should be possible to specify the degree of data overlap (e.g. how many copies of the same data should be stored in the DSM cluster).
        - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
        - Price: obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
        - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
        - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.


  • Cache Wrapper with expressions

    - by Fujiy
    I don't know if this is possible. I want a class to encapsulate all the caching for my site, and I'm thinking about the best way to do this while avoiding key conflicts. My first idea is something like this:

        public static TResult Cachear<TResult>(this Cache cache, Expression<Func<TResult>> funcao)
        {
            string chave = funcao.ToString();
            if (!(cache[chave] is TResult))
            {
                cache[chave] = funcao.Compile()();
            }
            return (TResult)cache[chave];
        }

    Is this the best way? Thanks
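
    For illustration, this is how the extension method would be called (my sketch; LoadSettings is a made-up stand-in, and the method above is assumed to live in an accessible static class). One thing to watch, offered as an observation rather than a verdict: the expression's ToString() renders captured variables as field accesses rather than their runtime values, so two calls that close over different values can end up sharing the same cache key.

        using System;
        using System.Linq.Expressions;
        using System.Web;

        // Illustrative usage of the extension method above (LoadSettings is hypothetical).
        public class CacheUsageExample
        {
            static string LoadSettings(int id)
            {
                return "settings for " + id;   // pretend this is an expensive lookup
            }

            public static void Demo(HttpContext context)
            {
                // First call computes and stores; later calls with the same expression
                // text come straight from the cache.
                string settings = context.Cache.Cachear(() => LoadSettings(5));

                int id = 7;
                // Caution: the captured variable appears as a field access in
                // funcao.ToString(), not as its value, so this key may collide with
                // a call that captured a different id.
                string other = context.Cache.Cachear(() => LoadSettings(id));
            }
        }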

