Search Results

Search found 7217 results on 289 pages for 'jboss cache'.

Page 212/289

  • How do you implement caching in Linq to SQL?

    - by Glenn Slaven
    We've just started using LINQ to SQL at work for our DAL and we haven't really come up with a standard for our caching model. Previously we had been using a base 'DAL' class that implemented a cache manager property that all our DAL classes inherited from, but now we don't have that. I'm wondering if anyone has come up with a 'standard' approach to caching LINQ to SQL results? We're working in a web environment (IIS) if that makes a difference. I know this may well end up being a subjective question, but I still think the info would be valuable. EDIT: To clarify, I'm not talking about caching an individual result; I'm after more of an architectural solution, as in how do you set up caching so that all your LINQ methods use the same caching architecture.
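    A hedged, language-agnostic sketch of the cache-aside idea the question is asking about (TypeScript purely for illustration; the class, key names, and helper function are made up and not part of LINQ to SQL):

        // Route every query through one shared helper so all DAL calls use the same
        // cache. The key is derived from the query name plus its parameters.
        declare function runCustomersByRegionQuery(region: string): unknown; // hypothetical data-access call

        class QueryCache {
          private store = new Map<string, { value: unknown; expires: number }>();

          get<T>(key: string, ttlMs: number, run: () => T): T {
            const hit = this.store.get(key);
            if (hit && hit.expires > Date.now()) {
              return hit.value as T;          // cache hit: skip the database entirely
            }
            const value = run();              // cache miss: execute the real query
            this.store.set(key, { value, expires: Date.now() + ttlMs });
            return value;
          }
        }

        // Usage: every repository method builds its key the same way.
        const cache = new QueryCache();
        const customers = cache.get("Customers:ByRegion:EU", 60000, () =>
          runCustomersByRegionQuery("EU")
        );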

    Read the article

  • Syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in PHP

    - by pmms
    mysql_connect("localhost","root",""); mysql_select_db("hitnrunf_db"); $result=mysql_query("select * from jos_users INTO OUTFILE 'users.csv' FIELDS ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' "); header("Content-type: text/plain"); header("Content-Disposition: attachment; filename=your_desired_name.xls"); header("Content-Transfer-Encoding: binary"); header("Pragma: no-cache"); header("Expires: 0"); print "$header\n$data"; In the above code, the string passed to mysql_query produces the following error: Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in C:\wamp\www\samples\mysql_excel\exel_outfile.php on line 8. It seems the '\n' character in the query string is not being recognised as part of the string, which is why the error occurs.

    Read the article

  • drupal's hook_preprocess_page not working as expected

    - by Peter Carrero
    I am having an issue where hook_preprocess_page's changes to &$variables are not being rendered, even though it is the last item under $theme_registry['page']['preprocess functions']. Logging the contents of $variables to a file shows that the contents have changed, but they appear unchanged on the site. I flushed all caches in Drupal, flushed all browser caches, and still get the same result. /** * Implementation of hook_preprocess_page(). */ function grinchlist_preprocess_page(&$variables) { if (grinchlist_usercheck($variables['user']['uid'])) { $variables['scripts'] = preg_replace('/<script[^>]*christmas_snow.*<\/script>/','',$variables['scripts']); } file_put_contents('/tmp/vars.txt',print_r($variables,true)); } The /tmp/vars.txt file shows the variables properly, but the browser still shows the script being loaded. This may be a silly example, but I've had this issue with hook_preprocess_page in other instances and it would really help to understand what is going on here... thanks.

    Read the article

  • How do I include 2 tables in one LocalStorage item?

    - by Noor
    I've got a table that you can edit, and I've got simple code saving that list when you're done editing it (the tables have contenteditable on). The problem I've stumbled upon is that if I press Enter twice, the table gets divided into two separate tables with the same ID. This causes the code I'm using to set the localStorage item to only store one of the tables (I assume the first). I've thought of different solutions and I wonder if someone could point out the pros and cons (if the solutions even work, that is). 1. Make a loop that searches the page for tables and stores them in an array of localStorage items; I'd have to dynamically create a localStorage item for each table. 2. Take the whole div that the tables are in and store that in localStorage; when a user revisits the page, the page checks for the items in storage and displays the whole divs. Any suggestions you have that can beat these :).. (but no cache, it has to be with localStorage!) Thanks
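    A minimal sketch of the first option (one localStorage item per table); the container id "table-editor" and the key names are made up for illustration:

        // Store every table in the editing container under its own key, then
        // rebuild the container from those keys on the next visit.
        const container = document.getElementById("table-editor") as HTMLElement;

        function saveTables(): void {
          const tables = container.querySelectorAll("table");
          localStorage.setItem("tableCount", String(tables.length));
          tables.forEach((table, i) => {
            localStorage.setItem("table-" + i, table.outerHTML);
          });
        }

        function restoreTables(): void {
          const count = Number(localStorage.getItem("tableCount") || 0);
          let html = "";
          for (let i = 0; i < count; i++) {
            html += localStorage.getItem("table-" + i) || "";
          }
          if (html) container.innerHTML = html;
        }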

    Read the article

  • Issues using $.ajax

    - by Nimbuz
    I want to load the lightbox javascript only when a certain condition is satisfied so I'm loading it using $.ajax like so: $.ajax({ url: "../static/js/lightbox.js", dataType: 'script', cache: true, success: function() { alert('loaded'); $("a.lightbox").lightbox({ opacity: "0.6", width: "940" }); }}); I see the "loaded" alert but the lightbox does not work. However, when I load the file directly (script src) from the HTML, lightbox works. How do I fix this? Many thanks for your help.
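    A hedged variant, assuming the script path and selector from the snippet are correct: $.getScript is jQuery's shorthand for an ajax call with dataType "script", and the plugin is initialised only after the file has actually executed. If this still fails, the usual suspects are the relative URL resolving differently than the script-tag version, or the plugin needing its CSS or a DOM-ready document.

        declare const $: any; // jQuery is already on the page

        // Load the plugin on demand, then wire it up once the script has run.
        $.getScript("../static/js/lightbox.js", () => {
          $("a.lightbox").lightbox({ opacity: "0.6", width: "940" });
        });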

    Read the article

  • Does a WCF service with basicHttpBinding create a new connection for each request?

    - by Phil Wright
    I have a Silverlight client calling a WCF Service on an IIS web server. It uses the default basicHttpBinding setting for the calls. My client code has the usual Visual Studio generated proxy that is generated when using the 'Update Service Reference' menu option. Does every call to the service using that proxy use the same connection? Or does it create a connection each time a call is made and then close it down once the reply is received? As the client is actually making a SOAP call over HTTP I just assumed that every service request had a new connection created, but I want to check if that is the case. (I need to know because if it creates a new connection each time then each request could end up at a different server, because there are several servers being load balanced. If it uses a single connection for the duration of the proxy then I can assume they all end up at the same machine and so it could cache state.)

    Read the article

  • Load vs Get in Nhibernate

    - by Quintin Par
    The master page in my web application does authentication and loads up the user entity using a Get. After this, whenever the user object is needed by the user controls or any other class I do a Load. Normally NHibernate is supposed to load the object from the cache or return the persistent loaded object whenever Load is called. But this is not the behavior shown by my web application: NHProf always shows the SQL whenever Load is called. How do I verify the correct behavior of Load? I use the S#arp Architecture framework.

    Read the article

  • Can DBRefs contain additional fields?

    - by Soviut
    I've encountered several situations when using MongoDB that require the use of DBRefs. However, I'd also like to cache some fields from the referenced document in the DBRef itself. {$ref:'user', $id:'10285102912A', username:'Soviut'} For example, I may want to have the username available even though the user document is referenced. This would give me all the benefits of a single-document approach: faster querying and eliminating the need to do manual dereferencing in my code, while at the same time allowing me to use references where they make sense. The idea being that when the referenced document is updated (a user changes their name, for example) my business layer can automatically update all the documents that reference it. Ultimately, I'm wondering if it's considered good form to store additional fields on my DBRefs? Will it break anything? Will I lose my data each time a reference is rewritten? Will drivers like pymongo support it?

    Read the article

  • JQuery never reaches success or fail on aspx returning json.

    - by David
    Basically I have my aspx page doing <% Response.Clear(); Response.Write("{\"Success\": \"true\" }"); Response.End(); %> My jQuery code is function DoSubmit(r) { if (r == null || r.length == 0 || formdata == null || formdata.length == 0) return; for (i = 0; i < formdata.length; i++) { var fd = formdata[i]; r[fd.Name] = fd.Value; } r["ModSeq"] = tblDef.ModSeq; jQuery.ajax({ url: "NashcoUpdate.aspx" , succsess: doRow , error: DoSubmitError , complete: DoSubmitComplete , dataType: "json" , cache: false , data: r , type: "post" }) } When I call the DoSubmit() function everything works, but the doRow or DoSubmitError functions never get called, only the DoSubmitComplete function. When I look at the response text in the DoSubmitComplete function it is {"Success": "true" } Every JSON tester I have tried says that this is valid JSON. What am I doing wrong here?
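    One thing worth checking, not confirmed in the post: jQuery silently ignores option names it does not recognise, so a misspelled success key means only the complete handler ever fires. A corrected version of the call would look like this (helper names as in the question):

        declare const $: any; // jQuery
        declare const r: any; // the request object built by DoSubmit
        declare function doRow(data: any): void;
        declare function DoSubmitError(xhr: any, status: string, err: string): void;
        declare function DoSubmitComplete(xhr: any, status: string): void;

        $.ajax({
          url: "NashcoUpdate.aspx",
          type: "post",
          data: r,
          dataType: "json",
          cache: false,
          success: doRow,          // "succsess" in the original is not a jQuery option
          error: DoSubmitError,
          complete: DoSubmitComplete
        });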

    Read the article

  • Dns caching for sockets

    - by Poma
    I'm connecting to some websites through a SOCKS proxy server. In my case it would be very good to implement a DNS cache, so the proxy doesn't need to resolve the website's IP address. So I performed the DNS lookup, but I don't know where to supply the IP address. mySocket.Connect uses the proxy's IP address, so that isn't the right place. I tried to place it in the HTTP request line, GET http://11.22.33.44/index.html HTTP/1.1 - this doesn't work (even in a browser) since the website is on virtual hosting. It seems that the Host header is the right place for the resolved IP address. Am I right? Will the proxy resolve the host name (since it's still there in the GET header) or not?
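    A hedged sketch of the idea in Node.js/TypeScript (the question's own stack looks different, and the SOCKS handshake is omitted): resolve and cache the IP yourself, connect to the IP, but keep the original host name in the Host header, since virtual hosting is keyed on that header rather than on the request line.

        import * as dns from "dns";
        import * as net from "net";

        const dnsCache = new Map<string, string>();

        function resolveCached(host: string, cb: (ip: string) => void): void {
          const hit = dnsCache.get(host);
          if (hit) { cb(hit); return; }
          dns.lookup(host, (err, address) => {
            if (err) throw err;
            dnsCache.set(host, address);   // cache the resolved IP for later requests
            cb(address);
          });
        }

        resolveCached("example.com", (ip) => {
          const socket = net.connect(80, ip, () => {
            // Request line uses the path only; the Host header carries the site name.
            socket.write("GET /index.html HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
          });
          socket.on("data", (chunk) => process.stdout.write(chunk));
        });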

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's the sort of universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about hundreds of millions of URLs. There is (or should be; MySQL cannot seem to handle it) a very high volume of insert / update / retrieve operations but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • Why better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring performance on my query I came across a dependency between isolation level and elapsed time that was surprising to me: READUNCOMMITTED - 409024, READCOMMITTED - 368021, REPEATABLEREAD - 358019, SERIALIZABLE - 348019. The left column is the table hint, and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a better isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.

    Read the article

  • jQuery Ajax works in Firefox, fails in IE when calling Controller action

    - by myaesubi
    Hello there, I'm making the following jQuery ajax call to an action in ASP.NET MVC. In Firefox the async request is sent to the action in the controller and everything works fine, but in IE no request is sent to the controller. Here is the ajax call and action controller signature: $.ajax({ cache: false, type: "GET", dataType: "json", contentType: "application/json; charset=utf-8", url: "/Fmz/AssignFmzToRegion", data: { fmzId: 403, regionId: 409 }, success: function(message) { if (message != 'Success') alert(message); }, failure: function(message) { alert(message); } }); [HttpGet] public JsonResult AssignFmzToRegion(long fmzId, long regionId) { try { FacilityManagementZoneService.AssignFmzToRegion(fmzId, regionId); } catch (Exception e) { return this.Json(e.Message, JsonRequestBehavior.AllowGet); } return this.Json("Success", JsonRequestBehavior.AllowGet); } Thanks.
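    A hedged debugging step, not taken from the post: $.ajax has an error option but no failure option, so the second alert above can never fire and any IE-side failure goes unreported. Wiring up a real error handler usually shows whether IE is failing the request or receiving something unexpected:

        declare const $: any; // jQuery; fmzId/regionId values as in the question

        $.ajax({
          cache: false,
          type: "GET",
          dataType: "json",
          url: "/Fmz/AssignFmzToRegion",
          data: { fmzId: 403, regionId: 409 },
          success: (message: string) => {
            if (message !== "Success") alert(message);
          },
          // "error" is the documented option name; "failure" is silently ignored.
          error: (jqXHR: any, textStatus: string, errorThrown: string) => {
            alert(textStatus + " " + errorThrown + "\n" + jqXHR.responseText);
          }
        });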

    Read the article

  • make changes to html before append with jquery

    - by Pradyut Bhattacharya
    I'm calling a page using AJAX and making changes to it using jQuery, but the changes are not reflected after appending (append) it to a div, because the changes are made to the html while the html variable remains the same. How do I make the changes to the html variable and then append it to the DOM? If I apply the code directly to the DOM, then there will be changes to the previous elements in the DOM. The code: $.ajax({ type: "POST", url: 'more_images.jsp', data: ({uid:uid, aid:aid, imgid:imgid}) , cache: false, success: function(html) { html = $.trim(html); $(html).filter('.corner-all').corner('5px'); $(html).filter('ol.c_album > li .img_c').corner('3px'); $(html).filter("ol.c_album > li ").find('a').attr('target', '_blank'); $('#images_list').append(html); } }); thanks Pradyut
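    A sketch of the usual fix, assuming the selectors from the snippet are what was intended: parse the response once, modify that one jQuery object, and append the same object. Every separate $(html) call builds a brand-new detached fragment, so changes made to one fragment never reach the fragment that gets appended.

        declare const $: any;                          // jQuery
        declare const uid: any, aid: any, imgid: any;  // as in the question

        $.ajax({
          type: "POST",
          url: "more_images.jsp",
          data: { uid: uid, aid: aid, imgid: imgid },
          cache: false,
          success: (html: string) => {
            const $html = $($.trim(html));             // parse the response exactly once
            $html.filter(".corner-all").corner("5px");
            $html.filter("ol.c_album > li .img_c").corner("3px");
            $html.filter("ol.c_album > li ").find("a").attr("target", "_blank");
            $("#images_list").append($html);           // append the modified object itself
          }
        });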

    Read the article

  • valid json still fails on IE with jquery's ajax or getJSON callbacks

    - by lock
    Every time my page loads I'm supposed to create a DataTable (also a jQuery plugin), but when I'm fetching the contents, using .ajax or .getJSON always goes straight to the error function, without even telling me what went wrong inside the callback: $.ajax({ cache: false, type: "POST", url: oSettings.sAjaxSource, data: {'newdate' : date}, contentType: "application/json; charset=utf-8", dataType: "json", success: function(json) { console.log('retrieving json data'); }, error: function() { console.log("An error has occurred. Please try again."); } }); That's the actual code with the callback stripped for security purposes... This works fine in Firefox, which actually executes what's in the success callback, but IE simply fails and proceeds to writing my log. I've read a lot that the primary reason JSON calls fail in IE is trailing commas or simply malformed JS, but I used JSONLint already and verified that my JSON object is a valid one :(
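    A hedged way to see why IE bails out, using the arguments jQuery passes to the error callback. The contentType header is dropped here because the data goes out as form fields rather than a JSON body; whether that matters for this particular server is an assumption.

        declare const $: any;              // jQuery
        declare const oSettings: any;      // DataTables settings object, as in the question
        declare const date: string;

        $.ajax({
          cache: false,
          type: "POST",
          url: oSettings.sAjaxSource,
          data: { newdate: date },
          dataType: "json",
          success: (json: any) => {
            // console is undefined in IE8 unless the developer tools are open,
            // so guard it rather than calling console.log unconditionally.
            if (window.console) console.log("retrieving json data", json);
          },
          error: (jqXHR: any, textStatus: string, errorThrown: string) => {
            // textStatus distinguishes a transport failure from a "parsererror",
            // and responseText shows what IE actually received.
            alert(textStatus + ": " + errorThrown + "\n" + jqXHR.responseText);
          }
        });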

    Read the article

  • UIView Animation won't run transition

    - by dpelletier
    So, I've searched quite a bit for this and can't seem to find a solution. This code works: CGContextRef context = UIGraphicsGetCurrentContext(); [UIView beginAnimations:nil context:context]; [UIView setAnimationDuration:5]; [c setCenter:CGPointMake(200, 200)]; [UIView commitAnimations]; This code doesn't: CGContextRef context = UIGraphicsGetCurrentContext(); [UIView beginAnimations:nil context:context]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:c cache:YES]; [UIView setAnimationDuration:5]; [c exchangeSubviewAtIndex:0 withSubviewAtIndex:1]; [UIView commitAnimations]; And I know the call to exchangeSubViewAtIndex is working because if I remove it from the animation block it functions as expected. Anyone have any insight as to why this transition won't run? Something I need to import?

    Read the article

  • java - volatile keyword

    - by Tiyoal
    Say I have two threads and an object. One thread assigns the object: public void assign(MyObject o) { myObject = o; } Another thread uses the object: public void use() { myObject.use(); } Does the variable myObject have to be declared as volatile? I am trying to understand when to use volatile and when not, and this is puzzling me. Is it possible that the second thread keeps a reference to an old object in its local memory cache? If not, why not? Thanks a lot.

    Read the article

  • ajax call returns null in my facebook iframe !!!

    - by uhsp
    Hi, I am using jQuery to send an ajax request from my Facebook app, which is in an iframe, to my server. The ajax request works fine when the web app is running standalone, outside the Facebook platform, but within Facebook the result that I get from my ajax request is blank! Here is the code I use: $.ajax({ url: 'http://mydomain.com/search', data: params, cache: false, dataType: 'json', success: function(posts) { // post is null !!!!! } error: function(json) { alert('error'); } }); I appreciate it if anybody can help me with this. Thanks. uhsp
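    Two things worth ruling out, neither confirmed by the post: the snippet is missing a comma between the success and error handlers (a plain syntax error), and if the iframe page and http://mydomain.com/search end up on different origins (for example https on Facebook against http on the API), an ordinary JSON request comes back empty. JSONP, which requires the server to wrap the response in a callback, is the usual workaround. A hedged sketch:

        declare const $: any;       // jQuery
        declare const params: any;  // search parameters, as in the question

        $.ajax({
          url: "http://mydomain.com/search",
          data: params,
          cache: false,
          dataType: "jsonp",        // assumes the server can answer a ?callback=... request
          success: (posts: any) => {
            alert("got " + (posts ? posts.length : 0) + " posts");
          },                        // this comma is missing in the original snippet
          error: () => {
            alert("error");
          }
        });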

    Read the article

  • SSL confirmation dialog popup auto closes in IE8 when re-accessing a JNLP file

    - by haylem
    I'm having this very annoying problem to troubleshoot and have been going at it for way too many days now, so have a go at it.

    The Environment

    We have 2 app-servers, which can be located on either the same machine or 2 different machines, use the same signing certificate, and host 2 different web-apps. Though let's say, for the sake of our study case here, that they are on the same physical machine. So, we have: https://company.com/webapp1/ https://company.com/webapp2/ webapp1 is a GWT-based rich client which contains on one of its screens a menu with an item that is used to invoke a Java Web Start client located on webapp2. It does so by performing a simple window.open call via this GWT call: Window.open("https://company.com/webapp2/app.jnlp", "_blank", null);

    Expected Behavior

    User merrily goes to webapp1. User navigates to the menu entry to start the Web Start app and clicks on it. The browser fires off a separate window/dialog which, depending on the browser and its security settings, will: request confirmation to navigate to this secure site, directly download the file, and possibly auto-execute a javaws process if there's a file association; otherwise the user can simply click on the file and start the app (or go about doing whatever it takes here). If you close the app, close the dialog, and re-click the menu entry, the same thing should happen again.

    Actual Behavior

    On anything but God-forsaken IE 8 (though I admit there's also all the god-forsaken pre-IE8 stuff, but the Requirements Lords being merciful we have already recently managed to make them drop these suckers. That was close. Let's hold hands and say a prayer of gratitude.): stuff just works. The JNLP gets downloaded, the app executes just fine, you can close the app and re-do all the steps and it will restart happily. People rejoice. Puppies are safe and play on green hills in the sunshine. Developers can go grab a coffee and move on to more meaningful and rewarding tasks, like checking out SO questions. Chrome doesn't want to execute the JNLP, but who cares? Customers won't get RSI from clicking a file every other week.

    On God-forsaken IE8: on the first visit, the dialog opens and requests confirmation for the user to continue to webapp2, though it could be unsafe (here be dragons, I tell you). The JNLP downloads and auto-opens, the app starts. Your breathing is steady and slow. You close the app, close that SSL confirmation dialog, and re-click the menu entry. The dialog opens and auto-closes. Nothing starts, the file wasn't downloaded to any known location and Fiddler just reports the connection was closed. If you close IE and reach that menu item to click it again, it is now back to working correctly. Until you try again during the same session, of course. Your heart rate goes up, you get some more coffee to make matters worse, and start looking for plane tickets online and a cheap but heavy golf club on an online auction site to go clubbing baby polar seals to avenge your bloodthirst, as the gates to the IE team in Redmond are probably more secured than an ice block, as one would assume they get death threats often. Plus, the IE9 and IE10 teams are already hard at work fixing the crap left by their predecessors, so maybe you don't want to be too hard on them, and you don't have money to waste on a PI to track down the former devs responsible for this mess.

    Added Details

    I have come across many problems with IE8 not downloading files over SSL when it uses a no-cache header. This was indeed one of our problems, which seems to be worked out now. It downloads files fine; webapp2 uses the following headers to serve the JNLP file: response.setHeader("Cache-Control", "private, must-revalidate"); // IE8 happy response.setHeader("Pragma", "private"); // IE8 happy response.setHeader("Expires", "0"); // IE8 happy response.setHeader("Access-Control-Allow-Origin", "*"); // allow to request via cross-origin AJAX response.setContentType("application/x-java-jnlp-file"); // please exec me As you might have inferred, we get a confirmation dialog because there's something odd with the SSL certificate. Unfortunately I have no control over that. Assume that's only temporary and for development purposes, as we usually don't get our hands on the production certs. So the SSL cert is expired and doesn't specify the server, hence the confirmation dialog. Wouldn't be that bad if it weren't for IE, as other browsers don't care, just ask for confirmation, and execute as expected and consistently. Please, pretty please, help me, or I might consider sacrificial killings as an option. And I think I just found a decently priced stainless steel golf club, so I'm right on the edge of gore.

    Side Notes

    Might actually be related to IE8 window.open SSL Certificate issue. Though it doesn't explain why the dialog would auto-close (that really is beyond me...), it could help to not have the confirmation dialog and not need the dialog at all. For instance, I was thinking that just having a simple URL in that menu instead of having it entirely managed by GWT code that invokes a Window.open would solve the problem. But I don't have control over that menu, and also I'm very curious how this could be fixed otherwise and why the hell it happens in the first place...

    Read the article

  • Role provider and Role management

    - by AspOnMyNet
    When the CacheRolesInCookie property is set to true in the Web.config file, role information for each user is stored in a cookie. When role management checks to see whether a user is in a particular role, the roles cookie is checked before the role provider is called to check the list of roles at the data source. The cookie is dynamically updated to cache the most recently validated role names. a) As far as I understand the above text, even though role management checks the roles cookie, the role provider still checks the list of roles at the data source? b) The above text talks about role management, which is invoked before the role provider is called. What class acts as the role manager? thanx

    Read the article

  • How to safely purge in Varnish if backend is sick without losing content

    - by Highway of Life
    If the backend is sick, what is the preferable way to ensure that stale content can be retrieved from the backend when a PURGE request is made? When a PURGE request is made, whether or not the backend is sick, by default the content will be eliminated from the Varnish cache and if the backend is down, a 503 page would be served to the user until the backend comes back online to serve a new version of the content. I'd like to be able to at least serve up a stale version of the content if a new version could not be retrieved from the backend. Is this possible without installing the Softpurge Varnish Mod?

    Read the article

  • flex/actionscript client entity state refresh on JPA update using Pimento EntityManager

    - by Chris
    My Flex application uses a client-side pimento EntityManager which fetches quite a few objects and associations. It does this by forcing eager fetching of particular association ends in the form of fetch plans. I would like to update the client whenever a change has been made to an entity existing in the EntityManager's cache. Is it possible to update the state of the changed entity ONLY, including updating which entities are associated, without resetting the state of these associated entities? I have setup an EntityListener with a JPA post-update method that notifies clients when a persisted entity has been updated. I want this to trigger a refresh for the modified client-side entity, but calling EntityManager.refresh(entity) resets all lazy associations to proxies. Initializing these proxies resets the associated entities, even if they were loaded previously. I'm looking for an efficient way to keep the client's state in synch with the server's state, at least with respect to the entities that have already been retrieved by the initial load.

    Read the article

  • Apache or Nginx for php behind Varnish

    - by Macindy
    We are managing a heavy-traffic PHP site (driven by vBulletin). To reduce the load I use the caching proxy Varnish. It works very well - with Varnish the load was reduced by 50%. But now I am thinking about which webserver to use behind Varnish. 90% of the requests reaching the webserver are PHP requests. So is there a difference between apache/mpm-prefork with mod_php and nginx/php-fpm? Is Apache perhaps better, because it doesn't have the TCP overhead? Thanks for any reply - benchmarks would be great! macindy

    Read the article

  • Preload *.wav with SystemSoundID?

    - by fuzzygoat
    I am playing a wav file to give a little audio feedback when a button in my UI is pressed. My question is that when you first press the button there is a delay (about 1.5 secs) whilst the sound file "sound.wav" is loaded and cached. Is there a way to pre-cache this file (maybe in my viewDidLoad)? I guess I could do it by just playing it in viewDidLoad, but I would really need to disable the audio so it does not "beep" each time the app starts. Many thanks for any help. gary EDIT: Looks like my question is a duplicate of this post, unless anyone has any new info? Maybe a way to turn the play volume down temporarily, unless the audio is cleared each time through the run loop.

    Read the article

  • PHPDelicious - Fail to Connect to Delicious

    - by Meander365
    phpdelicious (http://www.phpdelicious.com/) is a wrapper class for pulling in your Delicious bookmarks using PHP, but I can't get it to work. Here's my code - my index.php: <?php require('php-delicious.inc.php'); define('DELICIOUS_USER', 'my-user-name-here'); define('DELICIOUS_PASS', 'my-password-here'); $oDelicious = new PhpDelicious(DELICIOUS_USER, DELICIOUS_PASS); ?> <?php if ($aPosts = $oDelicious->GetAllPosts()) { foreach ($aPosts as $aPost) { echo '<a href="'.$aPost['url'].'">'.$aPost['desc'].'</a>'; echo $aPost['notes']; echo $aPost['updated']; } } else { echo $oDelicious->LastErrorString(); } ?> But I get this error: Notice: Use of undefined constant CACHE_PATH - assumed 'CACHE_PATH' in C:\wamp\www\testing\cache.inc.php on line 44 Connection to del.icio.us failed. Any ideas?

    Read the article
