Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • How to force browsers to always reload xslt files?

    - by bitmask
    Related: Apache: How can I force the browser to reload CSS files? I'm building an XML page (on an Apache 2 server) that is supposed to be translated to XHTML by the browser, so my server also serves a main.xslt that the XML file uses as its stylesheet, similar to the CSS scenario in the linked question. However, none of the tricks provided in that answer, nor in related questions on SO, solves the issue for Opera. While Firefox responds to F5 by fetching not only the XML file but also the XSLT file, Opera reloads only the XML file. I tried both setting the Last-Modified HTTP header via an .htaccess file and using Apache's expires module. This is what my .htaccess looks like right now:

        AddType text/xsl;charset=utf-8 .xslt
        ExpiresByType text/xsl "modification plus 1 second"
        Header set Last-Modified "Wed, 08 Jan 2000 23:11:55 GMT"
        #Header set Last-Modified "Wed, 08 Jan 2020 23:11:55 GMT"

    If I open the XSLT myself and manually reload it, the XML presentation is updated as well, but this is tedious during development. Note: there is no PHP or any other kind of scripting involved. Everything is static.
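
    A quick way to confirm which caching headers the server actually sends for the stylesheet is to request it directly and inspect the response. A minimal sketch using Python's standard library (the localhost URL is an assumption; substitute the real path to main.xslt):

        import urllib.request

        # Hypothetical URL of the stylesheet being debugged.
        url = "http://localhost/main.xslt"

        with urllib.request.urlopen(url) as response:
            # Print the headers that govern caching behaviour.
            for name in ("Content-Type", "Last-Modified", "Expires",
                         "Cache-Control", "ETag"):
                print(name, ":", response.headers.get(name))

    Whatever these report is what the browser caches against, which helps narrow down whether the problem is the configuration or Opera itself.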

    Read the article

  • Content, MetaData and Taxonomy 1: Taxonomy Manager

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized content store, together with the ability to apply taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. But first, let's take a look at how the pieces work together.

    Read the article

  • SignalR: recording when a web page has closed

    - by Benjamin Rogers
    I am using MassTransit request and response with SignalR. The web site makes a request to a Windows service that creates a file. When the file has been created, the Windows service sends a response message back to the web site, which opens the file and makes it available for the users to see. I want to handle the scenario where the user closes the web page before the file is created; in that case I want the created file to be emailed to them. Regardless of whether the user has closed the web page or not, the message handler for the response message will be run. What I want is some way of knowing, within the response message handler, that the web page has been closed. This is what I have done already. It doesn't work, but it does illustrate my thinking. On the web page I have:

        $(window).unload(function () {
            if (event.clientY < 0) {
                // $.connection.hub.stop();
                $.connection.exportcreate.setIsDisconnected();
            }
        });

    exportcreate is my Hub name. In setIsDisconnected, would I set a property on Caller? Let's say I successfully set a property to indicate that the web page has been closed. How do I find out that value in the response message handler? This is what it does now:

        protected void BasicResponseHandler(BasicResponse message)
        {
            string groupName = CorrelationIdGroupName(message.CorrelationId);
            GetClients()[groupName].display(message.ExportGuid);
        }

        private static dynamic GetClients()
        {
            return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<ExportCreateHub>();
        }

    I am using the message correlation id as a group. Now, for me the ExportGuid on the message is very important: it is used to identify the file. So if I am going to email the created file, I have to do it within the response handler, because I need the ExportGuid value. If I did store a value on Caller in my hub for the web page close, how would I access it in the response handler? Just in case you need to know, display is defined on the web page as:

        exportCreate.display = function (guid) {
            setTimeout(function () {
                top.location.href = 'GetExport.ashx?guid=' + guid;
            }, 500);
        };

    GetExport.ashx opens the file and returns it as a response. Thank you. Regards, Ben
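
    Stepping back from the SignalR specifics for a moment: what the question needs is a server-side registry, keyed by correlation id, that the unload notification writes to and the response handler reads from. The sketch below shows only that shape, in Python for brevity (the real code would live in the C# hub and handler, and every name here is made up):

        import threading

        # Correlation ids whose originating web page has been closed.
        _disconnected = set()
        _lock = threading.Lock()

        def mark_disconnected(correlation_id):
            # What setIsDisconnected would do on the server side.
            with _lock:
                _disconnected.add(correlation_id)

        def handle_response(correlation_id, export_guid, email_file, push_to_page):
            # What the response message handler would consult.
            with _lock:
                closed = correlation_id in _disconnected
                _disconnected.discard(correlation_id)
            if closed:
                email_file(export_guid)  # page is gone: deliver by email
            else:
                push_to_page(correlation_id, export_guid)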

    Read the article

  • Europe's Largest Customer Engagement Conference

    - by Charles Knapp
    What have Ben & Jerry's, HSBC, Innocent, LoveFilm, Oracle, Orange, Virgin, and other leaders learned about innovating to build customer loyalty? Loyalty World will help you better understand, engage, and retain your customers. For example, Oracle's Melissa Boxer will deliver a keynote on "Integrating Social, Marketing, and Loyalty to Deliver Great Customer Experiences", and Sundar Swaminathan will speak on "Powering Rich Cross-Channel Customer Experiences with Next-Gen Loyalty Programs". You'll learn best practices from global thought leaders who are producing real results. Learn more about the conference.

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn’t the end of the issue. The dependencies algorithm I described works when you’re querying live databases and you can get dependencies for a particular schema direct from the server, and that’s all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or for comparing a schema on a computer that doesn’t have direct access to the database. So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue, as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem: what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you’re only populating SchemaA:

        SOURCE:
            CREATE TABLE SchemaA.Table1 (
                Col1 NUMBER REFERENCES SchemaB.Table1(col1));
            CREATE TABLE SchemaB.Table1 (
                Col1 NUMBER PRIMARY KEY);

        TARGET (using snapshot):
            CREATE TABLE SchemaA.Table1 (
                Col1 VARCHAR2(100));
            CREATE TABLE SchemaB.Table1 (
                Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you’re not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm, as the snapshot has no information on SchemaB! We've now hit quite a big problem: we’re trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it’s the same as the target, whether it’s different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can’t query the original database, as it may not be accessible, and we cannot assume any default state, as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizations). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI and ask the user what we should do with them: assume they don’t exist, assume they’re the same as the target, or specify a definition. Unfortunately, such functionality didn’t make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what’s going on and edit the sync script as appropriate.

    There are some things we do to alleviate this rather unhappy situation somewhat: if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required, and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...
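
    The ghost-object check itself boils down to a set difference: anything the source depends on that the snapshot has no record of is a ghost, and must be flagged rather than silently scripted. A toy sketch of that idea in Python (the object names echo the example above; this is an illustration, not the product's code):

        # Objects captured in the snapshot, keyed as (schema, name).
        snapshot_objects = {("SchemaA", "Table1")}

        # Objects the source schema depends on, as discovered by the
        # comparison dependencies algorithm.
        source_dependencies = {("SchemaA", "Table1"), ("SchemaB", "Table1")}

        # Anything depended on but absent from the snapshot is a ghost:
        # its state on the original database is unknowable, so it gets
        # flagged as unsyncable instead of guessed at.
        ghosts = source_dependencies - snapshot_objects
        for schema, name in sorted(ghosts):
            print("Ghost object: %s.%s (excluded from sync script)" % (schema, name))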

    Read the article

  • WCF to WCF communication 401, HttpClient

    - by youwhut
    I have a WCF REST service that needs to communicate with another WCF REST service. There are three websites: Default Web Site, Website1, and Website2. If I set up both services in Default Web Site and one connects to the other (using HttpClient) via the URI http://localhost/service, then everything is okay. The desired set-up is to move these two services to separate websites and, rather than using the URI http://localhost/service, access the service via http://website1.domain.com/service, still using HttpClient. I received the exception:

        System.ArgumentOutOfRangeException: Unauthorized (401) is not one of the following: OK (200), Created (201), Accepted (202), NonAuthoritativeInformation (203), NoContent (204), ResetContent (205), PartialContent (206)

    I can see this is a 401, but what is going on here? Thanks

    Read the article

  • Activity Indicator not displaying based on whether the UIWebView is loading or not...

    - by Jack W-H
    Hi folks. Sorry if this is an easy one. Basically, here is my code. MainViewController.h:

        //
        //  MainViewController.h
        //  Site
        //
        //  Created by Jack Webb-Heller on 19/03/2010.
        //  Copyright __MyCompanyName__ 2010. All rights reserved.
        //

        #import "FlipsideViewController.h"

        @interface MainViewController : UIViewController <UIWebViewDelegate, FlipsideViewControllerDelegate> {
            IBOutlet UIWebView *webView;
            IBOutlet UIActivityIndicatorView *spinner;
        }

        - (IBAction)showInfo;

        @property(nonatomic,retain) UIWebView *webView;
        @property(nonatomic,retain) UIActivityIndicatorView *spinner;

        @end

    MainViewController.m:

        //
        //  MainViewController.m
        //  Site
        //
        //  Created by Jack Webb-Heller on 19/03/2010.
        //  Copyright __MyCompanyName__ 2010. All rights reserved.
        //

        #import "MainViewController.h"
        #import "MainView.h"

        @implementation MainViewController

        @synthesize webView;
        @synthesize spinner;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            NSURL *siteURL;
            NSString *siteURLString;
            siteURLString = [[NSString alloc] initWithString:@"http://www.site.com"];
            siteURL = [[NSURL alloc] initWithString:siteURLString];
            [webView loadRequest:[NSURLRequest requestWithURL:siteURL]];
            [siteURL release];
            [siteURLString release];
            [super viewDidLoad];
        }

        - (void)flipsideViewControllerDidFinish:(FlipsideViewController *)controller {
            [self dismissModalViewControllerAnimated:YES];
        }

        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            [spinner stopAnimating];
            spinner.hidden = FALSE;
            NSLog(@"viewDidFinishLoad went through nicely");
        }

        - (void)webViewDidStartLoad:(UIWebView *)webView {
            [spinner startAnimating];
            spinner.hidden = FALSE;
            NSLog(@"viewDidStartLoad seems to be working");
        }

        - (IBAction)showInfo {
            FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil];
            controller.delegate = self;
            controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
            [self presentModalViewController:controller animated:YES];
            [controller release];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [spinner release];
            [webView release];
            [super dealloc];
        }

        @end

    Unfortunately nothing is ever written to my log, and for some reason the activity indicator never seems to appear. What's going wrong here? Thanks folks, Jack

    Read the article

  • ASP.NET MVC Validation of ViewState MAC failed

    - by Kevin Pang
    After publishing a new build of my ASP.NET MVC web application, I often see this exception thrown when browsing to the site:

        System.Web.Mvc.HttpAntiForgeryException: A required anti-forgery token was not supplied or was invalid. ---> System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. ---> System.Web.UI.ViewStateException: Invalid viewstate.

    This exception will continue to occur on each page I visit in my web application until I close out of Firefox. After reopening Firefox, the site works perfectly. Any idea what's going on? Additional notes:

    - I am not using any ASP.NET web controls (there are no instances of runat="server" in my application).
    - If I take out the <%= Html.AntiForgeryToken() %> calls from my pages, this problem seems to go away.

    Read the article

  • Why does Joomla debug show 446 queries logged and 446 legacy queries logged?

    - by Darye
    I have been called in to fix the performance of a Joomla site that was already set up. I look at the debug output, and it shows the same queries twice: once for "queries logged" and again for "legacy queries logged". My guess is that it is actually running the same queries twice, making for just under 900 queries per page (I hope I am wrong). The Legacy plugin is disabled, so Legacy mode is not on at all. The site uses VirtueMart as well (which, by the way, isn't working properly if the cache in the Global Configuration is turned on). Besides the fact that I don't think it should be running 446 queries anyway (sometimes even up to 650 per page), has anyone ever experienced this issue, and where would I look to fix it? Thanks

    Read the article

  • Please help properly setting up path variables for root.php

    - by Joel
    Hi guys, I just posted a similar question, but deleted it because I realized I was working with an old file... doh! I am just trying to get my XAMPP setup working for me. I have a live site that navigates to a login page at http://www.monkeycalendar.com/arvindkt/login.php. That login page includes a root.php file that is found at http://www.monkeycalendar.com/arvindkt/root.php. The live site works great. My localhost is set up so each of my sites is a folder in localhost, i.e. http://www.example.com = localhost/example.com. I'm having problems figuring out how to make my root folder point to the right directory. Any help would be much appreciated. root.php:

        # local settings
        define("SITE_ROOT", $_SERVER['DOCUMENT_ROOT']."/arvindkt");
        define("SITE_URL", "http://localhost/monkeycalendar.com");
        define('DB_HOST', "localhost");
        define('DB_USER', "root");
        define('DB_PASS', "");
        define('DB_NAME', "dev.monkeycalendar");

    Read the article

  • using php's libcurl to register user and upload file to server

    - by tunpishuang
    Here is a site, http://www.lyrkjsw.gov.cn, that lets registered users upload files (e.g. images or office files). I want to register a user and upload an image to this site using the libcurl binding for PHP. Only registered users can upload images, so I store the cookies in a cookie jar at c:\cookie.txt after registering, and reuse c:\cookie.txt in the uploadImg() function. The registration works, but the image upload fails. Can anybody spot a mistake in my code?

        <?php
        /* options */
        // the list url
        $expUrl = 'http://www.lyrkjsw.gov.cn/hbcms/user/list_resource.php';
        // the user info to be registered
        $regUser = 'jiong';
        $regPass = 'jiong';
        $regMail = '[email protected]';
        $regUrl = str_replace('list_resource.php', 'register.php', $expUrl);
        // options for image upload
        $fileDir = '@D:\img\b.jpg';
        $fileTitle = 'aaaaaaaaaaaaa';
        $fileDesc = 'aaaaaaaaaaaaadesc';
        $uploadImgUrl = str_replace('list_resource.php', 'add_resource.php', $expUrl);

        /* register function */
        function reg($regurl, $u, $p, $m)
        {
            $ch = curl_init();
            $options = array(
                CURLOPT_URL => $regurl,
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_POST => true,
                CURLOPT_POSTFIELDS => 'mod=register_now&next_url=index.php&addon_app=&referrer_id=&login_name='.$u.'&login_pass='.$p.'&confirm_login_pass='.$p.'&login_email='.$m.'&nickname=&gender=0&qq=&mobile=&telephone=&true_name=&website_name=&website_url=&my_question=&my_answer=',
                CURLOPT_COOKIESESSION => true,
                CURLOPT_HEADER => true,
                CURLOPT_COOKIEJAR => 'c:\cookie.txt'
            );
            curl_setopt_array($ch, $options);
            $data = curl_exec($ch);
            if (strpos($data, '??')) {
                printf("register ok :)\n");
                curl_close($ch);
                return true;
            } else {
                printf("register failed :(\n");
                curl_close($ch);
                return false;
            }
        }

        /* image uploading function */
        function uploadImg($uploadimgurl, $filedir, $filetitle, $filedesc)
        {
            $ch = curl_init();
            $options = array(
                CURLOPT_COOKIEFILE => 'c:\cookie.txt',
                CURLOPT_URL => $uploadimgurl,
                CURLOPT_RETURNTRANSFER => 1,
                CURLOPT_POST => 1,
                CURLOPT_POSTFIELDS => "
                    'MAX_FILE_SIZE'='33554432'&
                    'preview_area_id'='upload_file'&
                    'editor_area_id'='body'&
                    'js_function'=''&
                    'resource_id'=''&
                    'show_top_part'='no'&
                    'file_1'=$filedir&
                    'file_title_1'=$filetitle&
                    'file_desc_1'=$filedesc
                "
            );
            curl_setopt_array($ch, $options);
            $data = curl_exec($ch);
            if (strpos($data, '??')) {
                printf("upload ok :)\n");
            } else {
                printf("upload failed :(\n");
            }
            curl_close($ch);
        }

        if (reg($regUrl, $regUser, $regPass, $regMail) != false) {
            uploadImg($uploadImgUrl, $fileDir, $fileTitle, $fileDesc);
        }

    The relevant pages:

        http://www.lyrkjsw.gov.cn/hbcms/user/list_resource.php (list file page)
        http://www.lyrkjsw.gov.cn/hbcms/user/register.php (register page)
        http://www.lyrkjsw.gov.cn/hbcms/user/add_resource.php (image uploading page)

    Read the article

  • ASP.NET app compiled on a 32-bit machine now trying to run on a 64-bit machine

    - by user54064
    I have an ASP.NET app that was compiled on a 32-bit machine. There are many different assemblies that are referenced. I opened the web site's main DLL with ILDASM and looked at the .corflags; it stated it was ILONLY. However, when I run the web site locally on the 64-bit machine (Windows XP Pro 64-bit), I get "is not a valid Win32 application". Shouldn't the app run as 64-bit since it was compiled with "AnyCPU"? How can I get this to work? I am using .NET 3.5.

    Read the article

  • htaccess redirect from root to subfolder and then mask the url?

    - by KiwiCoder
    Hello all. Two things. Firstly: I have version 2 of a website located in a folder named v2, and I want to redirect any traffic that is NOT a child of the v2 folder to www.example.com/v2. The old site, located in the root, was created in iWeb and has a LOT of subfolders and sub-subfolders. So:

        www.example.com/v2 = new site
        www.example.com/Page.html
        www.example.com/category/Page.html
        www.example.com/category/subcategory/Page.html = all generic examples of what I need to redirect

    Secondly, and I don't know if this is possible, I want to hide /v2/ in the URL, so that visitors will just see www.example.com/page even though they are actually on www.example.com/v2/page. Links are hardcoded to the v2 folder, like so: <a href="v2/contact.html">. Any help is MOST appreciated. I've spent hours trying to figure this out, but I'm only just learning about htaccess and regular expressions, and am totally confused. Thanks so much!

    Read the article

  • django urls.py regex isn't working

    - by Phil
    This is for Django 1.2.5 and Python 2.7 on WampServer running Apache version 2.2.17. My problem is that my URLconf in urls.py isn't routing; it's just throwing a 404 error. urls.py:

        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        #from django.contrib import admin
        #admin.autodiscover()

        urlpatterns = patterns('',
            (r'^app/$', include('app.views.index')),

            # Uncomment the admin/doc line below to enable admin documentation:
            #(r'^admin/doc/', include('django.contrib.admindocs.urls')),

            # Uncomment the next line to enable the admin:
            #(r'^admin/', include(admin.site.urls)),
        )

    views.py:

        from django.http import HttpResponse

        def index(request):
            return HttpResponse("Hello World")

    I'm getting the following error:

        ImportError at /app/
        No module named index

    I'm stumped, as I'm only learning Django. Can anybody see something wrong with my code? Here's my PythonPath:

        ['C:\Windows\system32\python27.zip', 'C:\Python27\Lib', 'C:\Python27\DLLs', 'C:\Python27\Lib\lib-tk', 'C:\wamp\bin\apache\Apache2.2.17', 'C:\wamp\bin\apache\apache2.2.17\bin', 'C:\Python27', 'C:\Python27\lib\site-packages', 'c:\wamp\www\seetwo']
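
    A likely culprit, for what it's worth: include() expects the path to another URLconf module, so Django tries to import app.views.index as a module and fails with "No module named index". Routing straight to the view would look something like this (a sketch using Django 1.2-style string view references):

        from django.conf.urls.defaults import *

        urlpatterns = patterns('',
            # Reference the view directly; include() is only for
            # pulling in a separate URLconf module.
            (r'^app/$', 'app.views.index'),
        )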

    Read the article

  • Are there any JSF components for implementing breadcrumb navigation?

    - by kazanaki
    As far as I know there are two "kinds" of breadcrumbs.

    The static/hierarchical one:
    - Works like a stack.
    - Entries are pushed when a user goes "deeper" into the site.
    - Entries are popped when the user goes "up" in the site.
    - Is the same for all users (for a given page).
    - Shows location rather than history.
    A simple example would be: HOME - BIG CATEGORY - SMALL CATEGORY - ARTICLE

    The dynamic/historical one:
    - Works like a queue.
    - Entries are pushed at the end when a user goes to another page.
    - Entries are removed from the front when the maximum size is reached.
    - Is different for each user, since it is personalized.
    - Shows timeline/history instead of location.
    A simple example would be: SMALL CATEGORY - HOME - BIG CATEGORY - HOME

    The question is: are there any ready-made JSF components for these types of navigation?

    Read the article

  • Wordpress plugin for enhanced WYSIWYG image resizing?

    - by Gnee
    Really, really trying to find a plugin that adds functionality to the image-resize functions in WordPress's WYSIWYG editor. Something where the use of 'Gallery' is not mandatory, just an upload straight from the post editor. In a post, when an image is linked to from another site, there are fewer options than when an image is uploaded:
    - You can resize, but only to 100%, 110%, 120%, 130%, and so on, instead of thumbnail, medium, large, etc. as when you upload.
    - These dimensions rarely match the dimensions needed.
    I know you can type in the W x H in the Advanced tab, but my quest to find a better solution is really for the clients using the site. If anyone knows a solution / plugin / modification for this, I'd love to hear it!!

    Read the article

  • What are the best measures to protect content from being crawled?

    - by Moak
    I've been crawling a lot of websites for content recently and am surprised that no site so far has been able to put up much resistance. Ideally the site I'm working on should not be so easy to harvest. So I was wondering: what are the best methods to stop bots from harvesting your web content? Obvious solutions:
    - Robots.txt (yeah, right)
    - IP blacklists
    What can be done to catch bot activity? What can be done to make data extraction difficult? What can be done to give them crap data? Just looking for ideas, no right/wrong answer.
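
    On the "catch bot activity" front, one cheap first step is per-IP rate tracking: humans rarely sustain several requests per second for minutes at a time. A minimal sliding-window sketch in Python (the thresholds are invented for illustration; real deployments tune them and add allowlists for legitimate crawlers):

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 60   # look at the last minute of traffic
        MAX_REQUESTS = 120    # sustained ~2 req/s starts to look bot-like

        _history = defaultdict(deque)

        def looks_like_bot(ip):
            """Record a request from `ip` and report whether its recent rate is suspicious."""
            now = time.time()
            window = _history[ip]
            window.append(now)
            # Drop timestamps that have fallen out of the window.
            while window and window[0] < now - WINDOW_SECONDS:
                window.popleft()
            return len(window) > MAX_REQUESTS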

    Read the article

  • Facebook connect JavaScript with PHP

    - by skidding
    I'm using the JavaScript method to sync/login (with the popup) with Facebook Connect on my site, and it seems to work. However, after I get logged in, I want to continue on the backend with the PHP library. I see the cookies are set by the JavaScript lib, but I don't know how to use them with the PHP API. I used:

        $fb = new Facebook($api_key, $secret);
        $uid = $fb->get_loggedin_user();

    but no user data is getting passed. How can I get the user data in PHP after I have logged in on the frontend? As far as I'm concerned, I would have gone PHP all the way, but I didn't manage to make the auth work, meaning that it never redirected me back to my site :). Thanks!

    Read the article

  • Able to ping but cannot browse after several hours of running my Python program

    - by Shane
    It's a GUI program I wrote in Python that checks website/server status, running on my XP SP3 machine; multiple threads are used to check different sites/servers. After several hours of running, the program starts to get "urlopen error timed out" all the time, and this always happens right after a POST request to a server (not a particular one: it might be A or B or C). It's also not the first POST request that causes the problem; normally, after several hours of running, a POST request at some unpredictable moment triggers it, and from then on all you get is "urlopen error timed out". I'm still able to ping but cannot browse any site; once the program is closed, everything is fine. It's definitely the program causing this problem, but I don't know how to debug or check what the problem is, and I don't know whether it's on the OS side or whether my program is wasting too many resources/connections (are you still able to ping when too many connections are in use?). Would anybody please help me out?
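
    One pattern worth ruling out (an assumption, but consistent with the symptoms: ping uses ICMP rather than TCP sockets, so it keeps working even when the ephemeral TCP ports are exhausted): urlopen responses that are never closed can leak sockets until new connections start timing out. A sketch of the defensive pattern, using the Python 2 urllib2 the program presumably uses:

        import urllib2

        def check_status(url, data=None):
            """Open `url` (POST if `data` is given) and always release the socket."""
            response = None
            try:
                response = urllib2.urlopen(url, data, timeout=10)
                return response.getcode()
            finally:
                # Without this, each check can leak a socket; after hours
                # of polling, the OS runs out of connections and every
                # subsequent urlopen times out.
                if response is not None:
                    response.close()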

    Read the article

  • RenderTarget2D behavior in XNA

    - by Utkarsh Sinha
    I've been dabbling with XNA for a couple of days now. This chunk of code doesn't work as I expect. The goal is to render sprites individually and composite them onto another render target.

        P = RenderTarget2D (with RenderTargetUsage.PreserveContents)
        D = RenderTarget2D (with RenderTargetUsage.DiscardContents)

        for all sprites:
            graphicsDevice.SetRenderTarget(D);
            <draw sprite i>
            graphicsDevice.SetRenderTarget(P);
            <Draw D>

        graphicsDevice.SetRenderTarget(null);
        <Draw P>

    The result I get is that only the last sprite is visible. I'm sure I'm missing some piece of information about RenderTarget2D. Any hints on what that might be? Cross-posted from http://stackoverflow.com/questions/9970349/weird-rendertarget2d-behaviour

    Read the article

  • Hook Response.Cache to memcache

    - by dvr
    Has anyone done this before? I have a 32-bit Windows 2003 server running .NET 2.0 and have read the MS engineers' blog about the min(60%, 1800 MB) cache limit, and our site (ASP.NET 2.0 / 3.5) is caching a lot. It throws System.OutOfMemoryException when the worker process is around 1.3 GB (unfortunately it is the 2.0 apps), and I would like to push a lot of it over to memcache, but I am worried because at the moment the site is efficient using Response.Cache as-is (though memory is an issue). I want to move most items over to memcache and have concerns about (a) how to do this (an implementation of Response.Cache that reads from and writes to memcache) and (b) what the performance will be like. Before I commit to doing this and possibly spend a few days running tests, I would like to hear from you if this has been done already and get some feedback. (And please don't tell me to buy an x64 machine; I have already requested this!) By the way, I ran a test requesting a single image 1000 times, and Response.Cache was over 50% quicker than using the application cache. Does Response.Cache bypass the page lifecycle?
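
    The general shape of moving items out of an in-process cache into memcached is the cache-aside pattern: check the external cache, fall back to generating the value, then store it with a TTL. A sketch in Python with the python-memcached client, purely to show the shape (the server address and key scheme are assumptions; the ASP.NET equivalent would use a .NET memcached client):

        import memcache

        # Assumed local memcached instance; in production this would be a
        # pool of dedicated cache servers, taking pressure off the worker
        # process's own memory.
        mc = memcache.Client(["127.0.0.1:11211"])

        def get_page_fragment(key, build_fragment, ttl_seconds=300):
            """Cache-aside: serve from memcached when possible, rebuild otherwise."""
            cached = mc.get(key)
            if cached is not None:
                return cached
            fragment = build_fragment()  # the expensive render step
            mc.set(key, fragment, time=ttl_seconds)
            return fragment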

    Read the article

  • Architectural advice on connecting multiple diverse sites into a single community.

    - by Aleksandar
    Hi SO, I've been given the task of connecting multiple sites belonging to the same client into a single network, so I would like architectural advice on connecting these sites into a single community. The sites include:
    1. An Invision Power Board forum (the most important site)
    2. 3 custom-made CMSs (changes to code allowable)
    3. 1 Drupal site
    4. 3-4 WordPress blogs
    Requirements are as follows:
    1. Connect all users of all sites into a single administrable entity, with the ability to change permissions, ban users, etc.
    2. Later on, based on this implementation, I have to implement a "Facebook-like" chat that will be available to all users regardless of where they log in.
    I have a few ideas in mind on how to go about this, but would like to hear from people with more experience and expertise than myself. Cheers!
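
    One common backbone for this kind of setup, sketched here as one possible approach rather than a recommendation: a single identity service owns the user table, and each site trusts HMAC-signed tokens it issues, so the forum, CMSs, Drupal, and WordPress only need thin plugins that validate the token. A minimal sketch of issuing and verifying such a token in Python (the secret and field layout are made up):

        import hashlib
        import hmac
        import time

        SHARED_SECRET = b"replace-with-a-long-random-secret"  # shared by all sites

        def issue_token(user_id):
            """Create 'user_id|expiry|signature' for the member sites to trust."""
            expiry = int(time.time()) + 3600
            payload = "%s|%s" % (user_id, expiry)
            sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return "%s|%s" % (payload, sig)

        def verify_token(token):
            """Return the user_id if the token is authentic and unexpired, else None."""
            try:
                user_id, expiry, sig = token.split("|")
            except ValueError:
                return None
            payload = "%s|%s" % (user_id, expiry)
            expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
            if hmac.compare_digest(sig, expected) and time.time() < int(expiry):
                return user_id
            return None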

    Read the article

  • A pixel is not a pixel is not a pixel

    Yesterday John Gruber wrote about the upped pixel density in the upcoming iPhone (960x480 instead of 480x320) and why Apple did this. He also wondered what the consequences for web developers would be. Now, I happen to be deeply engaged in cross-browser research on widths and heights on mobile phones, and can state with reasonable certainty that in 99% of cases these changes will not impact web developers at all. The remaining 1% could be much more tricky, but I expect Apple to cater to this problem...

    Read the article

  • PowerShell to fetch a SQL Execution Plan

    - by Rob Farley
    With PowerShell becoming the scripting language of choice for many people, I’ve occasionally wondered about using it to analyse execution plans. After all, an execution plan is just XML, and PowerShell is just one tool which will very easily handle XML. The thing is, there’s no Get-SqlPlan cmdlet available, which has frustrated me in the past. Today I figured I’d make one. I know that I can write T-SQL to get an execution plan using SET SHOWPLAN_XML ON, but the problem is that this must be the only statement in a batch. So I used go, and a couple of newlines, and whipped up the following one-liner:

        function Get-SqlPlan([string] $query, [string] $server, [string] $db) {
            return ([xml] (invoke-sqlcmd -Server $server -Database $db -Query "set showplan_xml on;`ngo`n$query").Item(0))
        }

    (but please bear in mind that I have the SQL Snapins installed, which provides invoke-sqlcmd). To use this, I just do something like:

        $plan = get-sqlplan "select name from Production.Product" "." "AdventureWorks"

    And then find myself with an easy way to navigate through an execution plan! At some point I should make the function more robust, but this should be a good starter for any SQL PowerShell enthusiasts (like Aaron Nelson) out there.

    Read the article
