Search Results

Search found 38689 results on 1548 pages for 'page caching'.


  • jQuery ajax form submit - how to ensure dynamically loaded form's action is used

    - by kenny99
    Hi, I'm having a problem with dynamically loaded forms - instead of using the action attribute of the newly loaded form, my jQuery code is still using the action attribute of the first form loaded. I have the following code:

        //generic ajax form handler - calls next page load on success
        $('input.next:not(#eligibility)').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $(this).attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() { }
            });
        });

    But when the next form is loaded, the above code does not pick up the new action attribute. I have tried adding the above code to my callback on successful ajax load (shown below), but this doesn't make any difference. Can anyone help? Many thanks

        function ajaxFormStage(url) {
            var $data = $('#main_body #content');
            $.validationEngine.closePrompt('body'); //close any validation messages
            $data.fadeOut('fast', function(){
                $data.load(url, function(){
                    $data.animate({ opacity: 'show' }, 'fast');
                    //generic ajax form handler - calls next page load on success
                    $('input.next:not(#eligibility)').live("click", function(){
                        $(".form_container form").validationEngine({
                            ajaxSubmit: true,
                            ajaxSubmitFile: $(this).attr('action'),
                            success : function() {
                                var url = $('input.next').attr('rel');
                                ajaxFormStage(url);
                            },
                            failure : function() { }
                        });
                    });
                });
            });
        }
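    One possible direction, sketched here as an assumption rather than taken from the post: resolve the action at click time from the form that actually contains the clicked button, instead of reusing a value captured when the first form was loaded. The selectors and validationEngine options mirror the question; everything else is hypothetical.

        // Hypothetical sketch: look up the surrounding form (and its action)
        // at the moment of the click, so a freshly loaded form is used.
        $('input.next:not(#eligibility)').live('click', function () {
            var $button = $(this);
            var $form = $button.closest('form');       // the form that is in the DOM right now
            $form.validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $form.attr('action'),  // the current form's action, not the first one's
                success: function () {
                    ajaxFormStage($button.attr('rel')); // next stage URL from the clicked button
                },
                failure: function () { }
            });
        });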

    Read the article

  • jQuery/ajax working on IIS5.1 but not IIS6

    - by Mikejh99
    I'm running into a weird issue here. I have code that makes jQuery ajax calls to a web service and dynamically adds controls using jQuery. Everything works fine on my dev machine running IIS 5.1, but not when deployed to IIS 6. I'm using VS2010/ASP.NET 4.0, C#, jQuery 1.4.2 and jQuery UI 1.8.1. I'm using the same browser for each. It partially works though: the code will add the controls to the page, but they aren't visible until I click them (they aren't visible though). I thought this was a CSS issue, but the styles are there too. The ajax calls look like this:

        $.ajax({
            url: "/WebServices/AssetManager.asmx/Assets",
            type: "POST",
            datatype: "json",
            async: false,
            data: "{'q':'" + req.term + "', 'type':'Condition'}",
            contentType: "application/javascript; charset=utf-8",
            success: function (data) {
                res($.map(data.d, function (item) {
                    return { label: item.Name, value: item.Name, id: item.Id, datatype: item.DataType }
                }))
            }
        })

    Changing the content-type makes the autocomplete fail. I've quadruple-checked that all the paths are correct, there is no document footer enabled in IIS, and I'm not using IIS compression. Any idea why the page will display and work properly in IIS 5 but only partially in IIS 6? (If it failed completely, that'd make more sense!) Is it a jQuery or CSS issue?
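    Not from the original post - purely for comparison, a commonly seen shape for calling an ASP.NET .asmx ScriptService from jQuery. The URL and field names are copied from the question; the rest is an assumption. Note the option is spelled dataType (capital T), and ASMX JSON endpoints usually expect application/json.

        // Hypothetical comparison sketch of a conventional .asmx JSON call.
        // Older IE needs a json2.js shim for JSON.stringify.
        $.ajax({
            url: "/WebServices/AssetManager.asmx/Assets",
            type: "POST",
            dataType: "json",                                // note the capital T
            contentType: "application/json; charset=utf-8",  // typical for ASMX ScriptServices
            data: JSON.stringify({ q: req.term, type: "Condition" }),
            success: function (data) {
                res($.map(data.d, function (item) {
                    return { label: item.Name, value: item.Name, id: item.Id, datatype: item.DataType };
                }));
            }
        });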

    Read the article

  • W3 xHTML Validation Errors on jQuery code!

    - by Chris
    I have some jQuery code that causes validation errors; without it, the document passes validation fine. The code in question is here:

        $.ajax({
            type: "GET",
            url: "data.xml",
            dataType: "xml",
            success: function(xml) {
                //Update error info
                errors = $(xml).find("Errors").find("*").filter(function () {
                    return $(this).children().length === 0;
                });
                if (errors.length == 0) {
                    statuscontent = "<img src='/web/resources/graphics/accept.png' alt='' /> System OK";
                } else {
                    statuscontent = "<img src='/web/resources/graphics/exclamation.png' alt='' /> " + errors.length + " System error" + (errors.length > 1 ? "s" : "");
                }
                $("#top-bar-systemstatus a").html(statuscontent);
                //Update timestamp
                $("#top-bar-timestamp").html($(xml).find("Timestamp").text());
                //Update storename
                $("#top-bar-storename").html("Store: " + $(xml).find("StoreName").text());
            }
        });

    There is plenty of other jQuery code on the page which all works fine and causes no errors, so I cannot quite understand what is wrong with this. The page isn't "live" so I cannot provide a link to it, unfortunately. The error it lists is:

        document type does not allow element "img" here

    And the line of code it points to is:

        statuscontent = "<img src='/web/resources/graphics/accept.png' alt='' /> System OK";

    It also has an issue with the next assignment to statuscontent.
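    One direction not mentioned in the post: if this script is inline in an XHTML document, the validator parses the script content as markup, so a literal <img ...> inside a string triggers exactly this error. Wrapping the inline script in a CDATA section (or moving it to an external .js file) is the usual fix. The snippet below is a sketch, not the poster's page:

        <script type="text/javascript">
        //<![CDATA[
            // Inside CDATA the markup-looking string is character data,
            // so the validator no longer sees a stray <img> element here.
            statuscontent = "<img src='/web/resources/graphics/accept.png' alt='' /> System OK";
        //]]>
        </script>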

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built from a configuration of open source technologies. Thanks

    Read the article

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, it seems that there is always about a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, and got the same result.

        static string RunCommand(string executable, string path, string arguments)
        {
            var psi = new ProcessStartInfo()
            {
                FileName = executable,
                Arguments = arguments,
                WorkingDirectory = path,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                WindowStyle = ProcessWindowStyle.Maximized,
                CreateNoWindow = true
            };
            var sbOut = new StringBuilder();
            var sbErr = new StringBuilder();
            var sw = new Stopwatch();
            sw.Start();
            var process = Process.Start(psi);
            TimeSpan firstRead = TimeSpan.Zero;
            process.OutputDataReceived += (s, e) =>
            {
                if (firstRead == TimeSpan.Zero)
                {
                    firstRead = sw.Elapsed;
                }
                sbOut.Append(e.Data);
            };
            process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data);
            process.BeginOutputReadLine();
            process.BeginErrorReadLine();
            var eventsStarted = sw.Elapsed;
            process.WaitForExit();
            var processExited = sw.Elapsed;
            sw.Reset();
            if (process.ExitCode != 0 || sbErr.Length > 0)
            {
                Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString());
            }
            return sbOut.ToString();
        }

    Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.

    Read the article

  • Best practices for withstanding launch day traffic burst

    - by Sam McAfee
    We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance, without real traffic data) that you can stay standing after a big launch? Details: this is a L/A/M/PHP stack, using an internally developed MVC framework. It is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be. We are already installing memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects--this is just the setup. What sorts of things should we watch out for?

    Read the article

  • XPath ordered priority attribute search

    - by user94000
    I want to write an XPath that can return some link elements on an HTML DOM. The syntax is wrong, but here is the gist of what I want:

        //web:link[@text='Login' THEN_TRY @href='login.php' THEN_TRY @index=0]

    THEN_TRY is a made-up operator, because I can't find what operator(s) to use. If many links exist on the page for the given set of attribute=value pairs, the link which matches the most left-most attribute(s) should be returned instead of any others. For example, consider a case where the above example XPath finds 3 links that match any of the given attributes:

        link A: text='Sign In', href='Login.php', index=0
        link B: text='Login',   href='Signin.php', index=15
        link C: text='Login',   href='Login.php', index=22

    Link C ranks as the best match because it matches the First and Second attributes. Link B ranks second because it only matches the First attribute. Link A ranks last because it does not match the First attribute; it only matches the Second and Third attributes. The XPath should return the best match, Link C. If more than one link were tied for "best match", the XPath should return the first best link that it found on the page.
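    XPath 1.0 has no fallback operator; one common workaround (an assumption, not taken from the post) is to evaluate a list of expressions in priority order from the host language and take the first that matches. The sketch below uses plain HTML anchors and document.evaluate (not supported by the IE HTML DOM), rather than the question's web:link/@text/@index vocabulary:

        // Hypothetical sketch: try several XPath expressions in priority order
        // and return the first node that any of them matches.
        function findByPriority(doc, expressions) {
            for (var i = 0; i < expressions.length; i++) {
                var result = doc.evaluate(expressions[i], doc, null,
                                          XPathResult.FIRST_ORDERED_NODE_TYPE, null);
                if (result.singleNodeValue) {
                    return result.singleNodeValue;   // best available match
                }
            }
            return null;                             // nothing matched any expression
        }

        var link = findByPriority(document, [
            "//a[text()='Login' and contains(@href,'login.php')]",  // best: first and second attributes
            "//a[text()='Login']",                                  // next: first attribute only
            "//a[contains(@href,'login.php')]"                      // fall back to the second attribute
        ]);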

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, and it's working... The problem is that it seems slow and we don't have that much data (fewer than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • Calling a JavaScript function then a C# function after clicking an ASP.NET button

    - by Eyla
    I have this scenario: I have an ASP.NET page. This page contains an UpdatePanel with ASP.NET controls. I have a JavaScript function to do validation, so when I click the button I will use OnClientClick to call the JavaScript function to do the validation, and after that is done it should fire the button's click event handler in the code-behind. I tried a few methods but they did not work for me. Here is a sample of my code: after I click the button, OnClientClick should call the JavaScript function for validation and, if the validation is OK, the OnClick event should fire.

    The JavaScript function:

        <script type="text/javascript" >
            function add() {
                if (tag == true) {
                    document.getElementById('<%=btnInfor.ClientID%>').click();
                    alert("DataAdded");
                } else {
                    alert("Required Field Missing.");
                    return false;
                }
            }
        </script>

    The ASP.NET button:

        <asp:Button ID="btnInfor" runat="server" Text="Add Information"
            Style="position: absolute; top: 1659px; left: 433px;"
            onclientclick="JavaScript: return myAdd()" />

    The code-behind in C#:

        protected void btnInfor_Click(object sender, EventArgs e)
        {
            // my code
        }
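    For reference, a sketch of the pattern ASP.NET normally relies on, not the poster's exact code: OnClientClick runs first, and returning false from it cancels the postback, while returning true lets the server-side Click handler fire. The isValid() check is a placeholder, and the OnClick wiring is an assumption (the question's markup does not show an OnClick attribute):

        <script type="text/javascript">
            // Placeholder validation; return true to allow the postback (and the
            // server-side btnInfor_Click handler) to run, false to cancel it.
            function myAdd() {
                if (isValid()) {          // hypothetical validation routine
                    return true;
                }
                alert("Required field missing.");
                return false;
            }
        </script>

        <asp:Button ID="btnInfor" runat="server" Text="Add Information"
                    OnClientClick="return myAdd();" OnClick="btnInfor_Click" />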

    Read the article

  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we have all heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here. Obviously fewer HTTP requests means fewer round trips and hence better. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like JavaScript files in parallel. Even if chunking is not an issue, it seems like there would be some maximum recommended size just due to the likelihood of packet loss alone, since a bundled file must wait till it is entirely downloaded to execute, versus the more lenient native rule that scripts must execute in order. Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

    Read the article

  • Initialize listitem with blanks?

    - by VBartilucci
    Say I have a list made up of a list item type which contains three strings. I add a new list item, and try to assign the values of those strings from an outside source. If one of those items is unassigned, the value in the list item remains null (unassigned). As a result I get an error if I try to assign that value to a field on my page. I can do a check on IsNullOrEmpty for each field on the page, but that seems inefficient. I'd rather initialize the values to "" (empty string) in the code-behind and send valid data. I can do it manually:

        ClaimPwk emptyNode = new ClaimPwk();
        emptyNode.cdeAttachmentControl = "";
        emptyNode.cdeRptTransmission = "";
        emptyNode.cdeRptType = "";
        headerKeys.Add(emptyNode);

    But I have some BIG list items, and writing that for those will get tedious. So is there a command, or just plain an easier way, to initialize a list item's strings to empty string as opposed to null? Or has anyone got a better idea?

    Read the article

  • Empty Postbacks on ASP.NET pages

    - by AaronLS
    We are having a problem that seems to only occur when accessing our websites from internal intranet machines. When logged into the domain and accessing our websites, postbacks are not working. Basically the page behaves as if it were refreshed and nothing was changed. When logging the GETs and POSTs with an HTTP analyzer, the POST is completely empty and the ContentLength is 0. It is also very sporadic, but seems to be happening fairly often. In the case where it failed, we could see that there was an extra item in the header for the POST: it was "Authorization" and the value was the word "Negotiate" followed by a space and then a bunch of characters with two equals symbols at the end, which looked like some kind of base64 encoded value. In a case where it succeeded, this Authorization item was not in the header, but I have not logged more than one successful case, so I don't know if that is consistent. We have seen this occur only with IE8 so far, and when it occurs it is sometimes sporadic. I can close and open the browser and it will begin working sometimes, and other times it is still broken. What might be causing the postback to be empty? It means the viewstate is not sent to the server, which makes the page basically broken. It seems to certainly be a client-side issue, but I'm not sure whether it's aggravated by some server settings. Thanks in advance.

    Read the article

  • Session variables not getting set but only in Internet Explorer and not on all machines

    - by gaoshan88
    Logging into a site I'm working on functions as expected on my local machine but fails on the remote server, and ONLY in Internet Explorer. The kicker is that it works in IE locally, just not on the remote machine. What in the world could cause this? I have stepped through the code on the remote machine and can see the entered login values being checked in the database; they are found and then a login function is called. This sets two $_SESSION variables and redirects to the main admin page. However, in IE only (and not when run on the local machine... this is key) the $_SESSION variables are not present by the time you get to the main admin page. var_dump($_SESSION) gives me what I expect on every browser when I am running this in my local environment, and in every browser except IE 6, 7 and 8 when run on the remote server (where I get a null value as if nothing has been set for $_SESSION). This really has me stumped, so any advice is appreciated. For an example... in IE, run locally, var_dump gives me:

        array
          'Username' => string 'theusername' length=11
          'UserID' => string 'somevalue' length=9

    Run on the remote server (IE only... it works fine in other browsers) var_dump gives me:

        array(0){}

    Code:

        $User = GetUser($Username, $Password);
        if ($User->UserID <> "") {
            // this works so we call Login()...
            Login($User); // this also works and gives expected results. on to redirect...
            header("Location: index.php");
            // a var_dump at index.php shows that there is no session data at all in IE, remotely.
        } else {
            header("Location: login.php");
        }

        function Login($data) {
            $_SESSION['Username'] = $data->Username;
            $_SESSION['UserID'] = $data->UserID;
            // a var dump here gives the expected data in every browser
        }

    Read the article

  • Not able to connect to TFS Server from TFS Proxy

    - by GV India
    In our office we have set up TFS for project development. The TFS server is a Windows Server 2003 SP2 machine with VSTFS 2008 and is running fine. Now we need to set up a TFS proxy server on the client site for the client to access. Before going for the client setup, I wanted to build and test a proxy in our office on a dummy server (I will call it the Proxy server from here on) by keeping it on a different domain. The OS configuration of the Proxy server is the same as the TFS server. I have installed and configured the TFS proxy on the Proxy server to connect to the TFS server. Also, we have built trust between the two domains to enable communication. Now the problem is that I am not able to connect to the TFS server at all. I am trying to connect from Internet Explorer on the proxy server using the proxy service account. It gives me the error: "The page cannot be displayed. HTTP 500 - Internal server error." The page I was browsing was http://tfs:8080/VersionControl/v1.0/ProxyStatistics.asmx. I think I have done all the required steps correctly to configure the proxy, as described in MSDN and also in the TFS installation guide. The proxy service account is a member of the 'Team Foundation Valid Users' group. I am able to connect to the TFS server (specifying the port) using Telnet from the command prompt on the proxy server, as suggested by a few sites. The TFS server web sites have been configured to use Integrated Windows Authentication. Event logs on both servers are also not showing any errors. Overall I'm not able to get it working. Any ideas on what might be the problem?

    Read the article

  • Prepare your site images for google image search indexing

    - by Vittorio Vittori
    Hi, I'm trying to understand what I can do to make my site reachable by Google Image Search spiders. I like last.fm's solution, and I thought of using a technique like theirs to let Google find artist images on their pages. When I look for an artist and search for it on Google Image Search, more often than not I find an image from a last.fm artist page. An example: if I search for the band Pure Reason Revolution, it brings me to the artist's image page, http://www.last.fm/music/Pure+Reason+Revolution/+images/4284073. Now if I take a look at the image file, I can see it's named http://userserve-ak.last.fm/serve/500/4284073/Pure+Reason+Revolution+4.jpg, so if I try to understand how the service works I can break it down as:

        http://userserve-ak.last.fm/serve/      the server that serves the images
        500/                                    the selected size for the image
        4284073/                                the image id in the database
        Pure+Reason+Revolution+4.jpg            the image name

    I think it's unlikely that the real filename for the image is Pure+Reason+Revolution+4.jpg, because of image overwrite problems when a user uploads it; in fact, if I type http://userserve-ak.last.fm/serve/500/4284073.jpg I probably find the real image location and filename. With this technique the image is highly reachable by search engines and easily archived. My question is: does a guide or tutorial exist for this kind of technique, or something similar?

    Read the article

  • Need help with PHP URL encoding/decoding

    - by Kenan
    On one page I'm "masking"/encoding a URL which is passed to another page; there I decode the URL and start delivering the file to the user. I found some functions for encoding/decoding URLs, but sometimes the encoded URL contains "+" or "/" and the decoded link is broken. I must use a "folder structure" for the link and cannot use a query string! Here is the encoding:

        $urll = 'SomeUrl.zip';
        $key = '123';
        $result = '';
        for($i=0; $i<strlen($urll); $i++) {
            $char = substr($urll, $i, 1);
            $keychar = substr($key, ($i % strlen($key))-1, 1);
            $char = chr(ord($char)+ord($keychar));
            $result.=$char;
        }
        $result = urlencode(base64_encode($result));
        echo '<a href="/user/download/'.$result.'/">PC</a>';

    Here is the decoding:

        $urll = 'segment_3'; //Don't worry about this one; it's the CMS retrieving the 3rd "folder"
        $key = '123';
        $resultt = '';
        $string = '';
        $string = base64_decode(urldecode($urll));
        for($i=0; $i<strlen($string); $i++) {
            $char = substr($string, $i, 1);
            $keychar = substr($key, ($i % strlen($key))-1, 1);
            $char = chr(ord($char)-ord($keychar));
            $resultt.=$char;
        }
        echo '<br />DEC: '. $resultt;

    So how should I encode and decode the URL? Thanks

    Read the article

  • jQuery Accordion + Anchor Tag 'stuck as block' bug?

    - by DA
    Sample page: http://jsbin.com/ohuze/2

    This is a simple jQuery UI Accordion. Each accordion panel has a UL (an OL works the same) with this markup:

        <ol>
            <li><a href="">Lorep ipsum dolor lorem ipsum dolor lorem ipsum dolor</a>?</li>
            <li><a href="">Lorep ipsum dolor lorem ipsum dolor lorem ipsum dolor</a>?</li>
        </ol>

    In IE6, you'll see that the <a> tag appears to be getting rendered as a block element, so the question mark ends up being pushed outside and not at the end of the line of text. In addition, the bullet and/or list item number is now bottom-aligned with the text rather than top-aligned. I've narrowed it down to the JavaScript that executes to make the accordion. It's not an issue with jQuery's CSS, as disabling that alone doesn't resolve the issue. Anyone know what might be going on in IE6 to cause this rendering issue?

    UPDATE: Apparently, this is also an IE7 issue.

    UPDATE 2: After some more playing, I've narrowed things down a bit more:

        - the bug has nothing to do with lists. The issue is that any anchor tag within a jQuery Accordion will appear as display: block (even though it appears that the CSS still indicates display: inline)
        - the bug has nothing to do with the actual CSS that jQuery UI uses to create the accordion. I created a test page that uses the fully rendered jQuery Accordion post-processed source code and the accompanying CSS. In that situation, the anchor tags remain inline.

    In conclusion: it appears that the process of rendering the accordion via JavaScript is messing up the display of the anchor tags. It may be a show/hide issue?
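    A possible stopgap, purely an assumption and not something established in the post: reassert inline display on the anchors after the accordion is built, which at least helps isolate whether the plugin's DOM manipulation or a stylesheet is responsible. The #accordion id is a placeholder for whatever element the sample page uses:

        // Hypothetical workaround: force the anchors inside the accordion back
        // to inline display once the widget has been initialised.
        $("#accordion").accordion();
        $("#accordion li a").css("display", "inline");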

    Read the article

  • Java applet loading images from external jars

    - by Mathias
    I have a jar on a server, and users should be able to develop extensions for it. Therefore the jar's main class should be extended, and some resources should be added to a second, user-created jar which will be loaded from another server or locally. Now I have problems accessing the resources (images) from the user-loaded jars. Here is the structure:

        My server: game.jar containing
            game.class
            images.class
            ...
            image1.png (...)

        Local: user.jar containing
            user.class extends game
            userimage.png

    The extension is loaded via Greasemonkey; it modifies the "archive" attribute to "/home/username/user.jar, game.jar" and the "code" attribute to "user.class". The user should be able to overwrite already defined images. If the image does not exist in game.jar, it is loaded correctly from user.jar. But the images loaded early in the game are always loaded from game.jar; the others seem to be overwritten correctly by the user. Is there a way to make sure they are always loaded in the correct order? This might be because of some caching mechanism. Because Greasemonkey removes the game from the page, changes the archive and code and reinserts it, the game is loaded without a mod for a brief second. In that time, images are loaded as expected from game.jar, but those are the ones that cannot be overwritten by the user. But how to avoid it? Another thing: if I overwrite the "run" method in user.class, the game is unable to load any image at all - not from user.jar and not from game.jar. Java doesn't find the image, as the URL object "getClass().getResource(imagename)" returns null. I tried to overwrite the image.class, but that doesn't fix the problem, unless I overwrite every class from game.class involved in calling the image.class.

    Read the article

  • WYSIWYG with Qt - font size woes

    - by Rob
    I am creating a custom Qt widget that mimics an A4 printed page and am having problems getting fonts to render at the correct size. My widget uses QPainter::setViewport and QPainter::setWindow to mimic the A4 page, using units of 10ths of a millimetre, which enables me to draw easily. However, attempting to create a font at a specific point size doesn't seem to work, and using QFont::setPixelSize isn't accurate. Here is some code:

        View::View(QWidget *parent)
            : QWidget(parent), printer(new QPrinter)
        {
            printer->setPaperSize(QPrinter::A4);
            printer->setFullPage(true);
        }

        void View::paintEvent(QPaintEvent*)
        {
            QPainter painter(this);
            painter.setWindow(0, 0, 2100, 2970);
            painter.setViewport(0, 0, printer->width(), printer->height());

            // Draw a rect at x = 1cm, y = 1cm, 6cm wide and 1 inch high
            painter.drawRect(100, 100, 600, 254);

            // Create a 72pt (1 inch) high font
            QFont font("Arial");
            font.setPixelSize(254);
            painter.setFont(font);

            // Draw in the same box
            // The font is too large
            painter.drawText(QRect(100, 100, 600, 254), tr("Wg\u0102"));

            // Ack - the actual font size reported by the metrics is 283 pixels!
            const QFontMetrics fontMetrics = painter.fontMetrics();
            qDebug() << "Font height = " << fontMetrics.height();
        }

    So I'm asking for a 254-high font (1 inch, 72 points) and it's too big, and sure enough when I query the font height via QFontMetrics it is 283 high. Does anyone else know how to use font sizes in points when using custom mapping modes like this? It must be possible. Note that I cannot see how to convert between logical/device points either (i.e. the Win32 DPtoLP/LPtoDP equivalents).

    Read the article

  • Saving a remote image with cURL?

    - by thebluefox
    Morning all. There are a few questions around this, but none that really answer my question as far as I can understand. Basically I have a GD script that deals with resizing and caching images on our server, but I need to do the same with images stored on a remote server. So, I'm wanting to save the image locally, then resize and display it as normal. I've got this far...

        $file_name_array = explode('/', $filename);
        $file_name_array_r = array_reverse($file_name_array);
        $save_to = 'system/cache/remote/'.$file_name_array_r[1].'-'.$file_name_array_r[0];

        $ch = curl_init($filename);
        $fp = fopen($save_to, "wb");

        // set URL and other appropriate options
        $options = array(CURLOPT_FILE => $fp,
                         CURLOPT_HEADER => 0,
                         CURLOPT_FOLLOWLOCATION => 1,
                         CURLOPT_TIMEOUT => 60); // 1 minute timeout (should be enough)
        curl_setopt_array($ch, $options);

        curl_exec($ch);
        curl_close($ch);
        fclose($fp);

    This creates the image file, but does not copy it across. Am I missing the point? Cheers guys.

    Read the article

  • how to put header authentication into a form using php?

    - by SkyWookie
    Hey guys, the page I am working on needs login authentication using Twitter (via the tweetphp API). For test purposes I used the code below to do a successful login:

        if (!isset($_SERVER['PHP_AUTH_USER'])){
            header('WWW-Authenticate: Basic realm="Enter your Twitter username and password:"');
            header('HTTP/1.0 401 Unauthorized');
            echo 'Please enter your Twitter username and password to view your followers.';
            exit();
        }
        $username = $_SERVER['PHP_AUTH_USER'];
        $password = $_SERVER['PHP_AUTH_PW'];

    The problem now is that I want to integrate it into a form. So far I have the following:

        <form action="logincheck.php" method="post" class="niceform" >
            <fieldset>
                <legend>Twitter Login:</legend>
                <dl>
                    <dt><label for="email">Twitter Username:</label></dt>
                    <dd><input type="text" name="username" id="username" size="32" maxlength="128" /></dd>
                </dl>
                <dl>
                    <dt><label for="password">Password:</label></dt>
                    <dd><input type="password" name="password" id="password" size="32" maxlength="32" /></dd>
                </dl>
            </fieldset>
            <fieldset class="action">
                <input type="submit" name="submit" id="submit" value="Submit" />

    I am sending it to logincheck.php, and this is where I think I get stuck. I am not sure how to compare the form data with Twitter's login data. I was trying an if statement similar to the one in the first code block (the box that pops up before the page loads), but I couldn't wrap my head around it. Thanks again guys!

    Read the article

  • How to authenticate multiple entry points in a facebook app?

    - by Simon_Weaver
    I am using an IFrame application with XFBML and the new JavaScript API. I'd like to have a Facebook application with multiple entry points. These will most likely represent different links coming from a fan page tab. I can do this quite easily if the pages don't require authentication - for instance, I can create several pages under the app, and if a new user comes I can send them to any page:

        http://apps.facebook.com/myapp/offers
        http://apps.facebook.com/myapp/game
        http://apps.facebook.com/myapp/products

    The problem is that if I need authentication, then once the user is authenticated they get redirected to my default post-authorization URL. Is there a way for a user that comes to /game to stay on /game after they are authenticated, without redirecting? I thought I could do it with the AJAX login form - but I cannot find out how to do that in a Facebook IFrame application. I think the example using requirelogin only works for FBML:

        <a href="http://apps.facebook.com/mysmiley" requirelogin=1>Welcome to my app</a>.

    Is there a way to accomplish this with the Facebook APIs - or will I have to do some kind of clever cookie handling?

    Read the article

  • Ajax not working in visual studio 2005

    - by sachin
    I am trying to build an Ajax website, but my Ajax is not working. I checked my GAC and the System.Web.Extensions dll is available. Why is it not working? I am also not getting any errors. I tried many things. I wrote the code below to test Ajax:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Register Assembly="System.Web.Extensions" Namespace="System.Web.UI" TagPrefix="asp" %>
        <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server">
                </cc1:ToolkitScriptManager>
                <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
                <cc1:CalendarExtender ID="CalendarExtender1" runat="server" TargetControlID="TextBox1">
                </cc1:CalendarExtender>
            </div>
            </form>
        </body>
        </html>

    The JavaScript error that I got:

        1. Type is not defined
           http://localhost:1467/testnew/Default.aspx?_TSM_HiddenField_=ToolkitScriptManager1_HiddenField&_TSM_CombinedScripts_=%3b%3bAjaxControlToolkit%2c+Version%3d1.0.20229.20821%2c+Culture%3dneutral%2c+PublicKeyToken%3d28f01b0e84b6d53e%3aen-US%3ac5c982cc-4942-4683-9b48-c2c58277700f%3ae2e86ef9%3aa9a7729d%3a9ea3f0e2%3a9e8e87e9%3a1df13a87%3a4c9865be%3aba594826%3a507fcf1b%3ac7a4182e

    Read the article

  • XMLHttpRequest() and outputting csv file

    - by sjw
    Initially, I developed a JavaScript function that used window.open to post the contents of a form to a new window, which simply opened the new window and initiated a CSV file download. Now, on reflection, I find the opening of the window superfluous and am trying to just execute an XMLHttpRequest() to download the CSV. I am not getting what I want, and I'm not 100% sure that I can, so I thought I'd ask here for some assistance. When a form is submitted, I want to take all the form values and post them to another page which builds an SQL string and builds a CSV based on the contents. (This I can do, and it works fine with XMLHttpRequest(), as seen below.)

        var xhReq = new XMLHttpRequest();
        var parameters = "";
        for ( i=0; i<formObj.elements.length; i++ ) {
            parameters += formObj.elements[i].name + "=" + encodeURI( formObj.elements[i].value ) + "&";
        }
        xhReq.open("POST", outputLocation, false);
        xhReq.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        xhReq.setRequestHeader("Content-length", parameters.length);
        xhReq.setRequestHeader("Connection", "close");
        xhReq.send(parameters);
        document.write(xhReq.responseText);

    The code above calls the page OK and builds the CSV OK, but it outputs the contents into the current browser window instead of initiating a CSV file download. Can I achieve what I need using XMLHttpRequest(), or am I going about it the wrong way? Thanks
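    For contrast, a sketch of an alternative (an assumption, not from the post): a response fetched through XMLHttpRequest only ever lands in JavaScript, so to get the browser's own download behaviour the POST has to be a normal navigation, for example a dynamically built hidden form submitted to the CSV endpoint, with the server sending a Content-Disposition: attachment header. The function name is hypothetical; formObj and outputLocation follow the question's naming.

        // Hypothetical sketch: let the browser handle the CSV download by
        // submitting a real (hidden) form instead of using XMLHttpRequest.
        function downloadCsv(formObj, outputLocation) {
            var form = document.createElement("form");
            form.method = "POST";
            form.action = outputLocation;           // page that builds and returns the CSV

            for (var i = 0; i < formObj.elements.length; i++) {
                var field = document.createElement("input");
                field.type = "hidden";
                field.name = formObj.elements[i].name;
                field.value = formObj.elements[i].value;
                form.appendChild(field);
            }

            document.body.appendChild(form);
            form.submit();                           // browser receives the CSV and offers to save it
        }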

    Read the article

  • jQuery each loop - using variables

    - by Sam
    I have a list of products. Each product has a title and a review link. Currently the titles link directly to the individual product page, and the review links go elsewhere. I'd like to use a jQuery each loop to cycle through each li, take the href from the title (the first link), and apply it to the review link (the second link), so they both point to the product page. Simplified code would be as follows:

        <ul>
            <li><a href="product1.html">Product 1</a><a href="review1.html">Review 1</a></li>
            <li><a href="product2.html">Product 2</a><a href="review2.html">Review 2</a></li>
            <li><a href="product3.html">Product 3</a><a href="review3.html">Review 3</a></li>
        </ul>

    I thought it would be something like the following:

        $("li").each(function(){
            var link = $("a:eq(0)").attr('href');
            $("a:eq(1)").attr("href", link);
        });

    But it always uses the same "link" value for every item. Can someone help me out?
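    Not part of the question, but the usual fix is to scope both selectors to the list item currently being visited; otherwise a:eq(0) always matches the first link in the whole document:

        // Scope both lookups to the <li> the loop is currently visiting.
        $("li").each(function () {
            var link = $(this).find("a:eq(0)").attr("href");   // product link in this item
            $(this).find("a:eq(1)").attr("href", link);        // point the review link at it
        });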

    Read the article
