Search Results

Search found 37204 results on 1489 pages for 'page validators'.

Page 534 of 1489

  • Trying to parse an XML from a URL and it won't work

    - by ida
    This page shows an XML file, and I'm trying to use SimpleXML to parse the data out and print it. What am I missing? All it does is show a blank page when I run it.

        <?php
        $url = "http://api.scribd.com/api?method=docs.getList&api_key=2apz5npsqin3cjlbj0s6m";
        $xml = new SimpleXMLElement($url, NULL, true);
        foreach ($xml->result as $value) {
            echo $value->doc_id."<br/>";
            echo $value->access_key."<br/>";
            echo $value->secret_password."<br/>";
            echo $value->title."<br/>";
        }
        ?>
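
    A minimal, hedged sketch of a more defensive way to load the feed, assuming the endpoint returns well-formed XML: fetching and parsing in separate steps with libxml error collection usually explains a blank page (the SimpleXMLElement constructor throws on a failed fetch or parse, and with display_errors off that surfaces as an empty response). The API key is a placeholder, and the element names simply follow the snippet above.

        <?php
        // Hedged sketch: make fetch and parse failures visible instead of a blank page.
        libxml_use_internal_errors(true);
        $url = "http://api.scribd.com/api?method=docs.getList&api_key=YOUR_KEY"; // placeholder key
        $raw = file_get_contents($url);            // requires allow_url_fopen; otherwise use cURL
        if ($raw === false) {
            die("Could not fetch the URL.");
        }
        $xml = simplexml_load_string($raw);
        if ($xml === false) {
            foreach (libxml_get_errors() as $error) {
                echo $error->message, "<br/>";
            }
            die("Could not parse the response.");
        }
        foreach ($xml->result as $value) {         // element names follow the original snippet
            echo $value->doc_id, "<br/>";
        }
        ?>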

    Read the article

  • How to trigger saved password autofill in browsers?

    - by Aleksander Kmetec
    I have a web application written in pure JavaScript (no pre-generated HTML except for the document which loads all the JS files). This app contains a login form which is created dynamically when the document.ready event is triggered. I trick the browser into displaying the "Remember password?" dialog by posting the login form into a hidden iframe before logging in using Ajax (in Firefox the password appears in the saved-password list, so this part obviously works), but saved passwords never get filled in when the login screen is loaded again at a later time. The same thing happens in Firefox and Safari. Is there something I can do, or some function I can call, to trigger autofill? UPDATE: autofill works in Safari on initial page load, but not when the user logs out and the login form is recreated without a page reload. In Firefox it never works.
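
    For reference, a hedged sketch of the hidden-iframe trick described above; the form and button ids and the /login endpoint are illustrative, not taken from the app.

        // Hedged sketch: post the real form into an invisible iframe so the browser
        // sees a conventional form submission and offers to save the credentials.
        $(document).ready(function () {
            $('<iframe name="loginSink" style="display:none"></iframe>').appendTo('body');

            $('#loginButton').click(function () {       // hypothetical button id
                var form = $('#loginForm');             // hypothetical form id
                form.attr({ target: 'loginSink', method: 'post', action: '/login' });
                form[0].submit();                       // a real form post, so the password manager kicks in
                // ...then continue with the existing Ajax login
            });
        });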

    Read the article

  • Installing a new SQL Server instance fails

    - by Rubio
    My setup previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008 and updated the command-line parameters to those documented for the latter. If the computer already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows:

        /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword

    The requested instance name does not exist on the target machine, yet the installation ends with error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features and Redistributable Features. Whatever I select, clicking Next results in the same error ("There are validation errors on this page"). Any ideas anyone? Thanks, -- Rubio
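
    For comparison, a hedged sketch of the unattended invocation with the parameters above passed straight to the SQL Server 2008 setup bootstrapper; the summary-log path is the usual default and may differ on the target machine.

        rem Hedged sketch: run the bootstrapper quietly, then inspect the summary log.
        setup.exe /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword
        type "%ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary.txt"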

    Read the article

  • Set focus and caret position in a textarea according to mouse position, as if the user had clicked

    - by JeanHuguesRobert
    Once a page with a textarea is loaded, I want that textarea to have the focus immediately if the mouse cursor is inside it. This is the easy part, because an onmouseover handler can set the focus. Now, how do I also set the position of the caret? I would like the caret to be where it would be if the user had clicked with the mouse to set the focus/caret. The basic use case is:

        1. The user clicks on a link and waits (the mouse barely moves).
        2. A page is delivered; it contains only a big textarea, full of text.
        3. The user types using the keyboard.
        4. Characters are inserted right below the mouse cursor.

    Today the user has to wait until the caret is visible (at the top left of the textarea) and then click to move the caret before typing. Thanks!
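
    A hedged sketch of one possible approach, relying on document.caretPositionFromPoint with the WebKit caretRangeFromPoint fallback; support for reading caret offsets inside a textarea varies by browser, so treat this as an experiment rather than a guaranteed fix, and the element id is illustrative.

        // Hedged sketch: on the first mouse movement after load, put the caret under the pointer.
        var ta = document.getElementById('bigTextarea');   // hypothetical id
        ta.addEventListener('mousemove', function (e) {
            var offset = null;
            if (document.caretPositionFromPoint) {                          // Firefox
                var pos = document.caretPositionFromPoint(e.clientX, e.clientY);
                if (pos) offset = pos.offset;
            } else if (document.caretRangeFromPoint) {                      // WebKit/Blink
                var range = document.caretRangeFromPoint(e.clientX, e.clientY);
                if (range) offset = range.startOffset;
            }
            ta.focus();
            if (offset !== null) ta.setSelectionRange(offset, offset);
        }, { once: true });   // older browsers: drop the option and unbind the handler manually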

    Read the article

  • Is there a workaround to Safari's/Opera's bug that you can't tab through hyperlinks?

    - by scunliffe
    In IE, Firefox, Chrome and most Windows-based interfaces that I've used, the Tab key can be used to navigate from one form field or hyperlink to the next (i.e. "actionable" items). (Note: I have not tested on other operating systems.) However, Safari and Opera skip all hyperlinks in a web page when tabbing. IMHO it's a usability bug, but I digress. Is there a workaround/hack to make Safari and/or Opera navigate through these links? I've noticed that Opera will accept the tabindex attribute if set, e.g. tabindex="0", thus maintaining the link's "index" within the flow of the DOM on the page... but Safari does not want to accept this. For those interested, this bit of jQuery will make all the hyperlinks tab-able:

        // Make links 'tab-able' in Opera
        $(document).ready(function(){
            if ($.browser.opera) {
                $('a[href]').attr('tabindex', 0);
            }
        });

    ...and although this seems to work for Opera, is there a better workaround?

    Read the article

  • Relying on NHibernate's second-level cache vs. pushing objects into the session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consists of dynamically generated chart images. For each chart image on this page, the client makes a request to the corresponding controller, and the controller generates the image using some entities. I can either use ASP.NET's session dictionary for "caching" those entities, or rely on NHibernate's second-level cache support, using cached queries for example. What is your opinion? By the way, I will use shared hosting - is NHibernate's second-level cache shared-hosting friendly? Thanks.
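
    A hedged sketch of the second-level/query-cache route, assuming an in-process cache provider (relevant on shared hosting, where a distributed cache is usually unavailable); the entity and variable names are illustrative, and the mapped class must also be marked cacheable in its mapping.

        // Hedged sketch: enable the second-level and query caches, then mark the hot query as cacheable.
        // hibernate.cfg.xml (or equivalent programmatic configuration):
        //   <property name="cache.use_second_level_cache">true</property>
        //   <property name="cache.use_query_cache">true</property>
        //   <property name="cache.provider_class">NHibernate.Cache.HashtableCacheProvider</property>
        //   (HashtableCacheProvider is the built-in demo provider; swap in e.g. SysCache for real use.)

        using (var session = sessionFactory.OpenSession())
        {
            var chartData = session
                .CreateQuery("from ChartEntity c where c.ReportId = :id")   // hypothetical entity
                .SetInt32("id", reportId)
                .SetCacheable(true)      // the result set goes into the query cache
                .List<ChartEntity>();
        }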

    Read the article

  • PHP code works on localhost but not on the web host ($_GET)

    - by Mestika
    Hi all, the following code works fine on my WAMP localhost server, but when I upload it to my web host it doesn't, and I'm a bit confused about what's wrong. The code is:

        <?php
        if (isset($_GET['menu'])) {
            if ($_GET['menu'] == 3) {
                echo "<!--Gallery Scripts-->\n";
                echo gallery();
            }
        }
        ?>

    The purpose is that if the URL says index.php?menu=3, it will run the function gallery(), which loads the gallery. I'm doing this "trick" several times to avoid loading all my JavaScript and functions each time the page loads. Thanks, Mestika
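
    A hedged debugging sketch for narrowing down the difference between the two environments; it only dumps what the host actually receives and surfaces errors the host's php.ini may be hiding.

        <?php
        // Hedged sketch: temporarily show errors and inspect the query string on the web host.
        ini_set('display_errors', '1');
        error_reporting(E_ALL);
        var_dump($_GET);                        // does ?menu=3 survive the host's rewrite rules?
        var_dump(function_exists('gallery'));   // was the include that defines gallery() loaded?
        ?>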

    Read the article

  • jQuery - cycle help

    - by MrTunes
    I'm looking to get some help with using the Cycle plugin for jQuery. I'm working through the beginner demos and have completed the very first one. This is the second one on the page:

        <script src="jquery-1.2.6.min.js" type="text/javascript"></script>
        <script src="jquery.cycle.all.min.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            $('.pics').cycle({
                fx:      'scrollDown',
                speed:   300,
                timeout: 2000
            });
        </script>

    My CSS is identical to the one on the page; that's why I put .pics in the quotes.
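
    As pasted, the script never closes the ready handler, which alone would stop the block from running; a minimal corrected sketch, assuming the markup matches the demo (a container with class pics holding the images):

        <script type="text/javascript">
        $(document).ready(function() {
            $('.pics').cycle({
                fx:      'scrollDown',
                speed:   300,
                timeout: 2000
            });
        });   // this closing '});' for the ready handler is what the snippet above is missing
        </script>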

    Read the article

  • CSS Background image arbitrarily not getting requested at all

    - by Pekka
    I'm in the process of building a template for Redmine (a project management system based on Ruby on Rails). Ruby is running on a virtual server from a Bitnami.org installation package; the OS is Windows. The template essentially consists of a styles.css file. In that file, I have the following rule:

        #header {
            padding: 0px;
            padding-top: 48px;
            background-color: #62DFFF;
            background-image: url(../images/bkg.jpg);
            background-position: center bottom;
            background-repeat: repeat-x;
            height: 150px;
        }

    It's a header element with a background image. The problem: this background image arbitrarily appears and disappears when reloading the page. It is more often missing than not. Say you reload the page in your browser ten times in twenty seconds; the image will appear in two instances and be missing in the other 18. I would have put this down to server problems, but the weird thing is that when it's missing, the request for the image doesn't appear in Firebug's Net tab at all. Even if it were cached, the request should be there, shouldn't it? I am 100% sure the CSS file does not change in between; I have examined both instances with Firebug and the CSS is identical. When I change the image's URL by editing the style declaration in Firebug to bkg.jpg?xyz=123445, the image does get loaded, which makes me think this must be a server problem. It happens in both Firefox and Chrome, so it must be something basic I'm overlooking. What could cause a browser not to load a resource at all? I have zero idea about Ruby or Rails - getting Redmine running and customized is all I have ever had to do with this platform - so I don't really know where to start debugging. Apache's, Mongrel's and Redmine's error logs look fine, though. And then again, this looks like a browser issue. I'm stumped.

    Read the article

  • Why does Chrome ignore local jQuery cookies?

    - by Nathan Long
    I am using the jQuery Cookie plugin (download and demo and source code with comments) to set and read a cookie. I'm developing the page on my local machine. The following code will successfully set a cookie in Firefox 3, IE 7, and Safari (PC), but if the browser is Google Chrome AND the page is a local file, it does not work.

        $.cookie("nameofcookie", cookievalue, {path: "/", expires: 30});

    What I know:

        - The plugin's demo works with Chrome.
        - If I put my code on a web server (address starting with http://), it works with Chrome.

    So the cookie fails only for Google Chrome on local files. Possible causes:

        - Google Chrome doesn't accept cookies from web pages on the hard drive (paths like file:///C:/websites/foo.html)
        - Something in the plugin implementation causes Chrome to reject such cookies

    Can anyone confirm this and identify the root cause?

    Read the article

  • Multiple GET arguments

    - by AJ Ravindiran
    Hello, I've been working with PHP lately and came across something I couldn't solve. Basically, I have a form:

        <form method="get">
            <fieldset class="display-options" style="float: left">
                Search by name or ip:
                <input type="text" name="key" value="" />&nbsp;
                <input type="submit" class="button2" value="Search" />
            </fieldset>
        </form>

    The problem is that the URL already has arguments: http://example.com/logs.php?type=admin&page=1. How would I pass the form's argument along with the already existing arguments, like so: http://example.com/logs.php?type=admin&page=1&key=name? Thanks in advance, AJ.
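
    A minimal sketch of the usual fix: a GET form replaces the whole query string on submit, so anything not in the form is dropped; re-emitting the current parameters as hidden fields carries them through. The parameter names follow the URLs above.

        <form method="get">
            <input type="hidden" name="type" value="<?php echo htmlspecialchars($_GET['type']); ?>" />
            <input type="hidden" name="page" value="<?php echo htmlspecialchars($_GET['page']); ?>" />
            <fieldset class="display-options" style="float: left">
                Search by name or ip:
                <input type="text" name="key" value="" />&nbsp;
                <input type="submit" class="button2" value="Search" />
            </fieldset>
        </form>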

    Read the article

  • webViewDidFinishLoad exception

    - by Nava Carmon
    Hi, I have a screen with a UIWebView in my navigation controller stack, and when I navigate from this view back to the previous one before the page has completely loaded, I get an EXC_BAD_ACCESS. It seems that the view holding the web view is released before webViewDidFinishLoad: is called. My question is how to overcome this problem - I don't expect the user to wait until the page loads... The code is very simple:

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            NSURL *url = [NSURL URLWithString:storeUrl];
            // URL request object
            NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
            // Load the request in the UIWebView.
            [browser loadRequest:requestObj];
        }

    TIA
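
    A hedged sketch of the usual defence: detach the delegate and stop the load before the controller goes away, so the web view has nothing dangling to call back into (browser is the web view outlet from the snippet above; under manual reference counting the same lines belong in dealloc too).

        // Hedged sketch: make sure webViewDidFinishLoad: can never reach a freed controller.
        - (void)viewWillDisappear:(BOOL)animated {
            [super viewWillDisappear:animated];
            [browser stopLoading];     // cancel the in-flight request
            browser.delegate = nil;    // UIWebView does not retain its delegate, so nil it out explicitly
        }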

    Read the article

  • Ensuring unique ID attribute for elements within ScriptControl

    - by Andy West
    I'm creating a control based on ScriptControl, and I'm overriding the Render method like this:

        protected override void Render(HtmlTextWriter writer)
        {
            RenderBeginTag(writer);
            writer.RenderBeginTag(HtmlTextWriterTag.Div);
            writer.Write("This is a test.");
            writer.RenderEndTag();
            RenderEndTag(writer);
        }

    My question is: what if I want to assign the div an ID attribute and have it be unique on the page, even if there are multiple instances of my control? I've seen other people's code that does this:

        writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ID + "_divTest");

    That will prevent naming conflicts between instances of my control, but what if I've already created a div elsewhere on the page that coincidentally has the same ID? I've also heard about implementing INamingContainer. Would that apply here? How could I use it?
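
    A hedged sketch of one common approach: derive the attribute from ClientID rather than ID, since ClientID already incorporates the naming-container hierarchy that INamingContainer establishes and is unique within the rendered page.

        // Hedged sketch: ClientID is the fully qualified, page-unique form of the control's ID.
        protected override void Render(HtmlTextWriter writer)
        {
            RenderBeginTag(writer);
            writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID + "_divTest");
            writer.RenderBeginTag(HtmlTextWriterTag.Div);
            writer.Write("This is a test.");
            writer.RenderEndTag();
            RenderEndTag(writer);
        }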

    Read the article

  • How do ASP.NET processes, threads and app pools work?

    - by Michel
    Hi, as I understand it, when I load an ASP.NET .aspx page on the (IIS) server, it's processed by the w3wp.exe worker process. But when IIS gets multiple requests, are they all processed by the same w3wp.exe process? And does this process automatically use all my processors and cores? One more thing: when I start a new thread in my page, that thread keeps working after the page has already been served to the client. Where does this thread live - also in the w3wp.exe process? And what happens if I assign another app pool to my site? Michel

    Read the article

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search result directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings? Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on co-operation from the web crawler. However, I'd also like to block any data-mining users who will ignore the robots.txt file. I appreciate any help!
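
    A minimal robots.txt sketch for the co-operative crawlers, assuming the user-specific results live under a /Search path (the path is illustrative); crawlers that ignore robots.txt have to be handled server-side, for example by requiring authentication on those actions.

        # Hedged sketch: keep well-behaved spiders out of the user-specific results.
        User-agent: *
        Disallow: /Search/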

    Read the article

  • Web service request issue with dynamic request inputs

    - by nanda
    I have the following code to retrieve a PDF web response:

        try
        {
            const string siteURL = "http://ops.epo.org/2.6.1/soap-services/document-retrieval";
            const string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id='EP 1000000A1 I ' page-number='1' document-format='SINGLE_PAGE_PDF' system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";

            var request = (HttpWebRequest)WebRequest.Create(siteURL);
            request.Method = "POST";
            request.Headers.Add("SOAPAction", "\"document-retrieval\"");
            request.ContentType = " text/xml; charset=utf-8";

            Stream stm = request.GetRequestStream();
            byte[] binaryRequest = Encoding.UTF8.GetBytes(docRequest);
            stm.Write(binaryRequest, 0, docRequest.Length);
            stm.Flush();
            stm.Close();

            var memoryStream = new MemoryStream();
            WebResponse resp = request.GetResponse();
            var buffer = new byte[4096];
            Stream responseStream = resp.GetResponseStream();
            {
                int count;
                do
                {
                    count = responseStream.Read(buffer, 0, buffer.Length);
                    memoryStream.Write(buffer, 0, count);
                } while (count != 0);
            }
            resp.Close();
            byte[] memoryBuffer = memoryStream.ToArray();
            System.IO.File.WriteAllBytes(@"E:\sample12.pdf", memoryBuffer);
        }
        catch (Exception ex)
        {
            throw ex;
        }

    The code above retrieves the PDF web response. It works fine as long as the request remains constant:

        const string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id='EP 1000000A1 I ' page-number='1' document-format='SINGLE_PAGE_PDF' system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";

    But how do I retrieve the same thing with dynamic requests? When the above code is changed to accept dynamic inputs like this:

        [WebMethod]
        public string DocumentRetrivalPDF(string docid, string pageno, string docFormat, string fileName)
        {
            try
            {
                ........
                .......
                string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id=" + docid + " page-number=" + pageno + " document-format=" + docFormat + " system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";
                ......
                ........
                return "responseTxt";
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
        }

    it returns "INTERNAL SERVER ERROR: 500". Can anybody help me with this?
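
    The dynamically built XML above drops the single quotes around the attribute values, so the server receives malformed XML (id=EP 1000000A1 I instead of id='EP 1000000A1 I '), which would explain the 500. A hedged sketch of the corrected concatenation:

        // Hedged sketch: keep the attribute quoting when splicing in the dynamic values.
        string docRequest =
            "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' " +
            "xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'>" +
            "<soap:Body><document-retrieval id='" + docid + "' page-number='" + pageno + "' " +
            "document-format='" + docFormat + "' system='ops.epo.org' xmlns='http://ops.epo.org' />" +
            "</soap:Body></soap:Envelope>";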

    Read the article

  • Would Like Multiple Checkboxes to Update World PNG Image Using Mogrify to FloodFill Countries With Color

    - by socrtwo
    Hi. I'm asking in another forum whether the best way to do this is with JavaScript or Ajax, but I'm wondering if there is an even simpler way. I'm trying to create a web service where users can check which countries they have visited from a list of 175 or so, and a world map image would then instantly update with a filled color. There are other similar services, but I'm envisioning mine to update both from checks in checkboxes and from clicks on the target country in the displayed image, say with an imagemap. Additionally, other solutions display all the visited countries in the same color; I would like different colors for different countries, or at least for those countries that touch. Eventually I would like to include a feature that lets users choose which colors to assign to countries.

    I found a SourceForge project called pwmfccd. It's simply an open-source image of the world and the coordinates on the PNG image for all the countries. You can use mogrify from ImageMagick and floodfill to fill the countries with color. I have done this successfully, locally, with batch files. My ISP has told me where mogrify is located, basically /usr/bin/mogrify. I now have a horrendously complicated CGI script which, if it worked, is set to redraw the world map image with each checkbox. It's here. It also redraws the whole web page with each check. The web page starts here. Of course this is not at all efficient, and I think the real way to go is probably Ajax or JavaScript, so that maybe just the image gets changed and redrawn, not the whole web page. Sorry, I don't even know the difference between JavaScript and Ajax and their relative merits at this point.

    I suppose you could make just one part of the image update with each check or click on the image, instead of redrawing even just the image, but I have never heard a hint of being able to do that for irregularly shaped image elements like countries. So I guess an imagemap, and sister checkbox entries tied to mogrify events redrawing the user's personal copy of the image with an image refresh, would be the only way to go. So how do you do this with something other than JavaScript or Ajax, or is that definitely the way to go, and if so, how would you do it? Or can you after all cut up a web-based image into irregular puzzle-shaped pieces which you can redraw individually at will? Thanks in advance for reading and considering answering this post.
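
    For reference, a hedged sketch of the ImageMagick flood-fill step the pwmfccd batch files perform; the file names, coordinates and color are illustrative, and on the shared host the binary would be the /usr/bin/mogrify mentioned above.

        # Hedged sketch: flood-fill one country on a per-user copy of the base map.
        cp world_base.png user_1234.png                                    # hypothetical file names
        /usr/bin/mogrify -fill "#cc3333" -draw "color 310,240 floodfill" user_1234.png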

    Read the article

  • Pasting images in TinyMCE in a Rails app

    - by Sam Kong
    Hi, I found a weird problem with the TinyMCE editor. Copying & pasting images from another domain works fine, but if the images are on the same domain the paths are made relative and are sometimes not correct. I figured out that the problem is related to the Rails URL scheme. Example: images are copied from the http://mydomain.com/index.html page, and the real image path is http://mydomain.com/photos/image.jpg.

        Editor page: http://mydomain.com/posts/new      -> image path is set to ../photos/image.jpg
        Editor page: http://mydomain.com/posts/edit/123 -> image path is set to ../../photos/image.jpg

    So I tried http://mydomain.com/posts/new/ and it worked. How do I solve this problem? Thanks. Sam
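
    A hedged sketch of the TinyMCE settings that usually control this, telling the editor to keep pasted URLs absolute instead of rewriting them relative to the current page; the option names are standard TinyMCE 3 settings, but verify them against the version in use.

        // Hedged sketch: stop TinyMCE from converting same-domain image URLs into relative paths.
        tinyMCE.init({
            mode: "textareas",
            relative_urls: false,        // keep URLs absolute instead of relative to the editor page
            remove_script_host: false,   // keep the http://mydomain.com part
            convert_urls: false          // or switch URL conversion off entirely
        });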

    Read the article

  • How to find a missing web part?

    - by Leonidius
    Does anybody know how to find the offending web part which causes this error? “A Web Part or Web Form Control on this Web Part Page cannot be displayed or imported because it is not registered on this site as safe.” I have inherited an old SharePoint 2003 portal site which uses custom web parts. I know what this error means. I also know that each web part must be installed and registered as safe in web.config. The problem is that I don't know which one is missing. I get the same error when I try to open the page in FrontPage as well.

    Read the article

  • ASP.Net Session data lost between pages

    - by Ananth
    Hi, I came across a weird behavior today with my web application. When I navigate from one page to another, I lose the data in one particular session variable. When I launch the app in Firefox, I can see that the session data is not lost. I use Response.Redirect(page2, false) to redirect to the other page. The code below was used to track the session variables:

        System.IO.StreamWriter sw = new System.IO.StreamWriter(@"c:\test.txt", true);
        for (int i = 0; i < Session.Count; i++)
        {
            sw.WriteLine(Session.Keys[i] + " " + Session.Contents[i]);
        }
        sw.Close();

    Can anyone help me with this? Any help is appreciated. ~/Ananth

    Read the article

  • DBpedia auto-suggest labels

    - by Sid
    Wikipedia has an auto-suggest feature on its search field. If you type in "mars", for instance, it lists a few items including Mars, Marseille, Marsh. I am looking to implement something similar working off the latest DBpedia export (Wikipedia in database form). If I search for all labels in the labels_en.nt file that DBpedia offers that begin with "mars", then even if I remove the ones that redirect to others already listed, I end up with a huge list. In trying to understand how Wikipedia does this, I noticed that I'm actually querying this URL, which returns a JSON string. Now my problem is that I don't know how Wikipedia narrows the list down. Perhaps it does so based on page popularity: the more views/edits a page has, the higher it goes in the suggestion box. If so, does DBpedia export this information?
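
    A hedged sketch of a prefix query against the public DBpedia SPARQL endpoint, using the count of incoming wiki links as a rough popularity proxy; DBpedia does not ship view or edit counts, so the ranking property is an assumption to verify against the dataset version in use.

        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>

        # Hedged sketch: label prefix match, ranked by the number of pages linking in.
        SELECT ?resource ?label (COUNT(?source) AS ?inlinks)
        WHERE {
          ?resource rdfs:label ?label .
          FILTER (LANG(?label) = "en" && REGEX(?label, "^mars", "i"))
          OPTIONAL { ?source dbo:wikiPageWikiLink ?resource }
        }
        GROUP BY ?resource ?label
        ORDER BY DESC(?inlinks)
        LIMIT 10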

    Read the article

  • How to inform users that a web application does not support IE6

    - by Paul Szulc
    I have a web application and I do not really care about IE6 users. However, I would like some kind of feature that informs users that they are using IE6 and that their browser is not supported. I was thinking about two possible solutions:

        1. A pop-up window (probably JavaScript) with the information, shown on every page the user enters.
        2. A special page with the information, to which the user is redirected whenever he tries to access my application.

    Both solutions would be sufficient, but I would prefer the second one. Probably some magic JavaScript needs to be involved - can anyone please provide a solution for this?
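
    A minimal sketch of the second option using an IE conditional comment, which only IE 6 and older act on, so no JavaScript or browser sniffing is needed; the target URL is illustrative. Place it in the head of every page (or of the layout template).

        <!--[if lte IE 6]>
            <meta http-equiv="refresh" content="0; url=/unsupported-browser.html" />
        <![endif]-->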

    Read the article

  • Masked Edit Extender Format Issue

    - by Kumar
    I am using an ASP.NET AJAX Masked Edit Extender to format phone numbers:

        <asp:TextBox ID="tbPhoneNumber" runat="server" />
        <ajaxToolkit:MaskedEditExtender TargetControlID="tbPhoneNumber" Mask="(999)999-9999"
            MaskType="Number" InputDirection="LeftToRight" ClearMaskOnLostFocus="false"
            ClearTextOnInvalid="false" runat="server" AutoComplete="false" />

    In the page load event I am trying to populate the phone textbox as follows:

        protected void Page_Load(object sender, EventArgs e)
        {
            tbPhoneNumber.Text = "(394)456-310";
        }

    So there is one digit missing at the end to make it a valid phone number. When the page loads I expected the value in the textbox to be (394)456-310_ but it displays (_39)445-6310. Why is this happening?

    Read the article

  • How to pause an AJAX poll until a triggered function has run

    - by gowri
    I am calling AJAX every second on a page. The server page returns a randomly generated number, and using this number (converted into seconds) I trigger another function in the AJAX success handler. It works. My problem: suppose the random number is 5, meaning the trigger() function is called after 5 seconds using setTimeout; but remember the AJAX call fires every second, so the trigger function also gets called many times. I want to make the AJAX call wait until the trigger function has executed - that is, pause the AJAX polling for those 5 seconds and resume afterwards. How can I do this? My code:

        // this ajax is called every minute
        $.ajax({
            type: "POST",
            url: 'serverpage',
            data: ({pid: 1}),
            success: function(msg) {
                var array = msg.split('/');
                if (array[0] == 1) {
                    setTimeout(function() {
                        trigger(msg);
                    }, array[1] + '000');
                }
            }
        });

        // and my trigger function
        function trigger(value) {
            alert("i am triggered !");
        }

    The server response may be 1/2 or 1/5 or 1/10 or 1/1; here 1/3 means the second part is converted into seconds.
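
    A hedged sketch of one way to do it with a simple busy flag: the poll keeps running on its interval but skips requests while a scheduled trigger is pending, which has the same effect as pausing for those seconds. The serverpage URL and the 1/5-style response format are taken from the snippet above.

        // Hedged sketch: skip polling while a previously scheduled trigger() has not run yet.
        var busy = false;

        setInterval(function () {
            if (busy) return;                        // paused: a trigger is still pending
            $.ajax({
                type: "POST",
                url: 'serverpage',
                data: { pid: 1 },
                success: function (msg) {
                    var parts = msg.split('/');
                    if (parts[0] == 1) {
                        busy = true;
                        setTimeout(function () {
                            trigger(msg);
                            busy = false;            // resume polling once the trigger has fired
                        }, parseInt(parts[1], 10) * 1000);
                    }
                }
            });
        }, 1000);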

    Read the article

  • Which metadata should I save when downloading web pages?

    - by Vojtech R.
    Hi, I'm going to download some thousands of web pages (for future language-processing purposes). Now I'm wondering which metadata I should save. I've explored this, but I do not want to neglect something important. So far I have:

        <title>
        <link>
        <publish_date>
        <date_downloaded>
        <source>           // to this page
        <keyword>          // for Solr indexing
        <text>             // cleaned body of page

    Is there something important that I could be missing in the future?

    Read the article
