Search Results

Search found 15385 results on 616 pages for 'browser compatibility'.

Page 584/616

  • C# mvc 3 using selectlist with selected value in view

    - by Rob
    I'm working on an MVC3 web application. I want a list of categories shown when editing a blog from the application's management system. In my viewmodel I've got the following property defined for a list of selectlistitems for categories. /// <summary> /// The List of categories /// </summary> [Display(Name = "Categorie")] public IEnumerable<SelectListItem> Categories { get; set; } The next step: my controller contains the following edit action, where the list of selectlistitems is filled from the database. public ActionResult Edit(Guid id) { var blogToEdit = _blogService.First(x => x.Id.Equals(id)); var listOfCategories = _categorieService.GetAll(); var selectList = listOfCategories.Select(x =>new SelectListItem{Text = x.Name, Value = x.Id.ToString(), Selected = x.Id.Equals(blogToEdit.Category.Id)}).ToList(); selectList.Insert(0, new SelectListItem{Text = Messages.SelectAnItem, Value = Messages.SelectAnItem}); var viewModel = new BlogModel { BlogId = blogToEdit.Id, Active = blogToEdit.Actief, Content = blogToEdit.Text, Title = blogToEdit.Titel, Categories = selectList //at this point i see the expected item being selected //Categories = new IEnumerable<SelectListItem>(listOfCategories, "Id", "Naam", blogToEdit.CategorieId) }; return View(viewModel); } When I set a breakpoint just before the view is returned, I can see that the selectlist is filled as I expected, so at this point everything seems to be okay; the viewmodel is filled entirely correctly. Then in my view (I'm using Razor) I've got the following two rules which are supposed to render the selectlist for me. @Html.LabelFor(m => m.Categories) @Html.DropDownListFor(model=>model.Categories, Model.Categories, Model.CategoryId) @Html.ValidationMessageFor(m => m.Categories) When I run the code and open the view to edit my blog, I can see all the correct data. The selectlist is also rendered correctly, but the item I want to be selected has lost its selection. How can this be? Up to the point where the viewmodel is returned with the view, everything is okay, but when I view the webpage in the browser the selectlist is there, only without the correct selection. What am I missing here? Or doing wrong?

  • JQuery validation not working for checkbox group

    - by Chris Halcrow
    I'm having trouble getting JQuery validation to work with a set of checkboxes. I'm generating the checkboxes using an ASP.NET checkboxlist, and I've used JQuery to set the 'name' attribute to the same thing for each checkbox in the list. Here's the code that gets written to the browser. I'm setting the 'validate' attribute on the 1st checkbox to set the rule that at least one checkbox must be selected. The JQuery validation works for all other elements on the form, but not for the checkbox list. I'm also using a JQuery form wizard on the page which triggers validation for each 'page' of the form, so I don't have control over how the validation is called. <input id="ContentPlaceHolder1_MainContent_AreaOfInterest_0" class="ui-wizard-content ui-helper-reset ui-state-default" type="checkbox" value="Famine" name="hello[]" validate="required:true, minlength:1"> <label for="ContentPlaceHolder1_MainContent_AreaOfInterest_0">Famine</label> <br> <input id="ContentPlaceHolder1_MainContent_AreaOfInterest_1" class="ui-wizard-content ui-helper-reset ui-state-default" type="checkbox" value="Events Volunteer" name="hello[]"> <label for="ContentPlaceHolder1_MainContent_AreaOfInterest_1">Events Volunteer</label> Any ideas on what's going wrong? There are lots of examples of JQuery scripts that will do the validation, however I'm trying to avoid this as I'm generating the checkboxlist server side by a custom control so that it can be re-used across different pages that may or may not have JQuery enabled. I'm trying to enable the JQuery validation whilst being as unobtrusive as possible, so that pages will still work even if JQuery is disabled. Here are the relevant JQuery inclusions and JQuery initialisation script for the form wizard. I'm not using any initialisation code for JQuery validation: <script type="text/javascript" src="../js/formwizard/js/bbq.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.form.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.form.wizard.js"></script> <script type="text/javascript" src="../js/formwizard/js/jquery.validate.js"></script> <script type="text/javascript"> $(document).ready(function () { $("#form1").formwizard({ validationEnabled: true, focusFirstInput: true }); }); </script>
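
    One approach that may help (a hedged sketch, not verified against this exact form wizard): the validate plugin reads rules keyed by the input's name rather than from a custom validate attribute, so the rule for the whole checkbox group can be registered from script after the wizard has attached its validator. The message text below is an assumption.

        $(document).ready(function () {
            $("#form1").formwizard({ validationEnabled: true, focusFirstInput: true });
            // .rules("add", ...) needs a validator on the form; the wizard
            // (validationEnabled: true) should already have created one.
            $("input[name='hello[]']").first().rules("add", {
                required: true,
                messages: { required: "Please select at least one option." }
            });
        });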

  • Custom dynamic error pages in Ruby on Rails not working

    - by PlanetMaster
    Hi, I'm trying to implement custom dynamic error pages following this post: http://www.perfectline.co.uk/blog/custom-dynamic-error-pages-in-ruby-on-rails I did exactly what the blog post says. I included config.action_controller.consider_all_requests_local = false in my environment.rb. But is not working. My browser shows: Routing Error No route matches "/555" with {:method=>:get} So, it looks like the rescues are not fired. I get the following in my log file: ActionController::RoutingError (No route matches "/555" with {:method=>:get}): Rendering rescues/layout (not_found) Is there some routing interfering with the code? I'm not sure what to look for. I'm running rails 2.3.5. Here is the routes.rb file: ActionController::Routing::Routes.draw do |map| # routing van property-url map.connect 'buy/:property_type_plural/:province/:city/:address/:house_number', :controller => 'properties' , :action => 'show', :id => 'whatever' map.myimmonatie 'myimmonatie' , :controller => 'myimmonatie/properties', :action => 'index' map.login "login", :controller => "user_sessions", :action => "create", :conditions => {:method => :post} map.login "login", :controller => "user_sessions", :action => "new" map.logout "logout", :controller => "user_sessions", :action => "destroy" map.buy "buy", :controller => 'buy' map.sell "sell", :controller => 'sell' map.home "home", :controller => 'home' map.disclaimer "disclaimer", :controller => 'disclaimer' map.sign_up "sign_up", :controller => 'users', :action => :new map.contact "contact", :controller => 'contact' map.resources :user_sessions map.resources :contact map.resources :password_resets map.resources :messages map.resources :users, :only => [:index,:new,:create,:activate,:edit,:profile,:password] map.resources :images map.resources :activation , :only => [:new,:resend] map.resources :email map.resources :properties, :except => [:index,:destroy] map.namespace :admin do |admin| admin.resources :users admin.resources :properties admin.resources :order_items, :as => :orders admin.resources :blog_posts, :as => :blog end map.connect 'myimmonatie/:action' , :controller => 'users', :id => 'current', :requirements => {:action => /(profile)|(password)|(email)/} map.namespace :myimmonatie do |myimmonatie| myimmonatie.resources :messages, :controller => 'messages' myimmonatie.resources :password, :as => "password", :controller => 'users', :action => 'password' myimmonatie.resources :properties , :controller => 'properties' myimmonatie.resources :orders , :only => [:index,:show,:create,:new] end map.root :controller => "home" map.connect ':controller/:action' map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' end ActionController::Routing::Translator.translate_from_file('config','i18n-routes.yml')

  • Disable button after submit

    - by Chris Philpotts
    I'm trying to disable a button when a user submits a payment form and the code to post the form is causing a double post in firefox. This problem does not occur when the code is removed, and does not occur in any browser other than firefox. Any idea how to prevent the double post here? System.Text.StringBuilder sb = new StringBuilder(); sb.Append("if (typeof(Page_ClientValidate) == 'function') { "); sb.Append("if (Page_ClientValidate() == false) { return false; }} "); sb.Append("this.value = 'Please wait...';"); sb.Append("this.disabled = true;"); sb.Append(Page.GetPostBackEventReference(btnSubmit )); sb.Append(";"); btnSubmit.Attributes.Add("onclick", sb.ToString()); it's the sb.Append(Page.GetPostBackEventReference(btnSubmit )) line that's causing the issue Thanks EDIT: Here's the c# of the button: <asp:Button ID="cmdSubmit" runat="server" Text="Submit" /> here's the html This code posts twice (and disables the submit button and verifies input): <input type="submit" name="ctl00$MainContent$cmdSubmit" value="Submit" onclick="if (typeof(Page_ClientValidate) == 'function') { if (Page_ClientValidate() == false) { return false; }} this.value = 'Please wait...';this.disabled = true;document.getElementById('ctl00_MainContent_cmdBack').disabled = true;__doPostBack('ctl00$MainContent$cmdSubmit','');" id="ctl00_MainContent_cmdSubmit" /> This code posts twice (but doesn’t disable the submit button): <input type="submit" name="ctl00$MainContent$cmdSubmit" value="Submit" onclick="__doPostBack('ctl00$MainContent$cmdSubmit','');" id="ctl00_MainContent_cmdSubmit" /> This code posts once (but doesn’t verify the user input and doesn’t disable the submit button): <input type="submit" name="ctl00$MainContent$cmdSubmit" value="Submit" id="ctl00_MainContent_cmdSubmit" /> This code posts once (but doesn’t disable submit button): <input type="submit" name="ctl00$MainContent$cmdSubmit" value="Submit" onclick="javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;ctl00$MainContent$cmdSubmit&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, false))" id="ctl00_MainContent_cmdSubmit" /> This code doesn’t post at all: <input type="submit" name="ctl00$MainContent$cmdSubmit" value="Submit" onclick="this.disabled = true;WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;ctl00$MainContent$cmdSubmit&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, false))" id="ctl00_MainContent_cmdSubmit" /> Obviously it’s the disabling of the submit button that’s posing the problem. Do you have any ideas how we can disable the submit to avoid multiple clicking?
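
    A hedged sketch of one common workaround (it assumes the rendered ID stays ctl00_MainContent_cmdSubmit): drop the extra __doPostBack call, let the submit button's own post go through once, and defer the disabling by a tick so the click is not swallowed before the form is posted.

        document.getElementById('ctl00_MainContent_cmdSubmit').onclick = function () {
            if (typeof Page_ClientValidate === 'function' && !Page_ClientValidate()) {
                return false;                 // invalid form: block the submit
            }
            var btn = this;
            window.setTimeout(function () {   // runs after the submit has been queued
                btn.value = 'Please wait...';
                btn.disabled = true;
            }, 0);
            return true;                      // the button's native submit posts exactly once
        };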

  • Stuck with jQuery thumbnail gallery behavior

    - by Vitor
    Hi everyone, I'm trying to create a simple thumbnail viewer with jQuery. Here goes the code: $(function(){ $("ul.thumbnails li").click(function(e){ var imgAlt = $(this).find('img').attr("alt"); //Get Alt Tag of Image var imgTitle = $(this).find('a').attr("href"); //Get Main Image URL $("img.featured").attr({ src: imgTitle , alt: imgAlt}); //Switch the main image (URL + alt tag) }); }); <div class="image-gallery"> <img class="featured" src="big.jpg"> <ul class="thumbnails"> <li id="thumb01"> <a href="big.jpg"><img src="small.jpg"></a> </li> <li id="thumb02"> <a href="big.jpg"><img src="small.jpg"></a> </li> <li id="thumb03"> <a href="big.jpg"><img src="small.jpg"></a> </li> </ul> </div> .thumbnails li { margin: 0; padding: 5px 5px 0 5px; list-style: none; float: left; overflow: hidden; } .thumbnails li img { width: 50px; height: 50px; margin: 0; padding: 0; float: left; } If I click the thumbnail, the browser goes straight to the full image instead of just changing the img src with jQuery. But if I click somewhere inside the <li>, it works. I know this must be simple, but I don't know where I'm going wrong. I've tried studying other galleries, like http://www.sohtanaka.com/web-design/examples/image-rotator/ , but couldn't find what I'm missing. I could really use some help from you guys :)
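
    For what it's worth, a minimal sketch of the usual fix: the click handler runs, but nothing cancels the <a>'s default navigation, so clicking the image still follows the link to big.jpg, while clicking the <li> padding works because there is no link under the cursor. Preventing the default action keeps the swap on the page.

        $(function () {
            $("ul.thumbnails li").click(function (e) {
                e.preventDefault();                          // stop the link from loading big.jpg
                var imgAlt = $(this).find('img').attr("alt");
                var imgSrc = $(this).find('a').attr("href"); // full-size image URL
                $("img.featured").attr({ src: imgSrc, alt: imgAlt });
            });
        });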

  • Getting jQuery slideshow animation to stop on click

    - by hollyb
    I have a slide show built with jQuery that pauses on hover. It has a group of thumbnails sitting on top of the image that advances the image when clicked, otherwise the slideshow just auto-rotates through all the images. There is also a +/- to expand and contract a caption related to each image. I want to have the slideshow's automatic advancing to stop if one of the thumbnails is clicked, or the +/-. Basically, just stop whenever a user clicks anywhere within the gallery (div class=".homeImg"). I'm having a major brain fart in getting this working properly and could use some advice. Here's the jQuery: $(document).ready(function() { $(".main_image .desc").show(); //Show image info $(".main_image .block").animate({ opacity: 0.85 }, 1 ); //Set Opacity //Click and Hover events for thumbnail list $(".image_thumb ul li:first").addClass('active'); // * Adds a class 'last' to the last li to let the rotator know when to return to the first $(".image_thumb ul li:last").addClass('last'); $(".image_thumb ul li").click(function(){ //Set Variables var imgAlt = $(this).find('img').attr("alt"); //Get Alt Tag of Image var imgTitle = $(this).find('a').attr("href"); //Get Main Image URL var imgDesc = $(this).find('.block').html(); //Get HTML of block var imgDescHeight = $(".main_image").find('.block').height(); //Calculate height of block if ($(this).is(".active")) { //If it's already active, then… return false; // Don't click through } else { //Animate $(".main_image img").animate({ opacity: 0}, 800 ); $(".main_image .block").animate({ opacity: 0, marginBottom: -imgDescHeight }, 800, function() { $(".main_image .block").html(imgDesc).animate({ opacity: 0.85, marginBottom: "0" }, 250 ); $(".main_image img").attr({ src: imgTitle , alt: imgAlt}).animate({ opacity: 1}, 250 ); }); } $(".image_thumb ul li").removeClass('active'); //Remove class of 'active' on all lists $(this).addClass('active'); //add class of 'active' on this list only return false; }) .hover(function(){ $(this).addClass('hover'); }, function() { $(this).removeClass('hover'); }); //Toggle teaser $("a.collapse").click(function(){ $(".main_image .block").slideToggle(); $("a.collapse").toggleClass("show"); return false; // added to remove # browser jump }); // If we are hovering over the image area, pause the clickNext function pauseClickNext = false; $(".homeImg").hover( function () { pauseClickNext = true; }, function () { pauseClickNext = false; } ); // Define function to click the next li var clickNext = function(){ if(!pauseClickNext) { /// find the next li after .active var $next_li = $("li.active").next("li"); if($("li.active").hasClass("last") ){ $(".image_thumb ul li:first").trigger("click"); } else { $next_li.trigger("click"); } } }; // Time between image transition setInterval(clickNext, 6000); });
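
    A minimal sketch of one way to do this (the variable name is an assumption): keep the id returned by setInterval and clear it on the first click anywhere inside .homeImg, so the rotation stops for good instead of only pausing while hovered.

        // instead of: setInterval(clickNext, 6000);
        var autoAdvance = setInterval(clickNext, 6000);

        $(".homeImg").click(function () {
            clearInterval(autoAdvance);   // any click inside the gallery stops the auto-advance
        });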

  • On Redirect - Failed to generate a user instance of SQL Server...

    - by Craig Russell
    Hello (this is a long post sorry), I am writing a application in ASP.NET MVC 2 and I have reached a point where I am receiving this error when I connect remotely to my Server. Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. I thought I had worked around this problem locally, as I was getting this error in debug when site was redirected to a baseUrl if a subdomain was invalid using this code: protected override void Initialize(RequestContext requestContext) { string[] host = requestContext.HttpContext.Request.Headers["Host"].Split(':'); _siteProvider.Initialise(host, LiveMeet.Properties.Settings.Default["baseUrl"].ToString()); base.Initialize(requestContext); } protected override void OnActionExecuting(ActionExecutingContext filterContext) { if (Site == null) { string[] host = filterContext.HttpContext.Request.Headers["Host"].Split(':'); string newUrl; if (host.Length == 2) newUrl = "http://sample.local:" + host[1]; else newUrl = "http://sample.local"; Response.Redirect(newUrl, true); } ViewData["Site"] = Site; base.OnActionExecuting(filterContext); } public Site Site { get { return _siteProvider.GetCurrentSite(); } } The Site object is returned from a Provider named siteProvider, this does two checks, once against a database containing a list of all available subdomains, then if that fails to find a valid subdomain, or valid domain name, searches a memory cache of reserved domains, if that doesn't hit then returns a baseUrl where all invalid domains are redirected. locally this worked when I added the true to Response.Redirect, assuming a halting of the current execution and restarting the execution on the browser redirect. What I have found in the stack trace is that the error is thrown on the second attempt to access the database. #region ISiteProvider Members public void Initialise(string[] host, string basehost) { if (host[0].Contains(basehost)) host = host[0].Split('.'); Site getSite = GetSites().WithDomain(host[0]); if (getSite == null) { sites.TryGetValue(host[0], out getSite); } _site = getSite; } public Site GetCurrentSite() { return _site; } public IQueryable<Site> GetSites() { return from p in _repository.groupDomains select new Site { Host = p.domainName, GroupGuid = (Guid)p.groupGuid, IsSubDomain = p.isSubdomain }; } #endregion The Linq query ^^^ is hit first, with a filter of WithDomain, the error isn't thrown till the WithDomain filter is attempted. In summary: The error is hit after the page is redirected, so the first iteration is executing as expected (so permissions on the database are correct, user profiles etc) shortly after the redirect when it filters the database query for the possible domain/subdomain of current redirected page, it errors out.

  • AngularJS: download pdf file from the server

    - by Bartosz Bialecki
    I want to download a pdf file from the web server using $http. I use this code which works great, my file only is save as a html file, but when I open it it is opened as pdf but in the browser. I tested it on Chrome 36, Firefox 31 and Opera 23. This is my angularjs code (based on this code): UserService.downloadInvoice(hash).success(function (data, status, headers) { var filename, octetStreamMime = "application/octet-stream", contentType; // Get the headers headers = headers(); if (!filename) { filename = headers["x-filename"] || 'invoice.pdf'; } // Determine the content type from the header or default to "application/octet-stream" contentType = headers["content-type"] || octetStreamMime; if (navigator.msSaveBlob) { var blob = new Blob([data], { type: contentType }); navigator.msSaveBlob(blob, filename); } else { var urlCreator = window.URL || window.webkitURL || window.mozURL || window.msURL; if (urlCreator) { // Try to use a download link var link = document.createElement("a"); if ("download" in link) { // Prepare a blob URL var blob = new Blob([data], { type: contentType }); var url = urlCreator.createObjectURL(blob); $window.saveAs(blob, filename); return; link.setAttribute("href", url); link.setAttribute("download", filename); // Simulate clicking the download link var event = document.createEvent('MouseEvents'); event.initMouseEvent('click', true, true, window, 1, 0, 0, 0, 0, false, false, false, false, 0, null); link.dispatchEvent(event); } else { // Prepare a blob URL // Use application/octet-stream when using window.location to force download var blob = new Blob([data], { type: octetStreamMime }); var url = urlCreator.createObjectURL(blob); $window.location = url; } } } }).error(function (response) { $log.debug(response); }); On my server I use Laravel and this is my response: $headers = array( 'Content-Type' => $contentType, 'Content-Length' => strlen($data), 'Content-Disposition' => $contentDisposition ); return Response::make($data, 200, $headers); where $contentType is application/pdf and $contentDisposition is attachment; filename=" . basename($fileName) . '"' $filename - e.g. 59005-57123123.PDF My response headers: Cache-Control:no-cache Connection:Keep-Alive Content-Disposition:attachment; filename="159005-57123123.PDF" Content-Length:249403 Content-Type:application/pdf Date:Mon, 25 Aug 2014 15:56:43 GMT Keep-Alive:timeout=3, max=1 What am I doing wrong?
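
    One thing worth checking, as a hedged sketch (the URL is a placeholder for whatever downloadInvoice() calls, and saveAs is the FileSaver.js function already referenced via $window.saveAs): unless $http is told to return binary data, the PDF bytes are decoded as text before they reach the Blob, which commonly yields a broken or oddly-typed file. Passing responseType: 'arraybuffer' through to $http avoids that.

        $http.get('/invoices/' + hash, { responseType: 'arraybuffer' })
            .success(function (data, status, headers) {
                var type = headers('content-type') || 'application/pdf';
                var name = headers('x-filename') || 'invoice.pdf';
                saveAs(new Blob([data], { type: type }), name);   // FileSaver.js
            });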

  • Call phpexcel from joomla

    - by Oscar Calderon
    I have a problem with PHPExcel and Joomla. I'm developing some filter forms to load Excel reports, so I used the PHPExcel library to do this. Right now I have only one report and it works fine, but after that I uploaded it inside Joomla using a PHP pages component that allows me to put PHP files inside Joomla and call them. When I put them there, I changed a little bit the form that calls the PHP that generates the Excel report; I call the PHP using a link like this: h**p://www.whiblix.com/index.php?option=com_php&Itemid=24 That is, calling it from Joomla, not calling the PHP directly. If I wanted to call the PHP directly I could use this path: h**p://www.whiblix.com/components/com_php/files/repImportaciones.php What's the problem? The problem is, when I call the PHP that generates the Excel through Joomla, the Excel that is downloaded is corrupt and only shows symbols in one cell when I open it. But if I call the PHP directly, the report is generated fine. I could call the PHP directly; the problem is that if I call it directly I can't use this line of code: defined( '_JEXEC' ) or die( 'Restricted access' ); That line is there to deny direct access to the PHP file, so calling it directly doesn't work because of that security check. Where's the problem? This is the PHP code that generates the report (omitting the code that generates the rows and cells): <?php //defined( '_JEXEC' ) or die( 'Restricted access' ); /** Error reporting */ error_reporting(E_ALL); date_default_timezone_set('Europe/London'); require_once 'Classes/PHPExcel.php'; // Create new PHPExcel object $objPHPExcel = new PHPExcel(); // Set properties $objPHPExcel->getProperties()->setCreator("Maarten Balliauw") ->setLastModifiedBy("Maarten Balliauw") ->setTitle("Office 2007 XLSX Test Document") ->setSubject("Office 2007 XLSX Test Document") ->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.") ->setKeywords("office 2007 openxml php") ->setCategory("Test result file"); // Rename sheet $objPHPExcel->getActiveSheet()->setTitle('Reporte de Importaciones'); // Set active sheet index to the first sheet, so Excel opens this as the first sheet $objPHPExcel->setActiveSheetIndex(0); // Redirect output to a client's web browser (Excel5) header('Content-Type: application/vnd.ms-excel'); header('Content-Disposition: attachment;filename="repPrueba.xls"'); header('Cache-Control: max-age=0'); $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel5'); $objWriter->save('php://output'); exit;

  • Best practices managing JavaScript on a single-page app

    - by seanmonstar
    With a single page app, where I change the hash and load and change only the content of the page, I'm trying to decide on how to manage the JavaScript that each "page" might need. I've already got a History module monitoring the location hash which could look like domain.com/#/company/about, and a Page class that will use XHR to get the content and insert it into the content area. function onHashChange(hash) { var skipCache = false; if(hash in noCacheList) { skipCache = true; } new Page(hash, skipCache).insert(); } // Page.js var _pageCache = {}; function Page(url, skipCache) { if(!skipCache && (url in _pageCache)) { return _pageCache[url]; } this.url = url; this.load(); } The cache should let pages that have already been loaded skip the XHR. I also am storing the content into a documentFragment, and then pulling the current content out of the document when I insert the new Page, so I the browser will only have to build the DOM for the fragment once. Skipping the cache could be desired if the page has time sensitive data. Here's what I need help deciding on: It's very likely that any of the pages that get loaded will have some of their own JavaScript to control the page. Like if the page will use Tabs, needs a slide show, has some sort of animation, has an ajax form, or what-have-you. What exactly is the best way to go around loading that JavaScript into the page? Include the script tags in the documentFragment I get back from the XHR? What if I need to skip the cache, and re-download the fragment. I feel the exact same JavaScript being called a second time might cause conflicts, like redeclaring the same variables. Would the better way be to attach the scripts to the head when grabbing the new Page? That would require the original page know all the assets that every other page might need. And besides knowing the best way to include everything, won't I need to worry about memory management, and possible leaks of loading so many different JavaScript bits into a single page instance?
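
    Not a definitive answer, but a rough sketch of one common pattern (the module registry, paths and names are made up): load each page's script once, have it register an object with init/teardown hooks, and re-run only init() when a cached fragment is re-inserted, which sidesteps both the redeclaration worry and most of the leak worry.

        var pageModules = {};                        // cache of already-loaded page scripts

        function withPageScript(name, callback) {
            if (pageModules[name]) {                 // script already on the page: just re-init
                callback(pageModules[name]);
                return;
            }
            var script = document.createElement('script');
            script.src = '/js/pages/' + name + '.js';          // hypothetical path
            script.onload = function () {
                pageModules[name] = window.PageModules[name];  // the script registers itself here
                callback(pageModules[name]);
            };
            document.getElementsByTagName('head')[0].appendChild(script);
        }

        // after new Page(hash).insert():
        //   withPageScript('company-about', function (mod) { mod.init(); });
        // and before swapping a fragment out, call mod.teardown() to unbind its handlers.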

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong, it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and call to the web service and explain to the Operations guys how to run those unit tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relate to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download say a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.

  • flickr.photos.search strange behavior with nested xml response

    - by JohnIdol
    I am querying flickr with the following request: view-source:http://www.flickr.com/services/rest/?method=flickr.photos.search&api_key=MY_KEY&text=cork&per_page=12&&format=rest if I put the above url in the browser I get the following as I would expect: <rsp stat="ok"> <photos page="1" pages="22661" perpage="12" total="271924"> <photo id="4587565363" owner="46277999@N05" secret="717118838d" server="4029" farm="5" title="06052010(001)" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587466773" owner="25901874@N06" secret="32c5a1a57f" server="4002" farm="5" title="black wine cork 2 BW" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4588084730" owner="25901874@N06" secret="b27eef5635" server="4032" farm="5" title="black wine cork 2" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587457789" owner="43115275@N00" secret="ae0daa0ab6" server="4034" farm="5" title="Matthew" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587443837" owner="49967920@N07" secret="2b4c1a58de" server="4066" farm="5" title="Two car garage" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4588050906" owner="45827769@N06" secret="72de138f7e" server="4067" farm="5" title="Dabie Mountains, Central China" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587979300" owner="36531359@N00" secret="e5f8b30734" server="3299" farm="4" title="stroll" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587304769" owner="7904649@N02" secret="995062f550" server="4034" farm="5" title="IMG_4325" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587306827" owner="7904649@N02" secret="b92d7ff916" server="4050" farm="5" title="IMG_4329" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587311657" owner="7904649@N02" secret="bb1b34ccf8" server="4053" farm="5" title="IMG_4336" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587933110" owner="7904649@N02" secret="f733b1d482" server="3300" farm="4" title="IMG_4334" ispublic="1" isfriend="0" isfamily="0" /> <photo id="4587930650" owner="7904649@N02" secret="9bc202b3ed" server="4018" farm="5" title="IMG_4331" ispublic="1" isfriend="0" isfamily="0" /> </photos> </rsp> Now I've got something very simple PHP to query the exact same url, called from javascript through AJAX - it all works as I can see putting a breakpoint in my javascript function that handles the response: function HandleResponse(response) { document.getElementById('ResponseDiv').innerHTML = response; } Looking at response value in the snippet above I see what I expect (xml formed as above) - then the strangest thing happens: when I inspect the DOM instead of having photo elements all as children of photos they're all nested within each other. What amI missing - How is this possible in light of the fact that the response string is definitely formed as I would expect it to be?
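
    A hedged guess at the cause, with a small sketch (the proxy URL stands in for however the server-side call is made): if the REST string is dropped into the page via innerHTML or jQuery's default HTML handling, the HTML parser does not treat <photo ... /> as self-closing, so each photo swallows the ones after it. Asking for the response as XML keeps the elements as siblings.

        $.ajax({
            url: 'flickr_proxy.php?text=cork&per_page=12',   // hypothetical server-side proxy
            dataType: 'xml',                                  // parse as XML, not HTML
            success: function (xml) {
                $(xml).find('photo').each(function () {
                    console.log($(this).attr('id'), $(this).attr('title'));
                });
            }
        });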

  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any versions of .NET and so the installer needs to detect and remedy this if necessary. The way I've been testing this in Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X, it's only allowed from Mac OS X Server, which seeing as how I'm targeting an end product is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server and actively nukes any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"

  • Measure width() with jQuery after DOM refresh

    - by o_O Tync
    My script dynamically creates a <ul> width left-floating <li>s inside: it's a paginator. Afterwards, the script measures width of all <li>s and summs them up. The problem is that after the nodes are injected into the document — the browser refreshed DOM and applies CSS styles which takes a while. It has a negative effect on my script: when these operations are not complete before I measure the width — my script gets a wrong value. If I perform the measure in a second — everything is ok. The thing I'm looking for is a way to detect the moment when the <ul> is fully drawn, styles applied and the width has stabilizes. Or at least a way to detect every dimensions changes. Of course I can use setTimeout(..., 100) but it's ugly and I guess — not a solution at all. If there's a way to detect width stabilization — I would do the measuring right after it to get the correct values. HTML code generated by the DOM <div> <ul> <li><a href="...">1</a></li> <li><a href="...">2</a></li> .... </ul> </div> P.S. Why I need this. My paginator's left-floating <li> items tend to move to the next line when the <ul> tries to become wider than the page itself. Even though most of <li>s are invisible because of parent <div>'s width restriction: div { width: 500px; overflow: hidden; } div ul { width: 100%; white-space: nowrap; } div ul li { display: block; float: left; } they still go down unless I specify the actual summed width of the <ul> with the script.
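
    For what it's worth, a minimal sketch of a different angle (the selector is a placeholder): rather than waiting for the layout to settle, total the <li> widths as soon as the nodes are in the document, since reading outerWidth/offsetWidth forces layout synchronously, and give the <ul> that explicit width so the floats never wrap. It does assume the stylesheets themselves have finished loading, e.g. the call is made from $(document).ready.

        function fixPaginatorWidth($ul) {
            var total = 0;
            $ul.children('li').each(function () {
                total += $(this).outerWidth(true);   // width + padding + border + margin
            });
            $ul.width(total);                        // explicit width stops the floats wrapping
        }

        // call right after the paginator has been injected:
        //   fixPaginatorWidth($('div > ul'));       // selector is a placeholder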

  • invite friends in a dialog in a Facebook application

    - by Shani1351
    I'm trying to create a Facebook application that displays a friend invite dialog within the application using Facebook's Javascript API (FB.ui). To do that I followed this tutorial I have two problems : The action url I've put in the request-form is "http://apps.facebook.com/appname/post_invite.php" but I see that the iframe source after the post is "http://mydomain.com/post_invite.php" and when this iframe tries to do : parent.closeInviteWidget(); I get an error saying : "Permission denied for < http: //mydomain.com (document.domain has not been set) to get property Window.closeInviteWidget from < http:// apps.facebook.com (document.domain=< http:// facebook.com)." The skip button inside the request-form opens the action url in a new window (new browser tab) and not post to itself like the invite button. How can I fix those problems? -------------------- UPDATE : -------------------------------- I've tried to do what ifaour said and changed the code to : function inviteFriends(user_name, category_id, category_name) { url = appBaseUrl + "/index.php?category_id=" + category_id; req = "<fb:req-choice url='" + url + "' label='Authorize My Application' />"; content = user_name + " opened a new category called " + category_name + ". " + req; action = 'post_invite.php'; fbmi_text = '<fb:request-form action="' + action + '" target="_self" method="post" invite="true" type="Invite" content="' + content + '" <fb:multi-friend-selector showborder="false" actiontext="Invite yor friends" email_invite="false" import_external_friends="false" /> </fb:request-form>'; FB.ui({ method:'fbml.dialog', width:'750px', fbml:fbmi_text }); } When I use FireBug and look at the invite form it looks like this: <form id="req_form_4d20682f73ddb6e71722794" content="I've opened a new category called dsfsd. <fb:req-choice url='http://apps.facebook.com/appname/index.php?category_id=60' label='Authorize My Application' /> type="Invite" invite="true" method="post" target="_self" action="http://apps.facebook.com/appname/post_invite.php"> ... </form> But I still get the same error : Permission denied for <http://mydomain.com> (document.domain has not been set) to get property Window.closeInviteWidget from <http://apps.facebook.com> (document.domain=<http://facebook.com>)...

  • Making a JQuery tooltip retrieve a new value every time the mouse moves.

    - by Micheal Smith
    I am developing an application that makes use of a tooltip that should display a different value as the user moves the mouse. The user mouses over a table cell and the application then generates a number; the further right the cursor moves in the cell, the higher the value gets. I have created a tooltip, and when the cursor mouses over the cell it does indeed show the correct value. But when I move the mouse, it does not show the new value, just the old one. I need to know how to make it update every time the mouse moves or the value of a variable changes. Any ideas for the problem? <table> <tr id="mon_Section"> <td id="day_Title">Monday</td> <td id="mon_Row"></td> </tr> </table> Below is the document.ready function that calls my function: $(document).ready(function() { $("#mon_Row").mousemove(calculate_Time); }); Below is the function: <script type="text/javascript"> var mon_Pos = 0; var hour = 0; var minute = 0; var orig = 0; var myxpos = 0; function calculate_Time (event) { myxpos = event.pageX; myxpos = myxpos-194; if(myxpos<60) { orig = myxpos; $('#mon_Row').attr("title", orig); } if (myxpos>=60 && myxpos<120) { orig=myxpos; $('#mon_Row').attr("title", orig); } if (myxpos>=120 && myxpos<180) { orig=myxpos; $('#mon_Row').attr("title", orig); Inside the function is the code to generate the tooltip: $('#mon_Row').each(function() { $(this).qtip( { content: { text: false }, position: 'topRight', hide: { fixed: true // Make it fixed so it can be hovered over }, style: { padding: '5px 15px', // Give it some extra padding name: 'dark' // And style it with the preset dark theme } }); }); I know that a new value is being assigned to the cell's title attribute because it shows up in the standard small tooltip that the browser displays. The jQuery tooltip will not grab the new value and display it, only the variable's initial value from when it was called.
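
    A hedged sketch of one way to keep the bubble in sync (it assumes qTip 1.x, whose api object exposes updateContent()): create the tooltip once, then push the new number into it on every mousemove instead of only rewriting the title attribute, which qTip reads a single time when the tooltip is built.

        $(document).ready(function () {
            $("#mon_Row").qtip({                      // build the tooltip once, not per mousemove
                content: { text: "0" },
                position: 'topRight',
                hide: { fixed: true },
                style: { padding: '5px 15px', name: 'dark' }
            });

            $("#mon_Row").mousemove(function (event) {
                var value = event.pageX - 194;        // same offset as calculate_Time()
                $(this).attr("title", value);         // keep the native tooltip in sync
                var api = $(this).qtip("api");
                if (api && api.updateContent) {
                    api.updateContent(String(value)); // refresh what the qTip shows
                }
            });
        });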

  • PHP miniwebsever file download

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; function openfile() { $filename = "a.pl"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $array = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename); return $array; } $send = openfile(); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello, I am snikolov. I am creating a mini web server with PHP and I would like to know how I can send the client a file to download with his browser, such as Firefox or Internet Explorer. I am sending a file to the user to download via sockets, but the client is not getting the filename and the information to download. Can you please help me here? If I declare the function again I get this error on my server: Fatal error: Cannot redeclare openfile() (previously declared in C:\Users\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on line 29. If possible, I would also like to know whether the web server can show how much bandwidth the user requests via sockets; Perl has the same option and is more hardcore than PHP, but I don't understand much about Perl. I even saw that a mini web server can show how much the client pulls from the server. Would it be possible for you to assist me with this code? I much appreciate it, thank you guys.

  • Problem with Executing Mysql stored procedure

    - by karthik
    The stored procedure builds without any problem. The purpose of this is to take backup of selected tables to a script file. when the SP returns a value {Insert statements}. I am using the below MySql stored procedure, created by SQLWAYS [Tool to convert MsSql to MySql]. The actual MsSql SP is from http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx When i execute the SP in MySql Query Browser, It says "Unknown column 'tbl_users' in 'field list'" What would be the problem ? Because there was no error when i build-ed this Converted MySql SP. Help.. DELIMITER $$ DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`(v_tableName VARCHAR(100)) SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType NATIONAL VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName NATIONAL VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM `columns` WHERE table_name = v_tableName; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; END $$ DELIMITER ;

  • dijit/form/Select broken in Internet Explorer using Esri Javascript 3.7

    - by disuse
    After developing a web map app in Firefox, I tested my code in Internet Explorer (company standard) to discover that the dijit/form/Select is misbehaving using the latest Esri JavaScript v3.7. The issue I am seeing is that the Select will not update/change from the first option in the list when using v3.7. If I bump the version down to 3.6, it works as expected. I've tried IE browser modes from 7 to 10 and am experiencing the same behavior between all of them. Can someone confirm they are experiencing the same thing? Example in 3.7 - http://jsbin.com/aVIsApO/1/edit Example in 3.6 - http://jsbin.com/odIxETu/7/edit Codeblock var url = "http://services.arcgis.com/V6ZHFr6zdgNZuVG0/ArcGIS/rest/services/Street_Trees/FeatureServer/0"; var frmTrees; require([ "esri/tasks/query", "esri/tasks/QueryTask", "dojo/dom-construct", "dijit/form/Select", "dojo/parser", "dijit/registry", "dojo/on", "dojo/ready", "dojo/_base/connect", "dojo/domReady!" ], function( Query, QueryTask, domConstruct, Select, parser, registry, on, ready, connect ) { ready(function() { frmTrees = registry.byId("trees"); var qt = new QueryTask(url); var query = new Query(); query.where = "FID < 25"; query.orderByFields = ["qSpecies"]; query.returnGeometry = false; query.outFields = ["qSpecies", "TreeID"]; query.groupByFieldsForStatistics = ["qSpecies"]; //query.returnDistinctValues = true; qt.execute(query, function(results) { //var frm_domain_area = dom.byId("domain_area"); var testVals = {}; for (var i = 0; i < results.features.length; i++) { var id = results.features[i].attributes.TreeID; var desc = results.features[i].attributes.qSpecies; if (!testVals[id]) { testVals[id] = true; var selectElem = domConstruct.create("option",{ label: desc + " (" + id + ")", value: id }); frmTrees.addOption(selectElem); } } }); frmTrees.on("change", function() { console.debug(frmTrees.get("value")); }); }); });
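
    Not a confirmed fix, but one workaround worth trying, sketched below: hand addOption() a plain { label, value } object instead of an <option> node built with domConstruct, since dijit/form/Select manages its own dropdown markup and hand-made option elements have been unreliable in IE.

        // inside the qt.execute(...) callback, in place of the domConstruct code
        for (var i = 0; i < results.features.length; i++) {
            var id = results.features[i].attributes.TreeID;
            var desc = results.features[i].attributes.qSpecies;
            if (!testVals[id]) {
                testVals[id] = true;
                frmTrees.addOption({
                    label: desc + " (" + id + ")",
                    value: String(id)        // dijit option values are strings
                });
            }
        }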

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail: The most common memory leaks for web applications involve circular references between the JavaScript script engine and the browsers' C++ objects' implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure). It lists two examples of circular reference patterns: DOM element → event handler → closure scope → DOM DOM element → via expando → intermediary object → DOM element However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it: The DOM element references the event handler. The event handler references the DOM (either directly or indirectly). In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified: What browsers are effected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected. Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak? // Create an expando that references to its own element. var elem = document.getElementById('foo'); elem.myself = elem; // Create an event handler that references its own element. var elem = document.getElementById('foo'); elem.onclick = function() { elem.style.display = 'none'; }; If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
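
    To make the second snippet concrete, a minimal sketch of the old hand-rolled mitigation (as far as I understand it, both snippets can form such a cycle in the affected engines): break the references on unload so neither side keeps the other alive.

        var elem = document.getElementById('foo');
        elem.onclick = function () {
            elem.style.display = 'none';   // closure -> elem -> handler -> closure: a cycle
        };

        window.onunload = function () {
            elem.onclick = null;           // break the cycle before the page is torn down
            elem.myself = null;            // same idea for expando self-references
            elem = null;
        };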

  • jQuery tablesorter - loss of functionality after AJAX call

    - by Nick
    I have recently been experimenting with the tablesorter plugin for jQuery. I have successfully got it up and running in once instance, and am very impressed. However, I have tried to apply the tablesorter to a different table, only to encounter some difficulties... Basically the table causing a problem has a <ul> above it which acts as a set of tabs for the table. so if you click one of these tabs, an AJAX call is made and the table is repopulated with the rows relevant to the specific tab clicked. When the page initially loads (i.e. before a tab has been clicked) the tablesorter functionality works exactly as expected. But when a tab is clicked and the table repopulated, the functionality disappears, rendering it without the sortable feature. Even if you go back to the original tab, after clicking another, the functionality does not return - the only way to do so is a physical refresh of the page in the browser. I have seen a solution which seems similar to my problem on this site, and someone recommends using the jQuery plugin, livequery. I have tried this but to no avail :-( If someone has any suggestions I would be most appreciative. I can post code snippets if it would help (though I know the instantiation code for tablesorter is fine as it works on tables with no tabs - so it's definitely not that!) EDIT: As requested, here are some code snippets: The table being sorted is <table id="#sortableTable#">..</table>, the instantiation code for tablesorter I am using is: $(document).ready(function() { $("#sortableTable").tablesorter( { headers: //disable any headers not worthy of sorting! { 0: { sorter: false }, 5: { sorter: false } }, sortMultiSortKey: 'ctrlKey', debug:true, widgets: ['zebra'] }); }); And I tried to rig up livequery as follows: $("#sortableTable").livequery(function(){ $(this).tablesorter(); }); This has not helped though... I am not sure whether I should use the id of the table with livequery as it is the click on the <ul> I should be responding to, which is of course not part of the table itself. I have tried a number of variations in the hope that one of them will help, but to no avail :-(
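
    A hedged sketch of the usual remedy (the endpoint and selectors are assumptions): after the tab's AJAX call swaps the rows in, tell tablesorter the data changed by triggering its "update" event, or, if the whole <table> element gets replaced, run the original tablesorter() setup again on the new node; the binding made at page load only knows about the rows that existed then.

        $(".tabs li a").click(function (e) {
            e.preventDefault();
            $.get("tab-data.php", { tab: $(this).attr("href") }, function (rows) {
                $("#sortableTable tbody").html(rows);      // repopulate the body
                $("#sortableTable").trigger("update");     // refresh tablesorter's row cache
                // if the entire <table> was replaced instead, re-run:
                //   $("#sortableTable").tablesorter({ ...original options... });
            });
        });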

  • Moving from Windows to Ubuntu.

    - by djzmo
    Hello there, I used to program in Windows with Microsoft Visual C++ and I need to make some of my portable programs (written in portable C++) to be cross-platform, or at least I can release a working version of my program for both Linux and Windows. I am total newcomer in Linux application development (and rarely use the OS itself). So, today, I installed Ubuntu 10.04 LTS (through Wubi) and equipped Code::Blocks with the g++ compiler as my main weapon. Then I compiled my very first Hello World linux program, and I confused about the output program. I can run my program through the "Build and Run" menu option in Code::Blocks, but when I tried to launch the compiled application externally through a File Browser (in /media/MyNTFSPartition/MyProject/bin/Release; yes, I saved it in my NTFS partition), the program didn't show up. Why? I ran out of idea. I need to change my Windows and Microsoft Visual Studio mindset to Linux and Code::Blocks mindset. So I came up with these questions: How can I execute my compiled linux programs externally (outside IDE)? In Windows, I simply run the generated executable (.exe) file How can I distribute my linux application? In Windows, I simply distribute the executable files with the corresponding DLL files (if any) What is the equivalent of LIBs (static library) and DLLs (dynamic library) in linux and how to use them? In Windows/Visual Studio, I simply add the required libraries to the Additional Dependencies in the Project Settings, and my program will automatically link with the required static library(-ies)/DLLs. Is it possible to use the "binary form" of a C++ library (if provided) so that I wouldn't need to recompile the entire library source code? In Windows, yes. Sometimes precompiled *.lib files are provided. If I want to create a wxWidgets application in Linux, which package should I pick for Ubuntu? wxGTK or wxX11? Can I run wxGTK program under X11? In Windows, I use wxMSW, Of course. If question no. 4 is answered possible, are precompiled wxX11/wxGTK library exists out there? Haven't tried deep google search. In Windows, there is a project called "wxPack" (http://wxpack.sourceforge.net/) that saves a lot of my time. Sorry for asking many questions, but I am really confused on these linux development fundamentals. Any kind of help would be appreciated =) Thanks.

  • Cannot Call WordPress Plugin Files Under wp-content

    - by Volomike
    I have a client who has many blog customers. Each of these WordPress blogs calls a plugin that provides a product link. The way that link is composed looks like this: {website}/wp-content/plugins/prodx/product?id=432320 This works fine on all blogs except two. On those two, when you try to call the URL, you get a 404. So, I disabled all plugins except prodx and reverted the theme to default (Kubrick), thinking perhaps a plugin intercept with add_action() API was doing this, such as intercepting URLs and redirecting them. However, this did not help. So, I upgraded the WordPress to the latest version. Again, didn't fix. So, I checked permissions, comparing with a blog that worked just fine. Again, didn't fix. So I replaced the .htaccess, using one from a working blog. Again, didn't fix. So I replaced all the files using some from a working blog that was identical to this one, and then restored the wp-config.php file back so that it talked to the right blog database. Again, didn't fix. Again I checked permissions meticulously, comparing to a perfectly working blog. Again, didn't fix. So, I created a test.php that looks like so: <?php print_r($_GET); echo "hello world"; I then copied it into another plugin folder and used my browser to get to it -- again, 404. So I copied it into the root of wp-content/plugins and tried to call it there -- again, 404. So I copied it into wp-content -- again, 404. Last, I copied it into the root of the WordPress blog website, and this time, it worked! Doesn't make sense. I started to think that perhaps something was going on with /etc/httpd/conf/httpd.conf for this customer, but the only thing I saw different in their for this customer was the IP address was different than the customer's blog that worked. Each customer gets their own IP in this environment my client has built. My client sysop is baffled too. What do you think is going on? Is there something wrong in the WP database for this customer? Is there something wrong in httpd.conf?

  • jQuery post request is not sent until first post request is compleated

    - by Champ
    I have a function which have a long execution time. public void updateCampaign() { context.Session[processId] = "0|Fetching Lead360 Campaign"; Lead360 objLead360 = new Lead360(); string campaignXML = objLead360.getCampaigns(); string todayDate = DateTime.Now.ToString("dd-MMMM-yyyy"); context.Session[processId] = "1|Creating File for Lead360 Campaign on " + todayDate; string fileName = HttpContext.Current.Server.MapPath("campaigns") + todayDate + ".xml"; objLead360.createFile(fileName, campaignXML); context.Session[processId] = "2|Reading The latest Lead360 Campaign"; string file = File.ReadAllText(fileName); context.Session[processId] = "3|Updating Lead360 Campaign"; string updateStatus = objLead360.updateCampaign(fileName); string[] statusArr = updateStatus.Split('|'); context.Session[processId] = "99|" + statusArr[0] + " New Inserted , " + statusArr[1] + " Updated , With " + statusArr[2] + " Error , "; } So to track the Progress of the function I wrote a another function public void getProgress() { if (context.Session[processId] == null) { string json = "{\"error\":true}"; Response.Write(json); Response.End(); }else{ string[] status = context.Session[processId].ToString().Split('|'); if (status[0] == "99") context.Session.Remove(processId); string json = "{\"error\":false,\"statuscode\":" + status[0] + ",\"statusmsz\":\"" + status[1] + "\" }"; Response.Write(json); Response.End(); } } To call this by jQuery post request is used reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=updatecampaign"; $.post(reqUrl); setTimeout(getProgress, 500); get getProgress is : function getProgress() { reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=getProgress"; $.post(reqUrl, function (response) { var progress = jQuery.parseJSON(response); console.log(progress) if (progress.error) { $("#fetchedCampaign .waitingMsz").html("Some error occured. Please try again later."); $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_error.jpg) no-repeat center 6px" }); return; } if (progress.statuscode == 99) { $("#fetchedCampaign .waitingMsz").html("Update Status :"+ progress.statusmsz ); $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_loded.jpg) no-repeat center 6px" }); return; } $("#fetchedCampaign .waitingMsz").html("Please Wait... " + progress.statusmsz); setTimeout(getProgress, 500); }); } But the problem is that I can't see the intermediate message. Only the last message is been displayed after a long lime of ajax loading message Also on the browser console I just see that after a long time first requested is completed and after that the second request is completed. but there should be for getProgress ? I have checked jquery.doc and it says that $post is an asynchronous request. Can anyone please explain what is wrong with the code or logic?

  • Android pluginable application

    - by Alxandr
    I've been trying to create an Android application for the last couple of weeks, and mostly everything has worked out great, but there is one thing that I was wondering about, and that is pluginability through the use of intents. What I'm trying to create is basically a comic-reader. As of the version I use now, I open the application and get a list of comics that are my favourites, then I enter one to get a detailed view, and finally I enter a page. This is managed through 3 activities: List, Details and Page. However, as of now the application can only read comics from one source (a specialized XML feed coming from my server), and I was hoping to be able to expand this a little (also, the page-activity and some other stuff needs to be cleaned up, so I'm thinking about remaking it from scratch and just taking the first go as a learning round). I came up with an idea which I think sounds great, but I don't know if it's possible, so this is what I'm thinking about: The user enters the application and gets an (at first empty) list of comics. The user hits a button to find comics; this launches an intent that says something like "find comic". This should cause the system to display all matching activities, which would make it possible to provide different comic-providers through different applications. Another activity kicks in and might display some options to the user (for instance a file-browser), or might not (in the case of an XML feed, which should just load). The list is returned to the first activity and displayed to the user. The second (find) activity is closed. The user picks a comic from the list. This should open a details-activity. The details-activity should receive a key which corresponds to the comic selected; this should be unique amongst the comic-providers. The details-view should get its data through some kind of content-provider, or an activity (whichever is most suited, if one of them is). The user can select a page. This should be the same routine as step 5. My question is: is this possible in the Android system, and if it is, is it a bad idea? And also, is there any better way to achieve more or less the same thing?
