Search Results

Search found 16396 results on 656 pages for 'browser extensions'.

Page 565 of 656

  • HttpWebRequest has an empty response when requesting a search from Bing

    - by Jarrod Maxwell
    I have the following code that sends an HttpWebRequest to Bing. When I request the URL below, it returns what appears to be an empty response when it should be returning a list of results.

        var response = string.Empty;
        var httpWebRequest = WebRequest.Create("http://www.bing.com/search?q=stackoverflow&count=100") as HttpWebRequest;
        httpWebRequest.Method = WebRequestMethods.Http.Get;
        httpWebRequest.Headers.Add("Accept-Language", "en-US");
        httpWebRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Win32)";
        httpWebRequest.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
        using (var httpWebResponse = httpWebRequest.GetResponse() as HttpWebResponse)
        {
            Stream stream = null;
            using (stream = httpWebResponse.GetResponseStream())
            {
                if (httpWebResponse.ContentEncoding.ToLower().Contains("gzip"))
                    stream = new GZipStream(stream, CompressionMode.Decompress);
                else if (httpWebResponse.ContentEncoding.ToLower().Contains("deflate"))
                    stream = new DeflateStream(stream, CompressionMode.Decompress);
                var streamReader = new StreamReader(stream, Encoding.UTF8);
                response = streamReader.ReadToEnd();
            }
        }

    It's pretty standard code for requesting and receiving a web page. Any ideas why the response is empty? Thanks in advance.

    EDIT: I had left a query string parameter off the URL; I also had &count=100, which I have now corrected above. It seems to work for count values of 50 and below but returns nothing when larger. The same URL works fine in the browser, just not for this web request. It makes me think the response is large and HttpWebResponse is not handling that the way I have it set up. Just a guess though.
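
    For comparison, a minimal sketch of the same request that lets the framework negotiate and handle gzip/deflate decompression instead of wrapping the stream by hand; it only illustrates the standard HttpWebRequest.AutomaticDecompression property and is not a confirmed fix for the count=100 case:

        // Sketch: let .NET handle Accept-Encoding and decompression automatically.
        var request = (HttpWebRequest)WebRequest.Create("http://www.bing.com/search?q=stackoverflow&count=100");
        request.Method = WebRequestMethods.Http.Get;
        request.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Win32)";
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (var webResponse = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(webResponse.GetResponseStream(), Encoding.UTF8))
        {
            string body = reader.ReadToEnd(); // should contain the decoded HTML
        }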

    Read the article

  • Need assistance with Kohana 3 and a catch-all route turning into a 404 error

    - by alex
    Based on this documentation, I've implemented a catch-all route which routes to an error page. Here is the last route in my bootstrap.php:

        Route::set('default', '<path>', array('path' => '.+'))
            ->defaults(array(
                'controller' => 'errors',
                'action'     => '404',
            ));

    However, I keep getting this exception thrown when I try to go to a non-existent page:

        Kohana_Exception [ 0 ]: Required route parameter not passed: path

    If I make the <path> segment optional (i.e. wrap it in parentheses) then it just seems to load the home route, which is:

        Route::set('home', '')
            ->defaults(array(
                'controller' => 'home',
                'action'     => 'index',
            ));

    The home route is defined first. I execute my main request like so:

        $request = Request::instance();
        try
        {
            // Attempt to execute the response
            $request->execute();
        }
        catch (Exception $e)
        {
            if (Kohana::$environment === Kohana::DEVELOPMENT)
                throw $e;

            // Log the error
            Kohana::$log->add(Kohana::ERROR, Kohana::exception_text($e));

            // Create a 404 response
            $request->status = 404;
            $request->response = Request::factory(Route::get('default')->uri())->execute();
        }
        $request->send_headers();
        echo $request->response;

    This means that the 404 header is sent to the browser, but I assumed that by sending the request to the catch-all route it should show the 404 error set up in my errors controller.

        <?php defined('SYSPATH') or die('No direct script access.');

        class Controller_Errors extends Controller_Base
        {
            public function before()
            {
                parent::before();
            }

            public function action_404()
            {
                $this->bodyClass[] = '404';
                $this->internalView = View::factory('internal/not_found');
                $longTitle = 'Page Not Found';
                $this->titlePrefix = $longTitle;
            }
        }

    Why won't it show my 404 error page?
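
    One detail worth noting: the exception comes from building the URI, not from matching it, because Route::get('default')->uri() is called without the required path parameter. A minimal sketch of passing it explicitly (the '404' value here is just a placeholder path, not from the original question):

        // Sketch: supply the required <path> parameter when building the catch-all URI.
        $request->response = Request::factory(
            Route::get('default')->uri(array('path' => '404'))
        )->execute();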

    Read the article

  • Workflow Service host not publishing Metadata.

    - by jlafay
    Still hacking away with extreme persistence at WF services hosted outside of IIS. I'm now having issues with my WF service publishing metadata. Can someone take a look at my code and see what step I'm missing? The few tutorials that I've stumbled across for my scenario make it look so easy, and I know it is; I'm just missing something ridiculously simple. Here's my current trial code:

        const string serviceUri = "http://localhost:9009/Subscribe";
        WorkflowServiceHost host = new WorkflowServiceHost(new Subscribe(), new Uri(serviceUri));
        SubscriberSvcHost.AddDefaultEndpoints();
        SubscriberSvcHost.Open();

    Subscribe() is an activity that is coded in a XAML file and contains simple Receive and SendReply activities to test out my hosted workflow service. It is NOT a .xamlx (WF service) file. It seems like this should be simple enough to work, but when I start the application and the service fires, I get this message in my browser when navigating to the URI: "Metadata publishing for this service is currently disabled." Shouldn't adding the default endpoints provide enough metadata and description to satisfy the service init and then go into its wait-for-message state?
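
    For what it's worth, default endpoints alone don't switch on the HTTP metadata page in WCF; that normally takes a ServiceMetadataBehavior. A minimal sketch of adding it in code before opening the host (this assumes .NET 4's System.ServiceModel.Description namespace and uses the host variable from the snippet above):

        // Sketch: enable HTTP metadata publishing before opening the host.
        var metadataBehavior = new ServiceMetadataBehavior { HttpGetEnabled = true };
        host.Description.Behaviors.Add(metadataBehavior);
        host.AddDefaultEndpoints();
        host.Open();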

    Read the article

  • How do I create a C# .NET Web Service that posts items to a user's Facebook Wall?

    - by Jourdan
    I'm currently toying around with the Clarity .NET Facebook API but am finding certain situations with authentication to be kind of limiting. I keep going through the tutorials but always end up hitting a brick wall with what I want to do. Perhaps I just cannot do it? I want to make a Web Service that takes in the required credentials (API key, secret key, user ID (or session key?) and whatever else I would need), and then does various tasks: post to the user's wall, add events, etc.

    The problem I am having is this: the current documentation, examples and support provide a way to do this within the context of a web site. Within this context, the required "connect" popup can be initiated and allows the user to authenticate and connect the application. From that point on the web app can go on with its business and do what it needs to do. If I close the browser and come back to the page, I have to push the connect button again, except this time, since I was already logged into Facebook, I don't have to go through the whole connection process. But still... how do applications like TweetDeck get around this? They seemingly have you connect once, when you install their application, and you don't have to do it again. I would assume that this same idea would have to be applied towards making a web service because:

    1. You don't know what context the user is in when making the Web service call.
    2. The web service methods being called could be coming from a Windows Forms app, or code-behind in a workflow.

    Read the article

  • Sharepoint.OpenDocuments Control Compatible with Forms Authentication?

    - by Richard Collette
    We are using the Sharepoint.OpenDocuments.EditDocument2 ActiveX control and method. The method is being called from JavaScript in an IE6 client on a Windows XP SP3 machine (fully patched). The server is running IIS6 on Windows Server 2003 SP1. Fronting the IIS server is Tivoli Access Manager (TAM), which proxies access to the web applications sitting behind it. Similar to forms authentication, it creates a session cookie for authentication purposes that must be present for the HTTP request to reach the IIS server. In front of TAM is an F5/BigIP load balancer and SSL offloader, which enforces that incoming requests use the HTTPS protocol.

    What is happening is that HTTP requests issued by this control do not contain any of the session cookies that were present in the browser: it drops the ASP.NET session cookie, the ASP.NET forms authentication cookie and the TAM cookie. Because the TAM cookie is missing, the request is redirected to the TAM login page, which then shows up via HTML conversion in Word or Excel. The API documentation at http://msdn.microsoft.com/en-us/library/ms440037.aspx mentions nothing about security or appropriate usage scenarios for this control. Should these controls work in an ASP.NET forms authentication scenario, or are they only supported with Windows authentication? If forms authentication is supposed to function, how do we get the control to include the necessary session cookies in its requests?

    Read the article

  • How to query a Chinese address with the Google Maps API geocoder?

    - by Robert
    I'm following the demo code from the phpsqlgeocode.html article. In the database I inserted some Chinese addresses, which are UTF-8 encoded. I found that after urlencoding the Chinese address, the geocoder output is wrong. Take this one:

        http://maps.google.com.tw/maps/geo?output=csv&key=ABQIAAAAfG3KxFZXjEslq8VNxMBpKRR08snBovzCxLQZ9DWwpnzxH-ROPxSAS9Q36m-6OOy0qlwTL6Ht9qp87w&q=%3F%3F%3F%3F%3F%3F%3F%3F%3F132%3F

    The output (it can't be queried from PHP; it has to be tested as a browser URL) is:

        200,5,59.3266963,18.2733433

    The address is actually located in Taichung, Taiwan, but the result turns out to be in Sweden. Yet when I paste the Chinese address (such as ???????? ?131?56?58?60?) directly into the URL, the result turns out to be fine! So my question is: how do I make sure it sends out the original Chinese address? How do I prevent urlencode()? I found that taking urlencode() away doesn't change anything. (I've changed MAPS_HOST from maps.google.com to maps.google.com.tw.) (I'm sure my key is right, and other English-address geocoding works fine.) Thanks!!
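
    As an aside, the %3F sequences in that URL mean the address had already been turned into literal "?" characters before urlencode() ran (urlencoding real UTF-8 bytes produces sequences like %E5..., not %3F), which often points at the database connection charset rather than the encoding step. A minimal sketch under that assumption ($db, $key, $address and the markers table are placeholders, not from the original question):

        <?php
        // Sketch: force a UTF-8 connection so the address survives the round trip,
        // then urlencode the UTF-8 bytes as usual.
        $db = new mysqli('localhost', 'user', 'pass', 'dbname');
        $db->set_charset('utf8');

        $row = $db->query('SELECT address FROM markers LIMIT 1')->fetch_assoc();
        $address = $row['address'];

        $url = 'http://maps.google.com.tw/maps/geo?output=csv&key=' . $key
             . '&q=' . urlencode($address);   // should yield %E5%8F%B0... rather than %3F%3F...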

    Read the article

  • Why can't I debug from Visual Studio 2005 after installing IE8?

    - by tjrobinson
    I've just installed IE8 (final) and restarted. I can no longer debug Web Application Projects using Visual Studio 2005 on Windows Server 2003 Enterprise R2. I get the message "Internet Explorer cannot display the webpage" and then WebDev.WebServer.exe quits with no visible error message and nothing in the Event Viewer. Does anyone have any ideas?

    Things that haven't helped:
    - Adding localhost to trusted sites
    - Changing the port to 8080 or 80
    - Checking my hosts file (it's just got 127.0.0.1 localhost in it)

    Things that have helped a bit:
    - Running (not debugging) with CTRL-F5, which works fine (unless you need to debug)
    - Changing the default Visual Studio browser to Firefox, which allows me to debug

    My hosts file contains:

        # Copyright (c) 1993-1999 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host
        127.0.0.1       localhost

    Read the article

  • JavaScript: using 'this' in a global object

    - by Marco Demaio
    What does the 'this' keyword refer to when used in a global object? Let's say for instance we have:

        var SomeGlobalObject =
        {
            rendered: true,
            show: function()
            {
                /* I should use 'SomeGlobalObject.rendered' below, otherwise it won't work
                   when called from event scope. But it works when called from timer scope!!
                   How can this be? */
                if (this.rendered)
                    alert("hello");
            }
        }

    Now if we call, in an inline script in the HTML page:

        SomeGlobalObject.show();
        window.setTimeout("Msg.show()", 1000);

    everything works OK. But if we do something like

        AppendEvent(window, 'load', Msg.show);

    we get an error, because this.rendered is undefined when called from the event scope. Do you know why this happens? Could you explain whether there is a smarter way to do this without having to rewrite "SomeGlobalObject.someProperty" every time inside the SomeGlobalObject code? Thanks!

    AppendEvent is just a simple cross-browser function to append an event; the code is below, but it does not matter for answering the questions above.

        function AppendEvent(html_element, event_name, event_function)
        {
            if (html_element.attachEvent) //IE
                return html_element.attachEvent("on" + event_name, event_function);
            else if (html_element.addEventListener) //FF
                html_element.addEventListener(event_name, event_function, false);
        }
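
    The usual explanation is that 'this' is bound at call time, not at definition time: passing Msg.show hands over a bare function reference, so inside the event handler 'this' is the event target (or window), not the object. A minimal sketch of keeping the binding with a wrapper, using the names from the snippet above:

        // Sketch: wrap the call so 'this' stays SomeGlobalObject inside show().
        AppendEvent(window, 'load', function () {
            SomeGlobalObject.show();
        });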

    Read the article

  • Take screenshot with Selenium: WaitForPageToLoad does not wait long enough

    - by OregonGhost
    I'm trying to get screenshots from a web page with multiple browsers. Just experimenting with Selenium RC, I wrote code like this:

        var sel = new DefaultSelenium(server, 4444, target, url);
        sel.Start();
        sel.Open(url);
        sel.WaitForPageToLoad("30000");
        var imageString = sel.CaptureScreenshotToString();

    This basically works, but in most cases the screenshot is of a blank browser window, because the page is not yet ready for display. It kind of works if I add a sleep just after the WaitForPageToLoad, but that slows down the fast browsers and/or may be too short for the slower browsers (or under load). A typical solution for this seems to be to wait for the presence of a certain element. However, this is meant as a simple generic solution to get a screenshot of a local web page with as many browsers as possible (to test the layout), and I don't want to have to enter certain element names or whatever. It's a simple tool where you just enter the Selenium Server URL and the URL you want to test, and get the screenshots back. Any advice?
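
    One generic variation, sketched below, is to poll the browser for document.readyState after WaitForPageToLoad instead of sleeping a fixed time; GetEval is standard Selenium RC, but the readiness expression and the 10-second cap are assumptions, and this still won't cover late-loading images or scripts:

        // Sketch: wait (up to ~10s) until the DOM reports it is complete, then capture.
        sel.WaitForPageToLoad("30000");
        for (int i = 0; i < 20; i++)
        {
            var state = sel.GetEval("selenium.browserbot.getCurrentWindow().document.readyState");
            if (state == "complete")
                break;
            System.Threading.Thread.Sleep(500);
        }
        var imageString = sel.CaptureScreenshotToString();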

    Read the article

  • C++ resize a docked Qt QDockWidget programmatically?

    - by Zac
    I've just started working on a new C++/Qt project. It's going to be an MDI-based IDE with docked widgets for things like the file tree, object browser, compiler output, etc. One thing is bugging me so far though: I can't figure out how to programmatically make a QDockWidget smaller. For example, this snippet creates my bottom dock window, "Build Information":

        m_compilerOutput = new QTextEdit;
        m_compilerOutput->setReadOnly(true);
        dock = new QDockWidget(tr("Build Information"), this);
        dock->setWidget(m_compilerOutput);
        addDockWidget(Qt::BottomDockWidgetArea, dock);

    When launched, my program looks like this: http://yfrog.com/6ldreamidep (bear in mind the early stage of development). However, I want it to appear like this: http://yfrog.com/20dreamide2p. I can't seem to make this happen. The Qt reference on QDockWidget says this:

        Custom size hints, minimum and maximum sizes and size policies should be implemented
        in the child widget. QDockWidget will respect them, adjusting its own constraints to
        include the frame and title. Size constraints should not be set on the QDockWidget
        itself, because they change depending on whether it is docked.

    Now, this suggests that one way of doing this would be to subclass QTextEdit and override the sizeHint() method. However, I would prefer not to do that just for this purpose, nor have I tried it to find out whether it is a working solution. I have tried calling dock->resize(m_compilerOutput->width(), m_compilerOutput->minimumHeight()), and calling m_compilerOutput->setSizePolicy() with each of its options... nothing so far has affected the size. Like I said, I would prefer a simple solution in a few lines of code to having to create a subclass just to change sizeHint(). All suggestions are appreciated.
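
    For reference, a minimal sketch of the sizeHint() route that the quoted documentation points at; the class name and the 640x120 hint are made up for illustration, and it does mean a small subclass:

        // Sketch: give the child widget a smaller preferred size so the dock follows it.
        class CompilerOutputEdit : public QTextEdit
        {
        public:
            QSize sizeHint() const { return QSize(640, 120); }
        };

        // Used in place of the plain QTextEdit:
        m_compilerOutput = new CompilerOutputEdit;
        m_compilerOutput->setReadOnly(true);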

    Read the article

  • Microsoft Reporting Services 2005 and ReportViewer: ASP.NET Session Has Expired on load

    - by ThaKidd
    At my job, I have been tasked with fixing an error on our reporting server: "ASP.NET Session Has Expired". This error occurs when the Visual Studio ReportViewer 2005 control attempts to load a report. We are trying to serve this report to users hitting our Internet-exposed Windows 2003 server running IIS 6.0. The ReportViewer control is attempting to load the report from a second server running Microsoft SQL Server 2005 with Reporting Services. The SQL server is not exposed to the Internet.

    Here is the weird thing: this error never occurs on the development box. When the site is transferred to the production IIS server, the error starts to occur. It happens every time the report is first loaded; if the browser's refresh button is clicked 5-10 times, the report will finally load correctly. I have reproduced the same error on the latest version of Mozilla Firefox, IE 7, and IE 8. The report only takes 10-20 seconds to load. I have tried timeouts in the 300+ second range on the reporting server/IIS production server. I have tried a few options like Async (which causes images not to load properly) and setting the session mode to InProc with a high timeout value in the Reporting Server's web.config. I have also tried using the reporting server's IP address in the ReportViewer's code instead of the server name. I plan on verifying a picture-loading issue, which I also read about, tomorrow when I get into work. I am unsure what service packs Visual Studio 2005 and the SQL server are running. Was an update released to fix this problem that I could not find? Does anyone have a fix for this?
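
    For context, this particular message usually comes from the ReportViewer control losing its ASP.NET session between the page request and the follow-up Reserved.ReportViewerWebControl.axd requests (classically seen when those requests don't all reach the same in-process session). A minimal sketch of moving session state out of process on the web server so every request sees the same session; the connection string is a placeholder, not taken from this setup:

        <!-- Sketch: web.config on the production web server -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=127.0.0.1:42424"
                        timeout="60" />
        </system.web>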

    Read the article

  • How to add favicon.ico to an ASP.NET site

    - by Tapas Bose
    The solution structure of my application has Login.aspx inside a Pages folder, with favicon.ico in the site root. Now I am in Login.aspx and I want to add favicon.ico, placed in the root, to that page. What I am doing is:

        <link id="Link1" runat="server" rel="shortcut icon" href="../favicon.ico" type="image/x-icon" />
        <link id="Link2" runat="server" rel="icon" href="../favicon.ico" type="image/ico" />

    I have also tried:

        <link id="Link1" runat="server" rel="shortcut icon" href="favicon.ico" type="image/x-icon" />
        <link id="Link2" runat="server" rel="icon" href="favicon.ico" type="image/ico" />

    But these aren't working. I have cleared the browser cache, but no luck. What should the path to favicon.ico be from:

    - Login.aspx
    - Site.master

    Thank you. The login page's URL is http://localhost:2873/Pages/Login.aspx and the favicon.ico's URL is http://localhost:2873/favicon.ico. I am still unable to see the favicon after changing my code to:

        <link id="Link1" rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
        <link id="Link2" rel="icon" href="/favicon.ico" type="image/ico" />
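
    One wrinkle worth noting: because those <link> tags have runat="server", ASP.NET treats them as HtmlLink controls and can resolve app-relative "~/" paths for you, which also survives the page living in a different folder than the master page. A minimal sketch (markup only, placed in the <head> of Site.master):

        <link id="Link1" runat="server" rel="shortcut icon" href="~/favicon.ico" type="image/x-icon" />
        <link id="Link2" runat="server" rel="icon" href="~/favicon.ico" type="image/x-icon" />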

    Read the article

  • How to include a .pl (Perl) file in PHP

    - by dexter
    I have two pages, one in PHP (index.php) and one in Perl (dbcon.pl). Basically I want my PHP file to show only the UI, and all the data operations would be done in the Perl file. I have tried, in index.php:

        <?php include("dbcon.pl"); ?>
        <html>
        <br/>PHP</br>
        </html>

    and dbcon.pl has:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;
        use CGI::Simple;

        my $cgi = CGI::Simple->new;
        my $dsn = sprintf('DBI:mysql:database=%s;host=%s', 'dbname', 'localhost');
        my $dbh = DBI->connect($dsn, root => '', {AutoCommit => 0, RaisError => 0});
        my $sql = "SELECT * FROM products";
        my $sth = $dbh->prepare($sql);
        $sth->execute or die "SQL Error: $DBI::errstr\n";
        while (my @row = $sth->fetchrow_array) {
            print $cgi->header, <<html;
        <div>&nbsp;@row[0]&nbsp;@row[1]&nbsp;@row[2]&nbsp;@row[3]&nbsp;@row[4]</div>
        html
        }

    But when I run index.php in the browser it prints all the code in dbcon.pl instead of executing it. How do I overcome this problem? Note: I am running this in a Windows environment. Is there any other way to do this?
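
    PHP's include() treats the file as PHP source, which is why the Perl just gets echoed. A minimal sketch of handing the work to the Perl interpreter instead and printing whatever it outputs; the path to perl.exe is a placeholder for a Windows setup:

        <?php
        // Sketch: run the Perl script as an external process and show its output.
        $output = shell_exec('"C:\\Perl\\bin\\perl.exe" dbcon.pl');
        echo $output;
        ?>
        <html>
        <body>PHP page around the Perl output</body>
        </html>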

    Read the article

  • JS popup window to play .flv Flash video using jwplayer.swf

    - by Mike Trader
    I would like to adjust this code so that it does not crash the browser when expanded to full screen, and so that the popup closes when it loses focus.

        <html>
        <head>
        <title>Popup Example</title>
        <center>
        <div class="yt_container">
        <div id="yt_the_video" class="yt_video_full">
        <script type="text/javascript" src="swfobject.js"></script>
        <script type="text/javascript">
            var s1 = new SWFObject("player.swf","ply","640","500","9","#FFFFFF");
            s1.addParam("allowfullscreen","true");
            s1.addParam("allownetworking","all");
            s1.addParam("allowscriptaccess","always");
            s1.addParam("flashvars",'&file=GJClip.flv&autostart=true');
        </script>
        </head>
        <body bgcolor="#CCCFFF">
        <img alt="GJ" src="GJPlay.jpg" width="80" height="60" onClick="s1.write('yt_the_video');">
        </body>
        </html>

    I have to have many small thumbnails on a page, and each one needs to open up to a full-size (640x480) video with controls when clicked. Having looked at Shadowbox (it dims the web page behind it, which I'm not allowed to do) and Lightbox, which I cannot get to work at all, I am down to a home-grown solution, which I prefer anyway.
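
    For the "close when it loses focus" part, a minimal sketch of opening the player page in a real popup window and wiring its blur event; video.html, the window name and the sizes are made-up values for illustration:

        // Sketch: open the player in a popup and close it when it loses focus.
        function openVideoPopup(url) {
            var win = window.open(url, 'videoPopup', 'width=660,height=520');
            win.onblur = function () {
                win.close();
            };
            return win;
        }

        // e.g. from a thumbnail:
        // <img src="GJPlay.jpg" onclick="openVideoPopup('video.html');">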

    Read the article

  • Edited an .SWF: the changes are okay locally, but on the server they are not fully visible

    - by Andy
    Hi, I just edited an .SWF file that has been on my website home page for a while now. I update the SWF frequently with new pictures. It's actually a picture slideshow consisting of 6 images. I used Adobe Flash CS4 Pro to edit the file, just swapping all the pictures (JPGs) in it for other ones. I also have some small ActionScript where I just have the URL:

        on(release) {
            getURL("link");
        }

    so that's nothing fancy at all. I saved and published everything (CTRL+ENTER) and the .SWF played well; I tested it in IE8 and FF. Then I uploaded the SWF to my test server, overwriting the existing SWF file.

    Now the problem: all pictures but one show up well. Of the 6 images, the second image is actually the old image that was in its place. I downloaded the .SWF from the test server and inspected it, and guess what: the old picture wasn't in it; the correct image was in the SWF. Even after reloading the page with CTRL+F5, the wrong image still shows. FF, though, shows the SWF correctly. So I then opened the page on another computer using IE8 and there the SWF works well, showing the correct second image.

    What's wrong with my first computer's browser? It's also the computer I edited the SWF with. I DO remember that I first saved and uploaded the wrong SWF (with the old 2nd image still in it) to the test server, and later on uploaded the correct one (with the proper 2nd image). I think IE8 has cached the wrong SWF and memorized it somehow, not willing to see that the file has actually been changed. What can I do so IE8 starts showing the correct SWF?

    Read the article

  • Flash/Flex: "Warning: filter will not render" problem

    - by davidemm
    In my Flex application, I have a custom TitleWindow that pops up in modal fashion. When I resize the browser window, I get this warning:

        Warning: Filter will not render. The DisplayObject's filtered dimensions (1286, 107374879) are too large to be drawn.

    Clearly, I have nothing set with a height of 107374879. After that, any time I mouse over anything in the Flash Player (v. 10), the CPU churns at 100%. When I close the TitleWindow, the problem subsides. Sadly, the warning doesn't indicate which DisplayObject is too large to draw. I've tried attaching explicit heights/widths to the TitleWindow and the components within, but still no luck.

    [Edit] The plot thickens: I found that the problem only occurs when I set the PopUpManager's createPopUp modal parameter to "true"; I don't see the behavior when modal is set to "false". It's failing while applying the graying filter, which comes from being modal, to other components. Any ideas how I can track down the one object that has not been initialized but is being filtered during the modal phase? Thanks for reading.

    Read the article

  • jQuery newbie: combine validate with hiding the submit button

    - by Jeffb
    I'm new to jQuery. I have gotten validate to work with my form (MVC 1.0 / C#) with this:

        <script type="text/javascript">
            if (document.forms.length > 0) {
                document.forms[0].id = "PageForm";
                document.forms[0].name = "PageForm";
            }
            $(document).ready(function() {
                $("#PageForm").validate({
                    rules: {
                        SigP: { required: true }
                    },
                    messages: {
                        SigP: "<font color='red'><b>A Sig Value is required. </b></font>"
                    }
                });
            });
        </script>

    I also want to hide the submit button to prevent twitchy-mouse syndrome from causing duplicate entry before the controller completes and redirects (I'm using a GPR pattern). The following works for this purpose:

        <script type="text/javascript">
            //
            // prevent double-click on submit
            //
            jQuery('input[type=submit]').click(function() {
                if (jQuery.data(this, 'clicked')) {
                    return false;
                } else {
                    jQuery.data(this, 'clicked', true);
                    return true;
                }
            });
        </script>

    However, I can't get the two to work together. Specifically, if validation fails after the submit button is clicked (which happens given how the form works), then I can't get the form submitted again unless I do a browser refresh that resets the 'clicked' property. How can I rewrite the second method above so it does not set the clicked property unless the form validates? Thx.
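
    One way to tie the two together, sketched below, is to drop the separate click handler and only disable the button inside the validate plugin's submitHandler, which runs after validation has passed; this keeps the original rules and messages unchanged:

        // Sketch: disable the submit button only once validation succeeds.
        $("#PageForm").validate({
            rules: {
                SigP: { required: true }
            },
            messages: {
                SigP: "<font color='red'><b>A Sig Value is required. </b></font>"
            },
            submitHandler: function (form) {
                $(form).find("input[type=submit]").attr("disabled", "disabled");
                form.submit();
            }
        });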

    Read the article

  • Which CSS combining technique?

    - by DotnetShadow
    Hi there, which of the following would you say is the best way to go when combining CSS files?

    Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx).

    - Page1.aspx - a specific page with some unique CSS that is only ever used on that page, so I create page1.css; it also uses another stylesheet, grids.css.
    - Page2.aspx - another page that is different from all other pages on the site, including page1.aspx. I make a page2.css for it; this page doesn't use grids.css.

    So would you combine the stylesheets as:

    Option 1: combine per page
    - csshandler.axd?d=master.css,page1.css,grids.css when visiting page1
    - csshandler.axd?d=master.css,page2.css when visiting page2
    - Benefits: page-specific, rendering is quicker since only selectors for that page need to be matched, and there are no unused selectors.
    - Drawback: multiple combinations of master.css plus page-specific CSS, so master.css has to be downloaded for each page.

    Option 2: combine all stylesheets whether a page needs them or not
    - csshandler.axd?d=master.css,page1.css,page2.css,grids.css (master, page1 and page2), so it gets cached as one file. The problem is that rendering may be slower, since the browser has to match EVERY selector in the CSS against the page, even the unused ones; in the case of page2.aspx, which doesn't use grids.css, the selectors in grids.css will still be parsed against page2, so rendering is slower.
    - Benefits: one file is downloaded and cached no matter what page you visit.
    - Drawback: unused selectors need to be parsed by the browser, slowing rendering.

    Option 3: leave the master file on its own and only combine the other stylesheets (the benefit is that, because master.css is used across all pages, there is a good chance it is already cached and doesn't need to keep being downloaded)
    - csshandler.axd?d=master.css
    - csshandler.axd?d=page1.css,grids.css
    - Benefits: master.css can be cached no matter what page you visit; not many unused selectors, since only page-specific CSS is added.
    - Drawback: initially a minimum of 2 HTTP requests has to be made.

    What do you guys think? Cheers, DotnetShadow

    Read the article

  • COM access to classic ASP intrinsic objects

    - by wrench
    I'm converting a VB6 COM object that works with classic ASP to a C# .NET COM object.

        Interop_COMSVCS.ObjectContext objContext;
        Interop_COMSVCS.AppServer objAppServer;

        objAppServer = null;    // need to initialize before using
        objAppServer = new Interop_COMSVCS.AppServer();
        objContext = objAppServer.GetObjectContext();

        oApplication = (Interop_ASP.Application)objContext["Application"];
        oSession = (Interop_ASP.Session)objContext["Session"];
        oResponse = (Interop_ASP.Response)objContext["Response"];
        oRequest = (Interop_ASP.Request)objContext["Request"];

    oSession works to store local information to/from ASP storage, and oResponse can do simple writes to the browser. BUT code like oRequest.Cookies["sessionId"] or oResponse.Cookies["sessionId"] doesn't provide any sort of read or write access. Any cast or conversion I try to do tells me I'm dealing with an empty or null System object. There doesn't seem to be any syntax to get/set the cookie collection. With COM+ I've seen some articles that mention a switch for "Access to ASP Intrinsic Objects" -- that seems to describe my issue, but I'd rather not use COM+. There are also some articles that indicate that if I were using ASP.NET I could use HttpContext and HttpRequest/Response, but that's a completely different set of objects that doesn't seem to be available with classic ASP. I've been stuck on this for a few days. Any help appreciated.

    Read the article

  • Autodetect timezone in Rails given UTC offset and DST

    - by Jose
    I basically want to autodetect a user's timezone using Rails. I can use this JS code in the user's browser (http://www.onlineaspect.com/2007/06/08/auto-detect-a-time-zone-with-javascript/) to send a form with the UTC offset and whether or not the user's time zone observes DST during summer. Once I have that info on the server, I want to select the matching time zone. In Rails, I can get a list of time zones with ActiveSupport::TimeZone.all. Also, I can filter zones by UTC offset thanks to the utc_offset method. However, I don't know how to filter the timezones that do/don't observe DST.

    E.g., suppose a user lives in Amsterdam. Filtering by UTC offset will return the Berlin, Belgrade, Madrid, etc. timezones, as well as West Central Africa. All of them but West Central Africa would be appropriate timezones for a user in Amsterdam (as they provide the same time/date), but I need to filter out West Central Africa, which does not observe DST in summer. How can I do this in Rails? Also, are any of my assumptions wrong?

    Read the article

  • Can push technology / comet be faked?

    - by stef
    A client has a dating site and would like to have a popup (either a nice JavaScript overlay or a new browser window popup; we're flexible) displayed to users when another user is visiting their page. I'm familiar with push technology and Comet, but it's quite challenging to implement, and it may place serious strain on a server with over 100,000 unique visitors per day. I'm wondering if there is a way of faking this, perhaps by not being accurate to the second. I can't really think of any way. This is a classic LAMP environment. Anyone?

    EDIT: what about this: placing an iframe on the page that refreshes every few seconds, and on each refresh it checks in the DB whether a visitor has been logged on this profile page. If so, it shows a message. The message would be visible, but the background of the iframe would blend in with the background of the site and be invisible. If the message fades in and out, it would look like a JS box "popping up".

    Read the article

  • Help with redirection for .com, .net and .org domains: redirecting all of them to .com.

    - by user198553
    Hi all! I need help with some rules in my ISAPI_Rewrite installation. (If you only know mod_rewrite that could be a good help too; I would adapt the configuration.) I'm going to be very honest about my needs: I need to do this configuration in the next few hours, and don't have time right now to understand everything about rewrites, regular expressions and such. I really think you can help me; if I had more reputation I would even set up a bounty... :(

    In fact, I believe that what I need is simple. I have a .com domain. The main URL of my website is going to be http://www.mainurl.com/. I have two other domains: mainurl.net and mainurl.org. What I need (in ISAPI_Rewrite 2, with the configuration made in the httpd.ini file in the root folder) is: every time someone types mainurl.net in the browser it becomes http://www.mainurl.com/ via a 301 redirect. If it's typed without www it also becomes http://www.mainurl.com/. If someone types mainurl.net/about it becomes http://www.mainurl.com/about/. Always redirect to the .com, with the www part and the final slash /. Thanks in advance, you all!
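
    Since the question allows adapting from mod_rewrite, a minimal Apache-style sketch of the core rule ("any host that is not www.mainurl.com gets 301-redirected there, path preserved"); the trailing-slash handling is left out, and translating this into ISAPI_Rewrite 2's httpd.ini syntax still has to be done by hand:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.mainurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mainurl.com/$1 [R=301,L]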

    Read the article

  • Use spring tag in XSLT

    - by X-Pippes
    I have an XSL/XML parser that produces JSP/HTML code. Using the MVC model I need to access the Spring tag library in order to perform i18n translation. Thus, given the XML

        <a>
          ...
          <country>EN</country>
          ...
        </a>

    and using the <spring:message code="table_country_code.EN"/> tag, I want to choose, based on the browser language, the translation (England, Inglaterra, etc.). However, the XSL does not accept the <spring:message> tag. The idea is to have an XSLT with something like this:

        <spring:message code="table_country_code.<xsl:value-of select="country"/>"/>

    I also tried to create the spring tag in Java when I parse and create the XML, but I still get the same error:

        ERROR [STDERR] (http-0.0.0.0-8080-1) file:///C:/Software/Jboss/jboss-soa-p-5/jboss-as/bin/jstl:;
        Line #5; Column #58; The prefix "spring" for element "spring:message" is not bound.

    How can I resolve this?
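
    The error itself is an XML namespace complaint: the stylesheet uses the spring: prefix without declaring it. A minimal sketch of declaring the prefix on the stylesheet and building the code attribute with xsl:attribute; the namespace URI shown is the standard Spring JSP taglib URI, and whether the generated JSP then picks the element up as a taglib depends on how the output is included:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:spring="http://www.springframework.org/tags">

          <xsl:template match="country">
            <spring:message>
              <xsl:attribute name="code">table_country_code.<xsl:value-of select="."/></xsl:attribute>
            </spring:message>
          </xsl:template>

        </xsl:stylesheet>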

    Read the article

  • javascript - catch SyntaxError and run alternate function

    - by ludicco
    Hello there, I'm trying to build something in JavaScript where I can take an input that can be anything: a string, XML, JavaScript, or a non-JavaScript string without quotes, as follows:

        // strings
        eval("'hello I am a string'");  /* note the proper quote marks */

        // xml
        eval(<p>Hello I am a XML doc</p>);

        // javascript
        eval("var hello = 2+2;");

    These first three work well, since they are native JavaScript formats, but when I try this:

        // plain text without quotes
        eval("hello I am a plain text without quotes");
        //--SyntaxError: missing ; before statement:--//

    JavaScript obviously interprets this as a syntax error, because it thinks it's JavaScript, and throws a SyntaxError. So what I would like to do is catch this error and perform the adjustment method when it occurs. I've already tried with try/catch, but it doesn't work: it keeps returning the syntax error as soon as it tries to execute the code. Any help would be much appreciated. Cheers :)

    Additional information: imagine an external file that JavaScript reads, using SpiderMonkey, so it's non-browser stuff (I can't use HttpRequest, DOM, etc.). Not sure if this matters, but there it is. :)
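
    For reference, a minimal sketch of the usual pattern: a SyntaxError raised by eval() is thrown at the moment eval runs and is catchable as long as the eval call itself sits inside the try block (handleAsPlainText is a made-up name for the alternate function):

        function evalOrFallback(input) {
            try {
                return eval(input);
            } catch (e) {
                if (e instanceof SyntaxError) {
                    return handleAsPlainText(input);  // run the alternate function
                }
                throw e;  // rethrow anything that isn't a syntax problem
            }
        }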

    Read the article

  • jQuery click on anchor element forces scroll to top?

    - by Dan.StackOverflow
    This SO question (http://stackoverflow.com/questions/720970/jquery-hyperlinks-href-value) seems to be a duplicate, but the accepted answer doesn't solve the problem. Sorry if this is bad SO etiquette. I am running into a problem using jQuery and a click event attached to an anchor element. In my .ready() function I have:

        jQuery("#id_of_anchor").click(function(event) {
            // start function when any update link is clicked
            Function_that_does_ajax();
        });

    and my anchor looks like this:

        <a href="#" id="id_of_anchor"> link text </a>

    But when the link is clicked, the AJAX function is performed as desired, and then the browser scrolls to the top of the page. Not good. I've tried adding:

        event.preventDefault();

    before calling my function that does the AJAX, but that doesn't help. What am I missing?

    Clarification: I've used every combination of

        return false;
        event.preventDefault();
        event.stopPropagation();

    before and after my call to my JS AJAX function. It still scrolls to the top.

    Read the article
