Search Results

Search found 27599 results on 1104 pages for 'google map'.


  • How can I get Google to re-point its search entries to a new domain?

    - by poolski
    My main .com domain registration lapsed, and when I went to re-register it, I found that a domain reseller service had squatted it and I've lost access to it. As I wasn't terribly keen on funding scammers and the like, I registered a .co.uk domain under the same name. Is there any way of getting Google to re-point all its indexed links to the new domain? It's been indexing my blog for a couple of years now, and while it's not too big a deal, I'd like to not have to start all over again. Also, searching for my site turns up an old entry which is currently pointing at an "Apply for a Tax Break NOW!!!" page.

    Read the article

  • Does Google Chrome officially work on Windows 7 64-bit yet?

    - by Nick Josevski
    As soon as I jumped onto one of the beta releases of Windows 7, I tried to install Google Chrome. Being on a 64-bit installation, it came up with a 'non-supported OS' error or something similar (I can't remember). Looking around at the time, I saw lots of posts/tips about just appending --in-process-plugins to the Chrome shortcut; after trying this and still not having luck, I found more posts, including what seemed to be some from the Chrome developers, saying this was not wise and exposed a security risk. So does anyone have a well-sourced answer as to what's holding up Win 7 64-bit support in Chrome, or better yet an "official" answer saying that it is supported in Win7 x64 RTM and works well now?

    Read the article

  • How do I fix font corruption in Google Chrome 9.0.597.44 beta on Windows XP?

    - by snicker
    I am not sure what is causing this problem, but I think it is related to Unicode handling. Google Chrome, seemingly out of nowhere a month ago, stopped rendering Unicode characters in certain fonts, e.g. this ?_? looks fine in some fonts but corrupted in others. Everything renders fine in other browsers. Most recently, I visited the FourSquare website and saw complete font corruption when comparing IE against Chrome. What gives? Has anyone else seen this? How can I fix it?

    Read the article

  • How does Google Chrome know my Firefox history if I never imported it?

    - by fdisk
    I have Firefox and Google Chrome installed on the same machine (Linux). When I type something in Chrome's omnibox, it suggests pages I have already visited in Firefox. I have never connected the accounts of the two browsers, never imported information from one browser to the other, and never visited the suggested pages in Chrome. The keyword I type in the omnibox is vague, and there is no way it could guess the suggestion without having access to the Firefox history. E.g. I type "ir" in Chrome and it suggests the same Iron Maiden lyrics page I had browsed before in Firefox. Thanks

    Read the article

  • Move email off Small Business Server to Google Apps, retain other SBS functions?

    - by Paul S.
    Recently, an in-house Microsoft Small Business Server 2011 was installed where I work. Unfortunately, our buildings have a bad electrical power supply and we suffer frequent outages. We have a large percentage of staff working off-site. Now when the power goes off here, everyone everywhere loses email functionality. I have been assigned to research the possibility of routing our email to Google Apps while maintaining LAN functions on the SBS. I haven't worked with Microsoft products for several years now, so do not know how SBS is structured. Can anyone here tell me if this is possible, or point me to good resources that explain our options?
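
    For reference, the usual approach here: inbound mail routing is decided by the domain's MX records, so pointing them at Google's mail servers moves email delivery off the SBS box while SBS keeps its LAN roles (file, print, domain). A hedged sketch of the standard Google Apps MX record set as published by Google (example.com is a placeholder for your domain; verify the current values in the Google Apps setup wizard):

        ; DNS zone sketch -- MX records for Google Apps (placeholder domain)
        example.com.    IN  MX  1   ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
        example.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
        example.com.    IN  MX  10  ASPMX3.GOOGLEMAIL.COM.

    With mail delivered at Google, a local power outage no longer takes email down for off-site staff, and the SBS machine keeps serving the LAN as before.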

    Read the article

  • Does Google Chrome take over all other browsers no matter what?

    - by Jodi
    I cannot use either IE or Firefox since I downloaded Google Chrome. I don't even have Chrome set as my default anymore, but it opens up anyway. I need to use IE in order to download updates for Windows Movie Maker, and I can only download them using IE, thanks to good old Microsoft. And nowhere can I find a way to access IE on my computer: it is not shown in Programs, and no shortcut was created on my desktop during the download. Any suggestions? I've got to get this video down and I am on a tight deadline. Thanks.

    Read the article

  • How can I move authorized applications between Google accounts?

    - by zoopp
    I'm looking into creating an email address with a professional name on Gmail, and since I can't change my current one, I have to create a new Google account. Among the things which need to be patched up (e.g. forwarding email to the new address until every other account's contact email address is changed, etc.) I came across authorized applications. If I am to use the new email address exclusively, I have to somehow move my authorized applications as well, since if I eventually delete my old account I will lose access to the profiles created by those applications (e.g. the Stack Exchange network, YouTube, etc.). How can this move be accomplished?

    Read the article

  • Why does my Google chat get blocked (by the corporate firewall) some days but not others?

    - by Peter
    I have noticed that some days I am able to chat while using Gmail, and other days I am not. It would make sense to me that I would either always be blocked, or never. But I can't figure out why it seems to change daily or weekly. Is Google constantly changing the URLs involved, so that the censoring companies (we use Websense where I work) have to play catch-up? Or is there some other reason I'm missing? I am more interested in the technical reason it might be happening than in an actual workaround.

    Read the article

  • Which credentials should I put in for the Google App Engine BulkLoader on the development server?

    - by Hoang Pham
    Hello everyone, I would like to ask what kind of credentials I need to put in for importing data using the Google App Engine BulkLoader class:

        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/

    It then asks me for credentials:

        Please enter login credentials for localhost

    Here is an extract of the content of models.py (I use this listcountries.csv file):

        class CMSCountry(db.Model):
          sortorder = db.StringProperty()
          name = db.StringProperty(required=True)
          formalname = db.StringProperty()
          type = db.StringProperty()
          subtype = db.StringProperty()
          sovereignt = db.StringProperty()
          capital = db.StringProperty()
          currencycode = db.StringProperty()
          currencyname = db.StringProperty()
          telephonecode = db.StringProperty()
          lettercode = db.StringProperty()
          lettercode2 = db.StringProperty()
          number = db.StringProperty()
          countrycode = db.StringProperty()

        class CMSCountryLoader(bulkloader.Loader):
          def __init__(self):
            bulkloader.Loader.__init__(self, 'CMSCountry',
                                       [('sortorder', str),
                                        ('name', str),
                                        ('formalname', str),
                                        ('type', str),
                                        ('subtype', str),
                                        ('sovereignt', str),
                                        ('capital', str),
                                        ('currencycode', str),
                                        ('currencyname', str),
                                        ('telephonecode', str),
                                        ('lettercode', str),
                                        ('lettercode2', str),
                                        ('number', str),
                                        ('countrycode', str)])

        loaders = [CMSCountryLoader]

    Every attempt to enter the email and password results in "Authentication Failed", so I could not import the data to the development server. I don't think there is any problem with my files or my models, because I have successfully uploaded the data to the appspot.com application. So what should I put in for the localhost credentials? I also tried to use Eclipse with Pydev, but I still got the same message :( Here is the output:

        Uploading data records.
        [INFO ] Logging to bulkloader-log-20090820.121659
        [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3
        [INFO ] [Thread-1] WorkerThread: started
        [INFO ] [Thread-2] WorkerThread: started
        [INFO ] [Thread-3] WorkerThread: started
        [INFO ] [Thread-4] WorkerThread: started
        [INFO ] [Thread-5] WorkerThread: started
        [INFO ] [Thread-6] WorkerThread: started
        [INFO ] [Thread-7] WorkerThread: started
        [INFO ] [Thread-8] WorkerThread: started
        [INFO ] [Thread-9] WorkerThread: started
        [INFO ] [Thread-10] WorkerThread: started
        Password for [email protected]:
        [DEBUG ] Configuring remote_api. url_path = /remote_api, servername = localhost:8178
        [DEBUG ] Bulkloader using app_id: abc
        [INFO ] Connecting to /remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run
            request_manager.Authenticate()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send
            f = self.opener.open(req)
          File "C:\Python25\lib\urllib2.py", line 381, in open
            response = self._open(req, data)
          File "C:\Python25\lib\urllib2.py", line 399, in _open
            '_open', req)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 1107, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "C:\Python25\lib\urllib2.py", line 1082, in do_open
            raise URLError(err)
        URLError: <urlopen error (10061, 'Connection refused')>
        [INFO ] Authentication Failed

    Thank you!
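
    A hedged reading of the output, rather than a definitive fix: the final URLError (10061, 'Connection refused') means nothing answered on localhost:8178 at all, so the "Authentication Failed" line is a downstream symptom. It is worth checking that dev_appserver.py is actually running on port 8178 and that app.yaml exposes the remote_api endpoint; the standard mapping for this SDK generation looked roughly like the sketch below. Once the endpoint is reachable, the development server does not validate real credentials; any email address with a blank password is commonly reported to work.

        # app.yaml sketch -- standard remote_api handler for the old Python SDK
        handlers:
        - url: /remote_api
          script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
          login: admin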

    Read the article

  • How can I perform 2D side-scroller collision checks in a tile-based map?

    - by bill
    I am trying to create a game where you have a player that can move horizontally and jump. It's kind of like Mario but it isn't a side scroller. I'm using a 2D array to implement a tile map. My problem is that I don't understand how to check for collisions using this implementation. After spending about two weeks thinking about it, I've got two possible solutions, but both of them have some problems. Let's say that my map is defined by the following tiles:

        0 = sky
        1 = player
        2 = ground

    The data for the map itself might look like:

        00000
        10002
        22022

    For solution 1, I'd move the player (the 1) a complete tile at a time and update the map directly. This makes the collision check easy, because you can tell whether the player is standing on the ground simply by looking at the tile directly below him:

        // x and y are the tile coordinates of the player. The tile origin is the upper-left.
        if (grid[x][y+1] == 2) {
            // The player is standing on top of a ground tile.
        }

    The problem with this approach is that the player moves in discrete tile steps, so the animation isn't smooth. For solution 2, I thought about moving the player via pixel coordinates and not updating the tile map. This will make the animation much smoother because I have a smaller movement unit per frame. However, this means I can't really accurately store the player in the tile map, because sometimes he would logically be between two tiles. But the bigger problem here is that I think the only way to check for collision is to use Java's intersection method, which means the player would need to be at least a single pixel "into" the ground to register a collision, and that won't look good. How can I solve this problem?
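
    For what it's worth, here is a minimal sketch of the usual middle road between the two solutions, written in JavaScript (tile size, map data, and names are illustrative, not taken from the question): keep the tile map static, move the player in pixel coordinates, and convert the player's bounding-box edges to tile coordinates only when checking collisions. Snapping the player to the tile boundary on contact is what avoids resting "one pixel into" the ground:

        // Minimal sketch: pixel-coordinate movement against a static tile grid.
        // The player is an axis-aligned box {x, y, w, h} in pixel coordinates.
        var TILE = 16;              // tile size in pixels (illustrative)
        var map = [                 // 2 = solid ground, 0 = sky
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 2],
          [2, 2, 0, 2, 2]
        ];

        // Is the tile containing pixel point (px, py) solid?
        function solidAt(px, py) {
          var tx = Math.floor(px / TILE), ty = Math.floor(py / TILE);
          if (ty < 0 || ty >= map.length || tx < 0 || tx >= map[0].length) return false;
          return map[ty][tx] === 2;
        }

        // Vertical movement with ground snapping; horizontal works the same on x.
        function moveY(player, dy) {
          player.y += dy;
          if (dy > 0) {             // falling: test both bottom corners
            var bottom = player.y + player.h;
            if (solidAt(player.x, bottom) || solidAt(player.x + player.w - 1, bottom)) {
              // Snap so the player's feet rest exactly on the tile boundary.
              player.y = Math.floor(bottom / TILE) * TILE - player.h;
              player.onGround = true;
            }
          }
        }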

    Read the article

  • Set Custom Reload Times for Individual Webpages in Chrome

    - by Asian Angel
    Do you have a webpage that needs to be reloaded every so often, or perhaps multiple webpages that each need their own individual reload time? Now you can have the best of both with the AutoReloader extension for Google Chrome.

    Using AutoReloader: When you first look at the drop-down window, everything will be in a neutral "waiting" state. You can start using the extension immediately by simply entering the desired "time frame" for reloading a webpage. Notice for the "Repeat Option" that "0 = Continuous". You may want to have a quick look through the "Options" to see if there are any operational changes you would like to make. Once you enter a time, click on the "Set Link" to start the timer. Notice that you can view the time remaining on the "Toolbar Button", unless you disabled that feature in the "Options". Clicking on the "Toolbar Button" will show a larger version of the timer in the drop-down window, along with a "Cancel Current Timer" link. Here is the best part of all with AutoReloader: you can set up your own customized list of "Reload Times" and then access them through the drop-down window. Using the two times shown here, we were able to set the Productive Geek webpage up for 30-second reloads and the TinyHacker webpage up for 1-minute reloads at the same time. There was no conflict whatsoever in running both reload times simultaneously. This is a really terrific feature!

    Conclusion: Whether you have only one webpage or multiple pages that need periodic reloading (such as tracking a Woot-Off or an eBay auction), the AutoReloader extension is the perfect tool for the job. Running custom reload times simultaneously has never been easier.

    Links: Download the AutoReloader extension (Google Chrome Extensions)

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so it can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up. The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so it ranked erroneously higher in the listings. No doubt this helped drive additional sales this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading. My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by its rules -- is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and that makes me feel uncomfortable. Now Google is incorporating more social aspects into its search results. For example, when Google knows it's me (i.e. I'm logged in when using Google), search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps it would now be higher in the result set. Soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation on Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"?

    - by Luke McCallum
    We engaged the services of a third-party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au. Without our authorisation, the SEO also ran a link-building campaign which has seen us Penguin-slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March, we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission, which is still advising that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date:

    - According to http://removem.com we had about 5584 back-linking domains. Of those, we have successfully contacted and had links removed from 344 domains.
    - We ignored links from 625 domains, as they were either legitimate press releases, natural backlinks, or client websites containing an attribution link in the footer that points back to us.
    - Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed.
    - We have contacted but are yet to receive feedback from 1666 domains, so we can assume those backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/, which we are calling our Bad Link Catcher (a stroke of genius, I thought), e.g. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php
    - As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer pointing back to us. We have gone through the vast majority of these and updated the links to replace the anchor text with an image and a rel="nofollow" link, e.g. <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a> (see http://www.milkatwork.com.au/).
    - An export from http://removem.com detailing the number of times we have contacted each link, and whether it is still found or not, was also supplied with each resubmission.
    - The total backlinks reported in Google Webmaster Tools has dropped from over 100K to 87K, and I expect it to drop significantly lower once Google re-crawls each back-linking page.

    Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.

    Read the article

  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possibilities for why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough yet for Google to index many of these. I also looked at what pages Google has indexed, and strangely there are a lot of odd pages in the index which are actually keyword searches (more a bunch of random letters than actual words), as if someone had typed something into the search box on my site. There are no links to pages like this, and the only way of reaching them is to type something into the search box, so I now add a META robots tag with noindex,nofollow whenever I render pages like this. Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website. My questions are:

    - What should I do with the multiple domains: just use one, or continue with two or more?
    - Should I add 301 redirects, not just the META canonical tag?
    - Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag?
    - Is the fake price comparison site likely to be causing problems?
    - Are all the links to the site from other domain names but the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.
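
    On the 301 question specifically: a canonical tag is treated as a hint, while a 301 is a directive, so consolidating everything onto the one preferred domain with a server-level redirect is generally considered the safer option. A minimal sketch for Apache with mod_rewrite (domain names are placeholders; since a Classic ASP site usually runs on IIS, the equivalent there would be an IIS redirect or URL Rewrite rule):

        # .htaccess sketch: send every non-canonical host to the preferred domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.example\.com\.au$ [NC]
        RewriteRule ^(.*)$ http://www.example.com.au/$1 [R=301,L]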

    Read the article

  • Problems using Maven to initialize a local thoughtsite (App Engine sample) project in Eclipse

    - by ovr
    This sample app ("thoughtsite") for App Engine contains a pom.xml in its trunk: http://code.google.com/p/thoughtsite/source/browse/#svn/trunk. I ran mvn eclipse:eclipse and also tried using m2eclipse to import this source code into an Eclipse project, but I end up with this error, despite the fact that I have the Google App Engine plugin and the Google App Engine SDK installed:

        Exception in thread "main" java.lang.ExceptionInInitializerError
            at com.google.appengine.tools.info.SdkImplInfo.<clinit>(SdkImplInfo.java:19)
            at com.google.appengine.tools.util.Logging.initializeLogging(Logging.java:36)
            at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:82)
        Caused by: java.lang.RuntimeException: Unable to discover the Google App Engine SDK root.
        This code should be loaded from the SDK directory, but was instead loaded from
        file:~/.m2/repository/com/google/appengine/appengine-tools-sdk/1.3.0/appengine-tools-sdk-1.3.0.jar.
        Specify -Dappengine.sdk.root to override the SDK location.
            at com.google.appengine.tools.info.SdkInfo.findSdkRoot(SdkInfo.java:106)
            at com.google.appengine.tools.info.SdkInfo.<clinit>(SdkInfo.java:24)
            ... 3 more

    When I go into the project settings under "Google" and try to set it to use the default App Engine SDK, it always reverts to trying to use Maven's App Engine SDK instead. No idea how to get this project working.
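
    The stack trace itself names the escape hatch: appengine.sdk.root. A hedged sketch of overriding it (the SDK path is a placeholder; the main class is taken straight from the trace): add the property to the VM arguments of the Eclipse run configuration, or pass it when launching the dev server by hand:

        # VM argument for the Eclipse launch configuration, or on the command line:
        java -Dappengine.sdk.root=/path/to/appengine-java-sdk-1.3.0 \
             -cp <project classpath> \
             com.google.appengine.tools.development.DevAppServerMain war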

    Read the article

  • Html2Canvas: Google Map is not rendering

    - by eric maxfield
    I am running an Apache server. I have a simple screen capture set up using html2canvas. The capture function is rendering a blank image. I have tried numerous ways to configure the JavaScript using related articles from this site, to no avail. The code is all working and tested, because I can capture the image prior to the Google Maps API being loaded. Thank you, and any advice would be much appreciated.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <head>
          <title>Tester</title>
          <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
          <script type="text/javascript" src="html2canvas.js"></script>
          <script type="text/javascript" src="jquery.plugin.html2canvas.js"></script>
          <script src="http://www.google.com/jsapi?key=ABQIAAAAwbkbZLyhsmTCWXbTcjbgbRSzHs7K5SvaUdm8ua-Xxy_-2dYwMxQMhnagaawTo7L1FE1-amhuQxIlXw"></script>
          <script>
            google.load("earth", "1");
            var ge = null;

            function init() {
              google.earth.createInstance("map_canvas", initCallback, failureCallback);
            }

            function initCallback(object) {
              ge = object;
              ge.getWindow().setVisibility(true);
            }

            function failureCallback(object) {
            }

            function capture() {
              $('#target').html2canvas({
                onrendered: function (canvas) {
                  // Set hidden field's value to image data (base-64 string)
                  $('#img_val').val(canvas.toDataURL("image/png"));
                  // Submit the form manually
                  document.getElementById("myForm").submit();
                }
              });
            }
          </script>
          <style type="text/css">
            #map_canvas { position: fixed; top: 60px; left: 0px; right: 0px; bottom: 0px; }
            #target { border: 1px solid #CCC; margin: 0px; padding: 0px; position: absolute; left: 10px; top: 80px; height: 580px; width: 580px; }
          </style>
        </head>
        <body onload='init()' id='body'>
          <form method="POST" enctype="multipart/form-data" action="save.php" id="myForm">
            <input type="hidden" name="img_val" id="img_val" value="" />
          </form>
          <input type="submit" value="Take Screenshot Of Div Below" onclick="capture();" />
          <div id="target">
            <div id="map_canvas">
            </div>
          </div>
        </body>
        </html>

    This is the PHP document the form posts to (save.php):

        <?php
        // Get the base-64 string from data
        $filteredData = substr($_POST['img_val'], strpos($_POST['img_val'], ",") + 1);
        // Decode the string
        $unencodedData = base64_decode($filteredData);
        // Save the image
        file_put_contents('img.png', $unencodedData);
        ?>
        <h2>Save the image and show to user</h2>
        <table>
          <tr>
            <td><a href="img.png" target="blank"> Click Here to See The Image Saved to Server</a></td>
            <td align="right"><a href="index.php"> Click Here to Go Back</a></td>
          </tr>
          <tr>
            <td colspan="2">
              <br /><br />
              <span> Here is Client-sided image: </span>
              <br />
              <?php
              // Show the image
              echo '<img src="'.$_POST['img_val'].'" />';
              ?>
            </td>
          </tr>
        </table>
        <style type="text/css">
          body, a, span { font-family: Tahoma; font-size: 10pt; font-weight: bold; }
        </style>

    This sample below works correctly; I want to achieve the same with the above code using "Google Earth":

        <!DOCTYPE html>
        <html>
        <head>
          <script src="http://maps.googleapis.com/maps/api/js?key=AIzaSyDY0kkJiTPVd2U7aTOAwhc9ySH6oHxOIYM&sensor=false"></script>
          <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
          <script type="text/javascript" src="html2canvas.js"></script>
          <script type="text/javascript" src="jquery.plugin.html2canvas.js"></script>
          <script type="text/javascript">
            function initialize() {
              var mapProp = {
                center: new google.maps.LatLng(51.508742, -0.120850),
                zoom: 5,
                mapTypeId: google.maps.MapTypeId.ROADMAP
              };
              var map = new google.maps.Map(document.getElementById("googleMap"), mapProp);
            }
            google.maps.event.addDomListener(window, 'load', initialize);

            $(window).load(function () {
              $('#load').click(function () {
                html2canvas($('#googleMap'), {
                  useCORS: true,
                  onrendered: function (canvas) {
                    var dataUrl = canvas.toDataURL("image/png").replace("image/png", "image/octet-stream");
                    window.location.href = dataUrl;
                  }
                });
              });
            });
          </script>
        </head>
        <body>
          <div id="googleMap" style="width:500px;height:380px;"></div>
          <input type="button" value="Save" id="load"/>
        </body>
        </html>

    Read the article

  • How to return a proper 404 for Google while providing user-friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser; please excuse me if you feel this does not belong here. I am observing the behavior described here: Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking to these URLs from anywhere on my site. I suspect this may be Google probing how we handle non-existent content; to cite an answer to the linked question: "[Google is requesting random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header)". We have a custom page for non-existent content: a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may count against the site at Google: they may not interpret the user-friendly page as a 404 Not Found, and may think we are trying to fake something and provide duplicate content. How should I proceed to ensure that Google will not think the site is bogus, while providing a user-friendly message to users in case they click on dead links by accident?
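
    The standard fix is to keep the friendly page but serve it with a genuine 404 status code instead of 200: Google then records a correct "not found" while humans still see the styled message. A minimal sketch for Apache (the path is a placeholder):

        # .htaccess sketch: Apache serves the custom page with a real 404 status
        ErrorDocument 404 /content_not_found.html

    If the page is generated by a script rather than served as a static file, the script must emit the 404 status header itself; a friendly page answered with 200 is exactly what risks being treated as a soft 404 or duplicate content.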

    Read the article

  • Handling file uploads with JavaScript and Google Gears: is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word deprecated floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

    - Users MUST be able to select multiple files for uploading in the dialog.
    - I MUST be able to receive status updates on the transmission of a file (progress bars).
    - I would like to be able to use PUT requests instead of POST.
    - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
    - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has the features I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
          $.each(files, function(k, file) {
            // this code actually goes through a queue, and creates some status bars
            // but it is unimportant to show here...
            sendFile(file);
          });
        }

        function sendFile(file) {
          var request = google.gears.factory.create('beta.httprequest');
          request.open('PUT', upl.url);
          request.setRequestHeader('filename', file.name);
          request.upload.onprogress = function(e) {
            // gives me % status updates... allows e.loaded/e.total
          };
          request.onreadystatechange = function() {
            if (request.readyState == 4) {
              // completed the upload!
            }
          };
          request.send(file.blob);
          return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed that item to a "like" instead of a "must".
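
    As a possible replacement, here is a minimal sketch of the same flow using the plain HTML5 File API and XMLHttpRequest Level 2, which the newer browsers support natively (the upload URL, header name, and element ids are placeholders): multiple selection comes from the multiple attribute, progress events from xhr.upload.onprogress, and PUT is simply the verb passed to open():

        // Markup assumed: <input type="file" id="picker" multiple style="display:none">
        //                 <button id="pick">Upload files</button>
        var picker = document.getElementById('picker');
        document.getElementById('pick').onclick = function() {
          picker.click();                         // open the file dialog from a <button>
        };
        picker.onchange = function() {
          for (var i = 0; i < picker.files.length; i++) {
            sendFile(picker.files[i]);            // one request per selected file
          }
        };

        function sendFile(file) {
          var xhr = new XMLHttpRequest();
          xhr.open('PUT', '/upload/' + encodeURIComponent(file.name));
          xhr.setRequestHeader('X-File-Name', file.name);   // custom request header
          xhr.upload.onprogress = function(e) {
            if (e.lengthComputable) {
              // drive a progress bar from e.loaded / e.total
            }
          };
          xhr.onload = function() {
            // upload finished; inspect xhr.status and xhr.responseText
          };
          xhr.send(file);                         // a File is a Blob: sent as the raw body
        }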

    Read the article

  • Custom Rails actions: I have issues every time

    - by normalocity
    Every time I go to add a custom action to a controller, I completely screw it up somehow. I'm trying to add a route "listings/buyer_listings" that will display all of my listings where someone is a buyer (rather than a seller). With the routes.rb file below, when I go to "listings/buyer_listings", I get routed instead to "users". WTF? In the past, I've had to define my routes using "map.", but this seems like a very verbose way to do something that should work with the :collection specification. You can see that I've done this with many routes toward the end of the file, such as "edit_my_profile", etc. If I put the ":collection" part last, my browser routes to the "show" action, which is not the correct action, and it doesn't make sense to me why it would even do this. If I do "rake routes", my routes look correctly mapped. If I go into a Ruby console and have it recognize the URL, it maps to the correct action, so what am I missing?

        ActionController::Routing::Routes.draw do |map|
          map.resources :locations
          map.resources :browse_boxes
          map.resources :tags
          map.resources :ratings
          map.resources :listings, :collection => { :buyer_listings => :get },
                        :has_many => :bids, :has_many => :comments
          map.resources :users
          map.resources :invite_requests
          map.resource :user_session
          map.resource :account, :controller => "users"

          map.root :controller => "listings", :action => "index" # optional, this just sets the root route

          map.login "login", :controller => "user_sessions", :action => "new"
          map.logout "logout", :controller => "user_sessions", :action => "destroy"
          map.search "search", :controller => "listings", :action => "search"
          map.edit_my_profile "edit_my_profile", :controller => "users", :action => "edit_my_profile"
          map.all_listings "all_listings", :controller => "listings", :action => "all_listings"
          map.my_listings "my_listings", :controller => "listings", :action => "my_listings"
          map.posting_guidelines "posting_guidelines", :controller => "listings", :action => "posting_guidelines"
          map.filter_on "filter_on", :controller => "listings", :action => "filter_on"
          map.top_25_tags "top_25_tags", :controller => "tagging_search", :action => "top_25_tags"

          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end

    Read the article
