Search Results

Search found 9717 results on 389 pages for 'gkt pro'.

Page 89/389 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Nginx and Google Appengine Reverse Proxy Security

    - by jmq
    The scenario is that I have a Google Compute Engine node running Nginx as a reverse proxy to Google App Engine. App Engine is used to service REST calls from a single-page application (SPA). HTTPS is used from the Internet to the Nginx front end. Do I also need to secure the traffic from the Nginx reverse proxy to App Engine by turning on HTTPS there as well? I would like to avoid the overhead of HTTPS between the proxy and the backend. My thinking was that once the traffic has arrived at Nginx encrypted and been decrypted there, it is then sent via the reverse proxy inside Google's infrastructure, so it would be secure. Is it safe in this case to not use HTTPS?
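
    A minimal sketch (not the poster's actual configuration) of the setup being described: TLS terminated at Nginx, plain-HTTP proxying to the backend. example.com, the certificate paths and myapp.appspot.com are placeholders:

        server {
            listen 443 ssl;
            server_name example.com;

            # TLS from the Internet terminates here at the proxy
            ssl_certificate     /etc/nginx/ssl/example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/example.com.key;

            location / {
                # Proxy-to-backend traffic is unencrypted (http://);
                # switching this to https:// is the overhead the question
                # is weighing against the security benefit
                proxy_pass http://myapp.appspot.com;
                proxy_set_header Host myapp.appspot.com;
                proxy_set_header X-Forwarded-Proto https;
            }
        }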

    Read the article

  • Screencast several application windows at once in Microsoft Windows

    - by Birt
    I have several (20+) applications running on a Microsoft Windows PC. What I would like is a solution that allows me to broadcast the window of each application to a webpage, in read-only mode (there's no need for the users to interact with it). This should work even if the application is in the background, seeing that there's no way to fit all of them on the screen. I have searched very extensively, from simple screencasting apps such as Camtasia, CamStudio or VHScrCap, to things like VNC (I haven't found any server able to broadcast multiple windows at once, much less background windows), and even application virtualization, but in the end I haven't found anything that fits my needs. Most solutions that allow capturing a window instead of the whole desktop will only let you capture a single window, not several, and on top of that they don't even work when the window is in the background.

    Read the article

  • Aligning text to the bottom of a div: am I confused about CSS or about blueprint? [closed]

    - by larsks
    I've used Blueprint to prototype a very simple page layout...but after reading up on absolute vs. relative positioning and a number of online tutorials regarding vertical positioning, I'm not able to get things working the way I think they should. Here's my HTML: <div class="container" id="header> <div class="span-4" id="logo"> <img src="logo.png" width="150" height="194" /> </div> <div class="span-20 last" id="title"> <h1 class="big">TITLE</h1> </div> </div> The document does include the blueprint screen.css file. I want TITLE aligned with the bottom of the logo, which in practical terms means the bottom of #header. This was my first try: #header { position: relative; } #title { font-size: 36pt; position: absolute; bottom: 0; } Not unexpectedly, in retrospect, this puts TITLE flush left with the left edge of #header...but it failed to affect the vertical positioning of the title. So I got exactly the opposite of what I was looking for. So I tried this: #title { position: relative; } #title h1 { font-size: 36pt; position: absolute; bottom: 0; } My theory was that this would align the h1 element with the bottom of the containing div element...but instead it made TITLE disappear completely. I guess this means that it's rendering off the visible screen somewhere. At this point I'm baffled. I'm hoping someone here can point me in the right direction. Thanks!
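
    A minimal sketch of the standard pattern for pinning a child to the bottom of its container, reusing the poster's IDs. One aside: the markup above is missing the closing quote in id="header (if that is in the live page and not just a transcription slip, the #header rule would never match, which would explain the first attempt having no effect). The right: 0 offset below is an assumption about where the title should sit:

        /* The positioning context: absolutely positioned children are
           placed relative to this element */
        #header {
            position: relative;
        }

        /* Pin the title block to the bottom edge of #header; the
           horizontal offset keeps it clear of the logo column */
        #title {
            position: absolute;
            bottom: 0;
            right: 0;
            font-size: 36pt;
        }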

    Read the article

  • Will ranking be affected by a mobile XML sitemap for a mobile site with the same URLs as the desktop site?

    - by Emil Rasmussen
    We have a site with both a desktop version and a mobile version. Most of the content is the same and both versions have the same URL, but the HTML generated is device-specific. Looking at Google's recommendations for smartphone-optimized sites, one could get the impression that the mobile XML sitemap is only for sites with different URLs. Will ranking be affected - negatively or positively - if we add a mobile XML sitemap that effectively will be a duplicate of the desktop sitemap?

    Read the article

  • Gracefully terminate a request-based service on the server

    - by Jatin
    In our web application, for each HTTP request there is a lot of computation that happens on the back end. Processing can take anywhere from 10 seconds to 1 hour. While the result is being computed, "Waiting..." is shown on the website for the respective user. But it can happen that a user abandons the request partway through. What can be done on the back end so that the computation can be stopped midway to save resources? What different tactics can be applied here? Ideally, rather than killing the thread directly, a graceful termination policy would work wonders.
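
    The question doesn't say which back-end language is used, so as an assumption, here is a minimal PHP sketch of one cooperative approach: keep the script alive after the client disconnects, and poll connection_aborted() between chunks of work so the job can clean up and exit gracefully instead of being killed. do_next_chunk() and release_resources() are hypothetical placeholders for the real computation and cleanup:

        <?php
        // Keep running after the browser disconnects so we get a chance
        // to notice the abort and shut down cleanly.
        ignore_user_abort(true);
        set_time_limit(0);

        $done = false;
        while (!$done) {
            $done = do_next_chunk();   // hypothetical: one slice of the long computation

            // PHP only notices a disconnect when it tries to send output,
            // so emit a tiny keep-alive and flush before checking.
            echo ' ';
            flush();

            if (connection_aborted()) {
                release_resources();   // hypothetical cleanup hook
                exit;                  // graceful stop instead of a hard kill
            }
        }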

    Read the article

  • What liability concerns do advertising vendors raise, and how can I address them?

    - by Beofett
    One of the websites I administer wants to provide free advertising in the form of direct links to vendors at an event they are running. Up until now, there has been no advertising whatsoever on the site (or any of our other sites). The site is for a for-profit business. The idea of implicit endorsement of any vendors we advertise has been raised, which brought up the question of what we need to do, if anything, to protect ourselves from any potential problems such endorsement might create. I know that many sites have clauses in their Terms of Service that state that (in a nutshell) they are not responsible for any problems or grievances between the visitors to the site and any vendor advertised or linked. Are there other steps that a website typically takes when considering advertising, such as getting the advertiser to provide some sort of certification that their ad will not violate any trademarks or copyrighted material?

    Read the article

  • Should I rely on externally-hosted services?

    - by Mattis
    I am wondering about the dangers and difficulties of using external services like Google Chart on my production website. By external services I mean those that you can't download and host on your own server. (-) Potentially the Google service can be down while my site is up. (+) I don't have to develop those particular systems for new browser technologies; hopefully Google will do that for me. (-) Extra latency while my site fetches the data from the Google servers. What else? Is it worth spending time and money to develop my own systems to be more in control of things?

    Read the article

  • I need a little help with .htaccess rewrite

    - by Pinokyo
    I need a little help with my .htaccess file. I have song, singer and album links I want to rewrite. I already rewrote the links and they look like this: for songs: /song/song_name; for singers: /singer_name; for albums: /album_name. From my .htaccess file: RewriteEngine on RewriteRule ^singer/([^/\.]+)/?$ /core/controller.php?singer=$1 [L] RewriteRule ^song/([^/\.]+)/?$ /core/controller.php?song=$1 [L] RewriteRule ^album/([^/\.]+)/?$ /core/controller.php?album=$1 [L] I need the links for songs, singers and albums to be like this: for songs: /singer_name/song_name; for singers: /singer_name; for albums: /singer_name/album_name. Can anyone help me with this, please?
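
    A minimal sketch of rules for the desired shapes, assuming the same /core/controller.php dispatcher. Since /singer_name/song_name and /singer_name/album_name have identical shapes, the rules alone cannot tell songs from albums, so the second segment is passed as a generic ?item= parameter (an assumption, not the existing API) for the controller to look up:

        RewriteEngine on

        # Skip real files and directories so /core/controller.php itself
        # and static assets are not rewritten
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /singer_name  ->  singer page (one path segment)
        RewriteRule ^([^/\.]+)/?$ /core/controller.php?singer=$1 [L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /singer_name/second_segment  ->  the controller must decide whether
        # the second segment is a song or an album, since the URL shape is the same
        RewriteRule ^([^/\.]+)/([^/\.]+)/?$ /core/controller.php?singer=$1&item=$2 [L]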

    Read the article

  • Meta tags again. Good or bad to use them as page content?

    - by Guandalino
    From an SEO point of view, is it wise to use exactly the same page title value and keyword/description meta tag values not only as meta information, but also as page content? An example illustrates what I mean. Thanks for any answer, best regards. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Meta tags again. Good or bad to use them as page content?</title> <meta name="DESCRIPTION" content="Why it is wise to use (or not) page title, meta tags description and keyword values as page content."> <meta name="KEYWORDS" content="seo,meta,tags,cms,content"> </head> <body> <h1>Meta tags again. Good or bad to use them as page content?</h1> <h2>Why it is wise to use (or not) page title, meta tags description and keyword values as page content.</h2> <ul> <li><a href="http://webmasters.stackexchange.com/questions/tagged/seo">seo</a> <li><a href="http://webmasters.stackexchange.com/questions/tagged/meta">meta</a> <li><a href="http://webmasters.stackexchange.com/questions/tagged/tags">tags</a> <li><a href="http://webmasters.stackexchange.com/questions/tagged/cms">cms</a> <li><a href="http://webmasters.stackexchange.com/questions/tagged/content">content</a> </ul> <p>Read the discussion on <a href="#">webmasters.stackexchange.com</a>. </body> </html>

    Read the article

  • How do I prevent tampering with AJAX process page? [closed]

    - by whamsicore
    I am using Ajax for processing with jQuery. The data string is sent to my process.php page, where it is saved. Issue: right now anyone can directly type example.com/process.php to access my process page, or type example.com/process.php?var1=foo1&var2=foo2 to emulate a form submission. How do I prevent this from happening? Also, in the Ajax code I specified POST. What is the difference here between POST and GET?
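
    A minimal PHP sketch of two common checks for process.php: accept only POST, and require a per-session token that the real page embeds in the form/Ajax call. The token field name and session handling are assumptions, and none of this makes the endpoint tamper-proof (anything a browser can send can be replayed); it only stops casual direct visits:

        <?php
        // process.php
        session_start();

        // 1) Only accept form submissions, not casual GETs typed into the address bar
        if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
            http_response_code(405);
            exit('Method not allowed');
        }

        // 2) Require a token that was stored in the session and embedded in the
        //    page when the form was rendered, and sent back with the Ajax request
        if (!isset($_POST['token'], $_SESSION['token'])
            || !hash_equals($_SESSION['token'], $_POST['token'])) {
            http_response_code(403);
            exit('Invalid token');
        }

        // ... save the submitted data here ...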

    Read the article

  • How do I make sure the web developer I hire will not steal my idea?

    - by Greg McNulty
    So I have a great idea for a new website. However, I don't have the time to develop it. I would like to hire a person or company to design it for me. What steps do I need to take to protect my idea? Where and how do people protect website ideas in general? Also, how easy is it for someone to tweak the idea and make it legally their own? Is a patent enough to protect such an idea? Are there different levels or types of protection? Thank you.

    Read the article

  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Recently Google made some changes to Webmaster Tools, which are explained here: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). Here is what we're getting from the Crawl errors report. What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the 1st and 2 for the 2nd). Based on the Crawl stats we can see that the number of requests made by Google bots doesn't increase. Therefore I believe the number of errors reported is cumulative, and that an error detected on 1 day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or until you manually mark the error as fixed), because if you don't make more requests to a website, there is no way you can check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

    Read the article

  • Preventing search engines from indexing duplicate content

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all of the content, URL structure and database are the same, except for a few URLs; the only real difference is the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how to handle the duplicate content issue when I make the new domain go live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure if this is actually a duplicate content issue or not.
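
    A minimal sketch of the kind of .htaccess rule usually placed on the old domain for this sort of move: every request arriving on the old host gets a permanent (301) redirect to the identical path on the new one. www.oldurl.com and www.newurl.com are the placeholders from the question:

        RewriteEngine on

        # Requests that arrive on the old host are permanently redirected
        # to the same path on the new domain
        RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
        RewriteRule ^ http://www.newurl.com%{REQUEST_URI} [R=301,L]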

    Read the article

  • Is a backlink with a duplicate description and title from a news site bad for SEO?

    - by Dejan Pelzel
    I have a blog with over a thousand posts. I have posted some of those to a news aggregator site and included the same preview photo and description that I used on my own site, along with the link to the post on my site. Since the site is mainly videos and images, the description was usually a complete match of 4-6 lines of text. It now looks like I have been affected by Panda, and since I am not doing anything shady, I suspect it might be due to the duplicate content. For example, when I search for the title of my posts, sometimes my site is not even returned, but the news aggregator site is. Could this be the problem with Panda?

    Read the article

  • How to create a good sitemap for a dynamic website

    - by Saif Bechan
    I have a website with dynamic content and different kinds of pages. I have some pages that rarely change, and I have pages like blogs that change often. The blog pages also have links for sorting, for example sorting by date, ascending or descending. On some of the pages I also have links to different tabbed content, and links that are just anchor links. Now when I use an XML sitemap generator, all of these links are thrown into the sitemap, and I don't think all of them are really relevant. The blog posts so far are also taken into the sitemap. Is this really necessary? I think the links to the blog posts can be indexed just fine on their own. Is the best way to make a sitemap just to manually assign the main menu links to it, or is including everything really recommended?
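
    A minimal sketch of the hand-trimmed sitemap being considered: one canonical URL per page, with the sort-parameter and anchor variants deliberately left out. The URLs and change frequencies are placeholders, not the real pages:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- Stable top-level pages -->
          <url>
            <loc>http://www.example.com/</loc>
            <changefreq>monthly</changefreq>
          </url>
          <url>
            <loc>http://www.example.com/about</loc>
            <changefreq>yearly</changefreq>
          </url>
          <!-- One entry per blog post; the ?sort=... and #tab variants
               of the listing pages are deliberately omitted -->
          <url>
            <loc>http://www.example.com/blog/some-post</loc>
            <changefreq>weekly</changefreq>
          </url>
        </urlset>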

    Read the article

  • How can I screen clients that try to register multiple times?

    - by Aba Dov
    My company offers a bonus to every client who registers. We would like to prevent people from abusing this by registering several times. We thought about filtering clients by IP (there is a problem with workplaces where all stations share the same IP) or by cookies (if cookies are not allowed, we might lose a client). I would like your opinions on these two methods and will be glad to hear about new ones. Thanks.
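
    A minimal PHP sketch of the cookie half of the idea: mark the browser after a successful registration and check for the mark on later attempts. The cookie name and one-year lifetime are assumptions, and as the question already notes, this only catches the same browser with cookies enabled:

        <?php
        // On the registration handler

        // Has this browser already registered?
        if (isset($_COOKIE['already_registered'])) {
            exit('It looks like an account has already been created from this browser.');
        }

        // ... create the account and grant the bonus here ...

        // Mark the browser for roughly one year (placeholder lifetime)
        setcookie('already_registered', '1', time() + 365 * 24 * 60 * 60, '/');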

    Read the article

  • Best SEO practices for mobile URLs: 301, rel=canonical, or something else?

    - by Chris
    I am developing a site with a mobile version and am trying to figure out the appropriate way to manage the URLs for search engines. So far I've considered: (1) having a separate mobile site (m.example.com) with rel="canonical" links to the regular site; (2) putting both the mobile site and full site on one URL (example.com) and doing user agent sniffing. A third opinion comes from Spencer: "If you have a mobile site at a separate location or URL, you should 301 redirect each and every mobile page to its corresponding page on your main website. Employ user agent detection so that the mobile optimized version is served up if someone's coming in from a hand-held." - http://developer.practicalecommerce.com/articles/1722-Mobile-site-Development-Best-Practices-for-SEO-Usability Both 2 and 3 make it hard for a user who wants to switch to the full site or mobile site manually, but I'm not sure 1 is the best alternative. What's the best way to write URLs for a mobile site?
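
    For option 1 (a separate m.example.com), a minimal sketch of the bidirectional annotations Google documents for separate mobile URLs; unlike option 3 there is no forced redirect, so users can still switch versions manually. The page URLs and the max-width value are placeholders:

        <!-- On the desktop page: point to the mobile equivalent -->
        <link rel="alternate"
              media="only screen and (max-width: 640px)"
              href="http://m.example.com/page-1">

        <!-- On the mobile page (m.example.com/page-1): point back to the desktop URL -->
        <link rel="canonical" href="http://www.example.com/page-1">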

    Read the article

  • Why does flush dns often fail to work?

    - by Sharen Eayrs
        C:\Windows\system32>ipconfig /flushdns
        Windows IP Configuration
        Successfully flushed the DNS Resolver Cache.

        C:\Windows\system32>ping beautyadmired.com
        Pinging beautyadmired.com [xxx.45.62.2] with 32 bytes of data:
        Reply from xxx.45.62.2: bytes=32 time=253ms TTL=49
        Reply from xxx.45.62.2: bytes=32 time=249ms TTL=49
        Reply from xxx.45.62.2: bytes=32 time=242ms TTL=49
        Reply from xxx.45.62.2: bytes=32 time=258ms TTL=49
        Ping statistics for xxx.45.62.2:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 242ms, Maximum = 258ms, Average = 250ms

    My site should point to xx.73.42.27. I changed the name server three hours ago, but the domain still resolves to xxx.45.62.2. What actually happens after we change the name server, and what are we waiting for? I have already flushed the DNS, so why does it still point to the wrong IP? Also, most other people, who do not have this entry cached, still end up at the wrong IP.

    Read the article

  • Dropped impressions 25 days after restructure

    - by Hamid
    Our website is a non-English property website (moshaver.com), similar to rightmove.co.uk. In September 2012 our website was adversely affected by Panda, causing our incoming clicks from Google to drop from around 3,000 to less than a thousand. We were hoping that Google would eventually realize we are not a spam website and things would get better. However, by August 2013 we were almost sure we needed to do something, so we started to restructure our web content. We used the canonical tag on our search results pages to point to our listing pages, used the noindex tag on listing pages that have no properties at the moment, and changed title tags to more friendly ones, in addition to other changes. Our changes took effect on 10th August. As shown in the graph taken from the Search Engine Optimization section of Google Analytics, these changes resulted in an increase in the number of times Google displayed our results: our impressions almost doubled starting 15th August. However, as the graph also shows, our CTR dropped from that date from around 15% to 8%. This might have been because of our changed title tags (so people were less likely to click on them), or it might be normal for increased impressions. This situation continued until 10th September, when our impressions decreased dramatically to less than a thousand. That is almost 30% of our original impressions (before the restructure) and 15% of the new impressions. At the same time our CTR increased dramatically, to around 50%. I have two theories for this increase. The first is that these statistics are less accurate at lower impression counts. The second is that Google is now only displaying our results for queries directly related to our website (our name, our URL), and not for general terms such as "apartments in a specific city"; the second theory would also explain the dramatic decrease in impressions. After digging into the analytics data a little more, I constructed a table breaking down our impressions, clicks and CTR across different Google products (web and image search) and in total. What I understand from this table is that most of our increased impressions after the restructure were in image search, and I don't think image search users would be looking for the kind of content on our website. Furthermore, it shows that the drop in our web search CTR is not as dramatic as the drop in overall CTR (-30% compared to -60%). I thought posting it here might help you understand the situation better. Is it possible that Google tested our new structure for 25 days and then decided to decrease our impressions because of the new low CTR? Or should we look for another factor? If this is the case, how long does it usually take for Google to give us another chance? It has been one month since our impressions dropped.

    Read the article

  • SEO for a list of products with filters

    - by dana
    I am wondering if there is a recommended "best practice" for product search SEO. I know to create a dynamic sitemap file that lists links to all products on the site. However, I also want to implement a bookmarkable "advanced search". Should I let search engines index any of the results? Take the following parameters for a search on a make-believe used car website: minprice (minimum price in dollars), maxprice (maximum price in dollars), make (honda, audi, volvo), model (accord, A4, S40), minyear (minimum model year), maxyear (maximum model year), minmileage (minimum mileage), maxmileage (maximum mileage). Given these parameters, there could be an infinite number of search combinations: price between $10,000 and $20,000 (/search?minprice=10000&maxprice=20000), Audis with less than 50k miles (/search?model=audi&maxmileage=50000), more than 100,000 miles and less than $5,000 (/search?minmileage=100000&maxprice=5000), etc. Over time, there may be inbound links to a variety of these searches, yet they are all slices of the same data. Should I allow all of these searches to be indexed?
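
    One common compromise, sketched below on the assumption that filtered results live under the /search URL shown above: let the crawler fetch the result pages and follow the links on them, but keep the endless parameter combinations themselves out of the index, while the product detail pages and the sitemap stay indexable.

        <!-- Emitted in the <head> of every /search?... results page -->
        <meta name="robots" content="noindex, follow">

    A coarser alternative is a robots.txt rule such as "Disallow: /search", but that also stops the crawler from fetching those pages at all, so it cannot follow the links it would otherwise find there.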

    Read the article

  • How to prevent access to website without SSL connection?

    - by CraigJ
    I have a website with an SSL certificate installed, so that if I access the website using https instead of http I will connect over a secure connection. However, I have noticed that I can still access the website non-securely, i.e. by using http instead of https. How can I prevent people from using the website in a non-secure manner? If I have a directory on the website, e.g. samples/, can I prevent non-secure connections to just this directory?
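
    A minimal sketch of the usual Apache approach (assuming the site runs on Apache with mod_rewrite, which the question doesn't say): redirect every plain-HTTP request to the same URL over HTTPS. Placing these lines in an .htaccess file inside samples/ instead of the site root would restrict the redirect to just that directory.

        RewriteEngine on

        # If the request did not arrive over TLS, send a permanent (301)
        # redirect to the same host and path on https://
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]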

    Read the article

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up - but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer were able to log into the site and see the progress of their projects. I do hope you know what I'm talking about, and that you'll be able to offer some suggestions of either web-based services that offer something along these lines, or of some open-source solution or something like that? Thank you

    Read the article

  • How to create the right loop / jquery / picture wipe? [migrated]

    - by Razor
    <script type="text/javascript" src="slider/jquery.touch-gallery-1.0.0.js"></script> <script type="text/javascript" src="slider/jquery.touch-gallery-1.0.0.min.js"></script> <script> $('body').live("click", function() { for (var i=0; i<4; i++) { alert("bla: "+i); } }); $('body').live("swipeleft", function(){ var nextpage = $("#page00002").next('section[data-role="page"]'); $.mobile.changePage(nextpage, 'slide'); }); $('body').live("swiperight", function(){ var prevpage = $("#page1").prev('section[data-role="page"]'); $.mobile.changePage(prevpage, 'slide'); }); </script>

    Read the article

  • Author Bio on all pages - Is it duplicate content?

    - by Rana Prathap
    In a website with user-generated content, I provide an author bio under every article on the site. The author bio will be the same under every article the same author wrote. For some authors, the author bio is no longer than a couple of sentences, but for some descriptive writers it is a good 100 words. These 100 words get repeated on almost 15 pages, some of them without substantial original content (such as haikus). Will this lead to duplicate content?

    Read the article

  • Plain Text email support: Is it still needed in 2011?

    - by murdoch
    For many years I have been building emails that get sent out by my web apps as multi-part messages, with a text part and an HTML part, to allow users of plain-text-only email clients to fall back to the text version. However, I have recently been developing a rather complex email that doesn't translate well to text, so in 2011 is there really any need to provide a textual alternative? How many people out there are actually still only able to see plain-text emails?
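
    For reference, a minimal sketch of what the multipart/alternative structure being described looks like on the wire; the boundary string and body text are placeholders. Parts are listed in order of increasing preference, so HTML-capable clients render the HTML part and plain-text-only clients fall back to the first part:

        Content-Type: multipart/alternative; boundary="frontier"

        --frontier
        Content-Type: text/plain; charset="utf-8"

        Plain-text version of the message goes here.

        --frontier
        Content-Type: text/html; charset="utf-8"

        <html><body><p>HTML version of the message goes here.</p></body></html>

        --frontier--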

    Read the article
