Search Results

Search found 33009 results on 1321 pages for 'google index'.

  • How can I optimize my AJAX calls to deliver in 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google Instant results are my benchmark. When I look at Google, its 50-60 ms response times baffle me; they look insane. In comparison, mine take around 500 ms. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd tuned. My site is also behind Cloudflare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance?

    Edit: People seemed annoyed that I compared myself to Google and that the question is very generic; sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server's processing time is just a fraction of a millisecond? Assume the results are served from Nginx/Varnish with a cache hit and that the response sizes remain more or less the same. Here are some answers I would give myself:

      - Ensure the results are HTTP-compressed.
      - Ensure SPDY if you are on HTTPS.
      - Ensure initcwnd is set to 10 and slow start is disabled on Linux machines.

    Etc. I don't think I'll end up at Google's 60 ms, but your collective expertise can easily help shave off 100 ms, and that's a big win.
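    On the Linux tuning items above, this is a minimal sketch of the usual commands; the gateway address and interface name are placeholders for whatever the server actually uses:

        # raise the initial congestion window so a small response fits in one round trip
        ip route change default via 192.168.1.1 dev eth0 initcwnd 10
        # stop keep-alive connections from falling back into slow start while idle
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    The first command must be reapplied after reboot (or put in a boot script); the second can be persisted in /etc/sysctl.conf.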

    Read the article

  • DNS issues on my iPhone

    - by mattalexx
    I'm trying to call up "https://m.google.com" on my iPhone on my home WiFi. Safari says it "cannot verify server identity" for m.google.com, and when I press Details, it refers to https://m.google.com as "mattserver". "mattserver" is the name of my development server, a Linux box on my home network. This stinks of DNS issues to me. Accessing the unencrypted version of that URL ("http://m.google.com") gives me a blank page. What could be going on here? Is there a way to look at my router's logs somehow?
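    A quick way to test the DNS theory from a laptop on the same WiFi is to compare the answer from the resolver the router hands out against a public one (dig ships with macOS and Linux):

        # what does the local resolver say m.google.com is?
        dig m.google.com +short
        # compare with Google's public resolver
        dig @8.8.8.8 m.google.com +short

    If the first command returns the development server's address while the second returns Google's, then something on the local network (the router's DNS settings, a hosts entry, or a DNS server running on the dev box) is answering for google.com.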

    Read the article

  • Convert Apache rewrite rules to nginx

    - by Shiyu Sekam
    I want to migrate an Apache setup to Nginx, but I can't get the rewrite rules working in Nginx. I had a look at the official nginx documentation on converting rewrite rules (http://nginx.org/en/docs/http/converting_rewrite_rules.html), but I still have trouble with the conversion. I used http://winginx.com/en/htaccess to convert my rules, but that only partly works: the / part looks okay, and the /library part as well, but the /public part doesn't work at all.

    Apache part:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /srv/www/Web

            <Directory /srv/www/Web>
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteRule ^$ public/ [L]
                RewriteRule (.*) public/$1 [L]
            </Directory>

            <Directory /srv/www/Web/library>
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /srv/www/Web/public>
                RewriteEngine On
                RewriteCond %{QUERY_STRING} ^pid=([0-9]*)$
                RewriteRule ^places(.*)$ index.php?url=places/view/%1 [PT,L]
                # Extract search query in /search?q={query}&l={location}
                RewriteCond %{QUERY_STRING} ^q=(.*)&l=(.*)$
                RewriteRule ^(.*)$ index.php?url=search/index/%1/%2 [PT,L]
                # Extract search query in /search?q={query}
                RewriteCond %{QUERY_STRING} ^q=(.*)$
                RewriteRule ^(.*)$ index.php?url=search/index/%1 [PT,L]
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                # Rewrite all other URLs to index.php/URL
                RewriteRule ^(.*)$ index.php?url=$1 [PT,L]
            </Directory>

            Order deny,allow
            deny from all

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn

            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Nginx config:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            root /srv/www/Web;
            index index.html index.php;
            server_name localhost;

            location / {
                rewrite ^/$ /public/ break;
                rewrite ^(.*)$ /public/$1 break;
            }

            location /library {
                deny all;
            }

            location /public {
                if ($query_string ~ "^pid=([0-9]*)$") {
                    rewrite ^/places(.*)$ /index.php?url=places/view/%1 break;
                }
                if ($query_string ~ "^q=(.*)&l=(.*)$") {
                    rewrite ^(.*)$ /index.php?url=search/index/%1/%2 break;
                }
                if ($query_string ~ "^q=(.*)$") {
                    rewrite ^(.*)$ /index.php?url=search/index/%1 break;
                }
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?url=$1 break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    I haven't written the original rule set, so I'm having a hard time converting it. Would you mind giving me a hint on how to do this easily, or can you help me convert it, please? I really want to switch over to php5-fpm and nginx :) Thanks
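    One more data point that may help: the usual nginx idiom for "serve the file if it exists, otherwise hand the path to the front controller" is try_files rather than if/rewrite chains. A sketch under the assumption that everything is served out of /public and that index.php reads the path from the url query argument (untested against the real app, so treat it as a starting point):

        server {
            root /srv/www/Web;
            index index.php;
            server_name localhost;

            location /library { deny all; }

            location / {
                # serve an existing file from /public, otherwise route the
                # request path (and original query string) into index.php
                try_files /public$uri /public$uri/ /public/index.php?url=$uri&$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }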

    Read the article

  • IE Search Provider: Specifying gTLD / Country-Specific Site

    - by jwa
    I am based in the UK, and as such typically use google.co.uk as my search engine. However, my employer is based in continental Europe, and thus my internet proxy is located overseas. As a result, IP geolocation places me outside of the UK. Google detects this and will therefore redirect searches from my address bar to a foreign Google domain. This leads to "local" answers ranking higher, many of which are not written in English! Is there a specific search provider / URL I can give IE that will use a specific Google country domain (.co.uk) rather than performing the location-based redirect?
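    If your IE version lets you add a search provider from a URL template, pointing the template straight at the .co.uk domain should sidestep the geo-redirect. A sketch using the standard OpenSearch placeholder:

        http://www.google.co.uk/search?q={searchTerms}

    Separately, visiting google.com/ncr ("no country redirect") sets a cookie that tells Google to stop redirecting you to a local domain, although a proxy that strips or resets cookies would defeat that.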

    Read the article

  • How do I point a new domain to start on a page that's not index.html on separate hosting?

    - by Owen Campbell-Moore
    I'm using a service (CMS/host) called Squarespace to host my site, and today I'm registering the domain for it. Basically, how do I make it so that when somebody types www.tedxoxford.com, it points at http://www.tedxoxford.com/landing (currently http://tedxoxford.squarespace.com/landing) instead of the default index? Is this possible? Squarespace is quite a restricted CMS, which means that your logo etc. all point to the index, so I don't want people ending up on my landing/splash page every time they want the home page, only the first time they type in the URL. A dirty hack would be to check the referrer and redirect anyone hitting the index to the landing page, but that's a lot of loading overhead I'd rather avoid...
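    For reference, a client-side sketch of a gentler version of that hack: instead of sniffing the referrer, set a cookie so only a visitor's first hit on the index gets bounced to /landing. This assumes Squarespace lets you inject a script into the index page, and the cookie name is made up:

        <script>
        // first visit only: remember we've been here, then go to the landing page
        if (document.cookie.indexOf("seenLanding=1") === -1) {
            document.cookie = "seenLanding=1; path=/; max-age=31536000"; // ~1 year
            window.location.replace("/landing");
        }
        </script>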

    Read the article

  • Preventing search engines from indexing duplicate content

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all of the content, the URL structure, and the database are the same; except for a few URLs, the only difference will be the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how to handle the duplicate-content issue when I make the new domain live. Should I instead block search engines from indexing/crawling my old domain? I am new to this field and not sure whether this is actually a duplicate-content issue at all.
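    The redirect entries would typically look something like this (a simplified .htaccess sketch for the old domain, assuming mod_rewrite):

        RewriteEngine On
        # on www.oldurl.com: send every request to the same path on the new domain
        RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]

    With the 301s in place on the old domain, the usual advice is to leave the new domain crawlable; the redirects themselves tell search engines which copy is canonical.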

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:

        Post:     domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
        Category: domain.com/blog/index/cosplay/

    to:

        Post:     domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
        Category: domain.com/cosplay/

    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What worries me even more is that the pages crawled per day in Webmaster Tools started dropping drastically a few days ago and have just reached a new low in months (from over 2000 to 700). Should I be worried, or will things sort themselves out eventually?
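    A sanity check worth doing is confirming that the 301s really cover both old URL patterns. A hedged mod_rewrite sketch of the mapping described above (illustrative rules, not the site's actual configuration):

        # old post URLs: /blog/post/{id}/{category}/{slug}/ -> /{category}/{slug}-{id}/
        RewriteRule ^blog/post/([0-9]+)/([^/]+)/([^/]+)/?$ /$2/$3-$1/ [R=301,L]
        # old category URLs: /blog/index/{category}/ -> /{category}/
        RewriteRule ^blog/index/([^/]+)/?$ /$1/ [R=301,L]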

    Read the article

  • How do I prevent Google users from finding my admin index page?

    - by krish
    I am running a website, but for some days I have stopped it and put up the under-construction page, because the index of the admin page is visible to the outside world through a Google search. One of my friends told me that my website's index is visible and that it is one step away from accessing the password file, and he showed me this very simply using a Google search. How can I prevent this? I am hosting my site with a hosting company, and I reported this to them, but they simply replied that it is still secure and that I need not worry. Do I really not need to worry, and can I continue running my site with the admin index visible?
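    Two small sketches of the usual fixes, assuming Apache and an admin area at /admin/ (the path is illustrative). Note that robots.txt is only a polite request and is not access control, so the directory should also be password-protected:

        # .htaccess in the web root: turn off auto-generated directory listings
        Options -Indexes

        # robots.txt in the web root: ask crawlers to skip the admin area
        User-agent: *
        Disallow: /admin/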

    Read the article

  • How to test robots.txt against Googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of the answer to "How to check if googlebot will index a given url?" As suggested there, I went to Webmaster Tools and tested the contents of my robots.txt file. However, that only tells me whether the file's content is valid. For my scenario, I need to test whether URLs matching a disallowed pattern are actually kept out of the index. For example, I have something like this in my robots.txt:

        Disallow: /pattern*

    My understanding is that URLs containing the word "pattern" should not be crawled, but how do I test that this rule is enforced while the website is indexed?
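    Two notes, hedged because Webmaster Tools changes over time. First, the Disallow value is a path prefix, so Disallow: /pattern already covers everything starting with /pattern and the trailing * is unnecessary. Second, the robots.txt tester in Webmaster Tools accepts specific URLs and reports whether Googlebot sees them as blocked, which is the per-URL test being asked for. For example (hypothetical URLs):

        User-agent: *
        Disallow: /pattern

        # URLs to paste into the tester:
        #   http://example.com/pattern-page.html  -> should report Blocked
        #   http://example.com/other/page.html    -> should report Allowed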

    Read the article

  • "X-Robots-Tag: noindex" on an HTTP 301 response

    - by Peter O.
    I understand that a resource with X-Robots-Tag: noindex forces some search engines, including Google, not to index the resource further. I also understand that an HTTP 301 response causes search engines to use the redirected URL instead of the original URL to refer to the resource. But what happens if both "X-Robots-Tag: noindex" and status code 301 occur on the same response? It's likely that the original URL will no longer be indexed, but will that cause the redirected URL to no longer be indexed too? This possibility is not mentioned in the X-Robots-Tag specification.
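    For concreteness, here is what the combined response looks like on the wire (hypothetical URLs; easy to reproduce with curl -I against the old URL). Both headers arrive in the same response, which is exactly what makes the interaction ambiguous:

        HTTP/1.1 301 Moved Permanently
        Location: http://example.com/new-url
        X-Robots-Tag: noindex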

    Read the article


  • Does having a website inside a frame (<frameset>) help or hurt search engine rankings?

    - by rajesh.magar
    I have been working to promote my website for a long time, but I am not getting traffic in proportion to the work I have put into it. My website runs online under another domain using a frameset, so is that somewhere affecting search indexing and ranking? My parent website is http://www.battle-cancer.com, and http://www.elimaysupplements.com/ runs it online using:

        <frameset frameborder=0 framespacing=0 border=0 rows="100%,*" noresize>
          <frame name="frame" src="http://www.battle-cancer.com" noresize>
        </frameset>

    Read the article

  • After a domain change, what can I do to recover lost traffic, rankings, impressions etc? [duplicate]

    - by Felix
    This question already has an answer here: "How do I rename a domain and preserve PageRank?" I moved my site to a legacy exact-match domain I purchased about a couple of months ago. I have seen a significant reduction in traffic, impressions, and rankings. I took all the right steps and followed best practices: change of address in GWT, mapping the old site hierarchy to the new site for the 301 redirects, etc. Indexation has gone through the usual Google process: the old site has all but disappeared from the index, and the new site is indexed, albeit with some 404 errors which I am addressing. Does anyone else who has gone through the domain-change process have any thoughts or advice? Thanks!

    Read the article

  • A drop in SERP after following webmaster guidelines [on hold]

    - by digiwig
    So here's a puzzle for all you SEO gurus out there. I recently launched my own site. I had target keywords which ranked very well for about one month, within the top five and even appearing in first place. In an attempt to maintain good positioning, I followed the guidelines by:

      - adding robots.txt and an XML sitemap
      - redirecting non-www to www
      - redirecting index.php to the root domain
      - adding htaccess 301 redirects for old pages
      - adding rich snippets
      - creating a Google+ account and verifying my picture so it appears in results
      - working through each of the Webmaster Tools issues with duplicate titles and meta descriptions
      - improving the header-tag document outlines

    I even created a few more blog posts to keep the content fresh and moving. So now my website appears on page 2 for my target keywords, and all because I followed the guidelines. What is happening? I see competitors with stagnant content superglued to position 1.

    Read the article

  • Editing Django's admin index <div id='module'> tag

    - by zen
    I am new to the Django framework. On Django's admin index page, I'd like to get rid of the "s" at the end of my model names; the link should read "Model", not "Models". Example:

        <div class="module">
          <table summary="Models available in the my application.">
            <caption><a href="" class="section">My application</a></caption>
            <tr>
              <th scope="row"><a href="model/">Models</a></th>
              <td><a href="model/add/" class="addlink">Add</a></td>
              <td><a href="model/" class="changelink">Change</a></td>
            </tr>
          </table>
        </div>

    I know of a way to do this, but I am really looking for the file I should edit. Where is it, and what exactly should I do? I can't seem to pinpoint where it is coming from.
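    A hedged pointer rather than a file to edit: the label is not hard-coded in a template. The admin builds it from the model's verbose_name_plural Meta option, which defaults to the model name plus "s". A minimal sketch, assuming the model really is named Model:

        # models.py
        from django.db import models

        class Model(models.Model):
            name = models.CharField(max_length=100)

            class Meta:
                verbose_name = "model"
                verbose_name_plural = "model"  # the admin index displays this label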

    Read the article

  • std::vector iterator or index access speed question

    - by Simone Margaritelli
    Just a stupid question. I have a std::vector<SomeClass *> v; in my code, and I need to access its elements very often in the program, looping over them forward and backward. Which is the faster access type of the two?

    Iterator access:

        std::vector<SomeClass *> v;
        std::vector<SomeClass *>::iterator i;
        std::vector<SomeClass *>::reverse_iterator j;
        // i loops forward, j loops backward
        for (i = v.begin(), j = v.rbegin(); i != v.end() && j != v.rend(); i++, j++) {
            // some operations on v items
        }

    Subscript access (by index):

        std::vector<SomeClass *> v;
        // note: signed indices; with an unsigned j, the condition j >= 0
        // would always be true and j would wrap around below zero
        int i, j, size = (int)v.size();
        // i loops forward, j loops backward
        for (i = 0, j = size - 1; i < size && j >= 0; i++, j--) {
            // some operations on v items
        }

    And does const_iterator offer a faster way to access vector elements in case I do not have to modify them? Thanks in advance.

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

    Sample 1 (MI score of 71):

        public static String Sha1(String plainText)
        {
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                Byte[] text = Encoding.Unicode.GetBytes(plainText);
                Byte[] hashBytes = sha1.ComputeHash(text);
                return Convert.ToBase64String(hashBytes);
            }
        }

    Sample 2 (MI score of 73):

        public static String Sha1(String plainText)
        {
            Byte[] text, hashBytes;
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                text = Encoding.Unicode.GetBytes(plainText);
                hashBytes = sha1.ComputeHash(text);
            }
            return Convert.ToBase64String(hashBytes);
        }

    I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't; I would clearly be just playing with numbers, and it isn't truly any more readable or maintainable at that point. I am curious, though, as to what the logic might be behind the increase in this case. It's obviously not line count.

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo, and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran git fsck, which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail.

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.
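    A commonly suggested first step, sketched here, is to verify and rebuild the packs on the remote repository, tightening the pack limits in case the large Windows binaries are what kills git-pack-objects (the size values are guesses to tune):

        # on the remote machine, inside the repository
        git fsck --full
        # rebuild all packfiles from scratch with tighter memory/size limits
        git config pack.windowMemory 256m
        git config pack.packSizeLimit 512m
        git repack -a -d -f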

    Read the article

  • Strange EListError occurrence (when accessing a variable-defined index)

    - by michal
    Hi, I have a TList which stores some objects, and a function which does some operations on that list:

        function SomeFunct(const AIndex: integer): IInterface;
        begin
          if (AIndex > -1) and (AIndex < fMgr.Windows.Count) then
          begin
            if (fMgr.Windows[AIndex] <> nil) then
            begin
              if not Supports(TForm(fMgr.Windows[AIndex]), IMyFormInterface, result) then
                result := nil;
            end;
          end
          else
            result := nil;
        end;

    Now, what is really strange is that accessing fMgr.Windows with any proper index causes an EListError, yet if I hard-code the index (for example, replace AIndex with 0 or 1) it works fine. I tried debugging it: the function gets called twice, with arguments 0 and 1 (as supposed). While AIndex = 0, evaluating fMgr.Windows[AIndex] results in an EListError at $someAddress, while evaluating fMgr.Windows[0] instead returns proper results. What is even stranger, even though there is an EListError, the function returns proper data and doesn't show anything, just info on two EListError memory leaks on shutdown (using FastMM). Any ideas what could be wrong? Thanks in advance, michal

    Read the article

  • Access 2003 VBA: Return only the index of the last item selected in a ListBox

    - by Eric D. Johnson
    I will preface this by saying this is my first time using list boxes, and earlier posts were criticized for lacking detail, so all help is greatly appreciated and I hope this is enough information without being overkill.

    Currently, I have a list box updating a junction table with an on-click event (it iterates through the selected items and, if they are not in the table, adds them). The list box is also updated by an option group: based on the option group value, a query populates the list with the appropriate items, and they are selected/highlighted based on the junction table. Also, when an item is a "sub-category", its "category" is selected as well. This functions perfectly, until I ask it to do more...

    Problem 1: I need to differentiate "categories" of items from each other, so I have included a blank item in the list box to add a space between categories. When the blank items are present, the list box does not update the junction table properly, and vice versa.

    Problem 2: My users want to be able to deselect the "category" under certain circumstances. This is fine: just deselect the "category" after the "sub-category" is selected. However, the "category" is re-selected whenever the list box is clicked again, because the code iterates through all entries.

    Perceived solution for both problems: return only the index of the item that was (de)selected and manipulate accordingly. Is this possible? If so, how? Or should I take a different approach?
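    A minimal sketch of that approach in Access VBA (control and variable names are hypothetical): keep a snapshot of the selection state and, on each click, act only on the rows whose state changed.

        ' module-level snapshot of the list box's selection state
        Private mPrev() As Boolean
        Private mHavePrev As Boolean

        Private Sub lstItems_Click()
            Dim i As Long
            If Not mHavePrev Then
                ReDim mPrev(0 To Me.lstItems.ListCount - 1)
                mHavePrev = True
            End If
            For i = 0 To Me.lstItems.ListCount - 1
                If Me.lstItems.Selected(i) <> mPrev(i) Then
                    ' i is the row that was just (de)selected
                    Debug.Print "Row " & i & IIf(Me.lstItems.Selected(i), _
                        " was selected", " was deselected")
                    mPrev(i) = Me.lstItems.Selected(i)
                End If
            Next i
        End Sub

    With the changed row in hand, the code can decide case by case whether to re-select a parent category or honor a deliberate deselection.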

    Read the article

  • C# String.Replace with a start/index (Added my (slow) implementation)

    - by Chris T
    I'd like an efficient method that would work something like this. EDIT: Sorry, I didn't put what I'd tried before; I have updated the example now.

        // Replaces the first instance, or up to max instances, starting the
        // search at index start. Returns the index of the first replacement,
        // or -1 if nothing was replaced.
        public int MyReplace(ref string source, string org, string replace, int start, int max)
        {
            int ret = -1;
            int searchFrom = start;
            for (int i = 0; i < max; i++)
            {
                // Find the next instance of the search string
                int x = source.IndexOf(org, searchFrom);
                if (x < 0) break;
                if (ret < 0) ret = x;
                // Remove the original and insert the replacement
                source = source.Remove(x, org.Length).Insert(x, replace);
                // Continue searching after the inserted text
                searchFrom = x + replace.Length;
            }
            return ret;
        }

        string source = "The cat can fly but only if he is the cat in the hat";
        int i = MyReplace(ref source, "cat", "giraffe", 8, 1);
        // Results in the string "The cat can fly but only if he is the giraffe in the hat"
        // i contains the index of the first letter of "giraffe" in the new string

    The only reason I'm asking is that I'd imagine my implementation getting slow with 1,000s of replaces.

    Read the article

  • I'm trying to build a query to search against a fulltext index in MySQL

    - by Rockinelle
    The table's schema is pretty simple. I have a child table that stores a customer's information, like address and phone number. The columns are user_id, fieldname, and fieldvalue, so each row holds one item: a phone number, an address, or an email. This allows an unlimited number of each type of information for each customer. The people on the phones need to look these customers up quickly as they call into our call center. I have experimented with using LIKE '%...%', and I'm working with a FULLTEXT index now. My queries work, but I want to make them more useful, because if someone searches for a telephone area code like 805, that will bring up many people; they then add the name Bill to narrow it down ('805 Bill'), and it shows every customer that has 805 OR Bill. I want it to do AND searches across the multiple rows belonging to each customer. Currently I'm using the query below to grab the user_ids, and later I do another query to fetch all the details for each user and build their complete record:

        SELECT DISTINCT `user_id`
        FROM `user_details`
        WHERE MATCH (`fieldvalue`) AGAINST ('805 Bill')

    Again, I want to run the above query against groups of rows that belong to a single user, where those users have to match all the search keywords. What should I do?
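    One way to AND terms that may live in different rows of the same user is a self-join, one table alias per search term. A sketch against the schema above, using boolean mode so each term is required; note that MySQL's default ft_min_word_len of 4 silently drops a three-character token like 805, so that setting may need lowering for this to work:

        SELECT DISTINCT a.`user_id`
        FROM `user_details` a
        JOIN `user_details` b ON b.`user_id` = a.`user_id`
        WHERE MATCH (a.`fieldvalue`) AGAINST ('+805' IN BOOLEAN MODE)
          AND MATCH (b.`fieldvalue`) AGAINST ('+Bill' IN BOOLEAN MODE);

    Each extra keyword becomes one more join, so the query can be built dynamically from however many terms the operator types.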

    Read the article

  • X-Forwarded-For causing Undefined index in PHP

    - by bateman_ap
    Hi, I am trying to integrate some third-party tracking code into one of my sites; however, it is throwing up some errors, and their support isn't being much use, so I want to try and fix their code myself. Most of it I have fixed, but this function is giving me problems:

        private function getXForwardedFor()
        {
            $s =& $this;
            $xff_ips = array();
            $headers = $s->getHTTPHeaders();
            if ($headers['X-Forwarded-For']) {
                $xff_ips[] = $headers['X-Forwarded-For'];
            }
            if ($_SERVER['REMOTE_ADDR']) {
                $xff_ips[] = $_SERVER['REMOTE_ADDR'];
            }
            return implode(', ', $xff_ips); // will return blank if not on a web server
        }

    In my dev environment, where I am showing all errors, I am getting:

        Notice: Undefined index: X-Forwarded-For in /sites/webs/includes/OmnitureMeasurement.class.php on line 1129

    Line 1129 is:

        if ($headers['X-Forwarded-For']) {

    If I print out $headers I get:

        Array
        (
            [Host] => www.domain.com
            [User-Agent] => Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
            [Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
            [Accept-Language] => en-gb,en;q=0.5
            [Accept-Encoding] => gzip,deflate
            [Accept-Charset] => ISO-8859-1,utf-8;q=0.7,*;q=0.7
            [Keep-Alive] => 115
            [Connection] => keep-alive
            [Referer] => http://www10.toptable.com/
            [Cookie] => PHPSESSID=nh9jd1ianmr4jon2rr7lo0g553; __utmb=134653559.30.10.1275901644; __utmc=134653559
            [Cache-Control] => max-age=0
        )

    I can't see X-Forwarded-For in there, which I think is causing the problem. Is there something I should add to the function to take this into account? I am using PHP 5.3 and Apache 2 on Fedora.
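    The notice happens because the header simply is not present on direct connections, so the array lookup has to be guarded. A sketch of the changed test inside getXForwardedFor():

        // only read the header when a proxy actually sent it
        if (isset($headers['X-Forwarded-For']) && $headers['X-Forwarded-For'] !== '') {
            $xff_ips[] = $headers['X-Forwarded-For'];
        }

    The same isset() guard on $_SERVER['REMOTE_ADDR'] would silence the equivalent notice when the code runs outside a web server, which the function's own comment hints at.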

    Read the article
