Search Results

Search found 33009 results on 1321 pages for 'google index'.

Page 212/1321

  • Nested attributes in the index view?

    - by user283179
    How would I show one of many nested objects in the index view?

        class Album < ActiveRecord::Base
          has_many :photos
          accepts_nested_attributes_for :photos,
            :reject_if => proc { |a| a.all? { |k, v| v.blank? } }
          has_one :cover
          accepts_nested_attributes_for :cover
        end

        class AlbumsController < ApplicationController
          layout "mini"

          def index
            @albums = Album.find(:all, :include => [:cover]).reverse
            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @albums }
            end
          end
        end

    This is what I have so far. I just want to show a cover for each album. Any info on this would be a massive help!

    Read the article

  • Multiple 301 redirects and massive loss of ranking

    - by DoesNotCompute
    I just remade a website from scratch for a client, and the client asked me to preserve their rankings by adding 301 redirects from the original URLs to the new ones. For instance:

        http://plumber-directory.my-website.com/john-smith-city-1.php

    became

        http://directory.my-website.com/plumber/city/john-smith.html

    So I put the website online for a few days, until the 301s had partially kicked in on the Google results. Then the client called me back to tell me that his boss wanted to switch back to the old URLs. So I put a new 301 redirect in place:

        http://directory.my-website.com/plumber/city/john-smith.html

    now reverts to

        http://plumber-directory.my-website.com/john-smith-city-1.php

    Because Google only had a few days to assimilate the new URLs, it now has both kinds of URLs in its result pages. The ranking of the website also keeps falling day by day; I suspect Google is mistaking those redirects for duplicate content. Is there something I can do to avoid a total loss of rankings?

    Read the article

  • CodeIgniter 404 can't find index.php (only on real server, not on virtual server)

    - by Lukas Oppermann
    Hey, I have a working web page built with CodeIgniter. I just uploaded it to my web server and it gives me a 404 error. The browser address is "web-page.com/folder/en/about" and the base URL in the config is "web-page/folder/". This is also in config.php (I tried AUTO for the protocol, but that does not work either):

        $config['index_page'] = "";
        $config['uri_protocol'] = "QUERY_STRING";

    The index.php is in "web-page.com/folder/" and my .htaccess is at "web-page.com/folder/.htaccess". The content of the .htaccess is:

        AddCharset utf-8 .css .html .xhtml
        Options +FollowSymlinks
        RewriteEngine on
        RewriteBase /folder/
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond $1 !^(index\.php|images|media|layout|css|libs|robots\.txt)
        RewriteRule ^(.*)$ /folder/index.php?/$1 [L]

    Do you have any tip or idea what I can try? I checked all the permissions; even with 777 it does not work. Thanks in advance. Lukas

    Read the article

  • HTMLAgilityPack ChildNodes index works, named node does not

    - by XgenX
    I am parsing an XML API response with HTMLAgilityPack. I am able to select the result items from the API call. Then I loop through the items and want to write the child nodes to a table. When I select ChildNodes by index, like this:

        sItemId = dnItem.ChildNodes(0).innertext

    I get the proper itemId result. But when I try:

        sItemId = dnItem.ChildNodes("itemId").innertext

    I get "Referenced object has a value of 'Nothing'." I have tried "itemID[1]", "/itemId[1]" and a variety of strings. I have tried SelectSingleNode and ChildNodes.Item("itemId").innertext. The only one that has worked is using the index. The problem with using the index is that sometimes child elements are omitted in the results, and that throws off the index. Does anybody know what I am doing wrong?
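
    One thing worth checking (an assumption, not a confirmed diagnosis): HtmlAgilityPack stores parsed element names in lower case, so a lookup on the mixed-case name "itemId" can come back empty while the positional index still works. A minimal C# sketch of a name-based lookup under that assumption (the XML fragment and variable names are hypothetical):

        using System;
        using HtmlAgilityPack;

        class Program
        {
            static void Main()
            {
                // Hypothetical fragment standing in for the API response.
                var doc = new HtmlDocument();
                doc.LoadHtml("<item><itemId>123</itemId><title>Widget</title></item>");

                HtmlNode item = doc.DocumentNode.SelectSingleNode("//item");

                // Element names are lower-cased by the parser, so query "itemid".
                HtmlNode idNode = item.SelectSingleNode("itemid");
                Console.WriteLine(idNode != null ? idNode.InnerText : "not found");
            }
        }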

    Read the article

  • C# collection/string .Contains vs collection/string.IndexOf

    - by Daniel
    Is there a reason to use .Contains on a string/list instead of .IndexOf? Most code that I would write using .Contains would shortly afterwards need the index of the item, and would therefore have to make both calls. But why not do both in one?

        if ((index = blah.IndexOf(something)) >= 0)
            // I know that Contains is true, and I also have the index
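
    For comparison, a small self-contained sketch of the two patterns (the list and values are illustrative):

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                var items = new List<string> { "alpha", "beta", "gamma" };

                // Two scans: Contains answers yes/no, then IndexOf finds the position.
                if (items.Contains("beta"))
                {
                    Console.WriteLine(items.IndexOf("beta")); // 1
                }

                // One scan: IndexOf returns -1 when the item is absent,
                // so a single call answers both questions.
                int index = items.IndexOf("beta");
                if (index >= 0)
                {
                    Console.WriteLine(index); // 1
                }
            }
        }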

    Read the article

  • How to get index using LINQ?

    - by codymanix
    Given a data source like this:

        var c = new Car[] {
            new Car { Color = "Blue", Price = 28000 },
            new Car { Color = "Red",  Price = 54000 },
            new Car { Color = "Pink", Price = 9999  },
            // ..
        };

    how can I find the index of the first car satisfying a certain condition with LINQ? EDIT: I could think of something like this, but it looks horrible:

        int firstItem = someItems.Select((item, index) => new { ItemName = item.Color, Position = index })
                                 .Where(i => i.ItemName == "purple")
                                 .First()
                                 .Position;

    Would it be best to solve this with a plain old loop?
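
    A sketch of two alternatives against the Car array above: Array.FindIndex avoids LINQ entirely, and the indexed Select can be tightened with FirstOrDefault (the "Pink" predicate stands in for the question's condition):

        using System;
        using System.Linq;

        class Car
        {
            public string Color { get; set; }
            public int Price { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var cars = new[]
                {
                    new Car { Color = "Blue", Price = 28000 },
                    new Car { Color = "Red",  Price = 54000 },
                    new Car { Color = "Pink", Price = 9999  },
                };

                // Array.FindIndex returns -1 when nothing matches.
                int byFindIndex = Array.FindIndex(cars, car => car.Color == "Pink");

                // Pure LINQ: pair each item with its index, then filter.
                var match = cars.Select((car, index) => new { car, index })
                                .FirstOrDefault(x => x.car.Color == "Pink");
                int byLinq = match != null ? match.index : -1;

                Console.WriteLine(byFindIndex); // 2
                Console.WriteLine(byLinq);      // 2
            }
        }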

    Read the article

  • AJAX CascadingDropdown - Setting the selected index - C# - ASP.NET

    - by rpm1984
    Hi, I have a CascadingDropDown on an ASP.NET page. The prompt text is "Select State" (a list of states). However, on a different version of this page (i.e. query string), I might want to set the selected index to "California", for example. How can I do this? The web service used by the AJAX control (i.e. GetStates) gets invoked at the same time the jQuery document.ready function is triggered (i.e. asynchronously), so when I try to set the selected index in jQuery, the items are not yet bound. Is there a way to attach a handler to the AJAX dropdown so that I can set the selected index once the web service call has completed and the items are bound? Thanks in advance.
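
    One server-side route worth trying, assuming the AjaxControlToolkit extender is in play: the CascadingDropDown extender exposes a SelectedValue property, and a value assigned to it in code-behind is applied after the web service populates the list. A C# sketch; the control ID, query-string key, and value format are all assumptions:

        using System;
        using System.Web.UI;

        public partial class StatePage : Page
        {
            // Normally declared in the designer file for the page.
            protected AjaxControlToolkit.CascadingDropDown ccdState;

            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    // Hypothetical query-string key carrying the desired state.
                    string state = Request.QueryString["state"];
                    if (!string.IsNullOrEmpty(state))
                    {
                        // Must match a value returned by the GetStates service.
                        ccdState.SelectedValue = state;
                    }
                }
            }
        }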

    Read the article

  • are keywords in URLs good SEO or needlessly redundant?

    - by Blazemonger
    A coworker and I are locked in a debate over the value of SEO keywords in the URL of a page. She wants to change all the filenames of the HTML pages of a fencing company so they look like residential-home-chicago.html, contact-chicago-contractor.html, and so on. She is convinced that because Google highlights keywords in URLs in search results, putting keywords there is more valuable. My position is that these do not improve SEO, since Google doesn't seem to give keywords in the URL any more weight than keywords in the body of the page, and might even give them less weight. In the meantime, they make it harder for me to find the pages I want when it's time to edit them, and the site as a whole looks cheap and spammy. Google's own SEO guide suggests to me that yes, keywords in URLs are useful, but not superior, and that they are more useful for human readability than for search engine rankings. I'm looking for authoritative sources that support either position, not blog articles from SEO companies trying to promote themselves.

    Read the article

  • Flex: Get an item from an AdvancedDataGrid given an index

    - by David Wolever
    I've got a subclass of AdvancedDataGrid showing a tree-like data structure. How can I, given the index returned by calculateDropIndex, get the item at that index? After reading through reams of code, it seems like the least terrible way is:

        var oldSelectedIndex:int = this.selectedIndex;
        var mouseOverIndex:int = this.calculateDropIndex(event);
        this.selectedIndex = mouseOverIndex;
        var item:* = this.selectedItem;
        this.selectedIndex = oldSelectedIndex;

    The other option seems to be tinkering with the iterator property, but judging by the way I've seen it used, that will get pretty hairy pretty quickly too. So, how can I get the item at a particular index in an AdvancedDataGrid without going insane?

    Read the article

  • Traffic fall after a server problem

    - by Sébastien
    I have a website whose traffic I analyse with Google Analytics. Day after day the traffic (mainly from Google search) increased, until I had a problem with my server. The server was offline for one day, and since then I no longer have as many users as I had before. Now it's as if the site is no longer referenced in Google's index (although when I type "site:mysite.com", I still get all the results). Do you know if this is normal behaviour, and whether the traffic will come back as before (the server had its problems two days ago)?

    Read the article

  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which has extension-less URLs, but Google and Bing have a wealth of links that they are crawling which no longer exist. So I'm trying to find out if there is a way, in robots.txt (or by some other method), to tell Google/Bing that any link ending in an extension isn't a valid link. Is this possible? For pages that I'm concerned a user may have saved as a favourite, I display a 404 page that lists the links to follow to reach the new page (I decided not to just redirect them, as I don't want to maintain those redirects forever). For Google/Bing's sake I do have the canonical tag in the header.

        User-agent: *
        Allow: /
        Disallow: /*.*

    EDIT: I just added the third line above and it APPEARS to do what I want: allow a path, but disallow a file. Can anyone confirm this?

    Read the article

  • How do bingbot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and assume they use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, so that I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?

    Read the article

  • NAVT WordPress Plugin - Not working on index.php

    - by Michael
    Hi there, I need to move my WordPress home page onto the actual index.php file, but for some bizarre reason the NAVT plugin doesn't work there. It also doesn't work on index.php when I put it in the header.php file; it works on all other pages as normal. ALSO, it does work in the footer.php file when viewing index.php, which is what makes it all the more confusing. Any ideas what it could be? I've disabled every other plugin, so I'm pretty sure there's nothing conflicting. It's a rather basic setup and I'm using the NAVT default settings. Thanks, Michael.

    Read the article

  • Is it possible to know the impressions of other websites?

    - by Saeed Neamati
    Google Webmaster Tools' dashboard gives you a big number called impressions, which, by the definition I've seen in Google Analytics, means the total number of times your site has become eligible for SERPs. I just don't know how to make use of this number, or how much its increase or decrease means to me, because I can't compare it with other websites. I mean, if the impressions of, say, a.com are 150,000 and mine are 50,000, then maybe I can infer that I need to triple my efforts to reach a.com. But seeing 50,000 alone, I have no clue at all how to interpret it. Is there any service or other way to know the impressions Google gives to other sites?

    Read the article

  • Best SEO practice for backlinking etc.

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise for search engines. I've been submitting between 5-20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums etc. to speak of the website in question. However, I don't seem to be making much of an impact in terms of Google. I know that SEO takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself and a few other SEs, but nobody is willing to talk about extensive SEO practices, as they normally don't want people knowing their formulas. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show whether it's exactly correct in what it's saying. If anyone could help it would be much appreciated, thank you very much.

    Read the article

  • allowing index access only with .htaccess

    - by YsoL8
    Hello, I have this in my .htaccess file, in the site root:

        Options -Indexes

        <directory ../.*>
        Deny from all
        </directory>

        <Files .htaccess>
        order allow,deny
        deny from all
        </Files>

        <Files index.php>
        Order allow,deny
        allow from all
        </Files>

    What I'm trying to achieve is to block folder and file access to anything that isn't called index.php, regardless of which directory is accessed. I have the folder part working perfectly, and the deny-from-all rule is working as well, but my attempt to allow access to index.php is failing. Basically, could someone tell me how to get it working?

    Read the article

  • How reliable are URIs like /index.php/seo_path

    - by Boldewyn
    I noticed that sometimes (especially where mod_rewrite is not available) this path scheme is used:

        http://host/path/index.php/clean_url_here

    This seems to work, at least in Apache, where index.php is called and one can query the /clean_url_here part via $_SERVER['PATH_INFO']. PHP even kind of advertises this feature. The CodeIgniter framework, for example, uses this technique as the default for its URLs. The question: how reliable is the technique? Are there situations where Apache doesn't call index.php but tries to resolve the path? What about lighttpd, nginx, IIS, AOLServer? A ServerFault question? I think it's got more to do with using this feature inside PHP code; therefore I ask here.

    Read the article

  • migrating PR / rankings from one site to another

    - by sam
    I've got a client's company site with decent PR, backlinks and search engine rankings. The client wants to change their company name and therefore URL. I will set up a redirect between the old site and the new site, but I was wondering: is there a way to tell Google that they are moving while retaining all the rankings? It is the same people, services and office building, same everything, just rebranded under a different name and URL. Additionally, if there is a way to do this, how does Google stop you buying expired domains and just pointing them at your site? For instance, I could buy several PR3 domains all relating to the same sector and point them at my site. Or would Google catch on to this?

    Read the article

  • Online accounts advanced setting with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts dialog had. How do I change the Google server to connect to? I read this at https://wiki.gnome.org/Empathy/FAQ: "I can't connect to my Google Talk account. Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest work around is to set 'talk.google.com' in the 'Server' field of the advanced section of the account." So I think this might fix my problem, or maybe just an option to shift the port it connects to. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.

    Read the article

  • Redirecting non-existing posts to the homepage; is that good for SEO?

    - by BlackEagle
    I am checking my website on Google Webmaster Tools and I am seeing an astonishing 5000 links that could not be found by Google's crawlers. That's normal, because my website is built in a manner that lets users drop their own content, which also leads to 404 pages. Not a problem at all, if I can find a workaround of course... So my question is: what if I made a function, or a mod_rewrite rule, that checks whether the link exists (a post, for example) and, if not, redirects to the home page? Is this good for SEO? Will Google see this as 'link found'? How should I look at this problem?

    Read the article

  • Is there an easier way to implement 301 redirects when converting a site to WordPress?

    - by Amanda
    I have just converted a website to WordPress. The old site had hundreds of hard-coded HTML files, and the new site does not match the old site's directory structure or file-naming system (bad SEO in the original site), so I can't place any "blanket" 301 redirects. It's been at least 2 months, and the old links are still appearing in Google searches, despite a Google-friendly sitemap.xml. Do I need to hardcode a 301 for every individual page in my .htaccess file, or am I just misunderstanding 301s and Apache? Is there some other way I can tell Google that my entire site structure has changed?

    Read the article

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is 3 days short of being 6 months old. The website is unique, in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, apart from the payment gateway companies themselves. I've optimised it for the keyword "payment gateway". I've changed the URLs twice, most recently 3 months back, after which Google Webmaster Tools reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they were. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to be ranking better than this one. Is it due to: 1) lots of outbound links to in-context companies and information, or 2) the 404s reported in Google Webmaster Tools? My other site successfully gets 1,500 unique visitors daily and is up in the Google rankings. I don't know why this one is not!

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress, so they could more easily edit their website and start a blog section. The static site was 10 years old and, according to my client, consistently showed up at position #3 for its primary keyword, but it has dropped to ranks #6-8 since the migration. At launch, we made sure the URLs were identical (save for the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?

    Read the article

  • How to access fields in a JSON object by index

    - by Stefan
    I know this isn't the best way to do it, but I have no other choice :( I have to access the items in a JSON object by their index. The standard way to access objects is to just write this[objectName] or this.objectName. I also found a method to iterate over all the fields inside a JSON object:

        for (var key in p) {
            if (p.hasOwnProperty(key)) {
                alert(key + " -> " + p[key]);
            }
        }

    (Source: Loop through Json object). However, there is no way to access the JSON fields directly by an index. The only way I see right now is to create an array with the function above, get the field name by index, and then get the value by field name. As far as I can see, p (in our case the JSON file) must be an iterable array, or else the for-each loop wouldn't work. How can I access this array directly? Or is it some kind of unsorted list? Regards, Stefan

    Read the article

  • Does a longer registration length/period for a domain name improve its SEO and search ranking?

    - by Cupcake
    While I was renewing a domain of mine with a well-known domain registrar, the support person on the call said that I'd improve the SEO ranking of my domain if I increased the registration length from 1 year to 5 years. The explanation he gave me was something along the lines that a search engine like Google doesn't like to send users to domains and businesses that may no longer exist, and that by registering my domain for 5 years instead of just 1, Google would have higher confidence that I'm serious about keeping my business around for the long term. Needless to say, I was quite skeptical. Does the registration/renewal length of a domain name affect its SEO and search result ranking for search engines such as Google?

    Read the article
