Search Results

Search found 11896 results on 476 pages for 'smart pro'.

Page 254/476

  • Is it possible to use two different ErrorDocuments for different paths of a website?

    - by tapwater
    This is my first question on Stack Exchange and it might be a bit confusing. I currently run PmWiki (sorry, you'll have to Google it; new users can only post two hyperlinks) at mydomain.com/pmwiki. I have a 404 page and a .htaccess file set up in my site's document root to handle 404s for anything that isn't part of the wiki. By default PmWiki handles URLs a little confusingly, so I had to create a .htaccess in /pmwiki that strips parts of the URL to make pages look like mydomain.com/pmwiki/Namespace/Page. PmWiki also has its own custom 404 page (Site/PageNotFound), but it has stopped working; my site now serves the /404.htm page instead. I noticed this while trying to install a "recipe" that enables case-insensitive URLs. Currently the only way to reach Site/PageNotFound is by linking to it directly, and if you read how that recipe works, that is a problem. Right now both mydomain.com/pmwiki/blahblah and mydomain.com/pmwiki/legitimate_namespace_but_lowercase/legitimate_lowercase_page_name end up at mydomain.com/404.htm. I admit I'm very confused, and I apologize if any of this was unclear, but I could definitely use some help. Thanks!
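
    For reference, Apache does allow a different ErrorDocument per directory: a .htaccess file in a subdirectory overrides the one in the document root (provided AllowOverride permits FileInfo). A minimal sketch; the PmWiki handler path below is an assumption and depends on how the wiki is installed:

    ```apache
    # /.htaccess (document root) - site-wide 404 page
    ErrorDocument 404 /404.htm

    # /pmwiki/.htaccess - overrides the root setting for anything under /pmwiki/
    # The target assumes PmWiki serves Site.PageNotFound through pmwiki.php;
    # adjust it to match the actual install.
    ErrorDocument 404 /pmwiki/pmwiki.php?n=Site.PageNotFound
    ```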

    Read the article

  • Cannot access my own web page

    - by enflam3
    I am developing, learning and experimenting with PHP, HTML, JavaScript, Flash and so on, and I have web hosting with cPanel, phpMyAdmin and the other usual utilities. One day, while I was updating some information, the connection between my computer and the website simply went down. I found out that it is only from this computer that I cannot access anything. I don't know why I cannot reach the website, but here is what I have checked so far: everything else opens normally, and the problem is only with my page; I cannot access FTP, cPanel or anything else related to the domain and hosting; pinging the site resolves the IP but the requests time out (so it's not browser-related); I turned off the firewall and antivirus and rebooted the computer; I cleared caches, temp files, cookies and history with CCleaner; I checked connectivity on both the wired and wireless networks; my ISP assigns a dynamic IP that has changed about three times since the issue started; and I checked the hosts file. I am out of ideas about what could cause this kind of issue; however, a couple of minutes ago I found out that everything works through a proxy server (when I add its IP and port to the browsers). Can someone point out what I should check or try to get rid of this problem?

    Read the article

  • Is hidden content (display: none;) indexed by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS? We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice, when content is initially hidden for legitimate interactivity purposes, is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users, including users with screen readers or JavaScript disabled, then nine times out of ten good, relevant search rankings will follow (though their specific advice seems written more for cases where JavaScript writes new content to the page): "JavaScript: Place the same content from the JavaScript in a <noscript> tag. If you use this method, ensure the contents are exactly the same as what's contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser." So best practice seems pretty clear. What I can't find out, however, is the simple factual matter of whether hidden content is indexed by search engines (with potential penalties if it looks spammy), or whether it is ignored, or whether it is indexed but with a lower weighting (as <noscript> content apparently is). (For bonus points it would be great to know whether this varies or is consistent between display: none;, visibility: hidden;, etc., but that isn't crucial.) This is different from the other questions on display: none; and SEO: those are about good and bad practice, and their answers are discussions of good and bad practice, whereas I'm interested simply in the factual yes-or-no question of whether search engines index, or ignore, content that is in display: none;, something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and it ultimately concludes that in mid-2011 Google's policy on hidden text was evolving and that they had not at that time started automatically penalising display: none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it), but this doesn't clear up how, or whether, it is indexed. What I will do is follow the guidelines and make sure that all the content that is initially hidden, and that regular users can explore through JavaScript-driven interactivity, is also structured in a way that noscript/screen-reader users can use. So I'm not interested in best practice or opinions, because best practice seems really clear: accessibility best practice boosts SEO. But I'd like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be ignored, or indexed, or picked up to compare against the <noscript> content but not indexed, etc.
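
    A minimal sketch of the pattern Google's guideline describes: the script-driven, initially hidden markup is duplicated inside a <noscript> tag so non-JavaScript visitors (and crawlers reading the fallback) see the same content. The element IDs and class names here are made up for illustration:

    ```html
    <!-- Content revealed by JavaScript for interactive visitors -->
    <div id="tab-details" class="tab-panel" style="display: none;">
      <p>Full product details go here.</p>
    </div>

    <!-- The same content for visitors (and crawlers) without JavaScript -->
    <noscript>
      <div class="tab-panel">
        <p>Full product details go here.</p>
      </div>
    </noscript>
    ```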

    Read the article

  • How to reverse engineer the SEO on a website?

    - by Startup Crazy
    I have read this question. My question is a bit different from it: I want to know how I can reverse engineer another website that ranks best for some keywords. For example, a site such as www.bla.com ranks high for many keywords, and I want to learn from it how my website can gain the same authority and the same ranking (or probably a better ranking, if I find something they are missing). Can anyone outline this as a procedure: how do you reverse engineer a website's SEO?

    Read the article

  • How do I fix the paths of my website?

    - by EASI
    I have Joomla 2.5.7 on my client's server, recently updated from 1.6 to 1.7. I did not build that site, but I am responsible for it now (I prefer to build a site from scratch). Users are now clicking on the menu options, and when Joomla sends them to a SEF URL like http://iap.pa.gov.br/acervo they get the message "404: The requested URL /acervo was not found on this server." Could that be because the Joomla folder was moved from the root to root/iap (the name of the site)? If so, what configuration do I need to change to adapt to the new folder?
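
    If the install really was moved from the document root into an /iap folder and SEF URLs are enabled, one common adjustment is the RewriteBase line in the .htaccess that ships with Joomla (renamed from htaccess.txt). This is only a sketch under that assumption; the folder name is taken from the question:

    ```apache
    # In /iap/.htaccess (Joomla's own .htaccess)
    RewriteEngine On

    # Uncomment RewriteBase and point it at the folder Joomla now lives in
    RewriteBase /iap
    ```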

    Read the article

  • What are the search engine effects of registering the same domain on multiple top-level domains (i.e. .com, .ie, .nl, etc.)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now I need my-company.com.au, my-company.nl and some others. I'm running through my options and wondering which is best: (1) duplicate all the content from the .com site and make a replica on the other domains; (2) buy the other domains but 301 redirect them back to the .com domain; (3) create a completely new website with different content for each new domain, thus avoiding duplicated text. We currently sell all over the world and would like to raise our search rankings in various countries. Can this be done by buying the domain in each country, and if so, how will the above methods affect our search rankings? Any other suggestions are welcome!
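
    For what it's worth, option (2) above (301 redirects back to the .com) is usually implemented with a couple of rewrite rules on the extra domains. A minimal sketch, assuming all of the domains point at the same Apache host; the domain names are the placeholders from the question:

    ```apache
    # .htaccess on the host serving my-company.com.au / my-company.nl
    RewriteEngine On

    # Anything not requested via the .com is permanently redirected to it,
    # preserving the path (and the query string, which Apache re-appends).
    RewriteCond %{HTTP_HOST} !^(www\.)?my-company\.com$ [NC]
    RewriteRule ^(.*)$ http://my-company.com/$1 [R=301,L]
    ```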

    Read the article

  • What framework for text rating site?

    - by problemofficer
    I want to start a "rate my"-style site where the rated objects are mostly texts, and I want it to be rather simple. Features I need: object rating (thumbs up, thumbs down); object comments; object tags; presentation of related objects based on tags; user authentication and management; a private message system; sanity checks for text inputs (i.e. prevention of code injection); caching; open source; runs on GNU/Linux. I would gladly take something tailored for my scenario, but a generic framework would be fine too. I simply don't want to write things like user authentication that have been written a million times, and risk security flaws in the process. Programming language is irrelevant, but Python/PHP is preferred.

    Read the article

  • Can't load Wordpress static files on home network

    - by Tosho
    I've just installed WordPress 3.5 on my laptop (LAMP on Ubuntu 12.10), but when I try to access the site from my phone it doesn't load the static files (CSS and images). I tried Opera Mobile Emulator on the laptop itself and it works perfectly. I also have a Drupal site on the same localhost which I can load from my phone without any issues. Both directories have chmod 777 permissions. What can cause this? I also just tried to open the site from my sister's laptop: apart from the missing static files, I can't access any post or page from there either.
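
    One common cause of exactly this symptom is that WordPress builds absolute URLs for stylesheets, images and permalinks from its siteurl/home settings, so if those point at http://localhost the page itself loads from another device but its assets and internal links do not. A sketch of the usual check; the LAN address below is a placeholder, not taken from the question:

    ```php
    <?php
    // wp-config.php - override the stored siteurl/home so generated URLs
    // point at an address other devices on the LAN can actually reach.
    // 192.168.1.10 is hypothetical; use the laptop's real LAN IP or hostname.
    define('WP_HOME',    'http://192.168.1.10');
    define('WP_SITEURL', 'http://192.168.1.10');
    ```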

    Read the article

  • Another website is mirroring and ranks above my site in search results

    - by Marlboro Goodluck
    There is a site of ill repute known as thedirty which has completely mirrored my site and now has links appearing at the #1 spot on Google using my content. I checked my log files and noticed that this site has been crawling mine for some time; it also has 10,000 links from its site to mine. I have already blocked user access referred from this site, reported them to Google as web spam, and disavowed the domain. How are they getting top positions in Google (even overtaking mine) with such nefarious tactics? What are the steps to completely eliminating an issue like this?
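
    For reference, the referrer-based blocking described above is typically a pair of rewrite rules; a rough sketch, with the mirroring domain written as a generic placeholder rather than the actual site:

    ```apache
    # .htaccess - refuse requests that arrive with the mirroring site as referrer
    RewriteEngine On
    RewriteCond %{HTTP_REFERER} ^https?://(www\.)?offending-mirror\.example [NC]
    RewriteRule .* - [F,L]
    ```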

    Read the article

  • Finding 301 redirects

    - by php-b-grader
    I have a URL which is 301 redirecting, but I cannot find where or how it is happening, and I'd like some checks to perform if possible. I've checked .htaccess: it's not there. I've checked the Redirects section in cPanel: it's not there. In WordPress I have the Redirection plugin active, and it's not there either. Is there anywhere else that could be issuing redirects? I'm at a loss to find out where and how the page is being redirected!
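
    One quick way to see the redirect chain from the outside is to fetch only the headers and follow each hop from the command line; a small sketch (the URL is a placeholder):

    ```sh
    # -s silent, -I headers only, -L follow redirects: prints each hop's
    # status line and Location header, so you can see which step issues the 301.
    curl -sIL http://example.com/redirecting-page
    ```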

    Read the article

  • HTML5 article tag application for the iPad

    - by dspencer
    I've used article tags on websites. My understanding and practice is to use the article tag for publication content; I always use HTML/HTML5 tags for their intended purposes and not at will. Recently I've seen an HTML template that uses the article tag for non-publication page content, such as the content of an About Us page or any other generic page. I asked why it was used this way, and the (vague) explanation was that it had to do with the way the iPad reads the tag. Is this true?

    Read the article

  • Does Google penalize pseudo-duplicate pages for different locations?

    - by mikewowb
    My company's site's home page was not specifically optimized for any location. Now I am planning to optimize it for Boston and to create ten or so other landing pages for the other locations we serve. If we made these new pages by copying the original Boston one and changing only the location name (s/Boston/Montreal/), would Google consider them duplicate pages and penalize us? What is the best practice for this?

    Read the article

  • How to ensure that the domain name you are registering won't have licensing or trademark issues?

    - by pokemarine
    What are the conventions of domain registration? What domains and names can someone use as a brand/name for a website, and how do you determine whether the selected name is available and will cause no legal issues later? Example: a new website is being developed and it needs a name, right? So the team responsible for it decides the name will be "WooLaCocaCola", which means I should register the www.woolacocacola.com domain for the site. Let's say the domain is free to register; that still doesn't mean the name as such can be used. How can I check something like this?

    Read the article

  • Will using HTTPS hurt my site's SEO or other statistics?

    - by yannbane
    I've set up a WordPress blog. Since I have to log into it from many different locations and machines, I've also got an SSL certificate and set up Apache to redirect HTTP to HTTPS. It all works, but I'm wondering whether that's overkill. Since most people who visit my site don't have to log in, I'm starting to wonder whether HTTPS has some drawbacks. If so, should I look for a way to make HTTPS optional?
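
    If you do decide to keep HTTPS only for the parts that need it (the login form and the dashboard) while serving the public pages over plain HTTP, WordPress has a built-in switch for that; a minimal sketch:

    ```php
    <?php
    // wp-config.php - force HTTPS for wp-login.php and the admin area only,
    // leaving the public-facing site reachable over plain HTTP.
    define('FORCE_SSL_ADMIN', true);
    ```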

    Read the article

  • cookie not being sent when requesting JS

    - by Mala
    I host a webservice and provide my members with a JavaScript bookmarklet, which loads a JS script from my server. However, clients must be logged in in order to receive the script. This works for almost everybody, but some users on setups (i.e. browser/OS combinations) that are known to work for other people have the following problem: when they request the script via the bookmarklet, their cookie for my server does not get included with the request, and as a result they are always "not authenticated". I'm making the request in the following way: var myScript = eltCreate('script'); myScript.setAttribute('src','http://myserver.com/script'); document.body.appendChild(myScript); In a fit of confused desperation, I changed the script page to simply output "My cookie has [x] elements", where [x] is count($_COOKIE). If someone in this extremely small subset of users requests the script via the normal method, the message reads "My cookie has 0 elements". When they access the URL directly in their browser, the message reads "My cookie has 7 elements". What on earth could be going on?
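
    For comparison, the same injection written with standard DOM calls looks like the sketch below (eltCreate in the question is presumably a local helper). Note that when the bookmarklet runs on someone else's page, the cookie attached to this request is a third-party cookie from the browser's point of view, so browsers or privacy settings that block third-party cookies could produce exactly the "0 elements" behaviour described:

    ```javascript
    // Inject a script tag pointing at the service; the browser attaches
    // whatever cookies it is willing to send for myserver.com in a
    // third-party context along with this request.
    var myScript = document.createElement('script');
    myScript.setAttribute('src', 'http://myserver.com/script');
    document.body.appendChild(myScript);
    ```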

    Read the article

  • jQuery scrolling images for e-commerce site, what to do about users who disable JS

    - by Livingston Storm
    As the title suggests, I am developing an e-commerce site and I intend to have two jQuery plugins on the default page: one for scrolling images and the other for the navigation menu. Should I be concerned about making the site work if a user disables JS? Because with it disabled, my site would be almost impossible to use, with the scrolling images blocking the main content. Plus, the CMS I am using, Big Commerce, uses a bit of JS for the product pages, which would also look ridiculous with JS disabled. Does anyone have experience with this?
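
    One low-effort safeguard is the usual progressive-enhancement pattern: leave the images and menu usable as plain HTML, and only hide or transform them once JavaScript has actually run, for example by adding a class from script and scoping the carousel CSS to that class. A sketch; the class name and plugin call below are made up for illustration:

    ```javascript
    // Runs only when JavaScript is available: flag the document so the CSS
    // and the carousel plugin can take over. Without JS the images simply
    // stack as ordinary markup and stay readable.
    document.documentElement.className += ' js-enabled';

    jQuery(function ($) {
      // hypothetical plugin call - replace with the actual carousel plugin in use
      $('.home-slideshow').exampleCarousel();
    });
    ```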

    Read the article

  • Getting search result URLs in the millions [closed]

    - by tereško
    I looked at all the Stack Exchange sites and couldn't find one suitable for this question, so I'm posting here as the nearest match to my scenario. After a month of research I have basically given up on getting all the URLs from a set of search results programmatically. I looked at the Google Search API for a way to get millions of search result URLs (to be specific, into a text file or something similar), with no success, but I am sure there must be a way or a trick to do it. Real question: is there any way, programmatically or manually, to get 1000+ search result URLs for a query? For example, "Apple" returns millions of results on Google, and I want as many of those result URLs as possible in a text file.

    Read the article

  • Why does my sub-domain redirect return a blank page?

    - by Tom Brito
    I have the domain http://dropbox.tombrito.com/ (on GoDaddy) forwarding with masking to www.dropbox.com/sh/k6ypvx4y4kf0gu6/rdjxQ1b1OL. It was working fine some time ago, but now the result is a blank page (although the Dropbox favicon appears correctly in the browser's tab title). The DNS manager shows me a single entry with the name "dropbox": A dropbox 64.202.189.170. Any idea what's wrong? Related: Why my domain redirect on Google Apps is returning 404?

    Read the article

  • Are similar titles considered duplicates for SEO purposes?

    - by Uri
    I have built a testing service and I wonder whether I should be concerned that search engines will consider similar titles to be duplicates. For example, some page titles differ from others by only one word, such as "Senior" and "Junior": Title A: C++ Online Test for Juniors; Title B: C++ Online Test for Seniors. Another example differs only by the "+" signs: Title A: C Standard Library Online Test for Seniors; Title B: C++ Standard Library Online Test for Seniors. Should I assume search engines will understand that there is a difference between these titles and that they are not duplicates?

    Read the article

  • Forwarding my domain to a Ning site vs. paying for URL mapping: SEO value? [closed]

    - by myf
    Possible Duplicate: Could I buy a domain name to increase traffic to my site like this? hello, and thanks for your time to answer. really appreciate that! my domain url is keyword stuffed (homes for sale and the city name). does it make a difference if I foward that to my ning site, which is www.homesforsale(in city name).ning.com or is it just the same for SEO / pagerank value as paying ning for the proper url mapping. thanks so much!

    Read the article

  • 100% APC Fragmentation - Cacherouter & Pressflow install

    - by granttoth
    My APC cache shows 100% fragmentation, and I'm not quite sure I understand what is going on here. For testing I raised the available memory to 512 MB. After a day the total available free memory shows 73%, but I still have 100% fragmentation. Would you gurus please look at my settings and offer your advice? I have read suggestions to disable apc.stat when possible, but when I do, the site crashes. I am using the Pressflow build of Drupal 6 with the cacherouter module installed. Edit: (added screenshot) http://i.imgur.com/DqZEX.png
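
    For reference, these are the apc.ini knobs usually involved; the values below are only illustrative and would need tuning against the apc.php stats page rather than copied as-is:

    ```ini
    ; apc.ini - illustrative values only
    apc.shm_size = 512M   ; total shared memory (older APC builds take a plain 512)
    apc.ttl      = 7200   ; let APC evict stale slots instead of fragmenting
    apc.user_ttl = 7200
    apc.stat     = 1      ; keep stat checks on if disabling them crashes the site
    ```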

    Read the article

  • Is it possible to tell a search engine not to index a specific section of an HTML page? [closed]

    - by Justin
    Possible Duplicate: Preventing robots from crawling specific part of a page. I know you can use robots.txt to ignore entire pages or sections of your site, but is there a way to tell crawlers like Googlebot to ignore specific sections of an HTML page? I found a blog post that discusses one method, but it appears to work only for the Google Search Appliance, not for Googlebot. Is there some method, at least for Google, to do this?
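
    For completeness, the Search-Appliance-only mechanism that posts like the one linked usually describe is a pair of comment directives wrapped around the section; as noted above, Googlebot itself does not honour these. Sketch only:

    ```html
    <!--googleoff: index-->
      <div class="sidebar">
        Boilerplate that the Google Search Appliance should not index.
      </div>
    <!--googleon: index-->
    ```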

    Read the article

  • Change the output of the facebook like button [migrated]

    - by Mechaflash
    I've tied the code for the Facebook Like button from the Facebook developer site (I currently use the iframe version) directly into the HTML of a node (i.e. a content post). I want to be able to manipulate what text is sent when someone hits the Like button. You can see the site and buttons here: www.masteringmoneybasics.org. I've tried both the iframe and HTML5 versions of the button and can't see where to alter what is sent. If there is no way to alter it directly, does anyone know what Facebook looks for in the content, so I can structure the node correctly? If you notice, when you like the page it doesn't pick up the first sentence of the content but everything after it; I've tried putting different content between the two lines (in its own <p>) and it still grabs the latter. Also, how does it figure out which image to grab from the page? Most times it doesn't take any image, but twice it has grabbed the middle-school image.
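
    When a page is liked or shared, Facebook's scraper looks first at the Open Graph meta tags in the page's <head> and only falls back to guessing from the visible content, which is why the picked text and image can seem arbitrary. A sketch of the tags that control the output; all of the values below are placeholders:

    ```html
    <head>
      <meta property="og:title"       content="Mastering Money Basics" />
      <meta property="og:description" content="The exact sentence you want shown when the page is liked." />
      <meta property="og:image"       content="http://www.masteringmoneybasics.org/images/share.png" />
      <meta property="og:url"         content="http://www.masteringmoneybasics.org/" />
    </head>
    ```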

    Read the article

  • jQuery setTimeout delay for an element

    - by Trouble
    Is there an easier way to wait for an element to load (created by an independent script, MooTools, or something else)? For example, I am waiting for a Google map to load, but I don't want to use its API for the checks. So I made two functions: function checkIfexist() { if (jQuery('#container').length) return 0; else reload(1); } function reload(mode) { setTimeout(function(){ /* do stuff... */ if (mode == 1) checkIfexist(); }, 400); } I start it with reload(1). Is there an easier way to use setTimeout in such a situation? I don't want to use delay, wait or whatever.
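
    A slightly tidier version of the same polling idea, folded into a single recursive setTimeout that stops by itself once the element exists; the helper and callback names are made up:

    ```javascript
    // Poll every 400 ms until the selector matches, then run the callback once.
    function waitForElement(selector, callback) {
      if (jQuery(selector).length) {
        callback();
      } else {
        setTimeout(function () {
          waitForElement(selector, callback);
        }, 400);
      }
    }

    waitForElement('#container', function () {
      // do stuff with the now-present element
    });
    ```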

    Read the article
