Search Results

Search found 96383 results on 3856 pages for 'code pro'.

Page 596 of 3856

  • Will loading meta tags dynamically from a database hurt the site?

    - by Nalaka526
    I have a website (ASP.NET MVC) whose content is mainly in the Sinhala language, so search engines list my site only when someone searches for Sinhala words. But I need my site's pages to appear in search results for the appropriate English words too. So I'm planning to save HTML meta tags (in English) in a database and load them dynamically along with the appropriate page content. Will loading the meta tags dynamically affect the site adversely?
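    For illustration only, here is a minimal sketch of the plan described above; IMetaTagRepository and PageMeta are hypothetical names standing in for whatever data access the site already uses. Because the tags are rendered on the server into the HTML sent to crawlers, dynamically loaded meta tags are indistinguishable from statically authored ones.

        // Controller sketch (ASP.NET MVC, C#); hypothetical repository and type names.
        using System.Web.Mvc;

        public class PagesController : Controller
        {
            private readonly IMetaTagRepository _metaTags;

            public PagesController(IMetaTagRepository metaTags)
            {
                _metaTags = metaTags;
            }

            public ActionResult Details(int id)
            {
                // English meta data stored alongside the Sinhala page content.
                PageMeta meta = _metaTags.GetForPage(id);
                ViewBag.MetaDescription = meta.Description;
                ViewBag.MetaKeywords = meta.Keywords;
                return View();
            }
        }

        @* In _Layout.cshtml, emit the tags server-side so crawlers receive them as plain HTML. *@
        <meta name="description" content="@ViewBag.MetaDescription" />
        <meta name="keywords" content="@ViewBag.MetaKeywords" />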

    Read the article

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache web server, and in the Apache configuration I have:

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>

    I can access the files via http://127.0.0.1/backups. The problem is that everyone can access them. I have a web interface, e.g. http://localhost/adminm, that is protected with .htaccess and a password. Now I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
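    For reference, HTTP Basic credentials are cached by the browser per authentication realm, so if /backups declares the same AuthName and points at the same password file as the protected admin area, a user who has already logged in to /adminm should not be prompted again. A minimal sketch of the directory block, assuming the existing password file lives at /etc/apache2/.htpasswd and the admin realm is called "Admin Area" (both assumptions):

        Alias /backups "/backups"
        <Directory "/backups">
            Options Indexes
            AllowOverride None
            AuthType Basic
            # The realm below must match the one used for the /adminm interface.
            AuthName "Admin Area"
            # Reuse the password file that already protects the admin interface.
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>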

    Read the article

  • Titles in Google results contain spammy prefixes

    - by rfoote
    Over the past couple of weeks, we've noticed that the search results from Google for some of our Drupal-powered sites are having their page titles hijacked somehow. An example would be: free streaming porn - [Actual page title]. There are other variations of the porn prefix; that's one of the tamer ones. I looked in the databases for each of these sites and the titles haven't actually been changed or anything along those lines. When you click on the result to visit the page, everything looks normal (sans porn stuff). Would anyone be able to point me in the right direction as to what the cause of this is?

    Read the article

  • Blog: Search results prefer index page over content pages.

    - by jonescb
    I have a typical blog that has recent posts on the main page, and each post's title links to a page that shows only that one article, its comments and so on. I was looking through some of the keywords used to reach my site and noticed that some searches would only show my main page, not the page for the article. If users have to find the article by scrolling through the main page, it just makes things more difficult. Is there some way I can tell search engines to rank the content page higher than my index page? Or can I do something else, like not displaying the full text of the posts on the main page?

    Read the article

  • Google indexed the same page under two URLs (despite rel-canonical)

    - by unor
    The Super User question "Playing mp3 in quodlibet displays “GStreamer output pipeline could not be initialized” error" is indexed under two URLs in Google: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058 The first one is the canonical one; the corresponding rel-canonical is included in both pages: <link rel="canonical" href="http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia" /> Google also indexed http://superuser.com/a/652058, which redirects to the answer: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058#652058 Now, the second URL from above is the same as this one minus the fragment #652058. So Google seems to strip the fragment, which results in exactly the same page under another URL (= containing the answer ID /652058 as suffix), and indexes it, too -- despite rel-canonical and duplicate content. Shouldn’t Google recognize this and only index the canonical variant? And what could be the reason why Stack Exchange includes the answer ID in the URL path, and not only in the fragment (resulting in various URL variants for the same page)?

    Read the article

  • Google Analytics (not provided) for 55% of total traffic

    - by Neolisk
    I've been here and here to learn what (not provided) means. Now the question is whether what I am seeing in my Google Analytics stats for my website is considered normal (and whether I can or should do anything about it). Here are the statistics from one day, but other days are similar: 102 visits, of which 57 are from (not provided); that's over 55% unknown keywords. Is it normal for it to be like that? Does Google plan to do anything about it? In other words, what's the outlook? As I understand it, with this approach, as people switch to HTTPS, Analytics will stop being useful. Please correct me if I am wrong in my assumptions.

    Read the article

  • Regarding AdSense CPC

    - by Silver Moon
    For the same niche and the same set of keywords, does Google AdSense serve higher-CPC ads to a website that has a higher number of visits? I have observed that, for similar niches, one website (with 3K daily uniques) makes around $100 a month while another website (with 10K daily uniques) makes around $700-800 a month. It seems that the earnings curve is not linearly dependent on visit count and grows faster than the visits do, which leads me to wonder whether the Google AdSense algorithm serves higher-CPC ads once a website starts getting a large number of visits.

    Read the article

  • Getting a lot of slash-underscore ('/_') errors from Webmaster Tools

    - by Vermino
    I'm running a WordPress site and I thought I had got all the kinks out of it. For some reason Webmaster Tools is crawling my website and showing a lot of 404 errors, which are from '/_', like additional pages that I've never created. I just can't figure out what is presenting these URLs to Google's crawlers and then returning a 404. My robots.txt: http://www.redcherryshrimp.net/robots.txt. My sitemap, created by the Yoast plugin: http://www.redcherryshrimp.net/sitemap_index.xml. I have the Yoast (which creates the sitemap) and Jetpack plugins installed.
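    The crawl-errors report in Webmaster Tools normally shows a "Linked from" tab for each URL, which is the quickest way to find what is emitting these /_ links. Purely as a stop-gap sketch while investigating (not a fix for whatever generates them), the URLs can be answered with an explicit 410 so Google drops them instead of re-reporting 404s; this assumes an Apache host with mod_alias and a .htaccess at the site root:

        # Answer bare "/_" requests with 410 Gone (mod_alias).
        RedirectMatch gone "^/_$"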

    Read the article

  • Receive anonymous users' input by web upload form or email. Any online service for that?

    - by sja
    Are you aware of any online service or online "platform" that allows users, not previously registered, to upload pairs of picture + comment to a database? It would be a collaborative database of picture/comment pairs. I'm not going with a wiki, Google Groups, Picasa or the like, because I'd like users to have as little as possible to do in order to participate, e.g.: take a picture with their phone and email it to an email address, or go to a web page with an upload form, type in a description, hit OK and that's it. The goal is also for it to be as hassle-free to set up as possible. Yeah I know, it can't program itself to my requirements :) but I suspect there's a tool or combination of tools that goes a decent way toward meeting my needs. Thanks for any info/advice! SJA (NB: the final goal is a kind of crowd-sourced census of specific urban items. If you have comments about the potential for spam overload of my idea, other than "you're doomed", you're welcome!)

    Read the article

  • Photos - do I really need to look for the author and ask his permission when posting them on my site?

    - by user6456
    When I find a photo somewhere on the internet, without any explicit information on whether I can re-publish it on my own website, and without any hint of who the owner/author of that photo is, can I still do it? I'm puzzled here because I've seen millions of websites, often very big ones, that repost photos most probably found via Google, and it's VERY unlikely they bothered to look for and contact the authors of those photos. Is every one of those sites likely to be sued at any moment? What about the case of forums and user-provided content? There is virtually no way to prevent it there.

    Read the article

  • Is multiple domain names and links from same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possible reasons why it isn't doing well.

    The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2.

    It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions and H tags, but it hasn't been long enough for Google to index many of these yet.

    I also looked at which pages Google has indexed, and strangely it has some odd pages in the index: there are a lot of pages which are actually keyword searches (more a bunch of random letters than actual words). What I mean is that it is as if someone had typed something into my search box; there are no links to pages like this, and the only way of reaching them is to type something into the search box. So I now add a META robots tag with noindex,nofollow whenever I render pages like this.

    Years ago I set up a fake price-comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and the same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website.

    My questions are:

    1. What should I do with the multiple domains: just use one, or continue with two or more?
    2. Should I add 301 redirects, not just the META canonical tag?
    3. Any idea about Google indexing my search-results pages, and did I do the right thing with the META robots tag?
    4. Is the fake price-comparison site likely to be causing problems?
    5. Are all the links to the site from other domain names but the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.
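    On question 2, a 301 is generally the stronger consolidation signal, since rel=canonical is only a hint and only helps on pages crawlers actually fetch. Purely as an illustration, assuming the IIS URL Rewrite module already used for the parameter-free URLs, and with placeholder host names, a rule inside the <rewrite><rules> section of the root web.config that sends every non-preferred host to the main domain could look like this:

        <!-- Placeholder host name; adjust to the real preferred domain. -->
        <rule name="CanonicalHostName" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^www\.example\.com\.au$" negate="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent"
                  url="http://www.example.com.au/{R:1}" />
        </rule>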

    Read the article

  • I need a backend system that is integrated with web services; is there an open source solution?

    - by Jarom
    I'm basically familiar with what I need in order to set up web services to talk to a centralized DB, but if I don't have to go through and do all the work, I'd rather not. Is there an open source solution that would allow me to easily integrate web services for data transfer to a central DB? I want to make a site that is powered by a DB which can also be accessed by other things, like mobile apps for example. What are the steps involved in setting up such a site? Any help is appreciated! I could use all the help I can get!

    Read the article

  • Google map in MediaWiki not showing

    - by user67656
    I have upgraded MediaWiki from 1.9.3 to 1.16.1 on a new server. However, the Google map is not showing on the linked page; it's blank there, but on the old server with the old version it works fine. I am not a developer, so I have no clue about this. Please let me know if anybody has any idea. You can have a look at the link below, where the Google map is missing: http://new.realchicago.org/wiki/index.php/Archer_Heights

    Read the article

  • htaccess and htpasswd trouble

    - by hjpotter92
    This is the first time that I have ever tried working with .htpasswd and .htaccess files, so please point out any beginner mistakes. I have my Apache document root set to /www/ on my Debian server. Inside it there's a folder named Logs/ to which I want to restrict access using htpasswd. I created my htpasswd file using the shell's htpasswd command, with this result:

        user:<encoded password here>
        hjp:<encoded password here>
        hjpotter92:<encoded password here>

    I put this file, named .htpasswd, inside /www/. The Logs/ folder has the following .htaccess file in it:

        AuthName "Restricted Area"
        AuthType Basic
        AuthUserFile /www/.htpasswd
        AuthGroupFile /dev/null
        require valid-user

    This was again created using an online tool (I forgot its name/link and can't search the browser history now). The problem, as may have already struck you, is that I see no change in access to my Logs folder. The folder is still accessible to everyone. I am running Apache as the root user (if that matters/helps). Please help/guide me. I've tried reading some .htaccess guides and have followed some older SO questions, but still haven't figured out a way to restrict access to the Logs folder with a password.
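    One thing worth ruling out, offered only as a guess from the description: Apache ignores .htaccess files wherever the governing configuration says AllowOverride None, so if /www is configured that way the auth directives in Logs/.htaccess never take effect. A minimal sketch of the relevant piece of the main server configuration, with the path taken from the question (reload Apache after changing it):

        # In the main Apache configuration, not in a .htaccess file:
        <Directory "/www/Logs">
            # Allow the authentication directives in /www/Logs/.htaccess to be read.
            AllowOverride AuthConfig
        </Directory>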

    Read the article

  • Does a private collaboration platform really need private file access?

    - by apasajja
    I need to build a private collaboration platform, where the website is not open to public registration and all the posts are accessible only by the members. The members are the management team of a company. Among many features, it has Announcements. When posting an announcement, there is an option to upload images. I would personally like the images to be public, because the files transfer faster and can easily be integrated with a CDN. I just wonder whether I need to make the images public, or accessible only by the members.

    Read the article

  • Getting a lot of '/_' errors from Webmaster Tools

    - by Vermino
    I'm using a WordPress site and I thought I had got all the kinks out of it. For some reason Webmaster Tools is crawling my website and showing a lot of 404 errors which are from /_, like additional pages that I've never created. I just can't figure out what is creating these for Google's crawlers and then displaying a 404. My robots.txt is here. My sitemap (created by the Yoast plugin) is here. I have the Yoast and Jetpack plugins installed. What could be causing these links to appear?

    Read the article

  • Web pages with mixed ownership photos

    - by dstonek
    I have a photo website. 15% of the photos belong to approved registered users, who agree to my terms about uploading their images to my web pages. I include a photographer credit in the bottom-right corner. As for identifying the site with Google, every page contains a Google+ button linking to MY Google+ page; it also contains <link href="https://plus.google.com/nnnnnnnnnn/" rel="publisher" />. I need some advice on respecting Google's rules for pages containing other photographers' images, so that I am not penalized for content that could be interpreted as duplicated or stolen. My concern is also whether adding G+ links (to MY photo page) and the Google publisher ID would harm my site's ranking because of the pages containing third-party photos.

    Read the article

  • Google Analytics: custom variables show a discrepancy in data

    - by Bart
    We've set up tracking through custom variables in Google Analytics to measure which offices are getting the most traffic. The custom variable consists of a key (= office) and a value (= the office name). In the Custom Variables tab under Audience we get no data (actually we got 1 hit, but we think that data is way off). When we set up advanced segments with filters on the key and value, we get the correct data. Now we are wondering why we aren't getting that data in the Custom Variables tab.
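    For reference, classic (ga.js) custom variables are reported by slot index, name, value and scope, and the Custom Variables report typically only fills in when _setCustomVar runs before the pageview is tracked and consistently uses the same slot. A sketch of the call; the account ID, slot number, example value and page-level scope are all assumptions:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXX-1']);
          // Slot 1, key "office", value = office name, scope 3 = page level.
          _gaq.push(['_setCustomVar', 1, 'office', 'Brussels office', 3]);
          // _setCustomVar must run before the pageview (or an event) is sent,
          // otherwise the variable never reaches the Custom Variables report.
          _gaq.push(['_trackPageview']);
        </script>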

    Read the article

  • SEO, IIS 7 and web.config in subfolder issue

    - by tesicg
    We have an ASP.NET application that has a sub-folder with .aspx pages and a separate web.config file in it. The .aspx pages in that sub-folder behave as a separate site. In the web.config file at the application level, I set a rule that removes trailing slashes:

        <rewrite>
          <rules>
            <rule name="RemoveTrailingSlashRule1" stopProcessing="true">
              <match url="(.*)/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
          </rules>
        </rewrite>

    I expected this rule to propagate downward to the sub-folder as well. To access the site in the sub-folder we type http://concert.local/elki/ and should end up without the trailing slash, at http://concert.local/elki. But the trailing slash remains. The web.config file in the sub-folder looks as follows:

        <configuration>
          <system.webServer>
            <defaultDocument>
              <files>
                <add value="Sections.aspx" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>
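    One observation, offered only as a sketch: the quoted rule skips any request whose {REQUEST_FILENAME} is a physical directory (the IsDirectory condition with negate="true"), and /elki/ is exactly such a directory, which would explain why the slash survives even if the rule is inherited. One possible variant, using hypothetical rule names and placed in the application-level web.config alongside the existing rule, handles the folder URL explicitly and maps the slash-less form onto the sub-folder's default document so IIS does not answer it with its own redirect back to /elki/:

        <rule name="ElkiRemoveTrailingSlash" stopProcessing="true">
          <match url="^elki/$" />
          <action type="Redirect" redirectType="Permanent" url="/elki" />
        </rule>
        <!-- Without this second rule, IIS would answer /elki with a courtesy
             redirect back to /elki/, producing a loop. -->
        <rule name="ElkiDefaultDocument" stopProcessing="true">
          <match url="^elki$" />
          <action type="Rewrite" url="elki/Sections.aspx" />
        </rule>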

    Read the article

  • Are Web Safe Colors Still Relevant?

    - by VxJasonxV
    I still remember one of my high school teachers lecturing us about the "web safe colors": a set of 216-256 colors that you should confine your designs to, and nothing else besides them. Last I knew, Photoshop still has the "web safe" yield icon[1] in its color picker. Are web safe colors still a concern? Outside of the obvious applications (accessibility, legacy software versions, etc.), how much consideration should I give to limiting my color choices for a general audience? [1] Or was it the cube? I never remember.

    Read the article

  • Self-censorship of our search results

    - by user5261
    We run a small search engine and have recently been notified of a number of hate-related links in our results that would upset a significant proportion of our users. Our first instinct is to summarily remove these results, but I'm concerned that this makes us little better than the oppressive regimes that censor the web. Where does one draw the line, and how might one justify removing results that we deem offensive?

    Read the article

  • I am building a simple website for my mobile app and need a good recommendation on where to host it [closed]

    - by Gob00st
    Possible Duplicate: How to find web hosting that meets my requirements?

    Question 1: I am building a simple website for my mobile app and need a good recommendation on where to host it. I am not expecting a large access volume any time soon, but I want stability in general, and considering I am just starting out with my first app, it probably needs to be relatively cheap. Please recommend some stable and cost-effective web hosting services.

    Question 2: Also, since I am somewhat new to web development (I know basic HTML and used FrontPage/Dreamweaver about 10 years ago, but haven't touched them in ages), though I am a good C++ software developer: how would you recommend I build a simple static website (maybe just a few pages) for my mobile app? Any template or tool recommendations? Thanks a lot.

    Read the article

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages.

    I have come across the fact that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect it (is that 301 or 302?) to the non-subdomain version, as in redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it. I am unsure. There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you read from the root leftwards in the folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root.

    Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use one image as a logo. It makes more sense to me that there is one image, rather than each having a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). Finally, maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still has a default 404 when either the subdomain does not exist or the page does not exist under a valid blogger.

    I am confused about what the directory structure for this should be. To summarize: should there be, and how would I have, global files not in a subdirectory; how should www. be handled (or how should a non-www or other subdomain be handled); and what is the pattern for root/subdomain, as well as for subdomains within subdomains (order-wise)? Sorry this is multiple questions, but I feel at the root they are all related to the directory structure.
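    As background for the directory question: subdomains are not normally nested folders under the apex domain's root; each host name is mapped to its own document root by the server, www is usually just a redirect, and shared assets can live once on disk and be aliased into every host. A minimal Apache sketch with placeholder names and paths:

        # www is only a redirect to the bare domain.
        <VirtualHost *:80>
            ServerName www.example.com
            Redirect permanent / http://example.com/
        </VirtualHost>

        # The apex domain and each subdomain get their own document root.
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/www/example.com
            # One shared copy of the logo/common CSS, aliased into this host.
            Alias /shared /srv/www/shared
        </VirtualHost>

        <VirtualHost *:80>
            ServerName blog.example.com
            DocumentRoot /srv/www/blog.example.com
            Alias /shared /srv/www/shared
        </VirtualHost>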

    Read the article

  • Image slider not working when website is hosted on remote server [on hold]

    - by Tushar Khatiwada
    I'm having a different problem. I made an HTML website and the index page contains a Nivo Slider. The site works perfectly when viewed locally. I uploaded the site to a remote server, but the slider is not displayed, and the photos in the gallery do not behave as expected (they pop up correctly on the local PC). The URL of the site is: http://d138444.u24.elitehostingwizard.com/ A screenshot from the local PC: http://postimg.org/image/lxiqzx7br/ Thanks

    Read the article

  • Public/Private Key Generation

    - by JacKeown
    I'm just learning about public key cryptography and I want to make a public key certificate for my web server so that I can use HTTPS. My server is hosted on some random free web host where it's practically impossible to do anything... and so my question is this: is there any harm in generating my private key, public key, and public key certificate on my own computer using openssl and then transferring them to the server? Thanks in advance. Also, if there's anything else I'm missing, any help would be appreciated.
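    For what it's worth, generating the key and certificate locally and then copying them to the server is a normal workflow; the only sensitive item is the private key, which should travel over an encrypted channel and stay readable only by the server. A sketch with openssl (file names and the server path are placeholders; the self-signed form is enough for testing, while a CA-signed certificate would be issued from the CSR):

        # Private key plus a certificate signing request (to send to a CA).
        openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

        # Or, for testing, a self-signed certificate valid for one year.
        openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365

        # Copy the files to the server over an encrypted channel.
        scp server.key server.crt user@example-host:/path/to/ssl/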

    Read the article
