Search Results

Search found 96383 results on 3856 pages for 'code pro'.


  • Does Bing support anything like Google's First Click Free program?

    - by Dan Fabulich
    Google has a program for webmasters called First Click Free. To implement First Click Free, you need to allow all users who find a document on your site via Google search to see the full text of that document, even if they have not registered or subscribed to see that content. The user's first click to your content area is free. However, once that user clicks a link on the original page, you can require them to sign in or register to read further. The user must be able to see the full content of a multi-page article. You can allow this by displaying all content on a single page to both Googlebot and users. Alternatively, you can use cookies to make sure that a user can visit each page of a multi-page article before being asked for registration or payment. Does Bing support anything like this?
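
    For context, a referer-based gate is one common way publishers implement this pattern; a minimal .htaccess sketch, assuming Apache with mod_rewrite (the /articles/ path, /subscribe page, and "member" cookie are made up for illustration):

        RewriteEngine On
        # No hypothetical "member" cookie (user not signed in) ...
        RewriteCond %{HTTP_COOKIE} !member=1 [NC]
        # ... and the visit did not arrive from a Google results page ...
        RewriteCond %{HTTP_REFERER} !google\. [NC]
        # ... so send the article request to the subscribe page instead.
        RewriteRule ^articles/ /subscribe [R=302,L]

    A click from Google carries a Google referer, so the second condition fails and the article is served in full; any further click inside the site carries the site's own referer and hits the gate.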

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to in order to confirm verification is something like blah.com/verify/company1 or blah.com/verify/company2. But logically blah.com/verify itself is not verifying anyone in particular, so it redirects to the signup form for getting verified, at blah.com/verify/register. As for the actual companies registered, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making blah.com/verify the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so technically it has quite different content from the verification pages themselves. At the same time, it's a bit unfair to pick one company to point all the canonical benefits to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?
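
    For reference, the canonical hint itself is just a link element in the head of each verification page; a sketch using the question's placeholder URLs:

        <!-- on blah.com/verify/company1, blah.com/verify/company2, ... -->
        <link rel="canonical" href="http://blah.com/verify" />

    The catch is the one raised above: rel=canonical is meant to mark duplicate or near-duplicate content, so pointing many distinct verification pages at a signup form with different content may simply be ignored by search engines rather than consolidating anything.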

  • How can I work on a WordPress theme already installed in the root directory?

    - by Isaac Lubow
    I have WordPress installed at the root level of a website. I thought it would be easy enough to have a "coming soon" page called default.html and edit the .htaccess file as follows:

        AddHandler php5-script .php
        DirectoryIndex default.html index.php
        # BEGIN WordPress
        # END WordPress

    ...so that visitors to the site are sent to the default page, and I could manually specify index.php as my destination for testing. (This isn't a high-security job.) But index.php is redirecting me to the default page. When I remove the DirectoryIndex line, the index.php file is found automatically by visitors to the site root, but... that's the page I was trying to hide. What am I doing wrong with .htaccess, and how can I get it to behave the way I want?
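
    If the redirect turns out to come from WordPress itself rather than from .htaccess (WordPress's canonical redirect sends requests for /index.php back to the site URL, which DirectoryIndex then resolves to default.html), one workaround while testing is to disable that redirect from the active theme's functions.php. This is a guess at the cause, not a confirmed diagnosis:

        <?php
        // Stop WordPress from redirecting /index.php back to the site root.
        remove_action( 'template_redirect', 'redirect_canonical' );

    This should be removed again at launch, since the canonical redirect is what normalizes URL variants for search engines.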

  • Move site from one TLD to another

    - by Amol Ghotankar
    If we want to move a site from, say, xyz.com to xyz.org, what do we need to do to make sure SEO keeps working? I am doing something like this:

    - Point both xyz.com and xyz.org to the same IP, where my site is running
    - Use canonical URLs so that xyz.org/* is used instead of xyz.com/*
    - Add the site to Webmaster Tools and make a change-of-address request

    The problem is that we are not able to 301 redirect from xyz.com to xyz.org, as both are on the same IP, and doing so causes a redirect loop and an error. How can this be fixed? Please help.
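
    For what it's worth, a host-based rule normally avoids the loop, because it fires only when the request arrives on the old domain; a minimal .htaccess sketch, assuming Apache with mod_rewrite and the question's placeholder domains:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?xyz\.com$ [NC]
        RewriteRule ^(.*)$ http://xyz.org/$1 [R=301,L]

    Since the condition matches the Host header rather than the IP, requests that already arrive as xyz.org pass through untouched, so both names can share one IP without looping.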

  • Drupal + LDAP + Automatic

    - by WernerCD
    I've got Drupal 6 set up within a XAMPP test area. I have LDAP authentication, groups, and data working against Active Directory. What I want, since I'm on an intranet where users are already logged in with their Windows user names, is automatic authentication, without the need to log in via the website. If it's more difficult than it's worth, it's no major hassle, but I'd like to know if it's possible that when my users visit our intranet they auto-magically authenticate with their already-logged-in Windows session. Ultimately I may switch to IIS, but I do like having a portable, easy to backup/copy/test setup, so for now I'm going to see if I can get this working in XAMPP.
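
    For what it's worth, on Apache under Windows this kind of silent single sign-on is usually done with the third-party mod_auth_sspi module (an assumption about the setup; XAMPP does not ship it by default), plus a Drupal module such as webserver_auth to map the resulting REMOTE_USER to a Drupal account. A rough httpd.conf sketch:

        <Location /drupal>
            AuthName "Intranet"
            AuthType SSPI
            SSPIAuth On
            SSPIAuthoritative On
            # Let the browser negotiate the logged-in Windows session
            # silently instead of falling back to a basic-auth prompt.
            SSPIOfferBasic Off
            Require valid-user
        </Location>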

  • What is recommended - UC or EV or EV UC certificate?

    - by Abdel Olakara
    We are implementing an Exchange 2010 server and an eCommerce site. Both of these need certificates, and I am confused about what to use. I know Exchange needs a UC certificate; can I use it for the eCommerce site as well? I did read that EV is recommended for web sites. I would like to know what to use and the recommended procedures. Here is how we will be using the certificates:

    - We are planning to use *.net for testing the Exchange server
    - We will be using *.com for the Exchange server (production)
    - We will be using *.com for the eCommerce site (production)

    I also heard about certificates which are both EV and UC. Please recommend the correct certificates to use.

  • Mixing self signed certs with traditional SSL

    - by brentonstrine
    I have a traditional SSL cert going to a subdomain secure.mydomain.com on my domain. My host required me to have a dedicated IP in order to do this. I would also like to use HTTPS on my site for when I log into WordPress, etc. and since this is just for me, I don't mind self signing it and clicking through the scary messages. Is there a way to use a self signed cert for mydomain.com/wp-admin (just for me) when I already am on a dedicated IP that already has a traditional SSL cert for normal users on secure.mydomain.com? (FWIW, I'm on WHM without root access.)
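
    For reference, generating the self-signed certificate itself is a one-liner with OpenSSL; a sketch (filenames are placeholders):

        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -keyout mydomain.key -out mydomain.crt

    Whether it can actually be served alongside the existing certificate is the real question: without SNI, Apache can present only one certificate per IP/port pair, so a second HTTPS vhost on the same dedicated IP may not be possible, especially without root access.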

  • Browser testing - Ideas on how to tackle it efficiently

    - by Rob
    Browser testing, the bane of any web designer's life! Are there any tools and/or ways in which I can efficiently test different browsers on both Mac and PC? I not only want to test different browsers but also different versions of each browser. My current setup is a Mac running VirtualBox with Windows Vista installed. This allows me to test both Mac and PC, but the complications arise when trying to test different versions of browsers. Anyone have any ideas?

  • Multiple sites redirected to one main site

    - by mattgcon
    I have a client who insists on having multiple website domains all redirect to one main website domain. It is getting out of hand, and his server has become convoluted and riddled with garbage because of it, not to mention confusing at times. Each of the domains he is setting up has no content; they simply redirect the user to the main website domain. Is this practice of having multiple domains pointing to one main website common? And does anyone know where I can find information to give to this client to let him know this is a bad practice, if it is one?
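
    If the extra domains really are redirect-only, each of them can be reduced to a couple of rewrite lines instead of a full copy of the site; a minimal .htaccess sketch (maindomain.com is a placeholder):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^(www\.)?maindomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.maindomain.com/$1 [R=301,L]

    Using a permanent (301) redirect also tells search engines to consolidate the extra domains into the main one, which is the usual argument for this setup over parking duplicate content on each name.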

  • Can a Mediawiki table be dynamically created using other Mediawiki pages?

    - by Ashimema
    OK, so I've created a page on my wiki which contains just a single table listing various details about servers and customers. You can follow links for each customer name in the table to find additional details about that customer. What I want to know is: can the information on a customer's page (page B) be used to dynamically update the table (page A)? Is this something that the Semantic MediaWiki extension can accomplish? Running MediaWiki 1.16.2.
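
    For reference, this is what Semantic MediaWiki's inline queries do; a sketch, assuming each customer page is in Category:Customer and annotates hypothetical properties named "Has server" and "Has contact":

        {{#ask: [[Category:Customer]]
         |?Has server
         |?Has contact
         |format=table
        }}

    With that on page A, the table is rebuilt from the annotations on the customer pages, so editing a customer page (page B) updates the listing automatically.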

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either in about 30 individual files up to 10MB or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found: Google AppEngine is $0.12/GB & I believe has a file size limit of 10MB so I'd have to break the data up a bit. So a rough calculation would seem to be that this would cost me about $30 to $120 a day. Or I've seen something like what seems to be just public static content delivery with no type of logic capabilities like Usenet.nl at what I think calculates to about $0.025/GB which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right & if there might be a better option for just static content on a decently high volume delivery? Again some basic security would be great but if cost is greatly reduced without it then I'm up for that.
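
    As a sanity check, the post's own figures line up; using 50 MB per delivery:

        50 MB x  5,000/day =   250 GB/day ->  250 x $0.12 =  $30/day;  250 x $0.025 ~  $6/day
        50 MB x 20,000/day = 1,000 GB/day -> 1000 x $0.12 = $120/day; 1000 x $0.025 = $25/day

    which matches the $30 to $120 estimate for App Engine and the roughly $6 to $25 at $0.025/GB.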

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

  • Why is my site not showing up in Google?

    - by nishant
    I am very tired of my website's ranking in Google. I am working hard on it but getting nowhere in Google. I am the webmaster of www.panbeli.in, a matrimony website in India. I have been trying to improve its visibility in Google for the last 5 months but am not getting any positive results. Other search engines such as Yahoo and Bing give good results, but Google shows nothing for my site in the top 20 pages of results. My website is www.panbeli.in and my keywords are: bari samaj, bari matrimony, bari community, bari shadi, panbeli. Please help me if you can, because I am very frustrated about it. My domain is 4 years old. When I type link:panbeli.in into Google search, no pages from my site appear. What does that mean? Is my site not indexed in Google?

  • Google webmaster Index Status. Total Indexed=0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com. I had around 300 posts under the previous domain name, and most of them were showing up on Google. Now that I have changed my domain name, the Index Status shows 0 total indexed, and when I go to the advanced tab it says 304 (not selected) and 217 blocked by robots. I'm really depressed about this situation. Could you please help out?

  • Shifting from no-www to www and browsers' password storage

    - by user1444680
    I created a website having user-registration system and invited my friend to join it. I gave him the link of no-www version: http://mydomain.com but now after reading this and this, I want to shift to www.mydomain.com. But there's a problem. I saw that my browser is storing separate passwords for mydomain.com and www.mydomain.com. So in my friend's browser his password must have been stored for no-www. That means after I shift to www and next time he opens the login page, his browser wouldn't auto-fill the username and password fields and there will also be an extra entry (of no-www) in his browser's database of stored passwords. Can this be avoided? Can I do something that will convey to browsers that www.mydomain.com and mydomain.com are the same website? I already have a CNAME record for www pointing to mydomain.com but it seems that search engines consider CNAME as alias but browsers consider them as different websites, I don't know why.
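
    A permanent redirect is the standard way to tell both browsers and search engines that the two hosts are one site; a minimal .htaccess sketch, assuming Apache with mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]

    One caveat: browsers treat the two hosts as separate origins for stored passwords regardless, so the existing no-www entry won't migrate; the redirect just guarantees that from now on everyone lands on, and saves credentials for, the www version.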

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records, and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated with PR? By that I mean: is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.

    Noteworthy qualities we have in place:

    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.
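
    If sitemaps are submitted eventually, sites this large split them into many files tied together by a sitemap index, since the sitemaps.org protocol caps each sitemap at 50,000 URLs; a minimal sketch with placeholder URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemaps/sitemap-00001.xml.gz</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemaps/sitemap-00002.xml.gz</loc>
          </sitemap>
        </sitemapindex>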

  • OK to target product names in AdWords?

    - by Tom Gullen
    If I have a widget company called "Widget Designer" and a direct competitor who has "Widgitator Version 5", am I allowed to target a campaign using the literal keyword "Widgitator"? Is this OK? Will they ever find out? Is it bad business? Update: I can't really say what the words are, but this is a good example: if my product is called "Chair-o-matic" and it makes chairs, and a competitor's is called "Chair Maker 5", can I target the keyword pair "Chair Maker"?

  • Should I get an SGC enabled SSL certificate?

    - by Simon
    I'm in the market for a new SSL certificate and am wondering if I should get an SGC enabled certificate or not? In the past I have just used cheap SSL certificates but since this is for a new company website I want to make sure I have the best but I am unsure whether it is worth paying the extra. The documentation states that it just enables older browsers to use 128 bit encryption when they would normally only be able to use 40 or 56 bit encryption. Would you pay the extra for older browsers which are likely to be extremely rare?

  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. However, since the end of September I have noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load.

    I have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph were the usual response times prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph.

    The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA.

    This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times, since it is physically farther away.

    I should also point out that in some traceroute tests I've done, it seems like the first connection to the server takes the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. Sending the request to the server was very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long: it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back.

    So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down? I am aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well.
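
    For isolating which phase is slow, curl's timing variables give the same breakdown as the chart; a sketch (the URL is a placeholder):

        curl -o /dev/null -s -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" http://example.com/

    A large gap between time_connect and time_starttransfer corresponds to the long WAIT described above: the TCP connection is fine, but the server (or something in front of it) takes a long time to start sending the response.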

  • How to force user to use subdomain?

    - by David Stockinger
    I am hosting a webshop with OpenCart, and its current URL is e.g. http://mydomain.com/shop/. I have created two subdomains ( http://pg.mydomain.com/ and http://shop.mydomain.com/ ) and both are already working as they should. However, can I restrict direct access to mydomain.com/shop/ while leaving all the files (index.php, etc.) there? Both subdomains point to http://mydomain.com/shop/, so I thought that alone would restrict direct access. In the end, I would like my two shops to be accessible through http://pg.mydomain.com/ and http://shop.mydomain.com/, but not through http://mydomain.com/shop/, while leaving all the files in http://mydomain.com/shop/.
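
    One common approach is a rewrite in the shop directory's own .htaccess that bounces requests arriving on the main host over to a subdomain; a sketch, assuming Apache with mod_rewrite and shop.mydomain.com as the canonical name:

        # in /shop/.htaccess
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://shop.mydomain.com/$1 [R=301,L]

    Requests that already arrive as shop.mydomain.com or pg.mydomain.com don't match the condition, so the files can stay where they are while being reachable only under the subdomain names.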

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?

  • Cloud hosting vs self hosting price

    - by yes123
    I was looking at some cloud hosting prices. Consider an entry-level self-hosted server:

        PRICE: 40€
        CPU: i5 (4x 2.66 GHz)
        RAM: 16GB
        Hard disk: 2TB
        Bandwidth: 10TB/month at 100Mbps

    Now consider a roughly equivalent offering on a cloud structure (for example PHP Fog):

        PRICE: $29
        RAM: 613MB (LOL WUT?)
        CPU: 2 Burst ECUs
        Storage: 10GB (WUT?)

    Basically, with cloud hosting, to get the same hardware as an entry-level dedicated server you have to pay 300-400€. Is this normal? Am I missing something?

  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content based on a geo location lookup by IP address in order to provide information based on the general area where this visitor is from. I'm concerned that content will be interpreted as focused solely on the search engine spider's geo origin when it is indexed. Are there SEO best practices for geo location by ip address features? I appreciate any specific tips or words of wisdom.
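
    One precaution worth sketching (a general pattern, not something from the post): always fall back to location-neutral content when the lookup fails, since crawler IPs geolocate to wherever the crawler runs, or not at all. A minimal PHP sketch, assuming a hypothetical geo_lookup() helper and placeholder render functions:

        <?php
        // geo_lookup() is a hypothetical helper returning a region code or null.
        $region = geo_lookup($_SERVER['REMOTE_ADDR']);
        if ($region === null) {
            // Unknown IPs (including many crawler IPs) get the generic page,
            // so the indexed copy is the location-neutral version.
            echo render_default_content();
        } else {
            echo render_regional_content($region);
        }

    Serving the same fallback to everyone whose location is unknown, rather than special-casing user agents, also keeps the feature on the right side of cloaking guidelines.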

  • How can I find unused/unapplied CSS rules in a stylesheet?

    - by liori
    Hello, I've got a huge CSS file and an HTML file. I'd like to find out which rules are not used while displaying an HTML file. Are there tools for this? The CSS file has evolved over a few years, and from what I know no one has ever removed anything from it: people just wrote new overriding rules again and again.

    EDIT: It was suggested to use Dust-Me Selectors or Chrome's Web Page Performance tool. But they both work at the level of selectors, not individual rules. I've got lots of cases where a rule inside a selector is always overridden, and that is what I mostly want to get rid of. For example:

        body { color: white; padding: 10em; }
        h1 { color: black; }
        p { color: black; }
        ...
        ul { color: black; }

    All the text in my HTML is inside some wrapper element, so it is never white. body's padding always works, so of course the whole body selector cannot be removed. But I'd like to get rid of such useless rules too.

    EDIT: And another case of a useless rule: when it duplicates an existing one without changing anything:

        a { margin-left: 5px; color: blue; }
        a:hover { margin-left: 5px; color: red; }

    I'd happily get rid of the second margin-left... again, it seems to me that those tools do not find such things. Thank you

  • How to find the exact font used by the browser

    - by sreekanth
    I have used a CSS rule like font-family: arial, helvetica, sans-serif, etc. in a PHP file. When I check the page in a browser, I don't know which font from the font-family it is actually using. I want to know the exact font the browser uses to display the text, i.e. whether it is Arial, Helvetica, or a generic sans-serif for the text I have displayed. Please, can anyone let me know how to find this? I have checked by pasting the text into WordPad, but that just uses my default system font. Thanks in advance
