Search Results

Search found 96383 results on 3856 pages for 'code pro'.


  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working and now have a need to exclude a couple of pages from using SSL. So I need a rule to redirect all URLs except those referencing this folder to HTTPS. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule that matches all URLs except those in a nossl subfolder, as in this example:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <match url="(/nossl/.*)" negate="true" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>

    But this doesn't work. Can anyone help?
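
    One commonly used alternative (a sketch, not a verified fix, and assuming the excluded pages all live under /nossl/) is to match everything and move the exclusion into a negated condition on {URL}, since the pattern given to <match url="..."> is tested against the path without a leading slash, so "(/nossl/.*)" may simply never match:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
          <!-- capture the whole path so {R:1} is available for the redirect -->
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="off" />
            <!-- {URL} does include the leading slash, so anchor on ^/nossl/ -->
            <add input="{URL}" pattern="^/nossl/" negate="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>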

    Read the article

  • Using an old penalized domain for a new website

    - by MiladSafaei
    I had a website with two domains like these: firstdomain.com and first-domain.com. The main domain was first-domain.com and the other one was 301 redirected to the first one. The main domain got a Google Penguin penalty some months ago. I uploaded the site to a new domain and removed the old domain from Google's index by using the remove URL tool in Webmaster Tools. Now, I want to use firstdomain.com (which was redirected to the penalized domain) for a new and fresh website with new and perfect content. Is it probable that the history of this domain will affect the new website and harm its ranking?

    Read the article

  • Transferring local site to shared hosting

    - by Pete
    I'm looking to set up a simple online text processing tool similar to the Clang demo. The processing program itself is a C++ program which I can modify to provide the desired output I need. Since I use Linux and Perl daily and have used Apache in the past, I'd like to get this working locally first. My two questions are:

    1. Is it possible to do this with only Apache and Perl? I've looked into frameworks for doing this and quickly ran into The Paradox Of Choice.
    2. Will I be able to easily transfer a working local site to a shared hosting service? I want to administer as little as possible. My understanding is that since this needs to run a C++ program, CGI is a requirement, and thus I need to administer the httpd server. Hopefully this doesn't mean a VPS.

    Thanks
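
    For reference, a minimal sketch of the Apache side on typical shared hosting, assuming the host allows per-directory overrides and the C++ binary is invoked by (or compiled as) a CGI program; directory and extension names are illustrative:

        # .htaccess inside the directory that holds the CGI programs (illustrative)
        Options +ExecCGI
        AddHandler cgi-script .cgi .pl

    Many shared hosts already provide a cgi-bin directory configured this way, in which case no httpd administration is needed at all.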

    Read the article

  • Pagination and duplicate content

    - by jazz090
    I have an archive page that lists the articles published. Because there were so many, I ran a pagination script: 127.0.0.1/archive/2/?p=x&pp=y, where p is the page number and pp is the number of articles to display per page. The pagination looks like this: Prev 1 2 3 4 ... 12 Next, with each item linking to p like <a href="?p=x">x</a>. I also have the items-per-page setter: 25 | 50 | 100 (<a href="?pp=y">y</a>). Now I have a PHP script that stores pp in a session variable. But I am worried about duplicate content (since larger pp values will include the same articles) and also about content not getting indexed because it is not reachable from the pagination links; in the example above, pages 5-11 will not be indexed. Any ideas on how to fix this?
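
    One approach that is often suggested (a sketch, assuming the default pp is the version you want indexed; hostnames and values are illustrative) is to declare a canonical URL without the pp parameter and expose the page sequence with rel="next"/"prev", so Google consolidates the pp variants but can still reach pages 5-11 by following the chain:

        <!-- in the <head> of /archive/2/?p=3&pp=50 -->
        <link rel="canonical" href="http://example.com/archive/2/?p=3" />
        <link rel="prev" href="http://example.com/archive/2/?p=2" />
        <link rel="next" href="http://example.com/archive/2/?p=4" />

    Parameter handling in Webmaster Tools can also be used to tell Google that pp only changes how many items are shown.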

    Read the article

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both of the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the "highlight" CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

    Read the article

  • HTML background-size:cover with floating objects

    - by Mikhail
    I have a trivial page whose body has an image background with background-size: cover. I set html { height: 100% } to fill up the entire page regardless of the amount of content. Up to this point everything worked as expected. I then added a div and set position: absolute; right: 0; width: 200px;. This, again, worked as expected, until I added content. When this div is populated with so much content that it takes up more space than the height of the page, a scroll bar appears. Scrolling down reveals that the background image does not actually cover the entire page. This is because my div is taller than 100% of the HTML height. How can I address this?
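
    A sketch of one common workaround (assuming the background is meant to stay pinned to the viewport rather than scroll with the page; the image path is illustrative): give body a min-height instead of relying on the fixed 100% html height, and make the cover background a fixed attachment so it is sized against the viewport:

        html { height: 100%; }
        body {
          min-height: 100%;  /* lets the body grow with the tall absolutely-positioned div */
          margin: 0;
          background: url(background.jpg) no-repeat center center fixed;
          background-size: cover;  /* with a fixed attachment, "cover" is computed against the viewport */
        }

    Alternatively, putting the background on a dedicated position: fixed element behind the content achieves much the same effect.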

    Read the article

  • What meta tag or microdata should I use for a dictionary web application?

    - by vonPetrushev
    I have a web application that serves as a dictionary, and it ranks well on Google when searching for a rare word in my language (the dictionary's target language). I want the results to appear for define: some-word queries, as well as in the search results when someone uses the Dictionary filter tool. Should I add some special meta tag in the head of the HTML? How about microdata? Does Google have a special webmaster tool for registering dictionaries like wordnetweb.princeton.edu or en.wiktionary.org?
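
    I'm not aware of a meta tag that is guaranteed to trigger the define: treatment, but as a sketch of the microdata route, schema.org does define a DefinedTerm type that can mark a word and its definition up explicitly (values below are illustrative, and there is no guarantee search features use this type):

        <div itemscope itemtype="https://schema.org/DefinedTerm">
          <span itemprop="name">some-word</span>
          <span itemprop="description">The definition of the word goes here.</span>
        </div>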

    Read the article

  • Serious Forum Design Issue

    - by John
    I want to launch a school forum for a country. I'm in a dilemma as to how to design this forum. The country has many states, each state has cities, each city has many schools, and each school has many classes (1st - 12th). It looks like this: States - Cities - Schools - Classes. I want to make each class a forum. Now the biggest problems are two-fold: first, if I were to list even 1000 schools it would create a huge number of forums, and I don't think forum software (phpBB, MyBB) is designed to support that many; secondly, creating and maintaining huge forums is also very difficult. Ideally I'd like to be able to duplicate an existing forum hierarchy into a new one. I've done a lot of searching and I've found that such forums are considered infeasible and such designs don't work, leaving aside the fact that nobody uses such a forum. Keeping this in mind, how should I go about designing this forum?

    Read the article

  • WordPress with user login and file manager support

    - by Don
    This may be an RTFM kind of thing, so I'll apologize up front. I've been asked by a friend I used to freelance for whether there's a solution in WordPress where users can log in and then upload their own files in a "my docs" kind of area. I've never used WP, so before I dig into the docs I thought I'd see if anyone here can confirm or maybe point me to a resource. It's one of those "I'll look it up at lunch and get back to you" things, which is why I'm bugging you all before reading the docs. Thanks

    Read the article

  • Do you still get a bounce in Google Analytics if all the linked pages/content is loaded dynamically?

    - by sam
    Google Analytics describes a bounce as a user who visits and leaves after viewing only their first page. But if your site is a one-page site, with content loaded dynamically using JavaScript, you could have a user on your site go through loads of info, text and images; would that still count as a bounce? Or once they click on an a-tag, even if it is <a href="#">, can Google Analytics see that? (I'm aware of click tracking in Analytics, but I was wondering if Google picks up these clicks by default.)
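
    By default it does not: clicks inside the page are not reported unless you send something for them. A sketch of the two usual options, assuming the classic asynchronous ga.js snippet is already installed (category, action and path names below are illustrative):

        // inside the click handler that loads the dynamic content
        _gaq.push(['_trackEvent', 'spa-nav', 'click', 'section-about']);

        // or record a virtual pageview for the newly shown section
        _gaq.push(['_trackPageview', '/virtual/section-about']);

    Either call counts as an interaction, so visits that trigger one stop being reported as bounces.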

    Read the article

  • Is there a class or id that will cause an ad to be blocked by most major adblockers?

    - by Moak
    Is there a general class or ID for an HTML element that a high majority of popular adblockers will block on a website they have no specific information for? My intention is to have my advertisement blocked; avoiding automatic blocking is easy enough. I was thinking of maybe borrowing some IDs or classes from big advertising companies that are already being fought off quite actively. Right now my HTML is:

        <ul id="partners">
          <li class="advertisment"><a href="#" class="sponsor"><img class="banner"></a></li>
        </ul>

    Will this work or is there a more solid approach?
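
    As a sketch only (filter lists change constantly, so nothing here is guaranteed): the generic element-hiding rules in EasyList-style lists tend to target very plain ad-like names, which is also why adblock detectors create bait elements with such names. Something along these lines is more likely to be hidden than the markup above:

        <ul id="partners">
          <li class="advertisment">
            <!-- "adsbox" and similarly generic names have commonly appeared in
                 EasyList hiding rules; check the current list before relying on it -->
            <a href="#" class="sponsor"><img class="banner ad-banner adsbox"></a>
          </li>
        </ul>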

    Read the article

  • Best and easy way to add video to website

    - by Bibo
    I want to add videos to my website. I want to click on images and then have the video shown in a "window" and start playing (popups like Lightbox). I just don't know the best way to do it. I think one of the ways is jQuery. I know that there is an easy way with the video tag in HTML5, but I want this to play on most browsers (not just those that support HTML5, though nothing as old as IE6 :) ) and I don't want to use Flash or Silverlight. What options do I have? Is jQuery the way? And how can I do this? Thanks
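
    For reference, a minimal sketch of the markup side, assuming each clip is encoded as both MP4 and WebM so most non-ancient browsers can pick a source they support (file names are illustrative); a jQuery lightbox plugin would then simply open a popup containing this element:

        <video width="640" height="360" controls>
          <source src="clip.webm" type="video/webm">
          <source src="clip.mp4" type="video/mp4">
          Sorry, your browser does not support the video element.
        </video>

    Without Flash or Silverlight there is no way to cover browsers that predate the video element, so the fallback text (or a download link) is what very old browsers will see.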

    Read the article

  • How to point one sub-domain to another sub-domain and they can be used interchangeably

    - by Talon
    I'm trying to do this: secure.domain2.com loads its content from secure.domain1.com. So if somebody goes to secure.domain2.com it will load the content of secure.domain1.com. Note that I don't want a redirect: if someone goes to secure.domain2.com, the address bar should still say secure.domain2.com even though it's loading content from secure.domain1.com. I've read that it's possible with a CNAME or something like that. What is the best way to do that?
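
    A sketch of the DNS part, with illustrative names (this would go in the zone for domain2.com):

        ; map secure.domain2.com onto secure.domain1.com at the DNS level
        secure    IN    CNAME    secure.domain1.com.

    A CNAME only aliases the hostname; the web server behind secure.domain1.com must also be configured to answer for secure.domain2.com (and, for HTTPS, present a certificate valid for it), otherwise visitors will typically get the server's default site or a certificate warning.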

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server:

    1. I send the list of files I want to download to my server, and the server zips them and simply returns this compressed file to the application.
    2. The updater sends a request for each separate file to the server, which simply returns the file.

    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files. I expect at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
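
    As an illustration of approach 1 (single compressed response), a minimal sketch in Java, assuming a hypothetical /update endpoint that takes a comma-separated file list and streams back a zip archive; the endpoint, class and method names are invented for the example:

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        public class UpdateDownloader {
            /** Downloads the requested files as one zip and unpacks them into targetDir. */
            public static void downloadUpdate(String fileList, File targetDir) throws IOException {
                URL url = new URL("http://example.com/update?files=" + fileList);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (ZipInputStream zip = new ZipInputStream(conn.getInputStream())) {
                    byte[] buf = new byte[8192];
                    ZipEntry entry;
                    while ((entry = zip.getNextEntry()) != null) {
                        if (entry.isDirectory()) {
                            continue;
                        }
                        File out = new File(targetDir, entry.getName());
                        out.getParentFile().mkdirs();
                        try (FileOutputStream fos = new FileOutputStream(out)) {
                            int n;
                            while ((n = zip.read(buf)) > 0) {
                                fos.write(buf, 0, n);
                            }
                        }
                        zip.closeEntry();
                    }
                } finally {
                    conn.disconnect();
                }
            }
        }

    At 10-50 files of around 100 KB each and roughly 100 updates per day, either approach is light work for the server; the zip mainly saves HTTP round-trips and a little bandwidth.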

    Read the article

  • How to register a .cn domain

    - by user359650
    I would like to register a .cn domain. I found the pages below, which list the officially accredited registrars:

    - based in China: http://www.cnnic.net.cn/html/Dir/2007/06/05/4635.htm
    - based outside China: http://www.cnnic.net.cn/html/Dir/2007/06/25/4671.htm

    Needless to say, the registrars based in China have their websites in Chinese, which effectively prevents me from using them. There are 11 overseas registrars and I'm wondering which one I should be using. If you look at the big names, they all have their .cn registered (facebook.cn, microsoft.cn...), and whois only shows a Sponsoring Registrar which doesn't seem to be offering domain registration services directly to consumers:

        $ whois facebook.cn
        Domain Name: facebook.cn
        ROID: 20050304s10001s04039518-cn
        Domain Status: ok
        Registrant ID: tuv3ldreit6px8c7
        Registrant Organization: Facebook Inc.
        Registrant Name: Facebook, Inc.
        Registrant Email: [email protected]
        Sponsoring Registrar: Tucows, Inc.

    http://www.tucowsdomains.com/ only seems to offer domain-related help but not registration.

        $ whois microsoft.cn
        Domain Name: microsoft.cn
        ROID: 20030312s10001s00043473-cn
        Domain Status: clientDeleteProhibited
        Domain Status: clientUpdateProhibited
        Domain Status: clientTransferProhibited
        Registrant ID: mmr-44297
        Registrant Organization: Microsoft Corporation
        Registrant Name: Domain Administrator
        Registrant Email: [email protected]
        Sponsoring Registrar: MarkMonitor, Inc.

    https://www.markmonitor.com/ seems to offer registration but only to "big" customers, and definitely not to consumers like me via a web portal. Q: How do big companies register their .cn domains? How should consumers like us do it?

    Read the article

  • SEO: disallowing Google from indexing forms in iframes or not?

    - by Marco Demaio
    I usually place forms in iframes (i.e. order forms, request-assistance forms, contact forms, etc.). Just the forms; I never place other content or pages in iframes. From an SEO point of view, would you exclude these forms from being indexed/crawled by Google or not? I mean, my forms hardly ever contain keywords/keyphrases, and I obviously place empty title/meta description tags in the pages shown in the iframes, because those titles are never displayed in the browser title bar. So I'm wondering what the point is of letting Google index them. Moreover, I think these form pages might suck PageRank away from all the other pages that are more valuable for SEO. If your answer is "yes, I would exclude them from indexing", would you simply use robots.txt to exclude them? Thanks!
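
    If you do decide to exclude them, a sketch of the robots.txt route, assuming the iframe form pages all live under one folder (the path is illustrative):

        User-agent: *
        Disallow: /forms/

    Note that Disallow only stops crawling; a <meta name="robots" content="noindex"> tag on the form pages themselves is the stricter option if you want them kept out of the index entirely.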

    Read the article

  • Is a merchant account a requirement for a website to take payments?

    - by calum
    I have had a quick look but couldn't see anything related. Basically, if we were to accept payments for events on our website via PayPal (essentially a Buy It Now! button), as a business, do we need a merchant account, or will a regular bank account be acceptable? I may have some confusion in terms. My understanding is that you need a merchant account to accept credit card payments, but as we are using PayPal, is this necessary? Thank you for any clarification. Disclaimer: I've read "What are some options for taking payments on my website?" but it doesn't explicitly say whether we require a merchant account or not. Thank you.

    Read the article

  • .htaccess / 301 redirection question

    - by John K
    All my WordPress post URLs generate subdirectories with duplicate content, and I do not know what regular expression to use to consistently 301 redirect domain.com/category/post/random-number/ to domain.com/category/post/ and domain.com/category/post/random-number/another-random-number/ also to domain.com/category/post/. Here is an example of my problem:

        http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/
        http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/
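
    A sketch of one possible rule, assuming the stray trailing segments are always purely numeric (adjust the character class if they are not), placed in .htaccess above the standard WordPress block so it runs first:

        RewriteEngine On
        # /category/post/1345257927000/ or /category/post/123/456/  ->  /category/post/
        RewriteRule ^([^/]+)/([^/]+)/[0-9]+(/[0-9]+)*/?$ /$1/$2/ [R=301,L]

    It is also worth finding out what is generating those numeric URLs in the first place (often a plugin or a malformed internal link), since the redirect only treats the symptom.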

    Read the article

  • Job Search Engine URL Structure Issue [closed]

    - by Justin
    Possible Duplicate: What is the best structure of an SEO-friendly URL?

    I am working on a job board, and I'm trying to figure out a good design for the URL structure. Some things that I have found through research:

    - 100-150 characters long is ideal
    - 3-5 words in your URL, according to Matt Cutts
    - Use .htaccess to force clean URLs
    - Do not duplicate data (important)
    - Clean and precise, describing the content
    - Use hyphens

    On the homepage, I try to detect the user's location based on IP, but this isn't always accurate or reliable. So until they put in their city/location, I can't always use this structure, but it is potentially workable. For searching, a form posts to a results page: domain.com/jobs/[city]/[search], e.g. domain.com/jobs/toronto/sales manager/ or domain.com/search/jobs/toronto/sales manager/. Or do I remove the word "jobs" and just use "search"? I'm trying to keep good search terms in the URL, but also keep it clean and concise. Can someone give me some feedback and thoughts on the 'why'...

    Read the article

  • Webmasters hentry error and authorless pages

    - by Ben Racicot
    Within Google Webmaster Tools, under Search Appearance > Structured Data, I'm getting a series of errors: "Error: Missing required hCard 'author'", and most of my 44 errors have:

    - Missing: Author
    - Missing: entry-title
    - Missing: updated

    There seems to be no clear explanation of these errors. Either these classes exist without their required nested classes, or they are expected to exist because of something else, possibly an itemscope or itemtype=''. The question: how do you specify with rich snippets that the page is about a location and there is no human author?
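
    For reference, a sketch of the hAtom/hCard markup those warnings are checking for (the class names are the real microformat ones; using the site name as the author is a common workaround rather than an official way to declare "no author"):

        <div class="hentry">
          <h1 class="entry-title">Page title</h1>
          <time class="updated" datetime="2013-05-01T10:00:00Z">May 1, 2013</time>
          <span class="author vcard"><span class="fn">Example Site</span></span>
          <div class="entry-content">Content about the location.</div>
        </div>

    The warnings often appear simply because the theme already puts the hentry class on every page; removing that class from non-article pages, or marking the page up as a schema.org Place instead, are other options sometimes suggested.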

    Read the article

  • Possible to use a surrogate to buy a .it domain?

    - by Matthew Reinbold
    I'm a US citizen interested in buying an Italian TLD (*.it). However, those domains can only be registered by EU citizens or residents, or by businesses with a registrant who is an Italian citizen and resident. Are there companies that provide a 'surrogate'-like service, where they fulfill the requirements for registration but I can administer the domain properties? What are they, and what can I expect to pay for the middleman? Or am I a horrible person for even considering 'circumventing' the intent of the restriction?

    Read the article

  • Online iPad 1&2 emulators give different results compared to the real thing

    - by Systembolaget
    I'm designing a centered website (jQuery Isotope). The sandbox is here. I have used some online iPad 1&2 emulators to test how the site is viewed on these devices. Then, I managed to get hold of the real thing. Result: on real iPads, the site is centered and the layout adjusts automatically as expected. In the online iPad emulators, the site is not quite centered and additional Isotope elements are squeezed in. Of course, I trust the real thing more than the online emulators, but why is this happening? To me, it feels like website testing with online emulators is not so reliable after all. If this question is wrong here, please move it or tell me where it should go; SO is about programming, and this question isn't. Thanks!

    Read the article

  • Google Analytics: How long does it take users to trigger an event

    - by Stephen Ostermiller
    I implemented Google Analytics event tracking on my currency conversion website. The typical user flow is:

    1. The user lands on a page about two currencies.
    2. The user enters an amount to be converted.
    3. The site shows the user the value in the other currency.
    4. The JavaScript sends Google Analytics a "converted" event when the currency conversion is done.

    Because most of the sessions on my site are single-page, the event tracking is very important for me to know whether users find my page useful. I'm looking for a way to figure out how long it typically takes users to enter a value in the form. I expect that this data would form a bell curve centered around a specific amount of time after page load. If I can't get a graph, I could make do with a median value. I would like to be able to use this as a core metric for usability testing. Is there a way to get this information out of Google Analytics?
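
    One option (a sketch, assuming the classic asynchronous ga.js tracker and illustrative category/variable names) is to send the elapsed time as a user timing alongside the existing event; Analytics then reports averages and a distribution for it under Site Speed > User Timings:

        var pageLoadedAt = new Date().getTime();

        function onConverted() {
            var elapsedMs = new Date().getTime() - pageLoadedAt;
            _gaq.push(['_trackEvent', 'Converter', 'converted']);
            // arguments: category, variable, time in ms, optional label, optional sample rate (%)
            _gaq.push(['_trackTiming', 'Converter', 'time-to-first-conversion', elapsedMs, undefined, 100]);
        }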

    Read the article

  • Tracking Redirects Leading to your site

    - by Bill
    Is there a way in which I can find out if a user arrived at my site via a redirect? Here's an example: there are two sites, first.com and second.com. Any request to first.com does a 302 redirect to second.com. When the request arrives at second.com, is there any way to know it was redirected from first.com? Note that in this example you have no control over first.com. (In fact, it could be something bad, like kiddieporn.com.) Also note that, because it is a redirect, it will not be in the HTTP referrer header.

    Read the article

  • What measures can be taken to increase Google indexing speed for a given newly created page?

    - by knorv
    Consider a website with a large number of pages. New pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. How do I get important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes? Or more generally: what options are available to try to get Google to index a specific newly created page as soon as possible?
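
    For reference, a sketch of the most common levers, with illustrative URLs: keep an XML sitemap that is updated the moment the page is published, with an accurate lastmod, and notify Google when it changes (the ping endpoint below has been supported historically, though such endpoints can be retired):

        <!-- entry appended to http://www.example.com/sitemap.xml at publication time -->
        <url>
          <loc>http://www.example.com/something/important-page.html</loc>
          <lastmod>2012-06-15T12:00:00+00:00</lastmod>
        </url>

    Then request a recrawl of the sitemap, for example by fetching http://www.google.com/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml. Linking the new page prominently from frequently crawled pages (such as the home page or a feed) also tends to shorten discovery time.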

    Read the article
