Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 128/216 | < Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >

  • Terms and conditions for a simple website

    - by lonekingc4
    I finished building a website for an online chess club that I am a member of. This is my first website. The site has a blogging feature, so members can log in, write blog posts, and comment on other posts. Membership is limited to users of an online chess site (freechess.org), and any member of that site can join this site as well. I was wondering: do I need to put up terms and conditions for my new website? If so, is there a model I could use? I searched and found some models, but they are all for big sites that have e-commerce etc.

    Read the article

  • Virtual Pageview Goal Funnel Not Tracking Correctly

    - by cphill
    I have an AJAX form that has three stages:

    1. The landing page, where a user fills out a form, selects one of three question sets, and clicks "begin assessment".
    2. The assessment page, where users fill out questions relating to the question set they selected on the landing page.
    3. The results page, which shows whether they are at High Risk or Low Risk.

    Since this is an AJAX form that does not open a new page for each step of the process, I implemented a virtual pageview that fires on the load of each step of the form:

    /form/begin-assessment
    /form/assessment/* (where * is one of three virtual pageviews, /one, /two, or /three, depending on which question set the user selected)
    /form/finished-assessment

    I have set up three separate goals to track user progress through each step of the form assessment. Here is my goal setup:

    Goal type: Destination
    Destination: /form/finished-assessment
    Funnel: On
    Step 1: /form/begin-assessment (Required: Yes)
    Step 2: /form/assessment/one (replace /one with /two or /three and you have my two other goals)

    Now my goals record the correct data in the first step and show the completions at the destination, but the second step does not show any drop-offs; it shows the same data as the destination. Any ideas where my goal setup went wrong?
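
    For context, a minimal sketch of how such virtual pageviews are typically fired, assuming the classic asynchronous Google Analytics tracker (_gaq) is loaded; the paths are the ones from the question:

        // Sketch only: fire a virtual pageview at each AJAX step, assuming the
        // classic asynchronous Google Analytics tracker (_gaq) is present.
        function trackStep(virtualPath) {
            _gaq.push(['_trackPageview', virtualPath]);
        }

        trackStep('/form/begin-assessment');    // step 1: user clicks "begin assessment"
        trackStep('/form/assessment/one');      // step 2: /one, /two, or /three per question set
        trackStep('/form/finished-assessment'); // step 3: results page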

    Read the article

  • Facebook shared links not opening (not redirecting correctly) [on hold]

    - by Hammad
    I have a problem, and after hours of searching I have found nothing useful to counter it. I have a website, and that website has an RSS feed attached to its Facebook page. As I post content to the website, it also appears on the Facebook page. But people on my page complain that my posts don't open when they click the link to read the details. Since I am a Chrome user, I didn't notice this happening for months. But when I checked in Firefox and Internet Explorer, I found the shared links actually don't open in those browsers. They only work in Chrome; that is, they are properly redirected in Chrome and not in Firefox and IE. Whenever I click on links of posts on my page through IE or Firefox, the URL simply does not redirect to my website, and I see nothing, as if I am not connected to the Internet. Examining the URL, I see this: http://www.facebook.com/l.php?u=http%3A%2F%2Fbit.ly%2F112NVO9&h=EAQHDgTUv&s=1 which suggests Facebook is not redirecting the links properly. I also use the link-shortening service bit.ly to shorten my shared links, but I have checked that the same problem exists even if I don't shorten the links. And I am not alone: even links to the tech giant mashable.com don't open in IE and FF from Facebook; they only open in Chrome. From a mobile phone the links don't open (are not redirected properly) even in Chrome. Can anyone tell me what the issue is? Not much is written about it on the Internet, as if no one has faced this problem. P.S.: I have checked from different systems; the problem persists.

    Read the article

  • Should I have link rel=next & prev on URLs which have query variables?

    - by user21100
    For example, I have link rel prev & next set up on these pages of products: site.com?page=2, site.com?page=3 (this is my preferred structure, by the way, and I'm trying to get all the ugly URLs that are littered with query variables deindexed, as they are causing duplicate content). So the above URLs are fine, but once a filter to narrow product results is selected, like "price", the URL looks like this: site.com?price[1000-1499]=on, site.com?page=2&price[1000-1499]=on. Right now the link rel prev & next is dynamically added to the header of these pages, but since I am working on getting these query-variable URL pages deindexed, I am wondering if I should get rid of it on those pages? Any thoughts?
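
    For reference, a minimal sketch of the tags in question on the unfiltered series, using the URL structure from the question (on the page site.com?page=2):

        <!-- Sketch only: pagination hints for the unfiltered product listing -->
        <link rel="prev" href="http://site.com/?page=1">
        <link rel="next" href="http://site.com/?page=3">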

    Read the article

  • Custom Title Not Used on SERP [duplicate]

    - by rahstame
    This question already has an answer here: Title tag different from title appearing in Google? I am using the WordPress Yoast SEO plugin, and this is not the first time I have used it; I have used it on four of my sites. Problem: Google is not using my custom-defined title for my homepage. The website is aboveinfranet[dot]com. The wrong title shows if I search for "above infranet solutions inc"; if you open the site itself, it shows the right title, the one I wanted to achieve.

    Read the article

  • Registering domains with Network Solutions

    - by Joel
    A few years ago I registered a domain with Network Solutions. In recent years I've been using cheaper services such as namecheap, powerpipe, etc. Every time I need to renew one of the older domains with Network Solutions, I am surprised at how expensive they are. What is the reason for the price differences between the services? Why should I use a service like Network Solutions if there are so many companies out there that offer domain registration for a very cheap price? Thanks, Meir

    Read the article

  • OpenX API advertiser statistics call [migrated]

    - by Sameer
    I am trying to write a JSP application that establishes an XML-RPC connection with the OpenX API and returns the values. I am using OpenX API v1. Here I get the dates through a datepicker and then convert them to Date objects:

        String dateStr = request.getParameter("datum1");
        SimpleDateFormat formater = new SimpleDateFormat("dd-MM-yyyy");
        Date result1 = formater.parse(dateStr);

        String dateStr2 = request.getParameter("datum2");
        SimpleDateFormat formater2 = new SimpleDateFormat("dd-MM-yyyy");
        Date result2 = formater2.parse(dateStr2);

    Then I call the Advertiser Daily Statistics service provided by the OpenX API, passing (sessionID, advertiserID, from date, to date):

        Object[] objects1 = (Object[]) client.execute("advertiserDailyStatistics", new Object[]{sessionId, 3, result1, result2});

    Read the article

  • Google PageRank not showing after redirecting www to non-www?

    - by muhammad usman
    I have a fashion website. I had redirected my non-www domain to the www domain, and my preferred domain in Google Webmaster Tools was the www version. Now I have redirected www to non-www and have changed my preferred domain as well. Now Google PageRank is not showing for even a single page. Would anybody please help me and let me know if I have done something wrong? Below is my .htaccess redirect code (with the www-to-non-www rule corrected to match the request path, and placed before the front-controller rule so it actually fires):

        RewriteBase /

        # Canonical host: send www requests to the bare domain
        RewriteCond %{HTTP_HOST} ^www\.deemasfashion\.com$ [NC]
        RewriteRule ^(.*)$ http://deemasfashion.com/$1 [R=301,L]

        # Strip /index.html and /index.htm from direct requests
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
        RewriteRule ^index\.html$ http://deemasfashion.com/ [R=301,L]
        RewriteRule ^index\.htm$ http://deemasfashion.com/ [R=301,L]

        # Front controller: route everything else through index.php
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]

    Read the article

  • Marketing for Scheduled Online Events

    - by JT703
    Last year I started working with a team on our first major web project (We, the Pixels). I believe the idea is very solid, but it has a hard requirement: a group of people must be on the site for the randomly scheduled events, and we are having problems getting people to come and stay for these events. What is the proper marketing approach to bring people to the site for these events? We have recently done the following in an attempt to fix the problem:

    - Added email notification when new events are created
    - Added privileges based on rank
    - Added text throughout the site encouraging users to set up events in the future, so other users have time to see that they exist
    - Gotten involved with other communities that would find the site interesting, in order to promote (market) the site
    - Advertised using Google AdWords

    Is there a standard marketing approach for a case such as this?

    Read the article

  • How do you determine whether a website is a scam? [closed]

    - by Tom
    What's the best way to determine if a website is a scam? For example, at first sight (no pun intended) the following website seems to be legitimate, but the price of the product is suspiciously low (all the reviews point to an RRP of approximately £1000): http://www.maxiargos.com/index.php/asus-zenbook-ux31e-dh72-13-3-inch-thin-and-light-ultrabook-silver-aluminum.html

    Another indication is the lack of SSL for the checkout page, and the lack of useful information in the WHOIS record:

    Registration Service Provided By: TMDHOSTING
    Contact: +1.8665325635
    Domain Name: MAXIARGOS.COM
    Registrant: PrivacyProtect.org Domain Admin ([email protected])
    ID#10760, PO Box 16, Note - All Postal Mails Rejected, visit Privacyprotect.org, Nobby Beach, null, QLD 4218, AU
    Tel. +45.36946676
    Creation Date: 09-Nov-2011
    Expiration Date: 09-Nov-2012
    Domain servers in listed order: ns1.tmdhosting410.com, ns2.tmdhosting410.com
    Administrative, Technical, and Billing contacts: identical to the Registrant above

    Read the article

  • What does Enable/Disable mean in Bing's URL Normalization feature?

    - by DisgruntledGoat
    I'm in Bing Webmaster Tools, under Index > URL Normalization. Many parameters are listed in the table, with three other columns: Status, Source, and Date. The "Source" column says "Webmaster" where I have added parameters, and "Bing" where I assume the parameter has been auto-detected. "Date" is probably the last date the parameter was detected. I've tried searching the help files, but I can't find what the Status column means. The top of the page says: "This feature allows you to specify query parameters for Bing's crawler to ignore." But it's not clear whether "Enable" or "Disable" relates to this, and if so, what happens in each case. Does anyone know?

    Read the article

  • How can I alias domains to subdomains?

    - by user745668
    I have a main site with a bunch of subdomains created. Each subdomain is a blog, and I want each blog to have its own domain name, i.e.:

    thisguy.com -> blog1.mainsite.com
    thatguy.com -> blog2.mainsite.com

    I bought the new domains and set up CNAME records as above to alias them to the appropriate subdomains. However, I get my host's "a domain is pointing to one of our servers but we don't know anything about it" landing page. How can I set up these domains as aliases of my subdomains?
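
    For context, a hedged sketch of what usually also has to exist on the server side: the CNAME only points DNS at the server, and the web server must additionally be told to recognize the new hostnames. Assuming an Apache host (the names and paths are the question's placeholders):

        # Sketch only: make the web server accept the aliased domain names.
        <VirtualHost *:80>
            ServerName blog1.mainsite.com
            ServerAlias thisguy.com www.thisguy.com
            DocumentRoot /home/user/public_html/blog1
        </VirtualHost>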

    Read the article

  • How do I insert sharing links in a blog so visitors can post to Twitter, Facebook, and other social networks?

    - by Andry
    I am developing a blog using ASP.NET, but I guess tech details like this are not important here. My aim is to insert into every post I create those nice buttons that let visitors quote or post a link to the blog entry on their own social network accounts. How can I do this? I guess it also depends on the social network I want to use. Let's say, for now, that I want links for Facebook, Twitter, and Google+ circles. Thank you.
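
    For illustration, a minimal sketch using the networks' public share endpoints; the post URL is a placeholder, and each link simply passes the entry's URL as a query parameter:

        <!-- Sketch only: plain share links; http://myblog.example/post-1 is a placeholder -->
        <a href="https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Fmyblog.example%2Fpost-1">Share on Facebook</a>
        <a href="https://twitter.com/intent/tweet?url=http%3A%2F%2Fmyblog.example%2Fpost-1">Tweet</a>
        <a href="https://plus.google.com/share?url=http%3A%2F%2Fmyblog.example%2Fpost-1">Share on Google+</a>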

    Read the article

  • Why are subdomains of Blogspot/WordPress-like sites treated as different domains or sites?

    - by Thedijje
    As far as I know, maps.google.com and mail.google.com both come under the same domain; they are subdomains. The entire web treats these subdomains as part of the main domain, and they share the same Alexa rank, PageRank, and so on. On the other hand, take a look at blogspot.com, wordpress.com, or webs.com: blogs or websites under those domains are treated as different sites. Each gets its own URL, PageRank, and Alexa rank. There are millions of subdomains under those few domains, with almost the same IP address, hosting, and CMS, so why are they treated as different domains?

    Read the article

  • WordPress is truncating and replacing category name and description in version 2.7.1 [migrated]

    - by Jayapal Chandran
    Version: WordPress 2.7.1. I recently created a new category in a WordPress blog, with the long category name "win32 api programming" and the description "windows api programming and win32api programming". But after saving it, the name and the description changed to name: "win32 api" (instead of "win32 api programming") and description: "Win32 api snippets". I don't want to upgrade my WordPress, because I don't like some new features in the new releases. What should I do to keep my actual strings (names) intact?
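
    For reference, a hypothetical way to inspect what WordPress actually stored, assuming the default wp_ table prefix:

        -- Sketch only: check the saved category name and description
        -- (assumes the default wp_ table prefix).
        SELECT t.name, tt.description
        FROM wp_terms t
        JOIN wp_term_taxonomy tt ON tt.term_id = t.term_id
        WHERE tt.taxonomy = 'category' AND t.name LIKE 'win32%';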

    Read the article

  • No Obvious Answer - Query-Strings and Javascript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on the page (the URL doesn't change as the user filters). Now, from many other parts of the site I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product, hiding the DOM nodes for all the other products. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time the criteria change. It would be great to move the ?filter=... criteria into POST data instead, but I don't know how to do that with a link...
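
    One hedged sketch of keeping the URL in sync without a reload, assuming a browser that supports the HTML5 History API; applyFilter is a hypothetical stand-in for the page's existing filtering code:

        // Sketch only: update the query string as the filter changes, without reloading.
        // applyFilter() is a hypothetical stand-in for the existing filtering function.
        function onFilterChange(filter) {
            applyFilter(filter);
            if (window.history && history.replaceState) {
                history.replaceState(null, '', '?filter=' + encodeURIComponent(filter));
            } else {
                location.hash = 'filter=' + encodeURIComponent(filter); // older browsers
            }
        }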

    Read the article

  • Is it possible to hide some topics in phpBB forum?

    - by Martin
    I would like to be able to hide some of the topics in a phpBB-based forum temporarily from the users - perhaps with the exception of administrators and moderators. I am using the forum for my students and I have solutions of some problems from exams and tests there - posted either by me or by some of the students. I plan to use the same or very similar problems during the next academic year. So I don't want the students to see them, but I want to make the solutions visible again after the tests; so that I do not have to post solutions to same questions again. Is something like this possible? Is this a standard part of phpBB, or do I need to install some modification(s) for it?

    Read the article

  • Password protect an aliased virtual directory

    - by Jason
    I have a main domain being hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have:

    http://example.com/ pointing to the main hosted files.
    http://example.com/mydir pointing to the subdomain files.

    This is achieved by an httpd.conf include from the main domain section that sets an alias:

        Alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me; I would expect an error to return an error. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir again delivers from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported for the .htaccess file in the Apache error log, so I assume the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
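
    For comparison, a hedged sketch of the kind of <Directory> block that typically has to accompany an Alias before Apache will honour .htaccess files under the aliased path (the paths are the question's placeholders):

        # Sketch only: pair the Alias with a <Directory> block that permits
        # .htaccess overrides for authentication.
        Alias /mydir /path/to/subdomain/files/
        <Directory "/path/to/subdomain/files/">
            AllowOverride AuthConfig
            Order allow,deny
            Allow from all
        </Directory>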

    Read the article

  • How to copy or replicate a complex website to local files and then modify it

    - by Andre Chenier
    I am not good at designing the visual side of a website. I found a website that I'd give 10 out of 10, because its functionality suits my aims and it also seems very aesthetic. I know HTML, PHP, MySQL, and some degree of CSS; I don't know JS, Ajax, or jQuery. So I want to replicate this website (save it completely) locally and then modify it (content, colors, icons, etc.). I saved the website in Chrome and IE, but after opening the saved copy from my local folder, I saw an ugly, non-working site. My aim is to understand the function of the parts I don't know; for example, what happens as the result of deleting one of the page's JS files. Since the page is complex, it has lots of CSS and JS files to download, and I don't want to deal with them manually. Is there an alternative, easy way to get the web page completely onto my local machine so that it also works like a charm locally? Regards
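
    For what it's worth, a minimal sketch of a command-line approach with GNU wget, which downloads a page along with its CSS/JS/image assets and rewrites links for local browsing (the URL is a placeholder):

        # Sketch only: mirror a site with its assets and fix links for local use.
        wget --mirror --convert-links --page-requisites --no-parent http://example.com/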

    Read the article

  • Are the famous websites handmade? [closed]

    - by Mithun Chuckraverthy
    I'm a newbie in web design. I have always wanted to build a professional-quality website by myself, so I started learning HTML/XHTML and CSS for presentation, and JavaScript and PHP/MySQL for scripting. I wonder: do the developers of famous websites design them by hand? Or have they found a better approach using software? If so, can you tell me what it is? (By the word "famous" I mean any website that is liked by millions of people all over the world, like Google, Facebook, etc.) Thanks in advance!

    Read the article

  • Will my site containing duplicate content be accepted into AdSense?

    - by user5858
    I have a new site, just over 6 months old, with 50 unique visitors daily. It has a good amount of duplicate pages, which are not copyrighted. For example, I've copied related companies' product FAQs "as is" onto the site; moreover, I'm not supposed to modify a company's product FAQs. I fear my login may be banned by AdSense if I submit it. So I want to know:

    1) Whether I can submit it for an AdSense account
    2) Whether Google can penalize me, and in what way
    3) How Google would come to know that the duplicate content on my site is not copyrighted

    Read the article

  • Web development and tips for building a website and the advantages of using HTML 5 in the site

    - by Siddarth
    I am trying to make a website for my college. The program starts on Jan 13 and we get 15 days to develop a running site; the best site will become the college site. I am participating. Until now I used to take part in C and C++ contests, and I even won a few, but for the last 2 months I have been really into web development. I knew HTML long ago and recently brushed up on it, learnt JavaScript from "JavaScript & jQuery: The Missing Manual" (sorry for not adding the link), and recently bought "PHP and MySQL Web Development". I am going along fine with it, but there are still a lot of pages to cover in that book. After this, what do I need to know? Is Ajax one thing to concentrate on? What else do I need to get this project up and running? Can someone let me know the tricks of this trade and everything it takes to build a site like this? Right now I am good with JavaScript, HTML, and CSS, and that's it; I am also studying HTML5 and CSS3, which are pretty fast and neat. The site is a college website that includes student profiles, where students register their info with their college ID number, and pretty much that's it. Think of it as a college site plus a social networking site for students, where they can upload their pics, videos, PDF books, etc.

    Read the article

  • Moving large amounts of data between shared hosts

    - by Bryan M.
    I recently acquired a client who is a photographer and was interested in moving web hosts, since his current host had threatened to throw him off due to CPU spiking. The migration went fairly easily, with about 350 MB of website and media files. Then I discovered about 60 GB of client galleries he had failed to mention. I am unable to move this much data myself, since I'm capping out at about 20 kb/s on the FTP connection. Has anyone encountered a situation where they needed to migrate this much data between cheap hosts? Should we contact the hosting companies about this (he is moving from Westhost to MediaTemple)?
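
    One hedged sketch of a server-to-server approach that avoids the slow client-side FTP hop entirely, assuming at least the new host allows SSH (hostnames and paths are placeholders):

        # Sketch only: run on the new host to pull the galleries straight from
        # the old host; resumable if the connection drops.
        rsync -avz --partial --progress olduser@old-host.example:/home/olduser/galleries/ ~/galleries/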

    Read the article

  • Google won't display site

    - by Markasoftware
    My website (markasoftware.getenjoyment.net) doesn't seem to be indexed properly by Google (I haven't tried other search engines). When I type in the URL of my site, it appears right at the top of the results, like it should. When I type in the entire contents of the title, however, the site doesn't appear! The title is quite long (Thermonuclear War Game Online: Thermonuclear War By Mark) and it has little (if any) competition. Have I been punished by Google for some reason, or is it something else? I have received zero hits from search engines. Can someone tell me why my site doesn't appear?

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents and serve all requests from those user-agents a 400 or similar, adding GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird for an abusive scraper to use a distinctive user-agent.)
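
    For illustration, the banned-user-agent idea as a hedged middleware sketch, shown with Node-style request/response objects for concreteness; it is not Heroku- or Bamboo-specific, and the list contents are from the question:

        // Sketch only: reject requests from known-abusive user-agents.
        var BANNED_AGENTS = [/GSLFBot/i]; // extend as needed

        function blockScrapers(req, res, next) {
            var ua = req.headers['user-agent'] || '';
            for (var i = 0; i < BANNED_AGENTS.length; i++) {
                if (BANNED_AGENTS[i].test(ua)) {
                    res.statusCode = 403; // or 400, as the question proposes
                    return res.end();
                }
            }
            next();
        }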

    Read the article
