Search Results

Search found 9728 results on 390 pages for 'meysam pro'.


  • Where can I learn about managing domain names for my websites? [closed]

    - by Shahbaz
    [I originally asked this question on serverfault.com, where it was closed as 'out of scope.' Hopefully it is appropriate for this forum.] I am a developer who doesn't understand how to effectively manage Internet domain names. Say I registered a name with Namecheap and host a website on Linode. Now what is an A record? What is a name server, and do I host it with Namecheap or Linode? Why would I pay Amazon when others are free? Does any of this matter in terms of website latency or reliability? I feel like a script kiddie, copying and pasting others' configurations and hoping they work. Is there a book or other resource that explains all this? I know Amazon is full of books about DNS, but afaik they are about setting up DNS servers for local networks, not the Internet. P.S. To emphasize, I'm asking for books or long write-ups which explain this to technically competent people who just haven't had to think about the role of commercial registrars, name servers, commercial hosts, commercial websites, and how all the parts play together on the real Internet (not local networks).
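
    For what it's worth, a minimal sketch of the two record types the question mentions, in zone-file notation (the names below are hypothetical, and 203.0.113.10 is a documentation-reserved IP):

        example.com.      IN  NS  ns1.linode.com.    ; name server: the machine that answers DNS queries for this zone
        example.com.      IN  A   203.0.113.10       ; A record: maps the name to the web host's IPv4 address
        www.example.com.  IN  A   203.0.113.10

    In a typical registrar-plus-host split, the registrar (e.g. Namecheap) only records which name servers are authoritative for the domain; the records themselves can then be managed at either company.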

    Read the article

  • Changing hosts but keeping old emails [closed]

    - by LDaniel
    Possible Duplicate: Change host / keep emails OK, here's the situation... I'm trying to transfer my domain and email address to a new hosting service, and I would like to start using Google's domain apps for my email, etc. My email address is currently on a webmail-type platform, and when I move my domain and start using Google domain apps I would also like to keep the old emails and have them imported to the email address on Google. Note: the email address will be the same on both hosts, for example [email protected]. So it's keeping the same address between two different systems. I've been Googling how to do this for a while, and none of the migrate options that come up appear in the Google inbox settings or domain apps config settings. Any help is greatly appreciated.
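
    One common route for this kind of move, assuming the old host exposes IMAP, is the imapsync tool; a hedged sketch (host names and credentials are placeholders):

        imapsync --host1 mail.oldhost.example --user1 info@mydomain.com --password1 'oldpass' \
                 --host2 imap.gmail.com       --user2 info@mydomain.com --password2 'newpass' --ssl2

    Google Apps also has its own mail migration tools in the admin console, but those generally ask for the source server's IMAP details rather than anything in the inbox settings, which may be why they weren't turning up.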

    Read the article

  • Frequency to submit sitemap to search engines

    - by user577691
    I have gone live with my site, and being new to search engines and SEO, I'm not sure what the best way to handle sitemap.xml is. I have created sitemap.xml and submitted it to Google (using Webmaster Tools), Yahoo/Bing (using Bing Webmaster), and Ask.com. Now, since the site will be updated 2-3 times per week, I am not sure what the best approach is. Do I need to submit sitemap.xml again? If I need to submit sitemap.xml again and again, how often should I submit it? Please suggest the best approach.
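
    In general, search engines re-fetch a sitemap on their own once they know about it, so manual resubmission isn't usually needed. Two low-effort options (URLs hypothetical): reference the sitemap from robots.txt so every crawler finds it,

        Sitemap: http://www.example.com/sitemap.xml

    or have your publishing script ping Google whenever the sitemap changes, e.g. by fetching http://www.google.com/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml after each update.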

    Read the article

  • Hiding a particular page so search engines don't index it

    - by user702325
    I have a page which I don't want search engines to index or crawl. I am not sure what I should put in my robots.txt file to tell search engines not to crawl/index that page. The page itself is generated dynamically and has no predefined template; all I know is its URL, which is pre-defined and will remain unchanged. I have this page at, say, www.mysite.com/my-nonindexable-page/ Please suggest what I should do to achieve this. I am using WordPress for my website.
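
    A hedged sketch of the two usual options, with the path taken from the question. Blocking crawling in robots.txt:

        User-agent: *
        Disallow: /my-nonindexable-page/

    Note that robots.txt only stops crawling; to keep the URL out of the index even when other sites link to it, the stronger signal is a robots meta tag emitted in that page's head (many WordPress SEO plugins can set this per page):

        <meta name="robots" content="noindex, nofollow">

    The two shouldn't normally be combined for the same URL, since a page blocked by robots.txt is never fetched and the meta tag is never seen.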

    Read the article

  • How difficult is it to develop apps for Android and iOS? [on hold]

    - by netsetter
    I'm an experienced web developer in PHP, HTML, JavaScript, MySQL, and CSS, and I run communities where people can register, log in, and do some stuff. Now more and more people are requesting an app. I told them that I have neither the time nor the experience to develop apps with many functions for such complex communities, but then the users told me what would be enough for them, and this already sounds simpler to me: an app to install on Android/iOS just to be able to log in (plus auto-login), so they always appear online in the community whenever they have an Internet connection; then only one function, like a counter of new activities regarding their user account (new messages, new replies, etc.); and if they tap on the app, a browser window opens to read the info at the main website. So, what do you think: would it be a big undertaking to develop such an app for the members? Is there a big difference between developing for Android and iOS? And how do you test the app if you don't have an Android or iOS phone, for example?

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

        User-Agent: *
        Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results. P.S. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them as well.
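
    The asker's self-answer isn't shown here, but the mechanism is well documented: robots.txt prevents crawling, not indexing, so Google can still list a disallowed URL it discovers through external links (the title then comes from the anchor text and, in this case, the harvested mailto: target). One hedged fix is to allow crawling of the redirector and have it emit a noindex header instead, e.g. in the PHP script:

        <?php
        // let Google fetch the URL, but tell it not to index it
        header('X-Robots-Tag: noindex');
        header('Location: mailto:someone@example.org', true, 301);
        ?>

    X-Robots-Tag is honored by Google and Bing; support among other engines varies.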

    Read the article

  • Re-Route Mail to a port other than 25

    - by Ken
    Is there a way to route mail to another port? I have an email account attached to my laptop that I'd like to be able to send and receive mail from. Due to mobility, I'll be passing through various networks that will probably block this port. My dynamic DNS provider allows me to use web forwards for MX domains; is this possible, where I can web-forward to a domain:port managed by my DNS provider as I traverse networks? If not, is there another way? Of course I could use webmail or relay forwarding from my home server, but that's not geeky enough.
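
    For context on why an MX-based forward can't carry a port: an MX record holds only a priority and a host name, and SMTP peers always connect to port 25 of that host. A sketch in zone-file notation (names hypothetical):

        mydomain.example.   IN  MX  10  mail.mydomain.example.   ; priority and host only; there is no port field

    This is why the usual workarounds operate at the mail software level (e.g. the mail submission port 587 for sending, or a relay you control) rather than in DNS.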

    Read the article

  • Help with cron syntax

    - by Randy
    I need to set up a cron job on my web host. The documentation for my web app reads as follows: "you will need to create the following cron job: /public_html/cake/console/cake -app /public_html/app master". Also, I want any output written to a log file. My host's documentation says this: "You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job, you can redirect the command's output to /dev/null like this: mycommand > /dev/null 2>&1". Can someone help me write the cron job? I don't know the syntax at all. Thanks for the help!
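
    A hedged sketch of a crontab entry that runs the command once an hour and appends both stdout and stderr to a log file (the schedule and log path are assumptions; adjust to taste):

        0 * * * * /public_html/cake/console/cake -app /public_html/app master >> /home/youruser/cake-master.log 2>&1

    The five leading fields are minute, hour, day of month, month, and day of week; ">> file 2>&1" appends output to the file instead of mailing it to you.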

    Read the article

  • How to find a good photo gallery for my website?

    - by Roflcoptr
    For my website I'm searching for a really simple gallery module that looks like the one used by Dropbox, but I'd like to have two additional features: allow visitors to make comments, and display the number of hits for a photo. I have Googled a lot for such galleries but couldn't find any that really matched my requirements. Could someone recommend a simple, good-looking gallery that fulfills these requirements?

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT I've never used the question itself to thank people for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs. In less than 24 hours I was able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
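
    A minimal sketch of the kind of custom 404 logger described in the edit, in PHP (the paths and log format are hypothetical, not the asker's actual script):

        <?php
        // custom 404 handler: return a real 404 status, then record who asked for what
        header('HTTP/1.1 404 Not Found');
        $ua   = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
        $line = sprintf("%s %s %s \"%s\"\n",
            date('c'), $_SERVER['REMOTE_ADDR'], $_SERVER['REQUEST_URI'], $ua);
        file_put_contents('/var/log/suspect404.log', $line, FILE_APPEND | LOCK_EX);
        // ... then render the normal 404 page for legitimate visitors
        ?>

    Cross-checking the logged IPs against a DNSBL such as Spamhaus can then be automated or done by hand, as the asker did.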

    Read the article

  • Phishing attack: where do I start the cleanup?

    - by Suz
    I'm a newbie webmaster. I've got a domain and a site... and no clue about the web (I'm OK with files and programs...). I got a message from Google that my site is a possible phishing site, with a link to the suspect page: http://www.mydomain.com/~phishers/Paypal/us/Confirm.php Needless to say, I didn't put that up. Can someone point me to a good tutorial on what to do now? I'd like to figure out what happened so I can defend against it the next time around. How do I identify what kind of attack this is? Also, what is the tilde doing in the URL path? I couldn't find any path like this in my hosting account, so I'm not entirely sure how to delete it.
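
    On the tilde specifically: on Apache servers, URLs of the form /~name/ usually come from mod_userdir, which maps them to a directory inside that user account's home rather than your site's document root, which would explain why the path is invisible from your own hosting account. The relevant directive looks like this (a common default, not necessarily your host's exact config):

        UserDir public_html   # makes http://www.mydomain.com/~phishers/... serve files from /home/phishers/public_html/...

    If that is the case here, the compromised account may belong to another user on a shared server, which is something only the hosting provider can clean up.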

    Read the article

  • SEO and domain names - which form?

    - by user984621
    I just want to register a domain name for my Spanish class and wonder which domain name is better for this purpose: learningspanish.com or ilearnspanish.com Which one is better? The domain name must be English, but I don't know what works better for Google and SEO - "learn" or "learning". I would be grateful for your feedback, and sorry if the explanation above is hard to follow (I would try to explain it better). Thank you.

    Read the article

  • IIS and content caching

    - by JayC
    I'm a web developer and administrator of a Windows 2008 R2 cloud instance with IIS 7. I recently made an update to our website, but when I revisited it, the site was displayed with the old styling. I did a hard refresh (Shift + reload button in Firefox) and of course the website displayed as it should. I didn't worry about it until my client had the same issue in Safari. So, my question, in general, is: how do I prevent this from happening again while still allowing some caching of our site? I noticed we did not have content expiration set up on our web server's sites, so I've set that up, but did I really need to? I've also looked at ETags, and, honestly, it's hard for me to know whether or not I should use them. One comment I read somewhere said there isn't really any issue with ETags in IIS (even in web farms)... but I dunno. Anybody have any suggestions, links, info? Thanks.
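
    For reference, content expiration in IIS 7 can also be set in web.config; a hedged sketch (the seven-day max-age is an arbitrary example):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- send Cache-Control: max-age on static files -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>

    A common complement is to version asset URLs (e.g. style.css?v=2) whenever a stylesheet changes, so returning visitors fetch the new file immediately while everything else stays cacheable.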

    Read the article

  • Can I Use A Canonical Tag Instead of a Redirect for Updated Content?

    - by Ewan Heming
    I have some old articles on my blog that get quite a bit of traffic, but are very outdated. I want to remove them from Google's index using the noindex tag, but I'm not sure what the best approach will be to send the same traffic to my new article on the subject without using a redirect (as I want to keep them in my blog archives). I was intending to just put a link at the top of the article pointing to the new one, but was wondering if it was appropriate to use a canonical tag instead; the new article is on the same subject but doesn't contain the same content, so isn't really a copy.
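
    For reference, the two tags being weighed look like this (URL hypothetical):

        <!-- keep the old page but drop it from the index -->
        <meta name="robots" content="noindex, follow">

        <!-- point engines at a replacement; documented for duplicate or near-duplicate content -->
        <link rel="canonical" href="http://example.com/new-article/">

    Since Google describes rel="canonical" as a hint for duplicate content, a page with genuinely different content is usually better served by the noindex tag plus the visible link at the top, as the asker planned.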

    Read the article

  • Verifying that a user comes from a 'partner' site?

    - by matt_tm
    We're building a Drupal module that is going to be given to trusted 'corporate partners'. When a user clicks on a link, they should be redirected to our site as if they were a logged-in user. How should I verify that the user is indeed coming from that site? It does not look like 'HTTP_REFERER' is enough, because it appears it can be faked. We are providing these partner sites with API keys. If I receive the API key as a POST value, sent over HTTPS, would that be a sufficient indicator that the user is a genuine partner-site user?
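
    An API key alone, even over HTTPS, is a bearer secret: anyone who obtains it can mint "partner" users. A common strengthening, sketched here in PHP with hypothetical field names (this is not Drupal's API), is to have the partner sign each request with a shared secret and a timestamp:

        <?php
        // partner side: sign the request
        $secret  = 'per-partner-shared-secret';            // issued alongside the API key
        $payload = $partnerId . '|' . $userEmail . '|' . time();
        $sig     = hash_hmac('sha256', $payload, $secret); // HMAC over the payload
        // POST $payload and $sig to our site over HTTPS

        // our side: recompute the HMAC and reject stale or forged requests
        function partner_request_is_valid($payload, $sig, $secret) {
            $parts = explode('|', $payload);
            $ts    = (int) end($parts);
            if (abs(time() - $ts) > 300) return false;     // 5-minute replay window
            return hash_hmac('sha256', $payload, $secret) === $sig;
        }
        ?>

    With this scheme the secret never travels with the request, and a captured request expires quickly.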

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies who are sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels available on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where Javascript is needed for functionality built into the page, but is unable to initialize until a 404 is returned on the external tracking pixel. (Sometimes up to 30 seconds later) I have spent some time attempting to research how other firms deal with this sort of instability with third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites. Question Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
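
    One guideline that addresses the blocking problem directly is to keep third-party pixels out of the critical rendering path, e.g. by appending them only after the page's own load event; a hedged sketch (pixel URL hypothetical):

        <script>
        // fire the tracking pixel after the page has finished loading,
        // so a hung third-party server cannot hold up the site's own JavaScript
        window.onload = function () {
            var img = new Image();
            img.src = 'http://tracker.example.com/pixel.gif?cb=' + new Date().getTime();
        };
        </script>

    The trade-off is that visitors who leave before the load event may go uncounted, which is worth agreeing on with the agencies involved.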

    Read the article

  • Why did I lose my PageRank after a 301 redirect?

    - by rajesh.magar
    As we all know, Google treats a subdomain as a completely separate domain, so we have to fight for both to get ranked in search results. Right! One of my client's websites had "example.com" and "blog.example.com", so with the idea of keeping everything in one place, we redirected "blog.example.com" to "example.com/blog/". But in the process we lost our PageRank, and I'm still wondering where we went wrong, or whether it will just take some more time to show up. What could be the reason for this? Thanks in advance.
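
    For reference, a typical Apache rule for folding a blog subdomain into a path looks like this (a sketch; the real cause may lie elsewhere in the setup):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/blog/$1 [R=301,L]

    Two things worth checking against this pattern: that every old URL maps to its exact new counterpart rather than to the blog home page, and that the visible toolbar PageRank simply hasn't been recalculated yet, since those updates historically lagged by months.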

    Read the article

  • My site disappeared from Google search; how long does it take to get back?

    - by Sweb Dizajn
    Due to damage by malicious code, Google wrote: "Google Analytics web property: link has been removed from http://swebdizajn.com November 29, 2011. Your Webmaster Tools http://swebdizajn.com site is no longer linked to a Google Analytics web property. Possible reasons are: you are no longer the owner of the site in Google Analytics, and nobody else owns both the site and the property; or another site owner removed the link." After that I restored the site from a backup and then replied to the Google message to tell them that all is well. How long will I have to wait for my site to return to the position it held before?

    Read the article

  • Finding what is causing my site to issue 301 redirects

    - by php-b-grader
    I have a URL which is 301 redirecting, but I cannot find where or how it is happening, and I'd welcome suggestions for other checks to perform. I've checked .htaccess - it's not there. I've checked cPanel's redirects section - it's not there. In WordPress, I have the Redirection plugin active, and it's not there either. Is there anywhere else that could be issuing redirects? I'm at a loss to find out where and how the page is redirecting!
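
    One quick diagnostic is to follow the redirect chain from the command line and read the response headers at each hop (URL hypothetical):

        curl -sIL http://www.example.com/redirecting-page/

    -I asks for headers only, -L follows redirects, and -s silences the progress bar; the sequence of Location headers shows each hop. Also worth knowing: WordPress core itself issues canonical 301s (trailing slashes, www vs non-www, changed post slugs) via its redirect_canonical function, independently of any redirection plugin.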

    Read the article

  • Google Ads Blocking Other Site Elements From Loading

    - by Scott Schluer
    I'm using Google DFP to serve Adsense ads. In Google Chrome (this doesn't seem to happen in other browsers), the page will get stuck loading pagead2.googlesyndication.com. It will just load for hours if I let it. In the meantime, only about half or slightly more of the dynamic images on my page will have completed loading. It appears this is blocking other elements on my site from loading. Any suggestions on what I can do to fix this?
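
    If gpt.js is currently loaded with a plain blocking script tag, one hedged change is to switch to GPT's asynchronous mode so ad fetching no longer holds up the rest of the page (this follows Google's published async pattern; verify against the current DFP docs):

        <script async src="http://www.googletagservices.com/tag/js/gpt.js"></script>
        <script>
        // command queue: ad calls are buffered until the library arrives,
        // so page rendering never waits on googlesyndication.com
        var googletag = googletag || {};
        googletag.cmd = googletag.cmd || [];
        </script>

    With the queue in place, slot definitions and display calls are pushed onto googletag.cmd instead of executing inline.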

    Read the article

  • What is the benefit of the "download will begin shortly" page?

    - by Fammy
    I've noticed many websites that host files for downloads have an interstitial page between the download link/button and the actual start of the download. Terminology on the page may include "Your download will begin shortly. If it does not, try this direct link". What is the purpose of this page? It seems to draw away from the general experience of downloading a file. Is this beneficial for bookmarking? Less experienced users? Analytics?
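
    Mechanically, such pages are often nothing more than a meta refresh (or an equivalent script) pointing at the file, which is why a direct fallback link is always offered; a sketch (URL hypothetical):

        <meta http-equiv="refresh" content="5; url=http://example.com/files/app.zip">

    Because the wrapper is a normal HTML page, it can register an analytics pageview, show ads or donation prompts, and display mirror links while the download starts, none of which a raw file response could do.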

    Read the article

  • How can # of unique visitors be more than # of visits?

    - by Dallas
    I am confused as heck. When looking at a few individual pages, I am seeing weird results that I hope someone can help explain. I am running manual standard reports, not creating or using a widget. For each of the two tests below, the metrics I am using are visits (not visitors) and unique visitors. I am using Page as my primary dimension and Page Title as secondary. I am filtering to include certain pages. Example 1 - Looking at a single page... I see 731 unique visitors and 169 visits. How is this possible? Does Google just flip them around for some reason? Example 2 - Looking at several pages combined... If you examine the timeline below, you can see that the numbers are all over the place. How is it possible to have more unique visitors than visits? What am I missing? I would suspect that if things were just flipped around, then I should still see one line that is always below the other line. Can anyone clue me in to what may be happening here? I also notice that the visits column (just out of view) adds up to the total, but the unique visitors column clearly doesn't.

    Read the article

  • How to tackle archived WHOIS personal data with opt-out?

    - by defaye
    As far as I understand it, it is possible to opt out (in the UK at least) of having your address details displayed in the WHOIS information for a domain, for non-trading individuals. What I want to know is: after opting out, how do individuals combat archived data? Is there any enforcement of this? How many WHOIS websites are there which archive data, and what rights do we have to force them to remove that data without paying absurd fees? If one capitulates to these scoundrels, what is the point of paying for the removal of archived data if that data can presumably resurface in another WHOIS repository? In other words, what strategy is one supposed to take, besides being wiser after the fact?

    Read the article

  • Schema.org for Product Reviews

    - by Lynda
    I have product reviews on a site, and I am adding schema.org markup to them. Here is the code I am using:

        <div class="blockquote-wrap">
          <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
            <span itemprop="reviewBody">Text of the review itself.</span>
            <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
          </blockquote>
        </div>

    That is the entirety of each review. When I test the page using Google's Structured Data Testing Tool, I receive this error: "Error: Incomplete microdata with schema.org." My question is: what required data is missing? I don't see which properties are required on the schema.org page for reviews.
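
    The validator usually objects to a Review that never says what is being reviewed. A hedged sketch of one fix, adding an itemReviewed property (the Product type and name here are placeholders for whatever the page actually reviews):

        <blockquote itemprop="review" itemscope itemtype="http://schema.org/Review">
          <span itemprop="itemReviewed" itemscope itemtype="http://schema.org/Product">
            <meta itemprop="name" content="Name of the reviewed product">
          </span>
          <span itemprop="reviewBody">Text of the review itself.</span>
          <cite><span itemprop="author">Author Name</span>, Location of Author</cite>
        </blockquote>

    Note that it is Google's rich snippet documentation, not schema.org itself, that defines which properties the testing tool treats as required.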

    Read the article

  • Do I need to physically host my website in separate countries for SEO?

    - by noelmcg
    I run an ecommerce store that is hosted in Ireland and ranks OK with google.ie. The market for this company is the Republic of Ireland and the UK. Is it beneficial for me to have a UK-hosted version of my site (.co.uk) to rank higher in google.co.uk (and other localised search engines, of course)? If so, how would I prevent the site from being punished for duplicate content? Thanks in advance for any assistance on the above.
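
    On the duplicate-content point: the usual signal for near-identical regional sites is a pair of rel="alternate" hreflang annotations in the head of each version, so Google treats them as regional variants rather than duplicates; a hedged sketch (domains hypothetical):

        <link rel="alternate" hreflang="en-ie" href="http://www.example.ie/" />
        <link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/" />

    Each country version can also be geotargeted in its own Webmaster Tools profile; the physical hosting location matters far less than the ccTLD and these annotations.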

    Read the article
