Search Results

Search found 9717 results on 389 pages for 'pro'.

Page 66/389

  • Recovering a website

    - by Jessica
    I found my website in the Wayback Machine a few months ago, but when I tried again today it tells me it can't find robots.txt. My old web host stopped paying for their servers back in August without any notice; I was going to do a backup the very day it happened. Is there a way just to recover the text? I have the old IP and the images, but nothing else. None of the big search engines keep caches anymore, and I've already looked in the caches of three of my Macs with nothing to be found.

  • SEO Suggestion For My Blog [closed]

    - by Rana
    I have a programming tutorial blog with decent traffic. However, I'm interested in doing some basic SEO for my blog to get it optimized, and I want to do it myself by learning. I was wondering if the experts here can suggest how I should proceed. Also, if you would review my blog and point out the most common SEO concern that comes to mind first, that would be helpful as well. My blog's URL is as follows: http://codesamplez.com/ Looking forward to your feedback soon. Thanks.

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website, and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging, and dev. Here's my initial thought:

    public_html: live site and git repo
    testing: a mirror of the site used for visual tests (full git repo)
    dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#; then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then git push origin master:ticket#. If the push fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again. Mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge --no-ff ticket# -m "integration test for ticket#"

    Check for conflicts, run the hudson tests, and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; otherwise git push origin. Then cd ../public_html and git checkout -f (the live site should now have the latest dev from ticket#). If the tests fail: revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense, and if not, do you have any recommendations? Is my approach for reverting correct, or is there a better way to say "revert to before commit x"? (See the sketch below.)
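    For the reverting question, a minimal sketch of the usual alternative, assuming the failed integration merge is the most recent commit on the testing repo's master (ticket# is the asker's placeholder):

        # Undo the most recent merge while keeping history intact;
        # -m 1 keeps the first parent (master as it was before the merge).
        git revert -m 1 HEAD

        # Or discard the merge commit entirely (rewrites history, so it is
        # only safe if the merge has not been pushed or cloned elsewhere):
        git reset --hard HEAD~1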

  • Redirecting bad links to the correct links via .htaccess or 301 redirect plugin for WordPress

    - by janoulle
    I'm getting a lot of 404 errors because I recently switched content management systems (Habari to WordPress). I would like to use the 301 redirect plugin for WordPress to capture the offending links and helpfully redirect them to the correct URLs. Here's an example of the type of errors I'm seeing and what they should be redirected to:

    http://janetalkstech.com/admin/publish?id=146 should redirect to http://janetalkstech.com/?p=146
    http://janetalkstech.com/admin/publish?slug=post-title should redirect to http://janetalkstech.com/post-title

    I would greatly appreciate any specific pointers on how to perform the redirects with either the 301 redirect plugin for WordPress or via the .htaccess file (see the sketch below). Edit: the plugin being used is Redirection by Urban Giraffe: http://urbangiraffe.com/plugins/redirection/
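    A minimal .htaccess sketch for these two patterns, assuming Apache with mod_rewrite enabled (untested, and the rules simply mirror the example URLs above):

        RewriteEngine On
        # /admin/publish?id=146 -> /?p=146
        RewriteCond %{QUERY_STRING} ^id=(\d+)$
        RewriteRule ^admin/publish$ /?p=%1 [R=301,L]
        # /admin/publish?slug=post-title -> /post-title
        RewriteCond %{QUERY_STRING} ^slug=([^&]+)$
        RewriteRule ^admin/publish$ /%1? [R=301,L]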

  • Analytics: Can I set a goal on multiple events?

    - by David Parks
    We have a popup dialogue that requests the user's email address or Facebook login. The page behind the popup loads, so a page view is counted. We want to measure:

    How many users ignored the popup completely
    How many users engaged the popup but didn't complete the process (we trigger an event when the user performs actions defined as "engaging")
    How many users completed the popup

    Bounce rates aren't telling, because some users won't receive the popup. We are basically triggering the events "PopupDisplayed", "PopupEngaged", and "PopupComplete", with labels to differentiate between email and Facebook. But I don't think I can set up goals that count "users who received both the 'PopupDisplayed' AND 'PopupComplete' events", so I can't count how many users saw the popup and completed it.

  • Can Campaign URL tags cause a soft 404 error?

    - by user35306
    I was checking one of my company's websites in Webmaster Tools to analyze the cause of some soft 404 errors, and discovered that a few of the older errors had affiliate "mp" referral tags listed as the relative URLs. Since these are older problems and I don't see many of them coming up in the last few months, I don't think it's still an issue. I'm just curious whether it's possible to cause a soft 404 by improperly copying a campaign or referral tag into the URL.

  • Disable default error pages/error messages in IIS

    - by Antoine
    I have a web application (ASP.NET MVC 3) that, under certain conditions, returns a custom JSON string along with an HTTP error status code (403, 415, 500). It is deployed on a Windows 2008 R2 server with IIS 7.5. Initially I was getting the standard HTML error pages instead of the JSON data, so I removed the error pages for these status codes in the app settings. But now my queries, which should return JSON data, return a single error line. When the server gives me a 403, I get the message "You do not have permission to view this directory or page." (a bare line, no HTML around it). What can I do to deactivate this and finally get what the app is returning, rather than what the server wants to return?
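    A minimal web.config sketch of the usual fix for this on IIS 7.x: telling IIS to pass through any response the application has already produced instead of substituting its own error page.

        <configuration>
          <system.webServer>
            <!-- Let the app's own 403/415/500 JSON responses through untouched -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>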

  • How can I parse Amazon S3 log files?

    - by artlung
    What are the best options for parsing Amazon S3 (Simple Storage Service) log files? I've turned on logging, and now I have log files that look like this:

        858e709ba90996df37d6f5152650086acb6db14a67d9aaae7a0f3620fdefb88f files.example.com [08/Jul/2010:10:31:42 +0000] 68.114.21.105 65a011a29cdf8ec533ec3d1ccaae921c 13880FBC9839395C REST.GET.OBJECT example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg "GET /example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg HTTP/1.1" 200 - 32957 32957 12 10 "http://atlanta.craigslist.org/forums/?act=Q&ID=163218891" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.19) Gecko/2010031422 Firefox/3.0.19" -

    What are the best options for automating the parsing of these log files? I'm not using any Amazon services other than S3.
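    A minimal sketch of one way to split each line into fields, in PHP, assuming the standard space-delimited S3 access-log layout with bracketed timestamps and quoted request/referrer/user-agent strings (the file name and the chosen field positions are illustrative):

        <?php
        // Tokenize each line, keeping [...] timestamps and "..." strings intact.
        $pattern = '/\[[^\]]+\]|"[^"]*"|\S+/';
        foreach (file('access.log') as $line) {
            preg_match_all($pattern, $line, $m);
            $f = $m[0];
            // Per the S3 log layout: 3 = remote IP, 7 = key,
            // 9 = HTTP status, 11 = bytes sent
            printf("%s %s %s %s\n", $f[3], $f[9], $f[11], $f[7]);
        }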

  • How to assign foo.example.com to one IP address and example.com to a different one?

    - by Guillaume Pierre
    Say I have a domain name called example.com and two servers located at two different IP addresses: 1.2.3.4 and 6.7.8.9. How could I assign example.com to 1.2.3.4 and the subdomain foo.example.com to 6.7.8.9? [EDIT] I did try adding an A record pointing @ to the first IP address and another pointing foo.example.com to the second, and I did configure a vhost called foo.example.com on the server at the second IP address. The @ record works, but after waiting three hours for DNS propagation, nothing has happened with foo.example.com, which resolves to nothing. Why?
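    A minimal zone-file sketch of the intended setup, using the names and addresses from the question (most DNS control panels express the same thing as two A records):

        example.com.      IN A  1.2.3.4
        foo.example.com.  IN A  6.7.8.9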

  • cPanel - Wildcard SSL - How to point *.domain.com to one root and sub.domain.com to another root

    - by Harry Muscle
    I have a wildcard (*.domain.com) SSL certificate installed on my cPanel server. I have domain.com configured to point to /domain.com as its document root and to use this wildcard SSL certificate. I also have sub.domain.com configured to point to /sub.domain.com as its document root; however, I have not explicitly configured sub.domain.com to use the wildcard SSL certificate. When I go to "http://sub.domain.com" it serves the correct document root, but my problem is that when I go to "https://sub.domain.com" it serves the wrong one: the root configured for the wildcard SSL. I've been trying to find information on how to configure sub.domain.com to use the SSL certificate and serve the correct document root, but so far I haven't found anything concrete. Do I use the same steps I used when configuring the certificate for domain.com, but reuse the same certificate and specify sub.domain.com as the domain the certificate is for (instead of *.domain.com)? Or is there something else I should be doing? This is a production server, so I don't want to play around too much; I'm hoping to find the correct information before proceeding.
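    A sketch of the Apache-level result cPanel would need to generate for this to work, with illustrative paths: the point is that sub.domain.com needs its own SSL vhost referencing the same wildcard certificate, rather than falling through to the wildcard's default vhost and its document root.

        <VirtualHost 1.2.3.4:443>
            ServerName sub.domain.com
            DocumentRoot /home/user/sub.domain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>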

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently:

        Meta descriptions are used by Google probably 80% of the time for the snippet. They don't help with rankings, but you should probably use them. You could just auto-generate them from the first part of the question.

    The description tag exists in the header, like so:

        <meta name="Description" content="A brief summary of the content on the page.">

    I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages (I searched for c# list performance). In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice:

        For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation.

    I'm struggling to think of any scenario when I would want the Google-generated summary (that is, actual context from the page for the search terms) to be replaced by a hard-coded meta description summary of the question itself.

  • clicktale.com alternative that works with https and ajax

    - by Alexey Ivanov
    I need to record users' actions on a site for analytics purposes. The way clicktale.com does it is just fine, but unfortunately it has problems working over https and recording ajax events. Is there some service, or a script/library that I can host, that can do this task? Non-free ones are OK too. Clarification: the ClickTale feature I want to reproduce is the recording of individual user sessions and their replay, so you can watch a video of all of a user's interactions with the page: where he clicks first, which links open, and so on. Usually such services replay a user's actions by reproducing them with javascript, and here comes the ajax problem: external sites can't use ajax because of cross-domain scripting restrictions. So I'm looking for a tool (possibly a script that I host on the site itself, to allow cross-domain scripting) that can record actions in ajax blocks.

  • Targeting advertising for T-Mobile and Virgin Mobile users

    - by Codek
    We'd like to target our advertising to Virgin Mobile users. However, Virgin Mobile actually runs on the T-Mobile network, so when we look up a visitor's IP address it reports T-Mobile. This gives us two problems:

    No way to target Virgin Mobile users
    When we target T-Mobile, we accidentally target Virgin Mobile users too

    We actually have two separate sites for T-Mobile and Virgin Mobile, so is there any way we can make sure we send people to the right site? I'd also very much appreciate any suggestions on other places for this discussion; I'm not entirely sure this is "webmaster" talk.

  • Problems Using CloudFlare On Blogger

    - by the_archer
    Here's the situation. I got a TLD for my Blogger blog and set it up using the instructions from Blogger, which say to add two CNAME records: for the first CNAME, where it says Name, Label or Host, enter "www", and where it says Destination, Target or Points To, enter "ghs.google.com"; for the second CNAME, enter "NHRILA4K2RJG" as the Name and "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." as the destination. I did that on my domain host, and everything was working smoothly:

    Typing myblog.blogspot.com in the address bar brought me to my new address www.mynewaddress.tld
    Typing mynewaddress.tld brings me to www.mynewaddress.tld

    Then I went through the instructions to set up CloudFlare and did everything as required. I can see that CloudFlare is active and working on www.mynewaddress.tld; however, when I type the blogspot address, i.e. myblog.blogspot.com, it shows a notice that the blog is not hosted on Blogger and that I should click "yes" to get redirected to the new website, even though the blog is still on Blogger. I think the problem might be with the second CNAME record Google asks to create, which was not imported into the CloudFlare nameservers, so I created that CNAME and added it to the CloudFlare panel myself. My question is: is that record what helps Google determine that my blog is still hosted on Blogger? If so, should CloudFlare be turned on or off for that particular CNAME record? Any help is very much appreciated :)
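    For reference, the two records from Blogger's instructions as they would appear in a zone file (values copied from above; the second is Google's domain-verification record). One likely culprit, offered as an assumption: verification CNAMEs generally need to bypass CloudFlare's proxy (be set to DNS-only), because a proxied record answers with CloudFlare's own addresses rather than the googlehosted.com target Google checks for.

        www           IN CNAME  ghs.google.com.
        NHRILA4K2RJG  IN CNAME  gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com.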

  • Googlebot fetches my pages very frequently: rel-nofollow, meta-noindex, or robots.txt-disallow?

    - by trante
    Googlebot fetches pages on my site very frequently, and this slows my website down. I don't want Googlebot to crawl too often, so I decreased the crawl rate in Google Webmaster Tools. But I'm also considering these three tools:

    Adding rel="nofollow" to links to my inner pages, so Googlebot won't crawl and index them
    Adding a "noindex" meta tag, so Google will remove the page from its index and won't fetch it again
    Adding Disallow: /mySomeFolder/ to robots.txt, so Googlebot won't crawl those pages (a sketch of this option is below)

    I'm planning to use these methods for my 56,000 pages, except for the most important 6-7 pages. Which method would you prefer, and what would be the disadvantages or advantages? Or won't any of this change my website's speed?
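    A minimal robots.txt sketch of the third option, using the placeholder folder from the question. Note that this only stops crawling of that path; Googlebot ignores any Crawl-delay directive, and its rate is controlled from Webmaster Tools, as already done here.

        User-agent: Googlebot
        Disallow: /mySomeFolder/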

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still a little confused about the exact details of using one.

        The biggest problem with a self-signed certificate is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server. – SSL Shopper

    So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all.

        Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs on the other hand have the ability to revoke a compromised certificate if alerted, which prevents its further use. – Wikipedia

    Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue that obnoxious security warning that pops up in your browser when directed to the site, or is there more to it?

    Now, in my case, I want to add an SSL certificate to a minuscule WordPress blog I run that I don't expect anyone else to read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about administrating a site (e.g., what to do in situations like this one). Whenever I go to the login page and see http:// instead of https://, I cringe a little; submitting my password feels like shouting it out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that; it's simply my username and password to log in with. One of the concerns (that I've gathered so far) about a self-signed certificate is that non-technical users might be scared by the security warning, but that would not be an issue in my case.

    TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my WordPress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or if it adds the potential for more security breaches than leaving it sans-SSL.
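    For completeness, a minimal sketch of creating a self-signed certificate with OpenSSL, assuming a typical Linux host (the file names are illustrative, and the web server would still need to be configured to use them):

        # Generate a 2048-bit RSA key and a certificate valid for one year
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -keyout blog.key -out blog.crt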

  • Can't get heroku site updated on custom domain

    - by Joseph Brown
    I have hosayif.com registered at GoDaddy, and I set up a CNAME for rails.hosayif.com to point to my Heroku app at sharp-meadow-6535.herokuapp.com. I set this up with a previous app, and it worked. Then I made a new app, renamed the old one, and renamed the new one to sharp-meadow-6535.herokuapp.com in hopes of not having to change anything at GoDaddy. In theory, rails.hosayif.com and sharp-meadow-6535.herokuapp.com should be the same site, but they are not. Can someone tell me what I am doing wrong?
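    A couple of hedged checks, assuming the heroku CLI is installed: Heroku attaches custom domains to an app rather than to the app's name, so renaming apps can leave the domain attached to the old app even though DNS points at the new one's hostname.

        # Confirm the CNAME resolves to the expected herokuapp.com hostname
        host rails.hosayif.com

        # Attach the custom domain to the new app (names taken from the question;
        # it may need to be removed from the old app first)
        heroku domains:add rails.hosayif.com --app sharp-meadow-6535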

  • What does an asterisk/star in traceroute mean?

    - by Chang
    The below is a part of a traceroute to my hosted server:

         9  ae-2-2.ebr2.dallas1.level3.net (4.69.132.106)  19.433 ms  19.599 ms  19.275 ms
        10  ae-72-72.csw2.dallas1.level3.net (4.69.151.141)  19.496 ms  ae-82-82.csw3.dallas1.level3.net (4.69.151.153)  19.630 ms  ae-62-62.csw1.dallas1.level3.net (4.69.151.129)  19.518 ms
        11  ae-3-80.edge4.dallas3.level3.net (4.69.145.141)  19.659 ms  ae-2-70.edge4.dallas3.level3.net (4.69.145.77)  90.610 ms  ae-4-90.edge4.dallas3.level3.net (4.69.145.205)  19.658 ms
        12  the-planet.edge4.dallas3.level3.net (4.59.32.30)  19.905 ms  19.519 ms  19.688 ms
        13  te9-2.dsr01.dllstx3.networklayer.com (70.87.253.14)  40.037 ms  24.063 ms  te2-4.dsr02.dllstx3.networklayer.com (70.87.255.46)  28.605 ms
        14  * * *
        15  * * *
        16  zyzzyva.site5.com (174.122.37.66)  20.414 ms  20.603 ms  20.467 ms

    What is the meaning of lines 14 and 15? Is information being hidden?

  • Rebuilt website from static HTML to CMS; need to redirect indexed links

    - by Michael Dunn
    I have rebuilt a website that was all static HTML pages; it has now been rebuilt using a CMS. I need to find a way of redirecting all the existing links to their new corresponding pages, which use friendly URL rewrites on the CMS-based website. I imagine there will be several hundred if not thousands, as I have pages and images linked from Google. What is the most efficient way to do this? Thanks in advance, Mike
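    A minimal .htaccess sketch of the usual approach, assuming Apache and purely illustrative old/new paths: a single pattern can cover whole classes of legacy URLs, with explicit one-off redirects for pages whose names changed.

        RewriteEngine On
        # Either map every legacy .html URL onto its friendly equivalent...
        RewriteRule ^(.+)\.html$ /$1 [R=301,L]
        # ...or list explicit one-to-one redirects instead (mod_alias):
        # Redirect 301 /old-page.html /new-section/new-page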

  • Set up iis7.5 to deny connections outside of LAN for certain folder [migrated]

    - by Darkcat Studios
    I'm currently setting up a combined website and extranet; they both read from the same database on the same server the site is hosted on, because the website is fed from the data that staff enter through the extranet interface. It also links to AD for authorising access to the extranet. I have the extranet in a folder within the website folder. What I want to do is allow the extranet to be accessed only from computers within our LAN, while keeping the main website freely accessible to internet users. It is currently set up as a generic web server, so anyone can view anything (well, up to the point where the user is asked to log into the extranet, of course). I have read a lot on this, but nothing I read applies to, or works in, IIS 7.5.
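    A minimal web.config sketch for the extranet folder, assuming the "IP and Domain Restrictions" role feature is installed and an assumed LAN range of 192.168.0.0/24 (adjust to the actual subnet):

        <configuration>
          <system.webServer>
            <security>
              <!-- Deny everyone by default, then allow the LAN subnet -->
              <ipSecurity allowUnlisted="false">
                <add ipAddress="192.168.0.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>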

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track and quantify traffic data and statistics, and equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every website known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward to implement and apply, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data. My barbaric and time-consuming plan of defense is to monitor visitor statistics for obvious patterns of abuse and ban the offending IPs or IP ranges accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web and db servers) and proactively deal with these sources of mayhem?
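    One hedged suggestion along exactly these lines is fail2ban, which tails log files for abuse patterns and bans the matching IPs at the firewall; a minimal jail sketch, assuming a Linux host running Apache (the log path and thresholds are illustrative, and apache-badbots is one of the stock filters shipped with fail2ban):

        # /etc/fail2ban/jail.local
        [apache-badbots]
        enabled  = true
        filter   = apache-badbots
        logpath  = /var/log/apache2/access.log
        maxretry = 5
        bantime  = 86400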

  • What are the options for hosting a small Plone site? [closed]

    - by Tina Russell
    Possible Duplicate: How to find web hosting that meets my requirements?

    I've developed a portfolio website for myself using Plone 4, and I'm looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and that fits a small, single-admin website. My understanding is that my basic options are these:

    I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I'd have two stipulations for a Plone hosting service: (a) it needs to use Plone 4, for which I've developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such.

    I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level.

    I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I'm a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don't know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic on a new website? Which one is more desirable when a rush of visitors drives up my bandwidth bill?

    One last note: I know Plone isn't common for individuals' websites, but please don't try to talk me out of it here; that's a completely different subject. For now, assume I'm sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org; it's twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So that wasn't much help. Thank you!

  • How to send mass email and not get treated as spam

    - by MonsterMMORPG
    I am the owner of http://www.monstermmorpg.com, a free-to-play browser-based monster role-playing game. I have a very important announcement to send to my more than 300k members. I already have email-sending software, but the messages drop into spam, so I want to improve my chances of reaching the inbox. Here are all the details. My domain registrar is http://www.godaddy.com/ and my hosting company is http://www.leaseweb.com/en (screenshots of my LeaseWeb settings and GoDaddy DNS settings omitted here). This is how I send emails:

        MailMessage mail = new MailMessage();
        mail.To.Add(EmailAdress);
        mail.From = new MailAddress("MonsterMMORPG NoReplay <[email protected]>");
        mail.Subject = "Title Of Mail";
        string Body = "Body Of Mail";
        mail.Body = Body;
        mail.IsBodyHtml = true;
        SmtpClient smtp = new SmtpClient();
        smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
        smtp.UseDefaultCredentials = true;
        smtp.Host = "85.17.154.139";
        smtp.Port = 25;
        smtp.Send(mail);

    Thanks for every kind of answer. I did not make any special settings at the SMTP level.
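    A common first step for deliverability, offered as a hedged sketch: publish an SPF record authorizing the sending server, as a DNS TXT record at the registrar (the IP is the SMTP host from the code above). Reverse DNS on that IP and a DKIM signature are the usual follow-ups.

        monstermmorpg.com.  IN TXT  "v=spf1 a mx ip4:85.17.154.139 ~all"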

  • Website address points to localhost

    - by munir ahmad
    Today, while surfing the internet in Firefox, I opened a website and it warned "This site may harm your computer"; to open the website anyway, I had to add it to the exception list. I added the website to the exception list and trusted it. Since then, whenever I open this website, it always points to localhost as long as the internet is connected. I have set up localhost through Apache (XAMPP server). If the internet is not connected, the website doesn't open anything, but localhost still works as usual. How can I fix this so that the website no longer points to localhost? I am using WinXP SP3, and the same problem now appears in all browsers too.
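    The usual first thing to check here, offered as an assumption: the Windows hosts file overrides DNS for every browser, which would match the symptom appearing in all of them (example.com stands in for the affected site):

        # C:\Windows\System32\drivers\etc\hosts
        # A line like this forces the site to resolve to the local machine; remove it.
        127.0.0.1  example.com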

  • How to suppress PHPSESSID in URL for Googlebot?

    - by Roque Santa Cruz
    I use cookie-based sessions, and they work for normal interaction with our site. However, when Googlebot comes crawling, our PHP framework, Yii, appends ?PHPSESSID to each URL, which doesn't look good in the SERPs. Is there any way to suppress this behavior? PS: I tried utilizing ini_set('session.use_only_cookies', '1');, but it did not work. PPS: To get an impression of the SERPs, they look like this: http://www.google.com/search?q=site:wwwdup.uni-leipzig.de+inurl:jobportal
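    A minimal sketch of the settings that usually stop PHP from rewriting session IDs into URLs; session.use_trans_sid is the directive that controls the URL rewriting itself, which may be why use_only_cookies alone didn't help (these must run before the session starts, or go in php.ini):

        <?php
        // Never rewrite PHPSESSID into links or forms
        ini_set('session.use_trans_sid', '0');
        // Accept session IDs from cookies only, never from the URL
        ini_set('session.use_only_cookies', '1');
        session_start();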
