Search Results

Search found 181 results on 8 pages for 'linode'.

Page 3 of 8

  • The Web Hosting Conundrum for "not quite" developers

    - by saltcod
    Hey all, apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web hosting road. To date, I've tried Joyent, Media Temple, Bluehost, Hostgator, and finally Linode. The reasons for switching are likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely sloooow. What to do when you're not really a "developer"? While I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon), I'm going to run into something that I won't be able to figure out and I'll have days' worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After 4(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours!! I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: is there a web host out there anywhere that offers the speed of Linode, but will let me focus on Drupal instead of sysadmin-ing? Thanks! [I know, I know. There are going to be lots of people who read this saying, "just learn Ubuntu like a real developer". And I get that. I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sysadmin stuff gets in the way.]

    Read the article

  • Difference between key_buffers and recommendation

    - by Typeoneerror
    I'm looking to add a bit of memory to MySQL on a Linode VPS server on which I've got a small Facebook (canvas app) PHP app running that uses MySQL. I'm not super familiar with MySQL optimization, so I'm hoping to find a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase the query_cache_size to 32M from 16M. There's also key_buffer under [mysqld] and key_buffer under [isamchk]. What is the difference between those two? If I have a Linode 2048MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It's definitely way more heavy on database access than PHP, and has very few HTTP requests.
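
    For context on the two sections: key_buffer under [mysqld] sizes the MyISAM index cache used by the running server, while key_buffer under [isamchk] is only read by the offline isamchk/myisamchk repair utilities, so it does not affect normal queries. A minimal my.cnf sketch for a Linode 2048, with values that are assumptions rather than tuned recommendations:

        # /etc/mysql/my.cnf -- hedged starting point, not a tuned recommendation
        [mysqld]
        key_buffer_size  = 32M   # MyISAM index cache for the running server
        query_cache_size = 32M   # already raised from 16M

        [isamchk]
        key_buffer = 64M         # used only by the offline table check/repair tools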

    Read the article

  • VPS stops responding every now and again

    - by Or W
    I have a Linode VPS that I use to host some of my websites. It's Ubuntu-based and up to date in terms of packages. I don't have any cron jobs scheduled or any automatic processes. I host a few (up-to-date) WordPress blogs there that have very little traffic altogether. Every day (at a different time) my server stops responding: I can't SSH to it, web access times out, and it just dies until I reboot it through the Linode manager. On the Linode dashboard I can see that the CPU is not very high (2-3%), incoming/outgoing traffic is at 0, and the IO count has a spike just before the server stops responding (swap IO is at 2k and the IO rate is at 5k). When I reboot the server, everything is just fine. I'm trying to figure out a way to analyze what's going on at these random times when the server freezes up. How can I determine the problem?
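
    One hedged approach is to have the box record its own activity so there is something to read after the next freeze, instead of relying on the Linode graphs alone; the swap IO spike suggests memory pressure is the first thing to look at. Package names and times below assume a Debian/Ubuntu box:

        # install collectors and enable the sar cron job
        sudo apt-get install sysstat atop
        sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
        # after the next hang, replay the minutes leading up to it
        sar -r -s 03:00:00 -e 04:00:00   # memory and swap usage
        sar -b -s 03:00:00 -e 04:00:00   # disk IO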

    Read the article

  • Nightly backups (and maybe other tasks) causing server alerts

    - by J. Pablo Fernández
    I have two independent alert notification systems for my servers. The server is a virtual machine on Linode, and one of the alerts comes from Linode; the other monitoring system we use is New Relic. They are both watching IO utilization. Every night I get alerts from both of them saying the server is using too much IO. I run quite a few tasks in the middle of the night, but the one I've confirmed can cause IO warnings is the backups. The backup is done by s3cmd sync. I tried ionice, but it still generates the warnings. Getting warnings every night reduces the efficacy of warnings when they happen for real. For Linode I could raise the level at which a warning is issued, but that might make the whole thing useless if the level is too high. What would be the proper solution for this?
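
    One hedged explanation for ionice having no visible effect: the best-effort/idle IO classes only apply under the CFQ scheduler, which virtual disks frequently don't use, so it's worth checking before raising the alert thresholds. Device name and bucket below are examples:

        cat /sys/block/xvda/queue/scheduler        # is [cfq] actually selected?
        ionice -c3 nice -n 19 s3cmd sync /var/backups s3://example-bucket/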

    Read the article

  • Solution for file store needing large number of simultaneous connections

    - by Tennyson H
    So I'm fairly new to large-scale architectures. We're currently using Linode instances for our project, but we're brainstorming about scaling. We need a file store system that can deliver ~50 MB folders (user data) to our computing instances in a reasonable amount of time (<20 sec), scale to 10,000+ total users, and handle perhaps 100+ simultaneous transfers. We are also unsure whether to network-mount (sshfs/nfs) or just do a full transfer from store to instance at the beginning and rsync from instance to store at the end. I've experimented with SSHFS between our little Linode instances, but it seems to be bottlenecked at 15 MB/s total bandwidth, which wouldn't hold up under 10+ simultaneous transfers, let alone scale very large. I also tried to investigate NFS but couldn't get it working, and I have little hope that it would do well within our Linode network. Are there tools on other cloud providers that match our needs? Should we be mounting, or should we be transferring? Thanks very much!
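
    As a rough sketch of the "transfer instead of mount" option, a per-job copy of the ~50 MB folder over plain rsync/ssh looks like the following; the cipher choice is only to cut SSH CPU overhead, and all hostnames and paths are examples:

        # pull the user folder before the job, push results back afterwards
        rsync -a -e "ssh -c aes128-ctr" store:/data/users/1234/ /scratch/users/1234/
        # ... run the computation ...
        rsync -a /scratch/users/1234/ store:/data/users/1234/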

    Read the article

  • Excessive Outbound DNS Traffic

    - by user1318414
    I have a VPS system which I have had for 3 years on one host without issue. Recently, the host started sending an extreme amount of outbound DNS traffic to 31.193.132.138. Due to the way that Linode responded to this, I have recently left Linode and moved to 6sync. The server was completely rebuilt on 6sync with the exception of the postfix mail configurations. Currently, the daemons run are as follows:

        sshd
        nginx
        postfix
        dovecot
        php5-fpm (localhost only)
        spampd (localhost only)
        clamsmtpd (localhost only)

    Given that the server was 100% rebuilt, I can't find any serious exploits against the above daemons, passwords have changed, SSH keys don't even exist on the rebuild yet, etc., so it seems extremely unlikely that this is a compromise being used to DoS the address. The provided IP is noted online as a known spam source. My initial assumption was that something was attempting to use my postfix server as a relay, and the bogus addresses it was providing were domains with that IP registered as their nameservers. I would imagine, given my postfix configuration, that DNS queries for things such as SPF information would come in at an equal or greater rate than the number of attempted spam e-mails sent. Both Linode and 6sync have said that the outbound traffic is extremely disproportionate. The following is all the information I received from Linode regarding the outbound traffic:

        21:28:28.647263 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647264 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647264 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647265 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647265 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647266 IP 97.107.134.33 > 31.193.132.138: udp

    6sync cannot confirm whether or not the recent spike in outbound traffic was to the same IP or over DNS, but I have presumed as much. For now my server is blocking the entire 31.0.0.0/8 subnet to help deter this while I figure it out. Anyone have any idea what is going on?
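
    If it recurs, a hedged first step is to tie the traffic to a local process and capture full packets rather than the truncated summaries above; the commands are standard, but which process turns up is anyone's guess:

        sudo netstat -anup | grep 31.193.132.138     # which PID owns the UDP socket?
        sudo tcpdump -n -s0 -c 50 -w /tmp/dns.pcap 'udp and dst host 31.193.132.138 and dst port 53'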

    Read the article

  • List of things to think about for hosting a potentially high-traffic website

    - by SpashHit
    I do my own hosting for a few clients on my own VPS server (Linode). Since my clients so far have been extremely low traffic, I have not had to really dig into some of the considerations that I would need for a higher-traffic site. Now I am bidding on a client whose site will be potentially higher traffic (not Facebook or Twitter, but higher than Joe's ice cream shop). Is there a list of things I need to think about that I may be missing? I am going to assume, at least at first, that I will be able to handle them on my shared Linode, but I could move to a dedicated Linode if need be. I am not thinking so far of multiple servers, but short of that there are still considerations. For example, mod_perl instead of straight CGI, better backups, etc. What else? In case it matters, the stack will be Debian Linux / Apache / Perl / MySQL / Template Toolkit.

    Read the article

  • My site is getting popular, bandwidth expensive

    - by ElbertF
    I'm running a (small) image hosting site which has been getting a little bit of traction lately (2,000 to 5,000 visitors a day, spikes up to 10,000). It's hosted on Linode and I've already run out of the bandwidth for this month (300 GB). I've experimented with Amazon S3 but that was costing me $5 a day, Linode currently costs me $30 a month. I get a tiny bit of revenue from ads (Adsense and Black Label Ads) but it's not enough to break even. What should I do? Obviously I'd like to keep running the site but not if it starts costing me a lot of money.

    Read the article

  • On Debian, lighttpd and apache2 both using port 80: lighttpd throws 'address already in use' error

    - by user1960581
    I bought a Linode (linode.com) server the other day. I've been trying to run lighttpd and apache2 on the same port, using lighttpd for static files. As Linode only provides ONE IPv4 address, I tried to bind lighttpd to the IPv6 address. That's where I get the same error each and every single time: can't bind to port [ipv6] 80, Address already in use. I tried binding the IPv4 address and everything worked. Please help me; this has been driving me nuts for the last two days. My lighttpd.conf file (the IPv6 address isn't real):

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
        #   "mod_rewrite",
        )
        server.document-root = "/var/www"
        server.upload-dirs   = ( "/var/cache/lighttpd/uploads" )
        server.errorlog      = "/var/log/lighttpd/error.log"
        server.pid-file      = "/var/run/lighttpd.pid"
        server.username      = "www-data"
        server.groupname     = "www-data"
        server.port          = 80
        server.bind          = "2600:3c02::0000"
        server.use-ipv6      = "enable"
        #server.pid-file     = "/var/run/lighttpd.pid"
        index-file.names     = ( "index.php", "index.html", "index.lighttpd.html" )
        url.access-deny      = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        compress.cache-dir   = "/var/cache/lighttpd/compress/"
        compress.filetype    = ( "application/javascript", "text/css", "text/html", "text/plain" )
        # default listening port for IPv6 falls back to the IPv4 port
        #include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
        ### ipv6 ###
        $SERVER["socket"] == "[2600:3c02::0000]:80" {
        #   accesslog.filename       = "var/log/lighttpd/ipv6/access.log"
        #   server.document-root     = "/var/www/"
        #   server.error-handler-404 = "/index.php?error=404"
        }

    And the error message:

        can't bind to port, 2600:3c02::0000 Address already in use.
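
    A hedged guess at the conflict: Apache's default "Listen 80" binds the wildcard address, which on Linux normally covers the machine's IPv6 addresses too, so lighttpd can no longer take [2600:3c02::0000]:80; the config above also asks lighttpd for that socket twice (server.bind/server.use-ipv6 plus the $SERVER["socket"] block). One possible arrangement, with a placeholder IPv4 address:

        # /etc/apache2/ports.conf -- pin Apache to the IPv4 address only
        Listen 203.0.113.10:80

        # lighttpd.conf -- declare the IPv6 socket once
        server.port     = 80
        server.bind     = "2600:3c02::0000"
        server.use-ipv6 = "enable"
        # ...and drop the duplicate $SERVER["socket"] == "[2600:3c02::0000]:80" block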

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website whose contact form has been failing silently for two weeks (WordPress + Contact Form 7). Apparently updating Contact Form 7 made the assigned email fail: [email protected]. I also tested with [email protected] and it failed too, until I tried Gmail and it finally worked. Apparently @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know if I can recover the sender or the content of the original messages.

        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost
        May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected].linode.com>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]
        May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery)
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected].linode.com> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable
        May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected].linode.com>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent

    Thanks
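
    The log itself only records envelope data (sender, size, message-id), not the body, but the last line shows the DSN being handed to the local mailer, and sendmail bounces usually quote the original message, so it may still be recoverable from the queue or a local mailbox. Paths below are the usual sendmail defaults and may differ:

        mailq                           # anything still queued?
        sudo ls /var/spool/mqueue       # df* files hold message bodies, qf* the envelopes
        sudo less /var/mail/www-data    # local mailbox the bounce may have landed in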

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @ 3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send enough data fast enough. External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency. Now, to rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test; I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page. If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the full page and the half page, at high transfer rates (Transfer rate: 62586.14 [Kbytes/sec] received). If I AB from an external server, I get around 180 RPS, same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is in MY server's outbound traffic, not a download speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers + load balancing that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation interpreting this data.

    Outbound traffic: here's more information about the outbound bandwidth. The network graph shows a maximum output of 16 Mb/s: 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that Linode has a 50 Mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100 Mbps. Since Linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100 Mbps but is limited by some other internal bottleneck? I just don't understand how networks at this large of a scale work; can they literally send data as fast as they can read from the HDD? Is the network pipe that big?

    In conclusion: (1) Based on the above, I'm thinking I can definitely raise my 180 RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180 RPS per server behind the LB. (2) If Linode has a 50/100 Mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read / transmit data fast enough locally, and Linode even bothers to have a 50/100 Mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps, and I'm not sure how to detect it. Correct? I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
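
    A hedged way to separate "nginx can't push it" from "the pipe tops out around 16 Mbit/s" is to take the web stack out of the picture and measure raw TCP throughput between the Linode and one of the external benchmark boxes, for example with iperf (hostname is a placeholder):

        # on the Linode
        iperf -s
        # on the external machine
        iperf -c server.example.com -t 30 -P 4    # 30 seconds, 4 parallel streams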

    Read the article

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is currently installed on a remote instance, with various worker instances interacting with the queue; that's all going fantastic. My problem is with the amount of time it takes for a worker to be 'instantiated' and with slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I've not experienced such a wait time on my current Linode production box, so I believe it's one symptom of a bigger problem. The next issue is that jobs that took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Anyone else experienced these issues?

    Read the article

  • Caching API Proxy Server

    - by edc1591
    I need to have a server that caches API responses and then forwards them along to a desktop app. I don't really have much experience with this, so I have a few questions. First of all, what kind of server should I get? I already use Linode for my websites, so ideally I'd like to go with them. I expect to get anywhere from 30 million to 40 million requests to my proxy server each month. Will a 512 Linode be able to support that? Also, is there any software out there that does this already, or will I have to write my own? The API responses are roughly 10 KB each on average, so doing the math, that's a lot of data each month (on the order of 300-400 GB). Should I just add more transfer to whatever server I buy, or can I somehow compress the API responses before sending them off to the user? Thanks for any help.
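
    On the "does software for this exist" part, an nginx reverse proxy with proxy_cache is the usual off-the-shelf answer, and it can gzip the responses on the way out as well. A minimal sketch for the http {} block, with the upstream URL, cache sizes, and validity times all assumptions:

        proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api:50m max_size=5g inactive=60m;

        server {
            listen 80;
            gzip on;
            gzip_proxied any;             # compress the ~10 KB upstream responses too
            location / {
                proxy_pass        https://api.example.com;
                proxy_cache       api;
                proxy_cache_valid 200 10m;
                add_header        X-Cache-Status $upstream_cache_status;
            }
        }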

    Read the article

  • How do I set index priority on nginx in order to load index.html before WordPress' .php files

    - by orbitalshocK
    Hello there, gents. I'm an absolute beginner in Linux, the CLI, as well as nginx and WordPress. I'm trying to make a 'coming soon' landing page that will take priority over the main WordPress installation I just set up. I want to make .html load before .php, or get information on the best-practices approach to this. I just now realized I could easily use WordPress' generic "under construction" page and modify it; I'm sure it has one, and I'm sure there's a plugin. Stats: Linode 1024, Ubuntu 12.04, nginx 1.6.1, single WordPress installation (for now) set up using EasyEngine, but probably going to restart and configure nginx for my Linode specifically. I managed to find one piece of instruction on how to change the httpd file to specify priority for Apache 2, but did not find the same documentation for nginx. If it's not on the first page of Google, then Server Fault needs the question answered! Viva la Server Fault first page results!
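
    For reference, a hedged sketch of the nginx side: the index directive is tried left to right when a directory is requested, so listing the landing page first makes it win for as long as that file exists (the file name is an example):

        server {
            # ... existing WordPress server block ...
            index coming-soon.html index.php index.html;
            # rename or delete coming-soon.html later to fall through to WordPress
        }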

    Read the article

  • Apache virtual host for Drupal test site

    - by bsreekanth
    Hello, I am a programmer trying to launch my first website. Through different helpful posts on SF and elsewhere, I set up an account with Linode and set up a slice (Debian, Apache, etc.). I have a Drupal site under development, and I'd like to have a test site on the Linode server as well. Now, I'd like to have a site set up with the following requirements. What is the best way to set up and protect the test site along with the actual (production) site? Is a virtual host the answer? To protect the test site, is .htaccess authentication sufficient to prevent access from the public and robots? I am also modifying the theme, database contents, etc., so having two sites under one Drupal installation may not be a good idea. What do you suggest? Thanks in advance. bsreekanth.
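
    On the virtual-host question, a hedged sketch of the usual shape: one vhost per copy of the site, with basic auth on the test copy, which keeps out both the public and well-behaved robots. Hostnames and paths are examples:

        <VirtualHost *:80>
            ServerName   test.example.com
            DocumentRoot /srv/www/test.example.com/public

            <Directory /srv/www/test.example.com/public>
                AuthType     Basic
                AuthName     "Test site"
                AuthUserFile /srv/www/test.example.com/.htpasswd
                Require      valid-user
            </Directory>
        </VirtualHost>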

    Read the article

  • Email notification and mail server

    - by Jerr Wu
    I am building a web application with email notification, just like Facebook, which will be hosted at http://www.linode.com/. When user A comments on a post, the poster will get an email notification from '[email protected]' with the comment message written by user A (not spam). I really like Google Apps, but they have a sending limit of 2,000 messages per day, which does not suit my case because I cannot have sending limits; there will be many email notifications. http://support.google.com/a/bin/answer.py?hl=en&answer=166852 I also need company email accounts for team members' use, for which I prefer Google Apps. My web application will be hosted on Linode, and I am considering "Amazon Simple Notification Service" for the email notification. My questions are: Can you recommend any other email service provider that suits my case? Can I bind company email accounts (ex: [email protected]) with Google Apps and bind [email protected] with another email service provider?

    Read the article

  • Should I upgrade my VPS to a dedicated host?

    - by Mr. Hedgehog
    I've been running a website for 4 months now, and in that time it has grown from a few hundred unique visits/day to about 30,000 unique visits/day. I have been on Linode the whole time and have steadily upgraded to a 4GB node. Unfortunately, my app is very MySQL-heavy and, even with caching, the server frequently stops responding at peak times. During these times, all 4 cores are maxed on the machine (though the Linode 4GB only provides a 1/5 share of the CPU). The only way to get it running again is to restart MySQL. I don't really know what to do. I have tried optimising the InnoDB settings and it doesn't seem to help. Would moving to a dedicated host be a good solution? I can get 4-5x the power of my node for a similar price, and with decent providers too.
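
    Before paying for bigger hardware, a hedged check is whether the stalls line up with a handful of slow queries rather than raw capacity; the values below are guesses for a 4 GB node, not a recommendation:

        # /etc/mysql/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 2G      # keep the hot working set in RAM
        slow_query_log          = 1
        slow_query_log_file     = /var/log/mysql/slow.log
        long_query_time         = 1       # log anything over 1 s, then read it at peak time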

    Read the article

  • Setting up/installing/configuring an nginx LEMP stack on a fresh VPS server

    - by Grant Tailor
    I need some help in setting up, installing, and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage, and unmetered bandwidth @ 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP (I will also need MySQL and PHP) stack guide on Linode, http://library.linode.com/lemp-guides/centos-5, but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to using the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites, mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)?
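
    For the "10 websites on one nginx" part, the usual pattern is one server block per site, each handing PHP to the same PHP-FPM pool; a hedged sketch with an example domain and paths (on CentOS 5 the include paths will differ from the Debian-oriented guide, so treat these as placeholders):

        # /etc/nginx/conf.d/site1.example.com.conf
        server {
            listen      80;
            server_name site1.example.com;
            root        /var/www/site1.example.com/htdocs;
            index       index.php index.html;

            location / { try_files $uri $uri/ /index.php?$args; }

            location ~ \.php$ {
                include       fastcgi_params;
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }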

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up host names. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:

        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 74.14.179.5 72.14.188.5

    In my /etc/resolv.conf, I see these name servers, like this:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 74.14.179.5
        nameserver 72.14.188.5

    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:

        domain members.linode.com
        search members.linode.com
        nameserver 72.14.179.5
        nameserver 72.14.188.5
        options rotate

    This second box has no issues with hostname lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
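
    One detail visible in the two configs above: the slow box lists 74.14.179.5 as its first nameserver where the fast box lists 72.14.179.5, and an unreachable first nameserver produces exactly this kind of 5-15 second delay while the resolver waits out its timeout before trying the next entry. A hedged way to time each server directly:

        time dig @74.14.179.5 example.com    # first server on the slow box
        time dig @72.14.179.5 example.com    # first server on the fast box
        time dig @72.14.188.5 example.com    # the second server both boxes share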

    Read the article

  • Two domains hosted on the same server with different root folders show the same homepage

    - by emaillenin
    I have hosted two domains from GoDaddy on a Linode VPS. They are mobiletoast.com and lesseltechnologies.com. Though the latter site has a separate index folder, whenever I navigate to it, I get the homepage of mobiletoast.com. The strange thing is, I see the expected page ("It works") when I open the site from my mobile phone, but when I open the site from my PC (any browser, without any cache, hard refresh), I get the homepage of mobiletoast.com. The Linode support team says they see the correct "It works" page, but I am not able to see that page. This is the output of the command apache2ctl -S:

        root@li339-83:~# apache2ctl -S
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80   is a NameVirtualHost
               default server mobiletoast.com (/etc/apache2/sites-enabled/000-default:1)
               port 80 namevhost mobiletoast.com (/etc/apache2/sites-enabled/000-default:1)
               port 80 namevhost blog.mobiletoast.com (/etc/apache2/sites-enabled/blog.mobiletoast.com:1)
               port 80 namevhost lesseltechnologies.com (/etc/apache2/sites-enabled/lesseltechnologies.com:1)
               port 80 namevhost mobiletoast.com (/etc/apache2/sites-enabled/mobiletoast.com:1)
        Syntax OK
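
    Since the vhost list above looks sane and other vantage points see the right page, a hedged pair of checks from the affected PC narrows it down to that machine's DNS/cache versus Apache (the IP is a placeholder for the Linode's address):

        nslookup lesseltechnologies.com              # which IP does this PC actually resolve?
        curl -s -H "Host: lesseltechnologies.com" http://203.0.113.10/ | head
        # if curl returns the "It works" page, Apache is fine and the PC's DNS
        # cache, hosts file, or proxy is what's serving the wrong site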

    Read the article

  • Best way to restrict FTP access to a single directory?

    - by John Debs
    I have a VPS running Ubuntu 10.04, and I'd like to give someone SFTP access to a single directory but prevent them from seeing anything else on the system. What's the best way to pull this off? I considered removing "everyone" permissions from everything on the system, but that seems like a really blunt tool for this problem (and one that would cause other issues); I'm hoping there's a better option here. Edit: I appreciate the answers! (And I learned a bunch reading/researching through them.) I ended up finding and using this guide from Linode, as it spelled out all the steps: http://library.linode.com/security/sftp-jails/
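
    For anyone skimming, the core of that sftp-jail recipe is an sshd_config Match block plus a chroot target that is root-owned and not group-writable; a hedged sketch with an example group name and path:

        # /etc/ssh/sshd_config
        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory    /home/%u     # must be owned by root and not group-writable
            ForceCommand       internal-sftp
            AllowTcpForwarding no
            X11Forwarding      no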

    Read the article

  • MySQL keeps crashing due to bug

    - by mike
    So about a week ago, I finally figured out what was causing my server to continually crash. After reviewing my mysqld.log, I keep seeing this same error:

        101210  5:04:32 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295

    Here is a link to the bug report, http://bugs.mysql.com/bug.php?id=35346; someone recommended setting the max_join_size value in my.cnf to 4M, and I did. I assumed this fixed the issue, and it was working for about a week with no issues, until today... I checked MySQL and the same error is now back:

        101216 06:35:25  mysqld restarted
        101216  6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
        101216  6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
        101216 06:40:42  mysqld ended

    Anyone know how I can really fix this issue? I can't keep having MySQL crash like this. EDIT: I forgot to mention, every time this happens I get an email from Linode saying I have a high disk IO rate: "Your Linode has exceeded the notification threshold (1000) for disk io rate by averaging 2483.68 for the last 2 hours."
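
    A hedged sanity check, given the disk IO alert that accompanies each crash: mysqld being killed under memory pressure would also produce the restarted/ended pattern above, and the max_join_size warning may just be noise written at every start. Log paths assume a stock Debian/Ubuntu box:

        dmesg | grep -i -E 'oom|out of memory'
        grep -i -E 'oom|out of memory' /var/log/syslog /var/log/kern.log
        free -m    # how much headroom is left for the configured MySQL buffers?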

    Read the article

  • Can't ssh to instance

    - by megas
    I have a Linode instance which I was successfully connecting to via SSH. But I decided to rebuild the instance, and now I cannot connect to it via SSH. The Linode itself works correctly, because I can get access via Lish (Linode's console SSH). I've tried to clear known_hosts with:

        ssh-keygen -R 212.71.xxx.xx

    But I'm still getting this:

        ssh [email protected] -v
        OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 212.71.238.74 [212.71.238.74] port 22.
        debug1: Connection established.
        debug1: identity file /home/megas/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/megas/.ssh/id_rsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_dsa type -1
        debug1: identity file /home/megas/.ssh/id_dsa-cert type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa type -1
        debug1: identity file /home/megas/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA c5:c3:a7:c0:5a:25:a1:64:c4:04:0c:42:bb:46:f6:96
        debug1: Host '212.71.238.74' is known and matches the ECDSA host key.
        debug1: Found key in /home/megas/.ssh/known_hosts:1
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/megas/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/megas/.ssh/id_dsa
        debug1: Trying private key: /home/megas/.ssh/id_ecdsa
        debug1: No more authentication methods to try.
        Permission denied (publickey,password).

    How do I resolve this problem? Thanks
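
    A hedged recovery path, given that Lish still works: the rebuilt image no longer has the old public key in authorized_keys (the trace above shows the offered id_rsa being rejected), so re-adding the key from the Lish console usually restores normal SSH. Username and key file are examples:

        # on the Linode, via Lish, logged in as the target user
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat >> ~/.ssh/authorized_keys     # paste the workstation's ~/.ssh/id_rsa.pub, then Ctrl-D
        chmod 600 ~/.ssh/authorized_keys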

    Read the article

  • Dedicated Servers: Is one better than two for a LAMP pseudo-HA setup? [closed]

    - by bikedorkseattle
    Possible Duplicate: How to find web hosting that meets my requirements? I know there are zillions of comments about hosting out there, but I haven't read much about this. Our current well-known host is having too many problems, the hardware we are on is subpar, and I'm ready to leave. A day of downtime can cost as much as our monthly hosting bill. A month of bad performance is just killing us right now, user- and Google-wise. I'm wondering about running two dedicated boxes for LAMP, one running as the primary Nginx/Apache (proxy pass) box, and the other as the MySQL box. Running a single box scares the bejesus out of me, because who knows how long it will take anyone to fix a RAID card or whatever. The idea is to set this up using some sort of failover system with Pacemaker and Heartbeat: if one server goes down, the other can take over, running both web and DB. There are some good articles over at Linode about this. I have a few DBs that are 1GB+ and would like to load them into memory. Because of this, I'm shying away from a Linode HA setup, because for the price I could do it with two dedicated servers as described. Am I mad or an idiot? What are people out there doing for pseudo-high-availability, good-performance setups under $400/month? I'm a webmaster; I do a lot of things, none of them that well :)

    Read the article

  • Where can I learn about managing domain names for my websites? [closed]

    - by Shahbaz
    [I originally asked this question on serverfault.com, where it was closed as 'out of scope.' Hopefully it is appropriate for this forum.] I am a developer who doesn't understand how to effectively manage Internet domain names. Say I registered a name with Namecheap and host a website on Linode. Now, what is an A record? What is a name server, and do I host it with Namecheap or Linode? Why would I pay Amazon when others are free? Does any of this matter in terms of website latency or reliability? I feel like a script kiddie, copying and pasting others' settings and hoping they work. Is there a book or other resource that explains all this? I know Amazon is full of books about DNS, but AFAIK they are about setting up DNS servers for local networks, not the Internet. P.S. To emphasize, I'm asking for books or long write-ups which explain this to technically competent people who just haven't had to think about the role of commercial registrars, name servers, commercial hosts, and commercial websites, and how all the parts play together on the real Internet (not local networks).
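
    As a tiny illustration of the two terms in the question, dig shows both pieces for any domain (example.com is a stand-in):

        dig +short NS example.com    # the name servers that answer for the domain;
                                     # whichever provider these point at is "hosting the DNS"
        dig +short A  example.com    # the A record: the IPv4 address browsers connect to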

    Read the article
