Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.


  • replacing kernel on non-booting ec2 instance

    - by TheToolBox
    So I had an Amazon EC2 instance crash during the update to 14.04 LTS. Frustratingly, it appears the kernel might be bad (or at least that's what the log below says to me; I could totally be wrong). I'm able to mount the volume on another, working server, chroot into the broken volume, and sudo apt-get remove linux-headers-3.MOSTRECENT. Unfortunately, when I then try sudo update-grub, it comes back with:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    What am I missing? Here's the log from the server's attempted bootup:

        Booting 'Ubuntu 14.04 LTS, memtest86+'
        root (hd0)
        Filesystem type is ext2fs, using whole disk
        kernel /boot/memtest86+.bin
        ============= Init TPM Front ================
        Tpmfront:Error Unable to read device/vtpm/0/backend-id during tpmfront initialization! error = ENOENT
        Tpmfront:Info Shutting down tpmfront
        xc: error: panic: xc_dom_core.c:621: xc_dom_find_loader: no loader found: Invalid kernel
        xc_dom_parse_image returned -1
        close(3)
        Error 9: Unknown boot failure
        Press any key to continue...

    Thanks in advance!
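
    The grub-probe error suggests the chroot is missing the virtual filesystems grub expects. A minimal sketch of bind-mounting them before entering the chroot, assuming the broken volume is mounted at /mnt/broken (that mount point is a placeholder):

        sudo mount --bind /dev  /mnt/broken/dev
        sudo mount --bind /proc /mnt/broken/proc
        sudo mount --bind /sys  /mnt/broken/sys
        sudo chroot /mnt/broken
        update-grub    # run inside the chroot, after the bind mounts

    Separately, the boot log shows grub loading /boot/memtest86+.bin as the kernel (hence "Invalid kernel"), so the default entry in the volume's /boot/grub/menu.lst may also need to point back at a real kernel once update-grub succeeds.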

    Read the article

  • Cloudformation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu, you will need to install the cfn-init file yourself, which I have done:

        "Properties" : {
            "KeyName" : { "Ref" : "KeyName" },
            "SpotPrice" : "0.05",
            "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
            "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
            "InstanceType" : { "Ref" : "InstanceType" },
            "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                "#!/bin/bash\n",
                "apt-get -y install python-setuptools\n",
                "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n",
                "cfn-init ",
                " --stack ", { "Ref" : "AWS::StackName" },
                " --resource LaunchConfig ",
                " --configset ALL",
                " --access-key ", { "Ref" : "WorkerKeys" },
                " --secret-key ", {"Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]},
                " --region ", { "Ref" : "AWS::Region" },
                " || error_exit 'Failed to run cfn-init'\n"
            ]]}}

    But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs:

        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules
            cc.handle(name, run_args, freq=freq)
          File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle
            [name, self.cfg, self.cloud, cloudinit.log, args])
          File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run
            func(*args)
          File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle
            util.runparts(runparts_path)
          File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts
            raise RuntimeError('runparts: %i failures' % failed)
        RuntimeError: runparts: 1 failures
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user']

    I have absolutely no idea what scripts-user means, and Google is not helping much here either. When I ssh into the server, I can see that it runs the user-data script, since cfn-init is available as a command, whereas it is not in the original AMI the instance is made from.
    However, I have a launchConfig:

        "Comment" : "Install a simple PHP application",
        "AWS::CloudFormation::Init" : {
            "configSets" : { "ALL" : ["WorkerRole"] },
            "WorkerRole" : {
                "files" : {
                    "/etc/cron.d/worker.cron" : {
                        "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n",
                        "mode" : "000644",
                        "owner" : "root",
                        "group" : "root"
                    },
                    "/home/ubuntu/worker_cron.php" : {
                        "content" : { "Fn::Join" : ["", [
                            "#!/usr/bin/env php",
                            "<?php",
                            "define('ROOT', dirname(__FILE__));",
                            "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";",
                            "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, "\";",
                            "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";",
                            "exec('git clone x '.ROOT.'/worker');",
                            "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){",
                            "echo 'git not downloaded right';",
                            "exit();",
                            "}",
                            "echo 'git downloaded';",
                            "include_once ROOT.'/worker/worker_despatcher.php';"
                        ]]},
                        "mode" : "000755",
                        "owner" : "ubuntu",
                        "group" : "ubuntu"
                    }
                }
            }
        }

    which does not seem to run at all. I have checked for the files' existence in my home directory, and they're not there. I have checked for the cron job entry, and it's not there either. After reading through the documentation, I cannot see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?
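
    The scripts-user failure just means the user-data script exited non-zero. A hedged first debugging step, assuming Ubuntu's standard cloud-init layout (the paths below are where cloud-init normally keeps things, and part-001 is its usual script name):

        sudo cat /var/lib/cloud/instance/user-data.txt        # confirm the rendered user data arrived
        sudo bash -x /var/lib/cloud/instance/scripts/part-001 # re-run the failing script verbosely
        sudo cat /var/log/cfn-init.log                        # cfn-init's own log, if it got that far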

    Read the article

  • EBS full device confusion

    - by Mike
    I have a 500GB EBS device (/dev/xvdf) mounted to /vol, and all data on the box seems to be writing to /vol correctly (see du output below). For some reason /dev/xvda1 is totally full. Any idea what's going on here?

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda1             32G   30G  8.0K 100% /
        udev                   34G  8.0K   34G   1% /dev
        tmpfs                  14G  176K   14G   1% /run
        none                  5.0M     0  5.0M   0% /run/lock
        none                   34G     0   34G   0% /run/shm
        /dev/xvdb             827G  201M  785G   1% /mnt
        /dev/xvdf             500G  145G  356G  29% /vol

        $ du -sh *
        8.7M  bin
        18M   boot
        8.0K  dev
        5.1M  etc
        48K   home
        0     initrd.img
        80M   lib
        4.0K  lib64
        16K   lost+found
        4.0K  media
        20K   mnt
        4.0K  opt
        0     proc
        40K   root
        176K  run
        7.1M  sbin
        4.0K  selinux
        4.0K  srv
        0     sys
        4.0K  tmp
        414M  usr
        356M  var
        0     vmlinuz
        145G  vol
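
    The du output accounts for only about 1GB on / (the 145G vol line is the EBS mount), so something besides visible files is eating the root volume. A hedged pair of checks, assuming lsof is installed (/mnt/rootonly is a placeholder path):

        sudo lsof +L1                       # deleted-but-still-open files that still hold space
        sudo mkdir /mnt/rootonly
        sudo mount --bind / /mnt/rootonly   # reveals files hidden underneath mount points
        sudo du -sh /mnt/rootonly/*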

    Read the article

  • Can you change an AWS Elastic Load Balancer health check without causing instances to go out of service?

    - by Anton I. Sipos
    For a number of reasons I need to change the health check URL of a live site behind an ELB. The ELB is configured for health checks every 30 seconds, with a healthy threshold of 2 and an unhealthy threshold of 2. I need to ensure I make this change with no outage. If I make the change to the health check URL, and assuming the new URL checks successfully, will the instances stay healthy on the load balancer, or will they go out of service until they pass 2 health checks (i.e., for 1 minute)?
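
    For reference, a hedged sketch of the change itself from the command line, matching the thresholds above (the ELB name and path are placeholders); the question is whether applying this resets the instances' health state:

        aws elb configure-health-check \
            --load-balancer-name my-elb \
            --health-check Target=HTTP:80/new-check-path,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2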

    Read the article

  • Picking only the value field out of Cloudwatch Dimensions, Java

    - by GroovyUser
    I have some data retrieved from the CloudWatch APIs. Specifically, I have used listMetrics. The data that I got from this call is:

        {Metrics: [
            {Namespace: Metric from grails, MetricName: hello123,
             Dimensions: [{Name: name, Value: 1425, }], },
            {Namespace: Metric from grails, MetricName: hello123,
             Dimensions: [{Name: name, Value: 1068, }], },

    That is the correct data, as I would expect. I need a way to return only the Value fields, not the other things. Is there any way to do this, in Java? Thanks in advance.

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to the Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data, and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
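
    A minimal sketch of the usual pattern, assuming the scripts already run unattended and s3cmd is installed and configured on the instance (the script path and bucket name are placeholders). A crontab entry runs the scraper daily and pushes its output to S3:

        # crontab -e on the instance; run at 06:00 UTC every day
        0 6 * * * /usr/bin/python /home/ec2-user/scraper.py && s3cmd put /home/ec2-user/data/*.csv s3://my-scraper-backups/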

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to access Route 53's DNS. By this I mean they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using security groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so the security groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.
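
    One hedged check worth separating out: Route 53 serves authoritative answers for your own zones rather than acting as a general resolver, so the instance's default resolver normally stays in place. Querying the zone's assigned Route 53 name servers directly distinguishes a zone problem from a resolver problem (the domain and name server below are placeholders):

        dig example.com @ns-123.awsdns-45.com   # ask one of the zone's Route 53 name servers
        dig example.com                         # ask the instance's default resolver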

    Read the article

  • Are EC2 security group changes effective immediately for running instances?

    - by Jonik
    I have an EC2 instance running, and it belongs to a security group. If I add a new allowed connection to that security group through the AWS Management Console, should that change be effective immediately? Or perhaps only after a restart of the instance? In my case, I'm trying to allow access to PostgreSQL's default port (tcp 5432 5432 0.0.0.0/0), and I'm not sure if it's the EC2 firewall or PostgreSQL's settings that are refusing the connection.
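
    A hedged way to tell the two apart from the client side (the hostname is a placeholder): a connect timeout usually points at the security group silently dropping packets, while an immediate "connection refused" means the host was reached and PostgreSQL itself is not listening on or allowing the connection.

        nc -zv ec2-203-0-113-10.compute-1.amazonaws.com 5432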

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have 1 EC2 DB server (MySQL + NFS file sharing + Memcached), and I have 3 EC2 web servers (lighttpd) which mount the NFS folders from the DB server. Everything went smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything is back to normal. Static files like .html are unaffected. All servers have the same problem at exactly the same time. I have spent one whole day analyzing the reason. Finally, I found out that when the problem appears, the file descriptor count of lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds. The number of fds is around 250 in normal times; however, when the problem appears, it rises to 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on? (Attached: the CPU graph of one of the web servers.)
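
    A hedged way to watch the spike as it happens and see what the extra descriptors actually point at (sockets, NFS files, pipes), assuming a single lighttpd worker process:

        watch -n 5 'ls /proc/$(pgrep -o lighttpd)/fd | wc -l'
        ls -l /proc/$(pgrep -o lighttpd)/fd | awk '{print $NF}' | cut -d: -f1 | sort | uniq -c | sort -rn   # tally fd targets by type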

    Read the article

  • Amazon SQS invalid binary character in message body

    - by letronje
    I have a web app that sends messages to an Amazon SQS queue. The Amazon SQS lib throws an 'AmazonSQSException' since the message contained an invalid binary character. The message is the referrer obtained from an incoming HTTP request. This is what it looks like:

        http://ads.vrx.adbrite.com/adserver/display_iab_ads.php?sid=1220459&title_color=0000FF&text_color=000000&background_color=FFFFFF&border_color=CCCCCC&url_color=008000&newwin=0&zs=3330305f323530&width=300&height=250&url=http%3A%2F%2Funblockorkutproxy.com%2Fsearch.php%2FOi8vZG93%2FbmxvYWRz%2FLnppZGR1%2FLmNvbS9k%2Fb3dubG9h%2FZGZpbGUv%2FNTY5MTQ3%2FNi9NeUN1%2FdGVHaXJs%2FZnJpZW5k%2FWmFoaXJh%2FLndtdi5o%2FdG1s%2Fb0%2F^Fô}úÃ<99ë)j

    Looks like the characters in bold (the garbled run at the end) are the invalid characters. Is there an easy way to filter out characters that are not accepted by Amazon? Here are the characters allowed by Amazon in a message body. I am not sure what regex I should use to replace invalid characters with ''.
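
    SQS documents the allowed set as #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD, and #x10000-#x10FFFF. A hedged one-liner sketch that strips everything outside that set, shown with perl since its regexes take Unicode ranges directly (message.txt is a placeholder):

        perl -CSD -pe 's/[^\x{9}\x{A}\x{D}\x{20}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]//g' message.txt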

    Read the article

  • Living the Amazon Life [Video]

    - by Asian Angel
    Amazon has an amazing selection of products available to satisfy your needs and desires, but what if their services were to expand even more? This humorous video looks at what it might be like if you could literally get anything you wanted through a unique assortment of Amazon sister-sites! Note: Video contains some language that may be considered inappropriate. AMAZON LIFE [via Geeks are Sexy]

    Read the article

  • Can't Connect to IIS Ftp Site under Amazon EC2

    - by h3n
    IIS 7.5:

    - FTP Firewall Support: data channel port range 49152-65535, external IP set to the Amazon EC2 static IP
    - FTP IPv4 Restriction: allow the Amazon EC2 static IP
    - FTP Authentication: Anonymous: Enabled, Basic: Disabled, IIS Manager: Enabled
    - FTP Authorization: Allow All Users: Read/Write

    Windows Firewall: inbound port 21 open, inbound port range 49152-65535 open, outbound port 20 open.

    Amazon EC2 security group: Custom TCP Rule: 21, Custom TCP Rule: 49152-65535.

    It works in Internet Explorer when I type ftp://localhost on the server itself, but when I enter the Amazon EC2 static IP (ftp://IPADDRESS) it doesn't connect. I also can't connect with FileZilla.
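
    A hedged first check from a machine outside EC2, to see whether the control channel and the passive data range are reachable at all (the IP is a placeholder):

        nc -zv 203.0.113.10 21      # FTP control channel
        nc -zv 203.0.113.10 49152   # one port from the passive data range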

    Read the article

  • Can no longer deploy to Amazon AWS using VS 2010

    - by KevinDeus
    Did something change at Amazon recently? I'm trying to redeploy to my Amazon instance, and the "Publish to Amazon CloudFormation" plugin for VS 2010 no longer appears to update my instance. It tells me that the upload is successful, but my instance does not appear to be updating on Amazon. I've tried disabling all my instances and using the tool to create a new instance, but no luck. I do see that the URL of the deployed application (which looks like this: http://c2-107-20-11-27.compute-1.amazonaws.com) does not appear to match up with the public IP of my instance on Amazon (even when it creates a new one), which seems to indicate that something might be broken. Any clues? (BTW, whenever the Amazon VS 2010 plugin creates a new instance, I am sure to reconnect my Elastic IP to the new instance.)

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing the IP so that the old server can connect to RDS. Using the mysql CLI, it simply hangs for a while and then responds:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work on my laptop, though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 on the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end. That still didn't work. One other note: perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
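
    A /32 means "exactly this one address", so the usual gotcha is authorizing the address ifconfig shows rather than the address the server is actually seen as on the internet (they differ whenever NAT is involved). A hedged check, run from the dedicated server:

        curl -s http://checkip.amazonaws.com   # the public IP the server egresses from
        # authorize that address as <ip>/32 in the RDS security group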

    Read the article

  • Configure non-destructive Amazon S3 bucket policy

    - by Assaf
    There's a bucket into which some users may write their data for backup purposes. They use s3cmd to put new files into their bucket. I'd like to enforce a non-destruction policy on these buckets - meaning it should be impossible for users to destroy data; they should only be able to add it. How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
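
    A hedged sketch of the closest thing a bucket policy can express: grant the user s3:PutObject and nothing else. Bucket policies can't directly test whether an object already exists, so overwrite protection usually comes from enabling versioning on top of this. The account ID, user, and bucket names are placeholders:

        aws s3api put-bucket-policy --bucket my-backup-bucket --policy '{
          "Version": "2008-10-17",
          "Statement": [{
            "Sid": "AllowPutOnly",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/backup-writer" },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-backup-bucket/*"
          }]
        }'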

    Read the article

  • Providing a static IP for resources behind AWS Elastic Load Balancer (ELB)

    - by tharrison
    I need a static IP address that handles SSL traffic from a known source (a partner). Our servers are behind an AWS Elastic Load Balancer (ELB), which cannot provide a static IP address; there are many threads about this here. My thought is to create an instance in EC2 whose sole purpose in life is to be a reverse proxy server with its own IP address, accepting HTTPS requests and forwarding them to the load balancer. Are there better solutions?
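
    For what the proxy idea would look like: attach an Elastic IP to the instance and run a reverse proxy on it. A minimal hedged sketch, assuming nginx, with all names as placeholders:

        server {
            listen 443 ssl;
            server_name partner-endpoint.example.com;
            ssl_certificate     /etc/nginx/ssl/partner.crt;
            ssl_certificate_key /etc/nginx/ssl/partner.key;
            location / {
                proxy_pass https://my-elb-1234567890.us-east-1.elb.amazonaws.com;
            }
        }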

    Read the article

  • SSHing into EC2 instance fails - -v details below!

    - by ming yeow
    Hi folks! I created a new EC2 instance, but I am unable to ssh in with the key I normally use with my other instances. The -v details are below. Thanks!

        debug1: Host 'dbl01' is known and matches the RSA host key.
        debug1: Found key in /Users/mingyeow/.ssh/known_hosts:26
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/mingyeow/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/mingyeow/.ssh/identity
        debug1: Trying private key: /Users/mingyeow/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).
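
    The log shows only the default keys (id_rsa, identity, id_dsa) being offered. If the new instance was launched with a different keypair, ssh needs to be pointed at that .pem explicitly; a hedged example with placeholder paths (the username depends on the AMI: ec2-user, ubuntu, root, ...):

        ssh -v -i ~/.ssh/new-instance-keypair.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com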

    Read the article

  • Amazon EC2 Load Balancer: Defending against DoS attack?

    - by netvope
    We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address is replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the X-Forwarded-For header (HTTP_X_FORWARDED_FOR). To me, blocking IPs at the web application level is not effective. What is the best practice to defend against a DoS attack in this scenario? In this article, someone suggested that we can replace the Elastic Load Balancer with HAProxy. However, there are certain disadvantages to doing this, and I'm trying to see if there are any better alternatives.
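
    One hedged middle ground between iptables and the application: have the web server itself restore the client address from X-Forwarded-For and deny there. A sketch for nginx, assuming it was built with the realip module (addresses are placeholders):

        set_real_ip_from  10.0.0.0/8;       # address range the ELB connects from
        real_ip_header    X-Forwarded-For;
        deny              203.0.113.42;     # blacklisted client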

    Read the article

  • increasing amazon root volume size

    - by OCD
    I have a default Amazon EC2 instance with an 8GB root volume. I am running out of space. I have:

    1. Detached the current EBS volume in the AWS Management Console (web).
    2. Created a snapshot of this volume.
    3. Created a new volume with 50GB of space from my snapshot.
    4. Attached the new volume back to the instance at /dev/sda1.

    However, when I reconnect to the account and run df -h, it's still not using my new volume's size:

        Filesystem     1K-blocks     Used  Available  Use%  Mounted on
        /dev/xvda1       8256952  8173624          0  100%  /
        tmpfs             308508       40     308468    1%  /dev/shm

    How do I make this work?
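
    The volume grew, but the filesystem on it is still the old 8GB one; a snapshot restores the filesystem at its original size. A hedged fix, assuming an ext3/ext4 root filesystem (take a fresh snapshot first):

        sudo resize2fs /dev/xvda1   # grow the filesystem to fill the 50GB volume
        df -h                       # should now report ~50G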

    Read the article

  • Amazon ec2 folder missing

    - by CQM
    To set permissions on the settings file, on your Amazon EC2 instance, at a command prompt, use the following command:

        sudo chmod 666 /var/www/html/sites/default/settings.php

    except I don't have a www folder in my instance:

        [ec2-user@ip-10-242-118-215 ~]$ cd /
        [ec2-user@ip-10-242-118-215 /]$ ls
        bin   cgroup  etc   lib    local       media  opt   root  selinux  sys  usr
        boot  dev     home  lib64  lost+found  mnt    proc  sbin  srv      tmp  var
        [ec2-user@ip-10-242-118-215 /]$ cd var
        [ec2-user@ip-10-242-118-215 var]$ ls
        account  db     games  local  log   nis  preserve  run    tmp
        cache    empty  lib    lock   mail  opt  racoon    spool  yp

    Please advise: did I forget to install something that the Amazon instructions assumed I knew about? Running the 64-bit Amazon Linux AMI (March 2012). I feel like the web server is missing?
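
    /var/www is created by the web server package, which the base Amazon Linux AMI doesn't ship. A hedged sketch of the likely missing step (package names assume a standard LAMP setup on Amazon Linux):

        sudo yum install -y httpd php php-mysql
        sudo service httpd start
        ls /var/www    # should now contain cgi-bin/ html/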

    Read the article

  • Running game server and webserver on EC2

    - by mazzzzz
    Hey guys, I have a webhost, and an EC2 server (to run a game server on). The problem is that I want to access/modify the EC2 server's files with PHP admin programs. I looked into a lot of options for just having the webhost communicate with the EC2 server (ssh, etc.), but none of them panned out. My question is: if I were to install a lightweight webserver (think lighttpd) on my EC2 server, how badly would it hurt the game server's performance? I was leaning away from this solution, even though the webserver (on the EC2 server) wouldn't get many hits (fewer than 100 a day). Thanks for your thoughts, Max

    Read the article

  • SOLR high CPU usage in amazon EC2

    - by user644745
    I installed solr-3.6 on my local Windows box and it worked fine. I installed solr-4.0 on an Amazon EC2 Linux large instance and the CPU usage shot up to 100%, then stayed at 80-90% on average. I thought it could be because of 4.0, so I installed 3.6 in EC2 again, but the CPU usage was still 80-90% on average. With both versions, Solr works in EC2; I just don't know why the CPU usage is so high. I started the Solr server using "sudo nohup java -jar start.jar &". On my local box Java 1.7 is installed, and in EC2 it is 1.6.0_24. I have mapped the Solr dir to an EBS volume:

        /dev/mapper/vg1-solr   8361916  1935928  6342128  24%  /home/ec2-user/SOLR/solr/example/solr

    Is there any known issue?
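
    A hedged first look at where the cycles go, assuming a JDK (not just a JRE) is installed so jstack is available:

        top -H -p $(pgrep -f start.jar)       # per-thread CPU for the Solr JVM
        jstack $(pgrep -f start.jar) | less   # match the hot thread IDs to stack traces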

    Read the article

  • Cannot SSH into Amazon EC2 instance

    - by edelwater
    I read: Cannot connect to ec2 instance, http://stackoverflow.com/questions/5635640/cannot-ssh-into-amazon-ec2-instance, Amazon EC2 instance ssh problems, etc., but could not fix it: suddenly (after a year of service, with no changes to my WinSCP settings) it gives me "network error connection timed out" (I'm using ec2-user; it also fails from the Amazon console). Log file: http://pastebin.com/vNq6YQvN. All sites that run on it run fine. Port 22 is allowed in the security group (never changed it), and I am using the correct ec2-user and domain. Via my WinSCP/PuTTY I can connect to other hosting (via ssh). Update: SOLVED. I spent 2 days without looking at my own IP address (since it had not changed in the past 3 years). Your comments made the spark in my brain. Thank you so much. (And only now do I find dozens of discussions from angry users whose static addresses from my provider were changed to dynamic ones: http://gathering.tweakers.net/forum/list_messages/1501005/12 ...)

    Read the article
