Search Results

Search found 2839 results on 114 pages for 'amazon'.

Page 6/114

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application deployed with Amazon Web Services' Elastic Beanstalk? I add the files to a git repository, and I push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

        git aws.push

    The following files are in the .gitignore:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html Quoting from that link: "Example Snippet: The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:"

        sources:
          /etc/myapp: http://s3.amazonaws.com/mybucket/myobject

    Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:

        sources:
          /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

    That config.tar.gz file will extract to:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

        Error message: No such file or directory - /var/app/current/config/database.yml
        Exception class: Errno::ENOENT
        Application root: /var/app/current
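
    A minimal sketch of an alternative worth trying: fetch and unpack the archive with a deploy-time script instead of a sources: key. This assumes the instance has the aws CLI plus credentials that can read the bucket; the bucket name is hypothetical. Elastic Beanstalk stages each deploy in a temporary directory before promoting it to /var/app/current, which may be why files written to /var/app/current/ at configuration time never appear.

        # Sketch only: fetch secret config during deployment (hypothetical bucket).
        aws s3 cp s3://mybucket/config.tar.gz /tmp/config.tar.gz
        # /var/app/ondeck is the staging directory on typical Amazon Linux
        # platforms; unpacking there means the files survive the promotion.
        tar -xzf /tmp/config.tar.gz -C /var/app/ondeck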

    Read the article

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an elastic IP to that host (let's say, 55.55.55.55) and then creating an associated A record. For example, let's say I want to create "ec2-corp01.mydomain.com" I might do: ec2-corp01.mydomain.com. A 55.55.55.55 300 Then on that EC2 instance, I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need to have one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like: Create a script that queries the internal EC2 tools to determine an instance's private hostname On instance boot, call that script to determine its hostname, and then using the command-line Route 53 interface to find and update that hostname to its current internal hostname Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes) it should take effect pretty quickly Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!

    Read the article

  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances. Most of the instances also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3 so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where it seems a better option to create your own solution, as you can then apply it to new servers in the future. With Skeddly we'll pay for each server. How do we schedule daily snapshots of our Windows instances?
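
    A minimal sketch of the do-it-yourself route, assuming the modern aws CLI (which runs on both Windows and Linux) and a hypothetical instance ID; scheduled with Task Scheduler or cron, it snapshots every volume attached to the instance:

        # Sketch: snapshot all volumes attached to one (hypothetical) instance.
        for vol in $(aws ec2 describe-volumes \
            --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
            --query "Volumes[].VolumeId" --output text); do
          aws ec2 create-snapshot --volume-id "$vol" \
            --description "daily backup $(date +%F)"
        done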

    Read the article

  • cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    Two AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC. Both have an Elastic IP, and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node. The instance in the VPC is running munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow-line with the IP address (regular expression) of the munin server to /etc/munin/munin-node.conf. I bind munin-node to any interface using host *. Then I did sudo service munin-node restart. Then I ran netstat:

        $ sudo netstat -at | grep munin
        tcp        0      0 *:munin     *:*     LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open http
        4949/tcp open munin

    So from the outside, the http port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
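
    Two quick checks worth adding, as a sketch rather than a diagnosis: "closed" in nmap means the host answered with a reset, so packets are reaching the machine and something local is refusing them; a host firewall rule or a daemon bound only to loopback would both look like this.

        # Does a local firewall rule mention the munin port?
        sudo iptables -L -n | grep 4949
        # Is munin-node listening on 0.0.0.0 rather than only 127.0.0.1?
        sudo netstat -tlnp | grep 4949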

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
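
    A starting place for seeing who is waiting on whom, as a sketch (the endpoint and user are hypothetical; the RDS master user normally has the PROCESS privilege these statements need):

        # Lists current transactions, including which locks they hold and wait for.
        mysql -h mydb.example.rds.amazonaws.com -u admin -p -e "SHOW ENGINE INNODB STATUS\G"
        # Shows long-running statements that may be holding locks open.
        mysql -h mydb.example.rds.amazonaws.com -u admin -p -e "SHOW FULL PROCESSLIST"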

    Read the article

  • I don't get prices with Amazon Product Advertising API

    - by Xarem
    I'm trying to get the prices for an ASIN with the Amazon Product Advertising API. Code:

        $artNr = "B003TKSD8E";
        $base_url = "http://ecs.amazonaws.de/onca/xml";
        $params = array(
            'AWSAccessKeyId' => self::API_KEY,
            'AssociateTag'   => self::API_ASSOCIATE_TAG,
            'Version'        => "2010-11-01",
            'Operation'      => "ItemLookup",
            'Service'        => "AWSECommerceService",
            'Condition'      => "All",
            'IdType'         => 'ASIN',
            'ItemId'         => $artNr);
        $params['Timestamp'] = gmdate("Y-m-d\TH:i:s.\\0\\0\\0\\Z", time());
        $url_parts = array();
        foreach(array_keys($params) as $key)
            $url_parts[] = $key . "=" . str_replace('%7E', '~', rawurlencode($params[$key]));
        sort($url_parts);
        $url_string = implode("&", $url_parts);
        $string_to_sign = "GET\necs.amazonaws.de\n/onca/xml\n" . $url_string;
        $signature = hash_hmac("sha256", $string_to_sign, self::API_SECRET, TRUE);
        $signature = urlencode(base64_encode($signature));
        $url = $base_url . '?' . $url_string . "&Signature=" . $signature;
        $response = file_get_contents($url);
        $parsed_xml = simplexml_load_string($response);

    I think this should be correct - but I don't get offers in the response:

        SimpleXMLElement Object
        (
            [OperationRequest] => SimpleXMLElement Object
                (
                    [RequestId] => *************************
                    [Arguments] => SimpleXMLElement Object
                        (
                            [Argument] => Array
                                (
                                    [0] => [Name] => Condition      [Value] => All
                                    [1] => [Name] => Operation      [Value] => ItemLookup
                                    [2] => [Name] => Service        [Value] => AWSECommerceService
                                    [3] => [Name] => ItemId         [Value] => B003TKSD8E
                                    [4] => [Name] => IdType         [Value] => ASIN
                                    [5] => [Name] => AWSAccessKeyId [Value] => *************************
                                    [6] => [Name] => Timestamp      [Value] => 2011-11-29T01:32:12.000Z
                                    [7] => [Name] => Signature      [Value] => *************************
                                    [8] => [Name] => AssociateTag   [Value] => *************************
                                    [9] => [Name] => Version        [Value] => 2010-11-01
                                )
                        )
                    [RequestProcessingTime] => 0.0091540000000000
                )
            [Items] => SimpleXMLElement Object
                (
                    [Request] => SimpleXMLElement Object
                        (
                            [IsValid] => True
                            [ItemLookupRequest] => SimpleXMLElement Object
                                (
                                    [Condition] => All
                                    [IdType] => ASIN
                                    [ItemId] => B003TKSD8E
                                    [ResponseGroup] => Small
                                    [VariationPage] => All
                                )
                        )
                    [Item] => SimpleXMLElement Object
                        (
                            [ASIN] => B003TKSD8E
                            [DetailPageURL] => http://www.amazon.de/Apple-iPhone-4-32GB-schwarz/dp/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB003TKSD8E
                            [ItemLinks] => SimpleXMLElement Object
                                (
                                    [ItemLink] => Array
                                        (
                                            [0] => [Description] => Add To Wishlist
                                                   [URL] => http://www.amazon.de/gp/registry/wishlist/add-item.html%3Fasin.0%3DB003TKSD8E%26SubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E
                                            [1] => [Description] => Tell A Friend
                                                   [URL] => http://www.amazon.de/gp/pdp/taf/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E
                                            [2] => [Description] => All Customer Reviews
                                                   [URL] => http://www.amazon.de/review/product/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E
                                            [3] => [Description] => All Offers
                                                   [URL] => http://www.amazon.de/gp/offer-listing/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E
                                        )
                                )
                            [ItemAttributes] => SimpleXMLElement Object
                                (
                                    [Manufacturer] => Apple Computer
                                    [ProductGroup] => CE
                                    [Title] => Apple iPhone 4 32GB schwarz
                                )
                        )
                )
        )

    Can someone please explain why I don't get any price information? Thank you very much.

    Read the article

  • Godaddy vs. Route53 for DNS

    - by tim peterson
    I have my website set up as an EC2 instance, and my DNS is currently GoDaddy. I'm considering switching to Amazon AWS Route 53 for DNS. The one thing I noticed, however, is that Route 53 charges monthly fees, but I never get any bills from GoDaddy. Obviously, nobody likes getting charged for something they can get for free. If GoDaddy is cheaper, can anyone confirm that the page load speed of an EC2 instance is actually better via Route 53 vs. GoDaddy? If it is not faster or cheaper, can someone point out other reasons it might make sense to do this switch? thanks, tim
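
    For what it's worth, the DNS provider affects only the initial name lookup; once the name is resolved, page load speed from the EC2 instance is identical either way. A rough way to compare the two is to time queries against each provider's authoritative nameservers directly (the nameserver hostnames here are hypothetical placeholders):

        # Sketch: compare authoritative query times for the same record.
        dig @ns1.domaincontrol.com www.example.com | grep "Query time"
        dig @ns-123.awsdns-00.com  www.example.com | grep "Query time"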

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do:

        Be able to run a Windows instance (EBS/S3 instance doesn't matter)
        Attach local instance storage as drive D:
        Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API
        Be able to take a launched instance, update software, shut down, and create another AMI based upon that
        Wash, rinse, repeat

    One other potential option, which isn't horrible but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start up an instance based upon the AMI, it'll create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
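
    One possibility worth testing, sketched with the modern aws CLI (an assumption; the IDs are hypothetical): block device mappings, including instance-store mappings, can be supplied when creating the image, so the ephemeral drive comes attached on every launch without touching the GUI. Windows should then see ephemeral0 as a disk that can be brought online as D: from Disk Management.

        # Sketch: bake an instance-store mapping into the AMI itself.
        aws ec2 create-image \
          --instance-id i-0123456789abcdef0 \
          --name "windows-with-ephemeral" \
          --block-device-mappings '[{"DeviceName":"xvdb","VirtualName":"ephemeral0"}]'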

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue:

        I enabled the SSH port.
        I tried usernames other than ubuntu, like ec2-user and root.
        I initially set an inbound ssh rule in the security group allowing connections only from my IP address. When that did not work, I changed it to allow any IP to connect.

    But those actions did not fix the problem. Here are my guesses as to what I am missing in getting the EC2 instance connection to work:

        My /etc/ssh_config file may be preventing the connection from taking place.
        I may have missed an important networking detail when setting up the VPC.
        I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My questions for the community: Am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or some other method?
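
    On the last guess: a private IP in a VPC is not reachable from the internet, so attaching a public address is the usual fix. A sketch with the modern aws CLI (the instance ID is hypothetical); the subnet's route table also needs a 0.0.0.0/0 route to an internet gateway:

        # Allocate a VPC Elastic IP and attach it to the instance.
        aws ec2 allocate-address --domain vpc
        aws ec2 associate-address --instance-id i-0123456789abcdef0 \
          --allocation-id eipalloc-0123456789abcdef0   # ID returned by the allocate call
        # Verify there is an internet-gateway route for 0.0.0.0/0.
        aws ec2 describe-route-tables --query "RouteTables[].Routes"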

    Read the article

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up the same ELB configuration (required for SSL termination), and configured latency-based routing in Route 53. And it works as suggested. But clients from the EU have speed problems. This is due to the fact that the EU EC2 instance connects to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication. Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using the RDS technology for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).

    Read the article

  • Ruby 1.9 and Amazon SQS?

    - by fields
    Is there a good library/gem for accessing Amazon SQS from Ruby 1.9? The Amazon Ruby example and right_aws do not work as-is with Ruby 1.9. I'd strongly prefer something that's known to work under reasonably heavy load (a few hundred thousand queue items or more per day).
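
    One pointer, offered as a sketch rather than a recommendation: the official aws-sdk gem (Amazon's own Ruby SDK) includes an SQS client and, to my knowledge, supports Ruby 1.9; whether it holds up under that load is something to verify yourself.

        # Sketch: install Amazon's official Ruby SDK, which bundles an SQS client.
        gem install aws-sdk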

    Read the article

  • Do I need Amazon's EC2, Cloudfront, RDS?

    - by Jasie
    Hello, I want to publish a web site on Amazon's servers that:

        Runs CakePHP
        Uses MySQL to store data
        Lets users upload audio through Flash (currently using a hosted Flash Media Server), and listen to the files later

    Do I need Amazon's EC2 for the website, RDS for the MySQL database, and CloudFront for the FMS? Thanks.

    Read the article

  • PHP Libraries for Amazon Simple Notification Service

    - by webdestroya
    I wanted to start using the Amazon Simple Notification Service (http://aws.amazon.com/sns/), but I have not found any PHP libraries that I can use to access the service. I would rather not create my own library, so I wanted to see if anybody has used a PHP library for the SNS service, and if they would recommend it.

    Read the article

  • Mapping an Amazon server to a domain name registered with name.com

    - by S4M
    I have an Amazon S3 web server and a domain name registered at name.com (the name is sam-experiments.com). I am trying to have a static page hosted on the Amazon web server displayed at http://www.sam-experiments.com. On the web server side, my bucket name is 'www.sam-experiments.com', and it links to here: http://www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com/ On name.com, I added a new record with the following characteristics (as specified in the documentation here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs):

        Record Type: CNAME
        Record Host: www.sam-experiments.com
        Record Answer: www.sam-experiments.com.s3.amazonaws.com.
        TTL: 300

    However, nothing gets displayed on www.sam-experiments.com, and I am not able to find what I am doing wrong. I really would appreciate some tips. Thanks! Note: I already posted this question on Stack Overflow but didn't get any answer, so I thought posting here may be more appropriate.
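
    A quick check of what the record actually resolves to, as a sketch. Note that the bucket above works as a website endpoint (s3-website-eu-west-1.amazonaws.com), so the CNAME answer and the endpoint style have to match for website hosting:

        # What does the CNAME currently point at?
        dig +short www.sam-experiments.com CNAME
        # Does the website endpoint itself serve the page?
        curl -I http://www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com/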

    Read the article

  • How can I parse Amazon S3 log files?

    - by artlung
    What are the best options for parsing Amazon S3 (Simple Storage) log files? I've turned on logging, and now I have log files that look like this:

        858e709ba90996df37d6f5152650086acb6db14a67d9aaae7a0f3620fdefb88f files.example.com [08/Jul/2010:10:31:42 +0000] 68.114.21.105 65a011a29cdf8ec533ec3d1ccaae921c 13880FBC9839395C REST.GET.OBJECT example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg "GET /example.com/blog/wp-content/uploads/2006/10/kitties_we_cant_stop_here_this_is_bat_country.jpg HTTP/1.1" 200 - 32957 32957 12 10 "http://atlanta.craigslist.org/forums/?act=Q&ID=163218891" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.19) Gecko/2010031422 Firefox/3.0.19" -

    What are the best options for automating the processing of these log files? I'm not using any Amazon services other than S3.
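
    As a starting point: the format is space-delimited with a bracketed timestamp and quoted request/referrer/agent fields. A minimal sketch follows; since the bracketed timestamp splits into two awk fields, for a simple GET line like the one above the object key lands in field 9 and the HTTP status in field 13 (a real parser should handle the quoted fields properly):

        # Sketch: count requests per (key, status) pair in an S3 access log.
        awk '{ print $9, $13 }' access.log | sort | uniq -c | sort -rn | head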

    Read the article

  • Track url from Amazon S3 using Google Analytics

    - by morktron
    I couldn't find any decent pay-per-view video solutions for low-budget clients, so I'm considering using a membership extension with Joomla and hosting the video on Amazon S3. The only issue is that once someone has signed up to view or download the video, anyone with some web development experience will be able to get the URL of the video and freely publish it on the web. How can this be prevented? It looks like it can be done using IAM User Temporary Credentials - AWS SDK for PHP, but the client would prefer not to pay someone to spend hours writing custom PHP code to get this to work. With Amazon S3 I could at least check the log files, I guess, to manually monitor the URL, but is there a way to track the URL with Google Analytics? Or is there a more elegant solution?
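
    On the "prevent sharing" half of the question, the usual pattern is to keep the object private and hand out short-lived pre-signed URLs, which expire even if republished. A sketch with the modern aws CLI (the bucket and key are hypothetical):

        # Sketch: generate a URL that stops working after one hour.
        aws s3 presign s3://mybucket/videos/lesson1.mp4 --expires-in 3600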

    Read the article

  • WordPress: Automatically transfer media files to Amazon S3

    - by Ron Ranieri
    I've been using a VPS to host 7 WordPress websites; most of them require a lot of storage but very little RAM and traffic. So I'm thinking of moving the static files (uploads folder) to Amazon S3, and I'm looking for the most viable solution to this. I want every website to have its own bucket, and newly uploaded media files automatically uploaded to Amazon S3 without using a plugin. I'm OK with a cron job; for example, the files are uploaded first to my server, then transferred to S3 and deleted from my server every 24 hours. Or is there any way for me to change the default upload directory to my S3 bucket without sacrificing any WordPress functionality (resize/title etc.)? What do you think is the most efficient way to do this? Currently I'm looking at this plus a cron job, but I would like to know of a better option if one exists.
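
    A minimal sketch of the cron-job variant, assuming the aws CLI with credentials that can write to a hypothetical per-site bucket; sync is incremental, so running it daily only transfers new files:

        # Sketch crontab entry: push new uploads to S3 every night at 03:00.
        0 3 * * * aws s3 sync /var/www/site1/wp-content/uploads s3://site1-media/uploads --quiet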

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to the Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data, and then save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way so that they will run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
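
    A sketch of the automation (the paths, script name, and bucket are hypothetical): a small wrapper script run from cron that executes the scrapers and copies the day's CSV output to S3:

        #!/bin/bash
        # Sketch: run the scrapers, then push their CSV output to S3.
        # Example crontab entry:  30 6 * * * /home/ec2-user/run_scrapers.sh
        cd /home/ec2-user/scrapers
        python scrape_sites.py                        # hypothetical scraper; writes CSVs to ./output
        aws s3 cp ./output "s3://my-scraper-data/$(date +%F)/" --recursive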

    Read the article

  • Installing an EC2 (EBS based) instance on another AWS account

    - by imaginative
    We're moving from one AWS account to another. I have a couple of EC2 instances I would like to move over. These instances are EBS-based. Googling around, I'm seeing all sorts of convoluted answers for how to appropriately launch this EBS-based instance on a new account. Surely there has to be a simple way of going about this. As of right now, what I have done so far is back up the EBS volume as a snapshot. I then altered the permissions of the snapshot to allow the new AWS account number to access it. I am not sure where to go from here. Do I have to create an image from the EBS snapshot? Upload it to S3? What's the next step(s)?
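
    A sketch of the remaining steps with the modern aws CLI (the IDs, region, and account number are hypothetical). The new account copies the shared snapshot so it owns one, registers an AMI backed by it, and launches from that; for a PV-era image the matching kernel (AKI) would also need to be specified, which is omitted here:

        # Old account: share the snapshot (the console step, as a CLI call).
        aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
          --attribute createVolumePermission --operation-type add --user-ids 123456789012
        # New account: copy the shared snapshot so this account owns it...
        aws ec2 copy-snapshot --source-region us-east-1 \
          --source-snapshot-id snap-0123456789abcdef0 --description "migrated root"
        # ...then register an AMI whose root device is the copied snapshot.
        aws ec2 register-image --name "migrated-instance" --architecture x86_64 \
          --root-device-name /dev/sda1 \
          --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-0abcdef01234567890"}}]'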

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

        [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

        xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
        ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393
        xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
        ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar; it's difficult to find anything referring to this exact situation. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
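
    One detail that may matter, offered as an observation rather than a confirmed fix: xfs_freeze takes a mount point, not a block device, and ext4 does support freezing through it. The manual form of what the script attempts looks like this:

        # Freeze and thaw the filesystem mounted at / (run as root).
        xfs_freeze -f /
        xfs_freeze -u /
        # If the manual form works, try passing the mount point ('/') rather
        # than '/dev/xvda1' to --freeze-filesystem.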

    Read the article

  • Cloudformation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launch config with Ubuntu, you will need to install the cfn-init tooling yourself, which I have done:

        "Properties" : {
            "KeyName" : { "Ref" : "KeyName" },
            "SpotPrice" : "0.05",
            "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
            "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
            "InstanceType" : { "Ref" : "InstanceType" },
            "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                "#!/bin/bash\n",
                "apt-get -y install python-setuptools\n",
                "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n",
                "cfn-init ",
                " --stack ", { "Ref" : "AWS::StackName" },
                " --resource LaunchConfig ",
                " --configset ALL",
                " --access-key ", { "Ref" : "WorkerKeys" },
                " --secret-key ", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"] },
                " --region ", { "Ref" : "AWS::Region" },
                " || error_exit 'Failed to run cfn-init'\n"
            ]]}}

    But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs:

        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):
            File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules
                cc.handle(name, run_args, freq=freq)
            File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle
                [name, self.cfg, self.cloud, cloudinit.log, args])
            File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run
                func(*args)
            File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle
                util.runparts(runparts_path)
            File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts
                raise RuntimeError('runparts: %i failures' % failed)
            RuntimeError: runparts: 1 failures
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user']

    I have absolutely no idea what scripts-user means, and Google is not helping much here either. When I ssh into the server, I can see that the user-data script did run, since cfn-init is available as a command, whereas it is not in the original AMI the instance is made from. However, I have a launchConfig:

        "Comment" : "Install a simple PHP application",
        "AWS::CloudFormation::Init" : {
            "configSets" : { "ALL" : ["WorkerRole"] },
            "WorkerRole" : {
                "files" : {
                    "/etc/cron.d/worker.cron" : {
                        "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n",
                        "mode" : "000644",
                        "owner" : "root",
                        "group" : "root"
                    },
                    "/home/ubuntu/worker_cron.php" : {
                        "content" : { "Fn::Join" : ["", [
                            "#!/usr/bin/env php",
                            "<?php",
                            "define('ROOT', dirname(__FILE__));",
                            "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";",
                            "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"] }, "\";",
                            "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";",
                            "exec('git clone x '.ROOT.'/worker');",
                            "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){",
                            "echo 'git not downloaded right';",
                            "exit();",
                            "}",
                            "echo 'git downloaded';",
                            "include_once ROOT.'/worker/worker_despatcher.php';"
                        ]]},
                        "mode" : "000755",
                        "owner" : "ubuntu",
                        "group" : "ubuntu"
                    }
                }
            }
        }

    Which does not seem to run at all. I have checked for the file's existence in my home directory, and it's not there. I have checked for the cron job entry, and it's not there either. After reading through the documentation, I cannot see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?
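
    Some places to look on the instance itself, as a sketch; the first path comes straight from the error above, and cfn-init keeps its own log:

        # What did cloud-init actually write and try to execute as user data?
        sudo ls -l /var/lib/cloud/instance/scripts/
        sudo grep -i fail /var/log/cloud-init.log
        # Did cfn-init run, and did the WorkerRole configset apply?
        sudo cat /var/log/cfn-init.log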

    Read the article
