Search Results

Search found 678 results on 28 pages for 'aws'.


  • IAM / AWS Access control via Windows Azure Active Directory

    - by Haroon
    I am trying to figure out how to configure IAM in Amazon AWS to use Windows Azure Active Directory. I found http://blogs.aws.amazon.com/security/post/Tx71TWXXJ3UI14/Enabling-Federation-to-AWS-using-Windows-Active-Directory-ADFS-and-SAML-2-0; however, it is about configuring ADFS. WAAD supports SAML 2.0 (http://azure.microsoft.com/en-us/documentation/articles/fundamentals-identity/). Has anyone figured it out yet?
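
    One hedged sketch of the AWS-side call, assuming the WAAD federation is already producing SAML responses and that a SAML provider and role have been created in IAM (all ARNs and the assertion below are placeholders), uses STS's AssumeRoleWithSAML, shown here with Python's boto3:

        import boto3

        # A hypothetical base64-encoded SAML response obtained from WAAD's SAML 2.0 endpoint.
        saml_assertion = "PHNhbWxwOlJlc3BvbnNlPi4uLjwvc2FtbHA6UmVzcG9uc2U+"

        # AssumeRoleWithSAML trades the IdP's assertion for temporary AWS credentials;
        # it is an unsigned call, so no AWS credentials are needed beforehand.
        sts = boto3.client("sts")
        resp = sts.assume_role_with_saml(
            RoleArn="arn:aws:iam::123456789012:role/WAADFederatedRole",   # placeholder
            PrincipalArn="arn:aws:iam::123456789012:saml-provider/WAAD",  # placeholder
            SAMLAssertion=saml_assertion,
        )
        creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken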

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times to other services (mapping from Dreamhost to WebFaction). But when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a domain from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the domain map to AWS? And if there are several options, which one should I effectively be using? P.S. Meta-question: why are things so much more difficult with AWS? When I search Google for "Move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
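
    For this setup the usual answer is an A record at Dreamhost pointing the bare domain (and www) at the Elastic IP; CNAME records only alias one name to another name. As a quick sanity check that the records resolve where you expect, a few lines of Python suffice (example.com and the IP below are placeholders):

        import socket

        # After adding the A records in Dreamhost's DNS panel, both names should
        # resolve straight to the Elastic IP attached to the EC2 instance.
        elastic_ip = "203.0.113.10"  # placeholder: your Elastic IP
        for name in ("example.com", "www.example.com"):
            resolved = socket.gethostbyname(name)
            status = "ok" if resolved == elastic_ip else "not yet propagated"
            print(f"{name} -> {resolved} ({status})")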

    Read the article

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside an AWS VPC via CloudFormation. We're using auto-scaling groups to bring up the VPC EC2 instances (so we don't bring up instances directly; the ASGs manage that). Inside a VPC, EC2 instances only have a private IP; they cannot see the outside world without further work. When these instances spin up, we have some bootstrap tasks that require talking to the various AWS APIs, and we also have some ongoing tasks that require AWS API traffic. How are you tackling this apparent chicken-and-egg problem? We've read about:

        NAT instances - but we don't like this so much because it's another layer in our stack.
        Assigning Elastic IPs to each VPC instance that needs to talk - but (a) they all do, (b) since we're using ASGs, we don't know which instances to assign EIPs to at provision time, and (c) we'd need to set up something to monitor those ASGs and assign EIPs when instances are terminated and replaced.
        Spinning up an instance (actually a load-balanced pair, probably spanning AZs) to act as an AWS API proxy for all API traffic.

    I guess I'm wondering whether there's some kind of back door we can open that allows our VPC EC2 instances access to the AWS API endpoints, but nothing else, with cheap setup complexity and without adding another network-hop layer to our infrastructure for serving requests.
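
    If the NAT-instance route turns out to be the least-bad option, wiring it up is two API calls; a minimal sketch with Python's boto3, where the route table and instance IDs are placeholders:

        import boto3

        ec2 = boto3.client("ec2")

        # A NAT instance forwards traffic that isn't addressed to it, so the
        # source/destination check must be disabled first.
        ec2.modify_instance_attribute(
            InstanceId="i-0123456789abcdef0",      # placeholder: the NAT instance
            SourceDestCheck={"Value": False},
        )

        # Send all non-VPC traffic (including calls to the AWS API endpoints)
        # from the private subnet through the NAT instance.
        ec2.create_route(
            RouteTableId="rtb-0123456789abcdef0",  # placeholder: private subnet's route table
            DestinationCidrBlock="0.0.0.0/0",
            InstanceId="i-0123456789abcdef0",
        )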

    Read the article

  • Pass User Data to AWS CLI

    - by bearrito
    Has anyone successfully passed user data to the AWS CLI? I have tried various incantations of the following, but it does not work. The docs say the string must be base64 encoded: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html. The instance logs never indicate that the script is executed or that Chef is installed.

        aws ec2 run-instances --image-id ami-a73264ce --count 1 --instance-type t1.micro \
            --key-name scrubbed \
            --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
            --user-data $(base64 chef_user_data.sh --wrap=0)

    chef_user_data.sh:

        #!/bin/bash
        curl -L https://www.opscode.com/chef/install.sh | sudo bash
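
    As a cross-check that takes the base64 step out of the picture entirely, the Python SDK (boto3) accepts the script as plain text and base64-encodes UserData itself; a sketch reusing the question's AMI, instance type, and key name (the profile ARN stays scrubbed, as in the original):

        import boto3

        ec2 = boto3.client("ec2")

        with open("chef_user_data.sh") as f:
            user_data = f.read()  # plain text; boto3 base64-encodes UserData automatically

        ec2.run_instances(
            ImageId="ami-a73264ce",
            MinCount=1,
            MaxCount=1,
            InstanceType="t1.micro",
            KeyName="scrubbed",
            IamInstanceProfile={"Arn": "arn:aws:iam::scrubbed:instance-profile/scrubbed"},
            UserData=user_data,
        )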

    Read the article

  • Ways to serve AWS from another domain

    - by mplungjan
    I have installed Ghost on AWS (it is running on Node). I very much dislike the URL they gave me: http://ec2-nn-nnn-nnn-nnn.us-west-2.compute.amazonaws.com/ghost/. I own a domain and Linux hosting (but not a VPS). What would be a practical way to serve my blog via URLs on my own (sub)domain? I can use PHP and access .htaccess on my domain, and can possibly do things on the AWS instance too (let me know what to look for).

    Read the article

  • mvn deploy to AWS (ssh via distributionManagement)

    - by Dexter
    I am working on deploying a WAR file to AWS using Maven. I am planning to use 'mvn deploy' for this, which would copy the WAR file to AWS over SSH. I am following http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html. This is my POM file:

        <project>
          ...
          <distributionManagement>
            <repository>
              <id>ssh-aws</id>
              <url>scpexe://<ec2 instance>.compute-1.amazonaws.com</url>
            </repository>
          </distributionManagement>
          <build>
            <extensions>
              <!-- Enabling deployment over external SSH -->
              <extension>
                <groupId>org.apache.maven.wagon</groupId>
                <artifactId>wagon-ssh-external</artifactId>
                <version>1.0-beta-6</version>
              </extension>
            </extensions>
          </build>
          ...
        </project>

    This is my settings.xml:

        <server>
          <id>ssh-aws</id>
          <username>aws-user</username>
        </server>

    The only issue is that I am unable to figure out the url in the distributionManagement node of pom.xml. I am able to ssh into the AWS server with the following:

        ssh -i ~/pemfile/pemfile-key.pem aws-user@<ec2 instance>.compute-1.amazonaws.com

    But when I run mvn clean deploy, I receive this:

        Exit code: 1 - Permission denied (publickey). -> [Help 1]

    Thanks in advance.

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the raw APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for I/O operations such as making HTTP calls or using cryptography for signing or encrypting data, two things that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model in .NET applications (HTML and JavaScript Metro applications use a similar model based on promises). In the case of S3, the HTTP Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered in transit. For that, it carries a signature, a hash of the message content and some of the headers computed with a symmetric key (that's just one of the available mechanisms; Windows Azure, for example, uses the same mechanism in many of its APIs). There are three challenges that any developer working in Metro for the first time will face when consuming S3: the new WinRT APIs, their asynchronous nature, and the complexity of generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations in these APIs use the concept of buffers (IBuffer) to represent a chunk of binary data. As you will see in the example below, these buffers are mainly generated with static methods on the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            var stringToSign = string.Format("{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod,
                timestamp,
                resource);

            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);
            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign)));

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, the method generates the signature required for creating a new bucket: an HMAC-SHA1 hash computed with the secret (symmetric) key that AWS provides in the management console.

    2. Sending an HTTP request to the S3 service

    WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional WebHttpRequest class and also solves some of the limitations found in the latter. There are a few things that simply don't work with a raw WebHttpRequest, such as setting the Host header, which is absolutely required for consuming S3. HttpClient is also friendlier for unit testing, as it receives an HttpMessageHandler in its constructor, which can be faked to emulate a real HTTP call. This is how the code for consuming the service with HttpClient looks:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values of some well-known headers, and Authorization is one of them; it won't allow you to set a value containing ":", which is exactly what S3 expects. That's not a problem at all, though, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to write the asynchronous calls in a synchronous style. In case you want to unit test this code and fake the call to the real S3 service, you have to modify it to inject a custom HttpMessageHandler into the HttpClient. The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler to inject any response while you are unit testing the code.

    Read the article

  • AWS EC2 Oracle RDS connection to Oracle Database Instance

    - by llaszews
    Provisioning my Oracle database instance on AWS RDS was easy. Just a few clicks! However, getting a connection to my Oracle cloud database was not as easy. A couple of things that are not obvious (using Oracle SQL Developer): 1. You need to set up a database security group. 2. You need to use the endpoint for the host name. This video is the best one on the internet for explaining both points: http://www.youtube.com/watch?v=ocFURuX0eEw
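
    Once the security group admits your client's IP and you have the endpoint from the RDS console, the connection itself is routine; a sketch with Python's cx_Oracle, where the endpoint, SID, and credentials are all placeholders:

        import cx_Oracle

        # The host is the RDS endpoint shown in the AWS console, not an instance name.
        dsn = cx_Oracle.makedsn(
            "mydb.abcdefghij.us-east-1.rds.amazonaws.com",  # placeholder endpoint
            1521,
            "ORCL",                                         # placeholder SID
        )
        conn = cx_Oracle.connect("admin", "password", dsn)  # placeholder credentials
        print(conn.version)
        conn.close()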

    Read the article

  • Connecting to an Amazon AWS database [closed]

    - by Adel
    So I'm a bit overwhelmed/bewildered by the whole concept of networking/remote desktop, etc. The context is that in my company I need to access a remote database. The standard way I use is to first connect using a VPN client (called Shrew Soft Access Manager); once that says "network device configured, tunnel enabled", I'm good to connect using Windows "Remote Desktop Connection". But now our company has set up an Amazon AWS database, and I'm told I need to connect and that I only need to use RDP. So I tried the standard Windows one, but it doesn't work. On Wikipedia I looked up remote desktop software and downloaded one called VNC Viewer, but it doesn't work either. Any advice/tips/comments appreciated. EDIT: YAY! I finally got a little more connected. I had to use my username as a fully qualified name: Computer: XYZ.XYZ.XYZ.XYZ, Username: XYZ.XYZ.XYZ.XYZ\aazzam

    Read the article

  • Complete deployment to AWS

    - by Ionut
    I'm trying to deploy a Java application to the AWS free tier. I need the following:

        An RDS instance, using the MySQL client.
        The S3 service. This is required for the Lucene index and image uploading.
        The SES service. I need to be able to send emails to newly registered users.
        An EC2 instance.
        An Elastic Beanstalk instance.

    Namecheap is my domain provider. I managed to create an EC2 instance, upload the WAR file, and link it to the Namecheap domain. However, I find it difficult to link the other services to the current application. I find the documentation a little messy and I can't find the right way to do this. Can you provide a simple walkthrough of deployment for this use case? Thanks!
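
    Of the pieces listed, SES is the easiest to verify in isolation once it is wired up; a minimal sketch with Python's boto3 (the addresses are placeholders, and both must be verified while the SES account is still in sandbox mode):

        import boto3

        ses = boto3.client("ses", region_name="us-east-1")

        # Send a welcome mail to a newly registered user.
        ses.send_email(
            Source="no-reply@example.com",                      # placeholder, must be SES-verified
            Destination={"ToAddresses": ["user@example.com"]},  # placeholder
            Message={
                "Subject": {"Data": "Welcome!"},
                "Body": {"Text": {"Data": "Thanks for registering."}},
            },
        )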

    Read the article

  • rails2 and aws-simple (simpledb): data cannot be deleted from amazon simpledb?

    - by z3cko
    I am developing a Ruby on Rails (2.3.8) application with Amazon SimpleDB as data storage. I am using the aws-sdb gem, version 0.3.1. There are a few bugs, but the problems are outlined in the comments of this tutorial from Amazon: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1242. I am wondering whether it is a bug in the gem or maybe a proxy issue, but I cannot delete any data from SimpleDB. Has anyone else experienced this, or does anyone have a clue?

        >> t = Team.find(:first)
        => #<Team:0x329f718 @prefix_options={}, @attributes={"updated_at"=>Fri May 28 16:33:17 UTC 2010, "id"=>0}>
        >> t.destroy
        => #<Net::HTTPOK 200 OK readbody=true>
        >> t = Team.find(:first)
        => #<Team:0x321ad38 @prefix_options={}, @attributes={"updated_at"=>Fri May 28 16:33:17 UTC 2010, "id"=>0}>

    The Team model is a normal ActiveResource model, per said tutorial:

        class Team < ActiveResource::Base
          self.site = "http://localhost:8888" # Proxy host + port
          self.prefix = "/fb2010_dev/"        # SDB domain
        end

    Read the article

  • AWS CloudFormation: Requires capabilities : [CAPABILITY_IAM] (Child Stack)

    - by Drew Khoury
    I'm running a CloudFormation template in the AWS Console.

    Running the stack directly: I started with a template that used IAM resources, and the console prompts me to acknowledge IAM capabilities when running the stack directly.

    Running the stack as a child: I then tried to call the same stack from a parent stack and did not receive the same prompt. The child stack then failed with the message: Requires capabilities : [CAPABILITY_IAM]

    Research: The docs indicate that I can run CF templates in a number of ways (console, CLI, API). There are plenty of docs around the CLI/API and supplying the capability parameter, but there appears to be no information about how to make sure it's applied when running through the console: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html (IAM Resources in AWS CloudFormation Templates)

    What I've done / what I think: I've raised an issue via the forum for now, but no response (yet): https://forums.aws.amazon.com/thread.jspa?threadID=139160. I suspect this is a bug in the console, as there doesn't appear to be any documentation of how to change the behaviour via the console, and as far as I'm aware this should just work. Has anyone come across the same problem, or can anyone report that it's working fine for them?
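
    Outside the console, the capability is passed explicitly, and acknowledging it on the parent also covers the IAM resources created by its nested stacks; a sketch with Python's boto3 (stack name and template URL are placeholders):

        import boto3

        cfn = boto3.client("cloudformation")

        # CAPABILITY_IAM acknowledged on the parent stack covers the IAM
        # resources its child stacks create as well.
        cfn.create_stack(
            StackName="parent-stack",                                   # placeholder
            TemplateURL="https://s3.amazonaws.com/bucket/parent.json",  # placeholder
            Capabilities=["CAPABILITY_IAM"],
        )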

    Read the article

  • Not enough disk space '/' in AWS instance

    - by Sumant
    I am running an Ubuntu 11.04 instance for my web server on the AWS cloud, and now there is no disk space left in the / partition of my server. df -ah says this:

        Filesystem    Size  Used  Avail  Use%  Mounted on
        /dev/xvda1    7.9G  7.8G    97M   99%  /
        proc             0     0      0     -  /proc
        none             0     0      0     -  /sys
        fusectl          0     0      0     -  /sys/fs/fuse/connections
        none             0     0      0     -  /sys/kernel/debug
        none             0     0      0     -  /sys/kernel/security
        none          3.7G  112K   3.7G    1%  /dev
        none             0     0      0     -  /dev/pts
        none          3.7G     0   3.7G    0%  /dev/shm
        none          3.7G   80K   3.7G    1%  /var/run
        none          3.7G     0   3.7G    0%  /var/lock
        /dev/xvdb     414G   16G   377G    4%  /mnt

    I have tried these things to get some extra space on the / partition:

        Cleaned up all log files for Apache.
        Removed all unnecessary files from the server.
        Cleaned up the home directory.

    But I am still not getting enough space. The instance type is m1.large with an 8GB EBS root volume. As the output shows, I have plenty of disk space on /dev/xvdb. Is there a way I can allocate some of that space to /, or any other way to solve this? Please suggest possible solutions. Is it possible to use the same /dev/xvdb partition with another instance?

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. It seems like an efficient and flexible solution for data storage and for resources for occasional data processing, too. However, it might be very inefficient to mix two data centres and transfer data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems expensive, and the delay in moving the data back to the hoster leads to lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a bigtable for exhaustive analysis there. And after working with the data: how to bring it back to the hoster and use it with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.

    Read the article

  • AWS RDS Timeout

    - by warder57
    I know next to nothing about networking/servers, so I'm assuming I'm missing something obvious. All of the resources I can find on this either don't work or are outdated. I created a brand new AWS account on the free plan and created a Postgres RDS DB instance, making sure it is set to publicly accessible. The RDS instance has the default VPC/security group settings. To connect to this DB from my local machine, I used pgAdmin III and followed the instructions on the AWS documentation page, seen here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html. I've double-checked all of the information required to connect: Host: whatever.whatever.us-west-2.rds.amazonaws.com, Port: 5432, Username: USERNAME, Password: PASSWORD. When I try to connect to the database, my connection fails due to a timeout (during step 4 in the above guide). Can anyone point me to whatever I am missing? Thanks in advance
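
    The usual culprit for this exact symptom is the instance's security group not allowing inbound TCP 5432 from your address, so the connection attempt hangs until it times out. Opening it is one API call; a sketch with Python's boto3 (the group ID and client IP are placeholders):

        import boto3

        ec2 = boto3.client("ec2")

        # Allow PostgreSQL traffic from a single client address to the
        # security group attached to the RDS instance.
        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",  # placeholder: the RDS instance's security group
            IpProtocol="tcp",
            FromPort=5432,
            ToPort=5432,
            CidrIp="198.51.100.7/32",        # placeholder: your public IP
        )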

    Read the article

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy some file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC; however, it does not work well in an environment where servers auto-deploy and shut down based on load, because it requires server-managed keys to be generated. Including the agent in the AMI will not allow monitoring as soon as an instance comes up, because of that. There are many options out there, and several are listed in other posts on this site, but none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover the portions of PCI DSS that require this software? Has anyone else achieved this on AWS?

    Read the article

  • Finding the owner of an AWS access key + secret key pair

    - by nightw
    I would like a simple solution (possibly in 1-3 plain API calls to AWS) to find the owner of an AWS access key. I have the password of the "root" AWS account, and of course I can manage the users and credentials through IAM, but we have a lot of users and I don't want to go through them one by one looking for the owner of the key. So basically I have a working access key + secret key pair (in fact a couple of them), but I do not know which user's key it is or what rights are attached to it. What is the easiest way to do this? Thank you in advance.
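
    With credentials that can read IAM, mapping a key ID to its owner is a short scripted scan rather than a manual click-through; a sketch with Python's boto3, where the key ID is a placeholder:

        import boto3

        iam = boto3.client("iam")
        target_key_id = "AKIAIOSFODNN7EXAMPLE"  # placeholder: the access key ID to locate

        # Walk every user and list their access keys until the target turns up.
        for page in iam.get_paginator("list_users").paginate():
            for user in page["Users"]:
                listing = iam.list_access_keys(UserName=user["UserName"])
                for key in listing["AccessKeyMetadata"]:
                    if key["AccessKeyId"] == target_key_id:
                        print("Owner:", user["UserName"], "Status:", key["Status"])

    If you can configure the key pair itself as credentials, sts.get_caller_identity() returns the owning user's ARN in a single call.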

    Read the article

  • Recommended method for routing www to zone apex (naked domain) using AWS Route 53

    - by Dan Christian
    In my AWS Route 53 control panel I simply have two A records currently set up, for the 'www' and the 'non-www' names. Both point to the Elastic IP address associated with my EC2 instance. This works well and my website is available at both variations, but I really want all 'www' traffic to route to the 'non-www' name. What is the recommended method, using AWS Route 53, for routing all traffic that comes to www.example.com to example.com?
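
    One Route 53-native piece of this is an alias record, so that www resolves exactly like the apex; note that an actual HTTP 301 from www to the bare domain still has to come from whatever serves the site. A sketch with Python's boto3, where the hosted zone ID and domain are placeholders:

        import boto3

        r53 = boto3.client("route53")
        zone_id = "Z0123456789ABCDEFGHIJ"  # placeholder: the example.com hosted zone

        # Alias www.example.com to the apex A record in the same hosted zone.
        r53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": zone_id,  # same zone when aliasing a record in it
                            "DNSName": "example.com",
                            "EvaluateTargetHealth": False,
                        },
                    },
                }]
            },
        )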

    Read the article

  • C# Code Help With Amazon (AWS) - The request must contain the parameter Signature.

    - by leen3o
    I'm struggling with the final part of getting my first bit of code working with AWS. I have got this far: I attached the web reference in VS and have this:

        amazon.AWSECommerceService service = new amazon.AWSECommerceService();

        // prepare an ItemSearch request
        amazon.ItemSearchRequest request = new amazon.ItemSearchRequest();
        request.SearchIndex = "DVD";
        request.Title = "scream";
        request.ResponseGroup = new string[] { "Small" };

        amazon.ItemSearch itemSearch = new amazon.ItemSearch();
        itemSearch.AssociateTag = "";
        itemSearch.AWSAccessKeyId = ConfigurationManager.AppSettings["AwsAccessKeyId"];
        itemSearch.Request = new ItemSearchRequest[] { request };

        ItemSearchResponse response = service.ItemSearch(itemSearch);

        // write out the results
        foreach (var item in response.Items[0].Item)
        {
            Response.Write(item.ItemAttributes.Title + "<br>");
        }

    I get the error: The request must contain the parameter Signature. I know you have to 'sign' requests now, but can't figure out where or how I would do this. Any help greatly appreciated!
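
    For the REST flavour of the Product Advertising API, the missing Signature parameter is an HMAC-SHA256 over a canonical request string. A sketch of that signing step in Python follows (access key, associate tag, and secret are placeholders); the SOAP web reference used in the question needs its own signing support, so treat this as an illustration of the scheme rather than a drop-in fix:

        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse

        def sign_request(params, secret_key,
                         host="webservices.amazon.com", path="/onca/xml"):
            """Build a signed query string for a Product Advertising API request."""
            params["Timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            # Parameters must be sorted by byte order and RFC 3986 percent-encoded.
            query = "&".join(
                f"{k}={urllib.parse.quote(str(v), safe='-_.~')}"
                for k, v in sorted(params.items())
            )
            string_to_sign = f"GET\n{host}\n{path}\n{query}"
            digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                              hashlib.sha256).digest()
            signature = urllib.parse.quote(base64.b64encode(digest), safe="")
            return f"http://{host}{path}?{query}&Signature={signature}"

        url = sign_request({
            "Service": "AWSECommerceService",
            "Operation": "ItemSearch",
            "AWSAccessKeyId": "AKIAIOSFODNN7EXAMPLE",  # placeholder
            "AssociateTag": "mytag-20",                # placeholder
            "SearchIndex": "DVD",
            "Title": "scream",
            "ResponseGroup": "Small",
        }, "secret-key")                               # placeholder secret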

    Read the article

  • ez components and AWS PHP SDK make ez components freak out

    - by David
    Hi, I am trying to work with ez Components and the AWS PHP SDK at the same time. I have a file called resize.php which just handles resizing images using the ez Components ImageTransition tools. I queue the image for resizing in Amazon AWS SQS. If I load the AWS PHP SDK and ez Components in the same file, PHP always complains about not finding the ez Components classes. The code looks something like this:

    amazonSQS.php:

        require 'modules/resize.php';
        require 'modules/aws/sdk.class.php';

        $sqs = new AmazonSQS();
        $response = $sqs->send_message($queue_url, $message);

    resize.php:

        function resize_image($filename)
        {
            $settings = new ezcImageConverterSettings(
                array(
                    //new ezcImageHandlerSettings( 'GD', 'ezcImageGdHandler' ),
                    new ezcImageHandlerSettings( 'ImageMagick', 'ezcImageImagemagickHandler' ),
                )
            );

    Error message:

        Fatal error: Class 'ezcImageConverterSettings' not found in /home/www.com/public_html/modules/resize.php on line 10

    If I call resize.php from another PHP file which does not include the AWS SDK, it works fine. I load ez Components like this:

        require 'ezc/Base/ezc_bootstrap.php';

    It is installed as a PEAR package. Any ideas, someone?

    Read the article

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about. Like every company webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just have a complex domain which is not very user-friendly or professional-looking. Also, I want to host my dynamic webpage on the Amazon cloud, but I'm not sure I'm doing that right. I've read many guides, and all I know is that I have to purchase an Elastic Compute Cloud instance and a Simple Storage Service... and every guide works with the basic Linux package. Why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain. And one last question: if I sign up for an AWS account, it asks me for an email account, but I find it a little unprofessional to put my free-webmailer address there... How is it normally done? Thanks in advance! Best regards, john.

    Read the article

  • AWS free tier "sign up date" vs "credit card details submission date"

    - by Mayur Rokade
    I am worried about my account expiry date. I created an account on AWS in July 2013 and submitted my credit card details on 31st Oct 2013. In the Billing Management Console / Bills section, when I click on Date I can see months ranging from July 2013 to Nov 2013. From the AWS FAQs I gathered: "When does the AWS free usage tier expire? The AWS free usage tier will expire 12 months from the date you sign up." So WHEN will my account expire: July 2014 (sign-up date) or Oct 2014 (credit card details submission date)?

    Read the article

  • best practices for setting up a new windows 2008 R2 server with ec2 AWS

    - by Alex
    Can someone comment on what they would add to the following list of SOPs in terms of best practices? This is being set up on AWS, and then, after further testing, back in our datacenter.

    Standard Operating Procedure (SOP): Installation, Part 2 - Installation of Software Components in Windows 2008 R2 (updated).

        Step 1: Log on to the host through Remote Desktop.
        Step 2: Open Server Manager - Server Roles - install Web Server IIS 7.5 with IIS 6 feature compatibility and management compatibility mode.
        Step 3: Open IE/Mozilla to download the software listed below, and save all installation files to a folder called "AWS Server Install Files" for future reference:
            .NET Framework 2.0 (download from the internet)
            Crystal Reports for .NET Framework 2.0 (x64) (download from the internet)
            SQL Server 2005 (AWS image)
        Step 4: Once all software is saved on the local drive, install it one by one.
        Step 5: Navigate to the Desktop folder to install the software listed below:
            Microsoft ASP.NET 2.0 AJAX Extensions 1.0 (placed in Desktop\Softwares)
            WebEx Recorder (placed in Desktop\Softwares)
            WinRAR (placed in Desktop\Softwares)
        Step 6: Make sure all the software is working fine.
        Step 7: Inspect the server once entirely.
        Step 8: Log off & stop the instance.

    Read the article
