Search Results

Search found 2839 results on 114 pages for 'amazon cloudwatch'.

Page 19/114

  • Amazon EC2 Socket connection not being accepted

    - by Joseph
    I am trying to run a Java application on my EC2 instance. The application accepts socket connections on port 54321. If I try to connect to it, it times out. My Security Group is set as:

        TCP Port (Service)   Source      Action
        21                   0.0.0.0/0   Delete
        22 (SSH)             0.0.0.0/0   Delete
        80 (HTTP)            0.0.0.0/0   Delete
        20393                0.0.0.0/0   Delete
        54321                0.0.0.0/0   Delete

    Is there anything else I need to do?

        # iptables -nvL
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        # iptables -nvL -t nat
        Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
        Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source               destination
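
    Since the security group and the (empty) iptables rules both look open, one common culprit for exactly this timeout is the application binding to 127.0.0.1 rather than all interfaces. A minimal sketch of a correctly bound listener - in Python for brevity, since only the port is taken from the question and everything else is illustrative:

        import socket

        # Bind to 0.0.0.0 so connections arriving on the instance's public
        # interface are accepted; binding to 127.0.0.1 would only accept
        # connections originating on the instance itself.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 54321))
        server.listen(5)

        while True:
            conn, addr = server.accept()
            print("connection from", addr)
            conn.close()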

  • Copy an Amazon EC2 Instance to use locally

    - by Excolo
    Ok, so we have a spare server I have installed Debian Wheezy on, and set up Xen on for virtual machines. It has better performance than all our EC2 instances combined and will cost less to run (for a few various reasons). I would like to get the EC2 instances downloaded to my server and converted to run under Xen, but I'm having difficulty finding anything specific. I did not set up the EC2 instances myself and am not very familiar with them. Everything I have found (which isn't much) just says "Do XYZ", and I have no idea how to do those steps, so being as specific as possible would be helpful. Also, confusingly, I see people writing in forums saying you can only export Linux images (which mine are - Ubuntu images), but then I see Amazon's export tool saying you can only export Windows Server. Am I missing something here? Is that not the right place to be looking? Thanks
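
    For reference, EC2's VM Export can also be driven from the API; a hedged boto3 sketch, where the instance ID and bucket are placeholders and the instance must meet Amazon's export prerequisites (which is likely the restriction the forum posts are describing):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Ask EC2 to export the instance's volume to S3 as a VHD that a Xen
        # host can consume. The bucket must grant the EC2 export service
        # write access.
        resp = ec2.create_instance_export_task(
            InstanceId="i-0123456789abcdef0",    # placeholder
            TargetEnvironment="citrix",          # Xen-compatible target
            ExportToS3Task={
                "DiskImageFormat": "VHD",
                "S3Bucket": "my-export-bucket",  # placeholder
            },
        )
        print(resp["ExportTask"]["ExportTaskId"])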

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test running from localhost now to rule out network... I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80
        Document Path:          /home/test/
        Document Length:        5 bytes
        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0      73
        Processing:    81 2489  665.7   2500    4057
        Waiting:       80 2443  654.0   2445    4057
        Total:         85 2496  653.5   2500    4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)
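
    As a sanity check, the headline numbers in the report are internally consistent: 1000 requests over 25.3 seconds is 39.53 requests/second, and at a concurrency of 100 that implies a mean of roughly 2530 ms per request. A quick sketch of the arithmetic:

        requests = 1000
        seconds = 25.300
        concurrency = 100

        print(requests / seconds)                          # ~39.53 req/s, as reported
        print(concurrency / (requests / seconds) * 1000)   # ~2530 ms mean per request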

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file, and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started:

        http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
        http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
        https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html

    You definitely need to sign up to the Amazon Associates program and also register/create an Access Key ID, which will get you a Secret Key as well. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE:

        public static void main(String[] args) {
            try {
                String searchIndex = "Books";
                String keywords = "Romeo and Juliet";
                RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);
                String dataAsString = result.getDataAsString();
                int start = dataAsString.indexOf("<Author>") + 8;
                int end = dataAsString.indexOf("</Author>");
                System.out.println(dataAsString.substring(start, end));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

    Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following:

        public class AmazonAssociatesService {

            private static void sleep(long millis) {
                try {
                    Thread.sleep(millis);
                } catch (Throwable th) {
                }
            }

            public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {
                SignedRequestsHelper helper;
                RestConnection conn = null;
                Map<String, String> queryMap = new HashMap<>();
                queryMap.put("Service", "AWSECommerceService");
                queryMap.put("AssociateTag", "myAssociateTag");
                queryMap.put("AWSAccessKeyId", "myAccessKeyId");
                queryMap.put("Operation", "ItemSearch");
                queryMap.put("SearchIndex", searchIndex);
                queryMap.put("Keywords", keywords);
                try {
                    helper = SignedRequestsHelper.getInstance(
                            "ecs.amazonaws.com",
                            "myAccessKeyId",
                            "mySecretKey");
                    String sign = helper.sign(queryMap);
                    conn = new RestConnection(sign);
                } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {
                    // signing failed; conn stays null and the call below will fail
                }
                sleep(1000);
                return conn.get(null);
            }
        }

    Finally, I copied this class into my application, which you can see is referred to above:

        http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java

    Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all - now everything works as you'd expect.

  • Importing AMIs from Hyper-v VHDX

    - by jwdaigle
    I have a couple of VHDX files that we use to template locally hosted VMs. I would like to try (some) of these on Amazon, so I need to build an AMI to upload to AWS. I have found http://aws.amazon.com/ec2/vmimport/ , which is very helpful to get started. It appears that AWS does not yet support VHDX, so I found some info that told me to export it out of Hyper-V as a VHD file and then convert/upload it, which I am in the process of trying out. But the real question: while looking for info, I came across http://stackoverflow.com/questions/14346114/unable-to-rdp-to-ec2-instance . The AWS documentation seems to imply that all I need to do is run the importer and all will be well. True? I don't want to waste all the upload bandwidth and then find out it won't work. Is there something I need to install into the Hyper-V VM before converting/uploading it using the AWS command line tools? EC2-Config? Any help greatly appreciated.
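
    For the convert/upload step, the VM Import API can register an S3-hosted VHD directly as an AMI; a hedged boto3 sketch (the bucket and key are placeholders, and the VHD must already have been uploaded to S3):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Register the exported VHD as an AMI via VM Import. VHDX itself is
        # not accepted, hence the Hyper-V export to VHD first.
        resp = ec2.import_image(
            Description="Hyper-V template import",    # placeholder
            DiskContainers=[{
                "Description": "exported VHD",
                "Format": "VHD",
                "UserBucket": {
                    "S3Bucket": "my-import-bucket",    # placeholder
                    "S3Key": "templates/server1.vhd",  # placeholder
                },
            }],
        )
        print(resp["ImportTaskId"])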

  • AWS CloudFormation: Requires capabilities : [CAPABILITY_IAM] (child stack)

    - by Drew Khoury
    I'm running a CloudFormation template in the AWS Console.

    Running the stack directly: I started with a template that used IAM resources, and the console prompted me to acknowledge IAM capabilities when running the stack directly.

    Running the stack as a child: I then tried to call the same stack from a parent stack and did not receive the same prompt. The stack then failed with the message: Requires capabilities : [CAPABILITY_IAM]

    Research: The docs indicate that I can run CF scripts in a number of ways. There's plenty of documentation around the CLI/API and supplying the capability parameter, but there appears to be no information about how to make sure it's applied when running through the console. See "IAM Resources in AWS CloudFormation Templates" (covering the CF console, CLI, and API): http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html

    What I've done / what I think: I've raised an issue via the forum for now, but no response (yet): https://forums.aws.amazon.com/thread.jspa?threadID=139160 I suspect this is a bug in the console, as there doesn't appear to be any documentation of how to change this behaviour via the console, and as far as I'm aware this should just work. Has anyone come across the same problem, or can anyone report that it's working fine for them?
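
    For comparison, when the same stack is created through the API, the capability has to be passed explicitly; a hedged boto3 sketch (the stack name and template URL are placeholders):

        import boto3

        cfn = boto3.client("cloudformation", region_name="us-east-1")

        # CAPABILITY_IAM acknowledges that the template (or its nested
        # stacks) may create IAM resources; omitting it produces the same
        # "Requires capabilities : [CAPABILITY_IAM]" failure.
        cfn.create_stack(
            StackName="parent-stack",  # placeholder
            TemplateURL="https://s3.amazonaws.com/my-bucket/parent.template",  # placeholder
            Capabilities=["CAPABILITY_IAM"],
        )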

  • Why do AWS spot-instance prices spike above the "on demand" pricing?

    - by Laykes
    Amazon pricing on spot instances shows inconsistencies. This is something best explained through screenshots of a historical chart of instance pricing. If you look at a lot of the instance prices for spot instances, you will notice regular patterns of spikes. As you can see, the price for this compute medium instance regularly spikes above the on-demand price. A c1.medium instance (on demand) would only cost $0.186 per hour, but for a period of a few weeks, in zone B, the price would regularly spike to $1.20. This is some 6 times the actual on-demand price. It's also not isolated: if you look at zone B again for small instances, there is a similar frequent spike, which goes to 4x the on-demand pricing. Does anyone know why this happens? Here are a few suggestions:

    1. Someone entered $1.20 instead of $0.12 (I would discount this since it happened 20 times over the space of 3 weeks).
    2. Amazon regularly artificially inflates its prices by bidding on its own instances to get the most bang for the buck (I would discount this since it would be ridiculous and bad business).
    3. Some company launched 1000 servers at once and wants to make sure that they all launch (I would discount this since they would presumably launch them at a price below the minimum on-demand price; why would you pay above on demand for a single server?).
    4. It's a bug in their reporting?
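
    The raw data behind those charts can be pulled from the API for inspection; a hedged boto3 sketch (the zone name mirrors the question's "zone B", and the filters are illustrative):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Retrieve the spot price history the console chart is drawn from,
        # to check whether the spikes appear in the raw data too.
        history = ec2.describe_spot_price_history(
            InstanceTypes=["c1.medium"],
            ProductDescriptions=["Linux/UNIX"],
            AvailabilityZone="us-east-1b",   # "zone B" in the question
        )
        for point in history["SpotPriceHistory"]:
            print(point["Timestamp"], point["SpotPrice"])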

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume. (From what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)

    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app?

    I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm just looking for suggestions about how to get the two DBs to talk - will this be a big task?

    Regarding updates for security: is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?

    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I just figure it is a better policy than including unnecessary applications in an AMI that I intend to keep using.

    My plan for backup is to create periodic snapshots of the webapp and DB instances. (If I use an additional EBS volume instead of a separate instance, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)

    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
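
    The periodic-snapshot part of the backup plan maps onto a single API call per volume; a hedged boto3 sketch suitable for a daily cron job (the volume ID is a placeholder):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Snapshot the EBS volume that holds the database files. Snapshots
        # are incremental and stored durably in S3 behind the scenes.
        snap = ec2.create_snapshot(
            VolumeId="vol-0123456789abcdef0",   # placeholder
            Description="nightly backup of MySQL data volume",
        )
        print(snap["SnapshotId"])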

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command is not showing any hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below). Mem is about 1.5 GB free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1557 mysql   20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache  20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache  20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache  20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache  20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache  20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root    20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root    20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root    20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root    20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root    20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0

  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command is not showing any hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st. Mem is about 1.5 GB free. Any idea what it could be, or where else we can check?

    iostat on the properly behaving machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   8.97    0.03    4.46    0.19    0.14   86.23

        Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1    1.60         0.69        55.38     587620   47254184
        xvdfp2    2.64         1.10        61.04     934786   52091056
        xvdfp4    0.86         0.19        41.72     163866   35601920
        xvdfp1    4.37        36.59        73.89   31220810   63051504
        xvdfp3    8.03         7.08        94.63    6045402   80749184

    iostat on the problematic machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   9.29    0.04    5.55    0.26    0.11   84.74

        Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1    2.13         3.34        68.85     246244    5077888
        xvdfp1    7.60        74.31       104.88    5480362    7734840
        xvdfp3   13.22        73.67       125.00    5433386    9218600
        xvdfp4    1.11         0.76        65.08      55762    4799248
        xvdfp2    4.16         3.31        99.17     243818    7313264

    Many thanks for the help.

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way that I wanted and took a snapshot of it. I then created my own AMI to create new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used in my original server. I have deleted my original server and don't know what it was using. What is the best process to follow in order to not have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Then should I just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a custom kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
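
    On the "how do you know which kernel it selected" part: the kernel is recorded on both the running instance and the registered AMI and can be read back from the API rather than documented by hand; a hedged boto3 sketch (both IDs are placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # The KernelId attribute records which AKI an instance was launched
        # with; the same field exists on a registered AMI.
        resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder
        instance = resp["Reservations"][0]["Instances"][0]
        print("instance kernel:", instance.get("KernelId"))

        image = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])    # placeholder
        print("AMI kernel:", image["Images"][0].get("KernelId"))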

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with amazon aws, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below. I have a system running on ec2 and amazon rds getting data in and saving it to the db. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run. It needs to run on a high memory instance.. From what i've read so far, I guess the best way to do it is to have a high memory spot instance run every day, set it up to execute the script on startup and and shut down when done. Is that the right way to do it? If so, how to do it? how to tell the spot instance to run every day? through a cron job on the other server or is there a better way? How to set it up to run the script on startup? through cloudinit? Any help would be appreciated. One last thing, the job is not very time sensitive as long as it runs every day.. thanks
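
    One way to wire this together, as a hedged sketch rather than a recipe: a daily cron job on the existing server submits a one-time spot request whose user data (run by cloud-init at boot) executes the report and then halts the machine. The AMI ID, bid, instance type, and script path are all placeholders:

        import base64
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # User data executed at boot: run the report, then halt. Halting a
        # one-time spot instance releases it, so nothing keeps billing.
        user_data = """#!/bin/bash
        /usr/local/bin/daily_report.sh   # placeholder script
        shutdown -h now
        """

        ec2.request_spot_instances(
            SpotPrice="0.20",                        # placeholder bid
            Type="one-time",
            LaunchSpecification={
                "ImageId": "ami-0123456789abcdef0",  # placeholder
                "InstanceType": "m2.xlarge",         # a high-memory type
                "UserData": base64.b64encode(user_data.encode()).decode(),
            },
        )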

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use JungleDisk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that JungleDisk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, then that URL returns a 200. I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions, and this seems really slow anyway because such tools generally traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it's a problem someone else has already solved.
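
    The manual tool mentioned at the end is only a few lines against the S3 API; a hedged boto3 sketch (the bucket name is a placeholder):

        import boto3

        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")

        # Walk every key in the bucket and grant anonymous read, which is
        # what serving the files over HTTP via CloudFront requires.
        for page in paginator.paginate(Bucket="my-cdn-bucket"):  # placeholder
            for obj in page.get("Contents", []):
                s3.put_object_acl(Bucket="my-cdn-bucket", Key=obj["Key"], ACL="public-read")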

  • Amazon Product API ResponseGroups and Default results

    - by aboxy
    A. In our application, most of the data we work with is stored as free text, i.e. there is no categorization done as of now. We are using openNLP libraries to make sense of the data (extract keywords/classify) and do a query to Amazon Web Services to pull the results of the query. We use searchindex=All and keywords=. Results are not always returned, and we basically get 'AWS.ECommerceService.NoExactMatches'. How can we avoid that?

    1) Is there a way to specify default results if no match is found? e.g. the Amazon carousel widget does that: if the search query returns no results, it basically shows some computer items.

    2) Should I always batch the request and add another search criterion to every request? If my first criterion does not pull any results, we can be sure that our second query will always pull results (possibly caching?).

    Here is one search criterion: 'Open Circle Hoop Earrings Polished Stainless Steel Open Circle Hoop Earrings Polished Stainless Steel DiamondShark'. This returns no results via the API. On the Amazon site, I get alternative suggestions with some results which are pretty relevant. Is there a way to pull those results?

    B. We just need a thumbnail image, a title, and a description for our app. Which responseGroup is appropriate? We are using Medium right now, but there is an awful lot of information even with that responseGroup. Any help is appreciated. Thanks

  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcache, and falls back on a miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcache servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that barely ever change, I want to upload them to Amazon S3 and shut down one memcache server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcache;
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }

        location @fallback {
            try_files $uri $uri/index.html;
        }

    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works; if I get an Amazon S3 miss and a memcache hit it works; but if I get an Amazon S3 miss and then a memcache miss, it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter - and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response:

        <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05">
          <Request>
            <IsValid>False</IsValid>
            <ItemSearchRequest>
              <Availability>Available</Availability>
              <Condition>All</Condition>
              <Keywords> home theater pc and other geekery</Keywords>
              <ResponseGroup>Similarities</ResponseGroup>
              <ResponseGroup>SalesRank</ResponseGroup>
              <ResponseGroup>OfferSummary</ResponseGroup>
              <ResponseGroup>Small</ResponseGroup>
              <ResponseGroup>Images</ResponseGroup>
              <SearchIndex>Blended</SearchIndex>
            </ItemSearchRequest>
            <Errors>
              <Error>
                <Code>AWS.MissingParameterCombination</Code>
                <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message>
              </Error>
            </Errors>
          </Request>
        </Items>

    The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request:

        http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------&
        ItemSearch.1.Keywords=Mates%20of%20State&
        ItemSearch.1.MerchantId=Amazon&
        ItemSearch.1.SearchIndex=DVD&
        ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills&
        ItemSearch.2.SearchIndex=Blended&
        ItemSearch.Shared.Availability=Available&
        ItemSearch.Shared.Condition=All&
        ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities&
        Operation=ItemSearch%2CSimilarityLookup&
        Service=AWSECommerceService&
        SimilarityLookup.1.ItemId=B000FNNHZ2&
        SimilarityLookup.2.ItemId=B000EQ5UPU&
        SimilarityLookup.Shared.Availability=Available&
        SimilarityLookup.Shared.Condition=All&
        SimilarityLookup.Shared.MerchantId=Amazon&
        SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary&
        Timestamp=2010-04-02T17%3A18%3A05Z&
        Signature=----------------

    Any ideas as to what I'm doing wrong?

  • Using Amazon S3/Cloudfront and Encoding.com to deliver web video – step by step for iPhone/iPod/iPad

    - by joelvarty
    The Amazon AWS newsletter for May 2010 had a great link to this article by encoding.com on how you can use their service to encode your video for multi-format, multi-bandwidth streaming to many devices, including iPhone, iPad, and Flash with H.264. This looks like it doesn't actually take advantage of CloudFront streaming, but merely splits your encoded files into the available chunks and includes all of the M3U8 files that point to the different bitrates and such. This looks like a pretty sweet service in general, especially since they seem to have an API as well, so it may be very useful to those of you out there looking to host video. more later - joel

  • Uploading files to EC2 Windows instance

    - by nitramk
    I've created an instance of a Windows Server 2008 AMI at Amazon EC2. I now need to upload some installation files to it. One way to do this would be to activate the FTP server in Windows, set up an account, and use that to upload the files. Is there a better way to do this? Maybe some way to upload directly to an EBS volume?

  • Ubuntu Software RAID 0 on AWS Does Not Survive Reboot

    - by Eric J.
    I'm experimenting with creating a software RAID 0 device from 4 EBS volumes on Ubuntu 9.10 running on Amazon AWS, following this guide: http://alestic.com/2009/06/ec2-ebs-raid The device appears (and according to SysBench is 3.5x faster than a regular attached EBS volume). The problem is, when I reboot the instance, all files on the RAID device are gone. The device is available and mounted where expected, but contains no files. I am able to write new files to it, which survive until the next reboot.

  • Store profile image of all users into single directory or per subdirectory id?

    - by Luccas
    I'm using Amazon S3 as storage for users' profile pics. I see that many websites generate large random filenames and put them all into the same root directory, like:

        http://xxx.us-east-1.amazonaws.com/aHR0cHM6Ly9mYmNkbi1wcm9maWxlLWEuYWthbWFpaGQubmV0L2hwcm9maWxlLWFrLWFzaDIvMjczMzkxXzEwMDAwMDMxMjAxMzg5OV81NTk3MjM4Mzdfbi5qcGc.jpg

    And my question is: what are the pros and cons of that approach? If I place them into different directories, like

        http://xxx.us-east-1.amazonaws.com/users/id/username.jpg
        http://xxx.us-east-1.amazonaws.com/users/id/random_number.jpg

    what problems will I have in the future? Thanks!
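
    One point worth keeping in mind when weighing the two layouts: S3 has no real directories, so both options are just flat keys with different prefixes. A hedged sketch of the per-user-prefix variant with boto3 (the bucket and key names are placeholders):

        import boto3

        s3 = boto3.client("s3")

        # "users/1234/avatar.jpg" is a single flat key; the slashes are only
        # a naming convention, so per-user "directories" cost nothing extra.
        with open("avatar.jpg", "rb") as f:
            s3.put_object(
                Bucket="my-profile-pics",      # placeholder
                Key="users/1234/avatar.jpg",   # placeholder
                Body=f,
                ContentType="image/jpeg",
            )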

  • Autoscaling EC2 with NFS mounts

    - by Jamie Taylor
    I'm trying to set up a shared filesystem on EC2, and I've read tutorials such as this: http://blog.ronaldmccollam.com/2012/07/configuring-nfs-on-ubuntu-in-amazon-ec2.html Step 2 talks about configuring the exports; for this I need an IP range, but when I'm auto-scaling I can't predict what the IPs will be before the group scales. Is there any other way of doing this while still staying secure? Thanks

    Edit: I just tried s3fs; it didn't seem to work properly.

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on an Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan, of course), roughly how many daily visitors can I comfortably handle before performance starts to suffer? I'm just looking for a rough estimate or educated guess - I understand this setup might be less than ideal, but I'm still very curious nonetheless. Thanks in advance
