Search Results

Search found 1101 results on 45 pages for 'aws sdb'.

Page 10/45 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Backup AWS DynamoDB to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce function that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples, or corrections are appreciated.
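
    A minimal sketch of launching such a job flow with the boto library (boto 2.x): the bucket paths, script names, and cluster sizing below are placeholders, and note that Hadoop streaming reads its input from S3/HDFS, so the mapper script would have to do the DynamoDB scan itself (the other common route is Hive on EMR with its DynamoDB storage handler).

        # Launch an EMR streaming job flow with boto 2.x; cron this script
        # to automate the backup. All S3 paths here are hypothetical.
        import boto.emr
        from boto.emr.step import StreamingStep

        conn = boto.emr.connect_to_region('us-east-1')

        backup_step = StreamingStep(
            name='dynamodb-backup-step',
            mapper='s3://my-bucket/scripts/ddb_mapper.py',    # scans the table, emits items
            reducer='s3://my-bucket/scripts/ddb_reducer.py',  # writes items out as text
            input='s3://my-bucket/backup-input/',             # streaming input lives in S3/HDFS
            output='s3://my-bucket/backups/run-001/',
        )

        jobflow_id = conn.run_jobflow(
            name='dynamodb-backup',
            log_uri='s3://my-bucket/emr-logs/',
            steps=[backup_step],
            num_instances=3,
            master_instance_type='m1.small',
            slave_instance_type='m1.small',
        )
        print(jobflow_id)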

    Read the article

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searched Google using every search phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit. They're all geared toward using cloud computing for file storage and processing, but that's not what I want. I just want to access their API to search for info on books. Kind of like how http://openlibrary.org/ does it, where it's a simple URL call to get information on a book (but their databases aren't nearly as populated as Amazon's). Why is it so damned hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.
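
    For reference, the book-search API here is Amazon's Product Advertising API, and a request is just a signed URL. A minimal Python sketch of a signed ItemSearch call, following the documented HMAC-SHA256 signing recipe; the keys, associate tag, and keywords are placeholders:

        # Build a signed ItemSearch URL for the Product Advertising API.
        import base64, hashlib, hmac, time, urllib

        ACCESS_KEY = 'YOUR-ACCESS-KEY'     # placeholder
        SECRET_KEY = 'YOUR-SECRET-KEY'     # placeholder
        ASSOCIATE_TAG = 'yourtag-20'       # placeholder

        params = {
            'Service': 'AWSECommerceService',
            'Operation': 'ItemSearch',
            'SearchIndex': 'Books',
            'Keywords': 'book sharing',
            'ResponseGroup': 'ItemAttributes',
            'AWSAccessKeyId': ACCESS_KEY,
            'AssociateTag': ASSOCIATE_TAG,
            'Timestamp': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
        }

        # Canonical query string: keys sorted byte-wise, values URL-encoded.
        query = '&'.join('%s=%s' % (k, urllib.quote(params[k], safe='-_.~'))
                         for k in sorted(params))
        to_sign = 'GET\nwebservices.amazon.com\n/onca/xml\n' + query
        signature = base64.b64encode(
            hmac.new(SECRET_KEY, to_sign, hashlib.sha256).digest())
        url = ('http://webservices.amazon.com/onca/xml?%s&Signature=%s'
               % (query, urllib.quote(signature)))
        print(url)  # fetching this URL returns an XML list of matching books

    The same construction ports directly to PHP with hash_hmac('sha256', ...) and base64_encode().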

    Read the article

  • Running multiple environments on one AWS EC2 instance (Elastic Beanstalk)

    - by Abbas
    I am very new to the Amazon AWS services. I was wondering if there is a way to run a single EC2 instance (say, the Amazon Linux AMI) and then attach two environments to that instance. In particular, I'd like to run a PHP and a Tomcat environment on a single EC2 instance. The problem is, every time I create a new environment in Elastic Beanstalk, it seems to create a new EC2 instance as well. Am I missing something here? I'd appreciate any hint on this.

    Read the article

  • Quantity in Amazon cart?

    - by user201140
    Hi, I'd like to change the quantity in an AWS cart order. I've used the form below, and it works fine for adding 1 item to the cart, but when I try to change it to a quantity of 2 it says there are no items in my cart. What am I doing wrong? Thanks.

        <form method="GET" action="http://www.amazon.com/gp/aws/cart/add.html" id="Quantity">
          <input type="hidden" name="AWSAccessKeyId" value="Access Key ID" />
          <input type="hidden" name="AssociateTag" value="Associate Tag" />
          <?php echo "<input type='hidden' name='ASIN.1' value='".$itemid."'/>"; ?>
          <!-- note: Quantity.1 is hard-coded to 1 here -->
          <input type="hidden" name="Quantity.1" value="1"/><br/>
          <input type="image" src="buynow.gif" value="Submit" alt="Submit">
        </form>

    Read the article

  • How to run AWS sample Java code on an EC2 instance

    - by SeaPlusPlus
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolbox. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is: what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export a jar and upload that? Do I need to install anything special to actually run it? I'm just trying to run the basic DynamoDB example that connects to the database and adds a new table and row.
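
    One common route is to export a runnable jar (with a Main-Class manifest entry and dependencies included) and push it over SSH; the instance only needs a JRE, not Eclipse or the .java sources. A minimal sketch using the Fabric deployment tool, purely as one assumed workflow; the hostname, key path, jar name, and JRE package are placeholders, and any scp/ssh sequence does the same job:

        # fabfile.py -- push the exported jar to the instance and run it.
        from fabric.api import env, put, run, sudo

        env.hosts = ['ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com']
        env.key_filename = '~/.ssh/my-ec2-key.pem'

        def deploy():
            sudo('yum -y install java-1.6.0-openjdk')  # a JRE, if not already present
            put('dynamodb-sample.jar', 'app.jar')      # the jar exported from Eclipse
            run('java -jar app.jar')                   # runs the jar's main class

    Invoked with "fab deploy" from the directory containing the fabfile.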

    Read the article

  • Can't seem to sign AWS CloudFront URL properly

    - by Joe Corkery
    Hi everybody, I found a lot of detailed examples online on how to sign an Amazon CloudFront URL for private content. Unfortunately, whenever I implement these examples my URL doesn't seem to work. The resource path is correct, because I can download the file when it is set for world read, but the URL doesn't work when it is set just for authorized users. The PHP code I am using is below. If anybody has any insight into what I might be doing wrong (I'm guessing it is something obvious that I am just not seeing right now), it would be greatly appreciated.

        function urlCloudFront($resource) {
            $AWS_CF_KEY = 'APKA...';                          // CloudFront key pair id
            $priv_key = file_get_contents(path_to_pem_file);  // CloudFront private key
            $pkeyid = openssl_get_privatekey($priv_key);
            $expires = strtotime("+ 3 hours");
            $policy_str = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
            $policy_str = trim(preg_replace('/\s+/', '', $policy_str));
            $res = openssl_sign($policy_str, $signature, $pkeyid, OPENSSL_ALGO_SHA1);
            $signature_base64 = base64_encode($signature);
            $repl = array('+' => '-', '=' => '_', '/' => '~');  // URL-safe characters
            $signature_base64 = strtr($signature_base64, $repl);
            $url = $resource . '?Expires=' . $expires . '&Signature=' . $signature_base64 . '&Key-Pair-Id=' . $AWS_CF_KEY;
            print '<p><a href="' . $url . '">Download VIDA (CloudFront)</a>';
        }

        urlCloudFront("http://mydistcloud.cloudfront.net/mydir/myfile.tar.gz");

    Thanks.

    Read the article

  • How do I prevent my filesystems from being mounted read-only after suspending?

    - by Chas. Owens
    I have Ubuntu 12.04 installed on an SDHC card (only one ext2 partition, no swap). When I suspend using pm-suspend, my root filesystem is mounted read-only. I am currently "fixing" this with the following file, /etc/pm/sleep.d/99_make_disk_rw:

        #!/bin/sh
        mount -o remount,rw /

    But the disk is marked as needing an fsck on reboot. How can I prevent the filesystem from being mounted read-only, or whatever is going wrong here?

    Update: It looks like it is getting mounted read-only because an error occurred. I have changed the mount options for / in /etc/fstab to noatime,nodiratime,errors=continue and it no longer mounts the SDHC card as read-only after it resumes. So the problem is happening when it suspends, not when it resumes as I had thought. I checked /sys/bus/usb/devices/1-4/power/persist and it is set to 1, so the SDHC card shouldn't appear disconnected to the OS (or, more accurately, it should recover from the disconnection without error). Here is the relevant section of the syslog:

        Sep 10 10:34:23 iubit kernel: [ 748.246226] sd 4:0:0:0: [sdb] Media Changed
        Sep 10 10:34:23 iubit kernel: [ 748.246234] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Sep 10 10:34:23 iubit kernel: [ 748.246243] sd 4:0:0:0: [sdb] Sense Key : Unit Attention [current]
        Sep 10 10:34:23 iubit kernel: [ 748.246253] Info fld=0x0
        Sep 10 10:34:23 iubit kernel: [ 748.246258] sd 4:0:0:0: [sdb] Add. Sense: Not ready to ready change, medium may have changed
        Sep 10 10:34:23 iubit kernel: [ 748.246271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 5d 3e f0 00 00 08 00
        Sep 10 10:34:23 iubit kernel: [ 748.246291] end_request: I/O error, dev sdb, sector 6110960
        Sep 10 10:34:23 iubit kernel: [ 748.247027] EXT2-fs (sdb1): error: ext2_fsync: detected IO error when writing metadata buffers
        Sep 10 10:34:23 iubit anacron[6954]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:23 iubit anacron[6954]: Normal exit (0 jobs run)
        Sep 10 10:34:24 iubit laptop-mode: Laptop mode
        Sep 10 10:34:24 iubit laptop-mode: enabled, not active
        Sep 10 10:34:24 iubit kernel: [ 749.055376] sd 4:0:0:0: [sdb] No Caching mode page present
        Sep 10 10:34:24 iubit kernel: [ 749.055387] sd 4:0:0:0: [sdb] Assuming drive cache: write through
        Sep 10 10:34:25 iubit anacron[7555]: Anacron 2.3 started on 2012-09-10
        Sep 10 10:34:25 iubit anacron[7555]: Normal exit (0 jobs run)
        Sep 10 10:34:31 iubit kernel: [ 756.090861] EXT2-fs (sdb1): previous I/O error to superblock detected

    Read the article

  • SAP: HANA arrives on AWS; the vendor keeps devising offers to democratize its in-memory database

    SAP HANA arrives on Amazon Web Services. The vendor keeps devising offers to democratize its in-memory database. To use, or simply to test, SAP's "in-memory" database, an appliance or a dedicated server is no longer needed: the German vendor has just certified HANA for developer use on Amazon Web Services. "SAP HANA One is available on the AWS Marketplace," SAP announced. "It is now possible to run this powerful in-memory database on AWS EC2 for $0.99 an hour." As a reminder, ...

    Read the article

  • Amazon releases the AWS SDK for Node.js, an open source development kit that eases access to its cloud solutions

    Amazon releases the AWS SDK for Node.js, an open source development kit that gives Node.js developers easy access to its cloud solutions. Here is news that will certainly delight JavaScript developers using Node.js: the AWS (Amazon Web Services) Developer Tools team has just published a development kit that lets Node.js developers easily access its cloud computing platform. The new SDK flexibly exposes the features of Amazon Web Services' solutions, notably the Amazon S3 (Simple Storage Service) cloud storage platform, Amazon EC2 (Elastic Compute Cloud), and Amazon DynamoDB. The SDK is...

    Read the article

  • Amazon Web Services: fault-tolerant solution

    - by Algorist
    Hi, I am using the Boto library to write scripts that automate our jobs on AWS. My script starts a Hadoop cluster using Cloudera scripts and then does some customization. I am having a problem with retries: it seems like every command in my script fails once every couple of days. I started adding retries to all the commands, but now the code is clumsy and difficult to maintain. What do people do in general? Thank you, Bala
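
    A common pattern is to wrap the flaky calls in a retry-with-backoff decorator rather than hand-writing a loop around each command. A minimal sketch; the counts and delays are arbitrary, and the broad except clause should be narrowed to the boto exceptions actually seen:

        # Retry a call with exponential backoff before giving up.
        import time
        from functools import wraps

        def retry(tries=3, delay=5, backoff=2):
            def decorator(fn):
                @wraps(fn)
                def wrapper(*args, **kwargs):
                    attempt, wait = 0, delay
                    while True:
                        try:
                            return fn(*args, **kwargs)
                        except Exception:    # narrow to the exceptions you see
                            attempt += 1
                            if attempt >= tries:
                                raise        # out of attempts, re-raise
                            time.sleep(wait)
                            wait *= backoff  # back off: 5s, 10s, 20s, ...
                return wrapper
            return decorator

        @retry(tries=5)
        def start_cluster():
            pass  # any flaky boto or Cloudera call goes here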

    Read the article

  • Amazon Web Services Apache Server

    - by Samnsparky
    I am trying to get a feel for the costs of running Apache on AWS continually. Assuming that the service is scarcely used, does anyone know roughly how many CPU hours it would eat up in a month just by sitting there and running? I understand that this is slightly impractical, but I am trying to figure out the cost of entry for deploying an application on this platform (as compared to GAE). I suspect it is small, but I would like to know.

    Read the article

  • MP3 player MPMan TS300 not recognized

    - by Nick Lemaire
    I just received an MPMan TS300 MP3 player for Christmas, but when I try to connect it to Ubuntu (10.10) through USB it doesn't seem to be recognized. I searched for a Linux driver but came up with nothing. I even tried installing MPMan manager from the Software Center, but still the same problem... Has anybody got any ideas about how to fix this? Thank you.

    Edit: some additional information. When I plug it in, it doesn't show up under /dev, and it is not listed in the output of 'lsusb'. This is the output of 'dmesg':

        [19571.732541] usb 1-4: new high speed USB device using ehci_hcd and address 2
        [19571.889154] scsi7 : usb-storage 1-4:1.0
        [19572.883155] scsi 7:0:0:0: Direct-Access TS300 USB DISK 1.00 PQ: 0 ANSI: 0
        [19572.885856] sd 7:0:0:0: Attached scsi generic sg2 type 0
        [19572.887266] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB)
        [19573.012547] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19573.292550] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19573.572565] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19573.725019] sd 7:0:0:0: [sdb] Test WP failed, assume Write Enabled
        [19573.725024] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [19573.728521] sd 7:0:0:0: [sdb] Attached SCSI removable disk
        [19575.333015] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB)
        [19575.452547] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19580.603253] usb 1-4: device descriptor read/all, error -110
        [19580.722560] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19595.842579] usb 1-4: device descriptor read/64, error -110
        [19611.072656] usb 1-4: device descriptor read/64, error -110
        [19611.302540] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19616.332568] usb 1-4: device descriptor read/8, error -110
        [19621.462692] usb 1-4: device descriptor read/8, error -110
        [19621.692551] usb 1-4: reset high speed USB device using ehci_hcd and address 2
        [19626.722567] usb 1-4: device descriptor read/8, error -110
        [19631.852802] usb 1-4: device descriptor read/8, error -110
        [19631.962587] usb 1-4: USB disconnect, address 2
        [19631.962587] sd 7:0:0:0: Device offlined - not ready after error recovery
        [19631.962840] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [19631.965104] sd 7:0:0:0: [sdb] READ CAPACITY failed
        [19631.965109] sd 7:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [19631.965112] sd 7:0:0:0: [sdb] Sense not available.
        [19631.965125] sd 7:0:0:0: [sdb] Assuming drive cache: write through
        [19631.965130] sdb: detected capacity change from 4028628992 to 0
        [19632.130042] usb 1-4: new high speed USB device using ehci_hcd and address 3
        [19647.242550] usb 1-4: device descriptor read/64, error -110
        [19662.472560] usb 1-4: device descriptor read/64, error -110
        [19662.702566] usb 1-4: new high speed USB device using ehci_hcd and address 4
        [19677.822587] usb 1-4: device descriptor read/64, error -110
        [19693.052575] usb 1-4: device descriptor read/64, error -110
        [19693.282582] usb 1-4: new high speed USB device using ehci_hcd and address 5
        [19698.312600] usb 1-4: device descriptor read/8, error -110
        [19703.442594] usb 1-4: device descriptor read/8, error -110
        [19703.672548] usb 1-4: new high speed USB device using ehci_hcd and address 6
        [19708.702581] usb 1-4: device descriptor read/8, error -110
        [19713.840077] usb 1-4: device descriptor read/8, error -110
        [19713.942555] hub 1-0:1.0: unable to enumerate USB device on port 4
        [19714.262545] usb 4-2: new full speed USB device using uhci_hcd and address 2
        [19729.382549] usb 4-2: device descriptor read/64, error -110
        [19744.612534] usb 4-2: device descriptor read/64, error -110
        [19744.842543] usb 4-2: new full speed USB device using uhci_hcd and address 3
        [19759.962714] usb 4-2: device descriptor read/64, error -110
        [19775.200047] usb 4-2: device descriptor read/64, error -110
        [19775.422630] usb 4-2: new full speed USB device using uhci_hcd and address 4
        [19780.444498] usb 4-2: device descriptor read/8, error -110
        [19785.574491] usb 4-2: device descriptor read/8, error -110
        [19785.802547] usb 4-2: new full speed USB device using uhci_hcd and address 5
        [19790.824490] usb 4-2: device descriptor read/8, error -110
        [19795.961473] usb 4-2: device descriptor read/8, error -110
        [19796.062556] hub 4-0:1.0: unable to enumerate USB device on port 2

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync, over ssh only, to known per-MAC-address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files. Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)? Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client that runs the script? (I could deal with a flood of data if someone got into a system.) Having a bucket per remote machine does not seem feasible due to bucket limits. And I don't think I want to use a single common key: if one machine were breached, a malicious hacker could potentially get access to the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, minimising risk. Pipedream or feasible? TIA, Aitch
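
    One hedged sketch of the per-server-credentials route, using IAM from boto: provision one IAM user per server that may only put objects into its own prefix of a shared bucket, so a compromised box can write into (or at worst flood) only its own corner and cannot read or delete anyone else's data. The bucket name, naming scheme, and policy shape below are assumptions:

        # Create a write-only IAM user scoped to one server's S3 prefix.
        import json
        import boto

        iam = boto.connect_iam()

        def provision_uploader(server_id, bucket='sensor-data'):
            user = 'uploader-%s' % server_id
            iam.create_user(user)
            policy = {"Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": "arn:aws:s3:::%s/%s/*" % (bucket, server_id),
            }]}
            iam.put_user_policy(user, 'write-only-own-prefix',
                                json.dumps(policy))
            # The response carries an access key / secret key pair to bake
            # into the server image at build time (the exact attribute path
            # into the response depends on the boto version).
            return iam.create_access_key(user)

    Key management then reduces to one provisioning call per deployment, which fits the automatic server-build workflow.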

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
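
    On Debian, the s3cmd package is the usual scriptable answer for raw bucket access; alternatively, the python-boto package does it in a few lines. A minimal upload sketch, where the bucket and key names are placeholders:

        # Upload the EAR to S3 with boto; credentials come from the
        # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
        # or from ~/.boto.
        import boto
        from boto.s3.key import Key

        conn = boto.connect_s3()
        bucket = conn.get_bucket('my-deploy-bucket')  # hypothetical bucket
        key = Key(bucket, 'builds/app.ear')
        key.set_contents_from_filename('app.ear')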

    Read the article

  • Which is faster for read access on EC2: local drive or EBS?

    - by Phillip Oldham
    Which is faster for read access on an EC2 instance: the "local" (instance-store) drive or an attached EBS volume? I have some data that needs to be persisted, so I have placed it on an EBS volume. I'm using OpenSolaris, so this volume has been attached as a ZFS pool. However, I have a large chunk of EC2 disk space that is going to go unused, so I'm considering re-purposing it as a ZFS cache volume, but I don't want to do this if its disk access is slower than the EBS volume's, as that would potentially have a detrimental effect.
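
    One way to settle this for a given instance type is to measure it. A crude sequential-read timing sketch in Python; the file paths are placeholders, and the files should be much larger than RAM (or caches dropped first) for honest numbers:

        # Time sequential reads and report throughput for each device.
        import time

        def read_throughput(path, block=1024 * 1024):
            start, total = time.time(), 0
            f = open(path, 'rb')
            while True:
                chunk = f.read(block)
                if not chunk:
                    break
                total += len(chunk)
            f.close()
            elapsed = time.time() - start
            return total / elapsed / (1024.0 * 1024.0)  # MB/s

        for path in ('/mnt/local-testfile', '/ebspool/testfile'):
            print('%s: %.1f MB/s' % (path, read_throughput(path)))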

    Read the article

  • PostgreSQL on Amazon EBS volume: realistic performance, or move to something more lightweight?

    - by Peck
    Hi, I'm working on a little research project, currently running as an instance on EC2, and I'm hoping to figure out whether I'm going down the right path. We, like a thousand other people, are making use of some of Twitter's streaming feeds to gather some data to have fun with, and my DB seems to be having problems keeping up; queries take what seems to be a very long time. I'm not a DBA by trade, so I'll just dump some info here and add more if need be.

    System specs: EC2 XL with 15 GB of RAM; EBS: 4 x 100 GB drives, RAID 0.

    With the stream we're getting, we're looking at around 10k inserts per minute. There are 3 main tables, and the users we're tracking currently amount to somewhere in the neighborhood of 26M rows. Is this volume of inserts on this hardware too much to ask of EBS? Should I take a look at something with less overhead, like MongoDB?

    Read the article

  • Which AMI should I use as a base for a Django application?

    - by Edan Maor
    I'm starting development of a Django application on Amazon's Web Services. I'm looking to build an instance that will serve the Django projects. I don't have much experience with such things, having only used a shared host before (WebFaction). So I'm wondering: which AMI should I use as a base? I'm assuming I want an Ubuntu AMI, possibly with certain things like Apache pre-installed? One minor point: I'm planning to serve several different Django projects from the same instance. I use virtualenv on my dev machine right now to separate the different projects, and I'm assuming I'll do the same on EC2. Thanks!

    Read the article

  • sysbench memory test on an EC2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12: multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K
        Memory transfer size: 0M
        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 (0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min: 18446744073709.55ms
                 avg: 0.00ms
                 max: 0.00ms

        Threads fairness:
            events (avg/stddev):         0.0000/0.00
            execution time (avg/stddev): 0.0000/0.00

    Read the article

  • How to use Amazon's X.509 key and X.509 cert

    - by April
    How do I use Amazon's X.509 key and X.509 cert? I have downloaded these two files but don't know how to use them. Using RightScale (https://my.rightscale.com/aws_credentials/), do I just copy and paste the contents of these two files? When I do, the RightScale app says they are not valid. What am I doing wrong? How do I use these?

    Read the article

  • How do I access my public DNS on Amazon's EC2

    - by Spencer Carnage
    I'm working with Amazon's EC2 for the first time. I went through all of the steps at http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/ and now I'm trying to access the public DNS through a web browser. I get nothing. I don't even know if I'm supposed to get anything, really. I just want to see something that indicates that my instance is accessible from the web for development's sake. I'm completely new at this and I can't find a simple answer to this anywhere to save my life. Any help would be MUCH appreciated. Thanks!

    Read the article

  • Cannot connect to my MySQL database

    - by madhup
    Hi all, I am trying to connect to a MySQL database on Amazon through a PHP script, but I am shown this error:

        Warning: mysql_connect() [function.mysql-connect]: Lost connection to MySQL server at 'reading initial communication packet', system error: 111

    I have tried and searched around, and did the following things: in /etc/mysql/my.cnf I commented out the line bind-address = 127.0.0.1 to allow access from anywhere, and I checked /etc/hosts.allow and /etc/hosts.deny and made sure that there are no rules present that might cause this. But still no luck. Please suggest any other way. Thanks, Madhup

    Read the article

  • Detaching EBS volumes (in LVM) takes a lot of time

    - by Cheezo
    I have an EC2 instance (EBS-backed root partition) with EBS volumes configured via LVM. I have formatted the LVM volume as ext4 and can mount it to store data. Now I want to take a snapshot of the root partition, so I go to detach the other, non-root EBS volumes (the ones configured in LVM). A regular detach does not work here, and I almost always have to "force" the detach. In another, similar setup with RAID instead of LVM, I can detach easily after stopping the RAID array. The whole setup is running Ubuntu Maverick 10.10. Please assist me with this.
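
    The usual cause is that LVM still holds the block device open, so EC2 cannot detach it cleanly; deactivating the volume group first (the LVM analogue of stopping the RAID array) normally avoids the forced detach. A minimal sketch, where the mount point, volume-group name, and volume id are placeholders:

        # Release the device from LVM before asking EC2 to detach it.
        import subprocess
        import boto

        subprocess.check_call(['umount', '/data'])            # unmount the ext4 filesystem
        subprocess.check_call(['vgchange', '-an', 'datavg'])  # deactivate the volume group

        conn = boto.connect_ec2()
        conn.detach_volume('vol-12345678')                    # should no longer need force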

    Read the article

  • pdflush hanging on Amazon EBS drives when using multi-GB files - any workaround?

    - by rhh
    Hello, when I run gunzip on a 1.7GB file (which generates an 8GB file) on an EBS volume, pdflush freezes after gunzip runs and the CPU hangs indefinitely at 100% I/O wait. Here's the output from 'ps aux | grep pdflush' (note the D status):

        root        87  0.0  0.0      0     0 ?        D    06:18   0:00 pdflush
        root        88  0.0  0.0      0     0 ?        D    06:18   0:00 pdflush

    The only solution is to kill the pdflush processes, and even they don't die immediately. This problem is repeatable and happens with new instances. I'm running 2xlarge instances, and I have far more RAM free than is being used (i.e. /proc/meminfo shows 20+GB MemFree). Has anyone found a workaround to this problem in the past? Thanks for any thoughts. Robert

    Read the article
