Search Results

Search found 2853 results on 115 pages for 'amazon cloudfront'.


  • Amazon-like ecommerce site

    - by Soule
    Hey there, my idea is to make an e-commerce site a lot like Amazon. Not exactly cloning it, but since it's for a niche market, I need something like it. I was thinking of using Magento or something similar as a base, but I can't figure out how to allow users to:
    - sign up for an account and get verified by me;
    - add items of their own, so they are searchable;
    - post product reviews.
    What can I use to achieve this, and what are some suggestions? I can code in PHP and Python. Thanks!

    Read the article

  • How to create an Amazon CloudFront signature for private content access using a canned policy

    - by Chet
    Has anyone using .NET actually worked out how to successfully sign a signature to use with CloudFront private content? After a couple of days of attempts all I can get is Access Denied. I have been working with variations of the following code, and have also tried OpenSSL.Net and the AWSSDK, but the latter does not have a sign method for RSA-SHA1 yet. The policy (the data being signed) looks like this:

        {"Statement":[{"Resource":"http://xxxx.cloudfront.net/xxxx.jpg","Condition":{"DateLessThan":{"AWS:EpochTime":1266922799}}}]}

    This method attempts to sign the policy for use in the canned URL. Some of the variations have included changing the padding used in the hash, and also reversing the byte[] before signing, as apparently OpenSSL does it that way.

        public string Sign(string data)
        {
            using (SHA1Managed SHA1 = new SHA1Managed())
            {
                RSACryptoServiceProvider provider = new RSACryptoServiceProvider();
                RSACryptoServiceProvider.UseMachineKeyStore = false;

                // Amazon PEM converted to XML using OpenSslKey
                provider.FromXmlString("<RSAKeyValue><Modulus>.....");

                byte[] plainbytes = System.Text.Encoding.UTF8.GetBytes(data);
                byte[] hash = SHA1.ComputeHash(plainbytes);
                //Array.Reverse(hash); // I have seen some examples that reverse the hash
                byte[] sig = provider.SignHash(hash, "SHA1");
                return Convert.ToBase64String(sig);
            }
        }

    It's useful to note that I have verified the content is set up correctly in S3 and CloudFront by generating a CloudFront canned-policy URL using CloudBerry Explorer. How do they do it? Any ideas would be much appreciated. Thanks

    Read the article

  • How to 'LOAD DATA INFILE' on Amazon RDS?

    - by feydr
    Not sure if this is a question better suited for Server Fault, but I've been messing with Amazon RDS lately and was having trouble getting 'file' privileges for my web host's MySQL user. I'd assume that a simple:

        grant file on *.* to 'webuser'@'%';

    would work, but it does not, and I can't seem to do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is super fast for doing thousands of inserts at once. Anyone know how to remedy this, or do I need to find a different way? This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html, seems to suggest that I need to find a different way around this. Help? UPDATE: I'm not trying to import a database -- I just want to use the file load option to insert several hundred thousand rows at a time. After digging around, this is what we have:

        mysql> grant file on *.* to 'devuser'@'%';
        ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)

        mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
        +----------+-----------+------------+------------+
        | User     | File_priv | Grant_priv | Super_priv |
        +----------+-----------+------------+------------+
        | rdsadmin | Y         | Y          | Y          |
        | root     | N         | Y          | N          |
        | devuser  | N         | N          | N          |
        +----------+-----------+------------+------------+
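
    One commonly suggested workaround, sketched below in Python with the PyMySQL client (host, table and file names are made up): LOAD DATA LOCAL INFILE streams the file from the client instead of reading it on the server, so it does not need the FILE privilege that RDS withholds. The local_infile option does have to be enabled on the client and in the RDS parameter group.

        import pymysql  # assumes the PyMySQL client library is installed

        # local_infile=True makes the client send the file contents itself,
        # so the server-side FILE privilege (which RDS withholds) is not needed
        conn = pymysql.connect(host='mydb.xxxx.rds.amazonaws.com', user='devuser',
                               password='...', database='mydb', local_infile=True)
        with conn.cursor() as cur:
            cur.execute("""
                LOAD DATA LOCAL INFILE '/tmp/rows.csv'
                INTO TABLE mytable
                FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'
            """)
        conn.commit()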

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and am running the sample file that comes with the PHP SDK, which creates a bucket and attempts to upload some demo files to it. But this is the error I am getting: "The difference between the request time and the current time is too large." I read in another question on SO that this is because Amazon validates a request by comparing the times of the server and the client: the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php containing this code:

        <?php print strftime('%c'); ?>

    and the output is:

        Fri Jun 8 00:31:22 2012

    The day looks correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window? What if I have a user in China who wishes to upload a file on my site, which uses S3 as the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?
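
    A quick way to measure a machine's clock skew against Amazon is to compare its time with the Date header of any AWS response; a minimal Python sketch follows (only the clock of the machine that signs the request matters, and it is compared in UTC, so time zones are irrelevant):

        import email.utils, time, urllib.error, urllib.request

        try:
            resp = urllib.request.urlopen('https://s3.amazonaws.com/')
            date_header = resp.headers['Date']
        except urllib.error.HTTPError as e:
            # even an error response (403 on anonymous access) carries a Date header
            date_header = e.headers['Date']

        aws_time = email.utils.parsedate_to_datetime(date_header).timestamp()
        print('clock skew vs AWS: %.1f seconds' % abs(time.time() - aws_time))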

    Read the article

  • Amazon Product Advertising API - ItemLookup request working example

    - by I__
    Would anyone have a working example of an Amazon ItemLookup? I have the following code, but it does not seem to work:

        string ISBN = "0393326381";
        string ASIN = "";
        if (!(string.IsNullOrEmpty(ISBN) && string.IsNullOrEmpty(ASIN))) {
            AWSECommerceServicePortTypeChannel service = new AWSECommerceServicePortTypeChannel();
            ItemLookup lookup = new ItemLookup();
            ItemLookupRequest request = new ItemLookupRequest();
            lookup.AssociateTag = secretKey;
            lookup.AWSAccessKeyId = accessKeyId;
            if (string.IsNullOrEmpty(ASIN)) {
                request.IdType = ItemLookupRequestIdType.ISBN;
                request.ItemId = new string[] { ISBN.Replace("-", "") };
            } else {
                request.IdType = ItemLookupRequestIdType.ASIN;
                request.ItemId = new string[] { ASIN };
            }
            request.ResponseGroup = new string[] { "OfferSummary" };
            lookup.Request = new ItemLookupRequest[] { request };
            response = service.ItemLookup(lookup);
            if (response.Items.Length > 0 && response.Items[0].Item.Length > 0) {
                Item item = response.Items[0].Item[0];
                if (item.MediumImage == null) {
                    //bookImageHyperlink.Visible = false;
                } else {
                    //bookImageHyperlink.ImageUrl = item.MediumImage.URL;
                }
                //bookImageHyperlink.NavigateUrl = item.DetailPageURL;
                //bookTitleHyperlink.Text = item.ItemAttributes.Title;
                //bookTitleHyperlink.NavigateUrl = item.DetailPageURL;
                if (item.OfferSummary.LowestNewPrice == null) {
                    if (item.OfferSummary.LowestUsedPrice == null) {
                        //priceHyperlink.Visible = false;
                    } else {
                        //priceHyperlink.Text = string.Format("Buy used {0}", item.OfferSummary.LowestUsedPrice.FormattedPrice);
                        //priceHyperlink.NavigateUrl = item.DetailPageURL;
                    }
                } else {
                    //priceHyperlink.Text = string.Format("Buy new {0}", item.OfferSummary.LowestNewPrice.FormattedPrice);
                    //priceHyperlink.NavigateUrl = item.DetailPageURL;
                }
                if (item.ItemAttributes.Author != null) {
                    //authorLabel.Text = string.Format("By {0}", string.Join(", ", item.ItemAttributes.Author));
                } else {
                    //authorLabel.Text = string.Format("By {0}", string.Join(", ", item.ItemAttributes.Creator.Select(c => c.Value).ToArray()));
                }
                /*
                ItemLink link = item.ItemLinks.Where(i => i.Description.Contains("Wishlist")).FirstOrDefault();
                if (link == null) {
                    //wishListHyperlink.Visible = false;
                } else {
                    //wishListHyperlink.NavigateUrl = link.URL;
                }
                */
            }
        }

    The problem is with this line; it should be defined differently, but I do not know how:

        AWSECommerceServicePortTypeChannel service = new AWSECommerceServicePortTypeChannel();
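
    For comparison, a minimal sketch of the same lookup as a signed REST request in Python; the legacy Product Advertising API accepts a GET to /onca/xml signed with HMAC-SHA256 over the sorted query string. Note that an ISBN lookup also requires a SearchIndex parameter, which the C# code above never sets. All credential values below are placeholders.

        import base64, hashlib, hmac, time, urllib.parse

        def itemlookup_url(access_key, secret_key, associate_tag, isbn):
            params = {
                'Service': 'AWSECommerceService',
                'Operation': 'ItemLookup',
                'AWSAccessKeyId': access_key,
                'AssociateTag': associate_tag,
                'IdType': 'ISBN',
                'SearchIndex': 'Books',  # required whenever IdType is not ASIN
                'ItemId': isbn,
                'ResponseGroup': 'OfferSummary',
                'Timestamp': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
            }
            host, path = 'webservices.amazon.com', '/onca/xml'
            # canonical string: RFC 3986-encoded parameters in byte order
            query = '&'.join('%s=%s' % (k, urllib.parse.quote(str(params[k]), safe='-_.~'))
                             for k in sorted(params))
            to_sign = 'GET\n%s\n%s\n%s' % (host, path, query)
            sig = base64.b64encode(hmac.new(secret_key.encode(), to_sign.encode(),
                                            hashlib.sha256).digest()).decode()
            return 'https://%s%s?%s&Signature=%s' % (host, path, query,
                                                     urllib.parse.quote(sig, safe=''))

        print(itemlookup_url('MY_ACCESS_KEY', 'MY_SECRET_KEY', 'mytag-20', '0393326381'))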

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    - I have a range of Windows and Linux machines.
    - Some machines are EBS backed, whereas others are S3 backed.
    - I need to be able to persist a machine (put it to sleep), preferably keeping all the settings that were active when the machine was running.
    - I need to be able to quickly wake a machine from sleep (ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon).

    Here's the stuff that confuses me:

    - AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
    - I believe I can put S3 machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
    - S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
    - I can't tell immediately which machines are EBS backed and which are S3 backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS or S3 backed.

    Advice?
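
    On the last point, the root device type is recorded on each instance; a minimal sketch with the current boto3 SDK, assuming AWS credentials are configured:

        import boto3

        ec2 = boto3.client('ec2')
        for reservation in ec2.describe_instances()['Reservations']:
            for inst in reservation['Instances']:
                # 'ebs' instances can be stopped and restarted later ("slept");
                # 'instance-store' (S3-backed) instances can only be terminated
                print(inst['InstanceId'], inst['RootDeviceType'])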

    Read the article

  • How do I login once I promote my Windows Server 2012 to domain controller in my Amazon VPC?

    - by Developr
    I am following this guide: http://d36cz9buwru1tt.cloudfront.net/pdf/EC2_AD_How_to.pdf to set up my domain controller. I get AD installed correctly, but when I do the promotion to DC, the server restarts, and then I am unable to log in using any of the local system accounts. I even created my own separate user account, but that did not help. I made sure to disable the Amazon settings for renaming the machine; the machine has a static IP and has been renamed.

    Read the article

  • Can't connect to IIS FTP site under Amazon EC2

    - by h3n
    IIS 7.5:
    - FTP Firewall Support: data channel port range 49152-65535, using the external static IP of the Amazon EC2 instance
    - FTP IPv4 Restrictions: allow the Amazon EC2 static IP
    - FTP Authentication: Anonymous: Enabled; Basic: Disabled; IIS Manager: Enabled
    - FTP Authorization: Allow All Users: Read/Write
    Windows Firewall:
    - (Inbound) Open port 21 and port range 49152-65535
    - (Outbound) Open port 20
    Amazon EC2 Security Group:
    - Custom TCP Rule: 21
    - Custom TCP Rule: 49152-65535
    It works in Internet Explorer when I type ftp://localhost on the server, but when I enter the Amazon EC2 static IP (ftp://IPADDRESS) it doesn't connect. I can't connect with FileZilla either.

    Read the article

  • Create signed URLs for CloudFront with Ruby

    - by wiseleyb
    History:

    1. I created a key and pem file on Amazon.
    2. I created a private bucket.
    3. I created a public distribution and used the origin id to connect to the private bucket: works.
    4. I created a private distribution and connected it the same as #3; now I get access denied: expected.

    I'm having a really hard time generating a URL that will work. I've been trying to follow the directions described here: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?PrivateContent.html This is what I've got so far... doesn't work though; still getting access denied:

        def url_safe(s)
          s.gsub('+','-').gsub('=','_').gsub('/','~').gsub(/\n/,'').gsub(' ','')
        end

        def policy_for_resource(resource, expires = Time.now + 1.hour)
          %({"Statement":[{"Resource":"#{resource}","Condition":{"DateLessThan":{"AWS:EpochTime":#{expires.to_i}}}}]})
        end

        def signature_for_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          policy = url_safe(policy_for_resource(resource, expires))
          key = OpenSSL::PKey::RSA.new(File.readlines(private_key_file_name).join(""))
          url_safe(Base64.encode64(key.sign(OpenSSL::Digest::SHA1.new, (policy))))
        end

        def expiring_url_for_private_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          sig = signature_for_resource(resource, key_id, private_key_file_name, expires)
          "#{resource}?Expires=#{expires.to_i}&Signature=#{sig}&Key-Pair-Id=#{key_id}"
        end

        resource = "http://d27ss180g8tp83.cloudfront.net/iwantu.jpeg"
        key_id = "APKAIS6OBYQ253QOURZA"
        pk_file = "doc/pk-APKAIS6OBYQ253QOURZA.pem"

        puts expiring_url_for_private_resource(resource, key_id, pk_file)

    Can anyone tell me what I'm doing wrong here?

    Read the article

  • Why does my custom Amazon EC2 AMI have limited instance type options?

    - by John
    The Basic 64-bit Amazon Linux AMI has the following instance type options available:

    - Micro
    - Large
    - Extra-Large
    - High-Memory Extra Large
    - ... etc.

    I booted up this AMI as a micro type, made customizations, shut it down, detached the volume, took a snapshot, and registered my own custom AMI:

        ec2-register --snapshot [snapshot_id] --description "my description" --name "my name" --kernel aki-427d952b

    That worked. HOWEVER, when I try to create an instance from my custom AMI, only the following instance types are available:

    - Micro
    - Small
    - High-CPU Medium

    which coincidentally are the same instance types available if you boot up the 32-bit Amazon image. Why do the available instance types of my custom image differ from those of the image I based it on?
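
    One thing worth checking, sketched here with the current boto3 SDK (the AMI id is a placeholder): the architecture recorded on the registered AMI is what gates the instance-type list, and the old registration tooling defaulted to i386 when no architecture was given explicitly.

        import boto3

        ec2 = boto3.client('ec2')
        # 'ami-xxxxxxxx' is a placeholder for the custom AMI's id
        image = ec2.describe_images(ImageIds=['ami-xxxxxxxx'])['Images'][0]
        # an i386 image is only offered the 32-bit instance types
        print(image['Architecture'], image.get('KernelId'))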

    Read the article

  • Are whole VM images backed up on Amazon EC2/S3?

    - by John
    I've been trying to get my head around Amazon Web Services as a VPS provider. My understanding is that an EC2 instance running Windows is basically a Windows VM, very similar to renting a VPS from a more traditional hosting provider. I don't want complex backups, either to administer or to restore; if my restore involves installing SVN, MySQL, Jira, etc. on a new box before I can even try to restore the backup, then it's no good to me. What I really want is a service which backs up my entire VM: if the PC running the VPS dies, the VM image is installed on a new PC and off we go again. With Amazon being all about flexibility and elasticity, I wondered if they have this service? I can't figure it out from reading their docs.
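
    For what it's worth, an EBS-backed instance can be captured whole as a new AMI and relaunched on fresh hardware; a minimal sketch with the current boto3 SDK (instance id and image name are placeholders):

        import boto3

        ec2 = boto3.client('ec2')
        # CreateImage captures an EBS-backed instance as a new AMI: a restorable
        # image of the whole VM that can be launched again on fresh hardware.
        # NoReboot avoids downtime, at some risk to filesystem consistency.
        resp = ec2.create_image(InstanceId='i-0123456789abcdef0',
                                Name='nightly-backup', NoReboot=True)
        print(resp['ImageId'])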

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server

    - by Vorleak Chy
    I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu. It works fine from one of my local Ubuntu machines and also from my laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It's so strange to me. I'm thinking it is some sort of problem with the security settings on the Amazon EC2 instance, which may have limited IP access, or a certificate that may need to be regenerated. Does anyone know a solution?

    Read the article

  • How to set up Amazon EC2 with my own OS and DB?

    - by Spencer Lim
    I have my own OS and DB, which are Windows Server 2008 R2 Hyper-V and SQL Server 2008 R2, both in Enterprise edition. May I know how to get them configured and running? With Amazon EC2, what else is a must to make this run? Also, how could I install the operating system and DNS? I have never run a server before, but I just need something like a VPS to support my development and testing, and Amazon EC2 seems the best and cheapest service at only $1 per hour. Any brief guide would be appreciated, thx =D

    Read the article

  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up for the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This also means the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload within the restrictions I have set? Thanks in advance.
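
    One pattern worth knowing about, sketched with the current boto3 SDK (bucket and key names are invented): the service keeps the account credentials and hands each installed client a short-lived presigned PUT URL, so the software on the customer's machine never holds them.

        import boto3

        s3 = boto3.client('s3')
        # the service mints a short-lived upload URL per customer request, so
        # the installed client never needs the account's own credentials
        url = s3.generate_presigned_url(
            'put_object',
            Params={'Bucket': 'customer-42-bucket', 'Key': 'uploads/report.pdf'},
            ExpiresIn=3600)  # valid for one hour
        print(url)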

    Read the article

  • How to run AWS sample Java code on an EC2 instance

    - by SeaPlusPlus
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolkit. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is, what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export a jar and upload that? Do I need to install anything special to actually run it? I'm just trying to run the basic DynamoDB example that connects to the database and adds a new table and row.

    Read the article

  • Amazon AWS s3fs mount problem on Fedora 14

    - by Alex
    I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I included the password credentials in /etc/ as specified in the guide. When I run:

        sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/

    it runs successfully (note: the bucket name is the same as the folder name in /mnt/). But when I run ls in /mnt/ I get the error "ls: cannot access bucket_name: Permission denied". When I run sudo chmod 640 /mnt/bucket_name I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally, but it is no longer mapped to the S3 bucket. So, basically, I have two questions: 1) how do I access the folder (/mnt/bucket_name) as usual after I mount it to the S3 bucket, and 2) how can I keep it mounted even after a machine restart? Regards
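
    Two things are commonly suggested for exactly these symptoms, offered here as an unverified sketch: mount with s3fs's allow_other option, so that non-root users can traverse the root-owned FUSE mount, and add an /etc/fstab entry so the bucket is remounted at boot.

        sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/ -o allow_other

        # /etc/fstab entry (old "s3fs#bucket" syntax) to remount at boot:
        s3fs#bucket_name  /mnt/bucket_name  fuse  allow_other,_netdev  0  0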

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have 1 EC2 DB server (MySQL + NFS file sharing + memcached) and 3 EC2 web servers (lighttpd) which mount the NFS folders on the DB server. Everything went smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything returns to normal. Normal files like .html are unaffected, and all servers have the same problem at exactly the same time. I have spent one whole day analyzing the cause. Finally, I found that when the problem appears, the file descriptor count of lighttpd suddenly increases a lot. I used

        ls /proc/1234/fd | wc -l

    to check the number of fds. It is around 250 in normal times; however, when the problem appears, it rises to 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on?

    [Attached: the CPU graph of one of the web servers.]

    Read the article

  • Amazon S3 permissions

    - by Joe
    Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so that only that user has access to that file? It seems like query string authentication requires an expiration date, and that won't work for me. Is there another way to do this?
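
    The usual approach, sketched with the current boto3 SDK (bucket and key names are invented), is to keep every object private and have the web application mint a fresh, short-lived query-string-authenticated URL on each page view, so the expiration is invisible to the authorized user:

        import boto3

        s3 = boto3.client('s3')
        # objects stay private; every page view gets a freshly minted URL,
        # so the short expiry is invisible to the authorized user
        url = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': 'myapp-user-files', 'Key': 'user-123/photo.jpg'},
            ExpiresIn=300)  # five minutes is plenty for the browser to fetch it
        print(url)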

    Read the article

  • Do I need an SSL certificate if just pointing my domain to CloudFront?

    - by hashpipe
    I have a website running on a domain (e.g. site.com). I have an additional domain (e.g. sitecdn.com) which basically points to Amazon CloudFront for delivery; CloudFront in turn fetches the data from the main domain (site.com). I use this setup primarily so that multiple subdomains of sitecdn.com can point to assets via the CDN. The main website has an SSL certificate, and I intend to serve all assets from the CDN over https links only, something like:

        <img src="https://img.sitecdn.com/image.jpg" />

    I'm a little confused about whether I need an SSL certificate for my CDN domain. In CloudFront I can set the distribution to allow both https and http traffic. Do I need an SSL certificate for this? If yes, where do I install it, since I don't have a server for sitecdn.com?

    Read the article

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back-end engineer, so I'm learning all of this stuff as I go along. For the longest time I've been using basic authentication for my users. Many developers advise against this approach, since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain the password. I wouldn't want to put my users' credentials at risk, so I would like to implement a more secure form of authentication. SSL seems to be the most viable option. My server doesn't serve any static content or anything crazy of that sort; all it does is send and receive JSON responses to and from my iOS application. Here is my current topology:

        iOS application ------ Amazon Elastic Load Balancer ------ EC2 instances running HTTP Gunicorn

    Gunicorn runs on port 8000. I have a CNAME record from GoDaddy for the Elastic Load Balancer DNS, so instead of using the long DNS name to make requests, I just use server.example.com. To interact with my servers I send requests to server.example.com:8000/. This setup works and has been solid, but I need something more secure. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA, or can I create my own? (Browsers will not be interacting with my servers; they are only designed to send JSON responses to my iOS application.)
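
    A common arrangement, sketched below with the current boto3 SDK against the classic ELB API (load balancer name and certificate ARN are placeholders), is to terminate SSL at the load balancer: an HTTPS listener on 443 forwards plain HTTP to port 8000 on the instances, so Gunicorn itself is unchanged. Note that stock iOS clients validate the certificate chain, so a self-signed certificate will not be trusted without extra work on the client.

        import boto3

        elb = boto3.client('elb')  # classic Elastic Load Balancer API
        elb.create_load_balancer_listeners(
            LoadBalancerName='my-load-balancer',
            Listeners=[{
                'Protocol': 'HTTPS',         # SSL terminates at the ELB...
                'LoadBalancerPort': 443,
                'InstanceProtocol': 'HTTP',  # ...plain HTTP continues to Gunicorn
                'InstancePort': 8000,
                'SSLCertificateId': 'arn:aws:iam::123456789012:server-certificate/my-cert',
            }])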

    Read the article

  • CheckPoint/Amazon VPC VPN tunnel working inconsistently

    - by Lee
    First time poster, so please be gentle and correct me if there's Server Fault etiquette I'm missing. We have two CheckPoint edge devices at sites A and B, independently managed, connecting to two Amazon private clouds. In both cases, the two Amazon VPCs are in the same community on the CheckPoint device, and a VPN tunnel exists between the two CheckPoint devices as well. Between sites A and B and the Amazon VPC in Northern Virginia, we are unable to keep more than one tunnel up. Both will come up, but tunnel 2 will drop an hour after initiation and will not come back up while tunnel 1 is up. We believe the 1-hour period is due to IPsec phase 2 renegotiation, but can't be sure; on our side, we see the tunnel 2 remote endpoint as not responding to phase 2 negotiation. Between sites A and B and the Amazon VPC in Oregon, we have no issues: both tunnels are up and fail over properly. The CheckPoint gateways are using domain-based VPNs, which, according to CheckPoint's advice to Amazon, won't work. Yet in Oregon it does. We've pursued this with Amazon and, despite the fact it's working in Oregon, they've refused to troubleshoot with us further. Can anyone suggest anything we can do to try to get this stabilized? Going to route-based VPNs is not an option for us.

    Read the article

  • Creating Signed URLs for Amazon CloudFront

    - by Zack
    Short version: how do I make signed URLs "on-demand" to mimic Nginx's X-Accel-Redirect behavior (i.e. protecting downloads) with Amazon CloudFront/S3, using Python?

    I've got a Django server up and running with an Nginx front-end. I've been getting hammered with requests and recently had to install it as a Tornado WSGI application to prevent it from crashing in FastCGI mode. Now my server is getting bogged down (i.e. most of its bandwidth is being used up) by too many requests for media, so I've been looking into CDNs, and I believe Amazon CloudFront/S3 would be the proper solution for me. I've been using Nginx's X-Accel-Redirect header to protect files from unauthorized downloading, but I don't have that ability with CloudFront/S3; however, they do offer signed URLs. I'm no Python expert by far and definitely don't know how to create a signed URL properly, so I was hoping someone would have a link explaining how to make these URLs "on-demand", or would be willing to explain it here; it would be greatly appreciated. Also, is this even the proper solution? I'm not too familiar with CDNs; is there a CDN that would be better suited for this?
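
    A minimal sketch of an on-demand canned-policy signer, assuming a PEM private key from a CloudFront key pair and using the Python cryptography package (resource URL, key id and file name are placeholders). The important details are that the raw policy string is what gets signed (RSA-SHA1 with PKCS#1 v1.5 padding) and that the signature is base64-encoded with CloudFront's substitutions of +, = and / by -, _ and ~:

        import base64, time
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def cloudfront_b64(data: bytes) -> str:
            # CloudFront's URL-safe variant of base64
            return (base64.b64encode(data).decode()
                    .replace('+', '-').replace('=', '_').replace('/', '~'))

        def signed_url(resource, key_pair_id, pem_path, expires_in=3600):
            expires = int(time.time()) + expires_in
            # canned policy: signed exactly as-is, never URL-safe-encoded first
            policy = ('{"Statement":[{"Resource":"%s","Condition":{"DateLessThan":'
                      '{"AWS:EpochTime":%d}}}]}' % (resource, expires))
            with open(pem_path, 'rb') as f:
                key = serialization.load_pem_private_key(f.read(), password=None)
            sig = key.sign(policy.encode('utf-8'), padding.PKCS1v15(), hashes.SHA1())
            return '%s?Expires=%d&Signature=%s&Key-Pair-Id=%s' % (
                resource, expires, cloudfront_b64(sig), key_pair_id)

        print(signed_url('http://dxxxxxxxx.cloudfront.net/video.mp4',
                         'APKAEXAMPLE', 'pk-APKAEXAMPLE.pem'))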

    Read the article
