Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

Page 105/121 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's the sort-of universal default, but it has been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about 100s of millions of URLs. There is (or should be - MySQL can't seem to handle it) a very high volume of inserts / updates / retrievals but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/the relational model/etc. are optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • SSL port didn't work on nginx

    - by Jin Lin
    I set up Unicorn and nginx on one of my EC2 machines, and my requests load fine with nginx listening on port 80. But when I enable SSL, listening on port 443, it doesn't work; it still works over port 80 (plain HTTP). Here is the server block:

        server {
          listen 443 ssl;

          # replace with your domain name
          server_name domain.com;

          # replace this with your static Sinatra app files, root + public
          root /home/ubuntu/domain/public;

          ssl on;
          ssl_certificate /etc/ssl/domain.crt;
          ssl_certificate_key /etc/ssl/domain.key;

          # maximum accepted body size of client request
          client_max_body_size 4G;

          # the server will close connections after this time
          keepalive_timeout 5;

          location ~ ^/assets/ {
            add_header ETag "";
            gzip_static on;
            expires max;
            add_header Cache-Control public;
          }

          location / {
            proxy_set_header X-Forwarded-Proto https;
            try_files $uri @app;
          }

          location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # pass to the upstream unicorn server mentioned above
            proxy_pass http://unicorn_server;
          }
        }
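
    The config proxies to an upstream block named unicorn_server that isn't shown in the question. A minimal sketch of what that block usually looks like for a Unicorn-over-a-Unix-socket setup (the socket path here is an assumption, not taken from the question):

        # assumed upstream definition; adjust the socket path to match unicorn.rb
        upstream unicorn_server {
          server unix:/home/ubuntu/domain/tmp/sockets/unicorn.sock fail_timeout=0;
        }

    It is also worth checking that the EC2 security group allows inbound traffic on port 443; with 80 open and 443 blocked, the symptoms would look exactly like this.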

    Read the article

  • Subdomains not working with virtual hosts on apache2 ubuntu

    - by cy834sh4rk
    I'm trying to set up a subdomain on my EC2 account but can't figure out what's going on. I've looked for a few hours and haven't been able to find an answer :-/ I'm trying to set it up using virtual hosts, but no matter what I try the browser can't find the subdomain :-( I have the following vhost files set up.

    apache2/sites-available/mysite (this site currently works):

        <VirtualHost *:80>
          ServerName mysite.com
          ServerAdmin webmaster@localhost
          DocumentRoot /home/sites/mysite
          <Directory /home/sites/mysite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/mysite-error.log
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/mysite-access.log combined
        </VirtualHost>

    apache2/sites-available/red (this is the subdomain I'm trying to set up):

        <VirtualHost *:80>
          ServerName red.mysite.com
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/red
          <Directory /var/www/red>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/red-error.log
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/red-access.log combined
        </VirtualHost>

    Apache mod_rewrite is enabled. I've enabled both sites using a2ensite, and I make sure I restart Apache every time I make a change. /etc/hosts:

        127.0.0.1 localhost
        127.0.0.1 mysite.com
        127.0.0.1 red.mysite.com

    Any help would be appreciated. Thanks!
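
    One thing to check (an assumption about the setup, not stated in the question): the /etc/hosts entries above only affect name resolution on the EC2 instance itself; a browser on another machine needs a public DNS A or CNAME record for red.mysite.com pointing at the instance. A quick way to verify from the client side, and to re-enable and reload the vhost on the server:

        # on the machine running the browser: does the name resolve at all?
        dig +short red.mysite.com

        # on the EC2 instance: enable the vhost and reload Apache
        sudo a2ensite red
        sudo service apache2 reload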

    Read the article

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting an error because they only allow medium trust, and the MySQL Connector/Net I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found out that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support, allowing hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

        ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string." Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have references to the following in the project: MySql.Data, MySql.Data.Entity, MySql.Web. My connection string in Web.config:

        <connectionStrings>
          <add name="AESSmartEntities"
               connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
               providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?
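
    One common cause of a missing ProviderManifestToken after switching Connector/Net versions is that the ADO.NET provider factory is still registered (in machine.config or Web.config) against the old assembly version. A sketch of the Web.config registration, assuming version 6.6.4 and the standard Connector/Net public key token - verify both against the assembly you actually installed:

        <system.data>
          <DbProviderFactories>
            <!-- remove any registration inherited from machine.config for the old version -->
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>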

    Read the article

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague. If the site needs to scale up at short notice, does the cloud provider (Amazon, Rackspace, whoever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.

    Read the article

  • Looking for an open-source library management software on LAMP, preferably Java based

    - by Sathya
    I am planning to set up a library in my organization and am looking for an open-source solution. While I continue googling, please share your experience with any software you might have come across. The features I'm looking for:

        1. ISBN based.
        2. Interfaces for barcode scanning (of the ISBN code).
        3. Integration with Amazon (for reviews and ratings).
        4. All the usual library facilities (search, check-in, check-out, checkout history, user reviews, user ratings, etc.).
        5. Nice-to-have features: more personalization, booking of titles in advance, recommending books to select users, etc.

    Read the article

  • Apache Axis2 tried books

    - by josh
    I am looking to get my feet wet in Java web services. Apache CXF and Axis2 seem to be in demand a lot. I googled a bit to find a good Axis2 book with lots of working examples. Some books, like Quick Start Apache Axis 2, seem to have bad reviews on Amazon, and titles like Web Services with Apache CXF and Axis2 seem to lack working examples. I want opinions from people who currently work with Axis2 and have read a book or two on the subject. Which is a good book for an Axis2 n00b?

    Read the article

  • RDF Usage Rates for Syndication

    - by David in Dakota
    Is RDF still widely used for content syndication? Specifically, I know only of Slashdot as a large-scale website syndicating content in that format (versus, say, RSS). Understandably this might seem vague to answer, so more specifically:

        - Can anyone list any larger sites, similar in scale to Amazon or CNN, using it?
        - Any web-based publishing platforms (WordPress, Joomla, etc.) that generate syndication feeds with this XML vocabulary?
        - Any other, more quantifiable evidence that it is used for syndication online?

    I understand that RDF may be a parent specification, but in this case I'm talking about sites that syndicate content using <rdf:RDF> as the root element and heavily leveraging elements from the RDF namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#
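
    For reference, a minimal illustrative skeleton (not taken from any particular site) of what such a feed looks like: RSS 1.0 is exactly this kind of syndication, with rdf:RDF as the root element and the channel/item structure in the RSS 1.0 namespace.

        <?xml version="1.0"?>
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns="http://purl.org/rss/1.0/">
          <channel rdf:about="http://example.com/feed.rdf">
            <title>Example Feed</title>
            <link>http://example.com/</link>
            <description>Illustrative RSS 1.0 (RDF) feed</description>
            <items>
              <rdf:Seq>
                <rdf:li rdf:resource="http://example.com/posts/1"/>
              </rdf:Seq>
            </items>
          </channel>
          <item rdf:about="http://example.com/posts/1">
            <title>First post</title>
            <link>http://example.com/posts/1</link>
          </item>
        </rdf:RDF>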

    Read the article

  • Ruby ways to authenticate using headers?

    - by webdestroya
    I am designing an API system in Ruby-on-Rails, and I want to be able to log queries and authenticate users. However, I do not have a traditional login system, I want to use an APIkey and a signature that users can submit in the HTTP headers in the request. (Similar to how Amazon's services work) Instead of requesting /users/12345/photos/create I want to be able to request /photos/create and submit a header that says X-APIKey: 12345 and then validate the request with a signature. Are there any gems that can be adapted to do that? Or better yet, any gems that do this without adaptation? Or do you feel that it would be wiser to just have them send the API key in each request using the POST/GET vars?
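
    A minimal sketch of the pattern for a Rails 3-era controller filter that authenticates from headers (the ApiUser model, its columns, and the header names are assumptions, not from the question):

        require 'openssl'

        # app/controllers/api_controller.rb
        class ApiController < ApplicationController
          before_filter :authenticate_api_user!

          private

          # Look up the account by its API key header, then verify an HMAC signature
          # computed over the request path with the account's secret key.
          def authenticate_api_user!
            api_key   = request.headers['X-APIKey']
            signature = request.headers['X-Signature']
            user = ApiUser.find_by_api_key(api_key)

            if user
              expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, user.secret, request.fullpath)
            end
            head :unauthorized unless user && signature == expected
          end
        end

    Constant-time comparison and signing over the request body are left out to keep the sketch short; the point is simply that the key identifies the caller while the secret never travels over the wire.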

    Read the article

  • Expression Tree : C#

    - by nettguy
    My understanding of expression trees is: expression trees are an in-memory representation of an expression, such as an arithmetic or boolean expression. The expression is stored as a parsed tree, so it can easily be translated into another language. LINQ to SQL uses expression trees: when a LINQ to SQL query is compiled, the compiler turns it into an expression tree, the provider translates that tree into T-SQL, and the T-SQL statement is sent to SQL Server, which executes it and sends the results back. That is why executing LINQ to SQL gives you IQueryable<T>, not just IEnumerable<T> - IQueryable looks roughly like:

        public interface IQueryable : IEnumerable
        {
            Type ElementType { get; }
            Expression Expression { get; }
            IQueryProvider Provider { get; }
        }

    Questions:

        1. Microsoft uses expression trees to implement LINQ to SQL. In what other ways can I use expression trees to improve my code?
        2. Apart from LINQ to SQL and LINQ to Amazon, who has used expression trees in their applications?
        3. LINQ to Objects returns IEnumerable and LINQ to SQL returns IQueryable. What does LINQ to XML return?
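
    A small, self-contained illustration of the idea (not from the question): build an expression tree from a lambda, inspect it as data, then compile it into a runnable delegate.

        using System;
        using System.Linq.Expressions;

        class ExpressionDemo
        {
            static void Main()
            {
                // Assigning a lambda to Expression<...> makes the compiler emit a tree, not IL.
                Expression<Func<int, bool>> isAdult = age => age >= 18;

                // The tree can be inspected like any other object graph...
                Console.WriteLine(isAdult.Body);        // (age >= 18)

                // ...or compiled into a delegate and executed.
                Func<int, bool> check = isAdult.Compile();
                Console.WriteLine(check(21));           // True
            }
        }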

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0 and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns (if any) I should have with what I am seeing. Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour each server in the 3-server cluster loses about 100 MB of free memory. This is a linear slope for about 6 hours, then it appears to level out, but the servers still appear to lose about 10 MB/hour. If I drop the page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the Unicorns, and the memory-loss pattern begins again over the hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoization, and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location

          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect

          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end

          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia

    Read the article

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others. The problem I am running into is that some (3,4)-grams in my data which have super-high IDF actually consist of component unigrams and bigrams which have really low IDF. For example, "you've never tried" has a very high IDF, while each of the component unigrams has very low IDF. I need to come up with a function that takes the document frequencies of an n-gram and all its component (n-k)-grams and returns a more meaningful measure of how much the phrase distinguishes the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models leverage to perform well, and so how well they would do for IDF scores. Does anybody have any better ideas?
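
    One standard family of measures for exactly this problem is collocation statistics such as pointwise mutual information, which compare the n-gram's observed document frequency with what its components would predict under independence. This is not from the question, just a possible starting point; a rough sketch:

        import math

        def pmi_like_score(df_ngram, component_dfs, n_docs):
            """Compare an n-gram's document frequency to the frequency its
            components would predict if they were independent.

            df_ngram      -- document frequency of the full n-gram
            component_dfs -- document frequencies of its component terms
            n_docs        -- total number of documents in the corpus
            """
            p_ngram = df_ngram / float(n_docs)
            p_independent = 1.0
            for df in component_dfs:
                p_independent *= df / float(n_docs)
            return math.log(p_ngram / p_independent)

    In practice you would normalize by the n-gram length so that different orders are comparable, since longer n-grams get a larger score from the independence baseline alone.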

    Read the article

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log

    I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet the only indication I get is a status 500 code in the Apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug, and the error log only outputs:

        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico

    The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp and checking its log, which just confirms the user is executing correctly:

        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000

    This is on Ubuntu 12.04 x86_64 with the following PHP modules:

        ii  php5         5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi     5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli     5.3.10-1ubuntu3.1  command-line interpreter for the php5 scripting language
        ii  php5-common  5.3.10-1ubuntu3.1  Common files for packages built from the php5 source
        ii  php5-curl    5.3.10-1ubuntu3.1  CURL module for php5
        ii  php5-gd      5.3.10-1ubuntu3.1  GD module for php5
        ii  php5-mysql   5.3.10-1ubuntu3.1  MySQL module for php5

    So, what am I missing here? Why no error reporting?
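
    Since some applications and some fatal errors bypass the php.ini display settings, one common next step (a generic debugging sketch, not specific to this Drupal setup) is to force error output at the very top of index.php and register a shutdown handler that logs the last fatal error:

        <?php
        // Temporary debugging only - remove once the fault is found.
        ini_set('display_errors', '1');
        error_reporting(E_ALL);

        // Fatal errors never reach normal error handlers; catch them at shutdown.
        register_shutdown_function(function () {
            $err = error_get_last();
            if ($err !== null) {
                error_log(print_r($err, true));
            }
        });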

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, and only for the last 2-3 weeks, we already have a 5 GB database with around 10 million records. This is making any queries against the database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd. Your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances - say 3 Apache front ends, with the web root set up on the Gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to be not only an additional Apache front end but also another server in the Gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
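
    For what it's worth, newer GlusterFS releases let you raise the replica count while adding bricks in a single step; a sketch, assuming two new servers at .153 and .154 (check your Gluster version's documentation, since older releases could not change the replica count of an existing volume):

        gluster volume add-brick test-volume replica 5 \
            192.168.0.153:/test-volume 192.168.0.154:/test-volume
        gluster volume heal test-volume full   # populate the new bricks

    The heal step is what copies existing data onto the newly added replicas.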

    Read the article

  • How to decouple an app's agile development from a database using BDUF?

    - by Rob Wells
    G'day, I was reading the article "Database as a Fortress" by Dan Chak from the excellent book "97 Things Every Software Architect Should Know" (sanitised Amazon link), which suggests that databases should not be designed using an agile approach. There's an SO question on agile approaches and databases, "Agile development and database changes", which has some excellent answers covering agile development approaches. In fact, one of the answers supplies a brilliant idea of what's needed for each update of the DB. ;-) But after reading Dan Chak's article, I am left wondering if an agile approach is really suitable for large-scale systems. This of course leads to the question: how do you best decouple the agile development of an application from the BDUF design of the database it interacts with, without adding complicated translation layers to the final design? Any suggestions? cheers,

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500 GB of data that's currently stored across my various machines. I don't care about data retention rates, as this is only a backup of my data, not primary storage for it. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of this: Mozy and Carbonite. However, both services impose low bandwidth caps, on the order of 50 kb/s, which prevent me from backing up a data set of this size effectively (somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it tries to ignore pretty much every file in my backup set by default, because the files are mostly iso and vmdk files, which aren't backed up by default. There are other services, such as EC2, which don't have such bandwidth caps, but they typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after ~2-3 months.) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.) Does anything exist meeting those requirements, or am I basically hosed?

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project requires live data to be viewable by customers. The project's main points are:

        1. Devices in the field send data to the server after logging in (the devices are also like website users).
        2. A background import process imports the uploaded data into Postgres.
        3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
        4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two servers. But again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
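
    One piece that Django supports directly (1.2 and later) is splitting reads onto a Postgres replica while keeping a single primary for writes and for the admin; a minimal sketch, with host names invented for illustration:

        # settings.py
        DATABASES = {
            'default': {   # primary: all writes, device commands, admin
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'project',
                'HOST': 'db-primary.example.internal',
            },
            'replica': {   # read-only streaming replica for dashboards/analysis
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'project',
                'HOST': 'db-replica.example.internal',
            },
        }
        DATABASE_ROUTERS = ['routers.ReadReplicaRouter']

        # routers.py
        import random

        class ReadReplicaRouter(object):
            def db_for_read(self, model, **hints):
                # Spread reads across the primary and the replica.
                return random.choice(['default', 'replica'])

            def db_for_write(self, model, **hints):
                return 'default'

    This keeps a single management point (the Django admin talks to the primary) while letting additional machines absorb read load; it does not by itself solve write scaling, which is where the device import volume will eventually push you.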

    Read the article

  • Java interface and abstract class issue

    - by George2
    Hello everyone, I am reading the book Hadoop: The Definitive Guide, http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/0596521979/ref=sr_1_1?ie=UTF8&s=books&qid=1273932107&sr=8-1 In chapter 2 (page 25), it is mentioned: "The new API favors abstract classes over interfaces, since these are easier to evolve. For example, you can add a method (with a default implementation) to an abstract class without breaking old implementations of the class." What does this mean (especially "breaking old implementations of the class")? I would appreciate it if anyone could show me a sample of why, from this perspective, an abstract class is better than an interface. Thanks in advance, George
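
    A small illustration of the point (the class names are invented, not taken from the book): if FileSystem were an interface, adding exists() in a later release would stop every existing implementation from compiling until it implemented the new method. With an abstract class, the new method ships with a default body and old subclasses keep working unchanged.

        // Version 1 shipped with only open(); version 2 adds exists().
        abstract class FileSystem {
            abstract byte[] open(String path);

            // Added later: old subclasses inherit this default and still compile.
            boolean exists(String path) {
                try {
                    open(path);
                    return true;
                } catch (RuntimeException e) {
                    return false;
                }
            }
        }

        // Written against version 1; unaffected by the new method.
        class LocalFileSystem extends FileSystem {
            byte[] open(String path) {
                return new byte[0]; // stub for the example
            }
        }

        class Demo {
            public static void main(String[] args) {
                FileSystem fs = new LocalFileSystem();
                System.out.println(fs.exists("/tmp/x")); // true, via the inherited default
            }
        }

    (Java 8 later added default methods on interfaces, which addresses the same problem, but that postdates the book.)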

    Read the article

  • How to bind a control to a singleton in Cocoa?

    - by SphereCat1
    I have a singleton in my FTP app designed to store all of the types of servers that the app can handle, such as FTP or Amazon S3. These types are plugins which are located in the app bundle. Their path is located by applicationWillFinishLoading: and sent to the addServerType: method inside the singleton to be loaded and stored in an NSMutableDictionary. My question is this: How do I bind an NSDictionaryController to the dictionary inside the singleton instance? Can it be done in IB, or do I have to do it in code? I need to be able to display the dictionary's keys in an NSPopupButton so the user can select a server type. Thanks in advance! SphereCat1
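
    A sketch of doing it in code rather than in IB (the singleton accessor name, the "serverTypes" KVC key, and the outlet names are assumptions): expose the dictionary through a KVC-compliant property on the singleton, then bind the NSDictionaryController to it and the popup to the controller. Interface Builder can only bind to objects it can instantiate or reach through File's Owner, which is why code is often simpler for a singleton.

        // In the window controller, e.g. in -awakeFromNib
        ServerTypeManager *manager = [ServerTypeManager sharedManager]; // assumed singleton accessor

        // Give the dictionary controller its content.
        [dictionaryController bind:NSContentDictionaryBinding
                          toObject:manager
                       withKeyPath:@"serverTypes"   // assumed KVC key for the NSMutableDictionary
                           options:nil];

        // Show the dictionary's keys in the popup.
        [serverTypePopup bind:NSContentBinding
                     toObject:dictionaryController
                  withKeyPath:@"arrangedObjects"
                      options:nil];
        [serverTypePopup bind:NSContentValuesBinding
                     toObject:dictionaryController
                  withKeyPath:@"arrangedObjects.key"
                      options:nil];

    For this to update as plugins are registered, addServerType: needs to mutate the dictionary in a KVO-compliant way (or the property needs to be reassigned) so the bindings see the change.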

    Read the article

  • Nginx proxy to IIS Connection Timeout

    - by MitMaro
    I am having an issue with random timeouts on an nginx proxy connecting to an IIS machine. I have been watching a packet capture between the two servers, and it seems that the IIS machine receives a SYN packet but does not respond with the SYN-ACK I would expect. Before the timeout occurs there seems to be a slower response from the IIS server. There is no unusual memory or processor usage on the IIS or nginx machine. Some information on the servers and setup:

    Nginx machine:

        Ubuntu 10.04 64bit
        Nginx 0.7.65
        Amazon EC2

    Windows machine:

        Windows Server 2008
        IIS 7
        ASP.net application in Integrated mode

    Nginx error:

        2011/01/10 17:57:40 [error] 8297#0: *30 connect() failed (110: Connection timed out) while connecting to upstream, client: 209.***.***.***, server: secure.example.com, request: "GET /a/path/deliver.aspx HTTP/1.1", upstream: "http://***.***.***.****:****//another/path/deliver.aspx", host: "secure.example.com"

    Wireshark packets:

        6521.449528 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422103 TSER=0 WS=7
        6524.443239 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477422403 TSER=0 WS=7
        6530.443241 10.***.***.*** -> 174.***.***.*** TCP 38695 > us-cli [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=477423003 TSER=0 WS=7
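
    Not a root-cause fix, but while investigating it can help to make nginx's upstream timeouts and retry behaviour explicit rather than relying on defaults; a sketch for the proxy location (the upstream name and the values are illustrative assumptions):

        location / {
            proxy_pass             http://iis_backend;
            proxy_connect_timeout  10s;              # how long to wait for the TCP handshake
            proxy_read_timeout     60s;              # how long to wait for a response
            proxy_next_upstream    error timeout;    # retry another upstream server on failure
        }

    If the capture really shows unanswered SYNs, the problem is below nginx (TCP backlog exhaustion or filtering on the Windows side are typical suspects), and the nginx settings above only control how gracefully the proxy reacts to it.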

    Read the article

  • How important is caching for a site's speed with PHP?

    - by benhowdle89
    I've just made a user-content oriented website, http://www.humanisms.co.uk. It's done in PHP, MySQL and jQuery AJAX. At the moment there are only a dozen or so submissions, and already I can feel it lagging slightly when it goes to a new page (and therefore runs a new MySQL query). Is it more important for me to try to optimise my MySQL queries (with prepared statements), or is it worth looking at CDNs (Amazon S3) and caching (much like the WordPress plugin WP Super Cache), which works by serving static HTML files when no new content has been submitted? Which route is the most beneficial for me, as a developer, to take - i.e. where am I better off concentrating my efforts to speed up the site?
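
    For reference, prepared statements in PHP look like the sketch below (the database, table and column names are made up, and $user/$pass/$currentPage are assumed to exist). They mainly protect against SQL injection and shave off repeated query parsing, so on a site this small the larger wins usually come from caching and from indexing the queried columns rather than from this alone.

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=humanisms', $user, $pass);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // Prepared once; executed with whatever parameters the page needs.
        $stmt = $pdo->prepare('SELECT id, title, body FROM submissions WHERE page = :page');
        $stmt->execute(array(':page' => $currentPage));
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);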

    Read the article

  • Why am I getting "undefined" type errors in my jQuery from my PHP JSON?

    - by Brained Washed
    I don't know what is wrong - am I just missing something? All my expected data is successfully received, based on Firebug's console tab; the problem is displaying the data. Here's my jQuery code:

        success: function(data){
            var toAppend = '';
            if(typeof data === "object"){
                for(var i = 0; i < data.length; i++){
                    toAppend += '<tr><td colspan="2">' + data[i]['main-asin'][0] + '</td></tr>';
                    toAppend += '<tr><td>' + data[i]['sub-asin'][0] + '</td><td></td></tr>';
                }
                $('.data-results').append(toAppend);
            }
        }

    Here's my PHP code:

        foreach($xml->Items->Item as $item){
            $items_from_amazon[] = array('main-asin' => $item->ASIN);
            foreach($xml->Items->Item->Variations->Item as $item){
                $items_from_amazon[] = array('sub-asin' => $item->ASIN);
            }
        }
        echo json_encode($items_from_amazon); //return amazon products

    And here's the result from my firebug:
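
    One thing that stands out (a guess based only on the code above): each element of $items_from_amazon has either a 'main-asin' key or a 'sub-asin' key, never both, so whichever key is missing on a given element comes through as undefined in the JavaScript loop. Casting the SimpleXML values to strings and grouping each product with its variations would give the JavaScript a predictable shape; a sketch:

        <?php
        $items_from_amazon = array();
        foreach ($xml->Items->Item as $item) {
            $entry = array('main-asin' => (string)$item->ASIN, 'sub-asins' => array());
            if (isset($item->Variations)) {
                foreach ($item->Variations->Item as $variation) {
                    $entry['sub-asins'][] = (string)$variation->ASIN;
                }
            }
            $items_from_amazon[] = $entry;
        }
        echo json_encode($items_from_amazon); //return amazon products

    The jQuery loop would then read data[i]['main-asin'] directly and iterate over data[i]['sub-asins'] instead of indexing [0].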

    Read the article

  • What is the Simplest Possible Payment Gateway to Implement? (using Django)

    - by b14ck
    I'm developing a web application that will require users either to make one-time deposits of money into their account or to sign up for recurring billing of a certain amount each month. I've been looking at various payment gateways, but most (if not all) of them seem complex and difficult to get working. I also see no really active Django projects which offer simple views for making payments. Ideally, I'd like to use something like Amazon FPS, so that I can see online transaction logs, refund money, etc., but I'm open to other things. I just want the EASIEST possible payment gateway to integrate with my site. I'm not looking for anything fancy; whatever does the job and requires < 10 hours to get working from start to finish would be perfect. I'll give answer points to whoever can point out a good one. Thanks!

    Read the article

  • How do API Keys and Secret Keys work?

    - by viatropos
    I am just starting to think about how api keys and secret keys work. Just 2 days ago I signed up for Amazon S3 and installed the S3Fox Plugin. They asked me for both my Access Key and Secret Access Key, both of which require me to login to access. So I'm wondering, if they're asking me for my secret key, they must be storing it somewhere right? Isn't that basically the same thing as asking me for my credit card numbers or password and storing that in their own database? How are secret keys and api keys supposed to work? How secret do they need to be? Are these applications that use the secret keys storing it somehow? Thanks for the insight.
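
    In general terms (this describes AWS-style request signing broadly, not S3Fox specifically), the access key identifies you in the clear, while the secret key never travels over the wire: both sides use it to compute an HMAC signature over the request, and the service compares signatures. A minimal sketch in Python, with made-up key values:

        import hashlib
        import hmac

        def sign_request(secret_key, string_to_sign):
            """Both the client and the service compute this; only the signature is sent."""
            digest = hmac.new(secret_key.encode('utf-8'),
                              string_to_sign.encode('utf-8'),
                              hashlib.sha256)
            return digest.hexdigest()

        # Client side: send access_key + signature along with the request.
        access_key = 'AKIA...EXAMPLE'       # public identifier, sent in the clear
        secret_key = 'wJalr...EXAMPLEKEY'   # never sent; both parties hold a copy
        signature = sign_request(secret_key, 'GET\n/my-bucket/photo.jpg\n2024-01-01T00:00:00Z')

        # Server side: recompute from its stored copy of the secret and compare.
        assert signature == sign_request(secret_key, 'GET\n/my-bucket/photo.jpg\n2024-01-01T00:00:00Z')

    A client tool does have to store the secret locally in order to sign requests, which is why the question is a fair one; the practical difference from a password is that the key is only ever used for signing and can be revoked and rotated without changing the account credentials.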

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >