Search Results

Search found 9975 results on 399 pages for 'amazon plugin'.

  • Apache not responding in Amazon EC2

    - by Viren
    This might sound odd, but I am facing a frustrating issue with my Amazon EC2 instance: Apache is not responding on port 80, which is strange because I can't even see incoming packets to port 80 in the tcpdump output. As far as I can tell, all security rules are in place correctly, at least in the Amazon console. When I reconfigured Apache to listen on port 8080 and added port 8080 to the security rules, everything worked, but I just can't understand why port 80 is not responding. Needless to say, since port 8080 is responding, all my CNAME and A records are working too. UPDATE: No firewall issue either; I just cross-checked iptables and the list is empty. Can someone shed some light on this?
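    A quick diagnostic sketch for this kind of symptom (generic Linux commands, not from the original post; config paths cover both RHEL- and Debian-style layouts): confirm whether anything is actually bound to port 80 and whether it answers locally, before suspecting the security group.

      # Is anything listening on port 80 at all?
      sudo netstat -tlnp | grep ':80 '
      # Does Apache answer a local request? (this bypasses security groups entirely)
      curl -sI http://127.0.0.1/ | head -n 1
      # Confirm the Listen directives and that Apache is actually running
      grep -R '^Listen' /etc/httpd /etc/apache2 2>/dev/null
      sudo service httpd status 2>/dev/null || sudo service apache2 status

    If the local curl succeeds but remote requests on port 80 don't, the blockage is outside the instance (security group, network ACL, or an intermediate proxy); if it fails locally, the problem is in the Apache configuration itself.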

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log in to the master node and submit the following command:

      bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3>

    it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F', and the second is thrown when I do replace them with '%2F':

      1) java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile>
      2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.

    Note: 1) When I ran jps to see what tasks were running on the master, it showed only 1116 NameNode, 1699 Jps, 1180 JobTracker, leaving out DataNode and TaskTracker. 2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI. PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues copying data to/from S3 from/to HDFS. Also, what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running MapReduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.
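    One common way around the slash-in-the-secret-key problem (a sketch, not from the post: the property names are the classic s3n ones and the bucket/paths are placeholders) is to pass the credentials as configuration properties instead of embedding them in the URI, which sidesteps the %2F escaping entirely:

      # Copy input from S3 into HDFS with distcp; credentials go in -D properties,
      # so the secret key never has to be URL-encoded inside the URI.
      hadoop distcp \
        -Dfs.s3n.awsAccessKeyId=YOUR_ACCESS_KEY \
        -Dfs.s3n.awsSecretAccessKey=YOUR_SECRET_KEY \
        s3n://your-bucket/path/to/input hdfs:///user/hadoop/input

    As for distcp itself: it runs as a MapReduce job that copies data in parallel across the cluster; once the data is in HDFS, HDFS handles block replication internally, so no further distribution step is needed.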

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi All, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that: this data can be created and edited on demand; it is always backed up and is simple to restore in case the master copy becomes unusable; and all sensitive personal information residing in the database is available only to authorized users. This database won't be online in the first 6 months; it will go online only after a website is built on top of it. I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?

    Read the article

  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware configuration (instance type) because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of which have additional costs compared with plain Linux instances). The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. This Instance object also has a .getLicense().getPool(), which according to the Java API should return "The license pool from which this license was used (ex: 'windows')." I thought this is where it might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null. I've navigated around the EC2 web console without being able to find this information, but I'm hoping that it is possible; otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :) /Geir
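    For what it's worth, a hedged sketch using newer tooling (the aws CLI, which post-dates this question; output formatting is illustrative): the Platform field at least separates Windows from Linux instances, though it still won't tell you whether SQL Server is bundled, so the AMI used at launch remains the authoritative source for that.

      # List instance id, type and platform; Platform is only populated ("windows")
      # for Windows instances and stays empty for Linux ones.
      aws ec2 describe-instances \
        --query 'Reservations[].Instances[].[InstanceId,InstanceType,Platform]' \
        --output table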

    Read the article

  • Unable to install Wordpress on Amazon's EC2 instance due to missing php-mbstring

    - by alexus
    I've created a new instance on Amazon's EC2 and I'm trying to install WordPress, but it is failing due to php-mbstring:

      # yum install wordpress
      Loaded plugins: amazon-id, rhui-lb
      Resolving Dependencies
      --> Running transaction check
      ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
      --> Processing Dependency: php-simplepie >= 1.3.1 for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-gd for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-PHPMailer for package: wordpress-3.9.1-1.el7.noarch
      --> Running transaction check
      ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
      --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
      ---> Package php-gd.x86_64 0:5.4.16-21.el7 will be installed
      --> Processing Dependency: libpng15.so.15(PNG15_0)(64bit) for package: php-gd-5.4.16-21.el7.x86_64
      --> Processing Dependency: libt1.so.5()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
      --> Processing Dependency: libpng15.so.15()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
      --> Processing Dependency: libXpm.so.4()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
      --> Processing Dependency: libX11.so.6()(64bit) for package: php-gd-5.4.16-21.el7.x86_64
      ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
      --> Processing Dependency: php-IDNA_Convert for package: php-simplepie-1.3.1-4.el7.noarch
      ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
      --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
      --> Running transaction check
      ---> Package libX11.x86_64 0:1.6.0-2.1.el7 will be installed
      --> Processing Dependency: libX11-common = 1.6.0-2.1.el7 for package: libX11-1.6.0-2.1.el7.x86_64
      --> Processing Dependency: libxcb.so.1()(64bit) for package: libX11-1.6.0-2.1.el7.x86_64
      ---> Package libXpm.x86_64 0:3.5.10-5.1.el7 will be installed
      ---> Package libpng.x86_64 2:1.5.13-5.el7 will be installed
      ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
      ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
      --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
      ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
      ---> Package t1lib.x86_64 0:5.1.2-14.el7 will be installed
      ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
      --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
      --> Running transaction check
      ---> Package libX11-common.noarch 0:1.6.0-2.1.el7 will be installed
      ---> Package libxcb.x86_64 0:1.9-5.el7 will be installed
      --> Processing Dependency: libXau.so.6()(64bit) for package: libxcb-1.9-5.el7.x86_64
      ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
      ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
      --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
      ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
      ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
      --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
      --> Running transaction check
      ---> Package libXau.x86_64 0:1.0.8-2.1.el7 will be installed
      ---> Package php-IDNA_Convert.noarch 0:0.8.0-2.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-IDNA_Convert-0.8.0-2.el7.noarch
      ---> Package php-PHPMailer.noarch 0:5.2.6-1.el7 will be installed
      --> Processing Dependency: php-mbstring >= 5.1.0 for package: php-PHPMailer-5.2.6-1.el7.noarch
      ---> Package php-simplepie.noarch 0:1.3.1-4.el7 will be installed
      --> Processing Dependency: php-mbstring for package: php-simplepie-1.3.1-4.el7.noarch
      ---> Package wordpress.noarch 0:3.9.1-1.el7 will be installed
      --> Processing Dependency: php-mbstring for package: wordpress-3.9.1-1.el7.noarch
      --> Processing Dependency: php-enchant for package: wordpress-3.9.1-1.el7.noarch
      --> Finished Dependency Resolution
      Error: Package: php-PHPMailer-5.2.6-1.el7.noarch (epel)
             Requires: php-mbstring >= 5.1.0
      Error: Package: php-IDNA_Convert-0.8.0-2.el7.noarch (epel)
             Requires: php-mbstring
      Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
             Requires: php-mbstring
      Error: Package: php-simplepie-1.3.1-4.el7.noarch (epel)
             Requires: php-mbstring
      Error: Package: wordpress-3.9.1-1.el7.noarch (epel)
             Requires: php-enchant
       You could try using --skip-broken to work around the problem
       You could try running: rpm -Va --nofiles --nodigest

    I'm using RHEL 7:

      # cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 7.0 (Maipo)
      # yum repolist
      Loaded plugins: amazon-id, rhui-lb
      repo id                                           repo name                                                       status
      epel/x86_64                                       Extra Packages for Enterprise Linux 7 - x86_64                   4,325
      rhui-REGION-client-config-server-7/x86_64         Red Hat Update Infrastructure 2.0 Client Configuration Server 7     1
      rhui-REGION-rhel-server-releases/7Server/x86_64   Red Hat Enterprise Linux Server 7 (RPMs)                         4,447
      repolist: 8,773

    A while back, in another environment, I had to run the following command first in order to get access to php-mbstring: rhn-channel --add --channel=rhel-x86_64-server-optional-6. How do you do that on Amazon EC2?
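    Those PHP packages typically live in the RHEL "optional" channel rather than in EPEL, which is exactly what the rhn-channel command above was enabling on RHEL 6. A sketch of the RHUI equivalent, with the repo id guessed from the rhui-REGION-* naming shown in the repolist; confirm the exact id with yum repolist all before enabling anything:

      sudo yum install -y yum-utils
      sudo yum repolist all | grep -i optional
      sudo yum-config-manager --enable rhui-REGION-rhel-server-optional
      sudo yum install wordpress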

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you're using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.

    Installation and Setup
    Just download and install the application with the defaults. When the application launches you'll be prompted to enter your username and email to get a registration key, or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts, then double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you'll see the Connection Success screen; just close out of it.

    Browse and Manage Files
    Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It's very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the Explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multi-tabbed view so you can easily keep track of your different accounts and your local machine. It also lets you create new storage buckets directly in the Explorer, or delete buckets as well. Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3. It is faster and avoids the cost of moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well. Not all features are available in the free version, and where they aren't, you'll be prompted to purchase a license for the Pro version; we will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check, and you can send the information to the CloudBerry team for assistance.

    Delete Files from Amazon S3
    To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the context menu. Click Yes in the confirmation dialog box, then you can watch the progress as your files are deleted in the bottom section of the Explorer.

    Conclusion
    The CloudBerry Explorer free version has several neat features that give you easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of data stored: the price drops from $0.15 per GB to only $0.10 per GB. If you're a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS.

    Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • Rsync plugin to many local wordpress installs via script or cli

    - by Nick Abbey
    I am maintaining a large number of WordPress installs on a production server, and we are looking to deploy InfiniteWP for managing these installs. I am looking for a way to script the distribution of the plugin folder to all of these installs. On server wp-prod, all sites are stored in /srv//site/. The plugin needs to be copied from ~/iws-plugin to /srv//site/wp-content/plugins/. Here's some pseudocode to explain what I need to do:

      array dirs = <all folders in /srv>
      for each d in dirs
        if exists "/srv/d/site/wp-content/plugins"
          rsync -avzh --log-file=~/d.log ~/plugin_base_folder /srv/d/site/wp-content/plugins/
        else
          touch d.log
          echo 'plugin folder for "d" not found' >> ~/d.log
        end
      end

    I just don't know how to make it happen from the CLI or via bash. I can (and will) tinker with a bash or ruby script on my test server, but I'm thinking the command-line-fu here on SF is strong enough to handle this issue much more quickly than I can hack together a solution. Thanks!
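    A minimal bash sketch of the loop above (assumptions: the per-site directories sit directly under /srv and the plugin really lives in ~/iws-plugin as described; adjust paths before trusting it):

      #!/bin/bash
      for d in /srv/*/; do
        site=$(basename "$d")
        dest="${d}site/wp-content/plugins"
        log="$HOME/$site.log"
        if [ -d "$dest" ]; then
          # copy the plugin folder into this install, logging per site
          rsync -avzh --log-file="$log" "$HOME/iws-plugin" "$dest/"
        else
          echo "plugin folder for \"$site\" not found" > "$log"
        fi
      done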

    Read the article

  • Grails: GGTS not running on Amazon AWS EC2; anyone else successful?

    - by Anonymous Human
    I'm just curious if anyone has had success running the Groovy/Grails Tool Suite (GGTS) on an Amazon AWS EC2 instance with its display exported to a Windows machine. If so, I'd like to know which flavor of Linux was used on the EC2 instance. I am not having much success with it on Amazon Linux but haven't tried their Ubuntu instances yet. I got all the way to installing GGTS and exporting the display, but when I launch GGTS I get log errors about missing libraries. This is most likely because I didn't use yum to install it, so I am probably missing dependencies, but I didn't have a choice: it's not offered as a yum package. Here are my log file errors when I try to launch GGTS:

      !SESSION 2014-06-08 03:08:04.873 -----------------------------------------------
      eclipse.buildId=3.5.1.201405030657-RELEASE-e43
      java.version=1.7.0_55
      java.vendor=Oracle Corporation
      Framework arguments: -product org.springsource.ggts.ide
      Command-line arguments: -os linux -ws gtk -arch x86_64 -product org.springsource.ggts.ide

      !ENTRY org.eclipse.osgi 4 0 2014-06-08 03:08:12.116
      !MESSAGE Application error
      !STACK 1
      java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
        /home/ec2-user/ggts_sh/ggts-3.5.1.RELEASE/configuration/org.eclipse.osgi/bundles/704/1/.cp/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
        no swt-pi-gtk in java.library.path
        /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
        Can't load library: /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk.so
          at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
          at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
          at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:45)
          at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
          at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
          at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
          at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:679)
          at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162)
          at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:154)
          at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:96)
          at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
          at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
          at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:606)
          at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
          at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
          at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
          at org.eclipse.equinox.launcher.Main.main(Main.java:1426)
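    The UnsatisfiedLinkError is Eclipse/SWT failing to find the native GTK2 libraries, which the minimal Amazon Linux AMI does not ship. A hedged sketch of the missing pieces (package names are the Amazon Linux / RHEL ones; Ubuntu's differ, e.g. libgtk2.0-0 and libxtst6):

      # GTK2 stack for SWT, plus the X11 bits commonly needed for `ssh -X` display export
      sudo yum install -y gtk2 libXtst xorg-x11-xauth xorg-x11-fonts-Type1
      # then reconnect with X forwarding and relaunch GGTS (host name is a placeholder)
      ssh -X ec2-user@your-instance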

    Read the article

  • properties-maven-plugin: Error loading properties-file

    - by yournamehere
    I want to extract all the properties from my pom.xml into a properties file. These are common properties like dependency versions, plugin versions and directories. I'm using the properties-maven-plugin, but it's not working the way I want it to. The essential part of my pom.xml:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>properties-maven-plugin</artifactId>
        <version>1.0-alpha-1</version>
        <executions>
          <execution>
            <phase>initialize</phase>
            <goals>
              <goal>read-project-properties</goal>
            </goals>
            <configuration>
              <files>
                <file>${basedir}/pom.properties</file>
              </files>
            </configuration>
          </execution>
        </executions>
      </plugin>

    Now when I run "mvn properties:read-project-properties" I get the following error:

      [INFO] One or more required plugin parameters are invalid/missing for 'properties:read-project-properties'
      [0] Inside the definition for plugin 'properties-maven-plugin' specify the following:
      <configuration>
        ...
        <files>VALUE</files>
      </configuration>.

    The pom.properties file is located in the same directory as the pom.xml. What can I do to get the properties-maven-plugin to read my properties file?

    Read the article

  • Strange stream of HTTP GET requests in apache logs, from amazon ec2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:

      GET / HTTP/1.1
      User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \
        NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
      Host: [my_domain].org
      Accept: */*

    There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user agent version numbers); they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's just annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; and why doesn't it use a clearly identifying FQDN? Or should I send this question to Amazon, via their abuse contact? Thanks!
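    A small sketch for narrowing this down (the log path and combined log format are assumptions; adjust for your setup): group the matching requests by client address, then look up who owns the top networks and whether they resolve to anything identifiable.

      # count requests for "/" with the curl/7.24.x user agent, per client IP
      awk '$7 == "/" && $0 ~ /curl\/7\.24/ {print $1}' /var/log/apache2/access.log \
        | sort | uniq -c | sort -rn | head
      # then check ownership / reverse DNS of a few of the top addresses
      whois 203.0.113.10    # example address; substitute one from the list above
      host 203.0.113.10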

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

      #!/bin/tcsh -f
      set root=`dirname $0`
      setenv EC2_HOME /usr/lib/ec2-api-tools
      setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
      setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
      setenv AWS_ACCESS_KEY_ID myaccesskeyid
      setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey
      /user/bin/ec2-describe-instances | \
        perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
          and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
          and print "$1 $dns\n"' | \
        perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
        xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
          rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using security groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
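    On the port question: cli53 talks to the Route53 API over HTTPS (port 443), not port 53, so opening 53 in the security group shouldn't matter. Two details in the script itself are worth double-checking, since either would make the pipeline silently emit nothing: the path "/user/bin/ec2-describe-instances" (rather than /usr/bin) and the stray "$" in front of "/etc/cron.route53/cli53/cli53.py". A sketch for testing the first stage in isolation (bash syntax here, same paths as the script above):

      export EC2_HOME=/usr/lib/ec2-api-tools
      export EC2_CERT=/etc/cron.route53/ec2_x509_cert.pem
      export EC2_PRIVATE_KEY=/etc/cron.route53/ec2_x509_private.pem
      # if this prints nothing, the later perl/xargs stages have nothing to work with
      /usr/bin/ec2-describe-instances | grep -E '^(INSTANCE|TAG)'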

    Read the article

  • Difference in performance: local machine vs. Amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Bench (ab) against my local machine vs. production hosted on an Amazon medium instance. The same concurrency (5) and total number of requests (111) were used against both. Amazon has more memory than my local machine, but there are 2 CPUs in my local machine vs. 1 CPU in m1.medium. My internet speed is very low at the moment; I am seeing a transfer rate of 25.29 KB/s. How can I improve the performance? I also don't know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

      Server Hostname:        localhost
      Server Port:            9999
      Document Path:          /
      Document Length:        7631 bytes
      Concurrency Level:      5
      Time taken for tests:   1.424 seconds
      Complete requests:      111
      Failed requests:        102 (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
      Write errors:           0
      Total transferred:      860808 bytes
      HTML transferred:       847155 bytes
      Requests per second:    77.95 [#/sec] (mean)
      Time per request:       64.148 [ms] (mean)
      Time per request:       12.830 [ms] (mean, across all concurrent requests)
      Transfer rate:          590.30 [Kbytes/sec] received

      Connection Times (ms)
                     min  mean[+/-sd] median   max
      Connect:         0    0    0.5      0      1
      Processing:     14   63   99.9     43    562
      Waiting:        14   60   96.7     39    560
      Total:          14   63   99.9     43    563

    And this is production:

      Document Path:          /
      Document Length:        7783 bytes
      Concurrency Level:      5
      Time taken for tests:   33.883 seconds
      Complete requests:      111
      Failed requests:        0
      Write errors:           0
      Total transferred:      877566 bytes
      HTML transferred:       863913 bytes
      Requests per second:    3.28 [#/sec] (mean)
      Time per request:       1526.258 [ms] (mean)
      Time per request:       305.252 [ms] (mean, across all concurrent requests)
      Transfer rate:          25.29 [Kbytes/sec] received

      Connection Times (ms)
                     min  mean[+/-sd] median   max
      Connect:       290  297   14.0    293    413
      Processing:    897 1178   63.4   1176   1391
      Waiting:       296  606  135.6    588   1171
      Total:        1191 1475   66.0   1471   1684
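    One way to separate network latency from server time (a sketch, not from the post; the hostname is a placeholder): repeat the benchmark from another EC2 instance in the same region. The ~290 ms Connect time above is round-trip latency before the server even starts working, so run from close by it should drop to a few milliseconds, and whatever gap remains in Processing/Waiting is attributable to the server itself.

      ab -n 111 -c 5 http://your-production-host/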

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as our SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the point where the shared files folder is properly mounted after a reboot. But... the problem is that, in this setup, deployments of new versions on running instances fail because $ rm -rf can't remove the mounted directory, and as a result the entire environment goes down and we need to restart it, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
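    One common workaround worth sketching (an assumption, not something from the post; paths and the NFS export are placeholders): mount the shared storage outside the web root and make sites/default/files a symlink into it. A redeploy that wipes the codebase then only removes the symlink, never the mounted directory itself, so rm -rf no longer collides with the mount.

      sudo mkdir -p /mnt/shared/files
      sudo mount -t nfs nfs-server:/export/files /mnt/shared/files   # or s3fs for the staging setup
      # recreate the symlink as part of post-deploy; rm -rf of the app dir only deletes the link
      ln -sfn /mnt/shared/files /var/www/html/sites/default/files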

    Read the article

  • Creating an EC2 image on Amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

      ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

      Copying / into the image file /mnt/image...
      Excluding: /sys/kernel/debug /sys/kernel/security /sys /proc /dev/pts /dev /dev /media /mnt /proc /sys
                 /etc/udev/rules.d/70-persistent-net.rules /etc/udev/rules.d/z25_persistent-net.rules /mnt/image /mnt/img-mnt
      1+0 records in
      1+0 records out
      1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
      mkfs.ext3: option requires an argument -- 'L'
      Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
              [-i bytes-per-inode] [-I inode-size] [-J journal-options]
              [-G meta group size] [-N number-of-inodes]
              [-m reserved-blocks-percentage] [-o creator-os]
              [-g blocks-per-group] [-L volume-label]
              [-M last-mounted-directory] [-O feature[,...]]
              [-r fs-revision] [-E extended-option[,...]]
              [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
      ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume label. But ec2-bundle-vol doesn't seem to take a volume label as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

      # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?
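    One thing worth checking (an assumption, not a confirmed fix): that trailing bare "-L " suggests ec2-bundle-vol is passing the root filesystem's label through to mkfs.ext3, and the argument comes out empty when the root volume has no label. Giving the root filesystem a label and re-running the bundle is a low-risk experiment; the device name below is a guess, so check `df /` for yours.

      df /                           # find the device backing /
      sudo e2label /dev/xvda1        # show the current (possibly empty) label
      sudo e2label /dev/xvda1 rootfs # set one, then retry ec2-bundle-vol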

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we've noticed that free memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters: -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m. We use Nagios to monitor things like updates to apply, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they don't receive as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

      total       used       free     shared    buffers     cached
      Mem:          1657       1380        277          0        158        773
      -/+ buffers/cache:        447       1209
      Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
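    A small sketch for checking where the memory actually sits (generic Linux commands, not from the post): the "used" column in free includes the kernel's page cache, which is released on demand, so the "-/+ buffers/cache" row is normally the figure to watch; a per-process view sorted by resident size then shows whether the JVM itself is actually growing between checks.

      free -m                          # the -/+ buffers/cache row is the "real" free figure
      ps aux --sort=-rss | head -n 15  # processes by resident memory, biggest first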

    Read the article

  • Maven Release Plugin with JAXB issues

    - by Wysawyg
    Hiya. We've got a project set up to use the Maven Release Plugin which includes a phase that unpacks a JAR of XML schemas pulled from Artifactory and a phase that generates XJC classes. We're on Maven release 2.2.1. Unfortunately, the latter phase is executing before the former, which means it isn't generating the XJC classes for the schemas. A partial pom.xml looks like:

      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>unpack</id>
              <!-- phase>generate-sources</phase -->
              <goals>
                <goal>unpack</goal>
                <goal>copy</goal>
              </goals>
              <configuration>
                <artifactItems>
                  <artifactItem>
                    <groupId>ourgroupid</groupId>
                    <artifactId>ourschemas</artifactId>
                    <version>5.1</version>
                    <outputDirectory>${project.basedir}/src/main/webapp/xsd</outputDirectory>
                    <excludes>META-INF/</excludes>
                    <overWrite>true</overWrite>
                  </artifactItem>
                </artifactItems>
              </configuration>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>maven-buildnumber-plugin</artifactId>
          <version>0.9.6</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>create</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <doCheck>true</doCheck>
            <doUpdate>true</doUpdate>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.jvnet.jaxb2.maven2</groupId>
          <artifactId>maven-jaxb2-plugin</artifactId>
          <configuration>
            <schemaDirectory>${project.basedir}/src/main/webapp/xsd</schemaDirectory>
            <schemaIncludes>
              <include>*.xsd</include>
              <include>*/*.xsd</include>
            </schemaIncludes>
            <verbose>true</verbose>
            <!-- args>
              <arg>-Djavax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema=org.apache.xerces.jaxp.validation.XMLSchemaFactory</arg>
            </args-->
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    I've tried googling for it; unfortunately I ended up with thousands of links, none of which were actually relevant, so I'd be very grateful if someone knew how to configure the order of these steps so that the unpack step fully executes before the XJC generation step. Thanks

    Read the article

  • Rails Plugin - Install as Plugin or Install As Gem

    - by Joseph Misiti
    Hey guys, I am new to Rails and have a question regarding plugins. It seems there are two approaches you can take when using a third-party plugin in a RoR app: 1) install a gem using sudo gem install GEM, and then "require" it in your Rails project; 2) install the plugin using script/generate plugin install PLUGIN. The plugin code appears in your vendor directory and then you are good to go (sometimes; I could not get Devise working via this method). Since it appears both of these methods accomplish the same thing, why should I choose one method over the other? Thanks,

    Read the article

  • Amazon EC2 master node hanging

    - by Algorist
    Hi, I am using the Cloudera setup to launch a Hadoop cluster on Amazon. Sometimes the master Hadoop node hangs and we have to restart the job from the beginning. Has anyone faced a similar problem and resolved the issue? Thank you.

    Read the article

  • Alternative to Amazon's S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at CloudFiles from Rackspace, but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Use personal Amazon S3 account to backup Gmail using Ubuntu

    - by lokheart
    As the question says: I have found online that Backupify offers shifting from their S3 to the user's personal S3. I have a Backupify account but can't find this option; besides, I'd prefer not to have my email processed by somebody else. Is it possible to use my own personal Amazon S3 account to back up Gmail? Preferably as an incremental backup, so I don't have to use too much bandwidth re-uploading redundant data to S3. I am using Ubuntu, so a script is fine for me. Thanks!
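    One possible approach, sketched with assumptions (offlineimap already configured for the Gmail account over IMAP, s3cmd configured with your own AWS keys, and the bucket name purely illustrative): pull mail into a local Maildir, then do an incremental sync to S3 so only new or changed messages are uploaded each night.

      offlineimap -u quiet
      s3cmd sync ~/Maildir/ s3://my-gmail-backup/maildir/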

    Read the article

  • Remote backup with Amazon S3

    - by mjr
    Does anyone know of some good Mac software to sync a network-attached storage device or a Drobo to Amazon S3? I'd love to mail them a storage device for the first dump and then do nightly syncs.

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
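    A sketch using s3cmd, which is packaged in Debian (assumptions: the package is available on your 5.0.4 sources, and the bucket and EAR path are placeholders); it gives plain, scriptable bucket access with no extra layers.

      apt-get install s3cmd
      s3cmd --configure        # one-time prompt for the access/secret keys
      s3cmd put target/myapp.ear s3://my-deploy-bucket/releases/myapp.ear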

    Read the article

  • Site-to-site VPN setup: Amazon EC2 openswan (left) and Cisco ASA 5540 (right)

    - by user197279
    Need help with this VPN setup on Amazon EC2 using openswan.

      Left side (EC2):
        peer IP: according to the client using Cisco (must be public)
        encrypted network: according to the client using Cisco (must be public)

      Right side (Cisco ASA 5540):
        peer IP: 3.3.3.3
        peer host/rightsubnet: 3.3.3.30/32 (public NAT'd IP)

    The goal is to set up a site-to-site VPN connection with the client, and I need guidance on the setup required on EC2. Appreciate the help. Thanks.
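    A few EC2-side prerequisites that are easy to miss, sketched with assumptions (the security group name is a placeholder, and the changes can equally be made in the console): allow IKE and NAT-T from the peer, and enable kernel forwarding if traffic from an EC2-side subnet is supposed to flow through the tunnel. Since the instance only has its private address on the interface, openswan's left identity also needs to be the public Elastic IP rather than the interface address.

      ec2-authorize my-vpn-group -P udp -p 500 -s 3.3.3.3/32
      ec2-authorize my-vpn-group -P udp -p 4500 -s 3.3.3.3/32
      sudo sysctl -w net.ipv4.ip_forward=1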

    Read the article
