Search Results

Search found 3198 results on 128 pages for 'amazon iam'.

  • How to use the second volume device of an Amazon EC2 instance

    - by Khoyendra Pande
    I have two volumes on an Amazon EC2 instance: the default 1 GiB root volume, which is now full, and a second 9 GiB volume that I want to start using. Running cat /proc/partitions gives:

        major minor  #blocks  name
        202   1      1048576  xvda1
        202   80     9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf, which fails with:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Running df gives:

        Filesystem  1K-blocks  Used     Available  Use%  Mounted on
        /dev/xvda1  1032088    1031280  0          100%  /
        tmpfs       313160     8        313152     1%    /lib/init/rw
        udev        297800     24       297776     1%    /dev
        tmpfs       313160     4        313156     1%    /dev/shm
        overflow    1024       32       992        4%    /tmp

    So I am still unable to use the 9 GiB volume. I can confirm both volumes are attached: i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), but only sda1 is in use. Does anyone know how I can use the second volume (sdf)? Thanks
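
    A likely fix, given the /proc/partitions output above: the console attaches the volume as /dev/sdf, but this kernel exposes it as /dev/xvdf, so mkfs has to target /dev/xvdf. A minimal sketch (the /data mount point is hypothetical):

        # Format the 9 GiB volume under its kernel name (destroys any data on it)
        sudo mkfs.ext3 /dev/xvdf

        # Create a mount point and mount the new filesystem
        sudo mkdir -p /data
        sudo mount /dev/xvdf /data
        df -h /data    # the 9 GiB of space should now be visible

        # To mount it at every boot, append a line like this to /etc/fstab:
        # /dev/xvdf  /data  ext3  defaults  0  2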

  • How do I access my public DNS on Amazon's EC2

    - by Spencer Carnage
    I'm working with Amazon's EC2 for the first time. I went through all of the steps at http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/ and now I'm trying to access the public DNS through a web browser, but I get nothing. I don't even know if I'm supposed to get anything, really; I just want to see something that indicates my instance is accessible from the web, for development's sake. I'm completely new at this and I can't find a simple answer anywhere to save my life. Any help would be MUCH appreciated. Thanks!
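
    If the instance launched fine, the two usual culprits are a security group that doesn't allow inbound HTTP and the absence of anything listening on port 80. A quick sanity check, sketched with the AWS CLI (the group name is illustrative):

        # Allow inbound HTTP on the instance's security group
        aws ec2 authorize-security-group-ingress \
            --group-name my-security-group \
            --protocol tcp --port 80 --cidr 0.0.0.0/0

        # On the instance itself, start a web server so there is something to see
        sudo yum install -y httpd && sudo service httpd start

        # From your own machine, expect an HTTP status line back
        curl -I http://<your-public-dns>/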

  • Amazon EC2 Web server in Load Balancer gives 503

    - by dale
    We've been running our web servers at Amazon behind a load balancer with auto-scaling for over a year with no problems. All of a sudden today, requests began to be aborted with the error:

        503 ... Backend server is at capacity

    The web servers are at 1% CPU and no other alarms have triggered. We use Amazon's load balancer and nginx. Lots of requests like this are showing up in the access_log:

        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.229.15.214 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.229.15.214 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"

    Any thoughts?
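
    Since "at capacity" from the ELB usually means no registered backend is passing its health check, one place to look (a sketch; the load balancer name is illustrative, and the empty 400 requests are consistent with ELB health checks or pre-opened connections that send no request line):

        # Which backends does the ELB currently consider InService?
        aws elb describe-instance-health --load-balancer-name my-load-balancer

        # What target is the health check actually probing?
        aws elb describe-load-balancers --load-balancer-names my-load-balancer \
            --query 'LoadBalancerDescriptions[0].HealthCheck'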

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding, and the actual encoding process takes a very long time: an average 250 MB video file takes about an hour to encode.

        Instance: m1.xlarge (Xeon E5645, 15 GB RAM)
        Windows Server 2008 R2 64-bit
        AviSynth version 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
        FFmpeg SVN-r13712
          libavutil    3213056
          libavcodec   3356930
          libavformat  3411456
          libavdevice  3407872
        Number of parallel jobs: 3
        Average CPU utilization: ~96%

    Update #1: The source video is mp4/h.264. FFmpeg was built with:

        --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264
        --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52
        --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads
        --enable-swscale --enable-gpl

    Video files are encoded to mp4/h.264 with the following extra command line options:

        -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000
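
    For reference, a hypothetical invocation assembling the options quoted above (file names are illustrative; the AviSynth script is the input that feeds decoded frames to FFmpeg):

        # Illustrative only: re-encode an AviSynth script to H.264/MP4 using the
        # extra options from the question; -threads 0 lets FFmpeg pick the count
        ffmpeg -i input.avs -vcodec libx264 \
               -threads 0 -coder 0 -bf 0 -refs 1 -level 30 \
               -maxrate 10000000 -bufsize 10000000 \
               output.mp4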

  • Amazon EC2 hostnames

    - by Firefly
    I'm currently trying to set up a Tigase cluster on Amazon EC2 instances in a VPC, and I'm having trouble getting it to work because the hostnames of the instances are not "full DNS names". According to the Tigase documentation: "Please note the proper DNS configuration is critical for the cluster to work correctly. Make sure the 'hostname' command returns a full DNS name on each cluster node." Can anyone explain what a full DNS name is and how I can set my instances to use one? Currently my instances get a default hostname of the form "ip-10-0-0-20".
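
    A "full DNS name" here is a fully qualified domain name (host plus domain, e.g. node1.cluster.example.internal) rather than the bare "ip-10-0-0-20". A sketch of one way to set it (the domain is hypothetical; on some distributions the persistent setting lives in /etc/sysconfig/network instead of /etc/hostname):

        FQDN=node1.cluster.example.internal

        # Set the kernel hostname now and persist it across reboots
        sudo hostname "$FQDN"
        echo "$FQDN" | sudo tee /etc/hostname

        # Make the name resolve locally so `hostname -f` returns it
        PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        echo "$PRIVATE_IP $FQDN node1" | sudo tee -a /etc/hosts

        hostname -f    # should now print the full DNS name Tigase expects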

  • Amazon EC2 Sign In

    - by Barry
    When I change the home directory of my Amazon EC2 instance from /home/ubuntu to /home/ubuntu/folder in the /etc/passwd file, I am no longer able to access the instance using my existing keypair. Once I switch it back to the original directory I have no problems and can log into my instance as normal. I have checked the permissions on the new folder and they are drwxr-xr-x, which is the same as the /home/ubuntu folder. I have a number of instances running at the minute, and because of this change I have no way of logging back into them to rectify the situation. Does anyone have an idea what is going on? Thanks in advance
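
    The likely cause is that sshd looks up authorized_keys relative to the home directory in /etc/passwd, so after the change it searches /home/ubuntu/folder/.ssh/authorized_keys, which doesn't exist. A sketch of the fix, run before editing /etc/passwd (or afterwards from another account with sudo):

        # Move the key material into the new home directory so public key
        # authentication keeps working after /etc/passwd is changed
        sudo mkdir -p /home/ubuntu/folder/.ssh
        sudo cp /home/ubuntu/.ssh/authorized_keys /home/ubuntu/folder/.ssh/
        sudo chown -R ubuntu:ubuntu /home/ubuntu/folder/.ssh
        sudo chmod 700 /home/ubuntu/folder/.ssh
        sudo chmod 600 /home/ubuntu/folder/.ssh/authorized_keys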

  • Pass User Data to AWS client

    - by bearrito
    Has anyone successfully passed user data to the AWS CLI? I have tried various incantations of the following, but it does not work. The docs say the string must be base64 encoded: http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html. The instance logs never indicate that the script executed, and Chef never gets installed.

        aws ec2 run-instances --image-id ami-a73264ce --count 1 \
            --instance-type t1.micro --key-name scrubbed \
            --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
            --user-data $(base64 chef_user_data.sh --wrap=0)

    chef_user_data.sh:

        #!/bin/bash
        curl -L https://www.opscode.com/chef/install.sh | sudo bash
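
    One thing worth trying, assuming a reasonably recent AWS CLI: pass the script with the file:// prefix and let the CLI handle the base64 encoding itself, which sidesteps any shell quoting problems around the unquoted $(base64 ...) substitution:

        aws ec2 run-instances \
            --image-id ami-a73264ce \
            --count 1 \
            --instance-type t1.micro \
            --key-name scrubbed \
            --iam-instance-profile Arn=arn:aws:iam::scrubbed:instance-profile/scrubbed \
            --user-data file://chef_user_data.sh

        # On the instance, cloud-init's logs show whether the script actually ran:
        #   /var/log/cloud-init.log  (and /var/log/cloud-init-output.log on newer AMIs)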

  • Exporting a virtual machine from Windows Azure to Amazon

    - by Somebody
    As the documentation says: "You can import VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images and Microsoft Hyper-V VHD images for Windows Server 2003, Windows Server 2003 R2, Windows Server 2008 and Windows Server 2008 R2." What about a virtual machine with Ubuntu on it? Will it import successfully? Has anyone tried to do it? Also, is it possible to import a VM directly from Windows Azure, without needing to download the VM to my machine and then upload it to Amazon? Thanks.
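
    On the Amazon side, imports are driven through VM Import/Export; recent versions of it list several Linux distributions, Ubuntu included, in the supported-OS matrix, but there is no direct Azure-to-EC2 path: the image has to pass through storage you control. A sketch with the current CLI (bucket and key names are illustrative):

        # Upload the exported disk, then register it as an import task
        aws s3 cp ubuntu-vm.vhd s3://my-import-bucket/ubuntu-vm.vhd

        aws ec2 import-image \
            --description "Ubuntu VM exported from Azure" \
            --disk-containers "Format=VHD,UserBucket={S3Bucket=my-import-bucket,S3Key=ubuntu-vm.vhd}"

        # Poll until the task reports completed
        aws ec2 describe-import-image-tasks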

  • Upgrading Fedora on Amazon EC2 to 12: libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 instance using the guide here: http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/. I managed to do a 64-bit instance OK, but I'm facing some problems with a standard one. On the final step of the upgrade from 11 to 12 I get:

        Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)
        Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed)

    This is referenced in the comments of the link above, but all it says is:

        Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing
        A: These versions are missing the symlinks they require. Easy fix: go symlink them to the newest versions in /lib.

    However, I'm afraid I don't know how to do this. If it helps, running locate libssl.so returns:

        /lib/libssl.so.0.9.8b
        /lib/libssl.so.6
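
    Given the locate output above, the symlinks the comment describes would look something like this (a sketch: adjust the targets to the newest versions actually present in /lib, and the matching libcrypto is assumed to sit alongside libssl):

        # Point the missing .so.8 sonames at the newest libraries in /lib
        sudo ln -s /lib/libssl.so.0.9.8b    /lib/libssl.so.8
        sudo ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.8

        # Refresh the dynamic linker cache
        sudo ldconfig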

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know it means my virtual CPU is waiting on the real CPU of the machine for time. My question is: what can I do to reduce this percentage and get the maximum out of the CPU? The steal percentage is consistently at 20%, and system load crosses 10 when this happens. I have checked memory and network and I am sure they are not the bottleneck. Is that normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance?

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  52.38   0.00     8.23     0.00   21.21  18.18

  • Amazon EC2 IPSEC

    - by John Qualis
    I have configured an Ubuntu 12.04 64-bit server on Amazon AWS to act as a strongSwan IPsec server, and I want to connect to it from the built-in IPsec client on my Mac OS X Lion machine, which is on my home network. I log into the AWS machine with ssh ubuntu@public-ip, providing the private RSA key as the .pem file I downloaded when the instance was created. The SSH connection works fine, but the IPsec connection fails. What credentials/configuration should I provide for an IPsec connection from my OSX client to the AWS Ubuntu server? My OSX machine is behind an ISP-provided modem/router. Appreciate any help, and thanks in advance
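
    Whatever credentials end up being used, IKE traffic has to be allowed into the instance first, and a client behind a home router will need NAT traversal. A sketch of the security group side (group name illustrative):

        # IKE (UDP 500) and NAT-Traversal (UDP 4500) must both be open
        aws ec2 authorize-security-group-ingress --group-name vpn-sg \
            --protocol udp --port 500 --cidr 0.0.0.0/0
        aws ec2 authorize-security-group-ingress --group-name vpn-sg \
            --protocol udp --port 4500 --cidr 0.0.0.0/0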

  • How does Amazon ec2-user get its sudo rights

    - by Johan
    I am looking for where the default Amazon AMI Linux image sets up the privileges for the default ec2-user account. After logging in with this account I can use sudo successfully. Checking the sudoers file, which I open by running visudo (with no other options), I see a few default settings and a permission for root only:

        root ALL=(ALL) ALL

    So... where are the permissions for ec2-user assigned? I have not yet tried to add a new permission, but ultimately I want to reserve ec2-user for systems management tasks and use a non-full-root user for administering the applications (stop and start mysql, httpd, edit Apache's vhost files, and upload / edit web content under the web root).
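
    On Amazon Linux the grant typically isn't in the main sudoers file at all but in a cloud-init drop-in under /etc/sudoers.d/, which plain visudo doesn't show. A quick way to locate it (the drop-in file name varies by AMI version):

        # Find where ec2-user's sudo rights actually come from
        sudo grep -r ec2-user /etc/sudoers /etc/sudoers.d/
        # typically prints something like:
        #   /etc/sudoers.d/cloud-init: ec2-user ALL = NOPASSWD: ALL

        # Edit a drop-in safely, with syntax checking, using visudo -f
        sudo visudo -f /etc/sudoers.d/cloud-init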

  • "Permission denied (publickey)" error when ssh'ing to Amazon EC2 Debian AMI 05f3ef71

    - by user193537
    I have launched a Debian system using AMI 05f3ef71 in Amazon EC2, but I have no luck connecting to it using SSH as suggested in "Connecting to Your Linux/UNIX Instances Using SSH". I tried several user names: ec2-user, root, debian... None of them worked; I always get a Permission denied (publickey) error. Using ec2-get-console-output instance_id as suggested doesn't work either: it requires option "-K", and if I supply it I get the error Required option '-C, --cert CERT' missing, but I have no idea what to supply there. Port 22 is open on the affected instance. Does anyone have an idea what I could try to log in to my instance?
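
    For the official Debian AMIs the default account is usually admin, not ec2-user or root, and the private key must be passed explicitly. A sketch (key path and hostname are placeholders):

        ssh -i /path/to/keypair.pem admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

        # If it still fails, -v shows which keys the client actually offered
        ssh -v -i /path/to/keypair.pem admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com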

  • Use Amazon EC2 as a backup server

    - by MikeMurko
    I would like to use Amazon EC2 as an emergency backup database + web server in the event our primary host becomes unavailable. I feel like I wouldn't have trouble setting up a Windows instance, installing SQL Server and getting the web server up and running (it would take a few hours, plus installing various libraries, our source code, etc). My question relates to pricing. If I simply "stop" the instance rather than "terminate" it, does that stop counting "instance-hours"? I would prefer not to terminate the instance and lose all the work I spent setting it up. If I must "terminate" in order to stop the billing, is it possible to make an image of the server after I have set it all up, then save that image somewhere (S3?)? Is this something that people do regularly? Ideally this instance would just be waiting in the wings for an issue with our host, costing us nothing except perhaps data storage.
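
    On the billing question: a stopped EBS-backed instance stops accruing instance-hours, though its EBS volumes still incur storage charges. And yes, a configured server can be captured as an AMI and the instance then terminated; the image's snapshots are kept at storage-only cost. A sketch (IDs and names are illustrative):

        # Capture the fully configured server as a reusable image
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "standby-web-db" --description "Emergency failover image"

        # Later, launch a fresh copy from that image when the primary host fails
        aws ec2 run-instances --image-id ami-0123456789abcdef0 \
            --count 1 --instance-type m1.large --key-name my-key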

  • Amazon EC2 - Reserved Private Addresses

    - by reach4thelasers
    I've got a private DNS server running on Amazon EC2. It doesn't need a public IP address because it's only used for private addressing:

        web1.xxx.internal
        database1.xxx.internal

    The problem is that I had to terminate the instance recently and start a new one. This meant the private IP address of the DNS server changed, and I had to log in to each of my other 15 servers one by one and change the DNS address to point to the new DNS server. There must be a better way to do this. If so, what is it?
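
    If the DNS server lives in a VPC, one better way is to pin its private address at launch so a replacement instance comes back on the same IP, as sketched below (IDs and the address are illustrative); none of the other 15 servers would then need touching:

        # Launch the replacement DNS server with the same fixed private IP
        aws ec2 run-instances --image-id ami-0123456789abcdef0 \
            --subnet-id subnet-0123456789abcdef0 \
            --private-ip-address 10.0.0.10 \
            --count 1 --instance-type t1.micro --key-name my-key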

  • Amazon EC2 as load balanced/failover solution

    - by sugiggs
    Hi all, I'm thinking of an idea but not sure of the pros/cons. At the moment we host our website on a dedicated server. As a failover/load-balanced solution, I'm thinking of using Amazon EC2+EBS: the files can be rsynced and MySQL can be set up as master-master replication. When the load is high, I can bring up the machine, give it some time to "sync", and load-balance the traffic there. Is it doable? Any links where I can read more on this?
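
    A sketch of the file-sync half (paths, key and hostname are illustrative; the MySQL master-master side is configured separately):

        # Mirror the web root from the dedicated server to the EC2 standby:
        # -a preserves permissions and times, -z compresses, --delete mirrors removals
        rsync -az --delete -e "ssh -i /path/to/key.pem" \
            /var/www/ ubuntu@ec2-standby.example.com:/var/www/

        # A cron entry gives a coarse ongoing sync, e.g. every 5 minutes:
        # */5 * * * * rsync -az --delete -e "ssh -i /path/to/key.pem" /var/www/ ubuntu@ec2-standby.example.com:/var/www/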

  • Mounting an Amazon EC2 instance on Mac OS X

    - by hinghoo
    I've got public key authentication working between my Mac OS X machine and an Amazon EC2 instance, so from the command line I can just type the following and it works: ssh root@[IPAddressOfEC2Instance]. The strange thing is that I can't seem to mount the instance using "Connect to Server" in the Finder. I've tried typing the following server addresses into the "Connect to Server" dialog: ftps://[IPAddressOfEC2Instance] and ftps://root@[IPAddressOfEC2Instance]. But all I get is: You entered an invalid username or password. Please try again. The root user on the EC2 instance has a blank password and I'm wondering if it has to do with that; however, I can't change the password for the root user. I can use an SFTP client to connect to the machine, I just can't mount it with "Connect to Server". It asks for a username and password (for a registered user) and root/[blank] isn't accepted. The other option is "Guest", which brings up an empty folder in the Finder.
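
    Finder's "Connect to Server" speaks AFP/SMB/FTP/WebDAV but not SFTP, and ftps:// (FTP over SSL) is a different protocol from SFTP, which is why those attempts fail. One workaround, assuming macFUSE/sshfs is installed, is to mount over the SSH access that already works:

        # Mount the instance's filesystem over SSH (mount point is illustrative)
        mkdir -p ~/ec2
        sshfs root@<IPAddressOfEC2Instance>:/ ~/ec2 -o IdentityFile=~/.ssh/mykey.pem

        # Unmount when finished
        umount ~/ec2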

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and a root folder, set the storage class to Standard and server-side encryption to AES, then started a copy job to move the files. Some files copied over, and when I checked them they showed:

        Reduced Redundancy
        Encryption set to none

    WTF? So I deleted all files and folders, manually recreated the folder structure, and again set the storage class and encryption level. I copied some files and bam: still showing, at the file level, Reduced Redundancy and no encryption. So my question is this: (a) is it really redundantly stored and encrypted, just not showing it properly (the root folder shows it; how can the files not?), or (b) am I being a huge tool and missing something?
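
    Storage class and server-side encryption are per-object attributes chosen at upload time, not settings inherited from a bucket or folder, so whichever tool ran the copy job picked Reduced Redundancy and no encryption for each file it wrote. A sketch of both the correct upload and an in-place repair, using the AWS CLI (bucket and prefix are illustrative):

        # Upload with the desired class and encryption from the start
        aws s3 cp ./archive/ s3://my-bucket/archive/ --recursive \
            --storage-class STANDARD --sse AES256

        # Fix objects already uploaded by copying them over themselves
        aws s3 cp s3://my-bucket/archive/ s3://my-bucket/archive/ --recursive \
            --storage-class STANDARD --sse AES256 --metadata-directive REPLACE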

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

      • Monitor EBS, EC2 and RDS instances on Amazon Web Services
      • Gather performance metrics and configuration details for AWS instances
      • Raise alerts and violations based on thresholds set on monitoring
      • Generate reports based on the gathered data

    Users of this plug-in can leverage the rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. A pictorial view of the overall architecture appears in the original post; the covered services are:

      • Amazon Elastic Block Store (EBS)
      • Amazon Elastic Compute Cloud (EC2)
      • Amazon Relational Database Service (RDS)

    Here are a few key features:

      • Rich and exhaustive list of metrics; metrics can be gathered from an agent running outside AWS
      • Critical configuration information
      • Custom home pages with charts and AWS configuration information
      • Incidents generated based on thresholds set on monitoring data

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

      • EBS Resource: target type Amazon EBS Service; properties: CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
      • EC2 Resource: target type Amazon EC2 Service; properties: CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
      • RDS Resource: target type Amazon RDS Service; properties: CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up under the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

      • EBS Resource: configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone. Metrics: Response (Status); Utilization (QueueLength, IdleTime); Volume Statistics (ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput); Operation Statistics (ReadSize, WriteSize, ReadLatency, WriteLatency)
      • EC2 Resource: configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone. Metrics: Response (Status); CPU Utilization (CPUUtilization); Disk I/O (DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput); Network I/O (NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput)
      • RDS Resource: configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone. Metrics: Response (Status); Disk I/O (ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput); DB Utilization (BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage)

    Custom Home Pages

    As mentioned above, there are custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated. (The original post includes screenshots of the EBS, EC2 and RDS instance home pages.)

    Further Reading:

      1) AWS Plugin download
      2) Installation and Read Me
      3) Screenwatch on SlideShare
      4) Extensibility Programmer's Guide
      5) Amazon Web Services

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). Although we can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to utilise HTTPS, is there another way? Do Amazon or any other similar providers offer HTTPS CDN hosting? Is TLS with its selective encryption (SNI: Server Name Indication) available for use somewhere? Footnote: assuming the answer is no, but asking in the hope that someone knows. EDIT: Now using Google App Engine https://developers.google.com/appengine/docs/ssl for CDN hosting with SSL support.

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

      • have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
      • control configuration with Puppet: still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
      • have autoscaling enabled on those clusters: obviously the main benefit of the cloud, which makes the much higher cost worth it for any reasonable performance (those Amazon machines are slower than my phone...)
      • deployment controlled by Capistrano, which makes things a lot easier

    In AWS you get those super nasty public/private machine DNS names; there is no way to identify machines by them. To ease the problem, it seems AWS wants you to tag everything, so I did. I found a script that creates a CNAME record for each machine from its "ShortName" tag, thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet. Anyone have any clue how to achieve this? Thanks
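
    One approach, assuming the ShortName tag is already in place: look the tag up at boot and use it as the Puppet agent's certname, so node definitions match the tag rather than the private DNS name. A sketch (region, paths and tag key are illustrative, and `puppet config set` assumes a reasonably recent Puppet):

        # Fetch this instance's ShortName tag
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORT_NAME=$(aws ec2 describe-tags --region us-east-1 \
            --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=ShortName" \
            --query 'Tags[0].Value' --output text)

        # Register it as the agent's certname; node 'perl-cluster' {} then matches
        sudo puppet config set certname "$SHORT_NAME" --section agent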

  • Cannot connect to SQL Server running on an Amazon EC2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine, and I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port, and I am able to remote into the machine as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is used, it returns TCP error 0 - no return response; it just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing I can think might cause an error is the difference between the user ID and the server name: when I log into the SQL Server manager on the EC2 instance, the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator. A password is not used. Not sure if I would need to leave in a blank string for the password; also not sure if the difference between server name and login user ID makes a difference. Any help is appreciated. Thank you.

    Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0.

    Edit: the SQL Server is using Windows authentication - does this make a difference? Usually I always use SQL Server authentication.

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets to connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html. Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following:

      • Created a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in the public subnet (call it 10.10.1.0/24)
      • Changed "Source / Dest Check" to disabled
      • Created a new entry in the default routing table (which is used by 10.10.2.0/24) pointing to the ID of the newly created instance
      • Associated an Elastic IP address with the NAT instance
      • Allowed all outbound traffic on the security group of the NAT instance
      • Ensured that all traffic can pass between the two subnets

    I've also tried doing this with an existing instance using iptables, but had no luck. And I have verified that net.ipv4.ip_forward is 1, just in case anyone was wondering. I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
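
    Besides the route table entry, the NAT instance itself has to forward and masquerade traffic from the private subnet. Amazon's NAT AMI ships with this preconfigured, but it's worth verifying on the instance (a sketch):

        # Forwarding must be on and a masquerade rule present
        sysctl net.ipv4.ip_forward              # should print: net.ipv4.ip_forward = 1
        sudo iptables -t nat -L POSTROUTING -n -v

        # If the rule is missing, add it for the private subnet
        sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE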

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this:

        <Directory />
            Order deny,allow
            Allow from all
            Options None
            AllowOverride None
        </Directory>

        <Directory "/var/www/vhosts">
            Order allow,deny
            Allow from all
            Options None
            AllowOverride All
        </Directory>

    Further down in httpd.conf I have the LoadModule directive:

        ... other modules ...
        #LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        #LoadModule proxy_module modules/mod_proxy.so
        ... other modules ...

    I have commented out all the Apache modules not needed by Wordpress. Still, when I restart httpd and then check the loaded modules with /usr/sbin/httpd -l, I get only:

        [root@foobar]# /usr/sbin/httpd -l
        Compiled in modules:
          core.c
          prefork.c
          http_core.c
          mod_so.c

    Inside the virtual host containing the Wordpress site I have an .htaccess containing:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The .htaccess is owned by apache, which is the user Apache runs under. The apachectl -t command returns Syntax OK. What am I doing wrong? What should I check?
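
    Note that /usr/sbin/httpd -l lists only the modules compiled into the binary, so a module loaded as a shared object via LoadModule, like mod_rewrite here, will never appear in that output even when it is working. The shared modules show up with -M:

        # List all loaded modules, including shared (DSO) ones
        sudo apachectl -M | grep rewrite     # expect: rewrite_module (shared)

        # equivalently
        sudo /usr/sbin/httpd -M | grep rewrite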

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, We're looking to serve a range of content using Amazon S3 as the store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate users when they access the web application running in Tomcat; based on their authentication we can tell whether a user has access to paid-for content or simply the free stuff. So I envision the flow of a request being something like this:

      • Authenticated request to Tomcat
      • If the user is a "paid" user, display links to premium content
      • Direct requests for paid content back through Tomcat, to prevent direct access to it by non-paying users
      • Tomcat makes the request to S3 through a web cache to keep our costs down
      • Content is returned to the user

    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed listing content it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.
