Search Results

Search found 13950 results on 558 pages for 'durable services'.

  • Reverse Proxy Wordpress with Lighttpd

    - by Jonah
    I am deploying an application and a Wordpress installation on AWS. Wordpress is set up under Apache on an EC2 instance, my application runs under Lighttpd, and I want to reverse-proxy Wordpress through the application node. The proxying itself works fine; I just set it up in Lighttpd like so:

        $HTTP["url"] =~ "^/blog" {
            proxy.server = ( "/blog" => ( "blog" => ( "host" => "123.456.789.123", "port" => 80 ) ) )
        }
        url.rewrite-once = ( "^(.*?)$" => "/index.php/$1" )

    The issue is the rewrite. When I enable rewriting, it is applied before the reverse proxy and routes the request to index.php on the application server. I need it to skip the rewrite when the request is headed for the blog. I tried various regex matches and other configurations, but I haven't been able to get rewriting and proxying to work at the same time. How can this be done?
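
    One approach worth trying (a sketch only, not verified against this exact setup) is to scope the rewrite itself with a negative match so it never fires for blog URLs; note that whether url.rewrite-once is honored inside a conditional depends on the Lighttpd version:

        # only rewrite requests that are NOT destined for the proxied blog
        $HTTP["url"] !~ "^/blog" {
            url.rewrite-once = ( "^(.*?)$" => "/index.php/$1" )
        }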

  • Why can't I create an Alias Resource Record Set for an EC2 instance

    - by praterade
    I have been working with AWS for over a year, setting up EC2 instances, domains, ELBs, etc. When I want to assign a subdomain to an EC2 instance, I have to create an Elastic IP (which I pay for), then point a DNS record at that Elastic IP. When I want to assign a subdomain to an ELB (load balancer), I just create an alias resource record set pointing at the ELB. I've read over the docs and don't understand why AWS doesn't support aliasing to instances. Am I missing a key concept here? Wouldn't it be simpler to just alias EC2 instances and skip the whole Elastic IP bit?
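
    For reference, a minimal sketch of what attaching a subdomain to an instance's Elastic IP looks like through the Route 53 API with the unified AWS CLI; the hosted zone ID, record name and address are placeholders:

        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
            --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
                "Name":"app.example.com","Type":"A","TTL":300,
                "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'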

  • Running a service with a user from a different domain not working

    - by EWood
    I've been stuck on this for a while and am not sure what permission I'm missing. I've got domain A and domain B; A trusts B, but B does not trust A. I'm trying to run a service in domain A with a user account from domain B and I keep getting "Access is Denied". I'm using the FQDN after the username and the password is correct. The user account from domain B is a local administrator on the domain A server, and it has the "Log on locally" and "Log on as a service" rights. Must. Get. This. Working. Update: I found something interesting in the logs that I must have missed; this ought to get me pointed in the right direction. Event ID 40961 - LsaSrv: The Security System could not establish a secured connection with the server ldap/{server fqdn/fqdn@fqdn}. No authentication protocol was available. I've found a few fixes for 40961 but nothing has worked so far. I've verified the reverse lookup zones, and nslookup resolves the correct DC properly. Still working at it. Update: In response to Evan, I ran runas /env /user:ftp_user@fqdn "notepad", entered the user's password, and Notepad came up, so that works successfully. This issue is now resolved. The problem is visible in the screenshot: Windows tries to use the UPN for the user account if you pick the user out of AD with the Browse button, and that fails every time, even with the right user and password. Simply using the SAM format (DOMAIN\User) works. So simple, yet so annoying. Can't believe I missed this. Thanks to everyone who helped.
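
    A quick way to capture the working configuration from the command line is to set the service's logon account in SAM (DOMAIN\User) format rather than UPN format. A sketch only; the service name, account and password below are placeholders:

        rem set the logon account for a service using the SAM-format name (note the space after obj= and password=)
        sc config "MyService" obj= "DOMAINB\ftp_user" password= "P@ssw0rd"
        rem verify what the service is now configured to run as
        sc qc "MyService"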

  • Anti Virus Service does not run - Windows XP SP3 32bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run anti-virus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and had Windows install five years of updates. So far so good. After installing new anti-virus software (BitDefender 2012) everything seemed fine: the initial scan went fine and configuration was working. But after restarting the system, the virus scanner was unable to start up again; even the configuration console of the AV software would not start. I tried scanning the system for malware, but nothing was found. Then I tried a different AV product (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not much choice there on an XP Home system), and I cannot switch to the Administrator account outside of Safe Mode. When running Windows in Safe Mode I am unable to start the AV software because it does not run in Safe Mode. I am a bit at a loss now...
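
    One thing worth checking from a command prompt is the state and start type of the AV service itself; a sketch only, and the service name below is a placeholder since the real name depends on the product:

        rem "VSSERV" is a placeholder; look up the actual service name in services.msc first
        sc query "VSSERV"
        sc qc "VSSERV"
        sc config "VSSERV" start= auto
        net start "VSSERV"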

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of system failure the write-ahead log might not be saved correctly and might in fact get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync, however I was unable to make any configuration changes with hdparm, since I get the following:

        $ sudo hdparm -I /dev/xvdf
        /dev/xvdf:
        HDIO_DRIVE_CMD(identify) failed: Invalid argument

    Looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
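
    For completeness, this is a minimal sketch of the PostgreSQL-side durability settings that are usually reviewed together with wal_sync_method; these are standard postgresql.conf parameters, not anything EBS-specific:

        # postgresql.conf - durability-related settings (the usual safe values)
        fsync = on                    # force WAL writes to be flushed to disk
        synchronous_commit = on       # wait for the WAL flush before reporting commit success
        wal_sync_method = fdatasync   # as already configured in the question
        full_page_writes = on         # protects against torn pages after a crash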

  • Migrate an intermediate CA to a new root

    - by Tim Brigham
    Using the Microsoft CA, is there any way to cut over to a new certificate authority from an intermediate authority? Both of my systems are Microsoft CAs - I have a 2008 R2 Enterprise CA (intermediate) and an old 2003 CA (root). The 2003 box bit the dust and I don't have good backups. I still have a few months before the CRL expires; instead of having to cut over to a new intermediate authority, is there a ready way to simply point this intermediate authority at a new offline CA?

  • Is there a way to determine which service makes an outgoing connection?

    - by fluxtendu
    I'm redoing my firewall configuration with more restrictive policies and I would like to determine the provenance (and/or destination) of some outgoing connections. The issue is that they come from svchost.exe and go to web content/application delivery providers - or similar:

        5 IPs in range 82.96.58.0 - 82.96.58.255     --> Akamai Technologies (akamaitechnologies.com)
        3 IPs in range 93.150.110.0 - 93.158.111.255 --> Akamai Technologies (akamaitechnologies.com)
        2 IPs in range 87.248.194.0 - 87.248.223.255 --> LLNW Europe 2 (llnw.net)
        205.234.175.175                              --> CacheNetworks, Inc. (cachefly.net)
        188.121.36.239                               --> Go Daddy Netherlands B.V. (secureserver.net)

    So is it possible to know which service makes a particular connection? And what would you recommend for the rules applied to these? (Comodo Firewall & Windows 7)
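
    On Windows 7 you can usually map a connection back to the owning process, and then to the services hosted inside that process, with built-in tools; a sketch, where the PID is a placeholder taken from the netstat output:

        rem run from an elevated prompt; -b shows the executable, -o the owning PID, -n keeps addresses numeric
        netstat -bno
        rem list the services hosted by that svchost.exe PID (1044 is a placeholder)
        tasklist /svc /fi "PID eq 1044"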

  • Spawn phone call from EC2 alerts

    - by Matt
    I have a system set up on AWS/EC2, and it currently uses their CloudWatch alert system. The problem is that this only sends email, when ideally I would like it to make a phone call and/or send text messages to certain phone numbers when an alert fires (note that I do not need the phone call to actually say anything, just call the person). We are trying to solve the problem that Amazon alerts are only useful if people are checking their email, which isn't always the case, because all server problems love to happen at 4am on a Saturday... Please respond with any possible solutions/ideas; ideally I do not want to implement an entire monitoring system (e.g. Nagios) on top of everything just to handle this.
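
    For the text-message half, one low-effort route is to have the CloudWatch alarm publish to an SNS topic that has an SMS subscription instead of (or in addition to) the email one. A rough sketch with the unified AWS CLI; the topic name, ARN, region and phone number are placeholders, and SMS delivery support varies by region:

        aws sns create-topic --name server-alerts
        aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:server-alerts \
            --protocol sms --notification-endpoint +15551230000
        # then point the CloudWatch alarm's --alarm-actions at that topic ARN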

  • AWS RDS Timeout

    - by warder57
    I know next to nothing about networking/servers. So I'm assuming I'm missing something obvious. All of the resources I can find on this, either don't work or are outdated. I created a brand new AWS account on the free plan. I created a postgres RDS DB instance. I made sure that this RDS instance is set to publicly accessible. This RDS instance has the default VPC/Security Group settings. In order to connect to this DB from my local machine, I used pgadmin3 and followed the instructions provided on the AWS documentation page. Seen here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html I've double checked all of the information required to connect: Host: whatever.whatever.us-west-2.rds.amazonaws.com Port: 5432 Username: USERNAME Password: PASSWORD When I try to connect to the database, my connection fails due to a timeout. (During step 4 in the above guide.) Can anyone point me to whatever I am missing? Thanks in advance
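
    A connection timeout (rather than an authentication error) usually means nothing is reachable on port 5432 at all, which most often comes down to the instance's security group not allowing inbound PostgreSQL traffic from your IP. A sketch of adding that rule with the AWS CLI; the group ID and source address are placeholders:

        aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
            --protocol tcp --port 5432 --cidr 203.0.113.25/32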

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as being able to take advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would using the second option allow me to have only one public IP address and route SSH connections through port numbers to the respective instances? Thanks in advance!
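
    On the last point, a custom front-end instance can indeed multiplex SSH by port number. A minimal sketch using iptables on the public-facing box; the private addresses and port numbers are placeholders, and forwarding has to be enabled in the kernel:

        # allow the box to forward packets at all
        sysctl -w net.ipv4.ip_forward=1
        # public port 2201 -> SSH on one private instance, 2202 -> another
        iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 10.0.1.11:22
        iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination 10.0.1.12:22
        # rewrite the source so return traffic flows back through this box
        iptables -t nat -A POSTROUTING -d 10.0.1.0/24 -p tcp --dport 22 -j MASQUERADE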

  • Creating a URL from AWS CloudFront

    - by GroovyUser
    I have JS files which I have uploaded to Amazon S3 and linked with CloudFront. I got a URL that looks something like this: dxxxxxxxx.cloudfront.net. But opening that URL in my browser gives an error:

        <Error>
          <Code>AccessDenied</Code>
          <Message>Access Denied</Message>
          <RequestId>xxxxxx</RequestId>
          <HostId>xxxxxxxxxxxxxxx</HostId>
        </Error>

    What I actually want is to use the URL to add the files to my webpage. How can I do that? Thanks in advance.
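
    Two things commonly produce this error: requesting the distribution's bare hostname without an object path (and without a default root object configured), or the objects in the bucket not being readable by anonymous requests. As a sketch only, a bucket policy along these lines makes the objects publicly readable so the distribution can serve them at dxxxxxxxx.cloudfront.net/your-file.js; the bucket name is a placeholder:

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
          }]
        }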

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the X.509 certificate and private key, and for how that relates to the various security files the guide creates. Update: I have found a couple of links that describe some of the steps. These steps seem to work, however I'm not sure if this is secure and the best way to do it:

        # 1) Create a private key
        openssl genrsa -out my-private-key.pem 2048
        # 2) Create the X.509 cert (hit Enter to accept all of the defaults)
        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Then, from the IAM dashboard, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
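
    For what it's worth, with the classic EC2 API tools that guide refers to, the generated pair is typically wired up through two environment variables; a sketch, with placeholder paths:

        export EC2_PRIVATE_KEY=$HOME/.aws/my-private-key.pem
        export EC2_CERT=$HOME/.aws/my-x509-cert.pem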

  • Why does an EBS volume attached to an Ubuntu 12.04 EC2 instance as /dev/sdh1 appear as /dev/xvdh1?

    - by Andres
    When attaching an EBS volume on Ubuntu specified as /dev/sdh1, it actually shows up as /dev/xvdh1. The AWS console still thinks it's attached at /dev/sdh1, so it took a while to realize that it was actually there, just under a different name. I ran into this problem a long time ago using Ubuntu on EC2, and I just ran into it again: https://forums.aws.amazon.com/post!reply.jspa?messageID=351382 It seems I'm not alone: https://forums.aws.amazon.com/thread.jspa?threadID=68957&tstart=0 I haven't found a good answer as to why this happens or how to fix it. Any ideas?
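
    If it helps, the usual workaround (a sketch only, not a fix for the underlying renaming) is simply to refer to the xvd name the kernel's paravirtual block driver creates, or to add a symlink so the name the console reports also exists:

        # the volume the console calls sdh1 is exposed by the kernel as xvdh1
        ls -l /dev/xvdh1
        # optional: make the console's name point at the real device
        sudo ln -s /dev/xvdh1 /dev/sdh1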

  • Dynamically loading Chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs which I'll then put into production. I want to have Chef monitor a URL stored in SimpleDB; the URL would point to a tarball in S3. There would be different URLs, one for a config tarball and one for a code tarball. When I update the URL in SimpleDB, I want Chef to spot this and pull in and apply the configs / deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?
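
    As a rough sketch of the "monitor a URL" half: once the tarball URL has been placed in a node attribute (for example by a small helper that queries SimpleDB with the aws-sdk gem, which this sketch assumes and does not show), a recipe can re-download and redeploy only when the remote file changes. All names here are hypothetical:

        # hypothetical recipe: node['myapp']['code_url'] is assumed to be populated from SimpleDB elsewhere
        remote_file '/tmp/app-code.tar.gz' do
          source node['myapp']['code_url']
          notifies :run, 'execute[unpack-app-code]', :immediately
        end

        execute 'unpack-app-code' do
          command 'tar -xzf /tmp/app-code.tar.gz -C /srv/myapp'
          action :nothing
        end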

  • Migrating an Active Directory domain controller to AWS

    - by Xavier Hutchinson
    I am required to migrate an Active Directory server into AWS along with a couple of other servers (SQL and IIS) to create a dev and test environment for our network / development. My plan at this time is to simply rebuild the Active Directory server in AWS from scratch - which is quite time consuming indeed! I was wondering if anyone had a recommendation for a better and more efficient approach to migrating a copy of a physical Active Directory server to the cloud? The server is Windows Server 2012. Thank you!

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf:

        /mnt/je/logs/apache/jesites/web/*.log {
            missingok
            rotate 0
            size 5M
            copytruncate
            notifempty
            sharedscripts
            postrotate
                /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
            endscript
        }

    And the compress-and-upload.sh script:

        #!/bin/sh
        # Perform Rotated Log File Compression
        tar -czPf $1/log.gz $1/*.1

        # Fetch the instance id from the instance
        EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
        if [ -z $EC2_INSTANCE_ID ]; then
            echo "Error: Couldn't fetch Instance ID .. Exiting .."
            exit;
        else
            /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
        fi

        # Removing Rotated Compressed Log File
        rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there a log file I can check to see if there are any permission issues? If I execute the script directly from the command line, the file upload works. Thanks.
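
    For debugging the postrotate hook itself, two logrotate flags are usually the first stop; a sketch only, and the config file path below is a placeholder for wherever this stanza actually lives:

        # dry-run: prints what logrotate would do, including whether postrotate would fire
        sudo /usr/sbin/logrotate -d /etc/logrotate.d/jesites
        # force a real rotation with verbose output, then check syslog / cron's mail for errors
        sudo /usr/sbin/logrotate -vf /etc/logrotate.d/jesites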

  • Cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    Two AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC. Both have an Elastic IP, and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node, and the instance in the VPC is running munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow line with the IP address (regular expression) of the munin server to /etc/munin/munin-node.conf, and I bind munin-node to any interface using host *. Then I did sudo service munin-node restart and ran netstat:

        $ sudo netstat -at | grep munin
        tcp        0      0 *:munin       *:*       LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open    http
        4949/tcp closed  munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open  http
        4949/tcp open  munin

    So from the outside, the HTTP port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find if recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we may just backup the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But, it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to backup S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then using something like duplicity (http://duplicity.nongnu.org/) we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket...and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first.

    So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
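
    On the first question, the mechanics are easy to try out on a scratch bucket: with versioning enabled, deleting a never-modified object normally just inserts a delete marker, and the original version stays retrievable. A sketch with the AWS CLI; bucket and key names are placeholders:

        aws s3api put-bucket-versioning --bucket my-test-bucket \
            --versioning-configuration Status=Enabled
        aws s3 rm s3://my-test-bucket/some-file.txt
        # list all versions, including the delete marker and the preserved original
        aws s3api list-object-versions --bucket my-test-bucket --prefix some-file.txt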

  • Unable to access the WCF service over VPN!

    - by kurozakura
    Here's the scenario: I'm on network A, and I use a VPN client to connect to network B in order to access a web service that is only reachable from network B. Even though I'm connected to network B, I'm unable to access the web service link. Do I need to configure any settings? If I start out on network B and connect to network A with the VPN client, I can still access the web service link; it's only the other way around that doesn't work.

  • Options for PCI-DSS on AWS - file integrity monitoring and intrusion detection

    - by Brill Pappin
    I need to deploy file integrity monitoring and intrusion detection software on AWS instances. I really wanted to use OSSEC, however it does not work well in an environment where servers can auto-deploy and shut down based on load, because it requires server-managed keys to be generated; including the agent in the AMI therefore does not allow monitoring as soon as an instance comes up. There are many options out there, and several are listed in other posts on this site, however none that I've seen so far deal with the unique problems inherent in AWS or cloud-based deployments in general. Can anyone point me at some products, preferably open source, that we might use to cover those portions of PCI DSS that require this software? Has anyone else achieved this on AWS?

  • Public DNS Server fails on Windows Amazon EC2

    - by Adroidist
    I have started a new Windows server instance on Amazon EC2. The security group has the following rules:

        Port   Protocol   Source
        22     tcp        0.0.0.0/0
        80     tcp        0.0.0.0/0
        443    tcp        0.0.0.0/0
        3389   tcp        0.0.0.0/0
        53     udp        0.0.0.0/0
        -1     icmp       0.0.0.0/0

    I am able to ping the public DNS name of the machine and I can connect to it using Windows Remote Desktop Connection. However, when I put the public DNS name into my web browser, it fails to connect. Moreover, I used FileZilla and PuTTY (and in both I loaded the private key .pem) but I receive "connection timed out". I disabled the firewall on both my PC and the instance (which I entered using Remote Desktop Connection). Can you please tell me what I am missing?
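
    Since the security group already allows 80 and 443, the next thing to verify on the instance itself is that something is actually listening on those ports and that the Windows Firewall profiles really are open. A sketch of the checks from an elevated command prompt on the instance:

        rem is anything listening on the web ports?
        netstat -ano | findstr ":80 :443"
        rem confirm the firewall state for all profiles
        netsh advfirewall show allprofiles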

  • Amazon AWS VPC: how to open a port?

    - by Victor Piousbox
    I have a VPC with public and private subnets; I am considering only the public subnet for now. I can ssh into the node 10.0.0.23. Let's say I want to connect to MySQL on the node using its private address:

        ubuntu@ip-10-0-0-23:/$ mysql -u root -h 10.0.0.23
        ERROR 2003 (HY000): Can't connect to MySQL server on '10.0.0.23' (111)

        ubuntu@ip-10-0-0-23:/$ mysql -u root -h localhost
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        --- 8< --- snip --- 8< ---
        mysql>

    Why is port 3306 not reachable when I use the private IP? My security group allows port 3306 inbound from 0.0.0.0/0 AND from 10.0.0.0/24; outbound, everything is allowed. The generic setup done by Amazon through their wizard does not work... I added an ACL that allows everything for everybody, and it still does not work. What am I missing?
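
    Error 2003/(111) from the MySQL client is "connection refused", which usually means the daemon is only bound to the loopback interface - something no security group or network ACL can change. A sketch of the usual fix on Ubuntu, assuming the stock mysql-server package's config location:

        # /etc/mysql/my.cnf - in the [mysqld] section
        # bind-address = 127.0.0.1    <- the default: loopback only
        bind-address = 0.0.0.0        # listen on all interfaces, then: sudo service mysql restart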

  • How to configure Amazon Security Groups to achieve multi-tier architecture?

    - by ks78
    What is the preferred way to configure Amazon Security Groups to achieve a multi-tier architecture? Each of my instances has its own security group, which I only want to use for rules specific to that instance. I'd like to keep any rules which apply to multiple instances in a separate security group, which can then be assigned to the instance security groups as necessary. As an example, I've set up a group called "admin", which allows administrative access from my IP. I added the "admin" group as the source to each of my instance security groups. However, I still can't access the instances from my IP without adding the rules directly to the instance's group. Am I missing something? Although it seems a multi-tier security architecture should be possible, it doesn't seem to be working.

  • Authority Information Access local path being ignored

    - by Kevin
    I have a CA set up in Server 2008 R2, and generally it is working, but I can't control the local path/filename it writes its own certificate to for the Authority Information Access publishing. Here's a screen shot of the dialog I'm trying to set this on. From these settings I would expect to get the file C:\Windows\system32\CertSrv\CertEnroll\DAMNIT.crt, but instead I get C:\Windows\system32\CertSrv\CertEnroll\SERVER.domain.com_My Issuing Authority(1).crt. Of course, the actual change shown wouldn't be very useful, but it's illustrative; no matter what path/filename I use, it always ends up in the same place and with the same name. I actually wanted to change the name from <ServerDNSName>_<CaName><CertificateName>.crt to <CaName><CertificateName>.crt, since the latter corresponds to the HTTP URL whereas the former does not. Admittedly, I haven't set up many CAs, so perhaps I'm just deluded as to what this dialog is supposed to be setting, but if so this is notoriously bad UI design. (Incidentally, I have a couple of other complaints about the same dialog.) What's going on here, and is there some way to get the filename pattern I want?
