Search Results

Search found 15442 results on 618 pages for 'drupal services'.


  • Typical Service Response Time for software vendors [closed]

    - by Miky D
    I'm trying to find out what standard service/tech-support response times are expected of a software vendor. A customer is asking us to enter into a technical-support agreement for a software application we're selling. Basically, I'm interested in the typical turnaround times (i.e. time to respond and time to resolution) based on the severity of the issue. I'm also interested in the financial structure of such agreements: a charge per incident, a bundle with unlimited incidents per customer, etc. Any information, or suggestions of where to find it (even examples from other software vendors' websites), would be greatly appreciated!

    Read the article

  • Windows 2003 server RRAS on VPC

    - by Saif
    I'm trying to set up an L2TP VPN server (to give users access to all my VPC instances) on a Windows 2003 instance running in my VPC. While trying to enable RRAS I get the error "less than two network interfaces were detected on this machine". Evidently that's because only one network interface is available, the one with the private IP. I have an Elastic IP assigned to this instance as well, but RRAS can't see it. What should I do so that RRAS can see the interface with the Elastic IP?
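
    For what it's worth, RRAS will never see the Elastic IP as an interface: EC2 maps Elastic IPs onto the instance's private interface via NAT, so the machine really does have only one NIC. The stock workaround for the "less than two network interfaces" check is to add a Microsoft Loopback Adapter as the second interface; a sketch using Microsoft's devcon support tool (its install path varies):

        rem Install the Microsoft Loopback Adapter as a second NIC:
        devcon.exe install %windir%\inf\netloop.inf *msloop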

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a web server, i.e. it will run Apache, PHP and MySQL. There will be between 2 and 8 small websites running, with only very little traffic and workload. High availability is, however, very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime, or only a minimum of it - and no data loss. I have considered Amazon AWS with Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals to $185/month the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can connect over SSH to an instance in the public subnet, as well as to the NAT. I can even SSH-connect to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem user@1.2.3.4 -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the tool for this, so I went ahead and researched and played around with some routing with it. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection-refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions, in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
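
    A few pieces commonly go missing in exactly this kind of setup. The sketch below assumes the public instance is the forwarder, eth0 is its only interface, and the instance ID is illustrative; it is a checklist, not a drop-in fix for the configuration quoted above:

        # 1. The kernel must be allowed to forward packets at all:
        sysctl -w net.ipv4.ip_forward=1

        # 2. EC2 silently drops packets whose source/destination don't match the
        #    instance unless the source/dest check is disabled:
        aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

        # 3. DNAT inbound 1300 to the private host, and SNAT the forwarded traffic
        #    so the private host's replies come back through this instance:
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        iptables -t nat -A POSTROUTING -d 10.10.10.10/32 -p tcp --dport 1300 -j MASQUERADE
        iptables -A FORWARD -d 10.10.10.10/32 -p tcp --dport 1300 -j ACCEPT
        iptables -A FORWARD -s 10.10.10.10/32 -p tcp --sport 1300 -j ACCEPT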

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are showing that we have an incomplete chain. We have tried the following to resolve this:

    - Upgraded haproxy to the latest 1.5.3.
    - Created a concatenated ".pem" file containing all the certificates (site, intermediate, with and without root).
    - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file.

    The ".pem" file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise about anything else we can check, or any other changes we can make to try to fix this? Many thanks in advance... UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainaname.com.pem
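
    haproxy serves whatever chain is inside the file named by "crt"; the usual working arrangement is the server certificate first, then the intermediate(s) working up toward the root (the root itself is normally omitted), then the private key. A minimal sketch, with illustrative file names for the Comodo intermediates:

        # Order matters: leaf cert, then intermediates, then the key.
        cat site.crt \
            COMODORSADomainValidationSecureServerCA.crt \
            COMODORSAAddTrustCA.crt \
            site.key > /etc/ssl/domainname.com.pem

        # Check the served chain the way an external client would:
        openssl s_client -connect www.example.com:443 -servername www.example.com -showcerts </dev/null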

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances, i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently, and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that will slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets, matching the local routing policy:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following: (1) I don't know if it's possible, but perhaps decrease the priority of the local table so the packets match table 10003 instead; (2) use iptables to mangle these packets, and update the local table route to include the mark information. But I'm not sure if these are the right approaches.
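
    A minimal sketch of the first idea (consulting table 10003 before the local table for traffic sourced from the ENI). It assumes the rules shown above; re-ordering rules on a remote machine can cut you off, so keep an out-of-band console handy:

        # Put an explicit high-priority rule for the ENI address first:
        ip rule add pref 100 from 10.1.1.190 lookup 10003

        # Re-add the local table behind it, then drop the original pref-0 entry:
        ip rule add pref 200 lookup local
        ip rule del pref 0

    Since table 10003 only holds "default via 10.1.1.1 dev eth3", everything sourced from 10.1.1.190, including the missing ACKs, should then leave via eth3.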

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes and number of items) of an Amazon S3 bucket, and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale, since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-months, it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l, but that seems like a hack. The Ruby AWS::S3 library looked promising, but it only provides the number of bucket items, not the total bucket size. Is anyone aware of any other command-line tools or libraries (Perl, PHP, Python, or Ruby preferred) that provide ways of getting this data?
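
    The question predates the unified aws CLI, but as a sketch with today's tooling (bucket name and dates are illustrative): the first command sums sizes by listing, one request per 1000 objects, much like s3cmd du; the second reads the stored-bytes figure S3 itself reports to CloudWatch once a day, which is essentially the number billing is based on:

        aws s3 ls s3://bucket_name --recursive --summarize | tail -n 2

        aws cloudwatch get-metric-statistics \
          --namespace AWS/S3 --metric-name BucketSizeBytes \
          --dimensions Name=BucketName,Value=bucket_name Name=StorageType,Value=StandardStorage \
          --start-time 2024-01-01T00:00:00Z --end-time 2024-01-02T00:00:00Z \
          --period 86400 --statistics Average

        # Item count: same call with NumberOfObjects and StorageType=AllStorageTypes.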

    Read the article

  • Separation of memory-oriented processes and CPU-oriented processes

    - by Jeevan Dongre
    I am a devops guy working for an e-commerce company. I am running my e-commerce application, built with Ruby on Rails (Spree Commerce), and I presently have 2 medium instances in production: one is a high-memory instance, which has 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance, which has a dual-core CPU. AWS calls them m1.medium and c1.medium instances, respectively. My question: is it possible to separate the processes into CPU-intensive and memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me a heads-up. Thank you.
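
    As a first pass at classifying what is actually running, plain ps can rank processes by resident memory versus CPU share (GNU ps syntax):

        # Top memory consumers:
        ps aux --sort=-%mem | head -n 10

        # Top CPU consumers:
        ps aux --sort=-%cpu | head -n 10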

    Read the article

  • Can I associate my spare Elastic IP addresses with an Amazon EC2 instance started in an Auto Scaling group with monitoring?

    - by undefined
    I want to know if I can reserve a number of Amazon Elastic IP addresses and assign them to instances started by Auto Scaling. So basically, when a new instance is started because a trigger has fired, can I also have the API look for a spare IP address and allocate it to the instance? I need to do this because the started instance will need to communicate with a server outside the cloud and get through a firewall which will only allow remote access from a predefined set of IP addresses. So I think I need to reserve some IPs, add them to my firewall settings, then allocate them (automatically) when a new instance is started. Any ideas?
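
    One hedged way to do this is a boot script (e.g. in the launch configuration's user data) that grabs the first unassociated Elastic IP. Everything below exists in the current aws CLI, but the instance needs credentials (an IAM role) that permit these calls:

        #!/bin/bash
        # Ask the metadata service who we are:
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

        # First Elastic IP with no current association:
        ALLOC_ID=$(aws ec2 describe-addresses \
          --query 'Addresses[?AssociationId==null].AllocationId | [0]' --output text)

        aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"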

    Read the article

  • Does setting an A record for a root domain set it (automatically) for subdomains?

    - by Edan Maor
    I bought a domain from Dreamhost, but my servers are actually running on Amazon's AWS. I have an Elastic IP, say 1.1.1.1. In the Dreamhost panel, I've added an A record for my domain name, pointing it to 1.1.1.1. My question is: are all subdomains (e.g. www.mydomain.com, a.mydomain.com, etc.) automatically mapped to 1.1.1.1 as well, because the root is? Or do I have to add separate records for each subdomain?
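
    For what it's worth, an A record on the bare domain does not cascade to the names beneath it; each name needs its own record, or a wildcard. A BIND-style sketch:

        mydomain.com.      A  1.1.1.1   ; the root only
        www.mydomain.com.  A  1.1.1.1   ; explicit per-subdomain record...
        *.mydomain.com.    A  1.1.1.1   ; ...or a wildcard matching any subdomain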

    Read the article

  • How to secure a group of Amazon EC2 instances

    - by ks78
    I have several Amazon EC2 instances running Ubuntu 10.04, and I've recently started using Amazon's Route 53 as my DNS. The purpose of doing that was to allow the instances to refer to each other by name rather than by private IP (which can change). I've pointed my domain name (via GoDaddy) to Amazon's name servers, allowing me to access my EC2 web servers. However, I noticed I can now access the EC2 instances which I don't want to be public, such as the dedicated MySQL server. I was thinking Amazon's security groups would still be in effect when using Route 53, but that doesn't seem to be the case. Before I started using Route 53, I was thinking of having one instance run a reverse proxy, which would help protect the web servers behind it, and then IP-restricting all the other instances. I know IP restriction can be done using the firewall within each instance, but should I ever need to access them from another IP address, I'd need a way in. Amazon's control panel made it a breeze to open a port when necessary. Does anyone have any suggestions for keeping EC2 instances secure, but also accessible to their administrator? Also, what's the best topology for a group of EC2 instances, consisting of web servers and a dedicated database server, from a security perspective? Does having a reverse proxy server even make sense?
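
    A note on the premise: security groups filter by source address and port no matter how the client learned the instance's name, so DNS in Route 53 cannot by itself expose an instance - if the MySQL server answers from the internet, some group rule permits it. A hedged sketch of the usual layering (group names and the admin address are illustrative):

        # Web tier open to the world:
        aws ec2 authorize-security-group-ingress --group-name web-sg --protocol tcp --port 80  --cidr 0.0.0.0/0
        aws ec2 authorize-security-group-ingress --group-name web-sg --protocol tcp --port 443 --cidr 0.0.0.0/0

        # Database reachable only from members of the web tier's group:
        aws ec2 authorize-security-group-ingress --group-name db-sg --protocol tcp --port 3306 --source-group web-sg

        # SSH only from the administrator's address:
        aws ec2 authorize-security-group-ingress --group-name db-sg --protocol tcp --port 22 --cidr 203.0.113.5/32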

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code, I currently do this for each machine: shut down Apache; update the code; start the server again and proceed to the next one. I think that when my server is shut down, ELB will not distribute requests to it, but what about the requests it is still serving? I think a better approach is: (1) stop accepting new requests from ELB; (2) sleep for some time, and shut down the web server only once all requests have been responded to; (3) update the code; (4) start the server again. But how do I perform (1) and (2) from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
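
    A sketch of that sequence with the aws CLI (classic-ELB commands; the load balancer name, instance IDs and hostnames, and the deploy-code command are all illustrative):

        #!/bin/bash
        LB=my-load-balancer
        for PAIR in i-aaaa1111:web1.example.com i-bbbb2222:web2.example.com; do
          ID=${PAIR%%:*}; HOST=${PAIR#*:}

          # (1) Stop sending new requests to this instance:
          aws elb deregister-instances-from-load-balancer --load-balancer-name "$LB" --instances "$ID"

          # (2) Let in-flight requests drain (ELB connection draining, if enabled,
          #     also handles this on the server side):
          sleep 30

          # (3)+(4) Deploy and restart, then put the instance back in rotation:
          ssh "$HOST" 'sudo apachectl graceful-stop && deploy-code && sudo apachectl start'
          aws elb register-instances-with-load-balancer --load-balancer-name "$LB" --instances "$ID"
        done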

    Read the article

  • Configure non-destructive Amazon S3 bucket policy

    - by Assaf
    There's a bucket into which some users may write their data for backup purposes. They use s3cmd to put new files into their bucket. I'd like to enforce a non-destruction policy on these buckets - meaning it should be impossible for users to destroy data; they should only be able to add it. How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
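
    One caveat worth knowing: a bucket policy cannot test whether an object already exists, so s3:PutObject always permits overwriting; the usual mitigation is to enable versioning so an overwrite preserves the old version. With that limitation, a sketch of an add-only policy (account ID, user, and bucket names are illustrative):

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "AllowPutOnly",
              "Effect": "Allow",
              "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-user"},
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::backup-bucket/*"
            },
            {
              "Sid": "DenyDestruction",
              "Effect": "Deny",
              "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-user"},
              "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
              "Resource": "arn:aws:s3:::backup-bucket/*"
            }
          ]
        }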

    Read the article

  • How to combine AWS and dedicated external servers?

    - by rfw21
    I have an extensive network of servers, all currently hosted on AWS EC2. For reasons of cost I plan to migrate gradually to dedicated servers where possible. So: how can I best combine AWS and non-AWS servers in my network? Ideally, I should be able to assign internal IP addresses to the external servers, include them in AWS security groups, and ensure that all private traffic between my AWS servers and external servers is secure.
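
    Security groups apply only to EC2 instances (external machines can be allowed in by CIDR at best), but a site-to-site VPN gets you the private addressing and encrypted transit. A minimal point-to-point OpenVPN sketch for the AWS-side gateway (hostname, addresses, and ranges are illustrative; certificate-based setups are sturdier than a static key):

        # /etc/openvpn/site-to-site.conf
        dev tun
        remote dedicated-gw.example.com
        ifconfig 10.9.0.1 10.9.0.2        # local / remote tunnel endpoints
        route 10.200.0.0 255.255.0.0      # dedicated DC's private range via the tunnel
        secret /etc/openvpn/static.key    # pre-shared key
        keepalive 10 60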

    Read the article

  • What software to use for data backup?

    - by ViliusK
    What software should I use to make backups of my computer files? Features I need:

    - copies should be backed up to my Ubuntu server;
    - the client software should monitor the folders I've chosen to back up;
    - the client software should run on Windows XP, Windows 7, and Linux;
    - there should be a web UI to view backed-up file versions;
    - it should be possible to see diffs against older versions;
    - backups should be done over the Internet to a remote machine - the server.

    Any suggestions?

    Read the article

  • Temporary user-profiles on Windows Server 2008 TS

    - by sinni800
    Hello! For a publicly accessible terminal server, I have created a user profile which only allows running a few programs (demonstrations of applications). This results in many people connecting to the same user name on the server, essentially sharing the same profile. How can I copy the original, empty profile to a separate directory on every logon, and delete it afterwards, so everybody starts with a clean copy of the "Guest" account?
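
    For what it's worth, Windows has a built-in mechanism for exactly this: a mandatory profile, whose changes are discarded at logoff, so every logon starts from the stored copy. The switch is a file rename in the stored profile folder (path illustrative), plus pointing the account's Terminal Services profile path at that folder:

        rem In the stored profile folder (e.g. \\server\profiles\Guest):
        ren NTUSER.DAT NTUSER.MAN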

    Read the article

  • SSRS report on SharePoint Web Part

    - by MicroSumol
    I have this configuration: DBK - SQL/SSRS/SSAS (includes the SharePoint databases); SPK - SharePoint. I created a SharePoint site with an SSL certificate. Then on DBK I set up SSRS with SSL. Finally, I went back to SharePoint and set up a web part on a subsite to connect to the SSRS report. The problem is that the user is asked to authenticate twice: once when he logs into SharePoint, then again when he wants to see the SSRS report. Since I am not an expert on SSRS, I am asking: is there an easy way to pass the SharePoint credentials to the SSRS report? Would it be easier to install SSRS on SPK? Would that even work or solve my problem?
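
    A double prompt across two machines usually means the credential can't be delegated (NTLM stops at one hop), so the standard fix is Kerberos with delegation configured for both services. As one hedged fragment of that setup, SSRS must be willing to negotiate Kerberos in rsreportserver.config:

        <Authentication>
          <AuthenticationTypes>
            <RSWindowsNegotiate/>
          </AuthenticationTypes>
        </Authentication>

    SPNs for the SSRS and SQL service accounts, plus the delegation settings in Active Directory, are the other (larger) part of that work.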

    Read the article

  • Get a list of running EC2 instances programmatically

    - by user113981
    Hi, I have started with AWS and found out that I can get a list of running servers with the AWS PHP SDK. Is there any other way to get the list of all EC2 instances? After getting the list, I want to sync data from one main instance to all the other instances; something like a button click should be able to trigger the operation. Are rsync and incron the only options, or can it also be done with the AWS PHP SDK? Please provide some tutorial links.
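
    Outside the PHP SDK, the same listing is a single call in any SDK or in the aws CLI; a sketch (the query fields are a matter of taste):

        aws ec2 describe-instances \
          --filters Name=instance-state-name,Values=running \
          --query 'Reservations[].Instances[].[InstanceId,PublicDnsName]' \
          --output text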

    Read the article

  • AWS RDS MySQL with Beanstalk Hibernate app: Character encoding issue

    - by TeraTon
    I'm running a webapp on Amazon RDS with Tomcat 7 and Spring, which uses Hibernate as the persistence layer. The application and UTF-8 encoding work properly on localhost, but for some reason the UTF-8 encoding breaks when I deploy to Amazon. I use MySQL 5.5.27 on Amazon RDS, and the table that we wish to update has its collation set to utf8 - utf8_unicode_ci. And in Hibernate I have set:

        <prop key="hibernate.connection.charSet">UTF-8</prop>

    UTF-8 characters get replaced by ???, and this is of course especially bad for passwords, usernames, and email addresses, as it basically kills them. Has anyone else encountered character encoding breaking when deploying to Amazon?
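
    Two places that commonly differ between localhost and RDS, sketched with illustrative names: the Connector/J URL parameters, and the server-side defaults (stock MySQL 5.5 defaults to latin1 unless a parameter group says otherwise):

        # JDBC URL parameters for Connector/J:
        jdbc:mysql://mydb.example.rds.amazonaws.com:3306/mydb?useUnicode=true&characterEncoding=UTF-8

        # Server-side defaults via a custom DB parameter group:
        aws rds modify-db-parameter-group --db-parameter-group-name my-utf8-group \
          --parameters "ParameterName=character_set_server,ParameterValue=utf8,ApplyMethod=immediate" \
                       "ParameterName=character_set_client,ParameterValue=utf8,ApplyMethod=immediate"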

    Read the article

  • How to add a second domain to an EC2 instance with Elastic & Route 53

    - by memeLab
    I've got my domain site.com running on EC2, using an Elastic IP and Route 53. I want to park site.net so that it resolves to the same site. I've looked up "Migrating an Existing Domain to Route 53" in the docs, but can't find mention of how to add a second domain! I figured I'd have to create an A record, but when I do so, the record is created as site.net.site.com - not quite what I'm after! I've also done searches for mixes of "route 53", "park domain", "addon domain", "second domain", but no dice... My prior experience is with cPanel and Plesk, so I'm a bit lost! Any pointers would be appreciated! TIA
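
    The record came out as site.net.site.com because it was entered as a relative name inside site.com's hosted zone; a second domain needs its own zone. A sketch with the aws CLI (zone ID and IP are illustrative):

        # 1. New hosted zone for the second domain, then point site.net's
        #    registrar at the four name servers Route 53 hands back:
        aws route53 create-hosted-zone --name site.net --caller-reference add-site-net-1

        # 2. Bare A record in that zone, aimed at the same Elastic IP:
        aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 \
          --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":
            {"Name":"site.net.","Type":"A","TTL":300,
             "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'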

    Read the article

  • How can I match an AWS account number to a key and secret

    - by iwein
    My client gave me a key and a secret to manage his EC2 things, but to make one of my AMIs available to run I have to fill in the account number. Is it possible to deduce the account number from the key and the secret? Obviously I also asked the client for this information, but since it's the weekend and I'm not fond of waiting, I wanted to see if I could figure it out myself. Have you done this before?
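
    You can't decode the account number from the key material itself, but the credentials can ask AWS who they belong to. With today's CLI, either of these prints the 12-digit account ID:

        aws sts get-caller-identity --query Account --output text
        aws iam get-user --query User.Arn --output text   # the ARN embeds the account number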

    Read the article

  • How does one debug Windows network share authentication?

    - by ajs410
    I have machine0 with 32-bit Vista, logged in as a domain user, running a VMware image of 32-bit Vista, logged in as a local user, with the VM set to bridge the network. From an administrator account (called admin) within the VM, I try to access the hidden C$ share on machine0 (i.e. Start - Run - "\\machine0\C$\"). I get no prompt for credentials. Worse, machine0 has an admin account (different password), and machine0\admin gets locked out when VM\admin tries to access the network share. I get a message several seconds later, which feels like a cached-credential failure leading to the lockout. I have checked several places for cached credentials: net use, Stored Usernames and Passwords, mapped shares. I rebooted (both machine0 and the VM) to make sure the session was clear of any cached credentials. I can force net use to use my domain credentials when accessing machine0, and then I can see the share. I can also see shares that do not require credentials. I decided to try another machine on the network (machine1), 64-bit Vista, local user. This machine has no lockout policy, and after several seconds (feels like failed cached credentials again) it prompts me for credentials. After I enter them, it re-prompts me, saying "logon unsuccessful" (I tried my domain credentials, and also machine1\admin's) - which is bogus, because I can proceed to log on with Remote Desktop using the machine1\admin credentials. I have tried this on another machine (machine2, 64-bit Vista), running a copy of the same 32-bit VM, and I don't remember having this problem. machine0 has a fingerprint reader... could that try storing passwords and interfere? Are there any places I'm missing where there could be cached credentials? Is there a way to see what credentials are flying around when I try to connect?
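
    One more cache worth checking, sketched for a Vista-era command prompt in the VM's session (cmdkey manages the same store as "Stored Usernames and Passwords", but sometimes shows entries the GUI doesn't):

        rem List everything Windows has stashed for network logons:
        cmdkey /list

        rem Remove an entry for the target machine, then retry with explicit credentials:
        cmdkey /delete:machine0
        net use \\machine0\C$ /user:machine0\admin *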

    Read the article

  • "service"-command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/. However, when I start the service using the service command, it doesn't work. From man service: "service runs a System V init script in as predictable an environment as possible, removing most environment variables and with the current working directory set to /." So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hardcode the path into the script, but I'd like to know how to use it with the service command.
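
    The conventional home for such settings is a per-service environment file that the init script sources itself, so it survives the scrubbed environment. A sketch (file and variable names are illustrative; Debian-style systems use /etc/default, RHEL-style use /etc/sysconfig):

        # /etc/default/myservice
        MYAPP_HOME=/opt/myapp

        # near the top of /etc/init.d/myservice:
        [ -f /etc/default/myservice ] && . /etc/default/myservice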

    Read the article
