Search Results

Search found 50945 results on 2038 pages for 'web testing'.

  • Monitoring AWS Systems Behind Elastic Beanstalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, and so on, in a centralized fashion -- e.g., Nagios + Urchin. The biggest impediment to my endeavors is the following: the company application is deployed as a Java *.WAR file, uploaded to an Elastic Beanstalk application environment that load-balances and auto-scales between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are terminated and then auto-provisioned/auto-scaled on the fly -- I'd constantly have to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, having it automatically add new EC2 instances and remove terminated/failed ones from its monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into one consolidated set of logs/events? If not Graylog, is there anything like Graylog that can automatically detect which EC2 members are being added/removed from the environment and collect their logs automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!
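
    One hedged starting point: Elastic Beanstalk tags the instances it launches, so a periodic discovery script can rebuild the monitoring list automatically. A minimal sketch using the AWS CLI (the environment name, the region, and the idea of feeding the output into Nagios/Zabbix are assumptions, not a tested setup):

        # List instance IDs and private IPs currently in a Beanstalk environment;
        # the tag key is set by Elastic Beanstalk itself, "my-env" is hypothetical.
        aws ec2 describe-instances \
            --region us-east-1 \
            --filters "Name=tag:elasticbeanstalk:environment-name,Values=my-env" \
                      "Name=instance-state-name,Values=running" \
            --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]' \
            --output text
        # A cron job could diff this against the current host list and regenerate
        # the Nagios config (or rely on Zabbix auto-registration from the instances).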

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Since I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening during this time? EDIT: Okay, I found an answer that essentially sums up my question, so a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme), if I inlined all my CSS in the page, there wouldn't be a delay after loading the CSS file before the images were loaded.

  • Postfix as mail relay for web servers?

    - by Ben Carleton
    Hi all, I want to set up Postfix to relay mail from a group of web servers. I would like to limit senders by IP so I can restrict the box to only my web servers; that way I don't have an open relay and don't have to worry about authentication. So what I guess I need is to limit inbound access but allow mail to be sent to any outbound address. I've looked through the docs and don't even know where to start, so any tips would be appreciated. Thanks!
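
    For what it's worth, a minimal sketch of the relevant main.cf settings; the 10.0.1.0/24 range standing in for the web servers' subnet is an assumption:

        # /etc/postfix/main.cf (excerpt)
        # Replace 10.0.1.0/24 with the web servers' actual addresses.
        mynetworks = 127.0.0.0/8 10.0.1.0/24
        # Hosts in mynetworks may relay anywhere; everyone else may only deliver
        # to local destinations, which is what keeps this from being an open relay.
        smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination

    After editing, "postfix reload" picks up the change.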

  • What is the difference between a plain Amazon EC2 instance and Beanstalk?

    - by Alex Ford
    I am a solo developer and the sites I'm deploying are very small, usually hobby sites, and I have a few questions about the Amazon services. Is there a reason for me to use Beanstalk, or should I just stick with a single EC2 instance? Should I use RDS for the database? I heard someone say that I could just install a database on my EC2 instance, making it cheaper. I'm trying to keep everything as cheap as possible. I need to point custom domains to my sites. Pretty sure that means I have to deal with Elastic IPs. Do those work with Beanstalk or only with individual EC2 instances? Thanks in advance!
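
    On the Elastic IP point, a hedged sketch of allocating one and attaching it to a plain EC2 instance with the AWS CLI (the instance ID and the returned address are hypothetical):

        # Allocate an Elastic IP, then bind it to the instance; the domain's
        # A record at the registrar would then point at the allocated address.
        aws ec2 allocate-address
        aws ec2 associate-address \
            --instance-id i-0123456789abcdef0 \
            --public-ip 203.0.113.25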

  • Web based console connections not working in Windows 7

    - by nmeth
    For slightly complicated reasons we tend to give people console access to VMs via the web UI. This has worked fine in the past; however, when users update their client machines to Windows 7 (or Vista, I am told, although I have not tested that), the console fails to work. On IE8, having allowed the ActiveX control, opening the console tab causes an "Internet Explorer has stopped working" dialog. On Firefox 3.5, once the plugin has been installed, using the console crashes the browser. I've updated to the most recent VC 2.5 release and ESX 3.5u5. Anyone else seeing this? Any clues how to get around it (other than using the fat client)? Nigel.

  • web browser not opening

    - by Hoss7811
    I had a virus/adware infection on my desktop. I cleaned it using AVG. Now no browser will open, though the Internet connection is fine. When trying to open any browser, I get a dialog box that says it can't find that location. This happens with all my browsers. I even downloaded Firefox on my laptop and installed it on the desktop, and I still get the same response: Windows can't find the location. This all started when opening an email. I'm running Windows Vista.

  • IAM / AWS Access control via Windows Azure Active Directory

    - by Haroon
    I am trying to figure out how to configure IAM in Amazon AWS to use Windows Azure Active Directory. I found http://blogs.aws.amazon.com/security/post/Tx71TWXXJ3UI14/Enabling-Federation-to-AWS-using-Windows-Active-Directory-ADFS-and-SAML-2-0, but it is about configuring ADFS. WAAD supports SAML 2.0 (http://azure.microsoft.com/en-us/documentation/articles/fundamentals-identity/). Has anyone figured it out yet?
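
    For reference, once the SAML metadata XML has been exported from the identity provider, registering it with IAM looks roughly like this (a sketch using the AWS CLI; the file name and provider name are hypothetical):

        # Register the identity provider's SAML metadata with IAM.
        aws iam create-saml-provider \
            --saml-metadata-document file://waad-metadata.xml \
            --name WAAD
        # An IAM role whose trust policy names this provider can then be
        # assumed by federated users via the STS AssumeRoleWithSAML call.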

  • Outlook Web Access, reverse proxy and browser

    - by M'vy
    Hi SF'ers! We recently moved an Exchange server behind a reverse proxy due to the loss of a public IP. I've managed to configure the reverse proxy (httpd with proxy_http), but there is a problem with the SSL configuration. When accessing the OWA interface with Firefox, everything works. When accessing with MSIE or Chrome, they do not retrieve the right SSL certificate. I think this is due to the multiple virtual hosts in httpd. Is there a workaround to make sure MSIE/Chrome request the certificate for the right domain name, the way Firefox does? Already tested in the SSL virtual host:

        SetEnvIf User-Agent ".*MSIE.*" value BrowserMSIE
        Header unset WWW-Authenticate
        Header add WWW-Authenticate "Basic realm=exchange.domain.com"

    and:

        ProxyPreserveHost On

    also:

        BrowserMatch ".*MSIE.*" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0

    or:

        SetEnvIf User-Agent ".*MSIE.*" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0

    And lots of ProxyPass and ProxyPassReverse entries for /exchweb, /exchange, /public, etc., and it still doesn't seem to work. Any clue? Thanks.

    Edit 1: version details:

        # openssl version
        OpenSSL 0.9.8k-fips 25 Mar 2009
        # /usr/sbin/httpd -v
        Server version: Apache/2.2.11 (Unix)
        Server built:   Mar 17 2009 09:15:10

    Browser versions: MSIE 8.0.6001; Opera 11.01 (revision 1190); Firefox 3.6.15; Chrome 10.0.648.151. Operating system: Windows Vista 32-bit. They are all SNI compliant; I tested them this afternoon at https://sni.velox.ch/. You're right, Shane Madden: I have multiple sites on the same public IP (and the same port as well). The server itself is just a reverse proxy that rewrites addresses to internal servers. The default host is a dev site, configured with the certificate that does not match the OWA one (of course... that would have been too easy):

        <VirtualHost *:443>
            ServerName dev2.domain.com
            ServerAdmin [email protected]
            CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/access-%y%m%d.log 86400" combined
            ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/error-%y%m%d.log 86400"
            LogLevel warn
            RewriteEngine on
            SetEnvIfNoCase X-Forwarded-For .+ proxy=yes
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3
            SSLCertificateFile /etc/httpd/ssl/domain.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/domain.com.key
            RewriteCond %{HTTP_HOST} dev2\.domain\.com
            RewriteRule ^/(.*)$ http://dev2.domain.com/$1 [L,P]
        </VirtualHost>

    The certificate for that host is a *.domain.com wildcard. The second vhost is:

        <VirtualHost *:443>
            ServerName exchange.domain2.com
            ServerAdmin [email protected]
            CustomLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/access-%y%m%d.log 86400" combined
            ErrorLog "| /usr/sbin/rotatelogs /var/log/httpd/exchange/error-%y%m%d.log 86400"
            LogLevel warn
            SSLEngine on
            SSLProxyEngine On
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL:+SSLv3
            SSLCertificateFile /etc/httpd/ssl/exchange.pem
            SSLCertificateKeyFile /etc/httpd/ssl/exchange.key
            RewriteEngine on
            SetEnvIfNoCase X-Forwarded-For .+ proxy=yes
            RewriteCond %{HTTP_HOST} exchange\.domain2\.com
            RewriteRule ^/(.*)$ https://exchange.domain2.com/$1 [L,P]
        </VirtualHost>

    and its certificate is for exchange.domain2.com only. I presume SNI is somehow not activated on my server. The versions of OpenSSL and Apache seem to be OK for SNI support. The only thing I do not know is whether httpd has been compiled with the right options. (I assume it's a Fedora package.)
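
    One hedged but fairly firm note: server-side SNI only arrived in Apache httpd 2.2.12 (together with an OpenSSL built with TLS extensions), so a 2.2.11 build cannot select certificates by hostname on a single IP:port, whatever the compile options. A quick way to see which certificate each name actually gets, sketched with openssl s_client (the proxy hostname is a stand-in):

        # With SNI (-servername sets the TLS SNI extension)...
        openssl s_client -connect proxy.example.com:443 -servername exchange.domain2.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject
        # ...and without SNI, which is what a pre-SNI client would be served.
        openssl s_client -connect proxy.example.com:443 </dev/null 2>/dev/null \
            | openssl x509 -noout -subject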

  • Local Apache Web server works only when connected to the net

    - by Jean
    Hello, I installed Ubuntu and got the LAMP stack installed too. Now the problem is that I have to be connected to the Internet for the local Apache web server to work; otherwise it does not. I changed the IP address on the DNS host and in the apache2.conf file, and set the ServerName in httpd.conf, which was empty. Any ideas, guys? Thanks, Jean
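
    This smells like name resolution rather than Apache itself: if the name the browser uses is resolved by an external DNS server, it will fail offline. A hedged check, assuming the site is reached by a hostname rather than 127.0.0.1 (the name "mysite.local" is made up):

        # /etc/hosts -- pin the development name locally, no external DNS required
        127.0.0.1   localhost
        127.0.0.1   mysite.local
        # Then test while disconnected:
        #   curl -I http://127.0.0.1/
        #   curl -I http://mysite.local/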

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue I have with it is that when I do not add www., the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received. Retrying.

    When I add www. it works great. I have a GoDaddy SSL cert, added in the listener section, that covers three domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings:

        Load balancer    Instance     Cipher    SSL certificate
        HTTP 80          HTTPS 443    N/A       N/A
        SSL 443          SSL 443      Change    canvasNew (Change)

    My EC2 instances are running Apache 2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question would be: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. I changed the DNS entry to point to one instance to make sure it was the ELB. When I switch it back, I will do the same for www.learning.example.com.

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com.    IN    A

        ;; ANSWER SECTION:
        learning.example.com.    2559    IN    CNAME    canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com.    60    IN    A    50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful. Port configuration:

        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address, bypassing the ELB, to make sure things are working. It's running great: both www and the non-www URL work perfectly. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
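
    A hedged way to split the problem: talk to the ELB directly with each Host header, which shows whether the hang is in DNS, the listener, or the backend vhost (the ELB DNS name below is the one from the dig output):

        # Send both host headers straight at the load balancer.
        curl -v -H 'Host: learning.example.com'     http://canvas-22222222222.us-west-1.elb.amazonaws.com/
        curl -v -H 'Host: www.learning.example.com' http://canvas-22222222222.us-west-1.elb.amazonaws.com/
        # If only the www request answers, the suspect is the backend Apache
        # vhost matching (ServerName/ServerAlias) rather than the ELB itself.

    On the second question: the load balancer protocol is what clients speak to the ELB listener, while the instance protocol is what the ELB speaks to the back ends; they can differ, as in the HTTP 80 -> HTTPS 443 row above.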

  • Can the RemoteApp RD Web Access be accessed from my local system?

    - by shiva
    I am new to RemoteApp / Remote Desktop access. I can access the application that I have published from my server using the link FQDN\rdweb. But on trying to access the same URL from my local system (outside the server domain, say from my home PC), I get a Not Found error. Is there anything I need to change on my local system to be able to access the remote applications? Or is it that for accessing web apps I need to be logged into the server? Please help me understand this.

  • Web Applications under Apache Tomcat with multiple directory contexts

    - by goran
    I have two webapps, prod-1.2.1.war and test-2.0.0.war. If I put these straight into the tomcat/webapps folder, they get deployed as:

        http://localhost/prod-1.2.1/
        http://localhost/test-2.0.0/

    This works, but really I would like them to show up as:

        http://localhost/vegshop/prod/
        http://localhost/vegshop/test/

    As you see, I would somehow like "vegshop" to be included in the context path. I would also like the version numbering to disappear without having to rename the WAR files. Thank you. This is Apache Tomcat v6.0 under Linux 2.6, running Sun JDK 1.6.
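
    One hedged approach on Tomcat 6: keep the WARs outside of webapps and declare each context in a descriptor file, since a "#" in the descriptor's file name becomes a "/" in the context path. A sketch (the /opt/wars location is an assumption):

        <!-- conf/Catalina/localhost/vegshop#prod.xml  ->  context path /vegshop/prod -->
        <Context docBase="/opt/wars/prod-1.2.1.war" />

        <!-- conf/Catalina/localhost/vegshop#test.xml  ->  context path /vegshop/test -->
        <Context docBase="/opt/wars/test-2.0.0.war" />

    The WARs have to live outside the appBase (webapps) directory, otherwise Tomcat will also deploy them a second time under their original names.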

  • Upload database backup from MySQL to Amazon S3 or Glacier without creating a local file

    - by Rubem Azenha
    Is there a tool that makes it possible to back up a MySQL database to Amazon S3 or Amazon Glacier without having to create a local file with the database contents? Something like this:

        mysqldump -u root -ppass -h host --all-databases | magical-s3-tool s3-bucket backup-yyyy-mm-dd.sql

    This magical tool would use the piped data and transfer the backup directly to S3, without creating a local file.
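
    For what it's worth, the AWS CLI can play the part of the magical tool: "aws s3 cp" accepts "-" as the source, meaning it streams from stdin, so nothing touches the local disk. A sketch (the bucket name and configured credentials are assumptions):

        # Stream the dump straight into S3, compressing on the way through.
        mysqldump -u root -ppass -h host --all-databases \
            | gzip \
            | aws s3 cp - s3://my-backup-bucket/backup-$(date +%Y-%m-%d).sql.gz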

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the type of instance whenever some user launches the AMI, i.e. whether it's small, medium or large; based on that, I need to make some changes in the AMI when it's created. I can do this with an Amazon API call to get the instance type, but the problem is that the instances created from the AMI will be started by other users, and I cannot use my AWS credentials in the Amazon API calls. Is there any way that I can create an anonymous read-only user to make only specific types of EC2 API calls? Or can I encrypt my EC2 API credentials so no one can use them?
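
    Hedged aside on the instance-type part: from inside a running instance, the EC2 instance metadata service answers without any credentials at all, so a boot script baked into the AMI can branch on it. A sketch:

        #!/bin/sh
        # The link-local metadata service needs no AWS credentials.
        TYPE=$(curl -s http://169.254.169.254/latest/meta-data/instance-type)
        case "$TYPE" in
            m1.small)  echo "apply small-instance tuning"  ;;
            m1.medium) echo "apply medium-instance tuning" ;;
            *)         echo "default tuning for $TYPE"     ;;
        esac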

  • HAProxy and 2 webservers

    - by enrico
    I have a website that is split across two different servers:

        - a chat server in Node.js
        - the normal website (lighttpd + PHP + whatever)

    Now, I have set up HAProxy on the same machine as the Node.js chat, so that when my website is accessed it redirects to the chat login (e.g. mysite.com/messenger). What I want to do now is put a link on the chat page that sends users to the other part of the website, which has a normal file tree (home.php, photos.php, settings.php, etc.), but I really have no clue how this whole redirection works. Also, what about URL rewriting? If I have something like info.php?item=phone and want to change it to mysite.com/phone, is this something I should do with HAProxy or with lighttpd? Thanks in advance.
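
    A hedged sketch of the HAProxy side: route by path prefix, so /messenger goes to the Node.js chat and everything else to lighttpd (all names, addresses and ports are hypothetical). The info.php?item=phone -> /phone style rewriting is usually done in the web server (e.g. lighttpd's mod_rewrite) rather than in HAProxy, which mostly just picks backends:

        # /etc/haproxy/haproxy.cfg (excerpt)
        frontend www
            bind *:80
            acl is_chat path_beg /messenger
            use_backend chat if is_chat
            default_backend site

        backend chat
            server node1 127.0.0.1:3000

        backend site
            server web1 192.168.0.10:80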

  • ISPConfig 3 + CentOS 6.2, confused on how to move forward after initial install?

    - by Damainman
    I installed ISPConfig 3 on CentOS 6.2 using the great guide on howtoforge.com. Everything is up and running and I can access ISPConfig via a browser. However, I am not sure how to move forward with the initial setup so I can set up the very first account and get my website live. Details: I only have one server; the CentOS + ISPConfig machine is a virtual machine on Xen (XCP). I set the server name to server1.mydomain.com. I only have two usable IPs, which I plan to use as follows:

        xx.xx.xx.01: for my website and the websites of all accounts I add
        xx.xx.xx.02: for ns1.mydomain.com and ns2.mydomain.com (yes, I know they should be different IPs at different locations, but this is what I have to work with at the moment...)

    I registered the nameservers at my registrar with the .02 IP. I want to use BIND and ISPConfig to run the DNS on the server itself, not via my registrar. Right now, if I go to the .01 IP, it shows the CentOS + Apache successful-install page. So, to break it down, I am not sure where to start when it comes to:

        - telling BIND to use the nameserver domains with .02 (see the zone sketch below);
        - setting up my first website (which will be my main website) in ISPConfig so mydomain.com resolves properly to my server;
        - making it so that when you go to the .01 IP, it either redirects to or shows the contents of my main website (if this can't be done, any advice is appreciated);
        - making sure that when I add a new domain, the proper information is put in automatically so the domain points to the right mail, database, and DNS entries.

    If I overlooked a tutorial, please feel free to let me know; any advice would be greatly appreciated. Some of the tutorials I found were not specific to doing everything on only one server with CentOS + Apache + BIND. Right now all I have done is install CentOS and ISPConfig 3, and I'm trying to move forward correctly so I don't mess up everything by not knowing what to do. Thank you in advance!
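
    On the BIND point, a heavily hedged sketch of what the zone for mydomain.com might contain; the names and placeholder addresses follow the question (replace xx.xx.xx.01/.02 with the real IPs), and the serial/timers are arbitrary:

        ; zone file for mydomain.com (sketch)
        $TTL 3600
        @   IN  SOA  ns1.mydomain.com. hostmaster.mydomain.com. (
                    2012061501 ; serial
                    7200       ; refresh
                    900        ; retry
                    1209600    ; expire
                    3600 )     ; minimum
            IN  NS   ns1.mydomain.com.
            IN  NS   ns2.mydomain.com.
        @   IN  A    xx.xx.xx.01   ; the web IP
        www IN  A    xx.xx.xx.01
        ns1 IN  A    xx.xx.xx.02   ; the nameserver IP
        ns2 IN  A    xx.xx.xx.02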

  • How to integrate monit into web app deployment process

    - by shabunc
    I have:

        - Tomcat with a webapp deployed via mvn tomcat:redeploy
        - Monit, pinging the host and restarting the server if the ping fails

    The thing is, there is a moment during redeployment when the ping will fail, and this is actually normal. So the question is: what is the best way to teach Monit to recognize the fact of a redeployment and not confuse it with a "real" blackout? This is of course an issue of balance between elegance, ease of implementation and scalability. The most straightforward solution I can think of is just to shut Monit down before deployment and start it up once again afterwards, but this is far from elegant, I guess.
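
    A slightly gentler variant of that straightforward solution, sketched as a deploy wrapper: instead of stopping Monit entirely, pause monitoring of just the one service around the deploy (the service name "tomcat" is hypothetical and has to match the entry in monitrc):

        #!/bin/sh
        # Pause checks for the service, redeploy, then resume monitoring.
        monit unmonitor tomcat
        mvn tomcat:redeploy
        monit monitor tomcat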

  • What differences are there between an official Ubuntu AMI image and a base install from an ISO?

    - by David Winter
    When creating a new instance on AWS using an official Ubuntu 12.04 server AMI, what differences are there compared to a standard server install on a computer of my own? For example: the default user is 'ubuntu'; an SSH public key is added to that user's authorized_keys file; sudo is passwordless for that user; PasswordAuthentication is disabled for SSH; and so on. Configurations have been changed from their defaults, and I'd like to know if there is a list, or somewhere I could find out what modifications were made.
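
    Hedged pointer: most of those changes come from cloud-init rather than from hand edits to the image, so inspecting its configuration on a running instance reveals much of the delta (the paths below are the usual locations on Ubuntu cloud images):

        # Where cloud-init's defaults live on an Ubuntu cloud image:
        cat /etc/cloud/cloud.cfg          # default user, sudo rule, SSH key import, ...
        ls /etc/cloud/cloud.cfg.d/        # per-image drop-in overrides
        # SSH hardening applied at image build time:
        grep -i passwordauthentication /etc/ssh/sshd_config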

  • AWS: Should my EC2 and RDS instances be in the same Availability Zone?

    - by DOOManiac
    I just noticed that all of our EC2 instances are in zone us-west-2b, but our Multi-AZ RDS instance is in us-west-2a. Performance-wise everything seems to be okay, and it would be a hassle to "move" the instances to one place, since you have to stop and re-create them all. However, if either of the two zones goes down, we will have some downtime; if everything is in one zone, then at least we have a higher chance of not being in the zone that has the downtime... Is this something worth fixing, or am I over-thinking it? (I was about to purchase some EC2 Reserved Instances, which are tied to specific AZs, so I wanted to make sure before going through with it.) Thanks!
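
    If it helps the audit, a hedged pair of AWS CLI calls for listing where everything currently lives (assumes the CLI is configured for us-west-2):

        # Availability zone of every EC2 instance...
        aws ec2 describe-instances \
            --query 'Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]' \
            --output text
        # ...and of each RDS instance, plus whether Multi-AZ is enabled.
        aws rds describe-db-instances \
            --query 'DBInstances[].[DBInstanceIdentifier,AvailabilityZone,MultiAZ]' \
            --output text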

  • "iostat" output differs between two identical machines

    - by Oz.
    We have several machines on Amazon (EC2) of type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st, and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check?

    iostat output on a healthy machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   8.97    0.03    4.46    0.19    0.14   86.23

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1            1.60         0.69        55.38     587620   47254184
        xvdfp2            2.64         1.10        61.04     934786   52091056
        xvdfp4            0.86         0.19        41.72     163866   35601920
        xvdfp1            4.37        36.59        73.89   31220810   63051504
        xvdfp3            8.03         7.08        94.63    6045402   80749184

    iostat output on the problematic machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   9.29    0.04    5.55    0.26    0.11   84.74

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1            2.13         3.34        68.85     246244    5077888
        xvdfp1            7.60        74.31       104.88    5480362    7734840
        xvdfp3           13.22        73.67       125.00    5433386    9218600
        xvdfp4            1.11         0.76        65.08      55762    4799248
        xvdfp2            4.16         3.31        99.17     243818    7313264

    Many thanks for the help.
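
    A hedged set of follow-up probes for a high load average that top cannot explain; on Linux, tasks stuck in uninterruptible sleep (D state) count toward the load average without using CPU, so they are worth ruling out first (vmstat/iostat are from the procps/sysstat packages on the Amazon AMI):

        # Any tasks stuck in uninterruptible sleep?
        ps -eo state,pid,cmd | grep '^D'
        # Watch the b (blocked) and wa columns over time.
        vmstat 5 5
        # Extended per-device view: utilization and await.
        iostat -x 5 3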

  • Good set of web hosting permissions?

    - by Jorge Israel Peña
    Hey guys, I just got a Linode and I'm in the process of configuring it. It's running nginx with php-fpm and Passenger. nginx was compiled and runs as user nginx; php-fpm (PHP with the FastCGI process manager) runs as www-data (in group www-data). My sites currently live in /var/www, for example /var/www/test.com. I'm just wondering what the general 'flow' of things is. For example, /var/www is owned by root; should I chown /var/www/test.com to nginx or www-data? Or should I put nginx in the www-data group? How should site uploading work? Do I just transfer files to the /var/www/test.com directory as root (sudo) and then chown -R www-data:www-data .? Thanks. I'm capable of figuring things out on my own; I'm just wondering what the typical/general way of handling users/groups/permissions/site files is on Linux with a web server.
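
    One common hedged pattern, a sketch rather than the one true layout: let www-data own the site files, put nginx in the www-data group so it can read them, and keep everything group-readable but not world-readable:

        # PHP (www-data) owns the files; nginx reads them through the group.
        usermod -aG www-data nginx
        chown -R www-data:www-data /var/www/test.com
        find /var/www/test.com -type d -exec chmod 750 {} \;
        find /var/www/test.com -type f -exec chmod 640 {} \;
        # Uploads: copy as yourself, then hand ownership back, e.g.
        #   sudo rsync -a site/ /var/www/test.com/
        #   sudo chown -R www-data:www-data /var/www/test.com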
