Search Results

Search found 80190 results on 3208 pages for 'web site project'.


  • Is there a simple LDAP-to-HTTP gateway out there?

    - by larsks
    We have a local LDAP directory that provides basic contact information about our user community. We would like to integrate this into some third-party hosted services that allow us to implement widgets that run arbitrary JavaScript. In order to connect JavaScript to our LDAP directory, I would like to set up a simple LDAP-to-HTTP proxy that accepts HTTP GET requests, translates them into an appropriate LDAP query, and responds with directory information as JSON-encoded data. In an ideal world, something like this:

        GET /[email protected]

    would get me something like this:

        {
          "cn": "Bob Person",
          "title": "System Administrator",
          "sn": "Person",
          "mail": "[email protected]",
          "telephoneNumber": "617-555-1212",
          "givenName": "Bob"
        }

    (This obviously assumes that the web application has locally configured information about what base DN to use, how to authenticate, etc.) I guess I could write one... but surely something like this already exists?

    UPDATE: The consensus seems to be that there isn't a pre-existing solution out there and that I should just get off my lazy derriere and write one. So I did, and it's here. It's not especially pretty, but it works for my prototyping and I figure maybe someone else will find it useful someday.
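
    A proxy like this fits in a page of PHP, which may be why ready-made gateways are scarce. Below is a minimal sketch of the idea, not the poster's implementation; the server address, bind credentials, base DN, and query parameter name are placeholder assumptions (ldap_escape() needs PHP 5.6+):

        <?php
        // ldap2http.php: minimal LDAP-to-HTTP gateway sketch.
        // Every constant below is a placeholder; configure for your directory.
        define('LDAP_SERVER', 'ldap://ldap.example.com');
        define('BIND_DN',     'cn=proxy,dc=example,dc=com');
        define('BIND_PW',     'secret');
        define('BASE_DN',     'ou=people,dc=example,dc=com');

        header('Content-Type: application/json');
        $mail = isset($_GET['mail']) ? $_GET['mail'] : '';
        if ($mail === '') {
            http_response_code(400);
            die(json_encode(array('error' => 'missing mail parameter')));
        }

        $conn = ldap_connect(LDAP_SERVER);
        ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
        ldap_bind($conn, BIND_DN, BIND_PW);

        // Escape the user-supplied value so it cannot alter the filter.
        $filter  = sprintf('(mail=%s)', ldap_escape($mail, '', LDAP_ESCAPE_FILTER));
        $attrs   = array('cn', 'sn', 'givenName', 'title', 'mail', 'telephoneNumber');
        $result  = ldap_search($conn, BASE_DN, $filter, $attrs);
        $entries = ldap_get_entries($conn, $result);

        if ($entries['count'] === 0) {
            http_response_code(404);
            die(json_encode(array('error' => 'no such user')));
        }

        // ldap_get_entries() lowercases attribute names; map them back.
        $out = array();
        foreach ($attrs as $a) {
            $key = strtolower($a);
            if (isset($entries[0][$key])) {
                $out[$a] = $entries[0][$key][0];
            }
        }
        echo json_encode($out);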

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using Amazon Web Services and their Elastic Beanstalk? I add the files to a git repository and push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

        git aws.push

    The following files are in .gitignore:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    Following this link, I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html

    Quoting from that link:

        Example Snippet
        The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:

        sources:
          /etc/myapp: http://s3.amazonaws.com/mybucket/myobject

    Following those directions, I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:

        sources:
          /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

    That config.tar.gz file extracts to:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

        Error message: No such file or directory - /var/app/current/config/database.yml
        Exception class: Errno::ENOENT
        Application root: /var/app/current
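
    One plausible explanation (my assumption, not confirmed in the post) is ordering: sources: archives are unpacked early in the deployment, before the new application version lands in /var/app/current, so anything extracted there is wiped or ignored. A sketch that sidesteps the race by unpacking to a scratch path and copying during container_commands, which run with the staged application as the working directory (the bucket name is a placeholder):

        # .ebextensions/private.config -- a sketch, not a verified fix
        sources:
          /tmp/appsecrets: https://s3.amazonaws.com/mybucket/config.tar.gz
        container_commands:
          01_copy_secrets:
            # cwd here is the staging copy of the app, pre-deployment
            command: "cp -R /tmp/appsecrets/config/. config/"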

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide which is the better policy:

        - Store the MySQL DB on a separate instance
        - Store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI; is this view correct?)

    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the webapp I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the webapp?

    I was going to launch a second instance, or attach a second volume in the second availability zone, to act as a standby for the database. I'm just looking for some suggestions about how to get the two DBs to talk; will this be a big task?

    Regarding updates for security, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?

    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I figure it is a better policy than having unnecessary additional applications included in an AMI that I intend to keep using.

    My plan for backup is to create periodic snapshots of the webapp and DB instances. (If I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)

    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. While I have a VPS and access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive; in any case, the files are already on my server. So there is no rsync, FXP, or SSH, and I can't really install anything, as it is obviously not their problem to host my tooling; they would be justifiably miffed if I started installing file managers or other things onto their sites.

    What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that is like a file manager but a bit different: two panels.

        - Left side: the local server, pretty much like a standard file-manager application
        - Right side: the ability to log in via FTP to the client's system

    Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply SSH into my server, power up ncftp, and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
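
    For the core push operation (copying a file that already lives on the server out to a client's FTP account), PHP's built-in FTP extension is enough. A minimal sketch, with the host, credentials, and paths invented for illustration:

        <?php
        // push_file.php: copy a file from this server to a client's FTP account.
        // All connection details below are placeholders.
        $host   = 'ftp.clientsite.example';
        $user   = 'clientuser';
        $pass   = 'clientpass';
        $local  = '/var/www/releases/addon-1.2.3.zip';
        $remote = 'public_html/addon-1.2.3.zip';

        $conn = ftp_connect($host, 21, 30) or die("cannot connect to $host\n");
        ftp_login($conn, $user, $pass)      or die("login failed\n");
        ftp_pasv($conn, true);              // passive mode is friendlier to firewalls

        if (ftp_put($conn, $remote, $local, FTP_BINARY)) {
            echo "uploaded $local to $host:$remote\n";
        } else {
            echo "upload failed\n";
        }
        ftp_close($conn);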

  • Apache 403 Forbidden Error when accessing local web server using local IP address

    - by amjo324
    I have an odd problem when attempting to browse to pages stored on a local web server (Apache 2.2). The pages are served as expected when I browse to localhost or 127.0.0.1 on port 80, yet when I attempt to browse to the same pages by referencing the local IP address (192.168.x.x), I receive an HTTP 403 (Forbidden) error. In essence, http://localhost:80 works but 192.168.x.x:80 doesn't, even though I'm specifying the IP of the local machine.

    You may be thinking, "Who cares? Just use localhost." However, this is the first step in troubleshooting why I cannot remotely access these pages from different hosts on my LAN. I'm presuming this can't be a firewall issue, as I'm only connecting to the local machine. Even so, I verified there were no iptables rules that could be having an effect.

    I've checked the Apache error logs, and the corresponding line of relevance is:

        [Sat Oct 19 07:38:35 2013] [error] [client 192.168.x.x] client denied by server configuration: /var/www/

    I've inspected most of the Apache config files and they don't appear to differ from what you would expect with a default install. I can't see anything in apache2.conf that would be a problem, and httpd.conf is an empty file. This is an excerpt from /etc/apache2/sites-enabled/000-default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

    Any insight as to where I can look next to find a solution? Thanks in advance.
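
    Two quick checks that narrow down this class of problem (generic Apache debugging, not something from the post; the grep target is a guess about a stock Debian/Ubuntu layout):

        # Show which virtual host actually answers for each address/name
        apache2ctl -S

        # Look for any other Deny rule that could shadow the vhost shown above
        grep -rn "Deny from" /etc/apache2/ | grep -v '#'

    If a stray "Deny from" in another enabled site or conf fragment matches requests that arrive by IP rather than by name, it would produce exactly this "client denied by server configuration" error.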

  • Google Chrome not loading web pages correctly unless multiple refreshes

    - by Brandon Wilson
    Web pages in Google Chrome do not load correctly from time to time. I can't reproduce it; it just happens. Sometimes it happens when I load the browser, other times while I am just browsing. Just now I went to five different web sites, and three out of five did not load correctly. I have attached a photo of how Super User loaded the first time I opened it; if I refresh, it loads correctly. Facebook is bad like this: sometimes Facebook will load correctly, but some of its back-end scripts may not load, so the page may not refresh automatically. Not sure what is going on.

    I have tried other browsers (Firefox and Internet Explorer) and they seem to be working correctly. Chrome seems to be acting up only on this computer. All my computers are running Windows 8, and I have removed Chrome completely from this computer and re-installed it. I even disabled all extensions and cleared all the caches, and tried running Chrome without being logged in. Not sure what else to do at this point.

    An example of superuser.com not loading correctly: [screenshot omitted]

    When I refresh, the problem goes away until it happens again. Sometimes it takes two or three refreshes for the page to load correctly.

  • How to fix subfolders IIS7 functionality?

    - by Amr ElGarhy
    I have a problem on my shared hosting: all websites in subfolders get URLs that look like this: http://amrelgarhy.com/amrelgarhy/. I wrote to GoDaddy, and they told me it's because of IIS7 and they can't solve it. Can anyone tell me how to fix this?

    Here is what I sent to GoDaddy:

        "As I saw before on this page http://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009 comparing Windows plans, "Multiple Web sites: unlimited", so I have the right to run more than one website inside my hosting. But what I am facing now is that I can't make more than one website a primary website. I have igurr.com as the primary website, and I want to make the others primary too, because I am facing a problem where all the home pages for the other websites (which physically live in subfolders) come out like "http://amrelgarhy.com/amrelgarhy/", the URL plus the folder name, and that is what I don't want."

    And their reply:

        "Thank you for contacting Hosting Support. The behavior you are describing is standard for IIS 7.0 accounts. All alias domains in this environment will append the folder name they're located in, i.e. an alias domain www.coolexample.com pointed to the '/example' directory will display in a browser as "www.coolexample.com/example". This is due to the way IIS 7.0 handles virtual directories. Unfortunately we do not have any direct workaround for this. We apologize for any inconvenience this may cause."
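
    Where the account has Microsoft's URL Rewrite module available (IIS7 shared hosts generally expose it), the usual workaround is a rule in the root web.config that maps the alias domain onto its subfolder internally, so the folder name stays out of the address bar. A sketch, treating the domain and folder below as placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="map amrelgarhy.com to its subfolder" stopProcessing="true">
                  <match url=".*" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^(www\.)?amrelgarhy\.com$" />
                    <add input="{PATH_INFO}" pattern="^/amrelgarhy/" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="amrelgarhy/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>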

  • Safari's location bar (auto-suggest and web search)

    - by Lri
    - Auto-suggest doesn't seem to work for queries with spaces. Am I missing something?
    - If you select an item from the suggestion list that was matched by its title, the title is filled in before the address. Can you change it to work like in other browsers? SMRT disables searching by title completely.
    - Can you combine Top Hit, History and Bookmarks into a single section?
    - The preferences starting with DebugSafari4 don't work anymore (like DebugSafari4IncludeFancyURLCompletionList).
    - Can you direct unresolved addresses to something like google.com/search?q=?&btnI instead of ?.com, as you can by changing keyword.URL in Firefox?
    - Can you remove or hide the web search field? In Camino, Cruz and Fluid it can be resized to zero width. You can't circumvent the normal maximum ratio with InputFieldWidthRatio. AddressBarIncludesGoogle doesn't appear to do anything in the current version.

    Are there fixes or workarounds for any of these? I'm lumping these issues together because they are closely related; a lot of them were introduced when the location bar was redesigned in Safari 5. I'm also hoping to find something like an extension or a plugin that would replace the standard location bar.

  • Determining a realistic measure of requests per second for a web server

    - by Don
    I'm setting up an nginx stack and optimizing the configuration before going live. Running ab to stress test the machine, I was disappointed to see things topping out at 150 requests per second, with a significant number of requests taking 1 second to return. Oddly, the machine itself wasn't even breathing hard. I finally thought to ping the box and saw ping times around 100-125 ms. (The machine, to my surprise, is across the country.) So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see 5000 requests per second, which is more in line with what I expected from the machine.

    But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to a machine across the country. If I have a lot of slow connections, they will eventually impact my server's performance, right? Or am I thinking about this all wrong?

    Forgive me if this is network engineering 101 stuff; I'm a developer by trade.

    Update: Edited for clarity.
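
    One way to frame it (my framing, not the poster's): a closed-loop tool like ab keeps a fixed number of requests in flight, so by Little's Law, throughput = concurrency / time-per-request. Plugging in invented numbers that happen to match the observations above:

        cross-country: ~125 ms per request  ->  20 in flight / 0.125 s = 160 req/s
        same network:  ~4 ms per request    ->  20 in flight / 0.004 s = 5000 req/s

    (The concurrency of 20 is assumed for illustration; ab sets it with -c.) The practical upshot is that a requests-per-second figure is only meaningful alongside the latency profile of the clients that generated it.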

  • Easy GUI way to auto scale EC2 and RDS: aws console, scalr, ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use auto scaling, and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto-scaling policy in some other place (via the command-line tools?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better?

    Regarding Scalr, is there any difference between the open-source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or must it be installed on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just to auto scale?
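
    For reference, the non-GUI route the poster is trying to avoid amounts to a couple of API calls. A sketch using today's unified AWS CLI (the group name, thresholds, and placeholder ARN are invented; at the time of the question the older as-* and mon-* tools filled this role):

        # Attach a scale-out policy to an existing auto-scaling group
        aws autoscaling put-scaling-policy \
            --auto-scaling-group-name my-asg \
            --policy-name scale-out-1 \
            --scaling-adjustment 1 \
            --adjustment-type ChangeInCapacity

        # Fire that policy from a CloudWatch CPU alarm (use the ARN printed above)
        aws cloudwatch put-metric-alarm \
            --alarm-name asg-high-cpu \
            --namespace AWS/EC2 --metric-name CPUUtilization \
            --statistic Average --period 300 \
            --threshold 70 --comparison-operator GreaterThanThreshold \
            --evaluation-periods 2 \
            --dimensions Name=AutoScalingGroupName,Value=my-asg \
            --alarm-actions <policy-arn>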

  • Limiting access in Silverlight\Pivotviewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files inaccessible to the user. Now, I usually code like this in C#, with the file hosted in the document root:

        _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute));

    This means that my cxml, and therefore the images, are available over HTTP to everyone who knows the URI. I'm a newbie at server configuration, so any help or hint would be deeply appreciated. Someone suggested I take the files out of the root, but it seems I can't point Silverlight at them if they are not a URL; at least, I didn't manage to understand how. Someone else suggested I play with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice for hiding my stuff? Obviously I can edit the question if you need more details.

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot make all the options I specify in the Elastic Beanstalk configuration be correctly applied in the cloud.

    For deployment, I use the Elastic Beanstalk CLI utility. I ran the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some adjustments of mine, the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are considered when I start the environment or update it. Thus, when I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values in wsgi.conf; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affect the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.

  • How to configure SCSI hard drives and RAID for Poweredge 2850 web server

    - by Saul
    I'm trying to set up a PowerEdge 2850 as a web server, but as a server novice it's causing me some confusion. It's a virgin install, so there is no data to be lost yet, and I'd like to get the best arrangement for setting up Windows Server 2008. The box will run IIS and a mail and FTP server. The current physical arrangement of the hot-swap drives is:

        slot 1: 73GB    slot 3: 146GB   slot 5: blank
        slot 0: 73GB    slot 2: 146GB   slot 4: 146GB (but flashes green, amber off)

    When I enter the PERC config screens on boot-up I've got:

        Raid Ch- 0
        ID 0 ONLIN A00-00
           1 ONLIN A00-01
           2 ONLIN A01-00
           3 ONLIN A01-01
           4 HOTSP

    I think drives 0 and 1 are set to RAID 1, and drives 2 and 3 are also set to RAID 1; certainly I can see two logical drives, both RAID 1, of 69880MB and 139900MB. Now, what I think I am getting here is that the two 73GB drives mirror each other and the two 146GB drives mirror each other too, so by my noob thinking, if a drive fails I can pull it, insert a new one, and it will rebuild from its matching pair? I think the flashing amber probably indicates a failing drive in slot 4; should that just be binned?

    What confuses me, coming from a home-user XP background, is that when I load up the Windows Server 2008 OS, under My Computer I only see a C drive of about 70GB capacity, i.e. where's the 146GB drive? Any advice appreciated.

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC:

        10.0.100.150
        10.0.100.151

    both having elastic IPs assigned to them, and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine. The container:

        IP: 172.17.0.2
        port 8111 is forwarded to port 8111 on the machine

    I'm trying to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) and it works flawlessly. If I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files.

    I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine, I see the requests going in but no response coming back. If I ngrep on the container, I see the requests going in and the response going back.

    I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but no success. Help in any way, even debugging directions, would be much appreciated. Thanks!

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files).

    We're using git as our SCM, and with it we're able to deploy new versions by executing git aws.push. In the background, Elastic Beanstalk automatically deletes (rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment. We've managed to set up the environment to the extent that the shared files folder is mounted properly after a reboot. But...

    The problem is that, in this setup, the deployment of new versions to running instances fails because rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart the environment, which isn't really an elegant solution.

    Question #1: What would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem?

    By looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent.

    Question #2: Did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
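
    For what it's worth, one pattern used for exactly this situation (an assumption on my part, not something from the post, and it presumes the environment honors .ebextensions container_commands) is to keep the mount entirely outside the application root and re-create a symlink to it on every deploy, so the deploy's rm -rf only ever removes a link:

        container_commands:
          01_link_shared_files:
            # /mnt/shared is the NFS or s3fs mount point, outside /var/app
            command: "rm -rf sites/default/files && ln -s /mnt/shared/files sites/default/files"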

  • SQL Server 2008 Web VS SQL Server 2008 Enterprise

    - by Jeremy
    I wrote an application a few months ago and was hosting it out of our offices on a workstation with an Intel Core 2 Quad Q8200 @ 2.33GHz, 8 GB RAM, Windows Server 2008 Enterprise and SQL Server 2008 Enterprise. Both the web server and the database server ran on the same machine. We had a huge influx of traffic, moved to ClubUptime.com, and got two of their top-tier Windows VMs. The database server runs Windows 2008 R2 Standard and SQL Server 2008 R2 Web with 8 GB RAM and an Intel Xeon E5620 @ 2.40GHz.

    Ever since switching, the database, which used to run at around 400 MB in RAM, now runs at around 4-7 GB, and there haven't been any changes to it (other than a couple of columns here and there). Our traffic has quadrupled, and our DB is 6 GB on disk. Why would SQL Server take up 7 GB if the DB is only 6? And why would it be storing the entire database in memory? Another thing: why, after growing 4 times in size, did the database's memory footprint grow 12 times? Last question: why does the CPU peg at 100% now when it didn't before? The design is simple: very few joins, no subqueries. I am just at a loss, unless it is the SQL Server edition, or the fact that I moved from real hardware to a VM.
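
    Worth noting as background (general SQL Server behavior, not a diagnosis of this case): by default SQL Server caches as many data pages as memory allows and releases them only under OS pressure, so a 6 GB database occupying most of an 8 GB box is expected rather than a leak. If a cap is wanted, it can be set; the 4 GB value below is an arbitrary example:

        -- Cap the buffer pool (value in MB; requires the advanced options switch)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;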

  • Migrating from Exchange 2003 to 2010 UID changes from 32 characters to 64 characters

    - by Seth
    We have built a custom CRM tool that integrates with Exchange 2010 using Exchange Web Services. The issue we are encountering revolves around editing, through the CRM tool, appointments that were created in Exchange 2003. We migrated the sales staff from Exchange 2003 to 2010 so that we could use EWS. EWS works great except for appointments created prior to the migration: those appointments cannot be modified using EWS. The reason is that the ExchangeItemUID for each such appointment changed from 32 characters to 64 characters, and EWS does not recognize ExchangeItemUIDs that are 32 characters long.

    We are looking for a solution that will allow us to modify these appointments. We are open to the idea of running a script that updates all appointment events for the salespeople, so that 2003 appointments are converted to the 2010 format. We are also open to alternate IDs as opposed to using the UID. I have seen some references to using CleanGlobalObjectID, but I don't see that property in EWS. Has anyone encountered this problem before? Any help you could give would be greatly appreciated!

  • Walkthrough/guide building application server for multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment. Each tenant environment will have different database credentials (port, db name, etc.) but still connect to the same database server. Each tenant should use app.com for their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint, such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx.

    I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the web server's Apache/Nginx vhost configuration file. I have googled so many concepts for building a multi-tenant app server but still don't understand how to really do it, what the right way is, and what is actually required for this task. So I've come to these questions:

        - Do I need a proxy server, DNS masking, etc.?
        - How do I monitor each tenant's activity?
        - What about server performance, load balancing, and scalability?
        - How do I set up an SSL certificate for each tenant?
        - What about application cache for each tenant?
        - Is it reliable to use this setup for production?
        - etc...

    I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented for production.
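
    As an illustration of the wildcard-vhost piece only (a sketch; the socket path, document root, and parameter name are invented), an nginx server block can capture the tenant from the Host header and pass it to the application, which then selects the matching database credentials:

        # Matches tenant1.app.com, tenant2.app.com, ... and hands the
        # tenant name to the app as a FastCGI parameter.
        server {
            listen 80;
            server_name ~^(?<tenant>[a-z0-9-]+)\.app\.com$;
            root /var/www/app/public;

            location / {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_param TENANT $tenant;
                fastcgi_pass unix:/run/php-fpm.sock;
            }
        }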

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers.

    My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions.

    Also, if you have an opinion on this as well: would the second option allow me to have only one public IP address and route SSH connections through port numbers to the respective instances? Thanks in advance!
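
    On the last point, port-based SSH routing through a single public IP is a standard NAT trick. A sketch with invented private addresses (requires net.ipv4.ip_forward=1 on the proxy/NAT instance):

        # ssh -p 2201 goes to the first web server, -p 2202 to the second
        iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 10.0.1.11:22
        iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination 10.0.1.12:22
        iptables -t nat -A POSTROUTING -j MASQUERADE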

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions:

        1. Recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
        2. Having a system duplicate requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.

  • illegitimate traffic from user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)

    - by user114293
    Since the beginning of the year, I have been getting a lot of traffic with the user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729). My access logs show 40%-60% of requests coming from that user agent. That's strange, because the user agent claims to be a Firefox 3.0.10 browser (is anybody using that browser in 2012? Definitely not 40%-60% of visitors on a normal website). Also, the logs show that this user agent only requested the HTML document and none of the referenced assets (images, CSS, JS files).

    I checked the IPs of those requests (with that UA); they are coming from all over the world. I noticed that those IPs sometimes present a mobile user agent, so my suspicion is a mobile app that is doing a lot of "spider requests". But if that were the case, other web sites should have the same problem. That's actually my question: does anybody experience the same or similar problems?
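
    Not part of the post, but a quick way to quarantine that traffic on Apache for analysis, or refuse it outright; the match pattern simply pins the exact Gecko build string from the logs (Apache 2.2 syntax):

        # Tag requests from the suspect user agent...
        SetEnvIf User-Agent "Gecko/2009042316 Firefox/3\.0\.10" suspect_ua
        # ...log them separately for analysis...
        CustomLog logs/suspect.log combined env=suspect_ua
        # ...or deny them outright
        Order Allow,Deny
        Allow from all
        Deny from env=suspect_ua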

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about. Like every (company) webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain name, which is not very user-friendly or professional-looking.

    I also want to host my dynamic webpage in the Amazon cloud, but I'm not sure I'm doing that right. I've read many guides, and all I know is that I have to set up an Elastic Compute Cloud instance and a Simple Storage Service... and every guide works with the basic Linux package. Why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, reachable over a normal domain.

    And one last question: when I sign up for an AWS account, it asks me for an email account, but I find it a little unprofessional to enter my free-webmail address there... How is this normally done? Thanks in advance! Best regards, john.

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple: my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collecting performance data and displaying trends) will work just fine.

    However, as part of this application we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.

    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them; if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising a flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, and so on. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file, far from ideal for an environment with a high amount of churn; and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

        - uses a database (MySQL, SQLite, etc.) for configuration and data storage
        - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination, as sketched below.
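
    That last idea might look like the sketch below (the bucket name, file paths, and use of the modern aws CLI are assumptions). sadc appends binary samples to a file, which gets tagged with the instance ID and shipped to S3 before the instance terminates:

        # /etc/cron.d entry: one sample per minute into a binary sar file
        * * * * * root /usr/lib64/sa/sadc 1 1 /var/log/sa/ephemeral.sa

        # At job completion, just before termination:
        IID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws s3 cp /var/log/sa/ephemeral.sa "s3://my-perf-bucket/sar/$IID.sa"

        # Later, for a given instance ID: sar -f <file> with -u, -r, -b, -n DEV, etc.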

  • Struggling with proper way to setup Permissions on Linux/Apache Web Server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file and directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:

        - files are owned by the user account set up in WHM (e.g., "abc")
        - files have a group setting of "abc" as well
        - files are created with permissions 644
        - directories are owned by "abc"
        - directories have a group setting of "abc"
        - directories are created with permissions 0755

    Again, these are the default permission settings. Now, everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security.

    Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. subdirectories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run:

        chown nobody *
        chgrp nobody *
        chmod 0777 *

    on everything in these certain selected subdirectories. I know this is probably a huge security hole (so don't ask me for any links :), but how should I set all the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing, while also minimizing any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
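
    A common middle ground (offered as a sketch, not cPanel-specific advice; the path is invented) is to keep the FTP user as owner, give the web server's user group write access, and set the setgid bit so files created by either side inherit the shared group:

        # Owner stays the FTP user "abc"; group becomes the web server user
        chown -R abc:nobody /home/abc/public_html/uploads
        # Directories: group-writable, with setgid so new files inherit the group
        find /home/abc/public_html/uploads -type d -exec chmod 2775 {} +
        # Files: group-writable, world-readable, no execute
        find /home/abc/public_html/uploads -type f -exec chmod 664 {} +

    This keeps "other" write access off entirely, unlike 0777, while both the FTP user and the PHP apps running as "nobody" can create and modify files.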

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application).

    I have copied the desired AMI, set up the new instance, set up an identical ELB configuration (required for SSL termination), and configured latency-based routing in Route 53. It works as intended, but clients from the EU have speed problems. This is due to the fact that the EU EC2 instances connect to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication.

    Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using the RDS technology for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).
