Search Results

Search found 10299 results on 412 pages for 'apache'.


  • Creating user accounts in Amazon EC2

    - by Tvanover
    I am putting together a test environment using Amazon's EC2 for me and some friends to collaborate on a project. I am not a server guy, but I do know my way around a bash prompt and have done some work on Ubuntu before. I am using the Amazon Linux AMI (i386, EBS) and have gotten Apache and PHP running. Now I need to create the user accounts my friends and I will use to upload files (sftp) and work on the project (ssh). How should I go about this? Should I just use adduser and configure it like normal? Or should I use the AWS IAM groups?
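    A minimal sketch of the plain adduser route, assuming the stock Amazon Linux AMI and a hypothetical user "alice" (IAM users authenticate to the AWS APIs and console, not to a shell, so they don't replace system accounts for ssh/sftp):

        # Create the account and give it an SSH key instead of a password
        sudo adduser alice
        sudo mkdir -p /home/alice/.ssh
        sudo cp /tmp/alice_id_rsa.pub /home/alice/.ssh/authorized_keys   # hypothetical key path
        sudo chown -R alice:alice /home/alice/.ssh
        sudo chmod 700 /home/alice/.ssh
        sudo chmod 600 /home/alice/.ssh/authorized_keys
        # Optionally let the account write to the web root over sftp
        sudo usermod -aG apache alice   # assumes Apache runs as group "apache" on Amazon Linux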

    Read the article

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OSX machine on an internal network: $~ cat /etc/resolv.conf nameserver 10.102.120.7 nameserver 10.102.120.2 From the same machine: $~ dig @10.102.120.7 in.local <snip> ... ;; QUESTION SECTION: ;in.local. IN A ;; ANSWER SECTION: in.local. 43200 IN A 10.102.123.30 <snip> ... And yet, this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up (2 OSX machines I know of fail to resolve in.local - but other machines on the network can). I have also checked their /etc/hosts to see if anything there might interfere... Not sure what else to check...
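    One thing worth checking on the affected Macs: OS X hands names ending in .local to multicast DNS (Bonjour) rather than to the nameservers in resolv.conf, which dig ignores but the system resolver does not - a common cause of exactly this symptom. A hedged sketch of the diagnostics:

        # Ask the system resolver (not the DNS server directly) what it thinks
        dscacheutil -q host -a name in.local
        # Show the resolver configuration actually in use (scoped resolvers, search domains)
        scutil --dns
        # Flush the cache in case a stale negative entry is stored (older OS X;
        # newer releases restart mDNSResponder instead)
        sudo dscacheutil -flushcache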

    Read the article

  • Reverse proxy with SSL and IP passthrough?

    - by Paul
    Turns out that the IP of a much-needed new website is blocked from inside our organization's network for reasons that will take weeks to fix. In the meantime, could we set up a reverse proxy on an Internet-based server which will forward SSL traffic, and perhaps the client IPs, to the external site? Load will be light. No need to terminate SSL on the proxy. We may be able to poison DNS so the original URL can work. How do I know whether I need URL rewriting? Squid/Apache/nginx/something else? Setup would be fastest on Win 2000, but other OSes are OK if that would help. Simple and quick are good since it's a temporary solution. Thanks for your thoughts!
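    Since SSL does not need to terminate on the proxy, plain TCP forwarding is enough; a minimal sketch on a Linux relay using iptables, with 203.0.113.10 as a placeholder for the blocked site's IP:

        # Forward incoming HTTPS to the real site without touching the TLS stream
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 203.0.113.10:443
        # Rewrite the source so replies come back through the relay
        iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 443 -j MASQUERADE
        # Enable forwarding
        echo 1 > /proc/sys/net/ipv4/ip_forward

    Note that MASQUERADE hides the original client IP from the destination; preserving it would need something protocol-aware (e.g. terminating TLS and adding X-Forwarded-For), which conflicts with pure passthrough.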

    Read the article

  • How to allow a single domain name with iptables

    - by Claw
    I am looking for a way to make iptables only accept requests for my domain name and reject the others. Lately I misconfigured my Apache proxy; it is now fixed, but I keep receiving a load of requests that look like this: xxxx.xx:80 142.54.184.226 - - [12/Sep/2012:15:25:14 +0200] "GET http://ad.bharatstudent.com/st?ad_type=iframe&ad_size=700x300&section=3011105&pub_url=${PUB_URL} HTTP/1.0" 200 4985 "http://www.gethealthbank.com/category/medicine/" "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)" xxxx.xx:80 199.116.113.149 - - [12/Sep/2012:15:25:14 +0200] "GET http://mobile1.login.vip.ird.yahoo.com/config/pwtoken_get?login=heaven_12_&src=ntverifyint&passwd=7698ca276acaf6070487899ad2ee2cb9&challenge=wTBYIo2AEdMFr6LtdyQZPqYw9FS9&md5=1 HTTP/1.0" 200 425 "-" "MobileRunner-J2ME" which I would like to block. How can I manage this?
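    Those log lines are open-proxy probes (the GETs carry absolute URLs), and the 200 responses suggest Apache was still answering them, so the usual fix is in Apache rather than iptables: keep proxying switched off and let a catch-all virtual host refuse any Host header you don't serve. A hedged sketch using Apache 2.2-era syntax (it must sort before your real vhost so it becomes the default):

        ProxyRequests Off
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order deny,allow
                Deny from all
            </Location>
        </VirtualHost>

    On a Debian-style layout this could live in something like sites-available/000-catchall, enabled with a2ensite and followed by an Apache reload.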

    Read the article

  • .htaccess error with css

    - by user66161
    Hey guys, I really need your help with writing SEO URLs. I'm new to Apache, mod_rewrite and .htaccess, and after a week I still have no success. I want to change: sub.domain.com/soccer/teams.php?name=tigers to sub.domain.com/soccer/tigers What should my link (tigers) be? How would I set this up so that it doesn't cause .css/.jpg/.png errors? My .htaccess file is located in the /soccer/ folder. Please help or direct me to where I can find help.
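    A minimal sketch of an .htaccess for the /soccer/ folder that rewrites only URLs that are not real files, so stylesheets, images and teams.php itself keep working (assuming mod_rewrite is enabled and AllowOverride permits it):

        RewriteEngine On
        RewriteBase /soccer/
        # Leave requests for real files and directories (css, js, images, teams.php) untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Map /soccer/tigers onto teams.php?name=tigers
        RewriteRule ^([^/]+)/?$ teams.php?name=$1 [L,QSA]

    The links in the HTML would then simply be /soccer/tigers; the .css/.jpg/.png errors usually come from relative asset paths, so referencing them absolutely (e.g. /soccer/style.css) avoids the problem.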

    Read the article

  • Migrating domains - 301 Redirect of all contents of directory

    - by Trufa
    I need to do a 301 redirect with Apache since I'm migrating domains. What I need to do is the following: from certain directories, redirect all of their contents to a different domain (where the files already exist). Let's say I have one.com/files/something.doc or one.com/files/other.php I have already "copied" or "backed up" all the contents of the directory, so the following already exist: two.com/old/files/something.doc and two.com/old/files/other.php So I would just need to redirect anything in the directory "files" (or whatever). I hope the question is clear enough; if not, please ask for any clarification needed!! Thanks in advance!!
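    A minimal sketch, assuming the layout described above: one RedirectMatch line in one.com's virtual host (or an .htaccess at its document root) sends everything under /files/ to the mirrored path on the new domain.

        RedirectMatch 301 ^/files/(.*)$ http://two.com/old/files/$1

    Repeat one such line per directory being moved, then reload Apache (apache2ctl configtest && apache2ctl graceful).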

    Read the article

  • backup, sync and search files over internet and intranet

    - by Cawas
    There are many online backup options out there: Dropbox, SugarSync, Mozy, Carbonite, Jungle Disk and my favorite so far, CrashPlan. Some of them allow searching, all of them sync with their online servers, but none of those (or the many others I didn't list here) have what I want. I'm _not_ looking for an online backup service here. Sure, some people might say "use rsync", "linux" and/or "set up apache" and so on... But that's just too much maintenance, if it's even viable to build. It needs to be simple. So, does anyone know of a really good solution out there? Picture Google Desktop Search's (or Quick Search's) awesome searching, mixed with CrashPlan Desktop's ability to do everything by itself, plus something like Dropbox's file versioning, along with the ability to seamlessly sync over intranet and internet like CrashPlan, switching between them when needed. I bet there's nothing like this yet, but well, I'm not sure. It would be great!

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for x509 authentication for whatever reason, SSL client verification being the main reason for doing it. I've just started to dabble with OpenVPN (which I suppose does the same things as Apache would do with the Certificate Authority (CA) certificate). We've got a whole bunch of subdomains, and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and we think it must look pretty rough for our clients having our address bar come up red. For that, I'm comfortable buying an SSL wildcard, CN=*.mycompany.com. That's no problem. What I don't seem to be able to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers/whatever doesn't present anything about an untrusted root? Also, on a bit of a side point, why does the addition of a wildcard double the cost of certificate purchase?

    Read the article

  • How can I access a Web server in a VM from an iPad?

    - by Nick Haslam
    I have a virtual machine (running Windows Server 2012, if it's relevant) on VMware Workstation. It is running an Apache Tomcat web server, and I want to access that webserver from an iPad. Is this feasible, or even possible? I have tried running Connectify Hotspot on the host machine, but that only gets me as far as being able to access a webpage on the host machine. It doesn't seem to pass the connection through to the VM as they are on different subnets. Any thoughts are gratefully received.

    Read the article

  • How do I configure ubuntu server's iptables to allow java without opening the floodgates?

    - by rofls
    I'm new to servers, so please bear with me. I have my amateur site running. The problem is, I followed Rackspace's instructions on setting up iptables and am pretty sure that's why the Java server I'm trying to use on port 8080 isn't working (it runs the script but my Android test app doesn't connect to it). When I try running the same Java server script on port 80 it doesn't even start. I also ran nmap on my domain and saw that indeed only ports 80 and 22 (for ssh) are responding. Is it possible to run Java and Apache happily on the same server? If so, how can I configure my iptables correctly? (I'm aware that I should probably do some sort of filtering in the Java server itself, but will figure that out later.)
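    Yes, Apache on 80 and a Java service on 8080 can coexist; port 80 fails for the Java server because Apache already has it bound (and non-root processes can't bind ports below 1024 anyway). What's missing is an iptables rule for 8080, inserted above the final catch-all REJECT. A hedged sketch:

        # Allow new inbound TCP connections to the Java service on 8080
        sudo iptables -I INPUT -p tcp --dport 8080 -m state --state NEW -j ACCEPT
        # Confirm the rule sits before the catch-all REJECT
        sudo iptables -L INPUT -n --line-numbers
        # Persist across reboots (save to whatever rules file your setup restores at boot)
        sudo sh -c 'iptables-save > /etc/iptables.rules'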

    Read the article

  • Creating a portable PHP install?

    - by Xeoncross
    I would like to create a folder with a couple of versions of PHP that I can start in CGI mode as needed. I use different Windows machines for development and I would like to be able to move between computers without needing to install PHP on each one. Something like below: F:/PHP with subdirectories /5.3.2, /5.2.8 and /5.1.0. Then I could just start each up as needed with something like F:\php\5.3.2\php-cgi.exe -b 127.0.0.1:9000 which would allow nginx or Apache to use the PHP service. This would really help to make my development environment decoupled. Does anyone know how to create a portable PHP install?

    Read the article

  • umask is being ignored on Gentoo while creating new files

    - by drcelus
    I have a server running Gentoo and hosting a Drupal installation. Whenever a Drupal update is executed, the directory permissions of the updated module turn from 755 to 744, preventing the application from accessing the files. The umask is defined as 022 under /etc/profile and the Apache server is running under user and group nobody. I believe this has nothing to do with the Drupal installation, since if I create a directory as root the same happens: it is created with 744 permissions. Since the umask is 022, shouldn't it be created as 755? Why is the umask being ignored, and how do I tell the server to create the directories with permission 755?

    Read the article

  • what can be the reasons for localhost responding super slowly the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2 / MySQL 5.2 / ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to run at normal loading times. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on first page load, but loading the same page from a real server works normally, so I'm also ruling this out as a probable cause. Since it happens regardless of which page I load first, it should not be page related, so I'm looking for other clues on why this could happen. Thanks for your thoughts!

    Read the article

  • Let CGI-PHP load a non-default shared library.

    - by ralle
    In Apache2 I configured PHP as CGI in a virtual host: SetEnv PHPRC "/usr/local/php5.3" ScriptAlias /php5.3 "/usr/local/php5.3/bin" Action application/php5.3 /php5.3/php-cgi AddType application/php5.3 .php Everything works fine. Now I have some issues with the standard version of GD, because it restricts me in setting several hinting and anti-aliasing options for fonts. Therefore I want to modify the GD source and create a new shared library. Since I don't want a modified library in my system, I want only PHP to use that library. My question now: how can I change the Apache configuration in a way that makes PHP use a certain new version of the library? Something like this does not work: ScriptAlias /php5.3 "LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH /usr/local/php5.3/bin"
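    ScriptAlias only maps a URL to a filesystem path, so it can't carry environment assignments; a common workaround is to point the Action at a small wrapper script that sets the loader path and then execs php-cgi. A hedged sketch (wrapper name and library path are illustrative):

        #!/bin/sh
        # /usr/local/php5.3/bin/php-wrapper - preload the custom GD build for php-cgi only
        LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH
        export LD_LIBRARY_PATH
        exec /usr/local/php5.3/bin/php-cgi "$@"

    Make it executable and change the Action line to "Action application/php5.3 /php5.3/php-wrapper"; the rest of the configuration stays as above.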

    Read the article

  • What is the best way to setup a heartbeat agent for failover between two VMs?

    - by EGr
    I have two VMs in VirtualBox that use NAT for their network adapters. They are both getting the same IP address, so I will need to reconfigure that; but knowing that, is it possible to set up a heartbeat agent to fail over an Apache server if one of the two VMs goes down? The way I pictured it, the webserver would be accessible externally via :80. No matter which VM was running, I would be able to access the website at that IP/port since failover would be set up. I'm running into trouble setting up IPs when the network adapters are set to NAT, and people have told me that I shouldn't be setting the IPs in this configuration. So what should I do to achieve what I'm looking for? Is it even feasible?

    Read the article

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical webserver with Apache2, common PHP scripts and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, e.g., that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication was present? In other words: could somebody from somewhere on the internet easily forge a request so that the webserver would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there were a risk, could it be somehow dealt with on a lower level than the application, e.g. by using iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
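    On a default Linux setup the answer to the forgery part is no: packets addressed to 127.0.0.1 that arrive on a real interface are discarded as martians, so a remote host cannot complete a TCP handshake with a loopback-only service; the remaining risk is a flaw in the publicly exposed apps themselves (e.g. an SSRF-style bug in a PHP script making the request from inside the box). A hedged sketch of the tunnel plus a belt-and-braces iptables rule, with placeholder names:

        # From the admin workstation: reach phpmyadmin bound to the server's loopback
        ssh -N -L 8081:127.0.0.1:80 admin@server.example.com
        # ...then browse http://localhost:8081/phpmyadmin/

        # On the server: drop anything claiming a loopback source that did not arrive on lo
        iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP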

    Read the article

  • Creating a separate static content site for IIS7 and MVC

    - by JK01
    With reference to this Server Fault blog post, A Few Speed Improvements, which talks about how static content for Stack Exchange is served from a separate cookieless domain... How would someone go about doing this on IIS 7.5 for an ASP.NET MVC site? The plan so far: Register a domain, e.g. static.com, and create a new website in IIS. Manually copy the js / css / images folders from MVC as-is so that they have the same paths on the new server. Enable IIS gzip settings (js/css = high compression, images = none). Set caching with far-future expiry dates, <clientCache cacheControlCustom="public" /> in the web.config. Never set any cookies on the static.com site. Combine and minify js / css. Auto-deploy changes in static content with WebDeploy. Is this plan correct? And how can you use WebDeploy to deploy the whole web app to one server and then only the static items to another? I can see there is a similar question, but for Apache (Creating a cookie-free domain to serve static content), so it doesn't apply.

    Read the article

  • Strange PHP output buffering

    - by radek-k
    PHP: header('Content-type: text/plain'); for ($i=0; $i<10; $i++){ echo "$i\r\n"; ob_flush(); flush(); sleep(1); } I tried the script above on 2 different servers. Both respond with the numbers 0...9, one per line. In the case of the first server, each number is received every second. In the case of the second server, there is no output for 10 seconds and then the entire output is displayed at once. What might be wrong in the second case? I tried various output control functions but it didn't help. The set of response headers in both cases is pretty much the same: HTTP/1.1 200 OK Date: Mon, 03 Jan 2011 19:21:21 GMT Server: Apache X-Powered-By: PHP/5.2.14 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/plain

    Read the article

  • [linux] preventing access in shared hosting

    - by jack
    Hi Linux admins, I set up a small shared hosting environment that contains some sites. For each site there is a user; I mean, for abcd.com I created an abcd.com user and put an htdocs folder there for web hosting. I have no idea how to prevent abcd.com from accessing xyzd.com's data. I have chmoded the directories, setting the "others" permission bits to 0, but then Apache is denied access when I view the site in a browser. How can I secure access? Thanks.
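    A common low-tech arrangement is one group per site, containing only that site's user plus the web server's user, with the world bits stripped; a hedged sketch for one site (group name and paths are illustrative, and the Apache user is www-data on Debian or apache on RHEL-type systems):

        groupadd abcd-web
        usermod -aG abcd-web www-data
        # abcd.com's files: owner has full access, Apache's group may read, others get nothing
        chown -R abcd.com:abcd-web /home/abcd.com/htdocs
        chmod -R o-rwx /home/abcd.com/htdocs
        chmod -R g+rX  /home/abcd.com/htdocs

    Note that PHP running as the shared Apache user can still read anything that user can, so for real isolation between sites' scripts you would additionally need something like suexec/suPHP or per-site PHP-FPM pools.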

    Read the article

  • Chef command to create new ec2 instance with second ebs volume attached and mounted instead of the default ephemeral volume?

    - by runamok
    We currently use this command to create a new EC2 instance with Chef: knife ec2 server create --node-name=prod-apache-1 --availability-zone us-east-1c --image ami-3d4ff254 --distro ubuntu12.04-gems --groups "default" --ssh-key foo --identity-file ~/.ssh/id_rsa --ssh-user ubuntu --flavor m1.small After this command we then run further Chef commands to finish provisioning the server. I was wondering if it would be possible, while first setting up the instance, to have a 100 GB volume created and mounted at /mnt, and to have the ephemeral storage mounted at /tmp or /mnt-ephemeral instead. If not, what further commands in Chef would you advise running? I know how to do this via the AWS console and can probably figure out how to do it via the EC2 command line tools, but I am new to Chef and a bit overwhelmed.
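    knife-ec2 alone may not cover this (where an --ebs-size option exists it only resizes the root volume), so one approach is to create and attach the extra volume with the EC2 API right after knife returns, then let a recipe or bootstrap script format and mount it. A hedged sketch with the AWS CLI, using placeholder IDs:

        # Create a 100 GB volume in the instance's availability zone and attach it
        aws ec2 create-volume --size 100 --availability-zone us-east-1c
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf

        # On the instance (e.g. from a recipe): format the new volume, move the
        # ephemeral disk aside, and mount the EBS volume at /mnt
        mkfs.ext4 /dev/xvdf
        mkdir -p /mnt-ephemeral
        umount /mnt && mount /dev/xvdb /mnt-ephemeral   # ephemeral device name varies by instance type
        mount /dev/xvdf /mnt

    Alternatively, the Opscode "aws" cookbook provides an aws_ebs_volume resource that can do the create/attach step from inside a recipe, which keeps the whole thing in Chef.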

    Read the article

  • How do I test if mod_rewrite is enabled?

    - by user124130
    I'm setting up an environment for WordPress on apache2, on a fresh install of Ubuntu 12.04. In order to get friendly URLs working, I'm trying to set up mod_rewrite. I followed some instructions I found on the net and used a2enmod. Now, after restarting Apache, I'd like to check if the module is actually loaded. The command that I've found for getting a list of loaded modules is this: apache2 -t -D DUMP_MODULES However, this returns an error: apache2: bad user name ${APACHE_RUN_USER} So, how do I actually list all loaded modules, or otherwise check to see if mod_rewrite has been enabled?
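    The error happens because apache2, run by hand, never sources /etc/apache2/envvars, which is where ${APACHE_RUN_USER} is defined; apachectl does that for you. For example:

        # Option 1: apachectl loads the environment itself
        apachectl -t -D DUMP_MODULES | grep rewrite
        # (newer packages also accept the shorthand: apachectl -M)

        # Option 2: source the environment manually, then call apache2 directly
        source /etc/apache2/envvars
        apache2 -t -D DUMP_MODULES

        # Option 3: just look at what a2enmod symlinked
        ls -l /etc/apache2/mods-enabled/ | grep rewrite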

    Read the article

  • Super simple high performance http server

    - by masylum
    I'm building a URL shortener web application and I would like to know the best architecture for it in order to provide a fast and reliable service. I would like to have separate services on different machines. The first machine will have the application itself with an Apache, nginx, whatever. The second one will contain the database. The third one will be the one responsible for handling the short URL requests. For the third machine I just need to accept one kind of HTTP request (GET www.domain.com/shorturl), but it has to do it really fast and it should be stable enough. Which server do you recommend? Thanks in advance, and sorry for my English.

    Read the article

  • How can I get a notification from my server if the mail queue stops

    - by Ash
    I am using QMail with Plesk 10 on an Apache server. Occasionally the mail queue stops processing emails - this most recently happened when an email account got hacked and started sending hundreds of emails. We did not find out about this until a client of ours contacted us to say that their emails were not being received, so we checked the mail queue and, lo and behold, the service had stopped. In the future I would like to be notified when the mail queue stops. How can I set something up so the server will run a command whenever the mail queue stops?
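    One low-tech approach is a cron job that checks whether qmail-send is running and whether the queue is growing, and alerts when something looks wrong; a hedged sketch (threshold, alert address and script path are placeholders, and since a dead qmail may not be able to send the alert itself, an external check is a sensible complement):

        #!/bin/sh
        # /usr/local/bin/check-qmail.sh - run from cron every few minutes
        ALERT="admin@example.com"
        MAX_QUEUE=500

        # Is qmail-send alive at all?
        if ! pgrep -x qmail-send >/dev/null; then
            echo "qmail-send is not running on $(hostname)" | mail -s "qmail DOWN" "$ALERT"
            exit 1
        fi

        # Is the queue backing up?
        QUEUED=$(/var/qmail/bin/qmail-qstat | awk 'NR==1 {print $NF}')
        if [ "$QUEUED" -gt "$MAX_QUEUE" ]; then
            /var/qmail/bin/qmail-qstat | mail -s "qmail queue at $QUEUED messages" "$ALERT"
        fi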

    Read the article

  • is there any valid reason for users to request phpinfo()

    - by The Journeyman geek
    I'm working on writing a set of rules for fail2ban to make life a little more interesting for whoever is trying to brute-force his way into my system. A good majority of the attempts tend to revolve around trying to get into phpinfo() via my webserver, as below: GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1 GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1 I'm wondering if there's any valid reason for a user to attempt to access phpinfo() via Apache, since if not, I can simply use that - or more specifically the regex GET //[^>]+=phpinfo\(\) - as a filter to eliminate these attacks.
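    Ordinary visitors have no reason to request URLs like these - phpinfo() is something an administrator runs deliberately, not something a public page links to - so filtering on them seems reasonable. A hedged sketch of a fail2ban filter and jail entry (file names, log path and ban time are illustrative); the filter file would contain:

        [Definition]
        failregex = ^<HOST> .* "GET [^"]*=phpinfo\(\)
        ignoreregex =

    and the matching jail.local section something like:

        [apache-phpinfo]
        enabled  = true
        port     = http,https
        filter   = apache-phpinfo
        logpath  = /var/log/apache2/access.log
        maxretry = 1
        bantime  = 86400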

    Read the article

  • Permission denied on network share

    - by Philipp
    I have a Windows 8 host system running a virtual (Hyper-V) Debian 6 client with a LAMP environment. My development environment runs under Windows, and I mapped the folder with my PHP files to a network drive so Apache has access to them (mount.cifs //pc/share /var/share/). So far, no problems - I see my app on Windows in the browser. The problem is, I can't write anything from PHP to the shared folder - every time I get a permission denied message in my error logs. For testing purposes I tried to change the directory permissions of /var/share with chmod -R 777 /var/share, without success. Now I am a little bit stumped... does anyone have an idea how to solve this?
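    chmod has no lasting effect on a CIFS mount; the ownership and mode of every file are governed by the mount options (and the Windows-side share permissions), so the fix is usually to remount with the web server's identity. A hedged sketch, keeping the share path from above and assuming Apache runs as www-data and a hypothetical Windows account:

        umount /var/share
        mount -t cifs //pc/share /var/share -o username=winuser,password=secret,uid=www-data,gid=www-data,file_mode=0664,dir_mode=0775

    If it still fails after that, check that the Windows account used for the mount actually has write permission on the share itself.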

    Read the article
