Search Results

Search found 11704 results on 469 pages for 'apache fan'.

  • Mod_rewrite: display subdomain.domain.com and call domain.com/subdomain/ for SSL

    - by Jeff H.
    I have a website secured by a standard SSL certificate, securing a few different shops under different subdirectories, e.g. domain.com/shop1/. The shops are also accessible via subdomains, e.g. shop1.domain.com. What I'm trying to accomplish: display shop1.domain.com to the user, while keeping all of the actual server calls as domain.com/shop1, so that the secure pages will continue to work properly. (Not sure if I'm using the proper terminology, exactly; I hope my point is clear.) To be clear: my SSL is working fine, I don't need help with that, and I don't need or want to purchase a UCC cert. It can't be that difficult for anyone with Apache experience. (I've spent 3 hours trying to learn about mod_rewrite. It's just not clicking.) I'm on a GoDaddy secure shared server, so please keep in mind that I'm not able to reset the server or anything.
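
    A minimal sketch of the usual approach, assuming an .htaccess in the domain.com document root (the shop1 names and paths are placeholders): requests whose Host header is the subdomain are internally mapped into the /shop1/ subdirectory, so the browser keeps showing shop1.domain.com while Apache actually serves domain.com/shop1:

        RewriteEngine On
        # Only touch requests arriving for the shop subdomain
        RewriteCond %{HTTP_HOST} ^shop1\.domain\.com$ [NC]
        # Don't rewrite paths that already point into /shop1/
        RewriteCond %{REQUEST_URI} !^/shop1/
        # Internal rewrite (no [R] flag), so the address bar is untouched
        RewriteRule ^(.*)$ /shop1/$1 [L]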

    Read the article

  • Gateway time out connecting to tethered server from Android

    - by BentFX
    I've got an Android device running android-wifi-tether. It works as advertised. I connect to it from my Ubuntu 12.04 laptop running Apache 2.2.22. The laptop is manually configured to IP 192.168.2.100 in the hosts file. It can ping itself and access its own web server through that address. The WiFi tether hotspot gives the laptop the same 192.168.2.100 address (the laptop was configured to match the hotspot address as a troubleshooting step, which could be wrong). Using ping I can ping the laptop from the phone at the 192.168.2.100 address. Using portscan, the phone shows port 80 open on 192.168.2.100. So everything looks like it's in place, but any attempt to browse to http://192.168.2.100 fails after a few moments with a 504 (Gateway time out). Any help would certainly be helpful.
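
    A couple of quick, read-only checks worth running on the laptop (a sketch; a 504 can also come from a proxy configured in the phone's browser or APN, which is worth ruling out):

        # Confirm Apache is bound to all interfaces, not just 127.0.0.1
        sudo netstat -tlnp | grep :80
        # Fetch the page via the WiFi address rather than localhost,
        # to rule out a loopback-only binding
        curl -v http://192.168.2.100/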

    Read the article

  • Tomcat 6 IP restrictions

    - by KB22
    I need to protect a certain folder within a web application of mine against access from outside a defined IP range. With O'Reilly's Tomcat Tips I figured that:

        <Context path="/path/to/secret_files" ...>
            <Valve className="org.apache.catalina.valves.RemoteAddrValve"
                   allow="127.0.0.1" deny=""/>
        </Context>

    is the way to go? I'm not that much into Tomcat configuration, so I'm a little unsure as to where to put this restriction. Do I put it within my web.xml, or is this something I need to add to some general Tomcat conf file?
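
    For what it's worth, a sketch of where such a Valve usually lives (the file names here are illustrative): it belongs in a Context definition rather than web.xml, e.g. the webapp's META-INF/context.xml or conf/Catalina/localhost/myapp.xml, and in Tomcat 6 the allow/deny values are comma-separated regular expressions matched against the client IP:

        <!-- META-INF/context.xml (illustrative placement) -->
        <Context>
            <Valve className="org.apache.catalina.valves.RemoteAddrValve"
                   allow="127\.0\.0\.1,192\.168\.1\.\d+" />
        </Context>

    Note that the valve guards the whole context, not a single folder; to restrict just one folder, it would have to be deployed as its own context (or protected by a filter instead).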

    Read the article

  • How can I do an Ubuntu Lucid Desktop installation with an HTTP preseed file and the desktop installation CD?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso. I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried:

        auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash --

    If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What am I missing? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct. Any ideas?
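
    One hedged guess to try: automatic-ubiquity is casper's trigger for an unattended ubiquity run, whereas only-ubiquity merely skips the desktop session, and the desktop installer honors only a subset of the debian-installer preseed keys used by the alternate CD:

        auto url=http://mydomain.com/path/preseed.cfg boot=casper automatic-ubiquity initrd=/casper/initrd.lz quiet splash --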

    Read the article

  • Nginx config rewriting subdomain name to 1st URI segment

    - by tim peterson
    I'm unable to get the following nginx.conf rewrite to work:

        test.mysite.info  ->  mysite.info/test

    Here's what I've tried:

        server {
            server_name test.mysite.info;
            rewrite ^ https://mysite.info/test/$request_uri;
        }

    I know my DNS (Route 53 on AWS) is correct because test.mysite.info redirects to mysite.info (just not to mysite.info/test). I also have an Apache server handling mysite.com, where I can rewrite test.mysite.com to mysite.com/test using .htaccess. I haven't changed anything else from the default nginx.conf installation, so I'm totally confused as to why such a simple thing isn't working. Here is my full nginx.conf file if that is helpful.
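
    A sketch of a likely fix: $request_uri already begins with a slash, so the extra "/" after test produces a double slash in the target, and a return is simpler than rewrite ^ here:

        server {
            listen      80;
            server_name test.mysite.info;
            # $request_uri keeps the leading slash and the query string,
            # so no "/" is needed after "test"
            return 301 https://mysite.info/test$request_uri;
        }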

    Read the article

  • Want to install FlightGear but 'libapr1' dependency cannot be satisfied

    - by Jonathan Reno
    When I want to install the flightgear:amd64 package, it requests to install the simgear2.8.0:amd64 package onto my Linux Mint 13 KDE 64-bit system. But I cannot install simgear2.8.0:amd64, because I could not find it in the GetDeb repositories, could not install it from PlayDeb, and could not find a .deb file online. So I tried to install simgear2.8.0:i386, but it wants me to install (read: reinstall) libapr1, which is already installed properly with its dependencies. By the way, libapr1 is necessary for my Apache installation. Could you help me fix my problem?
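
    A couple of read-only diagnostics worth running first (a sketch, nothing destructive):

        # Show which versions/candidates apt sees for the two packages
        apt-cache policy simgear2.8.0 libapr1
        # Ask apt to resolve or repair any half-configured dependencies
        sudo apt-get install -f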

    Read the article

  • SSL URL gives a 404

    - by terrid25
    I have recently created an SSL cert on my server (a *.key and a *.csr file). I then created the *.crt and the *.ca-bundle with Comodo. I have 2 current vhosts:

    vhost for http://www.example.com:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/example/public_html/example.com/httpdocs"
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    vhost for https://www.example.com:

        NameVirtualHost *:443
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/example_com.crt
            SSLCertificateKeyFile /etc/ssl/certs/server.key
            <Directory /home/example/public_html/example.com/httpdocs>
                AllowOverride All
            </Directory>
            DocumentRoot /home/example/public_html/example.com/httpdocs
            ServerName example.com
        </VirtualHost>

    The problem is, when I go to https://www.example.com I get a 404. I'm not sure if the vhosts are correct or why I get a 404. Has anyone ever seen this before? I have enabled mod_ssl and restarted Apache. Many thanks
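
    Two things worth checking, sketched below (the bundle path is an assumption based on the Comodo file mentioned above):

        # Apache must be told to listen on 443 (ports.conf or httpd.conf)
        Listen 443
        # Comodo-issued certs usually also need the intermediate bundle
        SSLCertificateChainFile /etc/ssl/certs/example_com.ca-bundle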

    Read the article

  • The proxy server received an invalid response from an upstream server

    - by chandank
    I have a Tomcat server behind Apache. I am using mod_ssl and a reverse proxy to the Tomcat. Both are running on their default ports. The full error is as follows:

        Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request POST /pages/doeditpage.action.
        Reason: Error reading from remote server

    If I clean the browser cache the error goes away, and it comes back after a few attempts. I see the same in Chrome/Firefox/IE on Windows, yet, oddly, it works perfectly in Linux builds of Chrome/Firefox. I googled a lot; there are a few answers on Stack Overflow, but I was not able to find my answer. Is this a server-side problem? Because so many browsers can't all be wrong at the same time on Windows.
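
    One commonly cited workaround for exactly this "Error reading from remote server" on POSTs through mod_proxy_http is to stop reusing pooled backend connections; a sketch, assuming the proxy lives in the SSL vhost (the paths and ports are placeholders):

        # Send each initial request on a fresh backend connection
        SetEnv proxy-initial-not-pooled 1
        ProxyPass        /pages http://localhost:8080/pages
        ProxyPassReverse /pages http://localhost:8080/pages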

    Read the article

  • .htaccess to block by file name possible?

    - by Tiffany Walker
    I have a bunch of files named secure_xxxxxx.php. Is there a way to use .htaccess to block access to all the secure_* PHP files based on IP?

    EDIT: I've tried the following, but I get 500 errors:

        <FilesMatch "^secure_.*\.php$">
        order deny all
        deny from all
        allow from my ip here
        </FilesMatch>

    I don't see any errors in the Apache error logs either.

        httpd -M
        Loaded Modules:
         core_module (static)
         authn_file_module (static)
         authn_default_module (static)
         authz_host_module (static)
         authz_groupfile_module (static)
         authz_user_module (static)
         authz_default_module (static)
         auth_basic_module (static)
         include_module (static)
         filter_module (static)
         log_config_module (static)
         logio_module (static)
         env_module (static)
         expires_module (static)
         headers_module (static)
         setenvif_module (static)
         version_module (static)
         proxy_module (static)
         proxy_connect_module (static)
         proxy_ftp_module (static)
         proxy_http_module (static)
         proxy_scgi_module (static)
         proxy_ajp_module (static)
         proxy_balancer_module (static)
         ssl_module (static)
         mpm_prefork_module (static)
         http_module (static)
         mime_module (static)
         dav_module (static)
         status_module (static)
         autoindex_module (static)
         asis_module (static)
         info_module (static)
         suexec_module (static)
         cgi_module (static)
         dav_fs_module (static)
         negotiation_module (static)
         dir_module (static)
         actions_module (static)
         userdir_module (static)
         alias_module (static)
         rewrite_module (static)
         so_module (static)
         fastinclude_module (shared)
         auth_passthrough_module (shared)
         bwlimited_module (shared)
         frontpage_module (shared)
         suphp_module (shared)
        Syntax OK
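
    The 500s are most likely coming from the malformed directives inside the block: "order deny all" is not valid Apache 2.2 syntax. A corrected sketch, with a documentation-range placeholder instead of a real address:

        <FilesMatch "^secure_.*\.php$">
            Order deny,allow
            Deny from all
            # Replace the placeholder with your actual IP
            Allow from 203.0.113.5
        </FilesMatch>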

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely manner, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process: security updates won't automatically be applied to the instances. It seems there are two areas of concern:

    1. New AMI releases - as new AMI releases hit, it seems we would want to run the latest (presumably most secure). But my research seems to indicate you need to manually launch a new setup to see the latest AMI version, and then create a new environment to use that new version. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. In between releases, there will be security updates released for packages, and it seems we want to apply those as well. My research indicates people install commands to occasionally run a yum update. But since new instances are created and destroyed based on usage, a new instance would not always have the updates (i.e. in the time between instance creation and the first yum update), so occasionally you will have instances that aren't patched - and instances constantly patching themselves until the new AMI release is applied. My other concern is that these security updates haven't gone through Amazon's own review (like the AMI releases do), and applying them automatically might break my app. I know Dreamhost once had a 12-hour outage because they were applying Debian updates completely automatically without any review; I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really more of an install script, after which you are on your own (other than the monitoring and deployment tools they provide)?
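
    For the in-between-releases part, a sketch of the usual pattern - an .ebextensions config file shipped with the app, so every new instance patches itself at deploy time (the file name is arbitrary, and this assumes a yum-based Amazon Linux platform):

        # .ebextensions/00-security-updates.config
        commands:
          01_yum_security_update:
            command: yum -y update --security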

    Read the article

  • Using a dnsmasq DNS server on a laptop for a locally hosted TLD

    - by kimausloos
    Hi, I'm running a working Apache install with mod_vhost_alias on a laptop. Everything works, but I have to manually add hosts in my /etc/hosts file. I'd like to install dnsmasq to point a TLD at my local machine (this is not the problem - I know how to add a wildcard TLD to dnsmasq), but how do I make it go and look for other DNS servers upstream? Remember, I'm talking about a laptop, so I will be working on various WiFi hotspots, with different default gateways, DNS servers, etc. I've searched for quite some time, but the only things I find are guides that require you to hardcode the upstream servers, which is a no-go for me. Any help is appreciated!
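
    A minimal sketch of the split, assuming the local TLD is .dev and that something (a DHCP hook or resolvconf, for instance) writes the current upstream servers to a separate file:

        # /etc/dnsmasq.conf
        # Wildcard: *.dev resolves to the local Apache
        address=/dev/127.0.0.1
        # Take upstream nameservers from here instead of /etc/resolv.conf,
        # so /etc/resolv.conf itself can point at 127.0.0.1
        resolv-file=/etc/resolv.dnsmasq.conf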

    Read the article

  • How do I enable JPEG Support for PHP?

    - by ngache
    My Configure Command doesn't say anything about jpg or gif/png, but I can see GIF/PNG support in the output of phpinfo(). I built PHP with --with-gd, but only "GIF Support" and "PNG Support" show up in phpinfo() - how do I enable JPEG support?

    UPDATE: I got this problem when compiling:

        Sorry, I cannot run apxs. Possible reasons follow:
        1. Perl is not installed
        2. apxs was not found. Try to pass the path using --with-apxs2=/path/to/apxs
        3. Apache was not built using --enable-so (the apxs usage page is displayed)
        The output of /usr/local/apache2/bin/apxs follows:
        cannot open /usr/local/apache2/build/config_vars.mk: No such file or directory at /usr/local/apache2/bin/apxs line 218.

    What should I do now?
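
    GD only gains JPEG support when configure is pointed at libjpeg. A sketch of the relevant flags for a PHP 5.2-era build (the paths are assumptions; the apxs error additionally suggests the Apache tree under /usr/local/apache2 is missing its build files, so the httpd installation may need repairing first):

        ./configure --with-apxs2=/usr/local/apache2/bin/apxs \
                    --with-gd \
                    --with-jpeg-dir=/usr \
                    --with-png-dir=/usr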

    Read the article

  • CentOS live CD of current installation

    - by mplacona
    I'm trying to create a live CD of my current CentOS installation. I want it to be almost like a backup, so whenever I want to copy my current installation to another computer I would simply install from my custom live CD. I know this is possible, and I found some resources on the net, but they all seem to create only a minimal version of CentOS, and I want all the functionality currently available to me, including all of my development tools, Apache and Samba settings, etc. I did this (on Debian, though) a few years ago, but can't remember how. Could anyone please shed some light on this? Thanks in advance.

    Read the article

  • How can I let a normal user install packages with `pkg_add` on FreeBSD?

    - by Eonil
    How can I let a normal user install packages with pkg_add on FreeBSD? The pkg_add -r command fails for a normal user with sudo: downloading succeeds, but installation fails with the error message below. The same command executes successfully when logged in as root.

        %sudo pkg_add -r apache22
        Password:
        Error: Unable to get ftp://ftp.freebsd.org/pub/FreeBSD/ports/amd64/packages-8.1-release/Latest/apache22.tbz: Syntax error, command unrecognized
        pkg_add: unable to fetch 'ftp://ftp.freebsd.org/pub/FreeBSD/ports/amd64/packages-8.1-release/Latest/apache22.tbz' by URL
        %

    Assume my username is eonil. I added "eonil ALL=(ALL) ALL" next to "root ALL=(ALL) ALL" via visudo, and added the user to the wheel group with pw usermod eonil -G wheel. But the user cannot install packages with sudo pkg_add -r apache22 (not only Apache - any package).
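
    A hedged guess at the cause: sudo strips most environment variables, including FTP_PASSIVE_MODE, and "Syntax error, command unrecognized" is the classic symptom of active FTP failing through a firewall or NAT. Worth trying:

        # Pass the variable through sudo explicitly
        sudo env FTP_PASSIVE_MODE=yes pkg_add -r apache22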

    Read the article

  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start.

        [root@box~]  uname -a
        Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux
        [root@box~]  cat /etc/redhat-release
        Fedora release 17 (Beefy Miracle)
        [root@box~]  systemctl enable httpd.service
        ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
        [root@box~]  systemctl start httpd.service
        Job failed. See system journal and 'systemctl status' for details.
        [root@box~]  systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
            Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
            Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago
           Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE)
            CGroup: name=systemd:/system/httpd.service

        Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory

        [root@box~]  ls -al /usr/sbin/httpd
        -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd
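
    status=226/NAMESPACE means systemd failed while setting up the service's namespace, and Fedora's httpd.service uses PrivateTmp=true; a missing or unmountable /tmp or /var/tmp on the Xen guest would produce exactly this. A hedged test is to disable it in a local copy of the unit:

        # Copy the unit so the local edit survives package updates
        cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
        sed -i 's/^PrivateTmp=true/PrivateTmp=false/' /etc/systemd/system/httpd.service
        systemctl daemon-reload
        systemctl start httpd.service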

    Read the article

  • Linux Mint: something wrong with my .bashrc

    - by user2309862
    The path of my .bashrc file is /home/vamsi/.bashrc. It is weird that the file has nothing but the paths I set. I think I am using a file at the wrong location, or I have lost my .bashrc file, as none of the environment variables set here seem to work.

        #ANDROID_DEV
        ANDROID_HOME=/opt/android-sdk-linux
        export ANDROID_HOME
        PATH= $PATH:$ANDROID_SDK_HOME/tools
        export PATH
        PATH=$PATH:$ANDROID_HOME/platform-tools
        export PATH
        PATH=$PATH:$ANDROID_HOME/build-tools
        export PATH

        #MAVEN-PATH
        M2_HOME=/opt/apache-maven-3.1.0
        export M2_HOME
        M2=$PATH:$M2_HOME/bin
        export M2

    I was prompted to install maven2 in order to use mvn, but the android command cannot be found. Could you please help me find a solution to this issue?

    EDIT: Meanwhile, I tried this:

        export PATH=${PATH}:/opt/android-sdk-linux/platform-tools
        export PATH=${PATH}:/opt/android-sdk-linux/tools

    Now the output of $PATH echoes:

        bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/android-sdk-linux/platform-tools:/opt/android-sdk-linux/build-tools: No such file or directory
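
    There are two concrete bugs in the file above: "PATH= $PATH:..." has a space after the equals sign (so PATH is set to the empty string and the remainder is run as a command), and that line references $ANDROID_SDK_HOME although the variable defined is ANDROID_HOME. Also, "$PATH" on its own is executed as a command - hence the "No such file or directory" - so use "echo $PATH" instead. A fixed sketch of the Android section:

        export ANDROID_HOME=/opt/android-sdk-linux
        export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$ANDROID_HOME/build-tools

        # Inspect the result
        echo $PATH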

    Read the article

  • Missing files after reassembly of RAID-5

    - by Kris_R
    I had to open my file server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the SATA cables was not properly connected. The first thing I did after a reboot was check the RAID status, and it showed immediately that one drive was missing. Up to this moment the device was not used (however, it was mounted, so I'm not 100% sure the system did nothing). I stopped md0 and re-plugged the cable:

        mdadm --stop /dev/md0
        poweroff

    After another reboot I checked the removed drive:

        mdadm --examine /dev/sdd1
        ...
          Checksum : 3276bc1d - correct
            Events : 315782
            Layout : left-symmetric
        Chunk Size : 32K

        Number   Major   Minor   RaidDevice   State
        this     0       8       49           0      active sync   /dev/sdd1
           0     0       8       49           0      active sync   /dev/sdd1
           1     1       8       65           1      active sync   /dev/sde1
           2     2       8       33           2      active sync   /dev/sdc1
           3     3       8       17           3      active sync   /dev/sdb1

    I was a bit surprised that it was shown as active (even though mdadm had earlier said this device was removed from the array) and that its checksum was OK. I reassembled the RAID with:

        mdadm --assemble /dev/md0 --scan

    The command mdadm --detail /dev/md0 showed that all drives were running and the array was in "clean" state. I mounted md0, and then came the hiccup. I wanted to work on one of the last files I had been using before all this, and it was not there; in another place, actually all the files from the directory where I was working are missing. As far as I can see, most files older than a few days are intact, but some newer ones are missing. Now the big question: what would be your advice? Is there a way to get this data back? I thought about removing the drive that was earlier labeled by mdadm and rebuilding the array with another empty HDD. I've found that after the re-assembly the "broken" drive has another label (changed from sdd to sdb). Can this influence the rebuild process? If yes, how do I reassemble the array properly? I'm sure the SATA cables are still connected to the controller in the same order.

    p.s. Please, no advice like "restore from backup". I do backups on Sunday nights and this happened in the late afternoon, so a backup is not really an option for me.

    p.p.s. I asked this question on Unix & Linux, but no answer came up during the last two days and I'm getting quite anxious. Sorry for duplicating, if any of you read the other forum.
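
    One safe, read-only check before touching anything: compare the event counters of all members; a member whose Events count lags behind the others was out of the array for a while and holds stale data, which would explain files from that gap going missing:

        mdadm --examine /dev/sd[bcde]1 | grep -E '/dev/|Events'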

    Read the article

  • Setting MacBook timezone to UTC

    - by Andy A
    To run my web app, I need to set the timezone to UTC on my MacBook. I can do this temporarily by opening a console and entering:

        sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime

    However, the timezone returns to normal when I restart my machine! Any advice?

    Edit: The response to this question by 'Celada' implies that I can just make my server use UTC. I am using Apache Tomcat 7. Following up on Celada's response, how can I make it use UTC?
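
    If the goal is just that the web app sees UTC, a common alternative to changing the OS clock is pinning the JVM that runs Tomcat to UTC; a sketch (setenv.sh is the conventional place for this, created next to catalina.sh if it doesn't exist yet):

        # $CATALINA_HOME/bin/setenv.sh
        CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"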

    Read the article

  • Issues with DNS entries

    - by Yaman
    I have some trouble with my DNS entries (or maybe my Apache conf). I have something like this:

        kira.mydomain.com           A       123.45.67.89
        youfood.mydomain.com        CNAME   kira.mydomain.com
        www.youfood.mydomain.com    CNAME   youfood.mydomain.com

    All is good when I check these entries with nslookup. When I try going to http://www.youfood.mydomain.com it works, but not with http://youfood.mydomain.com... Here is my vhost:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName youfood.mydomain.com
            ServerAlias www.youfood.mydomain.com
            DocumentRoot /home/ftp_youfood/www/trunk
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/ftp_youfood/www>
                Options FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            [...]
        </VirtualHost>

    Is there anything wrong?
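
    A quick check worth running: ask Apache which vhost it actually selects for each hostname, since a name that resolves fine in DNS but lands in the wrong vhost points at the conf rather than at the DNS entries:

        apachectl -S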

    Read the article

  • What consequences should I draw from what I read in log files?

    - by Helene Bilbo
    For some weeks now I have been managing my first web server, a Seaside application behind an Apache proxy on Linode, and I installed logwatch to send me daily logs. Where can I get information on when I have to act on what I read in these logwatch reports? For example, I read that all kinds of people try to log in to funny non-existing accounts, all kinds of web crawlers test for non-existing CMS login pages, and some IP addresses get banned and unbanned by fail2ban... I assume that's normal? Is it? But how do I know when I probably have to do something? What do I look for in the logs?

    Read the article

  • Problems with kickstart script, partition info crashes deployment

    - by tore-
    Hi, I am currently testing Cobbler, but have a problem with the kickstart script when the partition information is loaded. Here is my ks: http://pastebin.ca/1824343. I can't figure out what the problem with the part section is at all. Without it, it works. I've even tried autopart. If the entry is removed, it works, but of course I have to provide the installer with partition information. During the kickstart a Python exception is raised: I get "Errno 2 No such file or directory". My Apache logs state:

        File does not exist: /var/www/cobbler/links/CentOS-5.3-x86_64/images/updates.img
        File does not exist: /var/www/cobbler/links/CentOS-5.3-x86_64/disc1
        File does not exist: /var/www/cobbler/links/CentOS-5.3-x86_64/images/product.img

    But without the part information, no error occurs. What am I not seeing? Cobbler 2.0.3, imported from the CentOS 5.3 x86_64 DVD, PXE booting from a Xen guest.
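
    For comparison, a minimal CentOS 5 part section with known-good syntax (the disk layout and sizes here are placeholders, not taken from the pastebin):

        clearpart --all --initlabel
        part /boot --fstype ext3 --size=100
        part swap  --size=1024
        part /     --fstype ext3 --size=1 --grow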

    Read the article

  • My NGINX server doesn't use my *.less file

    - by Nicolas
    On my nginx server, I use a LESS file instead of a CSS file, but my web page is displayed without any style. I tried the same on Apache and it works great. So I tried to add a LESS mime type to nginx in /etc/nginx/mime.types:

        types {
            text/css    css less;

    and:

        types {
            text/less   less;

    Neither of these two attempts works. Does anyone know how to use LESS files with nginx?
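
    Two things worth verifying, sketched below (the URL is a placeholder): an extension may appear only once across all of mime.types - nginx warns about duplicate extensions - and the edit only takes effect after a reload; curl then shows whether the Content-Type actually changed:

        sudo nginx -t && sudo nginx -s reload
        # The stylesheet should now come back as text/css
        curl -I http://localhost/css/style.less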

    Read the article

  • Install a CA root trust certificate in CentOS

    - by Shyamin Ayesh
    I installed an SSL certificate on my web site and now I have some questions about it. My web site works correctly in the Google Chrome browser, but it does not work in Firefox. One of my friends tells me the CA root trust certificate is not installed on the server. Now I need to know how I can confirm that the CA root trust certificate is not installed, and how to install it on CentOS 6.4 minimal with Apache. My SSL certificate was issued by AlphaSSL and it's a domain-validated wildcard certificate (CA - G2). Thank you very much for a prompt reply!
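
    Chrome often fetches or fills in a missing intermediate certificate itself while Firefox does not, which matches the symptom: the intermediate (chain) certificate is probably not being served. A sketch of the usual fix on CentOS with Apache (the paths and bundle file name are assumptions; the bundle itself comes from AlphaSSL):

        # In the SSL vhost, e.g. /etc/httpd/conf.d/ssl.conf
        SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
        SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
        SSLCertificateChainFile /etc/pki/tls/certs/alphassl-intermediate.crt

    To confirm what the server actually sends, run openssl s_client -connect example.com:443 -showcerts from any machine; if the AlphaSSL intermediate is absent from that output, Firefox will fail exactly as described.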

    Read the article

  • SELinux interfering with vboxwebsrv or phpVirtualBox

    - by Mike W
    I have a brand-new installation of Fedora 18, with a brand-new installation of VirtualBox 4.2. I have spent a painful few hours trying to get phpVirtualBox working. Apache 2.4 and PHP 5.4 are installed, along with the phpVirtualBox software. Attempting to access phpVirtualBox allowed me to log in, but then I'd have a prolonged wait until an 'Error fetching HTTP headers' message appeared. Finally, I set SELinux to permissive, and bingo! things started to work. For some reason the SELinux troubleshooter isn't flagging any messages from SELinux, so I don't know what to look for now. This is a development box, so I could leave SELinux set to permissive, but I will need to make this work in anger on the next project. My question, then, is this: what changes to SELinux policies do I need to make to allow phpVirtualBox and vboxwebsrv to work together? If there's more information I can post that will assist, I'll gladly post it - just let me know what it is.
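
    A hedged starting point: phpVirtualBox's PHP code opens an HTTP connection from Apache to vboxwebsrv, which the default targeted policy forbids, so the boolean below often resolves exactly this; audit2allow can then turn any remaining denials into a local policy module:

        # Allow httpd (and thus PHP) to open outbound network connections
        setsebool -P httpd_can_network_connect on

        # If it still fails, build a policy module from the raw denials
        grep denied /var/log/audit/audit.log | audit2allow -M phpvbox
        semodule -i phpvbox.pp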

    Read the article

  • PHP not loading php.ini in directory or following error_reporting() on Windows 7

    - by Marcus
    Normally I develop at the E_ALL error level, but for sanity on this project I want notices and strict warnings off. So initially I tried:

        error_reporting(E_ALL & ~(E_STRICT | E_NOTICE));

    and several other combinations of the same thing; nothing worked. Next I tried to create a local php.ini in the directory, with:

        error_reporting = E_ALL & ~E_NOTICE

    but nope, that didn't work either. phpinfo() reports:

        Scan this dir for additional .ini files: (none)

    Can someone help me fix either of these problems? Preferably both! Thanks! I'm running PHP 5.2.13 on Apache 2.2.14 under Windows 7 x64.
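
    A hedged explanation and workaround: under mod_php a per-directory php.ini is ignored (that mechanism is CGI/CLI-only, and the "(none)" scan directory in phpinfo() confirms nothing extra is read), but the same setting can go into .htaccess, which needs the numeric value because constants aren't evaluated there. On PHP 5.2, E_ALL is 6143 and E_NOTICE is 8 (E_STRICT is already excluded from E_ALL), so:

        # .htaccess - requires AllowOverride Options (or All)
        php_value error_reporting 6135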

    Read the article
