Search Results

Search found 10640 results on 426 pages for 'apache2 module'.

Page 105/426 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Can mod_rewrite Conditions/Rules be executed in random order?

    - by Tom
    I have some mod_rewrite rules that test for the presence of a file on various NFS mounts, and I would like the tests to occur in random order, as a very rudimentary way to distribute load. For example:

        RewriteEngine on

        RewriteCond %{REQUEST_URI} ^/(.+)$
        RewriteCond /mnt/mount1/%1 -f
        RewriteRule ^/(.+)$ /mnt/mount1/$1 [L]

        RewriteCond %{REQUEST_URI} ^/(.+)$
        RewriteCond /mnt/mount2/%1 -f
        RewriteRule ^/(.+)$ /mount2/$1 [L]

        RewriteCond %{REQUEST_URI} ^/(.+)$
        RewriteCond /mnt/mount3/%1 -f
        RewriteRule ^/(.+)$ /mnt/mount3/$1 [L]

        RewriteCond %{REQUEST_URI} ^/(.+)$
        RewriteCond /mnt/mount4/%1 -f
        RewriteRule ^/(.+)$ /mnt/mount4/$1 [L]

    As far as I understand mod_rewrite, Apache will look for the file on /mnt/mount1, then mount2, mount3 and so on. Can I randomize this on each request? I understand this is an odd request, but I need a creative solution to some unforeseen downtime. On a side note, do I need to redeclare RewriteCond %{REQUEST_URI} ^/(.+)$ each time like I have done? Thanks
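    One approach worth noting here (not something from the question itself) is mod_rewrite's rnd: RewriteMap type, which picks a random value from a pipe-separated list on each lookup. A minimal, untested sketch follows; the map path and the key name "pool" are invented for illustration, RewriteMap is only valid in server or virtual-host context (not .htaccess), and each ${...} lookup is independent, so the -f test and the substitution could in principle pick different mounts.

        # /etc/apache2/mounts.map contains a single line:
        #   pool mount1|mount2|mount3|mount4
        RewriteMap mounts rnd:/etc/apache2/mounts.map

        RewriteEngine on
        # REQUEST_URI already carries the leading slash, so no separate capture
        # is needed just to build the filesystem path for the -f test.
        RewriteCond /mnt/${mounts:pool}%{REQUEST_URI} -f
        RewriteRule ^/(.+)$ /mnt/${mounts:pool}/$1 [L]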

    Read the article

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it.

    Basically, Thursday last week everything was fine on our server. I come in on Friday and it's a mess: PHP extensions are missing/not working, Apache modules are gone (e.g. oci_* was gone completely, odbc_* not working but still there, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else.

    Now I'm wondering if somehow those extensions and such, which we installed months ago, were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little/no sense to me.

    To expand on an example of something that's gone very wrong, the PHP odbc_* extension: it's still on the server, and it doesn't return "undefined function" or anything. But it just cannot connect to the datasource any more. I've tested it through the command line and it's working perfectly fine with that datasource and login details, but all of a sudden having it in the PHP odbc_connect() function and it just can't connect. ( [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source. ) But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function.

    Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache webserver in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external webservices, and we have a very lousy external webservice which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

        ps -ef | grep httpd | wc -l    # = 300

    I have googled and found numerous suggestions but nothing seems to work. Following are some of the configurations I have done, which are taken directly from online resources. I have increased the limits of max connections and max clients in both Apache and Tomcat. Here are the configuration details:

        //apache
        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     10
            MaxSpareServers     10
            ServerLimit         50000
            MaxClients          50000
            MaxRequestsPerChild 2000
        </IfModule>

        //tomcat
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

        //Sysctl.conf
        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure the m2xlarge server should serve more than 300 requests; probably I am going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300 second delayed] webservice to respond. Please help.
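    As a rough back-of-the-envelope check (the request rate here is an assumption, not a figure from the question): with a backend call that holds an Apache worker for about 300 seconds, Little's law gives concurrent workers = arrival rate x hold time, so even 1 request per second to that webservice ties up about 300 workers at once, and 10 requests per second would need about 3,000. In other words, the 300-process ceiling is roughly what 1 req/s against a 300-second dependency produces, regardless of how high MaxClients is set.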

    Read the article

  • DNS redirecting to Apache

    - by leo
    I have CentOS installed on LVM, which is on Debian. There are BIND and Apache on CentOS. I need to access sites from a browser on Debian with names like 1.domain, 2.domain, etc. So I set up Apache and I can access these sites, but only by using /etc/hosts on Debian. And now I'm trying to configure BIND.

    named.conf:

        zone "domain" IN {
            type master;
            file "/var/named/domain.zone";
            allow-update { none; };
        };

    192.168.100.1 is the DNS server's IP; 192.168.100.139 is the Apache server's IP.

    domain.zone:

        $TTL 86400
        @       IN      SOA     domain. root.domain. (
                                100 1H 1M 1W 1D )
        @       IN      NS      ns1.domain.
        @       IN      A       192.168.100.139
        ns1     IN      A       192.168.100.1
        WWW     IN      A       192.168.100.139
        1       IN      A       192.168.100.139
        2       IN      A       192.168.100.139
        www.1   IN      A       192.168.100.139
        www.2   IN      A       192.168.100.139

    Also, is it necessary to configure 100.168.192.in-addr.arpa? Please explain where I'm going wrong.
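    For reference, a reverse zone is not required just to browse the sites by name; it only matters if something needs PTR (address-to-name) lookups to resolve. A minimal, untested sketch of what it would look like if wanted, with the file name and timer values chosen for illustration. In named.conf:

        zone "100.168.192.in-addr.arpa" IN {
            type master;
            file "/var/named/192.168.100.zone";
            allow-update { none; };
        };

    And /var/named/192.168.100.zone:

        $TTL 86400
        @       IN      SOA     ns1.domain. root.domain. (
                                100 1H 1M 1W 1D )
                IN      NS      ns1.domain.
        1       IN      PTR     ns1.domain.
        139     IN      PTR     domain.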

    Read the article

  • Apache on Mac OS X

    - by Michal K.
    I have problems with Apache on Mac OS X Lion 10.7.5. I have these VirtualHosts:

        <VirtualHost *:80>
            ServerName devel.dev
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName test.dev
            DocumentRoot /var/www
            <Directory "/var/www">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have configured httpd.conf with ServerRoot = /usr/htdocs (an empty directory). test.dev and devel.dev always say "It works!". Why? In /var/www I have an index.php which contains only one letter, "k" (for tests).

    Edit, more info: I have restarted Apache a million times. The file with the VirtualHosts is included. error.log:

        [Tue Oct 02 20:03:55 2012] [notice] caught SIGTERM, shutting down
        [Tue Oct 02 20:03:55 2012] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Tue Oct 02 20:03:55 2012] [notice] Digest: generating secret for digest authentication ...
        [Tue Oct 02 20:03:55 2012] [notice] Digest: done
        [Tue Oct 02 20:03:55 2012] [notice] Apache/2.2.22 (Unix) DAV/2 PHP/5.3.15 with Suhosin-Patch configured -- resuming normal operations

    When I stop Apache, localhost still displays "It works!"
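    A few standard httpd checks that may help narrow this down (nothing here is specific to the setup in the question, and binary names can differ slightly on OS X):

        apachectl configtest                  # syntax check; confirms the config parses
        apachectl -S                          # dump the parsed vhost map: shows whether
                                              # devel.dev / test.dev were picked up and which
                                              # DocumentRoot each name actually resolves to
        httpd -V | grep SERVER_CONFIG_FILE    # which httpd.conf this binary really loads
        sudo lsof -i :80                      # "It works!" after stopping Apache suggests
                                              # something else is still bound to port 80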

    Read the article

  • Apache Getting Bogged Down by a Certain Script (wp-cron.php) - How to Kill Processes Automatically

    - by user50037
    I have a server that is running a number of WordPress blogs, and a number of them have several hundred/thousand posts. Every couple of days, the server slows to a crawl due to a file being run on WordPress called wp-cron.php. My entire Apache process log turns into this: http://imgur.com/A7K9k.png (times that by quite a bit), and the server no go. Each process takes up about 1.1% of RAM, and when we have 50 of them on the go it gets insane. Not all of them are coming from the same blog; they are pretty widespread. In the Apache process page of WHM, they are usually ALL set to the status of "C", which means closing. But they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a tonne of dead "pinglists" to their WordPress installation, which in turn WordPress loops through endlessly.

    Problem number 1: Does anyone have any other suggestions about what would cause the WordPress file wp-cron.php to loop endlessly? I still think it is down to pings, because all of the people we have contacted about their account load going sky high have had massive ping lists.

    Problem number 2: Even if it is down to excessive pinglists in WordPress, we cannot be babying every single account on the server waiting for it to start spawning the wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2am about the load. I have CSF installed, which apparently would have ended the processes if they ran over XXX time. But I have been told that it won't catch the processes because they end up in this state of "closing" (they show up as "C" on the Apache page of WHM); apparently CSF will only kill processes that are "running", which "C" does not count as. I have seen various other scripts such as http://dltj.org/article/die-apache-die/ . I took a look at the stat in /proc, but I was boggled at which delimited part was the time running, and whether there was any way I could connect it back to an actual Apache process, so that I could see what file was running (so as to only close connections serving wp-cron.php, with a state of "C").

    Overall I know problem 2 glosses over the real reason, but I do put the whole thing down to excessive pinglists in WordPress. I just cannot sit there and babysit every single installation 24/7, so I need a way to save the server when I am not available. Any help would be much appreciated.
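    One way to make the PID-to-URL connection the question asks about (a general mod_status technique, not something from the original post): with ExtendedStatus enabled, the server-status page lists each worker's PID, its state ("C" for closing), the seconds spent on its current request (the SS column), and the request itself, which is enough to pick out long-running wp-cron.php workers for a wrapper script to kill. A rough, untested sketch; the access restriction and the grep are obviously adjustable:

        # httpd.conf (Apache 2.2 syntax)
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

        # From the server itself (apachectl fullstatus needs lynx or links installed):
        #   apachectl fullstatus | grep wp-cron.php
        # Each matching row includes the PID, which a cron'd script could feed to kill.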

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what is done, I keep getting the following:

        File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the '3063-3e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server this is occurring on. The permissions on this repository match every other repository, and disk space is not an issue.

    Read the article

  • how to set php SERVER_PORT var to 80, behind varnish?

    - by Daniel
    How do I force PHP to read SERVER_PORT as 80 when Apache listens on 8080 and Varnish listens on 80? If my Apache vhost is set to 8080, SERVER_PORT will always be 8080. This is troubling me a little, since in many parts of the application some links are built from SERVER_NAME and SERVER_PORT together. So what I need is for PHP to "believe" that SERVER_PORT is 80, so all links will pass through Varnish.

    Regards, Daniel
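    One documented Apache behaviour that fits this (the vhost name below is a placeholder): with UseCanonicalName On, the SERVER_NAME and SERVER_PORT values handed to PHP are taken from the ServerName directive, including its port, rather than from the physical connection. A minimal sketch:

        <VirtualHost *:8080>
            ServerName www.example.com:80
            UseCanonicalName On
            # ... rest of the vhost as before ...
        </VirtualHost>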

    Read the article

  • htaccess not found

    - by clarkk
    I have installed an Apache 2 server (from Webmin) on Debian 6. I have set up a virtual host db.domain.com on the server, which works fine, but .htaccess doesn't work if you access it via the IP address, and the directory is listed if no index.php is found:

        db.domain.com   -> 403 Forbidden
        xxx.xxx.xxx.xxx -> gets access to the server

    Why is .htaccess ignored when you access the server by its IP address?

    httpd.conf:

        <Directory *>
            Options -Indexes FollowSymLinks
        </Directory>

        <VirtualHost *:80>
            ServerName db.domain.com
            DocumentRoot /var/www
        </VirtualHost>

    .htaccess:

        order deny,allow
        deny from all
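    One thing worth checking (an assumption, not a confirmed diagnosis): a request that arrives by raw IP does not match ServerName db.domain.com, so it is served by whichever site Apache treats as the default server, and that default may have its own DocumentRoot and its own AllowOverride setting (on Debian, typically the "default" site shipped in sites-enabled). A sketch of a directory block that makes the override explicit wherever /var/www is served from:

        <Directory /var/www>
            Options -Indexes +FollowSymLinks
            AllowOverride All
        </Directory>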

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

        ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

    Here are the results:

        Concurrency Level:      20
        Time taken for tests:   48.197 seconds
        Complete requests:      200
        Failed requests:        0
        Write errors:           0
        Total transferred:      392111200 bytes
        HTML transferred:       392047600 bytes
        Requests per second:    4.15 [#/sec] (mean)
        Time per request:       4819.723 [ms] (mean)
        Time per request:       240.986 [ms] (mean, across all concurrent requests)
        Transfer rate:          7944.88 [Kbytes/sec] received

    While the ab test is running, I run VMStat, which shows that Swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and site slows to a crawl.

    Here's what I think I'm doing right on the code side: In Chrome browser uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either. Thanks to view caching, there might be at most one DB query per page-load to write session data. So we can rule out a DB bottleneck. I have APC on. I am using Memcached to serve the view cache and other site caches. The xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms - 1000ms in wall time. Pages that would be the worst offenders would look something like this in xhprof:

        Total Incl. Wall Time (microsec):   330,143 microsecs
        Total Incl. CPU (microsecs):        320,019 microsecs
        Total Incl. MemUse (bytes):         36,786,192 bytes
        Total Incl. PeakMemUse (bytes):     46,667,008 bytes
        Number of Function Calls:           5,195

    My Apache config:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 3

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          120
            MaxRequestsPerChild 1000
        </IfModule>

    Is there something wrong with the server? Some gotcha with the EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
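    A rough back-of-the-envelope reading of the numbers already in the question (an approximation, not a measurement): xhprof reports roughly 0.32 CPU-seconds for the heavier cached pages, so two cores top out near 2 / 0.32, about 6 requests per second of PHP work, which lines up with the 4.15 req/s ab reports. At 20 concurrent clients against a ceiling of about 6 req/s, each request queues for roughly 3-4 seconds, close to the 4.8 s mean ab shows. On that arithmetic the box looks CPU-bound in PHP rather than misconfigured in Apache, and 1,000-way concurrency would need either much cheaper pages or many more cores.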

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly it works fine. I'm running Varnish 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as a base for my VCLs and modified the VCLs to support Concrete5. Does anyone have any clue as to how I should debug this?

        backend default {
            .host = "127.0.0.1";
            .port = "80";
            .connect_timeout = 1.5s;
            .first_byte_timeout = 45s;
            .between_bytes_timeout = 30s;
            .probe = {
                .url = "/";
                .timeout = 1s;
                .interval = 10s;
                .window = 10;
                .threshold = 8;
            }
        }

    LOG:

        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1353791312 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1353791315 1.0
        0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently

    (The 301 is because I check for www.)
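    One detail visible in that log (offered as an assumption about the cause, not a confirmed fix): the health probe fetches "/" and gets the 301 to www., and a probe that does not return 200 counts as a failure, which alone will keep the backend marked sick. A sketch of a probe that sends the canonical Host header instead, with the hostname invented for illustration:

        .probe = {
            .request =
                "GET / HTTP/1.1"
                "Host: www.example.com"
                "Connection: close";
            .timeout = 1s;
            .interval = 10s;
            .window = 10;
            .threshold = 8;
        }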

    Read the article

  • .htaccess redirect not preserving http_referer header

    - by CodeToaster
    We're merging with another company, and we want to redirect content from their (Apache) website to our (IIS) site. When traffic arrives at our site, we inspect the HTTP_REFERER, and if the visitor was just redirected from the company's site that we just merged with, they'll be presented with a "splash" page announcing the merger. I've added the line... Redirect / http://www.oursite.com/ ...to their .htaccess, which works fine, except that when the browser is redirected it doesn't send the HTTP_REFERER header. I've tried redirecting with redirect codes 301, 302 and 307 (the default, I believe, is 302) and all have the same effect (redirects fine, but no HTTP_REFERER). Can anyone provide some insight into why HTTP_REFERER wouldn't be included?
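    A hedged workaround sketch (the query parameter name is invented): since browsers don't reliably send a Referer header across a redirect, the redirect itself can tag the request so the IIS side keys the splash page off a query argument instead of HTTP_REFERER.

        RewriteEngine On
        # QSA keeps any query string the original request already had.
        RewriteRule ^(.*)$ http://www.oursite.com/$1?from=oldsite [R=302,QSA,L]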

    Read the article

  • How to set up Apache multi-site with multi-domain on EC2

    - by Esh
    Say I have two document roots, domain1/ and domain2/. I know how to access those two roots from my own computer if they are hosted on the same computer. My question is: if I want to do the same thing on my EC2 server, how should I configure my Elastic IPs to point at those two roots? I understand that by default the Elastic IP will only be associated with the root served as localhost (127.0.0.1). Could anyone give me a detailed answer? An example would help, thanks!
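    A minimal sketch of the usual approach (the domain names and paths are placeholders): a single Elastic IP can serve both document roots with name-based virtual hosts; the DNS A records for both domains point at that one Elastic IP, and Apache picks the root from the Host header.

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName domain1.example.com
            DocumentRoot /var/www/domain1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain2.example.com
            DocumentRoot /var/www/domain2
        </VirtualHost>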

    Read the article

  • htaccess redirect loop

    - by Web Developer
    I am having an issue with the last line of the code below, which is causing the redirect loop (at least that's what I think):

        RewriteEngine On
        RewriteBase /jgel/
        RewriteCond %{REMOTE_ADDR} !^172\.172\.121\.142
        RewriteCond %{REQUEST_URI} !maintainance\.php
        RewriteCond %{REQUEST_URI} !resources/(.*)$ [nc]
        RewriteRule ^(.*)$ maintenance.php [R=307,L]

    I have tried this, and it doesn't work either:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REMOTE_ADDR} !^172\.172\.121\.142
        RewriteCond %{REQUEST_URI} !maintainance\.php
        RewriteCond %{REQUEST_URI} !resources/(.*)$ [nc]
        RewriteRule ^(.*)$ /jgel/maintenance.php [R=307,L]
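    A hedged observation rather than a confirmed fix: in both versions the exclusion condition tests for "maintainance.php" (with the extra "a") while the rule redirects to "maintenance.php", so the redirected request never matches the exclusion and gets rewritten again. Making the two spellings agree should break the loop; a sketch of the relevant lines:

        RewriteCond %{REQUEST_URI} !maintenance\.php
        RewriteRule ^(.*)$ maintenance.php [R=307,L]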

    Read the article

  • connections in FIN_WAIT and CLOSE_WAIT state

    - by Raj
    I would like to describe the setup so you guys can understand the question and answer more accurately. I have HAProxy as the load balancer, 4 webservers (Apache 2.2.3) and one database server (MySQL 5). I am monitoring these servers with Nagios. I have disabled keepalive on Apache, as we have only 8GB of memory.

    Now, whenever I receive alerts for high memory and CPU utilization, I have observed that the connections from Apache to the database server hang in ESTABLISHED mode (keepalive with a timeout value of 7200), while on the other side, between HAProxy and Apache, connections show a status of FIN_WAIT on the HAProxy server and CLOSE_WAIT on the Apache side. I also see huge memory swapping, with Apache taking most of the memory. I ran strace on an Apache process and did not find any information: strace attaches to the Apache process but does not produce any output. The process list on the MySQL server shows those processes in sleep mode. The application on the webservers is Magento, a PHP application. If you need further information please let me know. Thanks.
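    A small diagnostic sketch using standard tools (nothing here is specific to this setup): counting TCP states per port makes it easier to see on which leg, HAProxy-to-Apache or Apache-to-MySQL, the FIN_WAIT / CLOSE_WAIT sockets pile up during an alert.

        # State counts overall:
        netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn
        # Only the web leg (local port 80) or the MySQL leg (remote port 3306), for example:
        netstat -ant | awk '$4 ~ /:80$/ || $5 ~ /:3306$/ {print $6}' | sort | uniq -c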

    Read the article

  • Dynamic authentication realms in Apache

    - by Cogsy
    I have a front end server acting as a gateway proxy for many (a dynamic 'many') building monitors with embedded webservers. They are accessed with a URL like:

        http://www.example.com/monitor1/
        http://www.example.com/monitor2/
        ...

    I'm trying to restrict access to these monitors to only the users that own them. So what I need is a way of specifying rights to users or groups for specific directories. The standard auth mechanisms I see in Apache won't work because I need to specify every location. I'd prefer some dynamic map or script. Any suggestions?
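    For comparison, a hedged sketch of the static shape this usually takes (the file paths, realm and group names are invented): one shared user and group file, with one group per monitor, so granting or revoking access is an edit to the group file rather than a new Apache block. The per-monitor <Location> blocks themselves are still repetitive, which is where something like mod_macro or a generated include file would come in for the "dynamic many" case.

        <Location /monitor1/>
            AuthType Basic
            AuthName "Building monitors"
            AuthUserFile /etc/httpd/monitor.users
            AuthGroupFile /etc/httpd/monitor.groups
            Require group monitor1
        </Location>

        # /etc/httpd/monitor.groups
        #   monitor1: alice bob
        #   monitor2: carol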

    Read the article

  • Zend Framework Application: module dependencies

    - by takeshin
    How do you handle dependencies between modules in Zend Framework to create reusable, drop-in modules?

    E.g. I have a Newsletter module, which allows users to subscribe by providing an e-mail address. Next, I plan to add a Blog module, which allows subscribing to posts by e-mail (it obviously duplicates some functionality of the newsletter, but the e-mail addresses are stored in the User model). Next is a Forum module, with the same subscribe-to-post functionality. But I want to have the ability to use these modules independently of each other, i.e. an application with the newsletter alone, the newsletter with the blog, or a combination of two or three modules at once.

    This is pretty common; it's the same story with a search feature, for example. I want to have a search module, with options to search in all data, blog data or forum data, if available.

    Is there any design pattern for this? Do I have to add some moduleExists($moduleName), or provide some interfaces or abstract classes, some base controller pattern, similar for each module?

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the memory is going down and down. On the small instance, we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like updates to apply, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594MB of memory free out of the 1.7GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a possibility that there is a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit the memory usage, and I have already performed those steps. The memory is not being used up as quickly, but the memory is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they would not be receiving as much traffic.

    Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until the memory reaches 0%, or is this OS dependent?
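    A reading of that free -m output, using only the figures shown above: of the 1380MB reported as "used", 158MB is buffers and 773MB is page cache, so applications themselves hold about 447MB (the "-/+ buffers/cache" row), leaving roughly 1209MB effectively available. Linux deliberately fills otherwise idle RAM with cache and only drops it when applications ask for memory, so the plain "free" column trending towards zero is normal; the row worth alerting on in Nagios is the buffers/cache-adjusted one, and swap usage (currently 0) is the clearer leak signal.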

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure of something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf

    On top of this there is also a testing server which mirrors this setup. Once changes are fully tested, they are rsynced down to the live server. All the WordPress installs and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up".

    So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less-technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.

    Read the article

  • Apache Conf files: If Hostname=="Web4" Then Use This IP for VirtualHost

    - by jroberts
    I am getting ready to do a "spring cleaning" on the web heads at work. I would really like to put my config files into a git repo and use the same config files for all the web heads. This is a problem for the sites that are on port 443. Is there any way to do an if statement or something like that inside the conf file itself? I am trying to avoid writing a script to generate the conf files. Any ideas are greatly appreciated!! Thank you! Jeff
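    A hedged sketch of one way to do this without a generator script (the define names and addresses are invented; the Define directive and ${VAR} substitution in config files require httpd 2.4, while on 2.2 the same shape works with -D startup flags and <IfDefine> around whole vhost blocks, or with mod_macro): each host passes its own -D flag at startup, e.g. OPTIONS="-DWEB4" in /etc/sysconfig/httpd, and one shared conf picks the right address.

        <IfDefine WEB4>
            Define SSL_ADDR 10.0.0.4
        </IfDefine>
        <IfDefine WEB5>
            Define SSL_ADDR 10.0.0.5
        </IfDefine>

        <VirtualHost ${SSL_ADDR}:443>
            ServerName www.example.com
            # ... SSL config shared by every web head ...
        </VirtualHost>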

    Read the article

  • What is the best way to configure the number of workers in Apache?

    - by rbm
    My site receives a lot of traffic for 2 hours during the day (2000 hits per minute). The rest of the day it receives less traffic (around 500 hits per minute). I have been experimenting with the MaxClients and MaxSpareServers values, but I still get some downtime during peak hours. How can I calculate the best values for my configuration based on the amount of RAM that I have? Each process is around 36-40M of memory.

                     total       used       free     shared    buffers     cached
        Mem:          3096        793       2302          0          0          0
        -/+ buffers/cache:        793       2302
        Swap:            0          0          0

    Values that I am using now:

        <IfModule prefork.c>
            StartServers         10
            MinSpareServers      22
            MaxSpareServers      60
            ServerLimit          90
            MaxClients           90
            MaxRequestsPerChild 400
        </IfModule>
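    A rough sizing rule using only the figures above (an approximation, not a tuned answer): MaxClients is about the RAM you can spare for Apache divided by the average size of one child. Taking the ~2300MB shown as available and the 36-40MB per-process figure, 2300 / 40 gives roughly 57, so something in the 55-60 range leaves headroom for everything else on the box, whereas the current 90 slots at 40MB each would want about 3.6GB if they all filled during the peak, more than the 3GB total.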

    Read the article

  • Sudo-ing Apache for a particular vhost

    - by djechelon
    I want to manage several SVN repositories for a particular vhost on my system. I want those to be owned by a particular user in the system and not by wwwrun/www. The other websites hosted by Apache should still be served by the unprivileged wwwrun/www user. I'm using the worker MPM. How can I tell Apache that every request for a specific vhost must be served impersonating a specific user, like I would do in IIS? (This will also come in useful when running FUDforum.)
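    A hedged pointer rather than a drop-in fix: stock Apache with the worker MPM has no per-vhost user, but the third-party ITK MPM (mpm-itk) adds an AssignUserID directive that does exactly this kind of IIS-style per-site impersonation, at the cost of replacing the worker MPM for the whole server. A sketch, with the names invented for illustration:

        <VirtualHost *:80>
            ServerName svn.example.com
            DocumentRoot /srv/svn-www
            AssignUserID svnuser svngroup
        </VirtualHost>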

    Read the article

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button, and if the script is ready to be run, it'll run. The users may select arguments for the script (heavily filtered as inputs), but for simplicity we'll say they just have the button to press.

    As a simple test, I've created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell this isn't allowed. I read that wrapping it with a compiled CGI program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked... the script ran as if root ran it.

    My main question is: is this normal (the need for the binary wrapper to get the job done), and is this a secure way to do it? It's not world-facing, but still, I'd like to learn best practices.

    More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice. I'd think you'd trust a file that was human-readable over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?
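    On the "is this normal" part: the Linux kernel ignores the setuid bit on interpreted scripts (only binaries have it honored), which is why the bare Python CGI kept the web server's identity while the compiled wrapper worked. A commonly used alternative to a hand-rolled setuid wrapper is a sudoers rule restricted to the one command, so the CGI stays an ordinary script; a hedged sketch, with the usernames and paths invented:

        # /etc/sudoers.d/webkick  (edit with: visudo -f /etc/sudoers.d/webkick)
        www-data ALL=(root) NOPASSWD: /usr/local/sbin/periodic-task.sh

        # The cgi-bin script then simply runs:
        #   subprocess.call(["sudo", "/usr/local/sbin/periodic-task.sh"])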

    Read the article

  • apache doesn't send mp3 header even when using the direct address to the file

    - by user1728307
    Apache doesn't send me the mp3 header even when I use the direct address to the file. I mean, I can play the file with Flash audio players on my web pages, but when I try to download it from the direct address on my server, I get "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset", or sometimes it gives me a file with an mp3 extension that has a file size of just 13 bytes, and when I open that file in gedit/notepad there is just:

        <html></html>

    I don't have any problem with PHP files and images, but mp3 files never get sent to the browser for download or playback. I added this line to httpd.conf:

        AddType audio/mpeg .mp3

    but there is no difference!! Thanks in advance.
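    A small diagnostic sketch using standard curl options (the URLs are placeholders): comparing the raw response headers for an mp3 against an image that does work shows whether Apache itself, or something in front of it (a security or download-protection filter, a proxy), is cutting the connection or substituting that 13-byte HTML body.

        curl -v -o /dev/null http://example.com/audio/test.mp3
        curl -v -o /dev/null http://example.com/images/test.jpg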

    Read the article

  • var/www/ permissions

    - by mk_89
    I purchased a server today and I am almost through configuring it. I have managed to install MySQL and have enabled a firewall which allows access to ports 80, 22 and 443. I am trying to test a simple PHP file to see whether all is well, but I get a 404 Not Found error. I am certain that this file exists; it was created using vi, and I have confirmed it is there using FileZilla. What am I missing? Is there another step that I must take to allow a simple PHP file to work?
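    A quick-check sketch from the server itself (the paths assume a stock Debian or RHEL layout, and the filename is hypothetical): a 404 for a file that exists usually means it was put somewhere other than the DocumentRoot Apache is actually serving, and testing with curl on the box rules the firewall and DNS out of the picture.

        grep -Ri documentroot /etc/apache2 /etc/httpd 2>/dev/null
        ls -l /var/www/html/test.php /var/www/test.php 2>/dev/null
        curl -i http://localhost/test.php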

    Read the article
