Search Results

Search found 2291 results on 92 pages for 'webserver'.

Page 62/92 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Apache server doesn't create directory or file under www-data user [duplicate]

    - by Harkonnen
    This question already has an answer here: What permissions should my website files/folders have on a Linux webserver? (4 answers) I'm very new to Apache. I installed Apache 2.4 on my Arch server, where I also installed newznab (a newsgroup indexer). I have noticed that all the files newznab needs to create are created under my login user, not under Apache's default user (www-data). I read here that it's bad security practice to allow www-data to write files, and I agree. But as an Apache newbie, I would like to know where the user allowed to write files can be configured (in httpd.conf, I suppose?), because I want another account to be allowed to write files instead of my main account.
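
    For reference, a minimal sketch of where that identity is set (the stock mod_unixd directives in httpd.conf; the "newznab" account is a hypothetical example):

        # httpd.conf: Apache's worker processes, and any files they create, use this identity
        User newznab
        Group newznab

    Files created by newznab's own command-line scripts follow whichever user runs them, not these directives, which is worth checking before changing Apache's user.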

    Read the article

  • Running a bash script from an HTML link or button

    - by Andrew
    I have a webserver that's hosting lots of images. I want the client to be able to press a button or a link, which will run a bash script, which will create a video based on all those pictures. The script I'm trying to run is this:

        #!/bin/bash
        # cd to the directory
        cd /var/www/gallery
        # use ffmpeg to make the video
        ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 video.mp4
        # take the first file in the directory and name it video.mp4.jpg (for the thumbnail)
        cp `ls | sort -n | head -1` video.mp4.jpg

    The script is located on the server, so when the client clicks the link or button, the script should run and the video should be created. I've tried both solutions listed here but I can't seem to get either to work. I have PHP installed on my server.
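
    One way to wire a link to the script, sketched below, is plain Apache CGI (this assumes mod_cgi is enabled and a cgi-bin is configured; the location and the -y overwrite flag are additions to the original script):

        #!/bin/bash
        # /usr/lib/cgi-bin/makevideo.cgi (hypothetical location): CGI output needs a header first
        echo "Content-Type: text/plain"
        echo ""
        cd /var/www/gallery || exit 1
        # -y overwrites an existing video.mp4 on re-runs
        ffmpeg -y -pattern_type glob -i 'img-*jpg' -r 1 video.mp4 2>&1
        cp "$(ls | sort -n | head -1)" video.mp4.jpg
        echo "done"

    The HTML side is then just a link to /cgi-bin/makevideo.cgi, and the web server user needs write access to /var/www/gallery.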

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by Grant Tailor
    I need some help setting up, installing, and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage, and unmetered bandwidth at 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP stack guide on Linode (I will also need MySQL and PHP): http://library.linode.com/lemp-guides/centos-5. Basically, what I want is to be able to host multiple websites on this webserver after everything is set up. I am used to the DirectAdmin control panel on another server, and I want things set up so I can host multiple websites, mostly WordPress and Drupal themes, say ten websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)?
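
    The usual shape of the multi-site setup is one small server block per site, pulled in by an include; a sketch (all names and paths hypothetical, PHP assumed to be listening as FastCGI on port 9000):

        # /etc/nginx/nginx.conf, inside the http { } block: one line loads every site file
        include /etc/nginx/conf.d/*.conf;

        # /etc/nginx/conf.d/site1.conf: one of these per site
        server {
            listen 80;
            server_name site1.example.com;
            root /var/www/site1.example.com;
            index index.php index.html;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    Ten WordPress/Drupal sites are then ten such files, which is roughly the part DirectAdmin automates for you.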

    Read the article

  • CRL checking problem on Windows 2003

    - by Tim Mahy
    Hi all, we have a CRL that is valid for 24 hours and has its next update in 12 hours. The CRL is valid from 12:12 AM to 12:12 AM, and from 12:12 PM to 12:12 PM. In the logs of the webserver hosting the CRL, we see that one of our servers does not always fetch the CRL at night; in most cases the server that missed the CRL starts serving IIS 403.16 errors at 12:13 PM. Is our theory correct: when a Windows server misses fetching the CRL at its nextUpdate time, but the current CRL is still valid, the fetch is not retried? That would lead to a situation where, when the CRL expires, there is no overlap, and there is a short window of 403.16 errors in IIS, since the CRL is not trusted and so all certificates are marked as unsafe. Greetings, Tim
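
    For checking whether a stale client-side CRL cache is involved, a sketch of the usual certutil commands on the affected server (certutil ships with Windows Server 2003; run from a command prompt):

        rem list, then clear, the cached CRLs
        certutil -urlcache CRL
        certutil -urlcache CRL delete
        rem force the chain engine to resync its cached revocation data
        certutil -setreg chain\ChainCacheResyncFiletime @now

    If the 403.16 window disappears after a manual resync, the no-retry theory gains weight.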

    Read the article

  • How can I limit CloudFront downloads

    - by Alex Crouzen
    I'm looking to use Amazon's CloudFront to host some content in the near future. Currently I'm keeping it very simple: I just upload my content to S3 and then make a distribution available via CloudFront. However, because I have a limited budget, I'd like to be able to limit the number of downloads or the money spent on bandwidth. As far as I can see, I can't set any quotas or budgets like you can in Google's App Engine, so I'm looking for another way of doing this. Has anyone had any experience with this? One approach I'm considering is placing a webserver that issues redirects in between, but that rather defeats the simplicity of CloudFront for me.

    Read the article

  • Blocking ports on the public IP assigned to lo interface in GNU/Linux

    - by nixnotwin
    I have set up my Ubuntu server as a router and webserver by following the answer given here. My ISP-facing interface eth0 has a private 172.16.x.x/30 IP, and my lo interface has a public IP, as described in the answer to the question linked above. The setup is working well. The only snag I have hit is that I could not find a way to block the ports exposed by the public IP on the lo interface. I tried iptables -A INPUT -i eth0 -j DROP, but then my server lost connectivity to the public network (the internet); I could not ping any public IPs. What I want is a way to block the ports exposed by the public IP on the lo interface, and I also need iptables rules that can expose ports like 80, or the OpenVPN port, to the public network.
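
    A sketch of the rule shape that fits this setup, matching on the destination address instead of the interface (203.0.113.10 is a hypothetical stand-in for the public IP; the OpenVPN port is assumed to be the default 1194):

        # expose only the intended services on the public IP
        iptables -A INPUT -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -d 203.0.113.10 -p udp --dport 1194 -j ACCEPT
        # drop everything else aimed at the public IP
        iptables -A INPUT -d 203.0.113.10 -j DROP

    Keying on -d rather than -i eth0 avoids the earlier lockout: the blanket eth0 DROP also discarded the reply packets for the server's own outbound connections, which is why the pings died.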

    Read the article

  • What permissions / ownership to set on PHP Sessions Folder when running FastCGI / PHP-FPM (as user "nobody")?

    - by Professor Frink
    I'm having trouble getting a number of scripts running because PHP-FPM can't write to my session folder:

        2009/10/01 23:54:07 [error] 17830#0: *24 FastCGI sent in stderr: "PHP Warning: Unknown:
        open(/var/lib/php/session/sess_cskfq4godj4ka2a637i5lq41o5, O_RDWR) failed: Permission denied (13)
        in Unknown on line 0
        PHP Warning: Unknown: Failed to write session data (files). Please verify that the current
        setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0"
        while reading upstream

    Obviously this is a permission issue: my session folder's owner/group is the webserver's user, nginx, but PHP-FPM runs as nobody, and adding nobody to the nginx group is not so trivial. A temporary solution is to set the permissions of /var/lib/php/session to 777; I have a feeling that's not best practice, though. What is the best practice when you need to give a daemon write access to a folder, but it is running as nobody?
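
    Two common shapes of the fix, sketched on the assumption that the PHP-FPM pool configuration is editable:

        # Option 1: run the pool as the web server's user instead of nobody
        #   (in php-fpm.conf / the pool file)
        #   user  = nginx
        #   group = nginx

        # Option 2: keep nobody, but hand it the session directory outright
        chown -R nobody:nobody /var/lib/php/session
        chmod 700 /var/lib/php/session

    Since only PHP-FPM writes session files (nginx never touches them), either option makes the 777 fallback unnecessary.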

    Read the article

  • Server load is spiking from 2 to 250

    - by Hakzona
    Hello, I'm using WordPress 2.9 on an 8GB webserver from HostGator. I've been fighting this problem for a long time but still cannot find the solution. PHP was switched to run as an Apache module (PHP 5 as DSO), with Apache suEXEC, and eAccelerator was installed, but this configuration started producing a huge load on the server. The load spikes from 1 to 250 (4 CPUs) and the server stops; after a period of time it comes back, and then stops again after about 10 minutes. It started happening when the HostGator support team installed eAccelerator on the server. What can cause this problem, and how can I fix it?

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like some clarification about the correct way to create limited users with access to my VPS, which I use as a webserver with nginx. I usually do NOT install FTP and allow access via SFTP only; is that OK for every setup? This is what I usually do to create a limited user called "admin" that should be able to access the folder with the website data via SFTP:

        mkdir -p /var/www/mysite.com/
        adduser admin
        adduser admin www-data
        chown -R root:root /var/www
        chmod -R 755 /var/www
        chmod -R 755 /var/www/mysite.com
        chown -R admin:www-data /var/www/mysite.com/

    It seems not to be the correct way; I always have permission problems when I upload files (for example with WordPress in general). I would like to create a user that works exactly like the ones hosting providers give their clients when they buy a hosting service (that is FTP; I would prefer SFTP access). It is for personal use, but I think a limited user is a lot safer to work with over SFTP than root.
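
    A sketch of the sshd_config side of a provider-style account (OpenSSH's internal-sftp assumed; note that the chroot directory itself must be root-owned and not group-writable):

        Subsystem sftp internal-sftp

        Match User admin
            ChrootDirectory /var/www/mysite.com
            ForceCommand internal-sftp
            AllowTcpForwarding no

    With this, "admin" lands inside the site directory and cannot wander up the tree; uploads then only need the content subdirectories to be writable by admin's group.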

    Read the article

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP; however, I would like to bypass the proxy for HTTPS connections. I want it so that when someone goes to https://ip1 (the nginx server), nginx is bypassed and all traffic is forwarded to https://ip2 (the webserver). I do not need nginx to do this for every SSL website, just one particular website. So: client to https://ip1, to https://ip2, back to https://ip1, back to the client. I just want nginx not to intercept the connection, but to forward it on and, on the return trip, forward it back to the client. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how, or whether I will also need to tell nginx to ignore SSL. Can someone help me please? This has me stuck.
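
    A sketch of the NAT approach (both addresses are hypothetical stand-ins: ip1 = 203.0.113.1 on the nginx box, ip2 = 203.0.113.2 on the webserver; run on the nginx box):

        # send any HTTPS traffic arriving for ip1 straight to ip2
        iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 443 \
                 -j DNAT --to-destination 203.0.113.2:443
        # rewrite the source so replies come back through this box
        iptables -t nat -A POSTROUTING -d 203.0.113.2 -p tcp --dport 443 -j MASQUERADE
        # kernel-side forwarding must be on
        sysctl -w net.ipv4.ip_forward=1

    Because the redirect happens in the kernel before local delivery, nginx never sees the connection, so it needs no SSL configuration for that site at all.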

    Read the article

  • Making Python scripts more user friendly?

    - by Michael Morisy
    I have a bunch of Python scripts I've put together that cut down on busy work, but I'd like to be able to share them in an easier-to-use format for internal use. The scripts aren't accessing anything local, just open APIs across a couple of web apps. Ideally: a) users wouldn't have to have Python installed; b) they can be using Windows when running them; c) it's simple enough that they can just click something and it will work. I've tried some of the Windows Python executable builders, but none have worked well, and I was considering just uploading the scripts to a webserver and putting some basic password protection around them. Any suggestions for sharing scripts?
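
    One of the executable-builder routes, sketched with PyInstaller (an assumption, not the only builder; the script name is hypothetical):

        pip install pyinstaller
        pyinstaller --onefile busywork.py

    The result, dist\busywork.exe, is a single file users can double-click with no Python install on their machines. The webserver-plus-password route also satisfies all three requirements and sidesteps builder quirks entirely, at the cost of hosting it.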

    Read the article

  • Shibboleth + IIS and Pound Reverse Proxy

    - by boburob
    Having a bit of a problem getting Shibboleth (SSO) working with ADFS and Pound. The main problem seems to be this: the website address will be https://website.domain.com; Pound terminates the SSL and forwards the traffic to the webserver on a different port (http://server.domain.com:8888). I have set up Shibboleth to protect the address http://server.domain.com:8888, which allows me to retrieve metadata, and it all seems to be working fine. However, ADFS is configured to protect the HTTPS website, so when Shibboleth attempts to receive information from ADFS I get nothing except the following error:

        A token request was received for a relying party identified by the key
        'https://msstagrevproxy.cwpintranet.com/shibboleth', but the request could not be
        fulfilled because the key does not identify any known relying party trust.
        Key: https://msstagrevproxy.cwpintranet.com/shibboleth

    I am not really sure how to work around this, as to retrieve the metadata from Shibboleth I have to use the HTTPS address, but that address does not actually exist in Shibboleth or IIS. Has anyone had any experience with this before, or with any other SSO that works behind a reverse proxy?
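
    The error is ADFS reporting that the identifier the SP presented is not registered as a relying-party trust, so both sides have to agree on one name; a sketch of the Shibboleth half (shibboleth2.xml; the entityID value is taken from the error message above):

        <ApplicationDefaults entityID="https://msstagrevproxy.cwpintranet.com/shibboleth"
                             REMOTE_USER="eppn">

    Either that entityID gets registered in ADFS as a relying-party trust identifier, or the entityID is changed to the public https://website.domain.com name and ADFS is pointed at that. Behind Pound the SP usually also needs its handler settings adjusted so the URLs it generates use the public HTTPS name rather than the internal port-8888 one.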

    Read the article

  • nginx configuration for URL URI paths

    - by hachiari
    I want to switch my webserver from Apache to nginx; however, I'm having difficulty converting my current .htaccess rules to nginx configuration. The conditions I need: (1) I want everything to work as it does under Apache, serving files such as JS, CSS, JPG, PNG, etc.; (2) I am currently using the CodeIgniter PHP framework, which routes requests through its URI system. My .htaccess configuration for the CodeIgniter URIs is:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]
        RewriteCond %{HTTP_HOST} ^www.domain.tld [NC]
        RewriteRule ^(.*)$ hxtp://domain.tld/$1 [L,R=301]

    I am also using Minify to compress my CSS and JS files, so I call my CSS and JS like: hxtp://domain.tld/?=css and hxtp://domain.tld/?=js. I tried some configurations from the net, but I could only solve problem 2. Thank you.
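
    A sketch of a direct translation (root path and FastCGI address are hypothetical; this assumes CodeIgniter's uri_protocol is set to REQUEST_URI so the query-string fallback works):

        # the www -> non-www redirect
        server {
            listen 80;
            server_name www.domain.tld;
            rewrite ^ http://domain.tld$request_uri? permanent;
        }

        server {
            listen 80;
            server_name domain.tld;
            root /var/www/domain.tld;
            index index.php;

            # existing files (js/css/images) are served as-is; misses go to CodeIgniter
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    The try_files line covers both requirements at once, and Minify's /?=css and /?=js URLs keep working because they are just query strings on /, which falls through to index.php.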

    Read the article

  • is there any Open Source solution for Failover of incoming Traffic?

    - by sahil
    Hi, we have two ISPs, and both ISPs' IPs are NATed to the same webserver IP. I want failover for incoming traffic; is there any open source solution? Can I do it by running two name servers, one on each ISP? I am not sure, but to my knowledge, primary and secondary name servers reply in round-robin fashion while both are live, and only once one name server becomes unreachable will the other reply alone. So if I am right, I think I can get incoming failover by running two name servers in my office. Waiting for your valuable response. Thanking you, Sahil

    Read the article

  • Dedicated Server emails ending up in Junk

    - by Pasta
    I have a dedicated server that works fine. Recently I added a new domain with a new dedicated IP address. Emails from the webserver get sent out from the primary IP address, which is different from the IP address of the domain. This causes the emails to end up in Junk folders. Is there anything I can do, such as changing the SMTP server to the new IP address or configuring sendmail? I need this for my PHP server on CentOS.
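
    A sketch of the sendmail-side knob for pinning the outbound IP (in sendmail.mc; the address is a hypothetical stand-in for the domain's dedicated IP, and the .mc file must be rebuilt afterwards):

        dnl bind outbound SMTP connections to the new dedicated IP
        CLIENT_OPTIONS(`Family=inet, Address=203.0.113.25')dnl

    On CentOS the rebuild-and-restart is typically:

        make -C /etc/mail
        service sendmail restart

    Matching reverse DNS and an SPF record for that IP matter at least as much as the source address for staying out of Junk folders.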

    Read the article

  • Executed PHP files are stale until "touched" (Symlinked NFS mount as web root)

    - by mmattax
    We have a PHP application served by 3 web servers (running nginx and Apache). The web servers' document roots are symlinked directories that point to an NFS mount. For example, web01 has an NFS mount at /data/webapp, which is symlinked to /home/webapp, and Apache serves content from /home/webapp/www. We also use APC as our PHP opcode cache. When we deploy code, we SCP an archive file to the NFS server and extract it. Since upgrading to RedHat 6, when we deploy our code the webservers execute "stale" PHP files until touch is run on them. We thought that APC might be causing the problem, but the issue persists even after clearing the opcode cache. Any ideas on how to diagnose why the stale PHP code is being executed?
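
    Given that touch already fixes it, a deploy-step sketch worth trying (the path comes from the question; the idea is to refresh mtimes so both APC's stat check and the NFS attribute cache treat the extracted files as new):

        # run on the NFS server right after extracting the archive
        find /data/webapp -name '*.php' -exec touch {} +

    On the diagnosis side, comparing stat output for one stale file on the NFS server and on a web server shows whether the NFS attribute cache or the opcode cache is holding the old metadata; shortening the attribute cache with the actimeo= mount option is the NFS-side variant of the same fix.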

    Read the article

  • Improving server security [closed]

    - by Vicenç Gascó
    I've been developing webapps for a while, and I always had a sysadmin who made the environment perfect for running my apps with no worries. But now I am starting a project on my own, and I need to set up a server while knowing next to nothing about it. All I need is Linux with a webserver (I usually used Apache), PHP, and MySQL. I'll also need SSH, SSL to serve https://, and FTP to transfer files. I know how to install almost everything on Ubuntu Server (I need advice about SSL), but I am concerned about security: firewall, open/closed ports, PHP security, etc. Where can I find a good guide covering these topics? Everything else on the server I don't need, and I want to know how to remove it to avoid resource consumption. Final note: I'll be running the webapp on Amazon EC2 or Rackspace Cloud servers. Thanks in advance!!
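
    On the firewall piece specifically, a minimal sketch with ufw, which ships with Ubuntu Server (the three ports assume exactly the stack described above):

        ufw default deny incoming
        ufw allow 22/tcp     # SSH
        ufw allow 80/tcp     # HTTP
        ufw allow 443/tcp    # HTTPS
        ufw enable

    Default-deny plus explicit allows is most of the "open/closed ports" story; FTP is worth dropping in favour of SFTP, which rides on the SSH port already open here.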

    Read the article

  • MySQL replicate multiple places

    - by Frederik Nielsen
    Very tricky to find a good title for this question, but here goes: I have a few development machines on which I develop my PHP applications, testing against a local webserver. This works out pretty well on each machine. However, I would like to replicate the DB between my machines and a central location. So, to sum up: DEV1 - CENTRAL, DEV2 - CENTRAL, DEV3 - CENTRAL, and in the other direction: CENTRAL - DEV1, CENTRAL - DEV2, CENTRAL - DEV3. I hope this makes sense, as I cannot find an easy way to describe it. Basically, it is two-way replication, where all 4 databases contain the same data and each of them can be updated locally and then pushed out to the others. Is this actually doable? All my dev machines run Windows 7, and my central DB server runs CentOS 6.
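
    MySQL can do this with circular (multi-master) replication; a sketch of the per-server settings it hinges on (my.cnf on CentOS, my.ini on the Windows machines; four servers assumed):

        [mysqld]
        server-id                = 1   # unique on each of the four machines
        log-bin                  = mysql-bin
        auto_increment_increment = 4   # total number of masters in the ring
        auto_increment_offset    = 1   # 1 through 4, unique per machine; avoids key collisions

    Each server then runs CHANGE MASTER TO pointing at the next machine in the ring. The known weak spot is conflicting edits to the same row on two machines while they are apart; the ring cannot reconcile those on its own.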

    Read the article

  • FreeBSD Nginx installation error

    - by Asaf Nevo
    I have a VPS with the Apache webserver installed. I'm trying to install nginx on it, since my new server will need to handle a large number of simultaneous connections. I followed this install guide and did:

        cd /usr/ports/www/nginx
        make install clean

    However, I get this error:

        adding module in /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e
        ./configure: error: no /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e/config was found
        ===> Script "configure" failed unexpectedly.

    I'm pretty new to FreeBSD, and I am used to controlling my server with DirectAdmin. What shall I do next?
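
    The failure is in the optional DAV extension module the port was told to build, so one sketch of a way past it is to reset the port's saved options and build without that module (the option label is inferred from the module name and may differ by ports-tree version):

        cd /usr/ports/www/nginx
        make rmconfig        # discard the previously saved options
        make config          # in the dialog, uncheck the DAV_EXT option
        make clean install clean

    If the module is actually needed, updating the ports tree first (e.g. with portsnap fetch update) often picks up a fixed revision of the same port.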

    Read the article

  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys, I'm trying to install the latest Bugzilla version on my Ubuntu server (I want to use a subdomain like bugs.domain.com). I have already installed all the necessary Perl modules, and check_modules.pl shows no errors. But when I run the testserver.pl script, I get the following:

        TEST-OK Webserver is running under group id in $webservergroup
        TEST-FAILED Fetch of images/padlock.png failed

    I'm also unable to open ANY file within the bugzilla folder from a browser; I always get a 404 error. The bugzilla folder and all the files it contains have apache set as the owner. I tried adding the Apache configuration from the installation guide to the httpd.include file of the domain, and to the vhosts.conf file of the subdomain as well. I don't know what to do; playing with Plesk's suexecgroup doesn't bring a solution. I hope you can help me! Thanks in advance!
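
    For reference, a sketch of the Apache directives Bugzilla's installation guide asks for (the path is a hypothetical Plesk-style location; under Plesk the snippet belongs in the subdomain's vhost.conf, followed by a web server reconfigure and restart):

        <Directory /var/www/vhosts/domain.com/subdomains/bugs/httpdocs>
            AddHandler cgi-script .cgi
            Options +ExecCGI +FollowSymLinks
            DirectoryIndex index.cgi index.html
            AllowOverride Limit FileInfo Indexes Options
        </Directory>

    A flat 404 on every file, including plain images, usually means the vhost's DocumentRoot does not point at the Bugzilla tree at all, which is worth verifying before touching the CGI settings.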

    Read the article

  • Apache security for multi-user development web server

    - by mrmartinblue
    I've been searching and reading documents all morning, and I understand that I need to use some combination of chown and probably 'jailing' to securely give programmers access to directories on my CentOS webserver. Here's the situation: I have an Apache web server with a number of virtual sites located in /var/www/site1, /var/www/site2, etc. I have different developers who need full access, both SSH and vsFTPd, to only the site they are working on. What is the best way to create and maintain security in this scenario? My thought would be to create a new user for each coder, jail that user to the website directory they are allowed to work in, add their user to a group, and set the webroot's owner to that group. Any thoughts? Good, bad, ugly? Thanks!
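
    A sketch of the per-site account plumbing the question describes (all names hypothetical):

        groupadd site1
        useradd -m -G site1 alice            # alice only works on site1
        chown -R root:site1 /var/www/site1
        chmod -R 2775 /var/www/site1         # setgid keeps new files in the site1 group

    The jailing half can then come from sshd's ChrootDirectory in a per-user Match block (straightforward for SFTP via internal-sftp, more work for full shells) and vsftpd's chroot_local_user=YES; the setgid bit is what lets Apache and every coder in the group read each other's files without resorting to 777.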

    Read the article

  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache webserver on my Windows machine. It is not generally a public server (most of the little traffic it gets comes from the machine itself, and most of the public traffic comes from crawlers). Basically, it is mostly just for use as a test-bed and development system. I have read that running PHP as FastCGI is better (i.e., faster and more stable) than running it as an Apache module. However, I really don't like the idea of multiple php.exe processes (I don't like that Apache has two processes, and I'm not even too thrilled with Chromium's multi-process model). So I'm wondering if it would be worthwhile to switch PHP to FastCGI for this scenario, and if it is, how I would configure it. Pretty much all the information I have seen has been either for non-Windows setups or for IIS. As I said, I'm running Windows + Apache. Thanks a lot.
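
    A sketch of the Windows+Apache side using mod_fcgid (the module and PHP paths are assumptions; mod_fcgid is a separate download for Windows Apache):

        LoadModule fcgid_module modules/mod_fcgid.so

        AddHandler fcgid-script .php
        FcgidWrapper "C:/php/php-cgi.exe" .php
        FcgidMaxProcesses 3    # hard cap on how many php-cgi.exe workers can exist

    The FcgidMaxProcesses cap speaks to the many-processes concern directly: on a low-traffic test box the pool usually idles at a single worker, spawning more only under concurrent load.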

    Read the article

  • Harddrive in the freezer ever work for you?

    - by Stefan Thyberg
    Once upon a time, the little 10GB drive in my webserver failed, and of course I had no backup, which taught me to set up an automatic backup job immediately afterwards. Anyhow, this drive refused to start, and as a last-ditch effort I put it in a plastic bag and left it in the freezer overnight, since I had heard somewhere that this might work and I really didn't have any other options. The next day I took it out, immediately plugged it in outside the case, and lo and behold, the drive worked long enough for me to copy my data off it. Have you ever had a similar experience with this method?

    Read the article

  • PHP on IIS7 not showing pages

    - by Jeff
    I have a PHP website on a Windows 7 machine I'm working with, and it cannot be viewed in any browser: IE, Chrome, or Firefox. When navigating to the root of the website (the default index.php), the browser reports that it cannot find the address. It is not a 404 error from the webserver; it is as if the browser cannot resolve the name. Other websites in the same default web application, also PHP, work perfectly. I've aligned all the folder permissions and everything else, but this has me stumped. I even went as far as creating a new folder with a test phpinfo() page, and it worked; then I copied this website's content into the new folder, and it cannot find the index.php page. I've checked all the settings I know of and can't seem to find what I'm missing. Has anyone else encountered this issue? Do you remember the fix for it?

    Read the article

  • Unix: Sync directory with FTP or SFTP directory

    - by Svish
    I have a website on my local computer running Mac OS X. I am wondering if there is any built-in command that I can run in the Terminal that will upload that website to my webserver, either through FTP or, if possible, SFTP. Installing new commands through MacPorts is also a possibility. A big bonus would be if it only uploaded the files that need updating, not everything else. It would also be nice if I could tell it, once in a while, to delete the files on the server that no longer exist locally. Any good tips?
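
    rsync over SSH ticks every box here and ships with OS X; a sketch (host and paths hypothetical):

        # -a preserves metadata, -v is verbose, -z compresses; only changed files are sent
        rsync -avz ~/Sites/mysite/ user@example.com:/var/www/mysite/
        # the occasional cleanup pass: --delete prunes remote files that are gone locally
        rsync -avz --delete ~/Sites/mysite/ user@example.com:/var/www/mysite/

    Adding --dry-run to either command previews what would change without touching the server, which is worth doing before the first --delete run.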

    Read the article
