Search Results

Search found 24334 results on 974 pages for 'directory loop'.

Page 853/974 | < Previous Page | 849 850 851 852 853 854 855 856 857 858 859 860  | Next Page >

  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with Mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of Mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features, and the installation works fine.
    Now I would like to publish an updated version of Mercurial. I create a new Published package, and under 'Upgrades' I configure it to replace (upgrade also doesn't work) the old version and mark the upgrade as 'Required'. The old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message saying something like 'Removing managed software Mercurial ...', but there is no message about installation of the upgrade. If I look in Programs and Features, I can see the new version of Mercurial listed. However, the actual Mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI, but didn't actually install anything after removing the old version.
    As I mentioned, this isn't specific to Mercurial. I've tried using other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages in Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?

    Read the article

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that, supposedly, I should be able to do something like
        cmd /c "big blob of text >C:\output.txt 2>c:\errors.txt"
    to direct the output to output.txt and the errors to errors.txt, but when I try to run the command I get the following error:
        cmd.exe : The filename, directory name, or volume label syntax is incorrect.
        At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4
        + cmd <<<< /c $command
            + CategoryInfo          : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError
    Furthermore, if I run the command without everything starting at "2", it executes correctly and output.txt catches the right output. I looked at "Redirect stderr to variable in PowerShell", but it wasn't helpful because the answer to that question suggests capturing the entire output and filtering it in memory; in my case I am backing up every database on a computer, and since the databases won't fit in my laptop's RAM, I cannot use that solution. I also found tantalizing suggestions about using $err = @(command goes here), but with no information on what to do other than simply inserting that line of text. I tried the search function on Server Fault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect to the server via the FTP protocol and download the directory in question. I have searched around on the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following:
        ftp username is: [email protected]
        ftp host is: ftp.s12345.gridserver.com
        ftp password is: somepassword
    I have tried to write the command in the following ways:
        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/
    When I try the first way I get this error: Bad port number. When I try the second way I get a little further, but then this error:
        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.
    What could I be doing wrong?
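    A hedged sketch worth trying: if the FTP username itself contains an '@' (which the "Bad port number" error suggests), embedding it in the URL confuses wget's user:pass@host parsing, so the credentials can be passed as options instead (username and path below are placeholders for your own):
        wget -r --ftp-user='your-ftp-user@yourdomain.com' \
                --ftp-password='somepassword' \
                ftp://ftp.s12345.gridserver.com/path/to/desired/folder/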

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access; its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet in a virtual server blade setup. It isn't the network slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again.
    The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:
        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52
    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
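    One diagnostic sketch, assuming the default autofs options are the variable you want to isolate: mount the same export by hand with a longer attribute cache and compare behaviour (server name and paths are placeholders; actimeo only lengthens attribute revalidation, it does not directly change how the page cache treats mmap'd data):
        sudo mkdir -p /mnt/nfstest
        sudo mount -t nfs -o rw,hard,actimeo=600,rsize=131072,wsize=131072 \
            nfsserver:/export/home/user /mnt/nfstest
        # run the mmap() workload against /mnt/nfstest and compare with the autofs mount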

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly.
    For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:
        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
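    A quick shell-level sketch for narrowing down where the lookup dies, using the addresses already shown above; if the first query answers and the second times out, the forward zone on the caching server is the place to look (many setups also add forward only; inside that zone so BIND doesn't fall back to the root hints for a .local name):
        dig @192.168.1.21 dc1.mydomain.local     # ask the domain controller directly
        dig @192.168.0.14 dc1.mydomain.local     # ask the BIND caching server
        dig @192.168.0.14 mydomain.local SOA     # does the forwarded zone answer at all?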

    Read the article

  • SharePoint: authenticating users via forms authentication

    - by sbee
    My problem is the following (SharePoint newbie): I want to change the default zone from being a Windows-authenticated zone to a forms-authenticated zone, thereby forcing the site collection administrator to log in via forms authentication and not Windows. The SharePoint users will be accessing the site internally; my goal is to effectively replace Windows authentication with forms authentication, as my company does not have Active Directory installed.
    So far I have created an ASP.NET application that adds the users to the database; the database was created via the .NET Framework ASP.NET tool (aspnet_regsql). However, when I change the default zone to the AspNetSqlMembershipProvider (Forms) and attempt to add my site collection administrator via Central Administration, I get the error "No exact match found". My inkling is that the people picker is somehow failing to read the users from the database, but research on correcting that has so far proved fruitless. I have made all the relevant changes in the config files of these sites (Central admin site, my test site & Add Users site): the membership provider, connection string, and people picker. I left out the role provider for now as it is optional. Help on this would be highly appreciated...

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has left us stumped. We have our namespace presented as:
        \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]
    We had a problem this morning where a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy.
    However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder would be held. All the folders that are presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'Security' tab. I'd like to do the same on \public but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?

    Read the article

  • Copied a file with WinSCP; only WinSCP can see it

    - by nilbus
    I recently copied a 25.5 GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file. However, no other app (including Explorer) can see the file. What might cause this, and how can I fix it? The details that might or might not matter:
      - WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the file size on the remote host.
      - Neither Explorer, cmd (command prompt with dir C:\), the 7-Zip archive program, nor any other File Open dialog can see the beth.tar.gz file under C:\.
      - I have configured Explorer to show hidden files.
      - I can move the file to other directories using WinSCP.
      - If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and I get this error: "Could not find this item. The item is no longer located in C:\."
      - When I try to transfer the file back to the remote host in a new directory, the transfer starts successfully and transfers data. The transfer had about 30 minutes remaining when I left it for the night; the morning after, I was greeted with a message saying that the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the transfer finished.
      - I'm using an old version of WinSCP - v4.1.8 from 2008.
      - I can view the file properties in WinSCP: Type of file: 7zip (.gz); Location: C:\; Attributes: none (Read-only, Hidden, Archive, or Ready for indexing); Security: SYSTEM, my user, and the Administrators group have full permissions - everything other than "special permissions" is checked under Allow for all 3 users/groups (my user, Administrators, SYSTEM).
    What's going on?!

    Read the article

  • Symlink path can be followed manually, but `cd` returns Permission denied

    - by Ricket
    I am trying to access the directory /usr/software/test/agnostic. There are several symlinks involved in this path. As you can see from the transcript below, I am unable to cd directly to the path, but I can check each step of the way and cd through the symlinked directories until I reach the destination. Why is this? (And how do I fix it?) Ubuntu 12.10, bash.
        > ls /usr/software/test/agnostic
        ls: cannot access /usr/software/test/agnostic: Permission denied
        > cd /usr/software/test
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > pwd -P
        /x/eng/localtest/arch/x86_64-redhat-rhel5
        > ls -al | grep agnostic
        lrwxrwxrwx 1 root root 15 Oct 23 2007 agnostic -> noarch/agnostic
        > ls -al | grep noarch
        ...
        lrwxrwxrwx 1 root root 23 Oct 23 2007 noarch -> /x/eng/localtest/noarch
        > cd noarch
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > ls -al | grep agnostic
        lrwxrwxrwx 1 5808 dip 4 Oct 5 2010 agnostic -> main
        > cd main
        > ls
        (correct output of `ls`)
        > pwd
        /usr/software/test/noarch/main
        > pwd -P
        /x/eng/localtest/noarch/main
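    A small diagnostic sketch: namei (from util-linux) walks every component of a path, following the symlinks, and shows the permissions on each one, which usually pinpoints the directory whose execute (search) bit your user is missing:
        namei -l /usr/software/test/agnostic
        namei -l /x/eng/localtest/noarch/agnostic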

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5, all of which mount the directory holding the hosted websites via NFS (v3). The file server is a Nexenta Storage Appliance using ZFS. The problem is that every NFS client that tries to write to a file over NFS hits this, inside the apache2 process:
        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX
    The process never gets the lock and keeps waiting for it, forever. The effect? All the apache2 processes get used up waiting, so my servers can't process other requests because there are no more processes available. I don't know where to find a solution; to me it looks like it's on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me what would help you most :)
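    One hedged first check, since flock() on an NFSv3 mount is typically serviced through the NLM/NSM sideband protocols: confirm the lock manager and status daemons are actually registered on both the Nexenta box and a client (hostname below is a placeholder):
        rpcinfo -p nexenta-server | grep -E 'nlockmgr|status'
        rpcinfo -p localhost      | grep -E 'nlockmgr|status'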

    Read the article

  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've been unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows Installer will just hang or give a weird error. I noticed there will be many duplicate isbew64.exe processes (around 25) that launch and then just sit there, or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install. One piece of software specifically is the Logitech Harmony software: it gives me an *is_string_not_defined* error, saying "c:\program files (x86)\:\ the filename, directory name, or volume label syntax is incorrect". The other thing I was trying to install was Battlefield: Bad Company 2, which also just hangs and then leaves all the Windows Installer processes running in the background after I quit the install process. Very odd. I've looked into it and googled these issues; it doesn't appear to be any sort of malware issue. I feel like it's related to some kind of corrupted installer component. I've rebooted and deleted the InstallShield folder in Program Files\Common Files, as some places online suggested, but to no avail. I have no idea what to do - any ideas?

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a Standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The resulting certificates made by the Certification Authority's "Issue" command only have a one-year period of validity, and I want a multi-year certificate. At no point in this process is there any way to input this information unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control - which means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer.
    Update: It may be that this question is pointless, since self-signed keys do not appear to work when I try them with Windows 8 as the VPN client. The problem is that the keys created by the technique shown here do not have any certificate revocation (CRL) URLs, so you get the error "The revocation function was unable to check revocation" and the VPN connection fails.

    Read the article

  • Join Domain from VM

    - by Adis
    I have two VMs running on VMware Player, using NAT adapter settings. The host machine for the VMs is on the corporate network. The first VM has a domain controller running, and I can log in on that machine using domain credentials. I named the domain wm.local. When I run ipconfig on this machine:
        IP: 192.168.87.132
        Default gateway: 192.168.87.2
        DNS server: 192.168.87.2
        DHCP server: 192.168.87.254
    The second VM cannot join the domain. When I try it with the domain name WM, I'm prompted for credentials; I enter the Administrator credentials, it waits for some time, and I get the response "The specified domain either does not exist or could not be contacted". If I type wm.local as the domain when trying to join, it does not prompt me to log in but immediately shows "An Active Directory Domain Controller (AD DC) for the domain wm.local could not be contacted." Ipconfig on this machine:
        IP: 192.168.87.134
        Default gateway: 192.168.87.2
        DNS server: 192.168.87.2
        DHCP server: 192.168.87.254
    I can ping the second VM from the first one, and I have disabled the firewalls on both machines. Any ideas? Is there any manual for this?
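    A hedged sketch of the usual first check from the second VM: domain join relies on the DC-locator SRV records, and with both VMs pointing their DNS at the NAT gateway (192.168.87.2) rather than at the domain controller, those records are unlikely to resolve. The queries below use the addresses from the question; if the first answers and the second does not, pointing the second VM's DNS at 192.168.87.132 is the thing to try.
        nslookup -type=SRV _ldap._tcp.dc._msdcs.wm.local 192.168.87.132
        nslookup -type=SRV _ldap._tcp.dc._msdcs.wm.local 192.168.87.2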

    Read the article

  • What are the best Linux permissions to use for my website?

    - by Nic
    This is a Canonical Question about File Permissions on a Linux web server.
    I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/:
        /var/www/contoso.com/
        /var/www/contoso.net/
        /var/www/fabrikam.com/
    The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob; both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible. I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this:
        /var/www/fabrikam.com
            /cache
            /modules
            /styles
            /uploads
            /index.php
    How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
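    A minimal sketch of one common arrangement for the Fabrikam site described above (not the only reasonable answer): a developer owns the files, Apache gets read-only access via its group, and only the directories Apache genuinely needs to write to are group-writable.
        # simpler single-owner case; with several developers you would add a shared
        # group or POSIX ACLs on top of this
        sudo chown -R alice:www-data /var/www/fabrikam.com
        sudo find /var/www/fabrikam.com -type d -exec chmod 750 {} +   # owner rwx, Apache r-x, others nothing
        sudo find /var/www/fabrikam.com -type f -exec chmod 640 {} +   # owner rw,  Apache r
        # Apache needs to write only here
        sudo chmod 770 /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads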

    Read the article

  • Auth user and exec a Node app only with Apache?

    - by Blame
    I couldn't find an answer on the web and I've been trying for days now, so I hope someone with more experience with Apache can help me out. I am writing a web editor, and the user should be able to edit a file that is on the server in a directory the user has access to. The problem I am facing is that I need to authenticate against the system users (shadow/passwd). So the user should be able to log in with a system account, and then the Node app which does all the logic should be started with that user's rights. I hope to get this working without any additional script, using Apache only. I found out two things:
      - I can use mod_auth_pam to authenticate the user.
      - There is a module called suEXEC which can exec the Node app as a specified user.
    The problem is that I have to hard-code which user suEXEC runs as, but I want to decide that when the user logs in. Is there any way to authenticate a user against shadow/passwd and then exec a program with that user's rights? I don't want to run the Node app as root, and each user should only be able to access his own files. Any help would be appreciated! Thanks, Kodak

    Read the article

  • Java in a browser on Ubuntu

    - by Peter
    So I am running an old version of Ubuntu and, all of a sudden, my ability to run Java in a browser has stopped; it no longer executes properly. Someone else set this up for me, but I believe it's running the IcedTea plugin 6 and JRE 6. What would be the best way to troubleshoot this? I suppose I could try installing IcedTea 7 and JRE 7 (Sun or OpenJDK) and see if that works? Also, in general, is installing IcedTea, installing a Java runtime environment, and then symlinking something into your browser's plugins directory the best way to get Java working on most Linux systems? Unfortunately, there are many guides all over the internet offering totally different solutions, but this is probably the most common approach I've seen (not sure how well it works). My main use is downloading and executing .jnlp files (browsers should open them automatically). I've also seen people using "javaws" to run the file outside of a browser - is this something I could consider doing? I can't seem to find it packaged for my version of Ubuntu.
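    A hedged sketch of both pieces; the plugin path below is only an example and varies by Ubuntu release, JDK version, and architecture, so check what your icedtea package actually installed before symlinking:
        # run a .jnlp file outside the browser with Java Web Start (ships with the JRE)
        javaws ~/Downloads/some-app.jnlp

        # expose the IcedTea browser plugin to Firefox for the current user
        mkdir -p ~/.mozilla/plugins
        ln -s /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/IcedTeaPlugin.so ~/.mozilla/plugins/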

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, with a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters; it renames successfully. The snapshot can be navigated and read through the .zfs/snapshot directory, and it has no clones based on it. Trying to destroy it does this:
        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#
    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
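    A diagnostic sketch, hedged because some of these subcommands postdate Solaris 10 8/07 and may not exist on that release: the "dataset already exists" wording is sometimes a sign that something (a clone, or on newer releases a hold) still references the snapshot, so it's worth double-checking before assuming anything worse.
        zfs get -r origin mysql                     # any clone whose origin is the stuck snapshot?
        zfs list -t snapshot -r mysql/data          # confirm the exact snapshot names
        zfs holds mysql/data/4.1.12@wibble          # user holds (newer ZFS releases only)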

    Read the article

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}
    Googling for that URL (which is the MD5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs, but haven't found anything conclusive yet. Are there others who experienced the same thing and have gotten further than I have in pinpointing the hole? So far we have determined:
      - the changes were made as www-data, so Apache or its plugins are likely the culprit
      - all the changes were made within 15 minutes of each other, so it was probably automated
      - since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site)
      - if an .htaccess file already existed and was writeable by www-data, the script was kind and simply appended the above lines to the end of the file (making it easy to reverse)
    Any more hints would be appreciated.
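    A hedged sketch of one way to narrow down the entry point: use the timestamps on the dropped .htaccess files as an anchor and look for POST or otherwise suspicious requests in the same window across the vhost logs (paths and timestamps below are placeholders for your own):
        # when exactly were the dropped files created/modified?
        find /var/www -name .htaccess -newermt '2012-06-01 04:00' ! -newermt '2012-06-01 04:20' -printf '%T+ %p\n'
        # what was Apache serving in that window?
        grep -h '01/Jun/2012:04:0' /var/log/apache2/*access*.log | grep '"POST '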

    Read the article

  • Getting the EFS Private Key out of system image

    - by thaimin
    I recently had to re-install Windows 7 and I lost my exported private key for EFS. However, I have the entirety of my old user directory, and my thinking is that the key must be in there SOMEWHERE; the only question is how to get it out. I did find the PUBLIC keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates. If I import them using certmgr.msc it says I do have the private key in the certificate information, but if I try to export them it says I do not have the private key, and decryption of files doesn't work. There is also a "Keys" folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys; after importing the certificates I copied those over into my new installation, but it had no effect. I am starting to believe the private keys are either in AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\, but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the old user registry hive and most of the old system files (including the old system registry hives). I keep seeing references to a "Key Recovery Agent" but have not found anything about using one, just that it can be used. Thanks!

    Read the article

  • Unix sort 10x slower with keys specified

    - by KenFar
    My data: it's a 71 MB file with 1.5 million rows. It has 6 fields, four of which are strings averaging 15 characters and two of which are integers. Three of the fields are sometimes empty. All six fields combine to form a unique key - and that's what I need to sort on. Sort statement:
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv
    The problem: if I sort without keys, it takes 30 seconds; if I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30-second timing is fine, but 660 is a killer. I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) and then merge the results, etc. But I'm hoping for something simpler, since these results are so bad as-is. Any suggestions?
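    Two hedged things worth benchmarking on this data, since key-field sorts spend much of their time in locale-aware comparisons and per-key field parsing:
        # 1) byte-order collation instead of the locale's collation rules
        LC_ALL=C sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o a_out.csv a_in.csv

        # 2) if the key really is fields 1 through 6 in order, one spanning key
        #    avoids splitting and comparing the same line six separate times
        LC_ALL=C sort -t ',' -k1,6 -o a_out.csv a_in.csv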

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "Drop Folder" in a windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all methods I've found to do this, use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at this point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently... it seems that just opening a directory in windows explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution.... but taking hashes of files can be time consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Monitor, powershell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.

    Read the article

  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS X 10.4 I had all my folders beautifully organized, with positions set and with some folders opening in List view and others in Icon view, etc. When I transitioned to 10.6 I found that the OS ignored all that information, and it imposes an awful behaviour: changing the view for one folder propagates up to all parent folders that don't have their views locked. One way I can think of to get this functionality back is to write a program that, given a root directory, will enumerate it and set the views based on a static template I have. The other way I thought I could accomplish this is with Folder Actions; aside from the fact that I don't know AppleScript, it seems folder actions cannot be inherited, and since I have thousands of folders involved here, all under a single root, I could not possibly add the action to each of them manually. What I would like is a folder action so that, whenever I open a folder from Finder, if it contains any JPG, GIF, etc. image files it automatically opens in Icon view at a size reasonable for the number of images, and if the folder contains only folders, it opens in List view. Does anyone have anything that could do this for me? Thank you, -- Mark

    Read the article

  • How can I have Vhosts with Lighttpd on Windows and keeping PHP through mod_cgi ?

    - by Pixelastic
    Hello, I installed Lighty on Windows 7 and managed to get it to correctly serve both static and PHP files (through mod_cgi). At first I got the "No input file selected" message when requesting a .php file, so I updated the doc_root value in my php.ini to match the server.document-root defined in my Lighty config, and PHP stopped complaining. Then I defined a vhost to point all foo.com requests to a specific directory. It worked well for all static files, but when requesting a .php file, mod_cgi was still picking files from the doc_root defined in php.ini, not from the directory I defined for server.document-root in my vhost. I know that's what's supposed to happen - PHP follows the config defined in php.ini - and I have to set this value in my php.ini, otherwise no PHP is processed at all. What I don't understand is how I'm supposed to have virtual hosts with mod_cgi enabled here. I tried adding a [HOST=foo.com] section to php.ini without any luck. I tried mod_fastcgi but couldn't get it to work at all, and I also tried mod_simple_vhost but couldn't get it to handle PHP. I managed to get it working by copying my PHP install to another directory (and changing its doc_root value) and adding a cgi.assign pointing to that install in my vhost. But this is a really hackish way; it means having one PHP install for each virtual host. Note that I'm working on a development machine running Windows - this is not a production server, I just wanted to emulate the final server config locally to test some changes. I googled this problem a lot, but all I can find are people installing Lighty on Windows with mod_cgi, or installing Lighty on Windows with virtual hosts; I never found anyone who managed to get both.

    Read the article

  • Why does my FTP(E)S server fail about half of the time?

    - by user1092608
    We're having a discussion at work regarding our FTP server, which runs vsftpd. Initially we opted to serve FTPES instead of SFTP because it seemed the most flexible and straightforward way for our server to offer secure file transfers. Since then, our FTP server seems to be a source of issues for our end users: about half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (in the field, at random times and at random places) and indeed, behind some configurations (no idea how they are configured, because of the 'field' testing), I receive errors. Some of them are:
        Error: Failed to retrieve directory listing (FileZilla)
    Behind my basic home configuration, however, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol itself could be the source of the issues ("No, I've only known SFTP so far", "FTPS is not widespread"). I strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny it. As I would like to avoid reconfiguring - which would involve messing around with our SSH service, our virtual user setup and the FTP service - I would appreciate advice on: 1) what could potentially be the general cause? 2) do you have some general tips? 3) would you mind having a look at my configuration file?
        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem
    Thanks
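    A hedged sketch of the checks that most often explain "works at home, fails in the field" with FTPES: because the control channel is encrypted, NAT devices in between cannot rewrite the PASV reply, so the passive range (30000-30030 above) and the advertised pasv_address must be reachable end to end. The commands below are placeholders to adapt to your host and firewall tooling:
        # does the TLS handshake on the control channel work at all?
        openssl s_client -connect your-ftp-host:21 -starttls ftp
        # is the passive port range actually open on the server's firewall?
        iptables -L -n | grep -E '30000|30030'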

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. Based on this reading, here is my plan. All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.
        Directories
            Permission: 771
            Owner: user:uploaders
            Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.
        Files within the web-root
            Permission: 664
            Owner: user:uploaders
            Explanation: they will be uploaded and changed by different users, so both owner and group need r+w permissions. The webserver only needs to read files, so r permission only.
        Upload directories
            Permission: 771
            Owner: user:webserver
            Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. I figure it is safer to change the group to webserver, giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.
        Uploaded files
            Permission: 664
            Owner: user:webserver
            Explanation: after uploading, Apache might need to delete files, but this is no problem because it has r+w permission on the folder. So there is no need to make the file itself any more accessible than r access for the group.
    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
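    A small sketch of how the upload directory from the plan above could be set up in practice (the path and user name are examples); the setgid bit is worth considering so that files created there by Apache or by uploaders keep the shared group automatically:
        sudo chown someuser:webserver /var/www/example.com/uploads
        sudo chmod 2771 /var/www/example.com/uploads    # rwxrwx--x plus setgid on the directory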

    Read the article
