Search Results

Search found 16797 results on 672 pages for 'directory traversal'.

  • mod_rewrite "Request exceeded the limit of 10 internal redirects due to probable configuration error."

    - by Shoaibi
    What I want:

    - Force www [works]
    - Restrict access to .inc.php [works]
    - Force redirection of abc.php to /abc/
    - Removal of the extension from the URL
    - Add a trailing slash if needed

    Old .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]

        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]

        #### Remove extension:
        RewriteRule ^(.*)/$ /$1.php [L,R=301]

        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L]
        </IfModule>

    New .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]

        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]

        #### Remove extension:
        RewriteCond %{REQUEST_FILENAME} \.php$
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule (.*)\.php$ /$1/ [L,R=301]

        #### Map pseudo-directory to PHP file
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule (.*) /$1.php [L]

        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME} !/$
        RewriteRule (.*) $1/ [L,R=301]
        </IfModule>

    Error log:

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/

    Rewrite.log: http://pastebin.com/x5PKeJHB
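
    A hunch rather than a verified fix: in the new file, a request for /abc/ is internally rewritten to /abc.php by the pseudo-directory rule, mod_rewrite then re-runs the whole ruleset on the rewritten URL, and the "remove extension" block sees an existing .php file again and 301s back to /abc/, looping until the limit trips. One common guard on Apache 2.2 is to let that external redirect fire only on the client's original request, not on internal re-runs; a sketch:

        # Only redirect on the first pass: REDIRECT_STATUS is empty for the
        # original request and set once an internal rewrite has happened.
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule (.*)\.php$ /$1/ [L,R=301]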

  • samba4 not building in Arch Linux

    - by kmplsv
        cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
        cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
        cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
        cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
        for I in manpages/*.8; do \
            /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
        done
        /bin/install: cannot stat `manpages/*.8': No such file or directory
        make: *** [installdocs] Error 1
        Aborting...
        ==> ERROR: Makepkg was unable to build samba4.
        ==> Restart building samba4 ? [y/N]

    Any ideas as to what is causing my build to fail? I'm assuming it's an issue with manpages, but I can't figure out exactly what package it is looking for that I don't have.
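
    A starting point, though it's an assumption rather than something verified against this PKGBUILD: the manpages/*.8 files are normally generated from DocBook sources during the build, so "cannot stat" at install time usually means the documentation toolchain was absent when make ran. On Arch that would be something like:

        # Hypothetical fix: give the docs target the DocBook/XSLT tools it needs
        pacman -S --needed libxslt docbook-xsl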

  • Why is wget so much faster than Firefox at some downloads?

    - by Earlz
    Recently I needed to do an update of Xilinx WebPack; mind you, this is one hefty piece of software. It weighs in at 6 GB, which definitely isn't "quick" on any internet connection I've ever had available to me. So when I went to download it (using Firefox, of course), I was very... unsettled by the fact that the download was only going at 110 kByte/s. My internet connection is capable of about 2200 kByte/s down, so what gives!? My workaround in the past for this issue has been to take the link to my Linode Linux server and download it there with wget, where the download will zip along at 14 MByte/s, and then either copy it to my website directory and download it that way over HTTP, or use sftp. Both ways work about equally well and will sufficiently max out my connection. However, I recently figured out the missing variable: I tried doing the download locally with wget and was able to max out my connection! TL;DR: Now, my question. Why is wget so much faster than Firefox at downloading this file? I hardly ever see such a difference in download speeds except with this one file.
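
    Not an answer, but a reproducible way to take the browser out of the equation when comparing (the URL is a placeholder):

        # Prints the average transfer rate in bytes/sec after fetching to nowhere
        curl -L -o /dev/null -w 'average: %{speed_download} bytes/sec\n' 'http://example.com/webpack.iso'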

  • Checksum for Protecting Read-Only Documents

    - by Kim
    My father owns a small business and has to hand over several years' worth of financial documents to his insurance's auditor. He's asked me to go through and make sure everything is "read-only" so the data (the files) absolutely, positively cannot be modified or manipulated (he's a bit paranoid). We're talking about 20,000 documents (emails, spreadsheets, etc.). My first inclination was to place everything inside of one root folder ("mydadsdocs/") and then write a script that recursively traversed its directory subtree and set the file permissions to read-only. But then I got to thinking: that's a lot of work for me to do to satisfy an old man who is just being paranoid, and after all, if someone really wanted to modify a read-only file, it would be pretty easy to change file permissions anyway. So... is there something like a checksum I could run on the root folder, something quick and easy, that would basically "stamp" the data in that folder, so that if someone did change it, my father would have some way of knowing/proving it? If so, how? If not, any other recommendations that are quick, cheap (free) and effective?
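
    One cheap approach along those lines, as a sketch (GNU coreutils assumed): record a cryptographic hash of every file up front, keep the manifest somewhere the other party can't touch, and re-verify later.

        # Record a SHA-256 for every file under the root folder
        cd mydadsdocs/
        find . -type f -print0 | xargs -0 sha256sum > ~/manifest.sha256

        # Later: re-check everything; any modified file is reported as FAILED
        cd mydadsdocs/
        sha256sum -c ~/manifest.sha256 | grep -v ': OK$'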

  • PHP 5.3.5 Windows installer missing php_ldap.dll

    - by nmjk
    I'm working with Windows Server 2008, Apache 2.2. I'm using php-5.3.5-Win32-VC6-x86.msi as the installer, using the threadsafe version. I've gone through the install process four or five times just to make sure that I'm not missing anything ridiculous, but I don't think I am. The problem is that the php_ldap.dll extension simply doesn't seem to exist. It's not present in the installer interface (where the user is asked to choose which extensions to install), and it definitely doesn't appear in the ext/ directory after install. I found a lot of mentions of this issue for 5.3.3, including links to download the extension individually. Those links no longer exist, of course, and besides: they were for 5.3.3. I'd really rather use an extension that belongs with PHP 5.3.5. Anyone else encounter this problem? Any ideas as to what's going wrong? Anyone seen acknowledgement by the PHP folks that the file is indeed missing, and that it's an oversight? It's quite a frustration because the server I'm building has no purpose if I don't have PHP LDAP support. Cheers all, and thanks in advance for your assistance.
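
    For anyone comparing notes, one quick way to confirm what a given Windows build actually shipped (this assumes php.exe is on the PATH):

        REM Lists compiled-in/loaded modules; "ldap" absent here plus no
        REM ext\php_ldap.dll on disk means the installer really omitted it.
        php -m | findstr /I ldap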

  • Updating Dell PowerEdge firmware on Ubuntu?

    - by Shtééf
    The company I work for recently got its hands on a batch of second-hand PowerEdge SC1425 machines. We'd like to put these to good use. Our operating system of choice is Ubuntu Server 10.04 64-bit, which installs just peachy on this type of machine. Now I'd like to install the firmware updates from Dell, which are apparently marked as recommended; this includes updates for the BIOS, the BMC, and possibly some other hardware. I find it incredibly difficult to locate the files on the Dell website, and to install any of them on an Ubuntu system. I downloaded the file OM_6.2.0_SUU_A01.iso, and I believe I've read that the SUU DVD should be able to update any recent PowerEdge.

    - Is this correct? Is this the latest version?
    - Besides the version number, does A01 have any meaning?
    - Is this image bootable? (At the moment, I've just nosed around with a loop-device mount.)

    Running /bin/bash ./suu from the DVD, I get:

        # /bin/bash ./suu
        ./suu: line 262: ./java/linux/i386/bin/java: No such file or directory

    The file exists and is executable, though. But I cannot execute it directly from the shell either.
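
    For what it's worth, "No such file or directory" on a binary that plainly exists is the classic symptom of a 32-bit executable on a 64-bit system that lacks the 32-bit loader; a guess from the error text alone, but cheap to check:

        # If this reports "ELF 32-bit", the SUU's bundled JRE needs 32-bit support
        file ./java/linux/i386/bin/java

        # On Ubuntu 10.04, that support came from the ia32-libs package
        sudo apt-get install ia32-libs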

  • Export NFS path containing "-" (dash)

    - by qdot
    I'm in a bit of a pinch with the NFS exports file. Specifically, I can't find a way to export a directory containing "-" in the path name. The manual (exports(5)) states:

        Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only.

    It then states:

        If an export name contains spaces it should be quoted using double quotes. You can also specify spaces or other unusual character in the export name using a backslash followed by the character code as three octal digits.

    Unfortunately, that does not work here. Specifically, whether the pathname contains the "-" verbatim, escaped as \055, or enclosed in double quotes, the export still refers to the name without the "-". Any ideas? I have a large number of directories, all of the form:

        /vol/buildsystem-s3c2440
        /vol/buildsystem-tao3530

    and I'd prefer to have them all available as NFS exports. Short of replacing the "-" with "_" everywhere in the scripts, can it be done with "-"?
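
    As a point of comparison (a sketch, with a placeholder client range and options): on a stock Linux NFS server a dash in the path needs no escaping at all, so an /etc/exports line such as

        /vol/buildsystem-s3c2440 192.168.0.0/24(ro,no_subtree_check)

    followed by exportfs -ra should export cleanly, with exportfs -v showing the live result. If even that minimal case misbehaves, the problem likely lies elsewhere in the file rather than with the dash itself.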

  • IIS 7 - 403 Access Denied error on wwwroot trying to redirect to /owa

    - by cparker4486
    I'm trying to set up a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I was unsuccessful doing this with IIS's HTTP Redirect, so I looked at other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it:

        <script runat="server">
        private void Page_Load(object sender, System.EventArgs e)
        {
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location", "https://mail.mydomain.com/owa");
        }
        </script>

    Instead of a redirect I get:

        403 - Forbidden: Access is denied.
        You do not have permission to view this directory or page using the credentials that you supplied.

    I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the identity is ApplicationPoolIdentity. (I don't know what these things are, but maybe knowing this will help you.) Thanks!
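
    A side note, hedged since the original 403 may have another cause entirely: IIS 7 can issue this redirect itself without any default document, which at least rules the page out. A sketch from an elevated prompt (the site name "Default Web Site" is assumed):

        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:httpRedirect /enabled:true /destination:"https://mail.mydomain.com/owa" /httpResponseStatus:Permanent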

  • Java Deployment Ruleset not working

    - by adbertram
    I've created a Java Deployment Rule Set that looks like this:

        <ruleset version="1.0+">
          <rule>
            <id location="http://hpfweb.mydomain.com/" />
            <action permission="run" version="1.6.0_20" />
          </rule>
          <rule>
            <id location="http://*.mydomain.com" />
            <action permission="run" />
          </rule>
        </ruleset>

    I've created a self-signed cert and added it into the keystore as well as Trusted Certification Authorities. I have an app at http://hpfweb.mydomain.com that requires Java 1.6.0_20 and will error out if any other version is attempted. When only this version is installed on the computer, the application works. However, if a newer version is installed, it does not. As you can see, I've attempted to force the version to 1.6.0_20 in the ruleset. I've confirmed the deployment rule set is being applied successfully by going into the Java Control Panel -> Security and viewing the active deployment rule set; it is exactly as you see here. I've also looked at the web source for the application, and all references point to http://hpfweb* links. When the applet is launched, I've brought up Task Manager and confirmed the java.exe launched is coming from the jre6 directory. When the newer version is installed, I get the error "accesscontrolexception - access denied (java.awt.AWTPermission accessEventQueue)".
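
    For reference, the packaging steps the ruleset depends on; these are generic requirements rather than anything specific to this report (the keystore and alias names are placeholders). The XML must be packaged as DeploymentRuleSet.jar, signed, and placed in C:\Windows\Sun\Java\Deployment:

        jar -cvf DeploymentRuleSet.jar ruleset.xml
        jarsigner -keystore mykeystore.jks DeploymentRuleSet.jar myalias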

  • Multiple PHP SAPI configuration

    - by DTest
    I'm trying to build PHP for use both as an Apache shared module (--with-apxs2) and with the php-cgi binary (FastCGI) on Mac OS X 10.6. I'm using this ./configure:

        ./configure --prefix=/usr/local/PHP \
        --with-apxs2=/usr/local/apache/bin/apxs \
        --disable-ipv6 \
        --enable-cgi \
        --with-curl \
        --with-mysqli=/usr/local/mysql/bin/mysql_config \
        --with-openssl=/usr \
        --enable-ftp \
        --enable-shared \
        --enable-soap \
        --enable-sockets \
        --enable-zip \
        --with-zlib-dir

    It builds the Apache php5.so module just fine, but in /usr/local/PHP/bin there is no php-cgi file. If I build it without the --with-apxs2 option (and indeed, I don't even need the --enable-cgi option), the php-cgi file gets built with no problems.

    Background on my setup: PHP 5.3.4, Apache 2.2.14, Mac OS X 10.6, Tomcat with JavaBridge (which is why I need the php-cgi file).

    Without the apxs2 option, /usr/local/php/bin/php -v produces:

        PHP 5.3.4 (cli) (built: Dec 21 2010 21:35:14)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    and /usr/local/php/bin/php-cgi -v produces:

        PHP 5.3.4 (cgi-fcgi) (built: Dec 21 2010 21:35:12)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    My question is: what am I not understanding about PHP SAPIs that prevents building the two at the same time? Also, can I build it --with-apxs2 the first time, then make clean and rebuild in the same PHP directory /usr/local/php for the PHP files without issue?
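
    On the second question, the commonly suggested workaround (a sketch, not verified against 5.3.4 specifically) is exactly that two-pass build into the same prefix; the second pass leaves the already-installed php5.so alone:

        # Pass 1: Apache module
        ./configure --prefix=/usr/local/PHP --with-apxs2=/usr/local/apache/bin/apxs # remaining flags as above
        make && sudo make install

        # Pass 2: same flags minus --with-apxs2; installs the cli/cgi binaries
        make clean
        ./configure --prefix=/usr/local/PHP --enable-cgi # remaining flags as above
        make && sudo make install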

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working perfectly, as expected, for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and view them under http://sub.domain.com/git/gitweb.cgi. Here is the current relevant directory tree:

        /home/user/domains/sub.domain.com/public_html/git/
        drwxr-sr-x user   user .
        drwxr-x--- user   user ..
        -rw-r--r-- user   user git-favicon.png
        -rw-r--r-- user   user git-logo.png
        -rwxr-xr-x user   user gitweb.cgi
        -rw-r--r-- user   user gitweb.css
        drwxrwx--- apache user reponame.git

        /home/user/domains/sub.domain.com/public_html/git/reponame.git/
        drwxrwx--- apache user .
        drwxr-sr-x user   user ..
        drwxrwx--- apache user branches
        -rwxrwx--- apache user config
        -rwxrwx--- user   user description
        -rwxrwx--- apache user HEAD
        drwxrwx--- apache user hooks
        drwxrwx--- apache user info
        drwxrwx--- apache user objects
        drwxrwx--- apache user refs

    But I have some questions:

    1. When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that?
    2. Usually, to create a new git repository, I'll do something like:

           $ mkdir proj
           $ cd proj
           $ git init
           Initialized empty Git repository in /home/user/proj/.git/
           // here I create the files or copy them from somewhere else
           $ git add *.php
           $ git add README
           $ git commit -m 'initial version'

       But after creating the repository in Virtualmin, I can find a new dir named 'reponame.git' but not the '.git' dir. When I try to run any git command (e.g. git status) I get "fatal: This operation must be run in a work tree". How can I work with that repository?
    3. Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
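
    On question 2, a hedged pointer: what was created is a bare repository, which is why it's named reponame.git, has no .git directory, and has no work tree. Bare repos are meant to be cloned from and pushed to, not worked in directly; a sketch with assumed paths:

        # Clone the bare repo to get a working tree
        git clone ssh://user@sub.domain.com/home/user/domains/sub.domain.com/public_html/git/reponame.git
        cd reponame

        # Work as usual, then publish back to the bare repo
        git add -A
        git commit -m 'initial version'
        git push origin master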

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally, I see the file running in Task Manager just fine. However, when I run the service on the server, it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they are fine as well. Also, there are no exceptions happening. Below is the code used to launch the process which runs the file. I posted this first on Stack Overflow, and some people were thinking this is a config issue, so I moved it here. Any ideas?

        try
        {
            // TODO: Add code here to start your service.
            eventLog1.WriteEntry("VirtualCameraService started");

            // Create an instance of the Process class responsible for starting the new process.
            System.Diagnostics.Process process1 = new System.Diagnostics.Process();

            // Set the directory where the file resides
            process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\";

            // Set the filename of the file to be opened
            process1.StartInfo.FileName = "VirtualCameraServiceProject.avc";

            // Start the process
            process1.Start();
        }
        catch (Exception ex)
        {
            eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException);
        }
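
    Two things that may be worth ruling out; both are guesses from the code rather than a diagnosis. Opening a document like a .avc file goes through the shell file association, which must exist for the account the service runs as (LocalSystem on a server often lacks it), and from Vista/2008 onward services run in session 0, so a launched program never shows on the logged-in desktop even when it is running. Also, logging ex.InnerException can hide the error, since it is often null. A sketch:

        // Hypothetical tweaks: make the shell launch explicit so the .avc
        // association is used, and log the full exception chain.
        process1.StartInfo.UseShellExecute = true;

        // In the catch block, prefer ToString(): InnerException is often null.
        eventLog1.WriteEntry("VirtualCameraService exception - " + ex.ToString());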

  • Passwordless SSH not working - keys copied and permissions set

    - by Comcar
    I know this question has been asked, but I'm certain I've done what all the other answers suggest.

    Machine A:

    - used ssh-keygen -t rsa to create id_rsa.pub in ~/.ssh/
    - copied Machine A's id_rsa.pub to Machine B user's home directory
    - made the file permissions of id_rsa.pub 600

    Machine B:

    - added Machine A's pub key to authorised_keys and authorised_keys2: cat ~/id_rsa.pub >> ~/.ssh/authorised_keys2
    - made the file permissions of id_rsa.pub 600

    I've also ensured both .ssh directories have the permission 700 on both machine A and B. If I try to log in to machine B from machine A, I get asked for the password, not the SSH passphrase. I've got the root users on both machines to talk to each other using password-less SSH, but I can't get a normal user to do it. Do the user names have to be the same on both sides? Or is there some setting elsewhere I've missed? Machine A is a Ubuntu 10.04 virtual machine running inside VirtualBox on a Windows 7 PC; Machine B is a dedicated Ubuntu 9.10 server.

    UPDATE: I've run ssh with the option -vvv, which provides many, many lines of output, but these are the last few commands:

        debug3: check_host_in_hostfile: filename /home/pete/.ssh/known_hosts
        debug3: check_host_in_hostfile: match line 1
        debug1: Host '192.168.1.19' is known and matches the RSA host key.
        debug1: Found key in /home/pete/.ssh/known_hosts:1
        debug2: bits set: 504/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug3: Wrote 16 bytes for a total of 1015
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug3: Wrote 48 bytes for a total of 1063
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/pete/.ssh/identity ((nil))
        debug2: key: /home/pete/.ssh/id_rsa (0x7ffe1baab9d0)
        debug2: key: /home/pete/.ssh/id_dsa ((nil))
        debug3: Wrote 64 bytes for a total of 1127
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/pete/.ssh/identity
        debug3: no such identity: /home/pete/.ssh/identity
        debug1: Offering public key: /home/pete/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 368 bytes for a total of 1495
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/pete/.ssh/id_dsa
        debug3: no such identity: /home/pete/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password
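
    One detail that stands out, and it would produce exactly this fallback to password authentication: stock OpenSSH reads ~/.ssh/authorized_keys on the server side, with the US spelling; files named authorised_keys, or sitting in the home directory rather than in ~/.ssh, are silently ignored unless sshd_config's AuthorizedKeysFile says otherwise. A sketch of redoing the copy with those names:

        # On machine B, as the target user
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys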

  • 403 Forbidden when Deploying asp.net 4.0 site to IIS 7

    - by Jordan
    So I have an EC2 instance running with the URL NoWeatherSurprises.com. I have the DNS pointing there, and I set up a new site in IIS 7 and pointed it to a folder. I used Visual Studio Web Developer 2010 Express to publish to this folder; it now has the binaries and such. However, if I go to NoWeatherSurprises.com I get the "Welcome to IIS 7" screen. I'd expect to go to my application. If I navigate to http://noweathersurprises.com/weather/ [weather was the folder I published to under wwwroot], I get a 403 Forbidden. I have no idea why; I am guessing that it is trying to do a directory listing or something instead of launching my MVC application.

    So, two problems in summary:

    1. It is not pointing the domain to the folder directly, and I need to add /weather.
    2. I am getting a 403 Forbidden instead of the results of my home controller with the index action.

    I am new to IIS 7. I had been using IIS 6 and had a lot less trouble setting it up, but I suspect that's my own fault and I am just missing something. Thanks in advance for any help.
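
    In case it helps, the two usual suspects for exactly this pair of symptoms, offered as guesses: the MVC app must be its own IIS application rather than a plain folder, and ASP.NET 4.0 must be registered with IIS. A sketch, with the site name and path assumed:

        REM Register ASP.NET 4.0 with IIS (64-bit framework path)
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru

        REM Turn /weather into an application under the site
        %windir%\system32\inetsrv\appcmd add app /site.name:"NoWeatherSurprises" /path:/weather /physicalPath:"C:\inetpub\wwwroot\weather"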

  • IIS 404 custom error

    - by Greg B
    I've deployed an ASP.NET 3.5 app to a 64-bit Windows 2003 R2 server. In the web.config I have the following:

        <customErrors mode="RemoteOnly" defaultRedirect="/404/">
          <error statusCode="404" redirect="/404/"/>
          <error statusCode="500" redirect="/500/"/>
        </customErrors>

    In the website properties in IIS Manager I have set the 404 and 500 errors to Type = "URL" and the same URLs as in the web.config. I have a wildcard application map to the .NET 2.0 aspnet_isapi.dll with "Verify file exists" turned off. If I try to hit a fake .aspx file, I successfully get sent to the 404 page. I believe this is because there is an explicit mapping for .aspx to the .NET DLL. If I try to access a fake directory, I simply receive a plain-text response saying:

        The system cannot find the file specified.

    It would appear that these requests for directories are not being routed through the .NET pipeline, which is what I would expect (and need) to happen because of the wildcard application mapping. Any ideas?

  • Bash script to run a clamscan on Ubuntu- how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is:

        #! /usr/bin/env bash
        clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home
        RETVAL=$?
        [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found'
        [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound
        [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError
        find ~/.ClamScan/* -mtime +7 -exec rm {} \;

    However, I'm unsure about a couple of things. I'm always wary of using rm; as far as I can tell, the find command I've got should be deleting any log files that are more than a week old. I'm also not entirely sure how the return-value testing works. I've got a manual that briefly covers bash, which says that the meaning of $? is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell, -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
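
    For reference (facts about bash, not specific to this script): in glob patterns ? matches one character, but as a parameter, $? expands to the exit status of the last command, which is what the script relies on. And it's the other way around for the tests: [ a = b ] compares strings, while [ a -eq b ] compares integers, so -eq is the right choice for an exit status. The cleanup can also avoid the glob and a bare rm:

        # Delete only regular files older than 7 days, no glob expansion needed
        find ~/.ClamScan/ -type f -mtime +7 -delete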

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling, but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750 MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old, and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders, and all will be OK after a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed; how do I check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton
    Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me: thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (such as smc -stop) from the Run prompt with the admin creds I have. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470 MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus-definition folders without completely foobaring the Endpoint Protection client and the server?

    Regards, Gus

  • Nginx proxy with Redmine SVN authentication.

    - by Omegaice
    I am attempting to set up a system where I have an nginx server running as a reverse proxy for multiple websites that I want to run. To separate the websites I have created a Linux container for each site, which allows me to reduce conflicts in database usage, etc. I am currently trying to get my main site working, and have nginx with Passenger set up and connecting to Redmine. I also have an Apache install specifically set up for serving the SVN over HTTP, and I am attempting to use the Redmine authentication with that. I have set everything up as described in the Redmine howtos, but checking a project out from the SVN always works, even if the project is private, and whenever I try to commit to the repositories it fails saying "Could not open the requested SVN filesystem". The Apache error log line related to that event is:

        (20014)Internal error: Can't open file '/srv/rcs/svn/error/format': No such file or directory

    If I take out the Redmine authentication, I can check repositories out and commit to them fine, but there is no authentication. Does anyone have any ideas?

    Edit: I tried to solve this problem another way, by attempting to have the authentication work via LDAP. I managed to get it so that my user could log into the Redmine website, but as soon as I tried to check anything out, it said that access to the repository was forbidden.

  • cPanel web server redundancy advice?

    - by crgnz
    At present I operate a (reasonably low-volume) web-hosting service with a CentOS 5.3 server running cPanel/WHM. I would like to implement a level of redundancy such that, in the event of server failure, I can restore service with a minimum of effort in less than 60 minutes. I also want to set up a secondary DNS that cPanel will replicate with. My current idea is to kill two birds with one stone:

    1. My current server is called "www1".
    2. Purchase an identical server (HP DL360 G4) with mirrored disks. Call this server "www2".
    3. Install CentOS 5.4 (or perhaps I should install 5.3 to be identical with www1).
    4. Install cPanel/WHM on this server and fully license it.
    5. Set up www1 and www2 cPanel to replicate DNS with each other.
    6. Set up a nightly replication script that does the following (a sketch follows this list):
       a) rsyncs the /home directory from www1 to www2
       b) dumps all MySQL databases on www1 and copies them to a temp folder (with root access only) on www2
       c) triggers a script to run on www2 that restores the MySQL dumps

    Thus each night a fully working copy of all the websites and MySQL databases is copied to www2. I do not have enough knowledge of MySQL replication to understand whether it works safely and transparently with cPanel, hence the dump/copy/restore proposal, for want of knowing better!

    In the event that www1 dies a horrible death, I envisage that I could log in to www2, change the IP addresses to those that www1 had, and presto, the websites are available again. The advantage of this idea is that it is fairly simple and "low tech", and thus does not require an expert sysadmin to set up and monitor (I am NOT an expert sysadmin). The disadvantage is that up to a full day's worth of data changes would be lost. I think this would be acceptable to the sorts of customers I host at the moment. The other disadvantage would be having to pay for a full cPanel licence, but I am comfortable with that cost, so for now all I want to discuss are technical considerations. Is this a sound scheme?
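
    A sketch of the nightly job described in step 6, compressed to its essentials (hostnames, key-based root SSH between the two boxes, and the dump location are all assumptions):

        # a) mirror home directories
        rsync -aH --delete /home/ root@www2:/home/

        # b) dump all databases into a root-only folder on www2
        mysqldump --all-databases --single-transaction | ssh root@www2 'cat > /root/nightly/all.sql'

        # c) restore on www2
        ssh root@www2 'mysql < /root/nightly/all.sql'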

  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a vncserver on display :1 on an Ubuntu machine. When I connect to it, I get a grey X window with an error message: "Could not connect to session bus: Failed to connect to socket." The vnc log is:

        Xvnc Free Edition 4.1.1 - built Apr 9 2010 15:59:33
        Copyright (C) 2002-2005 RealVNC Ltd.
        See http://www.realvnc.com for information on VNC.
        Underlying X server release 40300000, The XFree86 Project, Inc

        Sun Mar 20 15:33:59 2011
        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5901
        vncext: created VNC server for screen 0
        error opening security policy file /etc/X11/xserver/SecurityPolicy
        Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list!
        cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused

        Sun Mar 20 15:34:10 2011
        Connections: accepted: 0.0.0.0::51620
        SConnection: Client needs protocol version 3.8
        SConnection: Client requests security type VncAuth(2)
        VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565
        VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565
        gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed
        gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed

    Any ideas how to fix it?
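
    A common remedy for the "Could not connect to session bus" symptom under Xvnc, hedged since it depends on how the session is started: launch the desktop through dbus-launch in ~/.vnc/xstartup so gnome-session gets a session bus of its own.

        #!/bin/sh
        # ~/.vnc/xstartup (sketch): give the GNOME session its own D-Bus session bus
        dbus-launch --exit-with-session gnome-session &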

  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, and therefore I come here for help. I'm running two virtual servers on the same machine, with a local connection between the two; they are able to ping each other. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. Pinging works both as "ping TSDATA1" and as "ping 10.0.0.1", which is the IP address of TSDATA1. The IP address of TSDATA2 is 10.0.0.2. I'm trying to join the domain with TSDATA2, but I'm getting this error when trying:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1

        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)

        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
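
    The usual checks for exactly that failed SRV lookup, assuming the DNS server role is installed on TSDATA1 and TSDATA2 already points at 10.0.0.1 for DNS (which the error text suggests it does):

        REM From either box: does the DC's DNS actually hold the SRV record?
        nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1

        REM On TSDATA1: re-register the DC's records if the lookup comes back empty
        ipconfig /registerdns
        net stop netlogon
        net start netlogon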

  • OpenVPN Client timing out

    - by Austin
    I recently installed OpenVPN on my Ubuntu VPS. Whenever I try to connect to it, I can establish a connection just fine. However, everything I try to connect to times out. If I try to ping something, it will resolve the IP, but will time out after resolving it (so the DNS server seems to be working correctly). My server.conf is the stock sample config; these are the active directives (I've trimmed the stock comments and the directives that are still commented out):

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    I've tried from multiple computers, by the way; the same result on all of them. What could be wrong? Thanks in advance, and if you need other information I'll gladly post it.

    Information for new comments:

        root@vps:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 862K packets, 51M bytes)
         pkts bytes target prot opt in  out source      destination

        Chain FORWARD (policy ACCEPT 3 packets, 382 bytes)
         pkts bytes target prot opt in  out source      destination
            0     0 ACCEPT all  --  *   *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
         4641  298K ACCEPT all  --  *   *   10.8.0.0/24 0.0.0.0/0
            0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 1671K packets, 2378M bytes)
         pkts bytes target prot opt in  out source      destination

    and

        root@vps:~# iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 17937 packets, 2013K bytes)
         pkts bytes target prot opt in  out source      destination

        Chain POSTROUTING (policy ACCEPT 8975 packets, 562K bytes)
         pkts bytes target prot opt in  out source      destination
         1579  103K SNAT   all  --  *   *   10.8.0.0/24 0.0.0.0/0   to:SERVERIP

        Chain OUTPUT (policy ACCEPT 8972 packets, 562K bytes)
         pkts bytes target prot opt in  out source      destination
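
    Given that the tunnel comes up, DNS resolves, and iptables already SNATs 10.8.0.0/24, one very common missing piece fits these symptoms. This is a guess, but it is quick to test on the server:

        # 0 here means the kernel is not forwarding the VPN clients' traffic at all
        cat /proc/sys/net/ipv4/ip_forward

        # Enable now, and persist across reboots
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf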

  • Need assistance making a batch file for renaming files in separate folders

    - by Carnaxus
    Ok, here's one for you. I'm trying to use a batch file to rename a bunch of files, but none of them are in the same folder as the batch file itself. The command prompt keeps telling me that the directory can't be found. I suppose I could just rename all the files in all the folders that match the filename, but I don't want to do that either; I only want to change certain ones. My batch file as it stands is:

        @echo off
        ren "engine/info.txt" "disabled.txt"
        ren "gravplating/info.txt" "disabled.txt"
        ren "HAWX content/info.txt" "disabled.txt"
        ren "laserz/info.txt" "disabled.txt"
        ren "NeuroNaval/info.txt" "disabled.txt"
        ren "NeuroPlanes/info.txt" "disabled.txt"
        ren "NeuroTanks/info.txt" "disabled.txt"
        ren "NeuroWeapons/info.txt" "disabled.txt"
        ren "WAC Base/info.txt" "disabled.txt"
        ren "WAC DamageSystem/info.txt" "disabled.txt"
        ren "WAC GravityController/info.txt" "disabled.txt"
        ren "WAC Helicopters/info.txt" "disabled.txt"
        ren "WAC Sweps/info.txt" "disabled.txt"
        ren "weapons/info.txt" "disabled.txt"
        ren "AFF_ships/info.txt" "disabled.txt"
        ren "AntiTakeRifle/info.txt" "disabled.txt"
        ren "Catmull-Rom Cameras/info.txt" "disabled.txt"
        ren "Displacer Cannon/info.txt" "disabled.txt"
        ren "Drumdevil's Trains/info.txt" "disabled.txt"
        ren "EVEOnline/info.txt" "disabled.txt"
        ren "gm_botmap_v3/info.txt" "disabled.txt"
        ren "gm_construct_flatgrass_v5-2/info.txt" "disabled.txt"
        ren "gm_mobenix_v3_final/info.txt" "disabled.txt"
        ren "gm_mobenix_v3_highquality_Water/info.txt" "disabled.txt"
        ren "gm_snabbansairfield_b1/info.txt" "disabled.txt"
        ren "gm_XhS_construct/info.txt" "disabled.txt"
        ren "linedraw/info.txt" "disabled.txt"
        ren "ModelManipulator/info.txt" "disabled.txt"
        ren "NeuroCars/info.txt" "disabled.txt"
        ren "Propeller Engine/info.txt" "disabled.txt"
        ren "VanDookie and Predaaator's pack/info.txt" "disabled.txt"
        ren "WAC ECM/info.txt" "disabled.txt"
        ren "WAC Extra Helicopters/info.txt" "disabled.txt"
        echo Done!
        pause
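
    One detail worth checking, though it is a guess from the error message alone: cmd.exe built-ins like ren generally want backslashes in paths, and forward slashes can be misread as switches, so lines of this form may behave where the forward-slash version reports that the directory can't be found:

        ren "engine\info.txt" "disabled.txt"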

  • "Open file location" is broken for Win Live Photo Gallery and Chrome

    - by Arnold Spence
    Symptoms:

    Windows Live Photo Gallery: right-clicking on an image (.jpg, .png) and selecting the context-menu item "Open file location" should open the containing folder in Explorer. Instead, I get an error dialog stating: "This file does not have a program associated with it for performing this action. Please install a program or, if one is already installed, create an association in the Default Programs control panel."

    Chrome: after downloading a file (of any type), clicking the down arrow on the download status bar at the bottom and clicking "Show in folder" results in the same error dialog mentioned above.

    I'm using Windows 7. I'm pretty sure this is a registry entry gone bad, but I've been unable to locate any information about this specific problem. It's not a file-type association problem, as I am not trying to open the files concerned; I'm trying to open an Explorer window at the location of the file. I've found a similar issue that somebody had with Explorer itself. However, the suggested registry fixes there for "Folder", "Directory" and "Drive" did not solve the problem. Also, if I use the search bar in Explorer to do a search, right-click on a file and choose "Open file location", Explorer jumps to that location without any trouble. I have not yet identified other programs with this issue.

  • adding trac to apache2 configuration file

    - by Michael
    I currently have Apache2 running from a MythTV/MythWeb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except for a change to the directory index). There is also a mythweb.conf file that only has directives setting various variables for MythWeb. I want to host a Trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need to set for the Trac site. They give me directions for making a new VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to from the site above to point to the Trac location instead). Where should I put these directives? Can I make a Trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac configs. And how does Apache know that the mythweb config (and the Trac.conf I want to make) belongs to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost being defined on my system, if that matters.
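
    On the "how does Apache know" point, with a sketch: directives that enabled config files declare outside of any <VirtualHost> block apply server-wide, and a single catch-all VirtualHost therefore inherits them; that is why mythweb.conf works without naming the vhost. So a separate Trac.conf along these lines should work too (this assumes the mod_python setup from the linked TracOnUbuntu page, and the Trac environment path is a placeholder):

        <Location /trac>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv /var/lib/trac
            PythonOption TracUriRoot /trac
        </Location>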
