Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • 502: proxy: pass request body failed

    - by Andrei Serdeliuc
    Sometimes I get the following error (in Apache's error.log) when viewing my site over https:

        (502)Unknown error 502: proxy: pass request body failed to xxx.xxx.xxx.xxx:443

    I'm not entirely sure what this is or why it happens; it's also not consistent. The request route is: Browser -> Proxy server (Apache with mod_proxy + mod_ssl) -> Load balancer (AWS) -> Web server (Apache with mod_ssl). The configuration on the proxy server is as follows:

        <VirtualHost *:443>
            ProxyRequests Off
            ProxyVia On
            ServerName www.xxx.co.uk
            ServerAlias xxx.co.uk
            <Directory proxy:*>
                Order deny,allow
                Allow from all
            </Directory>
            <Proxy *>
                AddDefaultCharset off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / balancer://cluster:443/ lbmethod=byrequests
            ProxyPassReverse / balancer://cluster:443/
            ProxyPreserveHost off
            SSLProxyEngine On
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.cert
            SSLCertificateKeyFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.key
            <Proxy balancer://cluster>
                BalancerMember https://xxx.eu-west-1.elb.amazonaws.com
            </Proxy>
        </VirtualHost>

    Any idea what the issue might be?
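
    A hedged suggestion (an editorial addition, not part of the original question): intermittent 502s behind mod_proxy are often caused by reusing a pooled backend connection that the load balancer has already closed. mod_proxy_http honours the proxy-initial-not-pooled environment variable, and balancer members accept a ttl for pooled connections; both are sketched below, assuming the ELB idle timeout is around the usual 60 seconds:

        # Sketch, assuming the connection-reuse race is the cause:
        SetEnv proxy-initial-not-pooled 1
        # and/or keep pooled backend connections shorter-lived than the ELB idle timeout:
        BalancerMember https://xxx.eu-west-1.elb.amazonaws.com ttl=25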


  • OSX root user keeps re-enabling itself on reboot

    - by geodave
    Running Snow Leopard. Completely inexplicably, I seem to have enabled the OS X root user by accident. I honestly have no idea how it happened, but if memory serves I was looking at the login pane (with my two user accounts) when I must have hit something, and suddenly the two accounts were replaced by one that just said "Other..." Clicking the "Other..." account allows me to type a username and password, but neither of the normal two accounts would work. Since I never set a root password, it wouldn't let me in that way either. So I booted into Single User mode and ran these commands:

        /sbin/mount -uw /
        fsck -fy
        launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
        dscl . -passwd /Users/root newpassword

    and that let me log in as root. Then I went to System Preferences > Accounts > Login Options, clicked Join, opened Directory Utility, and lastly in the Edit menu I clicked "Disable Root User". Great, I thought, back to normal. Except after rebooting I still only have the "Other..." account visible, and the root password I set beforehand doesn't work anymore! I have to reboot into Single User Mode and go through the whole process again just to get back into the system (as root). How on Earth did I accidentally enable this? I didn't even know about Directory Utility before now. And most importantly, why the heck would it be re-enabling the root user on boot? Thanks in advance for any help!
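
    A hedged idea (an editorial addition, not from the original post): an "Other..."-only login window plus a root password that stops surviving reboots both point at Directory Services data rather than at root itself, so it may help to disable root from the command line and check that the local accounts are still intact. A sketch, run from the root shell:

        # Sketch - 'yourname' is a placeholder for one of the two normal accounts:
        dsenableroot -d                                        # disable the root account
        dscl . -list /Users UniqueID                           # the two normal accounts should be listed
        dscl . -read /Users/yourname AuthenticationAuthority   # sanity-check the account record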


  • Share point ACL on OS X Lion Server - POSIX group always overrides ACLs

    - by Ben
    Trying to configure a share point on a Lion Server machine. The directory is created by the local server admin (serveradmin) and has rwxr-x--- on it. The serveradmin user belongs to the local staff group, so:

        serveradmin   read/write
        staff group   read
        Others        none

    We have an OD group for all the employees (Workers). Using the Server tool we've given Full Control on the share point:

        Workers       Full Control
        serveradmin   read/write
        staff group   read
        Others        none

    We would assume that Workers could then do what they want on the share, but that doesn't seem to be the case. It appears the POSIX permissions override the ACL permissions for Workers. If I change the staff permission to read/write, then the Workers can create a file or folder in the share point. I would think the ACL should take precedence, but it doesn't; POSIX always wins, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take the Write permission away from the Workers group, the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901 The directory nesting fix suggested there doesn't work for us. Has anyone had similar issues and know how to fix this?

    Edit: in Workgroup Manager the employee users have primary group staff and are given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically).

    Edit 2: OK, this is interesting: adding OD users directly to the share's ACL works totally fine.
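
    A hedged debugging idea (editorial): on OS X an allow ACE is supposed to be evaluated before the POSIX mode bits, so when POSIX "wins" it often means the ACE isn't actually attached where you think it is. Inspecting and setting the ACL from the shell takes the Server app out of the equation ('/Shares/Work' is a placeholder path):

        ls -le /Shares/Work     # the Workers ACE should show up in this listing
        sudo chmod +a "Workers allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" /Shares/Work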


  • ImportError: No module named _socket? WSGI Deployment into Apache

    - by Sxkaur
    I am using mod_wsgi 3.3 with Python 2.7.3 (32-bit) on Apache 2.2. I got the binary from http://code.google.com/p/modwsgi/downloads/detail?name=mod_wsgi-win32-ap22py27-3.3.so. I have been trying to deploy an application but keep receiving ImportError: No module named _socket. I have included my WSGI file and error logs.

    Apache config:

        #LoadModule vhost_alias_module modules/mod_vhost_alias.so
        LoadModule wsgi_module modules/mod_wsgi.so
        <Directory C:/Users/xxxxd/Documents/cahd>
            AllowOverride None
            Options None
            Order deny,allow
            Allow from all
        </Directory>
        WSGIScriptAlias / C:/Users/xxxxd/Documents/cahd/cahd/django.wsgi

    django.wsgi:

        import os, sys
        sys.path.append('C:/Users/xxxxd/Documents')
        sys.path.append('C:/Users/xxxxd/Documents/cahd/')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'cahd.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    The error was:

        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] Traceback (most recent call last):
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:/Users/xxxxd/Documents/cahd/django.wsgi", line 10, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import django.core.handlers.wsgi
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\core\\handlers\\wsgi.py", line 8, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from django import http
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\http\\__init__.py", line 11, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from urllib import urlencode, quote
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\urllib.py", line 26, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import socket
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\socket.py", line 47, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import _socket
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] ImportError: No module named _socket
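
    A hedged possibility (editorial): _socket is a C extension living in C:\Python27\DLLs, and on Windows mod_wsgi locates its Python installation at module load time - a mismatch (wrong bitness, or a Python installed "just for me" rather than for all users) can leave the extension directory unreachable. mod_wsgi's WSGIPythonHome directive pins the interpreter explicitly, and is worth trying:

        # Sketch - assumes the 32-bit Python 2.7 that this mod_wsgi binary was
        # built against is installed in C:/Python27:
        WSGIPythonHome "C:/Python27"

    If the directive doesn't take effect on this Windows build, reinstalling Python "for all users" (so its registry entries are system-wide and visible to the Apache service account) is the other commonly suggested fix.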


  • Phusion Passenger (Apache, Sinatra) suddenly not working for a single site on my server

    - by Kerrick
    I've had Phusion Passenger working for a few of my sites for months. Then, today, it stopped working for a single site. I hadn't changed anything (I hadn't even SSH'ed into the server for a week), and everything is set up the way it should be for it to work. Plus, it's working fine for other sites! I'm about to pull my hair out trying to find out what's wrong, so I was hoping y'all could help.

    Passenger is not working on kerricklong.com - I only get the "It works!" Apache default page. If I look at the headers, it's not even serving the X-Powered-By: Phusion Passenger (mod_rails/mod_rack) header that I get on my other (currently working) Passenger-powered sites on the same server running Ubuntu Server 10.04. The following is in my /etc/apache2/sites-available/kerricklong.com file, but it's identical (with names and paths changed) to the configuration file for the site that is working:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName kerricklong.com
            ServerAlias *.kerricklong.com
            DocumentRoot /redacted/path/to/kerricklong.com/public
            ErrorLog /redacted/path/to/kerricklong.com/logs/error.log
            <Directory /redacted/path/to/kerricklong.com/public>
                Allow from all
                Options -MultiViews
                Include /etc/apache2/h5bp.conf
            </Directory>
            php_flag engine off
        </VirtualHost>

    I've got the necessary tmp/, logs/, and public/ directories, along with config.ru. I've also run sudo a2dissite then sudo a2ensite, sudo service apache2 restart, and rebooted the server to try to fix it. What gives?
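
    A hedged diagnostic (editorial): landing on the default "It works!" page means the request is being served by the default vhost, i.e. it never reaches this VirtualHost at all - a failure one step before Passenger. Apache can print its parsed vhost map, which usually pinpoints this quickly:

        # Sketch - standard Apache tooling on Ubuntu:
        apache2ctl -S                      # how Apache maps names to vhosts
        apache2ctl -t                      # config syntax check; a broken Include
                                           # can knock one site out silently
        ls -l /etc/apache2/sites-enabled/  # confirm the a2ensite symlink exists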


  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push for git working through the "smart-http" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git
            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                    info/refs | \
                    objects/info/[^/]+ | \
                    git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2
            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users, but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    Looking at the Apache2 access.log, it appears to me that WebDAV is being attempted and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the version of git and/or apache that I am using, perhaps? BTW: I have read all the git http related questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as duplicate.
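
    A hedged thought (editorial): git falls back to PROPFIND/WebDAV exactly when the smart-HTTP endpoint doesn't answer a request, which points at the ScriptAliasMatch not matching the push URLs. A simpler, regex-free variant along the lines of the git-http-backend documentation hands everything under /git/ to the CGI and is easier to get right:

        # Sketch - replaces the ScriptAliasMatch approach:
        SetEnv GIT_PROJECT_ROOT /var/www/git
        SetEnv GIT_HTTP_EXPORT_ALL
        ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
        <Directory "/usr/lib/git-core">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    With this in place the repositories stay at /var/www/git and the clone/push URL remains http://localhost/git/hello.git.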


  • Updating modules on VPS hosted under OpenVZ

    - by tertle
    Been trying to install OpenVPN on a VPS, but have run into a few problems when trying to start the OpenVPN server:

        Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
        ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
        'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
        'iptables-restore: line 46 failed']:
        internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    Anyway, after a bit of playing around and some advice, I found that the Linux kernel and the modules don't match on my server: uname -r returns 2.6.18-028stab070.14, while ls /lib/modules returns 2.6.18-028stab070.7. The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is: is it possible for me to update the modules on a VPS, and if so, how would I do this? Or is this something I'll need to get my host to do? Thanks in advance.


  • SORT empties my file?

    - by Jonathan Sampson
    I'm attempting to sort a CSV on my machine, but I seem to be erasing the contents each time I use the sort command. I've basically created a copy of my CSV lacking the first row:

        sed '1d' original.csv > newcopy.csv

    To confirm that my new copy exists without the first row, I check with head:

        head 1 newcopy.csv

    Sure enough, it finds my file and shows me the original second row (now the first row). My CSV consists of numerous values separated by commas:

        Jonathan Sampson,,,,[email protected],,,GA,United States,,
        Jane Doe,Mrs,,,[email protected],,,FL,United States,32501,

    As indicated above, some fields are empty. I want to sort on the email address field, which is either 4 or 5, depending on whether the sort command uses a zero-based index. So I'm trying the following:

        sort -t, +4 -5 newcopy.csv > newcopy.csv

    I'm using -t, to indicate that my fields are terminated by the comma rather than a space. I'm not sure if +4 -5 actually sorts on the email field or not - I could use some help here. And then newcopy.csv > newcopy.csv to overwrite the original file with the new sort results. After I do this, if I try to read the first line:

        head 1 newcopy.csv

    I get the following error:

        head: cannot open `1' for reading: No such file or directory
        ==> newcopy.csv <==

    Sure enough, if I check my directory, the file is now empty: 0 bytes.
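
    A hedged explanation of what's probably happening (editorial): the shell truncates newcopy.csv via the > redirection before sort ever reads it, so sort sees an empty input - redirecting a command's output onto its own input file empties it. sort has the -o flag for exactly this in-place case, and -k is the modern way to pick the field:

        # Sketch of the corrected commands:
        sort -t, -k5,5 -o newcopy.csv newcopy.csv   # sort on field 5 (1-based), safely in place
        head -n 1 newcopy.csv                       # -n 1, not a bare '1', prints the first line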


  • Using npm install as an MS-Windows system account

    - by Guss
    I have a node application running on Windows, which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when I try to install a private module from a private git repository:

        npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone under the same account works fine. Any ideas?
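
    A hedged workaround (editorial): the path git can't return to is inside the LocalSystem profile's npm cache (...\systemprofile\AppData\Roaming\npm-cache), so pointing npm's cache at a directory that definitely exists and is writable by the service account may get past it. A sketch, with D:\npm-cache as a placeholder:

        rem Sketch - run once in the automation context:
        mkdir D:\npm-cache
        npm config set cache D:\npm-cache --global
        npm install -d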


  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it.

    The thing is, I wanted to install a CGI app on my Ubuntu 10.04 machine - one built from the samples that come with the gSOAP toolkit. My intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this "How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server" guide. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. I was able to access his CGI from my ASP.NET app, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And, looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize

    I am pretty new to Ubuntu and could not believe that Apache and Synaptic had botched the installation, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution, but only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root), and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I wonder why the folder was not created in the first place. Best, Julen.
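
    A hedged explanation of the disappearing folder (editorial): on many distributions /var/run lives on a tmpfs, so a directory created there by hand vanishes on reboot and the init script is expected to recreate it. If the packaged script doesn't, a small persistent workaround is:

        # Sketch - e.g. near the top of /etc/rc.local:
        mkdir -p /var/run/apache2
        chown root:www-data /var/run/apache2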


  • Safe place to put an executable file on Windows 7 (and Windows XP)

    - by Ricket
    I'm working on a tweak to our logon script which will copy an executable file to the local hard drive and then, using the schtasks command, schedule a task to run that executable daily. It's a standalone executable file, and when run it creates a folder in the working directory (which would be the same directory as the executable in this case).

    In Windows XP, of course, it can be put anywhere - I'd probably just throw it in C:\SomeRandomFolder and let it be. But this logon script also runs on Windows 7 64-bit machines, and those are trickier with UAC and all that. The user is a local administrator but UAC is enabled, so I'm pretty sure the executable would be blocked from copying to a location like C:\ or C:\Program Files (since those seem to be at least mildly protected by UAC). The scheduled task needs to run under the user's profile, so I can't just run it as SYSTEM and ignore the UAC boundaries; I need to find a path which the user can copy into.

    Where can I copy this standalone executable file, so that the copy operation succeeds without a UAC prompt on Windows 7, the path is either common to both WinXP and Win7 or uses environment variables, and the scheduled task running with user permissions is able to launch the executable?
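
    A hedged candidate (editorial; the folder, task and file names below are placeholders): the machine-wide application-data directory exists on both systems behind the %ALLUSERSPROFILE% variable (C:\ProgramData on Windows 7, C:\Documents and Settings\All Users on XP), and by default users can create their own subfolders there without elevation. A sketch of the logon-script fragment:

        rem Sketch - MyCompany, MyTask and the share path are placeholders:
        set DEST=%ALLUSERSPROFILE%\MyCompany
        if not exist "%DEST%" mkdir "%DEST%"
        copy /y "\\server\netlogon\MyTask.exe" "%DEST%\"
        schtasks /create /f /sc daily /tn "MyTask" /tr "\"%DEST%\MyTask.exe\""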


  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded, and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local", which it should be. How can I change this to make my user profile accessible?

    The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (C:\Users) to D:\Users, removed C:\Users, and then used mklink.exe to create a directory symbolic link C:\Users -> D:\Users. It has worked like a charm since I did it.

    Today, I made a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and rebooted again. Same issue.
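
    A hedged pointer (editorial): a Status of "Backup" typically corresponds to Windows having marked the profile's registry entry under ProfileList with a .bak suffix after a failed load - which would fit the symlink being broken while D: had no drive letter. Inspecting the key is safe and shows where each SID's ProfileImagePath points:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s

    The usual careful fix is to export the key first as a backup, then make the un-suffixed SID's ProfileImagePath point at the real profile path again (removing any duplicate/.bak entry) - offered here as an assumption to verify, not a guaranteed recipe.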


  • Robocopy launches and then hangs/just sits there

    - by NateO
    I'm setting up an archive process to store old files on an external hard drive. The computer in question is running Windows 7 Pro 32-bit. We have a server folder with 150,000+ files in it, most of which are pretty small (below 200 KB). I'm trying to use robocopy in a batch file to do this. It was working fine the other day; now all it does upon launch is sit there. It shows me all the options and whatnot, and also lists the number of files in the directory and the directory itself, but it never gets past that line. If I switch the destination to the local C: drive, it eventually starts copying files. Is there something in my batch file that needs to change? Or could there be a problem with the external Western Digital drive that I'm using? The WD drive currently holds about 175,000 files. Here is the one-line batch file I have:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* /R:2 /W:10 /MINAGE:15 /MOV /B /XJ /XF "blank_test.pdf"

    Thanks for any tips or ideas. Nate
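
    A hedged debugging step (editorial): robocopy may be quietly scanning, retrying or waiting rather than truly hung, and /B (backup mode) in particular behaves differently when the privilege can't be asserted. Making the run verbose and logging it shows which it is - a sketch:

        rem Sketch - same job, with progress visible and /B dropped for the test:
        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* ^
            /R:2 /W:10 /MINAGE:15 /MOV /XJ /XF "blank_test.pdf" ^
            /NP /V /TEE /LOG:C:\robocopy-archive.log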


  • nconf deployment.ini configuration for a basic Nagios server on CentOS 6.2

    - by jshin47
    I have set up nconf and Nagios, but I cannot figure out how to configure deployment.ini to properly deploy the generated configuration to /usr/local/nagios/etc. Here are the directory listings of interest:

        [jshin@nag0 tmp]$ ls
        Default_collector  global
        [jshin@nag0 tmp]$ cd Default_collector/
        [jshin@nag0 Default_collector]$ ls
        advanced_services.cfg  hostgroups.cfg  service_dependencies.cfg  services.cfg
        host_dependencies.cfg  hosts.cfg       servicegroups.cfg
        [jshin@nag0 Default_collector]$ cd ..
        [jshin@nag0 tmp]$ cd global/
        [jshin@nag0 global]$ ls
        checkcommands.cfg  contacts.cfg        misccommands.cfg       timeperiods.cfg
        contactgroups.cfg  host_templates.cfg  service_templates.cfg
        [jshin@nag0 global]$ cd ..
        [jshin@nag0 tmp]$ cd /usr/local/nagios/etc/
        [jshin@nag0 etc]$ ls
        cgi.cfg  htpasswd.users  nagios.cfg  objects  resource.cfg
        [jshin@nag0 etc]$ cd objects/
        [jshin@nag0 objects]$ ls
        commands.cfg  localhost.cfg  switch.cfg     timeperiods.cfg
        contacts.cfg  printer.cfg    templates.cfg  windows.cfg

    Here is my deployment.ini (pretty much the default settings):

        ;; LOCAL deployment ;;
        [extract config]
        type        = local
        source_file = "/var/www/html/nconf/output/NagiosConfig.tgz"
        target_file = "/tmp/"
        action      = extract

        [copy collector config]
        type        = local
        source_file = "/tmp/Default_collector/"
        target_file = "/usr/local/nagios/etc/Default_collector/"
        action      = copy

        [copy global config]
        type           = local
        source_file    = "/tmp/global/"
        target_file    = "/usr/local/nagios/etc/global"
        action         = copy
        reload_command = "service nagios restart"

    What I am wondering is why the directory structure that the default deployment.ini seems to suggest, with Default_collector and global, is different from the one that Nagios has by default, with only a folder called objects. What am I missing? Or, more importantly, how does your deployment.ini look?
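
    A hedged answer sketch (editorial): the two layouts don't have to match. nconf deploys its own Default_collector/ and global/ trees, and the stock objects/ directory is only the sample configuration; Nagios reads whatever nagios.cfg points at. Something like the following, assuming the deployment paths above:

        # Sketch - in /usr/local/nagios/etc/nagios.cfg:
        cfg_dir=/usr/local/nagios/etc/Default_collector
        cfg_dir=/usr/local/nagios/etc/global
        # ...comment out the sample cfg_file= lines for objects/*.cfg they replace,
        # then verify with: /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg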


  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP (Ubuntu 11.04) 1.7.4 for a Drupal site. I've actually git pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I'll visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly, with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

        NameVirtualHost bbk.loc:*
        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes execCGI
            AllowOverride None
            Order Allow,Deny
            Allow From All
        </Directory>
        <VirtualHost bbk.loc>
            DocumentRoot /home/chris/workspace/bbk/html
            ServerName bbk.loc
            ErrorLog logs/bbk.error
        </VirtualHost>

    bbk.error log file:

        [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
        [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
        [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:

        - Moved the Rewrite module loading to load before the cache module: http://drupal.org/node/43545
        - Verified mod_rewrite works with an .htaccess file

    Any ideas why mod_rewrite might not be working?
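
    A hedged observation (editorial): the <Directory> block sets AllowOverride None, which makes Apache ignore .htaccess files entirely - and Drupal's clean URLs are implemented as mod_rewrite rules in its .htaccess, which would explain why only query-string URLs resolve. A sketch of the change:

        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes execCGI
            AllowOverride All      # let Drupal's .htaccess rewrite rules load
            Order Allow,Deny
            Allow From All
        </Directory>
        # then confirm LoadModule rewrite_module is enabled in XAMPP's httpd.conf
        # and restart Apache.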


  • reverse proxy not rewriting to https

    - by polishpt
    I need your help. I'm having problems with a reverse proxy rewriting to https. I have an Alfresco app running on top of Tomcat, and as a front end an Apache server - its sites-enabled file looks like this:

        <VirtualHost *:80>
            ServerName alfresco
            JkMount /* ajp13_worker
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature Off
        </VirtualHost>

    I also have a reverse proxy server running on a second machine, and I want it to rewrite queries to https. Its sites-enabled file looks like this:

        <VirtualHost 192.168.251.50:80>
            ServerName alfresco
            DocumentRoot /var/www/
            RewriteEngine on
            RewriteRule (.*) https://alfresco/ [R]
            LogLevel warn
            ErrorLog /var/log/apache2/alfresco-80-error.log
            CustomLog /var/log/apache2/alfresco-80-access.log combined
            ServerSignature Off
        </VirtualHost>

        <VirtualHost 192.168.251.50:443>
            ServerName alfresco
            DocumentRoot /var/www/
            SSLEngine On
            SSLProxyEngine On
            SSLCertificateFile /etc/ssl/certs/alfresco.pem
            SSLCertificateKeyFile /etc/ssl/private/alfresco.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /alfresco http://192.168.251.50:8080/alfresco
            ProxyPassReverse /alfresco http://192.168.251.50:8080/alfresco
            LogLevel warn
            ErrorLog /var/log/apache2/alfresco-443-error.log
            CustomLog /var/log/apache2/alfresco-443-access.log combined
            ServerSignature Off
        </VirtualHost>

    Now, ProxyPass works: when I go to alfresco/alfresco in a browser, the application opens. But the rewrite to https doesn't work. Please help. Regards.

    Observations:

        - When I go to 192.168.251.50, the Tomcat configuration page shows up.
        - When I go to 192.168.251.50:8080 - the same as above.
        - When I go to 192.168.251.50:8080/alfresco - the Alfresco app page shows up.
        - When I go to alfresco/alfresco - same as above.
        - When I go to https://alfresco, I get an error connecting to the server.
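
    A hedged guess (editorial): RewriteRule (.*) https://alfresco/ [R] redirects every request to the site root and drops the original path, and without an L flag the behaviour is harder to predict. A sketch that preserves the path:

        <VirtualHost 192.168.251.50:80>
            ServerName alfresco
            RewriteEngine on
            RewriteRule ^(.*)$ https://alfresco$1 [R=301,L]   # $1 keeps the requested path
        </VirtualHost>

    And if the port-80 vhost isn't being hit at all (the Tomcat page appearing on plain 192.168.251.50 in the observations hints something else may own that port), running apache2ctl -S to see which vhost answers for 'alfresco' would be a sensible first step.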


  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already looked for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.dev
            ServerAlias mysite.dev
            DocumentRoot /var/www/mysite.dev/httpdocs/
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName livesite.com
            ServerAlias www.livesite.com
            DocumentRoot /var/www/livesite.com/httpdocs/
            <Directory /var/www/livesite.com/httpdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the IP set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I have tried having separate /etc/apache2/sites-enabled/ files (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) - with the corresponding sites-available files, of course - but achieving the same results. I have tried to have a peek at error.log and access.log, but there's nothing I can see. My httpd.conf contains:

        AccessFileName .htaccess

    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info, please let me know and I will do my best to provide all necessary info. Thanks
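
    A hedged diagnosis (editorial): "every name lands on the first vhost" is the classic symptom of name-based virtual hosting not being enabled for the address; Apache 2.2 then treats the first <VirtualHost *:80> as the default for all requests. The switch is explicit:

        # Sketch - in /etc/apache2/ports.conf (or at the top of the vhost file):
        NameVirtualHost *:80
        # then: apache2ctl -S    (shows how Apache mapped names to vhosts)
        # and double-check the client hosts file really resolves mysite.dev to this server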


  • Reporting Services 2008: Virtual directories not visible in IIS7

    - by Ryan Barrett
    I'm having some problems with Reporting Services on Windows Server 2008 Standard. I've installed Server 2008 as a standalone web server (with the roles/features of a web application server). On top of that, I've installed SQL Server 2008 Standard with Reporting Services (and the rest of the BI tools).

    The problem is, I want to modify the rights on the virtual directories. However, the virtual directories aren't appearing in the IIS 7 management tool. I can connect to Reporting Services, albeit only with the local Windows admin account. I can download Report Builder fine from a session on the server (but not from any clients). I've tried removing the default website from IIS, and that stops the Reporting Services website from working.

    The machine (a VM) isn't for production use - it's used on a closed network internally for testing and development purposes. I need to be able to let my fellow developers log in without a password, and they must be able to install Report Builder 2.0. It must not be linked to a domain or Active Directory in any form. Google isn't much help; the results suggest I modify the virtual directory. Does anyone have any suggestions?


  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition; together they formed a volume group. On it I had two LVs, one for my / directory and one for my /home directory. Yesterday, the drive holding my / directory failed. I'm trying to recover at least my /home directory. What I've done so far:

        1. Boot a live system
        2. Extract the LVM2 metadata from the working HDD using dd
        3. Copy the metadata to /etc/lvm/backup/vg0

    Now I'm trying to do this:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings in /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: what might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible. D:)

    Edit: OK, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine'; I could mount my LVM partitions partially without problems. (Will answer the question in 4 hours. But if anyone still has some hints/tips, please share! :)
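
    A hedged sketch of the usual recovery path (editorial; the poster's edit suggests a newer live CD resolved the tooling side anyway): with a metadata backup in /etc/lvm/backup, the sequence is typically pvcreate with --restorefile plus --uuid, then vgcfgrestore, then activating the volume group in partial mode since one PV is gone:

        # Sketch - vg0, the UUID and the LV name are taken from the backup file:
        pvcreate --uuid "[uuid of working hdd]" --restorefile /etc/lvm/backup/vg0 /dev/sdb2
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        vgchange -ay --partial vg0          # bring up whatever LVs survive
        mount -o ro /dev/vg0/home /mnt      # mount read-only and copy the data off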


  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup and I plan to offer free non-commercial hosting (or, in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad, as in theory I could allow a user to run just one app on his web space, but even in this case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the possibility to automatically map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such, I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, i.e. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details:
        - On nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server.
        - Wished-for behavior with the new setup: a user can add a new folder with a new app and it runs on lighttpd + FastCGI without any extra configuration of the web server.


  • PowerShell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhds to copy, then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2), into a directory compressed with NTFS compression.

    One of the files isn't copied. It's relatively big, ~68 GB; the others are 20 GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file was copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | ForEach-Object {
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach ($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
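
    A hedged sanity check (editorial): with -ErrorAction SilentlyContinue an exception can be swallowed before anything reaches the log, so comparing source and destination sizes after each copy catches a silent failure regardless of what Copy-Item reports. A PowerShell 2.0-compatible sketch to drop in after the Copy-Item line:

        # Sketch - verify the copy actually landed at full size:
        $dest = Join-Path $VMDestination $_.Name
        $copied = Get-Item $dest -ErrorAction SilentlyContinue
        if ($copied -eq $null -or $copied.Length -ne $_.Length) {
            Add-Content $logFileName "`tSIZE MISMATCH OR MISSING: $dest"
        }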


  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into an external LDAP server, so we can authenticate externally; basically, I'm trying to be able to use the same credentials internally and externally.

    I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" \
            objectClass uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import into my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results look like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is the list of users from the top example, filtered based on their group memberships, but it looks like membership is set on the group side rather than on the user account side. There must be a way to filter this down and only export what I need, right?
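
    A hedged approach (editorial): plain LDAP has no server-side join, so one way is to read the group's memberUid values first and build an OR filter over uid for the user export. A shell sketch ('groupname' is a placeholder):

        MEMBERS=$(ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=groupname,cn=groups,dc=server,dc=domain,dc=net" memberUid \
            | awk '/^memberUid:/ { printf "(uid=%s)", $2 }')
        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" \
            "(|${MEMBERS})" objectClass uid uidNumber cn userPassword > members.ldif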


  • IIS7 binding to subdomain causing authentication errors (TFS 2010)

    - by Tommy Jakobsen
    I'm trying to bind an IIS web site (Team Foundation Server 2010) to a subdomain, which is causing authentication errors. First I'll explain what I've done to set it up. This is the first time I've done this, so please correct me if I'm wrong.

    The web server is a stand-alone Windows Server 2008 R2 x64, running IIS7 with .NET Framework 4. I have the following A records pointing to my server:

        server.mydomain.com
        *.server.mydomain.com

    So all subdomains of server.mydomain.com point to the server. In IIS7 I have a web site (TFS 2010) on port 8080, with a virtual directory (named tfs) that is using Windows Authentication. I have one binding on the web site pointing to all unassigned IP addresses, port 8080, with a host name of tfs.server.mydomain.com.

    Now, shouldn't I be able to access the virtual directory through:

        http://tfs.server.mydomain.com/tfs

    That is not working. However, I can access it through:

        http://tfs.server.mydomain.com:8080/tfs

    But it won't let me authenticate using a Windows account (Server\Username) - a Windows account that I can authenticate with when accessing the site through http://localhost:8080/tfs. What am I missing here?
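
    Two hedged notes (editorial): first, a binding that names port 8080 only answers URLs carrying :8080, so the port-free URL needs an additional *:80 binding for the same host name. Second, Windows authentication failing against a fully-qualified host name while localhost works is the classic NTLM loopback-check symptom when testing from the server itself, commonly addressed via BackConnectionHostNames. Sketches of both (the site name is a placeholder):

        rem Add an :80 binding for the host name:
        %windir%\system32\inetsrv\appcmd set site "Team Foundation Server" ^
            /+bindings.[protocol='http',bindingInformation='*:80:tfs.server.mydomain.com']

        rem Register the name for the loopback check, then reboot or restart IIS:
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
            /v BackConnectionHostNames /t REG_MULTI_SZ /d tfs.server.mydomain.com /f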


  • Citrix Metaframe/RD - screen refresh weirdness

    - by southof40
    I access a client's Windows 2003 machine (Xen virtualization) using RD over Citrix MetaFrame. Everything used to be fine; some weeks ago, things turned bad!

    All is well initially, but after, say, 5 minutes the screen stops refreshing. Rather weirdly, you can still proceed after a fashion, because you can force a refresh by making the RD window go through a restore/maximise cycle (this is only possible using the ALT-BREAK shortcut, as everything else is locked up). That allows you to continue by typing something and hitting ALT-BREAK to see the result. Using menus is just not possible at all.

    There are indications that clearing the Java cache between sessions helps. Also, the lockup happens more quickly if you make lots of things happen on the screen - for instance, a directory listing of a big directory will often trigger the lockup, as will opening a dense Excel workbook and scrolling it.

    Any MetaFrame veterans out there who recognise these symptoms? I'd be very grateful, as it's driving me nuts.


  • 2nd instance of MySQL closes/doesn't start, with no warnings or errors?

    - by acidzombie24
    I have an external HD and I'd like to run a second MySQL instance on it. I used the Windows installer to install/configure mysqld as a service on Windows 7. I took the my.ini from C:\Program Files\MySQL\MySQL Server 5.5\my.ini, then edited the port (client and mysqld), datadir, and innodb_data_home_dir. After that I ran this command:

        "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini"

    The first run produced an error about the innodb_data_home_dir directory not existing. After fixing that, I ran the command again: mysqld simply starts up for a second and then immediately exits, with no message in my command prompt. I know the command-line args are correct, as I can see the mysqld service using the same ones, except with a different my.ini path. Also, it did tell me about the missing directory earlier, so I know it is reading the new ini file. How do I figure out why this second instance of mysqld is exiting? How do I get two instances running? I'm using v5.5.
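
    A hedged first step (editorial): on Windows, mysqld writes startup errors to its error log rather than to the console by default, so a silent exit usually just means the message went to a file. The --console switch forces diagnostics to the terminal:

        rem Sketch - same command, with errors printed to the console:
        "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" --console

    Also worth checking: the .err file inside the new datadir, and that the second instance really has its own port, datadir and socket/shared-memory names so it isn't colliding with the running service.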

