Search Results

Search found 75614 results on 3025 pages for 'file location'.


  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but Nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }
        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access to the page via GETs, but require authorization when using hg push to the same URL, which sends a POST request.
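
    For reference, the usual Nginx idiom for "reads are open, writes need credentials" is to leave fastcgi_pass unconditional and put only the auth directives inside limit_except, with no allow/deny rules. A minimal sketch reusing the socket and htpasswd paths above, otherwise untested against this setup:

        location /hg {
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param REMOTE_USER $remote_user;
            # (PATH_INFO handling from the original config omitted in this sketch)
            # GET (which also covers HEAD) stays anonymous; POST from hg push
            # and any other method must authenticate
            limit_except GET {
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
        }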

    Read the article

  • Is this way of using Excel 2007 Pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background:

      - We need to consolidate sales data across the country to do analysis.
      - Our Internet connection/IT expertise/IT investment is not very strong, so a full BI solution is out of the question.
      - I tried several SaaS BI solutions (GoodData, Zoho Reports) and while they're good, they don't seem to fully support what we need.
      - We're looking at about 2 million records for every 2 months.

    My current approach:

      - Our 10 sites currently gather data from all their branches and consolidate it into one Excel file with a pivot table and embedded source data.
      - In HQ, I will ask the 10 sites to send back those Excel files periodically.
      - We will import those Excel files into our MSSQL server.
      - There will be a master Excel file with the same pivot table (as in the site Excel files), whose data source is the MSSQL server.

    More details:

      - For testing, I currently use MSSQL 2008 Express on my laptop.
      - So far I have imported our transactions for the past 2 months: 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). DB size is ~600 MB.
      - The master Excel file, not including the source data, is < 10 MB. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?).
      - I tried using the pivot table (drag-and-drop fields) and performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are:

      - If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with 15 million rows in 1 table in SQL Express?
      - Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
      - Any other suggestions on how we can do this better?
      - Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI you can recommend?

    Thanks

    Read the article

  • Guide To Setting Up Access To PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 With A Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; I have set up its own sites-available configuration file and symlinked it to sites-enabled. I installed phpMyAdmin via sudo apt-get install phpmyadmin and followed the instructions. I then added the following to my /etc/nginx/nginx.conf and restarted nginx:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            # make sure all php files are processed by fast_cgi
            location ~ \.php {
                # try_files $uri =404;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I have also added the appropriate DNS A records for phpmyadmin.domain.com, but phpmyadmin.domain.com just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?

    Read the article

  • Nginx map module 301 redirecting

    - by Reinier Korth
    I've rebuilt my website in Ruby on Rails and now I want to 301 redirect a lot of old URLs using Nginx's http://wiki.nginx.org/HttpMapModule For some reason I can't get it to work. It works fine without the rewrite ^ $new permanent; line. Does anyone see what I'm missing? This is my nginx.conf:

        server {
            server_name example.com;
            return 301 $scheme://www.example.com$request_uri;
        }

        # 301 redirect list
        map $uri $new {
            /test123 http://www.example.com/test123;
            /bla http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;
            rewrite ^ $new permanent;
            root example/public;

            location ^~ /assets/ {
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            try_files $uri/index.html $uri @unicorn;
            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://unicorn-<%= application %>;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
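
    One documented wrinkle with this pattern: "rewrite ^ $new permanent;" fires for every request, including ones where the map produced an empty $new, so the usual form guards it with a check on the mapped value. A hedged sketch along the lines of the HttpMapModule examples, not tested against this config:

        map $uri $new {
            default     "";
            /test123    http://www.example.com/test123;
            /bla        http://www.example.com/bladiebla;
        }

        server {
            server_name www.example.com;

            # only redirect when the map actually matched something
            if ($new) {
                return 301 $new;
            }

            # rest of the original server block (root, locations, proxy, etc.) unchanged
        }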

    Read the article

  • How to allow users to transfer files to other users on Linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1 PB) controlled by traditional Unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem).

    We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. A user would type something like the following to give a file to another user:

        > give username-to-give-to filename-to-give

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

        > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user.

    This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production.

    Does anyone have another method they use to transfer extremely large files between users when only traditional Unix permissions are available?

    Read the article

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache + mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS.

    The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and Apache (creating a VirtualHost). Something is missing and I don't know what it is. I think I lack networking skills and that's the big problem. Trying to host the app on VirtualBox adds some difficulty because I don't know well what IP to use.

    This is what I've got. File placed at /etc/apache2/sites-available:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.my-domain.com
            ServerAlias my-domain.com

            Alias /media /path/to/my/project/media
            DocumentRoot /path/to/my/project

            WSGIScriptAlias / /path/to/your/project/apache/django.wsgi

            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    django.wsgi file:

        import os, sys

        wsgi_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.dirname(wsgi_dir)
        sys.path.append(project_dir)
        project_settings = os.path.join(project_dir, 'settings')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()
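
    Since there is no DNS yet, one hedged way to exercise a name-based VirtualHost like the one above is to point the hostname at the VM from the machine running the browser, using a hosts-file entry and a bridged (or host-only) VirtualBox network. A sketch with a hypothetical guest IP (check the real one with ifconfig inside the guest):

        # /etc/hosts on the client machine
        # (C:\Windows\System32\drivers\etc\hosts on Windows)
        192.168.1.50    my-domain.com www.my-domain.com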

    Read the article

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then it gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107), but even that doesn't work correctly from here at work.

    When I do the following from another location, I get a nice long CSS file, as expected:

        telnet 216.38.160.107 80
        GET http://cache.boston.com/universal/css/hp_bgcom.css

    From here, I get an error (trimmed for size):

        HTTP/1.1 400 Bad Request
        Your request could not be processed.
        Request could not be handled
        This could be caused by a misconfiguration, or possibly a malformed request.
        For assistance, contact your network support team.

    Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful:

        Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107]
        over a maximum of 30 hops:

          1    *    *    *    Request timed out.

    Since it's not really work-related, I don't really want to bring it up with our network team unless I know what's going on, or if there's some risk to the network (e.g. malware or something).
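
    The wording of that 400 page ("contact your network support team") is typical of an intermediate proxy rather than the origin server. One hedged way to narrow it down is to send a fully formed HTTP/1.1 request with a Host header and compare the answer from work and from outside; a sketch using the same IP (the trailing blank line is required to end the request):

        telnet 216.38.160.107 80
        GET /universal/css/hp_bgcom.css HTTP/1.1
        Host: cache.boston.com
        Connection: close
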

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users over HTTP. Now free space is coming to an end and we are exploring different ways of extending our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

      1. Add a couple of cheap servers with 4 disks (RAID1?) and run some distributed filesystem on top, like GlusterFS.
         Pros: hopefully, we will see all our disks as a single flat filesystem; just dump content into it and be done.
         Cons: could be tricky to configure and to handle faults.

      2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or video) is placed on two randomly selected servers.
         Pros: no need to deal with RAID, as content is duplicated; a single server failure does not bring down any part of the content; doubled distribution capacity (as any single file can be downloaded from either of the two servers hosting it).
         Cons: requires some scripting for distributing content and adding/removing servers.

    Do we miss any other ways? Which of the aforementioned options seems to be the best?

    Read the article

  • Using <VirtualHost> over .htaccess for mod_rewrite

    - by DarkWolffe
    I have a LAMP stack installed on Ubuntu 12.10 with three sites created under /etc/apache2/sites-available, all of which are working. My problem lies in wanting to use those files instead of .htaccess for appending the .php file extension to the URL. My file currently stands as such:

        # The VGC
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName thevgc.net
            ServerAlias www.thevgc.net
            DocumentRoot /var/www/www

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>

            <Directory /var/www/www/>
                Options Indexes +FollowSymLinks +MultiViews Includes
                RewriteEngine On
                RewriteBase /
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.*)$ $1.php [L,QSA]
                AddType application/x-httpd-php .php
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I'm almost certain I'm doing something wrong. All I know is that my .htaccess files refused to append the extension, or rather find the file that has the same name and load that file, so I wanted to go about it this way instead. Any suggestions? Here is an example page from my site.
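
    Two things worth checking in that Directory block, independent of the .htaccess question: Apache rejects an Options line that mixes bare keywords with +/- prefixed ones (the block above has "Indexes +FollowSymLinks +MultiViews Includes"), and MultiViews can resolve extensionless URLs by itself before mod_rewrite ever runs. A hedged variant of just that one directive:

        # either all-bare...
        Options Indexes FollowSymLinks Includes
        # ...or all-relative, explicitly dropping MultiViews
        # Options +Indexes +FollowSymLinks -MultiViews +Includes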

    Read the article

  • NGINX returning 404 error on a valid URL

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40 character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following url: http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS is returning a 404 error. This url format has been working for 6 months now without an issue, and other urls following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks! Update: Adding a slash to the end of the url allowed it to be handled properly... Would still like to get to the bottom of the issue though. Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any url that ended in say "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss ) was interpreted as a request for a file and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this: location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ { with this: location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ { And now urls ending in css, js, png, etc, are properly routed through the front controller. Hopefully that helps someone else out.

    Read the article

  • Password-less login into localhost

    - by Brad
    I am trying to set up password-less login into my localhost because it's required for a tutorial. I went through the normal steps of generating an RSA key and appending the public key to authorized_keys, but I am still prompted for a password. I've also enabled RSAAuthentication and PubKeyAuthentication in /etc/ssh_config. Following other suggestions I've seen, I tried:

        chmod go-w ~/
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    But the problem persists. Here is the output from ssh -v localhost:

        (tutorial)bnels21-2:tutorial bnels21$ ssh -v localhost
        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to localhost [::1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/bnels21/.ssh/id_rsa type 1
        debug1: identity file /Users/bnels21/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
        debug1: match: OpenSSH_5.9 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 1c:31:0e:56:93:45:dc:f0:77:6c:bd:90:27:3b:c6:43
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/bnels21/.ssh/known_hosts:11
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/bnels21/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering RSA public key: id_rsa3
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/bnels21/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:

    Any suggestions? I'm running OS X 10.8.
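
    One detail that is easy to trip over here: /etc/ssh_config configures the ssh client, while the daemon that decides whether to accept the key reads sshd_config (on OS X 10.8 that is /etc/sshd_config). A hedged check, assuming the stock paths:

        # confirm the daemon actually allows public key auth
        sudo grep -E '^(PubkeyAuthentication|RSAAuthentication|AuthorizedKeysFile)' /etc/sshd_config
        # new ssh connections pick up changes to this file; no reboot needed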

    Read the article

  • Super slow opening my downloads folder

    - by Mark
    I have an exe file in my download folder that I half-downloaded through uTorrent (it's not piracy, it's a legitimate file from people who use BitTorrent to distribute large files). I think I tried to open it while it was still sharing, that is, I did not stop the upload. That actually froze my computer. When I restarted, I set the file to be deleted in uTorrent. Unfortunately, even though uTorrent doesn't see that file anymore, it's still visible in my download folder.

    Whenever I try to open my download folder it literally takes 10 minutes or more. It opens, but is empty, and the blue progress bar needs a long time to complete. After completion I can use the download folder normally, but opening and closing things in that folder takes a long time. I see the exe that I tried to download. I tried to delete it, but it was taking so long (30+ minutes) that I eventually just hit cancel. That doesn't even work, and it was slowing down the computer. I couldn't figure out how to stop the delete so I just pulled the plug.

    Should I just forget about that download folder and set up a new one? Is there something I can do? Thanks.

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:

      - http://www.example.com/foo/bar/ (slash URL)
      - http://www.example.com/foo/bar/index.html (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
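
    Since the internal index rewrite changes $uri but not $request_uri, one hedged way to reject only requests that literally asked for index.html is to test $request_uri instead. A sketch for the server block (if combined with return is one of the safe uses of if, but this is untested here):

        # reject URLs that explicitly named index.html, leave slash URLs alone
        if ($request_uri ~ "/index\.html$") {
            return 404;
        }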

    Read the article

  • How to make a Windows Vista boot / recovery CD from a running system without using an original CD / DVD

    - by Giorgio
    I have just repaired a friend's computer (replaced the motherboard) and now I am trying to repair the Windows (Windows Vista) partitions. Unfortunately, probably because he tried to start it several times after the old motherboard had stopped working (no signal on video), the partition table or the file systems (or both?) now appear to be damaged. I managed to boot Windows a couple of times but could not complete the boot.

    I tried to repair the partition table and file systems using Linux RIP (booting from a USB stick) but the Linux utilities say that the file system is damaged and I should run chkdsk /f from Windows. So I now need a Windows boot CD from which I can boot and run chkdsk or any other Windows utilities that can repair the file system. Is there an easy way to create such a CD? Or can I download it for free somewhere? All the links to Windows Vista boot / repair CDs I have found on the internet refer to non-free stuff. Any hint?

    EDIT: I have a working laptop with Windows Vista installed, so one solution would be to make a bootable CD or USB from it so that I can boot the desktop and run the repair utilities. However, I do not have the Windows Vista installation DVD, because both computers were bought with Windows pre-installed.

    Read the article

  • MS Excel and Access - which is better for reports?

    - by Nat
    Where I work, staff have just started (1 October) to use a basic table in Excel to record sales, with about 10 columns (name, client, renewed, discount, paid, etc.). I record the data (total sold, etc.) every hour and email it to the manager. Each staff member has their own file on the network which they use constantly during the day (e.g. John 08-10.xlsx, John 09-10.xlsx, etc.) and they have been told to save the file after they complete a row of client data.

    I can open each file (in read-only mode) to update the report, but I am sure there must be a way to auto-update their worksheets in real time. I can link worksheets and workbooks to my main workbook, but only manually. Does anyone have suggestions on how to do this in Excel? Or would Access allow me to make a report that shows the sales total for that hour without the staff closing the file or constantly clicking save every few minutes? We use Office 2010. Thanks.

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16 and Subversion 1.6.12. The Eclipse Help/About/Installed Software page says: Eclipse Platform 3.5.2, Subclipse 1.0.0, Version Control with Subversion 1.1.1. The Subclipse wiki I followed is here.

    I have installed the libsvn-java package as discussed. I added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help/About/Configuration settings and both of these lines are listed:

        eclipse.vmargs=-Djava.library.path=/usr/lib/jni
        java.library.path=/usr/lib/jni

    I checked that those files are in those directories. Still, when I check Preferences/Team/SVN, an error dialog shows:

        Failed to load JavaHL Library.
        These are the errors that were encountered:
        no libsvnjavahl.1 in java.library.path
        Incompatible JavaHL library loaded 1.3.x or later required

    I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then I followed the instructions and placed that file inside the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed and each one of them came back with this same error (posting only one for brevity):

        50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
            at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
            at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
            at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
            at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)

        FAILURES!!!
        Tests run: 50, Failures: 0, Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" part is appended to the directory? I assume that's my problem.

    I'm also having a problem with the new repository location, as the directory location of my svn repository is one level above the home directory (which is prepended to whatever I place in the dialog box), resulting in this error:

        svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.

    Read the article

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members, all with their own logins, using this same gob of music. I don't want four copies of the library, only one that all libraries reference. So, what I want to do is:

      1. Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it.
      2. Get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all. Then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by setting permissions so that only my account can access the files, or by breaking write permissions, or any other gotcha? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?

    Read the article

  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi,

    We supply zone files to our customers. Some zone files are big, about 300 MB, and some are quite small, maybe 1 MB. We had an issue where someone set up a script to continually download a file; imagine downloading a 300 MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit the downloads somehow. We currently use the proftpd server. Also note that there are different users for different domains: say, if you want to download the zone file for the .INFO domain, you use a particular user. That user can't download any other zone's file.

    This is what we are looking for:

      - A maximum of 400 MB downloaded per user per day, or even different download limits for different users per day.
      - One connection per user at any time.
      - A maximum of 5 (non-simultaneous) connections per user per day; anyone trying to exceed that gets banned for 24 hours.

    Has anyone used an FTP server with similar restrictions? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
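
    For what it's worth, ProFTPD ships optional modules aimed at this kind of policy: mod_quotatab for per-user transfer quotas and mod_ban for temporary bans, plus the core MaxClientsPerUser directive for concurrent-login limits. A hedged fragment, not verified against any particular ProFTPD version (the quota and ban tables still have to be defined per the module docs):

        # proftpd.conf sketch
        MaxClientsPerUser 1

        <IfModule mod_quotatab.c>
          QuotaEngine on
          # per-user byte limits and tallies are configured via the
          # QuotaLimitTable / QuotaTallyTable directives (see mod_quotatab docs)
        </IfModule>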

    Read the article

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed the Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html and placed it in /var/www (the document root). I am able to access this web page from my host OS (Windows 7) by pointing the browser to:

        http://192.168.2.2/index.html

    where 192.168.2.2 is the IP address of the Ubuntu VM.

    Next, to test various configurations of .htaccess files, I created a new directory in /var/www called members. Inside this directory, I created and placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created a directory path like /var/www/Neon/auth/ and then placed a .htpasswd file inside it. To place the username and hash inside the .htpasswd file, I created a username "neon", calculated the DES hash of a password and placed it inside the .htpasswd file in the format username:hash.

    Now, when I try to access the web page:

        http://192.168.2.2/members/

    it does not prompt me to enter the username and password with a popup box. Instead it just displays the index.html which is placed inside the members directory. I would like to get this configuration working :)
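
    One common reason a .htaccess file under /var/www is silently ignored on Ubuntu is that the default vhost ships with AllowOverride None, so Apache never reads it. A hedged sketch of the change, assuming the stock Ubuntu 11.04 layout:

        # in /etc/apache2/sites-available/default (or the active vhost)
        <Directory /var/www/members>
            AllowOverride AuthConfig Indexes
        </Directory>

        # then reload Apache
        sudo service apache2 reload

    Note also that the AuthUserFile path in the .htaccess above (/www/Neon/auth/.htpasswd) does not match the /var/www/Neon/auth directory described in the text; once overrides are honoured, a wrong path there makes every login attempt fail, with a "could not open password file" entry in the error log.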

    Read the article

  • Windows-to-Linux: PuTTY with SSH and private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is using OpenSSH. The private key is SSH-2 RSA, 1024 bits. I am connecting using SSH-2. I have run into the more common problems already:

      - PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that this file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public-key doctoring process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here.
      - The permissions were at one point wrong as well, specifically meaning that the file had too many permissions. I had to solve this too, and I know it got past this because I no longer see a related error in /var/log/auth.log.
      - I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing.
      - I do have access as a user; after this keyfile stuff fails, I can enter my password instead.

    The only remaining nibble of information I have is that it claims I have the alleged password wrong:

        sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2

    Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else of use in any of the /var/log files. What else could be wrong?
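
    On the "wrong format" front, OpenSSH can do the conversion itself on the Linux side, which removes the manual doctoring step as a variable. A hedged sketch, assuming the PuTTY-exported public key (RFC 4716 / SSH2 format) has been copied to the server as putty_key.pub:

        # convert to OpenSSH format and append to authorized_keys
        ssh-keygen -i -f putty_key.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys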

    Read the article

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So anything that is a "static file" that exists will just be served by nginx; otherwise, it should pass the request off to Apache. Right now, static files are working correctly. However, if something is passed to Apache and it's example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
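
    The stock test page usually means Apache found no name-based vhost matching the Host header on the port it was reached on, so it fell back to the default site. Since nginx is already passing Host through, a hedged sketch of the Apache 2.2 side (hypothetical paths, assuming Apache listens on 8080):

        # ports.conf / httpd.conf
        Listen 8080
        NameVirtualHost 127.0.0.1:8080

        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            ServerAlias www.example.com subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>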

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged in to the web application without needing to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks and basic/form-based authentication for the rest of the users.

    From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and it can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user. The server never sees the passwords of authenticated users.

    If the server were hacked and the keytab file compromised, it would mean that an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
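
    A quick way to sanity-check a keytab like the one described (that it matches the service account and actually carries AES-256 keys) is to inspect and use it directly with the MIT Kerberos client tools, run from a machine that can reach the KDC. A sketch with a hypothetical principal name:

        # list key versions, timestamps and encryption types in the keytab
        klist -k -t -e /path/to/service.keytab
        # obtain a ticket using only the keytab (no password prompt)
        kinit -k -t /path/to/service.keytab HTTP/[email protected]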

    Read the article

  • vim does not preserve symlink over sshfs

    - by HighCommander4
    I'm having some trouble with symlinks and sshfs. I use the '-o follow_symlinks' option to follow symlinks on the server side, but whenever I edit a symlinked file on the client side with vim, a copy of it is made on the server side, i.e. it's no longer a symlink.

    Set up a symlink on the server side:

        me@machine1:~$ echo foo > test.txt
        me@machine1:~$ mkdir test
        me@machine1:~$ cd test
        me@machine1:~/test$ ln -s ../test.txt test.txt
        me@machine1:~/test$ ls -al test.txt
        lrwxrwxrwx 1 me me 11 Jan 5 21:13 test.txt -> ../test.txt
        me@machine1:~/test$ cat test.txt
        foo
        me@machine1:~/test$ cat ../test.txt
        foo

    So far so good. Now:

        me@machine2:~$ mkdir test
        me@machine2:~$ sshfs me@machine1:test test -o follow_symlinks
        me@machine2:~$ cd test
        me@machine2:~/test$ vim test.txt
        [in vim, add a new line "bar" to the file]
        me@machine2:~/test$ cat test.txt
        foo
        bar

    Now observe what this does to the file on the server side:

        me@machine1:~/test$ ls -al test.txt
        -rw-r--r-- 1 me me 19 Jan 5 21:24 test.txt
        me@machine1:~/test$ cat test.txt
        foo
        bar
        me@machine1:~/test$ cat ../test.txt
        foo

    As you can see, it made a copy and only edited the copy. How can I get it to work so it actually follows the symlink when editing the file?
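
    A vim setting worth knowing about here: by default vim may write a file by renaming it aside and creating a new one, which replaces a symlink with a regular file; backupcopy=yes makes it overwrite the original in place. A hedged sketch for the client-side ~/.vimrc, based on vim's documented behaviour rather than this particular setup:

        " write files in place instead of rename-and-recreate,
        " preserving symlinks and hard links
        set backupcopy=yes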

    Read the article

  • Printer Naming Conventions

    - by John
    I am working on building a new AD print server and have been pondering the idea of renaming the printers. Right now we have a mix of names that varies between some that reference location, some that reference functional area, and some that are rather generic. Some examples include the following:

      - Dell 5210 - Building Name
      - HP 4350 - Functional Area
      - Dell Color 3100 - Location and Building Number
      - HP 6500 - Mailroom

    I was curious what other people use to name their printers. What conventions do you use, and how do you design a scheme that can be used for a while without needing to rename them all again? Also, the naming methodology needs to be understandable for regular computer users so that they can relatively easily find the printer they need. I was looking at some ideas from here, but I was not sure how practical some of them are. Any help that you can provide is appreciated.

    Note: I have reviewed many of the postings on here in relation to server naming conventions and found that they were not specific enough for the use case I present above.

    Read the article

  • Reducing both pagefile.sys and hiberfil.sys in Windows 7

    - by greenber
    I recently used Defraggler to consolidate the free space on my D: drive, preparatory to using Disk Manager to break the drive into two areas: one as my "data area" for Windows 7 (normally on my C: drive) and one to experiment with Windows 8. The Defraggler program works so well that I ran it on my C: drive too, and ended up with a lot of free space on both my C: and D: drives. I was very happy.

    And then I woke up the next day and I have virtually no free space left: something like 8 MB on my C: drive and about 3 GB on my D: drive. I then ran Wintree (which gives a nifty graphical representation of disk usage) and found I had a large page file and a large hiberfil. So I temporarily turned off hibernation and reduced the page file size to 2000 MB, then rebooted so that both changes would take effect. It had no effect on the C: drive or the D: drive.

    That makes no sense to me. What caused the free space on each drive to disappear? Why doesn't reducing the page file size and turning off hibernation free up disk space on either the C: or the D: drive? Would it make sense to delete the two files in question and, if so, how do I go about doing that? Safe mode? Thanks. Ross
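
    For what it's worth, the usual way to remove hiberfil.sys on Windows 7 is to disable hibernation with powercfg from an elevated prompt; turning hibernation off elsewhere does not always delete the file. A short sketch:

        REM run in an elevated Command Prompt; removes hiberfil.sys
        powercfg -h off
        REM re-enable later with: powercfg -h on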

    Read the article
