Search Results

Search found 56811 results on 2273 pages for 'pam petropoulos@oracle com'.


  • Look up an Active Directory entry by implicit UPN

    - by Michael-O
    In our company there is a forest-wide UPN suffix, company.com, and almost all user accounts have their explicit UPN set to firstname.lastname@company.com. This value is also stored in the Active Directory userPrincipalName attribute. Now we have an application where users authenticate through Kerberos, so we are given the Kerberos principal, i.e. the implicit UPN. We'd like to look up that user and retrieve several LDAP attributes. Since the iUPN and userPrincipalName no longer match, the lookup is not possible. Is there any "official" way to retrieve such a mapping from Active Directory? My workaround is to perform an LDAP bind against the realm component and search for the sAMAccountName attribute matching the user-id component of the iUPN. Searching for the mere sAMAccountName across the whole forest is not possible, because the value is only unique within a single domain.
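
    A minimal sketch of the sAMAccountName workaround with the OpenLDAP command-line tools, assuming a reachable domain controller; dc1.company.com, the bind account, and jdoe are all hypothetical placeholders:

        # bind to the domain derived from the realm, then search by account name
        ldapsearch -H ldap://dc1.company.com -D "binduser@company.com" -W \
          -b "DC=company,DC=com" \
          "(&(objectClass=user)(sAMAccountName=jdoe))" \
          userPrincipalName mail displayName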

    Read the article

  • Why does an EBS volume attached to an Ubuntu 12.04 EC2 instance as /dev/sdh1 appear as /dev/xvdh1?

    - by Andres
    When attaching an EBS volume on Ubuntu as /dev/sdh1, it actually shows up at /dev/xvdh1. The AWS console still reports it as attached at /dev/sdh1, so it took a while to realize that it was in fact attached, just under a different name. I ran into this problem a long time ago using Ubuntu on EC2, and I just ran into it again: https://forums.aws.amazon.com/post!reply.jspa?messageID=351382 It seems I'm not alone: https://forums.aws.amazon.com/thread.jspa?threadID=68957&tstart=0 I haven't found a good answer as to why this happens or how to fix it. Any ideas?
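
    For what it's worth, on Xen-based instances the paravirtual block driver exposes devices as /dev/xvd*, so the sdh name passed to the AWS API surfaces in the guest as xvdh. A quick sanity check, plus an optional compatibility symlink if some tool insists on the sd name (a sketch; adjust the device letter to your attachment):

        # see which name the kernel actually gave the volume
        ls -l /dev/sdh* /dev/xvdh* 2>/dev/null
        dmesg | grep -i xvd

        # optional compatibility symlink
        sudo ln -s /dev/xvdh1 /dev/sdh1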

    Read the article

  • Can I create an SSH user which can access only a certain directory?

    - by RiMMER
    I have a Virtual Private Server which I can connect to over SSH with my root account, so I can obviously execute any Linux command and access the whole disk. I would like to create another user account which can also access this server using SSH, but only within a certain directory, for example /var/www/example.com/. For example, imagine this user has a huge error.log file (500 MB) located in /var/www/example.com/logs/error.log. Over FTP he would need to download all 500 MB just to view the last lines of the log, but I'd like him to be able to run something like: tail error.log Therefore I need him to be able to access the server over SSH without granting him access to all server areas. How can I do this?
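
    One way to do this is OpenSSH's built-in chroot support (OpenSSH 4.9 or later). A sketch for sshd_config, assuming a hypothetical user named logreader; note that ChrootDirectory requires the chroot path to be root-owned and not group-writable, and that internal-sftp gives file access only, so running tail interactively would additionally require copying a shell and the needed binaries into the chroot:

        Match User logreader
            ChrootDirectory /var/www/example.com
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no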

    Read the article

  • What is the Best Free Linux Gateway

    - by rockinthesixstring
    I'm looking at moving away from using my DIR-825 as a gateway and moving to a Linux box that does it all for me. I've found IPCop, but I'm looking for something with a little more power. My main goal is to be able to point different external domain names at different internal servers:

        backup.example.com -> 192.168.0.5
        home.example.com   -> 192.168.0.1

    I host my DNS on my own dedicated (Windows) server, so I don't know much about doing the gateway part at home (my hosting provider does it all for me). Do any of you know of a free Linux distro that can accomplish what I'm looking for?
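
    Whichever distro ends up on the box, note that the name-based forwarding itself is usually done with a reverse proxy rather than the firewall, since plain NAT never sees host names. A minimal nginx-style sketch using the host names and IPs above (nginx here is an assumption, not a requirement of any particular distro):

        server {
            listen 80;
            server_name backup.example.com;
            location / { proxy_pass http://192.168.0.5; }
        }
        server {
            listen 80;
            server_name home.example.com;
            location / { proxy_pass http://192.168.0.1; }
        }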

    Read the article

  • Using mod_rewrite to shut down a website

    - by moolagain
    Hi, I am trying to shut down a website to everyone except my IP address. I almost have it working: I cannot access www.mysite.com, but I can still access any folder that has another .htaccess file in it. The .htaccess file in /www contains:

        # Use this when the website is down
        RewriteEngine on
        # this allows access through my ip
        RewriteCond %{REMOTE_ADDR} !^(66\.777\.888\.99)$
        RewriteRule !down.php$ /down.php [L]

    Some folders in my site have their own .htaccess files. If such a file contains the line RewriteEngine on, I can still access that folder. For example, with a second .htaccess file in /www/about, I can still reach mysite.com/about (although the .css file included on that page actually loads down.php). If I delete the RewriteEngine on line, I get redirected to down.php. Any ideas? I think mod_rewrite gets confused by multiple .htaccess files. Thanks!
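
    Per-directory .htaccess files do not inherit the parent's rewrite rules by default; any RewriteEngine on in a subdirectory starts from an empty rule set, which matches the behaviour described above. A sketch of the usual fix, added to each subdirectory's .htaccess (mod_rewrite's RewriteOptions directive):

        RewriteEngine on
        RewriteOptions Inherit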

    Read the article

  • Curl authentication

    - by Jack Humphries
    I am trying to download a file with cURL from a password-protected directory on my site. It is not working: instead of downloading the requested file, it downloads an HTML page that says "Authentication Required!". I'm not sure what the problem is. I've tried both of the following, with the same result. The username and password are correct, and if the URL below is pasted into a web browser, the file downloads successfully. 1) With the username and password as part of the URL:

        curl https://username:wordpass.1@www.example.com/auth/file.dmg -o /file.dmg

    2) With the username and password passed as an option:

        curl -u username:wordpass.1 https://www.example.com/auth/file.dmg -o /file.dmg
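
    If the server uses Digest or NTLM rather than Basic authentication, curl's default with -u (Basic) is rejected even when the credentials are correct, and the 401 error page is what gets saved. A sketch worth trying, letting curl negotiate the scheme and writing the output with -o (note that -O/--remote-name takes no argument, so --O /file.dmg is not valid curl syntax):

        curl --anyauth -u username:wordpass.1 \
          https://www.example.com/auth/file.dmg -o /file.dmg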

    Read the article

  • How to make a subdirectory the document root of a web domain or localhost

    - by Ben Huh
    I have a subdirectory abc in the document root /var/www/html. I want to be able to open any file any_file.html within the subdirectory by typing into the browser localhost/any_file instead of localhost/abc/any_file.html (or my_domain.com/any_file instead of my_domain.com/abc/any_file.html). I tried writing this in httpd.conf:

        <Directory "/var/www/html/abc">
            RewriteEngine On
            RewriteBase /
            RewriteRule %{REQUEST_FILENAME} %{REQUEST_FILENAME}\.html
        </Directory>

    But it doesn't work. Options FollowSymLinks is already activated in <Directory>, so I believe I do not need to set it again. Does anyone know why, and how to solve it? Thanks. Update: I have another subdirectory efg which I also need to be able to access through localhost.
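
    RewriteRule takes a URL pattern, not a server variable, as its first argument, which is likely why the block above never matches anything. A sketch that maps a bare /name onto /abc/name.html only when that file actually exists (paths from the question; place it in the <Directory "/var/www/html"> context or the document root's .htaccess, not inside the abc directory):

        RewriteEngine On
        RewriteCond %{DOCUMENT_ROOT}/abc/$1.html -f
        RewriteRule ^([^/]+)/?$ /abc/$1.html [L]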

    Read the article

  • Disk space profiling in Unix

    - by user1677770
    I'm looking for a tool to summarize how disk space is being used on very large partitions. Our file system is around 950 TB, mostly broken up into 20 TB partitions. There are some really nice graphical tools for visualising these file spaces: http://www.disksavvy.com/disksavvy_screenshots.html http://methylblue.com/filelight/ But I'm really not sure how well they will scale. Does anybody have experience with these tools and can recommend one? Even something that parses and summarises a really big du output would be a good start.
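
    At this scale, a first pass with plain du, summarized only a couple of directory levels deep, tends to scale better than a graphical scanner; ncdu is a curses front end over the same idea. A sketch, assuming GNU du (the --max-depth flag) and a mount point of /partition:

        # top 40 space consumers, two directory levels deep, one filesystem only
        du -x --max-depth=2 /partition 2>/dev/null | sort -rn | head -40

        # interactive alternative
        ncdu -x /partition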

    Read the article

  • How can I install mod_dav_svn 1.6 on CentOS 5.4?

    - by Vincenzo
    I'm trying to install mod_dav_svn on CentOS 5.4, and this is what I see:

        # yum --enablerepo=rpmforge install mod_dav_svn
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirrors.adams.net
         * base: mirror.sanctuaryhost.com
         * extras: mirror.sanctuaryhost.com
         * rpmforge: fr2.rpmfind.net
         * updates: mirror.steadfast.net
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package mod_dav_svn.x86_64 0:1.4.2-4.el5_3.1 set to be updated
        --> Processing Dependency: subversion = 1.4.2-4.el5_3.1 for package: mod_dav_svn
        --> Running transaction check
        ---> Package subversion.i386 0:1.4.2-4.el5_3.1 set to be updated
        --> Finished Dependency Resolution
        [...]

    Version 1.4.2 is older than my installed Subversion 1.6.9 (I installed it before). How and where can I get mod_dav_svn in version 1.6.9?
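
    The base CentOS 5 repos only carry Subversion 1.4, so the matching mod_dav_svn has to come from whichever third-party repo supplied the 1.6.9 subversion package. A sketch, assuming that was RPMForge's extras sub-repository (an assumption; check what the list command reports first):

        yum --disablerepo=* --enablerepo=rpmforge-extras list mod_dav_svn subversion
        yum --disablerepo=base,updates --enablerepo=rpmforge-extras install mod_dav_svn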

    Read the article

  • FreeBSD: creating a new port

    - by su55
    Hi, I have a script here that I want to create as a port in FreeBSD and then make into a package, so that I can install it on some machines. The script is below.

        #!/usr/local/bin/bash
        if [ ! -f "/suid.old" ]
        then
            find / -perm -4000 -o -perm -2000 -ls > /suid.old
        else
            find / -perm -4000 -o -perm -2000 -ls > /suid.new
            diff /suid.old /suid.new > /newchanges
        fi

        if [ -s "/newchanges" ]
        then
            mail -s "changes have occurred" someone@gmail.com < /newchanges
        else
            mail -s "No changes have occurred" someone@gmail.com
        fi
        exit 0

    How can I accomplish this?
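
    For a single local script, a port reduces to a Makefile, a pkg-descr file, and the script itself under files/. A minimal skeleton under stated assumptions: the suidwatch name, the sysutils category, and the install paths are all hypothetical choices, not anything mandated by the ports framework:

        # sysutils/suidwatch/Makefile (hypothetical port)
        PORTNAME=	suidwatch
        PORTVERSION=	1.0
        CATEGORIES=	sysutils
        MASTER_SITES=	# none, script ships in files/
        DISTFILES=	# none

        MAINTAINER=	you@example.com
        COMMENT=	Mail a report when setuid/setgid files change

        NO_BUILD=	yes

        do-install:
        	${INSTALL_SCRIPT} ${FILESDIR}/suidwatch.sh ${PREFIX}/sbin/suidwatch

        .include <bsd.port.mk>

    After that, running make package in the port directory should produce the installable package for your other machines.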

    Read the article

  • DNS name not on cert

    - by blsub6
    I've got an interesting one... My users have always typed 'mail' to get to their mail; an internal DNS A record resolved that to the IP of the mail server. I'm putting in an Exchange server to replace it, and so that people can still get their mail, I added an A record that does the same thing as the previous one. But when I try to reach OWA, the browser tells me the certificate on the server is not trusted. The certificate only covers the names:

        mail.mydomain.com
        autodiscover.mydomain.com
        autodiscover.mydomain.internal
        mydomain.internal
        mailserver.mydomain.internal

    So when the browser sees the cert presented for https://mail/owa, it says the cert is not trusted. What am I supposed to do about that?
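
    To confirm which names the certificate actually covers (and that the short name mail is missing from the subject alternative names), the usual check is openssl's s_client; the fix is then either reissuing the cert with mail as an extra SAN or pointing clients at the full name. A sketch:

        echo | openssl s_client -connect mail:443 2>/dev/null \
          | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"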

    Read the article

  • Location Services are always disabled in Mac OS X Lion

    - by rplusg
    A simple location-services program was working fine on my machine and suddenly stopped working. Exploring further, I realized that some process had disabled Location Services in System Preferences » Security & Privacy » Privacy. I checked Enable Location Services, but it got disabled again automatically. After some research I found that it's not just my program; built-in system functions are failing because of this too, for example System Preferences » Date & Time » Time Zone fails to get the current location. Every time I check Enable Location Services, I see the following error in the console logs:

        16/10/12 11:23:15.636 AM [0x0-0x42042].com.apple.systempreferences: ERROR,Time,372059595.636,Function,"CLInternalSetLocationServicesEnabled",CLInternalSetLocationServicesEnabled failed
        16/10/12 11:23:15.638 AM [0x0-0x42042].com.apple.systempreferences: STACK,Time,372059595.636,1 CoreLocation 0x00007fff8f9957be CLInternalSetLocationServicesEnabled + 110

    Notes: Wi-Fi is on; I didn't install the iOS Simulator; I use Xcode 4.5 (4G182); I use Boot Camp to dual-boot my MacBook Pro (Mac OS X Lion and Windows 7); I do only Mac development, not iOS.
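
    One frequently suggested reset is to clear locationd's client database and restart the daemon. This assumes the database lives at /var/db/locationd/clients.plist on 10.7 (verify the path on your machine before deleting anything):

        sudo launchctl unload /System/Library/LaunchDaemons/com.apple.locationd.plist
        sudo rm /var/db/locationd/clients.plist
        sudo launchctl load /System/Library/LaunchDaemons/com.apple.locationd.plist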

    Read the article

  • 1 Gigabit vs 1.25 Gigabit mismatch

    - by Joel Coel
    I need to reconnect the network to a small old outbuilding that hasn't been used in several years, and I have to use the existing 62.5 µm multi-mode fiber run. This end of the fiber is already connected. For the end in the building, I was looking at this pair: http://www.tp-link.com/products/productDetails.asp?class=switch&content=spe&pmodel=TL-SM311LM http://www.tp-link.com/products/productDetails.asp?class=&content=spe&pmodel=TL-SL2210WEB The SFP (first link) is listed at 1.25 Gbps. That's odd, because IIRC the fiber should really only do 1 Gbps. It's also supposed to work with the switch I posted (second link), but the GBIC port on that switch only shows 1 Gbps. What am I missing here?
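
    For what it's worth, the two figures usually describe the same link: gigabit Ethernet uses 8b/10b line encoding, so 1.000 Gbps of data is carried as 1.25 Gbaud on the wire (1.25 × 8/10 = 1.0). SFP datasheets tend to quote the line rate, while switch specifications quote the data rate, so the pair above is not actually mismatched.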

    Read the article

  • IIS site hacked with ww.robint.us malware

    - by sucuri
    A bunch of IIS sites got hacked with JavaScript malware pointing to ww.robint.us/u.js. Google's cache suggests more than 1,000,000 pages were affected: http://www.google.com/#hl=en&source=hp&q=http%3A%2F%2Fww.robint.us%2Fu.js http://blog.sucuri.net/2010/06/mass-infection-of-iisasp-sites-robint-us.html My question is: did anyone here get hacked with this and still have logs (or a network dump) available for analysis? If so, have you spotted anything interesting in them? Sites as big as wsj.com were hit, and some people are saying a zero-day in IIS/ASP.NET may be in the wild...

    Read the article

  • php rsync with exec() not working

    - by mojeime
    Why does this:

        rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder

    work from a cronjob, while this:

        exec("rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder");

    doesn't work? I know exec is working, because there are a few places in my app that do conversion from PDF to JPG with ImageMagick (exec). SOLVED: exec was working OK; it was a permission issue on the remote server. The "local" server is a shared reseller account, and the remote server is my first VPS, an Ubuntu 10.10 LAMP box. If only I had a system administrator, since I'm just a software developer forced to do this and I stink at it :) Thank you all!
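
    For anyone hitting the same symptom: when exec() behaves differently from cron, a useful first diagnostic is to run the identical command as the web-server user and to capture stderr, since exec() silently drops it. A sketch, where www-data and user@remotehost are placeholders (the web user may be apache or nobody on other setups):

        # reproduce the transfer as the web-server user
        sudo -u www-data rsync -avz -e ssh /home/userneme/folder user@remotehost:/var/www/folder

        # inside PHP, append 2>&1 so rsync's error output lands in $output:
        # exec($cmd . ' 2>&1', $output, $status);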

    Read the article

  • How to set up external mail addresses without external autodiscover attempts?

    - by Tarnschaf
    We have a little Exchange/Outlook installation here that fetches mail from our provider with POP3. To be able to send email outside our organisation, I added a second SMTP address to the Exchange user alongside my.boss@ourcompany.com (the default/reply address). Sending email works using the default address, but now there is an error message each time we start Outlook: it tries to autodiscover using autodiscover.ourcompany.com, which doesn't exist. Our autodiscover files are placed on our local server, and I think all the servers are discovered correctly, because everything works as expected. Everything except the error message on each Outlook start. (The error message is actually about an invalid certificate, but I don't see why Outlook should contact an external host at all!) So how can I solve this? Force autodiscover on every Outlook client to use the local hosts, or is there an even better way?
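
    Outlook 2007 SP1 and later fall back to a DNS SRV lookup when the autodiscover host names fail, which avoids having to own autodiscover.ourcompany.com at all. A sketch of the zone record, where the target host exchange01.ourcompany.com is a hypothetical name for the internal Exchange server answering on 443:

        _autodiscover._tcp.ourcompany.com. 3600 IN SRV 0 0 443 exchange01.ourcompany.com.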

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my server's IP instead of the visitor's. Is this an nginx problem? /etc/nginx.conf:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;

        events {
            worker_connections 5120; # increase for busier servers
            use epoll;               # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 63.6.1.12:80;
            server_name photo-rolldomain.com www.domain.com;
            access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/domain.com combined;
            root /home/mtech/public_html;

            location / {
                location ~ .*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }

            location @backend {
                internal;
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }

            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }

            location ~ /\.ht {
                deny all;
            }
        }
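
    nginx is already sending the client address in X-Real-IP and X-Forwarded-For (see proxy.inc above); Apache simply ignores those headers unless a module is told to trust them. A sketch for Apache 2.4's mod_remoteip (older 2.2 installs need the third-party mod_rpaf instead; the module path is an assumption that varies by distro):

        LoadModule remoteip_module modules/mod_remoteip.so
        RemoteIPHeader X-Real-IP
        RemoteIPInternalProxy 63.6.1.12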

    Read the article

  • How to configure Postfix client relay to Exchange 2010 server

    - by helcim
    I'm getting the following when I try to relay mail from Postfix 2.5.5-1.1 on a Debian Lenny box to Exchange 2010:

        delivery temporarily suspended: SASL authentication failed;
        server myserver.com[xxx.xxx.xxx.x] said: 535 5.7.3 Authentication unsuccessful

    I think I have tried all possible combinations, but I'm definitely missing something. Here is the relevant part of main.cf:

        broken_sasl_auth_clients = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_pix_workarounds =
        smtp_sasl_type = cyrus
        smtp_always_send_ehlo = yes
        relayhost = myserver.com

    And I have libsasl2-modules installed. Has anybody managed to successfully relay mail between Postfix and Exchange? Oh, and I have already double-checked that the password is right.
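
    One thing worth checking: Exchange 2010 receive connectors typically only advertise AUTH LOGIN after STARTTLS, so an unencrypted Postfix session never gets a mechanism it can use. A sketch of extra main.cf lines to try (Postfix 2.3+ syntax; the mechanism filter is optional):

        smtp_tls_security_level = encrypt
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_mechanism_filter = login, plain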

    Read the article

  • How do I troubleshoot an IPsec tunnel (from a cellular router to a public server)?

    - by Hanno Fietz
    I'm new to IPsec and struggling with a setup that might soon be widely used in our operations (provided I eventually understand it...). A cellular router (a black box by NetModule; from its log messages it seems to be running Linux and Openswan) connects a sensor network on customers' sites with our public server. We need to be able to connect into the local network, so I had the cell provider give me a public IP (a dynamic one). The way their setup works, the public IPs only allow IPsec traffic. I set up Openswan on our Ubuntu server (running Jaunty). This is my connection config from /etc/ipsec.conf:

        conn gprs-field-devices
            left=my.pub.lic.ip
            [email protected].com
            #leftsubnet=192.168.1.129/25
            right=%any
            rightid=@field.econemon.com
            #rightsubnet=192.168.1.1/25
            #rightnexthop=%defaultroute
            auto=add

    On the router, all I have is the web UI, in which I made the following settings:

        "Remote endpoint":        public IP of the server, same as "left" above
        "Local Network Address":  192.168.1.1
        "Local Network Mask":     255.255.255.128
        "Remote Network Address": 192.168.1.129
        "Remote Network Mask":    255.255.255.128

    The pluto process on the server is listening for connections on port 500. It can't open a tunnel, obviously, because it doesn't know at which IP the client is. I set up a passphrase as the PSK for @field.econemon.com in /etc/ipsec.secrets and also configured it in the router (which doesn't seem to support certificates). My problem is, nothing happens. The router just says IPsec is "down". When I copy-paste the router's IP into ipsec.conf (for "right=") and ask the server to ipsec auto --up gprs-field-devices, it just hangs until I press Ctrl-C. Is there anything wrong with my setup? How can I debug this further? The router gives the following log lines, which seem related but don't tell me anything:

        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/hostkey.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/netbox0.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: "netbox00" #1: initiating Main Mode
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: 104 "netbox00" #1: STATE_MAIN_I1: initiate
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: ...could not start conn "netbox00"
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: received and ignored informational message
        Feb 21 23:08:28 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 0 seconds
        Feb 21 23:08:40 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 10 seconds
        Feb 21 23:08:52 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
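
    NO_PROPOSAL_CHOSEN in that log means the two ends never agreed on phase-1 parameters, so the first things to compare are the IKE proposals and the PSK identities on both sides. Openswan's own diagnostics are a reasonable starting point, a sketch:

        # sanity-check the host setup and list the loaded connections
        ipsec verify
        ipsec auto --status

        # re-run the connection while watching the negotiation in the logs
        ipsec auto --up gprs-field-devices
        tail -f /var/log/auth.log /var/log/daemon.log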

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on the others, while also rewriting URLs so that all requests go through our index.php. Here is our .htaccess file:

        # enable mod_rewrite
        RewriteEngine on

        # define the base url for accessing this folder
        RewriteBase /

        # Enforce http and https for certain pages
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # rewrite all requests for files and folders that do not exist
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]

    Without the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]) the HTTPS and HTTP rules work perfectly, but with those last three lines the other rules stop working properly. For example, going to https://www.domain.com/en/customer/login redirects to http://www.domain.com/index.php?query=en/customer/login. It's as if the last rule is applied before the redirection happens, despite the [L] flag indicating that the redirect should be the last rule applied.
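
    In per-directory context [L] only ends the current pass: after the internal rewrite to index.php the rule set is reprocessed, the URI no longer matches /(en|fr)/(customer|checkout), and the HTTP redirect fires on the rewritten URL. One common workaround is to test %{THE_REQUEST} (the original request line, which internal rewrites never change) instead of %{REQUEST_URI} in both redirect blocks, a sketch:

        RewriteCond %{HTTPS} on
        RewriteCond %{THE_REQUEST} !\s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{THE_REQUEST} \s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]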

    Read the article

  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi guys, I recently moved my site to a new server (Apache 2, PHP 5, MySQL 5). The site is an Invision-based forum. Every few posts/topics it just hangs. The data has been written, because if you stop and reload, the post/thread is there; the page load simply never completes and never leaves the page where the data was entered. I thought it was a write issue initially, but no. What's the best way to troubleshoot this? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being the issue, as the values are still big enough and there are no mentions of timeouts in the MySQL log. (For the record, there is nothing in PHP's error log either.) Thanks in advance!

    EDIT: I checked my server-status. It all looked OK, but I suspected I was hitting my ServerLimit, so I doubled it. Also enabled KeepAlives. Will keep an eye on it.

    EDIT 2: It's now been a few days and this is still occurring, but I have more info. Apache is throwing segfaults, yet enabling core dumps does not produce them. I have tried disabling modules in Apache, but that just stops things from working. I fear it may actually be DNS-related: if I watch Live Headers in Firefox, absolutely nothing happens during the 'hanging' period, after which the responses come back fairly promptly.

    UPDATE (05/04): I built the latest versions of Apache and PHP from source; no luck. I then removed those and used the remi repo to update all my packages to the latest stable. The segfaults seem to have stopped, but the hanging continues. Configs are at: www.skylinesaustralia.com/php.ini www.skylinesaustralia.com/my.cnf www.skylinesaustralia.com/httpd.conf

    UPDATE - SOLVED! The issue was a gigantic query cache size in MySQL. It was 2 GB; changing it to 64 MB sorted it. Thanks for all the help everybody, much appreciated!!
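
    For reference, the fix in my.cnf terms: an oversized query cache serializes writes, since every INSERT/UPDATE must invalidate cache entries under a global lock, which is why posts were written but the request then stalled. A sketch of the relevant lines:

        [mysqld]
        query_cache_type = 1
        query_cache_size = 64M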

    Read the article

  • Include requested hostname in access_log

    - by Aaron J Spetner
    I would like my access_log to list the host name the client actually requested (e.g. when requesting http://www.example.com/test I should see "www.example.com" in the log). The only thing I have found so far is the %v placeholder in the LogFormat directive, but that only gives "the canonical ServerName of the server serving the request" (as described at http://httpd.apache.org/docs/2.0/mod/mod_log_config.html#formats). That does not help for requests using a host name that is not specified in any ServerName directive. Is there a way to log the requested host name? Thanks
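
    The Host request header carries the name the client asked for, and mod_log_config can log any request header with the %{...}i placeholder. A sketch extending the common combined format (the combined_host nickname is made up here):

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %{Host}i" combined_host
        CustomLog logs/access_log combined_host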

    Read the article

  • vPopmail / xinetd

    - by Lorren Biffin
    I'm attempting to set up vpopmail on my CentOS server (Media Temple). Everything is working like a charm, with the exception that I cannot log in to the server from any POP3 client. Upon trying to log in I get the following error: "Sending of password did not succeed. Mail server mail.(mydomain).com responded: Login failed." I'm running qmail (of course) with xinetd (not tcpserver). I've placed a file called pop3 into the folder /etc/xinetd.d with this content:

        service pop3
        {
            disable        = no
            socket_type    = stream
            protocol       = tcp
            wait           = no
            user           = root
            server         = /var/qmail/bin/qmail-popup
            server_args    = mail.(mydomain).com /home/vpopmail/bin/vchkpw /var/qmail/bin/qmail-pop3d Maildir
            log_type       = FILE /var/log/xinetd.log
            log_on_success = HOST
            log_on_failure = HOST RECORD
        }

    Can anybody offer any guidance here? I've been unsuccessfully trying to make this happen for over a week.
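
    A useful way to separate a vchkpw problem from a client problem is to speak POP3 to the server by hand; if USER/PASS fail here too, the authentication backend (not the mail client) is rejecting the login. A sketch with placeholder credentials; note vpopmail logins normally take the full user@domain form:

        telnet mail.mydomain.com 110
        # server answers: +OK <banner>
        USER someuser@mydomain.com
        PASS secret
        # "-ERR" here points at vchkpw/vpopmail itself,
        # e.g. a missing domain in the vpopmail user database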

    Read the article

  • Configuring Nginx for Wordpress and Rails

    - by Michael Buckbee
    I'm trying to set up a single website (domain) that contains both a front-end WordPress installation and a single-directory Ruby on Rails application. I can get either one to work successfully on its own, but can't sort out a configuration that lets them coexist. The following is my best attempt, but it results in all Rails requests being picked up by the try_files block and redirected to "/":

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            try_files $uri $uri/ /index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            passenger_enabled on;
            passenger_base_uri /rails;
        }

    An example request to the Rails app would be http://www.flickscan.com/rails/movies/upc/025192395925
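
    Since try_files sits at server level, it is consulted for /rails URLs before Passenger ever sees them. One way out is to scope the WordPress fallback to its own location block and give /rails a block of its own, a sketch (directive placement assumed from Passenger's nginx module, which allows passenger_enabled per location):

        location / {
            try_files $uri $uri/ /index.php;
        }

        location /rails {
            passenger_enabled on;
            passenger_base_uri /rails;
        }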

    Read the article

  • How to block users in Apache httpd from accessing *.php files directly, while still serving them via the directory name

    - by Oxi
    My requirement looks simple, but Googling has not helped me yet. I want to show a 404 page (not redirect to another folder or file) to any user who tries to access *.php files on my website directly. For example: when a client asks for www.example.com/home/, I want to show the content, but when the user asks for www.example.com/home/index.php, I want to show a 404 page. I have tried different methods and nothing has worked for me; one of them is shown below:

        <Directory "C:/xampp/htdocs/*">
            <FilesMatch "^\.php">
                Order Deny,Allow
                Deny from all
                ErrorDocument 403 /test/404/
                ErrorDocument 404 /test/404/
            </FilesMatch>
        </Directory>

    Thanks in advance.
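
    Two things work against the block above: the FilesMatch pattern ^\.php anchors at the start of the file name, so it never matches index.php, and Deny produces a 403 rather than a 404. One way to return a real 404 for direct *.php requests while the directory index keeps working is to test the original request line with mod_rewrite, a sketch for httpd.conf or .htaccess (R=404 drops the substitution and sends that status):

        RewriteEngine On
        # 404 only when the client named a .php file in the request itself;
        # internal serving of index.php for /home/ is unaffected
        RewriteCond %{THE_REQUEST} "\.php[ ?]" [NC]
        RewriteRule ^ - [R=404,L]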

    Read the article
