Search Results

Search found 54311 results on 2173 pages for 'http head'.


  • Specifying a source in puppet doesn't seem to work

    - by Mr Wilde
    I have been attempting to create a manifest for installing PostgreSQL 9.1 with Puppet on a CentOS 5 server. I have been adapting the instructions at http://wiki.postgresql.org/wiki/YUM_Installation, and when I go through the process manually it works. It would seem to me, therefore, that a Puppet manifest containing package { 'postgresql91-server': ensure => installed, source => 'http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-centos91-9.1-4.noarch.rpm' } should do the same thing. However, on attempting to apply this manifest I get: err: /Stage[main]//Package[postgresql91-server]/ensure: change from absent to present failed: Could not find package postgresql91-server. Any expert puppeteers out there able to help me?
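
    One possible explanation, offered only as a sketch: the yum provider does not use the source attribute, so unless the pgdg repository RPM from that URL is installed first, yum has no repository that knows about postgresql91-server. A minimal illustration; the manifest name and resource titles are assumptions, not taken from the post:

      # postgres91.pp -- sketch only; resource titles are illustrative
      package { 'pgdg-centos91':
        ensure   => installed,
        provider => 'rpm',
        source   => 'http://yum.postgresql.org/9.1/redhat/rhel-5-x86_64/pgdg-centos91-9.1-4.noarch.rpm',
      }

      package { 'postgresql91-server':
        ensure  => installed,
        require => Package['pgdg-centos91'],  # repo definition must exist before yum can resolve this
      }

    Applied with puppet apply postgres91.pp, the repo RPM goes on first and yum can then find the server package by name.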

    Read the article

  • Date header returned by IIS7 is wrong

    - by James Hollingworth
    I am serving an ASP.NET application from IIS 7, but we are experiencing some weird cookie issues. The code works fine in other environments, so we are assuming this is specific to this server (related question). We have been looking at the HTTP headers returned, and someone pointed out that the Date header is showing the 1st of Jan rather than today's date (so far it always shows that date regardless of what the current date is). The system clock is set correctly (and we can print out the current time/date via DateTime.Now correctly as well), so we can't work out why it's not working. Does anyone have any ideas? Is this a red herring? Thanks, James
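
    A quick way to see what the server itself is emitting, independent of any browser or proxy cache (the hostname is a placeholder):

      # Compare the Date response header with the machine's own clock
      curl -sI http://yourserver/ | grep -i '^Date:'
      date -u

    If the header from a plain request is already wrong, the browser and the cookie code are off the hook and the problem sits in IIS or something in front of it.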

    Read the article

  • Error when running adprep32 /rodcprep, trying to add a 2008 domain controller to a 2003 domain

    - by virtuist
    I'm trying to migrate a Small Business Server 2003 domain to Server 2008. The problem is that when I run the adprep32 /rodcprep command, specified as the final step in Step 3 of this article: http://www.experts-exchange.com/Software/Server_Software/Email_Servers/Exchange/A_2881-Migrate-Small-Business-Server-2003-to-Exchange-2010-and-Windows-2008-R2.html I get the error "Adprep could not contact a replica for partition...", which is described in detail here: http://support.microsoft.com/kb/949257 I've also attached the AdPrep.log file for full details. So when I try to run DCPromo on my new Server 2008 PDC (it's not the PDC yet, but I want it to be soon), I get an error saying that /rodcprep hasn't been run, so there could be errors if I continue. Has anyone run into this, or does anyone have suggestions? Can Dsmgmt be run on Server 2003 to help solve this? I'm assuming it's a partition error.

    Read the article

  • log4j-1.2.8.jar gets deleted from the path when a Webservice is created from Eclipse

    - by Seema
    When I try to create a web service from Eclipse, the log4j-1.2.8.jar that is configured in the project's build path gets deleted, and when I try to invoke the web service it gives the error below: 2014-06-05 11:47:48,742 ERROR ServiceRequester:55 - RemoteException 2014-06-05 11:47:48,742 ERROR ServiceRequester:56 - ------ AxisFault faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.generalException faultSubcode: faultString: java.lang.NoClassDefFoundError: org/apache/log4j/Logger; nested exception is: java.lang.NoClassDefFoundError: org/apache/log4j/Logger faultActor: faultNode: faultDetail: {http://xml.apache.org/axis/}hostname:INPUSCPC07719 java.lang.NoClassDefFoundError: org/apache/log4j/Logger; nested exception is: java.lang.NoClassDefFoundError: org/apache/log4j/Logger We also tried placing this jar at a different path from where the project is located, but it still gets deleted from that path too. Can anyone help with this?

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
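
    For what it's worth, GNU ddrescue is not limited to whole partitions; it will copy ordinary files and record its progress in a map file, so a per-file loop gets reasonably close to the behaviour described above. A rough sketch with placeholder paths:

      #!/bin/bash
      # Sketch: copy a tree file-by-file with ddrescue, skipping bad areas on
      # the first pass; rerun later without -n (e.g. with -r3) to retry them.
      SRC=/path/to/source
      DST=/path/to/destination
      cd "$SRC" || exit 1
      find . -type f -print0 | while IFS= read -r -d '' f; do
          mkdir -p "$DST/$(dirname "$f")"
          ddrescue -n "$f" "$DST/$f" "$DST/$f.map"
      done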

    Read the article

  • How to integrate Thunderbird with SpamAssassin running on the server?

    - by haimg
    I'm trying to integrate SpamAssassin, running on a server, with Thunderbird. Basically I need to be able to select several emails in Thunderbird and send them back to SpamAssassin for training, either as spam or ham. I tried several approaches: I tried the "Report Spam" plugin, which is able to send messages back to the server either as an email attachment or via HTTP POST. However, the plugin is rather buggy: it does not support sending several messages at once, "report as ham" is not working at all, etc. I wanted to make a custom button that would copy selected messages to a separate IMAP folder (I could create "LearnAsSpam" and "LearnAsHam" folders in IMAP that would get processed automatically on the server), but I don't even know how to approach this in Thunderbird, and I don't want to learn Thunderbird extension authoring... Server-side, I'm prepared to do whatever custom programming or integration is needed (I can receive a message via HTTP / SMTP / whatever); my stumbling block is Thunderbird... So, how can I send emails from Thunderbird back to SpamAssassin running on the email server for Bayesian training, with as few keystrokes as possible?
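
    For the server half of the IMAP-folder idea, the usual pattern is a cron job that runs sa-learn over the two training folders and then empties them. A sketch only; the Maildir layout and folder names below are assumptions about the server, not facts from the question:

      #!/bin/bash
      # Train the Bayes DB from folders the user drags mail into, then clear them.
      # Run as the same user whose Bayes database SpamAssassin consults.
      MAILDIR=/home/someuser/Maildir            # assumption: Maildir-format store
      sa-learn --spam "$MAILDIR/.LearnAsSpam/cur" "$MAILDIR/.LearnAsSpam/new"
      sa-learn --ham  "$MAILDIR/.LearnAsHam/cur"  "$MAILDIR/.LearnAsHam/new"
      rm -f "$MAILDIR"/.LearnAsSpam/{cur,new}/* "$MAILDIR"/.LearnAsHam/{cur,new}/*

    On the Thunderbird side, dragging messages into the two folders is then the whole gesture, which keeps the keystroke count low without any extension authoring.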

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my server's IP. Is this an nginx problem?

    /etc/nginx.conf:

      user nobody;
      # no need for more workers in the proxy mode
      worker_processes 4;
      error_log /var/log/nginx/error.log info;
      worker_rlimit_nofile 20480;

      events {
          worker_connections 5120;  # increase for busier servers
          use epoll;                # you should use epoll here for Linux kernels 2.6.x
      }

      http {
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks if_not_owner;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 5;
          gzip on;
          gzip_vary on;
          gzip_disable "MSIE [1-6]\.";
          gzip_proxied any;
          gzip_http_version 1.1;
          gzip_min_length 1000;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
          gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
          ignore_invalid_headers on;
          client_header_timeout 3m;
          client_body_timeout 3m;
          send_timeout 3m;
          reset_timedout_connection on;
          connection_pool_size 256;
          client_header_buffer_size 256k;
          large_client_header_buffers 4 256k;
          client_max_body_size 200M;
          client_body_buffer_size 128k;
          request_pool_size 32k;
          output_buffers 4 32k;
          postpone_output 1460;
          proxy_temp_path /tmp/nginx_proxy/;
          client_body_in_file_only on;
          log_format bytes_log "$msec $bytes_sent .";
          include "/etc/nginx/vhosts/*";
      }

    proxy.inc:

      proxy_connect_timeout 59s;
      proxy_send_timeout 600;
      proxy_read_timeout 600;
      proxy_buffer_size 64k;
      proxy_buffers 16 32k;
      proxy_busy_buffers_size 64k;
      proxy_temp_file_write_size 64k;
      proxy_pass_header Set-Cookie;
      proxy_redirect off;
      proxy_hide_header Vary;
      proxy_set_header Accept-Encoding '';
      proxy_ignore_headers Cache-Control Expires;
      proxy_set_header Referer $http_referer;
      proxy_set_header Host $host;
      proxy_set_header Cookie $http_cookie;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Server $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    vhost file:

      server {
          error_log /var/log/nginx/vhost-error_log warn;
          listen 63.6.1.12:80;
          server_name photo-rolldomain.com www.domain.com;
          access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log;
          access_log /usr/local/apache/domlogs/domain.com combined;
          root /home/mtech/public_html;

          location / {
              location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                  expires 7d;
                  try_files $uri @backend;
              }
              error_page 405 = @backend;
              add_header X-Cache "HIT from Backend";
              proxy_pass http://63.6.1.12:8081;
              include proxy.inc;
          }

          location @backend {
              internal;
              proxy_pass http://63.6.1.12:8081;
              include proxy.inc;
          }

          location ~ .*\.(php|jsp|cgi|pl|py)?$ {
              proxy_pass http://63.6.1.12:8081;
              include proxy.inc;
          }

          location ~ /\.ht {
              deny all;
          }
      }
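
    The config above already sends X-Forwarded-For, but Apache keeps logging the proxy's own address unless something on the Apache side is told to trust that header. One common approach on Apache 2.0/2.2 is mod_rpaf; a sketch only, and the module filename and trusted IPs are assumptions about this particular box:

      # httpd.conf fragment (sketch; exact module path varies by build)
      LoadModule rpaf_module modules/mod_rpaf-2.0.so
      RPAFenable On
      RPAFsethostname On
      # every address nginx connects from must be listed here
      RPAFproxy_ips 127.0.0.1 63.6.1.12
      RPAFheader X-Forwarded-For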

    Read the article

  • Basic device that can connect to the internet

    - by Hellnar
    Hello, I am looking for a cheap solution to my problem: I need to find either an already existing common device (of the kind used in restaurants, bars and clubs) or a cheap new device that I will distribute to those places, which can connect to the internet (via the already existing ethernet- or wireless-based internet), send an HTTP request, receive the response and retrieve information. (For instance, can a POS device connect to the internet?) For a project, I need to do identity validation in several restaurants and bars, and not all of them have computers. So I will be handing out "cheap and easy to use devices" that non-IT personnel can use to make HTTP requests to my server and get a response. All I can think of is cell phones and SMS.

    Read the article

  • How can I add a favicon to a bookmarklet in Google Chrome?

    - by pattulus
    I'm on OS X and I want my bookmarklets to have favicons. I already found two articles, but they didn't help much: http://www.tapper-ware.net/blog/?p=97#comment-2076 is a great article, but as I understand it the approach doesn't work for Chrome :( The problem with the tip at http://www.tech-recipes.com/rx/3032/google_chrome_how_to_change_icons_on_the_bookmarks_bar/ is (if I'm wrong, please correct me) that after I clean the history, the cache, etc., the whole thing will be gone again. If there were a way to achieve this by hosting the bookmarklets myself I'd instantly do it, but I have found no solution so far.

    Read the article

  • Apache SSL for login and NON-SSL for everything else (.htaccess)

    - by The Devil
    Hey, I've almost figured it out on my own, but there's something I'm missing. I want to set a couple of directories and files to require SSL, and everything else that's not related to those files and dirs to point back to HTTP. So far I have this: RewriteEngine on RewriteBase / # Force ssl for login & admin RewriteCond %{HTTPS} !on RewriteRule ^/?(admin(.*)|login\.php)$ https://%{SERVER_NAME}/$1 [R,NC,L] # Force non-ssl for others RewriteCond %{HTTPS} on RewriteRule ^/?(admin(.*)|login\.php)$ http://%{SERVER_NAME}/$1 [R,NC,L] I'm sure I'm doing something wrong, but I just can't figure it out... The first condition works perfectly: whenever I access login.php or /admin/ it points to https. But the second one doesn't... Where have I gone wrong? Thanks in advance!
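
    As a hedged sketch of the usual shape of the second block: as posted, both rules match only the admin/login paths, so the "force non-SSL" rule never fires for anything else. Negating those paths in a RewriteCond and matching everything else in the rule is the common fix:

      RewriteEngine on
      RewriteBase /

      # Force SSL for login & admin
      RewriteCond %{HTTPS} !on
      RewriteRule ^/?(admin(.*)|login\.php)$ https://%{SERVER_NAME}/$1 [R,NC,L]

      # Force non-SSL for everything that is NOT admin or login.php
      RewriteCond %{HTTPS} on
      RewriteCond %{REQUEST_URI} !^/(admin|login\.php) [NC]
      RewriteRule ^/?(.*)$ http://%{SERVER_NAME}/$1 [R,NC,L]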

    Read the article

  • Wordpress network admin pointing to root as opposed to subdirectory

    - by Ian
    I've installed Wordpress on my nginx server in /blogs and new networks will be in /blogs/blogname. All my main site links point to example.com/blogs, but when I go to network admin the links point to http://www.example.com/wp-admin/network/ instead of http://www.example.com/blogs/wp-admin/network/ Here's the multisite section in my config: define('MULTISITE', true); define('SUBDOMAIN_INSTALL', false); $base = '/blogs'; define('DOMAIN_CURRENT_SITE', 'www.example.com'); define('PATH_CURRENT_SITE', '/'); define('SITE_ID_CURRENT_SITE', 1); define('BLOG_ID_CURRENT_SITE', 1); If I try changing PATH_CURRENT_SITE to /blogs, I get a db connection error. Thanks.
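
    A hedged guess at the usual culprit, shown only as a sketch: in a subdirectory install, $base and PATH_CURRENT_SITE normally carry the /blogs path with a trailing slash, and the value has to match the path stored in the wp_site row, otherwise WordPress cannot find the network and falls back to a database connection error. The values below are assumptions, not taken from the post:

      // wp-config.php -- sketch; assumes wp_site has domain 'www.example.com' and path '/blogs/'
      define('MULTISITE', true);
      define('SUBDOMAIN_INSTALL', false);
      $base = '/blogs/';
      define('DOMAIN_CURRENT_SITE', 'www.example.com');
      define('PATH_CURRENT_SITE', '/blogs/');   // trailing slash, matching wp_site.path
      define('SITE_ID_CURRENT_SITE', 1);
      define('BLOG_ID_CURRENT_SITE', 1);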

    Read the article

  • How to prove that an email has been sent?

    - by bguiz
    Hi, I have a dispute on my hands in which the other party (the landlord's real estate agent) dishonestly claims not to have received an email that I truly did send. My question is, what are the ways to prove that the email was indeed sent? Thus far, the methods I have thought of are a screenshot of the mail in the outbox, and forwarding a copy of the original email. I am aware that other things like HTTP/SMTP headers etc. would exist as well. Are these useful for my purposes, and if so, how do I extract them? The email in question was sent using Yahoo webmail ( http://au.mail.yahoo.com/ ). Edit: I am not seeking legal advice here, just technical advice as to how to gather this information.

    Read the article

  • Domain name is forwarding to my localhost, no idea why

    - by Dustin Fineout
    On my local development machine I have a WAMP setup (Windows Vista Home Premium, Apache 2, MySQL and PHP 5). One of my projects is rehash.dustinfineout.com, which may be related to the problem... For some reason, when I try to visit http://www.rehash.com in a browser, it forwards automatically to 127.0.0.1 (loopback/localhost). I discovered this entirely by accident. I have already looked at the httpd.conf and extra/httpd-vhosts.conf Apache configuration files and these are not causing it. I also checked the Windows hosts file, but that had no entries in it either (C:/WINDOWS/System32/drivers/etc/hosts; maybe there is another location I need to check). Any ideas? Just to clarify, rehash.com is NOT my domain.
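
    A few quick checks that separate DNS and the resolver cache from browser behaviour, runnable from a standard command prompt (nothing here depends on the poster's particular setup):

      rem Does the OS itself resolve the name to 127.0.0.1?
      nslookup www.rehash.com
      rem Flush and then inspect the local resolver cache
      ipconfig /flushdns
      ipconfig /displaydns | findstr /i rehash
      rem Confirm the hosts file really has no matching entry
      findstr /i rehash C:\Windows\System32\drivers\etc\hosts

    If nslookup already returns 127.0.0.1, the answer is coming from DNS (or a local DNS proxy) rather than from Apache or the hosts file.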

    Read the article

  • Configure IIS 7 Reverse Proxy to connect to TeamCity Tomcat

    - by Cynicszm
    We have an IIS 7 web server configured and would like to create a reverse proxy for a TeamCity installation using Tomcat on the same machine. The IIS site is https://somesite and I would like TeamCity to appear as https://somesite/teamcity, redirecting to http://localhost:portnumber. I have installed the IIS URL Rewrite extension and Application Request Routing to try to set up a reverse proxy, but I can't get it working. The closest answer I found is an old StackOverflow question: http://stackoverflow.com/questions/331755/how-do-i-setup-teamcity-for-public-access-over-https which unfortunately doesn't have any working example. I've searched quite a bit but can't seem to find a relevant example. Any help is appreciated!

    Read the article

  • Google Chrome suspicious connections

    - by Poni
    I'm using Chrome on Windows, and with TCPView (from the SysInternals freeware suite) I see that chrome.exe establishes connections to these IPs: 173.194.37.104 and 209.85.146.138. Using http://www.ipaddresslocation.org/ I checked these IPs and saw they're related to Google. Now, to clarify, these are the exact things I do: I open Chrome, with the default page set to BLANK (i.e. no homepage whatsoever). Then I go to my website, which has a blank page, so no "other" HTTP requests are made. Right from this point there is a persistent connection, usually to '173.194.37.104'. What are these? Very suspicious... Edit #1: I'm in 'incognito' mode right from the start, launching Chrome using a shortcut with the '-incognito' switch. I've turned off all phishing protections and other "advanced" features in order to reduce Chrome's network activity.

    Read the article

  • Setting up Tornado with Nginx on Ubuntu 10.04 for production use

    - by DjangoRocks
    Hi all, I understand that there's an nginx configuration file at http://www.friendfeed.com, but I don't really know how to set up Tornado for production use on Ubuntu 10.04 with Nginx. Here's my situation and my assumptions: 1) Assume my Tornado project is set up as such: project/ src/ static/ templates/ project.py and that I have installed Tornado by downloading the repository from GitHub and then running sudo python setup.py install 2) I've installed Nginx and started it based on the instructions here: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid My questions are: Where does my nginx configuration file go? Within the src/ folder? After configuring Nginx, how do I start my Tornado project?
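
    A minimal sketch of the usual layout, loosely in the spirit of the sample config that ships with Tornado; every path, port and server name below is an assumption rather than something from the question:

      # /etc/nginx/sites-available/project (sketch; adjust paths and ports)
      upstream tornado_backend {
          server 127.0.0.1:8000;
      }
      server {
          listen 80;
          server_name example.com;

          # serve static files directly, proxy everything else to Tornado
          location /static/ {
              root /home/user/project/src;
          }
          location / {
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_pass http://tornado_backend;
          }
      }

    The nginx file lives with the rest of the nginx configuration rather than inside src/; the Tornado process is started separately (e.g. python src/project.py listening on the same port the upstream block points at), ideally under a supervisor so it is restarted on failure.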

    Read the article

  • Windows 7 Start Menu folder editing help

    - by Flasimbufasa
    I'd like the Windows 7 Start Menu to link to folders rather than to the stupid libraries. In Windows Vista you could add the Downloads folder to the Start Menu by messing with the registry:

      [HKEY_CURRENT_USER\Software\Classes\CLSID\{ED228FDF-9EA8-4870-83b1-96b02CFE0D52}]
      @="Downloads"

      [HKEY_CURRENT_USER\Software\Classes\CLSID\{ED228FDF-9EA8-4870-83b1-96b02CFE0D52}\DefaultIcon]
      @="imageres.dll,-184"

      [HKEY_CURRENT_USER\Software\Classes\CLSID\{ED228FDF-9EA8-4870-83b1-96b02CFE0D52}\InProcserver32]
      @="shell32.dll"

      [HKEY_CURRENT_USER\Software\Classes\CLSID\{ED228FDF-9EA8-4870-83b1-96b02CFE0D52}\shell\open\command]
      @="explorer.exe shell:Downloads"

      ;© 2008 Ramesh Srinivasan - http://www.winhelponline.com/blog/ - Created on July 10 2008
      [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CLSID\{ED228FDF-9EA8-4870-83B1-96B02CFE0D52}]
      @="Downloads"

    I'd like to change the link within the registry on Windows 7 Ultimate x64 so that the "Documents" link actually takes me to MY DOCUMENTS. How revolutionary would this be? Could someone with more registry editing knowledge help me out with this? Link to the site where I downloaded this: http://www.winhelponline.com/blog/add-downloads-folder-to-the-windows-vista-start-menu/

    Read the article

  • CLI-Based monitoring tool for KVM

    - by Pinnacle
    I am developing a scheduler for running VMs on KVM. The scheduling over-commits resources like memory and CPU. For this, I need a CLI-based monitoring tool that keeps giving me information about the resource usage of each VM, because it might be the case that, due to over-provisioning of resources, VMs on a particular host are running very slowly (depending on the benchmarks/programs each VM is running), and then I need to migrate a VM to another host, and so on. I looked into libvirt-based tools like collectd, Munin, Nagios-virt, etc. ( http://libvirt.org/apps.html#monitoring ). I also looked into the Ubuntu utility perf-kvm ( http://manpages.ubuntu.com/manpages/maverick/man1/perf-kvm.1.html ). I want to ask which CLI-based tool the community would recommend, so that I can build an automated scheduler that takes care of the above situation.
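
    One lightweight combination to script against, sketched with the caveat that output formats and available subcommands vary with the libvirt version: virt-top for an interactive, top-like view, and plain virsh calls inside a loop for the scheduler itself.

      # Interactive overview of all running domains (package: virt-top)
      virt-top

      # Scriptable loop: per-domain CPU and memory figures via virsh
      for dom in $(virsh list --name); do     # --name needs a reasonably recent libvirt
          echo "== $dom =="
          virsh dominfo "$dom"        # vCPU count, CPU time, max/used memory
          virsh dommemstat "$dom"     # balloon stats, if the guest driver reports them
      done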

    Read the article

  • mplayer dumpstream sometimes fails

    - by User1
    I'm trying to rip the video at http://videolectures.net/ecml07%5Fgetoor%5Fisr/ so I can play it at a faster speed. If I paste http://193.2.4.216/2007/pascal/ecml07%5Fwarsaw/getoor%5Flise/ecml07%5Fgetoor%5Fisr%5F01.wmv into Firefox on Windows, Media Player plays it. However, if I try mplayer -dumpstream, it gets stuck in an infinite loop trying to play the file. If I use wget to download the link, I get a small text file which basically points to the same URL. How can I get mplayer to download this stream?
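
    A sketch of the approach that often works for WMV lectures served behind a small redirector file; the -playlist flag tells mplayer to resolve the reference itself, and the mms:// form is only a guess worth trying if the downloaded text file names such an address:

      # Treat the URL as a playlist/redirector and name the output explicitly
      mplayer -playlist "http://193.2.4.216/2007/pascal/ecml07%5Fwarsaw/getoor%5Flise/ecml07%5Fgetoor%5Fisr%5F01.wmv" \
              -dumpstream -dumpfile lecture.wmv

      # If the small text file wget fetched contains an mms:// URL, dump that directly
      mplayer -dumpstream -dumpfile lecture.wmv "mms://193.2.4.216/2007/pascal/ecml07%5Fwarsaw/getoor%5Flise/ecml07%5Fgetoor%5Fisr%5F01.wmv"

    Once a local .wmv exists, mplayer -speed 1.5 lecture.wmv gives the faster playback the post is after.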

    Read the article

  • How to make fonts smooth and readable in Debian/Ubuntu?

    - by jmdeldin
    What is the best, most foolproof way of getting nice font rendering in Linux? Currently I am experiencing thin, ugly fonts (shown below). I have wasted too much time tweaking fonts.conf, and I have yet to find a decent combination. I am running Debian 6.0 with no desktop environment (just Openbox for a window manager) in a VM on a MacBook Pro (OS X 10.7.4). Screenshots (taken without fonts.conf and .Xdefaults tweaks): running in the "native" Openbox environment: http://i.imgur.com/10bnH.png running over X11, which looks a little worse than Openbox: http://i.imgur.com/sq8jk.png Thank you!
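
    A commonly used baseline worth trying before more fonts.conf surgery, given that .Xdefaults is already in the picture; the resource names are standard Xft settings and the values are a matter of taste, not anything taken from the question:

      ! ~/.Xdefaults (reload with: xrdb -merge ~/.Xdefaults)
      Xft.antialias:  true
      Xft.hinting:    true
      Xft.hintstyle:  hintslight
      Xft.rgba:       rgb
      Xft.lcdfilter:  lcddefault
      Xft.dpi:        96

    Installing the ttf-dejavu and ttf-liberation packages usually helps as well, since much of the thin, ugly effect comes from falling back to poorly hinted fonts.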

    Read the article

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went to great lengths to validate that our site is secure. He went to this site to check the validity of our SSL: http://www.whynopadlock.com/ The site threw the following error: SSL verification issue (Possibly mis-matched URL or bad intermediate cert.). Details: ERROR: no certificate subject alternative name matches The certificate is installed using IP-based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider: PrimaryIntermediate.crt, SecondaryIntermediate.crt and EndCertificate.crt. This is how I exported our certificate as a .PFX file for Azure: openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
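
    For reference, a hedged sketch of how the intermediates are usually folded into the PFX so Azure can serve the full chain; the concatenation order and the bundled file name are assumptions based on the three .CRT files listed above:

      # Bundle the intermediates the CA supplied, then include them in the PFX
      cat PrimaryIntermediate.crt SecondaryIntermediate.crt > chain.crt
      openssl pkcs12 -export -out myserver.pfx \
          -inkey myserver.key \
          -in EndCertificate.crt \
          -certfile chain.crt

    A PFX built only from myserver.crt carries no intermediates, which matches the "bad intermediate cert" half of the checker's complaint.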

    Read the article

  • Nginx + PHP FastCGI fails - how to debug?

    - by Niro
    I have a server on Amazon EC2 running Nginx + PHP, with PHP FastCGI listening on port 9000. The server runs fine for a few minutes, and after a while (several thousand hits in this case) FastCGI dies and Nginx returns a 502 error. The Nginx log shows: 2010/01/12 16:49:24 [error] 1093#0: *9965 connect() failed (111: Connection refused) while connecting to upstream, client: 79.180.27.241, server: localhost, request: "GET /data.php?data=7781 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "site1.mysite.com", referrer: "http://www.othersite.com/subc.asp?t=10" How can I debug what is causing FastCGI to die?
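
    A first debugging pass that usually narrows this down, with placeholder paths; the PHP_FCGI_* variables only apply if php-cgi is launched by hand or from a wrapper script rather than via php-fpm:

      # Is anything still listening on 9000 once the 502s start?
      netstat -lnp | grep :9000

      # Look for segfaults or the OOM killer around the time of death
      dmesg | tail -n 50
      grep -i php /var/log/messages | tail -n 50

      # If php-cgi is started manually, cap requests per child so workers
      # recycle instead of dying silently (values are illustrative)
      PHP_FCGI_MAX_REQUESTS=1000 PHP_FCGI_CHILDREN=4 \
          php-cgi -b 127.0.0.1:9000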

    Read the article

  • Issues with configuration of Apache and mod_auth_sspi

    - by TekiusFanatikus
    I've been able to get this working using XAMPP with Apache 2.0.55 and XAMPP Apache 2.2.14 without any problems. However, when I attempt to configure our intranet server (Apache 2.0.59), I don't get the same results. The desired result is that the variables $_SERVER["REMOTE_USER"] and $_SERVER["PHP_AUTH_USER"] contain the login information; in this case, they are blank. I'm expecting "domain/user_name".

    Conf file stuff:

      <Directory "/xxx/xampp/htdocs/">
          #
          # Possible values for the Options directive are "None", "All",
          # or any combination of:
          #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
          #
          # Note that "MultiViews" must be named *explicitly* --- "Options All"
          # doesn't give it to you.
          #
          # The Options directive is both complicated and important. Please see
          # http://httpd.apache.org/docs/2.2/mod/core.html#options
          # for more information.
          #
          #Options Indexes FollowSymLinks Includes ExecCGI
          Options Indexes FollowSymLinks

          #
          # AllowOverride controls what directives may be placed in .htaccess files.
          # It can be "All", "None", or any combination of the keywords:
          #   Options FileInfo AuthConfig Limit
          #
          #AllowOverride All
          AllowOverride None

          #
          # Controls who can get stuff from this server.
          #
          #Order allow,deny
          #Allow from all
          Order allow,deny
          Allow from all

          #NT Domain Login
          AuthName "Intranet"
          AuthType SSPI
          SSPIAuth On
          SSPIAuthoritative On
          SSPIDomain "xxxx"
          SSPIOfferBasic Off
          SSPIPerRequestAuth On
          SSPIOmitDomain Off        # keep domain name in userid string
          SSPIUsernameCase lower
          Require valid-user
      </Directory>

    I would like to note that I've modified the paths to reflect the intranet environment. I'm using the following module: http://sourceforge.net/projects/mod-auth-sspi/ Once the module is installed and the conf file is modified, the intranet environment's server scope isn't populated with the expected variables.

    Edit #1

      <Directory "/path_here">
          #
          # Possible values for the Options directive are "None", "All",
          # or any combination of:
          #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
          #
          # Note that "MultiViews" must be named *explicitly* --- "Options All"
          # doesn't give it to you.
          #
          # The Options directive is both complicated and important. Please see
          # http://httpd.apache.org/docs/2.2/mod/core.html#options
          # for more information.
          #
          #Options Indexes FollowSymLinks Includes ExecCGI
          Options Indexes FollowSymLinks

          #
          # AllowOverride controls what directives may be placed in .htaccess files.
          # It can be "All", "None", or any combination of the keywords:
          #   Options FileInfo AuthConfig Limit
          #
          #AllowOverride All
          AllowOverride None

          #
          # Controls who can get stuff from this server.
          #
          #Order allow,deny
          #Allow from all
          Order allow,deny
          Allow from all

          #NT Domain Login
          AuthName "Intranet"
          AuthType SSPI
          SSPIAuth On
          SSPIAuthoritative On
          SSPIDomain "domain_here"
          SSPIOfferBasic On
          SSPIPerRequestAuth On
          SSPIOmitDomain Off        # keep domain name in userid string
          SSPIUsernameCase lower
          Require valid-user
      </Directory>

    Read the article
