Search Results

Search found 16455 results on 659 pages for 'hosts allow'.


  • once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either pptp or openvpn; pptp preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials. If I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP it works fine, but this is obviously not a good solution. To recap: not connected to VPN, www.myurl.com works; connected to VPN, www.myurl.com never makes it to the correct server, but is sent only to the pfSense box. I'm sure it's something small that I've missed in the pfSense config.

    Read the article

  • Blocking non-virtual host access in Apache?

    - by cmbrnt
    I'm running an apache server with a bunch of virtual hosts for about seven domain names. Now I'd like to disallow access for clients who try to access my server using only its IP address. So: when someone accesses my website through www.domain.com, they reach the site hosted in /var/www/domain.com/public_html/. When someone enters the IP address of the server, they reach a 403 Forbidden message. The problem with this is that they are theoretically able to reach my other sites through brute force, by requesting http://11.22.33.44/domain.com/public_html/. I'd rather they reach a 403 Forbidden all the time, as long as they don't access my server by a valid domain name. How do I solve this problem?
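
    One standard way to get this behavior (a minimal sketch, in Apache 2.2-style access control to match the configs elsewhere on this page): declare a catch-all virtual host before the named ones, since Apache hands any bare-IP or unknown-Host request to the first vhost defined. The ServerName below is a placeholder, not a real domain:

        # first vhost loaded = default target for bare-IP / unknown-Host requests
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>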

    Read the article

  • Uninstall SQL Server 2005 Express after Demoting the DC

    - by Walter Aman
    A Windows Server 2003 SP2 machine hosting a now-orphaned installation of SQL 2005 Workgroup was pressed into service as a DC in a disaster recovery scenario. It has since been demoted. The server also hosts legacy apps for which we lack reinstallation resources; thus our desire to preserve it as close to intact as possible while removing the orphaned roles. All efforts to remove SQL 2005 through Control Panel and ARPWrapper /remove fail with error 29528. Should I abandon this and leave the orphaned SQL dormant, or is it reasonable to remove it post-demote?

    Read the article

  • Do I really need two different Exchange servers 2007|2010?

    - by lrosa
    Given an Exchange Server 2003 running on a dedicated server on a LAN protected by a Linux box in the DMZ, Microsoft says that if you upgrade, you should install two different servers (meaning two boxes, two licenses of Windows Server and two installations of Exchange) with different Exchange "server roles". Exchange is installed on a safe LAN; there is a Linux relay in the DMZ that feeds messages to Exchange and gets from it the messages to be delivered to the Net (smart relay). The mail traffic is about 2000 Internet messages/day and probably another 2000 msg/day sent by local users within the organization. The server hosts 200 users.

    Read the article

  • CentOS 5 - Unable to resolve addresses for NFS mounts during boot

    - by sagi
    I have a few servers running CentOS 5.3, and am trying to get 2 NFS mount points to mount automatically on boot. I added 2 lines similar to the following to fstab: server1:/path1 /path1 nfs soft 0 0 server2:/path2 /path2 nfs soft 0 0 When I run 'mount -a' manually, the mount points are properly mounted as expected. However, when I reboot the machine, only /path2 is mounted. For /path1 I get the following error: mount: can't get address for server1 It obviously looks like a DNS issue, but the record is properly configured in all the DNS servers, and the share mounts properly if I retry after the reboot is completed. I could fix this by using IP addresses instead of hostnames in /etc/fstab, or by adding server1 to /etc/hosts, but I would rather not do that. What might be the reason for failing to resolve this specific address during boot? And why is the problem only with the first mount point, while the second mounts properly despite an identical configuration?
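
    A workaround worth sketching (bg and _netdev are standard NFS/fstab options; whether they cure this particular boot-time race is an assumption): let a failed first attempt retry in the background instead of aborting:

        server1:/path1  /path1  nfs  soft,bg,_netdev  0 0
        server2:/path2  /path2  nfs  soft,bg,_netdev  0 0

    With bg, a mount that fails at boot is retried by a background process once the resolver becomes reachable, so the boot doesn't depend on DNS being ready at that exact moment.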

    Read the article

  • installing mod_wsgi giving 403 error

    - by John Smiith
    Installing mod_wsgi is giving a 403 error. In httpd.conf I added the code below: WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py" <Directory "C:/xampp/www/htdocs/wsgi_app/"> AllowOverride None Options None Order deny,allow Allow from all </Directory> wsgi_handler.py: def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] Note: localhost is my virtual host domain and it is working fine, but when I request http://localhost/wsgi/ I get a 403 error. <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "C:/xampp/www/htdocs/localhost" ServerName localhost ServerAlias www.localhost ErrorLog "logs/localhost-error.log" CustomLog "logs/localhost-access.log" combined </VirtualHost> Error log: [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache Note: my Apache is not in c:/xampp/bin/apache; it is in c:/xampp/bin/server-apache/
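
    One thing worth ruling out (a sketch, on the assumption that the 403 comes from the WSGI directives living in the global httpd.conf while the request is answered by the localhost virtual host): put the mod_wsgi directives inside the vhost itself, so they apply to the vhost that actually serves the request:

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "C:/xampp/www/htdocs/localhost"
            WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py"
            <Directory "C:/xampp/www/htdocs/wsgi_app/">
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    The "Options ExecCGI is off" line in the log also suggests the request reached mod_cgi rather than mod_wsgi, which fits a WSGIScriptAlias that is missing or out of scope for this vhost.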

    Read the article

  • How to remove the dlinksearch browser search hijack

    - by Bish
    Hi Gang, For the last few weeks all the machines on my home network have had the same problem while browsing the internet. When the user enters an invalid URL in the browser address bar, instead of the default browser behaviour, the request is sent to http://www1.dlinksearch.com/. As far as I can tell this happens on all machines and in all browsers. It is so consistent I am wondering whether it has anything to do with our router. We use a DLink DIR-655 router, so maybe the clue is in the name :) Anyhow, I cannot figure out how to disable/remove the offending behaviour. I've checked hosts files, spyware, AV etc. etc. Anybody have any ideas? Paul P.S. Apologies if this is not the right place to ask this type of question. I'm a bit stuck

    Read the article

  • How do I negotiate for colo space?

    - by randy melder
    I guess this isn't a technical question, but it definitely is something IT teams deal with, so here goes: I'm looking at getting a rack at a local colocation facility. I'm weighing the options versus building out on a cloud platform. We use REALLY little bandwidth and power: there are six hosts for the whole operation. You can assume we use <= 10 amps of power and <= 2Mbps at the 95th percentile. Do you have any advice for getting the best deal?

    Read the article

  • Configuring sendmail to use one outbound MTA exclusively

    - by Charlie Martin
    I have a sendmail problem, and I'm anything but a sendmail guru -- I could use some help. My problem is that I have a system intended to be more or less an "appliance" -- it's not intended to have an admin. Because of this, it needs to be able to "call home" by sending email. As we have configured it, this works fine -- using sendmail, it finds the appropriate relay by looking up an MX record and everything works. Now, however, because of security concerns, we want to limit it to using exactly one relay, for example relay.corp.example.com. Should the user configure it to use, say, fubar.example.com, the mail sending should fail or be deferred. I thought that by configuring sendmail with an /etc/mail/service.switch file containing a hosts entry with files but without dns, I'd get that effect. This doesn't work -- instead, if it gets mail addressed to [email protected], it tries to talk directly to example.com and ignores the configured server. Any ideas?
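
    The usual way to pin sendmail to exactly one relay (a sketch against a stock sendmail.mc; file locations vary by distribution) is the SMART_HOST macro, which routes all nonlocal mail through the named host instead of doing MX lookups:

        dnl /etc/mail/sendmail.mc
        define(`SMART_HOST', `relay.corp.example.com')dnl
        dnl then rebuild the .cf and restart sendmail:
        dnl   m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    With that in place, a message to [email protected] is handed to relay.corp.example.com rather than delivered directly, and delivery fails or defers if that relay is unreachable.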

    Read the article

  • Cannot connect to telnet server

    - by BloodPhilia
    So, I can't use telnet to connect to any server, but it works fine from a different computer. It just says it can't connect. I tried the following things: Disable firewall and AV protection. (Basically, there was no security feature left online.) Telnet is set to "Trusted" in my AV protection. (Kaspersky Internet Security 2011) Using Putty to telnet, but apparently Putty's connection is also inhibited. (Says it can't connect to host.) Disabling the telnet client in Control Panel and then re-enabling it. (Windows 7 Ultimate) hosts file is clean. Checked for nasties using MBAM and KIS 2011, as well as going through my HijackThis logs; nothing found. I can connect to the same machines/servers through the web browser, ping, tracert, etc. Only telnet seems to be blocked. Any other thoughts?

    Read the article

  • Website filtering for OpenVPN clients

    - by Asche
    I am currently trying to block some websites by their domain names for all clients of my OpenVPN server. My first idea was to use the /etc/hosts file, but its effect seems to be limited to the host itself and is not taken into account for the OpenVPN clients. I then tried to configure bind9 and interface it with OpenVPN, but that solution was unsuccessful and unwieldy. After this, I considered using iptables to drop all packets from/to those websites, but a forum thread made me think otherwise, since iptables' behavior with FQDNs can create complex issues. Have you got a solution to block websites for all clients of an OpenVPN server on which I am root?
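
    One approach worth sketching (assumptions: running dnsmasq on the OpenVPN server is acceptable, and 10.8.0.1 stands in for the server's tunnel address): blackhole the unwanted domains in a resolver on the VPN server, then push that resolver to the clients:

        # /etc/dnsmasq.conf -- answer for blocked domains with an unroutable address
        address=/blockedsite.example/0.0.0.0

        # OpenVPN server config -- make clients use the VPN server for DNS
        push "dhcp-option DNS 10.8.0.1"

    Clients that honor the pushed DNS option can no longer resolve the blocked names; it does not stop a client that hardcodes another resolver, which is the usual trade-off of DNS-based filtering.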

    Read the article

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtualhost: <VirtualHost *:8080> ServerAdmin webmaster@localhost ServerName site.com ServerAlias site.com *.site.com DocumentRoot /var/www/site.com/public <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/site.com/public/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here's the nginx config: server { listen 80; access_log /var/log/nginx.access.log; error_log /var/log/nginx.error.log; root /var/www/site.com/public; index index.php index.html; server_name site.com *.site.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8080; proxy_cache one; proxy_cache_use_stale error timeout invalid_header updating; proxy_cache_key $scheme$host$request_uri; proxy_cache_valid 200 301 302 20m; proxy_cache_valid 404 1m; proxy_cache_valid any 15m; } location ~ /\.(ht|git) { deny all; } } The problem is Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log but I can't find any hint as to why nginx won't work. EDIT: The problem was that this isolated nginx config wasn't actually being included.

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/tshirtshop <Directory /var/www/tshirtshop> Options Indexes FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> and the following in the .htaccess file at /var/www/tshirtshop/.htaccess: <IfModule mod_rewrite.c> # Enable mod_rewrite RewriteEngine On # Specify the folder in which the application resides. # Use / if the application is in the root. RewriteBase /tshirtshop #RewriteBase / # Rewrite to correct domain to avoid canonicalization problems # RewriteCond %{HTTP_HOST} !^www\.example\.com # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L] # Rewrite URLs ending in /index.php or /index.html to / RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L] # Rewrite category pages RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L] RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L] # Rewrite department pages RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L] RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L] # Rewrite subpages of the home page RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L] # Rewrite product details pages RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L] </IfModule> The site works on localhost, but behaves as if no .htaccess rules were specified; i.e. if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Can anyone point out the mistake in the above configuration, or anything else I should check? sudo apache2ctl -M Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared) setenvif_module (shared) status_module (shared) Syntax OK I am using Apache2 on Ubuntu 12.04
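
    When rules act as if they were never read, Apache 2.2's rewrite log is the quickest check (a debugging sketch; these directives must sit in the vhost, not in .htaccess):

        # inside <VirtualHost *:80>, Apache 2.2 syntax
        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 3

    If the log stays empty for a request to /tshirtshop/nature-d2, Apache isn't consulting the .htaccess at all (wrong directory, AllowOverride scope, or path typo); if it logs, the patterns themselves are at fault.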

    Read the article

  • Scalable WordPress Host for High-Volume Site?

    - by Jonathan Eunice
    I need recommendations for a scalable web host for a high-volume WordPress site. For my purposes, high-volume might be 100K-500K visitors/hour; think of 1M/hour as a burst-rate "high water mark." I know WP isn't the highest-performing platform out there (ha!), but it's non-negotiable. I can do "the usual optimizations" (e.g. put static files in a CDN, run and follow the advice of performance analyzers like YSlow, etc.). But it will still be WordPress, and there will be a dozen or so plugins involved. So, where to host the site? Most "what's the best WordPress host?" discussions seem to focus on lowest cost. I need the opposite. What are the high-volume, scalable, or clustered WordPress hosts with which you've had great experiences?

    Read the article

  • Update a war on Tomcat start up

    - by pater
    I want to set up an update process for an application running on Tomcat. The server which hosts Tomcat is only up during working hours (it is an intranet application for a small company). I was thinking that I could upload the new war to the server and set up "something" to run on the next server boot. This something could be a bat file executed on server start-up, before the start-up of the Tomcat service, which deletes the old war and its exploded folder. When I update the war manually I also delete the work folder of Tomcat (just to be sure). I know about hot deployment, but I do not consider it an option since I am not sure of the implications it might have on users' current working sessions. Is there a way to run such a bat file before Tomcat starts up, or an alternative way to do this update? Tomcat version isn't an issue; it is running Tomcat 6 now but I can upgrade to version 7 if needed.
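
    A sketch of the kind of bat file this could be (every path, app name and service name below is hypothetical; adjust to the real layout):

        @echo off
        rem swap in a war staged during working hours, then clear Tomcat's caches
        set CATALINA_HOME=C:\tomcat
        set APP=myapp
        if exist "%CATALINA_HOME%\staging\%APP%.war" (
            del /q "%CATALINA_HOME%\webapps\%APP%.war"
            rmdir /s /q "%CATALINA_HOME%\webapps\%APP%"
            rmdir /s /q "%CATALINA_HOME%\work"
            move /y "%CATALINA_HOME%\staging\%APP%.war" "%CATALINA_HOME%\webapps\"
        )
        rem start the (hypothetically named) Tomcat service only after the swap
        net start Tomcat6

    To guarantee it runs before Tomcat, one option is to set the Tomcat service to manual start and launch this script from a Startup-type scheduled task, letting the script start the service itself at the end.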

    Read the article

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to operate different databases at different filesystem locations. Background: we are a hosting service which hosts mysql, web, and smtp for its customers, but all our services (sql, smtp, http) are located in different places. We are going to assign a single logical volume to each customer, which will accommodate the customer's mail, web pages and (hopefully) sql database. Web pages and mail are already covered, but I am not able to find a configuration setting that would let me specify the location of a database (the directory where mysql stores the DB). Let me highlight: the target here is to relocate different databases to different locations in the filesystem, not to move them from one single place to another (single) place. Also, please do not bother answering with soft and hard symbolic links. ;) Thanks
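
    MySQL has no per-database datadir setting, so short of symlinks the usual pattern (a sketch, assuming one mysqld instance per customer volume is acceptable) is mysqld_multi, with each instance rooted in that customer's logical volume; all paths and ports below are hypothetical:

        # /etc/mysql/my.cnf (sketch)
        [mysqld_multi]
        mysqld = /usr/bin/mysqld_safe

        [mysqld1]
        datadir = /vol/customer1/mysql
        socket  = /var/run/mysqld/mysqld1.sock
        port    = 3307

        [mysqld2]
        datadir = /vol/customer2/mysql
        socket  = /var/run/mysqld/mysqld2.sock
        port    = 3308

    Start them with mysqld_multi start 1,2; the cost is one set of mysqld processes per customer instead of a single shared instance.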

    Read the article

  • How to set up Aptana Studio 3 with a Bitbucket private repo

    - by Titus
    I have just started playing around with Git and would like to push a personal project to a newly created private repo on Bitbucket using Aptana Studio 3. I tried to use the Git integration in Aptana, but I couldn't figure out where to enter my username and password for Bitbucket. I tried using the Team > Share Project context menu, but that keeps throwing the following message: Warning: Permanently added the RSA host key for IP address '207.223.240.181' to the list of known hosts. Permission denied (publickey). fatal: The remote end hung up unexpectedly I'm pretty sure that's because my repo is private. However, I couldn't find a provision for providing any form of credentials for linking to a private repo. Any ideas?
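
    Since the share is going over SSH, "Permission denied (publickey)" usually means no key is registered with Bitbucket rather than anything about the repo being private. A sketch of the standard fix (default key paths assumed):

        # generate a key pair if none exists yet
        ssh-keygen -t rsa -C "you@example.com"
        # add the contents of ~/.ssh/id_rsa.pub under
        # Bitbucket -> Account settings -> SSH keys, then verify:
        ssh -T git@bitbucket.org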

    Read the article

  • mod_fcgi produces random 500 Errors

    - by DmitrySemenov
    php 5.4.7 via mod_fcgid. When I run the site it sometimes works and sometimes crashes with a 500 Internal Error. This is what I see in error.log every time I run the script: [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php Any ideas? vhost config: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/home/www/sites/test.com/html/development" ServerName test.com ServerAlias www.test.com ErrorLog "/home/www/sites/test.com/logs/error_log" CustomLog "/home/www/sites/test.com/logs/access_log" common <IfModule mod_fcgid.c> <Directory /home/www/sites/test.com/html/development> Options +ExecCGI AllowOverride All AddHandler fcgid-script .php FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php Order allow,deny Allow from all </Directory> FcgidMaxRequestLen 1073741824 </IfModule> </VirtualHost> fcgid.conf: LoadModule fcgid_module modules/mod_fcgid.so # Use FastCGI to process .fcg .fcgi & .fpl scripts AddHandler fcgid-script fcg fcgi fpl # Sane place to put sockets and shared memory file FcgidIPCDir /var/run/mod_fcgid FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm IdleTimeout 300 BusyTimeout 300 ProcessLifeTime 7200 IPCConnectTimeout 300 IPCCommTimeout 7200 PHP_Fix_Pathinfo_Enable 1 php-fcgi-starter: #!/bin/sh PHP_CGI=/usr/local/php547/bin/php-cgi PHP_INI=/etc/php547-fastcgi.ini export PHP_FCGI_TIMEOUT=1200 #export PHP_FCGI_CHILDREN=6 export PHP_FCGI_MAX_REQUESTS=1000 exec $PHP_CGI -c $PHP_INI
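
    One classic cause of exactly this pair of log lines (an assumption here, not a confirmed diagnosis): PHP exits after PHP_FCGI_MAX_REQUESTS requests while mod_fcgid keeps routing new requests to the dying process, producing intermittent "connection reset by peer". The mod_fcgid documentation suggests capping its per-process request count at the same value as the starter script:

        # match PHP_FCGI_MAX_REQUESTS=1000 from the starter script above
        FcgidMaxRequestsPerProcess 1000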

    Read the article

  • esx5 debian VM vlan setup

    - by Kstro21
    I have a server with ESX5 and a switch with about 20 VLANs. This is how the trunk port is set up: interface GigabitEthernet0/1/1 description ToOper port link-type trunk undo port trunk allow-pass vlan 1 port trunk allow-pass vlan 2 to 14 stp disable ntdp enable ndp enable bpdu enable Then I created a standard switch (sw1) using the vSphere Client, with the VLAN ID set to All (4095). I also created a VM with Debian 6, with a NIC connected to sw1. Now I want to configure this NIC for a selected group of VLANs: auto vlan10 iface vlan10 inet static address 11.10.1.0 netmask 255.255.255.224 mtu 1500 vlan_raw_device eth0 auto vlan14 iface vlan14 inet static address 11.10.1.65 netmask 255.255.255.248 mtu 1500 vlan_raw_device eth0 When I restart the network using /etc/init.d/networking restart, I get this error: Reconfiguring network interfaces...SIOCSIFADDR: No such device vlan14: ERROR while getting interface flags: No such device SIOCSIFNETMASK: No such device SIOCSIFBRDADDR: No such device vlan14: ERROR while getting interface flags: No such device SIOCSIFMTU: No such device vlan14: ERROR while getting interface flags: No such device Failed to bring up vlan14. done. This is just part of the error. So, my questions are: Is this possible? I mean, can what I'm trying to achieve with ESX virtual machines, VLANs, etc. be done? Is this a Debian problem, and can it be solved? I've read about a file named z25_persistent-net.rules in Debian, but it doesn't exist in my installation. In the vSphere Networking for ESX5 guide, you can read: "If you enter 0 or leave the option blank, the port group can see only untagged (non-VLAN) traffic. If you enter 4095, the port group can see traffic on any VLAN while leaving the VLAN tags intact." So, in theory, it should work, right? Hope you can help me out with this one. Thanks
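
    "No such device" from ifup is what Debian prints when the kernel can't create the VLAN subinterface, most often because the vlan tooling and the 8021q module are missing; that's worth ruling out before suspecting anything on the ESX side:

        apt-get install vlan
        modprobe 8021q
        # load the module on every boot
        echo 8021q >> /etc/modules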

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4. I'm trying to use a separate port from the main website. So I have in /etc/apache2/sites-available (with a link from sites-enabled) a vhost configuration that looks like this: <VirtualHost *:9999> ServerName XXX ServerAdmin web-admin@XXX DocumentRoot /var/xxx MonoServerPath XXX "/usr/bin/mod-mono-server4" MonoDebug XXX true MonoSetEnv XXX MONO_IOMAP=all MonoApplications XXX "/:/var/xxx" <Location "/"> Allow from all Order allow,deny MonoSetServerAlias XXX SetHandler mono SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary </Location> <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript </IfModule> </VirtualHost> I used mono-server4-admin to create the application: mono-server4-admin --path=/var/xxx --app=/XXX --port=9999 When I start apache, it gives the error: Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX' not found. This corresponds to the MonoSetServerAlias statement. So I commented it out, and when I do that apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80, rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this: # start /etc/mono-server4/conf.d/RMRSite/10_XXX Alias /XXX "/var/xxx" AddMonoApplications default "/XXX:/var/xxx" <Directory /var/xxx> SetHandler mono <IfModule mod_dir.c> DirectoryIndex index.aspx </IfModule> </Directory> # end /etc/mono-server4/conf.d/XXX/10_XXX Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this: This is the configuration file for the XXX virtualhost path = /var/xxx alias = /XXX vhost = localhost port = 9999

    Read the article

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating them is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect, however, a quick skim through the user forums show more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP .NET website up and running on IIS6. The site will run in its own application pool, and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local users group, and the IIS_WPG group. It has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, styles and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it, but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permission to validate domain credentials? If so, how do I enable this?
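
    One well-known cause that matches repeated prompts under a custom domain account (an assumption, since it only applies when Kerberos is being negotiated): the HTTP service principal name must be registered to the account running the app pool, or integrated authentication falls back and fails per request. A sketch, with host and account names hypothetical:

        setspn -A HTTP/webserver MYDOMAIN\AppPoolAccount
        setspn -A HTTP/webserver.mydomain.local MYDOMAIN\AppPoolAccount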

    Read the article

  • Dynamic Subdomains

    - by crash
    On my new site I want to have dynamic subdomains. I'm trying to make it so that the subdomains use the same web root as the main domain, all under a single CodeIgniter installation. For example, subdomain.example.com would lead to example.com/subdomain, which is actually example.com/index.php/subdomain. I've already got the DNS and virtual hosts set up, but I'm getting caught up on the .htaccess. The effect of the linked htaccess is that any subdomain gets caught in an infinite loop. (Error log after one request.) It's the same effect for www., which should just resolve to the main domain.
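
    A sketch of a loop-safe rule set (example.com stands in for the real domain; the REQUEST_URI guard is what stops an already-rewritten request from being rewritten again):

        RewriteEngine On
        # skip requests already routed into index.php
        RewriteCond %{REQUEST_URI} !^/index\.php
        # ignore the bare domain and www
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        # capture the subdomain and route it into CodeIgniter
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        RewriteRule ^(.*)$ /index.php/%1/$1 [L]

    On the internal redirect the first condition fails, so the loop terminates; www. and the bare domain never match and fall through to the normal site.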

    Read the article

  • Storing Cards and PCI Compliance

    - by Nimbuz
    I'm developing a SaaS service and will be managing payments as a merchant for customers, and since we'll be using multiple payment processors depending on the user's location, amount and other factors, it's important to store card details. I did some research, and from what I understood all you need is a PCI-compliant host (VPS, dedicated or private cloud), validated and certified through a provider like TrustWave etc. Is that correct, or am I missing something? Also, it would be great if you could suggest a few (not necessarily cheap, but affordable) PCI-compliant hosts. Many thanks

    Read the article
