Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

  • How to map email addresses on subdomains

    - by Glen Little
    Is it possible to create email addresses like these: [email protected] [email protected] [email protected] and have them all handled by one mail server, as three different mailboxes? (Many examples I've seen talk about directing mail for [email protected] into the same mailbox as [email protected] - but this is not what I'm looking for.) I haven't specified the server technology being used because I'm wondering if this is generally possible. If you know that server X can do this, please mention it in your answer! Is it correct that MX records can be set to direct email for all subdomains (*.mydomain.com) to one mail server? Is that still true if there are also web sites at those subdomains (using A records)? Thanks!
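
    A minimal BIND-style zone sketch of one way this is commonly set up (hostnames and addresses below are hypothetical, not taken from the question): each subdomain gets its own MX record pointing at the single mail host, and the mail server is configured to treat each subdomain as a separate virtual mail domain so that user@sub1 and user@sub2 land in different mailboxes. Note that a wildcard MX only matches names that have no records of their own, so a subdomain that already has an A record for its web site needs an explicit MX entry.

      ; hypothetical fragment of the mydomain.com zone
      sub1    IN A     203.0.113.10            ; web site on the subdomain
      sub1    IN MX 10 mail.mydomain.com.      ; mail for addresses @sub1.mydomain.com
      sub2    IN A     203.0.113.11
      sub2    IN MX 10 mail.mydomain.com.
      *       IN MX 10 mail.mydomain.com.      ; only covers names with no explicit records
      mail    IN A     203.0.113.5             ; the one mail server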

    Read the article

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this (client to load balancer, and load balancer to each real server, all over https):

      Client --https--> Load Balancer --https--> server 1 / server 2 / server 3

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

      <VirtualHost *:8666>
        DocumentRoot "/usr/local/apache/ssl_html"
        ServerName vmbigip1
        ServerAdmin [email protected]
        DirectoryIndex index.html
        <Proxy *>
          Order deny,allow
          Allow from all
        </Proxy>
        SSLEngine on
        SSLProxyEngine On
        SSLCertificateFile /usr/local/apache/conf/server.crt
        SSLCertificateKeyFile /usr/local/apache/conf/server.key
        <Proxy balancer://mycluster>
          BalancerMember http://1.2.3.1:80
          BalancerMember http://1.2.3.2:80
          # technically we aren't blocking anyone, but could here
          Order Deny,Allow
          Deny from none
          Allow from all
          # Load Balancer Settings
          # A simple Round Robin load balancer.
          ProxySet lbmethod=byrequests
        </Proxy>
        # balancer-manager
        # This tool is built into the mod_proxy_balancer module and allows you
        # to do simple mods to the balanced group via a gui web interface.
        <Location /balancer-manager>
          SetHandler balancer-manager
          Order deny,allow
          Allow from all
        </Location>
        ProxyRequests Off
        ProxyPreserveHost On
        # Point of Balance
        # Allows you to explicitly name the location in the site to be
        # balanced, here we will balance "/" or everything in the site.
        ProxyPass /balancer-manager !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
      </VirtualHost>

    What I need is for the servers in my load balancer to be

      BalancerMember https://1.2.3.1:443
      BalancerMember https://1.2.3.2:443

    But that does not work. I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.
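
    A hedged sketch of the pieces that usually matter when the balancer members speak HTTPS. The directive names are standard mod_ssl/mod_proxy options, but the file path for the proxy's own client certificate is an assumption:

      # inside the same <VirtualHost>, assuming the backends use self-signed or internal-CA certs
      SSLProxyEngine On
      SSLProxyVerify none
      SSLProxyCheckPeerCN off          # relax peer-name checks if backend CNs don't match the IPs
      SSLProxyCheckPeerName off        # Apache 2.4.5 and later only
      # certificate+key the proxy itself presents to the backends (path is hypothetical)
      SSLProxyMachineCertificateFile /usr/local/apache/conf/proxy-client.pem
      <Proxy balancer://mycluster>
        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443
        ProxySet lbmethod=byrequests
      </Proxy>

    One caveat worth stating plainly: TLS terminates at the load balancer, so the end user's client certificate cannot simply be relayed to the real servers. The proxy presents its own certificate to the backends, and the client's identity is usually forwarded in request headers, unless the balancer passes the TLS stream through unterminated at layer 4 instead.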

    Read the article

  • Nginx not working properly on subdomains

    - by javipas
    I've been trying to set up a Sugar CRM instance. I've got a domain that has its main site on a server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I would need to set up a second site. To do this I've created the directory structure, and I've created a /etc/nginx/sites-enabled/sugar.domain.com configuration file that has the following:

      server {
        listen 80;
        server_name sugar.domain.com *.domain.com;
        access_log /var/www/sugar/log/access.log;
        error_log /var/www/sugar/log/error.log info;
        location / {
          root /var/www/sugar;
          index index.php;
        }
        location ~ .php$ {
          fastcgi_split_path_info ^(.+\.php)(.*)$;
          fastcgi_pass backend;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
          include fastcgi_params;
          fastcgi_param QUERY_STRING $query_string;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param CONTENT_TYPE $content_type;
          fastcgi_param CONTENT_LENGTH $content_length;
          fastcgi_intercept_errors on;
          fastcgi_ignore_client_abort on;
          fastcgi_read_timeout 180;
        }
        ## Disable viewing .htaccess & .htpassword
        location ~ /\.ht {
          deny all;
        }
      }
      upstream backend {
        server 127.0.0.1:9000;
      }

    As far as I know, I need the *.domain.com parameter on the "server_name" directive, but something is breaking here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set up the owner:group with chown -R www-data:www-data /var/www/sugar/. The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they necessarily have to be together?
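
    A few hedged sanity checks that usually narrow this kind of problem down. The tools are standard; that PHP-FPM/FastCGI should be listening on 127.0.0.1:9000 is an assumption taken from the upstream block above:

      # run on the second (nginx) server
      dig +short sugar.domain.com                  # must resolve to this server, not to the www host
      ss -lnp | grep :9000                         # is a FastCGI/php-fpm process actually listening?
      curl -H 'Host: sugar.domain.com' http://127.0.0.1/index.php    # bypasses DNS, tests nginx+PHP alone

    Raw PHP source in the browser generally means the request never reached the fastcgi_pass block (the wrong server block answered, or the .php location didn't match), while a 403 usually points at permissions or a missing index file, so the two symptoms are worth testing separately.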

    Read the article

  • Loadbalancing with nginx and tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access those by visiting the URLs:

      http://machine1:8080/appName
      http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName Here is my configuration (very basic, as can be noted):

      upstream backend {
        server machine1:8080;
        server machine2:9090;
      }
      server {
        listen 80;
        server_name www.mydomain.com mydomain.com;
        location / {
          # needed to forward user's IP address to rails
          proxy_set_header X-Real-IP $remote_addr;
          # needed for HTTPS
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_redirect off;
          proxy_max_temp_file_size 0;
          proxy_pass http://backend;
        } #end location
      } #end server

    What changes must I make so that when a user visits mydomain.com they are sent to either machine1:8080/appName or machine2:9090/appName? Thank you
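
    A minimal sketch of the usual fix, assuming the Tomcat context path really is /appName on both machines: give proxy_pass a URI part so that a request for / is rewritten to /appName/ before it reaches the upstream group.

      location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # the URI on proxy_pass is what maps / on the front end to /appName/ on the Tomcats
        proxy_pass http://backend/appName/;
      }

    The alternative (often cleaner) approach is to deploy the application as the ROOT context on each Tomcat, so no path rewriting is needed at the proxy at all.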

    Read the article

  • FreeBSD 9 (amd64) reboot/shutdown process is very slow

    - by nbari
    I have a Dell PowerEdge 2900 III with FreeBSD 9 (amd64); the server uses mfi, which handles a RAID 10. I had to reboot the server, but noticed that either when rebooting or shutting down, something goes wrong: besides taking too much time to reboot/shutdown, after rebooting I noticed that some LDAP instances within some jails couldn't start, and this was because the database was corrupted. This makes me think that probably something is wrong with the disks or the mfi card, but checking the disk array / logs, everything seems to be working fine. My setup is something like this: the host server has a minimal base of FreeBSD 9 amd64, and within it I create some jails, which contain services like MySQL, email, and some other LDAP instances. With FreeBSD 7 and 8 I didn't notice this behavior, but with FreeBSD 9 something is not working well. I did a clean installation of FreeBSD 9 and the root filesystem is using ZFS. Attached is a screen capture of the reboot, hoping someone can give me a hint of what to check or any kind of advice.
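
    A few hedged starting points on the FreeBSD side. mfiutil ships in the base system for mfi(4) controllers and the commands are standard; whether they reveal anything is of course not guaranteed:

      mfiutil show volumes      # RAID volume state as the controller sees it
      mfiutil show events       # controller event log, including battery/cache complaints
      zpool status -v           # ZFS checksum or device errors on the root pool
      sysctl kern.shutdown      # shutdown-related timeouts and knobs

    A slow shutdown combined with corrupted databases inside jails often means services are not being stopped cleanly before the filesystems go away, so checking that each jail's rc shutdown actually runs and completes is another avenue worth pursuing.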

    Read the article

  • VPN/IPsec for RD, port 80,

    - by Andrew
    I have two Windows VPSs that talk to each other.

    1) I already firewall Remote Desktop to a few IPs. I want to instead require me to connect to the server via VPN (IPsec?) in order to then be able to RD.
    2) I also want to only permit access to port 80 on the server if I am VPN'd in to the server (authenticated).
    3) If possible, I want to secure communication between the other VPSs I have using the same method?

    Thanks
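
    One common pattern, sketched with a made-up VPN subnet: terminate the VPN on the VPS (or on one the others trust), then scope the inbound firewall rules for RDP and port 80 to that subnet only, so both services are unreachable unless the VPN is up. The netsh syntax below is standard Windows firewall tooling on Server 2008 and later; 10.8.0.0/24 and the rule names are placeholders.

      rem run on each VPS from an elevated prompt
      netsh advfirewall firewall add rule name="RDP (VPN only)" dir=in action=allow protocol=TCP localport=3389 remoteip=10.8.0.0/24
      netsh advfirewall firewall add rule name="HTTP (VPN only)" dir=in action=allow protocol=TCP localport=80 remoteip=10.8.0.0/24
      rem then remove or disable any broader allow rules for 3389 and 80 so the scoped rules are the only way in

    Server-to-server traffic between the VPSs can be protected the same way, or with Windows IPsec connection security rules (netsh advfirewall consec), which encrypt host-to-host traffic without a separate VPN endpoint.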

    Read the article

  • Unable to access, make directories (and files) with ftp

    - by Kriem
    I'm having trouble with my new server and accessing its directories. I updated my proftpd.conf with:

      DefaultRoot /

    Now I'm able to see the root directory of my server. But trying to access some directories gives different results. For example, I can access /vars but I can't access /home or /root. How can I overcome this? This is what my ftp client says after trying to access /root:

      Server said: /root: No such file or directory
      Error -125: remote chdir failed

    This is what my ftp client says after trying to create a new directory in /:

      Server said: untitled folder: Permission denied
      Error -140: remote mkdir failed
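
    A hedged way to read those errors: after DefaultRoot / the FTP login is just an ordinary system user, so normal Unix permissions still decide what it can enter or create. /root is typically mode 700 and owned by root, which proftpd reports as "No such file or directory", and / itself is usually writable only by root, hence the mkdir failure. A quick check on the server (the username below is a guess):

      ls -ld / /home /root      # compare owner, group and mode with the FTP account
      id kriem                  # which uid/gid does the FTP login actually map to?

    The usual answer is not to loosen permissions on / or /root, but to point DefaultRoot at a tree the FTP user owns (for example DefaultRoot ~ or a dedicated upload area).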

    Read the article

  • DNS propagation

    - by Paddington
    I have 1 primary DNS server (ns1.mydomain.com) running on Fedora and 2 secondary ones (ns2 and ns3). DNS changes made on my web servers first go to the primary name server and then propagate to the secondary servers. After making a DNS change to a domain on the web server, I can't see the new DNS information on ns1 when I perform:

      dig @ns1 A blahblah.com

    I then went to the master records on the name server (which uses named) in the directory /var/named/run-root/var/named/masters and I see the A record has been updated appropriately. Tailing the logs in /var/log/messages is not showing any errors. What could be the issue?
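
    A couple of hedged checks on ns1 itself. The commands are standard BIND tooling; the zone name and master file name are placeholders:

      named-checkzone blahblah.com /var/named/run-root/var/named/masters/blahblah.com.db   # a syntax error stops the zone from loading
      rndc reload blahblah.com                    # make named re-read the edited master file
      dig @127.0.0.1 blahblah.com SOA +short      # the serial must have increased

    If the master file is updated on disk but named was never told to reload it, or the SOA serial was not bumped, named keeps answering with the old data even though the file looks right, and the secondaries will never pull the change either.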

    Read the article

  • What is a proper MySql replication configuration for frequent db updates and rare selects?

    - by serg555
    We currently have 1 master db on its own server and a slave db on the app server. The app executes very frequent but light updates (like increasing counters), and occasional (once in a few minutes) heavy selects (which are the most important part of the app). When the app was connected only to the master db there were no performance issues. With the slave db introduced, CPU load average on the app server increased to about 6-10 during those heavy select periods (from 3-4 as before). When the server doesn't run those frequent updates, performance for selects seems to stay within limits. So I have a feeling that those updates are what is causing the performance drop (also, these frequent updates are not critical, so if the slave db doesn't have them in sync with the master for some time it would be ok). What would be a good db replication setup for such kind of app? What are the replication parameters we could tweak? Thanks.
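
    A hedged my.cnf sketch for the slave on the app server. The option names are real MySQL settings, but the values and the table pattern are assumptions about this particular app:

      [mysqld]
      # relax durability on the replica so the constant counter updates are cheap
      innodb_flush_log_at_trx_commit = 2
      sync_binlog = 0
      # skip replicating the non-critical counter tables entirely (pattern is hypothetical)
      replicate-wild-ignore-table = myapp.counter%

    Since the updates are admitted to be non-critical, filtering them out of replication (or batching them on the master) usually buys more than tuning alone, because the slave applies the binlog with a single SQL thread in this MySQL generation and every tiny update competes with the heavy selects for the same disk and buffer pool.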

    Read the article

  • Rescue system running TFS that BSODs, into vmware esxi

    - by 3molo
    Hi. After moving to new facilities, one of our old Dell servers running Windows Server 2003 R2 on PowerEdge 2650 hardware BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have the MSSQL database(s) backed up, a recent and fresh copy. Tried removing/refitting memory modules, but with no success. The system boots into safe mode but hangs occasionally. I booted a Linux live CD and did a dd of both c: and d:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38 GB (actually it became 40 GB) partition to act as C:, and booted a live CD. I then uncompressed the compressed disk image of c: and dd'd it to the new c: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (but most likely BSOD due to not having correct drivers), but the VMware ESXi guest does not seem to recognize it as a bootable disk. We don't have the VMware enterprise license, so VMware Converter cold cloning is not an option. Did I do something wrong in my dd's etc. with the images, or why would it not (try to) boot? Am I wasting my time? What other approach is there? I will continue to try to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest?

    1. Continue to get the dd'd images onto the virtual disk and get it to boot.
    2. Install a new Windows server, get Team Foundation Server and restore from backup.
    3. Focus on the old problematic hardware.

    Any help appreciated
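
    A hedged explanation of the non-booting guest: dd'ing only c: copies a partition, not the whole disk, so the new virtual disk has no MBR, no boot code and no partition marked active, and the guest's BIOS has nothing to hand off to. The commands below are standard; the paths are placeholders.

      # either image the whole physical disk instead of a single partition...
      dd if=/dev/sda bs=1M | gzip -c > /mnt/backup/c-wholedisk.img.gz
      # ...or, after restoring the partition image, boot the guest from a Windows 2003
      # install CD, enter the Recovery Console and rebuild the boot pieces:
      #   fixmbr
      #   fixboot
      #   (then check that boot.ini points at the right disk/partition)

    Even with a bootable disk, a physical-to-virtual move of Windows 2003 commonly ends in a 0x7B STOP because the storage driver changes, so be prepared to switch the guest to an IDE virtual disk or inject a matching SCSI driver before giving up on option 1.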

    Read the article

  • Diagnosing Random Network Lag

    - by uesp
    I'm having trouble diagnosing some random lag on a 6-server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec, the servers themselves are running fine with less than 0.5 load, no locked processes, no paging, no errors being logged, etc. Lag is present on all servers and is random: one minute it's fine, the next it's there. DNS lookups on the servers are randomly slow. For example, time nslookup google.com varies randomly from a few milliseconds to several seconds and sometimes times out entirely. While we use IP addresses internally on the cluster, this may be a symptom of the root issue. We are not running our own DNS server. The Apache server-status pages randomly lag or time out. Benchmarking using ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed a lag only once among a few hundred tests). The servers are sitting behind a switch and a firewall which I don't have any access to, so I don't know their setup or status. While we are under heavier than normal load, 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall, should it? My feeling is that it is the switch/firewall or something above them in the ISP, like their DNS, but I can't confirm it. I need some other tests or methods of diagnosing this lag to try and narrow down the ultimate cause.
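
    A few hedged tests that help separate a resolver problem from plain packet loss (the addresses and interface names below are placeholders):

      mtr --report --report-cycles 100 10.0.0.2    # per-hop loss/latency from one cluster node to another
      tcpdump -n -i eth0 port 53                   # watch DNS queries leave and (not) come back
      time dig google.com @8.8.8.8                 # compare an outside resolver against the ISP's

    Transfers that take almost exactly 3000 ms are a classic sign of a lost SYN being retransmitted after the default 3-second timer, which would point at intermittent packet loss in the switch/firewall path rather than at the web servers themselves.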

    Read the article

  • NDR for meeting requests

    - by Adam
    We've got a mailbox for each department (e.g. [email protected] and [email protected]) and everyone in that department has access to it; access is granted using the Exchange Management Console. If I send a calendar invite to [email protected], I get an undeliverable report:

      Delivery has failed to these recipients or groups:

      User_A
      The e-mail address you entered couldn't be found. Check the address and try resending the message. If the problem continues, please contact your helpdesk.

      User_B
      The e-mail address you entered couldn't be found. Check the address and try resending the message. If the problem continues, please contact your helpdesk.

      User_C
      The e-mail address you entered couldn't be found. Check the address and try resending the message. If the problem continues, please contact your helpdesk.

    The users are no longer in AD or Exchange, but we cannot find any mention of them within any delegates or permissions anywhere. We only started to get this problem AFTER we upgraded our DCs from Windows Server 2003 to Windows Server 2008, and our Exchange server from Windows Server 2003 with Exchange 2005 to Windows Server 2008 with Exchange 2010.
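
    A hedged place to look, since only meeting requests (not ordinary mail) generate the NDR: delegate/forwarding settings stored on the mailbox itself rather than in AD. The Exchange 2010 cmdlets below are real; "sales" is just the example mailbox name:

      Get-CalendarProcessing -Identity sales | fl ResourceDelegates,ForwardRequestsToDelegates
      Get-MailboxFolderPermission -Identity sales:\Calendar

    If the shell shows nothing, stale hidden delegate rules left behind by the deleted users (a known leftover from older Exchange versions) can be inspected and removed with MFCMAPI on the affected mailboxes.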

    Read the article

  • FTP from batch file

    - by Buzkie
    I'm trying to use a batch file to download a package off my FTP server.

      echo username >ftp.txt
      echo >>ftp.txt
      echo cd directory >>ftp.txt
      echo get filename >>ftp.txt
      ftp -s:ftp.txt server.com

    The server is set to allow anonymous logins on username, but when I run the script I get an error:

      331 Password required for username

    If there is any other useful information let me know. -Alex
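
    A hedged sketch of the usual culprit: echo with no arguments writes "ECHO is off." rather than a blank line, so the line the server reads as the password is not what was intended. Supplying an explicit password line (for anonymous-style logins any e-mail-like string normally works) is the common fix; the address below is a placeholder:

      echo username>ftp.txt
      echo anything@example.com>>ftp.txt
      echo cd directory>>ftp.txt
      echo binary>>ftp.txt
      echo get filename>>ftp.txt
      echo bye>>ftp.txt
      ftp -s:ftp.txt server.com

    If the account genuinely has no password, echo.>>ftp.txt (note the dot) is the batch idiom for writing a truly blank line.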

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work.

    A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.

    This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement.

    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

  • Can't set up Usermin correctly to allow users to login outside of local network, what am I missing?

    - by thecraic
    I'm fairly new at creating a server, but the biggest problem I am currently having is getting Usermin set up to be accessible from outside the LAN. I talked to other people that use it and was told that all I need to do is type url:20000 to reach the login screen, but that doesn't work. I have also tried ip:20000 and that doesn't lead to anything. Instead I get the error message:

      Error - Bad Request
      This web server is running in SSL mode. Try the URL https://hostname:10000/ instead.

    (where hostname is my server's hostname) I know it must be a configuration issue, but I have checked all my settings and as far as I can tell I don't have the ports blocked anywhere. I have the correct ports forwarded on my router and my server firewall doesn't have the port blocked either. Is there anything I am missing? Any help would be appreciated and I will add more information upon request. Thank you.
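
    A hedged reading of that error page: the request did reach a Webmin/Usermin miniserv process, but over plain HTTP to a listener that expects SSL, so the service itself is not unreachable. Two things worth checking (the config path is the usual default; whether SSL is enabled on this install is an assumption):

      grep -E '^(ssl|port|listen)=' /etc/usermin/miniserv.conf
      #   ssl=1      -> the login page must be opened as https://your-host:20000
      #   port=20000 -> confirms which port Usermin itself is bound to
      /etc/init.d/usermin restart      # after any change

    The hostname:10000 in the message is the Webmin default port, so it is also worth confirming that the router forwards 20000 (Usermin) and not just 10000, and that the https:// form of the URL is being used from outside.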

    Read the article

  • High Load average threshold in linux

    - by user2481010
    One of my friends said that his server load average sometimes goes above 500-1000. For me that is a strange value, because I have never seen a load average of more than 10. I asked him to give me some snapshots of top and memory usage, and he gave the following details:

      top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
      Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
      Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
      Mem:  8161648k total, 7779528k used, 382120k free, 3296k buffers
      Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

      $ free -gt
                   total       used       free     shared    buffers     cached
      Mem:             7          6          1          0          0          4
      -/+ buffers/cache:          1          5
      Swap:            4          0          4
      Total:          12          6          6

      $ nproc
      8

    My question is: is it possible to have a load average of more than 100 on an 8-core, 12 GB mem server? I have read many tutorials and articles on load average, and they say the rule of thumb is "number of cores = max load". According to that rule of thumb the max load average here would be 16, so how is his server running with a load of 147.37? He said that this is the lowest value (147.37) and it sometimes goes above 500.
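
    A hedged explanation of how that is possible: on Linux the load average counts not only runnable tasks but also tasks in uninterruptible sleep (state D, usually waiting on disk or NFS), and the 66.5%wa in the Cpu(s) line above already points at I/O wait rather than CPU. Hundreds of processes blocked on a slow or failing storage path can therefore push the load far past the core count while the CPUs sit mostly idle. Some standard commands to see who is blocked:

      ps -eo state,pid,cmd | awk '$1=="D"'    # processes stuck in uninterruptible I/O wait
      vmstat 5                                # the "b" column counts blocked processes
      iostat -x 5                             # per-device utilisation and await (sysstat package)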

    Read the article

  • Can't get nmap to work under Windows 7 64 bit

    - by jitbit
    I'm trying to install and run the nmap tool to test my server, but it keeps saying

      Note: Host seems down. If it is really up, but blocking our ping probes, try -P0

    and showing all the server ports as closed. Which is not true - the server is up and has lots of open ports. Any ideas? UPDATE: Just to clarify - the server can be pinged and port-scanned fine by other programs. It's just nmap that does not work. Even "google.com" seems to be down for nmap.
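
    A few hedged things to try from an elevated command prompt. The flags are real nmap options; the hostname is a placeholder:

      rem 1) skip host discovery entirely (-P0 in older builds, -Pn in newer ones)
      nmap -Pn yourserver.example.com
      rem 2) a plain TCP connect scan avoids raw sockets and most WinPcap quirks
      nmap -sT -Pn -p 80,443 yourserver.example.com
      rem 3) confirm nmap/WinPcap actually sees the right network interface
      nmap --iflist

    On 64-bit Windows 7 the usual culprit is the packet-capture layer (WinPcap) rather than nmap itself, so if every host "seems down" even though ping works elsewhere, reinstalling WinPcap or falling back to the connect-style scan above is the quickest way to tell the two apart.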

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so they don't need years and years of shelf life, so I'm wondering if something HDD-based would be better.
    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network.

    Our file server is a 12-disk RAID 6 setup. I was thinking something like that, but with no RAID involved - all disks are standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me...should I be skeptical about this? I'm a little worried because my (non-technical) boss who ultimately makes the decisions is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up to me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Top - what does Virtual memory size mean? ...linux/ubuntu

    - by user42159
    I am running top to monitor my server performance, and 2 of my Java processes show virtual memory of up to 800 MB-1 GB. Is that a bad thing? What does virtual memory mean? And oh, btw, I have 1 GB of swap and it shows 0% used. So I am confused.

      Java processes = 1 Tomcat server + my own Java daemon
      Server = Ubuntu 9.10 (karmic)
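
    A hedged explanation: in top, VIRT is the total address space the process has mapped - for a JVM that includes the full heap reservation (-Xmx), thread stacks, and every shared library and mapped file - while RES is what actually sits in physical RAM. 800 MB-1 GB of VIRT for a Tomcat JVM is unremarkable, and 0% swap used confirms there is no memory pressure. To see the split for one process (the process name below is a guess):

      pid=$(pgrep -f tomcat | head -n1)
      top -b -n1 -p "$pid" | tail -n2     # compare the VIRT and RES columns for that PID
      pmap -x "$pid" | tail -n1           # the same total, broken down by mapping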

    Read the article

  • Chroot for Mysql running on Ubuntu 10.10?

    - by Calvin Froedge
    Prompted by a question about MySQL server security best practices, I've been running through this list (with a few minor alterations) to properly secure my database server: http://www.greensql.net/publications/mysql-security-best-practices On step 10, I'm told to change the root directory for the mysql user using chroot, but very few specifics are provided and I'm not sure where to start. Does anyone know of a good resource for walking me through the steps to properly create a chrooted environment for MySQL on Ubuntu 10.10?
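
    A hedged sketch rather than a tested recipe: mysqld has a built-in chroot option, so the usual approach is to build a minimal directory tree, point the option at it, and fix ownership. The option name is a real MySQL server setting; the directory layout below is an assumption:

      # in /etc/mysql/my.cnf, under [mysqld]:
      #   chroot = /var/lib/mysql-chroot
      mkdir -p /var/lib/mysql-chroot/{etc,tmp,var/lib/mysql,var/run/mysqld}
      cp /etc/mysql/my.cnf /var/lib/mysql-chroot/etc/
      chown -R mysql:mysql /var/lib/mysql-chroot
      # the datadir and the socket then live under the chroot, so clients and
      # maintenance scripts need their socket setting adjusted to match

    Worth noting: on Ubuntu 10.10 mysqld is already confined by an AppArmor profile (/etc/apparmor.d/usr.sbin.mysqld), which gives much of the same containment with far less breakage, so extending that profile is often the more practical route.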

    Read the article

  • Why does IIS refuse to serve ASP.NET content?

    - by Michael Haren
    My Windows Server 2003 Std server refuses to serve ASP.NET content. It serves regular HTML just fine, but anything .NET - even a one-line HTML file with an .aspx extension - fails silently. Things I've tried:

    - Nothing in the event log or IIS WWW logs when it fails. Fiddler shows no response.
    - I reinstalled .NET with C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -U and C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -I
    - I gave obscenely high permissions on everything I can think of (full control, read, write, etc.) to all possibly relevant users (IUSR_*, ASP.NET, etc.).
    - I confirmed that the ASP.NET v1 and v2 Web Service Extensions are "allowed" in IIS.
    - Confirmed that the Server Manager had the IIS and ASP.NET roles enabled.

    Again, this is the scenario:

      http://localhost/Test/Default.htm  <-- Works great!
      http://localhost/Test/Default.aspx <-- Bombs silently with no message at all

    Any guidance will be much appreciated! Solution: I reinstalled per the instructions below and it works now. Thanks all!

    Read the article

  • IIS, SSL with client certs on web farm

    - by Jeremy
    We're building a web service that will be deployed on an IIS 7.5 farm, and secured through SSL, and also requiring client certs that will be mapped to Active Directory accounts. My understanding is that the server cert needs to be generated for a specific server. If that is the case then we will need a server cert for each server in the farm. Because the farm will be load balanced, how do we generate client certs that will work with any of the servers in the farm?

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777 - but I am not comfortable with them being this way. So at the advice of a serverfault specialist, I had my hosting provider install ACL on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so) because all my dirs and files are owned by "abc", the group is "abc", and the 1st octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet was a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACL to not set the permission to 6 (rw-) or even 7 (rwx)? [not sure what execute does or means] Just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). Ok, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
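
    A hedged ACL sketch of the usual pattern: leave the base mode tight, grant the PHP user read/traverse on the tree, and grant it write access only on the directories it actually needs to write into. The setfacl/getfacl syntax is standard; the "nobody" user is taken from the question, but the directory layout is a guess:

      setfacl -R -m u:nobody:rX  /home/abc/public_html              # read files; X = traverse directories only
      setfacl -R -m u:nobody:rwx /home/abc/public_html/uploads      # ...and write only here
      setfacl -R -d -m u:nobody:rwx /home/abc/public_html/uploads   # default ACL so new files inherit it
      getfacl /home/abc/public_html/uploads                         # verify what is in effect

    On the execute bit: for files it means "can be run as a program", but on directories it means "can be entered/traversed", which is why the read-only grant above still allows x on directories. Static files (plain .html, images) are normally read by the web server's own user rather than the PHP user, so a similar read/traverse grant for that account covers them as well.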

    Read the article
