Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.

Page 642/707 | < Previous Page | 638 639 640 641 642 643 644 645 646 647 648 649  | Next Page >

  • PST backup with Volume Shadow Copy Service

    - by NoMadMan
    I was asked to back up 35 PST files ranging from 800 MB to 2000 MB. Users work on Windows XP and Windows 2000 workstations, and we have a Windows 2000 domain controller that we use to back up files onto 3x 500 GB external hard drives. I found several methods, from applications to scripts; local or remote applications would be my last resort. I came across CopyWithVss, a script based on the Volume Shadow Copy Service. I wanted to know whether there would be a problem if the path had spaces. Would mounting the destination path of each PST folder with a drive letter be more practical? My concern with the mounting option is that I would eventually run out of letters, since I have 35 and possibly more workstations to back up. Lastly, can someone give me an example of CopyWithVss run on a production network? The script is a bit cryptic even after several readings. Where in the script do I enter the source and the destination? I'm a Mac user, so please excuse my ignorance of the Windows platform.
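
    A minimal sketch of the snapshot-then-copy pattern that CopyWithVss wraps. It uses diskshadow, which ships with Server 2008 and later; on the Windows 2000/XP machines here, vshadow.exe from the VSS SDK plays the same role. All paths and the alias name are hypothetical. It also answers the two questions above: quoted paths with spaces are fine, and reusing one expose letter per run means 35+ workstations never exhaust drive letters.

        REM backup-pst.dsh -- run as: diskshadow /s backup-pst.dsh
        SET CONTEXT PERSISTENT NOWRITERS
        ADD VOLUME C: ALIAS PstSnap
        CREATE
        EXPOSE %PstSnap% Z:
        EXEC C:\backup\copy-psts.cmd
        UNEXPOSE Z:

        REM C:\backup\copy-psts.cmd -- source is the exposed snapshot, so open PSTs copy cleanly
        robocopy "Z:\Documents and Settings\jdoe\Outlook Files" "E:\PST Backups\jdoe" *.pst /R:2 /W:5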

    Read the article

  • How to connect the virtual networks of VMware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single VMware Workstation host. All virtual machines are connected via a "host only" network. This runs fine with up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines. Details about the environment and applications:

        - Host PCs run Windows XP in a corporate intranet.
        - The VMware version is Workstation 6.5.
        - Guests run Windows Server 2003.
        - All guests act as web servers.
        - One guest additionally acts as a Windows file server, offering shared folders for the other guests to connect to.

    Restrictions:

        - VMware guests shall not be visible from the intranet.
        - Changes to the host PCs are restricted by corporate policy.
        - No domain controller exists in the virtual network; all virtual machines are members of the same workgroup.
        - Running the virtual network as NAT is possible. Port forwarding might be used if it does not conflict with ports used by the host PC.

    Looking for a solution, I found hints about using router or VPN software on the hosts, but without any details on how to set it up. (I found a similar question, Sharing the network between 2 VMware hosts, but the answer was not sufficient for me.)
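
    One approach, an assumption on my part rather than anything the question confirms: bridge each host's host-only adapter (VMnet1) with a TAP interface in Windows' Network Connections, then join the two TAPs with OpenVPN's static-key point-to-point mode so all guests share one layer-2 segment. Hostnames and the key path are hypothetical.

        # hostA.ovpn -- mirror on hostB without the "remote" line
        dev tap
        remote hostB.corp.example 1194
        secret c:/openvpn/static.key
        # no ifconfig line: the TAP is bridged, so the guests keep their own
        # host-only addresses and reach each other directly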

    Read the article

  • Can I autoregister my server's hostname in my local DNS?

    - by Christian Wattengård
    We have evaluated a W2k12 server as a domain controller at work. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP for my "pop-up" dev servers too.) We need to rework our work network for several odd reasons, and in this new scenario there was no money for an extra Windows 2012 license. We have at our disposal several old boxes that run Linux quite well. Is it possible to set up a DNS-server "appliance" that somehow autoregisters hostnames? Scenario:

        - Router (N66U) on 172.20.20.1, running DHCP on the 172.20.20.100-200 range
        - Server [verdant], of a *nix flavor, on 172.20.20.2
        - Laptop [speedy], W8, DHCP-assigned
        - Laptop [canary], W8, DHCP-assigned
        - Desktop [lianyu], Ubuntu, DHCP-assigned

    What I would like is for all of the above machines (except possibly the router) to be available as verdant.starling.lan, canary.starling.lan, and so on. This is how it works right now (except for the Ubuntu box; I haven't cracked that one yet), because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment?
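
    One way to get this, as a sketch: run dnsmasq on the Linux box, assuming it takes over DHCP from the router, since handing out leases is how dnsmasq learns each client's name.

        # /etc/dnsmasq.conf on verdant (172.20.20.2)
        domain=starling.lan
        local=/starling.lan/
        expand-hosts                  # verdant itself resolves via /etc/hosts
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-option=option:router,172.20.20.1
        # Windows and Ubuntu clients send their hostname in the DHCP request by
        # default, so each lease automatically becomes <name>.starling.lan

    Turn the router's DHCP server off and advertise 172.20.20.2 as the DNS server.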

    Read the article

  • How can I configure myhostname to work with Postfix?

    - by John Kelly Ferguson
    I'm going through the process of setting up a Discourse forum on my server (Ubuntu 12.04 x64) and am getting stuck at the point where I have to configure mailers. I'm following Discourse's instructions and am stuck trying to configure Postfix for Mandrill. It says to check my fully qualified domain name by typing hostname -f. When I enter hostname -f, I get localhost. As far as I know, entering hostname -f should return mydomainname.com. When I just enter hostname, I get mydomainname, which is correct because that is what I set my hostname to in /etc/hostname. Looking at some of my other settings, my /etc/hosts file reads:

        127.0.0.1 localhost mydomainname

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    And in my /etc/postfix/main.cf file, I have myhostname set like this:

        myhostname = mydomainname.mydomainname.com

    (Should this be myhostname = mail.mydomainname.com instead?) And mydestination is the following:

        mydestination = mydomainname.com, localhost, localhost.localdomain, localhost

    I'm not that familiar with configuring hostnames. I've been reading Postfix's instructions but haven't been able to figure it out yet. Any help on how to get this to work would be greatly appreciated. Thanks.
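
    The conventional fix, as a sketch (my assumption; "mydomainname" stands in for the real name throughout): give the FQDN its own /etc/hosts entry so hostname -f can resolve it, then point myhostname at a name that actually exists. hostname -f returns the first name on the matching /etc/hosts line, which is why the current single line returns localhost.

        # /etc/hosts
        127.0.0.1   localhost
        127.0.1.1   mydomainname.com mydomainname

        # /etc/postfix/main.cf
        myhostname = mydomainname.com    # or mail.mydomainname.com if that is the MX host
        mydestination = mydomainname.com, localhost.localdomain, localhost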

    Read the article

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd-party app that "makes a call" to write files to a file share on our network using the currently logged-in Windows domain user's credentials. Meaning: the app doesn't pass its own credentials but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for document control (think svn/git, I guess; similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine, but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd-party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

        1. Make the share a hidden share ($). This definitely helps; most users would have zero clue how to get to such a share. Solves probably 95% of my issue.
        2. Go one step further and set the "Hidden" attribute on the folders in the hidden share. Even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see the folders in that share without changing their Explorer options to "show hidden files/folders".

    Any other ideas on how I can lock this down? The users still need modify rights to this share and its folders, since the 3rd-party app relies on their Windows permissions to that location when copying files into it. I can't really use 3rd-party tools to password-protect the folder/share without breaking the app's functions.
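
    A sketch of the two steps from the list above, with hypothetical share and group names; the NTFS modify rights the app needs are untouched, since this only affects visibility:

        :: create the hidden share; the trailing $ keeps it out of browse lists
        net share repo$=D:\DocRepo /grant:"DOMAIN\DocUsers",CHANGE
        :: set the Hidden attribute on everything inside it
        attrib +h "D:\DocRepo\*" /s /d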

    Read the article

  • Share one SSL certificate between multiple vhosts

    - by Cesar
    I have a setup like this:

        <VirtualHost 192.168.1.104:80>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            ServerName domain2
            DocumentRoot /home/domain2/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            DocumentRoot /home/domain3/public_html
            ServerName domain3
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:443>
            ServerName domain3
            SSLCertificateFile /usr/share/ssl/certs/certificate.crt
            SSLCertificateKeyFile /usr/share/ssl/private/private.key
            SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
            ...
        </VirtualHost>

    I want to use domain3's certificate for the other domains, preferably without having to repeat the whole <VirtualHost 192.168.1.104:443> config. In other words, I want something like this: if a vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt). Notes: 1. I will for sure be setting up more vhosts in the future. 2. I know about (and don't care about) the SSL warnings the browser will show (hostname mismatch). Is this possible? How?
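
    A sketch of what I believe should work, worth verifying against your Apache version: mod_ssl directives set at server scope are inherited by SSL vhosts that don't override them, so the certificate lines only need to appear once.

        # httpd.conf, outside any <VirtualHost>
        SSLCertificateFile    /usr/share/ssl/certs/certificate.crt
        SSLCertificateKeyFile /usr/share/ssl/private/private.key
        SSLCACertificateFile  /usr/share/ssl/certs/bundle.cabundle

        <VirtualHost 192.168.1.104:443>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            SSLEngine on
            # no certificate directives here: the server-level ones apply
        </VirtualHost>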

    Read the article

  • Low-profile, PCI Express, x1 video card with VGA-out?

    - by Dandy
    I just bought an Acer Aspire EasyStore H340 system (http://us.acer.com/acer/productv.do?LanguageISOCtxParam=en&kcond61e.c2att101=54825&sp=page16e&ctx2.c2att1=25&link=ln438e&CountryISOCtxParam=US&ctx1g.c2att92=450&ctx1.att21k=1&CRC=936243954). It comes with Windows Home Server, which I don't particularly care for (too dumbed down for my liking). I bought the box mainly for the form factor and intend to install Server 2008 on it and have it run as a small domain controller. The geniuses at Acer, however, went out of their way to ensure you can only run Home Server: you can only connect to it via the Home Server Connector software, as it has no video out whatsoever (so essentially there's no way to even get into the BIOS). It only has a low-profile PCI Express x1 slot. It turns out to be much harder than I thought to locate a video card that both has an x1 connector and is low-profile (I'd really rather not snip the bracket at the back just so it'll fit the case). I know they're out there, and I've seen one with DisplayPort outputs, but I don't have a monitor with that connection. So to reiterate, it needs to be:

        - x1 connector
        - low profile
        - VGA out (though DVI would be okay; I have some spare adapters)

    Can anyone recommend anything at all?

    Read the article

  • Sending emails from an external subnet in VMware ESXi

    - by user80658
    This might be a bit hard for me to explain, and it is a pretty individual situation. I have a dedicated server at Hetzner (www.hetzner.de). The public IP is 88.[...].12. I have ESXi running on this server. I can access the ESXi console via the public IP, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public IP (88.[...].26). This additional public IP belongs to the first virtual machine, a firewall appliance, which is connected to the WAN. It needs to be done this way, since it is the official way at Hetzner. My 46.[...] subnet is behind the firewall. I have a Virtualmin server with a Dovecot IMAP/POP3 server. When sending an email, most providers (Gmail) will accept those mails, but a lot will put them into spam (AOL). My theory is: the MX record of my domain of course points to the IP of the virtual machine (46.[...]), but the raw email shows that it was sent from the IP of the firewall (88.[...].26), which doesn't look trustworthy. A solution would be for the firewall to handle mail, but it simply can't. How can I prevent this problem? Thanks.
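
    One mitigation worth trying (an assumption on my part; the real addresses are redacted above, so the values below are placeholders): publish an SPF record that names the firewall's public IP as a permitted sender for the domain, so receivers stop treating the MX/sender mismatch as forgery.

        ; zone file for the mail domain; substitute the real 88.x firewall IP
        example.com.  IN  TXT  "v=spf1 mx ip4:198.51.100.26 ~all"

    Setting up reverse DNS (PTR) for the firewall IP to match the sending hostname helps with the same class of spam scoring.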

    Read the article

  • G4 server running slow

    - by Abby Kach
    I have HP ProLiant ML350 servers. We have 8 remote locations where users connect and log on to our server through DynDNS to access our company ERPs for day-to-day work. Our ERPs are based on Oracle, for which we have a separate server. Now the problem is that the load on the server is increasing day by day, the speed is getting slower and slower, and users are facing a lot of issues, so we are planning to implement SonicWall VPN. I ran a SonicWall demo, but it was slower than the current speed through DynDNS. The configuration of my servers is as follows:

        - Linux: HP ProLiant 370, Intel Xeon 3.20 GHz, 150 GB (72 * 2), 3 GB RAM, SuSE
        - Omega: HP ProLiant 370, Intel Xeon 3.20 GHz, 300 GB (72.8 * 4) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition
        - Storage Box: HP StorageWorks 1400, Intel Xeon 2.00 GHz, 4 TB (1 TB * 4) RAID 5, 2 GB RAM, Windows Server 2K8 Enterprise Edition
        - Domain & Terminal: HP ProLiant 350, Intel Xeon 3.20 GHz, 250 GB (72.8 * 3) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition

    Can someone help me speed up the network at the remote locations and reduce the speed problems?

    Read the article

  • Exchange 2003: Accounts with only OWA access unable to change passwords when expired or forced

    - by radioactive21
    We have accounts with only OWA access, because they are generic accounts and we do not want them used as machine logins. We have a password policy that users must change their passwords every 6 months. The problem we are having is that since these accounts do not log in to machines, when the password policy kicks in it prevents users with OWA-only access from changing their password. Selecting "User must change password at next logon" causes the same issue. We have two Exchange servers, the main one and a front-end one. What we have been doing with these generic accounts: in properties, under the "Account" tab, we restricted "Log On To" to the front-end server. Just to clarify: when we have no restrictions, users can change their passwords via the web without any issues. It is only when we force them to log in only via OWA that they can't change passwords. I tried adding our domain controller and main Exchange server to the "This user can log on to the following computers" list in the Account tab, but it still does not let them change passwords. Currently I have to reset passwords for OWA-only accounts manually. Is there any way to allow OWA accounts to change passwords? EDIT: Users restricted to OWA only can change their password via the web browser without any issues when there are no restrictions. In other words, normally they can just log in to Outlook via the web and change their password, but when the password policy expires their password, or we force a change at next login, they are unable to.
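
    A possibility the question doesn't cover, so treat it as an assumption to verify (Microsoft's KB833734 documents the feature): OWA 2003's change-password page is served by the IISADMPWD virtual directory, which is not enabled by default.

        :: allow password changes through IIS (0 = over SSL only)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W3SVC\Parameters /v PasswordChangeFlags /t REG_DWORD /d 0
        :: then create a virtual directory named IISADMPWD under the Default Web
        :: Site on the front-end server, pointing at %windir%\system32\inetsrv\iisadmpwd,
        :: and restart IIS:
        iisreset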

    Read the article

  • Setting up an Apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache, so please be kind; I'm also aware of this question, but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it from x.x.x.x/icinga, but I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my barebones vhost statement in httpd.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    I also have the following in my .htaccess file:

        <Directory>
            Allow From All
            Satisfy Any
        </Directory>

    An entry has been made for the instance in the Windows DNS server on my network, but when I try to access the site by URL I am greeted with an Internal Server Error. Reviewing /var/log/icinga.com-error_log, I see the following entry:

        [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
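
    The alert line names the problem: <Directory> is only valid in the server config, not in .htaccess. Either of these sketches resolves it; the first keeps the .htaccess, the second moves the block into the vhost.

        # .htaccess -- directives go in bare, no <Directory> wrapper
        Allow from all
        Satisfy Any

        # or, inside the <VirtualHost> in httpd.conf
        <Directory /usr/share/icinga>
            Allow from all
            Satisfy Any
        </Directory>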

    Read the article

  • Uninstalling WSUS from SBS 2008

    - by Logik
    I am not a very experienced system admin, but I came across a client who was running SBS 2008. The server was running out of HDD space, so to recover some I removed its WSUS role (it was not needed); this removed WSUS 3.0 SP1 and freed a lot of space. This SBS box is the domain controller, DNS, DHCP, and file server. After I removed WSUS, I disabled the Windows Update service, rebooted the server, and checked from one of the clients that the shared folders were accessible. They were. The next day, all of a sudden, I got a call from them saying they can't log in to their domain. I looked into the server: the Active Directory service was stopped. I don't remember touching any service other than Windows Update. How come the AD service stopped running all of a sudden? Does removing WSUS have such an impact? I am not aware of any such thing.
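
    A first diagnostic pass I would run (standard Server 2008 tooling, not something the question mentions):

        REM are AD DS and the services it depends on running?
        sc query ntds
        sc query netlogon
        sc query dns
        REM full health check; the output usually names what failed to start and why
        dcdiag /v
        REM recent Directory Service events around the reboot often point at the culprit
        wevtutil qe "Directory Service" /c:20 /rd:true /f:text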

    Read the article

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2, and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with:

        Error: Connection timed out
        Error: Could not connect to server

    Here is what I did. With the Server Manager, I installed the roles FTP Server, FTP Service, and FTP Extensibility. In Internet Information Services 7.5, I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support on the main server node (just after the start page), I set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions and then Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I set Require SSL connections; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group's inbound firewall rules I've set ports 21 and 49100-49250 to accept connections from everywhere. What else should I be checking to solve the problem?
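
    Two things worth checking that the question doesn't cover (assumptions on my part): the instance's own Windows Firewall, and the stateful FTP inspection helper, which cannot read the PASV/PORT exchange once Require SSL is on and is a known breaker of FTPS data connections.

        :: open the control and passive data ports in Windows Firewall
        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTP passive" dir=in action=allow protocol=TCP localport=49100-49250
        :: stop the stateful FTP helper from mangling TLS-encrypted FTP
        netsh advfirewall set global StatefulFtp disable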

    Read the article

  • Storing Windows Updates for reuse

    - by Saiyine
    At work we update lots of computers using Windows Update: Windows XP and 7 all day long, rarely some Vista. We do it through a corporate proxy, as connecting them to a domain to build a WSUS server is out of the question, so we download about two gigabytes of the very same updates every day. I've tried WSUS Offline. It's pretty complete, but when it finishes it's common to still be missing hundreds of megabytes of updates, because its intention is not to fully update a system but to install the critical updates, as the developers explain in the forums. Now I'm trying Autoupdater. It's far worse, with poor support for non-English Windows XP, but at least it offers the option to install non-critical updates on Windows 7. It still misses hundreds of megabytes of updates after "fully" updating the system. And finally, neither installs the driver-related updates for Windows 7, so at most they save us a couple of hundred megabytes and one reboot (with the associated login to the computer and to the proxy) out of three or four. So, is it possible to somehow extract the installed updates from a Windows 7 system so we don't have to download the same updates again and again, at least for machines with the same hardware? Or, even better, to build a generic package with all the updates?
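
    For the "same hardware, same updates" case, a crude but workable sketch (assumption: you keep the downloaded .msu packages, fetched once from the Microsoft Update Catalog, in a share; the path is hypothetical):

        REM replay-updates.cmd -- install every cached package silently, then reboot
        for %%u in (\\server\updates\win7-x64\*.msu) do wusa "%%u" /quiet /norestart
        shutdown /r /t 0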

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, an open-source project. It then set up a redirect for all traffic on my domain on port 80 to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

    My question is: are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not e-commerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere... (screenshot omitted)
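
    The [R] flag without an argument issues a 302, so if ranking transfer is the concern, the permanent variant is a one-flag change (a sketch of the same rule):

        RewriteEngine on
        RewriteCond %{HTTPS} !=on
        RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]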

    Read the article

  • How do I start the Workstation Service so I can use `net use`?

    - by nitefrog
    I have a Windows 7 machine that logs into a domain. The machine can net view and see the different shares, but when I try net use * \\name\share, I get an error stating that the service has not been started; yet when I issue net start, it states the service is already started. My other Win7 machines work fine; they can see and use any of the shares. Is there a security setting that needs to be disabled or enabled? I really need to get this working, and I have no other ideas, as the other machines have no problem accessing the shares on different systems. The error I am getting is "The Workstation service has not been started", but as I said, other machines can connect fine, and when I issue net start workstation, it states the service is already started. In addition, the error number I am receiving is 2138. UPDATE: From the troubled machine, if I issue net view \\name, I can see all the shares on the machine I want to connect to. When I try net use * \\name\sharename, I get the error "The Workstation service has not started". I have set both sc config lanmanworkstation start= auto and sc config lanmanserver start= auto on the Windows 7 computer that is having issues. I have rebooted the computer and still no dice. I can net view any computer on the network and see all shares, but I cannot access any of the shares that I can see. In the registry under HKLM\System\CurrentControlSet\Services, both LanmanServer and LanmanWorkstation have Start set to 2. (Screen captures of the net use/net view output and of the Services console omitted.) This is really weird. What am I missing? It has to be a security setting...
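
    One avenue not shown above (an assumption): error 2138 with the service supposedly running can indicate a broken dependency chain rather than the service itself, which sc can show.

        :: list what the Workstation service depends on
        sc qc lanmanworkstation
        :: on Windows 7 the SMB redirector drivers are among them; STOPPED here
        :: would explain error 2138 despite "net start" claiming all is well
        sc query mrxsmb10
        sc query mrxsmb20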

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English; I'm still learning it. Here it goes: when I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path he wants to retrieve. After the key exchange, all data can be securely exchanged. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS?

    Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword-based domain blocking easier for censorship authorities.
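
    The leak is easy to demonstrate with standard tools (the exact Wireshark field name varies between versions, which is an assumption to check):

        # the hostname travels in cleartext in the TLS ClientHello's SNI extension
        openssl s_client -connect example.org:443 -servername example.org </dev/null
        # a capture shows it on the wire
        tshark -i eth0 -f "tcp port 443" -Y "ssl.handshake.extensions_server_name"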

    Read the article

  • Hosting company blocking Google bots and crawlers

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was working well until the hosting company started blocking Google's bots. Many of my pages used to appear on the first page of Google search results; after the blocking started, my links no longer appeared on the first page; they appeared after 5 pages, or did not appear at all. Can hosting companies really be so careless as to block crawlers and not mention it to their users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I got only half the usual revenue in the first 10 days. I have asked other hosting companies, like Bluehost and Monster Host, about transferring my domain, on the condition that they will not block Google's bots, which indirectly stops my business. Any kind of help would be appreciated: how can I claim what I lost from the hosting company, and which other hosting companies are considerate of their users (by announcing events like changing the IP or blocking Google's bot)? I worked really hard to bring up my site, and these people crashed it in a few days. :-(
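
    A quick way to document the blocking from outside, useful for a compensation claim (a sketch; the hostname is a placeholder): compare responses with and without Googlebot's user-agent string.

        curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://yoursite.example/
        curl -I http://yoursite.example/
        # a 403 or connection reset on the first but 200 on the second is evidence
        # the host is filtering on user agent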

    Read the article

  • Symantec Antivirus Corporate -- two problems

    - by Alex C.
    We have a Windows network with a domain and about 50 clients. A few months ago, we installed Symantec Antivirus, Corporate Edition ver. 10.1.8.8000. There are two problems. The larger problem is that the software isn't very good at stopping viruses. In the last month, four different machines have become infected with those viruses that masquerade as antivirus software. Two machines I was able to clean with MalWareBytes. The other two were hopeless, and I had to reinstall Windows. Is there something I can do to make the Symantec product more effective? As far as I can tell, it successfully updates definitions nightly and pushes the definitions to the clients. The smaller problem is that the Symantec client applications sometimes initiate scans at random (and inappropriate) times. One of my co-workers complained to me yesterday that her computer was running very slow. I looked at the scan history and found that Symantec had scanned the computer three times during the past two days, and each time during the workday. No threats were found. Not sure why it's doing this, but I'd like it to stop. Any help would be appreciated. Thanks.

    Read the article

  • How do you apply proxy settings per computer instead of per user?

    - by Oliver Salzburg
    So far, I've used a user group policy object utilizing Internet Explorer Maintenance to set a proxy for the user in IE. We have now deployed the Enterprise Client (EC) starter group policy to our domain, and this policy affects that behavior. The EC group policy uses the policy Make proxy settings per-machine (rather than per-user). This policy describes itself as: "This policy is intended to ensure that proxy settings apply uniformly to the same computer and do not vary from user to user." Great! This seems to be an improvement over my previous setup. "If you enable this policy, users cannot set user-specific proxy settings. They must use the zones created for all users of the computer." What zones, and where do I configure the proxy settings for them? I assumed the policy would simply take the user settings and apply them, but that's not what's happening: now no proxy server is set at all, so my previous settings obviously no longer have any effect. So far, I've only come up with solutions that involve direct manipulation of the Windows registry. Which is fine, I guess, but the way the proxy is configured for users makes it appear as if there should be a higher-level approach.
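
    For the registry route mentioned above, a sketch of the per-machine values (the proxy address is a placeholder; with the per-machine policy enabled, WinINET reads these from HKLM rather than HKCU):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.corp.example:8080" /f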

    Read the article

  • Nginx doesn't find the directory but Apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtual host:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is that Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint of why nginx doesn't work. EDIT: The problem was that this isolated config wasn't being included by nginx.
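
    Separate from the include problem found in the edit above, the doubled location block is redundant; a sketch of the same proxy stanza without the accidental nesting:

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;
        }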

    Read the article

  • Optimize mod_rewrite in .htaccess

    - by clarkk
    I have some mod_rewrite conditions in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the file. When requesting https:// I get a 404 error: "The requested URL /_secure/_secure/ was not found on this server." Somehow it adds an extra _secure to the path? The .htaccess:

        # set language
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
        RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]

        # put 'www' as subdomain if none is given
        RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]

        # rewrite subdomain
        RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
        RewriteRule ^(.*)$ /_%1/$1 [L]

        # redirect to subdomain
        RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

        # rewrite 'secure' subdomain
        RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ /_secure/$1 [L]

        # rewrite 'api' subdomain
        RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_api/ [NC]
        RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]

        # redirect non-active subdomain to 'www'
        RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com [L,R=301]

        # hide file extensions
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
        RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
        RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]
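
    One likely cause of both the loops and the doubled /_secure/_secure/ (my reading, not confirmed anywhere above): per-directory rewrites re-run the whole file against each internally rewritten URL, so a guard at the top that bails out after the first pass is a common fix.

        # stop re-processing once an internal rewrite has already happened
        RewriteCond %{ENV:REDIRECT_STATUS} !^$
        RewriteRule ^ - [L]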

    Read the article

  • How to secure a group of Amazon EC2 instances

    - by ks78
    I have several Amazon EC2 instances running Ubuntu 10.04, and I've recently started using Amazon's Route 53 as my DNS. The purpose of doing that was to allow the instances to refer to each other by name rather than by private IP (which can change). I've pointed my domain name (via GoDaddy) to Amazon's name servers, allowing me to access my EC2 web servers. However, I noticed I can now access the EC2 instances which I don't want to be public, such as the dedicated MySQL server. I was thinking Amazon's security groups would still be in effect when using Route 53, but that doesn't seem to be the case. Before I started using Route 53, I was thinking of having one instance run a reverse proxy, which would help protect the web servers behind it, and then IP-restricting all the other instances. I know IP restricting can be done using the firewall within each instance, but should I ever need to access them from another IP address, I'd need a way in; Amazon's control panel made it a breeze to open a port when necessary. Does anyone have any suggestions for keeping EC2 instances secure but also accessible to their administrator? Also, what's the best topology for a group of EC2 instances, consisting of web servers and a dedicated database server, from a security perspective? Does having a reverse proxy server even make sense?
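
    Route 53 only maps names to addresses; it cannot bypass security groups, so if the MySQL instance answers, its group has a rule permitting that traffic. The usual tiering looks like this sketch (modern AWS CLI; the group names and the admin address are hypothetical):

        # web tier open to the world
        aws ec2 authorize-security-group-ingress --group-name web-sg \
            --protocol tcp --port 80 --cidr 0.0.0.0/0
        # database reachable only from instances in the web tier's group
        aws ec2 authorize-security-group-ingress --group-name db-sg \
            --protocol tcp --port 3306 --source-group web-sg
        # admin SSH from one known address
        aws ec2 authorize-security-group-ingress --group-name db-sg \
            --protocol tcp --port 22 --cidr 203.0.113.5/32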

    Read the article

  • Nginx server returns 301 Moved Permanently

    - by user145714
    When I do curl -v http://site-wordpress.com:81, I receive this result:

        * About to connect() to site-wordpress.com port 81 (#0)
        *   Trying ip... connected
        * Connected to site-wordpress.com (ip) port 81 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        > Host: site-wordpress.com:81
        > Accept: */*
        >
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: [the URL above]/xmlrpc.php
        < Location: [the URL above]

    It seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;   # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
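
    A guess consistent with the symptoms (not something the output above proves): WordPress, not nginx, issues the 301 when its stored site URL doesn't match the requested host:port. Pinning the URLs in wp-config.php tests that cheaply; the hostname is the one from the question.

        // wp-config.php -- stop WordPress canonical-redirecting away from :81
        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');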

    Read the article

  • Strange WiFi problem - network is "not connected" but works

    - by GalacticCowboy
    Background

    I'm using a Windows XP tablet PC connected to my home network. I do broadcast the SSID, but otherwise the wireless network is locked down as tight as I can make it: WPA2 with a strong key, MAC filtering, etc. I've had this computer for about 4 years and the router for about 6 months. Before that, the previous router was set up in the same way, and I've never had any particular problems.

    Problem

    This morning (as I type this, as a matter of fact) Windows is reporting that none of my network connections are connected. Yet somehow my Internet connectivity still works! ipconfig reports a valid IP address, domain, etc., even though Windows apparently doesn't know about it. I tried repairing the connection and it had no effect; the repair eventually timed out. I'm pretty sure that rebooting the computer and/or the router would fix the problem. However, I'm more interested in knowing if any of you have ever seen anything like this, and what might have caused it? Since it works, I'm not inclined to mess with it too much. My concern is that it's a precursor to bigger problems.

    Read the article
