Search Results

Search found 12484 results on 500 pages for 'host'.

  • Apache on CentOS opening default page on HTTPS

    - by Asghar
    I am new to Apache, SSL and configuration. I got a VeriSign certificate to secure my site; I have the public, private and ca_intermediate cert files. I have configured ssl.conf as below:

        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com

            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the Apache default page. The HTTPS indicator is green, though, which means my certificates are installed correctly. How can I get rid of this situation? Thanks

    EDIT: Output of apachectl -S:

        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK
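
    A hedged reading of that output, with a sketch of one possible fix: apachectl -S warns that NameVirtualHost 82.56.29.189:443 has no VirtualHosts, while the SSL virtual host is declared against _default_:443, so requests arriving on the name-based 443 address can fall through to the main server config -- i.e. the default page. Binding the SSL vhost to the same address the NameVirtualHost line declares (the IP is copied from the output above) would rule that mismatch out:

        # /etc/httpd/conf.d/ssl.conf -- a sketch, not a tested configuration
        NameVirtualHost 82.56.29.189:443

        <VirtualHost 82.56.29.189:443>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot /var/www/mydomain.com/web/
            SSLEngine on
            # certificate and log directives as in the existing ssl.conf
        </VirtualHost>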

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu network machine running the 10.04.1 LTS (Lucid) release. On this system I have Apache, PostgreSQL and Django. For some app development I had to install PHP and php-curl. Because the machine is on a network, I exported a VMware machine to the Internet; first I upgraded the system, then installed the php5 packages on it. I don't know whether this is about Django or the Apache configuration -- maybe some Apache settings changed. In that case, where in Apache should I look? After putting everything back in its old place, I see that the new system's queries are slow compared to before:

        Old system query time: 140 ms
        New system query time: 9.11 s

    I have checked /etc/network/interfaces and there seems to be no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf and only the hosts section differs from the old system, which had:

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    Then I timed a lookup:

        $ time host -t A services.myapp.com
        real    0m0.355s
        user    0m0.010s
        sys     0m0.020s

    And I checked HostnameLookups in apache2:

        $ find /etc/apache2/ -type f | xargs grep -i HostnameLookups
        /etc/apache2/apache2.conf:# HostnameLookups: Log the names of clients or just their IP addresses
        /etc/apache2/apache2.conf:HostnameLookups Off

    Now what can I check to get the system performing as before?
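
    Since name resolution and HostnameLookups check out, one way to tell whether the missing nine seconds are spent inside PostgreSQL itself is to enable slow-query logging there. A minimal sketch, assuming the stock PostgreSQL 8.4 paths that 10.04 ships with:

        # /etc/postgresql/8.4/main/postgresql.conf (path assumed for Lucid)
        log_min_duration_statement = 200    # log every statement slower than 200 ms

        # reload and watch the log:
        sudo /etc/init.d/postgresql-8.4 reload
        tail -f /var/log/postgresql/postgresql-8.4-main.log

    If the statements show up fast in the log, the time is being lost between Apache/Django and the database rather than in the queries themselves.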

  • Trouble with nginx and serving from multiple directories under the same domain

    - by Phase
    I have nginx set up to serve from /usr/share/nginx/html, and it does this fine. I also want it to serve from /home/user/public_html/map on the same domain. So:

        my.domain.com      would get you the files in /usr/share/nginx/html
        my.domain.com/map  would get you the files in /home/user/public_html/map

    With the below configuration (/etc/nginx/nginx.conf) it appears to be going to my.domain.com/map/map, as shown by this:

        2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html"
        is forbidden (13: Permission denied), client: <edited ip address>, server: _,
        request: "GET /map/ HTTP/1.1", host: "<edited>"

    I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated.

        #######################################################################
        #
        # This is the main Nginx configuration file.
        #
        #######################################################################

        #----------------------------------------------------------------------
        # Main Module - directives that cover basic functionality
        #----------------------------------------------------------------------

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        #----------------------------------------------------------------------
        # Events Module
        #----------------------------------------------------------------------

        events {
            worker_connections 1024;
        }

        #----------------------------------------------------------------------
        # HTTP Core Module
        #----------------------------------------------------------------------

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name _;
                #access_log logs/host.access.log main;

                location / {
                    root /usr/share/nginx/html;
                    index index.html index.htm;
                }

                location /map {
                    root /home/user/public_html/map;
                    index index.html index.htm;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }

            include /etc/nginx/conf.d/*.conf;
        }
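
    The usual fix for the doubled path, sketched here: with root, nginx appends the full request URI to the path, so GET /map/ becomes /home/user/public_html/map/map/; alias substitutes the matched prefix instead:

        location /map/ {
            alias /home/user/public_html/map/;
            index index.html index.htm;
        }

    Separately, the "(13: Permission denied)" part of the log line may also need attention: the nginx worker user needs execute (traverse) permission on /home/user and /home/user/public_html, not just on the map directory itself.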

  • Cannot Connect To VMware Guest OS Using Either RDP or VNC

    - by Humanier
    I have a PC (Windows XP SP3) with VMware Workstation 7 installed. The VMware guest is Windows Server 2003 Enterprise Edition R2. RealVNC (4.1.3) is installed on both OSes, and both use Hamachi2. The host OS (WinXP) also runs the ZoneAlarm firewall, with the Hamachi network set as trusted. My goal is to allow RDP and VNC connections to the guest OS (Windows Server 2003). Both options work absolutely fine if I connect from the host OS. However, I have problems when other computers on our Hamachi network try to connect to the guest OS (Win2K3):

    1. RDP connections: the RDP window opens, shows black content, and after 15-20 seconds displays an error.
    2. RealVNC connections: users are able to connect, but all they see is a black screen inside the VNC window. At the same time, their input (keystrokes or mouse moves/clicks) is visible when looking at the console window of the Win2K3 machine.

    I'd really appreciate any ideas on how to resolve the mentioned problems.

  • CentOS iptables configuration for WordPress and Gmail SMTP

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to keep the configuration as close to default as possible, but I also like it to be secure (no FTP, custom SSH port). Getting my WordPress to run as desired, I'm running into some connection problems. Two things are not working:

    1. installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
    2. sending emails from my site using the Gmail SMTP (Failed to connect to server: Permission denied (13))

    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic on port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine, though. After each change I restarted the iptables service. This is my iptables configuration (using vim):

        # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
        COMMIT
        # Completed on Sun Jun 1 13:20:20 2014

    Are there any (obvious) issues with my iptables setup considering the above-mentioned issues? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions to increase security (considering the basic things I do with this box), I would love to hear them, even the obvious ones! Thanks!
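
    One hedged observation, not something the question confirms: the OUTPUT rules above already accept port 465, and loopback traffic is fully allowed, so on CentOS the "Permission denied (13)" from PHP is more often SELinux than iptables. A quick check, and the boolean that usually fixes it:

        # check whether Apache/PHP is allowed to open outbound network sockets
        getsebool httpd_can_network_connect

        # if it reports "off", allow it persistently (-P survives reboots)
        setsebool -P httpd_can_network_connect 1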

  • SSL Connection Error

    - by toffee.beanns
    I have purchased a Comodo SSL cert and have submitted the Certificate Signing Request (CSR) generated by my server to the SSL management site. It returned 3 files:

        - AddTrustExternalCARoot.crt
        - PositiveSSLCA2.crt
        - www_mydomainname_com.crt

    I have uploaded them to my /etc/ssl/ssl-certs folder, updated my virtual host in sites-available, and restarted accordingly:

        NameVirtualHost 107.167.120.195:80   #sample ip address
        NameVirtualHost 107.167.120.195:443  #sample ip address

        .........
        #normal http virtual host (working well)

        <VirtualHost 107.167.120.195:443>
            ServerAdmin [email protected]
            ServerName mydomainname.com
            ServerAlias www.mydomainname.com
            DocumentRoot /var/www/mydomainname

            SSLEngine on
            SSLCertificateFile /etc/ssl/ssl-certs/www_mydomainname.com.crt
            SSLCertificateKeyFile /etc/ssl/ssl-certs/server.key
            SSLCertificateChainFile /etc/ssl/ssl-certs/PositiveSSLCA2.crt
        </VirtualHost>

    I have also run 'a2enmod ssl' and it's enabled. This is the error I get when I access the page over HTTPS in Chrome:

        SSL connection error
        Error code: ERR_SSL_PROTOCOL_ERROR
        Unable to make a secure connection to the server. This may be a problem with
        the server, or it may be requiring a client authentication certificate that
        you don't have.

    I have also checked my Apache log files, and there seems to be an error saying that the Common Name (CN) is not the same as the server:

        RSA server certificate CommonName (CN) `www.mydomainname.com' does NOT match server name!?

    and

        Invalid method in request \x16\x03\x01

    What should I do?
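
    A hedged reading of those log lines: "Invalid method in request \x16\x03\x01" is a TLS handshake arriving on a port Apache is parsing as plain HTTP, which typically means Apache isn't actually listening for SSL on 443 (e.g. no Listen 443 in ports.conf) or the request is being caught by a non-SSL vhost. The CN warning, read literally, says the certificate's CN (www.mydomainname.com) differs from the vhost's ServerName (mydomainname.com), so setting ServerName www.mydomainname.com in the SSL vhost should silence that part. It is also worth double-checking that the file on disk is named www_mydomainname_com.crt while the config references www_mydomainname.com.crt. Two quick checks, assuming a stock Debian/Ubuntu layout:

        # confirm Apache is told to listen for SSL on 443
        grep -n "Listen" /etc/apache2/ports.conf

        # talk TLS to the server directly and see which certificate (if any) comes back
        openssl s_client -connect www.mydomainname.com:443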

  • PTR and A record must match?

    - by somecallmemike
    RFC 1912 Section 2.1 states the following:

        Make sure your PTR and A records match. For every IP address, there
        should be a matching PTR record in the in-addr.arpa domain. If a host
        is multi-homed, (more than one IP address) make sure that all IP
        addresses have a corresponding PTR record (not just the first one).
        Failure to have matching PTR and A records can cause loss of Internet
        services similar to not being registered in the DNS at all. Also, PTR
        records must point back to a valid A record, not a alias defined by a
        CNAME. It is highly recommended that you use some software which
        automates this checking, or generate your DNS data from a database
        which automatically creates consistent data.

    This does not make sense to me. Should an ISP keep matching A records for every PTR record? It seems to me that it's only important if the IP address that the PTR record describes is hosting a service that is sensitive to mismatched DNS (such as email hosting). In that case the forward zone would be configured under a domain name (examples follow the format 'zone - record'):

        domain.tld -> mail IN A 1.2.3.4

    And the PTR record would be configured to match:

        3.2.1.in-addr.arpa -> 4 IN PTR mail.domain.tld.

    Would there be any reason for the ISP to host a forward lookup for an IP address on their network like this?

        ispdomain.tld -> broadband-ip-1 IN A 1.2.3.4
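
    For what it's worth, the forward/reverse match in the example can be verified from the shell; a minimal pair of lookups, using the sample names from the question:

        dig +short -x 1.2.3.4            # reverse: should print mail.domain.tld.
        dig +short mail.domain.tld A     # forward: should print 1.2.3.4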

  • What is the recommended GlusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is tracking towards 50 million hits per day on average, and within the next 3 months should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010). Currently, we've just upgraded to a load-balanced environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 RAID-6 local storage drives (7200 RPM SATA); the old machine we came from had one mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, serving the 6 VMs as clients. In testing, everything seemed fine. However, when we went to production, it seemed that there just weren't enough I/Os available to serve traffic even upwards of 15 million hits. Weeks prior, our old server was able to handle traffic, maxed out, at 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent in the documentation at gluster.org for a site our size?

  • Virtualizing an inline network appliance with VirtualBox (or VMware)

    - by Tzury Bar Yochay
    My device, a Linux-based IP in-liner, is transparent to the network peripherals; that is, no IP address is assigned to any of its interfaces. For the sake of the conversation, let's use an ADSL connection as an example: while the device is inspecting the bi-directional traffic, the network behaves the same as if the device were not attached to the wire (see "Physical Setup" in the attached diagram). I wonder if I can enclose that "device" within a Windows machine and have it operate virtually, so it still sits inline between the ADSL router and the Windows networking interface by using virtual NICs (or whatever their name is in Windows), inspecting the traffic just as if it were on a separate physical device. The drawing under "Virtual Setup" in the attached diagram shows what I am trying to achieve. Reading a bit of the VirtualBox docs, binding the right side seems relatively simple: perhaps I should have one network adapter set to Bridged Networking, and VirtualBox will connect it to the physical NIC on the host machine so that network packets are exchanged directly, circumventing the host operating system's network stack (WinXP in my case). However, I have no idea how to achieve the left side of my diagram, which requires adding virtual NICs to Windows and configuring them correctly in a way that makes that pipeline possible. I would appreciate any help. By the way, if this is not possible with VirtualBox but is with another virtualization solution (e.g. VMware), I would accept that as well.
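
    A sketch of how the two attachments might look from the VirtualBox side -- the VM name and the Windows adapter names below are illustrative assumptions, not from the question. NIC 1 is bridged to the physical adapter facing the ADSL router; NIC 2 sits on a host-only network facing the host's own stack:

        REM bridge NIC 1 to the physical adapter on the router side
        VBoxManage modifyvm "inliner" --nic1 bridged --bridgeadapter1 "Local Area Connection"

        REM attach NIC 2 to a host-only network that the XP host can see
        VBoxManage modifyvm "inliner" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    The left side of the diagram would additionally need the XP host's default route (or a bridge) pointed at the host-only adapter so outbound traffic actually traverses the VM -- that part is OS configuration on the host rather than anything VirtualBox does for you.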

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services; the services do something based on the messages and potentially notify N of the clients of state changes. Sure, it sounds like a botnet, but it isn't -- consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state change messages? There's an infinite number of ways to solve this problem efficiently:

        - AMQP (RabbitMQ, ZeroMQ, etc.)
        - Spread Toolkit
        - N^2 connections between all services (bad)
        - Heck, even run IRC!

    I'm looking for a solution that:

        - perhaps exploits the fact that there's only a small closed cluster
        - is easy to admin
        - scales well
        - is "dumb" (no weird edge cases)

    What are your experiences? What do you recommend? Thanks!

  • nginx with ssl: I get a 403 and log "directory index of '...dir...' is forbidden" log message. works fine with unencrypted connection

    - by user72464
    As mentioned in the title, nginx worked fine with my Rails app until I tried to add the SSL server. The unencrypted connection still works, but SSL always returns a 403 page with the following line in the error log:

        directory index of "/home/user/rails/" is forbidden, client: [my ip],
        server: _, request: "GET / HTTP/1.1", host: "[server ip]"

    Below my nginx.conf server block:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;

            client_max_body_size 4G;
            keepalive_timeout 5;

            root /home/user/rails;
            try_files $uri/index.html $uri.html $uri @app;

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://0.0.0.0:8080;
            }

            error_page 500 502 503 504 /500.html;
            location = /500.html {
                root /home/user/rails;
            }
        }

    The /home/user/rails directory and its parent are readable by everyone and belong to the user nginx. The certificate and key file have the following rights:

        -rw-r--r-- 1 nginx root 830 Nov  8 09:09 server.crt
        -rw--w---- 1 nginx root 887 Nov  8 09:09 server.key

    Any clue?
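
    One thing worth ruling out, offered as a guess rather than a diagnosis: the error log shows server: _, so the HTTPS request may be landing in a different server block than the one above (for instance a distribution-default SSL block with a root but no try_files). Checking what else binds 443 and claiming the default explicitly would confirm or eliminate that:

        # find every block that binds 443
        grep -rn "listen.*443" /etc/nginx/

        # inside the intended server block, claim the default explicitly
        listen 443 default_server ssl;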

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, so now the final piece is to create a second (HTTP) site to redirect to my HTTPS site. (I need to use a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.) I'm an IIS newbie, so my question is: how do I do this? Should I create a site with the same name and host header, only bound to HTTP? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (a SharePoint site, currently used by external users). That site currently has both HTTP and HTTPS bound to it. So my assumption is that, using IIS (not SharePoint), I will create a new site (HTTP only) with the same name and host header as my existing site, and add the URL Rewrite rule to the HTTP site. And then I guess I should remove the HTTP binding from my existing site? Does that seem correct? Any advice, gotchas, etc., would be appreciated. Thanks.
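
    For reference, the redirect rule itself (the one ruslany's post describes) usually looks like this in the redirect site's web.config; a sketch, with the rule name chosen here:

        <rewrite>
          <rules>
            <rule name="HTTP to HTTPS redirect" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>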

  • Exchange 2010 sends out spam.

    - by Magnus Gladh
    Hi. I have an Exchange Server 2010 that uses a smart host to send out mail. A day ago the owner of the smart host contacted us and told us that we are sending out spam. I have tried different open-relay tests on the net, and all of them come back saying that this server is secured and cannot be used as a relay server. But I can see in my Exchange Queue Viewer that new messages keep coming in. Here is an example of how it looks:

        Identity: mailserver\3874\13128
        Subject: Olevererbart:: [email protected] Pfizer -75% now
        Internet Message ID: <[email protected]>
        From Address: <>
        Status: Ready
        Size (KB): 6
        Message Source Name: DSN
        Source IP: 255.255.255.255
        SCL: -1
        Date Received: 2010-12-09 21:46:22
        Expiration Time: 2010-12-11 21:46:22
        Last Error:
        Queue ID: mailserver\3874
        Recipients: [email protected]

    How can I secure our Exchange server more, to stop this from happening? Could we have got a virus that hooks into our Exchange server and sends mail through it? As I can see, the From Address is always <>. Is there some way I can stop sending mails that don't have a From address like the one described? Please help.
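
    A hedged observation based on the queue details above: Message Source Name: DSN, the empty <> sender, and the subject prefix "Olevererbart:" (Swedish for "undeliverable") are the signature of backscatter -- the server is generating non-delivery reports to the forged senders of inbound spam, rather than relaying spam itself. A sketch of how to confirm and clean this up from the Exchange Management Shell:

        # list queued messages that have the null sender
        Get-Message -Filter {FromAddress -eq "<>"} -ResultSize Unlimited

        # drop them without generating yet more NDRs
        Remove-Message -Filter {FromAddress -eq "<>"} -WithNDR $false

    The longer-term fix for backscatter is usually recipient validation, so NDRs are never generated for addresses that don't exist in the first place.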

  • IRC server connecting to another server

    - by Oxinabox
    I'm setting up an IRC server using ircd-hybrid, and I want my server to connect to another server so that people on my server can join channels on that other server. I know this can be done -- the GIMP IRC is the same as the GNOME IRC. My ircd.conf contains the following:

        connect {
            name = "aabstractname";
            host = "128.64.2.1";
            send_password = "somepass";
            accept_password = "somepass";
            encrypted = no;
            port = 6667;
            class = "server";
            autoconn = yes;
            compressed = yes;
            fakename = "irc.sd.dom.asn.au";
        };

    So when I run:

        /etc/init.d/ircd-hybrid restart

    it should connect to 128.64.2.1, but the log on 128.64.2.1 doesn't show anything. Do I need an entry on the host 128.64.2.1? I can't find any documentation for ircd.conf; I'd really like that documentation so I can check that all my settings are right.
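
    For what it's worth (hedged -- the names below just mirror the question's placeholders): ircd-hybrid server links are mutual, so the remote server also needs a connect {} block describing this one, with the passwords mirrored. Something along these lines would be needed in 128.64.2.1's ircd.conf:

        connect {
            name = "irc.sd.dom.asn.au";      # how the remote identifies this server
            host = "<this server's IP>";     # placeholder
            send_password = "somepass";      # must match the other side's accept_password
            accept_password = "somepass";
            port = 6667;
            class = "server";
        };

    As for documentation: ircd-hybrid ships a fully commented example.conf (often installed under /usr/share/doc/ircd-hybrid/) that describes every ircd.conf directive.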

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large -- up to 200 MB). I am running lighttpd on the servers, on Linux (Ubuntu 64). Everything is fine, but under high load the server is not accessible (or very slow -- even SSHing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages -- PHP etc. -- to always work, but downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
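
    One low-risk experiment, sketched here (the numbers are guesses to tune, not recommendations from the question): cap bandwidth on the download vhost so MP3 transfers can't saturate the box, by uncommenting the directive that is already sitting in the config and optionally capping the vhost as a whole:

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            connection.kbytes-per-second = 256   # cap per connection
            server.kbytes-per-second = 8192      # cap for the whole vhost
        }

    If the box still wedges with capped downloads, that points away from bandwidth and toward disk I/O or file-descriptor exhaustion, which would favor the two-server split instead.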

  • WAMP starts Apache or MySQL, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host -- I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either one successfully; when I then attempt to start the other, I get Error 1053: "The service did not respond to the start or control request in a timely fashion." This error appears immediately -- there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can therefore solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place, especially since this situation does not occur on other servers -- something I've done to this server has made these two services exclusive when running as the same user. Here are some miscellaneous data points:

        - I am using Windows Server 2003.
        - I've given WampUser recursive Full Control over the C:\wamp directory.
        - Nothing appears in the event log after the service fails.
        - No log entries appear in either the MySQL log or the Apache error log.
        - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?
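
    If it helps to compare the two service definitions side by side, Windows can dump them from the command line; a sketch, assuming WAMP's default service names (verify yours in services.msc first):

        sc qc wampapache
        sc qc wampmysqld

        REM state and exit codes sometimes show up here even when the event log is quiet
        sc query wampapache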

  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows Server 2008 R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64 GB of RAM and two SSD drives (one drive for the host OS, and the second one for the VMs). The virtual machines are as follows:

        - Domain Controller: 4 GB RAM
        - Exchange Server: 4 GB RAM
        - Terminal Services: 50 GB RAM

    We use this setup for a travelling training class where users remote desktop to one of the VMs -- let's call it the Terminal Services or "TS" VM -- where tools such as Visual Studio are installed, and the students go through labs in Visual Studio. Overall, this setup works great. However, when users are collectively logging in for the first time, the VM really struggles to keep up while all the user profiles are created; it can take some users up to 10 minutes to log in. The number of students varies from 30 to 40. A workaround would be to manually remote desktop to the TS virtual machine with each account ahead of time, to ensure that the local profile is created in advance. I'm looking for a way to automate this first-time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU and then somehow initiating a session on the TS VM to log each one in for the first time. Are there ways to do this? Thanks
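
    One possible automation, sketched below and run on the TS VM itself rather than over RDP: launching any process with Start-Process -LoadUserProfile forces Windows to create the user's local profile without an interactive session. The domain, account naming scheme, and password below are assumptions for illustration:

        # PowerShell sketch -- pre-create local profiles for student1..student40
        $pass = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force   # assumed password
        1..40 | ForEach-Object {
            $cred = New-Object System.Management.Automation.PSCredential("TRAINING\student$_", $pass)
            # the process does nothing; -LoadUserProfile is what creates the profile
            Start-Process cmd.exe -ArgumentList "/c exit" -Credential $cred -LoadUserProfile -Wait
        }

    Run sequentially the night before class, this spreads the profile-creation I/O out instead of concentrating it in the first ten minutes of the session.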

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So any "static file" that exists will just be served by nginx; otherwise, the request should be passed off to Apache. Right now, static files are working correctly. However, if something is passed to Apache and it's example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
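
    A guess worth checking, since the Host header is already being forwarded: if Apache was moved from port 80 to 8080 for this setup, its name-based vhosts may still be declared against *:80, in which case every request arriving on 8080 falls through to the default test page. The Apache side would then need something like this (names taken from the nginx config above):

        Listen 8080
        NameVirtualHost *:8080

        <VirtualHost *:8080>
            ServerName example.com
            ServerAlias subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>

    Running apachectl -S would show quickly which vhosts Apache actually has bound to 8080.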

  • How to manage multiple email addresses on multiple domains in Exchange

    - by CAD bloke
    I'm using Hosted Exchange Server, mostly because I use an iPhone, webmail, and Outlook on 2 laptops. I want to keep everything consistent and unfragmented, and I also want push notifications. I have 2 domains, a professional one and a personal one. Each domain has about 5 (give or take) email addresses I use for various purposes. Each domain also has a few parked domains (.net, .org, .info) aliased to the .com domain. I would like to keep emails from the 2 domains separated. Do I need an extra mailbox, meaning extra expense, or can I create another Exchange user on the same mailbox and create an extra account in Outlook? In either case I will have to wait for iOS 4 on the iPhone to manage 2 Exchange accounts. Or am I better off just using a set of rules and folders? The aliased domains are another joy to behold entirely -- it looks like I will have to add each email address variant individually. Alternatively, I reckon I may just leave the aliased domains at the POP3 host and let Outlook gather those as edge cases. Surely I can't be the only one making my life this difficult. Has anyone out there done this? From the left field: is this (much) easier in Gmail? I'm not committed to Exchange (yet). Previously I used Outlook as a POP3 client with a set of filters to direct incoming traffic to folders; this worked with the aliased domains because my host directed all the aliased TLDs to the same mailbox.

  • Remote Desktop Zooming

    - by codeulike
    Using Remote Desktop from a device with a hi-res screen (say, a Surface Pro) is decidedly tricky: everything displays at 1:1 scale and so looks tiny. If the machine you are remoting into runs Server 2008 R2 or later, you can change the DPI zooming setting (see here). But for older hosts, that doesn't work. Using normal Remote Desktop, you can connect with a lower resolution, say 1280x768, and turn on smart sizing. However, smart sizing can scale down (to display a huge desktop in a small area) but does not seem to scale up (to display a small desktop in a big area). Using the Windows 8 Remote Desktop app, you can zoom, but you cannot set the default resolution of the host. What I want is a lower resolution on the host, scaled up to fit my screen, so both of those options are close to what I want but don't quite work. So the question is: does the Remote Desktop app allow the screen resolution to be set somehow? Is there some other Remote Desktop client that can handle zooming better?

  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Hi all,
    Some background: I'm very much new to networking in general, and to virtualization in particular. I'm trying to set up a series of VMs as we transition to a thin-client setup. I have been supplied a limited number of static IP addresses. The server is located in an offsite building which houses the network we use to connect to the Internet, share folders, etc. The setup I've been trying for is this: the host OS (Windows Server 2008 R2) is bound to one NIC using one of the static IPs (say, NIC1 and IP 10.255.6.61). I've set up an external virtual network attached to another physical NIC, and a virtual private network attached to no NIC. There is one VM running the same OS as the host. This VM is connected both to the external virtual network (using another static IP, say NIC2 and IP 10.255.6.62) and to the virtual private network (I gave it a static IP of 192.168.88.1, subnet mask 255.255.255.0). This virtual private network is connected to all the other VMs. I'd like to share the Internet connection with all the other VMs on the private virtual network, so I installed the RRAS role on the server connected to NIC2 and selected the option to share the Internet over the VPN. I've run through the RRAS wizard a few times, trying different configurations, but none of them let the other VMs connect to the 'net. The VMs seem to connect to the virtual private network fine -- they are assigned an IP address and everything -- but there's no Internet, and no rest of the network either. The other problem: in general I connect to the VMs with RDP. Will that be possible with a setup like this? i.e. will the VMs show up as computers on the network? If not, what are my other options? Thanks!
    ~josh

  • SSL Handshake negotiation on Nginx terribly slow

    - by Paras Chopra
    I am using nginx as a proxy to 4 Apache instances. My problem is that SSL negotiation takes a lot of time (600 ms). See this as an example:

        http://www.webpagetest.org/result/101020_8JXS/1/details/

    Here is my nginx conf:

        user www-data;
        worker_processes 4;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_proxied any;
            server_names_hash_bucket_size 128;

            upstream abc {
                server 1.1.1.1 weight=1;
                server 1.1.1.2 weight=1;
                server 1.1.1.3 weight=1;
            }

            server {
                listen 443;
                server_name blah;
                keepalive_timeout 5;

                ssl on;
                ssl_certificate /blah.crt;
                ssl_certificate_key /blah.key;
                ssl_session_cache shared:SSL:10m;
                ssl_session_timeout 5m;
                ssl_protocols SSLv2 SSLv3 TLSv1;
                ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                ssl_prefer_server_ciphers on;

                location / {
                    proxy_pass http://abc;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                }
            }
        }

    The machine is a VPS on Linode with 1 GB of RAM. Can anyone please tell me why the SSL handshake is taking ages?
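
    Two things worth measuring before tuning ciphers, offered as a diagnostic sketch rather than a fix: a full TLS handshake costs extra network round trips, so on a distant client 600 ms may be mostly latency rather than CPU; and a large certificate chain can add yet another round trip. Comparing handshake time against raw round-trip time separates the two (hostname assumed):

        # time a bare TLS handshake
        time openssl s_client -connect www.example.com:443 < /dev/null

        # measure raw round-trip time for comparison
        ping -c 4 www.example.com

    If the handshake is roughly two to three times the ping time, the negotiation itself is healthy and the gain would come from session reuse (already configured above via ssl_session_cache) and keepalive rather than from cipher changes.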

  • FastCGI on lighttpd: no data received

    - by Michael Sh
    I have a simple FastCGI script:

        public static void main(String args[]) {
            int count = 0;
            while (new FCGIInterface().FCGIaccept() >= 0) {
                count++;
                System.out.println("Content-type: text/html\n\n");
                System.out.println("<html>");
                System.out.println("<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
                System.out.println("<body>");
                System.out.println("<H3>FastCGI Hello Java stdio</H3>");
                System.out.println("request number " + count + " running on host "
                        + System.getProperty("SERVER_NAME"));
                System.out.println("</body>");
                System.out.println("</html>");
            }
        }

    Set up with lighttpd as:

        server.modules += ( "mod_fastcgi" )
        fastcgi.debug = 1
        fastcgi.server = ( "/cgi" => (
            "fastcgi" => (
                "port" => 8888,
                "host" => "127.0.0.1",
                "bin-path" => "/var/www/tiny.fcgi",
                "min-procs" => 1,
                "max-procs" => 1,
                "check-local" => "disable"
            ))
        )

    In the log:

        2012-11-24 04:35:04: (mod_fastcgi.c.1367) --- fastcgi spawning local proc: /var/www/tiny.fcgi port: 54321 socket  max-procs: 1
        2012-11-24 04:35:04: (mod_fastcgi.c.1391) --- fastcgi spawning port: 54321 socket  current: 0 / 1
        2012-11-24 04:35:39: (mod_fastcgi.c.3061) got proc: pid: 0 socket: tcp:127.0.0.1:54321 load: 1

    The problem is that no data is being sent from the server to the browser. Am I missing something here?
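
    One hedged pointer: when bin-path is set, lighttpd spawns the process itself and chooses its own port -- which matches the log spawning on 54321 rather than the configured 8888 -- and whatever it spawns must be directly executable as a FastCGI program. For Java that normally means a small executable wrapper script with the FastCGI developer's kit on the classpath; a sketch, with every path below assumed:

        #!/bin/sh
        # /var/www/tiny.fcgi -- wrapper assumed, not shown in the question
        exec java -cp /var/www/classes:/usr/share/java/fastcgi.jar TinyFCGI

    The alternative is to drop bin-path entirely, run the Java process independently on port 8888, and let lighttpd connect to it via only host and port.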

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So we need a way to block access while the firewall is down. For the sake of argument, we have two groups of users: "management" and "standard users". Standard users would have no access to Facebook, but management users would. Perhaps something like a hosts-file redirect for non-management users. This could probably be enforced via group policy, calling a bat file to copy down the hosts file depending on whether the user is management or not. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT. But this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have much day-to-day contact with, so an automated way of enforcing this would be the most preferable method.
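
    A minimal sketch of the hosts-file idea from the question -- every path, share, and domain name below is an assumption. The GPO carrying the script would be security-filtered so it applies to standard users but never to the management group:

        @echo off
        rem blockhosts.bat -- deployed via a security-filtered GPO (sketch)
        copy /Y \\example.local\NETLOGON\hosts.blocked %SystemRoot%\System32\drivers\etc\hosts

    where hosts.blocked pins the blocked sites to localhost:

        127.0.0.1 facebook.com
        127.0.0.1 www.facebook.com
        127.0.0.1 bebo.com
        127.0.0.1 www.bebo.com

    One caveat worth noting: writing to the hosts file requires local admin rights on XP-era clients, so a machine startup script (running as SYSTEM) may be a better fit than a per-user logon script.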

  • Is there a better way to do bonded VLAN-tagged interfaces with Xen?

    - by AJ01
    We have a number of Xen servers, all running CentOS or RHEL. The VMs they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces. To accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN; e.g. ifcfg-bond0.204:

        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to Xen
    As you will see, we eventually have to bridge this out to Xen, and we do this by adding another interface called xenvlan204 (in this instance), which contains -- ifcfg-xenvlan204:

        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    Xen VM config
    Finally, in our Xen config for each VM, we add:

        vif = [ "bridge=xenvlan204" ]

    This then allows the VM to access that particular VLAN.

    The problem
    We've noticed a few problems with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend -- something I'm not so hot about. Also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it's just not scaling well on the management side.

    The question
    Is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise is more than welcome.
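
    One direction worth sketching -- everything below is an illustration under stated assumptions, not a tested script: take the VLAN ID as a parameter and create the tagged interface and bridge on demand, so new VLANs need neither hand-written ifcfg files nor an xend restart:

        #!/bin/sh
        # mkxenvlan -- create bond0.$1 and bridge xenvlan$1 on the fly (sketch)
        VLANID=$1
        vconfig add bond0 $VLANID                   # tag a sub-interface on the bond
        brctl addbr xenvlan$VLANID                  # create the bridge Xen will use
        brctl addif xenvlan$VLANID bond0.$VLANID    # enslave the tagged interface
        ip link set bond0.$VLANID up
        ip link set xenvlan$VLANID up

    A VM-creation hook (or even rc.local) could then call mkxenvlan 204 before the domU starts, leaving the vif = [ "bridge=xenvlan204" ] lines as the only per-VM configuration.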
