Search Results

Search found 19788 results on 792 pages for 'remote host'.

Page 288 of 792

  • Can we put random URL entries on DNS

    - by ring bearer
    We use Microsoft DNS. All (or most) of our internal hosts are in the domain *.company.org, so a host name looks like mymachine001.company.org. Is it possible to set up wildcard DNS entries of the form *.subd.company.com? Note that this name ends in .com, while every host set up in the DNS so far has been of the form *.company.org. What I am trying to achieve is the following: a user on the internal network types the URL http://someprefix.subd.company.com into a browser and hits enter; because there is a wildcard entry in DNS, the user gets routed to the host mapped to *.subd.company.com. Note also that company.com has a public DNS entry, and that is mapped to a physical IP in some other network (a data center).
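    A minimal sketch of how such a wildcard could be created, assuming an internal (AD-integrated) zone named subd.company.com is acceptable and the target IP 10.1.2.3 is only a placeholder; dnscmd on the DNS server (or the equivalent DNS Manager steps) would look roughly like this:

        rem create an internal zone for the .com name (zone name and IP are illustrative)
        dnscmd /zoneadd subd.company.com /dsprimary
        rem add a wildcard A record so any someprefix.subd.company.com resolves to the chosen host
        dnscmd /recordadd subd.company.com * A 10.1.2.3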

    Read the article

  • Connect to CentOS LAMP instance from Windows PCs

    - by Gnanesh
    I have a CentOS 6 machine on our network with a simple LAMP installation. It has some files that I want to access from other Windows PCs, and I can do so using the CentOS machine's IP address. Since that IP address could be dynamic, I would rather connect using the computer/host name, but that does not work. Can someone point out what I may be missing and help me resolve this?
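    One common approach (an assumption, not something stated in the question) is to let the CentOS box announce a NetBIOS name that Windows machines can resolve by broadcast, via Samba's nmbd. A minimal sketch for CentOS 6, with the name CENTOSLAMP purely illustrative:

        # install Samba's NetBIOS name service (package and service names per CentOS 6)
        yum install -y samba
        # in /etc/samba/smb.conf, under [global], set the name Windows clients should see:
        #   netbios name = CENTOSLAMP
        service nmb start
        chkconfig nmb on
        # from a Windows PC: ping CENTOSLAMP   (or browse \\CENTOSLAMP)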

    Read the article

  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / WINS / AD / DC). Every now and then the two 2008 boxes get into a mode where you can't map a drive to a random server. I say random because it's not always the same server that you can't map to. Here is a summary of what does and doesn't work:
        net view \\servername   - sometimes works, sometimes does not
        net view \\FQDN         - always works
        net view \\IPAddress    - always works
        ping servername         - sometimes works, sometimes does not
        ping FQDN               - always works
        ping IPAddress          - always works
    I've been looking all over for a solution; it sure seems like Microsoft would have a hotfix by now. The kicker is that it sometimes works great, especially after a reboot. It may run fine for two weeks, but all of a sudden it will fail to resolve the remote server name, stay that way for a few days, and then start working again. Also, while it's in this state, the other servers have no problem getting there; it's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHOSTS does not make it work. All servers have static IPs and are registered in DNS and WINS just fine. Here is a long thread on MS TechNet about the exact same problem, but they don't have a good solution. Here is their workaround (from June of 2010): "Good news - a hotfix is in the works and a workaround has been identified: Root cause is that since this is SMB1, all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon ID added to the structure defining the connection. If that user logs off, all subsequent uses of that TCP session fail as the logon ID is no longer valid. As a workaround for now, to keep the issue from happening, have the user not log off the Terminal Server, only disconnect their sessions." Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James
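    To illustrate the disconnect-instead-of-logoff workaround quoted above (standard Remote Desktop Services tools; the session ID shown is whatever query session reports, not a fixed value):

        rem list sessions on the terminal server and note the ID of the user's session
        query session
        rem disconnect (not log off) that session so the SMB logon ID stays valid
        tsdiscon 2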

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered in existing questions. Modern browsers typically open a large number of simultaneous connections to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.
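    For reference, the client-side limits mentioned above look roughly like this; the IE setting is a per-user registry value and the value 4 is only illustrative:

        rem Internet Explorer: cap simultaneous HTTP/1.1 connections per server
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v MaxConnectionsPerServer /t REG_DWORD /d 4 /f

        ; Firefox: about:config, or a user_pref line in prefs.js
        user_pref("network.http.max-connections-per-server", 4);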

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP). However, we're noticing that when streaming over RTMPS, connections are drastically slower (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slowly over SSL?

    Read the article

  • DNS: domain2 points to domain1

    - by Yar
    I have one domain ("domain1") that is set up with hosting and mail (hosted by Gmail Apps). This domain works perfectly. I want a second domain ("domain2") to forward to domain1, but I don't want to use "DNS Forwarding." I would like to have it act EXACTLY like domain1, so that domain2/whatever points to the same resource as domain1/whatever WITHOUT AN HTTP REDIRECT NOR BROWSER TRICKS LIKE FRAMES. I would also love to be able to send mail to "blah@domain2" and have it go to "blah@domain1". Can this be set up, and how? I am using GoDaddy as registrar and DNS host for both domains. GoDaddy is also the web host for domain1, and mail hosting is with Google Apps.

    Read the article

  • VMware 1.0.1 Windows7 - Ubuntu hostnames

    - by Kyle K
    I'm trying to run Ubuntu as the guest OS using VMware 1.0.10 with Windows 7 Ultimate as the host OS. I had this set up previously with Win XP as the host OS, and in fact I'm using the same .vmx file. My problem is that I can't get either Win7 or Ubuntu to ping the other by hostname. After installing Samba and Winbind on Ubuntu, I was able to get this working under WinXP, but for some reason I can't get it to work under Win7. I can ping by IP address, and the guest OS even shows up by hostname under the Windows networking panel (but of course I can't do anything with it), but pinging using short hostnames just won't work. Also, the Win7 firewall is turned off completely.
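    For reference, the Samba/Winbind arrangement described above usually amounts to something like the following on the Ubuntu guest (hostnames are illustrative); whether Windows 7's name resolution order behaves the same as XP's is a separate question:

        # on the Ubuntu guest: announce a NetBIOS name and resolve Windows names via WINS/broadcast
        sudo apt-get install samba winbind
        # /etc/samba/smb.conf, [global] section:
        #   netbios name = UBUNTUGUEST
        # /etc/nsswitch.conf, hosts line - add wins so short Windows names resolve:
        #   hosts: files dns wins
        # then: ping WIN7HOST      (from Ubuntu)
        #       ping UBUNTUGUEST   (from Windows 7)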

    Read the article

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances: one runs on 8080, the other on 8081. We need HTTPS enabled for the 8081 instance. First we tried enabling HTTPS on the 8080 instance by generating the keystore and editing server.xml, and that worked. However, when we tried the same thing for 8081 it did not (note that we removed HTTPS from the 8080 instance before enabling it for 8081). The server.xml below was used for both the 8080 and 8081 instances; the only difference was that the connector port was changed from 8080 to 8081 when enabling HTTPS for the 8081 instance. What am I doing wrong and what needs to be changed? NOTE: by "enabled for 8080" I mean that when you visit https:// URL:8484 you actually reach the 8080 instance. However, when SSL is enabled for 8081 and I visit https:// URL:8484, I get that the web page is unavailable. Here is the server.xml (comments stripped):
        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- https -->
            <Connector port="8080" address="${jboss.bind.address}"
                       maxThreads="350" maxHttpHeaderSize="8192"
                       emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true"
                       compression="on"
                       ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- https1 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    Read the article

  • rsync osx to linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB smaller than the original, yet rsync says everything was OK and there is nothing left to update.
        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/
        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/
    Can I trust rsync? I'm using rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/ (/mnt/red is my Samba mount point). Some folders differ:
        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac
        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac
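    A hedged guess at what to check (not established by the output above): the entries that show 0 bytes on the Debian side are old-style Mac font files, whose data may live entirely in resource forks that a plain rsync -av does not carry to a non-Mac target. Apple's bundled rsync (2.6.x with Apple's extended-attribute patch) has a flag for this, roughly:

        # -E copies extended attributes / resource forks; on a non-Mac target they are
        # typically stored as separate ._name AppleDouble files
        rsync -avE --stats /Volumes/Data1/Produccion/ /mnt/red/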

    Read the article

  • Connecting/Adding a private network on windows server 2008

    - by WhyMe
    Hey all, I have a dual-server configuration with a hosting provider using VPS. I was told by my host provider that in order to use free bandwidth between my two servers (they are in the same location) I need to add an alias "subnet" to a specific IP (a private network, VPN). How do I add an aliased IP in Windows? In Linux the relevant command is supposed to be (from my searching of blogs) "ifconfig eth0:1 10.129.175.165 netmask 255.255.255.0". They also said that another way to connect the servers (which should also be faster) is to use a "private LAN", but as it happens I don't know how to define one :(. Is there a Windows equivalent or another way to do this? I have checked my IP config and found no indication of the private LAN or the VPN IP.
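    A hedged sketch of the Windows equivalent of that ifconfig alias, using netsh on Server 2008 (the interface name must match what "netsh interface ipv4 show interfaces" reports; the address is the one from the question):

        rem add a second (aliased) IP address to the existing NIC
        netsh interface ipv4 add address name="Local Area Connection" address=10.129.175.165 mask=255.255.255.0
        rem verify
        netsh interface ipv4 show addresses "Local Area Connection"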

    Read the article

  • Why doesn't my computer work at full speed?

    - by kubilas
    My motherboard is a Gigabyte GA-8I915PL-G with an Intel Pentium 4 630 at 3.0 GHz, which doesn't run at its default speed. It's currently at FSB 800, CPU Host 200 and CPU 3000 MHz, but sometimes it runs at FSB 533, CPU Host 133 and CPU 2025 MHz. Sometimes it's even at FSB 75 and CPU 1128 MHz. When I configure the default settings in EasyTune, my computer doesn't work. Sometimes I need to clear the CMOS so I can set the default settings in the BIOS, but that doesn't always help. I've updated the BIOS; what else can I do to fix this problem?

    Read the article

  • How can I switch a running Linux OS from disk to running from RAM without restarting?

    - by vfclists
    Is it possible to switch to running Linux from RAM (or a RAM disk) after initially starting from disk? For example: you need to make an image of your hard disk and FTP it to a remote location; some time later you want the image back, so you start the system from disk as usual and restore the image you FTP'd from the remote location back into place. It's more like a Clonezilla backup and restore, but without booting the server from CD or USB disk, starting from the normal hard disk instead. Notes on the environment, which I should have mentioned earlier: it is a remotely hosted VM where I cannot boot into a recovery console or do a netinstall, and it will always boot from the same disk. That means if there is serious corruption I can't repair it offline, which is why being able to FTP a previously saved backup into place is so important.
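    For the imaging half of the workflow described above, a minimal sketch (device name, credentials and remote path are placeholders; taking the image while the filesystem is quiet, or from a snapshot, is assumed):

        # stream a compressed raw image of the VM's disk straight to the remote FTP server
        dd if=/dev/vda bs=4M conv=sync,noerror | gzip -c \
          | curl -T - ftp://user:password@backup.example.com/vps-image.img.gz

        # later, pull it back down for the restore step
        curl -o /tmp/vps-image.img.gz ftp://user:password@backup.example.com/vps-image.img.gz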

    Read the article

  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution to this, but can't find any documentation/tips on my specific setup. My setup:
    Backend server: nginx, 2 different websites (2 top-level domains) in a virtualenv, running gunicorn/python/django. Backend server hardware (VPS): 2GB RAM, 8 CPU.
    Database server: postgresql with pg_bouncer. Database server hardware (VPS): 1GB RAM, 8 CPU.
    Varnish server: only running Varnish. Varnish server hardware (VPS): 1GB RAM, 8 CPU.
    I'm trying to set up a Varnish server to handle a rare spike in traffic (20,000 unique req/s). The spike happens when a TV program mentions one of the sites. What do I need to do to make the Varnish server cache both sites/domains on my backend server?
    My /etc/varnish/default.vcl:
        backend django_backend {
          .host = "local.backendserver.com";
          .port = "8080";
        }
    My /usr/local/nginx/site-avaible/domain1.com:
        upstream gunicorn_domain1 {
          server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0;
        }
        server {
          listen 80;
          listen 8080;
          server_name domain1.com;
          rewrite ^ http://www.domains.com$request_uri? permanent;
        }
        server {
          listen 80 default_server;
          listen 8080;
          client_max_body_size 4G;
          server_name www.domain1.com;
          keepalive_timeout 5;
          # path for static files
          root /home/<USER>/<APP>-media/;
          location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            if (!-f $request_filename) {
              proxy_pass http://gunicorn_domain1;
              break;
            }
          }
        }
    My /usr/local/nginx/site-avaible/domain2.com:
        upstream gunicorn_domain2 {
          server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0;
        }
        server {
          listen 80;
          listen 8080;
          server_name domain2.com;
          rewrite ^ http://www.domains.com$request_uri? permanent;
        }
        server {
          listen 80;
          listen 8080;
          client_max_body_size 4G;
          server_name www.domain2.com;
          keepalive_timeout 5;
          # path for static files
          root /home/<USER>/<APP>-media/;
          location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            if (!-f $request_filename) {
              proxy_pass http://gunicorn_domain2;
              break;
            }
          }
        }
    Right now, if I try the IP of the Varnish server I only get served domain1.com. Would everything be correct if I change the DNS of the two domains to point to the Varnish server, or is there extra setup needed before it would work?
    Question 2: Do I need a dedicated server for Varnish, or could I just install Varnish on my backend server, or would the server run out of memory quickly?
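    A minimal sketch of the Varnish side, assuming Varnish 3 syntax and that both domains' DNS is pointed at the Varnish box: Varnish forwards the client's Host header by default, so the existing nginx server_name blocks should keep working as long as requests for both hostnames are routed to the one backend:

        backend django_backend {
            .host = "local.backendserver.com";
            .port = "8080";
        }

        sub vcl_recv {
            # both sites share the same backend; nginx picks the vhost from the Host header
            if (req.http.host ~ "(^|\.)domain1\.com$" || req.http.host ~ "(^|\.)domain2\.com$") {
                set req.backend = django_backend;
            }
        }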

    Read the article

  • IE8/IE7/IE6/IE5 on WinXP Use The Wrong Certificate

    - by Marco Calì
    For some reason IE8/IE7/IE6/IE5 on Windows XP, instead of using the certificate listed in the nginx config for the website, uses another certificate that belongs to other websites. Checking the nginx config file for the website, everything looks fine. Confirming this, all the other browsers (Chrome/Firefox/Safari/IE9) use the correct certificate. This is the nginx configuration for the app:
        server {
          listen 80;
          listen 443 ssl;
          server_name mydomain.com;
          ssl_certificate /root/certs/mydomain.com/mydomain.bundle.crt;
          ssl_certificate_key /root/certs/mydomain.com/mydoamin.key;
          access_log /opt/webapps/cs_at/logs/access.log;
          location / {
            add_header P3P 'CP="CAO PSA OUR"';
            proxy_pass http://127.0.0.1:20004;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
          }
        }
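    A hedged diagnostic: the usual suspect with this symptom is SNI, which IE on Windows XP does not send, so such clients receive whichever certificate the default server on that IP presents (an assumption, not confirmed here). openssl s_client without -servername behaves like a non-SNI client:

        # what a non-SNI client (e.g. IE on XP) is handed
        openssl s_client -connect mydomain.com:443 < /dev/null | openssl x509 -noout -subject

        # what an SNI-capable browser is handed
        openssl s_client -connect mydomain.com:443 -servername mydomain.com < /dev/null | openssl x509 -noout -subject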

    Read the article

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or uses a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol and type in my NT domain credentials (I'm an Admin on that server). I put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error. Am I doing something wrong, or is this just not possible to do?
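    For what it's worth, a hedged sketch of what publishing over Web Deploy to a plain IIS box looks like from the command line (server name, site name, credentials and paths are placeholders; it assumes the Web Deploy handler/agent is listening on the server):

        rem push local content to the "Default Web Site" on the remote IIS server
        msdeploy.exe -verb:sync ^
          -source:contentPath="C:\build\MySite" ^
          -dest:contentPath="Default Web Site",computerName=https://webserver01:8172/msdeploy.axd,userName=DOMAIN\admin,password=secret,authType=Basic ^
          -allowUntrusted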

    Read the article

  • How does Apache handle port forwarding?

    - by vfclists
    I set up a localhost port-forwarding configuration in the coLinux .conf file, forwarding port 8090 to port 80 in the VM. When http://localhost:8090 is entered in the browser, I get the correct response from nginx, but with Apache the response is an error and the log shows /htdocs not found. However, if I do a local port forward from 8090 to port 80 via SSH, Apache responds fine. Is there something about the way Apache handles the port redirection that causes it to fail? PS: for those unfamiliar with coLinux, it allows localhost connections to reach the VM by forwarding localhost ports on the Windows host to ports on the VM, since the VM's 10.x.x.x IP is not accessible from the Windows host.
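    For comparison, the SSH local forward mentioned above (the one that works) is roughly this; the guest address and user are placeholders:

        # forward local port 8090 on the Windows host to port 80 inside the coLinux VM
        ssh -L 8090:localhost:80 user@10.0.0.2
        # then browse http://localhost:8090 as before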

    Read the article

  • Address VMWare Fusion Linux guest by hostname?

    - by amrox
    I have an Ubuntu Server 9.04 image set up in VMware Fusion 3.0.0, using the NAT option for the guest's network connection. From the Mac host, I can ssh to the Linux guest just fine using its IP address, but I would like to be able to refer to it by hostname for convenience, i.e.: mac-host:~ ssh [email protected] I had a similar setup using Parallels a couple of years ago, but I don't remember how it was set up. It may have "just worked". Any suggestions on how to make this work?
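    One common approach (an assumption on my part, not from the question) is mDNS: install avahi on the guest and address it as hostname.local from the Mac, which resolves .local names natively. A minimal sketch, with the guest hostname as a placeholder:

        # on the Ubuntu guest
        sudo apt-get install avahi-daemon

        # from the Mac host, using the guest's hostname plus .local
        ssh user@ubuntu-guest.local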

    Read the article

  • Are there other application layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days, ISA (and now TMG) has had several great features that I often want to deploy for my customers because of the enhanced functionality and security, but the cost of an additional server's hardware, a Windows Server license, and a TMG license is often too much to justify compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application-layer features: pre-authenticate incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or a basic-credentials popup)? Read the host headers of incoming HTTP traffic (even over HTTPS) to a single public IP and route packets to different internal servers based on that host header?
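    Purely as an illustration of the second feature (host-header routing on a single public IP), and not as a recommendation of any particular product, a generic reverse-proxy sketch along these lines (hostnames and internal addresses are placeholders; doing the same over HTTPS additionally depends on SNI or per-site certificates):

        # requests arriving on one public IP are routed by Host header
        server {
            listen 80;
            server_name mail.example.com;
            location / { proxy_pass http://10.0.0.21; }
        }
        server {
            listen 80;
            server_name portal.example.com;
            location / { proxy_pass http://10.0.0.22; }
        }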

    Read the article

  • View Public Key in Domain Key for a Domain

    - by Josh
    Using Jeff's blog post, I'm creating DomainKeys for my account. I wanted to verify the setup using the host command from BIND for Windows, but I'm lost on one of the commands. I can view the _domainkey TXT record with this command: host -t txt _domainkey.stackoverflow.com but I'm at a loss as to how I'd find the selector record. Jeff points out that it can be anything before the period in "._domainkey.domain.com", but how would I list all records if I didn't know the exact query name? Is there a wildcard I could use to view all TXT records, or all records, under this section?
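    For reference, a hedged sketch of how the selector record is queried once the selector name is known (the selector "mail" below is purely an example; DNS itself has no wildcard query, so without knowing the selector the only general listing mechanism would be a zone transfer, which most servers refuse):

        rem policy record
        host -t txt _domainkey.stackoverflow.com

        rem selector record: <selector>._domainkey.<domain>  ("mail" is a made-up selector)
        host -t txt mail._domainkey.stackoverflow.com

        rem a zone transfer would list everything, but is normally disallowed
        host -l stackoverflow.com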

    Read the article

  • Route multiple subdomains on one external ip to multiple internal ips

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), a router with an external IP, and several virtual machines on a host computer with internal IPs. Now I want to route git.example.org to internal IP 10.0.2.1 and build.example.org to internal IP 10.0.2.2. How can I do this? I set up the router so that all traffic on port 80 goes to my host computer with internal IP 10.0.2.3, and installed Squid on that computer. I added the following lines to squid.conf:
        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 git.example.org
        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2
        cache_peer_domain server_2 build.example.org
    But this is not working for me. :( Any help appreciated. Regards, Nils
    Update: here is the solution for Apache: http://serverfault.com/a/273693
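    A hedged sketch of the pieces a Squid reverse-proxy ("accelerator") setup usually needs in addition to the cache_peer lines above, namely an accelerator http_port and access rules for the hosted domains (directive names are standard Squid, but versions differ, so treat this as a sketch rather than a verified config):

        # listen as an accelerator for name-based virtual hosts
        http_port 80 accel vhost

        # existing peers
        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2

        # send each domain to its peer and allow the requests through
        acl site_git   dstdomain git.example.org
        acl site_build dstdomain build.example.org
        cache_peer_access server_1 allow site_git
        cache_peer_access server_2 allow site_build
        http_access allow site_git
        http_access allow site_build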

    Read the article

  • Would NetBSD be a good choice for a web server?

    - by Alexander
    I have the option of crafting a NetBSD image for a Xen VPS host, and I wanted to play around with it since I like BSD and would like to use it for my general web hosting. I will be hosting a low-to-mid-traffic website and maybe a few other simple services. Do you think NetBSD would be a sufficient choice in terms of general performance with multiple system users and a fair amount of traffic to Apache, compared to what Linux could normally handle? I am concerned that if I start to really like it and keep it, I may be limiting myself if I move further with my web host and get more traffic (and maybe a lot of FTP access and user shell accounts). Ken

    Read the article

  • Is SQL Azure useful without Windows Azure?

    - by KallDrexx
    I am currently doing some research to get preliminary IT cost projections for a project, and I was looking at Azure. Since this is a startup, I do not want to deal with IT operations myself and am instead looking at having it all professionally hosted. I am looking at Azure because of the SLA assurances, the already-in-place disaster recovery, and the reliability. I'm playing with some numbers, and I am wondering if hosting my database on SQL Azure is an option while hosting the actual web application on another host, until I need the front-end scalability of Azure. Is this actually feasible, or will the latency of requests between the web host and Azure be too high, so that I would be better off hosting both on the same service?
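    From a purely practical standpoint, pointing an externally hosted application at SQL Azure is just a connection string; a hedged sketch (server name and credentials are placeholders; SQL Azure requires encrypted connections, and at the time the user name carried an @server suffix):

        Server=tcp:myserver.database.windows.net,1433;Database=mydb;
        User ID=appuser@myserver;Password=...;Encrypt=True;TrustServerCertificate=False;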

    Read the article

  • Installing Fedora Core 12 on a VPS

    - by cinqoTimo
    I'm working on a Linux VPS that is running FC6, and I want to reinstall it with the latest version, FC12. The VPS runs on the Virtuozzo platform, and I am also running Plesk on it. This is all managed hosting through Network Solutions, so I don't have access to the actual box. My question is: how does one reinstall a different OS through Virtuozzo? I see "reinstall VPS" in Virtuozzo, but that just installs a fresh copy of FC6. Is this a host-by-host thing, or are there best practices for accomplishing this?

    Read the article

  • difference between server and desktop

    - by user1241438
    I want to set up a web server. I would like to buy hardware for it, and I am trying to decide whether I should buy a desktop and host the web server on that, or buy a used server from eBay and host it there. An example is http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=180986172861&ssPageName=ADME:X:RTQ:US:1123 But what is the difference between a desktop and a server? These days even desktops come with a lot of RAM. The only other difference I see is that servers have RAID hard disks. Is there any other difference?

    Read the article

  • Leave Windows Session Logged On

    - by Kyle Brandt
    Is it a bad idea, for any reason, to leave accounts logged on to Windows Remote Desktop sessions, that is, instead of logging off, just closing the session so it locks? In this case, the limited number of Remote Desktop connections is not an issue. I am just wondering if anyone has seen sessions leak memory over time, or security issues with doing this, etc. I could see that programs left open might consume or leak memory, but has anyone seen this with Microsoft software such as Control Panels, Management Consoles, and Exchange System Administrator?

    Read the article
