Search Results

Search found 12546 results on 502 pages for 'aidan host'.


  • Can we put random URL entries on DNS

    - by ring bearer
    We use Microsoft DNS, and all (or most) of our internal hosts live in the *.company.org domain, so a typical host name looks like mymachine001.company.org. Is it possible to set up a wildcard DNS entry of the form *.subd.company.com? Note that the name ends in .com, while every host set up in this DNS so far has been of the form *.company.org. What I am trying to achieve is the following: a user on the internal network types http://someprefix.subd.company.com into a browser and, because of the wildcard entry in DNS, gets routed to the host mapped to *.subd.company.com. Note: at the same time, company.com has a public DNS entry that maps to a physical IP in some other network (a data center).
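
    For what it's worth, a minimal sketch of how such a wildcard could be created on a Microsoft DNS server, assuming a new internal zone subd.company.com is added first and 10.0.0.50 stands in for the target host (an internal-only zone for subd.company.com would not affect resolution of company.com itself, which stays with the public DNS):

        rem Create an internal zone for subd.company.com (zone name and IP are placeholders)
        dnscmd /ZoneAdd subd.company.com /Primary /file subd.company.com.dns
        rem Add a wildcard A record so any someprefix.subd.company.com resolves
        dnscmd /RecordAdd subd.company.com * A 10.0.0.50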

    Read the article

  • Remote Desktop Services create LAN and WAN user groups

    - by PHLiGHT
    I'm setting up one server with the gateway, session host, and web access roles on it. I know that isn't ideal, but I don't expect to have many simultaneous users. I want users to access Remote Desktop Web Access and connect to the session host via the gateway, as outlined here, which avoids opening 3389 to the internet. Users will be connecting from both the LAN and the WAN. What I'm looking to do is allow some users LAN access but not WAN access; an added plus would be if security settings (such as no clipboard) could differ when accessing via the WAN. Is this possible? It seems all users can log on to Remote Desktop Web Access by default. They can't run the RemoteApps once logged in without the proper permissions, but can I prevent them from even logging into Web Access? Since they renamed it from Terminal Services to Remote Desktop Services, my Googling has gotten a bit harder. Thanks!

    Read the article

  • DNS: domain2 points to domain1

    - by Yar
    I have one domain ("domain1") that is set up with hosting and mail (hosted by Gmail Apps). This domain works perfectly. I want a second domain ("domain2") to forward to domain1, but I don't want to use "DNS Forwarding." I would like to have it act EXACTLY like domain1, so that domain2/whatever points to the same resource as domain1/whatever WITHOUT AN HTTP REDIRECT NOR BROWSER TRICKS LIKE FRAMES. I would also love to be able to send mail to "blah@domain2" and have it go to "blah@domain1". Can this be set up, and how? I am using GoDaddy as registrar and DNS host for both domains. GoDaddy is also the web host for domain1, and mail hosting is with Google Apps.

    Read the article

  • VMware 1.0.1 Windows7 - Ubuntu hostnames

    - by Kyle K
    I'm trying to run Ubuntu as the guest OS under VMware 1.0.10, with Windows 7 Ultimate as the host OS. I previously had this set up with Windows XP as the host, and in fact I'm using the same .vmx file. My problem is that I can't get either Windows 7 or Ubuntu to ping the other by hostname. After installing Samba and Winbind on Ubuntu I was able to get this working under Windows XP, but for some reason I can't get it to work under Windows 7. I can ping by IP address, and the guest OS even shows up by hostname in the Windows networking panel (though of course I can't do anything with it), but pinging by short hostname just won't work. Also, the Windows 7 firewall is turned off completely.
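
    This doesn't explain why NetBIOS name resolution broke under Windows 7, but a common workaround is a static hosts entry on each side; the names and addresses below are hypothetical:

        # On Windows 7, in C:\Windows\System32\drivers\etc\hosts (guest IP assumed)
        192.168.190.128    ubuntu-guest

        # On Ubuntu, in /etc/hosts (host IP assumed)
        192.168.190.1      win7-host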

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that connections are drastically slower when streaming over RTMPS (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Does anyone have an idea why streaming media would run so slowly over SSL?
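
    One way to separate TLS overhead from FMS itself is to benchmark raw SSL throughput against the same box; a sketch, with the server name as a placeholder:

        # How many SSL connections/bytes the server sustains in 30 seconds
        openssl s_time -connect fms.example.com:443 -time 30

        # The server CPU's raw AES throughput, independent of the network
        openssl speed aes-128-cbc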

    Read the article

  • JBoss HTTPS on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances, one running on 8080 and the other on 8081. We need HTTPS enabled for the 8081 instance. First we tried enabling HTTPS on the 8080 instance by generating the keystore and editing server.xml, and that worked. However, when we tried the same thing for 8081 it did not; note that we removed HTTPS from the 8080 instance before enabling it for 8081. The server.xml below was used for both the 8080 and 8081 instances; the only difference was that the port was changed from 8080 to 8081 when enabling HTTPS for the 8081 instance. What am I doing wrong, and what needs to be changed? NOTE: by "enabled for 8080" I mean that when you visit https://URL:8484 you are actually served by the 8080 instance. When SSL is enabled for 8081 and I visit https://URL:8484, I get that the web page is unavailable. The server.xml (stock configuration comments stripped):

        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- Plain HTTP/1.1 connector -->
            <Connector port="8080" address="${jboss.bind.address}" maxThreads="350"
                       maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript" />
            <!-- SSL HTTP/1.1 connector -->
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- AJP 1.3 connector -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig">
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    Read the article

  • Connecting/Adding a private network on Windows Server 2008

    - by WhyMe
    Hey all, I have a dual-server configuration with a hosting provider, using VPS. My host told me that in order to use free bandwidth between my two servers (they are in the same location) I need to add an alias "subnet" to a specific IP (a private network, VPN). How do I add an aliased IP in Windows? In Linux, the relevant command is supposedly (from my searches in blogs) ifconfig eth0:1 10.129.175.165 netmask 255.255.255.0. They also said that another way to connect the servers (which should also be faster) is to use a "private LAN", but as it happens I don't know how to define one. :( Is there a Windows equivalent, or another way to do this? I have checked my IP configuration and found no indication of the private LAN or the VPN IP.
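
    A sketch of the Windows Server 2008 equivalent of that ifconfig alias, assuming the interface is named "Local Area Connection" (substitute the real adapter name):

        rem Add a second (aliased) IP address to an existing interface
        netsh interface ipv4 add address "Local Area Connection" 10.129.175.165 255.255.255.0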

    Read the article

  • Why doesn't my computer work at full speed?

    - by kubilas
    My motherboard is a Gigabyte GA-8I915PL-G with an Intel Pentium 4 630 3.0 GHz, which doesn't run at its default speed. It's currently at FSB 800, CPU Host 200, and CPU 3000 MHz; but sometimes it runs at FSB 533, CPU Host 133, and CPU 2025 MHz. Sometimes it's even at FSB 75 and CPU 1128 MHz. When I configure the default settings in Easy Tune, my computer doesn't work. Sometimes I need to clear the CMOS so I can set the defaults in the BIOS, but that doesn't always help. I've updated the BIOS; what else can I do to fix this problem?

    Read the article

  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution to this, but can't find any documentation or tips on my specific setup. My setup:

        Backend server (VPS, 2 GB RAM, 8 CPU): nginx serving 2 different websites (2 top-level domains) in virtualenvs, running gunicorn/Python/Django
        Database server (VPS, 1 GB RAM, 8 CPU): PostgreSQL with pgbouncer
        Varnish server (VPS, 1 GB RAM, 8 CPU): running only Varnish

    I'm trying to set up a Varnish server to handle rare spikes in traffic (20,000 unique req/s). The spikes happen when a TV program mentions one of the sites. What do I need to do to make the Varnish server cache both sites/domains on my backend server?

    My /etc/varnish/default.vcl:

        backend django_backend {
            .host = "local.backendserver.com";
            .port = "8080";
        }

    My /usr/local/nginx/sites-available/domain1.com:

        upstream gunicorn_domain1 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            listen 8080;
            server_name domain1.com;
            rewrite ^ http://www.domain1.com$request_uri? permanent;
        }
        server {
            listen 80 default_server;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain1.com;
            keepalive_timeout 5;
            # path for static files
            root /home/<USER>/<APP>-media/;
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain1;
                    break;
                }
            }
        }

    My /usr/local/nginx/sites-available/domain2.com:

        upstream gunicorn_domain2 {
            server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            listen 8080;
            server_name domain2.com;
            rewrite ^ http://www.domain2.com$request_uri? permanent;
        }
        server {
            listen 80;
            listen 8080;
            client_max_body_size 4G;
            server_name www.domain2.com;
            keepalive_timeout 5;
            # path for static files
            root /home/<USER>/<APP>-media/;
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    proxy_pass http://gunicorn_domain2;
                    break;
                }
            }
        }

    Right now, if I try the IP of the Varnish server I only get served domain1.com. Would everything be correct if I change the DNS of the two domains to point to the Varnish server, or is there extra setup needed before it would work? Question 2: Do I need a dedicated server for Varnish, or could I just install Varnish on my backend server? Or would the server run out of memory quickly?
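
    Since Varnish passes the client's Host header through to the backend unchanged, a single backend can serve both sites as long as nginx picks the right server block from that header. A minimal sketch in the same Varnish 2/3 syntax as the default.vcl above (the domain names are assumed):

        sub vcl_recv {
            # Both domains go to the same nginx backend; nginx routes on Host
            if (req.http.host ~ "(^|\.)domain1\.com$" || req.http.host ~ "(^|\.)domain2\.com$") {
                set req.backend = django_backend;
            }
        }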

    Read the article

  • IE8/IE7/IE6/IE5 on WinXP Use The Wrong Certificate

    - by Marco Calì
    For some reason, IE8/IE7/IE6/IE5 on Windows XP, instead of using the certificate listed in the nginx config for this website, use another certificate that belongs to other websites. Checking the nginx config file for the site, everything looks fine. Confirming this, all the other browsers (Chrome/Firefox/Safari/IE9) are served the correct certificate. This is the nginx configuration for the app:

        server {
            listen 80;
            listen 443 ssl;
            server_name mydomain.com;
            ssl_certificate /root/certs/mydomain.com/mydomain.bundle.crt;
            ssl_certificate_key /root/certs/mydomain.com/mydomain.key;
            access_log /opt/webapps/cs_at/logs/access.log;
            location / {
                add_header P3P 'CP="CAO PSA OUR"';
                proxy_pass http://127.0.0.1:20004;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
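
    This exact pattern (only IE on Windows XP sees the wrong certificate) is usually SNI: those browsers never send a server name, so nginx falls back to the default server's certificate for that IP. Both behaviours can be reproduced from a shell, with the domain as a placeholder:

        # With SNI, as modern browsers send it: should return the right certificate
        openssl s_client -connect mydomain.com:443 -servername mydomain.com

        # Without SNI, as IE on XP behaves: likely returns the other site's certificate
        openssl s_client -connect mydomain.com:443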

    Read the article

  • Address VMWare Fusion Linux guest by hostname?

    - by amrox
    I have an Ubuntu Server 9.04 image set up in VMware Fusion 3.0.0, using the NAT option for the guest's network connection. From the Mac host I can ssh to the Linux guest just fine using its IP address, but I would like to be able to refer to it by hostname for convenience, i.e.: mac-host:~ ssh [email protected] I had a similar setup using Parallels a couple of years ago, but I don't remember how it was set up; it may have "just worked". Any suggestions on how to make this work?

    Read the article

  • Would NetBSD be a good choice for a web server?

    - by Alexander
    I have the choice of crafting a NetBSD image for a Xen VPS host, and wanted to play around, as I like BSD and wish to use it for my general web hosting. I will be hosting a low-to-mid-traffic website and maybe a few other simple services. Do you think NetBSD would be a sufficient choice, in terms of general performance with multiple system users and a fair amount of traffic to Apache, compared to what Linux could normally handle? I am concerned that if I do start to really like it and keep it, I may be limiting myself as I move further with my web host and get more traffic (and maybe a lot of FTP access and user shell accounts). Ken

    Read the article

  • Are there other application-layer firewalls like Microsoft TMG (ISA) that do advanced HTTP rules?

    - by Bret Fisher
    Since the old days, ISA and now TMG have had several great features that I often want to deploy for my customers because of the enhanced functionality and security, but the cost of an additional server (hardware, Windows Server, and a TMG license) is often too much to justify compared to a $300-500 appliance. Are there other gateway firewalls that can perform one or more of these application-layer features: pre-authenticating incoming HTTP traffic against AD/LDAP before sending packets to the internal server (forms auth or a basic-credentials popup), and reading the host headers of incoming HTTP traffic (even on HTTPS) to a single public IP and routing packets to different internal servers based on that host header?

    Read the article

  • View Public Key in Domain Key for a Domain

    - by Josh
    Using Jeff's blog post, I'm creating DomainKeys for my account. I wanted to verify the setup using the dig or host commands from BIND for Windows, but I'm lost on one of the commands. I can view the _domainkey TXT record with this command:

        host -t txt _domainkey.stackoverflow.com

    but I'm at a loss as to how I'd find the selector record. Jeff points out that it can be anything before the period in "._domainkey.domain.com", but how would I list all the records if I didn't know the exact query name? Is there a wildcard I could use to view all TXT records, or all records, under this section?
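
    DNS has no wildcard query; short of already knowing the selector, the only way to enumerate the names under _domainkey is a zone transfer, which most servers refuse. A sketch of both approaches, with the selector name as a guess:

        # Query a specific selector directly (selector1 is hypothetical)
        host -t txt selector1._domainkey.stackoverflow.com

        # Attempt a zone transfer to list all records (usually refused by the server)
        host -l stackoverflow.com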

    Read the article

  • How does Apache handle port forwarding?

    - by vfclists
    I set up a localhost port-forwarding configuration in the coLinux .conf file, forwarding port 8090 to port 80 in the VM. When http://localhost:8090 is entered in the browser, I get the correct response from nginx, but with Apache the response produces a "/htdocs not found" error in the log. However, if I do local port forwarding from 8090 to port 80 via SSH, Apache responds fine. Is there something about the way Apache handles the port redirection that causes it to fail? P.S. For those unfamiliar with coLinux: it allows localhost connections to reach the VM by forwarding localhost ports on the Windows host to ports on the VM, since the 10.x.x.x IP is not accessible from the Windows host.
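
    For comparison, the SSH forwarding that does work could look like the line below (user and address are placeholders). If Apache is happy through this tunnel but not through coLinux's forwarding, comparing what each method delivers as the destination address and Host header is a reasonable next step:

        # Forward port 8090 on the host to port 80 inside the VM over SSH
        ssh -L 8090:localhost:80 user@colinux-vm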

    Read the article

  • Is SQL Azure useful without Windows Azure?

    - by KallDrexx
    I am currently doing some research to get preliminary IT cost projections for a project, and I was looking at Azure. Since this is a startup, I do not want to deal with IT operations myself and am instead looking at having it all professionally hosted. I am looking at Azure for the SLA assurances, the disaster-recovery operations already in place, and the reliability. I'm playing with some numbers, and I am wondering whether hosting my database on SQL Azure is an option while hosting the actual web page with another host, until I need the frontend scalability of Azure. Is this actually feasible, or will the latency of requests between the web host and Azure be too much, so that I would be better off hosting both on the same service?

    Read the article

  • Route multiple subdomains on one external ip to multiple internal ips

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), a router with an external IP, and several virtual machines with internal IPs on a host computer. Now I want to route git.example.org to internal IP 10.0.2.1 and build.example.org to internal IP 10.0.2.2. How can I do this? I set up the router so that all traffic on port 80 goes to my host computer (internal IP 10.0.2.3) and installed Squid on that computer. I added the following lines to the squid.conf file:

        cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1
        cache_peer_domain server_1 git.example.org
        cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2
        cache_peer_domain server_2 build.example.org

    But this is not working for me. :( Any help appreciated. Regards, Nils

    Update: Here is the solution for Apache: http://serverfault.com/a/273693
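
    One thing the quoted snippet leaves out: for Squid to act as a reverse proxy it also has to listen in accelerator mode and permit the traffic. A sketch of the directives that would typically sit alongside those cache_peer lines (the wide-open ACL is for illustration only):

        # Listen on port 80 as an accelerator, routing on the Host header
        http_port 80 accel vhost

        # Permit requests through to the peers (tighten this for production)
        http_access allow all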

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple:

        upstream lb {
            server 127.0.0.1:8081;
            server 127.0.0.1:8082;
        }
        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    But with this config, when one of the two backend servers is down, nginx still routes requests to it, which results in timeouts half of the time. :( Is there a way to make nginx automatically route requests to the other server when it detects that one is down? Thank you.
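
    nginx's upstream module already does passive failover; the knobs are max_fails and fail_timeout on each upstream server, plus proxy_next_upstream so the same request is retried on the other backend instead of timing out. A sketch of the config above with those set (the values are illustrative):

        upstream lb {
            # Mark a backend as unavailable for 30s after 3 failed attempts
            server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
            server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
        }
        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                # Retry the next upstream on connection errors and timeouts
                proxy_next_upstream error timeout;
                # Give up on a dead backend quickly instead of the 60s default
                proxy_connect_timeout 2s;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }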

    Read the article

  • Windows Server 2003: Remapping external domain

    - by Chuck Harmston
    We're playing a going-away prank on a coworker, and would like to use a rule on our internal DNS server to redirect techcrunch.com to one of our internal development servers. Basically, I'd like to accomplish the same thing as adding a line to a Linux /etc/hosts file, only for the entire network. I have access to our DNS server; how would you go about doing this? I created an entry in the reverse lookup subnet with a 'Host Name' of techcrunch.com and the 'Host IP' of our development server, a Linux box running Debian on which I've created a virtual host to handle requests to techcrunch.com. It doesn't appear to be working, however, and my expertise has reached its limit. Thanks!
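
    An entry in a reverse lookup zone only maps IP addresses back to names, so it won't affect browsing; what this needs is a forward lookup zone for techcrunch.com that the internal DNS server answers authoritatively. A sketch with dnscmd, using a hypothetical development-server IP:

        rem Create an internal forward zone for techcrunch.com (IP is hypothetical)
        dnscmd /ZoneAdd techcrunch.com /Primary /file techcrunch.com.dns
        dnscmd /RecordAdd techcrunch.com @ A 192.168.1.25
        dnscmd /RecordAdd techcrunch.com www A 192.168.1.25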

    Read the article

  • Monit send alert to nagios NSCA

    - by mYzk
    I want to monitor a website with Monit's check host function and, if the site is down, send an alert to Nagios's NSCA so it passes the result on to Nagios, which then marks the host as OK or down. The problem is how to make Monit's alert function send the information to the Nagios NSCA. I am not sure this will work, but what I came up with is:

        set alert exec 'echo -e "nagios nsca format" | /usr/local/nagios/bin/send_nsca -H serveraddress -c /usr/local/nagios/etc/send_nsca.cfg'

    Would this work, and is it the best solution, or can it be done some other way?
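
    Whatever mechanism triggers it, send_nsca expects each passive result as a tab-separated line: host name, service description, return code (0 = OK, 1 = WARNING, 2 = CRITICAL), and plugin output. A sketch of a well-formed submission, with the host and service names as placeholders:

        # Submit a CRITICAL result for service "website" on host "webhost"
        echo -e "webhost\twebsite\t2\tSite is down" | /usr/local/nagios/bin/send_nsca -H serveraddress -c /usr/local/nagios/etc/send_nsca.cfg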

    Read the article

  • Installing Fedora Core 12 on a VPS

    - by cinqoTimo
    I'm working on a Linux VPS that is running FC6, and I want to reinstall it with the latest version, FC12. The VPS runs on the Virtuozzo platform, and I am also running Plesk on it. This is all managed hosting through Network Solutions, so I don't have access to the actual box. My question is: how does one install a different OS through Virtuozzo? I see "reinstall VPS" in Virtuozzo, but that just burns a fresh copy of FC6. Is this a host-by-host thing, or are there best practices for accomplishing this?

    Read the article

  • difference between server and desktop

    - by user1241438
    I want to set up a web server. I would like to buy hardware for it, and I am trying to decide whether I should buy a desktop and host the web server on that, or buy a used server from eBay and host it there. An example is http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=180986172861&ssPageName=ADME:X:RTQ:US:1123 But what is the difference between a desktop and a server? These days even desktops come with plenty of RAM; the only other difference I see is that servers have RAID disk arrays. Is there any other difference?

    Read the article

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files and folders within a directory:

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='/downloads/'
        SOURCE='/srv/torrents'
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder, torrents, in /downloads on the destination machine. How can I force rsync to put all folders and files from /srv/torrents (remote) into /downloads/ (local) instead of creating /downloads/torrents as a separate directory?
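
    rsync treats a trailing slash on the source as "the contents of this directory" rather than the directory itself, so the fix is one character; a sketch of the changed line:

        # Trailing slash on $SOURCE syncs its contents, not the directory itself
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i "$SOURCE/" "$HOST:$DIR"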

    Read the article

  • configuring linux server to send traffic to local machines using local IP address

    - by gkdsp
    Two Linux servers, server1 and server2, are on the same local network (they also have access to an external network). Server2 has a local IP of 192.168.0.2 and a host name of host2.mydomain.com. QUESTION 1: If an application on server1 sends traffic to server2 using the host name host2.mydomain.com, what determines whether this traffic is routed to server2 over the local or the external network? QUESTION 2: To ensure that all traffic sent from server1 to server2 always uses the local network, could I simply include the following in server1's /etc/hosts file?

        192.168.0.2 host2.mydomain.com

    The thinking being: if the servers are always on the same network, there should never be a need for server1 to send traffic to server2 via the external network (that I can think of, anyway). Is this done in practice, or is some other method preferred?
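
    To confirm which address wins after editing /etc/hosts, it helps to test resolution the way libc does it; getent honours the hosts: ordering in /etc/nsswitch.conf, unlike dig or host, which always query DNS directly:

        # Shows the address applications on server1 will actually get
        getent hosts host2.mydomain.com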

    Read the article

  • Can this USB3 behaviour be anything else than a hardware failure?

    - by Jonas Wielicki
    While my motherboard is half a year old (an ASUS M5A99X EVO), I only recently started using its USB3 ports (after purchasing a USB3 external hard drive). However, I am encountering issues. I am running Linux 3.6.7-4.fc16.x86_64. Initially the hard drive worked fine over USB3 (an amazing ~160 MB/s), but I had problems after putting the drive to sleep manually with hdparm -Y after use (backups). After some time the device disappears from lsusb, and I see the following in dmesg:

        [ 1924.091107] xhci_hcd 0000:05:00.0: xHCI host not responding to stop endpoint command.
        [ 1924.091114] xhci_hcd 0000:05:00.0: Assuming host is dying, halting host.
        [ 1924.091147] xhci_hcd 0000:05:00.0: HC died; cleaning up
        [ 1924.091233] usb 11-1: USB disconnect, device number 2
        [ 1924.091272] sd 6:0:0:0: Device offlined - not ready after error recovery

    Testing with my (USB3-capable) notebook, I could not immediately reproduce the behaviour: I put the drive to sleep with hdparm -Y, waited for about an hour, and it was still listed in lsusb and responded after a few seconds' delay when I tried it. On the desktop the device would usually have vanished within that time. Googling this issue, I came across hints that playing with IOMMU settings and upgrading the BIOS might help; I upgraded the BIOS and tried both with and without IOMMU enabled, with similar results. Most disturbing is that one of the two USB 3.0 hubs sometimes also disappears from lsusb (or does not show up after boot at all). I've also heard that there are hardware issues with ASUS USB3 ports. Applying mechanical force to the cable doesn't push the issue one way or the other. Also, udev seems to re-enumerate all devices when I plug the HDD into the USB 3.0 port without success (I can tell because my keyboard layout changes back to the default, which I do not normally use). The drive is externally powered and its power supply is plugged in (it also stays powered when unplugged from USB, although it will spin down). So before I try to return the board, I wanted to find out: can this behaviour be anything other than a hardware failure on the motherboard?

    Read the article
