Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 105 of 389

  • Migrating physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello! I'm new to this forum, so please forgive my clumsiness and my poor English. I started a month ago as a trainee at a company, and my task is to migrate 3 physical servers to a virtualization technology. The company makes e-learning software, so there is a lot of data: videos, Flash files, and compressed (zip) archives. A quick inventory of the servers: OS: 1 Debian and 2 Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with the Virtualmin template to create web sites dynamically (there is no sysadmin). The future provider will be responsible for securing, updating, and creating the virtual machines (outsourcing), on Red Hat OSes. So I would like your help choosing a virtualization technology (for now I prefer Red Hat's KVM/RHEV; VMware is expensive), estimating the hardware requirements (allowing for 4 or 5 years of growth), and drawing up a good plan so nothing gets forgotten. Thank you for your responses.
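
    Before committing to KVM, it's worth confirming the existing hardware can do hardware-assisted virtualization and capturing a sizing baseline; a minimal sketch using standard Linux tools:

        # >0 means the CPU has VT-x/AMD-V and can run KVM
        egrep -c '(vmx|svm)' /proc/cpuinfo
        # baseline figures to extrapolate 4-5 years of growth from
        free -m        # current memory usage
        df -h          # disk usage (videos and zip archives grow fast)
        sar -u 1 10    # CPU load sample (sysstat package)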

  • Access Windows VPN DNS from Ubuntu

    - by user46427
    I am using Ubuntu 10.04 to access a Windows VPN. I connect to the VPN from Ubuntu, and when I open a Windows 7 virtual machine (VirtualBox), everything works great: I can access local network drives, ping local servers, remote into local machines, etc. However, I can do none of this from Ubuntu itself. With the VPN connected, I cannot even ping anything within the VPN's local network. I'm guessing it's a DNS issue that Windows handles automatically, but Ubuntu needs a setting somewhere to tell it to use the VPN network's DNS servers? Any ideas? I'm a relative novice with Ubuntu, especially VPNs in Ubuntu. [EDIT] Actually, I'm almost positive it is DNS, because if I get the IP address from the Windows VM, I can use Terminal Server Client to remote into a machine.
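
    One way to test the DNS theory by hand; 10.0.0.1 is a placeholder for whatever DNS server the Windows VM reports under ipconfig /all:

        # with the VPN up, see what Ubuntu is actually using for DNS
        cat /etc/resolv.conf
        # temporarily force the VPN's DNS server (NetworkManager may overwrite this)
        echo "nameserver 10.0.0.1" | sudo tee /etc/resolv.conf
        # if names now resolve, add that server permanently in the VPN
        # connection's IPv4 settings in NetworkManager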

  • Remotely enter encryption key?

    - by Jason Swett
    This might be a really dumb question, but here goes anyway. I just bought a couple of servers. I already installed Ubuntu with encrypted LVM on one and I'm planning to do the same with the other. This means that every time I boot these machines, I have to enter the passphrase, and I'll have to do it every morning because I power each machine off at night for security reasons. Here's the problem: I don't have monitors or keyboards for these servers. It seems to me I have two options: (1) somehow enter the passphrase remotely, or (2) buy a KVM switch. I doubt #1 is an option, but I want to make sure before I buy a KVM. Is it possible to enter the passphrase remotely? And is it a good idea?
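
    It is possible: the usual trick on Debian/Ubuntu is to put a small SSH server (dropbear) into the initramfs so you can log in and supply the passphrase before the root filesystem is unlocked. A rough sketch; package and path details vary by release:

        sudo apt-get install dropbear
        # authorize your key for the initramfs SSH server
        cat ~/.ssh/id_rsa.pub | sudo tee -a /etc/initramfs-tools/root/.ssh/authorized_keys
        sudo update-initramfs -u
        # at boot, SSH in from another machine and feed cryptsetup the passphrase:
        ssh root@server
        echo -n "your-passphrase" > /lib/cryptsetup/passfifo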

  • Where to run Java and PHP code continuously

    - by az1112
    I'm sorry if this is the wrong forum for a question like this, but you have to start asking questions somewhere to get anywhere :) The question is pretty simple: I need a server where I can run Java and PHP scripts 24 hours a day, 7 days a week. I need to be able to access this server via SSH, and I need to be able to retrieve the data generated by scripts using SCP. Also, I need to be able to run 10-20 scripts simultaneously. What is the name of the thing I should be looking for? Is it a dedicated server? I'm confused for 2 reasons: 1) because there seem to be all kinds of servers out there; 2) because most companies advertising dedicated servers seem to be aiming them at people who want to host websites. But I don't want to host a website; I just want to run my code.
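
    For what it's worth, any dedicated server or VPS with root SSH access will do this; keeping scripts running after logout and pulling results back looks like the sketch below (script and file names are placeholders):

        nohup php collect.php > collect.log 2>&1 &
        nohup java -jar crunch.jar > crunch.log 2>&1 &
        # later, from your own machine, fetch the generated data:
        scp user@yourserver:/home/user/results.csv .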

  • Can't connect to server from certain machines

    - by Joel Coel
    On a small college campus we have a VLAN set up for the computer labs. These machines get assigned IP addresses in the 192.168.7.xxx range. In the server room, all of the servers are on the default VLAN and assigned IP addresses in the 10.1.1.xxx range. For the most part this works, but the lab machines are unable to connect to one of the servers. They can't even ping it, yet they can talk to other servers on the same switch just fine. At first I thought it might be a VLAN issue, but I changed the server port's VLAN to match other known-working ports with no effect. Any ideas?
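
    A sketch of how to narrow this down; 10.1.1.25 and eth0 are placeholders for the problem server's address and interface:

        # from a lab machine: where does the path stop?
        traceroute 10.1.1.25
        # on the problem server: do the lab packets arrive at all?
        tcpdump -n -i eth0 net 192.168.7.0/24
        # packets arriving with no replies usually points to a wrong
        # netmask/default gateway on the server, or a host firewall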

  • Keeping a folder of files in sync across 3 machines

    - by Wizzard
    Morning. Got 3 machines that have user content on them, which I need to keep in sync. This is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system? These are web servers, so we don't want a slow, IO-hungry process. 3 servers; user content can be added to any one of them and needs to be propagated to the other two.
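
    For reference, rsync will propagate deletions if asked, though between three writable masters a state-tracking tool such as unison is safer (plain --delete can resurrect or re-delete files depending on run order). A pairwise sketch with placeholder paths; --delete is destructive, so dry-run first:

        rsync -azn --delete /srv/content/ web2:/srv/content/   # dry run
        rsync -az  --delete /srv/content/ web2:/srv/content/
        # unison does true two-way sync with delete tracking:
        unison /srv/content ssh://web2//srv/content -batch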

  • Should I install an AV product on my domain controllers?

    - by mhud
    Should I run a server-specific antivirus, a regular antivirus, or no antivirus at all on my servers, particularly my domain controllers? Here's some background on why I'm asking: I had never questioned that antivirus software should be running on all Windows machines, period. Lately I've had some obscure Active Directory related issues that I tracked down to antivirus software running on our domain controllers. The specific issue was that Symantec Endpoint Protection was running on all domain controllers. Occasionally, our Exchange server would trigger a false positive in Symantec's "Network Threat Protection" on each DC in sequence. Once it had been blocked by every DC, Exchange began refusing requests, presumably because it could not communicate with any Global Catalog servers or perform any authentication. Outages would last about ten minutes at a time and would occur once every few days. It took a long time to isolate the problem because it was not easily reproducible, and generally the investigation was done after the issue had resolved itself.

  • Who deleted my files?

    - by akalter
    I have some Linux servers, and on two of them we run MySQL with a daily backup on both machines, but the backup scripts are different. I have read both scripts: one of them contains a "delete older files" step, but on the other machine old backups are also disappearing, and not because of the script. I'm trying to find out what is deleting my files. I'd like to use the same script on both machines anyway, because the script with the deletion step also copies the files to another server, and I want that on both servers. Any ideas on what is deleting my older backups? Thank you!
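
    One way to catch the culprit is a Linux audit watch on the backup directory (path and key name below are placeholders):

        # log every write/attribute change, including unlink, under the backup dir
        auditctl -w /var/backups/mysql -p wa -k backup-del
        # after the next deletion, see which user and process did it
        ausearch -k backup-del -i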

  • Why can't we reach some (but not all) external web services via a VPN connection?

    - by Paul Haldane
    At work (a UK university) we use a set of Windows servers running WS2008R2 and RRAS which offer a VPN service to students in our accommodation. We do this to associate network connections with individuals. Before they've connected to the VPN, all they can talk to is the stuff that's needed to set up the VPN and a local web site with documentation on how to connect. Medium term we'll probably replace this, but it's what we're using at the moment. The VPN on the 2008 servers allocates clients a private (10.x) address. Access to external sites is through NAT on the campus routers (same as any other directly connected client on a private address); non-VPN connections aren't seeing this problem. The older servers run WS2003 and ISA2004. That setup works but has become unreliable under load. The big difference there was that we were allocating non-RFC1918 addresses to the clients (so no NAT required). The behaviour we're seeing is that once connected to the VPN, clients can reach local web sites (that is, sites on the campus network) but only some external sites. It seems (but this may be chance) that the sites we can reach are Google ones (including YouTube). We certainly have trouble reaching Microsoft's Office 365 service (which is a pain, because that's where mail for most of our students is). One odd bit of behaviour is that clients can fetch (using wget on a Windows 7 client) http://www.oracle.com/ (which gets a 301 redirect) but hang when asked to fetch http://www.oracle.com/index.html (which is what the first URL redirects to). Access works reliably if we configure clients to use our local web proxies (Squid). My gut tells me that this is likely to be something in the chain dropping replies, either based on HTTP inspection or on the IP address in the reply. However, I'm puzzled about why we're only seeing this with the VPN clients. The plan for tomorrow (when I'm back in the office) is to set up a web server on an external connection so that we can monitor behaviour at both ends of the conversation (hoping that the problem manifests itself with our test server). Any suggestions for things we should be looking at?
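
    One thing worth ruling out first: "small responses arrive, full pages hang" is the classic signature of path-MTU blackholing on a tunnel. A quick test from a Windows 7 client (the hostname is just an example):

        rem 1472 bytes of payload + 28 bytes of headers = a full 1500-byte frame
        ping -f -l 1472 www.oracle.com
        rem if that fails but smaller sizes work, try clamping the VPN adapter's MTU:
        netsh interface ipv4 set subinterface "VPN Connection" mtu=1400 store=persistent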

  • SSH - SFTP/SCP only + additional command running in background

    - by Chris
    There are many solutions described for forcing an SSH connection to SFTP only by modifying sshd_config: add a new group Match block and give that group a ForceCommand internal-sftp. That works great, but I would love one more feature. My servers automatically ban IPs that try to connect too often within a short time, so an SFTP client that opens multiple connections to work faster can get banned instantly for a long time. The servers have a script for whitelisting users by an administrator, and I've modified this script to whitelist the user who runs it. All I need now is for the server to execute that script when somebody logs in. Over plain SSH that's no problem; just put it in .bashrc or the like. But ForceCommand doesn't run those scripts on login. Is there any way to run such a shell script before, or at the same time as, the ForceCommand gets fired?
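
    One approach that should work: point ForceCommand at a small wrapper that runs the whitelist script and then execs the real SFTP server binary (internal-sftp itself can't be wrapped). Paths below are typical Debian locations and the whitelist script is your own:

        # sshd_config
        Match Group sftponly
            ForceCommand /usr/local/bin/sftp-wrap

        # /usr/local/bin/sftp-wrap
        #!/bin/sh
        /usr/local/sbin/whitelist-me.sh "$USER" >/dev/null 2>&1
        exec /usr/lib/openssh/sftp-server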

  • Why would TCP wrappers stop working for sshd?

    - by toby1kenobi
    On a couple of CentOS 5 servers, sshd seems to have become 'unwrapped': previously I was using TCP wrappers and hosts.allow/hosts.deny to control access, but these are now not being used. If I execute
        $ ldd /usr/sbin/sshd | grep libwrap
    it outputs nothing, whereas on servers where TCP wrappers are still working I see
        libwrap.so.0 => /lib64/libwrap.so.0 (0x00002b2fbcb81000)
    Does anyone know what might cause this, or how it could be rectified? Update, as requested:
        $ rpm -qV openssh-server
        S.5....T c /etc/pam.d/sshd
        S.?....T c /etc/ssh/sshd_config
        S.5..... /usr/sbin/sshd
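
    That last line is the telling one: S and 5 against /usr/sbin/sshd mean the binary's size and MD5 digest no longer match the package, i.e. sshd has been replaced (worth ruling out a compromise, though a locally rebuilt OpenSSH without tcp_wrappers support would look the same). A sketch of the fix:

        yum reinstall openssh-server
        rpm -qV openssh-server              # /usr/sbin/sshd should no longer be listed
        ldd /usr/sbin/sshd | grep libwrap   # libwrap.so.0 should be back
        service sshd restart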

  • Multiple sessions using port 1081 on one box via SSH

    - by regmaster
    Hi gurus, I am setting up a Linux hopping station to various other servers. My current config connects to each server on a different local port, e.g.:
        ssh -D 1080 -p 22 [email protected]
        ssh -D 1081 -p 22 [email protected]
    Now what I would like is to share the same port on the same box:
        ssh -D 1080 -p 22 [email protected]
        ssh -D 1080 -p 22 [email protected]
    But when I share it, I get the error below:
        bind: Address already in use
        channel_setup_fwd_listener: cannot listen to port: 1080
        Could not request local forwarding.
    How can I configure the same port? I want to share the same port because it has to be opened in a Citrix firewall on another machine, and I don't want to open many ports and keep changing them whenever the connection changes. Thank you.
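
    Two listeners can't share one address:port, but -D accepts an optional bind address, so the same port number can be reused on different loopback addresses (works out of the box on Linux; user@serverA/B are placeholders):

        ssh -D 127.0.0.1:1080 -p 22 user@serverA
        ssh -D 127.0.0.2:1080 -p 22 user@serverB
        # point each application at the matching 127.0.0.x:1080 SOCKS proxy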

  • Can't connect to Tomcat via JMX

    - by Icarokun
    I couldn't connect to one Tomcat server via JMX in a Linux virtual machine. There was no firewall running; all seemed OK. By searching the web I found out that I had to set the -Djava.rmi.server.hostname property to fix it. It worked, but I don't understand why. My machine has five Tomcat servers running, all of them with JMX enabled on consecutive ports (8008, 8018, 8028...), all with the same configuration, and only one of them had this issue connecting via JMX. No firewall, no -Djava.rmi.server.hostname property in any Tomcat. I understand the fix, but I don't understand why four of my Tomcat servers worked and one of them didn't. Why is this?
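
    Background, for anyone hitting the same thing: the JMX port is only the initial handshake; the JVM then hands the client an RMI stub containing whatever address -Djava.rmi.server.hostname (or the JVM's own guess, often from /etc/hosts) names, and the client reconnects there. If one Tomcat's guess resolves to an unreachable address, only that one fails. A typical bin/setenv.sh sketch with an example address and port:

        CATALINA_OPTS="$CATALINA_OPTS \
          -Dcom.sun.management.jmxremote \
          -Dcom.sun.management.jmxremote.port=8008 \
          -Dcom.sun.management.jmxremote.authenticate=false \
          -Dcom.sun.management.jmxremote.ssl=false \
          -Djava.rmi.server.hostname=203.0.113.5"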

  • HTTPS load balancing based on some component of the URL

    - by user38118
    We have an existing application that we wish to split across multiple servers (for example: 1,000 users total, 100 users each across 10 servers). Ideally, we'd like to be able to relay the HTTPS requests to a particular server based on some component of the URL. For example:
        Users 1 through 100 go to http://server1.domain.com/
        Users 101 through 200 go to http://server2.domain.com/
        etc.
    The incoming requests look like this:
        https://secure.domain.com/user/{integer user # goes here}/path/to/file
    Does anyone know of an easy way to do this? Pound looks promising, but it doesn't look like it supports routing based on the URL like this. Even better would be if it didn't need to be hard-coded: the load balancer could make a separate HTTP request to another server to ask "Hey, which server should I relay to for a request to URL {the URL that was requested goes here}?" and relay to the hostname returned in the HTTP response.
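
    HAProxy can do the hard-coded version with path-matching ACLs (SSL termination needs HAProxy 1.5 or later; the regexes assume the /user/{n}/ URL shape above and are only a sketch):

        frontend fe_secure
            bind :443 ssl crt /etc/ssl/secure.domain.com.pem
            acl u_001_100 path_reg ^/user/([0-9]{1,2}|100)/
            acl u_101_200 path_reg ^/user/(1[0-9][0-9]|200)/
            use_backend bk_server1 if u_001_100
            use_backend bk_server2 if u_101_200

    The use_backend rules are evaluated in order, so the overlap at /user/100/ lands on bk_server1.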

  • CNAME lookup failed temporarily. (#4.4.3)

    - by klickverbot
    A friend of mine just told me that he can't send mail to accounts on one of my servers via the SMTP server provided by his ISP. The error message in the bounce he gets reads:
        Hi. This is the qmail-send program at aon.at.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.
        <[email protected]>: CNAME lookup failed temporarily. (#4.4.3)
        I'm not going to try again; this message has been in the queue too long.
    Any ideas what could be the reason for this? I have double-checked the DNS records for my domain, but they seem perfectly fine, and from any other mail servers I tested, delivery works flawlessly…
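
    One likely culprit worth checking: qmail's resolver uses a 512-byte answer buffer, and its CNAME lookup queries for ANY, so a recipient domain whose ANY response exceeds 512 bytes (easy with many MX/TXT records) produces exactly this temporary failure on the sending side. The response size can be checked with dig (substitute the real domain and name server):

        dig any mydomain.example @ns1.mydomain.example | grep 'MSG SIZE'
        # much over 512 bytes and unpatched qmail senders will choke;
        # trimming DNS records (or patching the sender's qmail) fixes it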

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup. I am looking for some advice, or a link to a tutorial, on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP): it receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a cluster of a primary and two slave nodes which work as separate servers running Mongo, but do those also run PHP? Or is the primary the only one running PHP? I have read some docs on the Mongo site, and watched a video of someone from 10gen going through it, but they are geared toward people who already seem to understand this stuff; I need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!
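
    To answer the placement question the way it's commonly done: PHP runs on every web server, and each PHP node connects to the whole MongoDB replica set; the driver locates the primary itself. A sketch using the legacy PHP driver's Mongo class (addresses and names are placeholders):

        <?php
        // every web server runs this same code; the driver sends
        // writes to the primary and fails over automatically
        $m = new Mongo(
            "mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017",
            array("replicaSet" => "rs0")
        );
        $db = $m->selectDB("mydb");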

  • Configuring a backup DNS server

    - by mattyh88
    I would like to set up a backup DNS server. I have added all my name servers in my domain name panel (ns1.domain.com, ns2.domain.com, etc.). If someone tries to go to domain.com and the first name server fails, will the resolver automatically try ns2.domain.com? All my DNS servers have the same master zones configured. Is that the way to go? Is it that easy, or am I missing something here? :)
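
    Resolvers do retry the next listed name server automatically, so this works. One refinement: rather than configuring the same zone as master everywhere, the conventional setup is one master plus slaves that pull the zone via transfer, e.g. in BIND (address is a placeholder):

        // on ns2 (named.conf)
        zone "domain.com" {
            type slave;
            masters { 192.0.2.53; };      // ns1's address
            file "slaves/db.domain.com";
        };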

  • Load Balancing and High Availability for Web Site

    - by nzgirl
    We're developing a database-driven website (70%/30% read/write load) using C#.NET, IIS, and MS SQL Server 2008, to be hosted on Windows 2008. Due to contractual reasons our setup has to be hosted on our own physical/virtual servers rather than a cloud solution at this stage. Could someone outline, or link to, some best practices that would provide both high availability (the priority at the moment) and eventually load balancing for our site? We're probably looking at some sort of 2-server mirrored SQL setup and 2 IIS web servers to start with. Thanks in advance.
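
    For the SQL half, database mirroring is the built-in HA mechanism in SQL Server 2008; a minimal T-SQL sketch of pairing two servers (names and ports are examples; the mirror copy must first be restored WITH NORECOVERY, and automatic failover also needs a witness):

        -- on both servers: create a mirroring endpoint
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- on the mirror first (pointing at the principal), then on the principal:
        ALTER DATABASE WebDb SET PARTNER = 'TCP://sql1.domain.local:5022';
        ALTER DATABASE WebDb SET PARTNER = 'TCP://sql2.domain.local:5022';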

  • Can I rent exclusive time on a powerful server running Linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest & greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables: processor generation & speed, memory speed, memory channels, cache configurations; it makes extrapolation difficult and error-prone. Is there a business that rents time on the newest servers? At least part of the time we'd need exclusive access to an otherwise quiescent system either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive. And the server needs to be pretty cutting edge for a solid basis of estimate.

  • Request to server X, reply from server Y

    - by klaasio
    I need some advice from you guys. I'm dealing with a custom load-balancing setup for which we will use 2 main servers and about 8 slave servers. In short: the user sends a request to a main server, the main server receives and handles the request and passes it to a slave server, and the slave server should send the data DIRECTLY to the user:
        User -> main server
        Main server -> slave server
        Slave server -> user
    The reason the data should be sent directly to the user, and not back through the main server, is bandwidth and a low budget. Now I have the following ideas: IP-in-IP, but that is not possible at layer 7 (as far as I know there are some expensive routers for that); or IP spoofing, using C/C++ to make the reply look like it came from the main server. But I was also thinking: perhaps the reply from slave server to user could just come from a different IP without causing issues with the user's firewall or anti-virus? I don't know much about home firewalls/routers or anti-virus software. I guess the user's machine wouldn't handle it well?
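
    For the record, what's described here already exists below layer 7: it's called direct server return (DSR), and LVS implements it as its "gatewaying" mode. A sketch with example addresses (director and slaves must share an L2 segment):

        # on the main server (LVS director):
        ipvsadm -A -t 203.0.113.10:80 -s rr
        ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g   # -g = direct routing
        # on each slave: hold the VIP on loopback, but never ARP for it
        ip addr add 203.0.113.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

    Replies then leave the slave with the VIP as their source address, so the user's firewall sees exactly the address it originally contacted and no spoofing code is needed.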

  • HAProxy to web host subdirectory?

    - by daemonza
    Hi. For reasons outside my control, I need to load balance two servers that run a non-virtual-host-enabled app on IIS. Normally in HAProxy I would load balance servers (Apache, Tomcat, etc.) like this:
        acl is_www_example_com hdr_end(host) -i www.example.com
        use_backend www_example_com if is_www_example_com

        backend www_example_com
            balance roundrobin
            cookie SERVERID insert nocache indirect
            option httpchk HEAD / HTTP/1.0
            option httpclose
            option forwardfor
            server node1 192.168.1.1:80 cookie node1
            server node2 192.168.1.2:80 cookie node2
    This routes to node1 and node2 and serves up the virtual-host site. If I need to route to www.example.com/application/data, how can I do that with the above setup, if it's possible at all?
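
    HAProxy can match the path as well as the Host header; a sketch against the configuration above (the reqrep line uses the old rewrite syntax, removed in HAProxy 2.1+):

        # if clients request the path and you just need to pick a backend:
        acl is_app_data path_beg /application/data
        use_backend www_example_com if is_www_example_com is_app_data

        # or, if the backends serve the app under /application/data and the
        # clients don't send that prefix, rewrite it in (inside the backend):
        reqrep ^([^\ ]*)\ /(.*)     \1\ /application/data/\2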

  • QPS for dnscache

    - by vedaprasad
    I have 2 internal DNS servers (ns1 and ns2) on Ubuntu 12.04 running dnscache, and my clients' resolv.conf looks like:
        nameserver ns1
        nameserver ns2
        nameserver 8.8.8.8
    All the load is taken by ns1, while ns2 sits idle until ns1 is down or not serving requests. I would like to put these 2 servers behind a load-balancer VIP, but my network team wants to know the QPS (queries per second) of the name servers so they can size the load balancer. Is there any way to find out the QPS of DNS queries handled by dnscache?
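
    One serviceable way to measure QPS, without digging through dnscache's logs, is to time a fixed number of inbound queries on the wire (the interface name is a placeholder; measure at peak hours for a useful figure):

        # capture 10,000 inbound DNS queries and note the elapsed time
        time tcpdump -n -i eth0 -c 10000 'udp dst port 53' > /dev/null
        # QPS ~= 10000 / elapsed seconds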

  • Make a server (other than the router) the default gateway for a subnet

    - by powerguy123
    I am trying to make a server (let's call it server_A), which is not the router, the gateway for a subnet. Why do I want this? I want to host a load balancer on server_A using LVS-NAT, and I don't want to implement a VLAN or IP-IP tunneling. I have modified the routing tables of the remaining servers on the subnet to use server_A as their gateway, and I have set server_A not to send ICMP redirect packets. But most traffic from servers in that subnet to outside clients is still being sent through the original gateway, bypassing server_A. Is there any other configuration I need to set in order to achieve my goal?
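
    Two things worth checking, as a sketch (assuming Linux on server_A and the other servers):

        # server_A must actually forward packets:
        sysctl -w net.ipv4.ip_forward=1
        # the other servers may have cached per-destination routes from ICMP
        # redirects received before redirects were disabled; flush them on each:
        ip route flush cache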

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose: 2 sites (NJ, CA); identical servers at each site, set up as remote-site cluster nodes running Windows Server 2008 R2 Enterprise; services: File, Terminal Services, AD, and maybe DNS. Will the following be true: files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync or back up files? Also, supposing I use roaming profiles with folder redirection, how will client computers in the WAN access their data through the cluster (i.e. will they automatically choose the best route)?
