Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do live migration from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Windows 2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then use the JungleDisk service (backed by Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but it still takes at least half an hour. The much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6am to 7am), compress it, and send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something similar, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
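
    As a minimal sketch of the ZFS idea above (the dataset and host names are hypothetical, and this assumes the VM images live on a ZFS dataset): snapshot on a schedule, then send only the increment between the last two snapshots, compressed in transit.

        # take the hourly snapshot of the VM dataset
        zfs snapshot tank/vm101@0700

        # send only the delta between the 6am and 7am snapshots,
        # compressing it on its way to the offsite box
        zfs send -i tank/vm101@0600 tank/vm101@0700 | bzip2 -c | \
            ssh backuphost "bunzip2 -c | zfs receive tank/backup/vm101"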

  • What is needed for 'Previous Versions' to be visible on the client OS?

    - by Zoredache
    I have servers with Shadow Copies enabled, taking snapshots a couple of times a day. From the server, if you look at the local devices you can see the Previous Versions being populated reliably. But from remote clients, the ability of an end user to see the Previous Versions seems to be very hit-or-miss. For the sake of this question you can assume that all my clients are Windows 7 and the servers are Windows Server 2008 R2. Is there an exhaustive list of everything that is required for an end user to see Previous Versions? Are there any requirements for a certain level of share or filesystem permissions, other than read access? Does something need to be open on the firewall, other than what is already in place for normal Windows networking?
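
    One quick sanity check before chasing client-side causes, assuming you can run an elevated prompt on the server: confirm that snapshots actually exist for the volume hosting the share.

        REM list the shadow copies for the shared volume
        vssadmin list shadows /for=C: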

  • DNS forwarders limitations

    - by thejartender
    My question is very simple (maybe a tad too simple), but I will try to phrase it in a way that will hopefully assist future visitors. I have just set up (successfully, I hope) a DNS server with some name server records on Ubuntu 12.10. While I am waiting for it to propagate, I would like to know for future reference whether I can use more than 2 forwarders in my /etc/named.conf.options. Would this speed up propagation? To make this question and answer more valuable: what other public DNS servers are available over and above Google's public DNS addresses, 8.8.8.8 and 8.8.4.4? I would also like to know whether a restart of BIND means that my servers will need to re-propagate. Is there a methodology to update settings while BIND is running?
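
    For reference, a forwarders list is not limited to two entries; here is a hedged sketch of a named.conf.options block mixing Google's resolvers with OpenDNS (208.67.222.222 / 208.67.220.220). Note that forwarders only affect lookups your server performs on behalf of its clients; they have no effect on how your own zones propagate. BIND can also pick up changes without a full restart.

        options {
            forwarders {
                8.8.8.8;
                8.8.4.4;
                208.67.222.222;
                208.67.220.220;
            };
        };

        # apply configuration and zone changes without restarting the daemon
        rndc reload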

  • Commit charge peak higher than system limit

    - by Grubsnik
    We are seeing some very strange behaviour on our servers and Google didn't turn up anything useful, so I'm tossing it out here. A standard server is configured with 4 GB RAM, two 4 GB pagefiles, and Windows Server 2003. The servers are running 50-120 VB6/.NET applications which normally consume no more than 100 MB of memory each, but will occasionally run up to 300 MB. The issue of a single process consuming way too much memory is being traced down elsewhere, but the thing that is baffling us is that the reported peak commit charge is vastly higher than what we have available. As the image above shows, we are getting reported peaks that are way higher than what the system is actually capable of delivering. This number has been seen as high as 29 GB, which makes no sense at all for a system with a limit of 12 GB. Does anyone have an idea what is going on?
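
    A hedged way to watch this over time from the command line, rather than relying on a peak figure captured who-knows-when: sample the commit counters once a minute and compare them against the commit limit.

        REM log committed bytes vs. the commit limit every 60 seconds
        typeperf "\Memory\Committed Bytes" "\Memory\Commit Limit" -si 60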

  • Diagnose cause of long running requests in IIS 7.0

    - by Shlomi Fruchter
    We are running an ASP.NET web application on IIS 7.0, Windows Server 2008 R2, with SQL Server 2008 R2 for the DB. On weekends, when traffic is high, the request queue length on the IIS servers increases (up to 800 requests) and then drops, every minute or so. I can see that the servers are handling some requests which, according to the 'Current Requests' view in IIS Manager, are long running (their Time Elapsed value ranges from 20 to 50 seconds). Those requests are not necessarily heavy queries; actually, I can't understand why they are taking so long. Can it be because the client is closing the connection on their side? Thanks, Shlomi
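
    The same 'Current Requests' data can be pulled from the command line, which makes it easier to capture during a spike; a sketch, assuming the default IIS install path (the threshold is in milliseconds, so this lists requests running longer than 20 seconds, along with the URL, module, and pipeline stage each one is stuck in):

        %windir%\system32\inetsrv\appcmd list requests /elapsed:20000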

  • Hyper-V and host-installed hardware devices: can guest VMs access?

    - by gravyface
    Have a couple of servers I'd like to set up as Hyper-V hosts, with a couple of Windows 2008 Standard VMs. On the host, we have a few hardware devices we'd like to be accessible to the guests; I'm not sure whether these are supported via raw pass-through on Hyper-V (which I don't have a lot of experience with), assuming the same drivers are installed on the guest. The hardware in question is a Brooktrout fax card, a SCSI adapter for the tape drive, and a 9-pin serial port connected to one of the core firewalls for management.

  • SAN/NAS with high availability?

    - by netvope
    I have two servers that I plan to use for storage. Each of them has a few SATA disks directly attached. I want the storage to be available even if one of the storage servers is down (preferably the clients wouldn't even notice the failover, although I'm not sure whether this is possible). The clients may access the storage via NFS and Samba, but this is not a must; I could use something else if needed. I found this guide, Installing and Configuring Openfiler with DRBD and Heartbeat, which apparently does the thing I want. It relies on three components, Openfiler, DRBD, and Heartbeat, and all three of them need to be configured separately. I'm wondering: are there simpler solutions? Is DRBD+Heartbeat the best practice for a situation like mine? I'm also interested to know whether there are alternatives that don't depend on DRBD.
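
    For what it's worth, the DRBD half of that guide boils down to a resource definition along these lines (the hostnames, devices, and addresses below are placeholders); Heartbeat then decides which node holds the Primary role and owns the service IP that the NFS/Samba clients point at.

        resource r0 {
            protocol C;                   # synchronous replication
            on filer1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;      # backing SATA partition
                address   10.0.0.1:7788;
                meta-disk internal;
            }
            on filer2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7788;
                meta-disk internal;
            }
        }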

  • Zero downtime during database schema upgrade on SQL Server 2008

    - by eject
    I have a web application on IIS 7 with SQL Server 2008 as the RDBMS. I need zero downtime during future upgrades of the ASP.NET code and the DB schema, and I need the right scenario for this. I have 2 web servers, 2 SQL servers, and one HTTP load balancer which allows switching the backend web server for web requests. The main goal is to keep the 1st web server and DB server up and running, update the code and DB schema on the 2nd server, and then switch all requests to the 2nd server. And then the main problem: how do I copy the data that changed during the upgrade from the 1st database to the 2nd?

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed into 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference it makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world such as South Africa, Australia, China, Peru, etc. Does anyone know of any service where we could piggyback off their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt
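
    For the "simple scripts" part, a minimal sketch using curl's timing variables against a hypothetical endpoint; run from each vantage point, it reports DNS, connect, and total times, which covers the latency side (throughput would need a larger payload fetched the same way).

        #!/bin/sh
        # hypothetical endpoint to probe
        URL="http://service.example.com/ping"
        curl -o /dev/null -s \
            -w "dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n" \
            "$URL"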

  • rsyslog server - Can you split up and organize logs?

    - by Jakobud
    I recently set up one of our servers as an rsyslog server. I now have our firewall set up to log everything to that rsyslog server. But there doesn't seem to be any organization of the logs: all the firewall logs are just being dumped into /var/log/messages on the rsyslog server. I guess I was expecting them to at least be in a machine-specific log file or directory. How can I organize the incoming logging? If I set up 20 servers to all log everything to a central rsyslog server, I really don't want everything being dumped into one big file or a few files. How can I tell rsyslog where to log what, so that, for example, all the logs for a specific server go to that server's own directory/file? Is this possible?
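
    Yes; rsyslog templates can split incoming messages by host and program. A minimal sketch for the central server (legacy rsyslog.conf syntax; the path is just an example): each remote host gets its own directory, and the trailing discard rule keeps remote traffic out of /var/log/messages.

        # in /etc/rsyslog.conf on the central server
        $ModLoad imudp
        $UDPServerRun 514

        $template RemoteLogs,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
        :fromhost-ip, !isequal, "127.0.0.1" ?RemoteLogs
        & ~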

  • Distributed Server Monitoring Solution

    - by MaterialEdge
    I belong to an independent IT firm that manages and maintains about 50 business clients' networks, ranging from small 5-system networks to 200+ systems. Because we are unable to directly monitor each server at these locations (distributed over a very large area) on a regular basis, I am looking for a method to monitor and alert us to any problems that may arise, so that we can respond quickly with, hopefully, preventative measures. I'm not sure what solutions are available for this type of situation, but something that utilizes a central server at our business, with all client servers sending alerts or logs to it for daily monitoring, might work best. All these servers are running a Windows Server OS. In your opinion, what would be the best course of action to accomplish this?

  • Network monitoring library, or objects, for a cloud

    - by Andrew Smith
    I am looking for a library to support server/switch monitoring, i.e. to actually be able to check with the device whether it's working OK. However, this requires some sort of auto-detection and device support. Basically I need to automatically detect a new device and start monitoring it for things like CPU and ping. So how do I auto-detect the machine remotely? This is something I need the library for. Rackspace has something like this, a "Cloud Monitoring API". But is there anything open source which can be used the same way? Nagios and the others don't have such an API, and the big, expensive systems are too heavy to run in a public cloud, so there must be some other network monitoring engine with an API which can add new servers automatically and support user isolation, so that, for example, I don't see servers other than mine.

  • Can I create a DC without a DNS Server?

    - by onik
    So as the title says, I need to promote a standalone Win2008R2 server to a domain controller, and I don't need a DNS server (I think), as there will be no clients connected to the domain; it will only be used for Remote Desktop Services. Yes, I know it's considered bad practice to install other roles on the DC, but in this case it's necessary. Do I need to install the DNS Server role, and if I do, how do I make it as transparent as possible? EDIT: It seems that I do need to install the DNS Server role, so how can I configure it not to mess up my entire domain? For example: the server I need to promote is rdc.mydomain.com, and it has an A entry for its IP in the current DNS, while the other servers under mydomain.com are running Linux and don't need to know anything about this Windows box. The domain uses third-party DNS, and all edits and updates need to be done via a separate web page; our servers don't have write/update access.
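
    If the role does have to go on, one hedged approach to keeping it quiet: host only the AD-specific zone locally, point the DC's own NIC at itself for DNS, leave everything else resolving via the existing third-party DNS, and disable recursion so the local server never answers for mydomain.com at large. The recursion switch, for example:

        REM stop the local DNS server from resolving anything beyond its own zones
        dnscmd /Config /NoRecursion 1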

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grows; private bytes does not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536 MB instead of 2048), but does not necessarily give an application the extra address space; that depends on the flags in the application's PE header. How can I determine whether the OS is allowing a particular application, running in production, to access address space above 2 GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally, how should I go about finding the optimal value for this setting?
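
    The PE flag in question is IMAGE_FILE_LARGE_ADDRESS_AWARE, and it can be inspected without the source, assuming the Visual Studio tools (which include dumpbin) are available somewhere; theapp.exe below is a placeholder for the real binary.

        dumpbin /headers theapp.exe | findstr /i "large"
        REM if the flag is set you should see:
        REM   Application can handle large (>2GB) addresses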

  • Windows Server 2008 unable to activate with other methods

    - by matt king
    I'm trying to activate Windows Server 2008 SP2 today, since the trial activation period is over. I do not have an internet connection on this server, so I cannot activate online. With the other servers in this farm, I've been able to run slmgr -ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx, after which the activate-by-phone method would open up and we would just activate that way. I say again: I don't have an internet connection, so I cannot do the online activation. If I run slui 4, it brings up the activation window, but "show me other ways to activate" is still greyed out. I've disabled the NIC on this Hyper-V server and I still cannot get the other ways to activate to show up. Anyone have any ideas? This computer is one of my AD servers, so it being in notification mode kind of sucks. Thanks.
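
    For what it's worth, the phone-activation path can also be driven entirely from slmgr, which sidesteps the greyed-out UI; a sketch (the confirmation ID is read back to you by the automated phone system):

        REM install the product key (as in the question)
        slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
        REM display the Installation ID to read to the phone system
        slmgr.vbs /dti
        REM enter the Confirmation ID the phone system gives back
        slmgr.vbs /atp <ConfirmationID>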

  • How to bypass Forefront TMG for downloading from Adobe Cloud

    - by user1006272
    I hope this question has not been asked already; I've spent a couple of days googling around trying to find a solution. I have one computer that needs to download from Adobe Cloud to install applications like Photoshop. The issue I'm having is that Adobe uses a download manager program (AdobeApplicationManager.exe) whose estimate of the time left on the download of any app, like Photoshop, just keeps growing. Is there a way to allow just the download manager, from that one computer, to bypass any filtering settings in Forefront TMG 2010? I have very little knowledge of servers / ISA Server / Forefront TMG and have been thrown into this position by luck, I guess. Any help with this would be highly appreciated. Thanks in advance.

  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM, and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user, and add the SSH key, all of which I've automated into a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load hit on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources for each check, but is there a huge difference? (i.e. enough of a difference to warrant the switch to NRPE). If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
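
    For context, a sketch of what the two styles look like in the Nagios object config (plugin paths and thresholds are examples): check_by_ssh pays for a full SSH handshake and key exchange on every check, while check_nrpe is a single lightweight query to the NRPE daemon, which is where the per-check resource difference comes from.

        # via SSH
        define command {
            command_name  check_remote_load_ssh
            command_line  $USER1$/check_by_ssh -H $HOSTADDRESS$ \
                -C "/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6"
        }

        # via NRPE (the thresholds live in nrpe.cfg on the remote host)
        define command {
            command_name  check_remote_load_nrpe
            command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_load
        }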

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year we had one server hosted with one company. We recently acquired another server in a different location, with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup, and rsync to sync the scripts and files), but I am at a standstill on how to configure the failover. Ideally I would like both machines to accept requests, like round-robin DNS; however, if one machine goes down, I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks
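
    One detail worth pinning down for the master-master half; a hedged my.cnf sketch: offset the auto-increment sequences so both masters can accept writes without colliding on primary keys.

        # node A (my.cnf)
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # node B (my.cnf): identical, except
        server-id                = 2
        auto_increment_offset    = 2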

  • What Sort of Server Setup Am I Likely to Need? - School A/V streaming

    - by DeathMagus
    My prior experience with servers has generally been limited to home file-sharing servers, low-traffic web servers, and the like. This leaves me with the technical knowledge of how to set up a system, but little experience in terms of scaling said system. My current project, however, has me as the technical lead in setting up a school for online audio and video streaming. The difficulty I'm running into is that I don't quite have the experience to guess what they'll need, and they don't have the experience to tell me, so I've tried to ask as many pertinent questions as I can about what they want to do with their server. Here's what I found out:
    - About 1000 simultaneous users, hoping to expand (possibly significantly).
    - Both video and audio streaming, at obviously the highest quality possible.
    - Support for both live and playlist-based streaming. Probably only one channel, but as it's an educational opportunity, I imagine letting them have a few more wouldn't hurt.
    - No word on whether they're locked into Windows or whether Linux is acceptable.
    - Approximate budget: $7000. It may actually be about $2k less than this, because of a mishap with another technology firm (they ordered a $7000 DV tape deck for some reason, and now the company wants them to pay a 30% restocking fee).
    The tentative decisions I've already made:
    - I'm planning on using Icecast 2 for the streaming server, fed by VLC Shoutcast encoding.
    - Since the school already has a DMZ set up, I plan on placing the Icecast server in there and feeding it through their intranet from a simple workstation computer in their studios.
    - This system isn't in any way mission critical; it's an educational tool (they're a media magnet school), so I figure redundancy is not worthwhile to them from a cost:benefit perspective.
    What I don't know is this:
    - How powerful a server will I need?
    - What is likely to be my major throttle: bandwidth? How can I mitigate that? (See the rough numbers after this list.)
    - Will I need anything special for the encoding workstation, other than professional video and audio capture cards and a copy of VLC?
    - Are there any other considerations that I'm simply missing?
    Thanks a lot for any help. If there's more information you need, let me know and I'll tell you all I can.
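
    On the bandwidth question, rough back-of-the-envelope arithmetic (the per-stream bitrates here are assumptions, not measurements) suggests the uplink will be the throttle long before server horsepower is:

        1000 listeners x 128 kbps audio     = ~128 Mbps sustained outbound
        1000 viewers   x 500 kbps SD video  = ~500 Mbps sustained outbound

    If numbers like these exceed the school's uplink, the usual mitigations are lowering the bitrate, capping the listener count, or relaying the stream through a hosted Icecast relay with more bandwidth.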

  • DNS propagation

    - by Paddington
    I have 1 primary DNS server (ns1.mydomain.com) running on Fedora, and 2 secondary ones (ns2 and ns3). DNS changes made on my web servers first go to the primary name server and then propagate to the secondary servers. After making a DNS change to a domain on the web server, I can't see the new DNS information on ns1 when I perform: dig @ns1 A blahblah.com. I then went to the master records on the name server (which runs named) in the directory /var/named/run-root/var/named/masters, and I can see the A record has been updated appropriately. Tailing the logs (/var/log/messages) shows no errors. What could be the issue?
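
    A hedged first check: if the zone file was edited but its SOA serial was never bumped (or named was never told to reload), named happily keeps serving the old data, and the secondaries will never transfer the change either. Comparing serials and forcing a reload looks like this:

        # serial currently being served by the primary
        dig @ns1.mydomain.com mydomain.com SOA +short

        # tell named to pick up edited zone files
        rndc reload mydomain.com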

  • Solaris 10 5/09 can't find SATA disk

    - by anon
    We need to run standard Solaris 10 on a few development servers (Dell 530s) because we can't get a commercial application running on OpenSolaris (we're still trying). However, we are finding that sometimes, when Solaris 10 goes to do the install, after the setup screens it can't find the SATA drive. We tried the BIOS setting described here: BigAdmin. On some Dell 530s Solaris GA installs fine, but on others it doesn't. OpenSolaris always installs. Is there some way we can determine (e.g. by installing OpenSolaris and examining the SATA driver used) what OpenSolaris detected, and then use some option or driver from it to get Solaris 10 installed on our development Dell servers?
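
    To see which driver the working OpenSolaris install bound to the controller (so you know what Solaris 10 is failing to match), something along these lines from the OpenSolaris side is a reasonable start:

        # list the device tree with the driver bound to each node
        prtconf -D | grep -i sata

        # dump PCI vendor/device properties for closer inspection
        prtconf -pv | more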

  • Amazon EC2 Web server in Load Balancer gives 503

    - by dale
    We've been running our web servers at Amazon with a load balancer and auto-scaling for over a year with no problem. All of a sudden today, requests began to get aborted with the error: "503 ... Backend server is at capacity". The web servers are at 1% CPU and no other alarms trigger. We use Amazon's load balancer and nginx. Lots of requests like this are showing up in the access_log:

        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:09 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.246.114.93 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.229.15.214 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"
        10.229.15.214 - - [05/Jun/2014:20:16:10 +0000] "-" 400 0 "-" "-"

    Any thoughts?

  • Design question: two View Connection Servers on one vCenter?

    - by dturner71
    Can I have two independent Connection Servers attached to the same vCenter Server? Here's my scenario: I'm setting up View 4 to provide desktops to two separate Windows domains that are on different IP subnets, separated by a firewall. One cluster of physical servers, one vCenter Server, linked clones. As I understand it, View Connection Server has to be a member of a Windows domain in order for QuickPrep to work. So the way to provide desktops to both Windows domains is to have a Connection Server in each one, right? Then open ports in the firewall so the Connection Server on the other subnet can communicate with vCenter. Any reason why this won't work? Or is there a better way to accomplish it?

  • VMware ESX 3.5 Host Health shown as unknown

    - by dunxd
    I have an ESX 3.5 Update 5 cluster of five host servers, all fully patched as of this Friday. Today I noticed that one of the servers shows its hardware health status as Unknown in the VirtualCenter Infrastructure Client. When I look at the Health Status view under Configuration for that host, all the items have status Unknown. The server has exactly the same configuration as the others: same model (HP DL360 G5), memory, NICs, etc. I have tried restarting the management service with service mgmt-vmware restart, but this has not resolved the issue. Aside from this, I am not seeing any issues with the cluster; however, I hate having a blind spot like this. Any ideas?

  • How secure is using "Normal password" for SMTP with connection type = STARTTLS?

    - by harshath.jr
    I'm using an email client for the first time; for the most part I've always used Gmail via the web interface. Now I'm setting up Thunderbird to connect to an email server of my own (on my own server, own domain name, etc.). The server machine (and the email server on it) was preconfigured for me. I've now figured out a way to send and receive email, but I noticed that in the outgoing and incoming servers section, the connection type was STARTTLS (and not SSL/TLS), and the authentication type was "Normal password". Does this mean that the password will be sent across in plain text? I'm very paranoid about security; it's the only way that works for me. Can someone please post links that explain how SMTP (my outbound server) and IMAP (my inbound server) work, and what each connection type means? Thanks! PS: If this question does not belong here, please redirect me.
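
    A short factual note plus a way to see it on the wire (mail.example.com below is a placeholder for your server): STARTTLS begins as a plaintext connection and then upgrades the session to TLS before authentication happens, so a "Normal password" sent after a successful upgrade travels encrypted; it would only go out in the clear if the TLS upgrade were skipped or stripped. You can watch the upgrade yourself with OpenSSL:

        # SMTP submission: connect in plaintext, then upgrade to TLS
        openssl s_client -starttls smtp -connect mail.example.com:587

        # IMAP equivalent
        openssl s_client -starttls imap -connect mail.example.com:143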
