Search Results

Search found 1419 results on 57 pages for 'availability'.

  • SQL server availability issue: large query stops other connections from connecting

    - by Carlos
    I've got a high-spec (multicore, RAID) server running MS SQL 2008, with several databases on it. I have a low-throughput process that periodically needs a small amount of information from one of the DBs, and the code seems to work fine. However, sometimes when one of my colleagues runs a huge query against one of the other DBs, I see full CPU usage on the machine, and connections from my app time out. Why does this happen? I would have thought that the many cores and hard disks (together with a cleverly written DB server) would be able to keep at least some resources free for other apps. I'm pretty sure he doesn't use multiple connections for his query. What can I do to prevent this?
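
    One common mitigation is server-side: SQL Server 2008's Resource Governor can limit the CPU share of an ad-hoc reporting workload, and a lower MAXDOP setting stops a single query from parallelising across every core. On the application side, a longer login timeout plus a retry loop at least keeps a small, low-throughput process from failing outright while the box is saturated. A minimal client-side sketch, assuming pyodbc and a purely hypothetical connection string and table:

        import time
        import pyodbc  # assumes a SQL Server ODBC driver is installed

        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;"
                    "UID=appuser;PWD=secret")  # hypothetical connection string

        def fetch_with_retry(sql, attempts=3):
            """Use a generous login timeout and back off between attempts."""
            for attempt in range(attempts):
                try:
                    cn = pyodbc.connect(CONN_STR, timeout=30)  # 30 s login timeout
                    try:
                        cn.timeout = 30                        # per-query timeout, seconds
                        return cn.cursor().execute(sql).fetchall()
                    finally:
                        cn.close()
                except pyodbc.Error:
                    time.sleep(2 ** attempt)                   # wait 1 s, 2 s, 4 s
            raise RuntimeError("server still unresponsive after retries")

        rows = fetch_with_retry("SELECT TOP 10 id, name FROM dbo.widgets")  # hypothetical table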

    Read the article

  • Multi-domain server with dedicated SSLs HA

    - by user3692800
    I am hosting a server with 150 domain names (websites), and each of the SSL certificates requires a dedicated IP address. So: Windows Server 2008, with 150 IP addresses and 150 websites. I need a high-availability solution, so I am thinking of setting up AWS, but ELB will not be a solution, and the maximum number of IPs I can get per instance is 12. So what can I do to have all 150 sites hosted on one instance and be HA, with an instance in a different availability zone?
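
    Depending on the clients that must be supported, TLS Server Name Indication (SNI) may remove the one-IP-per-certificate constraint entirely: with SNI, many certificates can share a single address, the main casualties being very old clients (notably IE on Windows XP) and older web servers (IIS only added SNI support in IIS 8.0 on Windows Server 2012). A small sketch for checking which certificate a vhost presents over SNI, assuming Python 3 and placeholder hostnames:

        import socket
        import ssl

        def cert_subject(hostname, port=443):
            """Connect with SNI (server_hostname) and return the certificate subject."""
            ctx = ssl.create_default_context()
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                    return dict(rdn[0] for rdn in tls.getpeercert()["subject"])

        # hypothetical sites that would share one IP address under SNI
        for name in ("shop.example.com", "blog.example.org"):
            print(name, cert_subject(name).get("commonName"))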

    Read the article

  • High Availability Clustering and Virtualization

    - by tmcallaghan
    I'm trying to understand how the various virtualization vendors (specifically Amazon EC2, but also VMware and Xen) enable software vendors to provide a real HA solution in an environment where the servers are virtualized. Specifically, if I'm running any HA application (Exchange, databases, etc.) I need to ensure that my redundant virtual "servers" aren't located on the same physical server. Using in-house virtualization solutions (VMware, Xen, etc.) I can provision accordingly, as well as check the virtual-to-physical arrangement. I could, however, accidentally "vMotion" to the same physical hardware. With EC2, I don't even have the ability at provisioning time to select different physical servers. Since their Cluster Compute Instances are one virtual server per physical server, that seems to be the only way to guarantee I don't have a false sense of redundancy. Any ideas or thoughts would be helpful. What are others doing about this problem? If the vendors provided an API where I could get something as simple as a unique physical-system identifier, I could at least know whether I'm going to have an issue. -Tim
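
    For EC2 specifically, one option that arrived after this question was asked is the spread placement group: instances in such a group are placed on distinct underlying hardware, which is the physical-separation guarantee described above. A rough sketch using boto3, with a placeholder region, AMI and instance type:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

        # Instances in a "spread" placement group land on distinct underlying hardware.
        ec2.create_placement_group(GroupName="ha-spread", Strategy="spread")

        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI
            InstanceType="m5.large",           # placeholder instance type
            MinCount=2,
            MaxCount=2,
            Placement={"GroupName": "ha-spread"},
        )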

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads / file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (France), with different-sized HDDs (750GB to 4TB). Currently I share the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.
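
    One way to decide which servers hold which file, without keeping a central index, is rendezvous (highest-random-weight) hashing: every machine computes the same per-file ranking of servers, so uploaders and downloaders agree on the replica set, and losing a server only remaps that server's share of files. A minimal sketch with placeholder hostnames; weighting the scores by each server's capacity would be the natural refinement for a fleet with mixed 750GB-4TB disks:

        import hashlib

        SERVERS = ["fs1.example.net", "fs2.example.net", "fs3.example.net",
                   "fs4.example.net", "fs5.example.net", "fs6.example.net",
                   "fs7.example.net"]  # placeholder hostnames

        def replica_servers(filename, copies=2):
            """Rank servers by a per-(server, file) hash; keep the top `copies`."""
            def score(server):
                return hashlib.sha1(f"{server}|{filename}".encode()).hexdigest()
            return sorted(SERVERS, key=score, reverse=True)[:copies]

        print(replica_servers("video-0421.mp4"))                 # same answer on every machine
        print(replica_servers("archive-2012.tar.gz", copies=3))  # popular files can get more copies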

    Read the article

  • Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

    - by FullOfQuestions
    Hi, We are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an Active/Passive cluster). All servers are running Windows Server 2008 R2. As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for high availability and load balancing. Let's say I create two different BizTalk hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one for each of my two physical servers). After reviewing the BizTalk documentation, this is what I know so far:
    - For FILE (Receive), high availability and load balancing will be achieved by BizTalk automatically, because I set up a host instance on each of the two servers in the group.
    - MSMQ (Receive) requires BizTalk host clustering to ensure high availability (host clustering, however, requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here.
    - SOAP (Receive) requires NLB to achieve load balancing and high availability (if one server goes down, NLB will direct traffic to the other).
    This is where I'm completely puzzled and I desperately need your help: Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers? If yes, then please tell me how. If no, then please explain to me how anyone is achieving high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive! Your help is greatly appreciated, M

    Read the article

  • Difference between all servers in one cluster and more than one cluster with servers?

    - by silla
    Not sure I understand what the difference is, or how it works, when servers are running in one cluster versus when there is more than one cluster with servers in it - with regard to high availability & load balancing. To me they are somehow the same; there is not really a big difference. Let's make a simple example:
    1. 2 servers in 1 cluster
    2. 2 clusters with 1 server each
    Regarding 1: if one server fails, the other one is able to continue the work. The same goes for load balancing: these two servers are able to balance the work between them. Regarding 2: the same thing! If one server fails... The only thing that could be a problem with point 1 is if the cluster itself fails (then both of the servers are dead). But is that even possible? I was reading stuff about clustering and high availability, but I think I do not really get this. Probably I did not really understand how a cluster works. Are these 2 points with 1 cluster and 2 clusters somehow the same, or are there really some big differences? What should I know about it? Thank you

    Read the article

  • Heartbeat (Linux HA) and NetApp?

    - by Drew
    Does anyone have any experience setting up a high-availability two-node Linux cluster using Heartbeat (linux-ha.org) and NetApp storage (preferably using SnapDrive for Linux)? Basically I would like to mount the same NetApp LUN over Fibre Channel to two servers in an Active/Passive mode (only one server can access the LUN at a time). Thanks!

    Read the article

  • Announcing: General Availability of Demantra 7.3.1.4!

    - by user702295
    Announcing: General Availability of Demantra 7.3.1.4! This new release brings important usability upgrades and key requested customer enhancements. Key features released in Demantra 7.3.1.4:
    - Improved user interface
    - Improved mobile support
    - Embed Demantra-Anywhere in Advanced Planning Command Center
    - Aggregate work orders for Asset Intensive Planning
    Additionally: Demantra 7.3.1.4 is certified with VCP 12.1.3.8 only. Availability via patch 14405087.

    Read the article

  • Network load balancing: efficiency and limits?

    - by Vimvq1987
    I'm about to study NLB on Windows Server 2003. It covers both of my current interests: scalability and high availability. But I don't know about its power in a production environment. Is NLB an efficient solution? How is it implemented in the real world? Is it popular? What are its limits? Thank you so much for answering my questions. :)

    Read the article

  • About High Availability

    - by Invincible
    I know this is not the perfect place to ask this question. Can anybody suggest some good resources or books on high availability? I want to learn as much as I can about high availability before I start building my system. Thanks in advance!

    Read the article

  • Seven Worlds will collide…. High Availability BI is not such a Distant Sun.

    - by Testas
    Over the last 5 years I have observed Microsoft persevere with the notion of Self Service BI over a series of conferences as far back as SQLBits V in Newport. The release of SQL Server 2012, improvements in Excel and the integration with SharePoint 2010 are making this a reality. Business users are now empowered to create their own BI reports through a number of different technologies such as PowerPivot, PowerView and Report Builder. This opens up a whole new way of working: improving staff productivity, promoting efficient decision making and delivering timely business reports. There is, however, a serious question to answer. What happens should any of these applications become unavailable? More to the point, how would the business react should key business users be unable to fulfil reporting requests for key management meetings when they require it? While the introduction of self-service BI will provide instant access to the creation of management information reports, it will also cause instant support calls should access to the data become unavailable. These are questions that are often overlooked when a business evaluates the need for self-service BI. But as I have written in other blog posts, the thirst for information is unquenchable once the business users have access to the data. When they are unable to access the information, you will be the first to know about it and will be expected to have a resolution to the downtime as soon as possible. The world of self-service BI is pushing reporting and analytical databases to the tier 1 application level for some of Coeo's customers: a level that is traditionally associated with mission-critical OLTP environments. There is recognition that by making BI readily available to the business user, provisions also need to be made to ensure that the solution is highly available, so that there is minimal disruption to the business. This is where High Availability BI infrastructures provide a solution. As there is a convergence of technologies to support a self-service BI culture, there is also a convergence of technologies that need to be understood in order to provide the high availability architecture required to support the self-service BI infrastructure. While you may not be the individual that implements these components, understanding the concepts behind them will empower you to have meaningful discussions with the right people should you put this infrastructure in place. There are 7 worlds that you will have to understand to successfully implement a highly available BI infrastructure:
    1. Server/virtualised server hardware and software
    2. DNS
    3. Network Load Balancing
    4. Active Directory
    5. Kerberos
    6. SharePoint
    7. SQL Server
    I have found myself over the last 6 months reaching out to knowledge that I learnt years ago when I studied for the Windows 2000 and 2003 Microsoft Certified Systems Engineer (MCSE) certifications. (To the point that I am resuming my studies for the Windows Server 2008 equivalent, to be up to date with newer technologies.) This knowledge has proved very useful in the numerous engagements I have undertaken since being at Coeo, particularly when dealing with High Availability Infrastructures. As a result of running my session at SQLBits X and SQL Saturday in Dublin, the feedback I have received has been that many individuals desire to understand more of the concepts behind the first 6 "worlds" in the list above.
    Over the coming weeks, a series of blog posts will be put on this site to help you understand the key concepts of each area as it pertains to a High Availability BI Infrastructure. Each post will not provide exhaustive coverage of its topic. For example, DNS could be a book in its own right when you consider that there are so many different configuration options: forward lookups, reverse lookups, AD-integrated zones and DNS forwarders, to name some examples. What I want to do is share the pertinent points as they pertain to the BI infrastructure that you build, so that you are equipped with the knowledge to have the right discussion when planning this infrastructure. Next, we will focus on the server infrastructure that will be required to support the High Availability BI Infrastructure, from both a physical box and a virtualised perspective. Thanks, Chris

    Read the article

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like to have high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without a reinstallation or downtime. I would like to have a DBMS that is easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups and monitor the use of resources. It doesn't have to be a relational database system, so NoSQL is okay. And I would like to have a free version so I can test it at small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and that you are able and willing to help :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which means, in the case of a web server, that LAMP, storage, etc. all live on that one machine. This was very simple, and in case of a failure (in recent years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN, changing the read-write permissions for both VMs. Is this a good idea? Does anyone have a better solution?

    Read the article

  • SonicWall HA "gotchas"?

    - by Mark Henderson
    We're looking to move away from PFSense and CARP to a pair of SonicWall NSA 2400s [1] configured in Active/Passive for High Availability. I've never dealt with SonicWall before, so is there anything I should know that their sales guy won't tell me? I'm aware that they had an issue with a lot of their devices shutting down connectivity because of a licensing fault, and that they have an overly complex management GUI (on the older devices at least), but are there any other big "gotchas" that I need to be aware of before committing a not insubstantial amount of money towards these devices? [1] If you're outside the US, the SonicWall global sites suck balls. Use the US site for all your product research, and then use your local site when you're after local information.

    Read the article

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? On an application level, what the system needs to provide is both an incoming and an outgoing mail service (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at this later). What is required on top of this is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such I would term this a High Availability mail system, in contrast to a High Performance mail setup; in our case the volume of mail being handled isn't the important factor, it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year they had one server hosted at one company. We recently acquired another server in a different location, with the hope of one day making this a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup, and rsync to sync the scripts and files), however I am at a standstill over how to configure the failover. Ideally I would like the two machines to accept requests, like round-robin DNS; however, if one machine goes down I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location, but these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks

    Read the article

  • What happens if an OpenStack cloud controller dies?

    - by magu
    I've been reading up on OpenStack and how we can re-create an EC2/S3-style cloud for our internal development, and I'm having a hard time finding information on how the OpenStack cloud controller provides redundancy of the cloud management services. I know I can set up multiple Swift and Nova nodes, but not a single document/article/howto/wiki contains information on: a) what happens if the cloud controller node dies; and b) how to set up redundant cloud controllers. It seems to me that, although it is massively scalable, there is a big single point of failure built into OpenStack. Can anyone with more experience of OpenStack please shed some light on how it all works with regard to high availability?

    Read the article

  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage with our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows Server combines into one HA disk store. QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris and am presented with both drives being served by the StarWind NAS. So, how do I configure Solaris so that disks M1 and M2 are treated as a single fault-tolerant drive?

    Read the article

  • How to set up a simple Ubuntu Server Tomcat cluster on VirtualBox for testing?

    - by Alex Pakka
    I am looking for step-by-step instructions to set up at least two (and later more) simple Ubuntu Virtual Core 12.10 Server VMs on Oracle VirtualBox under Windows 7 64-bit. The test setup would be:
    - An Apache HTTP Server on the Windows host acting as a load balancer. The result will be that going to http://localhost:8080 balances between the two nodes and proves session replication.
    - Two lean, small-footprint Ubuntu Server guest nodes with Java 7 and Tomcat 7.
    The intention is to help everyone doing High Availability / Load Balancing development and testing to create a reasonable environment on a local workstation or mainstream notebook in as little time as possible.
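
    Once the balancer and both nodes are up, one way to prove session replication is to hold a single HTTP session open, hit the balancer repeatedly, and watch which node answers: if each Tomcat's Engine has a distinct jvmRoute, that route appears as a suffix on the JSESSIONID cookie, and the session should survive stopping either node mid-loop. A rough sketch, assuming Python with the requests library and a hypothetical test application behind the balancer:

        import requests

        BALANCER_URL = "http://localhost:8080/session-test/"  # hypothetical test app

        session = requests.Session()  # keeps the JSESSIONID cookie across requests
        for i in range(10):
            resp = session.get(BALANCER_URL, timeout=5)
            sid = session.cookies.get("JSESSIONID", "")
            # With jvmRoute="node1"/"node2" on each Tomcat, the session id looks like
            # "ABC...DEF.node1"; the suffix shows which node served the request.
            route = sid.rsplit(".", 1)[-1] if "." in sid else "(no jvmRoute)"
            print(f"request {i}: HTTP {resp.status_code}, served by {route}")

    Stop one Tomcat while the loop runs; with working replication the route changes but any attributes the test page stored in the session carry on.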

    Read the article

  • Can I use Veritas Storage Manager to provide HA storage using server-local storage?

    - by Paul
    I have a need to provide a high-availability FTP/HTTP file repository. Uploads will happen to one server, but the uploaded file must be immediately visible on all other servers. I can handle the failover of the servers themselves using load balancers, but in the event of failure of one server, the other servers must see the same contents of the repository. Normally I'd use a SAN for this, but in this case the data centre standards do not allow SAN/external storage - all storage will be local to the servers. Can I use Veritas Storage Manager (or any other product) to manage mirroring the contents between servers in this way? Or does that require a SAN? I couldn't tell either way from a quick look at the data sheets etc.

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, and then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy, so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

    Read the article

  • Avoiding DNS timeouts when a DNS server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions:
    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.
    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is more aimed at service discovery and auto-configuration rather than regular DNS resolving. Am I missing an obvious solution?
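
    For services whose code can be changed, there is a further workaround in the spirit of option 3, but inside the application: iterate over the nameservers with a short per-server timeout, so a dead server costs at most a second or so instead of stalling every lookup. A rough sketch using dnspython (resolve() needs dnspython 2.x; older releases use query()); the addresses are placeholders for the three internal BIND servers:

        import dns.resolver  # dnspython

        NAMESERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # placeholder BIND server IPs

        def resolve_with_failover(name, rtype="A"):
            """Try each nameserver in turn instead of trusting the stock resolver."""
            for ns in NAMESERVERS:
                resolver = dns.resolver.Resolver(configure=False)
                resolver.nameservers = [ns]
                resolver.timeout = 1.0    # per-query timeout against this server
                resolver.lifetime = 1.5   # total time allowed against this server
                try:
                    return [rr.to_text() for rr in resolver.resolve(name, rtype)]
                except Exception:
                    continue              # dead or slow server: move on to the next one
            raise RuntimeError(f"all nameservers failed for {name}")

        print(resolve_with_failover("intranet.example.com"))  # placeholder name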

    Read the article

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario: I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters - let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now, I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com). Then I configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover setups. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that a small portion of users won't reach serverB anyway. And I'm OK with that. But what are the other downsides of such an approach? An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP as the website's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one, so some small portion of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell me what percentage of clients would use serverB instead of serverA (statistically)? This scenario also has the downside that when serverA comes back up it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could fail in the meantime, etc.). So I'm adding it only as a theoretical alternative. I was thinking about using one of the professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?)
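
    Whichever variant is chosen, during a failover test it helps to see what each of the two authoritative nameservers is actually handing out, since resolvers may ask either one and both must agree on the active IP. A small sketch using dnspython, with placeholder addresses for ns1 and ns2:

        import dns.resolver  # dnspython

        AUTH_NS = {
            "ns1.website.com": "203.0.113.10",   # placeholder IP on serverA
            "ns2.website.com": "198.51.100.10",  # placeholder IP on serverB
        }

        def current_answers(name="website.com"):
            """Ask each authoritative server directly; both should return the active IP."""
            answers = {}
            for label, ip in AUTH_NS.items():
                resolver = dns.resolver.Resolver(configure=False)
                resolver.nameservers = [ip]
                resolver.timeout = resolver.lifetime = 2.0
                try:
                    answers[label] = sorted(rr.to_text() for rr in resolver.resolve(name, "A"))
                except Exception as exc:
                    answers[label] = f"error: {exc}"
            return answers

        print(current_answers())  # e.g. both entries list serverA's IP before a failover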

    Read the article
