Search Results

Search found 4461 results on 179 pages for 'availability groups'.


  • OpenLDAP, groups, admin groups, etc.

    - by Juan Diego
    We have a Samba server as PDC with OpenLDAP. So far everything is working; even Windows 7 can log on to the domain. Here is the tricky part: we have many departments, each department has its own IT guys, and those IT guys should be able to create users in their department and change any info of the users in their department. My idea was to create two groups for each department, for example Department1 and Admins Department1, where Admins Department1 has write privileges over the members of group Department1:

        dn: ou=People,dc=mydomain,dc=com,dc=ec
        objectClass: top
        objectClass: organizationalUnit
        ou: People

        dn: cn=Admins,ou=Group,dc=mydomain,dc=com,dc=ec
        objectClass: groupOfNames
        objectClass: top
        cn: Admins

        dn: cn=Admins Department1,cn=Admins,ou=Group,dc=mydomain,dc=com,dc=ec
        objectClass: groupOfNames
        objectClass: top
        cn: Admins Department1
        member: uid=jdc,ou=People,dc=mydomain,dc=com,dc=ec
        structuralObjectClass: groupOfNames

    I don't know if you should make Department1 part of Domain Users:

        dn: cn=Deparment1,cn=Domain Users,ou=Group,dc=mydomain,dc=com,dc=ec
        objectClass: groupOfNames
        objectClass: top
        cn: Deparment1
        member: uid=user1,ou=People,dc=mydomain,dc=com,dc=ec

    Or just create the departments like this:

        dn: cn=Deparment1,ou=Group,dc=mydomain,dc=com,dc=ec
        objectClass: groupOfNames
        objectClass: top
        cn: Deparment1
        member: uid=user1,ou=People,dc=mydomain,dc=com,dc=ec

    It seems that when you use the smbldap tools, by default the users are part of Domain Users even if you don't list them in the memberUid attribute; when I use finger they show up as part of the Domain Users group. I don't want the department admins to be Domain Admins, because then they would have power over all the users, unless I am mistaken. I also have trouble with the ACLs. I was trying to create an ACL for members of this Admins group, and was testing with this search, but it didn't work:

        ldapsearch -x "(&(objectClass=organizationalPerson)(member=cn=Admins Department1,ou=Group,dc=mydomain,dc=com,dc=ec))"

    I am open to suggestions.
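
    Two observations, hedged. First, the ldapsearch above filters person entries for a member attribute, but member lives on the groupOfNames entry, not on the people, so the search matches nothing by construction. Second, per-department write access is usually expressed as a slapd access rule keyed on the admins group. A minimal sketch for slapd.conf, assuming each department's users are moved under their own OU (if they stay flat under ou=People, a filter-based "access to ... filter=" clause would be needed instead):

        # Let members of the Admins Department1 group manage Department1's people.
        # DNs follow the tree shown above; the per-department OU is an assumption.
        access to dn.subtree="ou=Department1,ou=People,dc=mydomain,dc=com,dc=ec"
            by group.exact="cn=Admins Department1,cn=Admins,ou=Group,dc=mydomain,dc=com,dc=ec" write
            by self write
            by * read

    One such rule per department keeps the department admins out of Domain Admins entirely.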

    Read the article

  • Android 2.0 contact groups manipulation

    - by Bao Le
    I want to manipulate the contact groups in Android 2.0. My code is the following. To get a list of groups (with id and title):

        final String[] GROUP_PROJECTION = new String[] {
            ContactsContract.Groups._ID,
            ContactsContract.Groups.TITLE };
        Cursor cursor = ctx.managedQuery(ContactsContract.Groups.CONTENT_URI,
            GROUP_PROJECTION, null, null,
            ContactsContract.Groups.TITLE + " ASC");

    Later, in a ListView, I select a group (onClick event) and read all contacts belonging to the selected group with the following code:

        String where = ContactsContract.CommonDataKinds.GroupMembership.GROUP_ROW_ID
            + "=" + groupid + " AND "
            + ContactsContract.CommonDataKinds.GroupMembership.MIMETYPE
            + "='" + ContactsContract.CommonDataKinds.GroupMembership.CONTENT_ITEM_TYPE + "'";

    Problem: ContactsContract.Groups._ID from the first query does not match ContactsContract.CommonDataKinds.GroupMembership.GROUP_ROW_ID in the second query. Any solution/suggestion? Thanks
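
    For reference, GROUP_ROW_ID is meant to reference Groups._ID, but each sync account keeps its own copy of a group, so the Groups query can return several rows per visible title. A minimal member-query sketch against the Data table (groupid taken from the Groups._ID column; the projected DISPLAY_NAME column is just an example):

        // Rows in the Data table whose GroupMembership row points at groupid.
        Cursor members = ctx.getContentResolver().query(
            ContactsContract.Data.CONTENT_URI,
            new String[] { ContactsContract.Data.CONTACT_ID,
                           ContactsContract.Data.DISPLAY_NAME },
            ContactsContract.CommonDataKinds.GroupMembership.GROUP_ROW_ID + "=? AND "
                + ContactsContract.Data.MIMETYPE + "=?",
            new String[] { String.valueOf(groupid),
                           ContactsContract.CommonDataKinds.GroupMembership.CONTENT_ITEM_TYPE },
            null);

    If the ids still disagree, it may help to filter the first query with ContactsContract.Groups.DELETED + "=0" and to group the results by account.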

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small user base there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror that server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File-system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are: HA protection from storage failure; data accessibility while one of the DB servers is undergoing software updates (e.g. a planned outage for Windows Update / SQL Server service packs); and it must not involve much in the way of hardware procurement.

    The application is an ASP.NET web application, and its users have their own database instances. I've seen two main options: SQL Server failover clustering and SQL Server mirroring. I understand that SQL Server failover clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option (as it only requires two database servers and a simple witness box), but I've heard it doesn't work well when you have a large number of databases. The application I'm developing gives each client their own database - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place.

    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means the cluster is invisible to clients: a connection attempt might fail during the failover, but a simple reconnect will have it working again. Mirroring, however, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications - does the client-side failover mean there's a potential 2-second (assuming a 2000 ms TCP timeout policy) pause while the connection pool tries the primary server on every connection attempt?

    I read somewhere that mirroring can be used on top of MSCS, which would mean the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, the best method would be mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
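
    On the client-awareness point: ADO.NET lets a connection string name both partners up front via the Failover Partner keyword, so in the plain-mirroring case the only client change is the connection string itself. A sketch (server and database names are placeholders):

        Server=sqlnode1;Failover Partner=sqlnode2;Database=ClientDb042;Integrated Security=True;

    The provider caches the partner name once it has connected, so after a failover new connections go straight to the surviving partner; whether stale pooled connections each fail once before the pool recovers is the part worth load-testing.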

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on... these questions keep coming up, and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the two primary foundations of cloud computing: elasticity and availability.

    Elasticity

    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security...). It also implies that application components can grow independently of each other: if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring, or changing the other tiers. Basically, cloud applications behave like a sponge: when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you make your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider or a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough: the application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from this description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability

    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff, and on top of that it's expensive. Many companies are required to run multiple data centers due to high-availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover; other organizations achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex - in fact, so complex that many companies hold yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster-recovery planning and with setting up data centers that can achieve successful failover; however, the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster-recovery requirements by hiding many of the underlying complexities.

    Cloud Providers

    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which lets you scale the database tier seamlessly and automatically). Other cloud providers offer features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers; some will give you more control (which puts part of that burden back on the customer's shoulders), while others make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone

    Note, however, that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional hosting provider - unless you also need high availability. Leveraging the technologies of cloud providers can be difficult, and can become a journey for companies that built their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale, or you do not need high availability, then cloud computing may not be for you; in fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost

    With all its complexities, buzz, and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to your first, while keeping your operating costs directly proportional to the number of customers you have. Making your application elastic and highly available across all its tiers, with as much automation as possible, achieves the first objective (a consistent customer experience); the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost-containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and on helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • Maximum Availability with Oracle GoldenGate

    - by Irem Radzik
    Oracle Database offers a variety of built-in and optional products for maximum availability, and it is well known for its robust high-availability and disaster-recovery solutions. With its heterogeneous, real-time, transactional data-movement capabilities, Oracle GoldenGate is a key part of the Maximum Availability Architecture for Oracle Database.

    This week, on Thursday Dec. 13th, we will present in a live webcast how Oracle GoldenGate fits into the Oracle Database Maximum Availability Architecture (MAA). Joe Meeks from the Oracle Database High Availability team will discuss how Oracle GoldenGate complements other key products within MAA, such as Active Data Guard. Nick Wagner from the GoldenGate PM team will present how to upgrade to the latest Oracle Database release without any downtime. Nick will also cover two new features of Oracle GoldenGate 11gR2 - Integrated Capture for Oracle Database and Automated Conflict Detection and Resolution - and will provide an in-depth review of them with examples.

    Oracle GoldenGate also offers maximum availability for non-Oracle databases, such as HP NonStop, SQL Server, DB2 (LUW, iSeries, or zSeries), and more. The same robust, reliable, real-time, bidirectional data-movement capabilities apply to all supported databases.

    I'd like to invite you to join us on Thursday Dec. 13th, 10am PT / 1pm ET, to hear from the product experts how to use GoldenGate to maximize database availability, and to ask your questions. You can find the registration link below.

    Webcast: Maximum Availability with Oracle GoldenGate
    Thursday Dec. 13th, 10am PT / 1pm ET

    I look forward to another great webcast with lots of interaction with the audience.

    Read the article

  • When tab groups are loaded, Firefox becomes unresponsive for minutes (Unresponsive script)

    - by unor
    I have several tab groups (~20) in Firefox. I can start the browser without any problems. However, as soon as I either click the "Group tabs" icon in the toolbar or right-click on a tab and hover over "Move to tab group", Firefox becomes unresponsive/freezes for a rather long time (more than 2 minutes). It seems to load all tab groups (it doesn't load all the pages! I deactivated that in the settings). While this is happening, I get several "Unresponsive script" warnings, like:

        Script: chrome://global/content/bindings/tabbox.xml:0 (most of the time)
        Script: chrome://global/content/bindings/tabbox.xml:418
        Script: chrome://browser/content/tabview.js:400
        Script: chrome://browser/content/tabview.js:522
        Script: resource://modules/sessionstore/SessionStore.jsm:3578
        Script: resource:///components/PageThumbsProtocol.js:79 (rare)
        Script: resource://gre/modules/XPCOMUtils.jsm:323 (rare)

    (There are probably other warnings too; I haven't recorded them yet.) On all of these I click "Continue". After ~2-3 minutes and 3-5 warnings, I can use Firefox again, and can then switch tab groups without any problems. Why is this happening? How can I prevent the long loading time? Is there maybe an about:config setting I could try? I started Firefox in Safe Mode (= without any add-ons): the problem still exists.
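
    Not a fix for the underlying slowness, but the threshold behind those dialogs is tunable; a sketch of the relevant about:config prefs (values are seconds; exact defaults vary by release):

        // raise the script-runtime ceilings so the dialogs stop interrupting
        // the long tab-group load (the chrome:// warnings use the first pref)
        dom.max_chrome_script_run_time = 120
        dom.max_script_run_time = 60

    Raising them only silences the warnings while tabview.js churns through the groups; the load itself takes just as long.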

    Read the article

  • Cross-forest universal groups on Windows Server?

    - by DotGeorge
    I would like to create a universal group whose members are a mix of cross-forest users and groups. In the following example, two forests are mentioned (US and UK) and two domains in each forest (GeneralStaff and Java). For example, the universalDevelopers group may comprise members of UK.Java.Developers and US.Java.Developers; likewise, there may be a universalSales group containing the users UK.GeneralStaff.John and US.GeneralStaff.Dave.

    In the UK forest at the minute, I can freely add members and groups from the UK, but there is no way to add members from the US forest, despite having a two-way trust in place - e.g. I can log in with US accounts in the UK and vice versa. A further complication is that, with a universal group in the UK forest (which contains three domains), I can only add from two of the three; it can't see the third.

    Could people please provide some thoughts on why cross-forest groups can't be created, and on ways of 'seeing' all domains within a forest.

    EDIT: This is on a combination of Windows Server 2003 and 2008. Answers can be regarding either. Thanks!

    Read the article

  • Looking for a recommendation on measuring a high-availability app that uses a CDN

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor both scheduled and unscheduled downtime into this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability: if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability, and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers. I look at apple.com, for instance, as a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics - we are making business decisions based on these numbers.

    I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts?

    (I mistakenly posted this question on StackOverflow; sorry in advance for the cross-post.)
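
    One alternative to the worst-case rule, offered as a sketch rather than an industry standard: weight each serving path by the share of traffic it carries, which matches the "true user experience" framing. With assumed numbers - CDN at 99.9%, origin at 99.0%, the 75/25 split above - the blended figure works out as:

        blended availability = 0.75 * 0.999 + 0.25 * 0.990
                             = 0.74925 + 0.24750
                             = 0.99675   (~99.68%)

    The worst-case rule would score the same period as min(99.9%, 99.0%) = 99.0%, so the two methods disagree noticeably whenever only one path degrades.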

    Read the article

  • LDAP query on linux against AD returns groups with no members

    - by SethG
    I am using LDAP + Kerberos to authenticate against Active Directory on Windows 2003 R2. My krb5.conf and ldap.conf appear to be correct (according to pretty much every sample I found on the 'net). I can log in to the host with both password and ssh keys. When I run getent passwd, all my LDAP user accounts are listed with all the important attributes. When I run getent group, all the LDAP groups and their gids are listed, but no group members. If I run ldapsearch and filter on any group, the members are all listed in the "member" attribute. So the data is there for the taking; it's just not being parsed properly. It would appear that I am simply using an incorrect mapping in ldap.conf, but I can't see it. I've tried several variations and all give the same result. Here is my current ldap.conf:

        host <ad-host1-ip> <ad-host2-ip>
        base dc=my,dc=full,dc=dn
        uri ldap://<ad-host1> ldap://<ad-host2>
        ldap_version 3
        binddn <mybinddn>
        bindpw <mybindpw>
        scope sub
        bind_policy hard
        nss_reconnect_tries 3
        nss_reconnect_sleeptime 1
        nss_reconnect_maxsleeptime 8
        nss_reconnect_maxconntries 3
        nss_map_objectclass posixAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute gidNumber msSFU30GidNumber
        nss_map_attribute uidNumber msSFU30UidNumber
        nss_map_attribute cn cn
        nss_map_attribute gecos displayName
        nss_map_attribute homeDirectory msSFU30HomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        nss_map_attribute uniqueMember member
        pam_filter objectcategory=User
        pam_login_attribute sAMAccountName
        pam_member_attribute member
        pam_password ad

    Here's the kicker: this config works 100% fine on a different Linux box with a different distro. It does not work on the distro I am planning to switch to. I have installed from source the versions of pam_ldap and nss_ldap on the new box to match the old box, which fixed another problem I was having with this setup. Other relevant info: the original AD box was Windows 2003. Its mirror died a horrible hardware death, so I'm trying to add two more 2003 R2 servers to the mirror tree and ultimately drop the old 2003 box. The new R2 boxes appear to have joined the DC forest properly. What do I need to do to get groups working? I've exhausted all the resources I could find and need a different angle. Any input is appreciated.

    Status update, 7/31/09

    I have managed to tweak my config file to get full info from the AD, and performance is nice and snappy. I replaced the back-rev'd copies of pam_ldap and nss_ldap with the current ones for the distro I'm using, so it's back to a standard out-of-the-box install. Here's my current config:

        host <ad-host1-ip> <ad-host2-ip>
        base dc=my,dc=full,dc=dn
        uri ldap://<ad-host1> ldap://<ad-host2>
        ldap_version 3
        binddn <mybinddn>
        bindpw <mybindpw>
        scope sub
        bind_policy soft
        nss_reconnect_tries 3
        nss_reconnect_sleeptime 1
        nss_reconnect_maxsleeptime 8
        nss_reconnect_maxconntries 3
        nss_connect_policy oneshot
        referrals no
        nss_map_objectclass posixAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute gidNumber msSFU30GidNumber
        nss_map_attribute uidNumber msSFU30UidNumber
        nss_map_attribute cn cn
        nss_map_attribute gecos displayName
        nss_map_attribute homeDirectory msSFU30HomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell
        nss_map_attribute uniqueMember member
        pam_filter objectcategory=CN=Person,CN=Schema,CN=Configuration,DC=w2k,DC=cis,DC=ksu,DC=edu
        pam_login_attribute sAMAccountName
        pam_member_attribute member
        pam_password ad
        ssl off
        tls_checkpeer no
        sasl_secprops maxssf=0

    The remaining problem is that when you run the groups command, not all subscribed groups are listed. Some are (one or two), but not all. Group memberships are still honored, such as file and printer access, and getent group foo still shows that the user is a member of group foo, so it appears to be a presentation bug that does not interfere with normal operation. It also appears that some group searches (I have not determined exactly how many) do not resolve correctly, even though the group is listed. E.g., when you run "getent group bar", nothing is returned, but if you run "getent group | grep bar" or "getent group | grep <bar_gid>" you can see that it is indeed listed and that your group name and gid are correct. This still seems like an LDAP search or mapping error, but I can't figure out what it is. I'm a heckuva lot closer than earlier in the week, but I'd really like to get this last detail ironed out.

    Read the article

  • Add multiple @groups to valid users

    - by skids89
    In smb.conf I have the line:

        valid users = @Staff @Directors

    Is this valid syntax for adding two groups to the valid users line? It does not seem to work right on our XP Pro clients. If not, which of the following (if any) is the proper way to make two groups valid users of this network drive, and which is proper for Windows clients?

        valid users = +Staff +Directors

    Or do I need to use the &?

        valid users = &Staff &Directors

    Or some combo of the two?

        valid users = &+Staff &+Directors
        valid users = +&Staff +&Directors
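
    For reference, the smb.conf man page's rules are roughly: a space-separated list is itself legal; +name looks the name up as a UNIX group only, &name as an NIS netgroup only, and @name tries the netgroup first and then the UNIX group (the +& and &+ forms just fix the lookup order). So on a box without NIS, +Staff +Directors is the most precise spelling. A minimal share sketch (share name and path are assumptions):

        [shared]
            path = /srv/shared
            ; two UNIX groups; use @ only if NIS netgroups are in play
            valid users = +Staff +Directors
            writable = yes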

    Read the article

  • Linux - Debian - Original groups that a user is in

    - by Auxiliary
    This is embarrassing: I used the usermod command to add myself to the disk group, but I forgot to use the append option! So I'm no longer a member of any of the groups I was originally in, and now the terminal says I'm not even a sudoer (how rude!). So the question is: what are the original groups that a normal administrator-type user is in? BTW, I am using Linux Mint Debian Edition (LMDE). Any help is appreciated.
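
    A recovery sketch, assuming a stock Debian-style desktop install; the exact group list varies by release, so cross-check against /etc/group before running it. Since sudo is gone, this has to run from a root shell (e.g. recovery mode or su):

        # Restore the usual default memberships for the first desktop user.
        # 'sudo' is the group that brings back admin rights on Debian/LMDE;
        # 'youruser' and the rest of the list are assumptions to adjust.
        usermod -aG sudo,adm,cdrom,audio,dip,video,plugdev,netdev youruser

    The -a flag is the append option that was missed: without it, -G replaces the entire supplementary-group list instead of adding to it.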

    Read the article

  • Using Active Directory Security Groups as Hierarchical Tags

    - by Nathan Hartley
    Because Active Directory security groups can:

        - hold objects regardless of OU,
        - be used for reporting, documentation, inventory, etc.,
        - be referenced by automated processes (Get-QADGroupMember),
        - be used to apply policy, and
        - be used by WSUS,

    I would like to use security groups as hierarchical tags representing various attributes of a computer or user. I am thinking of (computer-centric) tags something like these:

        /tag/vendor/vendorName
        /tag/system/overallSystemName
        /tag/application/vendorsApplicationName
        /tag/dependantOn/computerName
        /tag/department/departmentName
        /tag/updates/Group1

    Before fumbling through implementing this, I thought I would seek comments from the community, specifically in these areas:

        - Does this make sense?
        - Would it work?
        - Has anyone else attempted this?
        - Is there a good reference on the matter I should read?
        - How best to implement the hierarchy?
            - Tag_OU\Type_OU\GroupName (limits quantity in OU, uniqueness not guaranteed)
            - Tag_OU\Type_OU\Tag-Type-GroupName (limits quantity in OU, uniqueness guaranteed, verbose)
            - etc ...

    Thanks in advance!
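
    To make the automation point concrete, a small sketch with the Quest cmdlet mentioned above (the Tag-Type-Name group names are assumptions following the second naming option):

        # Computers tagged with a given application that are also in update ring 1:
        $ring1 = Get-QADGroupMember "Tag-Updates-Group1" | ForEach-Object { $_.DN }
        Get-QADGroupMember "Tag-Application-VendorsApplicationName" |
            Where-Object { $ring1 -contains $_.DN }

    Intersections like this are where flat tag groups pay off, since no single OU hierarchy could answer the same question directly.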

    Read the article

  • Fixing Broken Groups

    - by themaestro
    Hey, I just got onto a new project with the student government at my university, and we're trying to get our webserver into a more workable state. The current problem is that for some reason all of us have sudo power on the server, yet we can't write/create files anywhere on the server (as far as we can tell). Our groups are currently as follows:

        /srv/ice/db$ groups goshri sshamim rmenezes
        goshri : goshri
        sshamim : sshamim ptx
        rmenezes : rmenezes ptx
        daifotis : daifotis ptx

    We added a few of us to ptx because we thought that might give us write access, but it didn't. We have a bunch of webapps running on this server, but since it's university, things change hands quickly. What can we do to give ourselves write access?
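
    A sketch of the usual fix for a shared, group-writable tree - assuming /srv/ice (taken from the prompt above) is the tree in question and ptx is meant to be the shared group; note that group changes only take effect after logging back in:

        sudo chgrp -R ptx /srv/ice                         # hand the tree to the ptx group
        sudo chmod -R g+rwX /srv/ice                       # group read/write; X adds execute on dirs only
        sudo find /srv/ice -type d -exec chmod g+s {} +    # setgid: new files inherit group ptx

    Also, per the output above, goshri is not in ptx at all, so that account additionally needs a usermod -aG ptx goshri.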

    Read the article

  • Managing scalability and availability with two servers running Apache Httpd, Apache Mina and MySQL

    - by celalo
    Hello, I am not a developer and I don't have much experience with scalable server architectures, but I am in need of a highly available and scalable system for one of my projects. There are going to be two servers for the time being, both with 4-core CPUs and 8 GB RAM in RAID configurations, running CentOS 5.4. I will also have a feature called "Failover IP" which makes it possible to redirect an IP address to another server within a short time.

    The applications that will run on the servers: a Java application based on the Apache Mina server for handling TCP requests from some hundreds of network devices, where the devices send as much as one request per minute. Handling those requests consists of parsing them and inserting a few rows into the database; parsing a request before inserting the data takes negligible time. There is going to be a MySQL server, as stated above. There is also going to be a PHP web application running on the Apache httpd server which uses the same DB as the Java application.

    What I wish is to make the most of those two servers. I was imagining having the servers identical, sharing the work load - MySQL could be a cluster, maybe? - and if an application fails or a whole machine goes down, the other continues serving requests seamlessly, taking advantage of the "Failover IP" feature. It should also be kept in mind that the number of servers could increase over time to meet demand. What can you suggest? What kinds of tools can I make use of? What monitoring software (paid/unpaid) is available to me? Thanks in advance.

    Read the article

  • VMware Virtual vCenter and High Availability

    - by rufo
    To continue with the question "Should be Vmware vCenter server high available?": according to the answers there, HA will continue to work even if vCenter is down. So, if my vCenter is a VM, using the SQL Express edition inside the same VM, and that VM is hosted in the same cluster it manages (and the cluster is set up for HA): am I correct to assume that if the host running the vCenter VM goes down, HA will restart the vCenter VM on another host and it will continue to function?

    BTW: my environment is small - two ESXi 5.0 hosts with about 50 VMs, using iSCSI shared storage for everything.

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else.

    We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VoIP), and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server while the other locations were reachable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? OK). We had to implement traffic shaping on this server, since maxing out the connection was fairly trivial.

    I had the thought of running two (or more) OpenVPN servers in the network. These would have to be on the same subnet, though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital. For simplicity, let's call the current server office, the second server I'm setting up cloud, and the server on the T1 phone. This proved to be rather complex, because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing), and vice versa. There are no iptables rules that would be blocking the traffic either.

    Recently I came across this article on Linux Journal, but the solution it provides seems to cover only the use of two servers and is somewhat outdated (I can't even find much documentation; their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc.

    So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) while keeping the same IP address subnet regardless of the server to which you connect. Thanks for reading, and sorry for the long post; I hope it gets the point across :P
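
    One piece of this is available out of the box, offered here as a hedged sketch: an OpenVPN client can list several remote lines and will fail over between them on its own (the hostnames below are assumptions). It does not solve sharing one subnet across servers, but it removes the single connection point on the client side:

        # client.conf - try each server in turn, moving on when one is unreachable
        client
        dev tap
        remote office.example.com 1194
        remote cloud.example.com 1194
        remote phone.example.com 1194
        remote-random           # optional: spread clients across the servers
        resolv-retry infinite

    The harder half - making the servers hand out addresses from one shared subnet and route between themselves - is the part the Linux Journal approach was addressing.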

    Read the article

  • High availability for databases (DRBD + GFS)?

    - by EvanAlm
    Does it work to have, say, MySQL (or any other relational database) on GFS (with DRBD) and have multiple nodes reading and writing to it? If so, is that the "best" way of providing an HA database/application setup? Is RHEL Cluster Suite a good way to set this up? (Or CentOS?)

    Read the article

  • SAN/NAS with high availability?

    - by netvope
    I have two servers that I plan to use for storage. Each of them has a few SATA disks directly attached. I want the storage to be available even if one of the storage servers is down (preferably the clients wouldn't even notice the failover, although I'm not sure whether that is possible). The clients may access the storage via NFS and Samba, but this is not a must; I could use something else if needed.

    I found this guide, Installing and Configuring Openfiler with DRBD and Heartbeat, which apparently does what I want. It relies on three components - Openfiler, DRBD, and Heartbeat - and all three need to be configured separately. I'm wondering: are there simpler solutions? Is DRBD + Heartbeat the best practice for a situation like mine? I'm also interested to know whether there are alternatives that don't depend on DRBD.

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have that second server, I wish to do a little more than just have one server on standby replicating all day. As long as it's there, I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5), and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves Heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other takes over the public IP. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been handed to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I'm reading that it's discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their file systems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B; if one server is down, the other could take over the other's tasks simply by IP switching.

    I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know!

    PS: Virtualisation, hosting at different locations, and active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover at another location, but in case of a crash the site was still unreachable for too long due to DNS caching.
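
    On the "discouraged" point: the best-known master-master footgun is colliding auto-increment keys when both nodes accept writes. A my.cnf sketch of the standard interleaving guard (the usual two-node recipe; values are per node):

        # my.cnf on server A
        auto_increment_increment = 2   # step by the number of masters
        auto_increment_offset    = 1   # this node generates 1, 3, 5, ...

        # my.cnf on server B
        auto_increment_increment = 2
        auto_increment_offset    = 2   # this node generates 2, 4, 6, ...

    This avoids duplicate-key collisions on concurrent inserts, though it does nothing for the broader write-conflict cases that drive the community's caution.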

    Read the article

  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    which is packaged with createrepo -g comps.xml x86_64. The ssh and gcc RPMs are not present in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. However, is this the proper way of doing this, or should I copy the RPMs into my local repository and recreate the repo?

    Read the article

  • Google Groups layout problem

    - by mark
    I have this strange problem when browsing Google Groups: the sidebar - the one displaying the various menu options like Home, Discussions, and Pages - is rendered on the left side of the page on top of the group-discussion space. Below is a snapshot taken from Firefox (version 3.5.2). I have experienced the same problem with other browsers several times as well, but Firefox is my main browser and it is really broken there.

    Read the article

  • PHP/MySQL User system w/ groups.

    - by Ben C
    I have a user system set up in a 'users' table, and I have groups set up in a 'groups' table. Essentially, I want users to be able to join any and all of the groups they want, the same way one would on Facebook. How would one go about structuring this in a MySQL/PHP database system? Just a quick summary would be helpful! I've looked around, but I can only find info on creating 'groups' where the users just have a sort of 1, 2, or 3 permission rank. Thanks!
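
    The textbook shape for "any user can join any group" is a many-to-many join table; a sketch assuming the existing tables use integer primary keys named id (backticks because GROUPS later became a reserved word in MySQL 8):

        -- one row per (user, group) membership
        CREATE TABLE user_groups (
            user_id   INT UNSIGNED NOT NULL,
            group_id  INT UNSIGNED NOT NULL,
            joined_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (user_id, group_id),              -- join each group at most once
            FOREIGN KEY (user_id)  REFERENCES users (id),
            FOREIGN KEY (group_id) REFERENCES `groups` (id)
        ) ENGINE=InnoDB;

        -- all groups a given user belongs to
        SELECT g.*
        FROM `groups` g
        JOIN user_groups ug ON ug.group_id = g.id
        WHERE ug.user_id = ?;

    Joining a group is then a single INSERT into user_groups from PHP, and a permission rank, if ever needed, becomes just another column on the join table.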

    Read the article
