Search Results

Search found 4291 results on 172 pages for 'cluster analysis'.


  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview of how to configure a zone cluster on Solaris Cluster 4.0. The procedure differs slightly from Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 runs only on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone clusters in the Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3, please refer to my previous blog, Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3.
    A. Configure the zone cluster into the already running global cluster
    Check whether the zone cluster can be created: # cluster show-netprops. To change the number of zone clusters, use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default; the value can be customized!
    Create a config file (zc1config) for the zone cluster setup (a sketch is shown at the end of this entry).
    Configure the zone cluster: # clzc configure -f zc1config zc1 Note: If not using the config file, the configuration can also be done manually with # clzc configure zc1
    Check the zone configuration: # clzc export zc1
    Verify the zone cluster: # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands: Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"...
    Install the zone cluster: # clzc install zc1 Note: Monitor the consoles of the global zones to see how the install proceeds! (The output is different on the nodes.) It is very important that all global cluster nodes have the same set of ha-cluster packages installed!
    Boot the zone cluster: # clzc boot zc1
    Log in to the non-global zones of zone cluster zc1 on all nodes and finish the Solaris installation: # zlogin -C zc1
    Check the status of the zone cluster: # clzc status zc1
    Log in to the non-global zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man): # zlogin -C zc1
    If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones: hosts: cluster files / netmasks: cluster files
    Configure /etc/inet/hosts of the zone cluster zones: enter all the logical hosts of the non-global zones.
    B. Add resource groups and resources to the zone cluster
    Create a resource group in the zone cluster: # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note 1: Use the command # cluster status for a zone cluster resource group overview. Note 2: You can also run all commands for the zone cluster in the global cluster by adding the option -Z to the command, e.g.: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg
    Set up the logical host resource for the zone cluster. In the global zone do:
    # clzc configure zc1
    clzc:zc1 add net
    clzc:zc1:net set address=<zone-logicalhost-ip>
    clzc:zc1:net end
    clzc:zc1 commit
    clzc:zc1 exit
    Note: Check that the logical host is in the /etc/hosts file.
    In the zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs
    Set up the storage resource for the zone cluster. Register HAStoragePlus: # clrt register SUNW.HAStoragePlus
    Example 1) ZFS storage pool. In the global zone do: configure the zpool, e.g.: # zpool create <zdata> mirror cXtXdX cXtXdX and
    # clzc configure zc1
    clzc:zc1 add dataset
    clzc:zc1:dataset set name=zdata
    clzc:zc1:dataset end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1. In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs
    Example 2) HA filesystem. In the global zone do: configure the SVM diskset and SVM devices, and
    # clzc configure zc1
    clzc:zc1 add fs
    clzc:zc1:fs set dir=/data
    clzc:zc1:fs set special=/dev/md/datads/dsk/d0
    clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0
    clzc:zc1:fs set type=ufs
    clzc:zc1:fs add options [logging]
    clzc:zc1:fs end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1. In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs
    Example 3) Global filesystem as loopback file system. In the global zone, configure the global filesystem and add it to /etc/vfstab on all global nodes, e.g.:
    /dev/md/datads/dsk/d0 /dev/md/datads/rdsk/d0 /global/fs ufs 2 yes global,logging
    and
    # clzc configure zc1
    clzc:zc1 add fs
    clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint)
    clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint)
    clzc:zc1:fs set type=lofs
    clzc:zc1:fs end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1. In the zone cluster do (create a scalable rg if not already done):
    # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg
    # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs
    More details on adding storage are available in the Installation Guide for zone clusters.
    Switch the resource group and resources online in the zone cluster:
    # clrg online -eM app-rg
    # clrg online -eM app-scal-rg
    Test: switch the resource group in the zone cluster:
    # clrg switch -n zonehost2 app-rg
    # clrg switch -n zonehost2 app-scal-rg
    Add supported dataservices to the zone cluster. Documentation for SC4.0 is available here.
    Appendix: To delete a zone cluster do:
    # clrg delete -Z zc1 -F +
    Note: Zone cluster uninstall can only be done if all resource groups are removed from the zone cluster. The command 'clrg delete -F +' can be used in the zone cluster to delete the resource groups recursively.
    # clzc halt zc1
    # clzc uninstall zc1
    Note: If the clzc command does not successfully uninstall the zone, run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured.
    # clzc delete zc1
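    As a footnote to step A: the command file passed to clzc configure -f simply contains the clzc subcommands you would otherwise type interactively. A minimal zc1config sketch might look like the following (the zonepath, node names, hostnames, IP addresses and interface names are illustrative and must match your own global cluster nodes and network setup):

    create
    set zonepath=/zones/zc1
    add node
    set physical-host=phys-node-1
    set hostname=zc1-node-1
    add net
    set address=192.168.10.11
    set physical=net0
    end
    end
    add node
    set physical-host=phys-node-2
    set hostname=zc1-node-2
    add net
    set address=192.168.10.12
    set physical=net0
    end
    end
    commit

    Running # clzc configure -f zc1config zc1 then replays these subcommands non-interactively, which makes the zone cluster definition repeatable across rebuilds.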

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
    At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration. What's new:
    Development Release 1: MySQL Cluster 7.3 with Foreign Keys
    Early Access "Labs" Preview: MySQL Cluster NoSQL API for Node.js
    Early Access "Labs" Preview: MySQL Cluster GUI-Based Auto-Installer
    In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date: a live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600 UTC where we will discuss each of these enhancements in more detail. Registration will open soon and be published to the MySQL webinars page.
    MySQL Cluster 7.3: Development Release 1
    The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc.
    Implementation
    The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from it – whether SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API, discussed later). The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL. In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. This represents a further enhancement to MySQL Cluster's support for online schema changes, i.e. adding and dropping indexes, adding columns, etc. Read this blog for a demonstration of using Foreign Keys with MySQL Cluster (a small DDL sketch also appears at the end of this entry).
    Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab.)
    MySQL Cluster NoSQL API for Node.js
    Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js JavaScript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js. Alternatively, you can clone the project at the MySQL GitHub page. Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive result sets directly from MySQL Cluster, without transformations to SQL.
    Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development
    Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility. Get started with the MySQL Cluster NoSQL API for Node.js tutorial.
    MySQL Cluster GUI-Based Auto-Installer
    Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources.
    Figure 2: Automated Tuning and Configuration of MySQL Cluster
    Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The Auto-Installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster Auto-Installer here. Watch the YouTube video for a demonstration of using the MySQL Cluster Auto-Installer.
    Getting Started with MySQL Cluster
    If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews). Or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster to learn more about its architecture, design and ideal use-cases. Post any questions to the MySQL Cluster forum where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section here or in the blogs referenced in this article.
    Summary
    MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases, by using the comments of this blog.
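    To make the Foreign Key preview concrete, here is a small sketch of a parent/child pair of ndbcluster tables with a referential action (the table and column names are illustrative; the FOREIGN KEY clause is standard MySQL DDL, which the 7.3 DMR applies to the NDB engine):

    mysql> CREATE TABLE customers (
        ->   id INT NOT NULL PRIMARY KEY,
        ->   name VARCHAR(100)
        -> ) ENGINE=ndbcluster;
    mysql> CREATE TABLE orders (
        ->   id INT NOT NULL PRIMARY KEY,
        ->   customer_id INT NOT NULL,
        ->   FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE
        -> ) ENGINE=ndbcluster;

    With this in place, deleting a row from customers cascades to the matching orders rows on the data nodes themselves, so the same integrity guarantee applies whether the delete arrives via SQL or one of the NoSQL interfaces.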

    Read the article

  • Oracle Solaris Cluster 4.2 Event and its SNMP Interface

    - by user12609115
    Background
    The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog. Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it would take effect on WARNING or higher severity events. The events with WARNING or higher severity are usually for the status change of a cluster component from ONLINE to OFFLINE. The interface worked like an alert/alarm interface when some components in the cluster were out of service (changed to OFFLINE). The consumers of this interface could not get notification for all status changes and configuration changes in the cluster.
    Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2
    The user model of the cluster event SNMP interface is the same as what was provided in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster nodes. Usually, you only need to enable it on one of the cluster nodes or a subset of the cluster nodes because all cluster nodes get the same cluster events. When it is enabled, it is responsible for two basic tasks:
    • Logs up to the 100 most recent NOTICE or higher severity events to the MIB.
    • Sends SNMP traps to the hosts that are configured to receive the above events.
    The changes in the Oracle Solaris Cluster 4.2 release are:
    1) Introduction of the NOTICE severity for the cluster configuration and status change events.
    The NOTICE severity is introduced for cluster events in the 4.2 release. It is the severity between the INFO and WARNING severities. Now all severities for the cluster events are (from low to high):
    • INFO (not exposed to the SNMP interface)
    • NOTICE (newly introduced in the 4.2 release)
    • WARNING
    • ERROR
    • CRITICAL
    • FATAL
    In the 4.2 release, the cluster event system is enhanced to make sure at least one event with the NOTICE or a higher severity will be generated when there is a configuration or status change from a cluster component instance. In other words, the cluster events from a cluster with the NOTICE or higher severities will cover all status and configuration changes in the cluster (including all component instances). The cluster component instance here refers to an instance of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group. With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface will get notification for all status and configuration changes in the cluster. A third-party system management platform with the cluster SNMP interface integration can generate alarms and clear alarms programmatically, because it can get notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.
    2) Customization of the cluster event SNMP interface
    • The number of events logged to the MIB is 100. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event will be removed before storing the new event to the MIB (FIFO, first in, first out). The 100 is the default and minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum number that can be set for the property is 500.
    • The cluster event SNMP interface takes effect on NOTICE or higher severity events. The NOTICE severity is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect on other higher severity events, such as WARNING or higher severity events, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in the previous releases (prior to the 4.2 release).
    Examples:
    • Set the number of events stored in the MIB to 200: # clsnmpmib set -p log_number=200 event
    • Set the interface to take effect on WARNING or higher severity events: # clsnmpmib set -p min_severity=WARNING event
    Administering the Cluster Event SNMP Interface
    Oracle Solaris Cluster provides the following three commands to administer the SNMP interface:
    • clsnmpmib: administer the SNMP interface and the MIB configuration.
    • clsnmphost: administer hosts for the SNMP traps.
    • clsnmpuser: administer SNMP users (specific to the SNMP v3 protocol).
    Only clsnmpmib was changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands.
    Examples:
    1. Enable the cluster event SNMP interface on the local node: # clsnmpmib enable event
    2. Display the status of the cluster event SNMP interface on the local node: # clsnmpmib show -v
    3. Configure my_host to receive the cluster event SNMP traps: # clsnmphost add my_host
    The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using cacaoadm. For example:
    # cacaoadm list-params
    Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
    # cacaoadm set-param snmp-adaptor-port=1161
    Set the SNMP MIB port number to 1161.
    # cacaoadm set-param snmp-adaptor-trap-port=1162
    Set the SNMP trap port number to 1162.
    The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data (a quick walk example appears at the end of this entry). Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
    - Leland Chen
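    As a hedged aside, once the interface is enabled you can sanity-check it from any SNMP-capable host with the standard net-snmp tools, using the MIB port and OID quoted above (the community string "public" and the node name clusternode1 are illustrative; use whatever your SNMP configuration actually defines):

    # snmpwalk -v2c -c public clusternode1:11161 .1.3.6.1.4.1.42.2.80

    If events have been logged to the MIB, the walk should return one table row per stored event; an empty walk right after enabling the interface simply means no NOTICE or higher severity event has occurred yet.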

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary
    The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.
    What's New in MySQL Cluster 7.2
    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper.
    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use
    One of the key enhancements delivered in MySQL Cluster 7.2 is the extensions made to the multi-threading processes of the data nodes.
    Multi-Threaded Data Node Extensions
    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:
    1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads
    Each of these thread types is discussed in more detail below, and a sample thread configuration appears at the end of this entry.
    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.
    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.
    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.
    To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented with a single thread.
    The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster.
    The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency.
    The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.
    Benchmarking the Scalability Enhancements
    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.
    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1
    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned!
    In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.
    Conclusion
    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
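    For readers who want to experiment with these thread types, the per-thread counts are driven from the data node section of the cluster configuration file. A hedged sketch of a config.ini fragment using the ThreadConfig parameter is shown below; the counts follow the ratios discussed above but are purely illustrative, and which thread types ThreadConfig accepts varies by release, so check the MySQL Cluster reference manual for your exact version:

    [ndbd default]
    # Illustrative thread layout: 16 LDM threads, 8 TC threads,
    # 8 send and 8 recv threads, plus single main, io and rep threads.
    ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},io={count=1},rep={count=1}

    Alternatively, the simpler MaxNoOfExecutionThreads parameter lets the data node pick a balanced thread distribution automatically, which is usually the easier starting point before hand-tuning ThreadConfig.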

    Read the article

  • MySQL at Mobile World Congress (on Valentine's Day...)

    - by mat.keep(at)oracle.com
    It is that time of year again when the mobile communications industry converges on Barcelona for what many regard as the premier telecommunications show of the year. Starting on February 14th, what better way for a Brit like me to spend Valentine's Day than with 50,000 mobile industry leaders (my wife doesn't tend to read this blog, so I'm reasonably safe with that statement). As ever, Oracle has an extensive presence at the show, and part of that presence this year includes MySQL. We will be running a live demonstration of the MySQL Cluster database on Booth 7C18 in the App Planet. The demonstration will show how the MySQL Cluster Connector for Java is implemented to provide native connectivity to the carrier grade MySQL Cluster database from Java ME clients via Java SE virtual machines and Java EE servers. The demonstration will show how end-to-end Java services remain continuously available during both catastrophic failures and scheduled maintenance activities. The MySQL Cluster Connector for Java provides both a native Java API and a JPA plug-in that directly maps Java objects to relational tables stored in the MySQL Cluster database, without the overhead and complexity of having to transform objects to JDBC and then SQL. The result is 10x higher throughput, and a simpler development model for Java engineers. Stop by the stand for a demonstration, and an opportunity to speak with the MySQL telecoms team, who will share experiences on how MySQL is being used to bring the innovation of the web to the carrier network. Of course, if you can't make it to Barcelona, you can still learn more about the MySQL Cluster Connector for Java from this whitepaper, and you are free to download it as part of MySQL Cluster Community Edition. Let us know via the comments if you have Java applications that you think will benefit from the MySQL Cluster Connector for Java. I can't promise that Valentine's Day at MWC will be the time you fall in love with MySQL Cluster... but I'm confident you will at least develop a healthy respect for it.

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts to fault detection and failover software to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great... When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer. Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January, the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:
    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.
    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one! (A sketch of the MCM client appears at the end of this entry.)
    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.
    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
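    To give a feel for the "46 commands down to one" claim, here is a hedged sketch of driving a full cluster bootstrap from the MySQL Cluster Manager client (the site, package and cluster names, host names and basedir path are all illustrative; consult the MCM manual for the exact command set in your release):

    mcm> create site --hosts=host1,host2 mysite;
    mcm> add package --basedir=/usr/local/mysql cluster_7_1;
    mcm> create cluster --package=cluster_7_1 --processhosts=ndb_mgmd@host1,ndbd@host1,ndbd@host2,mysqld@host1,mysqld@host2 mycluster;
    mcm> start cluster mycluster;

    Each mcm command fans out to the agents on every host, so operations like rolling restarts or upgrades that would otherwise require long sequences of per-node commands collapse into a single statement.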

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:
    Node 0a is the cluster node that has crashed and cannot boot anymore.
    Node 0b is the node in the cluster and in production with services active.
    Both nodes have their boot disk mirrored via SDS/SVM.
    We have several options to clone the boot disk from node 0b:
    - make a copy via the network using the ufsdump command piped to ufsrestore
    - make a copy by inserting the disk locally on node 0b and creating the third mirror with SDS
    - make a copy by inserting the disk locally on node 0b and using the dd command
    In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems, which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).
    Procedure:
    On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
           0      2    00          0   2106432   2106431
           1      3    01    2106432  74630784  76737215
           2      5    00          0 143349312 143349311
           4      7    00   76737216  50340672 127077887
           5      4    00  127077888  14683968 141761855
           6      0    00  141761856   1058304 142820159
           7      0    00  142820160    529152 143349311
    We will insert the new disk on node 0b and it will be seen as c1t2d0.
    1) On node 0b make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
    # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
    A copy of a 72GB disk will take approximately 45 minutes.
    Note: as an alternative, to make an identical copy of root over the network follow Document ID: 47498, Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager.
    2) Perform an fsck on the c1t2d0 data slices:
    1. fsck -o f /dev/rdsk/c1t2d0s0 (root)
    2. fsck -o f /dev/rdsk/c1t2d0s4 (/var)
    3. fsck -o f /dev/rdsk/c1t2d0s5 (/usr)
    4. fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)
    3) Mount the root file system in order to edit the following files for changing the node name:
    # mount /dev/dsk/c1t2d0s0 /mnt
    Change the hostname from 0b to 0a:
    # cd /mnt/etc
    # vi hosts
    # vi hostname.bge0
    # vi hostname.bge2
    # vi nodename
    4) Change the /mnt/etc/vfstab from the actual:
    /dev/md/dsk/d201 - - swap - no -
    /dev/md/dsk/d200 /dev/md/rdsk/d200 / ufs 1 no -
    /dev/md/dsk/d205 /dev/md/rdsk/d205 /usr ufs 1 no logging
    /dev/md/dsk/d204 /dev/md/rdsk/d204 /var ufs 1 no logging
    #/dev/md/dsk/d206 /dev/md/rdsk/d206 /globaldevices ufs 2 yes logging
    swap - /tmp tmpfs - yes -
    /dev/md/dsk/d206 /dev/md/rdsk/d206 /global/.devices/node@2 ufs 2 no global
    to this (unencapsulate the disk from SDS/SVM):
    /dev/dsk/c1t0d0s1 - - swap - no -
    /dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -
    /dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /usr ufs 1 no logging
    /dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /var ufs 1 no logging
    #/dev/md/dsk/d206 /dev/md/rdsk/d206 /globaldevices ufs 2 yes logging
    swap - /tmp tmpfs - yes -
    /dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /global/.devices/node@1 ufs 2 no global
    It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a. But this could be different based on the configuration you are working on.
    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
    rootdev:/pseudo/md@0:0,200,blk
    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared in order to point to the nodeid of node 0a (in our case nodeid 1):
    # cd /mnt/dev/md
    How it is now (node 0b has nodeid 2):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
    # rm shared
    # ln -s ../../global/.devices/node@1/dev/md/shared shared
    How it is going to be (with nodeid 1 for node 0a):
    lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared
    7) Change the nodeid (in our case from 2 to 1):
    # cd /mnt/etc/cluster
    # vi nodeid
    8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:
    # cd /mnt/etc
    # vi path_to_inst
    Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"
    9) Write the bootblock to the disk... just in case:
    # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
    Now the disk is ready to be inserted in node 0a in order to boot up the node.
    10) Boot up node 0a with the command "boot -sx"... this is because we need to make some changes in the ccr files in order to recreate the did environment.
    11) Modify the cluster ccr:
    # cd /etc/cluster/ccr
    # rm did_instances
    # rm did_instances.bak
    # vi directory - remove the did_instances line.
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
    # grep ccr_gennum /etc/cluster/ccr/directory
    ccr_gennum -1
    # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
    # grep ccr_gennum /etc/cluster/ccr/infrastructure
    ccr_gennum -1
    12) Bring node 0a down again to the ok prompt and then issue the command "boot -r". Now the node will join the cluster, and with the scstat and metaset commands you can verify functionality (a short verification checklist appears at the end of this entry). The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200. For this reason on node 0a we need to create metadevices starting from d100. This is just an example; you can have different names. The important thing to remember is that the metadevice boot disks have different names on each node.
    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
    # metaclear -r -f d200
    # metaclear -r -f d201
    # metaclear -r -f d204
    # metaclear -r -f d205
    # metaclear -r -f d206
    Verify with metastat that no metadevices are set for the boot and mirror disks.
    14) Encapsulate the boot disk:
    # metainit -f d110 1 1 c1t0d0s0
    # metainit d100 -m d110
    # metaroot d100
    15) Reboot node 0a.
    16) Create all the metadevices for the remaining slices on the boot disk:
    # metainit -f d111 1 1 c1t0d0s1
    # metainit d101 -m d111
    # metainit -f d114 1 1 c1t0d0s4
    # metainit d104 -m d114
    # metainit -f d115 1 1 c1t0d0s5
    # metainit d105 -m d115
    # metainit -f d116 1 1 c1t0d0s6
    # metainit d106 -m d116
    17) Edit the vfstab in order to specify the metadevices created:
    old:
    /dev/dsk/c1t0d0s1 - - swap - no -
    /dev/md/dsk/d100 /dev/md/rdsk/d100 / ufs 1 no -
    /dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /usr ufs 1 no logging
    /dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /var ufs 1 no logging
    #/dev/md/dsk/d206 /dev/md/rdsk/d206 /globaldevices ufs 2 yes logging
    swap - /tmp tmpfs - yes -
    /dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /global/.devices/node@1 ufs 2 no global
    new:
    /dev/md/dsk/d101 - - swap - no -
    /dev/md/dsk/d100 /dev/md/rdsk/d100 / ufs 1 no -
    /dev/md/dsk/d105 /dev/md/rdsk/d105 /usr ufs 1 no logging
    /dev/md/dsk/d104 /dev/md/rdsk/d104 /var ufs 1 no logging
    #/dev/md/dsk/d106 /dev/md/rdsk/d106 /globaldevices ufs 2 yes logging
    swap - /tmp tmpfs - yes -
    /dev/md/dsk/d106 /dev/md/rdsk/d106 /global/.devices/node@1 ufs 2 no global
    18) Reboot node 0a in order to check the new SDS/SVM boot configuration.
    19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
    # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
    # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2
    20) Put the DB replica on slice 7 of disk c1t1d0:
    # metadb -a -c 3 /dev/dsk/c1t1d0s7
    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
    # metainit d120 1 1 c1t1d0s0
    # metattach d100 d120
    # metainit d121 1 1 c1t1d0s1
    # metattach d101 d121
    # metainit d124 1 1 c1t1d0s4
    # metattach d104 d124
    # metainit d125 1 1 c1t1d0s5
    # metattach d105 d125
    # metainit d126 1 1 c1t1d0s6
    # metattach d106 d126
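    As a closing sanity check (using the same commands referenced in steps 12 and 13; the exact output format varies by Solaris Cluster release), it is worth confirming cluster membership, the state database replicas, and the mirror sync status once step 21 completes:

    # scstat -n        (both nodes should be listed as Online cluster members)
    # metadb -i        (replicas on c1t0d0s7 and c1t1d0s7 should show no error flags)
    # metastat d100    (each mirror should report "Okay" once the resync has finished)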

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA Clusters.
    Test cases for each component - Oracle Application Server 10G
    General Application Server test cases
    This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
    Test Case 1: Check if you can see AS instances in the console
    Implementation:
    1. Log on to the AS Console --> check to see if you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result:
    You should be able to see all the instances in the AS cluster listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly:
    $ORACLE_HOME\opmn\logs\opmn.log
    $ORACLE_HOME\opmn\logs\opmn.dbg
    If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for the opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS console to see if the instance is listed as part of the cluster.
    Test Case 2: Check to see if you can start/stop each component
    Implementation:
    Check each OC4J component on each AS instance.
    Start each and every component through the AS console to see if they will start and stop.
    Do that for each and every instance.
    Result:
    Each component should start and stop through the AS console. You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.
    Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
    Implementation:
    Pick an OC4J instance.
    Create a new data-source through the AS console.
    Modify an existing data-source or connection pool (optional).
    Result:
    Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS console has successfully updated a remote file and MBeans are communicating correctly.
    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation:
    1. Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result:
    Use opmnctl @cluster status to check for start and stop statuses (a scripted sequence appears at the end of this entry).
    HTTP server test cases
    This section deals with use cases to test HTTP server failover scenarios. In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole
    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster
    Implementation:
    Access the BPELConsole.
    Check $ORACLE_HOME\Apache\Apache\logs\access_log --> check for the timestamp and the URL that was accessed by the user. The timestamp and URL would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15
    After you have figured out which HTTP server this is running on, shut down this HTTP server by using opmnctl stopproc --> this is a graceful shutdown.
    Access the BPELConsole again (please note that you should have a load balancer in front of the HTTP servers and have configured the Apache Virtual Host; see the EDG for steps).
    Check $ORACLE_HOME\Apache\Apache\logs\access_log --> check for the timestamp and the URL that was accessed by the user. The timestamp and URL would look like the above.
    Result:
    Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.
    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node --> this simulates a network failure.
    Test Case 3: In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
    Implementation:
    Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down.
    On Linux use kill -9 <PID> to kill the HTTP server.
    Access the BPEL console.
    Result:
    As you shut down the HTTP server, OPMN will restart it. The restart may be so quick that the LBR may still route the request to the same server. One way to check that the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL console.
    BPEL test cases
    This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment and testing related to BPEL failover.
    Test Case 1: Verify that jGroups has initialized correctly
    There is no real testing in this use case, just a visual verification by looking at log files that jGroups has initialized correctly. Check the opmn log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups related information during startup and steady-state operation. Soon after startup you should find log entries for UDP or TCP.
    Example jGroups log entries for UDP:
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: sockets will use interface 144.25.142.172
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: socket information:
    local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
    sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
    mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
    mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:1127
    -------------------------------------------------------
    Example jGroups log entries for TCP:
    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.172:7900
    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:7900
    -------------------------------------------------------
    In the log below, "socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the logfile above by IP address and port:
    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.173:7901
    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.173:7901
    -------------------------------------------------------
    Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
    INFO: created socket to 144.25.142.172:7900
    Result:
    By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and whether the jGroups channel is communicating.
    Test Case 2: Test connectivity between BPEL nodes
    Implementation:
    Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
    Result:
    Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
    Implementation:
    Deploy a HelloWorld BPEL suitcase (or any other client specific BPEL suitcase) to only one BPEL instance using ant, JDeveloper, or the BPEL console.
    Log on to the second BPEL console to check that the BPEL suitcase has been deployed.
    Result:
    If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment will be "deployed" to each node, by only deploying to a single BPEL instance in the BPEL cluster.
    Test Case 4: Test that the BPEL server fails over and that all asynch processes are picked up by the secondary BPEL instance
    Implementation:
    Deploy 2 asynch processes:
    - a ParentAsynch process which calls a ChildAsynch process with a variable telling it how many times to loop or how many seconds to sleep
    - a ChildAsynch process that loops or sleeps or has an onAlarm
    Make sure that the processes are deployed to both servers.
    Shut down one BPEL server.
    On the active BPEL server call ParentAsynch a few times (use the load generation page).
    When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Please wait until this BPEL instance shuts down fully before starting up the second one.
    Log on to the BPEL console and see that the instances were picked up by the second BPEL node and completed.
    Result:
    The BPEL instances will fail over to the secondary node and complete the flow.
    ESB test cases
    This section covers the use cases involved with testing an ESB cluster. For this section please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.
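    For scripting the General Application Server test cases above, the opmnctl invocations can be collected into a short, repeatable sequence (a sketch; ORACLE_HOME and the @cluster scope are as used in this article, and @cluster addresses every instance that has joined via OPMN):

    # Start, check and stop the whole AS cluster from any one node
    $ORACLE_HOME/opmn/bin/opmnctl @cluster startall
    $ORACLE_HOME/opmn/bin/opmnctl @cluster status -l
    $ORACLE_HOME/opmn/bin/opmnctl @cluster stopall

    The status -l form also prints the PID and port information used in the HTTP server crash test (Test Case 3), so the same output feeds both the availability checks and the kill -9 scenario.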

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in details the following points What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while check-in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static Rule analysis? Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems according to Microsoft's rules that mainly targets best practices in writing code, and there is a large set of those rules included with Visual Studio grouped into different categorized targeting specific coding issues like security, design, Interoperability, globalizations and others. Static here means analyzing the source code without executing it and this type of analysis can be performed through automated tools (like Visual Studio 2013 Code Analysis Tool) or manually through Code Review which already supported in Visual Studio 2012 and 2013 (check Using Code Review to Improve Quality video on Channel9) There is also Dynamic analysis which performed on executing programs using software testing techniques such as Code Coverage for example. When to use? Running Code analysis tool at regular intervals during your development process can enhance the quality of your software, examines your code for a set of common defects and violations is always a good programming practice. Adding that Code analysis can also find defects in your code that are difficult to discover through testing allowing you to achieve first level quality gate for you application during development phase before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) Database applications. Support Visual Studio versions All version of Visual Studio starting Visual Studio 2013 (except Visual Studio Test Professional) check Feature comparisons Create and modify a custom rule set required Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even setup to automatically run as part of a Team Build or check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project or simply right click on the project name on the Solution Explorer choose Run Code Analysis from the context menu Run Code Analysis Automatically To run code analysis each time that you build a project, you select Enable Code Analysis on Build on the project's Property Page Run Code Analysis while check-in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through Check-in policies which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server) Require permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow typically your account must be part of Project Administrators, Project Collection Administrators, for more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or Uncheck the policy option based on the configurations you need to perform as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check Code analysis rule set reference on MSDN What is Rule Set? Rule Set is a group of code analysis rules like the example below where Microsoft.Design is the rule set name where "Do not declare static members on generic types" is the code analysis rule Once you configured the Analysis rule the policy will be enabled for all the team member in this project whenever a team member check-in any source code to the TFSVC the policy section will highlight the Code Analysis policy as below TFS is a very extensible platform so you can simply implement your own custom Code Analysis Check-in policy, check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you have to be aware also about compatibility between different TFS versions check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis" Once configure, Kick-off your build definition to queue a new build, Code Analysis will run as part of build workflow and you will be able to see code analysis warning as part of build report Understand the Code Analysis results & learn how to fix them Now after you went through Code Analysis configurations and the different ways of running it, we will go through the Code Analysis result how to understand them and how to resolve them. Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties, let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
2. Title: the title of the warning message. 3. Description: a description of the problem or a suggested fix. 4. File Name: the file name and the line of code number which violate the code analysis rule set. 5. Category: the code analysis category for this error. 6. Warning/Error: depends on how you configure it in the rule set; the default is Warning level. 7. Action: Copy: copies the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or Bug, and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally. Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, simply click the Check Id hyperlink and you'll be directed to MSDN (online or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCop http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT. ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • MySQL Connect: What to Expect From the Wondrous Land of MySQL Cluster

    - by Mat Keep
The MySQL Connect conference is only a couple of weeks away, with MySQL engineers, support teams, consultants and community aces busy putting the final touches to their talks. There will be many exciting new announcements and sharing of best practices at the conference, covering the range of MySQL technologies. MySQL Cluster will be a big part of this, so I wanted to share some key sessions for those of you who plan on attending, as well as some resources for those who are not lucky enough to be able to make the trip, but who can't afford to miss the key news. Of course, this is no substitute for actually being there… and the good news is that registration is still open ;-) Roadmap: What's New in MySQL Cluster. Saturday 29th, 1300-1400, in Golden Gate room 5. Bernd Ocklin, director of MySQL Cluster development, and myself will be taking a look at what follows the latest MySQL Cluster 7.2 release. I don't want to give too much away - let's just say it's not often you can add powerful new functionality to a product while at the same time making life radically simpler for its users. For those not making it to the conference, a live webinar repeating the talk is scheduled for Thursday 25th October at 09.00 Pacific time. Hold the date; registration will open soon and will be published to our MySQL Webinars page. Best Practices: Getting Started with MySQL Cluster, Hands-On Lab. Saturday 29th, 1600-1700, in Plaza Room A. Santo Leto, one of our lead MySQL Cluster support engineers, regularly works with users new to MySQL Cluster, assisting them in installation, configuration, scaling, etc. In this lab, Santo will share best practices in getting started. Delivering Breakthrough Performance with MySQL Cluster. Saturday 29th, 1730-1830, in Golden Gate room 5. Frazer Clement, lead MySQL Cluster software engineer, will demonstrate how to translate the awesome Cluster benchmarks (remember 1 BILLION UPDATEs per minute?!) into real-world performance. You can also get some best practices from our new MySQL Cluster performance guide. MySQL Cluster BoF. Saturday 29th, 1900-2000, room Golden Gate 5. Come and get a demonstration of new tools for the installation and configuration of MySQL Cluster, and spend time with the engineering team discussing any questions or issues you may have. Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster. Sunday 30th, 1145-1245, in Golden Gate room 7. In this session, JD Duncan and Andrew Morgan will present how to get started with both Memcached and new NoSQL APIs. JD and I recently ran a webinar demonstrating how to build simple Twitter-like services with Memcached and MySQL Cluster. The replay is available for download. Case Studies: MySQL Cluster @ El Chavo, Latin America's #1 Facebook Game. Sunday 30th, 1745-1845, in Golden Gate room 4. Playful Play deployed MySQL Cluster CGE to power their market-leading social game. This session will discuss the challenges they faced, why they selected MySQL Cluster and their experiences to date. You can read more about Playful Play and MySQL Cluster here. A Journey into NoSQLand: MySQL's NoSQL Implementation. Sunday 30th, 1345-1445, in Golden Gate room 4.
Lig Turmelle, web DBA at Kaplan Professional and esteemed Oracle ACE, will discuss her experiences working with the NoSQL interfaces for both MySQL Cluster and InnoDB. Evaluating MySQL HA Alternatives. Saturday 29th, 1430-1530, room Golden Gate 5. Henrik Ingo, former member of the MySQL sales engineering team, will provide an overview of various HA technologies for MySQL, starting with replication, progressing to InnoDB, Galera and MySQL Cluster. What about the other stuff? Of course MySQL Connect has much, much more than MySQL Cluster. There will be lots on replication (which I'll blog about soon), MySQL 5.6, InnoDB, cloud, etc. Take a look at the full Content Catalog to see more. If you are attending, I hope to see you at one of the Cluster sessions... and remember, registration is still open!

    Read the article

  • I need advice on how to debug a cluster

    - by alcor
I'm the only developer of a complex, critical software system, written in Visual C++ 2005. It's deployed in a classical Microsoft cluster scenario (active/passive) running Windows Server 2003 R2. If server A goes down, the other one (B) starts and takes ownership of its duties. You have to know that: both servers have the same Microsoft patches/fixes, same hardware, same everything. Both servers use the same memory storage (a RAID-6 through Fibre Channel). This software has a main module that launches the peripheral modules; if a peripheral module crashes, the main module restarts it. When I switch the application to one of the two servers (let's say server B), two of the peripheral modules of the main application just start to crash, apparently without reason, about 2 seconds after the peripheral module starts. What could I do to analyze/inspect/resolve this weird situation?

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release. Pace of Innovation It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, the Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and give feedback on new features they want to see. What’s the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including: - New NoSQL APIs; - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud; - Performance and scalability enhancements; - Taking advantage of features in the latest MySQL 5.x Server GA. Design Rationale MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including: Concurrent NoSQL and SQL access to the database; Auto-sharding with simple scale-out across commodity hardware; Multi-master replication with failover and recovery both within and across data centers; Shared-nothing architecture with no single point of failure; Online scaling and schema changes; ACID compliance and support for complex queries, across shards. Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including: - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support. - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables. A short SQL sketch of such a constraint follows below. 
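To make that concrete, the following is a minimal sketch of what such a constraint looks like at the SQL level on NDB tables. This is my own illustration rather than an example taken from the Labs Release, and the country/city tables, columns and referential actions are all hypothetical:

-- parent and child tables both stored in MySQL Cluster (NDB)
CREATE TABLE country (
  code CHAR(2) NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (code)
) ENGINE=NDB;

CREATE TABLE city (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(64),
  country_code CHAR(2),
  PRIMARY KEY (id),
  FOREIGN KEY (country_code) REFERENCES country (code)
    ON DELETE CASCADE
    ON UPDATE RESTRICT
) ENGINE=NDB;

-- deleting a parent row now cascades to its child rows
DELETE FROM country WHERE code = 'US';

ON UPDATE RESTRICT is chosen here deliberately, given the Primary Key update behaviour described under Implementation below.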
Implementation The Foreign Key functionality is implemented directly within MySQL Cluster’s data nodes, allowing any client API accessing the cluster to benefit from them – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless using CASCADE action, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that when using InnoDB "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster “NO ACTION” means “deferred check”, i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints. Configuration There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe: 1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB in the correct sequence to ensure constraints are enforced 2. Alternatively drop the Foreign Key constraints prior to the import process and then recreate them when complete (a sketch of this second approach follows at the end of this post). Getting Started Read this blog for a demonstration of using Foreign Keys with MySQL Cluster.  You can download MySQL Cluster 7.3 Labs Release with Foreign Keys today - (select the mysql-cluster-7.3-labs-June-2012 build) If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3) Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu) And if you have any feedback, please post it to the Comments section of this blog. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases. * Safe Harbor Statement This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. 
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
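As a footnote to the Configuration best practices above, the second approach (drop the constraints, convert the tables, recreate the constraints) might look like the following sketch; it reuses the hypothetical country/city tables from earlier, and the constraint name is equally hypothetical:

-- 1. drop the constraint while the tables are still InnoDB
ALTER TABLE city DROP FOREIGN KEY fk_city_country;

-- 2. convert the tables to MySQL Cluster, parents before children
ALTER TABLE country ENGINE=NDB;
ALTER TABLE city ENGINE=NDB;

-- 3. recreate the constraint, now enforced in the data nodes
ALTER TABLE city
  ADD CONSTRAINT fk_city_country
  FOREIGN KEY (country_code) REFERENCES country (code);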

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design, implementation and getting started with all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including: - Foreign Key constraints in MySQL Cluster - Node.js NoSQL API - Auto-installation of high-performance, distributed clusters We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience. Q. What Foreign Key actions are supported? A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes? A. They are implemented in the data nodes, and therefore can be enforced for both the SQL and NoSQL APIs. Q. Are they compatible with the InnoDB Foreign Key implementation? A. Yes, with the following exceptions: - InnoDB doesn’t support “No Action” constraints, MySQL Cluster does. - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter. - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB. - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you’d want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table and the subsequent implicit insert into the parent wouldn’t reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected. Q. Does adding or dropping Foreign Keys cause downtime due to a schema change? A. Nope, this is an online operation (see the sketch at the end of this post). MySQL Cluster supports a number of on-line schema changes, e.g. adding and dropping indexes, adding columns, etc. Q. Where can I see an example of node.js with MySQL Cluster? A. Check out the tutorial and download the code from GitHub. Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2? A. Yes to both! Q. Can I get a demo? A. Check out the tutorial. You can download the code from http://labs.mysql.com/ (go to the Select Build drop-down box). Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication? A. If you're splitting your cluster between sites, then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes. Q. Where can one learn more about the PayPal project with MySQL Cluster? A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation. So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!
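Two of the answers above lend themselves to a quick sketch (the table and constraint names are hypothetical, not taken from the webinar): adding or dropping a Foreign Key is an ordinary ALTER TABLE that runs online, while an ON UPDATE CASCADE whose parent column is a primary key is expected to be rejected.

-- online operations: the cluster keeps serving reads and writes
ALTER TABLE city
  ADD CONSTRAINT fk_city_country
  FOREIGN KEY (country_code) REFERENCES country (code);

ALTER TABLE city DROP FOREIGN KEY fk_city_country;

-- expected to fail, per the Q&A above: ON UPDATE CASCADE where the
-- parent column (country.code) is that table's primary key
ALTER TABLE city
  ADD CONSTRAINT fk_city_country
  FOREIGN KEY (country_code) REFERENCES country (code)
  ON UPDATE CASCADE;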

    Read the article

  • Slides of my HOL on MySQL Cluster

    - by user13819847
Hi! Thanks to everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday. The following are the links for the slides, the HOL instructions, and the code examples. I'll try to summarize my HOL below. The aim of the HOL was to help attendees familiarize themselves with MySQL Cluster, in particular by learning: the basics of the MySQL Cluster architecture; the basics of MySQL Cluster configuration and administration; and how to start a new cluster for evaluation purposes and how to connect to it. We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off use of commodity hardware, transactional consistency and use of complex queries, MySQL Cluster provides: Web Scalability (web-scale performance on both reads and writes), Carrier Grade Availability (99.999%), and Developer Agility (freedom to use SQL or NoSQL access methods). MySQL Cluster implements an auto-sharding, multi-master, shared-nothing architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, and no single point of failure. In the architecture of MySQL Cluster it is possible to find three types of nodes: management nodes, responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster; data nodes, where data and indexes are stored; and API nodes, which provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors). MySQL Cluster is recommended in situations where: it is crucial to reduce service downtime, because this produces a heavy impact on business; sharding the database to scale write performance highly impacts development of the application (in MySQL Cluster the sharding is automatic and transparent to the application); there are real-time needs; there are unpredictable scalability demands; and it is important to have data-access flexibility (SQL & NoSQL). MySQL Cluster is available in two editions: the Community Edition (open source, freely downloadable from mysql.com) and the Carrier Grade Edition (commercial edition, which can be downloaded from eDelivery for evaluation purposes). MySQL Carrier Grade Edition adds, on top of the Community Edition, commercial extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer) and Oracle's Premium Support Services (the largest team of MySQL experts backed by MySQL developers, forward-compatible hot fixes, multi-language support, and more). We concluded talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, realtime transactional services at web-scale, where downtime is simply not an option. From a practical point of view the HOL's steps were: MySQL Cluster installation; start & monitoring of the MySQL Cluster processes; client connection to the Management Server and to an SQL Node; and connection using the NoSQL NDB API and Connector/J (a quick smoke test for this stage is sketched below). In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you asked both during the HOL and at the MySQL Cluster booth. Slides are also on SlideShare: Santo Leto - MySQL Connect 2012 - Getting Started with Mysql Cluster. Happy Clustering!
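If you are retracing the HOL steps on your own machine, a quick smoke test once you are connected to an SQL node is to create a small NDB table and confirm it is visible from every other SQL node. A minimal sketch - the table name is mine, not from the lab materials:

-- run on the first SQL node
CREATE TABLE hol_test (
  id INT NOT NULL,
  note VARCHAR(32),
  PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;

INSERT INTO hol_test VALUES (1, 'hello cluster');

-- run on a second SQL node: the row is visible, because the data
-- lives in the data nodes rather than in any individual MySQL server
SELECT * FROM hol_test;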

    Read the article

  • Microsoft Cluster or Oracle Fail Safe error

    - by osdm
Trying to get Oracle Database 10g to work on a Microsoft cluster (using Oracle Fail Safe, not RAC). Everything installed, but when trying to verify the group or add the database to the group I get the following error: FS-10220: Network name MSK00-NST01-1 maps to IP address 10.1.11.74 in the cluster resource but maps to IP address 10.1.1.74 on the system. MSK00-NST01-1 is the cluster name, 10.1.11.74 is the first node's IP, 10.1.1.74 is the second node's IP. Oracle documentation says "The cluster and the system must have the same IP address mapping for a network name. Check that either the network name server or the local host file has the same IP address mapping as the cluster." Where is the error - in the Oracle configuration or in the cluster configuration? What are possible ways to correct it? Thanks a lot for any ideas.

    Read the article

  • Hyper-V cluster VS regular cluster

    - by Sasha
We need to choose between Hyper-V and regular cluster technologies. What are the advantages and disadvantages of these approaches? Update: We have two physical servers and want to build a reliable solution using a cluster approach. We need to cluster our application and DB (MS SQL). We know that we can use: Regular Windows Cluster Service - the application and DB will be migrated from one node to the other. Hyper-V Failover Cluster - the virtual machine will be migrated from one node to the other. A combined variant - DB mirroring for MS SQL and Hyper-V for our application. We need to make a choice between these approaches, so we need to know the advantages and disadvantages of each.

    Read the article

  • Heavy write to Galera cluster - table locked, cluster practically unusable

    - by Joe
I set up a Galera Cluster on 3 nodes. It works perfectly for reading data. I have written a simple application to run some tests on the cluster. Unfortunately I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently or I am doing something wrong? I have a simple stored procedure:

DELIMITER $$
CREATE PROCEDURE testproc(IN p_idWorker INTEGER)
BEGIN
  DECLARE t_id INT DEFAULT -1;
  DECLARE t_counter INT;
  -- claim one unclaimed row for this worker
  UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL LIMIT 1;
  SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id;
  SELECT ABS(MAX(counter)/MIN(counter)) FROM test INTO t_counter;
  SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter;
  IF t_id >= 0 THEN
    UPDATE test SET counter = counter + 1 WHERE id = t_id;
    UPDATE test SET idWorker = NULL WHERE id = t_id;
    SELECT t_counter AS res;
  ELSE
    SELECT 'end' AS res;
  END IF;
END $$
DELIMITER ;

Now my simple C# application creates, for example, 3 MySQL clients in separate threads and each one executes the procedure every 100 ms until there is no record where column 'counter' = 0. Unfortunately, after about 10 seconds something goes bad. On the servers there is a process 'query_end' that never ends. After that you cannot run an update on the test table; MySQL returns: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. You can't even restart mysql. What you can do is restart the server, sometimes the whole cluster. Is Galera Cluster so unreliable when you do massive concurrent writing/updates? Hard to believe.

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers. ClusterJ and ClusterJPA ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including: persistent classes; relationships; joins in queries; lazy loading; and table and index creation from the object model. By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java: - Memcached for a persistent, high performance, write-scalable Key/Value store; - HTTP/REST via an Apache module; - C++ via the NDB API for the lowest absolute latency. Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A. Q & A Q. Why would I use Connector/J vs. ClusterJ? A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability. Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types? A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others. Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes? A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput. Q. 
Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing? A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC (see the sketch at the end of this post). Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction? A. ClusterJ will send identical PK lookups to the same data node. Q. How is distributed query processing handled by MySQL Cluster? A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query. Q. Can I use Foreign Keys with MySQL Cluster? A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release. Summary The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today! Key Resources MySQL Cluster on-line demo MySQL ClusterJ and JPA on-demand webinar MySQL ClusterJ and JPA documentation MySQL ClusterJ and JPA whitepaper and tutorial
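As a footnote to the shard-key answer above: the distribution key ClusterJ computes is the table's partition key, which defaults to the primary key but can be narrowed explicitly in the DDL. A sketch with hypothetical table and column names:

-- all rows sharing a user_id hash to the same data node, so a
-- transaction touching one user's rows can be served by a single TC
CREATE TABLE ticket (
  user_id INT NOT NULL,
  ticket_id INT NOT NULL,
  note VARCHAR(64),
  PRIMARY KEY (user_id, ticket_id)
) ENGINE=NDBCLUSTER
PARTITION BY KEY (user_id);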

    Read the article

  • Oracle Solaris Cluster 4.1 Released

    - by Larry Wake
Today we announced the release of Oracle Solaris Cluster 4.1 (download; existing customers can just update from the package repository). New capabilities include: Oracle Solaris 10 Zone Clusters: The easiest way to update and consolidate existing Solaris 10 application environments is with Oracle Solaris 10 Zones within Oracle Solaris 11 -- not only do you get higher system utilization, but you can immediately leverage new features such as network virtualization. With Oracle Solaris Cluster 4.1, you can now cluster these zones, for even higher availability. Expanded disaster recovery operations: Oracle Solaris Cluster 4.1 introduces managed switchover and disaster-recovery takeover of applications and data using ZFS Storage Appliance replication services in a multi-site, multi-cluster configuration. Faster application recovery with improved storage failure detection and resource dependency management. Labeled security support for providing both high availability and high security, leveraging Oracle Solaris 11 Trusted Extensions. Learn more: Oracle Solaris Cluster at the Oracle Technology Network; Data Sheet; What's New in Oracle Solaris Cluster 4.1; FAQs

    Read the article

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:
1. expose the shadow copy as a separate LUN
2. offline the original CSV in the cluster
3. un-expose the LUN carrying the original CSV
4. make sure my cluster nodes have detected the new LUN and no longer list the original one
5. add the volume to the list of cluster volumes, promote it to be a CSV
6. copy off the data I need
7. undo steps 5-1 to revert to the original configuration
This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

    Read the article

  • Oracle Coherence, Split-Brain and Recovery Protocols In Detail

    - by Ricardo Ferreira
This article provides a high level conceptual overview of Split-Brain scenarios in distributed systems. It will focus on a specific example of cluster communication failure and recovery in Oracle Coherence. This includes a discussion on the witness protocol (used to remove failed cluster members) and the panic protocol (used to resolve Split-Brain scenarios). Note that the removal of cluster members does not necessarily indicate a Split-Brain condition. Oracle Coherence does not (and cannot) detect a Split-Brain as it occurs; the condition is only detected when cluster members that previously lost contact with each other regain contact. Cluster Topology and Configuration In order to create a good didactic example for this article, let's assume a cluster topology and configuration. In this example we have a six member cluster, consisting of one JVM on each physical machine. The member IDs are as follows:
Member ID   IP Address
1           10.149.155.76
2           10.149.155.77
3           10.149.155.236
4           10.149.155.75
5           10.149.155.79
6           10.149.155.78
Members 1, 2, and 3 are connected to a switch, and members 4, 5, and 6 are connected to a second switch. There is a link between the two switches, which provides network connectivity between all of the machines. Member 1 is the first member to join this cluster, thus making it the senior member. Member 6 is the last member to join this cluster. Here is a log snippet from Member 6 showing the complete member set: 2010-02-26 15:27:57.390/3.062 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=main, member=6): Started DefaultCacheServer... SafeCluster: Name=cluster:0xDDEB Group{Address=224.3.5.3, Port=35465, TTL=4} MasterMemberSet ( ThisMember=Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) OldestMember=Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) ActualMemberSet=MemberSet(Size=6, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) RecycleMillis=120000 RecycleSet=MemberSet(Size=0, BitSetCount=0 ) ) At approximately 15:30, the connection between the two switches is severed. Thirty seconds later (the default packet timeout in development mode) the logs indicate communication failures across the cluster. In this example, the communication failure was caused by a network failure. 
In a production setting, this type of communication failure can have many root causes, including (but not limited to) network failures, excessive GC, high CPU utilization, swapping/virtual memory, and exceeding maximum network bandwidth. In addition, this type of failure is not necessarily indicative of a split brain. Any communication failure will be logged in this fashion. Member 2 logs a communication failure with Member 5: 2010-02-26 15:30:32.638/196.928 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) ) The Coherence clustering protocol (TCMP) is a reliable transport mechanism built on UDP. In order for the protocol to be reliable, it requires an acknowledgement (ACK) for each packet delivered. If a packet fails to be acknowledged within the configured timeout period, the Coherence cluster member will log a packet timeout (as seen in the log message above). When this occurs, the cluster member will consult with other members to determine who is at fault for the communication failure. If the witness members agree that the suspect member is at fault, the suspect is removed from the cluster. If the witnesses unanimously disagree, the accuser is removed. This process is known as the witness protocol. Since Member 2 cannot communicate with Member 5, it selects two witnesses (Members 1 and 4) to determine if the communication issue is with Member 5 or with itself (Member 2). However, Member 4 is on the switch that is no longer accessible by Members 1, 2 and 3; thus a packet timeout for member 4 is recorded as well: 2010-02-26 15:30:35.648/199.938 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) Member 1 has the ability to confirm the departure of member 4, however Member 6 cannot as it is also inaccessible. 
At the same time, Member 3 sends a request to remove Member 6, which is followed by a report from Member 3 indicating that Member 6 has departed the cluster: 2010-02-26 15:30:35.706/199.996 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=2): MemberLeft request for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) 2010-02-26 15:30:35.709/199.999 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=2): MemberLeft notification for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) The log for Member 3 determines how Member 6 departed the cluster: 2010-02-26 15:30:35.161/191.694 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=3): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) ) 2010-02-26 15:30:35.165/191.698 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=3): Member departure confirmed by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) ); removing Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) In this case, Member 3 happened to select two witnesses that it still had connectivity with (Members 1 and 2) thus resulting in a simple decision to remove Member 6. Given the departure of Member 6, Member 2 is left with a single witness to confirm the departure of Member 4: 2010-02-26 15:30:35.713/200.003 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=2): Member departure confirmed by MemberSet(Size=1, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) ); removing Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) In the meantime, Member 4 logs a missing heartbeat from the senior member. This message is also logged on Members 5 and 6. 2010-02-26 15:30:07.906/150.453 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=PacketListenerN, member=4): Scheduled senior member heartbeat is overdue; rejoining multicast group. 
Next, Member 4 logs a TcpRing failure with Member 2, thus resulting in the termination of Member 2: 2010-02-26 15:30:21.421/163.968 Oracle Coherence GE 3.5.3/465p2 <D4> (thread=Cluster, member=4): TcpRing: Number of socket exceptions exceeded maximum; last was "java.net.SocketTimeoutException: connect timed out"; removing the member: 2 For quick process termination detection, Oracle Coherence utilizes a feature called TcpRing which is a sparse collection of TCP/IP-based connections between different members in the cluster. Each member in the cluster is connected to at least one other member, which (if at all possible) is running on a different physical box. This connection is not used for any data transfer, only heartbeat communications are sent once a second per each link. If a certain number of exceptions are thrown while trying to re-establish a connection, the member throwing the exceptions is removed from the cluster. Member 5 logs a packet timeout with Member 3 and cites witnesses Members 4 and 6: 2010-02-26 15:30:29.791/165.037 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=5): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) 2010-02-26 15:30:29.798/165.044 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=5): Member departure confirmed by MemberSet(Size=2, BitSetCount=2 Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ); removing Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) Eventually we are left with two distinct clusters consisting of Members 1, 2, 3 and Members 4, 5, 6, respectively. In the latter cluster, Member 4 is promoted to senior member. The connection between the two switches is restored at 15:33. Upon the restoration of the connection, the cluster members immediately receive cluster heartbeats from the two senior members. In the case of Members 1, 2, and 3, the following is logged: 2010-02-26 15:33:14.970/369.066 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=1): The member formerly known as Member(Id=4, Timestamp=2010-02-26 15:30:35.341, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored. 
Likewise for Members 4, 5, and 6: 2010-02-26 15:33:14.343/336.890 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=4): The member formerly known as Member(Id=1, Timestamp=2010-02-26 15:30:31.64, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored. This message indicates that a senior heartbeat is being received from members that were previously removed from the cluster, in other words, something that should not be possible. For this reason, the recipients of these messages will initially ignore them. After several iterations of these messages, the existence of multiple clusters is acknowledged, thus triggering the panic protocol to reconcile this situation. When the presence of more than one cluster (i.e. Split-Brain) is detected by a Coherence member, the panic protocol is invoked in order to resolve the conflicting clusters and consolidate into a single cluster. The protocol consists of the removal of smaller clusters until there is one cluster remaining. In the case of equal size clusters, the one with the older Senior Member will survive. Member 1, being the oldest member, initiates the protocol: 2010-02-26 15:33:45.970/400.066 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=1): An existence of a cluster island with senior Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) containing 3 nodes have been detected. Since this Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) is the senior of an older cluster island, the panic protocol is being activated to stop the other island's senior and all junior nodes that belong to it. Member 3 receives the panic: 2010-02-26 15:33:45.803/382.336 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=3): Received panic from senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) caused by Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member 4, the senior member of the younger cluster, receives the kill message from Member 3: 2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service. In turn, Member 4 requests the departure of its junior members 5 and 6: 
2010-02-26 15:33:43.343/349.015 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=6): Received a Kill message from a valid Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer); stopping cluster service. Once Members 4, 5, and 6 restart, they rejoin the original cluster with senior member 1. The log below is from Member 4. Note that it receives a different member id when it rejoins the cluster. 2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service. 2010-02-26 15:33:46.921/369.468 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Service Cluster left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Invocation:InvocationService, member=4): Service InvocationService left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=OptimisticCache, member=4): Service OptimisticCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=ReplicatedCache, member=4): Service ReplicatedCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=DistributedCache, member=4): Service DistributedCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Invocation:Management, member=4): Service Management left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service Management with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service DistributedCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service ReplicatedCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service OptimisticCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service InvocationService with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member(Id=6, Timestamp=2010-02-26 15:33:47.046, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) left Cluster with senior member 4 2010-02-26 15:33:49.218/371.765 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=main, member=n/a): Restarting cluster 2010-02-26 15:33:49.421/371.968 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a 2010-02-26 15:33:49.625/372.172 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=n/a): This Member(Id=5, Timestamp=2010-02-26 15:33:50.499, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "cluster:0xDDEB" with senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, 
Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) Cool, isn't it?

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just several weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday 25th October at 0900 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available, to view on-demand at your convenience. In the webinar, we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including: - Foreign Key constraints: yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here. - Node.js NoSQL API: allowing web, mobile and cloud services to query and receive results sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here. - Auto-Installer: this new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here. So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open.

    Read the article

  • Linux stretch cluster: MD replication, DRBD or Veritas?

    - by PieterB
At the moment there are a lot of choices for setting up a Linux cluster. For the cluster manager you can use Red Hat Cluster Manager, Pacemaker or Veritas Cluster Server. The first one has the most momentum, the second one comes by default with RH subscriptions and the last one is very expensive and has a very good reputation ;-) For storage: - You can replicate LUNs using a software RAID / md device - You can use the network with DRBD replication, which offers a bit more flexibility - You can use Veritas Storage Foundation technology to talk to your SAN's replication technology. Does anyone have any recommendations or experience with these technologies?

    Read the article

  • Product Update Bulletin: Oracle Solaris Cluster October 2013

    - by uwes
Announcing new qualifications and general news for the Oracle Solaris Cluster product. Hardware Qualifications Sun Server X4-2 and X4-2L servers, Sun Blade X4-2B server module with Oracle Solaris Cluster 3.3 Sun Storage 16 Gb Fibre Channel ExpressModule Universal HBA, Emulex Oracle Dual Port QDR InfiniBand Adapter M3 Software Qualifications Oracle Database 12c Real Application Cluster with Oracle Solaris Cluster 4.1 Oracle Database 11.2.0.4 single instance and RAC with Oracle Solaris Cluster 4.1 Oracle VM server for SPARC 3.1 SAP Netweaver with new kernel versions ZFS Storage Appliance Kit version 2011.1.7.0 and 2013.1.0.0 Application monitoring in Oracle VM for SPARC failover guest domain Storage Partner Update Oracle Solaris Cluster 3.3 3/13 with the HDS Enterprise Storage arrays EMC SRDF for Oracle database 12c RAC in Oracle Solaris Cluster 4.1 geo cluster configuration Oracle Solaris Cluster References Korea Enterprise Data, HDFC Securities, Dealis Fund Operations Web Updates New blog entry: Oracle Solaris 10 Brand Zone cluster Solaris Application Engineering website now includes Oracle Solaris Cluster application support information Please read the Oracle Solaris Cluster Product Update Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) _____________________________________________________________________ For More Information Go To: Oracle.com Oracle Solaris Cluster page; Oracle Technology Network Oracle Solaris Cluster page; Oracle Solaris Cluster MOS community; Partner web Oracle Solaris Cluster page; Oracle Solaris Cluster Blog; Solaris.us.oracle.com page

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >