Search Results

Search found 4125 results on 165 pages for 'hash cluster'.


  • Migrating Identity Providers - specifying a new user's password hash.

    - by Stephen Denne
    We'd like to switch Identity Provider (and Web Access Manager), and also the user directory we use, but would like to do so without users needing to change their password. We currently have the SSHA hashes of the passwords. I'm expecting to write code to perform the migration. I don't mind how complex the code has to be; rather, my concern is whether such a migration is possible at all. MS Active Directory would be our preferred user store, but I believe that it cannot have new users set up in it with a particular password hash. Is that correct? What user directory stores can be populated with users already set up with an SSHA password hash? What Identity Provider and Access Management products work with those stores?
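
    A quick way to see exactly what an SSHA value carries (and therefore what a target directory would have to accept verbatim) is to reproduce one. A minimal Python sketch, assuming the common {SSHA} layout of base64(SHA1(password + salt) + salt); the sample password is made up:

        import base64, hashlib, os

        def make_ssha(password, salt=None):
            # {SSHA} = base64( SHA1(password + salt) + salt )
            salt = salt or os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        def check_ssha(password, ssha_value):
            # recompute the digest with the stored salt and compare
            raw = base64.b64decode(ssha_value[len("{SSHA}"):])
            digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
            return hashlib.sha1(password.encode("utf-8") + salt).digest() == digest

        stored = make_ssha("s3cret")
        print(stored, check_ssha("s3cret", stored))

    Any directory that stores userPassword in this pre-hashed form (OpenLDAP-style directories generally do) can take the value as-is; Active Directory, which keeps its own hash formats, generally cannot, which matches the concern above.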

    Read the article

  • What do you do with a 15 node decommissioned Itanium cluster?

    - by Gomibushi
    We are decommissioning a 15-node Itanium cluster. We don't know what to do with it. Being geeks we want to put it (or its individual nodes) to some cool use, but since it is Itanium we are a bit unsure what that could/would be. We are not bringing it back as production servers, and we are considering giving the nodes away if anyone wants them. It's not the most spiffy hardware, but being 2U rack servers they pack an OK amount of CPU and memory; they're perhaps about three years old. Any ideas as to what runs well on them, or what one could use them as?

    Read the article

  • Is Tomcat Shared Session / Cluster between two machines possible?

    - by Snorri
    I have a setup of several Tomcat servers distributed between a few machines, all running the same thing. Apache sits on top of the Tomcats, with a load balancer in front of the Apache servers. I want to cluster the Tomcats using shared sessions to minimize downtime and user interruption while deploying apps. I know clustering works within the same server, but is it possible to set up Tomcat in a way that it shares sessions between servers on different machines? Server 1 -> Apache 1 -> Tomcat 1; Server 2 -> Apache 2 -> Tomcat 2. When Server/Tomcat 1 is taken down, users and their sessions would transfer over to Server/Tomcat 2, and vice versa.
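
    For reference, cross-machine session replication is normally switched on per Tomcat instance in conf/server.xml; a minimal sketch under the usual assumptions (the webapp is marked <distributable/> in web.xml, multicast is available between the machines, and jvmRoute on <Engine> matches the Apache/mod_jk worker so sessions stay sticky until a node goes down):

        <!-- conf/server.xml on each node, inside the <Engine> element -->
        <!-- with no nested elements this defaults to DeltaManager plus multicast membership -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    With that in place, sessions written on Tomcat 1 are replicated to Tomcat 2, so taking one node down should not log users out.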

    Read the article

  • Server to server replication and CPU and 32K/corrupt doc

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server to server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5 server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication. I realise the IBM recommendation is a dedicated port for cluster traffic. Cluster queues are in tolerance, and under heavy application user load, i.e. the maximum number of documents are being created, edited, deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and initiates a PUSH-PULL replication with its cluster mates every 60 mins, so that the replication load is taken by the hub and not the cluster mates. The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30 mins. This maxes out the nserver.exe task on the "cluster mate", which causes it to respond to HTTP requests very slowly. In the past, we have found that if a corrupt document is in the DB, it can have this effect, but on those occasions the server log will show the corrupt doc noteId; we run fixup, all well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run a: fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time out limit to 5 mins (the max we usually see under full user load is 0.1 min), to prevent it rep'ing for 30 mins as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert. Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to find the source of the issue as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2-way data transfer. But if we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help/comments on this would be gratefully received.

    Read the article

  • Is it right to call an EJB bean from a thread via ThreadPoolExecutor?

    - by kislo_metal
    I am trying to call an EJB bean method from a thread and I am getting an error (the application server is GlassFish v3): Log Level SEVERE Logger javax.enterprise.system.std.com.sun.enterprise.v3.services.impl Name-Value Pairs {_ThreadName=Thread-1, _ThreadID=42} Record Number 928 Message ID java.lang.NullPointerException at ua.co.rufous.server.broker.TempLicService.run(TempLicService.java Complete Message 35) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:637) Here is the thread: public class TempLicService implements Runnable { String hash; // it's a Stateful bean @EJB private LicActivatorLocal lActivator; public TempLicService(String hash) { this.hash= hash; } @Override public void run() { lActivator.proccessActivation(hash); } } My ThreadPoolExecutor: public class RequestThreadPoolExecutor extends ThreadPoolExecutor { private boolean isPaused; private ReentrantLock pauseLock = new ReentrantLock(); private Condition unpaused = pauseLock.newCondition(); private static RequestThreadPoolExecutor threadPool; private RequestThreadPoolExecutor() { super(1, Integer.MAX_VALUE, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()); System.out.println("RequestThreadPoolExecutor created"); } public static RequestThreadPoolExecutor getInstance() { if (threadPool == null) threadPool = new RequestThreadPoolExecutor(); return threadPool; } public void runService(Runnable task) { threadPool.execute(task); } protected void beforeExecute(Thread t, Runnable r) { super.beforeExecute(t, r); pauseLock.lock(); try { while (isPaused) unpaused.await(); } catch (InterruptedException ie) { t.interrupt(); } finally { pauseLock.unlock(); } } public void pause() { pauseLock.lock(); try { isPaused = true; } finally { pauseLock.unlock(); } } public void resume() { pauseLock.lock(); try { isPaused = false; unpaused.signalAll(); } finally { pauseLock.unlock(); } } public void shutDown() { threadPool.shutdown(); } //<<<<<< creating thread here public void runByHash(String hash) { Runnable service = new TempLicService(hash); threadPool.runService(service); } } And the method where I call it (it is a GWT servlet, but there is no problem calling a thread that does not contain an EJB): @Override public Boolean submitHash(String hash) { System.out.println("submiting hash"); try { if (tBoxService.getTempLicStatus(hash) == 1) { //<<< here is the call RequestThreadPoolExecutor.getInstance().runByHash(hash); return true; } } catch (NoResultException e) { e.printStackTrace(); } return false; } I need to organize some kind of pool for submitting hashes to the server (calls to the LicActivator bean). Is the ThreadPoolExecutor design a good idea, and why is it not working in my case? (As I understand it we can't create a thread inside a bean, but can we call a bean from different threads?) If not, what is the best practice for organizing such a request pool? Thanks. << Answer: I am using DI (EJB 3.1), so I do not need any lookup here (the application is packed in an EAR with both modules, web and EJB, in it, and it works perfectly for me), but I can use it only in managed classes. So: 2. Can I use a manual lookup in the thread? Could I use a bean that extends ThreadPoolExecutor and calls another bean that implements Runnable, or is that not allowed?
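
    For what it's worth, EJB 3.1 (which GlassFish v3 ships) offers a container-managed alternative to a hand-rolled ThreadPoolExecutor: an @Asynchronous bean method runs on the container's own thread pool, so injection keeps working and no threads are created by application code. A rough sketch under that assumption (the wrapper bean is hypothetical; names follow the question):

        import java.util.concurrent.Future;
        import javax.ejb.AsyncResult;
        import javax.ejb.Asynchronous;
        import javax.ejb.EJB;
        import javax.ejb.Stateless;

        @Stateless
        public class LicActivationService {

            @EJB
            private LicActivatorLocal lActivator; // injection works here: this is a managed bean

            // The container dispatches this call onto its own pool and returns immediately.
            @Asynchronous
            public Future<Boolean> activate(String hash) {
                lActivator.proccessActivation(hash);
                return new AsyncResult<Boolean>(Boolean.TRUE);
            }
        }

    The GWT servlet would then inject LicActivationService and call activate(hash) instead of going through RequestThreadPoolExecutor.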

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.    Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to set up and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data: $ hadoop fs -ls /user/hrdata  Found 1 items  -rw-r--r--   1 oracle supergroup   70 2013-10-31 10:38 /user/hrdata/salaries.txt  $ hadoop fs -cat /user/hrdata/salaries.txt  Tom Brady,11000000  Tom Hanks,5000000  Bob Smith,250000  Oprah,300000000  User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.  $ hadoop fs -ls /user Found 1 items drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata However, DrEvil cannot view the contents of the folder due to lack of access privileges: $ hadoop fs -ls /user/hrdata ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------ Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster: $ sudo useradd hr $ sudo passwd $ su hr $ hadoop fs -cat /user/hrdata/salaries.txt Tom Brady,11000000 Tom Hanks,5000000 Bob Smith,250000 Oprah,300000000 Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. 
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
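
    To make the authentication section above concrete: on a Kerberized BDA the same command-line session only works after a ticket has been obtained, so the local-useradd masquerade shown earlier no longer helps. A sketch of the flow (realm and principal names are made up):

        # without a ticket, the request is rejected with a GSS/Kerberos
        # "no valid credentials" style error rather than returning data
        $ hadoop fs -cat /user/hrdata/salaries.txt

        # obtain a ticket as the hr principal, then retry
        $ kinit hr@EXAMPLE.COM
        Password for hr@EXAMPLE.COM:
        $ hadoop fs -cat /user/hrdata/salaries.txt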

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides a large range of interesting solutions to ensure High Availability of the database. Dataguard, RAC or even both configurations (as recommended by Oracle for a Maximum Available Architecture - MAA) are the most frequently found and used solutions. But, when it comes to protecting a system with an Active/Passive architecture with failover capabilities, people often think of other, expensive, third-party cluster systems. Oracle Clusterware technology, which comes along at no extra cost with Oracle Database or Oracle Unbreakable Linux, is - in the minds of most people - often linked to Oracle RAC and is therefore seldom used to implement failover solutions. Oracle Clusterware 11gR2  (a part of Oracle 11gR2 Grid Infrastructure)  provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and then to protect, almost any kind of application (from the simple xclock to the most complex Application Server). Quoting Oracle: “Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster.” In the next couple of lines, I will try to present the different steps to achieve this goal : Have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2, Oracle Grid Infrastructure 11gR2 on a Linux system and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at Oracle Documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system. Unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course around the Clusterware Framework. I just hope it will let you see how powerful it is and that it will give you the wish to go further with it...  Note : This entry has been revised (rev.2) following comments from Philip Newlan. Prerequisite 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel. Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN Oracle 11gR2 Database (11.2.0.1) Oracle 11gR2 Grid Infrastructure (11.2.0.1)   Step 1 - Install the software Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA) Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster) Use ASM_CRS to store Voting Disk and OCR. Use SCAN. Install Oracle Database Standalone binaries on both nodes. Use asmca to check/mount the disk groups on 2 nodes Use dbca to create and configure a database on the primary node Let's name it DB11G. Copy the pfile, password file to the second node. Create the adump directory on the second node.   Step 2 - Set up the resource to be protected After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (ex: kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command: crsctl status resource, which shows an ora.db11g.db entry. 
Let's save the definition of this resource, for future use : mkdir -p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So, let's remove it from the OCR definitions, we don't need it ! srvctl stop database -d DB11G srvctl remove database -d DB11G Instead of it, we need to create a new resource of a more general type : cluster_resource. Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'clean')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown abort    ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}' ` ;do kill -9 $i; done EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop, clean and check the database. It is self-explaining and contains nothing special. Just be aware that it must be runnable (+x), it runs as Oracle user (because of the ACL property - see later) and needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the “last gasp clean up”, for example, a shutdown abort or a kill –9 of all the remaining processes. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. 
NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h Register the resource. Take care of the resource type. It needs to be a cluster_resource and not a ora.database.type resource (Oracle recommendation) .   crsctl add resource DB11G.db  -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt Step 3 - Start the resource crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster. Step 4 - Test this We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT  (on the two nodes)  to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but, this time we can use a more "MS$ like" method :Turn off the server on which the database is running. After short delay, you should observe that the database is relocated on node 1. 
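
    A quick way to follow the failover during these tests is to query the resource state before and after each step, using the same crsctl syntax as above (exact output formatting varies by version):

        crsctl status resource DB11G.db
        # expected: ONLINE on oelcluster01 before the relocate,
        # and ONLINE on oelcluster02 after "crsctl relocate resource DB11G.db"
        # (or after powering off the first node)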
Conclusion Once the software installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy : Create an executable action script on every node of the cluster. Create a resource file. Create/Register the resource with OCR using the resource file. Start the resource. This solution is a very interesting alternative to licensable third party solutions. References Clusterware 11gR2 documentation Oracle Clusterware Resource Reference Clusterware for Unbreakable Linux Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to have an idea of complexity) Oracle Clusterware on OTN   Gilles Haro Technical Expert - Core Technology, Oracle Consulting   

    Read the article

  • How can I rank teams based on head-to-head wins/losses

    - by TMP
    I'm trying to write an algorithm (specifically in Ruby) that will rank teams based on their record against each other. If a team A and team B have won the same number of games against each other, then it goes down to point differentials. Here's an example: A beats B two times B beats C one time A beats D three times C beats D two times D beats C one time B beats A one time Which sort of reduces to A[B] = 2 B[C] = 1 A[D] = 3 C[D] = 2 D[C] = 1 B[A] = 1 Which sort of reduces to A[B] = 1 B[C] = 1 A[D] = 3 C[D] = 1 D[C] = -1 B[A] = -1 Which is about how far I've got. I think the results of this specific algorithm would be: A, B, C, D But I'm stuck on how to transition from my nested hash-like structure to the results. My pseudo-code is as follows (I can post my Ruby code too if someone wants): For each game(g): hash[g.winner][g.loser] += 1 That leaves hash as the first reduction above hash2 = clone of hash For each key(winner), value(losers hash) in hash: For each key(loser), value(losses against winner): hash2[loser][winner] -= losses Which leaves hash2 as the second reduction Feel free to ask me questions or edit this to be clearer; I'm not sure how to put it in a very eloquent way. Thanks!
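
    One way to finish the transition is to rank with a pairwise comparison: use the head-to-head net result where two teams have met, and fall back to each team's overall net record where they have not. A rough Ruby sketch, assuming hash2 is the second reduction above (note that genuinely cyclic results would make any pairwise sort ambiguous):

        # hash2: team => { opponent => net result vs. that opponent }
        # e.g. { "A" => { "B" => 1, "D" => 3 }, "B" => { "C" => 1, "A" => -1 }, ... }
        teams = (hash2.keys + hash2.values.flat_map(&:keys)).uniq

        head_to_head = lambda do |a, b|
          (hash2.fetch(a, {})[b] || 0) - (hash2.fetch(b, {})[a] || 0)
        end
        overall = lambda do |t|
          teams.reduce(0) { |sum, o| sum + head_to_head.call(t, o) }
        end

        ranking = teams.sort do |a, b|
          h = head_to_head.call(b, a)              # negative => a beat b head to head
          h.zero? ? overall.call(b) <=> overall.call(a) : h <=> 0
        end
        # => ["A", "B", "C", "D"] for the example data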

    Read the article

  • Best Practice: What should the hashCode() implementation be if the custom fields used in the equals() method are null?

    - by goodspeed
    What is the best practice to return a value for the hashCode() method if the custom fields used in equals() are null? I have a situation where the equals() override is implemented using custom fields. Usually it is better to override hashCode() using the same custom fields used in equals(). But if all the custom fields used in equals() are null, then what would be the best implementation for hashCode()? Example: class Person { private String firstName; private String lastName; public String getFirstName() { return firstName; } public String getLastName() { return lastName; } @Override public boolean equals(Object object) { boolean result = false; if (object == null || object.getClass() != getClass()) { result = false; } else { Person person = (Person) object; if (this.firstName == person.getFirstName() && this.lastName == person.getLastName()) { result = true; } } return result; } @Override public int hashCode() { int hash = 3; if(this.firstName == null || this.lastName == null) { // What is the best practice here, // is return super.hashCode() better ? } hash = 7 * hash + this.firstName.hashCode(); hash = 7 * hash + this.lastName.hashCode(); return hash; } } Is it required to check for null in hashCode()? If yes, what should be returned if the custom values are null?
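
    One widely used null-safe option (Java 7 and later) is to delegate to java.util.Objects, which treats a null field as a fixed contribution instead of throwing; a sketch of both methods under that approach:

        import java.util.Objects;

        @Override
        public boolean equals(Object object) {
            if (this == object) return true;
            if (object == null || object.getClass() != getClass()) return false;
            Person person = (Person) object;
            // Objects.equals is null-safe: two null fields compare as equal
            return Objects.equals(firstName, person.firstName)
                && Objects.equals(lastName, person.lastName);
        }

        @Override
        public int hashCode() {
            // Objects.hash copes with nulls and keeps the equals/hashCode contract:
            // objects that are equal (including ones with null fields) hash alike
            return Objects.hash(firstName, lastName);
        }

    Returning a constant for the all-null case would also satisfy the contract; it just puts those objects into a single hash bucket.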

    Read the article

  • Keyboard navigation for jQuery Tabs

    - by Binyamin
    How to make Keyboard navigation left/up/right/down (like for photo gallery) feature for jQury Tabs with History? Demo without Keyboard feature in http://dl.dropbox.com/u/6594481/tabs/index.html Needed functions: 1. on keyboardtop/down make select and CSS showactivenested ajax tabs from 1-st to last level 2. on keyboardleft/right changeback/forwardcontent ofactivenested ajax tabs tab 3. an extra option, makeactivenested ajax tab on 'cursor-on' on concrete nested ajax tabs level Read more detailed question with example pictures in http://stackoverflow.com/questions/2975003/jquery-tools-to-make-keyboard-and-cookies-feature-for-ajaxed-tabs-with-history /** * @license * jQuery Tools @VERSION Tabs- The basics of UI design. * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. * * http://flowplayer.org/tools/tabs/ * * Since: November 2008 * Date: @DATE */ (function($) { // static constructs $.tools = $.tools || {version: '@VERSION'}; $.tools.tabs = { conf: { tabs: 'a', current: 'current', onBeforeClick: null, onClick: null, effect: 'default', initialIndex: 0, event: 'click', rotate: false, // 1.2 history: false }, addEffect: function(name, fn) { effects[name] = fn; } }; var effects = { // simple "toggle" effect 'default': function(i, done) { this.getPanes().hide().eq(i).show(); done.call(); }, /* configuration: - fadeOutSpeed (positive value does "crossfading") - fadeInSpeed */ fade: function(i, done) { var conf = this.getConf(), speed = conf.fadeOutSpeed, panes = this.getPanes(); if (speed) { panes.fadeOut(speed); } else { panes.hide(); } panes.eq(i).fadeIn(conf.fadeInSpeed, done); }, // for basic accordions slide: function(i, done) { this.getPanes().slideUp(200); this.getPanes().eq(i).slideDown(400, done); }, /** * AJAX effect */ ajax: function(i, done) { this.getPanes().eq(0).load(this.getTabs().eq(i).attr("href"), done); } }; var w; /** * Horizontal accordion * * @deprecated will be replaced with a more robust implementation */ $.tools.tabs.addEffect("horizontal", function(i, done) { // store original width of a pane into memory if (!w) { w = this.getPanes().eq(0).width(); } // set current pane's width to zero this.getCurrentPane().animate({width: 0}, function() { $(this).hide(); }); // grow opened pane to it's original width this.getPanes().eq(i).animate({width: w}, function() { $(this).show(); done.call(); }); }); function Tabs(root, paneSelector, conf) { var self = this, trigger = root.add(this), tabs = root.find(conf.tabs), panes = paneSelector.jquery ? 
paneSelector : root.children(paneSelector), current; // make sure tabs and panes are found if (!tabs.length) { tabs = root.children(); } if (!panes.length) { panes = root.parent().find(paneSelector); } if (!panes.length) { panes = $(paneSelector); } // public methods $.extend(this, { click: function(i, e) { var tab = tabs.eq(i); if (typeof i == 'string' && i.replace("#", "")) { tab = tabs.filter("[href*=" + i.replace("#", "") + "]"); i = Math.max(tabs.index(tab), 0); } if (conf.rotate) { var last = tabs.length -1; if (i < 0) { return self.click(last, e); } if (i > last) { return self.click(0, e); } } if (!tab.length) { if (current >= 0) { return self; } i = conf.initialIndex; tab = tabs.eq(i); } // current tab is being clicked if (i === current) { return self; } // possibility to cancel click action e = e || $.Event(); e.type = "onBeforeClick"; trigger.trigger(e, [i]); if (e.isDefaultPrevented()) { return; } // call the effect effects[conf.effect].call(self, i, function() { // onClick callback e.type = "onClick"; trigger.trigger(e, [i]); }); // default behaviour current = i; tabs.removeClass(conf.current); tab.addClass(conf.current); return self; }, getConf: function() { return conf; }, getTabs: function() { return tabs; }, getPanes: function() { return panes; }, getCurrentPane: function() { return panes.eq(current); }, getCurrentTab: function() { return tabs.eq(current); }, getIndex: function() { return current; }, next: function() { return self.click(current + 1); }, prev: function() { return self.click(current - 1); } }); // callbacks $.each("onBeforeClick,onClick".split(","), function(i, name) { // configuration if ($.isFunction(conf[name])) { $(self).bind(name, conf[name]); } // API self[name] = function(fn) { $(self).bind(name, fn); return self; }; }); if (conf.history && $.fn.history) { $.tools.history.init(tabs); conf.event = 'history'; } // setup click actions for each tab tabs.each(function(i) { $(this).bind(conf.event, function(e) { self.click(i, e); return e.preventDefault(); }); }); // cross tab anchor link panes.find("a[href^=#]").click(function(e) { self.click($(this).attr("href"), e); }); // open initial tab if (location.hash) { self.click(location.hash); } else { if (conf.initialIndex === 0 || conf.initialIndex > 0) { self.click(conf.initialIndex); } } } // jQuery plugin implementation $.fn.tabs = function(paneSelector, conf) { // return existing instance var el = this.data("tabs"); if (el) { return el; } if ($.isFunction(conf)) { conf = {onBeforeClick: conf}; } // setup conf conf = $.extend({}, $.tools.tabs.conf, conf); this.each(function() { el = new Tabs($(this), paneSelector, conf); $(this).data("tabs", el); }); return conf.api ? el: this; }; }) (jQuery); /** * @license * jQuery Tools @VERSION History "Back button for AJAX apps" * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. 
* * http://flowplayer.org/tools/toolbox/history.html * * Since: Mar 2010 * Date: @DATE */ (function($) { var hash, iframe, links, inited; $.tools = $.tools || {version: '@VERSION'}; $.tools.history = { init: function(els) { if (inited) { return; } // IE if ($.browser.msie && $.browser.version < '8') { // create iframe that is constantly checked for hash changes if (!iframe) { iframe = $("<iframe/>").attr("src", "javascript:false;").hide().get(0); $("body").append(iframe); setInterval(function() { var idoc = iframe.contentWindow.document, h = idoc.location.hash; if (hash !== h) { $.event.trigger("hash", h); } }, 100); setIframeLocation(location.hash || '#'); } // other browsers scans for location.hash changes directly without iframe hack } else { setInterval(function() { var h = location.hash; if (h !== hash) { $.event.trigger("hash", h); } }, 100); } links = !links ? els : links.add(els); els.click(function(e) { var href = $(this).attr("href"); if (iframe) { setIframeLocation(href); } // handle non-anchor links if (href.slice(0, 1) != "#") { location.href = "#" + href; return e.preventDefault(); } }); inited = true; } }; function setIframeLocation(h) { if (h) { var doc = iframe.contentWindow.document; doc.open().close(); doc.location.hash = h; } } // global histroy change listener $(window).bind("hash", function(e, h) { if (h) { links.filter(function() { var href = $(this).attr("href"); return href == h || href == h.replace("#", ""); }).trigger("history", [h]); } else { links.eq(0).trigger("history", [h]); } hash = h; window.location.hash = hash; }); // jQuery plugin implementation $.fn.history = function(fn) { $.tools.history.init(this); // return jQuery return this.bind("history", fn); }; })(jQuery); $(function() { $("#list").tabs("#content > div", {effect: 'ajax', history: true}); });
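
    Since the pasted tabs source already exposes click()/next()/prev() and stores its instance via $(this).data("tabs", el), a keyboard layer can be bolted on without touching the plugin. A rough sketch (key codes 37/39 are left/right and 38/40 are up/down; the up/down behaviour is only a placeholder for the nested-tabs requirement, and it must run after the tabs have been initialised):

        $(function() {
          var api = $("#list").data("tabs");   // instance stored by $("#list").tabs(...)
          if (!api) { return; }
          $(document).keydown(function(e) {
            switch (e.keyCode) {
              case 37: api.prev();  return false;                         // left: previous pane
              case 39: api.next();  return false;                         // right: next pane
              case 38: api.click(0); return false;                        // up: first tab (placeholder)
              case 40: api.click(api.getTabs().length - 1); return false; // down: last tab (placeholder)
            }
          });
        });

    If the address-bar hash should change as well, one option is to trigger a click on the tab anchor itself, e.g. api.getTabs().eq(i).click(), so the history handler bound to the links runs too.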

    Read the article

  • Why is jQuery .load() firing twice?

    - by LeslieOA
    Hello S-O. I'm using jQuery 1.4 with jQuery History and trying to figure out why Firebug/Web Inspector are showing 2 XHR GET requests on each page load (double that amount when visiting my sites homepage (/ or /#). e.g. Visit this (or any) page with Firebug enabled. Here's the edited/relevant code (see full source): - $(document).ready(function() { $('body').delegate('a', 'click', function(e) { var hash = this.href; if (hash.indexOf(window.location.hostname) > 0) { /* Internal */ hash = hash.substr((window.location.protocol+'//'+window.location.host+'/').length); $.historyLoad(hash); return false; } else if (hash.indexOf(window.location.hostname) == -1) { /* External */ window.open(hash); return false; } else { /* Nothing to do */ } }); $.historyInit(function(hash) { $('#loading').remove(); $('#container').append('<span id="loading">Loading...</span>'); $('#ajax').animate({height: 'hide'}, 'fast', 'swing', function() { $('#page').empty(); $('#loading').fadeIn('fast'); if (hash == '') { /* Index */ $('#ajax').load('/ #ajax','', function() { ajaxLoad(); }); } else { $('#ajax').load(hash + ' #ajax', '', function(responseText, textStatus, XMLHttpRequest) { switch (XMLHttpRequest.status) { case 200: ajaxLoad(); break; case 404: $('#ajax').load('/404 #ajax','', ajaxLoad); break; // Default 404 default: alert('We\'re experiencing technical difficulties. Try refreshing.'); break; } }); } }); // $('#ajax') }); // historyInit() function ajaxLoad() { $('#loading').fadeOut('fast', function() { $(this).remove(); $('#ajax').animate({height: 'show', opacity: '1'}, 'fast', 'swing'); }); } }); A few notes that may be helpful: - Using WordPress with default/standard .htaccess I'm redirecting /links-like/this to /#links-like/this via JavaScript only (PE) I'm achieving the above with window.location.replace(addr); and not window.location=addr; Feel free to visit my site if needed. Thanks in advanced.

    Read the article

  • Using Handlebars.js issue

    - by Roland
    I'm having a small issue when I'm compiling a template with Handlebars.js . I have a JSON text file which contains an big array with objects : Source ; and I'm using XMLHTTPRequest to get it and then parse it so I can use it when compiling the template. So far the template has the following structure : <div class="product-listing-wrapper"> <div class="product-listing"> <div class="left-side-content"> <div class="thumb-wrapper"> <img src="{{ThumbnailUrl}}"> </div> <div class="google-maps-wrapper"> <div class="google-coordonates-wrapper"> <div class="google-coordonates"> <p>{{LatLon.Lat}}</p> <p>{{LatLon.Lon}}</p> </div> </div> <div class="google-maps-button"> <a class="google-maps" href="#" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}">Google Maps</a> </div> </div> </div> <div class="right-side-content"></div> </div> And the following block of code would be the way I'm handling the JS part : $(document).ready(function() { /* Default Javascript Options ~a javascript object which contains all the variables that will be passed to the cluster class */ var default_cluster_options = { animations : ['flash', 'bounce', 'shake', 'tada', 'swing', 'wobble', 'wiggle', 'pulse', 'flip', 'flipInX', 'flipOutX', 'flipInY', 'flipOutY', 'fadeIn', 'fadeInUp', 'fadeInDown', 'fadeInLeft', 'fadeInRight', 'fadeInUpBig', 'fadeInDownBig', 'fadeInLeftBig', 'fadeInRightBig', 'fadeOut', 'fadeOutUp', 'fadeOutDown', 'fadeOutLeft', 'fadeOutRight', 'fadeOutUpBig', 'fadeOutDownBig', 'fadeOutLeftBig', 'fadeOutRightBig', 'bounceIn', 'bounceInUp', 'bounceInDown', 'bounceInLeft', 'bounceInRight', 'bounceOut', 'bounceOutUp', 'bounceOutDown', 'bounceOutLeft', 'bounceOutRight', 'rotateIn', 'rotateInDownLeft', 'rotateInDownRight', 'rotateInUpLeft', 'rotateInUpRight', 'rotateOut', 'rotateOutDownLeft', 'rotateOutDownRight', 'rotateOutUpLeft', 'rotateOutUpRight', 'lightSpeedIn', 'lightSpeedOut', 'hinge', 'rollIn', 'rollOut'], json_data_url : 'data.json', template_data_url : 'template.php', base_maps_api_url : 'https://maps.googleapis.com/maps/api/js?sensor=false', cluser_wrapper_id : '#content-wrapper', maps_wrapper_class : '.google-maps', }; /* Cluster ~main class, handles all javascript operations */ var Cluster = function(environment, cluster_options) { var self = this; this.options = $.extend({}, default_cluster_options, cluster_options); this.environment = environment; this.animations = this.options.animations; this.json_data_url = this.options.json_data_url; this.template_data_url = this.options.template_data_url; this.base_maps_api_url = this.options.base_maps_api_url; this.cluser_wrapper_id = this.options.cluser_wrapper_id; this.maps_wrapper_class = this.options.maps_wrapper_class; this.test_environment_mode(this.environment); this.initiate_environment(); this.test_xmlhttprequest_availability(); this.initiate_gmaps_lib_load(self.base_maps_api_url); this.initiate_data_processing(); }; /* Test Environment Mode ~adds a modernizr test which looks wheater the cluster class is initiated in development or not */ Cluster.prototype.test_environment_mode = function(environment) { var self = this; return Modernizr.addTest('test_environment', function() { return (typeof environment !== 'undefined' && environment !== null && environment === "Development") ? 
true : false; }); }; /* Test XMLHTTPRequest Availability ~adds a modernizr test which looks wheater the xmlhttprequest class is available or not in the browser, exception makes IE */ Cluster.prototype.test_xmlhttprequest_availability = function() { return Modernizr.addTest('test_xmlhttprequest', function() { return (typeof window.XMLHttpRequest === 'undefined' || window.XMLHttpRequest === null) ? true : false; }); }; /* Initiate Environment ~depending on what the modernizr test returns it puts LESS in the development mode or not */ Cluster.prototype.initiate_environment = function() { return (Modernizr.test_environment) ? (less.env = "development", less.watch()) : true; }; Cluster.prototype.initiate_gmaps_lib_load = function(lib_url) { return Modernizr.load(lib_url); }; /* Initiate XHR Request ~prototype function that creates an xmlhttprequest for processing json data from an separate json text file */ Cluster.prototype.initiate_xhr_request = function(url, mime_type) { var request, data; var self = this; (Modernizr.test_xmlhttprequest) ? request = new ActiveXObject('Microsoft.XMLHTTP') : request = new XMLHttpRequest(); request.onreadystatechange = function() { if(request.readyState == 4 && request.status == 200) { data = request.responseText; } }; request.open("GET", url, false); request.overrideMimeType(mime_type); request.send(); return data; }; Cluster.prototype.initiate_google_maps_action = function() { var self = this; return $(this.maps_wrapper_class).each(function(index, element) { return $(element).on('click', function(ev) { var html = $('<div id="map-canvas" class="map-canvas"></div>'); var latitude = $(element).attr('data-latitude'); var longitude = $(element).attr('data-longitude'); log("LAT : " + latitude); log("LON : " + longitude); $.lightbox(html, { "width": 900, "height": 250, "onOpen" : function() { } }); ev.preventDefault(); }); }); }; Cluster.prototype.initiate_data_processing = function() { var self = this; var json_data = JSON.parse(self.initiate_xhr_request(self.json_data_url, 'application/json; charset=ISO-8859-1')); var source_data = self.initiate_xhr_request(self.template_data_url, 'text/html'); var template = Handlebars.compile(source_data); for(var i = 0; i < json_data.length; i++ ) { var result = template(json_data[i]); $(result).appendTo(self.cluser_wrapper_id); } self.initiate_google_maps_action(); }; /* Cluster ~initiate the cluster class */ var cluster = new Cluster("Development"); }); My problem would be that I don't think I'm iterating the JSON object right or I'm using the template the wrong way because if you check this link : http://rolandgroza.com/labs/valtech/ ; you will see that there are some numbers there ( which represents latitude and longitude ) but they are all the same and if you take only a brief look at the JSON object each number is different. So what am I doing wrong that it makes the same number repeat ? Or what should I do to fix it ? I must notice that I've just started working with templates so I have little knowledge it.

    Read the article

  • Add a decorated function to a class

    - by wiso
    I have a decorated function (simplified version): class Memoize: def __init__(self, function): self.function = function self.memoized = {} def __call__(self, *args, **kwds): hash = args try: return self.memoized[hash] except KeyError: self.memoized[hash] = self.function(*args) return self.memoized[hash] @Memoize def _DrawPlot(self, options): do something... Now I want to add this method to a pre-existing class: ROOT.TChain.DrawPlot = _DrawPlot When I call this method: chain = TChain() chain.DrawPlot(opts) I get: self.memoized[hash] = self.function(*args) TypeError: _DrawPlot() takes exactly 2 arguments (1 given) Why doesn't it propagate self?
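
    The missing piece is the descriptor protocol: a plain function turns into a bound method through its __get__ method, but a Memoize instance has no __get__, so the attribute comes back unbound and is called without self. One common fix, sketched under that explanation:

        import functools

        class Memoize(object):
            def __init__(self, function):
                self.function = function
                self.memoized = {}

            def __call__(self, *args):
                try:
                    return self.memoized[args]
                except KeyError:
                    self.memoized[args] = self.function(*args)
                    return self.memoized[args]

            def __get__(self, obj, objtype=None):
                # Re-bind on attribute access so methods receive self,
                # exactly as an ordinary function would.
                if obj is None:
                    return self
                return functools.partial(self.__call__, obj)

    With this in place, ROOT.TChain.DrawPlot = _DrawPlot works, because chain.DrawPlot(opts) now resolves to a partial that already carries chain as the first argument (and the instance becomes part of the memoization key).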

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/ --- at least for the Cloudera Distribution of Hadoop, CDH). Can we define multiple Job Trackers in mapred-site.xml? Something like: <configuration> <property> <name>mapred.job.tracker</name> <value>jt01.mydomain.not:8021</value> </property> <property> <name>mapred.job.tracker</name> <value>jt02.mydomain.not:8021</value> </property> ... </configuration> Or is there some other allowed syntax for this? What are the implications of doing this? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers coordinate their scheduling across the TT nodes based only on the gossip information from the TTs, or would they need to talk to one another? Is this documented anywhere?

    Read the article

  • What data structure should I use for hash lookup as well as binary search?

    - by zebraman
    I am working on a school homework. I have a list of names. I want to be able to perform binary search on these names (find all names between a lower and upper bound) for first name as well as last name, and perform keyword searches as well (this will be accomplished using hashing. For example, if I have the names Garfield Cat Snoopy Dog Captain Crunch Fat Cat then a binary search of first names (C,H) will return Captain Crunch, Fat Cat, and Garfield Cat. A binary search of last names (Cr,D) will return Captain Crunch. A keyword search of 'cat' will return Fat Cat and Garfield Cat. I understand binary search will only work on a sorted list, but since I am planning on searching two different criteria, I will have to sort the list by last name or first name depending on what I'm searching for. I feel like it will be too inefficient to have to resort the list each time I want to perform a new binary search. Would it just be better for me to set up and maintain two sorted lists (one for sorted by first name, one for sorted by last name)? Also, for hashing, will I have to set up a different table of names for that as well? I understand each keyword will hash to some value determined by a hash function, and this value (or key) is a table address where the corresponding names are stored. So I just want to know what would be the best way to solve this problem? Maintaining separate structures, or is there a way to efficiently do everything I want with just one data structure?
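
    Keeping two sorted lists plus a keyword hash is usually cheap if inserts go through bisect rather than a full re-sort; a small Python sketch of that layout (class and method names are made up):

        import bisect
        from collections import defaultdict

        class NameIndex:
            def __init__(self):
                self.by_first = []                 # tuples kept sorted by (first, last)
                self.by_last = []                  # tuples kept sorted by (last, first)
                self.keywords = defaultdict(set)   # hash lookup: word -> set of names

            def add(self, first, last):
                bisect.insort(self.by_first, (first, last))
                bisect.insort(self.by_last, (last, first))
                for word in (first + " " + last).lower().split():
                    self.keywords[word].add((first, last))

            def range_by_first(self, lo, hi):
                # all names whose first name falls in [lo, hi)
                i = bisect.bisect_left(self.by_first, (lo,))
                j = bisect.bisect_left(self.by_first, (hi,))
                return self.by_first[i:j]

            def by_keyword(self, word):
                return self.keywords[word.lower()]

    Each insert costs O(n) for the list shift but only O(log n) for the search position, which is typically fine at homework scale; a single structure covering everything would point toward something heavier such as a trie or a balanced tree with secondary indexes.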

    Read the article

  • Do I need to Salt and Hash a randomly generated token?

    - by wag2639
    I'm using Adam Griffiths's Authentication Library for CodeIgniter and I'm tweaking the usermodel. I came across a generate function that he uses to generate tokens. His preferred approach is to reference a value from random.org, but I considered that superfluous. I'm using his fallback approach of randomly generating a 20-character-long string: $length = 20; $characters = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'; $token = ''; for ($i = 0; $i < $length; $i++) { $token .= $characters[mt_rand(0, strlen($characters)-1)]; } He then hashes this token using a salt (I'm combining code from different functions) sha1($this->CI->config->item('encryption_key').$str); I was wondering if there's any reason to run the token through the salted hash? I've read that simply randomly generating strings is a naive way of making random passwords, but is the sha1 hash and salt necessary? Note: I got my encryption_key from https://www.grc.com/passwords.htm (63 random alpha-numeric characters)

    Read the article

  • Openswan ipsec transport tunnel not going up

    - by gparent
    On ClusterA and ClusterB I have installed the "openswan" package on Debian Squeeze. ClusterA's IP is 172.16.0.107, B's is 172.16.0.108. When they ping one another, the packets do not reach the destination.

    Read the article

  • IPSec VPN using ZyWALL IPSec VPN Client: unable to connect from some providers

    - by Reshi
    I'm trying to configure an IPSec VPN to one company from my home. The company has SANET internet service provider. I was able to create a VPN connection from another company that has the same internet service provider. The problem begins when I'm trying to connect from another ISP like Orange or Telekom. Here is the log from ZyWall: 20120816 10:06:18:359 Default (SA Gateway-P1) SEND phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID] 20120816 10:06:18:375 Default (SA Gateway-P1) RECV phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID] [VID] [VID] [VID] 20120816 10:06:18:390 Default (SA Gateway-P1) SEND phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D] 20120816 10:06:18:718 Default (SA Gateway-P1) RECV phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D] 20120816 10:06:18:734 Default (SA Gateway-P1) SEND phase 1 Main Mode [HASH] [ID] 20120816 10:06:18:750 Default (SA Gateway-P1) RECV phase 1 Main Mode [HASH] [ID] 20120816 10:06:18:750 Default phase 1 done: initiator id [email protected], responder id 111.112.113.114 20120816 10:06:18:765 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID] 20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) RECV phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID] 20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH] 20120816 10:06:48:968 Default (SA Gateway-P1) SEND Informational [HASH] [NOTIFY] type DPD_R_U_THERE 20120816 10:06:48:984 Default (SA Gateway-P1) RECV Informational [HASH] [NOTIFY] type DPD_R_U_THERE_ACK ZyWall informs me that the tunnel was opened. But I can't ping or access any computer in the network. My configuration at home: ISP: Orange Optical connection Terminal: GPON OPTICAL NETWORK TERMINAL G-25E Router: TPLink TL-WR941N --> SPI Firewall Enabled --> VPN - IPSEC Passthrough Enabled I was wondering if the problem could not be on ISP side (that he blocks somehow this connection because in SANET ISP it worked fine) or even in my terminal or router. What could I check? Where could be the problem ?

    Read the article

  • Why is my code signing (MS Authenticode) verification failing?

    - by Tim
    I posted this question and have a freshly minted code signing cert from Thawte. I followed the instructions (or so I thought) and the code signing claims to be done right, however when I try to verify the tool shows an error. I have no idea what it means and no idea how to fix this. Any comments would be appreciated. Command line to sign exe: signtool sign /f mdt.pfx /p password /t http://timestamp.verisign.com/scripts/timstamp.dll test.exe Results: The following certificate was selected: Issued to: [my company] Issued by: Thawte Code Signing CA Expires: 4/23/2011 7:59:59 PM SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601 Done Adding Additional Store Attempting to sign: test.exe Successfully signed and timestamped: test.exe Number of files successfully Signed: 1 Number of warnings: 0 Number of errors: 0 Note that there are no errors or warnings. Now, when I try to verify imagine my surprise: signtool verify /v test.exe results in: Verifying: test.exe SHA1 hash of file: 490BA0656517D3A322D19F432F1C6D40695CAD22 Signing Certificate Chain: Issued to: Thawte Premium Server CA Issued by: Thawte Premium Server CA Expires: 12/31/2020 7:59:59 PM SHA1 hash: 627F8D7827656399D27D7F9044C9FEB3F33EFA9A Issued to: Thawte Code Signing CA Issued by: Thawte Premium Server CA Expires: 8/5/2013 7:59:59 PM SHA1 hash: A706BA1ECAB6A2AB18699FC0D7DD8C7DE36F290F Issued to: [my company] Issued by: Thawte Code Signing CA Expires: 4/23/2011 7:59:59 PM SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601 The signature is timestamped: 4/27/2010 10:19:19 AM Timestamp Verified by: Issued to: Thawte Timestamping CA Issued by: Thawte Timestamping CA Expires: 12/31/2020 7:59:59 PM SHA1 hash: BE36A4562FB2EE05DBB3D32323ADF445084ED656 Issued to: VeriSign Time Stamping Services CA Issued by: Thawte Timestamping CA Expires: 12/3/2013 7:59:59 PM SHA1 hash: F46AC0C6EFBB8C6A14F55F09E2D37DF4C0DE012D Issued to: VeriSign Time Stamping Services Signer - G2 Issued by: VeriSign Time Stamping Services CA Expires: 6/14/2012 7:59:59 PM SHA1 hash: ADA8AAA643FF7DC38DD40FA4C97AD559FF4846DE Number of files successfully Verified: 0 Number of warnings: 0 Number of errors: 1

    Read the article

  • Python metaclass for enforcing immutability of custom types

    - by Mark Lehmacher
    Having searched for a way to enforce immutability of custom types and not having found a satisfactory answer, I came up with my own shot at a solution in the form of a metaclass:
        class ImmutableTypeException( Exception ): pass

        class Immutable( type ):
            '''
            Enforce some aspects of the immutability contract for new-style classes:
             - attributes must not be created, modified or deleted after object construction
             - immutable types must implement __eq__ and __hash__
            '''

            def __new__( meta, classname, bases, classDict ):
                instance = type.__new__( meta, classname, bases, classDict )
                # Make sure __eq__ and __hash__ have been implemented by the immutable type.
                # In the case of __hash__ also make sure the object default implementation has been overridden.
                # TODO: the check for eq and hash functions could probably be done more directly and thus more
                #       efficiently (hasattr does not seem to traverse the type hierarchy)
                if not '__eq__' in dir( instance ):
                    raise ImmutableTypeException( 'Immutable types must implement __eq__.' )
                if not '__hash__' in dir( instance ):
                    raise ImmutableTypeException( 'Immutable types must implement __hash__.' )
                if _methodFromObjectType( instance.__hash__ ):
                    raise ImmutableTypeException( 'Immutable types must override object.__hash__.' )
                instance.__setattr__ = _setattr
                instance.__delattr__ = _delattr
                return instance

            def __call__( self, *args, **kwargs ):
                obj = type.__call__( self, *args, **kwargs )
                obj.__immutable__ = True
                return obj

        def _setattr( self, attr, value ):
            if '__immutable__' in self.__dict__ and self.__immutable__:
                raise AttributeError( "'%s' must not be modified because '%s' is immutable" % ( attr, self ) )
            object.__setattr__( self, attr, value )

        def _delattr( self, attr ):
            raise AttributeError( "'%s' must not be deleted because '%s' is immutable" % ( attr, self ) )

        def _methodFromObjectType( method ):
            '''
            Return True if the given method has been defined by object, False otherwise.
            '''
            try:
                # TODO: Are we exploiting an implementation detail here? Find better solution!
                return isinstance( method.__objclass__, object )
            except:
                return False
    However, while the general approach seems to be working rather well, there are still some iffy implementation details (also see the TODO comments in the code):
    How do I check whether a particular method has been implemented anywhere in the type hierarchy?
    How do I check which type is the origin of a method declaration (i.e. as part of which type a method has been defined)?
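    A minimal sketch of both checks, assuming plain Python with no external libraries (the helper names has_method and defining_class are illustrative, not part of the original code): walking cls.__mro__ and looking each name up in every class's own __dict__ answers both questions, and it works the same for new-style classes in Python 2 and for Python 3 classes.
        def has_method(cls, name):
            # Is `name` defined anywhere in the hierarchy, ignoring object's defaults?
            return any(name in base.__dict__ for base in cls.__mro__ if base is not object)

        def defining_class(cls, name):
            # Return the first class in the MRO whose own __dict__ defines `name`,
            # i.e. the origin of the declaration (object itself counts here).
            for base in cls.__mro__:
                if name in base.__dict__:
                    return base
            return None

        class Point(object):
            def __init__(self, x, y):
                self.x, self.y = x, y
            def __eq__(self, other):
                return (self.x, self.y) == (other.x, other.y)
            def __hash__(self):
                return hash((self.x, self.y))

        assert has_method(Point, '__eq__')                  # defined by Point
        assert not has_method(Point, '__lt__')              # nothing but object's default
        assert defining_class(Point, '__hash__') is Point   # origin of the declaration
        assert defining_class(Point, '__repr__') is object  # only object provides __repr__
        assert Point.__hash__ is not object.__hash__        # the default has been overridden
    Looking the name up in each class's own __dict__ avoids relying on __objclass__, which the second TODO flags as an implementation detail.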

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to tips on structuring the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 form the actual Hadoop cluster with the NameNodes, YARN, etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup; if anyone can chime in on the points I am not clear about, that would be great. I was thinking about the best setup for a small cluster, so I decided to expose only the manager machine and probably use that to submit all the jobs. The other machines will see each other, but will not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great. Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the client machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar? To sum it up, my questions are:
    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows?
    4. Is there any reason not to use Windows as a client machine in this setup?
    Thanks, I would greatly appreciate any advice or help on this.

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements when masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time, the masked output should not be predictable. Deterministic masking also eliminates the need to spend an enormous amount of time identifying data relationships, i.e. parent-child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have at least minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation. Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs:
    1. Substitute
    2. SQL Expression
    3. Encrypt
    4. User Defined Function
    SUBSTITUTE
    The Substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider that DEPARTMENT_ID in the EMPLOYEES table is replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of a DEPARTMENT_ID, say '101', will be replaced with '502', provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format within a masking definition:
    Note that the uniqueness of the masked value depends on the number of values in the substitution column, i.e. if the original column contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values; without that, only consistency is maintained, not uniqueness.
    SQL EXPRESSION
    SQL Expression replaces an existing value with the output of a specified SQL expression. For example, while masking an EMPLOYEES table, suppose the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are actual column names of the EMPLOYEES table; the corresponding SQL Expression will look like %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub-expressions, which means you can write a nested SQL statement and use it as a SQL Expression to address complex masking business use cases. A SQL Expression can also be used to consistently replace a value with a hashed value using Oracle's ORA_HASH function. The following SQL Expression helps in the previous example to replace the DEPARTMENT_IDs with a hashed number: ORA_HASH(%DEPARTMENT_ID%, 1000). The following screen shot shows the usage of the SQL Expression masking format within the masking definition:
    ORA_HASH takes three arguments:
    1. Expression, which can be of any data type except LONG, LOB, or user-defined type [nested table type is allowed]. In the above example I used the original value as the expression.
    2. Number of hash buckets, which can be a number between 0 and 4294967295. The default value is 4294967295.
    You can also correlate the number of hash buckets to a range of numbers. In the above example the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000.
    3. Seed, which can be any number and determines the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed, the function will look like: ORA_HASH(%DEPARTMENT_ID%, 1000, 1234).
    The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, collisions should be expected well before the bucket space is exhausted: by the birthday paradox there is roughly a 0.5 probability of a collision after about 2^16 distinct values, and by the pigeonhole principle a collision is guaranteed beyond 2^32 distinct values.
    ENCRYPT
    The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screen shot shows the usage of the Encrypt masking format within the masking definition:
    Regular expressions may look complex to first-time users, but you will soon realize that it is a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing regular expressions; of these, the following My Oracle Support document helped me get started with regular expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1).
    USER DEFINED FUNCTION [UDF]
    A User Defined Function, or UDF, provides the flexibility for users to code their own masking logic in PL/SQL, which can then be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is:
        Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2;
    where:
    • rowid is the row identifier of the column that needs to be masked
    • column_name is the name of the column that needs to be masked
    • original_value is the column value that needs to be masked
    You can achieve deterministic masking by using Oracle's built-in hash functions such as ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic, unique hexadecimal values for a given string input:
        CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2)
        RETURN VARCHAR2
        DETERMINISTIC
        PARALLEL_ENABLE
        IS
          stext varchar2(26);
          no_of_characters number(2);
        BEGIN
          no_of_characters := 6;
          -- MD4 hash (algorithm 1), keeping only the first 6 hexadecimal characters
          stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 0, no_of_characters);
          RETURN stext;
        END;
    The uniqueness depends on the input, the length of the output string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH call], which is a 128-bit algorithm and can produce 2^128 distinct hash values; however, the output here is truncated to 6 hexadecimal characters, so only 16^6 unique values can be generated. Also, do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
    Another example is to consistently replace characters or numbers while preserving the length and special characters, as shown below:
        CREATE OR REPLACE FUNCTION RD_DUS (rid varchar2, column_name varchar2, orig_val VARCHAR2)
        RETURN VARCHAR2
        DETERMINISTIC
        PARALLEL_ENABLE
        IS
          stext varchar2(26);
        BEGIN
          -- Seeding the random generator with the original value keeps the output deterministic
          DBMS_RANDOM.SEED(orig_val);
          stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
          stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
          stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
          stext := REPLACE(stext, '.', '0');
          RETURN stext;
        END;
    The following screen shot shows the usage of a UDF within a masking definition:
    To summarize, Oracle Data Masking and Subsetting helps you consistently mask data across databases using one or all of the methods described in this post, and it saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking
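    To illustrate the deterministic bucketing idea described above in a language-agnostic way, here is a conceptual sketch only (not Oracle code; the function name, the use of SHA-256, and the sample values are all assumptions): the same input and seed always map to the same bucket, and the mapping cannot be reversed without brute force.
        # Deterministic, non-reversible masking into hash buckets,
        # similar in spirit to ORA_HASH(expression, buckets, seed).
        import hashlib

        def mask_to_bucket(value, buckets=1000, seed="1234"):
            digest = hashlib.sha256((seed + str(value)).encode("utf-8")).hexdigest()
            return int(digest, 16) % (buckets + 1)  # result is always in 0..buckets

        # The same DEPARTMENT_ID maps to the same masked value on every run and every database.
        assert mask_to_bucket(101) == mask_to_bucket(101)
        print(mask_to_bucket(101), mask_to_bucket(102))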

    Read the article

  • Is reading xml simple in rails or converting it to hash will be simpler?

    - by Salil
    Hi All, sorry for this question, but after spending 1-2 hours on how to read XML, I thought posting it on the forum would be better. I get a complex (very large) XML response from the trackify plugin. I want to read some values from it, so I convert it into a hash and then read it as follows. For example, to read the city:
        @tracking_info['TrackResponse']['Shipment']['ShipTo']['Address']['City'] #>> "SEATTLE"
    My question is: is this the proper way of handling the XML response, or are there XML methods that are simpler to use?

    Read the article

  • What's the best library to do a URL hash/history in JQuery?

    - by alex
    I've been looking around at jQuery libraries for handling the URL hash, but found none that were good. There is the "history plugin", but we all know it's buggy and isn't flexible. I am loading my pages inside a div, and I'll need a way to do back/forward along with the URL hashing:
        mydomain.com/#home
        mydomain.com/#aboutus
        mydomain.com/#register
    What's the best library that can handle all of this?

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >