Search Results

Search found 1781 results on 72 pages for 'cluster'.

Page 14/72 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Run a script prior to start of SQL instance via Windows clusters

    - by Shahryar G. Hashemi
    Hi, we have a Windows 2008 cluster with several SQL 2008 instances. We would like to run a script that modifies 4 registry keys prior to the startup of SQL. I do not know whether there is a way to have Windows 2008 clustering run such a script. I have a VBS script that does it and tried to add a Generic Script resource to an existing cluster group, but it failed, saying it could not be registered. Any ideas?

    Read the article

  • subgraph cluster ranking in dot

    - by Chris Becke
    I'm trying to use graphviz on MediaWiki as a documentation tool for software. First, I documented some class relationships, which worked well: everything was ranked vertically as expected. But some of our modules are DLLs, which I wanted to separate into a box. When I added those nodes to a cluster, they got a border, but the clusters seem to follow an LR ranking rule; or perhaps being added to a cluster broke the TB ranking of the nodes, since the clusters now appear at the side of the graph. This graph represents what I am trying to do: at the moment, cluster1 and cluster2 appear to the right of cluster0, but I want/need them to appear below it.
    <graphviz>
    digraph d {
      subgraph cluster0 {
        A -> {B1 B2}
        B2 -> {C1 C2 C3}
        C1 -> D;
      }
      subgraph cluster1 {
        C2 -> dll1_A;
        dll1_A -> B1;
      }
      subgraph cluster2 {
        C3 -> dll2_A;
      }
      dll1_A -> dll2_A;
    }
    </graphviz>

    Read the article

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and two ESXi hosts that I have connected as shown in the "best practice" map by VMware below: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf Even though I created the VLANs on the SG200 and set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NICs are connected), I cannot ping from host 1 to host 2 when I configure the NICs to use VLAN 608. Am I missing something? My IPs are all in the 192.168. range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router since that's the case, is my understanding correct? I'm also including my switch config screenshot below; all 3 switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition): http://img31.imageshack.us/img31/2503/switch.gif Any ideas on what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range. VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition, the only requirements from VSA are that my two hosts: are in the same subnet, have static IP addresses set, and have the same default gateway configured. If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP for each VLAN, and then set routes between them? Thank you for your time!

    Read the article

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay running on bundled Tomcat, using cluster link and shared documents. Let's say the name of the public community is fubar, the friendly URL used is fubar.lipsum.com, and each server listens on port 8080. If I go to either server1:8080 or server2:8080 I get the default page for Liferay. How can I test fubar.lipsum.com against each node by hitting the backend server directly, so I can verify each server? If I test it, the request just goes to the load balancer; I wish there were a way to point the request at a specific backend to bring it up. I can add the friendly URL to my local machine's hosts file and this seems to kinda work, but once something is called in the application, it goes out again from the backend server, switches to SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
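    One way to exercise a specific node without going through the load balancer is to send the request straight to that backend while presenting the virtual host name in the Host header, so Liferay still resolves the friendly URL. This is only a rough sketch, assuming the Python requests library and the host/port names mentioned in the question; SSL and redirect handling may need adjusting for a real setup:

    import requests

    # Hypothetical backend node list; replace with the real hosts/ports.
    NODES = ["http://server1:8080", "http://server2:8080"]
    VIRTUAL_HOST = "fubar.lipsum.com"   # the Liferay friendly URL / virtual host

    for node in NODES:
        # Hit the node directly, but supply the virtual host name so Liferay
        # serves the fubar community instead of the default landing page.
        resp = requests.get(node, headers={"Host": VIRTUAL_HOST}, allow_redirects=False)
        print(node, resp.status_code, resp.headers.get("Location", ""))

    Running this against each node in turn shows whether both backends answer for the friendly URL without ever touching the load balancer.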

    Read the article

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to this filesystem from the Linux system is moderately heavy. Every 3-4 weeks (it's not on any kind of discernible schedule, and sometimes is more/less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep); 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:
    Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
    Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK
    Relevant /etc/fstab line:
    nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0
    I've checked to see if there are any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network-related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problem of processes hanging while accessing the filesystem; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)

    Read the article

  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or whether I could run it locally?
    [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
    Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
    Tue Oct 2 21:33:48 [initandlisten]
    Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
    Tue Oct 2 21:33:48 [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
    Tue Oct 2 21:33:48 [initandlisten] ** numactl --interleave=all mongod [other options]
    Tue Oct 2 21:33:48 [initandlisten]
    Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
    Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
    Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
    Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
    Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal
    Tue Oct 2 21:33:48 [initandlisten] recover begin
    Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
    Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
    Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
    Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
    Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
    Tue Oct 2 21:33:48 [initandlisten] recover done
    Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
    Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017
    It basically waits forever and I cannot get mongodb started for my work. These servers are not webservers, but they do have network access; it's a cloud-computing LSF environment. Any advice would be welcome, thanks in advance.
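    Note that the final log line ("waiting for connections on port 27017") suggests mongod did start successfully; it simply stays in the foreground of the interactive session. A minimal sketch for checking this from another node, assuming the pymongo driver is available and using the hostname taken from the log output above:

    import pymongo

    # Hypothetical host name taken from the question's log output; adjust as needed.
    client = pymongo.MongoClient("node1382", 27017)

    # server_info() raises an exception if mongod is unreachable; otherwise it
    # returns build details such as the server version string.
    info = client.server_info()
    print("connected to MongoDB", info["version"])

    To keep mongod running without tying up the interactive shell, it can be started with --fork --logpath <file>, or submitted as a long-running LSF job with bsub.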

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but I am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:
    <rm>
      <failoverdomains>
        <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
          <failoverdomainnode name="rhc-n1"/>
        </failoverdomain>
        <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
          <failoverdomainnode name="rhc-n2"/>
        </failoverdomain>
      </failoverdomains>
      <resources>
        <script file="/etc/init.d/clvm" name="clvmd"/>
        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
      </resources>
      <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
        <script ref="clvmd">
          <clusterfs ref="gfs"/>
        </script>
      </service>
      <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
        <script ref="clvmd">
          <clusterfs ref="gfs"/>
        </script>
      </service>
    </rm>
    This is what I mean: when using the clusterfs resource agent to handle the GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue clusvcadm -s shared-storage-inst1, clvm is stopped but GFS is not unmounted, so the node can no longer alter the LVM structure on shared storage, yet it can still access the data. And even though a node can do so quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find the dlm lock produced by GFS and fail to stop. I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources - or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like: chkconfig gfs2 on. If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues but also when any of the resources misbehave. So, which is the right way for me to go:
    1. leave the GFS partition mounted (any reasons to do so?)
    2. set force_unmount="1" - won't this break anything? why is this not the default?
    3. use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition
    4. start it at boot and don't include it in cluster.conf (any reasons to do so?)
    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does /etc/cluster/cluster.conf look, for example, when configuring gfs with Conga or ccs (they are not available to me since for now I have to use Ubuntu for the cluster)? Thank you very much!

    Read the article

  • HYPER-V R2 Can not mount ISO from network location (UNC Path)

    - by Entity_Razer
    So, as the name suggests, I'm trying to mount an ISO from a network share using the UNC path on a Hyper-V R2 cluster. This is a pure demo/test-case setup with 2x Hyper-V R2 hosts and 1x NAS/iSCSI CSV. Cluster management is happening through the MMC with the RSAT tools. What I've done so far: set up the cluster and configured quorum, added CSV shares and disks, and set up 1 virtual machine on the Hyper-1 node. What I'm trying to do is: go to Settings -> DVD Drive -> use network location -> pick the ISO file and press "Apply". The first error I got was "User account does not have rights to mount iso". I stopped getting that message when I went to the Hyper-V node settings and ticked "Use Default Credentials Automatically". Now I no longer get the "user does not have rights..." message, but I get the following instead: Error applying DVD Drive Changes - Failed to remove device Microsoft Synthetic DVD Drive: "the specified network resource or device is no longer available". I've googled the problem but am unable to find a solution. Anyone here able to help me out? Much obliged!

    Read the article

  • I need advice about iscsi + zfs(or ntfs) + windows 2008 clustering

    - by Fatih
    I want to set up a storage farm with iSCSI. I have 2 cluster node machines and 1 iSCSI target machine that has 8TB installed as RAID 10. The capacity is now 8TB, but I'll upgrade it in the future. Let's say I install the cluster nodes as file servers, connect these servers to the iSCSI target, and then share the 8TB capacity as a single folder to the Windows users. Users now see only one folder whose capacity is 8TB. But if I add another 8TB to expand the main capacity, the users must not see a second folder for the new 8TB; they must still see a single folder as before, only now with its capacity expanded to 16TB. And so on: if I add another 8TB, the users must still deal with only one folder. For this purpose, I've learnt that ZFS can expand its size without a problem. But if I use ZFS as the file system on the iSCSI LUNs, how can the cluster machines see the ZFS volume, given that the cluster machines run Windows 2008? Is there another way to expand the size of the shared folder without a problem? Does NTFS support it?

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource, it does not pull through my settings to enable network access. The log shows this:
    MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 0, Trasaction Manager Communication: Allow Inbound Transactions = 0, Allow Outbound Transactions = 0, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1
    whereas when I restart the local DTC service it says this:
    Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 1, Trasaction Manager Communication: Allow Inbound Transactions = 1, Allow Outbound Transactions = 1, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = No Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1
    The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
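    Since the question explicitly asks for a more readable and scalable version, here is a hedged sketch of one way to collapse the three copy/pasted groupings into a single pass. It assumes the Peak class and the dataset attributes (time_point, cluster, node, qty1, qty2) from the question; the key-function approach is just one possible refactoring, not the only answer:

    import operator
    from collections import defaultdict, namedtuple

    # Assumed stand-in for the question's Peak class.
    Peak = namedtuple("Peak", ["time_point", "qty1", "qty2"])

    def find_peaks(dataset, key_funcs):
        """Single pass over the time-sorted dataset.

        key_funcs maps a grouping name ('system', 'cluster', 'node') to a
        function that extracts that grouping's key from a sample.
        Returns {grouping name: {key: Peak at the cumulative maximum}}.
        """
        qty1 = {name: defaultdict(int) for name in key_funcs}
        qty2 = {name: defaultdict(int) for name in key_funcs}
        best = {name: defaultdict(int) for name in key_funcs}
        peaks = {name: {} for name in key_funcs}

        for t in sorted(dataset, key=operator.attrgetter("time_point")):
            for name, key_of in key_funcs.items():
                k = key_of(t)
                qty1[name][k] += t.qty1
                qty2[name][k] += t.qty2
                combo = qty1[name][k] + qty2[name][k]
                if combo > best[name][k]:
                    best[name][k] = combo
                    peaks[name][k] = Peak(t.time_point, qty1[name][k], qty2[name][k])
        return peaks

    # Usage sketch: the 'system' grouping uses a constant key so every sample
    # falls into one bucket; cluster/node group by the sample's own attributes.
    # peaks = find_peaks(dataset, {
    #     "system": lambda t: "system",
    #     "cluster": operator.attrgetter("cluster"),
    #     "node": operator.attrgetter("node"),
    # })

    This keeps the single-loop property of the first snippet while removing the duplication, and adding another grouping later only requires another entry in key_funcs.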

    Read the article

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back on a central node (step 2)? Hadoop seems like huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job). I don't need shared memory or something like MPI/OpenMP, since I don't need to synchronize or share anything, and I don't need a distributed VM or anything as complex. Google's Sawzall seems to work only with Google's proprietary MapReduce API. Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and as far as I know you can't use it to execute scripts. Switching to Plan 9 or some other network-oriented OS looks like another overkill. I'm looking for a simple, distributed way to run awk scripts or similar - as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
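    For a nothing-shared setup like the one described, one common low-tech approach is plain ssh fan-out: run the command on every node in parallel and gather stdout on the submitting host. A rough sketch in Python, assuming passwordless ssh to the nodes; the host names and the awk command are placeholders, and tools such as pdsh or GNU parallel follow the same pattern:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical node list and command; replace with real hosts and your awk script.
    NODES = ["node01", "node02", "node03"]
    COMMAND = "awk -F'\\t' '{ sum += $3 } END { print sum }' /data/local_part.tsv"

    def run_on(node):
        # Each node processes only its local data slice; stdout comes back over ssh.
        result = subprocess.run(["ssh", node, COMMAND],
                                capture_output=True, text=True, check=True)
        return node, result.stdout.strip()

    # Step 1: execute in parallel; step 2: collect the results centrally.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for node, output in pool.map(run_on, NODES):
            print(node, output)

    The initialization overhead is just an ssh connection per node, and the central script can reduce or concatenate the per-node outputs however it likes.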

    Read the article

  • Google maps cluster manager

    - by Prashant
    Hello all, I am using Google Maps in my application and have to show 100 markers on the map. First I prepare a markers array from these markers. When the markers are added from the array using addOverlay, it takes some time and they are added in an animated way (in sequence). I want all of them to be added to the map in a single shot, with no flickering effect. I tried MarkerClusterer, but it shows a cluster of markers where needed; instead I want all the markers to appear individually, not as a cluster, only added faster.
    var point = new GLatLng(latArr[i],lonArr[i]);
    var marker = new GMarker(point,markerOptions);
    markers[i] = marker;
    var markerCluster = new MarkerClusterer(map, markers);
    Any suggestions please? Thank you.

    Read the article

  • SMO some times doesn't display the instances in sql2008 cluster

    - by Cute
    Hi, I have used the SMO API; specifically, I call SmoApplication.EnumAvailableServers(FALSE) and filter the local instances out of the result. I used this approach instead of passing TRUE so that it is also convenient for remote SQL discovery. Using that API I created a DLL, and I use that DLL from C++. This works in all combinations, but it sometimes fails to retrieve the instances in the Windows 2008 + SQL 2008 cluster combination. If I run the exe 5 times, it succeeds 3 times and fails 2 times... What is wrong with the Windows/SQL 2008 cluster? Are there any additional changes needed to make it work properly? My firewall is off, and I have also added an exception for TCP port 1433. Any help is greatly appreciated... Thanks in advance.

    Read the article

  • Determine cluster size of file system in Python

    - by Philip Fourie
    I would like to calculate the "size on disk" of a file in Python. Therefore I would like to determine the cluster size of the file system where the file is stored. How do I determine the cluster size in Python? Alternatively, another built-in method that calculates the "size on disk" would also work. I looked at os.path.getsize, but it returns the file size in bytes without taking the FS's block size into consideration. I am hoping that this can be done in an OS-independent way...
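    There is no single cross-platform call for this, but a sketch along the following lines covers the common cases: os.statvfs exposes the filesystem's block size on Unix-like systems, and GetDiskFreeSpaceW (via ctypes) reports sectors per cluster and bytes per sector on Windows. This is an illustrative sketch rather than a definitive recipe; on Unix, the st_blocks field of os.stat is another route to size on disk:

    import os
    import sys

    def cluster_size(path):
        """Allocation unit size of the filesystem containing `path` (best effort)."""
        if sys.platform == "win32":
            import ctypes
            sectors_per_cluster = ctypes.c_ulong(0)
            bytes_per_sector = ctypes.c_ulong(0)
            free_clusters = ctypes.c_ulong(0)
            total_clusters = ctypes.c_ulong(0)
            drive = os.path.splitdrive(os.path.abspath(path))[0] + "\\"
            ctypes.windll.kernel32.GetDiskFreeSpaceW(
                ctypes.c_wchar_p(drive),
                ctypes.byref(sectors_per_cluster), ctypes.byref(bytes_per_sector),
                ctypes.byref(free_clusters), ctypes.byref(total_clusters))
            return sectors_per_cluster.value * bytes_per_sector.value
        st = os.statvfs(path)
        return st.f_frsize  # fundamental filesystem block size

    def size_on_disk(path):
        """File size rounded up to whole allocation units."""
        block = cluster_size(path)
        size = os.path.getsize(path)
        return ((size + block - 1) // block) * block

    # Example: print(size_on_disk("some_file.dat"))

    Note that sparse files and filesystem compression can make the true on-disk usage smaller than this rounded-up figure.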

    Read the article

  • Clustering on WebLogic exception on Failover

    - by Markos Fragkakis
    Hi all, I deploy an application on a WebLogic 10.3.2 cluster with two nodes, and a load balancer in front of the cluster. I have set the <core:init distributable="true" debug="true" /> My Session and Conversation classes implement Serializable. I start using the application being served by the first node. The console shows that the session replication is working. <Jun 17, 2010 11:43:50 AM EEST> <Info> <Cluster> <BEA-000128> <Updating 5903057688359791237S:xxx.yyy.gr:[7002,7002,-1,-1,-1,-1,-1]:xxx.yyy.gr:7002,xxx.yyy.gr:7002:prs_domain:PRS_Server_2 in the cluster.> <Jun 17, 2010 11:43:50 AM EEST> <Info> <Cluster> <BEA-000128> <Updating 5903057688359791237S:xxx.yyy.gr:[7002,7002,-1,-1,-1,-1,-1]:xxx.yyy.gr:7002,xxx.yyy.gr:7002:prs_domain:PRS_Server_2 in the cluster.> When I shutdown the first node from the Administration console, I get this in the other node: <Jun 17, 2010 11:23:46 AM EEST> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed java.lang.NullPointerException. java.lang.NullPointerException at org.jboss.seam.intercept.JavaBeanInterceptor.callPostActivate(JavaBeanInterceptor.java:165) at org.jboss.seam.intercept.JavaBeanInterceptor.invoke(JavaBeanInterceptor.java:73) at com.myproj.beans.SortingFilteringBean_$$_javassist_seam_2.sessionDidActivate(SortingFilteringBean_$$_javassist_seam_2.java) at weblogic.servlet.internal.session.SessionData.notifyActivated(SessionData.java:2258) at weblogic.servlet.internal.session.SessionData.notifyActivated(SessionData.java:2222) at weblogic.servlet.internal.session.ReplicatedSessionData.becomePrimary(ReplicatedSessionData.java:231) at weblogic.cluster.replication.WrappedRO.changeStatus(WrappedRO.java:142) at weblogic.cluster.replication.WrappedRO.ensureStatus(WrappedRO.java:129) at weblogic.cluster.replication.LocalSecondarySelector$ChangeSecondaryInfo.run(LocalSecondarySelector.java:542) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) > What am I doing wrong? This is the SortingFilteringBean: import java.util.HashMap; import java.util.LinkedHashMap; import org.jboss.seam.ScopeType; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Scope; import com.myproj.model.crud.Filtering; import com.myproj.model.crud.Sorting; import com.myproj.model.crud.SortingOrder; /** * Managed bean aggregating the sorting and filtering values for all the * application's lists. A light-weight bean to always keep in the session with * minimum impact. */ @Name("sortingFilteringBean") @Scope(ScopeType.SESSION) public class SortingFilteringBean extends BaseManagedBean { private static final long serialVersionUID = 1L; private Sorting applicantProductListSorting; private Filtering applicantProductListFiltering; private Sorting homePageSorting; private Filtering homePageFiltering; /** * Creates a new instance of SortingFilteringBean. 
*/ public SortingFilteringBean() { // ********************** // Applicant Product List // ********************** // Sorting LinkedHashMap<String, SortingOrder> applicantProductListSortingValues = new LinkedHashMap<String, SortingOrder>(); applicantProductListSortingValues.put("applicantName", SortingOrder.ASCENDING); applicantProductListSortingValues.put("applicantEmail", SortingOrder.ASCENDING); applicantProductListSortingValues.put("productName", SortingOrder.ASCENDING); applicantProductListSortingValues.put("productEmail", SortingOrder.ASCENDING); applicantProductListSorting = new Sorting( applicantProductListSortingValues); // Filtering HashMap<String, String> applicantProductListFilteringValues = new HashMap<String, String>(); applicantProductListFilteringValues.put("applicantName", ""); applicantProductListFilteringValues.put("applicantEmail", ""); applicantProductListFilteringValues.put("productName", ""); applicantProductListFilteringValues.put("productEmail", ""); applicantProductListFiltering = new Filtering( applicantProductListFilteringValues); // ********* // Home page // ********* // Sorting LinkedHashMap<String, SortingOrder> homePageSortingValues = new LinkedHashMap<String, SortingOrder>(); homePageSortingValues.put("productName", SortingOrder.ASCENDING); homePageSortingValues.put("productId", SortingOrder.ASCENDING); homePageSortingValues.put("productAtcCode", SortingOrder.UNSORTED); homePageSortingValues.put("productEmaNumber", SortingOrder.UNSORTED); homePageSortingValues.put("productOrphan", SortingOrder.UNSORTED); homePageSortingValues.put("productRap", SortingOrder.UNSORTED); homePageSortingValues.put("productCorap", SortingOrder.UNSORTED); homePageSortingValues.put("applicationTypeDescription", SortingOrder.ASCENDING); homePageSortingValues.put("applicationId", SortingOrder.ASCENDING); homePageSortingValues .put("applicationEmaNumber", SortingOrder.UNSORTED); homePageSortingValues .put("piVersionImportDate", SortingOrder.ASCENDING); homePageSortingValues.put("piVersionId", SortingOrder.ASCENDING); homePageSorting = new Sorting(homePageSortingValues); // Filtering HashMap<String, String> homePageFilteringValues = new HashMap<String, String>(); homePageFilteringValues.put("productName", ""); homePageFilteringValues.put("productAtcCode", ""); homePageFilteringValues.put("productEmaNumber", ""); homePageFilteringValues.put("applicationTypeId", ""); homePageFilteringValues.put("applicationEmaNumber", ""); homePageFilteringValues.put("piVersionImportDate", ""); homePageFiltering = new Filtering(homePageFilteringValues); } /** * @return the applicantProductListFiltering */ public Filtering getApplicantProductListFiltering() { return applicantProductListFiltering; } /** * @param applicantProductListFiltering * the applicantProductListFiltering to set */ public void setApplicantProductListFiltering( Filtering applicantProductListFiltering) { this.applicantProductListFiltering = applicantProductListFiltering; } /** * @return the applicantProductListSorting */ public Sorting getApplicantProductListSorting() { return applicantProductListSorting; } /** * @param applicantProductListSorting * the applicantProductListSorting to set */ public void setApplicantProductListSorting( Sorting applicantProductListSorting) { this.applicantProductListSorting = applicantProductListSorting; } /** * @return the homePageSorting */ public Sorting getHomePageSorting() { return homePageSorting; } /** * @param homePageSorting * the homePageSorting to set */ public void setHomePageSorting(Sorting 
homePageSorting) { this.homePageSorting = homePageSorting; } /** * @return the homePageFiltering */ public Filtering getHomePageFiltering() { return homePageFiltering; } /** * @param homePageFiltering * the homePageFiltering to set */ public void setHomePageFiltering(Filtering homePageFiltering) { this.homePageFiltering = homePageFiltering; } /** * For convenience to view in the Seam Debug page. * * @see java.lang.Object#toString() */ @Override public String toString() { StringBuilder sb = new StringBuilder(""); sb.append("\n\n"); sb.append("applicantProductListSorting"); sb.append(applicantProductListSorting); sb.append("\n\n"); sb.append("applicantProductListFiltering"); sb.append(applicantProductListFiltering); sb.append("\n\n"); sb.append("homePageSorting"); sb.append(homePageSorting); sb.append("\n\n"); sb.append("homePageFiltering"); sb.append(homePageFiltering); return sb.toString(); } } And this is the BaseManagedBean, inheriting the AbstractMutable. import java.io.IOException; import java.io.OutputStream; import java.util.List; import javax.faces.application.FacesMessage; import javax.faces.application.FacesMessage.Severity; import javax.faces.context.FacesContext; import javax.servlet.http.HttpServletResponse; import org.apache.commons.lang.ArrayUtils; import org.jboss.seam.core.AbstractMutable; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.myproj.common.exceptions.WebException; import com.myproj.common.util.FileUtils; import com.myproj.common.util.StringUtils; import com.myproj.web.messages.Messages; public abstract class BaseManagedBean extends AbstractMutable { private static final Logger logger = LoggerFactory .getLogger(BaseManagedBean.class); private FacesContext facesContext; /** * Set a message to be displayed for a specific component. * * @param resourceBundle * the resource bundle where the message appears. Either base or * id may be used. * @param summaryResourceId * the id of the resource to be used as summary. For the detail * of the element, the element to be used will be the same with * the suffix {@code _detail}. * @param parameters * the parameters, in case the string is parameterizable * @param severity * the severity of the message * @param componentId * the component id for which the message is destined. Note that * an appropriate JSF {@code <h:message for="myComponentId">} tag * is required for the to appear, or alternatively a {@code * <h:messages>} tag. */ protected void setMessage(String resourceBundle, String summaryResourceId, List<Object> parameters, Severity severity, String componentId, Messages messages) { FacesContext context = getFacesContext(); FacesMessage message = messages.getMessage(resourceBundle, summaryResourceId, parameters); if (severity != null) { message.setSeverity(severity); } context.addMessage(componentId, message); } /** * Copies a byte array to the response output stream with the appropriate * MIME type and content disposition. The response output stream is closed * after this method. 
* * @param response * the HTTP response * @param bytes * the data * @param filename * the suggested file name for the client * @param mimeType * the MIME type; will be overridden if the filename suggests a * different MIME type * @throws IllegalArgumentException * if the data array is <code>null</code>/empty or both filename * and mimeType are <code>null</code>/empty */ protected void printBytesToResponse(HttpServletResponse response, byte[] bytes, String filename, String mimeType) throws WebException, IllegalArgumentException { if (response.isCommitted()) { throw new WebException("HTTP response is already committed"); } if (ArrayUtils.isEmpty(bytes)) { throw new IllegalArgumentException("Data buffer is empty"); } if (StringUtils.isEmpty(filename) && StringUtils.isEmpty(mimeType)) { throw new IllegalArgumentException( "Filename and MIME type are both null/empty"); } // Set content type (mime type) String calculatedMimeType = FileUtils.getMimeType(filename); // not among the known ones String newMimeType = mimeType; if (calculatedMimeType == null) { // given mime type passed if (mimeType == null) { // none available put default mime-type newMimeType = "application/download"; } else { if ("application/octet-stream".equals(mimeType)) { // small modification newMimeType = "application/download"; } } } else { // calculated mime type has precedence over given mime type newMimeType = calculatedMimeType; } response.setContentType(newMimeType); // Set content disposition and other headers String contentDisposition = "attachment;filename=\"" + filename + "\""; response.setHeader("Content-Disposition", contentDisposition); response.setHeader("Expires", "0"); response.setHeader("Cache-Control", "max-age=30"); response.setHeader("Pragma", "public"); // Set content length response.setContentLength(bytes.length); // Write bytes to response OutputStream out = null; try { out = response.getOutputStream(); out.write(bytes); } catch (IOException e) { throw new WebException("Error writing data to HTTP response", e); } finally { try { out.close(); } catch (Exception e) { logger.error("Error closing HTTP stream", e); } } } /** * Retrieve a session-scoped managed bean. * * @param sessionBeanName * the session-scoped managed bean name * @return the session-scoped managed bean */ protected Object getSessionBean(String sessionBeanName) { Object sessionScopedBean = FacesContext.getCurrentInstance() .getExternalContext().getSessionMap().get(sessionBeanName); if (sessionScopedBean == null) { throw new IllegalArgumentException("No such object in Session"); } else { return sessionScopedBean; } } /** * Set a session-scoped managed bean * * @param sessionBeanName * the session-scoped managed bean name * @return the session-scoped managed bean */ protected boolean setSessionBean(String sessionBeanName, Object sessionBean) { Object sessionScopedBean = FacesContext.getCurrentInstance() .getExternalContext().getSessionMap().get(sessionBeanName); if (sessionScopedBean == null) { FacesContext.getCurrentInstance().getExternalContext() .getSessionMap().put(sessionBeanName, sessionBean); } else { throw new IllegalArgumentException( "This session-scoped bean was already initialized"); } return true; } /** * For testing (enables mock of FacesContext) * * @return the faces context */ public FacesContext getFacesContext() { if (facesContext == null) { return FacesContext.getCurrentInstance(); } return facesContext; } /** * For testing (enables mocking of FacesContext). * * @param aFacesContext * a - possibly mock - faces context. 
*/ public void setFacesContext(FacesContext aFacesContext) { this.facesContext = aFacesContext; } }

    Read the article

  • Register Today for Upcoming Oracle Solaris Events!

    - by Terri Wischmann
    Don't miss out on the exciting upcoming events around Oracle Solaris 11 - register today for one or all of them! Please join us for the next Oracle Solaris Developer Webinar, "Simplify Your Development Environment with Zones, ZFS & More", on 04/10 @ 9am PT, presented by Eric Reid (Principal Software Engineer) and Stefan Schneider (Chief Technologist, ISV-Engineering). Register Now! Check out the upcoming free OTN Sys Admin Day on April 10th on the Oracle Santa Clara campus: a full day of hands-on labs, training, demos, and presentations. Come learn about Oracle Solaris 11, Oracle Solaris Studio, Oracle Technology Network and Oracle Enterprise Linux! Register Now! Attend the Oracle Solaris 11 technical track at the NLUUG Conference in The Netherlands on April 11th, 2012. This year, the conference will focus on operating system innovations. Come learn about the innovations Oracle Solaris 11 brings, with technical deep-dive talks presented by Oracle experts. For more information, including the agenda, click here

    Read the article

  • Oracle Solaris at OpenWorld Tokyo 2012

    - by Markus Weber
    Oracle OpenWorld Tokyo will open its doors in Roppongi on Wednesday, April 4, 2012, and run until Friday, April 6, 2012. If you've been to Tokyo as a gaijin, or foreigner, you know exactly where that is. Many of Oracle's top executives will be there, including Larry Ellison, Mark Hurd, and John Fowler, and the keynotes they are covering will be very interesting, for sure. Whether you will actually be there or not, you might still find it interesting that several great Solaris-related sessions will be held there, especially as part of the "Oracle Develop" track, such as: "Oracle Solaris 11 - Developers Need To Know", "How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris", "Oracle Solaris Tuning Contest", and "IT Assets preservation and constructive migration with Oracle Solaris virtualization". And of course there is John Fowler's keynote, "Server and Storage Systems Strategy". The complete schedule in English can be found here. We hope you can make it. If not, there will always be the San Francisco one.

    Read the article

  • Session serialization in JavaEE environment

    - by Ionut
    Please consider the following scenario: we are working on a Java EE project for which scalability is starting to become an issue. Up until now we were able to scale up, but that is no longer an option, so we need to consider scaling out and preparing the app for a clustered environment. Our main concern right now is serializing the user sessions. Sadly, we did not consider this issue from the beginning, and we are encountering the following exception: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: org.apache.catalina.session.StandardSessionFacade I did some research, and this exception is thrown because there are objects stored in the session which do not implement the Serializable interface. Considering that quite a few custom objects all over the app are stored in the session without implementing this interface, it would require a lot of tedious work and dedication to fix all these class declarations. We will fix all of them, but the main concern is that, in the future, a developer may add a non-Serializable object to the session and break session serialization & replication over multiple nodes. As a quick overview of the project: we are developing with a home-grown framework based on Struts 1 with the Servlet 3.0 API. This means that at this point we are using the standard session.getAttribute() and session.setAttribute() to work with the session, and the session handling is scattered all over the code base. Besides updating the classes of the objects stored in the session and making sure that they implement the Serializable interface, what other precautions should we take in order to ensure reliable session replication at the application layer? I know it is a little bit late to consider this, but what would be the best practice in this case? Furthermore, are there any other issues we should consider regarding this transition? Thank you in advance!

    Read the article

  • How to test issues in a local development environment that can only be introduced by clustering in production?

    - by Brian Reindel
    We recently clustered an application, and it came to light that it didn't work right because of how we do SSL offloading via the load balancer in production. I had to mimic this functionality on my local machine by doing SSL offloading with Apache as a proxy, but it still isn't a 1-to-1 comparison. Similar issues can arise when dealing with stateful applications and sticky sessions. What would be the industry standard for testing this kind of production "black box" scenario in a local environment, especially as it relates to clustering?

    Read the article

  • April 25th Online Forum -- Oracle Solaris 11: What's New Since the Launch

    - by Larry Wake
    It's been a few months since we released Oracle Solaris 11, so we thought it was time to check in and let you know how things are going. On April 25th, at 9:00 PT, we'll host an online forum, featuring Markus Flierl, the VP for Solaris core engineering, as well as engineers, customers and partners. During the forum, Markus and his crew will give an update on the release, recap Oracle's OS strategy, and give you a peek at what the engineers are working on for future updates. I think one of the more interesting parts of this event will be the chance for some of our customers to share why they've moved to Oracle Solaris 11 and what benefits it has already given them.  We'll also have an online chat, so you can ask Solaris engineers any questions about what you've heard, or other thoughts you've had.  It should be a worthwhile event -- hope you can join us. Online Forum: Oracle Solaris 11: What’s New Since the LaunchApril 25th 9:00 a.m. PDT – 11:30 a.m. PDTRegister today!

    Read the article

  • No proper kmeans clustering of images in matlab

    - by user3237134
    I have 1200 face images in my training set and 2989 face images in my test set. I am using eigenfaces (PCA) for feature extraction and k-means for clustering. Source code I tried:
    IDX = kmeans(z,5);
    clustercount=accumarray(IDX, ones(size(IDX)));
    disp(clustercount);
    Problem: the images are not clustered properly. Faces of the same person should end up in the same cluster, but different faces are being grouped together. Questions: Do I have to use more face images for training? How can clustering accuracy be improved? What is the solution?
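    For comparison, here is a rough Python/scikit-learn sketch of the same pipeline (PCA features followed by k-means). It is not the poster's MATLAB code; the image array, component count, and cluster count are illustrative assumptions. With eigenface projections, normalizing the features and choosing k close to the number of distinct people usually matters more than the clustering call itself:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical data: each row is a flattened grayscale face image.
    faces = np.random.rand(1200, 64 * 64)   # replace with the real training images

    n_components = 50      # number of eigenfaces to keep (assumption)
    n_clusters = 40        # roughly the number of distinct people (assumption)

    features = PCA(n_components=n_components).fit_transform(faces)
    features = StandardScaler().fit_transform(features)   # normalize eigenface weights

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

    # Cluster sizes, analogous to accumarray(IDX, ones(size(IDX))) in the question.
    print(np.bincount(labels))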

    Read the article

  • Collaborative Filtering Techniques

    - by user95261
    Good day! I need help implementing collaborative filtering techniques to predict the psychopathy of Twitter users. I have two data sets, a training set and a test set. The training set users already have psychopathy scores; I need a collaborative filtering technique to predict the scores of the test set users. I am considering approaches such as item/user-based CF, Bayesian belief nets, clustering, latent semantic models, etc. Please help me. :( I am very confused about how to implement any of these. Thank you!
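    As an illustration of the user-based variant, here is a hedged numpy sketch that predicts a test user's score as a similarity-weighted average of the most similar training users' known scores. The per-user feature matrix (e.g. Twitter-derived features) and the cosine-similarity choice are assumptions of this sketch; item-based CF, clustering, or a plain regressor would follow the same shape:

    import numpy as np

    def predict_scores(train_features, train_scores, test_features, k=10):
        """User-based CF: weighted average of the k most similar training users."""
        # Cosine similarity between every test user and every training user.
        tr = train_features / np.linalg.norm(train_features, axis=1, keepdims=True)
        te = test_features / np.linalg.norm(test_features, axis=1, keepdims=True)
        sim = te @ tr.T                                  # shape: (n_test, n_train)

        preds = np.empty(len(test_features))
        for i, row in enumerate(sim):
            top = np.argsort(row)[-k:]                   # k nearest training users
            weights = np.clip(row[top], 0, None)         # ignore negative similarity
            if weights.sum() == 0:
                preds[i] = train_scores.mean()           # fall back to the global mean
            else:
                preds[i] = np.dot(weights, train_scores[top]) / weights.sum()
        return preds

    # Usage sketch with made-up shapes:
    # train_X: (n_train, n_features), train_y: (n_train,), test_X: (n_test, n_features)
    # test_scores = predict_scores(train_X, train_y, test_X, k=10)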

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >