Search Results

Search found 16491 results on 660 pages for 'root node'.


  • How does lookup work?

    - by badgirl
    Hello. I have a BeanTreeView with some nodes in it. Every node has this constructor: public class ProjectNode extends AbstractNode { public ProjectNode(MainProject obj, DiagramsChildren childrens) { super (new ProjectsChildren(), Lookups.singleton(obj)); setDisplayName ( obj.getName()); } I set RootNode as the root of the tree in ExplorerTopComponent like this: private final ExplorerManager mgr = new ExplorerManager(); public ExplorerTopComponent(){ associateLookup (ExplorerUtils.createLookup(mgr, getActionMap())); mgr.setRootContext(new RootNode()); } Now, how can I get the MainProject obj from some node? I need to get it in another class.

    Read the article

  • What's the best way to move "a child up" in a C# XmlDocument?

    - by Mike
    Hi everyone, Given an XML structure like this: <garage> <car>Firebird</car> <car>Altima</car> <car>Prius</car> </garage> I want to "move" the Prius node "one level up" so it appears above the Altima node. Here's the final structure I want: <garage> <car>Firebird</car> <car>Prius</car> <car>Altima</car> </garage> So given the C# code: XmlNode priusNode = GetReferenceToPriusNode(); What's the best way to cause the priusNode to "move up" one place in the garage's child list? Thanks! -Mike
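
    One way to do this with the System.Xml API is XmlNode.InsertBefore, which first detaches a node that is already in the document and then re-inserts it at the new position. A small sketch (XmlReorder/MoveUp are illustrative names, not framework members):

      using System.Xml;

      static class XmlReorder
      {
          // Moves 'node' one position earlier among its element siblings.
          public static void MoveUp(XmlNode node)
          {
              XmlNode previous = node.PreviousSibling;
              // Skip whitespace/comment nodes if the document preserves them.
              while (previous != null && previous.NodeType != XmlNodeType.Element)
                  previous = previous.PreviousSibling;

              if (previous != null)
                  node.ParentNode.InsertBefore(node, previous);  // re-inserting an attached node moves it
          }
      }

      // usage: XmlReorder.MoveUp(GetReferenceToPriusNode());

    InsertBefore removes the node from its current position before inserting it again, so no explicit RemoveChild call is needed.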

    Read the article

  • Strange: Planner chooses the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2, Linux I make use of table inheritance Each table contains 3 million rows Indexes on joining columns are set Table statistics (analyze, vacuum analyze) are up-to-date The only table used is "node", with various partitioned sub-tables Recursive query (pg = 8.4) Now here is the explained query: WITH RECURSIVE rows AS ( SELECT * FROM ( SELECT r.id, r.set, r.parent, r.masterid FROM d_storage.node_dataset r WHERE masterid = 3533933 ) q UNION ALL SELECT * FROM ( SELECT c.id, c.set, c.parent, r.masterid FROM rows r JOIN a_storage.node c ON c.parent = r.id ) q ) SELECT r.masterid, r.id AS nodeid FROM rows r QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Hash Join (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3) Hash Cond: (c.parent = r.id) -> Append (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3) -> Seq Scan on node_dataset c (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3) -> Seq Scan on node_stammdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3) -> Seq Scan on node_stammdaten_adresse c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3) -> Seq Scan on node_testdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3) -> Hash (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3) Total runtime: 172111.371 ms (16 rows) (END) So far so bad, the planner decides to choose hash joins (good) but no indexes (bad). 
Now after doing the following: SET enable_hashjoin TO false; The explained query looks like this: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Nested Loop (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3) Join Filter: (r.id = c.parent) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3) -> Append (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4) -> Bitmap Heap Scan on node_dataset c (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4) Recheck Cond: (c.parent = r.id) -> Bitmap Index Scan on node_dataset_parent (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_parent on node_stammdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_testdaten_parent on node_testdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4) Index Cond: (c.parent = r.id) Total runtime: 49.349 ms (21 rows) (END) - incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • quantity of measurable units design pattern

    - by Berryl
    Hello, I am thinking through a pattern that would be useful across domains of measurable units (i.e., Length, Time) and came up with the following use case and initial classes, and of course, questions! 1) Does a Composite pattern help or complicate? 2) Should the Convert method(s) in the composite node be a separate converter class? All comments appreciated. Cheers, Berryl Example Use Case: var inch = new ConvertableUnit("inch", 1) var foot = new ConvertableUnit("foot", 12) var imperialUnits = new CompositeConvertableUnit("imperial units", .024) imperialUnits.AddChild(inch) imperialUnits.AddChild(foot) var meter = new ConvertableUnit("meter", 1) var millimeter = new ConvertableUnit("millimeter ", .001) var metricUnits = new CompositeConvertableUnit("metric units", 1) metricUnits.AddChild(meter) metricUnits.AddChild(millimeter) var oneInch = new Quantity(1, inch); var oneFoot = new Quantity(1, foot); oneFoot.ToBase() // "12 inches" var oneMeter = new Quantity(1, meter); oneInch.ToBase() // .024 meters Possible Solution ConvertableUnit : Node double Rate string Name Quantity ConvertableUnit Unit double Amount CompositeConvertableUnit : Node ISet<ConvertableUnit> _children ConvertableUnit BaseUnit {get{ return _children.Where(c=>c.Rate == 1).First() } } Quantity ConvertTo(Quantity from, Quantity to) Quantity ToBase(Quantity from);
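
    For the conversion arithmetic itself, here is a stripped-down sketch along the lines of the question's classes (names follow the question; the Composite layer is left out, since the rate-based math does not depend on it):

      class ConvertableUnit
      {
          public string Name;
          public double Rate;   // one unit expressed in the family's base unit
          public ConvertableUnit(string name, double rate) { Name = name; Rate = rate; }
      }

      class Quantity
      {
          public double Amount;
          public ConvertableUnit Unit;
          public Quantity(double amount, ConvertableUnit unit) { Amount = amount; Unit = unit; }

          // Amount expressed in another unit of the same family: amount * fromRate / toRate.
          public Quantity ConvertTo(ConvertableUnit target)
              => new Quantity(Amount * Unit.Rate / target.Rate, target);
      }

      // e.g. with inch.Rate = 1 and foot.Rate = 12:
      // new Quantity(1, foot).ConvertTo(inch)  -> 12 inches

    Whether CompositeConvertableUnit earns its keep then comes down to whether anything beyond grouping units and finding the base unit (Rate == 1) actually lives there; if not, a plain collection plus a separate converter may be simpler than a full Composite.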

    Read the article

  • Cinema4D XML scene not rendering texture

    - by George Profenza
    I was playing with the Cinema 4D command line options and ran into a problem that might not be specific to the command line options. I saved a basic scene (a textured cube) in two formats: the original .c4d (binary) format and .xml (Cinema 4D XML via File Export). The .c4d file renders with the texture applied, while the .xml file renders without the texture applied. I had a look at the XML file, and there is a node that holds a reference to the image used as the texture, and there is another node which seems to store the pixels in XML. When I opened the .xml document in Cinema 4D, the document did store the material with the texture, but it was not applied to the object. Am I missing something when I export to XML? How can I export a Cinema 4D XML file that renders with the texture applied?

    Read the article

  • Gaining information from the nodes of a tree

    - by jainp
    I am working with the tree data structure and trying to come up with a way to calculate the information I can gain from the nodes of the tree. I am wondering if there are any existing techniques which can assign higher numerical importance to a node that appears less frequently at a lower level (distance from the root of the tree) than to the same node appearing at a higher level with high frequency. To give an example, I want to give more significance to the node Book appearing once at level 2 than to it appearing three times at level 3. Will appreciate any suggestions/pointers to techniques which achieve something similar. Thanks, Prateek
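
    There does not seem to be a single standard measure for this, but as a purely illustrative assumption (a toy formula of my own, not an established technique), an inverse depth-and-frequency score captures the stated preference, since shallower and rarer occurrences come out higher:

      static class NodeScoring
      {
          // Toy scoring: a label occurring rarely close to the root scores higher than
          // the same label occurring often deeper down. depth = distance from the root
          // (counting the root as 1), frequency = occurrences of the label at that depth.
          public static double Importance(int depth, int frequency)
              => 1.0 / (depth * (double)frequency);
      }

      // The question's example: "Book" once at level 2 vs. three times at level 3:
      // NodeScoring.Importance(2, 1) = 0.50   >   NodeScoring.Importance(3, 3) = ~0.11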

    Read the article

  • Auto SSH and execute script

    - by rohanbk
    I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether the machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script. Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out and proceed to the next machine? Some kind of bash shell script? I'm afraid I'm at a loss here since I haven't had experience with shell scripting before.

    Read the article

  • Drupal 6: Taxonomy split up in managed fields?

    - by Ardi Coetzee
    So you have two taxonomies, namely "Business Type" and "Location". These are assigned to a node type called BUSINESS. In effect, when the user creates a BUSINESS node, he has to choose, for example, location "New York" and type "Information Services". My problem is that when a) capturing the taxonomy, and b) displaying the taxonomy, I want the two terms to be separated from each other. I.e. I want to be able to move the two terms to individual positions in the MANAGE FIELDS view, so that they can be grouped or placed separately. Currently, Drupal only allows one entry, called "TAXONOMY", which is effectively the two terms next to each other. This is what I have: This is what I want: Bear in mind, I need to be able to use this with Hierarchical Select, which means Content Taxonomy is not an option.

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine (i.e. node1:/scratch/data/dataset.1, node2:/scratch/data/dataset.2, ... node10:/scratch/data/dataset.10). There is no shared NFS mount - none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of text, column delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like load from '/scratch/data/dataset' of del modified by coldel| fastparse messages /dev/null replace into TESTDB.data_table part_file_location '/scratch/data'; but I have no idea how to suggest to DB2 that it should look for dataset.1 on the first node, etc.

    Read the article

  • Test if a singly linked list is circular by traversing it only once

    - by user1589754
    I am a fresher, and I was asked this question in a recent interview. The question was: by traversing each element of the linked list just once, find whether the singly linked list is circular at any point. I answered that we would store a reference to each node in another list while traversing, and for every node in the list being tested we would check whether its reference already exists in that list. The interviewer said that he needed a more optimized way to solve this problem. Can anyone please tell me what a more optimized method to solve this problem would be?
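
    The standard constant-space answer here is Floyd's cycle detection (the tortoise-and-hare technique): advance one pointer by one node and another by two; they can only ever meet if the list loops back on itself. A C# sketch (the Node class is assumed for illustration):

      class Node
      {
          public int Value;
          public Node Next;
      }

      static class CycleCheck
      {
          public static bool HasCycle(Node head)
          {
              Node slow = head, fast = head;
              while (fast != null && fast.Next != null)
              {
                  slow = slow.Next;        // one step
                  fast = fast.Next.Next;   // two steps
                  if (ReferenceEquals(slow, fast))
                      return true;         // the pointers can only meet inside a cycle
              }
              return false;                // fast fell off the end: no cycle
          }
      }

    The slow pointer visits each node at most once before the two meet, which is usually accepted as satisfying the "traverse only once" constraint, and no extra storage is needed.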

    Read the article

  • How to add some complex structure in multiple places in an XML file

    - by Guillaume
    I have an XML file which has many section like the one below: <Operations> <Action [some attributes ...]> [some complex content ...] </Action> <Action [some attributes ...]> [some complex content ...] </Action> </Operations> I have to add an <Action/> to every <Operations/>. It seems that an XSLT should be a good solution to this problem: <xsl:template match="Operations/Action[last()]"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> <Action>[some complex content ...]</Action> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> My problem is that the content of my <Action/> contains some xPath expressions. For example: <Action code="p_histo01"> <customScript languageCode="gel"> <gel:script xmlns:core="jelly:core" xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary" xmlns:soap="jelly:com.niku.union.gel.SOAPTagLibrary" xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sql="jelly:sql" xmlns:x="jelly:xml" xmlns:xog="http://www.niku.com/xog" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <sql:param value="${gel_stepInstanceId}"/> </gel:script> </customScript> </Action> The ${gel_stepInstanceId} is interpreted by my XSLT but I would like it to be copied as-is. Is that possible? How?

    Read the article

  • OutOfMemoryError calling XmlSerializer.Deserialize() - not related to XML size!

    - by Mike Atlas
    This is a really crazy bug. The following is throwing an OutOfMemoryException for XML snippets that are very short (e.g., <ABC def='123'/>) of one type, but not for others of the same size but a different type (e.g., <ZYX qpr='baz'/>). public static T DeserializeXmlNode<T>(XmlNode node) { try { return (T)new XmlSerializer(typeof(T)) .Deserialize(new XmlNodeReader(node)); } catch (Exception ex) { throw; // just for catching a breakpoint. } } I read in this MSDN article that if I were using XmlSerializer with additional parameters in the constructor, I'd end up generating un-cached serializer assemblies every time it got called, causing an assembly leak. But I'm not using additional parameters in the constructor. It also happens on the first call, so the AppDomain is fresh. Worse yet, it is only thrown in release builds, not debug builds. What gives?

    Read the article

  • In an extension method how do I create an object based on the implementation class

    - by Greg
    Hi, In an extension method, how do I create an object based on the implementation class? So in the code below I wanted to add an "AddRelationship" extension method, however I'm not sure how, within the extension method, I can create a Relationship object, i.e. I don't want to tie the extension method to this particular implementation of relationship. public static class TopologyExtns { public static void AddNode<T>(this ITopology<T> topIf, INode<T> node) { topIf.Nodes.Add(node.Key, node); } public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey) { return topIf.Nodes[searchKey]; } public static bool AddRelationship<T>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode) { var rel = new RelationshipImp(); // ** How do I create an object from the implementation // Add nodes to Relationship // Add relationships to Nodes } } public interface ITopology<T> { //List<INode> Nodes { get; set; } Dictionary<T, INode<T> > Nodes { get; set; } } public interface INode<T> { // Properties List<IRelationship<T>> Relationships { get; set; } T Key { get; } } public interface IRelationship<T> { // Parameters INode<T> Parent { get; set; } INode<T> Child { get; set; } } namespace TopologyLibrary_Client { class RelationshipsImp : IRelationship<string> { public INode<string> Parent { get; set; } public INode<string> Child { get; set; } } } public class TopologyImp<T> : ITopology<T> { public Dictionary<T, INode<T>> Nodes { get; set; } public TopologyImp() { Nodes = new Dictionary<T, INode<T>>(); } } thanks
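
    One way to avoid hard-coding RelationshipImp is to let the caller name the concrete type, either through an extra type parameter with a new() constraint or through a factory delegate. A sketch of the first option against the interfaces above (assuming the node implementations initialize their Relationships lists):

      public static class TopologyExtnsMore   // illustrative companion to the question's TopologyExtns
      {
          public static bool AddRelationship<T, TRel>(this ITopology<T> topIf,
              INode<T> parentNode, INode<T> childNode)
              where TRel : IRelationship<T>, new()
          {
              var rel = new TRel { Parent = parentNode, Child = childNode };
              parentNode.Relationships.Add(rel);   // wire the relationship into both nodes
              childNode.Relationships.Add(rel);
              return true;
          }
      }

      // usage: topology.AddRelationship<string, RelationshipsImp>(parentNode, childNode);

    The factory-delegate variant (taking a Func<IRelationship<T>> parameter instead) does the same job without the new() constraint, which helps when the concrete relationship type needs constructor arguments.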

    Read the article

  • How to structure a class to support an imported 3D model?

    - by brainydexter
    Hello, I've written a C++ library that reads in this 3D model file (COLLADA DAE). Up till now, I would output a list of triangles and handle each one at the rendering stage. But now, I need to attach some bounding sphere information to the imported model. I need some advice on how I should organize this in code. Here are some specs of the 3D file format: - the 3D model is represented as a tree consisting of nodes - each node can contain other nodes, geometry information, transformations etc. My requirements: - a bounding sphere associated with each node, thereby yielding a bounding sphere hierarchy for the model itself - the actual vertex information. What would be the recommended way to deal with this situation? Thanks
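
    Purely as an illustration of the structure (in C# rather than the asker's C++, and ignoring node transforms for brevity), one arrangement is a scene-graph node that owns its vertices and children and refreshes its bounding sphere bottom-up:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Numerics;

      class BoundingSphere { public Vector3 Center; public float Radius; }

      class SceneNode
      {
          public List<Vector3> Vertices = new List<Vector3>();   // geometry owned by this node
          public List<SceneNode> Children = new List<SceneNode>();
          public BoundingSphere Bounds;                          // covers this node and all descendants

          public void UpdateBounds()
          {
              foreach (var child in Children) child.UpdateBounds();   // bottom-up pass

              var childSpheres = Children.Select(c => c.Bounds).ToList();
              var centers = Vertices.Concat(childSpheres.Select(s => s.Center)).ToList();
              var sphere = new BoundingSphere();
              if (centers.Count > 0)
              {
                  // Crude but valid: center on the average position, then grow the radius
                  // until every own vertex and every child sphere fits inside.
                  sphere.Center = new Vector3(centers.Average(p => p.X),
                                              centers.Average(p => p.Y),
                                              centers.Average(p => p.Z));
                  foreach (var p in Vertices)
                      sphere.Radius = Math.Max(sphere.Radius, Vector3.Distance(sphere.Center, p));
                  foreach (var s in childSpheres)
                      sphere.Radius = Math.Max(sphere.Radius, Vector3.Distance(sphere.Center, s.Center) + s.Radius);
              }
              Bounds = sphere;
          }
      }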

    Read the article

  • SVN hook script conflict

    - by user297303
    I am trying to write a pre-commit hook script that will alter a specific svn property of a folder/file. The script looks fairly similar to the one documented in the SVN book. I figured out how to set/change the property of a node, and when executing the binding function svn.fs.commit_txn the property of the node actually gets set. But at the moment TortoiseSVN always gives me a conflict on the folder whose property I am altering. I wrote my script in Python but am new to Python and hook scripts. Hope someone can give me a clue why I am getting this conflict.

    Read the article

  • Check which nodes of a Beowulf HPC cluster system are free, from a PHP app?

    - by Skuja
    I am working on my diploma thesis project. I have access to a 32-node Dell PowerEdge HPC cluster system with Linux (Debian, I think) installed on it. My first goal is to create a web (PHP) app where logged-in users could see free and busy nodes and turn them on and off. I am planning to do something like this - write some cron daemon that would run every 30 seconds or some other interval, and it could run the ping utility for each node to find out if it is on or off, then write the results to some file. Then from my web app I could read the info. Will that be a good solution? What existing node-management solutions are there?

    Read the article

  • How can I hide the taxonomy field for authenticated users but show it for other users in Drupal 6?

    - by Jaymie
    I have a Drupal (v6.17) Content Type which includes a Taxonomy field. I want to hide this from ordinary Authenticated Users, but want it available to my Site Contributor role users, so they can review and then assign tags to user-created nodes. I've tried overriding the Node Add/Edit form in Panels 3 by creating a panel variant especially for Authenticated Users, which would exclude the Taxonomy field. However, the Taxonomy field is bundled in with the "General Form" controls - without showing this, I don't get the Title and Body fields. Is there a way I can either include the Title and Body fields without Taxonomy, OR hide just the Taxonomy field when the authenticated user role creates a node? I realise there's a CCK field which might be able to help me out here, but how do I tie that to the Taxonomy module? Any help gratefully received.

    Read the article

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time to search the dictionary for a phrase. I have considered KMP and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL or splay trees) considerably improve compression time. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure of how it's used.
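
    For what it's worth, the usual arrangement in classic LZSS-style implementations is that each tree node represents one position in the sliding window rather than one character: nodes are ordered by the string starting at their position, each new lookahead position is inserted as it enters the window, and the node whose position slides out of the window is removed with ordinary BST deletion (AVL/splay variants only change the rebalancing). A rough C# sketch of the combined insert-and-find-longest-match step, under those assumptions:

      // Each node stands for one window position; comparing the bytes that start
      // there keeps similar phrases clustered along one search path.
      class MatchNode
      {
          public int Pos;
          public MatchNode Left, Right;
      }

      class MatchTree
      {
          private readonly byte[] window;
          private MatchNode root;

          public MatchTree(byte[] window) { this.window = window; }

          // Insert the phrase starting at 'pos' and return the longest match
          // (position, length) against any phrase already in the tree.
          public (int BestPos, int BestLen) InsertAndFind(int pos, int maxLen)
          {
              var fresh = new MatchNode { Pos = pos };
              if (root == null) { root = fresh; return (-1, 0); }

              int bestPos = -1, bestLen = 0;
              var cur = root;
              while (true)
              {
                  // Common prefix between the new phrase and cur's phrase.
                  int len = 0;
                  while (len < maxLen && pos + len < window.Length && cur.Pos + len < window.Length
                         && window[pos + len] == window[cur.Pos + len])
                      len++;

                  if (len > bestLen) { bestLen = len; bestPos = cur.Pos; }

                  // Descend by the first differing byte (ties go right in this sketch).
                  bool goLeft = pos + len < window.Length && cur.Pos + len < window.Length
                                && window[pos + len] < window[cur.Pos + len];
                  if (goLeft)
                  {
                      if (cur.Left == null) { cur.Left = fresh; break; }
                      cur = cur.Left;
                  }
                  else
                  {
                      if (cur.Right == null) { cur.Right = fresh; break; }
                      cur = cur.Right;
                  }
              }
              return (bestPos, bestLen);
          }
      }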

    Read the article

  • Ext JS doLayout on Viewport doesn't show anything...

    - by Nazariy
    After reading many examples about adding new components into an existing container without reloading the whole page, I ran into a small problem while combining a tree and tabs inside a Viewport component. What I'm trying to do is register a click event on a tree node and load a new component into the content container; depending on the node type it can be a tabpanel, gridpanel or any other available component. Here is a short example of the tree object: { xtype: 'treepanel', id: 'tree-panel', listeners: { click: function(n) { var content = Ext.getCmp('content-panel'); content.setTitle(n.attributes.qtip); //remove all components from content-panel content.removeAll(); content.add(dummy_tabs2); content.doLayout(); return; } }} All the DOM manipulation went fine, everything registered properly, and the new title is shown, but dummy_tabs2 is not shown. I did try to set various properties for doLayout(true|false, true|false) but nothing happens. Am I doing something wrong?

    Read the article

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js.
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

    Read the article

  • Emacs/xterm color annoyance on Linux

    - by tgamblin
    I'm using emacs in a console window both on my local Linux box and on the login node of a remote cluster. I use emacs regularly, and I've got the foreground color set to white in my .emacs file like so: (set-foreground-color "white") (set-background-color "black") However, when I run emacs, the foreground isn't white; it's grey and very hard to read. On my Mac, emacs in a console window with the same settings shows up as proper white. But on both linux boxes, in konsole and xterm, it's grey. In case it matters, I've got TERM set to xterm-color, the desktop is running RHEL 5, and the cluster node is running RHEL 4 (CentOS). Is this some default with how Linux sets up terminal colors? How do I get white to be white? Note: this is with console emacs, not emacs under X. That's emacs -nw if you have DISPLAY set.

    Read the article

  • Finding height in Binary Search Tree

    - by mike
    Hey, I was wondering if anybody could help me rework this method to find the height of a binary search tree. So far my code looks like this; however, the answer I'm getting is larger than the actual height by 1, but when I remove the +1 from my return statements it's less than the actual height by 1. I'm still trying to wrap my head around recursion with these BSTs; any help would be much appreciated. public int findHeight(){ if(this.isEmpty()){ return 0; } else{ TreeNode<T> node = root; return findHeight(node); } } private int findHeight(TreeNode<T> aNode){ int heightLeft = 0; int heightRight = 0; if(aNode.left!=null) heightLeft = findHeight(aNode.left); if(aNode.right!=null) heightRight = findHeight(aNode.right); if(heightLeft > heightRight){ return heightLeft+1; } else{ return heightRight+1; } }
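
    The off-by-one comes from the height convention: the posted code counts nodes on the longest root-to-leaf path, while the expected answer counts edges. One way to get the edge-counting version is to let the recursion treat an empty subtree as -1, which also removes the explicit null checks; a sketch in C# (the Java original has the same shape):

      using System;

      class TreeNode<T>
      {
          public T Value;
          public TreeNode<T> Left, Right;
      }

      static class TreeMath
      {
          // Height in edges: empty tree = -1, a single node = 0.
          public static int FindHeight<T>(TreeNode<T> node)
          {
              if (node == null)
                  return -1;   // the base case absorbs the parent's +1
              return 1 + Math.Max(FindHeight(node.Left), FindHeight(node.Right));
          }
      }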

    Read the article

  • Generating code for service proxies

    - by Hadi Eskandari
    I'm trying to generate some additional code based on the auto-generated web service proxies in my VS2010 solution; I'm using a T4 template to do so. The problem is that the automatically generated proxies are added in the "Service References" folder, but the ProjectItems (files) are hidden by default and the following code does not find them in the project structure: var sr = GetProjectItem(project, "Service References"); if(sr != null) { foreach(ProjectItem item in sr.ProjectItems) { foreach(var file in item.ProjectItems) { //Services.Add(new ServiceInfo { Name = file.Name }); } } } The above code runs, and although the service reference is found and there are ProjectItems under that node (named by the web service reference name), each object under that node is of type System.__ComObject and I'm not sure how to proceed. Any help is appreciated.
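
    If the items are enumerated but each element shows up as System.__ComObject, one thing worth trying (an assumption about the cause, not a confirmed fix) is typing the inner loop variable as EnvDTE.ProjectItem so the COM objects are cast before their members are read:

      // Inside the T4/DTE code, assuming 'sr' is the "Service References" item found
      // earlier and Services/ServiceInfo are the template's own types.
      foreach (EnvDTE.ProjectItem item in sr.ProjectItems)        // explicit cast per element
      {
          foreach (EnvDTE.ProjectItem file in item.ProjectItems)  // nested files are ProjectItems too
          {
              Services.Add(new ServiceInfo { Name = file.Name });
          }
      }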

    Read the article

  • get xml attribute named xlink:href using xsl

    - by awe
    How can I get the value of an attribute called xlink:href of an XML node in an XSL template? I have this XML node: <DCPType> <HTTP> <Get> <OnlineResource test="hello" xlink:href="http://localhost/wms/default.aspx" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:type="simple" /> </Get> </HTTP> </DCPType> When I try the following XSL, I get an error saying "Prefix 'xlink' is not defined.": <xsl:value-of select="DCPType/HTTP/Get/OnlineResource/@xlink:href" /> When I try this simple attribute, it works: <xsl:value-of select="DCPType/HTTP/Get/OnlineResource/@test" />

    Read the article

  • Can a binary tree or tree always be represented in a database as 1 self-referencing table?

    - by Jian Lin
    I hadn't noticed this rule before, but it seems that a binary tree, or any tree where each node can have many children but children cannot point back to any parent, can be represented as 1 table in a database, with each row having an ID for itself and a parentID that points back to the parent node. That is in fact the classical Employee - Manager diagram: one boss can have many people under him... and each person can have n people under him, etc. This is a tree structure and is shown in database books as a common example, as a single Employee table.
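
    That single-table layout is usually called the adjacency-list model: one row per node, with a nullable parent ID pointing back into the same table. A small C# sketch of the row shape and of rebuilding the tree from the flat rows (names are illustrative):

      using System.Collections.Generic;
      using System.Linq;

      class EmployeeRow
      {
          public int Id;
          public int? ManagerId;                 // self-reference: null marks the root (the boss)
          public string Name;
          public List<EmployeeRow> Reports = new List<EmployeeRow>();
      }

      static class TreeLoader
      {
          public static EmployeeRow BuildTree(IEnumerable<EmployeeRow> rows)
          {
              var byId = rows.ToDictionary(r => r.Id);
              EmployeeRow root = null;
              foreach (var row in byId.Values)
              {
                  if (row.ManagerId == null)
                      root = row;                                   // the single top-level node
                  else
                      byId[row.ManagerId.Value].Reports.Add(row);   // attach to its parent row
              }
              return root;
          }
      }

    The adjacency list always works for trees because each node has exactly one parent; what it does not give you cheaply in plain SQL (before recursive CTEs) is fetching a whole subtree in one query, which is why nested-set and path-enumeration layouts also exist.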

    Read the article
