Search Results

Search found 2725 results on 109 pages for 'nodes'.

Page 2/109 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • how to avoid 'out of memory' errors when programmatically generating a lot of nodes in drupal?

    - by sprugman
    I'm creating about 150 nodes programmatically and running into 'out of memory' errors when doing it all in a single request. (I have a menu callback that generates the nodes and calls node_save() on them.) Example:

        for ($i = 0; $i < 150; $i++) {
            $node = new stdClass();
            $node->title = "Foo $i";
            $node->field_myfield[0]['value'] = "Bar $i";
            ...
            node_save($node);
        }

    I've heard of the Batch API, but never used it. Is that the right tool to get around this? The docs talk about timeouts, but not memory issues. Is there something simpler that I might be missing?
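    A hedged sketch of how the Batch API could be applied here (Drupal 6/7 style; the mymodule_* function names are placeholders, not from the question): each operation saves a single node, and the Batch API spreads operations across multiple HTTP requests, so memory consumed by node_save() and Drupal's static caches never accumulates in one request.

        // Hypothetical menu callback: build one batch operation per node.
        function mymodule_generate_nodes() {
          $operations = array();
          for ($i = 0; $i < 150; $i++) {
            $operations[] = array('mymodule_generate_one_node', array($i));
          }
          batch_set(array(
            'title' => t('Generating nodes'),
            'operations' => $operations,
          ));
          batch_process('admin/content'); // redirect here when the batch finishes
        }

        // Batch operation callback: creates and saves a single node.
        function mymodule_generate_one_node($i, &$context) {
          $node = new stdClass();
          $node->type = 'page'; // assumed content type
          $node->title = "Foo $i";
          $node->field_myfield[0]['value'] = "Bar $i";
          node_save($node);
          $context['message'] = t('Created node @n', array('@n' => $i));
        }

    The Batch API documentation emphasises timeouts, but it helps with memory for the same reason: every request starts from a fresh PHP process.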

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes

    The configuration of data node threads can be managed in two ways via the config.ini file:
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 - 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.

    24 Threads and Beyond!

    An example of how to make best use of a 24 CPU/thread server box is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate that activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated.

    When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on them. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32 CPU/thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient

    On a 40 CPU/thread server, increase ldm threads to 16, tc threads to 8, and increment send and receive threads to 4.

    On a 48 CPU/thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoid using the IO threads and main thread on the same CPU
    - Add 1 more receive thread

    Summary

    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices.

    You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
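    As a hedged illustration of the 24 CPU/thread layout described above, here is how it could be expressed in config.ini (the CPU numbers are illustrative assumptions; check the exact ThreadConfig grammar against the MySQL Cluster 7.2 reference manual):

        [ndbd default]
        # Simple approach: let the data node lay out threads itself
        MaxNoOfExecutionThreads=24

        # Explicit approach: one entry per thread type, each bound to a CPU;
        # main and io share CPU 19 as suggested in the post
        ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},recv={count=3,cpubind=12,13,14},send={count=3,cpubind=15,16,17},rep={count=1,cpubind=18},main={count=1,cpubind=19},io={count=1,cpubind=19}

    Note that MaxNoOfExecutionThreads and ThreadConfig are alternatives; a real config.ini would use one or the other.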

    Read the article

  • Is it possible to add two nodes at a time dynamically to a treeview

    - by Dorababu
    I have a tree-view on my main form, initially with some nodes as follows:

        ACH
         |-> some.txt
              |-> FileHeader
              |-> BatchHeader

    Now at this point I will have to add 2 child nodes at a time to BatchHeader. These nodes I will pass as strings from child forms. My sample code that adds some nodes is as follows:

        public void loadingDatafrom(string filename, bool str)
        {
            if (Append.oldbatchcontrol != filename)
            {
                if (tvwACH.SelectedNode.Text == "FileHeader")
                {
                    tvwACH.SelectedNode.Nodes.Add(filename);
                }
                if (tvwACH.SelectedNode.Text == "BatchHeader" && filecontrolvariables.m_gridclick == false)
                {
                    tvwACH.SelectedNode.Nodes.Add(filename);
                    // I got this idea: tvwACH.SelectedNode.LastNode.Nodes.Add("Node");
                }
            }
        }

    Can anyone give me an idea how to add 2 nodes as children to the existing node?
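    One possible approach, sketched below: TreeNodeCollection.AddRange() appends several children in a single call. This assumes the method lives in the form that owns tvwACH and that BatchHeader is the selected node; the method and parameter names are invented for illustration.

        // Adds two child nodes under the currently selected BatchHeader node.
        public void AddBatchChildren(string first, string second)
        {
            TreeNode batchHeader = tvwACH.SelectedNode;
            batchHeader.Nodes.AddRange(new TreeNode[]
            {
                new TreeNode(first),
                new TreeNode(second)
            });
            batchHeader.Expand(); // make the new children visible
        }

    The child form can then call mainForm.AddBatchChildren(s1, s2) with its two strings.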

    Read the article

  • UndoRedo on Nodes (Part 2)

    - by Geertjan
    After the recording of the latest API Design Tip for the upcoming NetBeans Podcast, Jaroslav Tulach helped me with the problem I blogged about yesterday. First he expressed surprise at seeing Undo/Redo work on Nodes, which was never the intention, i.e., that feature was always intended for documents, e.g., the Java editor. However, he then showed me where to find the Properties window in the NetBeans sources, where it is org.netbeans.core.windows.view.ui.NbSheet. It turns out that the Properties window does not have an activated node and hence the Node that implements UndoRedo.Manager is never put in the Lookup. Once we added, on line 303, "this.setActivatedNodes(nodes);", everything worked as expected, i.e., the Undo/Redo actions are now enabled, even when the Properties window is selected: Maybe it means I should file an issue to get that line added to NbSheet?

    Read the article

  • algorithm to use to return a specific range of nodes in a directed graph

    - by GatesReign
    I have a class Graph with two lists, namely nodes and edges. I have a function

        List<int> GetNodesInRange(Graph graph, int Range)

    When I get these parameters, I need an algorithm that will go through the graph and return the list of nodes only as deep (the level) as the range. The algorithm should be able to accommodate a large number of nodes and large ranges. On top of this, should I use a similar function

        List<int> GetNodesInRange(Graph graph, int Range, int selected)

    I want to be able to search outwards from the selected node, to the number of nodes outwards (range) specified. So in the first function, I expect it to return the nodes placed in the blue box. For the other function, if I pass the nodes as in the graph with a range of 1 and it starts at node 5, I want it to return the list of nodes that satisfy this criteria (placed in the orange box).
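    A minimal sketch of the second overload as a depth-limited breadth-first search; the Graph internals are assumptions (here, a Neighbours(int) accessor over the edges list), since the class layout isn't shown:

        List<int> GetNodesInRange(Graph graph, int range, int selected)
        {
            var result = new List<int> { selected };
            var visited = new HashSet<int> { selected };
            var frontier = new List<int> { selected };
            // Expand one level per iteration, stopping after `range` levels.
            for (int depth = 0; depth < range; depth++)
            {
                var next = new List<int>();
                foreach (int node in frontier)
                    foreach (int neighbour in graph.Neighbours(node)) // assumed accessor
                        if (visited.Add(neighbour))                   // true if not seen yet
                            next.Add(neighbour);
                result.AddRange(next);
                frontier = next;
            }
            return result;
        }

    The first overload can delegate to this one with a designated root node. BFS visits each node and edge at most once, so it scales to large node counts and ranges.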

    Read the article

  • Deep copying of tree view nodes.

    - by Kanags.Net
    I'm trying to copy a treeview's nodes to a TreeNodeCollection for some other processing. When I execute treeview.Nodes.Clear() on the next line, my TreeNodeCollection becomes empty as well. Can you please tell me how to copy the treeview nodes to a TreeNodeCollection and keep the copies of the nodes even after calling the Clear method on the actual tree view's nodes?

        TreeNodeCollection tnc = null;
        private void TypeIn()
        {
            tnc = treeView1.Nodes;
            treeView1.Nodes.Clear();
            // Now my tnc becomes empty, but I want the tnc for future use.
        }
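    A hedged sketch of one way to keep usable copies: tnc above is a live reference to the very collection that Clear() empties, so instead clone each node first (TreeNode.Clone() copies a node together with its whole subtree):

        // Deep-copy every root node before clearing the tree view.
        List<TreeNode> copies = new List<TreeNode>();
        foreach (TreeNode node in treeView1.Nodes)
            copies.Add((TreeNode)node.Clone()); // clones the node and its children
        treeView1.Nodes.Clear(); // the clones in `copies` are unaffected

    A TreeNodeCollection cannot be instantiated on its own, which is why the copies go into a List<TreeNode> here; they can be re-added later with treeView1.Nodes.AddRange(copies.ToArray()).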

    Read the article

  • excel vba to CRUD drupal nodes

    - by Kirk Hings
    We need to periodically migrate Excel report data into Drupal nodes. We looked at replicating some Excel functionality in Drupal with slickgrid, but it wasn't up to snuff. The Excel reports people don't want to double-enter their data, but their data is important to have in this Drupal site. They have hundreds of Excel reports, and update a row in each weekly. We want a button at the row end to fire a VBA macro that submits the data to Drupal, where a new node is created from the info submitted. (Yes, we are experienced with both Drupal and VBA; all users and the site are behind our firewall.) We need the new node's nid or URL returned so we can then create a link in Excel directly to that node.

    The site is D6, using the Services 3.x module. I tried the REST server module, but we can't get it to retrieve data without session authentication on, which we can't do from Excel (unless you can?). I also noticed the 'data' it was returning via browser URL was 14 or 20 nodes' info, not the one nid requested (example: http://mysite.com/services/rest/report/node/30161). When I attempt to create a simple node like this from VBA:

        Dim MyURL As String
        MyURL = "http://mysite.com/services/rest/report/node?node[type]=test&node[title]=testing123&node[field_test_one][0][value]=123"
        Set objHTTP = CreateObject("MSXML2.ServerXMLHTTP")
        With objHTTP
            .Open "POST", MyURL, False
            .setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
            .send (MyURL)
        End With

    I get HTTP Status: Unauthorized: Access denied for user 0 "anonymous" and HTTP Response: null. Everything I search for has examples in PHP or Java, nothing in VBA. I also tried switching to using an XML-RPC server but that's even more confusing. We would like JSON (used application/json, set the formatter accordingly in the REST server settings), but will use anything that works. Ideas? Thanks in advance!
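    As a hedged sketch, not a verified fix: the 401 means the request reaches Drupal but the anonymous user lacks node-creation permission, so on a firewalled site either grant that permission to anonymous or authenticate first. Separately, with form encoding the fields belong in the POST body rather than the query string:

        Dim body As String
        body = "node[type]=test&node[title]=testing123&node[field_test_one][0][value]=123"
        Set objHTTP = CreateObject("MSXML2.ServerXMLHTTP")
        With objHTTP
            .Open "POST", "http://mysite.com/services/rest/report/node", False
            .setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
            .send body
            ' The response should carry the new node's nid for the Excel link
            Debug.Print .Status, .responseText
        End With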

    Read the article

  • How to get more information about worldwide nodes?

    - by Aubergine
    The context: six hosts worldwide were traced over a week from the UK. Tens of thousands of lines to be parsed and analysed. I then try to find any clue of geographical information and the path - from where it jumps where. After Austria or Germany (each time different) I have the mysterious 62.208.72.6, which GeoIP lookup places in the Falkland Islands (which is where my target host is, by the way, but before the target host I still have 5 other nodes). Then I do whois for this 62.208.72.6:

        route:   62.208.0.0/16
        descr:   DE-ECRC-62-208-0-0
        origin:  AS1273
        mnt-by:  CW-EUROPE-GSOC
        source:  RIPE # Filtered

    Why does it say Europe now? How to understand this enigma code? I want to confirm more or less whether this is in Europe or in the Falkland Islands. But it can't be in FK yet, as after the next two hosts I get New York? Could you also tell me what the CW-EUROPE-GSOC abbreviation means? (To preserve your sanity, better not google it, unless you already know it :-D) And the actual whois for the destination/target host, which completely destroys my head:

        route:   195.248.193.0/24
        descr:   HORIZON
        descr:   Cable and Wireless Falkland Islands
        descr:   Via Cable and Wireless Communications UK
        origin:  AS5551
        mnt-by:  AS5551-MNT
        source:  RIPE # Filtered

    How is it "Via Cable and Wireless Communications UK" if two nodes before I was in New York? Thank you guys,

    Read the article

  • Stairway to XML: Level 5 - The XML exist() and nodes() Methods

    The XML exist() method is used, often in a WHERE clause, to check the existence of an element within an XML document or fragment. The nodes() method lets you shred an XML instance and return the information as relational data.
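    A short illustrative sketch of both methods against a throwaway XML variable (the element names are invented for the example):

        DECLARE @doc xml = N'<orders><order id="1"/><order id="2"/></orders>';

        -- exist() returns 1 when the element is present
        SELECT CASE WHEN @doc.exist('/orders/order[@id="2"]') = 1
                    THEN 'found' ELSE 'missing' END AS result;

        -- nodes() shreds the XML into one row per <order> element
        SELECT o.n.value('@id', 'int') AS order_id
        FROM @doc.nodes('/orders/order') AS o(n);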

    Read the article

  • How to code Umbraco XSLT to retrieve Nodes from unrelated tree

    - by Phil.Wheeler
    I have an Umbraco site for personal use that I want to also use as a blog. I'm trying to put together the XSLT to grab the top three posts from the nodes in the Blog tree (node id = 1063) and display these on a tab page that is incorporated into the front page. The following image illustrates the node hierarchy. With my extremely limited appreciation of XSLT, I'm unable to grab the node ID of the "Blog" node and take the 3 pages below it to display in the "Top Posts" part of my site, which is found under the "Frontpage Tabs" node. All the examples I find work with the "current page", which is typically the top-level node, "Personal Site". How should I accomplish this?
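    A hedged sketch of one way this is often done in Umbraco XSLT: fetch the Blog node by its id instead of starting from $currentPage. This assumes the legacy Umbraco XML schema (on Umbraco 4.5+ the child axis would be ./* [@isDoc] rather than ./node):

        <xsl:variable name="blog" select="umbraco.library:GetXmlNodeById(1063)"/>
        <xsl:for-each select="$blog/node">
          <xsl:sort select="@createDate" order="descending"/>
          <xsl:if test="position() &lt;= 3">
            <a href="{umbraco.library:NiceUrl(@id)}">
              <xsl:value-of select="@nodeName"/>
            </a>
          </xsl:if>
        </xsl:for-each>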

    Read the article

  • One bigger Virtual Machine distributed across many OpenStack nodes [duplicate]

    - by flyer
    This question already has an answer here: Can a virtualized machine have the CPU and RAM resources of multiple underlying physical machines? (2 answers)

    I just set up virtual machines on one piece of hardware with Vagrant. I want to use Puppet to configure them and next try to set up OpenStack. I am not sure if I am understanding how this should look at the end. Is it possible to have the architecture below with OpenStack, where I will run one Virtual Machine with Linux?

        -------------------------------
        |         VM with OS          |
        -------------------------------
        |  NOVA   |  NOVA   |  NOVA   |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |  Node   |  Node   |  Node   |
        -------------------------------

    More details: in my environment the Nodes are just virtual machines, but my question concerns separate hardware nodes. If we imagine these Nodes (Novas) are placed on separate machines (e.g. each has 4 cores), can I run one Virtual Machine across many OpenStack Nodes? Is it possible to aggregate the computation power of OpenStack in one virtual distributed operating system?

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage array, and I'm wondering if there is any clear advantage to one or the other.

    In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes with other vendors.)

    I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Connectify Dispatch Links Multiple Network Nodes Into a Mega Connection

    - by Jason Fitzpatrick
    Connectify Dispatch wants to change the way you interact with the networks around you by making it dead simple to mesh all available Wi-Fi, cellular, and Ethernet connections into a massive and stable pipeline. Dispatch makes it open-and-click easy to hook up multiple Wi-Fi nodes, your cellphone, and even Ethernet connections into a single blended connection. While the video above gives a great overview of the process, check out the video below to see it in real-world action. The project is currently in the last phase of KickStarter funding, so now is a great time to score Connectify Dispatch at a steep discount - pledging as little as $10 to fund the project, for example, scores you 50% off a 6-month Pro license. Hit up the link below to read more about the project, check the KickStarter status, and see all the neat features in the development pipeline. Dispatch: The Internet, Faster. [KickStarter]

    Read the article

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network.

    The problems start if something happens with the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for 10-20 seconds, a reboot cycle of both nodes would take 5-10 minutes and all NFS locks would be lost.

    I tried increasing the SBD timeout values (to 10sec+ values, dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know:
    a) Is SBD working as it should, killing nodes when 2 paths are still available?
    b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
    c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
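    For question c), a hedged sketch of how the timeouts are usually raised: the SBD on-disk header has to be recreated with the new values while the cluster is down (the device path and the exact numbers below are illustrative; msgwait is commonly set to at least twice the watchdog timeout):

        # Rewrite the SBD header with a 20 s watchdog timeout and 40 s msgwait
        sbd -d /dev/mapper/sbd1 -1 20 -4 40 create
        # Verify the values actually stored on disk
        sbd -d /dev/mapper/sbd1 dump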

    Read the article

  • Openstack - Connectivity between instances on 2 separate nodes

    - by drcursor
    I have the following setup:

        1 x Management Node (node A)
        2 x Compute Nodes (nodes B & C)
        1 x Volume Node (node D)

    Relevant configuration:

        VlanManager
        multi_host=true
        Node B [eth0=192.168.6.102; br100=10.1.0.6]
        Node C [eth0=192.168.6.103; br100=10.1.0.4]

    I can ping between instances on the same node, but not between instances on different nodes. If I run "brctl addif br100 eth0", instances can ping between nodes, but I lose connectivity on eth0 (192.168.6.102/192.168.6.103). What do I have to change to be able to ping instances between nodes while maintaining normal connectivity on eth0?
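    A hedged sketch of the usual fix for this exact symptom: when eth0 is enslaved to br100, its IP address must move onto the bridge, otherwise host connectivity on that address is lost (shown for node B; addresses are from the question, the netmask is an assumption):

        brctl addif br100 eth0
        ip addr del 192.168.6.102/24 dev eth0
        ip addr add 192.168.6.102/24 dev br100
        ip link set br100 up

    With multi_host=true each compute node runs its own network service, so the bridge on every node needs the same consistent setup.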

    Read the article

  • One bigger Virtual Machine distributed across many Nodes [on hold]

    - by flyer
    I just set up virtual machines on one piece of hardware with Vagrant (this is just a test environment, not production!). I want to use Puppet to configure them and next try to set up OpenStack. I am not sure if I am understanding how this should look at the end. Is it possible to have the architecture below with OpenStack, where I will run one Virtual Machine with Linux?

        -------------------------------
        |             VM              |
        -------------------------------
        |  NOVA   |  NOVA   |  NOVA   |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |  Node   |  Node   |  Node   |
        -------------------------------

    (In my environment the Nodes are just virtual machines, but my question concerns separate hardware nodes.) After some comments... Is it a language barrier, or? This is only my 'virtual environment'. If we imagine these virtual machines are separate Nodes (e.g. each has 4 cores), the OpenStack is still the same, right? Can I run one Virtual Machine across many Nodes with OpenStack? Is it possible to aggregate the computation power of separate machines in one virtual distributed operating system?

    Read the article

  • MySQL Cluster data nodes - slow SELECTs

    - by Boyan Georgiev
    Hi to all. First off, I'm new to MySQL Cluster. This is my pain: I've managed to set up a MySQL Cluster with two data nodes, two SQL nodes and one management server. Everything works pretty well, except the following: my data nodes are spread across an intranet link which incurs latency in communications between the data nodes. Apparently, due to MySQL Cluster's internal partitioning schemes, when my PHP application pulls data from the cluster via SELECT queries, parts of the data are pulled from both data nodes. This makes the page appear onscreen REALLY slowly. If I bring one data node offline, the data can only be pulled from that single remaining data node, and thus the final result (HTML output) appears on the screen in a very timely fashion. So, my question is this: can the data nodes/cluster be told to pull data from partitions stored only on a particular data node?
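    One hedged idea, not a guaranteed fix: NDB distributes rows by hashing the partition key, so user-defined partitioning can keep rows that are read together in the same partition (the table and column names below are invented; the chosen column must be part of the primary key):

        ALTER TABLE mytable PARTITION BY KEY (customer_id);

    This aligns data placement with the access pattern rather than pinning reads to a single node, which MySQL Cluster of this era cannot do for replicated data.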

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (P_i), i = 0..N, where P_i is linked to its direct neighbours (P_{i-1} and P_{i+1}).

    The goal: perform efficient collision detection between any two non-adjacent links: (P_i P_{i+1}) vs. (P_j P_{j+1}).

    The question: it's highly recommended in all works treating this subject of collision detection to use a broad phase and to implement it via a bounding volume hierarchy. For a chain made out of P_i nodes, it can look like this: I imagine the big blue sphere to contain all links, the green ones half of them, the reds a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if one has to update it each frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of collision detection broad phases in this particular case? How can it work when the actual computation of bounding spheres is in itself a time-consuming task and has to be done (since the geometry changes) in each frame update? I think I am missing a key point - if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or intersect them... it's odd if this is the way it should work.
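    To make the broad-phase idea concrete, a hedged sketch (all types and names below are invented): for a deformable chain, the hierarchy's topology is typically kept fixed and the spheres are only *refitted* bottom-up each frame. That refit is a single O(n) pass, far cheaper than the O(n^2) pairwise segment tests it lets the query skip.

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Sphere { Vec3 c; float r; };

        static Vec3 mid(Vec3 a, Vec3 b) { return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 }; }
        static float dist(Vec3 a, Vec3 b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        struct BvhNode {
            Sphere bound;
            BvhNode *left = nullptr, *right = nullptr; // children (internal nodes)
            Vec3 *p0 = nullptr, *p1 = nullptr;         // segment endpoints (leaves)
        };

        // Refit the spheres to the current point positions without rebuilding the tree.
        void refit(BvhNode *n) {
            if (n->p0) { // leaf: bound the segment P_i P_{i+1}
                n->bound.c = mid(*n->p0, *n->p1);
                n->bound.r = dist(*n->p0, *n->p1) / 2;
                return;
            }
            refit(n->left);
            refit(n->right);
            // Conservative (not minimal) merge: midpoint centre, radius grown
            // so both child spheres are guaranteed to fit inside.
            float d = dist(n->left->bound.c, n->right->bound.c);
            n->bound.c = mid(n->left->bound.c, n->right->bound.c);
            n->bound.r = d / 2 + std::max(n->left->bound.r, n->right->bound.r);
        }

    The payoff is in the query: two subtrees whose spheres don't intersect are rejected in one test. Overlapping spheres near the root (as in the spiral pose) are therefore expected; the pruning happens further down the hierarchy.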

    Read the article

  • Copy subset of xml input using xslt

    - by mdfaraz
    I need an XSLT file to transform the input XML to another document with a subset of the nodes in the input XML. For example, if the input has 10 nodes, I need to create output with about 5 nodes.

    Input:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
          <DepartmentSeq>7</DepartmentSeq>
          <InsertDateTime>2011-09-29T13:19:28.817-05:00</InsertDateTime>
        </Department>

    Output:

        <Department diffgr:id="Department1" msdata:rowOrder="0">
          <Department>10</Department>
          <DepartmentDescription>BABY PRODUCTS</DepartmentDescription>
        </Department>

    I found one way to suppress the nodes that we don't need. XSLT:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes"/>
          <xsl:template match="node()|@*">
            <xsl:copy>
              <xsl:apply-templates select="node()|@*"/>
            </xsl:copy>
          </xsl:template>
          <xsl:template match="Department/DepartmentSeq"/>
          <xsl:template match="Department/InsertDateTime"/>
        </xsl:stylesheet>

    I need an XSLT that helps me select the nodes I need, not "copy all and filter out what I don't need", since I may have to change my XSLT whenever the input schema adds more nodes.
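    A hedged whitelist-style sketch: copy the attributes and only the named children, so elements added to the input schema later are ignored by default (this assumes the diffgr/msdata namespace prefixes are declared in the input document, as they must be for it to parse):

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes"/>
          <xsl:template match="/Department">
            <xsl:copy>
              <xsl:copy-of select="@* | Department | DepartmentDescription"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>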

    Read the article

  • Counting leaf nodes in hierarchical tree

    - by timn
    This code fills a tree with values based on their depths. But when traversing the tree, I cannot manage to determine the actual number of children: node->cnt is always 0. I've already tried node->parent->cnt, but that gives me lots of warnings in Valgrind. Anyway, is the tree type I've chosen even appropriate for my purpose?

        #include <string.h>
        #include <stdio.h>
        #include <stdlib.h>

        #ifndef NULL
        #define NULL ((void *) 0)
        #endif

        // ----

        typedef struct _Tree_Node {
            // data ptr
            void *p;
            // number of nodes
            int cnt;
            struct _Tree_Node *nodes;
            // parent nodes
            struct _Tree_Node *parent;
        } Tree_Node;

        typedef struct {
            Tree_Node root;
        } Tree;

        void Tree_Init(Tree *this) {
            this->root.p = NULL;
            this->root.cnt = 0;
            this->root.nodes = NULL;
            this->root.parent = NULL;
        }

        Tree_Node* Tree_AddNode(Tree_Node *node) {
            if (node->cnt == 0) {
                node->nodes = malloc(sizeof(Tree_Node));
            } else {
                node->nodes = realloc(
                    node->nodes,
                    (node->cnt + 1) * sizeof(Tree_Node)
                );
            }

            Tree_Node *res = &node->nodes[node->cnt];
            res->p = NULL;
            res->cnt = 0;
            res->nodes = NULL;
            res->parent = node;
            node->cnt++;

            return res;
        }

        // ----

        void handleNode(Tree_Node *node, int depth) {
            int j = depth;
            printf("\n");
            while (j--) {
                printf(" ");
            }
            printf("depth=%d ", depth);
            if (node->p == NULL) {
                goto out;
            }
            printf("value=%s cnt=%d", node->p, node->cnt);
        out:
            for (int i = 0; i < node->cnt; i++) {
                handleNode(&node->nodes[i], depth + 1);
            }
        }

        Tree tree;
        int curdepth;
        Tree_Node *curnode;

        void add(int depth, char *s) {
            printf("%s: depth (%d) > curdepth (%d): %d\n", s, depth, curdepth, depth > curdepth);
            if (depth > curdepth) {
                curnode = Tree_AddNode(curnode);
                Tree_Node *node = Tree_AddNode(curnode);
                node->p = malloc(strlen(s));
                memcpy(node->p, s, strlen(s));
                curdepth++;
            } else {
                while (curdepth - depth > 0) {
                    if (curnode->parent == NULL) {
                        printf("Illegal nesting\n");
                        return;
                    }
                    curnode = curnode->parent;
                    curdepth--;
                }
                Tree_Node *node = Tree_AddNode(curnode);
                node->p = malloc(strlen(s));
                memcpy(node->p, s, strlen(s));
            }
        }

        void main(void) {
            Tree_Init(&tree);
            curnode = &tree.root;
            curdepth = 0;

            add(0, "1");
            add(1, "1.1");
            add(2, "1.1.1");
            add(3, "1.1.1.1");
            add(4, "1.1.1.1.1");
            add(2, "1.1.2");
            add(0, "2");

            handleNode(&tree.root, 0);
        }

    Read the article

  • Adding nodes to MAAS server

    - by Yasith Tharindu
    I was able to install the MAAS server using Ubuntu 12.04, then boot up nodes from PXE and install maas-precise-x86-64-commissioning through PXE. Now the installation is done, but I'm unable to commission with the MAAS server. It does not show up as a node, and I'm not able to add it manually either; I end up with the following error. Also, what is the default username/password for maas-precise-x86-64-commissioning? I'm unable to log in.

    This is the error when adding a node manually:

        ERROR 2012-11-20 08:32:54,500 maas.maasserver
        ################################
        Exception: timed out
        ################################
        ERROR 2012-11-20 08:32:54,501 maas.maasserver
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 111, in get_response
            response = callback(request, *callback_args, **callback_kwargs)
          File "/usr/lib/python2.7/dist-packages/django/views/decorators/vary.py", line 22, in inner_func
            response = func(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/piston/resource.py", line 166, in __call__
            result = self.error_handler(e, request, meth, em_format)
          File "/usr/lib/python2.7/dist-packages/piston/resource.py", line 164, in __call__
            result = meth(request, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 251, in dispatcher
            self, request, request.method, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 193, in perform_api_operation
            return method(handler, request, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 493, in new
            node = create_node(request)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 418, in create_node
            return form.save()
          File "/usr/lib/python2.7/dist-packages/maasserver/forms.py", line 234, in save
            node = super(NodeWithMACAddressesForm, self).save()
          File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 363, in save
            fail_message, commit, construct=False)
          File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 85, in save_instance
            instance.save()
          File "/usr/lib/python2.7/dist-packages/maasserver/models.py", line 114, in save
            return super(CommonInfo, self).save(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 460, in save
            self.save_base(using=using, force_insert=force_insert, force_update=force_update)
          File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 570, in save_base
            created=(not record_exists), raw=raw, using=using)
          File "/usr/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 172, in send
            response = receiver(signal=self, sender=sender, **named)
          File "/usr/lib/python2.7/dist-packages/maasserver/provisioning.py", line 485, in provision_post_save_Node
            profile, power_type, preseed_data)
          File "/usr/lib/python2.7/dist-packages/maasserver/provisioning.py", line 245, in __call__
            result = self.method(*args)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in __call__
            return self.__send(self.__name, args)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1578, in __request
            verbose=self.__verbose
          File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in request
            return self.single_request(host, handler, request_body, verbose)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1294, in single_request
            response = h.getresponse(buffering=True)
          File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
            response.begin()
          File "/usr/lib/python2.7/httplib.py", line 407, in begin
            version, status, reason = self._read_status()
          File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
            line = self.fp.readline()
          File "/usr/lib/python2.7/socket.py", line 447, in readline
            data = self._sock.recv(self._rbufsize)
        timeout: timed out

    Read the article

  • Recreating OMS instances in a HA environment when instances on all nodes are lost

    - by rnigam
    Oracle highly recommends deploying EM in a HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where all the OMSes would have been blown away had we not jumped in to help.

    We recently hit an issue at a customer site where we had a two-node OMS setup of the Enterprise Manager and a RAC database being used as the EM repository. An accidental delete of the OMS Oracle home left us with a single-node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage, as the Admin and managed servers would not start on either of the nodes.

    In my situation there were:
    - No backups of the Oracle homes from any node
    - No OMS configuration snapshots (created using the "emctl exportconfig oms" command), and the instance home was completely lost on node 1, which also had the Admin Server

    We did however have:
    - A copy of the emkey.ora that I found under the OMS_ORACLE_HOME/ of the second node (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case, however, it was a savior since there were no backups.)
    - The OMS Oracle home on the second node, but missing a number of files and with a number of changes made to the files in the home

    There were a number of attempts to start the server by modifying various files based on the WebLogic server logs to get at least one node up and running, but all of them failed. Here is how you can recover from this scenario. Follow these steps:

    STEP 1: Check the status of emkey.ora

    Check whether the emkey is present in the EM repository or not. Run the following command:

        $OMS_ORACLE_HOME/bin/emctl status emkey

    If the output is something like the below, then you are good to go and the key is present in the repository:

        ./emctl status emkey
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        Enter Enterprise Manager Root (SYSMAN) Password :
        The EMKey is configured properly.

    Here are the messages that you might see as the emctl status emkey output, depending upon whether the EM Admin Server is up and whether the key is configured properly:

    Case 1: AdminServer is up, emkey is proper in CredStore and not in repos. This is the same as the output shown above:
    "The EMKey is configured properly."

    Case 2: AdminServer is up, emkey is proper in CredStore and exists in repos:
    "The EMKey is configured properly, but is not secure. Secure the EMKey by running 'emctl config emkey -remove_from_repos'."

    Case 3: (AdminServer is down or emkey is corrupted in CredStore) and (emkey exists in repos):
    "The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store. Configure the EMKey by running 'emctl config emkey -copy_to_credstore'."

    Case 4: (AdminServer is down or emkey is corrupted in CredStore) and (emkey does not exist in repos):
    "The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository. To correct the problem: 1) Get the backed-up emkey.ora file. 2) Configure the emkey by running 'emctl config emkey -copy_to_credstore_from_file'."

    If the key was not secured properly, it will have to be put in the repository before proceeding; look at STEP 2 for doing this. There may be cases (like mine) where running emctl gives errors like the following:

        $OMS_ORACLE_HOME/bin/emctl status emkey
        Exception in thread "Main Thread" java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet
        at oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658)

    Just move to the next step to put the key back in the repository.

    STEP 2: Put emkey.ora back in the repository

    Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository. See if you can run the following command (with sample output):

        $OMS_ORACLE_HOME/bin/emctl config emkey -copy_to_repos
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
        After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".

    Typically the key is present under the $OMS_ORACLE_HOME/sysman/config directory before being removed after the install as a best practice. If you hit any errors while running emctl commands like the one mentioned in STEP 1, jump to STEP 3 and we will take care of the emkey.ora in STEP 7.

    STEP 3: Get the port information

    Check for the existing port information in the emd.properties file under EM_INSTANCE_DIRECTORY (typically the gc_inst directory right above the Middleware home where you have deployed EM; e.g. /u01/app/oracle/product/gc_inst in case your OMS home is /u01/app/oracle/product/Middleware/oms11g). In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl, you may want to try the following command as well:

        $OMS_ORACLE_HOME/bin/emctl status oms -details

    Note this information, as it will be used in the next step.

    STEP 4: Perform cleanup on node 1

    Note the Oracle homes of the WebLogic Server and OMS, get the list of applied patches in the homes (using the opatch lsinventory command), take a backup copy of the home just in case we need it, and then de-install/remove the Oracle homes, update the inventory, and clean up processes on the first node.

    STEP 5: Perform a software-only installation of the OMS on node 1

    Perform the WebLogic 10.3.2 installation under exactly the same location as the earlier installation. Then perform a software-only installation of the OMS using the following command; this will not run any configuration assistants and bypasses all user interface validations:

        runInstaller -noconfig -validationaswarnings

    Select the "Additional OMS" option while performing the installation. Provide the same paths for the OMS and instance directories as the previous installation, and use the port information collected in STEP 3. Once the installation is complete, run the allroot.sh script to complete the binary deployment.

    STEP 6: Apply one-off patches

    At this point you can apply any patches previously applied to the OMS Oracle home. You only need to run opatch to install the patch in the home; you are not required to run the SQLs.

    STEP 7: Copy the EM key

    This step is only required if you were not able to use the emctl command to put the emkey back into the EM repository in STEP 2. Copy the emkey.ora file of the old installation into the $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS.

    STEP 8: Configure the Grid Control domain

    Run the following command to configure the EM domain and OMS. Note that you need to use a different GC domain name than what you used earlier. For example, I used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN:

        $OMS_ORACLE_HOME/bin/omsca new -AS_USERNAME weblogic -EM_DOMAIN_NAME GCDOMAIN11 -NM_USER nodemanager -nostart

    This command will prompt for a number of inputs like Admin Server hostname, port, password, etc. Verify that the defaults shown are correct by pressing Enter, or provide a new value.

    STEP 9: Run the add-on configuration assistant

    After this step, run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on:

        $OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc

    STEP 10: Start the OMS

    Now start the OMS using:

        $OMS_ORACLE_HOME/bin/emctl start oms

    In a multi-node setup like mine you would have either a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following:

        $OMS_ORACLE_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443

    STEP 11: Configure the agent

    From $AGENT_ORACLE_HOME/bin, run:

        ./agentca -f

    At this point you should have your OMS on node 1 fully recovered. Clean up node 2 and use the normal additional OMS installation process documented in the official installation guide to add the additional OMS on node 2.

    Summary

    It took us a little over two days to completely recover the environment, with some other non-EM-related issues hitting us along the way as well. In the end, a situation like this could have been completely avoided had the proper housekeeping and backup of the Enterprise Manager deployment been done in the first place. That is going to be the topic we cover in the next post. In the meantime, please do refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. This can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm

    Thanks

    This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa, who helped to recover the environment and provided all the support we needed.

    Read the article
