Search Results

Search found 5157 results on 207 pages for 'node'.

Page 144/207 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Puppet claims to be unable to resolve domains even though the domain resolves properly

    - by gparent
    I have a fairly simple Puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively, and both entries resolve correctly to the right IPs on both machines. On my client, I have this configuration:

        [main]
        server = master.example.org
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=/var/lib/puppet/templates

    Key exchange seems to fail, according to this message in /var/log/syslog:

        localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known

    Why is resolution failing only for Puppet?
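
    A quick way to confirm whether the failure is really at the resolver level is to call getaddrinfo directly, since that is the libc call behind the logged error. A minimal Python sketch, with the master hostname taken from the configuration above:

        import socket

        # 8140 is Puppet's default master port.
        try:
            for family, _, _, _, sockaddr in socket.getaddrinfo("master.example.org", 8140):
                print(family, sockaddr)
        except socket.gaierror as exc:
            # The same "Name or service not known" failure the agent logs.
            print("getaddrinfo failed:", exc)

    If this fails where ping succeeds, the usual suspects are the hosts/dns ordering in /etc/nsswitch.conf or an /etc/resolv.conf that looks different to the environment the agent runs in.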

    Read the article

  • Why does my C++ LinkedList method print out the last word more than once?

    - by Anthony Glyadchenko
    When I call the cmremoveNode method in my LinkedList from outside code, I get an EXC_BAD_ACCESS. FIXED: but now, with the following test code, the last word gets repeated twice:

        #include <iostream>
        #include "LinkedList.h"
        using namespace std;

        int main (int argc, char * const argv[]) {
            ctlinkList linkMe;
            linkMe.cminsertNode("The");
            linkMe.cminsertNode("Cat");
            linkMe.cminsertNode("Dog");
            linkMe.cminsertNode("Cow");
            linkMe.cminsertNode("Ran");
            linkMe.cminsertNode("Pig");
            linkMe.cminsertNode("Away");
            linkMe.cmlistList();
            cout << endl;
            linkMe.cmremoveNode("The");
            linkMe.cmremoveNode("Cow");
            linkMe.cmremoveNode("Away");
            linkMe.cmlistList();
            return 0;
        }

    LinkedList code:

        /*
         * LinkedList.h
         * Lab 6
         *
         * Created by Anthony Glyadchenko on 3/22/10.
         * Copyright 2010 __MyCompanyName__. All rights reserved.
         */

        #include <stdio.h>
        #include <iostream>
        #include <fstream>
        #include <iomanip>
        using namespace std;

        class ctNode {
            friend class ctlinkList;  // friend class allowed to access private data
        private:
            string sfileWord;         // used to allocate and store input word
            int iwordCnt;             // number of word occurrences
            ctNode* ctpnext;          // pointer of type ctNode, points to next list element
        };

        class ctlinkList {
        private:
            ctNode* ctphead;          // initialized by constructor
        public:
            ctlinkList() { ctphead = NULL; }

            ctNode* gethead() { return ctphead; }

            string cminsertNode(string svalue) {
                ctNode* ctptmpHead = ctphead;
                if (ctphead == NULL) {  // allocate new and set head
                    ctptmpHead = ctphead = new ctNode;
                    ctphead->ctpnext = NULL;
                    ctphead->sfileWord = svalue;
                } else {                // find last ctNode
                    do {
                        if (ctptmpHead->ctpnext != NULL)
                            ctptmpHead = ctptmpHead->ctpnext;
                    } while (ctptmpHead->ctpnext != NULL);
                    // fall through: found last node
                    ctptmpHead->ctpnext = new ctNode;
                    ctptmpHead = ctptmpHead->ctpnext;
                    ctptmpHead->ctpnext = NULL;
                    ctptmpHead->sfileWord = svalue;
                }
                return ctptmpHead->sfileWord;
            }

            string cmreturnNode(string svalue) {
                return NULL;
            }

            string cmremoveNode(string svalue) {
                int counter = 0;
                ctNode* tmpHead = ctphead;
                if (ctphead == NULL)
                    return NULL;
                while (tmpHead->sfileWord != svalue && tmpHead->ctpnext != NULL) {
                    tmpHead = tmpHead->ctpnext;
                    counter++;
                }
                do {
                    tmpHead->sfileWord = tmpHead->ctpnext->sfileWord;
                    tmpHead = tmpHead->ctpnext;
                } while (tmpHead->ctpnext != NULL);
                return tmpHead->sfileWord;
            }

            string cmlistList() {
                string tempList;
                ctNode* tmpHead = ctphead;
                if (ctphead == NULL) {
                    return NULL;
                } else {
                    while (tmpHead != NULL) {
                        cout << tmpHead->sfileWord << " ";
                        tempList += tmpHead->sfileWord;
                        tmpHead = tmpHead->ctpnext;
                    }
                }
                return tempList;
            }
        };

    Why is this happening?
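
    The repetition follows from cmremoveNode's copy-down loop: it shifts each node's word up from its successor but never unlinks or deletes the tail node, so after every removal the list keeps its old length and the final word survives in two nodes. A sketch of removal by re-linking instead of copying, written in Python since the logic is language-agnostic (in the C++ class the equivalent is a trailing pointer whose ctpnext is spliced around the match, followed by delete on the removed node):

        class Node:
            def __init__(self, word):
                self.word = word
                self.next = None

        def remove(head, word):
            """Unlink the first node holding word; return the (possibly new) head."""
            prev, cur = None, head
            while cur is not None and cur.word != word:
                prev, cur = cur, cur.next
            if cur is None:           # not found: list unchanged
                return head
            if prev is None:          # the match is the head node
                return cur.next
            prev.next = cur.next      # splice around the matched node
            return head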

    Read the article

  • Stop Munin messages from /var/log/syslog

    - by Sparsh Gupta
    Hello, I am using Munin on a system, and it adds a log entry to syslog every time the munin-node cron job executes. It is not a real issue, but it sometimes makes spotting other errors difficult. Every 5 minutes there are entries like:

        Feb 28 07:05:01 li235-57 CRON[2634]: (root) CMD (if [ -x /etc/munin/plugins/apt_all ]; then /etc/munin/plugins/apt_all update 7200 12 >/dev/null; elif [ -x /etc/munin/plugins/apt ]; then /etc/munin/plugins/apt update 7200 12 >/dev/null; fi)

    and I was wondering how I can stop these messages from going into syslog. For Munin-specific errors I have to keep an eye on /var/log/munin/* anyway. Thanks, Sparsh
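
    Those lines are cron's own execution log (syslog facility cron), not output from the plugins, so redirecting the job's stdout will not silence them. A hedged sketch of the usual fix, assuming Debian's stock rsyslog layout: divert the cron facility to its own file and exclude it from /var/log/syslog, then restart rsyslog.

        cron.*                              -/var/log/cron.log
        *.*;auth,authpriv.none,cron.none    -/var/log/syslog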

    Read the article

  • Parse Nested XML tags with the same name

    - by footose
    Let's take a simple XML document:

        <x>
          <e>
            <e>
              <e>Whatever 1</e>
            </e>
          </e>
          <e>
            <e>
              <e>Whatever 2</e>
            </e>
          </e>
          <e>
            <e>
              <e>Whatever 3</e>
            </e>
          </e>
        </x>

    Using the standard org.w3c.dom, I can get the "x" nodes by doing:

        NodeList fullnodelist = doc.getElementsByTagName("x");

    But if I want to return the next set of "e", I try something like:

        Element element = (Element) fullnodelist.item(0);
        NodeList nodes = element.getElementsByTagName("e");

    expecting it to return 3 nodes (because there are 3 sets of "e"), but instead it returns 9, because it apparently gets all entries named "e". That would be fine in the above case, because I could probably iterate through and find what I'm looking for. The problem I'm having is when the XML file looks like the following:

        <x>
          <e>
            <pattern>whatever</pattern>
            <blanks>
              <e>Something Else</e>
            </blanks>
          </e>
          <e>
            <pattern>whatever</pattern>
            <blanks>
              <e>Something Else</e>
            </blanks>
          </e>
        </x>

    When I request the "e" elements, it returns 4 instead of (what I expect) 2. Am I just not understanding how the DOM parsing works? Typically in the past I have used my own XML documents, so I would never name the items like this, but unfortunately this is not my XML file and I have no choice but to work with it. What I thought I would do is write a loop that "drills down" the nodes so that I could group each node together:

        public static NodeList getNodeList(Element pelement, String find) {
            String[] nodesfind = Utilities.Split(find, "/");
            NodeList nodeList = null;
            for (int i = 0; i <= nodesfind.length - 1; i++) {
                nodeList = pelement.getElementsByTagName(nodesfind[i]);
                pelement = (Element) nodeList.item(i);
            }
            // value of the node we are looking for
            return nodeList;
        }

    so that if you passed "x/e" into the function, it would return the 2 nodes that I'm looking for (or elements, maybe my terminology is off?). Instead it returns all of the "e" nodes within that node. I'm using J2SE for this, so options are rather limited; I can't use any third-party XML parsers. Anyway, if anyone is still with me and has a suggestion, it would be appreciated.
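
    getElementsByTagName is defined to match all descendants, not just direct children, which explains both the 9 and the 4. The fix is to filter the element's immediate childNodes by tag name. A sketch of the same W3C DOM contract using Python's built-in minidom, whose childNodes/nodeType API has the same shape as org.w3c.dom:

        from xml.dom.minidom import parseString

        doc = parseString(
            "<x>"
            "<e><e><e>Whatever 1</e></e></e>"
            "<e><e><e>Whatever 2</e></e></e>"
            "<e><e><e>Whatever 3</e></e></e>"
            "</x>")
        root = doc.documentElement

        print(len(root.getElementsByTagName("e")))   # 9: every descendant named "e"

        # Direct children only: walk childNodes and keep matching elements.
        direct = [n for n in root.childNodes
                  if n.nodeType == n.ELEMENT_NODE and n.tagName == "e"]
        print(len(direct))                           # 3

    In Java the same loop is element.getChildNodes() plus a check for Node.ELEMENT_NODE and the node name, which needs no third-party parser.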

    Read the article

  • Sun Grid Engine (SGE) Jobs Not Visible After Adding virtual_free

    - by Gary Richardson
    I'm trying to use virtual_free to limit the number of large-memory jobs running on each grid node in my cluster, and this seems to be working as expected. However, after I modified my code to submit jobs with the virtual_free resource request attached, qstat -f -q $queueName no longer shows a list of jobs waiting for a slot. The jobs are submitted to a specific queue (-q $queueName). I'm guessing this is happening due to the magic of SGE queue selection. Is there a way to make my jobs show up as before? Thanks!

    UPDATE: I'm using qstat -f -u * -q $queueName to view the queue. If I drop the queue argument, I can see the jobs. If I examine a specific job, I can see that it has the correct hard_queue_list value set. I'm also using Sun Grid Engine 6.1u4.

    Read the article

  • OpenVZ multiple networks on CTs

    - by user6733
    I have a hardware node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration, venet. CTs can reach eth0 (the public interface) fine, but I can't get CTs to reach eth1 (the private network). I tried:

        # on HN
        vzctl set 101 --ipadd 192.168.1.101 --save
        vzctl enter 101
        ping 192.168.1.2    # no response here

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), and venet0:1 (192.168.1.101). I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.

    Read the article

  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD UUID in Mountain Lion but diskutil info /dev/disk1s1 returns:

        Najis-MacBook-Air:~ ****$ diskutil info disk1s1
           Device Identifier:        disk1s1
           Device Node:              /dev/disk1s1
           Part of Whole:            disk1
           Device / Media Name:      Untitled 1

           Volume Name:              My Book
           Escaped with Unicode:     My%FF%FE%20%00Book

           Mounted:                  Yes
           Mount Point:              /Volumes/My Book
           Escaped with Unicode:     /Volumes/My%FF%FE%20%00Book

           File System Personality:  NTFS
           Type (Bundle):            ntfs
           Name (User Visible):      Windows NT File System (NTFS)

           Partition Type:           Windows_NTFS
           OS Can Be Installed:      No
           Media Type:               Generic
           Protocol:                 USB
           SMART Status:             Not Supported

           Total Size:               2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks)
           Volume Free Space:        212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks)
           Device Block Size:        512 Bytes

           Read-Only Media:          No
           Read-Only Volume:         Yes

           Ejectable:                Yes
           Whole:                    No
           Internal:                 No

    And there is no UUID. What is wrong exactly? Thank you.

    Read the article

  • ldirectord/ipvsadm not showing real IPs and not working with Pacemaker and Corosync

    - by miguer27
    First, thanks for your time. I'm having a problem with ldirectord that I cannot solve. Here is my situation: I have two nodes with Pacemaker and Corosync, and I have configured some resources:

        root@ldap1:/home/mamartin# crm status
        Last updated: Tue Jun 3 12:58:30 2014
        Last change: Tue Jun 3 12:23:47 2014 via cibadmin on ldap1
        Stack: openais
        Current DC: ldap2 - partition with quorum
        Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
        2 Nodes configured, 2 expected votes
        7 Resources configured.

        Online: [ ldap1 ldap2 ]

         Resource Group: IPV_LVS
             IPV_4      (ocf::heartbeat:IPaddr2):       Started ldap1
             IPV_6      (ocf::heartbeat:IPv6addr):      Started ldap1
             lvs        (ocf::heartbeat:ldirectord):    Started ldap1
         Clone Set: clon_IPV_lo [IPV_lo]
             Started: [ ldap2 ]
             Stopped: [ IPV_lo:1 ]

        root@ldap1:/home/mamartin# crm configure show
        node ldap2 \
            attributes standby="off"
        node ldap1 \
            attributes standby="off"
        primitive IPV-lo_4 ocf:heartbeat:IPaddr \
            params ip="192.168.1.10" cidr_netmask="32" nic="lo" \
            op monitor interval="5s"
        primitive IPV-lo_6 ocf:heartbeat:IPv6addrLO \
            params ipv6addr="[fc00:1::3]" cidr_netmask="64" \
            op monitor interval="5s"
        primitive IPV_4 ocf:heartbeat:IPaddr2 \
            params ip="192.168.1.10" nic="eth0" cidr_netmask="25" lvs_support="true" \
            op monitor interval="5s"
        primitive IPV_6 ocf:heartbeat:IPv6addr \
            params ipv6addr="[fc00:1::3]" nic="eth0" cidr_netmask="64" \
            op monitor interval="5s"
        primitive lvs ocf:heartbeat:ldirectord \
            params configfile="/etc/ldirectord.cf" \
            op monitor interval="20" timeout="10" \
            meta target-role="Started"
        group IPV_LVS IPV_4 IPV_6 lvs
        group IPV_lo IPV-lo_6 IPV-lo_4
        clone clon_IPV_lo IPV_lo \
            meta interleave="true" target-role="Started"
        location cli-prefer-IPV_LVS IPV_LVS \
            rule $id="cli-prefer-rule-IPV_LVS" inf: #uname eq ldap1
        colocation LVS_no_IPV_lo -inf: clon_IPV_lo IPV_LVS
        property $id="cib-bootstrap-options" \
            dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
            cluster-infrastructure="openais" \
            expected-quorum-votes="2" \
            no-quorum-policy="ignore" \
            stonith-enabled="false" \
            last-lrm-refresh="1401264327"
        rsc_defaults $id="rsc-options" \
            resource-stickiness="1000"

    The problem is that ipvsadm only shows one real server per virtual service, even though I have configured two:

        root@ldap1:/home/mamartin# ipvsadm
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        TCP  ldap-maqueta.cica.es:ldap wrr
          -> ldap2.cica.es:ldap           Route   4      0          0
        TCP  [[fc00:1::3]]:ldap wrr
          -> [[fc00:1::2]]:ldap           Route   4      0          0

        root@ldap1:/home/mamartin# cat /etc/ldirectord.cf
        checktimeout=10
        checkinterval=2
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=yes
        #ipv4
        virtual=192.168.1.10:389
                real=192.168.1.11:389 gate 4
                real=192.168.1.12:389 gate 4
                scheduler=wrr
                protocol=tcp
                checktype=on
        #ipv6
        virtual6=[[fc00:1::3]]:389
                real6=[[fc00:1::1]]:389 gate 4
                real6=[[fc00:1::2]]:389 gate 4
                scheduler=wrr
                protocol=tcp
                checkport=389
                checktype=on

    And in the logs I see nothing clear:

        root@ldap1:/home/mamartin# ldirectord -d /etc/ldirectord.cf start
        DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0)
        Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0)
        DEBUG2: Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0)
        Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0)
        DEBUG2: Disabled real server=on:tcp:192.168.1.11:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389)
        DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 0)
        Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 0)
        DEBUG2: Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0)
        Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0)
        DEBUG2: Disabled real server=on:tcp:192.168.1.12:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389)
        DEBUG2: Checking on: Real servers are added without any checks
        DEBUG2: Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389)
        Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389)
        DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4)
        Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4)
        Destination already exists

        root@ldap1:/home/mamartin# cat /var/log/ldirectord.log
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) failed:
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 4)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.11:389 (tcp:192.168.1.10:389)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 4)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::2]]:389 (tcp:[[fc00:1::3]]:389)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t [[fc00:1::3]]:389 -r [[fc00:1::2]]:389 -g -w 4) failed:
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: [[fc00:1::2]]:389 ([[fc00:1::3]]:389) (Weight set to 4)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::1]]:389 (tcp:[[fc00:1::3]]:389)
        [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: [[fc00:1::1]]:389 ([[fc00:1::3]]:389) (Weight set to 4)

    I do not know if this is a bug or a configuration error. Can anyone help? Regards.

    Read the article

  • Apache on Mac Mavericks issue

    - by Michael
    Trying to run Apache so that I can create a testing server on my Mac. When I start Apache it claims to start, but it isn't actually running (no connection to localhost). I'll paste the shell session below; you'll see that after starting there are no httpd processes, and I also checked what was running on port 80 (I don't entirely know what that output means):

        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        Michaels-MacBook-Pro-3:~ michaelramos$ ps aux | grep httpd
        michaelramos 348 0.0 0.0 2442000 624 s000 S+ 8:51AM 0:00.00 grep httpd
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        org.apache.httpd: Already loaded
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo lsof -i ':80'
        COMMAND PID USER FD  TYPE             DEVICE SIZE/OFF NODE NAME
        ocspd    96 root 18u IPv4 0x8402f926599c58df      0t0  TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd    96 root 20u IPv4 0x8402f926599c58df      0t0  TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd    96 root 21u IPv4 0x8402f926599c50f7      0t0  TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
        ocspd    96 root 23u IPv4 0x8402f926599c50f7      0t0  TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)

    Read the article

  • How to populate span tags for all text nodes between specific self-closing tags

    - by Rachel
    I want to populate span tags for all text nodes between the self-closing tags s1 and s2 that have the same id. E.g., for the input:

        <a>
          <b>Some <s1 id="1" />text here</b>
          <c>Some <s2 id="1"/>more text <s1 id="2"/> here</c>
          <d>More data</d>
          <e>Some <s2 id="2" />more data</e>
        </a>

    In the above input I want to enclose every text node between <s1 id="1"/> and <s2 id="1" /> in span tags, and likewise every text node between <s1 id="2" /> and <s2 id="2" />. Expected output:

        <a>
          <b>Some <span class="spanClass" id="1a">text here</span></b>
          <c><span class="spanClass" id="1b">Some </span>more text <span class="spanClass" id="2a"> here</span></c>
          <d><span class="spanClass" id="2b">More data</span></d>
          <e><span class="spanClass" id="2c">Some </span>more data</e>
        </a>

    I am not concerned about the pattern of the ids populated for the spans as long as each is unique. If the transformation requires the list of ids of the s1/s2 tag pairs (say 1, 2, etc.), I can assume it is in place in any form required. Hope I am clear. How can this be achieved using XSL?

    EDIT: The input can have any structure, not exactly what is shown in the sample. It can have any number of s1/s2 tag pairs, but each pair will have a unique id. Adding another sample input and output for more information. Input:

        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>This is my title</title>
          </head>
          <body>
            <h1 align="center">This <s1 id="1" />is my <s2 id="1" />heading</h1>
            <p>
              Sample content <s1 id="2" />Some text here.
              Some content here.
            </p>
            <p>
              Here you <s2 id="2" />go.
            </p>
          </body>
        </html>

    Desired output:

        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>This is my title</title>
          </head>
          <body>
            <h1 align="center">This <span id="1">is my </span>heading</h1>
            <p>
              Sample content <span id="2">Some text here.
              Some content here.</span>
            </p>
            <p>
              <span id="2">Here you </span>go.
            </p>
          </body>
        </html>

    Read the article

  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (start OpenVPN), I receive the following:

        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options

    Server IP: 161.53.X.X; internal network: 10.0.0.0/8. What do I need to do?

    Client configuration:

        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server configuration:

        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6
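
    The error is consistent with the parameter-less server-bridge line: in that DHCP-proxy mode the server pushes no --ifconfig, so the pushed route arrives with no gateway to attach to. A hedged sketch of two ways to supply one (the 10.0.0.1 gateway address is an assumption taken from the pushed route, not something the question confirms):

        # on the server: let clients take the gateway from the bridge's DHCP lease
        push "route-gateway dhcp"

        # or on the client: name an explicit gateway for pushed/static routes
        route-gateway 10.0.0.1

    It may also be worth re-checking the pushed route itself: "route 10.0.0.1 255.255.0.0" pairs a host address with a /16 mask, while the stated internal network is 10.0.0.0/8.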

    Read the article

  • Monitoring host and app parameters in real-time

    - by devopsdude42
    I have a bunch of VMs that I need to monitor in real time. For all nodes I need to watch host parameters like load, network usage, and free memory; for some I also need app-specific metrics, like redis (some variables from the output of the INFO command) and nginx (requests/sec, average request time). Ideally I'd also like to track some parameters from the custom apps that run on these nodes. These parameters should be tracked as a set of line charts on a dashboard. I checked out Graphite and it looks suitable (although the UX and aesthetics look like they need some love), but setting up and maintaining Graphite looks to be a pain, especially since we don't have a full-time person just for this. Are there any alternatives? Or at least something that is simpler to set up and will scale? Reasonable paid services are also OK.

    Read the article

  • VLC 2.0: can't enable UPnP support

    - by Craig
    I'm attempting to use VLC 2.0.1 (Intel 64-bit) to connect to a UPnP server (Serviio 0.6.2 on OS X Snow Leopard). I've tried to enable the feature by right-clicking the "Universal Plug 'n Play" node in the LOCAL NETWORK section of the main window. While I am able to select 'Enable', the setting doesn't seem to stick. Moreover, UPnP isn't listed in the Playlist > Service discovery section of the preferences; 'Service discovery modules' is currently set to upnp_intel:sap. I am able to connect to the UPnP server with my Samsung TV, so I know that the service works.

    Read the article

  • What happens if an OpenStack cloud controller dies?

    - by magu
    I've been reading up on OpenStack and how we can re-create an EC2/S3-style cloud for our internal development, and I'm having a hard time finding information on how the OpenStack cloud controller provides redundancy of the cloud management services. I know I can set up multiple Swift and Nova nodes, but not a single document/article/howto/wiki contains information on: a) what happens if the cloud controller node dies; and b) how to set up redundant cloud controllers. It seems to me that, although it is massively scalable, there is a big single point of failure built into OpenStack. Can anyone with more OpenStack experience please shed some light on how it all works with regard to high availability?

    Read the article

  • CPU Affinity on ARM processors

    - by dsljanus
    I am using some Raspberry Pi boards for a data acquisition system. They are nice boards, with plenty of community support around them, but they are really slow, so I am thinking of gradually replacing them with ODROID multicore boards built on Samsung Exynos processors. I have some experience using taskset to set CPU affinity on my servers, because I am always running Node.js applications, which are by definition single-threaded. Now, is it possible to do this on an ARM board? I do not see why it would not work in theory, but I have doubts over how well it will work in practice. Does anyone have experience with this kind of hack? Also, I would appreciate any comments about ARM CPUs and how they differ from x86.
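
    For what it's worth, affinity is a property of the Linux scheduler rather than of the CPU's instruction set, so taskset behaves the same on an SMP ARM kernel as on x86. A minimal sanity-check sketch of the same syscall from Python, runnable on any multicore Linux board:

        import os

        os.sched_setaffinity(0, {0})    # pin the calling process (pid 0 = self) to core 0
        print(os.sched_getaffinity(0))  # -> {0}

    From the shell, taskset -c 0 node app.js does the equivalent for a Node.js process.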

    Read the article

  • How to manage several Linux workstations like a cluster?

    - by Richard Zak
    How does one go about managing a lab of Linux workstations? I'd like users to be able to log in and run their GUI apps (LibreOffice, Firefox, Eclipse, etc.), and for the computers to also be usable as compute nodes (OpenMPI). That part I'm fine with. But how can I centrally deploy a new software package or upgrade an installed one? How can I reload the entire OS on a given node, as if these workstations were part of a supercomputing cluster? Is there a nice program to help with setting up PXE booting and image management, and with remotely managing packages? Ideally such a system would work with Ubuntu. If there isn't a nice package, how could this be set up manually?

    Read the article

  • How to get cluster information remotely via Powershell?

    - by pdanke
    I've been trying to find a good way to gather various pieces of a Windows cluster setup remotely, preferably via WMI, as we are not yet at a point where PowerShell remoting is implemented (and I know this problem goes away with that). I know I can use the following to get the current node:

        Get-WmiObject Win32_ComputerSystem -ComputerName RemoteServer1 | Select Name

    I also need the Name property of Get-Cluster, which I can't figure out how to get from a remote system. Is there something out there, or should I wait it out until remoting gets implemented? I'm a newbie to all things clustering, just a DBA looking to inventory our servers properly. Thanks for any help!

    Read the article

  • Windows Advanced Firewall certificate-based IPsec

    - by Tim Brigham
    I'm working on migrating my 2008+ boxes from the IPsec settings stored under 'IP Security Policies on Active Directory' to using 'Windows Firewall with Advanced Security'. I have successfully been able to get this set up using Kerberos authentication; however, my openswan implementation on my Linux boxes uses certificates. Whenever I try changing the authentication method to computer certificate (using RSA and my root CA), the connection bombs out. I've made this change both in a connection request policy and in the IPsec settings on the root Windows Firewall with Advanced Security node. The Windows event log shows that the authentication request is taking place but failing to negotiate a mode. What am I missing here?

    Read the article

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on the application's load at a given time, I might spawn new instances and then take them down when load is lower, to save costs. How does one keep up with Nagios configuration in such a dynamic environment? When one deals with managed hardware, configuration files are predictable; in this case, Nagios, Capistrano, and a bunch of other configuration files would all need updating. Capistrano needs to know where to deploy a new build for an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know whether a node was intentionally taken down or whether the host is down due to an error. How is this done in the wonderful world of VPS/dynamic instances?
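
    A common pattern, sketched here under stated assumptions (the inventory source, file path, and host-template name are all placeholders): treat the Nagios object files as a build artifact, regenerated from the current instance list on every scale event and followed by a reload, so hosts that leave the inventory stop being monitored instead of alerting as down.

        # Hypothetical inventory; in practice this would come from the EC2 API.
        inventory = [("app-i-0a1b", "10.0.3.17"), ("app-i-9c2d", "10.0.3.42")]

        template = """define host {{
            use         generic-host
            host_name   {name}
            address     {addr}
        }}
        """

        with open("/etc/nagios3/conf.d/ec2-hosts.cfg", "w") as cfg:
            for name, addr in inventory:
                cfg.write(template.format(name=name, addr=addr))

        # Then reload Nagios so the regenerated objects take effect.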

    Read the article

  • How can I uninstall a clustered SQL instance if the cluster has been destroyed?

    - by Bob
    First time going through this scenario, and apparently I did it very wrong. On the DB servers I deleted the cluster group that held SQL and Reporting Services, and then I destroyed the cluster. Then I tried to uninstall SQL. No dice: SQL still thinks it's part of the non-existent cluster and will not let me uninstall it. I went into the Maintenance menu of the SQL setup and tried Remove Node...nope. Unless I find a way out of this, I will have to rebuild the OS if I can't get SQL off the box.

    Read the article

  • Where does netstat get the process name?

    - by tjameson
    I am developing a Node application, and there is an option to set the process title (process name). This only sets it in some tools (like ps and top), but not in htop or netstat. I found this article that explained how most applications do it, but the name doesn't change in netstat. That led me to wonder where those programs get the process name. Would they be getting it from /proc/##/cmdline (## being the PID of the process)? I figure messing with things in /proc is a bad idea (and probably not possible), so if this is where those programs are getting it, is there a way to change it?
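
    For reference, the net-tools netstat does go through /proc on Linux: it maps each socket inode to a PID by scanning /proc/<pid>/fd, then takes the program name from that PID's /proc entry. A minimal Python sketch of that last step (the files under /proc are a read-only view generated by the kernel, so the way to change the reported name is to rewrite the process's own argv, as the linked article describes):

        import os

        def process_name(pid):
            # argv is stored NUL-separated; the first entry is the program name.
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                argv = f.read().split(b"\0")
            return os.path.basename(argv[0]).decode() if argv[0] else ""

        print(process_name(os.getpid()))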

    Read the article

  • Client can't reach my production webserver. It's their ISP's fault, but now what?

    - by MikeN
    I have a customer in Michigan who can't access my production SaaS web server, which is hosted on Slicehost. All other companies across the US/Canada/Europe have no problem reaching the site. The problem occurs intermittently, and Slicehost customer service says it's a problem with the client's ISP. I got the client's IP address, and pinging that IP address from my production server fails, but pinging it from my dev box or from our separate blog server (also hosted on Slicehost) works. How do I debug a problem like this? I asked the client to reach out to their local ISP and ask about the problem. A traceroute shows that the packets are getting stopped at a Comcast Michigan node; Comcast is the client's ISP. Is there anything I can do additionally to fix this problem for my client?

    Read the article

  • Avoid Apache's mod_status being exposed by Varnish

    - by Peteris Caune
    An Ubuntu 9.04 box running Apache on port 8080 and Varnish on port 80. I recently set up Munin and was wondering why the Apache graphs were empty. I saw from the logs that Munin is accessing /server-status?auto and getting 403 Forbidden back, so I edited /etc/apache2/mods-enabled/status.conf to allow access from 127.0.0.1. But doing this actually made /server-status public, since requests coming through Varnish appear to come from 127.0.0.1 too. So the question is: how do I configure mod_status to be accessible only by munin-node and not by Varnish?
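
    One hedged way out, sketched for the Apache 2.2 configuration style this release ships with: keep the loopback-only access rule, but serve the status handler from a vhost bound to a separate loopback port that Varnish never proxies, then point munin-node's apache plugin at that port (the 8081 port number is an assumption):

        Listen 127.0.0.1:8081
        <VirtualHost 127.0.0.1:8081>
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>

    The alternative is to leave the handler on 8080 and have Varnish itself refuse /server-status in VCL before the request ever reaches the backend.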

    Read the article

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table - errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2. Insert load = 50M-row dataset. Data nodes = 3.

        LOAD DATA INFILE '/input_50m/Table_1_sorted.csv'
        IGNORE INTO TABLE nw_ndb
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'

    We recently set up a new MySQL cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" when inserting 5 million rows into a single table in MySQL Cluster. We are using the LOAD DATA INFILE command to load the data into the table from a csv file. The server (mysqld, ndb nodes) has good hardware: 126 GB RAM, 32 GB allocated to mysqld. I tried the settings below with no effect:

        SET autocommit=0;
        SET FOREIGN_KEY_CHECKS=0;
        SET unique_checks=0;
        SET GLOBAL ndb_batch_size=8*1024*1024;
        SET GLOBAL ndb_cache_check_time = 1000;
        SET GLOBAL ndb_index_stat_cache_entries = 10000000;
        SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
        SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?
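
    One common workaround is worth noting: a single LOAD DATA statement is one NDB transaction, and very large transactions can exhaust a data node's per-transaction resources (see MaxNoOfConcurrentOperations in config.ini) or hit timeouts that surface as temporary errors. Splitting the file and loading it in batches keeps each transaction small. A hedged sketch (file name from the question; the 1M-row batch size and part naming are assumptions):

        import itertools

        def split_csv(path, rows_per_part=1_000_000):
            """Write path.part000, path.part001, ... each holding rows_per_part lines."""
            with open(path) as src:
                for i in itertools.count():
                    chunk = list(itertools.islice(src, rows_per_part))
                    if not chunk:
                        break
                    with open(f"{path}.part{i:03d}", "w") as dst:
                        dst.writelines(chunk)

        split_csv("/input_50m/Table_1_sorted.csv")
        # Then run one LOAD DATA INFILE per part, so each part commits separately.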

    Read the article

  • Permission to see the expandable tree in ISA Server 2006

    - by Hossein Mobasher
    I am working with ISA Server 2006 on Windows Server. I want to add some policy rules to my server, and I followed this link, which says:

        In the Microsoft Internet Security and Acceleration Server 2006 management console, expand the array name, and then click the Firewall Policy node.

    But when I open the ISA Server 2006 management console, the tree will not expand. How can I get ISA to show the expandable tree so I can reach the Firewall Policy node? Could anyone please help me do this? Note: I have administrator permission for my account. Thanks in advance :)

    Read the article
