Search Results

Search found 12730 results on 510 pages for 'trusted root'.


  • JavaFX: Use a Screen with your Scene!

    - by user12610255
    Here's a handy tip for sizing your application. You can use the javafx.stage.Screen class to obtain the width and height of the user's screen, and then use those same dimensions when sizing your scene. The following code modifies the default "Hello World" application that appears when you create a new JavaFX project in NetBeans.

        package screendemo;

        import javafx.application.Application;
        import javafx.event.ActionEvent;
        import javafx.event.EventHandler;
        import javafx.scene.Group;
        import javafx.scene.Scene;
        import javafx.scene.control.Button;
        import javafx.stage.Stage;
        import javafx.stage.Screen;
        import javafx.geometry.Rectangle2D;

        public class ScreenDemo extends Application {

            public static void main(String[] args) {
                Application.launch(args);
            }

            @Override
            public void start(Stage primaryStage) {
                primaryStage.setTitle("Hello World");
                Group root = new Group();
                Rectangle2D screenBounds = Screen.getPrimary().getVisualBounds();
                Scene scene = new Scene(root, screenBounds.getWidth(), screenBounds.getHeight());
                Button btn = new Button();
                btn.setLayoutX(100);
                btn.setLayoutY(80);
                btn.setText("Hello World");
                btn.setOnAction(new EventHandler<ActionEvent>() {
                    public void handle(ActionEvent event) {
                        System.out.println("Hello World");
                    }
                });
                root.getChildren().add(btn);
                primaryStage.setScene(scene);
                primaryStage.show();
            }
        }

    Running this program sets the Stage boundaries to the visible bounds of the main screen. -- Scott Hommel
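
    A variation on the same idea, not part of the original post: the visual bounds can also be applied to the Stage itself rather than passed to the Scene constructor. This is only a hedged sketch of an alternative start() method (the rest of the class is unchanged from the listing above).

        @Override
        public void start(Stage primaryStage) {
            primaryStage.setTitle("Hello World");
            // Position and size the window itself to the usable desktop area.
            Rectangle2D screenBounds = Screen.getPrimary().getVisualBounds();
            primaryStage.setX(screenBounds.getMinX());
            primaryStage.setY(screenBounds.getMinY());
            primaryStage.setWidth(screenBounds.getWidth());
            primaryStage.setHeight(screenBounds.getHeight());
            primaryStage.setScene(new Scene(new Group()));
            primaryStage.show();
        }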

    Read the article

  • Not able to install LTT tool in 10.04

    - by Ashoka
    I have tried the below commands to install LTT on VMware running Ubuntu 10.04, but got the following error. Please help. From the link https://launchpad.net/~lttng/+archive/ppa :

        $ sudo apt-add-repository ppa:lttng/ppa
        $ sudo apt-get update
        $ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace

        user@usr:~$ sudo apt-add-repository ppa:lttng/ppa
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        gpg: requesting key 33739778 from hkp server keyserver.ubuntu.com
        gpg: unable to execute program `/usr/local/libexec/gnupg/gpgkeys_curl': No such file or directory
        gpg: no handler for keyserver scheme `hkp'
        gpg: keyserver receive failed: keyserver error
        user@usr:~$
        user@usr:~$ sudo apt-get update
        .....
        $
        user@usr:~$ sudo apt-get install lttng-tools lttng-modules-dkms babeltrace
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package lttng-tools
        user@usr:~$

    My system details:

        user@usr:~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
        user@usr:~$
        user@usr:~$ uname -a
        Linux usr 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux
        user@usr:~$

    Regards, Ashoka
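
    One hedged observation, not a confirmed fix: the failing helper path /usr/local/libexec/gnupg/gpgkeys_curl suggests a locally built gpg under /usr/local is shadowing Ubuntu's own /usr/bin/gpg, which would explain the keyserver failure. A sketch of how that guess could be checked, reusing the exact key options apt-add-repository printed above:

        # If this prints something under /usr/local, the shadowing guess fits.
        $ which gpg
        # Import the PPA key with the distribution gpg instead.
        $ sudo /usr/bin/gpg --no-default-keyring --keyring /etc/apt/trusted.gpg \
              --keyserver keyserver.ubuntu.com \
              --recv C541B13BD43FA44A287E4161F4A7DFFC33739778
        $ sudo apt-get update

    The separate "Couldn't find package lttng-tools" error may simply mean the PPA does not publish builds for lucid; that is worth checking on the PPA page itself.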

    Read the article

  • RSA decrypting data in C# (.NET 3.5) which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear me up. I have been surfing on this a while now.

    Step #1: Create a root certificate. Key generation on unix:

        1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
        2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
        3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"

    Step #2: Does the root certificate work on php: YES

    PHP side. I used the publickey.pem to read it into php:

        $publicKey = "file://C:/publickey.pem";
        $privateKey = "file://C:/privatekey.pem";
        $plaintext = "123";
        openssl_public_encrypt($plaintext, $encrypted, $publicKey);
        $transfer = base64_encode($encrypted);
        openssl_private_decrypt($encrypted, $decrypted, $privateKey);
        echo $decrypted; // "123"

    OR

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and the privatekey.pem to check if it works:

        openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));
        echo $decrypted; // "123"

    Coming to the conclusion that encryption/decryption works fine on the php side with these openssl root certificate files.

    Step #3: Does the root certificate work on .NET: YES

    C# side. In the same manner I read the keys into a .net C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try
        {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        }
        catch (Exception e) { }
        byte[] test = {Convert.ToByte("123")};
        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    Coming to the point that encryption/decryption works fine on the c# side with these openssl root certificate files.

    Step #4: Encrypt in php and Decrypt in .NET: !!NO!!

    PHP side:

        $onett = "123" ....
        openssl_public_encrypt($onett, $encrypted, $server_public_key);
        $onettbase64 = base64_encode($encrypted);

    copy - paste $onettbase64 ("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=") into the c# program:

    C# side:

        byte[] transfered_onettbase64 = Convert.FromBase64String("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=");
        string k = Convert.ToString(rsa.Decrypt(transfered_onettbase64, false)); // Bad Data exception == Exception while decrypting!!!

    Any ideas?
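
    A hedged side note, separate from the Bad Data error itself: in the Step #3 self-test, Convert.ToByte("123") produces a single byte with the numeric value 123, not the text "123", and Convert.ToString on a byte array only prints the type name, so that round trip never actually compares the same plaintext the PHP side encrypts. A small sketch of a like-for-like test (it assumes the same rsa object loaded from the pfx above):

        // Sketch only: makes the C# self-test use the literal text "123",
        // matching what openssl_public_encrypt("123", ...) encrypts in PHP.
        byte[] plain = System.Text.Encoding.UTF8.GetBytes("123");
        byte[] cipher = rsa.Encrypt(plain, false);            // PKCS#1 v1.5 padding
        string roundTrip = System.Text.Encoding.UTF8.GetString(rsa.Decrypt(cipher, false));
        System.Console.WriteLine(roundTrip);                  // expect "123"

    If the cross-platform decrypt still fails after that, it is worth confirming that the private key inside mycertprivatekey.pfx really corresponds to publickey.pem, since a key mismatch also surfaces as a Bad Data exception.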

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    *****************************************

    Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.

    *****************************************

    The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which becomes the root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended

    Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); (a minimal sketch of this appears at the end of this article). IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :) Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <HTML>
        <HEAD>

    With

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    And

        </HEAD>
        <body>
        <form id="theForm" method="post" runat="server">

    With

        </asp:Content>
        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    And finally

        </form>
        </body>
        </HTML>

    With

        </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this.

    It Happened to Me, It Might Happen to You…Or Not

    Watch for Use of Session or Request in OnInit

    In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

    Case Sensitivity

    .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup and match the case in use with the duplicate rule. For example:

        <RewriterRule>
            <LookFor>(href=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    May need to be duplicated as:

        <RewriterRule>
            <LookFor>(HREF=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front.

    I Can See You But I Can’t Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted:

        Error 155  The name 'form1' does not exist in the current context  C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs  51  52  legacywebsite.module

    Much hair-pulling research later it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

        ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Becomes this:

        ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a MasterPage has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

        public static Control FindControlRecursive(Control Root, string Id)
        {
            if (Root.ID == Id)
                return Root;
            foreach (Control Ctl in Root.Controls)
            {
                Control FoundCtl = FindControlRecursive(Ctl, Id);
                if (FoundCtl != null)
                    return FoundCtl;
            }
            return null;
        }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
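
    Referring back to the Enabling Logging step above, a minimal sketch of the Global.asax.cs change described there might look like the following. The namespace and surrounding class are hypothetical; only the log4net call is taken from the article.

        using System;
        using log4net.Config;

        namespace LegacyWebSite // hypothetical namespace
        {
            public class Global : System.Web.HttpApplication
            {
                protected void Application_Start(object sender, EventArgs e)
                {
                    // Enables the WSRP Producer's log4net output, as described
                    // in the Enabling Logging section of this article.
                    XmlConfigurator.Configure();
                }
            }
        }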

    Read the article

  • Can I query DOM Document with xpath expression from multiple threads safely?

    - by Dan
    I plan to use a dom4j DOM Document as a static cache in an application where multiple threads can query the document. Taking into account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually proves that the operation is safe.

        package test.concurrent_dom;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.DocumentHelper;
        import org.dom4j.Element;
        import org.dom4j.Node;

        /**
         * Hello world!
         */
        public class App extends Thread {
            private static final String xml =
                    "<Session>" +
                    "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                    "ChildText1</child1>" +
                    "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                    "ChildText2</child2>" +
                    "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                    "ChildText3</child3>" +
                    "</Session>";
            private static Document document;
            private static Element root;

            public static void main( String[] args ) throws DocumentException {
                document = DocumentHelper.parseText(xml);
                root = document.getRootElement();
                Thread t1 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child1");
                            if(!n1.getText().equals("ChildText1")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t2 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child2");
                            if(!n1.getText().equals("ChildText2")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t3 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child3");
                            if(!n1.getText().equals("ChildText3")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                t1.start();
                t2.start();
                t3.start();
                System.out.println( "Hello World!" );
            }
        }
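
    A hedged note on the test itself: three threads that sleep between queries put very little pressure on the document, so a latent race could easily go unnoticed. A heavier stress test, sketched below against the same static root element, still cannot prove thread safety; it can only fail to disprove it. Whether shared read-only access is supported is ultimately a question for the dom4j documentation or source.

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicBoolean;

        // Sketch of a tighter stress test: many threads, no sleeps, all released at once.
        public class StressTest {
            public static void runStress(final org.dom4j.Element root) throws InterruptedException {
                final CountDownLatch start = new CountDownLatch(1);
                final AtomicBoolean failed = new AtomicBoolean(false);
                ExecutorService pool = Executors.newFixedThreadPool(16);
                for (int t = 0; t < 16; t++) {
                    final int child = (t % 3) + 1;
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                start.await();
                                for (int i = 0; i < 100000; i++) {
                                    org.dom4j.Node n = root.selectSingleNode("/Session/child" + child);
                                    if (n == null || !n.getText().equals("ChildText" + child)) {
                                        failed.set(true);
                                    }
                                }
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
                }
                start.countDown();                      // release all workers at the same instant
                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.MINUTES);
                System.out.println(failed.get() ? "WRONG!" : "No failures observed");
            }
        }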

    Read the article

  • Generate Ant build file

    - by inakiabt
    I have the following project structure:

        root/
          comp/
            env/
              version/
                build.xml
              build.xml
            build.xml

    Where root/comp/env/version/build.xml is:

        <project name="comp-env-version" basedir=".">
            <import file="../build.xml" optional="true" />
            <echo>Comp Env Version tasks</echo>
            <target name="run">
                <echo>Comp Env Version run task</echo>
            </target>
        </project>

    root/comp/env/build.xml is:

        <project name="comp-env" basedir=".">
            <import file="../build.xml" optional="true" />
            <echo>Comp Env tasks</echo>
            <target name="run">
                <echo>Comp Env run task</echo>
            </target>
        </project>

    root/comp/build.xml is:

        <project name="comp" basedir=".">
            <echo>Comp tasks</echo>
        </project>

    Each build file imports the parent build file and each child inherits and overrides parent tasks/properties. What I need is to get the generated build XML without running anything. For example, if I run "ant" (or something like that) on root/comp/env/version/, I would like to get the following output:

        <project name="comp-env-version" basedir=".">
            <echo>Comp tasks</echo>
            <echo>Comp Env tasks</echo>
            <echo>Comp Env Version tasks</echo>
            <target name="run">
                <echo>Comp Env Version run task</echo>
            </target>
        </project>

    Is there an Ant plugin to do this? With Maven? What are my options if not?

    EDIT: I need something like "mvn help:effective-pom" for Ant.

    Read the article

  • RVM can't switch to Ruby 1.9.1

    - by Marco
    I installed Ruby 1.8.7 through apt-get. I then installed 1.9.1 through RVM. The RVM 1.9.1 installation was successful:

        root: rvm install 1.9.1
        Installing Ruby from source to: /usr/local/rvm/rubies/ruby-1.9.1-p378
        /usr/local/rvm/src/ruby-1.9.1-p378 has already been extracted.
        Configuring ruby-1.9.1-p378, this may take a while depending on your cpu(s)...
        Compiling ruby-1.9.1-p378, this may take a while, depending on your cpu(s)...
        Installing ruby-1.9.1-p378
        Installation of ruby-1.9.1-p378 is complete.
        Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378
        adjusting shebangs for ruby-1.9.1-p378 (gem irb erb ri rdoc testrb rake).
        Installing gems for ruby-1.9.1-p378 (rdoc rake).
        Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378
        Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378
        Installation of gems for ruby-1.9.1-p378 is complete.

    However, I cannot get RVM to switch to the new version:

        root: ruby -v
        ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
        root: rvm 1.9.1
        root: ruby -v
        ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]

    Despite that, it seems to have installed fine:

        root: /usr/local/rvm/bin/ruby-1.9.1-p378 -v
        ruby 1.9.1p378 (2010-01-10 revision 26273) [x86_64-linux]

    I also tried setting the rvm --default to 1.9.1 but that did not help. Why can't RVM switch to the new version? Should I just set an alias for ruby=1.9.1?

    *running Debian
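
    A hedged guess rather than a verified answer: when switching rubies silently does nothing in the current shell, the usual cause is that rvm has not been loaded as a shell function in that session. A sketch of the standard check and workaround for a system-wide install under /usr/local/rvm (paths assumed from the output above):

        # If this does not print "rvm is a function", the shell is running the rvm
        # binary in a subshell and cannot change the current environment.
        type rvm | head -1

        # Load rvm into the current shell (add this line to ~/.bashrc or a file in
        # /etc/profile.d/ so it happens automatically), then switch versions.
        source /usr/local/rvm/scripts/rvm
        rvm use 1.9.1 --default
        ruby -v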

    Read the article

  • Regular expression works normally, but fails when placed in an XML schema

    - by Eli Courtwright
    I have a simple doc.xml file which contains a single root element with a Timestamp attribute:

        <?xml version="1.0" encoding="utf-8"?>
        <root Timestamp="04-21-2010 16:00:19.000" />

    I'd like to validate this document against my simple schema.xsd to make sure that the Timestamp is in the correct format:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
            <xs:element name="root">
                <xs:complexType>
                    <xs:attribute name="Timestamp" use="required" type="timeStampType"/>
                </xs:complexType>
            </xs:element>
            <xs:simpleType name="timeStampType">
                <xs:restriction base="xs:string">
                    <xs:pattern value="(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}" />
                </xs:restriction>
            </xs:simpleType>
        </xs:schema>

    So I use the lxml Python module and try to perform a simple schema validation and report any errors:

        from lxml import etree

        schema = etree.XMLSchema( etree.parse("schema.xsd") )
        doc = etree.parse("doc.xml")
        if not schema.validate(doc):
            for e in schema.error_log:
                print e.message

    My XML document fails validation with the following error messages:

        Element 'root', attribute 'Timestamp': [facet 'pattern'] The value '04-21-2010 16:00:19.000' is not accepted by the pattern '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'.
        Element 'root', attribute 'Timestamp': '04-21-2010 16:00:19.000' is not a valid value of the atomic type 'timeStampType'.

    So it looks like my regular expression must be faulty. But when I try to validate the regular expression at the command line, it passes:

        >>> import re
        >>> pat = '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'
        >>> assert re.match(pat, '04-21-2010 16:00:19.000')
        >>>

    I'm aware that XSD regular expressions don't have every feature, but the documentation I've found indicates that every feature that I'm using should work. So what am I misunderstanding, and why does my document fail?
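
    A hedged observation about the Python check rather than the schema itself: re.match only anchors at the start of the string, and the pattern's top-level alternation means the branch before the first "|" can satisfy it on its own, so the interpreter test never actually validates the whole timestamp. A small sketch that makes this visible:

        import re

        pat = ('(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} '
               '([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}')

        m = re.match(pat, '04-21-2010 16:00:19.000')
        print(m.group(0))   # prints '04' -- only the first alternative matched

        # Wrapping the alternation and anchoring the end shows what XSD actually
        # tests, since an XSD pattern must match the entire value:
        m2 = re.match('(?:' + pat + r')\Z', '04-21-2010 16:00:19.000')
        print(m2)           # None -- consistent with the lxml validation failure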

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:Node 0a is the cluster node that has crashed and could not boot anymore.Node 0b is the node in cluster and in production with services active.Both nodes have their boot disk mirrored via SDS/SVM.We have many options to clone the boot disk from node 0b:- make a copy via network using the ufsdump command and pipe to ufsrestore - make a copy inserting the disk locally on node 0b and creating the third mirror with SDS- make a copy inserting the disk locally on node 0b using dd commandIn this procedure we are going to use dd command (from my experience this is the best option).Bare in mind that in the examples provided we work on Sun Fire V240 systems which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).Procedure:On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory      0      2    00          0   2106432   2106431      1      3    01    2106432  74630784  76737215      2      5    00          0 143349312 143349311      4      7    00   76737216  50340672 127077887      5      4    00  127077888  14683968 141761855      6      0    00  141761856   1058304 142820159      7      0    00  142820160    529152 143349311We will insert the new disk on node 0b and it will be seen as c1t2d0.1) On node 0b we make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2# dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192kA copy of a 72GB disk will take approximately about 45 minutes.Note: as an alternative to make identical copy of root over network follow Document ID: 47498Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager2) Perform an fsck on disk c1t2d0 data slices:   1.  fsck -o f /dev/rdsk/c1t2d0s0 (root)   2.  fsck -o f /dev/rdsk/c1t2d0s4 (/var)   3.  fsck -o f /dev/rdsk/c1t2d0s5 (/usr)   4.  
fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)3) Mount the root file system in order to edit following files for changing the node name:# mount /dev/dsk/c1t2d0s0 /mntChange the hostname from 0b to 0a:# cd /mnt/etc# vi hosts # vi hostname.bge0 # vi hostname.bge2 # vi nodename 4) Change the /mnt/etc/vfstab from the actual:/dev/md/dsk/d201        -       -       swap    -       no      -/dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -/dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging/dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       noglobalto this (unencapsulate disk from SDS/SVM):/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs     2       no globalIt is important that global device partition (slice 6) in the new vfstab will point to the physical partition of the disk (in our case slice 6).Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a.But this could be different based on the configuration you are working on.5) Remove following entry from /mnt/etc/system (part of unencapsulation procedure):rootdev:/pseudo/md@0:0,200,blk6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared in order to point to the nodeid of node 0a (in our case nodeid 1):# cd /mnt/dev/mdhow it is now.... node 0b has nodeid 2lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@2/dev/md/shared# rm shared# ln -s ../../global/.devices/node@1/dev/md/shared sharedhow is going to be... with nodeid 1 for node 0alrwxrwxrwx   1 root     root          42 Mar 10  2005 shared ->../../global/.devices/node@1/dev/md/shared7) Change nodeid (in our case from 2 to 1):# cd /mnt/etc/cluster# vi nodeid8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:# cd /mnt/etc# vi path_to_instChange entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"9) Write the bootblock to the disk... just in case:# /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0Now the disk is ready to be inserted in node 0a in order to bootup the node.10) Bootup node 0a with command "boot -sx"... 
this is becasue we need to make some changes in ccr files in order to recreate did environment.11) Modify cluster ccr:# cd /etc/cluster/ccr# rm did_instances# rm did_instances.bak# vi directory - remove the did_instances line.# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory # grep ccr_gennum /etc/cluster/ccr/directory ccr_gennum -1 # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure # grep ccr_gennum /etc/cluster/ccr/infrastructure ccr_gennum -112) Bring the node 0a down again to the ok prompt and then issue the command "boot -r"Now the node will join the cluster and from scstat and metaset command you can verify functionality. Next step is to encapsulate the boot disk in SDS/SVM and create the mirrors.In our case node 0b has metadevice name starting from d200. For this reason on node 0a we need to create metadevice starting from d100. This is just an example, you can have different names.The important thing to remember is that metadevice boot disks have different names on each node.13) Remove metadevice pointing to the boot and mirror disks (inherit from node 0b):# metaclear -r -f d200# metaclear -r -f d201# metaclear -r -f d204# metaclear -r -f d205# metaclear -r -f d206verify from metastat that no metadevices are set for boot and mirror disks.14) Encapsulate the boot disk:# metainit -f d110 1 1 c1t0d0s0# metainit d100 -m d110# metaroot d10015) Reboot node 0a.16) Create all the metadevice for slices remaining on boot disk# metainit -f d111 1 1 c1t0d0s1# metainit d101 -m d111# metainit -f d114 1 1 c1t0d0s4# metainit d104 -m d114# metainit -f d115 1 1 c1t0d0s5# metainit d105 -m d115# metainit -f d116 1 1 c1t0d0s6# metainit d106 -m d11617) Edit the vfstab in order to specifiy metadevices created:old:/dev/dsk/c1t0d0s1        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5       /usr    ufs     1       no      logging/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4       /var    ufs     1       no      logging#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6       /global/.devices/node@1 ufs      2       no  globalnew:/dev/md/dsk/d101        -       -       swap    -       no      -/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -/dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging/dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging#/dev/md/dsk/106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     loggingswap    -       /tmp    tmpfs   -       yes     -/dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       noglobal18) Reboot node 0a in order to check new SDS/SVM boot configuration.19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:# prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0 # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s220) Put DB replica on slice 7 of disk c1t1d0:# metadb -a -c 3 /dev/dsk/c1t1d0s721) Create metadevice for mirror disk c1t1d0 and attach the new mirror side:# metainit d120 1 1 c1t1d0s0# metattach d100 d120# metainit d121 1 1 c1t1d0s1# metattach d101 d121# metainit d124 1 1 c1t1d0s4# metattach d104 d124# metainit d125 1 1 c1t1d0s5# metattach d105 d125# metainit d126 1 1 c1t1d0s6# metattach d106 d126

    Read the article

  • Restructure XML nodes using XSLT

    - by Brian
    Looking to use XSLT to transform my XML. The sample XML is as follows:

        <root>
            <info>
                <firstname>Bob</firstname>
                <lastname>Joe</lastname>
            </info>
            <notes>
                <note>text1</note>
                <note>text2</note>
            </notes>
            <othernotes>
                <note>text3</note>
                <note>text4</note>
            </othernotes>
        </root>

    I'm looking to extract all "note" elements and have them under a parent node "notes". The result I'm looking for is as follows:

        <root>
            <info>
                <firstname>Bob</firstname>
                <lastname>Joe</lastname>
            </info>
            <notes>
                <note>text1</note>
                <note>text2</note>
                <note>text3</note>
                <note>text4</note>
            </notes>
        </root>

    The XSLT I attempted to use lets me extract all the "note" elements; however, I can't figure out how I can wrap them back within a "notes" node. Here's the XSLT I'm using:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:output omit-xml-declaration="yes" indent="yes"/>
            <xsl:template match="notes|othernotes">
                <xsl:apply-templates select="note"/>
            </xsl:template>
            <xsl:template match="*">
                <xsl:copy><xsl:apply-templates/></xsl:copy>
            </xsl:template>
        </xsl:stylesheet>

    The result I'm getting with the above XSLT is:

        <root>
            <info>
                <firstname>Bob</firstname>
                <lastname>Joe</lastname>
            </info>
            <note>text1</note>
            <note>text2</note>
            <note>text3</note>
            <note>text4</note>
        </root>

    Thanks
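
    One possible approach, sketched under the assumption that there is exactly one "notes" element and that every stray note lives under "othernotes": keep an identity copy, have the "notes" template pull in its sibling's notes, and suppress the emptied "othernotes" wrapper.

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:output omit-xml-declaration="yes" indent="yes"/>
            <!-- identity copy for everything not handled below -->
            <xsl:template match="@*|node()">
                <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
            </xsl:template>
            <!-- rebuild notes with its own notes plus the ones from othernotes -->
            <xsl:template match="notes">
                <notes>
                    <xsl:apply-templates select="note | ../othernotes/note"/>
                </notes>
            </xsl:template>
            <!-- drop the othernotes wrapper; its notes are emitted above -->
            <xsl:template match="othernotes"/>
        </xsl:stylesheet>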

    Read the article

  • How can I bind an instance of a custom class to a WPF TreeListView without bugs?

    - by user327104
    First, from http://blogs.msdn.com/atc_avalon_team/archive/2006/03/01/541206.aspx I get a nice TreeListView. I left the original classes (TreeListView, TreeListItemView, and LevelToIndentConverter) intact; the only code I have added is in the XAML. Here it is:

        <Style TargetType="{x:Type l:TreeListViewItem}">
        ....
                </ControlTemplate.Triggers>
                <ControlTemplate.Resources>
                    <HierarchicalDataTemplate DataType="{x:Type local:FileInfo}" ItemsSource="{Binding Childs}">
                    </HierarchicalDataTemplate>
                </ControlTemplate.Resources>
            </ControlTemplate>
        ...

    Here is my custom class:

        public class FileInfo
        {
            private List<FileInfo> childs;

            public FileInfo()
            {
                childs = new List<FileInfo>();
            }

            public bool IsExpanded { get; set; }
            public bool IsSelected { get; set; }

            public List<FileInfo> Childs
            {
                get { return childs; }
                set { }
            }

            public string FileName { get; set; }
            public string Size { get; set; }
        }

    And before I use my custom class I remove all the items inside the treeListView1:

        <l:TreeListView>
            <l:TreeListView.Columns>
                <GridViewColumn Header="Name" CellTemplate="{StaticResource CellTemplate_Name}" />
                <GridViewColumn Header="IsAbstract" DisplayMemberBinding="{Binding IsAbstract}" Width="60" />
                <GridViewColumn Header="Namespace" DisplayMemberBinding="{Binding Namespace}" />
            </l:TreeListView.Columns>
        </l:TreeListView>

    So finally I add this code to bind an instance of my class to the TreeListView:

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            FileInfo root = new FileInfo() { FileName = "mis hojos", Size = "asdf" };
            root.Childs.Add(new FileInfo() { FileName = "sub", Size = "123456" });
            treeListView1.Items.Add(root);
            root.Childs[0].Childs.Add(new FileInfo() { FileName = "asdf" });
            root.Childs[0].IsExpanded = true;
        }

    So the bug is that the buttons to expand the elements don't appear, and when a node is expanded by double click the child nodes don't look like child nodes. Please F1 F1 F1 F1 F1 F1 F1 F1.
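
    A hedged suggestion rather than a confirmed fix for the missing expanders: WPF item controls only notice structural changes when the bound collections and properties raise change notifications, so a common first step is to switch the child list to ObservableCollection and make IsExpanded raise PropertyChanged. A sketch of that change to the FileInfo class above; it may or may not be the cause of this particular bug.

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        // Sketch only: adds change notification to the item class.
        public class FileInfo : INotifyPropertyChanged
        {
            private bool isExpanded;

            public ObservableCollection<FileInfo> Childs { get; private set; }

            public FileInfo()
            {
                Childs = new ObservableCollection<FileInfo>();
            }

            public bool IsExpanded
            {
                get { return isExpanded; }
                set { isExpanded = value; OnPropertyChanged("IsExpanded"); }
            }

            public string FileName { get; set; }
            public string Size { get; set; }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
        }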

    Read the article

  • How much time does grid.py take to run?

    - by trinity
    Hello all , I am using libsvm for binary classification.. I wanted to try grid.py , as it is said to improve results.. I ran this script for five files in separate terminals , and the script has been running for more than 12 hours.. this is the state of my 5 terminals now : [root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137] line 2: warning: Cannot contour non grid data. Please use "set dgrid3d". Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137] line 4: warning: Cannot contour non grid data. Please use "set dgrid3d". [root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326] line 2: warning: Cannot contour non grid data. Please use "set dgrid3d". Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326] line 4: warning: Cannot contour non grid data. Please use "set dgrid3d". [root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848] line 2: warning: Cannot contour non grid data. Please use "set dgrid3d". Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848] line 4: warning: Cannot contour non grid data. Please use "set dgrid3d". [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656] line 2: warning: Cannot contour non grid data. Please use "set dgrid3d". Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656] line 4: warning: Cannot contour non grid data. Please use "set dgrid3d". [root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt Wrong input format at line 494 Traceback (most recent call last): File "grid.py", line 223, in run if rate is None: raise "get no rate" TypeError: exceptions must be classes or instances, not str I had redirected the outputs to files , but those files for now contain nothing.. And , the following files were created : sbiz_nonbiz_feat.txt.out sbiz_nonbiz_feat.txt.png sarts_nonarts_feat.txt.out sarts_nonarts_feat.txt.png sgames_nongames_feat.txt.out sgames_nongames_feat.txt.png sref_nonref_feat.txt.out sref_nonref_feat.txt.png snews_nonnews_feat.txt.out (-- is empty ) There's just one line of information in .out files.. the ".png" files are some GNU PLOTS . But i dont understand what the above GNUplots / warnings convey .. Should i re-run them ? Can anyone please tell me on how much time this script might take if each input file contains about 144000 lines.. Thanks and regards

    Read the article

  • How to create nested directories in PhoneGap

    - by Stallman
    I have tried on this, but this didn't satisfy my request at all. I write a new one: var file_system; var fs_root; window.requestFileSystem(LocalFileSystem.PERSISTENT, 1024*1024, onInitFs, request_FS_fail); function onInitFs(fs) { file_system= fs; fs_root= file_system.root; alert("ini fs"); create_Directory(); alert("ini fs done."); } var string_array; var main_dir= "story_repository/"+ User_Editime; string_array= new Array("story_repository/",main_dir, main_dir+"/rec", main_dir+"/img","story_repository/"+ User_Name ); function create_Directory(){ var start= 0; var path=""; while(start < string_array.length) { path = string_array[start]; alert(start+" th created directory " +" is "+ path); fs_root.getDirectory ( path , {create: true, exclusive: false}, function(entry) { alert(path +"is created."); }, create_dir_err() ); start++; }//while loop }//create_Directory function create_dir_err() { alert("Recursively create directories error."); } function request_FS_fail() { alert("Failed to request File System "); } Although the directories are created, the it sends me ErrorCallback:"alert("Recursively create directories error.");" Firstly, I don't think this code will work since I have tried on this: This one failed: window.requestFileSystem( LocalFileSystem.PERSISTENT, 0, //request file system success callback. function(fileSys) { fileSys.root.getDirectory( "story_repository/"+ dir_name, {create: true, exclusive: false}, //Create directory story_repository/Stallman_time. function(directory) { alert("Create directory: "+ "story_repository/"+ dir_name); //create dir_name/img/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/img/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/img/"); //check. //create dir_name/rec/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/rec/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/rec/"); //check. //Go ahead. }, createError } //create dir_name/rec/ }, createError } //create dir_name/img }, createError); }, //Create directory story_repository/Stallman_time. createError()); } I just repeatedly call fs.root.getDirectory only but it failed. But the first one is almost the same... 1. What is the problem at all? Why does the first one always gives me the ErrorCallback? 2. Why can't the second one work? 3. Does anyone has a better solution?(no ErrorcallBack msg) ps: I work on Android and PhoneGap 1.7.0.
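
    Two hedged observations on the snippets above, offered as a sketch rather than a verified fix. In the first snippet, create_dir_err() is written with parentheses, so it runs immediately and its return value (undefined) is passed as the error callback; that alone would explain the error alert firing even though the directories are created. In the second snippet, getDirectory is followed by braces instead of parentheses, so it is never actually invoked. A minimal corrected loop under those assumptions:

        function create_Directory() {
            var start = 0;

            function make_next() {
                if (start >= string_array.length) {
                    return; // all directories requested
                }
                var path = string_array[start++];
                fs_root.getDirectory(
                    path,
                    {create: true, exclusive: false},
                    function (entry) {
                        // Success: request the next directory only after this one exists,
                        // so nested paths are created in order.
                        make_next();
                    },
                    create_dir_err   // pass the function itself, do not invoke it here
                );
            }

            make_next();
        }

        function create_dir_err(error) {
            alert("Recursively create directories error: " + error.code);
        }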

    Read the article

  • Ignore order of elements using xs:extension

    - by Peter Lang
    How can I design my xsd to ignore the sequence of elements? <root> <a/> <b/> </root> <root> <b/> <a/> </root> I need to use extension for code generation reasons, so I tried the following using all: <?xml version="1.0" encoding="UTF-8"?> <xs:schema targetNamespace="http://www.example.com/test" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:t="http://www.example.com/test" > <xs:complexType name="BaseType"> <xs:all> <xs:element name="a" type="xs:string" /> </xs:all> </xs:complexType> <xs:complexType name="ExtendedType"> <xs:complexContent> <xs:extension base="t:BaseType"> <xs:all> <!-- ERROR --> <xs:element name="b" type="xs:string" /> </xs:all> </xs:extension> </xs:complexContent> </xs:complexType> <xs:element name="root" type="t:ExtendedType"></xs:element> </xs:schema> This xsd is not valid though, the following error is reported at <!-- ERROR -->: cos-all-limited.1.2: An all model group must appear in a particle with {min occurs} = {max occurs} = 1, and that particle must be part of a pair which constitutes the {content type} of a complex type definition. Documentation of cos-all-limited.1.2 says: 1.2 the {term} property of a particle with {max occurs}=1 which is part of a pair which constitutes the {content type} of a complex type definition. I don't really understand this (neither xsd nor English native speaker :) ). Am I doing the wrong thing, am I doing the right thing wrong, or is there no way to achieve this? Thanks!

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys

            def field_set(self):
                #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)

            def classified(self, predicted_value):
                #Returns True if all the individuals have the same value for predicted_value

            def fields_exhausted(self, predicted_value):
                #Returns True if all the individuals are identical except for predicted_value

            def lowest_entropy_value(self, predicted_value):
                #Returns the field that will reduce entropy (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29) the most

            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset() #The data that is associated with this Node
            links = [] #List of Nodes, the offspring Nodes of this node
            level = 0 #Tree depth of this Node
            split_value = '' #Field used to split out this Node from the parent node
            node_value = '' #Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = [] #List of options for split_value amongst the individuals
                datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]: #Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
                for field in fields: #Creates subnodes from each of the datasets.Dataset options
                    self.add_subnode(datasets[field], split_value, field)

            def add_subnode(self, dataset, split_value='', node_value=''):

            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = (sys.argv[1]) #Takes in a CSV file
            predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
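
    A hedged sketch of the likely mechanism rather than a confirmed diagnosis: because individuals = [] is declared at class level (and __init__ uses a mutable default argument), every Dataset that never assigns its own self.individuals ends up sharing one list object, so the loop can append to the very list it is iterating over. A minimal illustration and the usual fix:

        # Illustration of the shared class-level list (assumed to be the culprit):
        class Dataset:
            individuals = []          # one list shared by every instance

        a, b = Dataset(), Dataset()
        a.individuals.append("row")
        print(b.individuals)                    # ['row'] -- same object, not a copy
        print(a.individuals is b.individuals)   # True

        # The usual fix: give each instance its own list in __init__ and avoid a
        # mutable default argument.
        class FixedDataset:
            def __init__(self, individuals=None):
                self.individuals = list(individuals) if individuals else []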

    Read the article

  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I am wanting to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values? BEGIN EDIT: I tried concat function.. I tried this.. and variations of it to make it work, but it doesn't <xf:value ref="concat(instance('param_choices')/choice/root, .)"/> END EDIT Below is a simplified code example of what I am trying to achieve. XForms model: <xf:instance id="param_choices" xmlns=""> <choices> <root label="Param Choices">/param</root> <choice label="Name">/@AAA</choice> <choice label="Value">/@BBB</choice> </choices> </xf:instance> XForms ui code that I currently have: <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full"> <xf:label>Param choices:</xf:label> <br/> <xf:itemset nodeset="instance('param_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> (if user selects "Name" checkbox..) the XML output is: <criterion>/@BBB</criterion> However! I want to combine the root nodeset value with the current choice nodeset value. Essentially: <xf:value ref="(instance('definition_choices')/choice/root) + ."/> to achieve the following XML output: <criterion>/param/@BBB</criterion> Any suggestions on how to do this? (I am fairly new to XPath and XForms) p.s. what I am asking makes sense to me when I typed it out, but if you have trouble figuring out what I'm asking, just let me know.. thanks!

    Read the article

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great but it seems that the first request always takes up to 3seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue. But I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21): 2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi 2012-12-01 10:04:17: (response.c.470) -- before doc_root 2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.473) Path : 2012-12-01 10:04:17: (response.c.521) -- after doc_root 2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:17: (response.c.541) -- logical -> physical 2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:21: (response.c.128) Response-Header: HTTP/1.1 200 OK Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT Expires: Sat, 01 Dec 2012 09:14:21 GMT Content-Type: text/html; charset=utf-8 Cache-Control: max-age=600 Transfer-Encoding: chunked Date: Sat, 01 Dec 2012 09:04:21 GMT Server: lighttpd/1.4.28 I guess that if there is a problem, it's whith my configuration. So here is the way I launch my django app: python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 And here is my lighttpd conf: server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_accesslog", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" accesslog.filename = "/var/log/lighttpd/access.log" debug.log-request-header = "enable" debug.log-response-header = "enable" debug.log-file-not-found = "enable" debug.log-request-handling = "enable" debug.log-timeouts = "enable" debug.log-ssl-noise = "enable" debug.log-condition-cache-handling = "enable" debug.log-condition-handling = "enable" fastcgi.server = ( "/finderauto.fcgi" => ( "main" => ( # Use host / port instead of socket for TCP fastcgi "host" => "127.0.0.1", "port" => 3033, #"socket" => "/home/finderadmin/finderauto.sock", "check-local" => "disable", "fix-root-scriptname" => "enable", ) ), ) alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/finderauto.fcgi$1", ) index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" If any of you could help me finding out where I lose these 3 or 4 s. I would much appreciate. Thanks in advance!

    Read the article

  • Nginx + Nagios : 502 Bad gateway

    - by MrROY
    I have a fully new install nagios, but I can't access to it. Here's my Nginx config: server{ listen 80; server_name 61.148.45.10; # blahblah # Nagios Monitoring location /nagios3/ { proxy_pass http://127.0.0.1:80; } } Nagios is installed step by step(From this Linode guide): sudo apt-get install -y nagios3 Then I try to visit http://ip-address/nagios3/, but it shows 502 bad gateway. How do I deal with this ? This is my /var/log/syslog: Oct 25 14:18:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;1;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:19:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;1;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:19:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;2;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:20:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;2;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:20:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;3;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;3;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:21:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;HARD;4;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server nagios3: SERVICE NOTIFICATION: root;localhost;Disk Space;WARNING;notify-service-by-email;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server postfix/pickup[24474]: 4F89F394034C: uid=109 from=<nagios> Oct 25 14:21:17 my-server postfix/cleanup[27756]: 4F89F394034C: message-id=<20131025062117.4F89F394034C@my-server> Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: from=<nagios@[email protected]>, size=594, nrcpt=1 (queue active) Oct 25 14:21:17 my-server postfix/local[27758]: 4F89F394034C: to=<root@localhost>, relay=local, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: removed Oct 25 14:22:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;HARD;4;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server nagios3: SERVICE NOTIFICATION: root;localhost;HTTP;WARNING;notify-service-by-email;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server postfix/pickup[24474]: 219CA3940381: uid=109 from=<nagios> Oct 25 14:22:07 my-server postfix/cleanup[27756]: 219CA3940381: message-id=<20131025062207.219CA3940381@my-server> Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: from=<nagios@[email protected]>, size=605, nrcpt=1 (queue active) Oct 25 14:22:07 my-server postfix/local[27758]: 219CA3940381: to=<root@localhost>, relay=local, delay=0.12, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: removed Oct 25 14:39:01 my-server CRON[28242]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! 
-execdir fuser -s {} 2>/dev/null \; -delete) And there are a lot of 127.0.0.1 visits in the nginx log, but I am actually visiting from an external IP: 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36"
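
    Those 127.0.0.1 entries are the clue: proxy_pass http://127.0.0.1:80 points straight back at nginx itself (the only listener on port 80), so every request to /nagios3/ loops into the same server block and ends in a 502. The Ubuntu nagios3 package is served by Apache, so one way out -- assuming Apache was pulled in with the package and is told to Listen 127.0.0.1:8080 in ports.conf -- is to proxy to that port instead:

        server {
            listen 80;
            server_name 61.148.45.10;

            location /nagios3 {
                # forward to Apache, which serves the nagios3 CGIs, not back to nginx
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }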

    Read the article

  • Rails, Capistrano, Nginx, Unicorn - Application has been already initialized (RuntimeError)

    - by Andy Copley
    Can anyone shed some light on what exactly this error refers to? I'm having trouble deploying new versions of the site. I, INFO -- : reloading config_file=[snip]/current/config/unicorn.rb I, INFO -- : Refreshing Gem list E, ERROR -- : error reloading config_file=[snip]/current/config/unicorn.rb: Application has been already initialized. (RuntimeError) E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/railties-3.2.3/lib/rails/application.rb:135:in `initialize!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/railties-3.2.3/lib/rails/railtie/configurable.rb:30:in `method_missing' E, ERROR -- : [snip]/releases/20120907085937/config/environment.rb:5:in `<top (required)>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `block in require' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:236:in `load_dependency' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require' E, ERROR -- : config.ru:4:in `block in <main>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize' E, ERROR -- : config.ru:1:in `new' E, ERROR -- : config.ru:1:in `<main>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `eval' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `block in builder' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `call' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `build_app!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:677:in `load_config!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:303:in `join' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `load' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `<main>' I, INFO -- : reaped #<Process::Status: pid 3182 exit 0> worker=0 I, INFO -- : reaped #<Process::Status: pid 3185 exit 0> worker=1 I, INFO -- : reaped #<Process::Status: pid 3188 exit 0> worker=2 I, INFO -- : reaped #<Process::Status: pid 3191 exit 0> worker=3 I, INFO -- : worker=0 ready I, INFO -- : worker=3 ready I, INFO -- : worker=1 ready I, INFO -- : worker=2 ready Unicorn.rb root = "/home/[user]/apps/[site]/current" working_directory root pid "#{root}/tmp/pids/unicorn.pid" stderr_path "#{root}/log/unicorn.log" stdout_path "#{root}/log/unicorn.log" listen "/tmp/unicorn.[site].sock", :backlog => 2048 worker_processes 4 preload_app true timeout 30 before_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection end Any help appreciated - I can pull more config files if required.
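
    The "reloading config_file=..." line means the master received a SIGHUP, and with preload_app true a HUP makes the already-booted master re-evaluate config.ru -- which is exactly where Rails 3 raises "Application has been already initialized". The usual pattern for a preloaded app is to re-exec a new master with USR2 and retire the old one with QUIT instead of sending HUP; a sketch of a restart step, using the pid location from the unicorn.rb above:

        PID=/home/[user]/apps/[site]/current/tmp/pids/unicorn.pid

        if [ -f "$PID" ]; then
          kill -USR2 "$(cat "$PID")"                                  # boot a fresh master on the new code
          sleep 5
          [ -f "$PID.oldbin" ] && kill -QUIT "$(cat "$PID.oldbin")"   # retire the old master
        else
          bundle exec unicorn -c config/unicorn.rb -E production -D
        fi

    When deploys switch the current symlink, a before_exec block in unicorn.rb that resets ENV['BUNDLE_GEMFILE'] to "#{root}/Gemfile" is usually also needed so the re-exec'd master loads the new release's bundle.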

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my openBSD client (OpenBSD 4.9) auto mount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount nfs manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can it see mount, but curiously enough, instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little different from FreeBSD. Still, I tried to see if I it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search in google doesn't help too much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount is permanently, but I tend to be a bit anal on convention, so no for now. :) Some direction would be appreciation. (And oh, in case you are a wondering, I tried FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD implements and how OpenBSD implements it) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over udp: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options of forcing it to try to mount as nfsv3 such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Nothing yet still. Curious enough, OpenBSD mounts defaults to version 3, so I am not sure why it would start with version in amd. What would be the correct options to pass?

    Read the article

  • Replicate between mysql 5.0.xx community and enterprise edition over ssh

    - by Arlukin
    I'm trying to setup a mysql replication over an SSH tunnel. The odd thing about this setup is that I have one master with mysql 5.0.60sp1-enterprise-gpl-log and one slave with mysql 5.0.67-community-log. Could it be so that it's not possible to replicate between community and enterprise edition? As you can see in my log below, it's possible to login on the remote server with the mysql client. But the replication get "Can't connect to MySQL server on '127.0.0.1' (13)" Is it any log file I have forgotten to look in, to get more info? [root@mysql1-av ~]# mysql -uroot -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 73 Server version: 5.0.67-community-log MySQL Community Edition (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. The version of the slave mysql [root@mysql1-av ~]# autossh -f -M 20001 -L 3307:10.200.200.200:3306 [email protected] -N [root@mysql1-av ~]# mysql -h127.0.0.1 --port 3307 -uroot -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 5189 Server version: 5.0.60sp1-enterprise-gpl-log MySQL Enterprise Server (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> Aborted Login to the master mysql with the mysql client over the ssh tunnel. [root@mysql1-av ~]# mysql -uroot -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 75 Server version: 5.0.67-community-log MySQL Community Edition (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> change master to master_host='127.0.0.1', MASTER_PORT=3307, master_user='xxxx', master_password='xxxx', master_log_file='bin.000001'; Query OK, 0 rows affected (0.00 sec) mysql> start slave; Query OK, 0 rows affected (0.00 sec) mysql> show slave status \G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: 127.0.0.1 Master_User: replNSG Master_Port: 3307 Connect_Retry: 60 Master_Log_File: bin.000001 Read_Master_Log_Pos: 4 Relay_Log_File: relay.000001 Relay_Log_Pos: 98 Relay_Master_Log_File: bin.000001 Slave_IO_Running: No Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 4 Relay_Log_Space: 98 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL 1 row in set (0.00 sec) Start the replication, but it breaks on IO. [root@mysql1-av ~]# tail /var/log/mysqld.log 120921 22:17:59 [Note] Slave I/O thread killed while connecting to master 120921 22:17:59 [Note] Slave I/O thread exiting, read up to log 'bin.000001', position 4 120921 22:17:59 [Note] Error reading relay log event: slave SQL thread was killed 120921 22:29:36 [Note] Slave SQL thread initialized, starting replication in log 'bin.000001' at position 4, relay log '/var/lib/mysql/relay.000001' position: 4 120921 22:29:36 [ERROR] Slave I/O thread: error connecting to master '[email protected]:3307': Error: 'Can't connect to MySQL server on '127.0.0.1' (13)' errno: 2003 retry-time: 60 retries: 86400 Because it can't connect to the master server.

    Read the article

  • Bash can't start a programme that's there and has all the right permissions

    - by Rory
    This is a gentoo server. There's a programme prog that can't execute. (Yes the execute permission is set) About the file $ ls prog $ ./prog bash: ./prog: No such file or directory $ file prog prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped $ pwd /usr/local/bin $ /usr/local/bin/prog bash: /usr/local/bin/prog: No such file or directory $ less prog | head ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Intel 80386 Version: 0x1 I have a fancy less, to show that it's an actual executable, here's some more data: $ xxd prog |head 0000000: 7f45 4c46 0101 0100 0000 0000 0000 0000 .ELF............ 0000010: 0200 0300 0100 0000 c092 0408 3400 0000 ............4... 0000020: 0401 0a00 0000 0000 3400 2000 0700 2800 ........4. ...(. 0000030: 2600 2300 0600 0000 3400 0000 3480 0408 &.#.....4...4... 0000040: 3480 0408 e000 0000 e000 0000 0500 0000 4............... 0000050: 0400 0000 0300 0000 1401 0000 1481 0408 ................ 0000060: 1481 0408 1300 0000 1300 0000 0400 0000 ................ 0000070: 0100 0000 0100 0000 0000 0000 0080 0408 ................ 0000080: 0080 0408 21f1 0500 21f1 0500 0500 0000 ....!...!....... 0000090: 0010 0000 0100 0000 40f1 0500 4081 0a08 ........@...@... and $ ls -l prog -rwxrwxr-x 1 1000 devs 725706 Aug 6 2007 prog $ ldd prog not a dynamic executable $ strace ./prog 1249403877.639076 execve("./prog", ["./prog"], [/* 27 vars */]) = -1 ENOENT (No such file or directory) 1249403877.640645 dup(2) = 3 1249403877.640875 fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) 1249403877.641143 fstat(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0 1249403877.641484 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b3b8954a000 1249403877.641747 lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek) 1249403877.642045 write(3, "strace: exec: No such file or dir"..., 40strace: exec: No such file or directory ) = 40 1249403877.642324 close(3) = 0 1249403877.642531 munmap(0x2b3b8954a000, 4096) = 0 1249403877.642735 exit_group(1) = ? About the server FTR the server is a xen domU, and the programme is a closed source linux application. This VM is a copy of another VM that has the same root filesystem (including this programme), that works fine. I've tried all the above as root and same problem. Did I mention the root filesystem is mounted over NFS. However it's mounted 'defaults,nosuid', which should include execute. Also I am able to run many other programmes from that mounted drive /proc/cpuinfo: processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 4 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 1 cpu MHz : 2992.692 cache size : 1024 KB fpu : yes fpu_exception : yes cpuid level : 5 wp : yes flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl cid cx16 xtpr bogmips : 5989.55 clflush size : 64 cache_alignment : 128 address sizes : 36 bits physical, 48 bits virtual power management: Example of a file that I can run I can run other programmes on that mounted filesystem on that server. 
For example: $ ls -l ls -rwxr-xr-x 1 root root 105576 Jul 25 17:14 ls $ file ls ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped $ ./ls attr cat cut echo getfacl ln more ... (you get the idea) ... rmdir sort tty $ less ls | head ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: EXEC (Executable file) Machine: Advanced Micro Devices X86-64 Version: 0x1
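
    An execve() that fails with ENOENT for a file that plainly exists almost always means the ELF interpreter recorded inside the binary is missing: prog is a 32-bit, dynamically linked executable, while the working ls (and so the installed userland) is 64-bit, so the 32-bit loader it asks for is probably not present -- which is also why ldd calls it "not a dynamic executable". A quick check from /usr/local/bin:

        readelf -l prog | grep 'program interpreter'
        #   [Requesting program interpreter: /lib/ld-linux.so.2]    <- typical for 32-bit

        ls -l /lib/ld-linux.so.2     # missing => no 32-bit loader on this system

        # On Gentoo that points at enabling 32-bit support (a multilib profile and the
        # emul-linux-x86 library sets); exact package names here are an assumption.
        eselect profile list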

    Read the article

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is to create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system. Questions Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option, would be to use Pexpect from within the Fabric script to set the password. Current Code # Fabric imports and host configuration excluded for brevity root_password = getpass.getpass("Root's password given by SliceManager: ") admin_username = prompt("Enter a username for the admin user to create: ") admin_password = getpass.getpass("Enter a password for the admin user: ") env.user = 'root' env.password = root_password # Create the admin group and add it to the sudoers file admin_group = 'admin' run('addgroup {group}'.format(group=admin_group)) run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format( group=admin_group) ) # Create the new admin user (default group=username); add to admin group run('adduser {username} --disabled-password --gecos ""'.format( username=admin_username) ) run('adduser {username} {group}'.format( username=admin_username, group=admin_group) ) # Set the password for the new admin user run('echo "{username}:{password}" | chpasswd'.format( username=admin_username, password=admin_password) ) Local System Terminal I/O $ fab config_rebuilt_slice Root's password given by SliceManager: Enter a username for the admin user to create: johnsmith Enter a password for the admin user: [xxx.xx.xx.xxx] run: addgroup admin [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ... [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos "" [xxx.xx.xx.xxx] out: Adding user `johnsmith' ... [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ... [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ... [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ... [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ... [xxx.xx.xx.xxx] run: adduser johnsmith admin [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ... [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd [xxx.xx.xx.xxx] run: passwd --lock root [xxx.xx.xx.xxx] out: passwd: password expiry information changed. Done. Disconnecting from [email protected]... done.
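
    On the stated concern: the SSH channel itself is encrypted, so the password is not clear text on the network, but it does appear in Fabric's local output and, briefly, in the remote process table while the echo | chpasswd command runs. One way to avoid both -- a sketch, assuming the local Python's crypt module supports SHA-512 ($6$), which is true on Linux but not on every platform -- is to hash the password locally and hand only the hash to usermod:

        import crypt
        import random
        import string

        from fabric.api import run

        def set_password_hashed(username, password):
            salt = ''.join(random.choice(string.ascii_letters + string.digits)
                           for _ in range(16))
            hashed = crypt.crypt(password, '$6$' + salt)
            # only the hash is echoed locally and sent over SSH
            run("usermod --password '%s' %s" % (hashed, username))

        # usage, replacing the chpasswd call above:
        # set_password_hashed(admin_username, admin_password)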

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >