Search Results

Search found 18604 results on 745 pages for 'export to xml'.


  • What is the opposite of JAXB? i.e. generating XML FROM classes?

    - by 8EM
    I am currently designing a solution to a problem I have. I need to dynamically generate an XML file on the fly from Java objects, much as JAXB generates Java classes from XML files, but in the opposite direction. Is there something out there already that does this? Alternatively, is there a way to 'save' the state of Java classes? The goal I am working towards is a dynamically changing GUI, where a user can redesign their GUI in the same way you can with iGoogle.
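
    For what it's worth, JAXB itself covers this direction: its Marshaller serializes annotated Java objects back out to XML. A minimal sketch (the GuiLayout bean and its fields are invented for illustration):

        import java.io.StringWriter;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlRootElement;

        public class MarshalDemo {

            // Hypothetical bean holding a piece of GUI state.
            @XmlRootElement
            public static class GuiLayout {
                public String widgetName;
                public int column;
            }

            public static void main(String[] args) throws Exception {
                GuiLayout layout = new GuiLayout();
                layout.widgetName = "weather";
                layout.column = 2;

                JAXBContext ctx = JAXBContext.newInstance(GuiLayout.class);
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

                StringWriter out = new StringWriter();
                m.marshal(layout, out); // object -> XML
                System.out.println(out);
            }
        }

    For plain bean-state snapshots without annotations, java.beans.XMLEncoder is another stock option.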


  • Jboss Seam: Enabling Debug page on WebLogic 10.3.2 (11g)

    - by Markos Fragkakis
    Hi all, SKIP TO UPDATE 3 I want to enable the Seam debug page on Weblogic 10.3.2 (11g). So, I have done the following: I have the jboss-seam and jboss-seam-debug jars as dependency in both my ejb and web maven projects (both are modules of my superproject) I put this context parameter in my web.xml: <context-param> <param-name>org.jboss.seam.core.init.debug</param-name> <param-value>true</param-value> </context-param> Now, when I hit the URL of my application, I get the debug page with this exception (full stacktrace at the end of the post): Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)" From posts I read it seems that this is somehow related to two jars of jboss-seam or jboss-seam-debug being in the classpath. I opened my ear file and only one of each is present (in the ear) whereas the war itself has no libraries in the WEB-INF/lib. I have also read of another way to initialize debug page using the components.xml. I also tried to include the following components.xml in the WEB-INF, but it didn't work either: <?xml version="1.0" encoding="UTF-8"?> <components xmlns="http://jboss.com/products/seam/components" xmlns:core="http://jboss.com/products/seam/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd"> <core:init debug="true"/> </components> Any suggestions on what to do to enable the debug page correctly? Cheers! Full stacktrace: org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163) org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175) org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91) org.jboss.seam.contexts.PageContext.remove(PageContext.java:105) org.jboss.seam.Component.newInstance(Component.java:2141) org.jboss.seam.Component.getInstance(Component.java:2021) org.jboss.seam.Component.getInstance(Component.java:2000) org.jboss.seam.Component.getInstance(Component.java:1994) org.jboss.seam.Component.getInstance(Component.java:1967) org.jboss.seam.Component.getInstance(Component.java:1962) org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92) org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84) org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57) org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391) org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230) org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196) com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114) com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:265) weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26) 
weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178) org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290) org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388) org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83) org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592) weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202) weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108) weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432) weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) weblogic.work.ExecuteThread.run(ExecuteThread.java:173) UPDATE 1: Now the debug page does not appear at all. When I ask for http://localhost/myapp/debug.xhtml I get a page with: myapp/debug.xhtml the same as any page that does not exist. I opened the .ear and the following jboss jars are in: jboss-seam-debug-2.2.0.GA.jar jboss-el-1.0_02.CR4.jar jboss-seam-2.2.0.GA.jar My current configuration: <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org /2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5"> <display-name>PRS 6.0</display-name> <session-config> <session-timeout>30</session-timeout> </session-config> <!-- The default behavior of JSF is to map the incoming request for a JSF view identifier (view ID for short) to a JSP file with the file extension .jsp. 
To get JSF to look for a Facelets template instead, we must register the .xhtml extension as the default suffix for JSF views --> <context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value> </context-param> <context-param> <param-name>javax.faces.STATE_SAVING_METHOD</param-name> <param-value>server</param-value> </context-param> <context-param> <param-name>javax.faces.CONFIG_FILES</param-name> <param-value> /WEB-INF/faces-config/application.xml </param-value> </context-param> <context-param> <param-name>facelets.REFRESH_PERIOD</param-name> <param-value>2</param-value> </context-param> <context-param> <param-name>facelets.DEVELOPMENT</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>facelets.SKIP_COMMENTS</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>com.sun.faces.verifyObjects</param-name> <param-value>false</param-value> </context-param> <context-param> <param-name>org.ajax4jsf.VIEW_HANDLERS</param-name> <param-value>com.sun.facelets.FaceletViewHandler</param-value> </context-param> <context-param> <param-name>org.ajax4jsf.COMPRESS_SCRIPT</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>org.ajax4jsf.COMPRESS_STYLE</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>org.ajax4jsf.xmlparser.ORDER</param-name> <param-value>NONE</param-value> </context-param> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <context-param> <param-name>org.richfaces.CONTROL_SKINNING</param-name> <param-value>enable</param-value> </context-param> <context-param> <param-name>org.richfaces.LoadStyleStrategy</param-name> <param-value>ALL</param-value> </context-param> <context-param> <param-name>org.richfaces.LoadScriptStrategy</param-name> <param-value>ALL</param-value> </context-param> <!-- Seam Filter --> <!-- (MUST BE FIRST)--> <filter> <filter-name>Seam Filter</filter-name> <filter-class>org.jboss.seam.servlet.SeamFilter</filter-class> </filter> <filter-mapping> <filter-name>Seam Filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <!-- RichFaces filter --> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> <init-param> <description>Set the size limit for uploaded files as attachments in bytes. 
(max 5MB)</description> <param-name>maxRequestSize</param-name> <param-value>5242880</param-value> </init-param> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> <listener> <listener-class> XX.XXXX.XXX.prs.web.listeners.ResourceInitializationListener</listener-class> </listener> <listener> <listener-class>com.sun.faces.config.ConfigureListener</listener-class> </listener> <listener> <listener-class>XX.XXXX.XXX.prs.web.listeners.EJBInjectionListener</listener-class> </listener> <!-- Seam Listener--> <listener> <listener-class>org.jboss.seam.servlet.SeamListener</listener-class> </listener> <!-- Faces Servlet --> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.xhtml</url-pattern> </servlet-mapping> <!-- Seam Resource Servlet --> <servlet> <servlet-name>Seam Resource Servlet</servlet-name> <servlet-class>org.jboss.seam.servlet.SeamResourceServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>Seam Resource Servlet</servlet-name> <url-pattern>/seam/resource/*</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.xhtml</welcome-file> </welcome-file-list> </web-app> faces.config <?xml version="1.0" encoding="UTF-8"?> <faces-config xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-facesconfig_1_2.xsd" version="1.2"> <application> <locale-config> <default-locale>en</default-locale> <supported-locale>en</supported-locale> </locale-config> <view-handler>com.sun.facelets.FaceletViewHandler</view-handler> <el-resolver>org.jboss.seam.el.SeamELResolver</el-resolver> <resource-bundle> <base-name>XX.XXXX.XXX.prs.web.messages.messages</base-name> <var>msgs</var> </resource-bundle> <resource-bundle> <base-name>XX.XXXX.XXX.prs.web.messages.validation</base-name> <var>val</var> </resource-bundle> </application> <lifecycle> <phase-listener>XX.XXXX.XXX.prs.web.listeners.SetFocusListener</phase-listener> </lifecycle> <!-- <lifecycle>--> <!-- <phase-listener>XX.XXXX.XXX.prs.web.listeners.DebugPhaseListener</phase-listener> --> <converter> <converter-for-class>XX.XXXX.XXX.prs.model.Applicant</converter-for-class> <converter-class> XX.XXXX.XXX.prs.web.common.converters.ApplicantConverter</converter-class> </converter> <validator> <validator-id>EmailValidator</validator-id> <validator-class>XX.XXXX.XXX.prs.web.common.validators.EmailValidator</validator-class> </validator> </faces-config> components.xml <?xml version="1.0" encoding="UTF-8"?> <components xmlns="http://jboss.com/products/seam/components" xmlns:core="http://jboss.com/products/seam/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd"> <core:init debug="true" /> <core:manager concurrent-request-timeout="500" conversation-timeout="1200000" conversation-id-parameter="cid" parent-conversation-id-parameter="pid" /> </components> UPDATE 2: These guys here have the same problem. They have an outer EAR project containing the inner WAR project. They discuss that this may be related to how jars end up in the projects.
I use Maven, and I had set it to create "Skinny Wars", that is excluding all jar dependencies from the inner WAR project, so that it remains small in size. All its dependencies are contained in the EAR and are used by all other modules. I changed the settings of the maven-war-plugin to leave inside the war the web-specific jars (the ones mentioned in the link, like RichFaces, jboss-seam-debug, Facelets etc). However, the problem has reverted to its previous form. I now get the debug page, whatever link I press, with the initial exception. Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)" UPDATE 3: The structure of the application .ear is the following: application.ear |--> APP-INF | |--> classes |--> lib (all jar dependencies go here, including the WAR dependencies, EJB module dependencies) |-->META_INF | |--> application.xml | |--> data-sources.xml | |--> MANIFEST.MF | |--> weblogic.xml | |--> weblogic-application.xml |--> jboss-seam-2.2.0.GA.jar |--> myEjbModule1.jar |--> myEjbModule2.jar |--> myEjbModule3.jar |--> myEjbModule4.jar |--> myWar.war (NO libraries in WEB-INF/lib, finds everything in EAR/lib) When deploying .ear application WITHOUT debug enabled (not including the jboss-seam-debug.jar in the ear), the application is loaded correctly. When deploying WITH jboss-seam-debug.jar in the EAR (EAR/lib directory), the application does not appear, but ONLY the debug page with the following exception (stacktrace at the end): Exception during request processing: Caused by java.lang.IllegalStateException with message: "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)" When "JBoss-izing" the same EAR (remove hibernate jars, which are provided by JBoss, and move all libraries from EAR/lib to EAR root), the application loads correctly. BOTH the application AND the debug page appear correctly. 
Full stacktrace: org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163) org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175) org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91) org.jboss.seam.contexts.PageContext.remove(PageContext.java:105) org.jboss.seam.Component.newInstance(Component.java:2141) org.jboss.seam.Component.getInstance(Component.java:2021) org.jboss.seam.Component.getInstance(Component.java:2000) org.jboss.seam.Component.getInstance(Component.java:1994) org.jboss.seam.Component.getInstance(Component.java:1967) org.jboss.seam.Component.getInstance(Component.java:1962) org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92) org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84) org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57) org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391) org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230) org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196) com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114) com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:265) weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292) weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83) org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178) org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290) org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388) org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515) org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158) 
weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27) weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56) weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592) weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202) weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108) weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432) weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
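
    A side note on the skinny-war setup from UPDATE 2: rather than excluding every jar from the WAR, the maven-war-plugin can be told to strip only the jars the EAR already provides, so the web-tier jars (jboss-seam-debug, RichFaces, Facelets) stay in WEB-INF/lib and only one copy is visible to the web classloader. A sketch, with illustrative artifact names:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <!-- Strip only the jars that live in EAR/lib; everything else
                 stays in WEB-INF/lib. Names here are illustrative. -->
            <packagingExcludes>WEB-INF/lib/jboss-seam-2.2.0.GA.jar,WEB-INF/lib/hibernate*.jar</packagingExcludes>
          </configuration>
        </plugin>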


  • Apache Tomcat Server failure

    - by Kenneth Ordona
    I'm trying to set up Apache Tomcat 6 with SSL and once I edited the server.xml file to include the following definitions the server started to fail as soon as I hit startup.bat: <-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -- < Connector protocol="org.apache.coyote.http11.Http11Protocol" port="8445" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw" clientAuth="false" sslProtocol="TLS"/ The logs that I have are as follows: Jul 05, 2012 1:52:15 PM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jdk1.7.0_05\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;. Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; 
systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.load(Catalina.java:562) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup. org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM 
org.apache.catalina.startup.Catalina load WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568) at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642) at org.apache.catalina.startup.Catalina.load(Catalina.java:524) at org.apache.catalina.startup.Catalina.start(Catalina.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina start SEVERE: Cannot start server. Server instance is not configured. Does anyone have an idea why this is happening? I believe it has to do with the configuration of my connector. I'm pretty new to this so any help would be much appreciated.
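
    The parse error at line 91 is consistent with the snippet exactly as quoted: the comment opens with <-- instead of <!-- and never closes with -->, and the Connector element has lost its angle brackets, so server.xml is no longer well-formed XML. A well-formed version of the same definition would look like this:

        <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8445 -->
        <Connector protocol="org.apache.coyote.http11.Http11Protocol"
                   port="8445" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw"
                   clientAuth="false" sslProtocol="TLS"/>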


  • Spring security ldap: no declaration can be found for element 'ldap-authentication-provider'

    - by wuntee
    Following the spring-security documentation: http://static.springsource.org/spring-security/site/docs/3.0.x/reference/ldap.html I am trying to set up LDAP authentication (very simple - just need to know if a user is authenticated or not, no authorities mapping needed) and have put this in my applicationContext-security.xml file: <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> ... <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" /> <ldap-authentication-provider user-search-filter="(samaccountname={0})" user-search-base="dc=company,dc=com"/> The problem I run into is that it doesn't seem to like ldap-authentication-provider; I feel like I may be missing some configuration in the beans definition. The error I get when trying to run the application is: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 27 in XML document from ServletContext resource [/WEB-INF/rvaContext-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:465) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:395) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467) at
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardHost.start(StandardHost.java:722) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:593) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:236) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:172) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:382) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:316) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:429) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3185) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1955) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:725) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:322) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1693) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:368) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:834) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:148) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:250) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388) ... 28 more Can anyone see what im missing? Also, is that all I need to add to the security bean in order to authenticate against ldap?
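
    That schema error usually means the element itself is known but sits in the wrong place: in the Spring Security 3.0 schema, ldap-authentication-provider is declared as a child of authentication-manager, not as a top-level element. A sketch reusing the attributes from the question:

        <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" />

        <authentication-manager>
            <ldap-authentication-provider
                user-search-filter="(samaccountname={0})"
                user-search-base="dc=company,dc=com" />
        </authentication-manager>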


  • How can I best share Ant targets between projects?

    - by Rob Hruska
    Is there a well-established way to share Ant targets between projects? I have a solution currently, but it's a bit inelegant. Here's what I'm doing so far. I've got a file called ivy-tasks.xml hosted on a server on our network. This file contains, among other targets, boilerplate tasks for managing project dependencies with Ivy. For example: <project name="ant-ivy-tasks" default="init-ivy" xmlns:ivy="antlib:org.apache.ivy.ant"> ... <target name="ivy-download" unless="skip.ivy.download"> <mkdir dir="${ivy.jar.dir}"/> <echo message="Installing ivy..."/> <get src="http://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar" dest="${ivy.jar.file}" usetimestamp="true"/> </target> <target name="ivy-init" depends="ivy-download" description="-> Defines ivy tasks and loads global settings"> <path id="ivy.lib.path"> <fileset dir="${ivy.jar.dir}" includes="*.jar"/> </path> <taskdef resource="org/apache/ivy/ant/antlib.xml" uri="antlib:org.apache.ivy.ant" classpathref="ivy.lib.path"/> <ivy:settings url="http://myserver/ivy/settings/ivysettings-user.xml"/> </target> ... </project> The reason this file is hosted is because I don't want to: Check the file into every project that needs it - this will result in duplication, making maintaining the targets harder. Have my build.xml depend on checking out a project from source control - this will make the build have more XML at the top-level just to access the file. What I do with this file in my projects' build.xmls is along the lines of: <property name="download.dir" location="download"/> <mkdir dir="${download.dir}"/> <echo message="Downloading import files to ${download.dir}"/> <get src="http://myserver/ivy/ivy-tasks.xml" dest="${download.dir}/ivy-tasks.xml" usetimestamp="true"/> <import file="${download.dir}/ivy-tasks.xml"/> The "dirty" part about this is that I have to do the above steps outside of a target, because the import task must be at the top-level. Plus, I still have to include this XML in all of the build.xml files that need it (i.e. there's still some amount of duplication). On top of that, there might be additional situations where I might have common (non-Ivy) tasks that I'd like imported. If I were to provide these tasks using Ivy's dependency management I'd still have problems, since by the time I'd have resolved the dependencies I would have to be inside of a target in my build.xml, and unable to import (due to the constraint mentioned above). Is there a better solution for what I'm trying to accomplish?
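
    One possible cleanup, if upgrading is an option: since Ant 1.8 the import task accepts nested resources, so the shared file can be pulled straight from the URL at the top level, removing the manual get step. A sketch against the question's URL:

        <!-- Ant 1.8+: import a build fragment directly from a URL. -->
        <import>
          <url url="http://myserver/ivy/ivy-tasks.xml"/>
        </import>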


  • how do I access XHR responseBody from Javascript?

    - by Cheeso
    I've got a web page that uses XMLHttpRequest to download a binary resource. Because it's binary I'm trying to use xhr.responseBody to access the bytes. I've seen a few posts suggesting that it's impossible to access the bytes directly from Javascript. This sounds crazy to me. Weirdly, xhr.responseBody is accessible from VBScript, so the suggestion is that I must define a method in VBScript in the webpage, and then call that method from Javascript. See jsdap for one example. var IE_HACK = (/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)); if (IE_HACK) document.write('<script type="text/vbscript">\n\ Function BinaryToArray(Binary)\n\ Dim i\n\ ReDim byteArray(LenB(Binary))\n\ For i = 1 To LenB(Binary)\n\ byteArray(i-1) = AscB(MidB(Binary, i, 1))\n\ Next\n\ BinaryToArray = byteArray\n\ End Function\n\ </script>'); var xml = (window.XMLHttpRequest) ? new XMLHttpRequest() // Mozilla/Safari/IE7+ : (window.ActiveXObject) ? new ActiveXObject("MSXML2.XMLHTTP") // IE6 : null; // Commodore 64? xml.open("GET", url, true); if (xml.overrideMimeType) { xml.overrideMimeType('text/plain; charset=x-user-defined'); } else { xml.setRequestHeader('Accept-Charset', 'x-user-defined'); } xml.onreadystatechange = function() { if (xml.readyState == 4) { if (!binary) { callback(xml.responseText); } else if (IE_HACK) { // call a VBScript method to copy every single byte callback(BinaryToArray(xml.responseBody).toArray()); } else { callback(getBuffer(xml.responseText)); } } }; xml.send(''); Is this really true? The best way? copying every byte? For a large binary stream that's not gonna be very efficient. There is also a possible technique using ADODB.Stream, which is a COM equivalent of a MemoryStream. See here for an example. It does not require VBScript but does require a separate COM object. if (typeof (ActiveXObject) != "undefined" && typeof (httpRequest.responseBody) != "undefined") { // Convert httpRequest.responseBody byte stream to shift_jis encoded string var stream = new ActiveXObject("ADODB.Stream"); stream.Type = 1; // adTypeBinary stream.Open (); stream.Write (httpRequest.responseBody); stream.Position = 0; stream.Type = 1; // adTypeBinary; stream.Read.... /// ???? what here } I don't think that's gonna work - ADODB.Stream is disabled on most machines these days. In The IE8 developer tools - the IE equivalent of Firebug - I can see the responseBody is an array of bytes and I can even see the bytes themselves. The data is right there. I don't understand why I can't get to it. Is it possible for me to read it with responseText? hints? (other than defining a VBScript method)
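
    For the non-IE branch, the usual trick behind a helper like the getBuffer the snippet calls (shown here as a guess at its implementation) is that the x-user-defined charset maps each byte into the low 8 bits of a character, so masking recovers the raw bytes from responseText:

        // One byte per character; the & 0xff strips whatever the
        // x-user-defined charset put in the high bits.
        function getBuffer(responseText) {
            var bytes = [];
            for (var i = 0; i < responseText.length; i++) {
                bytes.push(responseText.charCodeAt(i) & 0xff);
            }
            return bytes;
        }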


  • rails: "unknown action" message when action is clearly specified

    - by john
    hi, I had hard time to figure out why I've been getting "unknown action" error message when I was do some editing: Unknown action No action responded to 11. Actions: bin, create, destroy, edit, index, new, observe_new, show, tag, update, and vote you can see that Rails did mention each action in the above list - update. And in my form, I did specify action = "update". I wonder if some friends could kindly help me with the missing links... here is the code: edit.rhtml <h1>Editing tip</h1> <% form_tag :action => 'update', :id => @tip do %> <%= render :partial => 'form' %> <p> <%= submit_tag_or_cancel 'Save Changes' %> </p> <% end %> _form.rhtml <%= error_messages_for :tip %> <p><label>Title<br/> <%= text_field :tip, :title %></label></p> <p><label>Categories<br/> <%= select_tag('categories[]', options_for_select(Category.find(:all).collect {|c| [c.name, c.id] }, @tip.category_ids), :multiple => true ) %></label></p> <p><label>Abstract:<br/> <%= text_field_with_auto_complete :tip, :abstract %></label></p> <p><label>Name: <br/> <%= text_field :tip, :name %></label></p> <p><label>Link: <br/> <%= text_field :tip, :link %></label></p> <p><label>Content<br/> <%= text_area :tip, :content, :rows => 5 %></label></p> <p><label>Tags <span>(space separated)</span><br/> <%= text_field_tag 'tags', @tip.tag_list, :size => 40 %></label></p> class TipsController < ApplicationController before_filter :authenticate, :except => %w(index show) # GET /tips # GET /tips.xml def index @tips = Tip.all respond_to do |format| format.html # index.html.erb format.xml { render :xml => @tips } end end # GET /tips/1 # GET /tips/1.xml def show @tip = Tip.find_by_permalink(params[:permalink]) respond_to do |format| format.html # show.html.erb format.xml { render :xml => @tip } end end # GET /tips/new # GET /tips/new.xml def new @tip = session[:tip_draft] || current_user.tips.build end def create #tip = current_user.tips.build(params[:tip]) #tipMail=params[:email] #if tipMail # TipMailer.deliver_email_friend(params[:email], params[:name], tip) # flash[:notice] = 'Your friend has been notified about this tip' #end @tip = current_user.tips.build(params[:tip]) @tip.categories << Category.find(params[:categories]) unless params[:categories].blank? @tip.tag_with(params[:tags]) if params[:tags] if @tip.save flash[:notice] = 'Tip was successfully created.' session[:tip_draft] = nil redirect_to :action => 'index' else render :action => 'new' end end def edit @tip = Tip.find(params[:id]) end def update @tip = Tip.find(params[:id]) respond_to do |format| if @tip.update_attributes(params[:tip]) flash[:notice] = 'Tip was successfully updated.' format.html { redirect_to(@tip) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @tip.errors, :status => :unprocessable_entity } end end end def destroy @tip = Tip.find(params[:id]) @tip.destroy respond_to do |format| format.html { redirect_to(tips_url) } format.xml { head :ok } end end def observe_new session[:tip_draft] = current_user.tips.build(params[:tip]) render :nothing => true end end
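
    A likely culprit, offered as a guess: with RESTful routes, update is reached by PUT /tips/11, but a plain form_tag issues a POST, so the request falls through to the default /:controller/:action/:id route and "11" is taken as the action name, which matches the error. Declaring the method (or switching to form_for @tip) makes Rails emit the hidden _method field:

        <h1>Editing tip</h1>
        <% form_tag({:action => 'update', :id => @tip}, :method => :put) do %>
          <%= render :partial => 'form' %>
          <p><%= submit_tag_or_cancel 'Save Changes' %></p>
        <% end %>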


  • xsl:include template with no default namespace causes xmlns=""

    - by CraftyFella
    Hi, I've got a problem with xsl:include and default namespaces which is causing the final XML document to contain nodes with xmlns="". In this scenario I have one source document, which is plain old XML and doesn't have a namespace: <?xml version="1.0" encoding="UTF-8"?> <SourceDoc> <Description>Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <Description/> <Title>Hello I'm the title</Title> </SourceDoc> This document is transformed into 2 different XML documents, each with its own default namespace. First Document: <?xml version="1.0" encoding="utf-8"?> <OutputDocType1 xmlns="http://MadeupNS1"> <Description>Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <Title>Hello I'm the title</Title> </OutputDocType1> Second Document: <?xml version="1.0" encoding="utf-8"?> <OutputDocType2 xmlns="http://MadeupNS2"> <Description>Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <DocTitle>Hello I'm the title</DocTitle> </OutputDocType2> I want to be able to re-use the template for descriptions in both of the transforms, as it's the same logic for both types of document. To do this I created a template file which was pulled in via xsl:include in the other 2 transformations: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes" method="xml"/> <xsl:template match="Description[. != '']"> <Description> <xsl:value-of select="."/> </Description> </xsl:template> </xsl:stylesheet> Now the problem here is that this shared transformation can't have a default namespace, as it will be different depending on which of the calling transformations calls it. E.g. for the First Document transformation: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes" method="xml"/> <xsl:template match="SourceDoc"> <OutputDocType1 xmlns="http://MadeupNS1"> <xsl:apply-templates select="Description"/> <xsl:if test="Title"> <Title> <xsl:value-of select="Title"/> </Title> </xsl:if> </OutputDocType1> </xsl:template> <xsl:include href="Template.xsl"/> </xsl:stylesheet> This actually outputs as follows: <?xml version="1.0" encoding="utf-8"?> <OutputDocType1 xmlns="http://MadeupNS1"> <Description xmlns="">Hello I'm the source description</Description> <Description xmlns="">Hello I'm the source description 2</Description> <Title>Hello I'm the title</Title> </OutputDocType1> Here is the problem: on the Description lines I get an xmlns="". Does anyone know how to solve this issue? Thanks Dave
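
    One way out, sketched under the assumption that the element names are shared and only the namespaces differ: switch xsl:include to xsl:import and have the shared file build its output elements through xsl:element with a namespace parameter, which each importing stylesheet overrides (import precedence lets the importing definition win):

        <!-- Template.xsl (shared): build Description in a namespace
             supplied by the importing stylesheet. -->
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:param name="target-ns" select="''"/>
          <xsl:template match="Description[. != '']">
            <xsl:element name="Description" namespace="{$target-ns}">
              <xsl:value-of select="."/>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

        <!-- In the first transformation: xsl:import instead of xsl:include,
             so this param definition takes precedence over the shared one. -->
        <xsl:import href="Template.xsl"/>
        <xsl:param name="target-ns" select="'http://MadeupNS1'"/>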


  • How to solve Memory leaks in Lib Xml Parser in objective-C where the list is returned?

    - by Madan Mohan
    Hi Guys, I got leaks in Lib Xml Parser while retrieving the data from the net, Here in the below code, I have allocated the list - (void)getCustomersList { // make an operation so we can push it into the queue SEL method = @selector(parseForData); NSInvocationOperation *op = [[NSInvocationOperation alloc] initWithTarget:self selector:method object:nil]; customersTempList = [[NSMutableArray alloc] initWithCapacity:20];// allocated list [self.retrieverQueue addOperation:op]; [op release]; } // return each recode // in parser .m class one of the condition in endElement where it shows a leak. else if(0 == strncmp((const char *)localname, kCustomerElement, kCustomerElementLength)) { [customersTempList addObject:customer]; printf("\n no of objects in temp list:%d", [customersTempList count]); if ([customersTempList count] == 20) { NSMutableArray* argsList = [customersTempList copy];//////////////////////here it is showing leak. printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } // returned the list after all the records are stored in the list else if(0 == strncmp((const char *)localname, kCustomersElement, kCustomersElementLength)) { printf("\n Calling reload data with %d new objects", [customersTempList count]); NSMutableArray* argsList = [customersTempList copy]; printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } please help me out of this, Thanks, Madan.
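
    The leak the tool flags is consistent with copy semantics: -copy returns an object with a +1 retain count that this code never releases. Since retainArguments makes the invocation keep its own reference to argsList, the local copy can be released as soon as the invocation is scheduled; a sketch of the pattern:

        NSMutableArray *argsList = [customersTempList copy]; // we own this (+1)
        // ... configure the NSInvocation, including [invocation retainArguments] ...
        [invocation performSelectorOnMainThread:@selector(invoke)
                                     withObject:NULL
                                  waitUntilDone:NO];
        [argsList release]; // safe: the invocation retained its arguments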


  • Create and Share a File (Also a mysterious error)

    - by Kirk
    My goal is to create a XML file and then send it through the share Intent. I'm able to create a XML file using this code FileOutputStream outputStream = context.openFileOutput(fileName, Context.MODE_WORLD_READABLE); PrintStream printStream = new PrintStream(outputStream); String xml = this.writeXml(); // get XML here printStream.println(xml); printStream.close(); I'm stuck trying to retrieve a Uri to the output file in order to share it. I first tried to access the file by converting the file to a Uri File outFile = context.getFileStreamPath(fileName); return Uri.fromFile(outFile); This returns file:///data/data/com.my.package/files/myfile.xml but I cannot appear to attach this to an email, upload, etc. If I manually check the file length, it's proper and shows there is a reasonable file size. Next I created a content provider and tried to reference the file and it isn't a valid handle to the file. The ContentProvider doesn't ever seem to be called a any point. Uri uri = Uri.parse("content://" + CachedFileProvider.AUTHORITY + "/" + fileName); return uri; This returns content://com.my.package.provider/myfile.xml but I check the file and it's zero length. How do I access files properly? Do I need to create the file with the content provider? If so, how? Update Here is the code I'm using to share. If I select Gmail, it does show as an attachment but when I send it gives an error Couldn't show attachment and the email that arrives has no attachment. public void onClick(View view) { Log.d(TAG, "onClick " + view.getId()); switch (view.getId()) { case R.id.share_cancel: setResult(RESULT_CANCELED, getIntent()); finish(); break; case R.id.share_share: MyXml xml = new MyXml(); Uri uri; try { uri = xml.writeXmlToFile(getApplicationContext(), "myfile.xml"); //uri is "file:///data/data/com.my.package/files/myfile.xml" Log.d(TAG, "Share URI: " + uri.toString() + " path: " + uri.getPath()); File f = new File(uri.getPath()); Log.d(TAG, "File length: " + f.length()); // shows a valid file size Intent shareIntent = new Intent(); shareIntent.setAction(Intent.ACTION_SEND); shareIntent.putExtra(Intent.EXTRA_STREAM, uri); shareIntent.setType("text/plain"); startActivity(Intent.createChooser(shareIntent, "Share")); } catch (FileNotFoundException e) { e.printStackTrace(); } break; } } I noticed that there is an Exception thrown here from inside createChooser(...), but I can't figure out why it's thrown. E/ActivityThread(572): Activity com.android.internal.app.ChooserActivity has leaked IntentReceiver com.android.internal.app.ResolverActivity$1@4148d658 that was originally registered here. Are you missing a call to unregisterReceiver()? I've researched this error and can't find anything obvious. Both of these links suggest that I need to unregister a receiver. Chooser Activity Leak - Android Why does Intent.createChooser() need a BroadcastReceiver and how to implement? I have a receiver setup, but it's for an AlarmManager that is set elsewhere and doesn't require the app to register / unregister. Code for openFile(...) In case it's needed, here is the content provider I've created. public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException { String fileLocation = getContext().getCacheDir() + "/" + uri.getLastPathSegment(); ParcelFileDescriptor pfd = ParcelFileDescriptor.open(new File(fileLocation), ParcelFileDescriptor.MODE_READ_ONLY); return pfd; }
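
    One mismatch worth checking, assuming the provider shown is the one being hit: the file is written with openFileOutput(), which targets the internal files directory, while openFile() resolves the name against getCacheDir(), a different directory, which would explain the zero-length reads. Resolving against the same directory looks like this:

        @Override
        public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
            // openFileOutput() writes under the internal files dir, so read
            // the file back from there, not from the cache dir.
            File file = getContext().getFileStreamPath(uri.getLastPathSegment());
            return ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
        }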


  • Bogus InvalidOperationException (in a DataServiceRequestException)

    - by Andrei Rinea
    I am having a hard time with ADO.NET Data Services (formerly code-named Astoria) as it gives me a bogus exception when I try to insert a new entity from the silverlight client and trying in a clean project (the same code) doesn't. In both cases, however, data is correctly inserted into the database. Using Fiddler (an HTTP debugger I could see that there is no problem in the HTTP communication as I will show later in this question. The code : var ctx = new MyProject123Entities(new Uri("http://andreiri/MyProject.Data/Data.svc")); var i = new Zone() { Data = DateTime.Now, IdElement = 1 }; ctx.AddToZone(i); i.StareZone = new StareZone() { IdStareZone = 1 }; ctx.AttachTo("StareZone", i.StareZone); ctx.SetLink(i, "StareZone", i.StareZone); i.TipZone = new TipZone() { IdTipZone = 1 }; ctx.AttachTo("TipZone", i.TipZone); ctx.SetLink(i, "TipZone", i.TipZone); i.User = new User() { IdUser = 2 }; ctx.AttachTo("User", i.User); ctx.SetLink(i, "User", i.User); ctx.BeginSaveChanges(r =] ctx.EndSaveChanges(r), null); when run the last line (ctx.EndSaveChanges(r)) will throw the following exception : System.Data.Services.Client.DataServiceRequestException was unhandled by user code Message="An error occurred while processing this request." StackTrace: at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleBatchResponse() at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.EndRequest() at System.Data.Services.Client.DataServiceContext.EndSaveChanges(IAsyncResult asyncResult) at MyProject.MainPage.[]c__DisplayClassd6.[]c__DisplayClassd8.[dashboard_PostZoneCurent]b__d5(IAsyncResult r) at System.Data.Services.Client.BaseAsyncResult.HandleCompleted() at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleCompleted(PerRequest pereq) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndRead(IAsyncResult asyncResult) at System.IO.Stream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult) InnerException: System.InvalidOperationException Message="The context is already tracking a different entity with the same resource Uri." StackTrace: at System.Data.Services.Client.DataServiceContext.AttachTo(Uri identity, Uri editLink, String etag, Object entity, Boolean fail) at System.Data.Services.Client.MaterializeAtom.MoveNext() at System.Data.Services.Client.DataServiceContext.HandleResponsePost(ResourceBox entry, MaterializeAtom materializer, Uri editLink, String etag) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.[HandleBatchResponse]d__1d.MoveNext() InnerException: (there is no further information regarding the exception although the ADo.NET Data Service is configured to return detailed informations) However the row is inserted correctly and completely in the database. 
Using Fiddler I can see that the request:

<?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="MyProject123Model.Zone" /> <title /> <updated>2009-09-11T13:36:46.917157Z</updated> <author> <name /> </author> <id /> <link href="http://andreiri/MyProject.Data/Data.svc/StareZone(1)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/TipZone(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/User(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" /> <content type="application/xml"> <m:properties> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> <d:IdZone m:type="Edm.Int32">0</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:X_Repost m:type="Edm.Decimal" m:null="true" /> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> </m:properties> </content> </entry>

is accepted and a successful response is returned:

HTTP/1.1 201 Created Date: Fri, 11 Sep 2009 13:36:47 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 DataServiceVersion: 1.0; Location: http://andreiri/MyProject.Data/Data.svc/Zone(75) Cache-Control: no-cache Content-Type: application/atom+xml;charset=utf-8 Content-Length: 2213

<?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xml:base="http://andreiri/MyProject.Data/Data.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <id>http://andreiri/MyProject.Data/Data.svc/Zone(75)</id> <title type="text"></title> <updated>2009-09-11T13:36:47Z</updated> <author> <name /> </author> <link rel="edit" title="Zone" href="Zone(75)" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CenterZone" type="application/atom+xml;type=feed" title="CenterZone" href="Zone(75)/CenterZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/ZoneMobil" type="application/atom+xml;type=feed" title="ZoneMobil" href="Zone(75)/ZoneMobil" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" title="StareZone" href="Zone(75)/StareZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" title="TipZone" href="Zone(75)/TipZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" title="User" href="Zone(75)/User" /> <category term="MyProject123Model.Zone" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:IdZone m:type="Edm.Int32">75</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:X_Repost m:type="Edm.Decimal" m:null="true" /> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> </m:properties> </content> </entry>

Why do I get an exception? And why does using the same code in a clean project not throw it?

    Read the article

  • How to open new view (call an activity) from options menu defined in XML? (android)

    - by Portablejim
I can't seem to open a new view from an options menu item. The program keeps crashing as it applies the intent and listener to the item. I am just beginning, so please be nice. The current view is mnfsms, and the view I am trying to open is mnfsms_settings. I am developing for 1.5. Could someone please help me get the menu working?

The menu (called options_menu.xml):

<menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/settings_button" android:title="Settings" android:icon="@android:drawable/ic_menu_preferences" /> <item android:id="@+id/about_button" android:title="About" android:icon="@android:drawable/ic_menu_myplaces" /> </menu>

The main view (called mnfsms.java):

package com.example.mnfsms;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;

public class mnfsms extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        /* OnClickListener myocl = new View.OnClickListener() {
            public void onClick(View v){
                Intent myi = new Intent(mnfsms.this, mnfsms_settings.class);
                startActivity(myi);
            }
        };*/
    }

    public boolean onCreateOptionsMenu(Menu menu) {
        MenuInflater inflater = getMenuInflater();
        inflater.inflate(R.menu.options_menu, menu);
        MenuItem mi_settings = (MenuItem)findViewById(R.id.settings_button);
        mi_settings.setIntent(new Intent(this, mnfsms_settings.class));
        return true;
    }
}

The manifest:

<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.mnfsms" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".mnfsms" android:label="@string/main_window_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".mnfsms_settings" android:label="string/main_window_name"> </activity> </application> <uses-sdk android:minSdkVersion="3" /> </manifest>

The stacktrace:

01-06 15:07:58.045: ERROR/AndroidRuntime(2123): Uncaught handler: thread main exiting due to uncaught exception
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): java.lang.NullPointerException
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.example.mnfsms.mnfsms.onCreateOptionsMenu(mnfsms.java:30)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.app.Activity.onCreatePanelMenu(Activity.java:2038)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.policy.impl.PhoneWindow.preparePanel(PhoneWindow.java:421)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.policy.impl.PhoneWindow.onKeyDownPanel(PhoneWindow.java:664)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.policy.impl.PhoneWindow.onKeyDown(PhoneWindow.java:1278)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1735)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.view.ViewRoot.deliverKeyEventToViewHierarchy(ViewRoot.java:2188)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.view.ViewRoot.handleFinishedEvent(ViewRoot.java:2158)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.view.ViewRoot.handleMessage(ViewRoot.java:1490)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.os.Handler.dispatchMessage(Handler.java:99)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.os.Looper.loop(Looper.java:123)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at android.app.ActivityThread.main(ActivityThread.java:3948)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at java.lang.reflect.Method.invokeNative(Native Method)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at java.lang.reflect.Method.invoke(Method.java:521)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540)
01-06 15:07:58.055: ERROR/AndroidRuntime(2123): at dalvik.system.NativeStart.main(Native Method)
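The trace points at mnfsms.java:30, which matches the findViewById(R.id.settings_button) call. findViewById only looks up Views in the Activity's content layout; a menu item is not a View, so the call returns null and setIntent throws the NullPointerException. A sketch of the usual fix, looking the item up on the Menu that was just inflated:

public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.options_menu, menu);
    // Menu items are not Views, so findViewById() returns null for them.
    // Look the item up on the inflated Menu object instead:
    MenuItem mi_settings = menu.findItem(R.id.settings_button);
    mi_settings.setIntent(new Intent(this, mnfsms_settings.class));
    return true;
}

As an aside, the settings activity's label in the manifest ("string/main_window_name") is probably missing its leading "@", though that alone would not cause this particular crash.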

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role.

- WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, hence there's no IIS restriction, such as the 8000 concurrent requests limitation.
- WACS provides more flexibility and control to the developers. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides the service configuration features, which can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.

Run Node.js in Worker Role

Node.js can be started by just having its execution file. This means that in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class.

Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe".

Then we need to create the Node.js entry code. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu; we have to create a text file and then rename it with the JavaScript extension.

After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer". Otherwise they will not be included in the package when deployed.

Let's paste some very simple Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345.

var http = require("http");
var port = 12345;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);

Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method: I find node.exe and the entry JavaScript file name, then create a new process to run it. Our worker role will wait for the process to exit.
If everything is OK, once our web server is opened the process will stay there listening for incoming requests and should not terminate. The code in the worker role would be like this.

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("NodejsHost entry point called", "Information");

    // retrieve the node.exe and entry node.js source code file name.
    var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    var js = "index.js";

    // prepare the process starting of node.exe
    var info = new ProcessStartInfo(node, js)
    {
        CreateNoWindow = false,
        ErrorDialog = true,
        WindowStyle = ProcessWindowStyle.Normal,
        UseShellExecute = false,
        WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    };
    Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

    // start the node.exe with entry code and wait for exit
    var process = Process.Start(info);
    process.WaitForExit();
}

Then we can run it locally. In the compute emulator UI the worker role started, it executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role.

Next let's deploy it to Azure, but we need some additional steps first. We need to create an input endpoint; by default there's no endpoint defined in a worker role. So we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP instead of HTTP. Then change "index.js" so that our web server listens on 80.

var http = require("http");
var port = 80;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);

Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role.

We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81, but Node.js cannot detect this operation. So when it tries to listen on 80 it will fail, since 80 is already in use.

Use NPM Modules

When we use WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all we should download and install this module through the NPM command. After the install finishes, the files are on disk but not included in the worker role project. If we deploy the worker role right now, the module will not be packaged and uploaded to Azure; hence we need to add the files to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project".

But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they can be uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since there might be many files in a module.
So I created a small tool which updates a C# project file, setting all its items to "Copy always". The code is very simple.

static void Main(string[] args)
{
    if (args.Length < 1)
    {
        Console.WriteLine("Usage: copyallalways [project file]");
        return;
    }

    var proj = args[0];
    File.Copy(proj, string.Format("{0}.bak", proj));

    var xml = new XmlDocument();
    xml.Load(proj);
    var nsManager = new XmlNamespaceManager(xml.NameTable);
    nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

    // add the output setting to copy always
    var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
    UpdateNodes(contentNodes, xml, nsManager);
    var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
    UpdateNodes(noneNodes, xml, nsManager);
    xml.Save(proj);

    // remove the namespace attributes
    var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
    xml.LoadXml(content);
    xml.Save(proj);
}

static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
{
    foreach (XmlNode node in nodes)
    {
        var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
        if (copyToOutputDirectoryNode == null)
        {
            var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
            n.InnerText = "Always";
            node.AppendChild(n);
        }
        else
        {
            if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
            {
                copyToOutputDirectoryNode.InnerText = "Always";
            }
        }
    }
}

Please be careful when you use this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project.

Now let's change "index.js" to use express.

var express = require("express");
var app = express();

var port = 80;

app.configure(function () {
});

app.get("/", function (req, res) {
    res.send("Hello Node.js!");
});

app.get("/User/:id", function (req, res) {
    var id = req.params.id;
    res.json({
        "id": id,
        "name": "user " + id,
        "company": "IGT"
    });
});

app.listen(port);

Finally let's publish it and have a look in the browser.

Use Windows Azure SQL Database

We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well with worker role hosting. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which supports only x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally update "index.js" to use WASD.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Publish it to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparatory jobs before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I mentioned here. The first one is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second one is that this way we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. Ideally Node.js would retrieve the endpoint configuration, but I can tell you that won't work with this approach. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using DNFS for test purposes

    - by rene.kundersma
Because of other priorities, such as bringing the first v2 Database Machine in the Netherlands into production, I spent less time on my blog than planned. I would however like to tell some things about DNFS, the built-in NFS client we have in the Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you see, this documentation is actually the "Clusterware Installation Guide". I think that is weird; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter. I do however want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser"):

First, a quick setup:

1. The standard ODM library needs to be replaced with the NFS ODM library:

[oracle@ocm01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
[oracle@ocm01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

After changing to this library you will notice the following in your alert.log:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

2. The intention is to mount the datafiles over normal NAS (like NetApp). But in case you want to test yourself and use an exported NFS filesystem, it should look like the following:

[oracle@ocm01 ~]$ cat /etc/exports
/u01/scratch/nfs *(rw,sync,insecure)

Please note the "insecure" option in the export, since you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount:

Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

3. Before configuring the new Oracle stanza for NFS we still need to configure a regular kernel NFS mount:

[root@ocm01 ~]# cat /etc/fstab | grep nfs
ocm01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

4. Then a so-called Oracle 'nfstab' needs to be created that specifies which exports to use:

[oracle@ocm01 ~]$ cat /etc/oranfstab
server:ocm01.nl.oracle.com
path:192.168.1.40
export:/u01/scratch/nfs
mount:/incoming

5. Creating a tablespace with a datafile on the NFS location:

SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;

Tablespace created.

Be aware that it may happen that you do not specify the insecure option (like I did). In that case you will still see output from the query on v$dnfs_servers:

SQL> select * from v$dnfs_servers;

ID SVRNAME              DIRNAME           MNTPORT  NFSPORT  WTMAX  RTMAX
-- -------------------- ----------------- -------- -------- ------ ------
 1 ocm01.nl.oracle.com  /u01/scratch/nfs  684      2049     32768  32768

But querying v$dnfs_files and v$dnfs_channels will not return any result, and indeed, you will see the following message in the alert log when you create a file:

Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

After correcting the export:

SQL> select * from v$dnfs_files;

FILENAME          FILESIZE PNUM   SVR_ID
----------------- -------- ------ ------
/incoming/rk.dbf  10493952 20     1

Rene Kundersma Oracle Technology Services, The Netherlands
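As a quick follow-up check once the export is corrected, the channel and statistics views should also start returning rows. A minimal sketch using the standard 11g dNFS views (output will differ per system):

SQL> select * from v$dnfs_channels;
SQL> select * from v$dnfs_stats;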

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have; it's unlimited. In some cases there may be content that goes through lots of changes, but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome, and going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way, through the Archiver.

The Export Query criteria in Archiver include a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...), but you can't use that to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction: the very latest revision is always 0, the previous revision to that is 1, the previous revision to that is 2, and so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. As older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then, when you run that archive export, you can choose to delete and remove those revisions.

Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The Idc Command executable is located within the \bin\ directory:

$ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log

In this example, our IdcCommand file to run the export and do the deletions would look like:

IdcService=EXPORT_ARCHIVE
aArchiveName=DeleteOlderRevisions
aDoDelete=1
IDC_Name=idc
dataSource=RevisionIDs
<<EOD>>

You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed (see the sketch below). Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import them if you need to recover them). For more information about Idc Command, you can find that in the Idc Command Reference Guide.
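For the scheduling piece, a minimal cron sketch; the install paths here are hypothetical and the run time (Sundays at 02:00) is just an example - adjust both to your environment:

# m h dom mon dow  command
0 2 * * 0  /u01/ucm/server/bin/IdcCommand -f /u01/ucm/custom/DeleteOlderRevisions.txt -u sysadmin -l /u01/ucm/custom/delete_revisions.log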

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
Boot Environments for Solaris 10 Branded Zones

Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, to modify the active or any inactive BE, and to do so while the production workload continues to run.

Background

In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS), but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.

Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry.

This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.

Here is a quick summary of the actions you can use to manage these BEs:

To create a BE: create a ZFS clone of the zone's root dataset.
To activate a BE: set the ZFS property of the root dataset to indicate the BE.
To add a package or patch to an inactive BE: mount the inactive BE, add packages or patches to it, then unmount it.
To list the available BEs: use the "zfs list" command.
To destroy a BE: use the "zfs destroy" command.

Preparation

Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.

1. Create a flash archive on the Solaris 10 system:

s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar

2. Configure the Solaris 10 BZ on the Solaris 11 system:

s11# zonecfg -z s10z
Use 'create' to begin configuring a new zone.
zonecfg:s10z> create -t SYSsolaris10
zonecfg:s10z> set zonepath=/zones/s10z
zonecfg:s10z> exit
s11# zoneadm list -cv
  ID NAME    STATUS      PATH         BRAND      IP
   0 global  running     /            solaris    shared
   - s10z    configured  /zones/s10z  solaris10  excl

3. Install the zone from the flash archive:

s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p

You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation.

The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.

New features in action

Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names.

Create

The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.

s10z# zfs snapshot rpool/ROOT/zbe-0@snap
s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
filesystem successfully created, but not mounted

You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.

List the available BEs and active BE

Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.

s10z# zfs list -r rpool/ROOT
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT       3.55G  42.9G    31K  legacy
rpool/ROOT/zbe-0    1K  42.9G  3.55G  /
rpool/ROOT/newBE 3.55G  42.9G  3.55G  /

The output shows that two BEs exist. Their names are "zbe-0" and "newBE".

You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted.

s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local

Change the active BE

When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset.

s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local
s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
NAME        PROPERTY                             VALUE  SOURCE
rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
s10z# shutdown -y -g0 -i6

After the zone has rebooted:

s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
s10z# zfs mount
rpool/ROOT/newBE   /
rpool/export       /export
rpool/export/home  /export/home
rpool              /rpool

Mount the original BE to see that it's still there.

s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
s10z# ls /mnt
Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt

Patch an inactive BE

At this point, you can modify the original BE. If you would prefer to modify the new BE, you can restore the original value of the activebe property and reboot, and then mount the new BE at /mnt (or another empty directory) and modify it.

Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)

s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02

Note that the typical usage will be:

1. Create a BE
2. Mount the new (inactive) BE
3. Use the package and patch tools to update the new BE
4. Unmount the new BE
5. Reboot

Delete an inactive BE

ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:

s10z# zfs promote rpool/ROOT/newBE
s10z# zfs destroy rpool/ROOT/zbe-0
s10z# zfs list -r rpool/ROOT
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT       3.56G   269G    31K  legacy
rpool/ROOT/newBE 3.56G   269G  3.55G  /

Documentation

This feature is so new that it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.

Conclusion

With this new feature, you can add packages and patches to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
    Read the article

  • SSMS Tools Pack 1.8 is out!

    - by Mladen Prajdic
This is a release that fixes all known major bugs and most of the minor ones. The main feature list hasn't changed. The only addition is the ability to export and import only SQL snippets; before, you could only export/import all settings, which included the snippets. You can download the new version here. Enjoy it!

    Read the article

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd love to make that part of my workflow, prior to uploading a site and making it live. Is there anything I can run locally to give me the same lossless compression? I currently export images with Export For Web from Photoshop, and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way, though, than saving out and replacing the individual images from Page Speed's results.
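One hedged local equivalent: Page Speed's lossless image rules are reportedly built on the same techniques as the common open-source optimizers, so running optipng and jpegtran over the export folder should get very similar results. A minimal sketch, assuming both tools are installed and ./site is the hypothetical export directory:

# Recompress PNGs in place, losslessly
find ./site -name '*.png' -exec optipng -o2 {} \;
# Recompress JPEGs losslessly, stripping metadata
find ./site -name '*.jpg' | while read -r f; do
  jpegtran -optimize -copy none -outfile "$f.tmp" "$f" && mv "$f.tmp" "$f"
done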

    Read the article

  • PHP may be executing as a "privileged" group and user, which could be a serious security vulnerability

    - by Martin
I ran some security tests on an Ubuntu 12.04 server, and I got these warnings:

PHP may be executing as a "privileged" group, which could be a serious security vulnerability.
PHP may be executing as a "privileged" user, which could be a serious security vulnerability.

In /etc/apache2/envvars, I have this:

export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data

And all files in /var/www are owned by this user and group: www-data:www-data

Am I setting this correctly? What should I do to fix this problem?
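To verify what the scanner is flagging, a quick hedged check of the accounts the web stack actually runs as:

# Show which user/group the Apache workers run as
ps -eo user,group,args | grep '[a]pache2'
# Confirm www-data holds no privileged group memberships
id www-data

With the envvars above, the worker processes should show as www-data, which is the expected unprivileged account on Ubuntu; only the parent apache2 process runs as root, so that it can bind port 80.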

    Read the article

  • New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces

    - by John Abraham
The Transportable Tablespaces (TTS) process was originally certified for the migration of E-Business Suite R12 databases going from a source database of 11gR1 or 11gR2 to a target of 11gR2. This certification has now been expanded to include a source database of 10gR2 (10.2.0.5) - this will potentially save time for existing 10gR2 customers, as they can skip a crucial upgrade step prior to performing the platform migration. The migration process requires an updated Controlled patch delivered by the Oracle E-Business Suite Platform Engineering team, i.e. it requires a password obtainable from Oracle Support. We released the patch in this manner to gauge uptake, and to help identify and monitor any customer issues due to the nature of this technology. This patch has been updated to support 10gR2 as a source database.

Does it meet your requirements?

Note that for migration across platforms of the same "endian" format, users are advised to use the Transportable Database (TDB) migration process instead for large databases. The "endian-ness" of target platforms can be verified by querying the view V$DB_TRANSPORTABLE_PLATFORM using SQL*Plus (connected as sysdba) on the source platform:

SQL> select platform_name from v$db_transportable_platform;

If the intended target platform does not appear in the output, it means that it is of a different endian format from the source. Consequently, database migration will need to be performed via Transportable Tablespaces (for large databases) or export/import.

The use of Transportable Tablespaces can greatly speed up the migration of the data portion of the database. However, it does not affect metadata, which must still be migrated using export/import. We recommend that users initially perform a test migration on their database, using export/import with the 'metrics=y' parameter (see the sketch below). This will help identify the relative amounts of data and metadata, and provide a basis for assessing likely gains in timing. In general, the larger the amount of data (compared to metadata), the greater the reduction in downtime that can be expected from using TTS as a migration process. For smaller databases, or for those that have relatively little data compared to metadata, TTS will not be as beneficial for cross-endian migration, and the use of export/import (datapump) for the whole database is recommended.

Where can I find more information? Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 Enterprise Edition (My Oracle Support Document 1311487.1) Oracle Database Administrator's Guide 11g Release 2 (11.2) Related Articles Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12 New Source Databases Added for Transportable Tablespaces + EBS 11i 10gR2 Transportable Tablespaces Certified for EBS 11i Migrating E-Business Suite Release 11i Databases Between Platforms Migrating E-Business Suite Release 12 Databases Between Platforms
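For that sizing test, a minimal Data Pump sketch; the credentials, directory object and file names are placeholders:

# Gauge relative data vs. metadata timings before committing to TTS
expdp system/<password> directory=DATA_PUMP_DIR dumpfile=ebs_full.dmp logfile=ebs_full.log full=y metrics=y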

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
There are quite a few questions on cal3d, but they are old and don't apply anymore.

In Blender (must be 2.49a for the Python script to work!!!): I have a scene with 7 meshes, 1 armature, 10 bones. I tried going down to one mesh to simplify it, but it doesn't change anything. I found a small blend file that was used for cal3d and it exported just fine, so I tried to copy its setup with no success.

EDIT 8/13/2012: Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it to import it in the old one (2.49a). I did the animation in the old one, because when importing new blend files to old Blenders it just said it would lose keyframe data, and all was good. Then you hit the last problem: it does not export meshes. BUT I found that meshes made in the old one export regardless; I can't find any that won't export. So if I used the old Blender to remake my model, I could get it to export :)

At this point I found a modified release of cal3d (because the most core model variable would not initiate, as I had made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours). It fixes the morph objects and adds what cal3d left off. Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same. With this lib came a 3ds Max exporter.

My question now is how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and small, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only) and COLLADA. In all of them the mesh is invisible and there are no bones. It says the default texture is on, so I should be able to see it. All the vertices are present - I found a vertex highlighter, so I can see those.

If any of this is confusing let me know so I can clear it up. It's late. <= sleep.

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption. There is a new read-only property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.

$ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
NAME                   PROPERTY       VALUE                  SOURCE
rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

The above example shows that we have changed the wrapping key and added new data encryption keys since the filesystem was initially created. If we haven't changed a wrapping key then it will be the same as the creation date. It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead.

Another change I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property). In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length. This restriction was unnecessary and made it impossible to select encryption property values with key lengths of 128 and 192 when the wrapping key was stored in the Oracle Key Manager, because currently the Oracle Key Manager stores AES 256-bit keys only. Now, with Solaris 11.1, this restriction has been removed.

There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase': since this is a key generated internally to libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length size of the encryption property.

The pam_zfs_key module has been updated so that it allows you to specify encryption=off.

There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we can support Zones On Shared Storage where all the datasets in the ZFS pool are encrypted, which I discussed in my previous blog entry.

If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract have your support rep log an enhancement request).
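To illustrate the relaxed rule, a hedged sketch that wraps a 128-bit data encryption key with a 256-bit raw wrapping key; the key file path is made up, and the pktool line shows one way such a key could be generated:

# Generate a 256-bit AES key into a file, e.g.:
# pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=256
$ zfs create -o encryption=aes-128-ccm -o keysource=raw,file:///media/stick/mykey rpool/export/home/secret

Before Solaris 11.1 this combination would have been rejected, since the 256-bit wrapping key did not match the aes-128-ccm data key length.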

    Read the article

  • Mixed java version installed

    - by david99world
I've got the following in ~/.bashrc:

export JAVA_HOME=/usr/bin/jdk1.7.0_03/
export PATH=$PATH:/usr/bin/jdk1.7.0_03/bin

This is fine; if I echo $JAVA_HOME I get the directory above. The problem is if I do java -version I get...

OpenJDK Runtime Environment (IcedTea6 1.11.1) (6b24-1.11.1-4ubuntu3)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

How do I make the official JDK version the right one? Sorry, I'm extremely new to Ubuntu. Thanks, David
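JAVA_HOME only affects tools that read it; the plain java command resolves through the PATH, and since the JDK directory is appended after $PATH, the OpenJDK binary in /usr/bin wins. Two common fixes, sketched with the path from the question (adjust if the JDK actually lives elsewhere, e.g. under /usr/lib/jvm):

# Option 1: put the Oracle JDK first on the PATH (in ~/.bashrc)
export JAVA_HOME=/usr/bin/jdk1.7.0_03
export PATH=$JAVA_HOME/bin:$PATH

# Option 2: register the JDK with the alternatives system, then select it
sudo update-alternatives --install /usr/bin/java java /usr/bin/jdk1.7.0_03/bin/java 1
sudo update-alternatives --config java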

    Read the article

  • How to set system-wide proxy address using shell script?

    - by skg
I want to set the system proxy address through my Qt application, so I was wondering if I could write a script which can be executed by my application every time it needs to change the proxy address. I tried:

#! /bin/sh
echo "# Generated by Application"
export $1
echo "Proxy Address ${1}"

but this script was not successful. I think it was unable to execute the "export" command. Can anyone help me resolve this issue?
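For reference, a corrected sketch: export needs NAME=value ("export $1" just expands to the bare value), and an exported variable lives only in this script's process, so to affect the whole system it has to be persisted somewhere like /etc/environment. The variable names below assume the proxy URL is passed as the first argument:

#!/bin/sh
# Set the proxy for this shell and its child processes
export http_proxy="$1"
export https_proxy="$1"
echo "Proxy Address ${1}"
# To persist system-wide (requires root), append to /etc/environment, e.g.:
# echo "http_proxy=\"$1\"" >> /etc/environment

Note that a child shell can never modify its parent's environment, so even the corrected script only helps processes it starts itself; for the Qt application's own connections, setting the proxy via QNetworkProxy inside the application may be the more direct route.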

    Read the article

  • SharePoint 2010 Web Parts in Action

    This article is taken from the book SharePoint 2010 Web Parts in Action. The author discusses export and import features of Web Parts and how to disable the export of Web Parts when confidentiality is a concern.

    Read the article

< Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >