Search Results

Search found 15354 results on 615 pages for 'desktop deployment'.


  • How to build Lucene / Solr from source code in windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching of a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache and discussions on the web of people successfully doing this for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including the jar (and subsequent war) files.

    My system is: Windows 7 Ultimate for development; Visual Studio 2010 for C# / JavaScript development; MyEclipse 8.6 / Eclipse 3.5 for the Java build from source; Subclipse 1.6.x SVN plugin to get the source from Apache's SVN; Apache Solr 1.4.1.

    So far I have: Found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236 Specifically I need to apply field_collapsing_1.1.0.patch (https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch) and SOLR-236-1_4_1.patch (https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch). I downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/).

    I can apply the patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar or other files into the bin folder (it stays empty). The build process starts, but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files which it depends on.

    I have two interrelated problems: 1) I can't get the downloaded Lucene trunk to build; 2) the jar, war and associated files are not created. Can anyone help with what I am missing to build the war file? I have spent two days to get this far, as the help online is extremely patchy and I can't find a walk-through tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon

    Read the article

  • Sharepoint web part stops working because of Resources.en-US.resx file

    - by Eric C
    I've been developing a Sharepoint web part, which had been working fine upon deployment. The web part has been developed with WSP Builder, packaged up and then deployed via stsadm. The web part has been deployed tens, if not a hundred times to the dev box with no problems. Now, the web part throws an error which breaks the page it's on: Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.HandleException(Exception ex) +62 NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.OnLoad(EventArgs e) +214 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 When looking through my Sharepoint logs, I find these errors repeated over and over which correspond to the time the web part was attempted to be loaded: 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 72kg High (#2: Cannot open "Resources.en-US.resx": no such file or folder.) 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e26 Medium Failed to open the language resource for Fea367b94a9-4a15-42ba-b4a2-32420363e018 keyfile Resources. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "XomlUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'XomlUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "RulesUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'RulesUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". I've retracted the web part manually through Solution Management, retracted through stsadm, checked for the existence of the resource file, which is nowhere to be found. I'm pretty much at a loss to why this happened or how to resolve it.

    Read the article

  • web.config transforms not being applied on either publish or build installation package

    - by BenA
    Today I started playing with the web.config transforms in VS 2010. To begin with, I attempted the same hello world example that features in a lot of the blog posts on this topic - updating a connection string. I created the minimal example shown below (and similar to the one found in this blog). The problem is that whenever I do a right-click - "Publish", or a right-click - "Build Deployment Package" on the .csproj file, I'm not getting the correct output. Rather than a transformed web.config, I'm getting no web.config, and instead the two transform files are included. What am I doing wrong? Any help gratefully received! Web.config: <?xml version="1.0"?> <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0"> <connectionStrings> <add name="ConnectionString" connectionString="server=(local); initial catalog=myDB; user=xxxx;password=xxxx" providerName="System.Data.SqlClient"/> </connectionStrings> </configuration> Web.debug.config: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="ConnectionString" connectionString="server=DebugServer; initial catalog=myDB; user=xxxx;password=xxxx" providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> </configuration> Web.release.config: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="ConnectionString" connectionString="server=ReleaseServer; initial catalog=myDB; user=xxxx;password=xxxx" providerName="System.Data.SqlClient" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> </configuration>

    Read the article

  • How best to store Subversion version information in EAR's?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds being at many stages of testing, planning and deployment, this is often a non-trivial question. In the case of releasing Java JAR (ear, jar, rar, war) files I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR. How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of:
    - adding a VERSION file, but with what content?
    - storing information in the META-INF file, but under what property with which content?
    - copying sources into the result archive
    - adding svn:properties to all sources with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree, as opposed to svn info, which just looks at the current file / directory. For this I defined the SVN task in the ant file to make it more portable.

    <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
      <classpath>
        <pathelement location="${dir.lib}/ant/svnant.jar"/>
        <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
        <pathelement location="${dir.lib}/ant/svnkit.jar"/>
        <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
      </classpath>
    </taskdef>

    Not all builds result in webservices. The ear file before deployment must keep the same name because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file.

    <target name="version">
      <svn><wcVersion path="${dir.source}"/></svn>
      <echo file="${dir.build}/VERSION">${revision.range}</echo>
    </target>

    Refs:
    - svnversion: http://svnbook.red-bean.com/en/1.1/re57.html
    - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
    - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
    - svn client: http://svnkit.com/
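    As an illustration of how such a VERSION file could then be read back at runtime, here is a minimal sketch; the classpath location, file name and charset are assumptions, not something the build snippet above guarantees:

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;

        public final class BuildVersion {

            // Reads the VERSION file produced by the <echo> task above,
            // assuming the build copies it to the root of the classpath inside the archive.
            public static String read() {
                try (InputStream in = BuildVersion.class.getResourceAsStream("/VERSION")) {
                    if (in == null) {
                        return "unknown"; // file was not packaged into this archive
                    }
                    BufferedReader reader =
                            new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
                    return reader.readLine(); // e.g. the revision range printed by svnversion
                } catch (Exception e) {
                    return "unknown";
                }
            }
        }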

    Read the article

  • JBOSS 7 encoding not working as expected

    - by Fofole
    I had problems with my listgrids not showing diacritics correctly, and I found out that when I inserted from Java into the DB the values were already bugged. A post here helped and I changed my project properties - Text encoding - Other - UTF-8, and this fixed my problem. Thing is, this only fixes my problem locally. What I need to do is also set the encoding somehow on my JBoss server. This is what I put in my configuration file:

    <?xml version='1.0' encoding='UTF-8'?>
    <server name="vali-ubuntu" xmlns="urn:jboss:domain:1.0">
      <extensions>
        <extension module="org.jboss.as.clustering.infinispan"/>
        <extension module="org.jboss.as.connector"/>
        <extension module="org.jboss.as.deployment-scanner"/>
        <extension module="org.jboss.as.ee"/>
        <extension module="org.jboss.as.ejb3"/>
        <extension module="org.jboss.as.jaxrs"/>
        <extension module="org.jboss.as.jmx"/>
        <extension module="org.jboss.as.logging"/>
        <extension module="org.jboss.as.naming"/>
        <extension module="org.jboss.as.osgi"/>
        <extension module="org.jboss.as.remoting"/>
        <extension module="org.jboss.as.sar"/>
        <extension module="org.jboss.as.security"/>
        <extension module="org.jboss.as.threads"/>
        <extension module="org.jboss.as.transactions"/>
        <extension module="org.jboss.as.web"/>
        <extension module="org.jboss.as.weld"/>
      </extensions>
      <system-properties>
        <property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/>
        <property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/>
      </system-properties>
    //.....

    This doesn't work, so maybe I need to add something else. I tried everything I could find with no success, so any help is appreciated. Thanks.

    EDIT: From what I read, this will work only in JBoss 7.1.0 Beta 1 or higher (URIEncoding), and I use JBoss 7.0.2, so I need a replacement for 7.0.2.
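    As an illustration of a container-independent fallback that is sometimes used when the server configuration cannot be changed: a servlet filter inside the application that forces UTF-8 on every request and response. This is a minimal sketch with an invented class name, assuming the request body has not been read before the filter runs:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        // Mapped to /* in web.xml (or via @WebFilter("/*") on Servlet 3.0) so it runs for every request.
        public class Utf8EncodingFilter implements Filter {

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // Only takes effect if no encoding was set yet and the body has not been parsed.
                if (request.getCharacterEncoding() == null) {
                    request.setCharacterEncoding("UTF-8");
                }
                response.setCharacterEncoding("UTF-8");
                chain.doFilter(request, response);
            }

            public void init(FilterConfig filterConfig) {
            }

            public void destroy() {
            }
        }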

    Read the article

  • How to launch a Windows service network process to listen to a port on a localhost socket that is vi

    - by rwired
    Here's the code (in a standard TService in Delphi): const ProcessExe = 'MyNetApp.exe'; function RunService: Boolean; var StartInfo : TStartupInfo; ProcInfo : TProcessInformation; CreateOK : Boolean; begin CreateOK := false; FillChar(StartInfo,SizeOf(TStartupInfo),#0); FillChar(ProcInfo,SizeOf(TProcessInformation),#0); StartInfo.cb := SizeOf(TStartupInfo); CreateOK := CreateProcess(nil, PChar(ProcessEXE),nil,nil,False, CREATE_NEW_PROCESS_GROUP+NORMAL_PRIORITY_CLASS, nil, PChar(InstallDir), StartInfo, ProcInfo); CloseHandle(ProcInfo.hProcess); CloseHandle(ProcInfo.hThread); Result := CreateOK; end; procedure TServicel.ServiceExecute(Sender: TService); const IntervalsBetweenRuns = 4; //no of IntTimes between checks IntTime = 250; //ms var Count: SmallInt; begin Count := IntervalsBetweenRuns; //first time run immediately while not Terminated do begin Inc(Count); if Count >= IntervalsBetweenRuns then begin Count := 0; //We check to see if the process is running, //if not we run it. That's all there is to it. //if ProcessEXE crashes, this service host will just rerun it if processExists(ProcessEXE)=0 then RunService; end; Sleep(IntTime); ServiceThread.ProcessRequests(False); end; end; MyNetApp.exe is a SOCKS5 proxy listening on port 9870. Users configure their browser to this proxy which acts as a secure-tunnel/anonymizer. All works perfectly fine on 2000/XP/2003, but on Vista/Win7 with UAC the service runs in Session0 under LocalSystem and port 9870 doesn't show up in netstat for the logged-in user or Administrator. Seems UAC is getting in my way. Is there something I can do with the SECURITY_ATTRIBUTES or CreateProcess, or is there something I can do with CreateProcessAsUser or impersonation to ensure that a network socket on a service is available to logged-in users on the system (note, this app is for mass deployment, I don't have access to user credentials, and require the user elevate their privileges to install a service on Vista/Win7)

    Read the article

  • Hadoop in a RESTful Java Web Application - Conflicting URI templates

    - by user1231583
    I have a small Java Web Application in which I am using Jersey 1.12 and the Hadoop 1.0.0 JAR file (hadoop-core-1.0.0.jar). When I deploy my application to my JBoss 5.0 server, the log file records the following error: SEVERE: Conflicting URI templates. The URI template / for root resource class org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods and the URI template / transform to the same regular expression (/.*)? To make sure my code is not the problem, I have created a fresh web application that contains nothing but the Jersey and Hadoop JAR files along with a small stub. My web.xml is as follows: <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"> <servlet> <servlet-name>ServletAdaptor</servlet-name> <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet- class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>ServletAdaptor</servlet-name> <url-pattern>/mytest/*</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> </web-app> My simple RESTful stub is as follows: import javax.ws.rs.core.Context; import javax.ws.rs.core.UriInfo; import javax.ws.rs.Path; @Path("/mytest") public class MyRest { @Context private UriInfo context; public MyRest() { } } In my regular application, when I remove the Hadoop JAR files (and the code that is using Hadoop), everything works as I would expect. The deployment is successful and the remaining RESTful services work. I have also tried the Hadoop 1.0.1 JAR files and have had the same problems with the conflicting URL template in the NamenodeWebHdfsMethods class. Any suggestions or tips in solving this problem would be greatly appreciated.
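    As an illustration of one commonly used way to keep Jersey from registering root resources found inside dependency jars: list the application's own resource classes explicitly in a javax.ws.rs.core.Application subclass, so nothing else on the classpath gets scanned. A minimal sketch; the class name is invented, and the servlet would have to be pointed at it via the javax.ws.rs.Application init-param of the Jersey ServletContainer:

        import java.util.HashSet;
        import java.util.Set;
        import javax.ws.rs.core.Application;

        // Registers only this application's resource classes, so root resources shipped
        // inside dependency jars (such as Hadoop's NamenodeWebHdfsMethods) are never picked up.
        public class MyRestApplication extends Application {

            @Override
            public Set<Class<?>> getClasses() {
                Set<Class<?>> classes = new HashSet<Class<?>>();
                classes.add(MyRest.class); // the @Path("/mytest") resource from the question
                return classes;
            }
        }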

    Read the article

  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kind of problems? To clearify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an adminstrator or not. However, after I while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation: class CreateUserTypes < ActiveRecord::Migration def self.up create_table :user_types do |t| t.string :name, :nil => false, :unique => true end #Create basic types (can not put in seed, because of future migration dependency) UserType.create!(:name => "BASIC") UserType.create!(:name => "MODERATOR") UserType.create!(:name => "ADMINISTRATOR") end def self.down drop_table :user_types end end class AddTypeIdToUsers < ActiveRecord::Migration def self.up add_column :users, :type_id, :integer #Determine type via the admin flag basic = UserType.find_by_name("BASIC") admin = UserType.find_by_name("ADMINISTRATOR") User.all.each {|u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id)} #Remove the admin flag remove_column :users, :admin #Add foreign key execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)" end def self.down #Re-add the admin flag add_column :users, :admin, :boolean, :default => false #Reset the admin flag (this is the problematic update code) admin = UserType.find_by_name("ADMINISTRATOR") execute "update users set admin=true where type_id=#{admin.id}" #Remove foreign key constraint execute "alter table users drop foreign key fk_user_type_id" #Drop the type_id column remove_column :users, :type_id end end As you can see there are two problematic parts. First the row creation part in the first model, which is necessary if I would like to run all migrations in a row, then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?

    Read the article

  • How to configure Jetty to reload a WebAppContext when classes are changed

    - by Guss
    I'm developing a web application and I run Jetty as the development and testing environment when I develop under Eclipse. When I make changes to Java classes, Eclipse automatically compiles them to the build directory, but Jetty won't see the changes until I stop and start the server. I know that Jetty supports "hot deployment" using ContextDeployer that will refresh updated application contexts, but it relies on a context file in a context directory being updated - which is not very useful in my case. Is there a way to set up Jetty so that it will reload the web app when any of the classes it uses is updated? My current jetty.xml looks something like this: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure id="Server" class="org.eclipse.jetty.server.Server"> <Set name="ThreadPool"><!-- bla bla --></Set> <Call name="addConnector"><!-- bla bla --></Call> <Set name="handler"> <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection"> <Set name="handlers"> <Array type="org.eclipse.jetty.server.Handler"> <Item> <New id="webapp" class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="displayName">My Web App</Set> <Set name="resourceBase">src/main/webapp</Set> <Set name="descriptor">src/main/webapp/WEB-INF/web.xml</Set> <Set name="contextPath">/mywebapp</Set> </New> </Item> <Item> <New id="DefaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler"/> </Item> </Array> </Set> </New> </Set> </Configure>
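    As an illustration of a development setup sometimes used instead of reloading through jetty.xml: start Jetty embedded from a small main class and run it under the IDE's debugger, so hot code replace picks up most method-body changes and a full restart is just re-running the class. A minimal sketch; the port and the extra classpath entry are assumptions:

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        // Minimal embedded launcher intended to be run (in debug mode) straight from Eclipse.
        public class DevServer {

            public static void main(String[] args) throws Exception {
                Server server = new Server(8080); // port is an assumption

                WebAppContext webapp = new WebAppContext();
                webapp.setContextPath("/mywebapp");
                webapp.setResourceBase("src/main/webapp");
                webapp.setDescriptor("src/main/webapp/WEB-INF/web.xml");
                // Point the context at the classes the IDE compiles, so the latest
                // build output is on the webapp classpath (path is an assumption).
                webapp.setExtraClasspath("target/classes");

                server.setHandler(webapp);
                server.start();
                server.join();
            }
        }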

    Read the article

  • Loading jar file using JCL(JarClassLoader ) : classpath in manifest is ignored ..

    - by Xinus
    I am trying to load a jar file using JCL with the following code:

    FileInputStream fis = new FileInputStream(new File("C:\\Users\\sunils\\glassfish-tests\\working\\test.jar"));
    JarClassLoader jc = new JarClassLoader();
    jc.add(fis);
    Class main = jc.loadClass("highmark.test.Main");
    String[] str = {};
    main.getMethod("test").invoke(null); //.getDeclaredMethod("main",String[].class).invoke(null,str);
    fis.close();

    But when I try to run this program I get an exception:

    Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at Main.main(Main.java:21)
    Caused by: java.lang.RuntimeException: Embedded startup not found, classpath is probably incomplete
        at org.glassfish.api.embedded.Server.<init>(Server.java:292)
        at org.glassfish.api.embedded.Server.<init>(Server.java:75)
        at org.glassfish.api.embedded.Server$Builder.build(Server.java:185)
        at org.glassfish.api.embedded.Server$Builder.build(Server.java:167)
        at highmark.test.Main.test(Main.java:33)
        ... 5 more

    According to this it is not able to locate the class, but when I run the jar file explicitly it runs fine. It seems like JCL is ignoring the other classes present in the jar file. The MANIFEST.MF file in the jar shows:

    Manifest-Version: 1.0
    Class-Path: .
    Main-Class: highmark.test.Main

    It seems to be ignoring the Class-Path: . entry; the jar file runs fine when I run it using Java explicitly. This is just a test; in reality this jar file is coming in as an InputStream and it cannot be stored on the filesystem. How can I overcome this problem? Is there any workaround? Thanks for any help.

    UPDATE: Here is the jar's Main class:

    package highmark.test;

    import org.glassfish.api.embedded.*;
    import java.io.*;
    import org.glassfish.api.deployment.*;
    import com.sun.enterprise.universal.io.FileUtils;

    public class Main {

        public static void main(String[] args) throws IOException, LifecycleException, ClassNotFoundException {
            test();
        }

        public static void test() throws IOException, LifecycleException, ClassNotFoundException {
            Server.Builder builder = new Server.Builder("test");
            Server server = builder.build();
            server.createPort(8080);
            ContainerBuilder containerBuilder = server.createConfig(ContainerBuilder.Type.web);
            server.addContainer(containerBuilder);
            server.start();
            File war = new File("C:\\Users\\sunils\\maventests\\simple-webapp\\target\\simple-webapp.war"); //(File) inputStream.readObject();
            EmbeddedDeployer deployer = server.getDeployer();
            DeployCommandParameters params = new DeployCommandParameters();
            params.contextroot = "simple";
            deployer.deploy(war, params);
        }
    }
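    As an illustration of a possible workaround when the jar arrives as a stream (a sketch built on assumptions, not a statement about how JCL itself behaves): spool the stream to a temporary file and load it with the JDK's URLClassLoader, which resolves a jar's Class-Path manifest entries relative to the jar's own location, so any sibling jars would need to be written next to it as well.

        import java.io.File;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.nio.file.Files;
        import java.nio.file.StandardCopyOption;

        public class StreamJarLauncher {

            // Spools the incoming jar stream to a temp file and loads it with URLClassLoader.
            public static void launch(InputStream jarStream) throws Exception {
                File tempJar = File.createTempFile("app-", ".jar");
                tempJar.deleteOnExit();
                Files.copy(jarStream, tempJar.toPath(), StandardCopyOption.REPLACE_EXISTING);

                // Class-Path entries from the jar's manifest are resolved relative to tempJar,
                // so dependencies referenced there must also exist beside the temp file.
                URLClassLoader loader = new URLClassLoader(
                        new URL[] { tempJar.toURI().toURL() },
                        StreamJarLauncher.class.getClassLoader());

                Class<?> main = loader.loadClass("highmark.test.Main");
                main.getMethod("test").invoke(null);
            }
        }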

    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong, it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and call to the web service and explain to the Operations guys how to run those unit tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relate to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download say a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.

    Read the article

  • Application error with MyFaces 1.2: java.lang.IllegalStateException: No Factories configured for this Application.

    - by IgorB
    For my app I'm using Tomcat 6.0.x and the Mojarra 1.2_04 JSF implementation. It works fine, but I would now like to switch to the MyFaces 1.2_10 implementation of JSF. During the deployment of my app I get the following error:

    ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/myApp]] StandardWrapper.Throwable
    java.lang.IllegalStateException: No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions! If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml. A typical config looks like this;
    <listener>
      <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
    </listener>
        at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:106)
        at javax.faces.webapp.FacesServlet.init(FacesServlet.java:137)
        at org.apache.myfaces.webapp.MyFacesServlet.init(MyFacesServlet.java:113)
        at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1172)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:992)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4058)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4371)
        ...

    Here is part of my web.xml configuration:

    <servlet>
      <servlet-name>Faces Servlet</servlet-name>
      <!-- <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> -->
      <servlet-class>org.apache.myfaces.webapp.MyFacesServlet</servlet-class>
      <load-on-startup>1</load-on-startup>
    </servlet>
    ...
    <listener>
      <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
    </listener>

    Has anyone experienced a similar error, and what should I do in order to fix it? Thanx!

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Intermittent "Specified cast is invalid" with StructureMap injected data context

    - by FreshCode
    I am intermittently getting an System.InvalidCastException: Specified cast is not valid. error in my repository layer when performing an abstracted SELECT query mapped with LINQ. The error can't be caused by a mismatched database schema since it works intermittently and it's on my local dev machine. Could it be because StructureMap is caching the data context between page requests? If so, how do I tell StructureMap v2.6.1 to inject a new data context argument into my repository for each request? Update: I found this question which correlates my hunch that something was being re-used. Looks like I need to call Dispose on my injected data context. Not sure how I'm going to do this to all my repositories without copypasting a lot of code. Edit: These errors are popping up all over the place whenever I refresh my local machine too quickly. Doesn't look like it's happening on my remote deployment box, but I can't be sure. I changed all my repositories' StructureMap life cycles to HttpContextScoped() and the error persists. Code: public ActionResult Index() { // error happens here, which queries my page repository var page = _branchService.GetPage("welcome"); if (page != null) ViewData["Welcome"] = page.Body; ... } Repository: GetPage boils down to a filtered query mapping in my page repository. public IQueryable<Page> GetPages() { var pages = from p in _db.Pages let categories = GetPageCategories(p.PageId) let revisions = GetRevisions(p.PageId) select new Page { ID = p.PageId, UserID = p.UserId, Slug = p.Slug, Title = p.Title, Description = p.Description, Body = p.Text, Date = p.Date, IsPublished = p.IsPublished, Categories = new LazyList<Category>(categories), Revisions = new LazyList<PageRevision>(revisions) }; return pages; } where _db is an injected data context as an argument, stored in a private variable which I reuse for SELECT queries. Error: Specified cast is not valid. Exception Details: System.InvalidCastException: Specified cast is not valid. Stack Trace: [InvalidCastException: Specified cast is not valid.] 
System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) +4539 System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) +207 System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +500 System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +50 System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) +383 Manager.Controllers.SiteController.Index() in C:\Projects\Manager\Manager\Controllers\SiteController.cs:68 lambda_method(Closure , ControllerBase , Object[] ) +79 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +258 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39 System.Web.Mvc.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() +125 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +640 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +312 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +709 System.Web.Mvc.Controller.ExecuteCore() +162 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +58 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +20 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371

    Read the article

  • CDI SessionScoped Bean results in two instances in same session

    - by Ryan
    I've got two instances of a SessionScoped CDI bean for the same session. I was under the impression that there would be one instance generated for me by CDI, but it generated two. Am I misunderstanding how CDI works, or did I find a bug? Here is the bean code: package org.mycompany.myproject.session; import java.io.Serializable; import javax.enterprise.context.SessionScoped; import javax.faces.context.FacesContext; import javax.inject.Named; import javax.servlet.http.HttpSession; @Named @SessionScoped public class MyBean implements Serializable { private String myField = null; public MyBean() { System.out.println("MyBean constructor called"); FacesContext fc = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession)fc.getExternalContext().getSession(false); String sessionId = session.getId(); System.out.println("Session ID: " + sessionId); } public String getMyField() { return myField; } public void setMyField(String myField) { this.myField = myField; } } Here is the Facelet code: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core"> <f:view contentType="text/html" encoding="UTF-8"> <h:head> <title>Test</title> </h:head> <h:body> <h:form id="form"> <h:inputText value="#{myBean.myField}"/> <h:commandButton value="Submit"/> </h:form> </h:body> </f:view> </html> Here is the output from deployment and navigating to page: INFO: Loading application org.mycompany_myproject_war_1.0-SNAPSHOT at /myproject INFO: org.mycompany_myproject_war_1.0-SNAPSHOT was successfully deployed in 8,237 milliseconds. INFO: MyBean constructor called INFO: Session ID: 175355b0e10fe1d0778238bf4634 INFO: MyBean constructor called INFO: Session ID: 175355b0e10fe1d0778238bf4634 Using GlassFish 3.0.1
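    One detail that can make constructor counting misleading (offered as a hedged observation, not a confirmed diagnosis): for normal-scoped beans such as @SessionScoped, CDI implementations typically create a client proxy by subclassing the bean, and constructing that proxy can also invoke the no-argument constructor. Counting instances in a @PostConstruct callback avoids that ambiguity, roughly like this:

        import java.io.Serializable;
        import javax.annotation.PostConstruct;
        import javax.enterprise.context.SessionScoped;
        import javax.faces.context.FacesContext;
        import javax.inject.Named;
        import javax.servlet.http.HttpSession;

        @Named
        @SessionScoped
        public class MyBean implements Serializable {

            private String myField = null;

            // Runs once per contextual instance, unlike the constructor,
            // which may also run when the container builds its client proxy.
            @PostConstruct
            public void init() {
                HttpSession session = (HttpSession) FacesContext.getCurrentInstance()
                        .getExternalContext().getSession(false);
                System.out.println("MyBean @PostConstruct, session ID: "
                        + (session != null ? session.getId() : "none"));
            }

            public String getMyField() {
                return myField;
            }

            public void setMyField(String myField) {
                this.myField = myField;
            }
        }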

    Read the article

  • Thin & Sinatra not taking port

    - by NekoNova
    I'm having problems settig up my application using Thin and Sinatra. I have created a development-config.ru file that contains the following settings: # This is a rack configuration file to fire up the Sinatra application. # This allows better control and configuration as we are using the modular # approach here for controlling our application. # # Extend the Ruby load path with the root of the API and the lib folder # so that we can automatically include all our own custom classes. This makes # the requiring of files a bit cleaner and easier to maintain. # This is basically what rails does as well. # We also store the root of the API in the ENV settings to ensure we have # always access to the root of the API when building paths. ENV['API_ROOT'] = File.dirname(__FILE__) $:.unshift ENV['API_ROOT'] $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'lib')) $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'db')) # Now we can require all the gems used for the entire API by simpling requiring # them here. We can also include the classes that we have defined inside the lib # folder. require 'rubygems' require 'bundler' # Run Bundler to setup our gems properly. This will install all the missing gems on # the system and ensure that the deployment environment is ready to run. Bundler.require # To make the loading easier for the application, we will now automatically load all # models that have been defined inside the lib folder. This ensures that we do not need # to load them anymore anywhere else in our application, as the models will be known to # ruby everywhere. Dir.glob(File.join(ENV['API_ROOT'], 'lib', '**', '*.rb')).each{|file| require file} # Now we will configure the Sinatra application so that we can fire up the entire API. # This requires some detailed settings like whether logging is allowed, the port to be # used and some folder locations. require 'sinatra' require 'app' set :logging, true set :dump_errors, true set :port, 3001 set :views, "#{ENV['API_ROOT']}/views" set :public_folder, "#{ENV['API_ROOT']}/public" set :environment, :test # Start up the Sinatra application with all the settings that we have defined. run App.new This is based upon the information I found on the Sinatra website. However, the problem is that I cannot get the application running on port 3001. If I use thin start -R development-config.ru it runs on port 3000. If I use rackup config-development.ru it runs on port 9696. However I never see Sinatra kick in or run over port 3000. My application looks like this: # Author : Arne De Herdt # Email : # This is the actuall application that will be running under Sinatra # to serve the requests for the billing middleware API. # We use the modular approach here to allow control when deploying # the application using Capistrano. require 'sinatra/base' require 'logger' require 'savon' require 'billcrux' class App < Sinatra::Base # This action responds to POST requests on the URI '/billcrux/register' # and is responsible for handeling registration requests with the # BillCrux payment system. # The post "/billcrux/register" do # do stuff end end Can someone tell me what I am doing wrong?

    Read the article

  • RAD Visual Web Application Creator/ Builder/ Designer for PHP

    - by inhoue
    Hi all, I want to see if any of you know a (free and open source will be ideal) tool/ app that can help build a php web application very quickly without investing too much time on writing codes, preferring drag and drop/ point and click work-flow designer for logic design (see Agile from Outsystems below). Plus, visual designer for the business logic is great since it can help a developer visualize the logic better. There are a lot of GUI builders, form builders out there, but I am looking for one app for the entire web application development process. My goal is to find an application that a team of developers can use together and use the build-in code of the app as much as possible. E.g. the app will provide a modular just for handle user login or a shopping cart; a developer just need to drag and drop the modular to the logic designer and the code will be generated. This way the functionality will be in a module and code will always be standard across developers. So if a new developer get on-board, he will just need to use the system and get up and running quickly. To explain this better: there is a lot php frameworks, e.g. cakephp, CodeIgniter, etc which I can use to help coding, but still I need to create (code) the GUI, writing quite a bit of codes. I am looking for a tool/ app that is a little more high level than those frameworks. Here is 2 examples apps I found during my google search which they have visual logic designer and gui builder in one single app. Also a single click deployment (but I need it to be php apps or at least I can deploy the (php) code to a LAMP/ WAMP server): Wavemaker: for JAVA Agile from Outsystems: for JAVA or .net (This one is really good, with work-flow drag and drop logic designer!) Talend: it is just an ETL tool, but the concept is what I want to bring up. Drag and drop, point and click logic design. Custom code can be added if it is needed, but the drag and drop process already finished the structure and most of the coding of the web app one needs to build. I want to list Adobe Flex, but it is more like a GUI designer + IDE, not exactly what I want to describe here. The drag and drop/ work-flow logic designer is a key for the app. I could go for the CMS route by learning how to extend them, but it is not that flexible for me and is a long learning curve. Anybody came across this type of app before? Or any idea of how can I find those apps? I googled them for long time, I don't see any of them for php and just few (just 2) for Java. Thanks in advance!

    Read the article

  • jms unresolved message-destination-ref

    - by portoalet
    hi, I am using netbeans 6.8, and glassfish v3, and making a simple jms application to work. I got this: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref jms/[email protected]@null into class enterpriseapplication4.Main Code: public class Main { @Resource(name = "jms/myQueue") private static Topic myQueue; @Resource(name = "jms/myFactory") private static ConnectionFactory myFactory; ... // the rest is just boiler plate created by netbeans } In my Glassfish v3 admin console, I have jms/myFactory as my ConnectionFactory and jms/myQueue as my Destination Resources. What am I missing? Full stack: WARNING: enterprise.deployment.backend.invalidDescriptorMappingFailure com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref jms/[email protected]@null into class enterpriseapplication4.Main at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:614) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.inject(InjectionManagerImpl.java:384) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectClass(InjectionManagerImpl.java:210) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectClass(InjectionManagerImpl.java:202) at org.glassfish.appclient.client.acc.AppClientContainer$ClientMainClassSetting.getClientMainClass(AppClientContainer.java:599) at org.glassfish.appclient.client.acc.AppClientContainer.getMainMethod(AppClientContainer.java:498) at org.glassfish.appclient.client.acc.AppClientContainer.completePreparation(AppClientContainer.java:397) at org.glassfish.appclient.client.acc.AppClientContainer.prepare(AppClientContainer.java:311) at org.glassfish.appclient.client.AppClientFacade.prepareACC(AppClientFacade.java:264) at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.premain(AppClientContainerAgent.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:323) at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:338) Caused by: javax.naming.NamingException: Lookup failed for 'java:comp/env/jms/myQueue' in SerialContext targetHost=localhost,targetPort=3700 [Root exception is javax.naming.NameNotFoundException: No object bound for java:comp/env/jms/myQueue [Root exception is java.lang.NullPointerException]] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:513) ... 15 more Caused by: javax.naming.NameNotFoundException: No object bound for java:comp/env/jms/myQueue [Root exception is java.lang.NullPointerException] at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:218) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:428) ... 
17 more Caused by: java.lang.NullPointerException at javax.naming.InitialContext.getURLScheme(InitialContext.java:269) at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:318) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.naming.util.JndiNamingObjectFactory.create(JndiNamingObjectFactory.java:75) at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:688) at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:657) at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:148) ... 18 more Regards
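    As an illustration of a variation that is sometimes tried in GlassFish application clients (an assumption-based sketch, not a confirmed fix): bind the injection points directly to the administered objects by their global JNDI names with mappedName, and make sure the field type matches the destination type that was actually created (Queue vs. Topic):

        import javax.annotation.Resource;
        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;

        public class Main {

            // mappedName points straight at the administered objects created in the
            // admin console, so no message-destination-ref mapping is needed.
            // If jms/myQueue was created as a Queue, the field type should be Queue, not Topic.
            @Resource(mappedName = "jms/myQueue")
            private static Queue myQueue;

            @Resource(mappedName = "jms/myFactory")
            private static ConnectionFactory myFactory;

            // ... rest of the application-client boilerplate unchanged
        }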

    Read the article

  • How do I query delegation properties of an active directory user account?

    - by Mark J Miller
    I am writing a utility to audit the configuration of a WCF service. In order to properly pass credentials from the client, thru the WCF service back to the SQL back end the domain account used to run the service must be configured in Active Directory with the setting "Trust this user for delegation" (Properties - "Delegation" tab). Using C#, how do I access the settings on this tab in Active Directory. I've spent the last 5 hours trying to track this down on the web and can't seem to find it. Here's what I've done so far: using (Domain domain = Domain.GetCurrentDomain()) { Console.WriteLine(domain.Name); // get domain "dev" from MSSQLSERVER service account DirectoryEntry ouDn = new DirectoryEntry("LDAP://CN=Users,dc=dev,dc=mydomain,dc=lcl"); DirectorySearcher search = new DirectorySearcher(ouDn); // get sAMAccountName "dev.services" from MSSQLSERVER service account search.Filter = "(sAMAccountName=dev.services)"; search.PropertiesToLoad.Add("displayName"); search.PropertiesToLoad.Add("userAccountControl"); SearchResult result = search.FindOne(); if (result != null) { Console.WriteLine(result.Properties["displayName"][0]); DirectoryEntry entry = result.GetDirectoryEntry(); int userAccountControlFlags = (int)entry.Properties["userAccountControl"].Value; if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_FOR_DELEGATION) Console.WriteLine("TRUSTED_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) Console.WriteLine("TRUSTED_TO_AUTH_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.NOT_DELEGATED) == (int)UserAccountControl.NOT_DELEGATED) Console.WriteLine("NOT_DELEGATED"); foreach (PropertyValueCollection pvc in entry.Properties) { Console.WriteLine(pvc.PropertyName); for (int i = 0; i < pvc.Count; i++) { Console.WriteLine("\t{0}", pvc[i]); } } } } The "userAccountControl" does not seem to be the correct property. I think it is tied to the "Account Options" section on the "Account" tab, which is not what we're looking for but this is the closest I've gotten so far. The justification for all this is: We do not have permission to setup the service in QA or in Production, so along with our written instructions (which are notoriously only followed in partial) I am creating a tool that will audit the setup (WCF and SQL) to determine if the setup is correct. This will allow the person deploying the service to run this utility and verify everything is setup correctly - saving us hours of headaches and reducing downtime during deployment.

    Read the article

  • How to prevent mvn jetty:run from executing test phase?

    - by tputkonen
    We use MySQL in production, and Derby for unit tests. Our pom.xml copies Derby version of persistence.xml before tests, and replaces it with the MySQL version in prepare-package phase: <plugin> <artifactId>maven-antrun-plugin</artifactId> <version>1.3</version> <executions> <execution> <id>copy-test-persistence</id> <phase>process-test-resources</phase> <configuration> <tasks> <!--replace the "proper" persistence.xml with the "test" version--> <copy file="${project.build.testOutputDirectory}/META-INF/persistence.xml.test" tofile="${project.build.outputDirectory}/META-INF/persistence.xml" overwrite="true" verbose="true" failonerror="true" /> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> <execution> <id>restore-persistence</id> <phase>prepare-package</phase> <configuration> <tasks> <!--restore the "proper" persistence.xml--> <copy file="${project.build.outputDirectory}/META-INF/persistence.xml.production" tofile="${project.build.outputDirectory}/META-INF/persistence.xml" overwrite="true" verbose="true" failonerror="true" /> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> The problem is, that if I execute mvn jetty:run it will execute the test persistence.xml file copy task before starting jetty. I want it to be run using the deployment version. How can I fix this?

    Read the article

  • How to design service that can provide interface as JAX-WS web service, or via JMS, or as local meth

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context: Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human readable document kept up-to-date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic which runs in a seperate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third-parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice to have. Additional consideration are: On a smaller deployment, Service A and Service B will likely to running on the same machine and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.
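    As an illustration of one common JEE arrangement for this kind of requirement: keep the business logic in a plain local session bean and add thin JAX-WS and message-driven facades that delegate to it, so co-located callers use the local interface while external callers use SOAP or JMS. A rough sketch with invented names (each type would normally live in its own source file):

        import javax.ejb.EJB;
        import javax.ejb.Local;
        import javax.ejb.MessageDriven;
        import javax.ejb.Stateless;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jws.WebMethod;
        import javax.jws.WebService;

        // Core business logic, callable in-process by Service B when co-located.
        @Local
        public interface ServiceALocal {
            String lookup(String key);
        }

        @Stateless
        public class ServiceABean implements ServiceALocal {
            public String lookup(String key) {
                return "value-for-" + key; // real logic would hit the database
            }
        }

        // SOAP facade: exposes the same operation as a WS-I style WSDL/SOAP/HTTP endpoint.
        @Stateless
        @WebService(serviceName = "ServiceA")
        public class ServiceASoapFacade {
            @EJB
            private ServiceALocal delegate;

            @WebMethod
            public String lookup(String key) {
                return delegate.lookup(key);
            }
        }

        // JMS facade: consumes request messages from a queue and delegates the same way.
        @MessageDriven(mappedName = "jms/ServiceARequests")
        public class ServiceAJmsFacade implements MessageListener {
            @EJB
            private ServiceALocal delegate;

            public void onMessage(Message message) {
                // extract the key from the message, call delegate.lookup(...),
                // and send the reply to the message's JMSReplyTo destination
            }
        }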

    Read the article

  • Yii file upload issue

    - by user1289853
    Couple of months ago,I developed a simple app using YII,one of the feature was to upload file. The feature was working well in my dev machine,couple of days ago client found the file upload feature is not working in his server since deployment. And after that I test my dev machine that was not working too. My controller looks: public function actionEntry() { if (!Yii::app()->user->isGuest) { $model = new TrackForm; if (isset($_POST['TrackForm'])) { $entry = new Track; try { $entry->product_image = $_POST['TrackForm']['product_image']; $entry->product_image = CUploadedFile::getInstance($model, 'product_image'); if ($entry->save()) { if ($entry->product_image) { $entry->product_image->saveAs($entry->product_image->name, '/trackshirt/uploads'); } } $this->render('success', array('model' => $model)); // redirect to success page } } catch (Exception $e) { echo 'Caught exception: ', $e->getMessage(), "\n"; } } else { $this->render('entry', array('model' => $model)); } } } Model is like below: <?php class Track extends CActiveRecord { public static function model($className=__CLASS__) { return parent::model($className); } public function tableName() { return 'product_details'; } } My view looks: <?php $form = $this->beginWidget('CActiveForm', array( 'id' => 'hide-form', 'enableClientValidation' => true, 'clientOptions' => array( 'validateOnSubmit' => true, ), 'htmlOptions' => array('enctype' => 'multipart/form-data'), )); ?> <p class="auto-style2"><strong>Administration - Add New Product</strong></p> <table align="center" style="width: 650px"><td class="auto-style3" style="width: 250px">Product Image</td> <td> <?php echo $form->activeFileField($model, 'product_image'); ?> </td> </tr> </table> <p class="auto-style1"> <div style="margin-leftL:-100px;"> <?php echo CHtml::submitButton('Submit New Product Form'); ?> </div> <?php $this->endWidget(); ?> Any idea where is the problem?I tried to debug it but every time it returns Null. Thanks.

    Read the article

  • Why is this line breaking Rails with Passenger on DreamHost?

    - by Frew
    Ok, so I have a Rails app set up on DreamHost and I had it working a while ago and now it's broken. I don't know a lot about deployment environments or anything like that so please forgive my ignorance. Anyway, it looks like the app is crashing at this line in config/environment.rb:

        require File.join(File.dirname(__FILE__), 'boot')

    config/boot.rb is pretty much normal, but I'll include it here anyway.

        # Don't change this file!
        # Configure your app in config/environment.rb and config/environments/*.rb

        RAILS_ROOT = "#{File.dirname(__FILE__)}/.." unless defined?(RAILS_ROOT)

        module Rails
          class << self
            def boot!
              unless booted?
                preinitialize
                pick_boot.run
              end
            end

            def booted?
              defined? Rails::Initializer
            end

            def pick_boot
              (vendor_rails? ? VendorBoot : GemBoot).new
            end

            def vendor_rails?
              File.exist?("#{RAILS_ROOT}/vendor/rails")
            end

            def preinitialize
              load(preinitializer_path) if File.exist?(preinitializer_path)
            end

            def preinitializer_path
              "#{RAILS_ROOT}/config/preinitializer.rb"
            end
          end

          class Boot
            def run
              load_initializer
              Rails::Initializer.run(:set_load_path)
            end
          end

          class VendorBoot < Boot
            def load_initializer
              require "#{RAILS_ROOT}/vendor/rails/railties/lib/initializer"
              Rails::Initializer.run(:install_gem_spec_stubs)
            end
          end

          class GemBoot < Boot
            def load_initializer
              self.class.load_rubygems
              load_rails_gem
              require 'initializer'
            end

            def load_rails_gem
              if version = self.class.gem_version
                gem 'rails', version
              else
                gem 'rails'
              end
            rescue Gem::LoadError => load_error
              $stderr.puts %(Missing the Rails #{version} gem. Please `gem install -v=#{version} rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.)
              exit 1
            end

            class << self
              def rubygems_version
                Gem::RubyGemsVersion if defined? Gem::RubyGemsVersion
              end

              def gem_version
                if defined? RAILS_GEM_VERSION
                  RAILS_GEM_VERSION
                elsif ENV.include?('RAILS_GEM_VERSION')
                  ENV['RAILS_GEM_VERSION']
                else
                  parse_gem_version(read_environment_rb)
                end
              end

              def load_rubygems
                require 'rubygems'
                min_version = '1.1.1'
                unless rubygems_version >= min_version
                  $stderr.puts %Q(Rails requires RubyGems >= #{min_version} (you have #{rubygems_version}). Please `gem update --system` and try again.)
                  exit 1
                end
              rescue LoadError
                $stderr.puts %Q(Rails requires RubyGems >= #{min_version}. Please install RubyGems and try again: http://rubygems.rubyforge.org)
                exit 1
              end

              def parse_gem_version(text)
                $1 if text =~ /^[^#]*RAILS_GEM_VERSION\s*=\s*["']([!~<>=]*\s*[\d.]+)["']/
              end

              private

              def read_environment_rb
                File.read("#{RAILS_ROOT}/config/environment.rb")
              end
            end
          end
        end

        # All that for this:
        Rails.boot!

    Does anyone have any ideas? I am not getting any errors in the log or on the page. -fREW

    Read the article

  • Configuring the Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it to use the Follow Me printers at our university (you can pick up your printout from any printer). Using this tutorial from Novell, I tried to convert the RPM package and install it on Ubuntu 13.04 and 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray and the old workarounds to whitelist the Novell client didn't work). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories). I try installing printers from the provided IPP URL (which lists the available printers on the server) with no success. After clicking the printer name I see this:
    I get various errors. Formerly Firefox used to ask for my network username/password when installing the SSL printer, but now it returns this: iPrint Printer - The printer is currently not available.
    However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null; even if I change it to the exact address that I see on working machines, it still prints nothing. I have tried printing with the Novell command line tool, iprntcmd, which is installed at /opt/novell/iprint/bin/:

        msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
        iprntcmd v05.04.00
        Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
        Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

    It adds the printer with an empty location and again no print. What I found interesting is the log file at ~/.iprint/errors.txt with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that HP is my local printer and has nothing to do with iPrint):

        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 676  Group Info: CLIB  Error Code: 0 (0x0)  User ID: 1000  Error Msg: Success  Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)
        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 1293  Group Info: CUPS-IPP  Error Code: 1282 (0x502)  User ID: 1000  Error Msg: iPrint Printer - The printer is currently not available.  Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE
        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000  Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).  Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

    I should say that my friend can print using the same instructions on CrunchBang easily, and another guy managed on 12.04 LTS with more struggling. It also worked for me on Linux Mint Maya with my old laptop. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support. PS: I had the same problem with 13.04, and whether I am within the network or connect over VPN, I have the same issues.

    Read the article

  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files.
    Controlling the Indexer
    By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on.
    Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data on a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive.
    To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window.
    Searching for Files
    You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar.
    You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows.
    Using Advanced Search Operators
    On Windows 7, you’ll notice that you can add “search filters” from the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results.
    If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface.
    Creating Saved Searches
    Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files.
    One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.
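    To make concrete how much work the index saves, here is a rough, illustrative Java sketch (it assumes Java 11+ and nothing Windows-specific; the starting folder and the word “beluga” are just placeholders echoing the example above) of what a search without an index has to do: walk the whole folder tree and open every single file to look inside it.

        import java.io.IOException;
        import java.nio.file.FileVisitResult;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.SimpleFileVisitor;
        import java.nio.file.attribute.BasicFileAttributes;

        public class BruteForceSearch {
            public static void main(String[] args) throws IOException {
                // Placeholders: scan the user's home folder for the word "beluga".
                Path root = Paths.get(System.getProperty("user.home"));
                String needle = "beluga";

                Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                        try {
                            // Opening and scanning every file is the slow part;
                            // an index replaces all of this with a single lookup.
                            if (Files.readString(file).contains(needle)) {
                                System.out.println(file);
                            }
                        } catch (IOException e) {
                            // Binary or unreadable files: just skip them in this sketch.
                        }
                        return FileVisitResult.CONTINUE;
                    }

                    @Override
                    public FileVisitResult visitFileFailed(Path file, IOException e) {
                        return FileVisitResult.CONTINUE; // ignore files we can't access
                    }
                });
            }
        }

    The per-file open-and-scan loop above is exactly the cost the Windows indexer pays once, in the background, so that your searches don't have to.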

    Read the article

< Previous Page | 579 580 581 582 583 584 585 586 587 588 589 590  | Next Page >