Search Results

Search found 19807 results on 793 pages for 'xml binding'.


  • Aspect-Oriented Programming in the OOP world - breaking the rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation, among them DataAnnotation validation and the Validation Block. They use attributes to set up validation rules, like this: [Required] public string Name {get;set;} I was confused about how this approach squares with the SRP (single responsibility principle) from the OOP world. I also don't like any business logic in business objects - I prefer a "poor business objects" model - but when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether separating the validation rules into XML files or into other objects is the solution. Another bad side of AOP is the difficulty of unit testing such code: when I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions. Do you think that attributes don't break SRP, or do you simply disregard this and consider it the simplest way, and not the worst one? P.S. I have read several similar articles and discussions and just want to put things in proper order.
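
    For illustration, the separation described above can be pictured as an external rules file read by a validator component at runtime. The format below is purely hypothetical - every element and attribute name is invented - but it shows the validation metadata living outside the business object, which stays a plain class carrying no attributes:

        <validation>
          <type name="Customer">
            <property name="Name">
              <rule kind="required" messageKey="customer.name.required"/>
              <rule kind="maxLength" value="100"/>
            </property>
          </type>
        </validation>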

  • XSLT 1.0 : Iterate over characters in a string

    - by subtenante
    I need to iterate over the characters in a string to build an XML structure. Currently I am doing this: <xsl:template name="verticalize"> <xsl:param name="text">Some text</xsl:param> <xsl:for-each select="tokenize(replace(replace($text,'(.)','$1\\n'),'\\n$',''),'\\n')"> <xsl:element name="para"> <xsl:value-of select="."/> </xsl:element> </xsl:for-each> </xsl:template> This produces something like: <para>S</para> <para>o</para> <para>m</para> <para>e</para> <para> </para> <para>t</para> <para>e</para> <para>x</para> <para>t</para> This works fine with XPath 2.0, but I need to apply the same treatment in an XPath 1.0 environment, where the replace() function is not available. Do you know a way to achieve this?
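
    In XSLT 1.0 the usual workaround is a recursive named template built on substring() and string-length(), both of which exist in XPath 1.0. A minimal sketch (the template and output element names just mirror the example above):

        <xsl:template name="verticalize">
          <xsl:param name="text"/>
          <xsl:if test="string-length($text) &gt; 0">
            <para>
              <xsl:value-of select="substring($text, 1, 1)"/>
            </para>
            <xsl:call-template name="verticalize">
              <xsl:with-param name="text" select="substring($text, 2)"/>
            </xsl:call-template>
          </xsl:if>
        </xsl:template>

    Very long strings can exhaust the recursion limit of some XSLT 1.0 processors; a divide-and-conquer variant that splits the string in halves is the usual fix if that happens.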

  • Problem loading scripts from Ajax Response

    - by konathamrajesh
    The problem is that I am using get_info() to make an Ajax call to Result.lasso and insert the response into the div with id 'test'. I am unable to use the sendForm() function from the page where I call get_info(). I have also tried different versions of jQuery: 1.1.1.3 works fine, but I am facing the problem with higher versions of jQuery. The error with higher versions is as follows: missing } in XML expression [Break on this error] alert('hi');\n test.lasso (line 3) sendForm is not defined [Break on this error] sendForm(); The get_info() function definition: <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js" type="text/javascript"></script> <SCRIPT> function get_info() { $.ajax({url: "Result.lasso", context: document.body, success: function(response){ document.getElementById('test').innerHTML = response ;},dataType:"script"}); } </SCRIPT> The code in Result.lasso is as follows: [Content_Type: 'text/html; charset=UTF-8'] <script type="text/javascript"> function sendForm() { alert('hi'); } </script> [Date] <form name="abc" method="get" action="abcd.lasso"> <input type="text" name="element1"/> <input type="button" value="Click" onClick="javascript: sendForm();"/> </form> Please help me out in resolving this problem. Thanks, Rajesh Konatham

  • Oracle command hangs when using view for "WHERE x IN..." subquery

    - by Calvin Fisher
    I'm working on a web service that fetches data from an oracle data source in chunks and passes it back to an indexing/search tool in XML format. I'm the C#/.NET guy, and am kind of fuzzy on parts of Oracle. Our Oracle team gave us the following script to run, and it works well: SELECT ROWID, [columns] FROM [table] WHERE ROWID IN ( SELECT ROWID FROM ( SELECT ROWID FROM [table] WHERE ROWID > '[previous_batch_last_rowid]' ORDER BY ROWID ) WHERE ROWNUM <= 10000 ) ORDER BY ROWID 10,000 rows is an arbitrary but reasonable chunk size and ROWID is sufficiently unique for our purposes to use as a UID since each indexing run hits only one table at a time. Bracketed values are filled in programmatically by the web service. Now we're going to start adding views to the indexing, each of which will union a few separate tables. Since ROWID would no longer function as a unique identifier, they added a column to the views (VIEW_UNIQUE_ID) that concatenates the ROWIDs from the component tables to construct a UID for each union. But this script does not work, even though it follows the same form as the previous one: SELECT VIEW_UNIQUE_ID, [columns] FROM [view] WHERE VIEW_UNIQUE_ID IN ( SELECT VIEW_UNIQUE_ID FROM ( SELECT VIEW_UNIQUE_ID FROM [view] WHERE ROWID > '[previous_batch_last_view_unique_id]' ORDER BY VIEW_UNIQUE_ID ) WHERE ROWNUM <= 10000 ) ORDER BY VIEW_UNIQUE_ID It hangs indefinitely with no response from the Oracle server. I've waited 20+ minutes and the SQLTools dialog box indicating a running query remains the same, with no progress or updates. I've tested each subquery independently and each works fine and takes a very short amount of time (<= 1 second), so the view itself is sound. But as soon as the inner two SELECT queries are added with "WHERE VIEW_UNIQUE_ID IN...", it hangs. Why doesn't this query work for views? In what important way are they not interchangeable here?

  • What should the standard be for ReSTful URLs?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app. Which seems like a good idea. However, there seems to be a lot of contradictory thoughts on what the best scheme is for ReSTful URLs. Some people advocate simple pretty urls http://api.myapp.com/resource/1 In addition, some people like to add the API version to the url like so http://api.myapp.com/v1/resource/1 And to make things even more confusing, some people advocate adding the content-type to get requests http://api.myapp.com/v1/resource/1.xml http://api.myapp.com/v1/resource/1.json http://api.myapp.com/v1/resource/1.txt Whereas others think the content-type should be sent in the HTTP header. Soooooooo.... That's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator and content-type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you". But that doesn't really fit with the ReST mentality as far as I can tell since the aim is to have a standard. And since a lot of you people will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind... What should the standard be for ReSTful URLS?

  • Oracle + Hibernate + Maven dependencies: dbcp.BasicDataSource exception

    - by Joe
    I'm trying to create a web application using Maven, Tomcat and Hibernate. Now I'm getting a "cannot find class org.apache.commons.dbcp.BasicDataSource for bean with name dataSource..." exception. Without the Hibernate parts it works fine, but if I add <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop> <prop key="hibernate.hbm2ddl.auto">create</prop> <prop key="hibernate.show_sql">true</prop> </props> </property> <property name="mappingResources"> <list> </list> </property> </bean> to my applicationContext then I get the error. What I did was: add org.hibernate to my pom, put ojdbc16.jar in my Tomcat bin folder, and add the above snippet to my applicationContext.xml. I use a bat file to compile my project (using Maven), copy it to my Tomcat webapp folder and start the server. Any input on what I'm doing wrong is welcome.
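
    One detail worth checking: org.apache.commons.dbcp.BasicDataSource lives in the Commons DBCP artifact, which the Spring and Hibernate dependencies do not necessarily pull in transitively, so it usually needs its own entry in the pom. A sketch of that dependency (the version shown is only an example):

        <dependency>
          <groupId>commons-dbcp</groupId>
          <artifactId>commons-dbcp</artifactId>
          <version>1.4</version>
        </dependency>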

  • How can I modify a scalar reference passed to a subroutine reference

    - by Mark
    I have a function to convert documents into different formats, which then calls another function based on the type document. It's pretty straight forward for everything aside from HTML documents which require a bit of cleaning up, and that cleaning up is different based on where it's come from. So I had the idea that I could pass a reference to a subroutine to the convert function so the caller has the opportunity to modify the HTML, kinda like so (I'm not at work so this isn't copy-and-pasted): package Converter; ... sub convert { my ($self, $filename, $coderef) = @_; if ($filename =~ /html?$/i) { $self->_convert_html($filename, $coderef); } } sub _convert_html { my ($self, $filename, $coderef) = @_; my $html = $self->slurp($filename); $coderef->(\$html); #this modifies the html $self->save_to_file($filename, $html); } which is then called by: Converter->new->convert("./whatever.html", sub { s/<html>/<xml>/i }); I've tried a couple of different things along these lines but I keep on getting 'Use of uninitialized value in substitution (s///)'. Is there any way of doing what I'm trying to do? Thanks

  • Logback: What would cause DEBUG - WARN to append to file but NOT ERROR?

    - by Zombies
    I am running a Java program from Eclipse, and I am using the logback console plugin. I can see the ERROR level statements being appended to the console (as well as all of the others). But for some reason my file, which is correctly receiving new DEBUG-WARN statements, is NOT receiving the ERROR level ones. Here is my logback.xml: <consolePlugin /> <appender name="FILE" class="ch.qos.logback.core.FileAppender"> <file>logs/main.log</file> <layout class="ch.qos.logback.classic.PatternLayout"> <Pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</Pattern> </layout> </appender> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <layout class="ch.qos.logback.classic.PatternLayout"> <Pattern>%msg%n</Pattern> </layout> </appender> <logger name="WebsiteChecker"> <appender-ref ref="FILE" /> </logger> <root> <level value="debug" /> <!--<appender-ref ref="STDOUT" />--> <appender-ref ref="FILE" /> </root> </configuration>

  • Passing array values in an HTTP request in .NET

    - by Zarjay
    What's the standard way of passing and processing an array in an HTTP request in .NET? I have a solution, but I don't know if it's the best approach. Here's my solution: <form action="myhandler.ashx" method="post"> <input type="checkbox" name="user" value="Aaron" /> <input type="checkbox" name="user" value="Bobby" /> <input type="checkbox" name="user" value="Jimmy" /> <input type="checkbox" name="user" value="Kelly" /> <input type="checkbox" name="user" value="Simon" /> <input type="checkbox" name="user" value="TJ" /> <input type="submit" value="Submit" /> </form> The ASHX handler receives the "user" parameter as a comma-delimited string. You can get the values easily by splitting the string: public void ProcessRequest(HttpContext context) { string[] users = context.Request.Form["user"].Split(','); } So, I already have an answer to my problem: assign multiple values to the same parameter name, assume the ASHX handler receives it as a comma-delimited string, and split the string. My question is whether or not this is how it's typically done in .NET. What's the standard practice for this? Is there a simpler way to grab the multiple values than assuming that the value is comma-delimited and calling Split() on it? Is this how arrays are typically passed in .NET, or is XML used instead? Does anyone have any insight on whether or not this is the best approach?

  • Why are Managed Beans not loaded in Tomcat?

    - by c0d3x
    Hi, I created a JSF 2 web application with Facelets. The JSF libs were stored in tomcat/lib, to share them between several applications. I thought it might be better to store the libs inside the application's WEB-INF/lib folder instead, to make the application more independent of the server configuration. Now when I start Tomcat via Eclipse, the managed beans are loaded and working. But when I start Tomcat directly / standalone, the managed beans are not loaded automatically. I used @ManagedBean @SessionScoped / @RequestScoped annotations to declare classes as managed beans. Why is this? What can I do to fix it? I don't use any faces-config.xml file yet. Thanks in advance. Edit: maybe this helps to see what's going on: javax.el.PropertyNotFoundException: /Artikel.xhtml @12,108 value="#{artikelBackingBean.nameFilterPattern}": Target Unreachable, identifier 'artikelBackingBean' resolved to null at com.sun.faces.facelets.el.TagValueExpression.getType(TagValueExpression.java:93) at com.sun.faces.renderkit.html_basic.HtmlBasicInputRenderer.getConvertedValue(HtmlBasicInputRenderer.java:95) at javax.faces.component.UIInput.getConvertedValue(UIInput.java:1008) at javax.faces.component.UIInput.validate(UIInput.java:934) at javax.faces.component.UIInput.executeValidate(UIInput.java:1189) at javax.faces.component.UIInput.processValidators(UIInput.java:691) at javax.faces.component.UIForm.processValidators(UIForm.java:243) at javax.faces.component.UIComponentBase.processValidators(UIComponentBase.java:1080) at javax.faces.component.UIViewRoot.processValidators(UIViewRoot.java:1180) at com.sun.faces.lifecycle.ProcessValidationsPhase.execute(ProcessValidationsPhase.java:76) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:619)

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple Web Service (asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site: The web site could not be configured correctly; getting ASP.NET process information failed. Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd return an error: The remote server returned an error: (500) Internal Server Error. http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test I have tried: 1. Running the tests on IIS rather than ASP.NET Development Server 2. Adding and then removing the XML fragment to my Web Service's .config file 3. Giving the MACHINE\ASPNET account Full control to the local folder My current questions: 1. Why am I being bothered with this instrumentation / code coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off? 2. I'm placing the node under in Web.config - is that the correct node? 3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advising making the Web Service as light-weight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful. Best wishes, Ben

  • ResourceFilterFactory and non-Path annotated Resources

    - by tousdan
    (I'm using Jersey 1.7.) I am attempting to add a ResourceFilterFactory to my project so I can select which filters are applied per method, using annotations. The ResourceFilterFactory does generate filters for resources annotated with the Path annotation, but it does not seem to generate filters for the methods of the sub-resources returned by a sub-resource locator. @Path("a") public class A { //sub resource locator? @Path("b") public B getB() { return new B(); } @GET public void doGet() {} } public class B { @GET public void doOtherGet() { } @Path("c") public void doInner() { } } When run, the filter factory is only called for the following: AbstractResourceMethod(A#doGet) AbstractSubResourceLocator(A#getB) whereas I expected it to be called for every method of the sub-resource. I'm currently using the following options in my web.xml: <init-param> <param-name>com.sun.jersey.spi.container.ResourceFilters</param-name> <param-value>com.my.MyResourceFilterFactory</param-value> </init-param> <init-param> <param-name>com.sun.jersey.config.property.packages</param-name> <param-value>com.my.resources</param-value> </init-param> Is my understanding of the filter factory flawed?

  • java.lang.ClassNotFoundException: org.springframework.ui.ModelMap

    - by aelshereay
    I create a simple webapp using tomcat 6, spring 2.5.6 and maven. The problem is when I boot up tomcat, I am getting the following errors: SEVERE: StandardWrapper.Throwable java.lang.NoClassDefFoundError: org/springframework/ui/ModelMap ... Caused by: java.lang.ClassNotFoundException: org.springframework.ui.ModelMap The ModelMap class does exist in spring-2.5.6.jar and spring-context-2.5.6.jar, I also have some other spring jars. All of them are being deployed to tomcat correctly, when I check the application WEB-INF (deployed to tomcat) I found all those jars there! I have only one @Controller that has a @RequestMapping("/home.htm") showForm(ModelMap model) method. My applicationContext is quite simple: <?xml version="1.0" encoding="utf-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context" xmlns:dwr="http://www.directwebremoting.org/schema/spring-dwr" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd http://www.directwebremoting.org/schema/spring-dwr http://www.directwebremoting.org/schema/spring-dwr-3.0.xsd" default-autowire="byName"> <context:component-scan base-package="org.myapp"/> <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property> <property name="prefix" value="/WEB-INF/view/"></property> <property name="suffix" value=".jsp"></property> </bean> </beans>

  • HDFS: some datanodes of the cluster suddenly disconnect while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer for running Hadoop (ver 0.21) some datanodes of cluster are suddenly disconnected while I was running MapReduce code on 10GB data After all mappers finished and around 80% of reducers was processed, randomly one or more datanode disconned from network. and then the other datanodes start to disappear from network even if I killed the MapReduce job when I found some datanode was disconnected. I've tried to change dfs.datanode.max.xcievers to 4096, turned off fire-walls of all computing node, disabled selinux and increased the number of file open limit to 20000 but they didn't work at all... anyone have a idea to solve this problem? followings are error log from mapreduce 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) and followings are logs from datanode 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: > Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. 
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>

  • Forms Authentication & Virtual Directory

    - by benclaytonfranklin
    Hi, we're having trouble getting Forms Authentication to work with a virtual directory in IIS. We have a main site, and then a microsite set up within a virtual directory. This microsite has its own admin system within an "Admin" folder, which has authentication on it, but currently it is not kicking in and the admin section is browsable by anyone. The web.config within the admin folder has the following: <?xml version="1.0"?> <configuration> <appSettings/> <connectionStrings/> <system.web> <authorization> <deny users="?"/> </authorization> <customErrors mode="RemoteOnly" defaultRedirect="~/Admin/Error.aspx"/> </system.web> </configuration> Could anyone give me any clues as to why this might not be working? Cheers!
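
    For context, ASP.NET only honours the <authentication> element at application level, so if the microsite's virtual directory is configured as its own IIS application it needs that section itself, while the per-folder web.config above can keep just the <authorization> rules. A sketch of the usual application-level setup, with a placeholder login URL:

        <system.web>
          <authentication mode="Forms">
            <forms loginUrl="~/Login.aspx" timeout="30"/>
          </authentication>
        </system.web>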

  • Running Solr on VPS problems

    - by Camran
    I have a VPS running Ubuntu. I run Solr on my local machine (a Windows XP laptop) just fine. I have configured Jetty and Solr on the server exactly the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I try to run the start.jar file, the PuTTY terminal shows a bunch of text and then gets stuck. I could paste the text here but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS, so maybe installation is different. Thanks. UPDATE: These are the last lines from the terminal; in other words, this is where it stops every time: INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9 May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher INFO: Loading spell index for spellchecker: default May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher INFO: [] Registered new searcher Searcher@63a721 main

  • Generic FSM for a game in C++?

    - by Mr.Gando
    Hello, I was wondering if there's a way I could code some kind of "generic" FSM for a game in C++. My game has a component-oriented design, so I use an FSM component. My Finite State Machine (FSM) component looks more or less like this: class gecFSM : public gecBehaviour { public: //Constructors gecFSM() { state = kEntityState_walk; } gecFSM(kEntityState s) { state = s; } //Interface void setRule(kEntityState initialState, int inputAction, kEntityState resultingState); void performAction(int action); private: kEntityState fsmTable[MAX_STATES][MAX_ACTIONS]; }; I would love to hear your opinions, ideas or suggestions about how to make this FSM component generic. By generic I mean: 1) Creating the fsmTable from an XML file is really easy - it's just a bunch of integers that can be loaded to fill a MAX_STATES x MAX_ACTIONS matrix. void gecFSM::setRule(kEntityState initialState, int inputAction, kEntityState resultingState) { fsmTable[initialState][inputAction] = resultingState; } 2) But what about the "performAction" method? void gecFSM::performAction(int action) { switch( smTable[ ownerEntity->getState() ][ action ] ) { case WALK: /*Walk action*/ break; case STAND: /*Stand action*/ break; case JUMP: /*Jump action*/ break; case RUN: /*Run action*/ break; } } What if I wanted to make the "actions" and the switch generic, in order to avoid creating a different FSM component class for each GameEntity (gecFSMZombie, gecFSMDryad, gecFSMBoss, etc.)? That would allow me to go "data driven" and construct my generic FSMs from files, for example. What do you people suggest?
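
    Since the goal is a data-driven component, the rule table itself is the natural thing to externalise. The file format below is purely hypothetical - the state and action names are invented for illustration - but it is the kind of document setRule() could be populated from, one call per transition element:

        <fsm initial="stand">
          <transition state="stand" action="move"       next="walk"/>
          <transition state="walk"  action="stop"       next="stand"/>
          <transition state="walk"  action="press_jump" next="jump"/>
          <transition state="jump"  action="land"       next="stand"/>
        </fsm>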

  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows Authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following: BUILD EXCEPTION Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source': **Server certificate verification failed: issuer is not trusted** (https://sp-ci.sbsnetwork.local:8443). Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log **sameUrlAbove** -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml --username ccnetadmin --password cruise --non-interactive --no-auth-cache at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications (IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) My SourceControl node in ccnet.config is shown below: <sourcecontrol type="svn"> <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable> <trunkUrl> check out url </trunkUrl> <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory> <username> ccnetadmin </username> <password> cruise </password> </sourcecontrol> Can anyone suggest how to avoid this error?

  • hint and textview with right gravity and a singleline

    - by codeScriber
    I've opened a bug, but I was wondering if anyone has encountered this issue and knows a workaround. If you define a text view with a hint inside it and give it right gravity (android:gravity="right"), then if you also define android:singleLine="true" or android:maxLines="1" or android:scrollHorizontally="true" you don't see the hint. Removing the right gravity returns the hint to the left side; removing all three params I mentioned above puts the hint on the right side. I want my hint on the right, but I need a single horizontal line... Here's the sample layout that doesn't show the hint: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content" android:padding="5dp"> <EditText android:layout_width="fill_parent" android:layout_gravity="center_vertical|right" android:layout_height="wrap_content" android:layout_margin="6dp" android:textSize="16sp" android:paddingRight="5dp" android:id="@+id/c" android:gravity="right" android:hint="hello!!!" android:scrollHorizontally="true" android:maxLines="1" android:singleLine="true"/> </LinearLayout> I checked on 1.6 and 2.1 emulators and it reproduces 100% of the time. I'm pretty sure it's a bug; I don't see the connection between single line and the hint. What's more, the hint gets its own layout in the TextView (mLayout and mHintLayout both exist; in onDraw, if the text length is 0 and mHint is not null, mHintLayout is used).

  • Connection

    - by pepersview
    Hello, I would like to ask about NSURLConnection in Objective-C for iPhone. I have an app that needs to connect to a web service to receive data (about YouTube videos), and I have everything I need to connect (similar to the Hello_Soap sample code on the web). My problem is that I created a class (inheriting from NSObject) named Connection, in which I have implemented the methods didReceiveResponse, didReceiveData, didFailWithError and connectionDidFinishLoading, as well as this method: -(void)Connect:(NSString *) soapMessage{ NSLog(soapMessage); NSURL *url = [NSURL URLWithString:@"http://....."]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if( theConnection ) { webData = [[NSMutableData data] retain]; } else { NSLog(@"theConnection is NULL"); } } But when I create a Connection object from my AppDelegate and call this method: Connection * connect = [[Connection alloc] Init:num]; //It's only one param to test. [connect Connect:method.soapMessage]; then, when it finishes, it never goes on to call didReceiveResponse, didReceiveData, didFailWithError or connectionDidFinishLoading. What I would like is to be able to use this Connection class each time I want to receive data (which is then parsed and displayed in UITableViews). Thank you.

  • How to eliminate duplicate nodes bases on values of multiple attributes?

    - by JayRaj
    Hello all, how can I eliminate duplicate nodes based on the values of multiple (more than one) attributes, where the attribute names are passed as parameters to the stylesheet? I am aware of the Muenchian method of grouping that uses an <xsl:key> element, but I came to know that XSLT 1.0 does not allow parameters/variables in <xsl:key>. Is there another method to achieve duplicate node removal? It is fine if it is not as efficient as the Muenchian method. Update from the previous question - the XML: <data id = "root"> <record id="1" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="2" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="3" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="4" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="5" operator1='xxx' operator2='lkj' operator3='tyu'/> <record id="6" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="7" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="8" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="9" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="10" operator1='rrr' operator2='yyy' operator3='zzz'/> </data>
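
    Where Muenchian keys are ruled out because the attribute names only arrive as parameters, one XSLT 1.0 fallback is a preceding-sibling comparison with dynamic attribute lookup via @*[name() = $param]. A minimal sketch for two attribute-name parameters (the defaults are placeholders, and the approach is O(n^2), so it only suits modest inputs):

        <xsl:param name="attr1" select="'operator1'"/>
        <xsl:param name="attr2" select="'operator2'"/>

        <xsl:template match="data">
          <data id="{@id}">
            <xsl:for-each select="record">
              <xsl:variable name="v1" select="@*[name() = $attr1]"/>
              <xsl:variable name="v2" select="@*[name() = $attr2]"/>
              <xsl:if test="not(preceding-sibling::record[@*[name() = $attr1] = $v1 and @*[name() = $attr2] = $v2])">
                <xsl:copy-of select="."/>
              </xsl:if>
            </xsl:for-each>
          </data>
        </xsl:template>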

  • Time delay an external RSS feed

    - by x3ja
    I subscribe to a number of RSS feeds, mostly from within my own timezone (UK: currently GMT+1, a.k.a BST). However I'm also interested in news from New Zealand (currently GMT+12). My problem is caused by my addiction to needing to keep my unread count at, or near, zero. When I load up my RSS reader in the mornings it has gathered all the NZ news at once (normally around 100 items) and I feel compelled either to read them all or to mark them all as read to feed my need for zero-unread-count. I figured a good solution to this would be to time delay the RSS feed somehow, so I would be drip-fed the stories at their time +12 hours, so I could read them through the day as they come in. So my question (or, rather, questions): Does such a thing exist currently & what is it? (no point reworking the wheel) If not: What would be the best way to approach doing this myself? I have access to a Linux web server on which I can run scripts, create databases, store files etc, so there should be a way... I'm most conversant in perl and have done a little fiddling with XML within that, so would naturally process ... or is there some simpler way to do it that I'm missing?

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way is to implement data syncing. The criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategies/experience? I'm thinking of something like this: 1) Store the Last Modified Date on the iPhone. 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) The server processes it and sends back only the data modified since last time. 4) This data is formatted like so: <+><data id="..."></data></+> // add this to SQLite/CoreData <-><data id="..."></data></-> // remove this <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value I don't want to make <+, <-, <%... for each attribute as well, because it would be too complicated, so probably when I receive a <% field I would just remove the data with the specified id and then add it again (assuming the id is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I update the Last Modified Date field. The main problem with this strategy: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so the next time I relaunch the app I have to go through the same thing again - not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
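
    For illustration, here is one hypothetical shape a getNewData response built on those markers could take (every element name and value is invented). Carrying the server's own timestamp in the envelope, and storing it on the client only after the whole batch has been applied atomically, is one way to keep the restart case described above well defined:

        <delta since="2010-05-01T10:00:00Z" until="2010-05-02T10:00:00Z">
          <add id="17"><title>New video</title></add>
          <remove id="9"/>
          <update id="4"><attribute name="title">Renamed video</attribute></update>
        </delta>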

  • 'mvn install' does not work & shows error while attempting to download

    - by Raj
    I am using Maven version 3.0.2. mvn --version works fine on the command line, but when I try mvn install it does not work and shows the following error. Z:\dev\hector rantav>mvn install [INFO] Scanning for projects... Downloading: http://repo1.maven.org/maven2/junit/junit/3.8.2/junit-3.8.2.jar [ERROR] The build could not read 1 project - [Help 1] [ERROR] [ERROR] The project me.prettyprint:hector:0.7.0-24-SNAPSHOT (Z:\dev\hector rantav\pom.xml) has 1 error [ERROR] Unresolveable build extension: Plugin org.apache.maven.scm:maven-scm-manager-plexus:1.3 or one of its dependencies could not be resolved: Could not transfer artifact junit:junit:jar:3.8.2 from/to central (http://repo1.maven.org/maven2): Error transferring file: repo1.maven.org: Unknown host repo1.maven.org - [Help 2] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException Z:\dev\hector rantav> Please let me know how I can fix this.
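
    The "Unknown host repo1.maven.org" part of the output suggests the machine cannot resolve Maven Central at all, which on many corporate networks means Maven needs a proxy entry rather than a change to the project. A sketch of the relevant section of ~/.m2/settings.xml, assuming an HTTP proxy is the cause (host and port are placeholders):

        <settings>
          <proxies>
            <proxy>
              <id>corporate-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
            </proxy>
          </proxies>
        </settings>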

  • Using Ant how can I dex a directory of jars?

    - by cbeaudin
    I have a directory full of jars (felix bundles). I want to iterate through all of these jars and create dex'd versions. My intent is to deploy each of these dex'd jars as standalone apk's since they are bundles. Feel free to straighten me out if I am approaching this from the wrong direction. This first part is just to try and create a corresponding .dex file for each jar. However when I run this I am getting a "no resources specified" error coming out of Ant. Is this the right approach, or is there a simpler approach to just input a jar and output a dex'd version of that jar? The ${file} is valid as it is spitting out the name of the file in the echo command. <target name="dexBundles" description="Run dex on all the bundles"> <taskdef resource="net/sf/antcontrib/antlib.xml" classpath="${basedir}/libs/ant-contrib.jar" /> <echo>Starting</echo> <for param="file"> <path> <fileset dir="${pre.dex.dir}"> <include name="**/*.jar" /> </fileset> </path> <sequential> <echo message="@{file}" /> <echo>Converting jar file @{file} into ${post.dex.dir}/@{file}.class...</echo> <apply executable="${dx}" failonerror="true" parallel="true" verbose="true"> <arg value="--dex" /> <arg value="--output=${post.dex.dir}/${file}.dex" /> <arg path="@{file}" /> </apply> </sequential> </for> <echo>Finished</echo> </target>
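
    One likely source of the "no resources specified" message is that Ant's <apply> task expects a nested resource collection (a fileset, filelist or similar) and refuses to run with only <arg> elements. Inside the <for> loop, where each iteration already holds a single file in @{file}, plain <exec> may be the simpler fit; a rough sketch that writes each .dex next to its source jar (adjust the output naming if it should go to ${post.dex.dir} instead):

        <exec executable="${dx}" failonerror="true">
          <arg value="--dex"/>
          <arg value="--output=@{file}.dex"/>
          <arg path="@{file}"/>
        </exec>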
