Search Results

Search found 92314 results on 3693 pages for 'user unknown'.


  • Axis2 with OpenEJB error on Tomcat

    - by kostya
    Hi, I deployed Axis2 and OpenEJB on Tomcat and got an error. If I deploy only Axis2 or only OpenEJB, each works properly, but when I deploy them together, Axis2 can't be deployed while OpenEJB is still available. Could anybody help with this problem, please? This is the error that I get when Tomcat starts:

      SEVERE: Error deploying web application archive axis2.war
      java.lang.ArrayIndexOutOfBoundsException: 48188
        at org.apache.xbean.asm.ClassReader.readClass(Unknown Source)
        at org.apache.xbean.asm.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm.ClassReader.accept(Unknown Source)
        at org.apache.openejb.util.AnnotationFinder.readClassDef(AnnotationFinder.java:251)
        at org.apache.openejb.util.AnnotationFinder.find(AnnotationFinder.java:157)
        at org.apache.openejb.config.DeploymentLoader.discoverModuleType(DeploymentLoader.java:1198)
        at org.apache.openejb.tomcat.catalina.TomcatWebAppBuilder.loadApplication(TomcatWebAppBuilder.java:552)
        at org.apache.openejb.tomcat.catalina.TomcatWebAppBuilder.start(TomcatWebAppBuilder.java:242)
        at org.apache.openejb.tomcat.catalina.GlobalListenerSupport.lifecycleEvent(GlobalListenerSupport.java:58)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4377)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:905)
        at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:740)
        at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:500)
        at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:519)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    Read the article

  • a scala remote actor exception

    - by kula
    Hi all, I have Scala code like this for an echo service:

      import scala.actors.Actor
      import scala.actors.Actor._
      import scala.actors.remote.RemoteActor._

      class Echo extends Actor {
        def act() {
          alive(9010)
          register('myName, self)
          loop {
            react {
              case msg => println(msg)
            }
          }
        }
      }

      object EchoServer {
        def main(args: Array[String]): unit = {
          val echo = new Echo
          echo.start
          println("Echo server started")
        }
      }

      EchoServer.main(null)

    But it throws this exception:

      java.lang.NoClassDefFoundError: Main$$anon$1$Echo$$anonfun$act$1
        at Main$$anon$1$Echo.act((virtual file):16)
        at scala.actors.Reaction.run(Reaction.scala:76)
        at scala.actors.Actor$$anonfun$start$1.apply(Actor.scala:785)
        at scala.actors.Actor$$anonfun$start$1.apply(Actor.scala:783)
        at scala.actors.FJTaskScheduler2$$anon$1.run(FJTaskScheduler2.scala:160)
        at scala.actors.FJTask$Wrap.run(Unknown Source)
        at scala.actors.FJTaskRunner.scanWhileIdling(Unknown Source)
        at scala.actors.FJTaskRunner.run(Unknown Source)
      Caused by: java.lang.ClassNotFoundException: Main$$anon$1$Echo$$anonfun$act$1
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        ... 8 more

    I don't know what causes it. By the way, my Scala version is 2.7.5.

    Read the article

  • C++ OpenCV CVCalibrateCamera2 is causing multiple errors

    - by tlayton
    I am making a simple calibration program in C++ using OpenCV. Everything goes fine until I actually try to call CVCalibrateCamera2. At this point, I get one of several errors depending on the number of images I use:

    If the number of images is equal to 4 (which is the number of points being drawn from each image):

      OpenCV Error: Sizes of input arguments do not match (Both matrices must have the same number of points) in unknown function, file ......\src\cv\cvfundam.cpp, line 870

    If the number of images is below 20:

      OpenCV Error: Bad argument (The total number of matrix elements is not divisible by the new number of rows) in unknown function, file ......\src\cxcore\cxarray.cpp, line 2749

    Otherwise, if the number of images is 20 or above:

      OpenCV Error: Unsupported format or combination of formats (Invalid matrix type) in unknown function, file ......\src\cxcore\cxarray.cpp, line 117

    I have checked the arguments for CVCalibrateCamera2 many times, and I am certain that they are of the correct dimensions relative to one another. It seems like somewhere the program is trying to reshape a matrix based on the number of images, but I can't figure out where or why. Any ideas? I am using Eclipse Galileo, MinGW 5.1.6, and OpenCV 2.1.

    Read the article

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move-related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potentially) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The NIO FileLock seemed to be what I needed (short of writing my own semaphore-type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create a lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? Seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that.

    Here is the code that fails:

      FileInputStream fin = new FileInputStream(f);
      FileLock fl = fin.getChannel().tryLock();
      if (fl != null) {
          System.out.println("Locked File");
          BufferedReader in = new BufferedReader(new InputStreamReader(fin));
          System.out.println(in.readLine());
          ...

    The exception is thrown on the FileLock line:

      java.nio.channels.NonWritableChannelException
        at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source)
        at java.nio.channels.FileChannel.tryLock(Unknown Source)
        at Mover.run(Mover.java:74)
        at java.lang.Thread.run(Unknown Source)

    Looking at the JavaDocs, it says this is an "Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing." But I don't necessarily need to write to it. When I try creating a FileOutputStream, etc. for writing purposes, it is happy until I try to open a FileInputStream on the same file.
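    A minimal sketch of the RandomAccessFile approach mentioned above: opening the file in "rw" mode yields a channel that is open for both reading and writing, so tryLock() no longer throws NonWritableChannelException, and a shared read lock can be requested explicitly. The file name here is hypothetical, and this is only a sketch, not the asker's final solution:

      import java.io.RandomAccessFile;
      import java.nio.channels.FileChannel;
      import java.nio.channels.FileLock;

      public class LockedReader {
          public static void main(String[] args) throws Exception {
              // "rw" opens the underlying channel for reading and writing,
              // which is required before any lock (even a read lock) is allowed.
              RandomAccessFile raf = new RandomAccessFile("moves.dat", "rw");
              FileChannel channel = raf.getChannel();

              // shared = true requests a read lock: other readers may hold it
              // concurrently, but writers are excluded until it is released.
              FileLock lock = channel.tryLock(0L, Long.MAX_VALUE, true);
              if (lock != null) {
                  try {
                      String line;
                      while ((line = raf.readLine()) != null) {
                          System.out.println(line);
                      }
                  } finally {
                      lock.release();  // always release before closing
                  }
              }
              raf.close();
          }
      }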

    Read the article

  • Twitter4J throws exception in TwitterFactory

    - by Philipp Andre
    Hello guys, I'm trying to access Twitter via OAuth. I registered my app, downloaded Twitter4J, added the jars to my Eclipse project, and then tried to execute the following code:

      Twitter twitter = new TwitterFactory().getOAuthAuthorizedInstance("[key]", "[secretKey]");
      RequestToken requestToken = twitter.getOAuthRequestToken();
      System.out.println(requestToken.getAuthorizationURL());

    But it raises the following exception:

      [Sat May 29 11:19:11 CEST 2010]Using class twitter4j.internal.logging.StdOutLoggerFactory as logging factory.
      [Sat May 29 11:19:11 CEST 2010]Use twitter4j.internal.http.alternative.HttpClientImpl as HttpClient implementation.
      Exception in thread "main" java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:71)
        at twitter4j.internal.http.HttpClientWrapper.<init>(HttpClientWrapper.java:59)
        at twitter4j.http.OAuthAuthorization.init(OAuthAuthorization.java:83)
        at twitter4j.http.OAuthAuthorization.<init>(OAuthAuthorization.java:74)
        at twitter4j.TwitterFactory.getOAuthAuthorizedInstance(TwitterFactory.java:112)
        at MainProgram.main(MainProgram.java:18)
      Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
        at java.lang.reflect.Constructor.newInstance(Unknown Source)
        at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:65)
        ... 5 more
      Caused by: java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient
        at twitter4j.internal.http.alternative.HttpClientImpl.<init>(HttpClientImpl.java:63)
        ... 10 more

    I currently can't figure out why... can you please give me some suggestions? Best regards, Philipp

    Read the article

  • Problem with Spring @Configuration class

    - by easyrider
    Hi, I use a class with the @Configuration annotation to configure my Spring application:

      @Configuration
      public class SpringConfiguration {

          @Value("${driver}")
          String driver;

          @Value("${url}")
          String url;

          @Value("${minIdle}")
          private int minIdle;

          // snipp ..

          @Bean(destroyMethod = "close")
          public DataSource dataSource() {
              DataSource dataSource = new DataSource();
              dataSource.setDriverClassName(driver);
              dataSource.setUrl(url);
              dataSource.setUsername(user);
              dataSource.setPassword(password);
              dataSource.setMinIdle(minIdle);
              return dataSource;
          }
      }

    and this properties file on the CLASSPATH:

      driver=org.postgresql.Driver
      url=jdbc:postgresql:servicerepodb
      minIdle=1

    I would like to get my configured DataSource object in my DAO class:

      ApplicationContext ctx = new AnnotationConfigApplicationContext(SpringConfiguration.class);
      DataSource dataSource = ctx.getBean(DataSource.class);

    But I get the error:

      org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springConfiguration': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private int de.hska.repo.configuration.SpringConfiguration.minIdle; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert value of type 'java.lang.String' to required type 'int'; nested exception is java.lang.NumberFormatException: For input string: "${minIdle}"
      Caused by: java.lang.NumberFormatException: For input string: "${minIdle}"
        at java.lang.NumberFormatException.forInputString(Unknown Source)
        at java.lang.Integer.parseInt(Unknown Source)
        at java.lang.Integer.valueOf(Unknown Source)

    It worked with the String properties (driver, url), but ${minIdle} (of type int) can't be resolved! Please help. Thanks in advance!
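    For context, ${...} placeholders inside @Value are only resolved when a placeholder configurer bean is registered in the context; otherwise the literal string "${minIdle}" reaches the int conversion, which is exactly the failure above. A minimal sketch, assuming Spring 3.1+ and a db.properties file on the classpath (both the file name and class name below are illustrative, not from the question):

      import org.springframework.beans.factory.annotation.Value;
      import org.springframework.context.annotation.Bean;
      import org.springframework.context.annotation.Configuration;
      import org.springframework.context.annotation.PropertySource;
      import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

      @Configuration
      @PropertySource("classpath:db.properties")  // illustrative file name
      public class PlaceholderConfig {

          // Must be static: this configurer has to run before @Value
          // injection happens on other beans, including this class itself.
          @Bean
          public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
              return new PropertySourcesPlaceholderConfigurer();
          }

          @Value("${minIdle}")
          private int minIdle;  // now converted from the property string to int
      }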

    Read the article

  • datanucleus enhancer & javaw: "the parameter is incorrect"

    - by Riley
    I'm on Windows XP using Eclipse and the DataNucleus enhancer for a GWT + GAE app. When I run the enhancer, I get an error:

      Error Thu Oct 21 16:33:57 CDT 2010
      Cannot run program "C:\Program Files\Java\jdk1.6.0_18\bin\javaw.exe" (in directory "C:\ag\dev"): CreateProcess error=87, The parameter is incorrect
      java.io.IOException: Cannot run program "C:\Program Files\Java\jdk1.6.0_18\bin\javaw.exe" (in directory "C:\ag\dev"): CreateProcess error=87, The parameter is incorrect
        at java.lang.ProcessBuilder.start(Unknown Source)
        at com.google.gdt.eclipse.core.ProcessUtilities.launchProcessAndActivateOnError(ProcessUtilities.java:213)
        at com.google.appengine.eclipse.core.orm.enhancement.EnhancerJob.runInWorkspace(EnhancerJob.java:154)
        at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
      Caused by: java.io.IOException: CreateProcess error=87, The parameter is incorrect
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.<init>(Unknown Source)
        at java.lang.ProcessImpl.start(Unknown Source)
        ... 5 more

    I've had this problem before, and it was due to a long classpath. I just spent an hour and a half shortening my classpath by moving libraries around and even moving my Eclipse install, but with no luck. Any ideas about where I should start to look for an answer? The error message doesn't include any information about what directory it's in or anything. It's kind of infuriating! Is it possible to make the output of javaw more verbose? Is it possible to get around this classpath-size bug?

    Read the article

  • Windows Azure ASP.NET MVC Role behaves strangely when redirecting from HTTP to HTTPS

    - by Rinat Abdullin
    Subj. I've got an ASP.NET 2 MVC worker role application that does not differ much from the default template. When attempting a redirect from HTTP to HTTPS (this happens when we access a controller secured by the usual RequireSSL attribute implementation) we get a blank page with a "Bad Request" message. IntelliTrace shows this:

      Thrown: "The file '/Views/Home/LogOnUserControl.aspx' does not exist." (System.Web.HttpException)

    The call stack is really short:

      [External Code]
      App_Web_vfahw7gz.dll!ASP.views_shared_site_master.__Render__control1(System.Web.UI.HtmlTextWriter __w = {unknown}, System.Web.UI.Control parameterContainer = {unknown})
      [External Code]
      App_Web_bsbqxr44.dll!ASP.views_home_index_aspx.ProcessRequest(System.Web.HttpContext context = {unknown})
      [External Code]

    The user control reference is the usual one in /Views/Shared/Site.Master:

      <div id="logindisplay">
          <% Html.RenderPartial("LogOnUserControl"); %>
      </div>

    And the partial view LogOnUserControl.ashx is located in Views/Shared (and it is ASHX, not ASPX). The problem shows up when we try to access site pages that require auth and redirect. These pages are secured by the RequireSSL attribute (Redirect == true):

      [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
      public sealed class RequireSslAttribute : FilterAttribute, IAuthorizationFilter
      {
          public bool Redirect { get; set; }

          // Methods
          public void OnAuthorization(AuthorizationContext filterContext)
          {
              // this gets messy, when we are running custom ports
              // within the local dev fabric.
              // hence we disable code in the debug
      #if !DEBUG
              if (filterContext == null)
              {
                  throw new ArgumentNullException("filterContext");
              }
              if (filterContext.HttpContext.Request.IsSecureConnection) return;
              var canRedirect = string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
              if (canRedirect && Redirect)
              {
                  var builder = new UriBuilder
                  {
                      Scheme = "https",
                      Host = filterContext.HttpContext.Request.Url.Host,
                      Path = filterContext.HttpContext.Request.RawUrl
                  };
                  filterContext.Result = new RedirectResult(builder.ToString());
              }
              else
              {
                  throw new HttpException(0x193, "Access forbidden. The requested resource requires an SSL connection.");
              }
      #endif
          }
      }

    Obviously we compile in RELEASE for this case. Does anybody have any idea what could cause this strange exception and how to get rid of it?

    Read the article

  • Generating All Permutations of Character Combinations when # of Arrays and Length of Each Array are Unknown

    - by Jay
    Hi everyone, I'm not sure how to ask my question in a succinct way, so I'll start with examples and expand from there. I am working with VBA, but I think this problem is language-agnostic and would only require a bright mind that can provide a pseudo-code framework. Thanks in advance for the help!

    Example: I have 3 character arrays like so:

      Arr_1 = [X,Y,Z]
      Arr_2 = [A,B]
      Arr_3 = [1,2,3,4]

    I would like to generate ALL possible permutations of the character arrays like so:

      XA1
      XA2
      XA3
      XA4
      XB1
      XB2
      XB3
      XB4
      YA1
      YA2
      ...
      ZB3
      ZB4

    This can be easily solved using 3 while loops or for loops. My question is: how do I solve for this if the number of arrays is unknown and the length of each array is unknown? So as an example with 4 character arrays:

      Arr_1 = [X,Y,Z]
      Arr_2 = [A,B]
      Arr_3 = [1,2,3,4]
      Arr_4 = [a,b]

    I would need to generate:

      XA1a
      XA1b
      XA2a
      XA2b
      XA3a
      XA3b
      XA4a
      XA4b
      ...
      ZB4a
      ZB4b

    So the generalized example would be:

      Arr_1 = [...]
      Arr_2 = [...]
      Arr_3 = [...]
      ...
      Arr_x = [...]

    Is there a way to structure a function that will generate an unknown number of loops and loop through the length of each array to generate the permutations? Or maybe there's a better way to think about the problem? Thanks everyone!
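    One language-neutral way to think about this: keep one index per array and treat the indices like the digits of an odometer, rolling them over from the last array toward the first. A minimal sketch of that idea, shown in Java with the asker's example arrays (the same loop structure translates directly to VBA):

      public class Cartesian {
          public static void main(String[] args) {
              char[][] arrays = {
                  {'X', 'Y', 'Z'},
                  {'A', 'B'},
                  {'1', '2', '3', '4'}
              };
              int n = arrays.length;
              int[] idx = new int[n];  // one counter per array, like odometer digits

              while (true) {
                  // emit the current combination
                  StringBuilder sb = new StringBuilder();
                  for (int i = 0; i < n; i++) {
                      sb.append(arrays[i][idx[i]]);
                  }
                  System.out.println(sb);

                  // advance the rightmost counter, carrying leftward on overflow
                  int pos = n - 1;
                  while (pos >= 0 && ++idx[pos] == arrays[pos].length) {
                      idx[pos] = 0;
                      pos--;
                  }
                  if (pos < 0) break;  // every counter rolled over: done
              }
          }
      }

    Because the number of counters is just arrays.length, the same loop handles any number of arrays of any length without nesting additional loops.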

    Read the article

  • CStdioFile Undeclared Identifier

    - by Eric Regnier
    I am unable to compile my code when using a CStdioFile type. I have included the afx.h header file but I am still receiving errors. What I am trying to do is write a CString that contains an entire XML document to file. This CString contains comments inside it, but when I use other objects such as wofstream to write it to file, these comments are stripped out for some reason. So that is why I am trying to write this to file using CStdioFile now. If anyone can help me out as to why I cannot compile my code using CStdioFile, or suggest any other way to write a CString that contains XML comments to file, please let me know! Below is my code using CStdioFile:

      CStdioFile xmlFile;
      xmlFile.Open( "c:\\test.txt", CFile::modeCreate | CFile::modeWrite | CFile::typeText );
      xmlFile.WriteString( m_sData );
      xmlFile.Close();

    And my errors:

      error C2065: 'CStdioFile' : undeclared identifier
      error C2065: 'modeCreate' : undeclared identifier
      error C2065: 'modeWrite' : undeclared identifier
      error C2065: 'typeText' : undeclared identifier
      error C2065: 'xmlFile' : undeclared identifier
      error C2146: syntax error : missing ';' before identifier 'xmlFile'
      error C2228: left of '.Close' must have class/struct/union type type is ''unknown-type''
      error C2228: left of '.Open' must have class/struct/union type type is ''unknown-type''
      error C2228: left of '.WriteString' must have class/struct/union type type is ''unknown-type''
      error C2653: 'CFile' : is not a class or namespace name
      error C2653: 'CFile' : is not a class or namespace name
      error C2653: 'CFile' : is not a class or namespace name
      error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
      error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
      error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup

    Read the article

  • JavaFX: JNLP file error

    - by jeff porter
    Hello everyone, I'm getting the following error when I upload a JavaFX app to a website, but I don't get it locally. I'm presuming that I'm missing something like the 'codebase' tag, but I'm not sure where it goes. Can anyone help me out, please?

    Java Console error:

      exception: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct..
      java.io.FileNotFoundException: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct.
        at sun.plugin2.applet.JNLP2Manager.loadJarFiles(Unknown Source)
        at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
      Exception: java.io.FileNotFoundException: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct.

    HTML file source:

      <html>
      <head>
          <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
          <title>app_one</title>
      </head>
      <body>
          <script src="http://dl.javafx.com/1.3/dtfx.js"></script>
          <script>
              javafx(
                  {
                      archive: "app_one.jar",
                      draggable: true,
                      width: 480,
                      height: 320,
                      code: "app.Main",
                      name: "app_one"
                  }
              );

    Read the article

  • Correct way of setting a custom FileInfo class to an Iterator

    - by Gordon
    I am trying to set a custom class on an Iterator through the setInfoClass method:

      Use this method to set a custom class which will be used when getFileInfo and getPathInfo are called. The class name passed to this method must be derived from SplFileInfo.

    My class is like this (simplified example):

      class MyFileInfo extends SplFileInfo
      {
          public $props = array(
              'foo' => '1',
              'bar' => '2'
          );
      }

    The iterator code is this:

      $rit = new RecursiveIteratorIterator(
          new RecursiveDirectoryIterator('/some/file/path/'),
          RecursiveIteratorIterator::SELF_FIRST);

    Since RecursiveDirectoryIterator is by inheritance through DirectoryIterator also an SplFileInfo object, it provides the setInfoClass method (it's not listed in the manual, but reflection shows it's there). Thus I can do:

      $rit->getInnerIterator()->setInfoClass('MyFileInfo');

    All good up to here, but when iterating over the directory with

      foreach ($rit as $file) {
          var_dump($file);
      }

    I get the following weird result:

      object(MyFileInfo)#4 (3) {
        ["props"]=>UNKNOWN:0
        ["pathName":"SplFileInfo":private]=>string(49) "/some/file/path/someFile.txt"
        ["fileName":"SplFileInfo":private]=>string(25) "someFile.txt"
      }

    So while MyFileInfo is picked up, I cannot access its properties. If I add custom methods, I can invoke them fine, but any properties are UNKNOWN. If I don't set the info class on the iterator, but on the SplFileInfo object (as shown in the example in the manual), it gives the same UNKNOWN result:

      foreach ($rit as $file) {
          // $file is a SplFileInfo instance
          $file->setInfoClass('MyFileInfo');
          var_dump($file->getFileInfo());
      }

    However, it works when I do

      foreach ($rit as $file) {
          $file = new MyFileInfo($file);
          var_dump($file);
      }

    Unfortunately, the code I want to use this in is somewhat more complicated and stacks some more iterators. Creating the MyFileInfo class like this is not an option. So, does anyone know how to get this working, or why PHP behaves this way? Thanks.

    Read the article

  • How to use Comparator in Java to sort

    - by Dan
    I learned how to use Comparable, but I'm having difficulty with Comparator. I am getting an error in my code:

      Exception in thread "main" java.lang.ClassCastException: New.People cannot be cast to java.lang.Comparable
        at java.util.Arrays.mergeSort(Unknown Source)
        at java.util.Arrays.sort(Unknown Source)
        at java.util.Collections.sort(Unknown Source)
        at New.TestPeople.main(TestPeople.java:18)

    Here is my code:

      import java.util.Comparator;

      public class People implements Comparator {
          private int id;
          private String info;
          private double price;

          public People(int newid, String newinfo, double newprice) {
              setid(newid);
              setinfo(newinfo);
              setprice(newprice);
          }

          public int getid() {
              return id;
          }

          public void setid(int id) {
              this.id = id;
          }

          public String getinfo() {
              return info;
          }

          public void setinfo(String info) {
              this.info = info;
          }

          public double getprice() {
              return price;
          }

          public void setprice(double price) {
              this.price = price;
          }

          public int compare(Object obj1, Object obj2) {
              Integer p1 = ((People) obj1).getid();
              Integer p2 = ((People) obj2).getid();
              if (p1 > p2) {
                  return 1;
              } else if (p1 < p2) {
                  return -1;
              } else
                  return 0;
          }
      }

      import java.util.ArrayList;
      import java.util.Collections;

      public class TestPeople {
          public static void main(String[] args) {
              ArrayList peps = new ArrayList();
              peps.add(new People(123, "M", 14.25));
              peps.add(new People(234, "M", 6.21));
              peps.add(new People(362, "F", 9.23));
              peps.add(new People(111, "M", 65.99));
              peps.add(new People(535, "F", 9.23));
              Collections.sort(peps);
              for (int i = 0; i < peps.size(); i++) {
                  System.out.println(peps.get(i));
              }
          }
      }

    I believe it has something to do with the casting in the compare method, but I was playing around with it and still could not find the solution.
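    For reference, the one-argument Collections.sort(List) casts each element to Comparable, which is exactly what the ClassCastException reports; a Comparator is instead passed explicitly via the two-argument overload. A minimal sketch, assuming the People class from the question:

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.Comparator;
      import java.util.List;

      public class TestPeopleSorted {
          public static void main(String[] args) {
              List<People> peps = new ArrayList<People>();
              peps.add(new People(123, "M", 14.25));
              peps.add(new People(234, "M", 6.21));
              peps.add(new People(362, "F", 9.23));

              // Pass the Comparator explicitly; the one-argument sort()
              // would try to cast each People to Comparable and fail.
              Collections.sort(peps, new Comparator<People>() {
                  public int compare(People a, People b) {
                      return a.getid() - b.getid();  // fine here: ids are small, no overflow
                  }
              });

              for (People p : peps) {
                  System.out.println(p.getid());
              }
          }
      }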

    Read the article

  • Using a MockContext inside a Java package, not an Android package

    - by jax
    I have moved most of my Android code into a separate Java package. I want to run some JUnit 4 tests, however I can't seem to get a MockContext working. I have extended MockContext() but have not done anything with it yet, as I don't know what needs to be done. At:

      private static MyMockContext context = new MyMockContext();

    I get

      java.lang.ExceptionInInitializerError
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
        at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
        at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
      Caused by: java.lang.RuntimeException: Stub!
        at android.content.Context.<init>(Context.java:4)
        at android.test.mock.MockContext.<init>(MockContext.java:5)
        at com.example.zulu.MyMockContext.<init>(MyMockContext.java:34)
        at com.example.zulu.RoomCoreImplTest.<clinit>(RoomCoreImplTest.java:15)
        ... 16 more
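    The "Stub!" RuntimeException indicates the test is running against the android.jar stub classes on a desktop JVM, where every framework method body (including the Context constructor) just throws. One common workaround, sketched below under the assumption that the library code only needs a small slice of Context functionality, is to hide that slice behind your own interface so plain JUnit tests never touch the Android classes at all (every name here is hypothetical):

      // A tiny seam: only the Context features the library code actually uses.
      interface AppEnvironment {
          String getSetting(String key);
      }

      // Production code depends on the seam, not on android.content.Context.
      class RoomCore {
          private final AppEnvironment env;

          RoomCore(AppEnvironment env) {
              this.env = env;
          }

          String describeRoom() {
              return "room:" + env.getSetting("room.name");
          }
      }

      // In a plain JUnit test, a hand-rolled fake replaces the Android context.
      class FakeEnvironment implements AppEnvironment {
          public String getSetting(String key) {
              return "test-value";
          }
      }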

    Read the article

  • Why does grails use hsqldb when I ask for mysql?

    - by John
    I'm following the racetrack example from Jason Rudolph's book at InfoQ, using grails-1.2.1. I got up to the part where I was to switch from hsqldb to mysql. I think I've deleted every reference to hsqldb in the DataSource.groovy file, but I get an exception and the stack trace shows it's still using hsqldb.

    DataSource.groovy:

      dataSource {
          boolean pooled = true
          String driverClassName = "com.mysql.jdbc.Driver"
          String url = "jdbc:mysql://localhost/dfpc2"
          String dbCreate = "create"
          String username = "dfpc2"
          String password = "dfpc2"
          dialect = org.hibernate.dialect.MySQL5InnoDBDialect
      }
      hibernate {
          cache.use_second_level_cache=true
          cache.use_query_cache=true
          cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider'
      }
      // environment specific settings
      environments {
          development {
          }
          test {
          }
          production {
          }
      }

    When I grails run-app, it all starts up with no errors. I can navigate to the home page. But when I click on one of the links, I get a stack trace:

      java.sql.SQLException: Table not found in statement [select this_.id as id0_0_, this_.version as version0_0_, this_.name as name0_0_, this_.variant as variant0_0_ from domainObject this_ limit ?]
        at org.hsqldb.jdbc.Util.throwError(Unknown Source)
        at org.hsqldb.jdbc.jdbcPreparedStatement.<init>(Unknown Source)
        at org.hsqldb.jdbc.jdbcConnection.prepareStatement(Unknown Source)
        at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy:13)
        at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy)
        at java.lang.Thread.run(Thread.java:619)

    My mysql database shows no tables created. (I don't think Groovy's connected to mysql yet.) Things I've checked:

      - mysql-connector-java-5.1.6.jar is in the lib directory.
      - I've tried grails clean.
      - I tried putting the dataSource info in the development environment (I haven't graduated to test or prod yet), but it seemed to make no difference. The stdout shows I'm using the development env.
      - I've googled for solutions, but the only solution I've found applies when people don't change the test or production environments.

    Read the article

  • Zend XML parsing

    - by Vincent
    I have an XML file called error.xml like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <errorList>
          <error>
              <code>0</code>
              <desc>Fault</desc>
              <ufmessage>Fault</ufmessage>
          </error>
          <error>
              <code>1</code>
              <desc>Unknown</desc>
              <ufmessage>Unknown</ufmessage>
          </error>
          <error>
              <code>2</code>
              <desc>Internal Error</desc>
              <ufmessage>Internal Error</ufmessage>
          </error>
      </errorList>

    I am using Zend and have the above XML file stored in a Zend Registry variable called "apperrors" like this:

      $apperrors = new Zend_Config_Xml('error.xml');
      Zend_Registry::set('apperrors', $apperrors->errorList);

    If my code throws an error code of 2, I want a snippet of code to check the error code against the "code" key in "apperrors" and echo the ufmessage value corresponding to it. How can I do this? Example:

      // Error thrown by the code. Hardcoded for now...
      $errorThrownByCode = 2;

      // Get the Zend Registry value for all errors
      $errorList = Zend_Registry::get('apperrors');

      // Write some code to compare $errorThrownByCode with $errorList->error->code
      // If found, echo $errorList->error->ufmessage; else echo "An unknown error occurred";

    Thanks

    Read the article

  • Issue selenium code maintenance

    - by Rajasankar
    I want to group the common methods in one file and reuse them. For example, logging in to a page using Selenium may be needed in multiple tests. I define that in class A and call it in class B. However, it throws a null pointer exception.

    Class A has:

      public void test_Login() throws Exception {
          try {
              selenium.setTimeout("60000");
              selenium.open("http://localhost");
              selenium.windowFocus();
              selenium.windowMaximize();
              selenium.windowFocus();
              selenium.type("userName", "admin");
              selenium.type("password", "admin");
              Result = selenium.isElementPresent("//input[@type='image']");
              selenium.click("//input[@type='image']");
              selenium.waitForPageToLoad(Timeout);
          } catch (Exception ex) {
              System.out.println(ex);
              ex.printStackTrace();
          }
      }

    along with all the other usual Java syntax. Class B has:

      public void test_kk() throws Exception {
          try {
              a.test_Login();
          } catch (Exception ex) {
              System.out.println(ex);
              ex.printStackTrace();
          }
      }

    again with all the usual syntax. When I execute class B, I get this error:

      java.lang.NullPointerException
        at A.test_Login(A.java:32)
        at B.test_kk(savefile.java:58)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at junit.framework.TestCase.runTest(TestCase.java:168)
        at junit.framework.TestCase.runBare(TestCase.java:134)
        at com.thoughtworks.selenium.SeleneseTestCase.runBare(SeleneseTestCase.java:212)
        at junit.framework.TestResult$1.protect(TestResult.java:110)
        at junit.framework.TestResult.runProtected(TestResult.java:128)
        at junit.framework.TestResult.run(TestResult.java:113)
        at junit.framework.TestCase.run(TestCase.java:124)
        at junit.framework.TestSuite.runTest(TestSuite.java:232)
        at junit.framework.TestSuite.run(TestSuite.java:227)
        at junit.textui.TestRunner.doRun(TestRunner.java:116)
        at junit.textui.TestRunner.doRun(TestRunner.java:109)
        at junit.textui.TestRunner.run(TestRunner.java:77)
        at B.main(B.java:77)

    I hope someone has tried this before. I may be missing something here.
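    An NPE at A.test_Login usually means the selenium field inside class A was never initialized: class A declared its own field rather than receiving the live session. A minimal sketch of one way to share a single session, assuming Selenium RC on localhost:4444 (the class names LoginHelper and LoginHelperDemo are hypothetical):

      import com.thoughtworks.selenium.DefaultSelenium;
      import com.thoughtworks.selenium.Selenium;

      // Shared helper: receives a live Selenium session instead of
      // declaring its own (possibly never-initialized) field.
      class LoginHelper {
          private final Selenium selenium;

          LoginHelper(Selenium selenium) {
              this.selenium = selenium;
          }

          void login(String user, String pass) {
              selenium.open("http://localhost");
              selenium.type("userName", user);
              selenium.type("password", pass);
              selenium.click("//input[@type='image']");
              selenium.waitForPageToLoad("60000");
          }
      }

      class LoginHelperDemo {
          public static void main(String[] args) {
              // Assumes a Selenium RC server is running on localhost:4444.
              Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
              selenium.start();
              new LoginHelper(selenium).login("admin", "admin");
              selenium.stop();
          }
      }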

    Read the article

  • implementing interface

    - by 1ace1
    Hello guys, so I have this assignment where I need to implement an interface to go over an ArrayList and sort it (ascending or descending). I don't want "the" answer, I just need some suggestions on my approach and on why I get this error:

      Exception in thread "main" java.lang.ClassCastException: Week7.Check cannot be cast to java.lang.Comparable
        at java.util.Arrays.mergeSort(Unknown Source)
        at java.util.Arrays.sort(Unknown Source)
        at java.util.Collections.sort(Unknown Source)
        at Week7.TestCheck.main(TestCheck.java:18)

    This is how I did it. comparable had one method called public int compairTo(Object o):

      public class Check implements comparable {
          private Integer checkNumber;

          public Check(Integer newCheckNumber) {
              setCheckNumber(newCheckNumber);
          }

          public String toString() {
              return getCheckNumber().toString();
          }

          public void setCheckNumber(Integer checkNumber) {
              this.checkNumber = checkNumber;
          }

          public Integer getCheckNumber() {
              return checkNumber;
          }

          @Override
          public int compairTo(Object o) {
              Check compair = (Check) o;
              int result = 0;
              if (this.getCheckNumber() > compair.getCheckNumber())
                  result = 1;
              else if (this.getCheckNumber() < compair.getCheckNumber())
                  result = -1;
              return result;
          }
      }

    In my main I had this:

      import java.util.ArrayList;
      import java.util.Collections;

      public class TestCheck {
          public static void main(String[] args) {
              ArrayList checkList = new ArrayList();
              checkList.add(new Check(445));
              checkList.add(new Check(101));
              checkList.add(new Check(110));
              checkList.add(new Check(553));
              checkList.add(new Check(123));
              Collections.sort(checkList);

              for (int i = 0; i < checkList.size(); i++) {
                  System.out.println(checkList.get(i));
              }
          }
      }

    Read the article

  • regular expression repeating subexpression

    - by Michael Z
    I have the following text:

      <pattern name="pattern1"/>
      <success>success case 1</success>
      <failed> failure 1</failed>
      <failed> failure 2</failed>
      <unknown> unknown </unknown>
      <pattern name="pattern2"/>
      <success>success case 2</success>
      <otherTag>There are many other tags.</otherTag>
      <failed> failure 3</failed>

    The regular expression <failed>[\w|\W]*?</failed> matches all the lines containing a failed tag. What do I need to do if I want to include the lines containing the pattern tag as well? Basically, I want the following output:

      <pattern name="pattern1"/>
      <failed> failure 1</failed>
      <failed> failure 2</failed>
      <pattern name="pattern2"/>
      <failed> failure 3</failed>

    I am doing this in JavaScript, and I do not mind doing some intermediate steps.
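    One way to capture both kinds of line in a single pass is to alternate the two patterns: one branch for the self-closing pattern tag and one for a complete failed element. A minimal sketch, shown in Java for concreteness (the same alternation works in JavaScript with the g flag):

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class TagGrep {
          public static void main(String[] args) {
              String text = "<pattern name=\"pattern1\"/>\n"
                          + "<success>success case 1</success>\n"
                          + "<failed> failure 1</failed>\n"
                          + "<failed> failure 2</failed>\n"
                          + "<pattern name=\"pattern2\"/>\n"
                          + "<failed> failure 3</failed>\n";

              // Either a self-closing <pattern .../> tag, or a complete
              // <failed>...</failed> element; DOTALL lets . span newlines.
              Pattern p = Pattern.compile("<pattern[^>]*/>|<failed>.*?</failed>", Pattern.DOTALL);
              Matcher m = p.matcher(text);
              while (m.find()) {
                  System.out.println(m.group());  // matches come out in document order
              }
          }
      }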

    Read the article

  • java.awt.Desktop.open doesn’t work with PDF files?

    - by Jason S
    It looks like I cannot use Desktop.open() on PDF files, regardless of location. Here's a small test program:

      package com.example.bugs;

      import java.awt.Desktop;
      import java.io.File;
      import java.io.IOException;

      public class DesktopOpenBug {
          static public void main(String[] args) {
              try {
                  Desktop desktop = null;
                  // Before more Desktop API is used, first check
                  // whether the API is supported by this particular
                  // virtual machine (VM) on this particular host.
                  if (Desktop.isDesktopSupported()) {
                      desktop = Desktop.getDesktop();
                      for (String path : args) {
                          File file = new File(path);
                          System.out.println("Opening " + file);
                          desktop.open(file);
                      }
                  }
              } catch (IOException e) {
                  e.printStackTrace();
              }
          }
      }

    If I run DesktopOpenBug with arguments c:\tmp\zz1.txt c:\tmp\zz.xml c:\tmp\ss.pdf (3 files I happen to have lying around) I get this result (the .txt and .xml files open up fine):

      Opening c:\tmp\zz1.txt
      Opening c:\tmp\zz.xml
      Opening c:\tmp\ss.pdf
      java.io.IOException: Failed to open file:/c:/tmp/ss.pdf. Error message: The parameter is incorrect.
        at sun.awt.windows.WDesktopPeer.ShellExecute(Unknown Source)
        at sun.awt.windows.WDesktopPeer.open(Unknown Source)
        at java.awt.Desktop.open(Unknown Source)
        at com.example.bugs.DesktopOpenBug.main(DesktopOpenBug.java:21)

    What the heck is going on? I'm running WinXP; I can type "c:\tmp\ss.pdf" at the command prompt and it opens up just fine.

    Edit: if this is an example of Sun Java bug #6764271, please help by voting for it. What a pain. :(
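    As a possible workaround on Windows while a bug like this is outstanding, the file can be handed to the shell directly instead of going through Desktop.open(); a sketch, assuming a Windows host with a PDF viewer registered for the extension (the path is illustrative):

      import java.io.IOException;

      public class ShellOpen {
          public static void main(String[] args) throws IOException {
              String path = "c:\\tmp\\ss.pdf";
              // "start" resolves the file through the shell's file associations;
              // the empty quoted string is start's window-title argument.
              new ProcessBuilder("cmd", "/c", "start", "\"\"", path).start();
          }
      }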

    Read the article

  • JPA Problems mapping relationships

    - by Rosen Martev
    Hello. I have a problem when I try to persist my model. An exception is thrown when creating the EntityManagerFactory:

      javax.persistence.PersistenceException: [PersistenceUnit: NIF] Unable to build EntityManagerFactory
        at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
        at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:126)
        at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:52)
        at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:34)
        at project.serealization.util.PersistentManager.createSession(PersistentManager.java:24)
        at project.serealization.SerializationTest.testProject(SerializationTest.java:25)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at junit.framework.TestCase.runTest(TestCase.java:168)
        at junit.framework.TestCase.runBare(TestCase.java:134)
        at junit.framework.TestResult$1.protect(TestResult.java:110)
        at junit.framework.TestResult.runProtected(TestResult.java:128)
        at junit.framework.TestResult.run(TestResult.java:113)
        at junit.framework.TestCase.run(TestCase.java:124)
        at junit.framework.TestSuite.runTest(TestSuite.java:232)
        at junit.framework.TestSuite.run(TestSuite.java:227)
        at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:79)
        at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
        at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
      Caused by: org.hibernate.HibernateException: Wrong column type in nif.action_element for column FLOW_ID. Found: double, expected: bigint
        at org.hibernate.mapping.Table.validateColumns(Table.java:284)
        at org.hibernate.cfg.Configuration.validateSchema(Configuration.java:1116)
        at org.hibernate.tool.hbm2ddl.SchemaValidator.validate(SchemaValidator.java:139)
        at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:349)
        at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327)
        at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:867)
        at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)
        ... 24 more

    The code for SimpleActionElement and SimpleFlow is as follows:

      @Entity
      public class SimpleActionElement {
          @OneToOne(cascade = CascadeType.ALL, targetEntity = SimpleFlow.class)
          @JoinColumn(name = "FLOW_ID")
          private SimpleFlow<T> flow;
          ...
      }

      @Entity
      public class SimpleFlow<T> {
          @Id
          @GeneratedValue(strategy = GenerationType.IDENTITY)
          @Column(name = "ELEMENT_ID")
          private Long element_id;
          ...
      }

    Read the article

  • htaccess rewriterule works in one virtualhost, but not a second virtualhost

    - by Casey Flynn
    I have two virtualhosts configured with XAMPP on Mac OS X Snow Leopard. Both use the following .htaccess file:

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /

      # Protect hidden files from being viewed
      <Files .*>
          Order Deny,Allow
          Deny From All
      </Files>

      #Removes access to the system folder by users.
      #Additionally this will allow you to create a System.php controller,
      #previously this would not have been possible.
      #'system' can be replaced if you have renamed your system folder.
      RewriteCond %{REQUEST_URI} ^system.*
      RewriteRule ^(.*)$ /index.php?/$1 [L]

      #When your application folder isn't in the system folder
      #This snippet prevents user access to the application folder
      #Submitted by: Fabdrol
      #Rename 'application' to your applications folder name.
      RewriteCond %{REQUEST_URI} ^application.*
      RewriteRule ^(.*)$ /index.php?/$1 [L]

      #Checks to see if the user is attempting to access a valid file,
      #such as an image or css document, if this isn't true it sends the
      #request to index.php
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php/$01 [L]

      # If we don't have mod_rewrite installed, all 404's
      # can be sent to index.php, and everything works as normal.
      # Submitted by: ElliotHaughin
      ErrorDocument 404 /index.php

    My goal is to eliminate /index.php/ from my URL strings. This .htaccess works perfectly for one project, but not for the other (project/vhost). This is my vhosts.conf:

      #
      # This is the main Apache HTTP server configuration file. It contains the
      # configuration directives that give the server its instructions.
      # See <URL:http://httpd.apache.org/docs/2.2> for detailed information.
      # In particular, see
      # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
      # for a discussion of each configuration directive.
      #
      # Do NOT simply read the instructions in here without understanding
      # what they do. They're here only as hints or reminders. If you are unsure
      # consult the online docs. You have been warned.
      #
      # Configuration and logfile names: If the filenames you specify for many
      # of the server's control files begin with "/" (or "drive:/" for Win32), the
      # server will use that explicit path. If the filenames do *not* begin
      # with "/", the value of ServerRoot is prepended -- so "logs/foo.log"
      # with ServerRoot set to "/Applications/xampp/xamppfiles" will be interpreted by the
      # server as "/Applications/xampp/xamppfiles/logs/foo.log".

      #
      # ServerRoot: The top of the directory tree under which the server's
      # configuration, error, and log files are kept.
      #
      # Do not add a slash at the end of the directory path. If you point
      # ServerRoot at a non-local disk, be sure to point the LockFile directive
      # at a local disk. If you wish to share the same ServerRoot for multiple
      # httpd daemons, you will need to change at least LockFile and PidFile.
      #
      ServerRoot "/Applications/XAMPP/xamppfiles"

      #
      # Listen: Allows you to bind Apache to specific IP addresses and/or
      # ports, instead of the default. See also the <VirtualHost>
      # directive.
      #
      # Change this to Listen on specific IP addresses as shown below to
      # prevent Apache from glomming onto all bound IP addresses.
      #
      #Listen 12.34.56.78:80
      Listen 80

      #
      # Dynamic Shared Object (DSO) Support
      #
      # To be able to use the functionality of a module which was built as a DSO you
      # have to place corresponding `LoadModule' lines at this location so the
      # directives contained in it are actually available _before_ they are used.
      # Statically compiled modules (those listed by `httpd -l') do not need
      # to be loaded here.
      #
      # Example:
      # LoadModule foo_module modules/mod_foo.so
      #
      LoadModule authn_file_module modules/mod_authn_file.so
      LoadModule authn_dbm_module modules/mod_authn_dbm.so
      LoadModule authn_anon_module modules/mod_authn_anon.so
      LoadModule authn_dbd_module modules/mod_authn_dbd.so
      LoadModule authn_default_module modules/mod_authn_default.so
      LoadModule authz_host_module modules/mod_authz_host.so
      LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
      LoadModule authz_user_module modules/mod_authz_user.so
      LoadModule authz_dbm_module modules/mod_authz_dbm.so
      LoadModule authz_owner_module modules/mod_authz_owner.so
      LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
      LoadModule authz_default_module modules/mod_authz_default.so
      LoadModule auth_basic_module modules/mod_auth_basic.so
      LoadModule auth_digest_module modules/mod_auth_digest.so
      LoadModule file_cache_module modules/mod_file_cache.so
      LoadModule cache_module modules/mod_cache.so
      LoadModule disk_cache_module modules/mod_disk_cache.so
      LoadModule mem_cache_module modules/mod_mem_cache.so
      LoadModule dbd_module modules/mod_dbd.so
      LoadModule bucketeer_module modules/mod_bucketeer.so
      LoadModule dumpio_module modules/mod_dumpio.so
      LoadModule echo_module modules/mod_echo.so
      LoadModule case_filter_module modules/mod_case_filter.so
      LoadModule case_filter_in_module modules/mod_case_filter_in.so
      LoadModule ext_filter_module modules/mod_ext_filter.so
      LoadModule include_module modules/mod_include.so
      LoadModule filter_module modules/mod_filter.so
      LoadModule charset_lite_module modules/mod_charset_lite.so
      LoadModule deflate_module modules/mod_deflate.so
      LoadModule ldap_module modules/mod_ldap.so
      LoadModule log_config_module modules/mod_log_config.so
      LoadModule logio_module modules/mod_logio.so
      LoadModule env_module modules/mod_env.so
      LoadModule mime_magic_module modules/mod_mime_magic.so
      LoadModule cern_meta_module modules/mod_cern_meta.so
      LoadModule expires_module modules/mod_expires.so
      LoadModule headers_module modules/mod_headers.so
      LoadModule ident_module modules/mod_ident.so
      LoadModule usertrack_module modules/mod_usertrack.so
      LoadModule unique_id_module modules/mod_unique_id.so
      LoadModule setenvif_module modules/mod_setenvif.so
      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_connect_module modules/mod_proxy_connect.so
      LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
      LoadModule proxy_http_module modules/mod_proxy_http.so
      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
      LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
      LoadModule mime_module modules/mod_mime.so
      LoadModule dav_module modules/mod_dav.so
      LoadModule status_module modules/mod_status.so
      LoadModule autoindex_module modules/mod_autoindex.so
      LoadModule asis_module modules/mod_asis.so
      LoadModule info_module modules/mod_info.so
      LoadModule suexec_module modules/mod_suexec.so
      LoadModule cgi_module modules/mod_cgi.so
      LoadModule cgid_module modules/mod_cgid.so
      LoadModule dav_fs_module modules/mod_dav_fs.so
      LoadModule vhost_alias_module modules/mod_vhost_alias.so
      LoadModule negotiation_module modules/mod_negotiation.so
      LoadModule dir_module modules/mod_dir.so
      LoadModule imagemap_module modules/mod_imagemap.so
      LoadModule actions_module modules/mod_actions.so
      LoadModule speling_module modules/mod_speling.so
      LoadModule userdir_module modules/mod_userdir.so
      LoadModule alias_module modules/mod_alias.so
      LoadModule rewrite_module modules/mod_rewrite.so
      #LoadModule apreq_module modules/mod_apreq2.so
      LoadModule ssl_module modules/mod_ssl.so

      <IfDefine JUSTTOMAKEAPXSHAPPY>
      LoadModule php4_module modules/libphp4.so
      LoadModule php5_module modules/libphp5.so
      </IfDefine>

      <IfModule !mpm_winnt_module>
      <IfModule !mpm_netware_module>
      #
      # If you wish httpd to run as a different user or group, you must run
      # httpd as root initially and it will switch.
      #
      # User/Group: The name (or #number) of the user/group to run httpd as.
      # It is usually good practice to create a dedicated user and group for
      # running httpd, as with most system services.
      #
      User nobody
      Group nogroup
      </IfModule>
      </IfModule>

      # 'Main' server configuration
      #
      # The directives in this section set up the values used by the 'main'
      # server, which responds to any requests that aren't handled by a
      # <VirtualHost> definition. These values also provide defaults for
      # any <VirtualHost> containers you may define later in the file.
      #
      # All of these directives may appear inside <VirtualHost> containers,
      # in which case these default settings will be overridden for the
      # virtual host being defined.
      #

      #
      # ServerAdmin: Your address, where problems with the server should be
      # e-mailed. This address appears on some server-generated pages, such
      # as error documents. e.g. [email protected]
      #
      ServerAdmin [email protected]

      #
      # ServerName gives the name and port that the server uses to identify itself.
      # This can often be determined automatically, but we recommend you specify
      # it explicitly to prevent problems during startup.
      #
      # If your host doesn't have a registered DNS name, enter its IP address here.
      #
      #ServerName www.example.com:80
      # XAMPP
      ServerName localhost

      #
      # DocumentRoot: The directory out of which you will serve your
      # documents. By default, all requests are taken from this directory, but
      # symbolic links and aliases may be used to point to other locations.
      #
      DocumentRoot "/Users/caseyflynn/Documents/workspace/vibecompass"

      #
      # Each directory to which Apache has access can be configured with respect
      # to which services and features are allowed and/or disabled in that
      # directory (and its subdirectories).
      #
      # First, we configure the "default" to be a very restrictive set of
      # features.
      #
      <Directory />
          Options FollowSymLinks
          AllowOverride None
          #XAMPP
          #Order deny,allow
          #Deny from all
      </Directory>

      #
      # Note that from this point forward you must specifically allow
      # particular features to be enabled - so if something's not working as
      # you might expect, make sure that you have specifically enabled it
      # below.
      #

      #
      # This should be changed to whatever you set DocumentRoot to.
      #
      <Directory "/Users/caseyflynn/Documents/workspace/vibecompass">
          #
          # Possible values for the Options directive are "None", "All",
          # or any combination of:
          #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
          #
          # Note that "MultiViews" must be named *explicitly* --- "Options All"
          # doesn't give it to you.
          #
          # The Options directive is both complicated and important. Please see
          # http://httpd.apache.org/docs/2.2/mod/core.html#options
          # for more information.
          #
          Options Indexes FollowSymLinks ExecCGI Includes

          #
          # AllowOverride controls what directives may be placed in .htaccess files.
          # It can be "All", "None", or any combination of the keywords:
          #   Options FileInfo AuthConfig Limit
          #
          AllowOverride All

          #
          # Controls who can get stuff from this server.
          #
          Order allow,deny
          Allow from all
      </Directory>

      #
      # DirectoryIndex: sets the file that Apache will serve if a directory
      # is requested.
      #
      <IfModule dir_module>
          DirectoryIndex index.html index.php index.htmls index.htm
      </IfModule>

      #
      # The following lines prevent .htaccess and .htpasswd files from being
      # viewed by Web clients.
      #
      <FilesMatch "^\.ht">
          Order allow,deny
          Deny from all
      </FilesMatch>

      #
      # ErrorLog: The location of the error log file.
      # If you do not specify an ErrorLog directive within a <VirtualHost>
      # container, error messages relating to that virtual host will be
      # logged here. If you *do* define an error logfile for a <VirtualHost>
      # container, that host's errors will be logged there and not here.
      #
      ErrorLog logs/error_log

      #
      # LogLevel: Control the number of messages logged to the error_log.
      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      #
      LogLevel warn

      <IfModule log_config_module>
          #
          # The following directives define some format nicknames for use with
          # a CustomLog directive (see below).
          #
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
          LogFormat "%h %l %u %t \"%r\" %>s %b" common

          <IfModule logio_module>
              # You need to enable mod_logio.c to use %I and %O
              LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
          </IfModule>

          #
          # The location and format of the access logfile (Common Logfile Format).
          # If you do not define any access logfiles within a <VirtualHost>
          # container, they will be logged here. Contrariwise, if you *do*
          # define per-<VirtualHost> access logfiles, transactions will be
          # logged therein and *not* in this file.
          #
          CustomLog logs/access_log common

          #
          # If you prefer a logfile with access, agent, and referer information
          # (Combined Logfile Format) you can use the following directive.
          #
          #CustomLog logs/access_log combined
      </IfModule>

      <IfModule alias_module>
          #
          # Redirect: Allows you to tell clients about documents that used to
          # exist in your server's namespace, but do not anymore. The client
          # will make a new request for the document at its new location.
          # Example:
          # Redirect permanent /foo http://www.example.com/bar

          #
          # Alias: Maps web paths into filesystem paths and is used to
          # access content that does not live under the DocumentRoot.
          # Example:
          # Alias /webpath /full/filesystem/path
          #
          # If you include a trailing / on /webpath then the server will
          # require it to be present in the URL. You will also likely
          # need to provide a <Directory> section to allow access to
          # the filesystem path.

          #
          # ScriptAlias: This controls which directories contain server scripts.
          # ScriptAliases are essentially the same as Aliases, except that
          # documents in the target directory are treated as applications and
          # run by the server when requested rather than as documents sent to the
          # client. The same rules about trailing "/" apply to ScriptAlias
          # directives as to Alias.
          #
          ScriptAlias /cgi-bin/ "/Applications/XAMPP/xamppfiles/cgi-bin/"
      </IfModule>

      <IfModule cgid_module>
          #
          # ScriptSock: On threaded servers, designate the path to the UNIX
          # socket used to communicate with the CGI daemon of mod_cgid.
          #
          #Scriptsock logs/cgisock
      </IfModule>

      #
      # "/Applications/xampp/xamppfiles/cgi-bin" should be changed to whatever your ScriptAliased
      # CGI directory exists, if you have that configured.
      #
      <Directory "/Applications/XAMPP/xamppfiles/phpmyadmin">
          AllowOverride None
          Options None
          Order allow,deny
          Allow from all
      </Directory>

      #
      # DefaultType: the default MIME type the server will use for a document
      # if it cannot otherwise determine one, such as from filename extensions.
      # If your server contains mostly text or HTML documents, "text/plain" is
      # a good value. If most of your content is binary, such as applications
      # or images, you may want to use "application/octet-stream" instead to
      # keep browsers from trying to display binary files as though they are
      # text.
      #
      DefaultType text/plain

      <IfModule mime_module>
          #
          # TypesConfig points to the file containing the list of mappings from
          # filename extension to MIME-type.
          #
          TypesConfig etc/mime.types

          #
          # AddType allows you to add to or override the MIME configuration
          # file specified in TypesConfig for specific file types.
          #
          #AddType application/x-gzip .tgz
          #
          # AddEncoding allows you to have certain browsers uncompress
          # information on the fly. Note: Not all browsers support this.
          #
          #AddEncoding x-compress .Z
          #AddEncoding x-gzip .gz .tgz
          #
          # If the AddEncoding directives above are commented-out, then you
          # probably should define those extensions to indicate media types:
          #
          AddType application/x-compress .Z
          AddType application/x-gzip .gz .tgz

          #
          # AddHandler allows you to map certain file extensions to "handlers":
          # actions unrelated to filetype. These can be either built into the server
          # or added with the Action directive (see below)
          #
          # To use CGI scripts outside of ScriptAliased directories:
          # (You will also need to add "ExecCGI" to the "Options" directive.)
          #
          #AddHandler cgi-script .cgi
          AddHandler cgi-script .cgi .pl

          # For files that include their own HTTP headers:
          #AddHandler send-as-is asis

          # For server-parsed imagemap files:
          #AddHandler imap-file map

          # For type maps (negotiated resources):
          #AddHandler type-map var

          #
          # Filters allow you to process content before it is sent to the client.
          #
          # To parse .shtml files for server-side includes (SSI):
          # (You will also need to add "Includes" to the "Options" directive.)
          #
          AddType text/html .shtml
          AddOutputFilter INCLUDES .shtml
      </IfModule>

      #
      # The mod_mime_magic module allows the server to use various hints from the
      # contents of the file itself to determine its type. The MIMEMagicFile
      # directive tells the module where the hint definitions are located.
      #
      #MIMEMagicFile etc/magic

      #
      # Customizable error responses come in three flavors:
      # 1) plain text 2) local redirects 3) external redirects
      #
      # Some examples:
      #ErrorDocument 500 "The server made a boo boo."
      #ErrorDocument 404 /missing.html
      #ErrorDocument 404 "/cgi-bin/missing_handler.pl"
      #ErrorDocument 402 http://www.example.com/subscription_info.html
      #

      #
      # EnableMMAP and EnableSendfile: On systems that support it,
      # memory-mapping or the sendfile syscall is used to deliver
      # files. This usually improves server performance, but must
      # be turned off when serving from networked-mounted
      # filesystems or if support for these functions is otherwise
      # broken on your system.
      #
      EnableMMAP off
      EnableSendfile off

      # Supplemental configuration
      #
      # The configuration files in the /Applications/xampp/etc/extra/ directory can be
      # included to add extra features or to modify the default configuration of
      # the server, or you may simply copy their contents here and change as
      # necessary.

      # Server-pool management (MPM specific)
      #Include /Applications/XAMPP/etc/extra/httpd-mpm.conf

      # Multi-language error messages
      Include /Applications/XAMPP/etc/extra/httpd-multilang-errordoc.conf

      # Fancy directory listings
      #Include /Applications/XAMPP/etc/extra/httpd-autoindex.conf

      # Language settings
      #Include /Applications/XAMPP/etc/extra/httpd-languages.conf

      # User home directories
      Include /Applications/XAMPP/etc/extra/httpd-userdir.conf

      # Real-time info on requests and configuration
      #Include /Applications/XAMPP/etc/extra/httpd-info.conf

      # Virtual hosts
      Include /Applications/XAMPP/etc/extra/httpd-vhosts.conf

      # Local access to the Apache HTTP Server Manual
      #Include /Applications/XAMPP/etc/extra/httpd-manual.conf

      # Distributed authoring and versioning (WebDAV)
      #Include /Applications/XAMPP/etc/extra/httpd-dav.conf

      # Various default settings
      #Include /Applications/XAMPP/etc/extra/httpd-default.conf

      # Secure (SSL/TLS) connections
      Include /Applications/XAMPP/etc/extra/httpd-ssl.conf
      <IfModule ssl_module>
      <IfDefine SSL>
      Include etc/extra/httpd-ssl.conf
      </IfDefine>
      </IfModule>

      #
      # Note: The following must must be present to support
      #       starting without SSL on platforms with no /dev/random equivalent
      #       but a statically compiled-in mod_ssl.
      #
      <IfModule ssl_module>
      SSLRandomSeed startup builtin
      SSLRandomSeed connect builtin
      </IfModule>

      #XAMPP
      Include etc/extra/httpd-xampp.conf

    Any idea what might be the root of this?

    ANSWER: I had to add this to my httpd.conf file:

      <Directory /Users/caseyflynn/Documents/workspace/cobar>
          Options FollowSymLinks
          AllowOverride all
          #XAMPP
          Order deny,allow
          Allow from all
      </Directory>

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset.

Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment.

The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text Indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, and in this specific example I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about configuring Solr indexing, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to the file to make it conform to Solr, and using a simple HTTP POST to send it to the search engine for indexing (a rough sketch of that final step follows below).
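As an illustration of that indexing step, here is a minimal C# sketch that posts an already-transformed catalog file to Solr's update handler and commits it. The Solr URL and the catalog-solr.xml file name are assumptions for illustration; in the scenario described above, the export and XSLT transform would already have produced that file.

using System;
using System.IO;
using System.Net;
using System.Text;

class SolrCatalogIndexer
{
    static void Main()
    {
        // Assumed Solr endpoint - adjust host/core to your environment
        const string solrUpdateUrl = "http://localhost:8983/solr/update";

        // catalog-solr.xml is the catalog export after the XSLT transform
        // has mapped it to Solr's <add><doc>...</doc></add> format
        string payload = File.ReadAllText("catalog-solr.xml");

        using (var client = new WebClient())
        {
            client.Encoding = Encoding.UTF8;
            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";

            // Post the documents, then issue a commit so they become searchable
            client.UploadString(solrUpdateUrl, payload);
            client.UploadString(solrUpdateUrl, "<commit/>");
        }

        Console.WriteLine("Catalog posted to Solr and committed.");
    }
}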
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules.

So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your class will then look something like the skeleton sketched just after the search criteria example below.

As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache, allowing us to store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third-party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki, as they have some great explanations of how the API works.

What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within it detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

[DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
public class CommerceCatalogSolrSearch : CommerceSearchCriteria
{
    private Dictionary<string, string> _facetQueries;

    public CommerceCatalogSolrSearch()
    {
        _facetQueries = new Dictionary<String, String>();
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _facetQueries; }
        set { _facetQueries = value; }
    }

    public String SearchPhrase { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public IEnumerable<String> Facets { get; set; }

    public string Sort { get; set; }

    public new int FirstItemIndex
    {
        get { return (PageIndex - 1) * PageSize; }
    }

    public int LastItemIndex
    {
        get { return FirstItemIndex + PageSize; }
    }
}
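The screenshot that originally illustrated the component class is not reproduced here, so below is a rough C# skeleton of what it describes. Note that this is an assumption-heavy sketch: the exact ExecuteQuery signature (here guessed from the three arguments named in the text) and the base class member names should be verified against the Commerce Server 2009 SDK.

using Microsoft.Commerce.Broker;
using Microsoft.Commerce.Contracts.Messages;
using Microsoft.Commerce.Providers.Components;

// Skeleton only - the signature below is inferred from the three arguments
// described in the text; verify it against the SDK metadata.
public class SolrCatalogSearchComponent : OperationSequenceComponent
{
    public override void ExecuteQuery(CommerceQueryOperation queryOperation,
                                      OperationCacheDictionary operationCache,
                                      CommerceQueryOperationResponse response)
    {
        // 1. Extract the custom search criteria from the incoming operation
        //    (the TryGetSearchCriteria helper shown a little further on).
        // 2. Call Solr via SolrNet using those parameters (sketched below).
        // 3. Translate the Solr documents into CommerceEntities and add them
        //    to response.CommerceEntities via the translators described later.
    }
}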
To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder; it is simply used to construct an instance of the search criteria you have just created and expose the properties you want set. My message builder looks like this:

public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
{
    private CommerceCatalogSolrSearch _solrSearch;

    public CommerceCatalogSolrSearchBuilder()
    {
        _solrSearch = new CommerceCatalogSolrSearch();
    }

    public String SearchPhrase
    {
        get { return _solrSearch.SearchPhrase; }
        set { _solrSearch.SearchPhrase = value; }
    }

    public int PageIndex
    {
        get { return _solrSearch.PageIndex; }
        set { _solrSearch.PageIndex = value; }
    }

    public int PageSize
    {
        get { return _solrSearch.PageSize; }
        set { _solrSearch.PageSize = value; }
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _solrSearch.FacetQueries; }
        set { _solrSearch.FacetQueries = value; }
    }

    public String[] Facets
    {
        get { return _solrSearch.Facets.ToArray(); }
        set { _solrSearch.Facets = value; }
    }

    public override CommerceSearchCriteria ToSearchCriteria()
    {
        return _solrSearch;
    }
}

Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the search criteria you have just created, e.g.

public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
{
    var searchCriteria = operation as CommerceQueryOperation;
    if (searchCriteria == null)
        throw new Exception("No search criteria present");

    // 'as' cast (rather than a direct cast) so the null check below is meaningful
    var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch;
    if (local == null)
        throw new Exception("Unexpected Search Criteria in Operation");

    return local;
}

Now that you have all of your search parameters present, you can go off and call the external search platform API.
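As a rough illustration of what that call might look like with SolrNet, here is a minimal sketch. Treat it as a shape rather than drop-in code: the SolrNet API has changed across versions (in particular, where facet queries hang off QueryOptions), Startup.Init is assumed to have been called once at application start-up, and the SearchProduct document class and the key:value facet mapping are assumptions.

using System.Collections.Generic;
using Microsoft.Practices.ServiceLocation;
using SolrNet;
using SolrNet.Commands.Parameters;

public class SolrSearchGateway
{
    // SearchProduct is assumed to be a POCO mapped to the Solr schema via
    // SolrNet attributes (ProductId, Title, ...).
    public ISolrQueryResults<SearchProduct> Search(CommerceCatalogSolrSearch criteria)
    {
        // Assumes Startup.Init<SearchProduct>("http://localhost:8983/solr")
        // ran once at application start-up.
        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

        // Turn the criteria's key/value pairs into Solr fq clauses (illustrative mapping)
        var facetQueries = new List<ISolrFacetQuery>();
        foreach (KeyValuePair<string, string> fq in criteria.FacetQueries)
            facetQueries.Add(new SolrFacetQuery(new SolrQuery(fq.Key + ":" + fq.Value)));

        return solr.Query(new SolrQuery(criteria.SearchPhrase), new QueryOptions
        {
            Start = criteria.FirstItemIndex, // paging offset
            Rows = criteria.PageSize,        // page size
            FacetQueries = facetQueries      // on newer SolrNet this lives under Facet.Queries
        });
    }
}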
You will of course get proprietary objects returned, so the next step in the process is to convert the results coming back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface you will get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file.

Once complete, your translator will look something like the following:

public class SolrEntityTranslator : IToCommerceEntityTranslator
{
    #region IToCommerceEntityTranslator Members

    public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
    {
        if (source.GetType().Equals(typeof(SearchProduct)))
        {
            var searchResult = (SearchProduct) source;

            destinationCommerceEntity.Id = searchResult.ProductId;
            destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
            destinationCommerceEntity.ModelName = "Product";
        }
    }

    #endregion
}

Once you have a translator in place you can safely map the results of your search platform into CommerceEntities and attach them to the CommerceResponse object in a fashion similar to this:

foreach (SearchProduct result in matchingProducts)
{
    var destinationEntity = new CommerceEntity(_returnModelName);

    Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
    response.CommerceEntities.Add(destinationEntity);
}

In Solr I actually have two objects being returned – a product and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the translator helper class separately.

When all of this is pieced together you have successfully completed the extensibility point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into CommerceEntities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next, put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component. You will also need to add your translators to the Translators node of your Channel.Config, and lastly add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file.

Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first).

Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity back:

public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
{
    var products = new List<Product>();
    var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

    query.SearchCriteria.PageIndex = recordIndex;
    query.SearchCriteria.PageSize = recordsPerPage;
    query.SearchCriteria.SearchPhrase = searchPhrase;
    query.SearchCriteria.FacetQueries = facetQueries;

    totalItemCount = 0;
    CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
    var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;
    // No results: return the empty pair (the null check also guards the 'as' cast above)
    if (queryResponse == null || queryResponse.CommerceEntities.Count == 0)
        return new KeyValuePair<FacetCollection, List<Product>>();

    totalItemCount = (int)queryResponse.TotalItemCount;

    // Prepare a multi-operation to retrieve the product variants
    var multiOperation = new CommerceMultiOperation();

    // Add products to results
    foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
    {
        var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
        productQuery.SearchCriteria.Model.Id = product.Id;
        productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

        var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
        productQuery.RelatedOperations.Add(variantQuery);

        multiOperation.Add(productQuery);
    }

    CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
    foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
    {
        if (queryOpResponse.CommerceEntities.Count() > 0)
            products.Add(queryOpResponse.CommerceEntities[0]);
    }

    // Get the facet collection
    FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

    return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
}

..And that is it – simply a few classes and some configuration allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler, and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test, so let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over points to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results

This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts

While load testing the application with an excessive load for a longer duration of time, I managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem.

Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large heap and the small heap: anything greater than 85 KB in size will be allocated to the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0 and the long-living objects eventually move to Gen-2 as garbage collection goes through.

As you can see in the picture below, all objects under 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are cleared out, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more… What is really a case of crisis is when objects live just long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, write a simple application that chucks a lot of data into cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
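The generation mechanics described above are easy to verify yourself. Here is a small, self-contained C# console sketch using only the standard GC APIs: a small object is promoted through the generations as it survives collections, while a large allocation lands straight on the large object heap (which GC.GetGeneration reports as the highest generation).

using System;

class GcGenerationsDemo
{
    static void Main()
    {
        var small = new byte[1024];                 // small object: starts life in Gen-0
        Console.WriteLine(GC.GetGeneration(small)); // 0

        GC.Collect();                               // survives one collection -> promoted
        Console.WriteLine(GC.GetGeneration(small)); // 1

        GC.Collect();                               // survives another -> promoted again
        Console.WriteLine(GC.GetGeneration(small)); // 2

        // Anything over ~85 KB goes straight to the large object heap,
        // which is reported as the highest generation
        var large = new byte[100 * 1024];
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}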
How can you address this? Well, there are two ways of addressing it:

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET Framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application as 'Any CPU', the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in to narrow down to the line of code that is the root cause of the problem.

Well, now that we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product, let's turn to caching.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request it should keep the data in the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let's look at some code (trust me, caching code is not very intuitive; a sketch follows below). I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as soon as a user comes in and finds that the cache is empty, the user locks and starts to populate the cache; no more concurrency issues.

But let's say you are processing 10 requests per second. By the time I have taken the lock to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, will get data from the cache. BUT if we added another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment: you don't want another developer to come in and think, "What a fresher, why is he checking the cache key for null twice?!"

The downside of caching is that you are storing the data outside of the database, and that data could be wrong, because updates applied to the database make the data cached at the web server go out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system or your system is scaled out across a farm of web servers.

The perfect solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains; you would reduce hits to the database for searching by 90%.

Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% performance benefit but every once in a while annoys a customer because the fare has expired.
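The code the original screenshots showed is not reproduced here, so below is a minimal C# sketch of the pattern being described: a micro-cached search with a lock and the second null check, assuming ASP.NET's HttpRuntime.Cache. The cache key format, the one-minute window and the LoadFromDatabase helper are illustrative assumptions.

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class SearchCache
{
    private static readonly object CacheLock = new object();

    public static DataTable GetResults(string searchTerm)
    {
        string cacheKey = "search:" + searchTerm;   // illustrative key format

        var results = HttpRuntime.Cache[cacheKey] as DataTable;
        if (results != null)
            return results;                          // served straight from cache

        lock (CacheLock)
        {
            // Second null check: requests queued behind the first loader
            // find the cache populated and skip the database trip entirely.
            results = HttpRuntime.Cache[cacheKey] as DataTable;
            if (results != null)
                return results;

            results = LoadFromDatabase(searchTerm);  // hypothetical data access call

            // Micro-cache: a one-minute absolute expiry bounds how stale the data can get
            HttpRuntime.Cache.Insert(cacheKey, results, null,
                DateTime.Now.AddMinutes(1), Cache.NoSlidingExpiration);

            return results;
        }
    }

    private static DataTable LoadFromDatabase(string searchTerm)
    {
        // Placeholder for the real query against the search tables
        return new DataTable();
    }
}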
But the trade-off works in the favour of these sites, as they are still able to process 30+ page requests per second and cater to the site traffic, while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique. And what are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I'm a pot stirrer… a rabble rouser *rabble rabble*. jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi, and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE "it depends". But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let's see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now, before reading the rest of the blog, which is great! Iron sharpens iron… these discussions hopefully open us up to understanding the entire process better and to thinking about things in a different way.

You should not be writing jQuery in SharePoint if you are not a developer…

Let's start off with my most polarizing and rant-filled portion of the blog post. If you don't know what you are doing, or you don't have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don't know what you are doing you can do some NASTY things! One of the best stories I've heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a denial-of-service attack and they couldn't figure out what was going on! After much searching they found that some genius jQuery developer had written some code for an image rotator but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done!

Now, I'm NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don't know what you are doing, there are very REAL consequences that can have a major impact on your organization, AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.

Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and "runs faster" (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won't deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer you just aren't going to take advantage of all of that and use it correctly.

Ahhh… but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject who will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to "jQuery Fundamentals". I just glanced through it and this may be a great primer for you aspiring jQuery devs.
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

I've said it once and I'll say it over and over until you understand. jQuery is executed on the client's computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can't give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don't use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user's computer will be. You just can't guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience:

Avoid screen scraping – This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

Fine tune your DOM searches – A lot of time can be eaten up just searching the DOM and ignoring table rows that you don't need. Write better jQuery to only loop through the table rows that you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for instead of looping through all the DOM over and over again.

Write better jQuery – Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also applies when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days, before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It's a lost art that really needs to be used here.

You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

Developer: "Awesome… you can easily call SharePoint's web services to retrieve and write data using SPServices!"

Administrator: "Crap! You can easily call SharePoint's web services to retrieve and write data using SPServices!"

SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers.
These problems, thankfully, are not difficult to rectify if you are careful:

Limit list data retrieved – Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields. You should definitely be doing this regardless. If you aren't, I hope your admin thumps you upside the head.

Batch large list updates – You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don't know if there is a magic number for best performance; it really depends on how much data you are sending back more than on the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

Delay web service calls when possible – One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don't load the data until it's needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?

You should not be writing jQuery in SharePoint if there is a better solution…

jQuery is NOT the silver bullet in SharePoint; it is not the answer to every question; it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

When you can't get away from performance problems – Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio?

When you need to do something that jQuery can't do – There are lots of things you can't do in jQuery, like elevate privileges, event handlers, workflows, or interacting with back-end systems that have no web service interface. It just can't do everything.

When it can be done faster and more efficiently another way – Why are you spending time writing jQuery to do what a DataViewWebPart would do in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don't have the option, okay. BUT if you do have the option, don't reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

You should not be using jQuery in SharePoint if you are a moron…

Let's finish up the blog on a high note… Yes, it's true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So… don't be that guy! Another good buddy of mine who works for Microsoft told me, "I loved jQuery in SharePoint… until I had to support it." He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH!
What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once?! It's one thing to write that kind of code and accept that it's just going to take a while; it's COMPLETELY another issue to do that and then complain when it's not lightning fast! Someone's gene pool needs some chlorine.

So, I think this is a nice summary of the blog… DON'T be that guy… don't be a moron. How can you stop yourself from being a moron? Ah… glad you asked. Here are some tips:

Think – Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

Search – What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

Plan – Write good jQuery. Limit calculations and data sent over the wire, and don't reinvent the wheel when possible.

Test – Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well.

Ask the experts – As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren't doing something stupid.

Repeat – So, when you think you have the best solution possible, repeat the steps above just to be safe.

Conclusion

jQuery is an awesome tool and has come in handy on many occasions. I'm even teaching a half-day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it's only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let's face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

< Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >