Search Results

Search found 6281 results on 252 pages for 'automated tests'.


  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory. I can build the application with the previous framework release and everything works fine. While debugging I noticed that the ContextFinder (javax.xml.bind) behaves differently in the two builds. Previous version (everything works just fine): none of the configuration locations brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFoundException): a resource "META-INF/services/javax.xml.bind.JAXBContext" is loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions: What sort of resource is that? Inside our EAR there are two WARs, and neither contains a services folder in its META-INF directory. Where else could that value come from? A file diff showed me no new or changed properties files. Needless to say, I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks!

    EDIT (according to input/questions from the comments): "Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties?" Indeed (I am a bit surprised), the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file, which shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory. I think this has nothing to do with the current issue, since the JAR stayed exactly the same in both EARs (the one that runs and the one with the exception). "It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation." There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. It searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places indicates which context factory to use, a default factory is loaded, hard-coded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory";

    Now back to my problem. Building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine; while debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors, so I can deploy), ONLY A TEST CASE fails while the rest of the functionality works (I can send requests to our web service and they don't trigger any errors); while debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext". Here are the most important lines showing how this resource leads to the attempt to load com.ibm.xml.xlxp2.jaxb.JAXBContextFactory, which then throws the ClassNotFoundException (simplified source of the mentioned javax.xml.bind.ContextFinder class):

        URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext");
        BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8"));
        String factoryClassName = r.readLine().trim();

    The field factoryClassName now has the value com.ibm.xml.xlxp2.jaxb.JAXBContextFactory. Because this has become a super large question I will also add a bounty. :)
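    For anyone hunting the same resource: the ContextFinder reads it through the class loader's service-entry lookup, so enumerating the resource shows exactly which archive contributes it. A minimal sketch (the helper class and how you run it are illustrative, not from the original post):

        import java.net.URL;
        import java.util.Enumeration;

        // Lists every classpath entry that publishes a JAXB context factory
        // through the META-INF/services mechanism.
        public class FindJaxbFactoryDeclarations {
            public static void main(String[] args) throws Exception {
                Enumeration<URL> urls = Thread.currentThread().getContextClassLoader()
                        .getResources("META-INF/services/javax.xml.bind.JAXBContext");
                while (urls.hasMoreElements()) {
                    // Each URL names the JAR (or directory) declaring a factory, e.g.
                    // jar:file:/.../some-runtime.jar!/META-INF/services/javax.xml.bind.JAXBContext
                    System.out.println(urls.nextElement());
                }
            }
        }

    Run inside the server (for instance from the failing test), this would reveal whether the entry nominating com.ibm.xml.xlxp2.jaxb.JAXBContextFactory comes from a WebSphere runtime JAR rather than from a file inside the EAR, which would explain why neither WAR contains a services folder.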

    Read the article

  • MSVC Compiler options with mojo-native in Maven

    - by graham.reeds
    I'm trying to set up a test environment with Maven to build VC++, and I am way behind on this. I have three source files that build a DLL (once I get this fixed it should be a simple matter to add the unit tests): hook.cpp, hook.h, hook.def. This is compiled on the command line with the following:

        C:\Develop\hook\src\main\msvc>cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"

    which produces the expected obj, dll, lib and exp files. So now to get this working in Maven 2 with the mojo-native plugin. With no options, Maven with mojo gives me this (truncated) output:

        [INFO] [native:initialize {execution: default-initialize}]
        [INFO] [native:unzipinc {execution: default-unzipinc}]
        [INFO] [native:javah {execution: default-javah}]
        [INFO] [native:compile {execution: default-compile}]
        [INFO] cmd.exe /X /C "cl -IC:\Develop\hook\src\main\msvc /FoC:\Develop\hook\target\objs\Hook.obj -c C:\Develop\hook\src\main\msvc\Hook.cpp"
        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86
        Copyright (C) Microsoft Corporation. All rights reserved.
        Hook.cpp
        [INFO] [native:link {execution: default-link}]
        [INFO] cmd.exe /X /C "link.exe /out:C:\Develop\hook\target\hook.dll target\objs\Hook.obj"
        Microsoft (R) Incremental Linker Version 9.00.30729.01
        Copyright (C) Microsoft Corporation. All rights reserved.
        LINK : fatal error LNK1561: entry point must be defined
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error executing command line. Exit code: 1561

    mojo-native exposes options for manipulating the compiler/linker settings but gives no example of their usage. No matter what I tweak in these settings I get this error:

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to configure plugin parameters for: org.codehaus.mojo:native-maven-plugin:1.0-alpha-4
        (found static expression: '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"' which may act as a default value).
        Cause: Cannot assign configuration entry 'compilerStartOptions' to 'interface java.util.List' from
        '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"', which is of type class java.lang.String

    The relevant part of my pom.xml looks like this:

        <configuration>
          <sources>
            <source>
              <directory>src/main/msvc</directory>
              <includes>
                <include>**/*.cpp</include>
              </includes>
            </source>
          </sources>
          <compilerProvider>msvc</compilerProvider>
          <compilerExecutable>cl</compilerExecutable>
          <!-- cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def" -->
          <compilerStartOptions>-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"</compilerStartOptions>
          <!--
          <compilerMiddleOptions></compilerMiddleOptions>
          <compilerEndOptions></compilerEndOptions>
          <linkerStartOptions></linkerStartOptions>
          <linkerMiddleOptions></linkerMiddleOptions>
          <linkerEndOptions></linkerEndOptions>
          -->
        </configuration>

    How do I manipulate the compiler to produce a DLL like the command-line version does? Or should I give up and just use exec?
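    The last error message points at the likely fix: compilerStartOptions (and the linker counterparts) are declared as java.util.List, so the options cannot be passed as one string. A hedged sketch of the list form (the singular child-element names follow the usual Maven convention for list parameters; the source file and the /Fo output should not be repeated here, since the plugin supplies those itself):

        <compilerStartOptions>
          <compilerStartOption>/D HOOK_DLLEXPORT</compilerStartOption>
        </compilerStartOptions>
        <linkerStartOptions>
          <linkerStartOption>/DLL</linkerStartOption>
          <linkerStartOption>/DEF:"Hook.def"</linkerStartOption>
        </linkerStartOptions>

    Moving /DLL and the .def file into the linker options rather than the compiler options should also address LNK1561, since the bare link.exe invocation in the log never sees /DLL and therefore expects an entry point.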

    Read the article

  • Random Access Violation Exception in WPF Application

    - by PT1984
    Hi, I am facing weird problem while running regression tests on my WPF Application. I am getting AccessViolationException with different stacktraces each time. First : Message :Attempted to read or write protected memory. This is often an indication that other memory is corrupt. StackTrace : at MS.Win32.PresentationCore.UnsafeNativeMethods.MILUnknown.Release(IntPtr pIUnkown) at MS.Win32.PresentationCore.UnsafeNativeMethods.MILUnknown.ReleaseInterface(IntPtr& ptr) at System.Windows.Media.SafeMILHandle.ReleaseHandle() at System.Runtime.InteropServices.SafeHandle.InternalFinalize() at System.Runtime.InteropServices.SafeHandle.Dispose(Boolean disposing) at System.Runtime.InteropServices.SafeHandle.Finalize() Source :PresentationCore Type : System.AccessViolationException. Second : Message :Attempted to read or write protected memory. This is often an indication that other memory is corrupt. StackTrace : at MS.Win32.PresentationCore.UnsafeNativeMethods.IMILBitmapEffect.GetOutput(SafeHandle THIS_PTR, UInt32 uiIndex, SafeMILHandle pContext, BitmapSourceSafeMILHandle& ppBitmapSource) at System.Windows.Media.Effects.BitmapEffect.GetOutput(SafeHandle unmanagedEffect, Int32 index, BitmapEffectRenderContext context) at System.Windows.Media.Effects.BitmapEffect.GetOutput(BitmapEffectInput input) at System.Windows.Media.Effects.BitmapEffectState.GetEffectOutput(Visual visual, RenderTargetBitmap& renderBitmap, Matrix worldTransform, Rect windowClip, Matrix& finalTransform) at System.Windows.Media.Effects.BitmapEffectVisualState.RenderBitmapEffect(Visual visual, Channel channel) at System.Windows.Media.Effects.BitmapEffectContent.ExecuteRealizationsUpdate() at System.Windows.Media.RealizationContext.RealizationUpdateSchedule.Execute() at System.Windows.Media.MediaContext.Render(ICompositionTarget resizedCompositionTarget) at System.Windows.Media.MediaContext.RenderMessageHandlerCore(Object resizedCompositionTarget) at System.Windows.Media.MediaContext.AnimatedRenderMessageHandler(Object resizedCompositionTarget) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean 
isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter) at System.Windows.Threading.Dispatcher.Invoke(DispatcherPriority priority, Delegate method, Object arg) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at Main() Source :PresentationCore Type :System.AccessViolationException In Application Event Log I found following entries : Dispatcher processing has been suspended, but messages are still being processed. Faulting application **.exe, version 1.0.0.*, stamp 4c08d288, faulting module wpfgfx_v0300.dll, version 3.0.6920.1427, stamp 488f3056, debug? 0, fault address 0x0012ec36. My Application uses Dispatcher from another thread, to change the values of the controls , enable - disable those, change visibility etc., the thred is run multiple times in a second. Please let me know if anybody has faced this problem? Thanks in advance, -Prasad
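    Two hedged leads, given that the faulting module is wpfgfx and both traces pass through BitmapEffect and SafeMILHandle finalization: first, the legacy software-rendered BitmapEffect pipeline was a known source of rendering crashes and was deprecated in favor of the Effect classes in .NET 3.5 SP1, so replacing any bitmap effects is a commonly suggested workaround; second, every cross-thread control update must go through the control's own dispatcher. An illustrative C# pattern (statusLabel is a placeholder name, not from the post):

        void UpdateStatus(string text)
        {
            if (!statusLabel.Dispatcher.CheckAccess())
            {
                // Queue the change on the UI thread; BeginInvoke keeps the
                // frequently-run worker thread from blocking on the render queue.
                statusLabel.Dispatcher.BeginInvoke(
                    System.Windows.Threading.DispatcherPriority.Background,
                    new Action(() => statusLabel.Content = text));
                return;
            }
            statusLabel.Content = text;
        }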

    Read the article

  • Access, ADO & 64-bit

    - by JTeagle
    We have a large codebase that uses ADO under 32-bit, and we need to convert the code to 64-bit. We were using the Jet provider, but I know this is not supported under x64. We're importing definitions from msado15.dll. As of a while ago a 64-bit version of this DLL became available, but we are unable to get it to work. I have written a test program as follows (MFC, using the #imported DLL): map<CString, CString> mapResults ; _ConnectionPtr pConn = NULL ; CString strConn = _T("Provider=Microsoft.ACE.OLEDB.14.0;") _T("Data Source=c:\\program files\\our_company\\our_database.mdb;"); // (Above string only split for readability here.) CString strSQL = _T("SELECT * FROM [our_table] ORDER BY [our_field_1];"); try { pConn.CreateInstance(__uuidof(Connection) ); pConn->Open(_bstr_t(strConn), _bstr_t(_T("") ), _bstr_t(_T("") ), -1); _CommandPtr pCommand = NULL; pCommand.CreateInstance(__uuidof(Command) ); pCommand->CommandType = adCmdText ; pCommand->ActiveConnection = pConn ; pCommand->CommandText = _bstr_t(strSQL); _RecordsetPtr pRS = NULL ; pRS.CreateInstance(__uuidof(Recordset) ); pRS->CursorLocation = adUseClient ; pRS = pCommand->Execute(NULL, NULL, adCmdText); while (pRS->adoEOF != VARIANT_TRUE) { CString strField = (LPCTSTR)(_bstr_t)pRS->Fields->GetItem( (_bstr_t)_T("our_field_1") )->Value ; CString strValue = (LPCTSTR)(_bstr_t)pRS->Fields->GetItem( (_bstr_t)_T("our_field_2") )->Value ; mapResults[strField] = strValue ; pRS->MoveNext(); } } catch(_com_error &e) { CString strError ; strError.Format(_T("Error %08x: %s"),(int)e.Error(), e.ErrorMessage() ); mapResults[_T("COM error") ] = strError ; } Basically, the code will list the table if it succeeds, or list the COM error obtained if it fails. Obviously, we tested the code under 32 bit and got the desired results. On the 64-bit machine, the code explicitly imports from the known 64-bit version of msado15.dll (v6.1.7600.nnn). The machine has had the Office Data Providers (AccessDatabaseEngine_x64.exe) applied to get the new ACE drivers (ACEODBC.DLL, v14.nnn.nnn.nnn). If I look at Data Source under Administrator Tools (I know ODBC isn't the same as ADO, it was just to confirm the DLL was installed correctly), it shows the expected DLL. I can even confirm, using Process Explorer, that the version of msado15.dll it loads at run time (thus confirming that COM is finding the ADO dll) is the 64-bit version. I believe we have MDAC 2.8 installed (we have msado28.tlb in the same place as msado15.dll, but that might have been installed by AccessDatabaseEngine_x64.exe). The test machine is Windows 7 Ultimate, 64-bit. The test code was recompiled on that machine using VS2008 for x64 in full Release and run externally. And yet, we still get COM error 0x800a0e7a (Provider not found). Is there anything any one can suggest as to why this isn't working, or what further tests / checks I can perform to verify that I have all the right stuff on the machine (and thus, that it should work)? I know that ODBC will work under x64 (tried the test program using that) but rewriting our code base for ODBC would be undesirable!
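    One detail worth double-checking before anything else: the AccessDatabaseEngine_x64.exe redistributable for Office 2010 registers its OLE DB provider as Microsoft.ACE.OLEDB.12.0, and as far as I know no 14.0 provider version was ever shipped; naming an unregistered provider produces exactly 0x800a0e7a. A sketch of the corrected string (everything else unchanged from the post; the 12.0 version is the assumption to verify):

        // Assumption: the installed x64 ACE provider is version 12.0, not 14.0.
        CString strConn = _T("Provider=Microsoft.ACE.OLEDB.12.0;")
                          _T("Data Source=c:\\program files\\our_company\\our_database.mdb;");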

    Read the article

  • Regular expression to remove one parameter from query string

    - by Kip
    I'm looking for a regular expression to remove a single parameter from a query string, and I want to do it in a single regular expression if possible. Say I want to remove the foo parameter. Right now I use this: /&?foo\=[^&]+/ That works as long as foo is not the first parameter in the query string. If it is, then my new query string starts with an ampersand. (For example, "foo=123&bar=456" gives a result of "&bar=456".) Right now, I'm just checking after the regex if the query string starts with ampersand, and chopping it off if it does. Example edge cases: Input | Output -------------------------+----------------- foo=123 | (empty string) foo=123&bar=456 | bar=456 bar=456&foo=123 | bar=456 abc=789&foo=123&bar=456 | abc=789&bar=456 Edit OK as pointed out in comments there are there are way more edge cases than I originally considered. I got the following regex to work with all of them: /&foo(\=[^&]*)?(?=&|$)|^foo(\=[^&]*)?(&|$)/ This is modified from Mark Byers's answer, which is why I'm accepting that one, but Roger Pate's input helped a lot too. Here is the full suite of test cases I'm using, and a Perl script which tests them. Input | Output -------------------------+------------------- foo | foo&bar=456 | bar=456 bar=456&foo | bar=456 abc=789&foo&bar=456 | abc=789&bar=456 foo= | foo=&bar=456 | bar=456 bar=456&foo= | bar=456 abc=789&foo=&bar=456 | abc=789&bar=456 foo=123 | foo=123&bar=456 | bar=456 bar=456&foo=123 | bar=456 abc=789&foo=123&bar=456 | abc=789&bar=456 xfoo | xfoo xfoo&bar=456 | xfoo&bar=456 bar=456&xfoo | bar=456&xfoo abc=789&xfoo&bar=456 | abc=789&xfoo&bar=456 xfoo= | xfoo= xfoo=&bar=456 | xfoo=&bar=456 bar=456&xfoo= | bar=456&xfoo= abc=789&xfoo=&bar=456 | abc=789&xfoo=&bar=456 xfoo=123 | xfoo=123 xfoo=123&bar=456 | xfoo=123&bar=456 bar=456&xfoo=123 | bar=456&xfoo=123 abc=789&xfoo=123&bar=456 | abc=789&xfoo=123&bar=456 foox | foox foox&bar=456 | foox&bar=456 bar=456&foox | bar=456&foox abc=789&foox&bar=456 | abc=789&foox&bar=456 foox= | foox= foox=&bar=456 | foox=&bar=456 bar=456&foox= | bar=456&foox= abc=789&foox=&bar=456 | abc=789&foox=&bar=456 foox=123 | foox=123 foox=123&bar=456 | foox=123&bar=456 bar=456&foox=123 | bar=456&foox=123 abc=789&foox=123&bar=456 | abc=789&foox=123&bar=456 Test script (Perl) @in = ('foo' , 'foo&bar=456' , 'bar=456&foo' , 'abc=789&foo&bar=456' ,'foo=' , 'foo=&bar=456' , 'bar=456&foo=' , 'abc=789&foo=&bar=456' ,'foo=123' , 'foo=123&bar=456' , 'bar=456&foo=123' , 'abc=789&foo=123&bar=456' ,'xfoo' , 'xfoo&bar=456' , 'bar=456&xfoo' , 'abc=789&xfoo&bar=456' ,'xfoo=' , 'xfoo=&bar=456' , 'bar=456&xfoo=' , 'abc=789&xfoo=&bar=456' ,'xfoo=123', 'xfoo=123&bar=456', 'bar=456&xfoo=123', 'abc=789&xfoo=123&bar=456' ,'foox' , 'foox&bar=456' , 'bar=456&foox' , 'abc=789&foox&bar=456' ,'foox=' , 'foox=&bar=456' , 'bar=456&foox=' , 'abc=789&foox=&bar=456' ,'foox=123', 'foox=123&bar=456', 'bar=456&foox=123', 'abc=789&foox=123&bar=456' ); @exp = ('' , 'bar=456' , 'bar=456' , 'abc=789&bar=456' ,'' , 'bar=456' , 'bar=456' , 'abc=789&bar=456' ,'' , 'bar=456' , 'bar=456' , 'abc=789&bar=456' ,'xfoo' , 'xfoo&bar=456' , 'bar=456&xfoo' , 'abc=789&xfoo&bar=456' ,'xfoo=' , 'xfoo=&bar=456' , 'bar=456&xfoo=' , 'abc=789&xfoo=&bar=456' ,'xfoo=123', 'xfoo=123&bar=456', 'bar=456&xfoo=123', 'abc=789&xfoo=123&bar=456' ,'foox' , 'foox&bar=456' , 'bar=456&foox' , 'abc=789&foox&bar=456' ,'foox=' , 'foox=&bar=456' , 'bar=456&foox=' , 'abc=789&foox=&bar=456' ,'foox=123', 'foox=123&bar=456', 'bar=456&foox=123', 'abc=789&foox=123&bar=456' ); print "Succ | Input | Output | Expected 
\n"; print "-----+--------------------------+--------------------------+-------------------------\n"; for($i=0; $i <= $#in; $i++) { $out = $in[$i]; $out =~ s/_PUT_REGEX_HERE_//; $succ = ($out eq $exp[$i] ? 'PASS' : 'FAIL'); #if($succ eq 'FAIL') #{ printf("%s | %- 24s | %- 24s | %- 24s\n", $succ, $in[$i], $out, $exp[$i]); #} }
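    For completeness, here is the accepted pattern dropped into the harness's substitution slot (Perl, replacing the _PUT_REGEX_HERE_ placeholder):

        $out =~ s/&foo(\=[^&]*)?(?=&|$)|^foo(\=[^&]*)?(&|$)//;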

    Read the article

  • Getting a JAX-WS client to work on WebLogic 9.2 with Ant

    - by michuk
    I've recently had lots of issues trying to deploy a JAX-WS web service client on WebLogic 9.2. It turns out there is no straightforward guide on how to achieve this, so I decided to put together this short wiki entry hoping it might be useful for others. Firstly, WebLogic 9.2 does not support web services using JAX-WS in general. It comes with old versions of XML-related Java libraries that are incompatible with the latest JAX-WS (similar issues occur with Axis2; only Axis1 seems to work flawlessly with WebLogic 9.x, but that's a very old and unsupported library). So, in order to get it working, some hacking is required. This is how I did it (note that we're using Ant in our legacy corporate project; you probably should be using Maven, which should eliminate 50% of the steps below):

    1. Download the most recent JAX-WS distribution from https://jax-ws.dev.java.net/ (the exact version I got was JAXWS2.2-20091203.zip).
    2. Place the JAX-WS JARs with their dependencies in a separate folder like lib/webservices.
    3. Create a patternset in Ant to reference those JARs.
    4. Include the patternset in your WAR-related goal, with flatten="true". This is important, as WebLogic 9.x is by default not smart enough to access JARs located anywhere other than WEB-INF/lib inside your WAR file.
    5. In case of clashes, WebLogic uses its own JARs by default. We want it to use the JAX-WS JARs from our application instead. This is achieved by preparing a weblogic-application.xml file and placing it in the META-INF folder of the deployed EAR file. It should list the application packages to prefer: javax.jws.*, javax.xml.bind.*, javax.xml.crypto.*, javax.xml.registry.*, javax.xml.rpc.*, javax.xml.soap.*, javax.xml.stream.*, javax.xml.ws.* and com.sun.xml.api.streaming.*.
    6. Remember to place that weblogic-application.xml file in your EAR! The Ant goal for that may look similar to:

        <jar destfile="${warfile}" basedir="${wardir}"/>
        <ear destfile="${earfile}" appxml="resources/${app.name}/application.xml">
          <fileset dir="${dist}" includes="${app.name}.war"/>
          <metainf dir="resources/META-INF"/>
        </ear>

    7. You also need to tell WebLogic to prefer your WEB-INF classes to those in the distribution. You do that by setting <prefer-web-inf-classes>true</prefer-web-inf-classes> (inside <container-descriptor>) in your WEB-INF/weblogic.xml file. And that's it for the WebLogic-related configuration.
    8. Now set up your JAX-WS goal: one that simply generates the web service stubs and classes based on a locally deployed WSDL file and places them in a folder in your app. Remember the keep="true" parameter. Without it, wsimport generates the classes and... deletes them, believe it or not!
    9. For mocking a web service I suggest SoapUI, an open-source project. Very easy to deploy, and crucial for web services integration testing.
    10. We're almost there. The final thing is to write a Java class for testing the web service: try to run it as a standalone app first (or as part of your unit tests), and then try to run the same code from within WebLogic. It should work. It worked for me. After some 3 days of frustration. And yes, I know I should've put 9 and 10 under a single bullet point, but the title "10 steps to deploy a JAX-WS web service under WebLogic 9.2 using Ant" sounds just so much better. Please edit this post and improve it if you find something missing!
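    The Ant snippets for steps 3 and 8 were lost in formatting; a hedged reconstruction using the standard JAX-WS Ant task (paths, file names and target names are illustrative):

        <taskdef name="wsimport" classname="com.sun.tools.ws.ant.WsImport">
          <classpath>
            <fileset dir="lib/webservices" includes="*.jar"/>
          </classpath>
        </taskdef>

        <target name="generate-ws-client">
          <!-- keep="true" is the crucial bit: without it wsimport deletes the generated classes -->
          <wsimport wsdl="etc/MyService.wsdl" destdir="build/classes"
                    sourcedestdir="src/generated" keep="true"/>
        </target>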

    Read the article

  • DojoX grid having problems with ContentPane

    - by ice
    the grid appears properly on template's first loading. But when you click the paging button to load flooders.php thru list_result1() only the paging buttons will appear. I already tested the flooders.php outside the template and it works properly. what seems to be the problem? and what are the tools that i can use to see if the javascript is loading properly because i think the error console of ff browser which i use to track errors won't give you that much info when you are working with contentpane. thanks! ice note: below are the codes... ** from contentpane js function list_result1(){ args=""; uri = "flooders.php" + args; dojo.xhrGet( { url: uri, handleAs: "text", timeout: 500, // Time in milliseconds load: function(response, ioArgs) { //alert(response); dojo.byId("flooders_table").innerHTML = response; return response; }, // The ERROR function will be called in an error case. error: function(response, ioArgs) { console.error("HTTP status code: ", ioArgs.xhr.status); return response; } }); //end of dojo.xhrGet } **flooders.php starts here*** @import "js/dojo-0.9.0/dojo/resources/dojo.css"; @import "js/dojo-0.9.0/dijit/themes/tundra/tundra.css"; @import "js/dojo-0.9.0/dijit/themes/tundra/tundra_rtl.css"; @import "css/ash.css"; @import "js/dojo-0.9.0/dojox/grid/resources/Grid.css"; @import "js/dojo-0.9.0/dojox/grid/resources/tundraGrid.css"; @import "js/dojo-0.9.0/dojo/resources/dojo.css"; @import "js/dojo-0.9.0/dijit/tests/css/dijitTests.css"; .dojoxGridRowEditing td { background-color: #F4FFF4; } .dojoxGrid input, .dojoxGrid select, .dojoxGrid textarea { margin: 0; padding: 0; border-style: none; width: 100%; font-size: 100%; font-family: inherit; } .dojoxGrid input { } .dojoxGrid select { } .dojoxGrid textarea { } #controls { padding: 0px 0; } #controls button { margin-left: 10px; } .myGrid { width: 550px; height: 230px; margin-left: 20px; /* border: 1px solid silver; */ } echo " // it has script heading here (function(){ // some sample data // global var 'data' data = { identifier: 'id', label: 'id', items: [] }; data_list = [ $banlist ]; var rows = $listnum ; var x=1; for(var i=0, l=data_list.length; i // global var 'test_store' test_store = new dojo.data.ItemFileWriteStore({data: data}); })(); // it has ending here "; ?   -- here's the javascript dojo.require("dijit.TitlePane"); dojo.require("dijit.dijit"); dojo.require("dojox.grid.DataGrid"); dojo.require("dojo.data.ItemFileWriteStore"); dojo.require("dojo.parser"); // scan page for widgets and instantiate them dojo.require("dijit.layout.LayoutContainer"); dojo.require("dijit.layout.AccordionContainer"); dojo.require("dijit.layout.ContentPane"); dojo.require("dijit.layout.TabContainer"); dojo.require("dijit.Editor"); dojo.require("dijit._editor.plugins.AlwaysShowToolbar"); dojo.require("dijit._editor.plugins.LinkDialog"); //this must be inlcuded below function() selectCell = { styles: 'text-align: center;', type: dojox.grid.cells.Select }; gridLayout = { defaultCell: { width: 5, styles: 'text-align: right;' }, rows: [ [ { name: 'Mark', width: 3, field: 'col1', editable: true, styles: 'text-align: center;', type: dojox.grid.cells.Bool }, { name: 'Id', width: 3, field: 'id' , editable: false }, { name: 'Username', field: 'col2', editable: false, styles: '', width: '70%' }, { name: 'Reason', field: 'col3', editable: false , styles: '', width: '100%' }, { name: 'Date Banned', field: 'col4', editable: false , styles: '', width: '70%' } ] ] };
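    The post's markup was partly eaten by the formatter, but the symptom (the grid renders on the first page load, not after xhrGet swaps in flooders.php) usually means the fetched fragment is assigned via innerHTML and never parsed, so no widgets or inline scripts run. A hedged sketch of the load callback (dojo 0.9-era API, ids as in the post):

        load: function(response, ioArgs) {
            var node = dojo.byId("flooders_table");
            node.innerHTML = response;
            // innerHTML neither executes <script> blocks nor instantiates
            // dojoType widgets; re-run the parser over the inserted fragment.
            dojo.parser.parse(node);
            return response;
        },

    If the fragment also carries inline scripts, dijit.layout.ContentPane ignores them; dojox.layout.ContentPane with executeScripts="true" is the usual workaround. As for tooling, Firebug's Net/XHR panel shows exactly what flooders.php returned, which is more informative here than the error console.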

    Read the article

  • How to write a test for an accounts controller with forms authentication

    - by Anil Ali
    Trying to figure out how to adequately test my accounts controller. I am having problem testing the successful logon scenario. Issue 1) Am I missing any other tests.(I am testing the model validation attributes separately) Issue 2) Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() and Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() test are not passing. I have narrowed it down to the line where i am calling _membership.validateuser(). Even though during my mock setup of the service i am stating that i want to return true whenever validateuser is called, the method call returns false. Here is what I have gotten so far AccountController.cs [HandleError] public class AccountController : Controller { private IMembershipService _membershipService; public AccountController() : this(null) { } public AccountController(IMembershipService membershipService) { _membershipService = membershipService ?? new AccountMembershipService(); } [HttpGet] public ActionResult LogOn() { return View(); } [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (_membershipService.ValidateUser(model.UserName,model.Password)) { if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); } return RedirectToAction("Index", "Overview"); } ModelState.AddModelError("*", "The user name or password provided is incorrect."); } return View(model); } } AccountServices.cs public interface IMembershipService { bool ValidateUser(string userName, string password); } public class AccountMembershipService : IMembershipService { public bool ValidateUser(string userName, string password) { throw new System.NotImplementedException(); } } AccountControllerFacts.cs public class AccountControllerFacts { public static AccountController GetAccountControllerForLogonSuccess() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(true); return controller; } public static AccountController GetAccountControllerForLogonFailure() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(false); return controller; } public class LogOn { [Fact] public void Get_ReturnsViewResultWithDefaultViewName() { // Arrange var controller = GetAccountControllerForLogonSuccess(); // Act var result = controller.LogOn(); // Assert Assert.IsType<ViewResult>(result); Assert.Empty(((ViewResult)result).ViewName); } [Fact] public void Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, null); var redirectresult = (RedirectToRouteResult) result; // Assert Assert.IsType<RedirectToRouteResult>(result); Assert.Equal("Overview", redirectresult.RouteValues["controller"]); Assert.Equal("Index", redirectresult.RouteValues["action"]); } [Fact] public void Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var redirectResult = (RedirectResult) result; // Assert Assert.IsType<RedirectResult>(result); Assert.Equal("someurl", 
redirectResult.Url); } [Fact] public void Put_ReturnsViewIfInvalidModelState() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); controller.ModelState.AddModelError("*","Invalid model state."); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); } [Fact] public void Put_ReturnsViewIfLogonFailed() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); Assert.Equal(false,viewResult.ViewData.ModelState.IsValid); } } }
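    A hedged observation about why ValidateUser returns false in the success tests: the Rhino Mocks stub is set up with the literal arguments "someuser"/"somepass", but both success tests pass a fresh LogOnModel whose UserName and Password are null, so the expectation never matches. Either relax the argument matching or populate the model; a sketch:

        // Option 1: answer true no matter what credentials are passed in.
        membershipServiceStub
            .Stub(x => x.ValidateUser(null, null))
            .IgnoreArguments()
            .Return(true);

        // Option 2: exercise the stub with the arguments it was set up for.
        var user = new LogOnModel { UserName = "someuser", Password = "somepass" };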

    Read the article

  • debian lenny xen bridge networking problem

    - by Sasha
    DomU isn't talking to the world, but it talks to Dom0. Here are the tests that I made: Dom0 (external networking is working): ping 188.40.96.238 #Which is Domu's ip PING 188.40.96.238 (188.40.96.238) 56(84) bytes of data. 64 bytes from 188.40.96.238: icmp_seq=1 ttl=64 time=0.092 ms DomU: ping 188.40.96.215 #Which is Dom0's ip PING 188.40.96.215 (188.40.96.215) 56(84) bytes of data. 64 bytes from 188.40.96.215: icmp_seq=1 ttl=64 time=0.045 ms ping 188.40.96.193 #Which is the gateway - fail PING 188.40.96.193 (188.40.96.193) 56(84) bytes of data. ^C --- 188.40.96.193 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1013ms The system is debian lenny with a normal setup. Here is my configs: uname -a Linux green0 2.6.26-2-xen-686 #1 SMP Wed Aug 19 08:47:57 UTC 2009 i686 GNU/Linux cat /etc/xen/green1.cfg |grep -v '#' kernel = '/boot/vmlinuz-2.6.26-2-xen-686' ramdisk = '/boot/initrd.img-2.6.26-2-xen-686' memory = '2000' root = '/dev/xvda2 ro' disk = [ 'file:/home/xen/domains/green1/swap.img,xvda1,w', 'file:/home/xen/domains/green1/disk.img,xvda2,w', ] name = 'green1' vif = [ 'ip=188.40.96.238,mac=00:16:3E:1F:C4:CC' ] on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' ifconfig eth0 Link encap:Ethernet HWaddr 00:24:21:ef:2f:86 inet addr:188.40.96.215 Bcast:188.40.96.255 Mask:255.255.255.192 inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3296 errors:0 dropped:0 overruns:0 frame:0 TX packets:2204 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:262717 (256.5 KiB) TX bytes:330465 (322.7 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) peth0 Link encap:Ethernet HWaddr 00:24:21:ef:2f:86 inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:3407 errors:0 dropped:657431448 overruns:0 frame:0 TX packets:2291 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:319941 (312.4 KiB) TX bytes:338423 (330.4 KiB) Interrupt:16 Base address:0x8000 vif2.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:27 errors:0 dropped:0 overruns:0 frame:0 TX packets:151 errors:0 dropped:33 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:1164 (1.1 KiB) TX bytes:20974 (20.4 KiB) ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff inet6 fe80::224:21ff:feef:2f86/64 scope link valid_lft forever preferred_lft forever 4: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff 5: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff 7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state 
DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff 9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff 11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff inet 188.40.96.215/26 brd 188.40.96.255 scope global eth0 inet6 fe80::224:21ff:feef:2f86/64 scope link valid_lft forever preferred_lft forever 14: vif2.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32 link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever brctl show bridge name bridge id STP enabled interfaces eth0 8000.002421ef2f86 no peth0 vif2.0 ip r l Dom0: 188.40.96.192/26 dev eth0 proto kernel scope link src 188.40.96.215 default via 188.40.96.193 dev eth0 DomU: 188.40.96.192/26 dev eth0 proto kernel scope link src 188.40.96.238 default via 188.40.96.193 dev eth0
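    A few hedged checks for the "DomU reaches Dom0 but not the gateway" pattern on a bridged setup (standard tools; the addresses are from the post):

        # Is bridged traffic being passed to iptables? A restrictive FORWARD
        # chain would block DomU while Dom0's own traffic still works.
        sysctl net.bridge.bridge-nf-call-iptables
        iptables -L FORWARD -n -v

        # Does the gateway ever answer ARP for the DomU address? Run inside DomU.
        arping -I eth0 188.40.96.193

    One further guess: 188.40.x.x is a Hetzner range, and such hosters often accept only the server's registered MAC address on the switch port, silently dropping frames from a bridged DomU's own MAC; in that case a routed setup (or a provider-issued virtual MAC for the extra IP) would be required rather than bridging.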

    Read the article

  • Is Multicast broken for Android 2.0.1 (currently on the DROID) or am I missing something?

    - by Gubatron
    This code works perfectly in Ubuntu, in Windows and MacOSX, it also works fine with a Nexus-One currently running firmware 2.1.1. I start sending and listening multicast datagrams, and all the computers and the Nexus-One will see each other perfectly. Then I run the same code on a Droid (Firmware 2.0.1), and everybody will get the packets sent by the Droid, but the droid will listen only to it's own packets. This is the run() method of a thread that's constantly listening on a Multicast group for incoming packets sent to that group. I'm running my tests on a local network where I have multicast support enabled in the router. My goal is to have devices meet each other as they come on line by broadcasting packages to a multicast group. public void run() { byte[] buffer = new byte[65535]; DatagramPacket dp = new DatagramPacket(buffer, buffer.length); try{ MulticastSocket ms = new MulticastSocket(_port); ms.setNetworkInterface(_ni); //non loopback network interface passed ms.joinGroup(_ia); //the multicast address, currently 224.0.1.16 Log.v(TAG,"Joined Group " + _ia); while (true) { ms.receive(dp); String s = new String(dp.getData(),0,dp.getLength()); Log.v(TAG,"Received Package on "+ _ni.getName() +": " + s); Message m = new Message(); Bundle b = new Bundle(); b.putString("event", "Listener ("+_ni.getName()+"): \"" + s + "\""); m.setData(b); dispatchMessage(m); //send to ui thread } } catch (SocketException se) { System.err.println(se); } catch (IOException ie) { System.err.println(ie); } } Over here, is the code that sends the Multicast Datagram out of every valid network interface available (that's not the loopback interface). public void sendPing() { MulticastSocket ms = null; try { ms = new MulticastSocket(_port); ms.setTimeToLive(TTL_GLOBAL); List<NetworkInterface> interfaces = getMulticastNonLoopbackNetworkInterfaces(); for (NetworkInterface iface : interfaces) { //skip loopback if (iface.getName().equals("lo")) continue; ms.setNetworkInterface(iface); _buffer = ("FW-"+ _name +" PING ("+iface.getName()+":"+iface.getInetAddresses().nextElement()+")").getBytes(); DatagramPacket dp = new DatagramPacket(_buffer, _buffer.length,_ia,_port); ms.send(dp); Log.v(TAG,"Announcer: Sent packet - " + new String(_buffer) + " from " + iface.getDisplayName()); } } catch (IOException e) { e.printStackTrace(); } catch (Exception e2) { e2.printStackTrace(); } } Update (April 2nd 2010) I found a way to have the Droid's network interface to communicate using Multicast! _wifiMulticastLock = ((WifiManager) context.getSystemService(Context.WIFI_SERVICE)).createMulticastLock("multicastLockNameHere"); _wifiMulticastLock.acquire(); Then when you're done... if (_wifiMulticastLock != null && _wifiMulticastLock.isHeld()) _wifiMulticastLock.release(); After I did this, the Droid started sending and receiving UDP Datagrams on a Multicast group. gubatron
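    Folding that fix back into the listener, the lock only needs to be held while the socket is open. A hedged Java sketch (the lock name and surrounding error handling are illustrative):

        // Acquire a MulticastLock so the Wi-Fi driver delivers multicast
        // frames (some handsets, like the Droid, filter them by default).
        WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        WifiManager.MulticastLock lock = wifi.createMulticastLock("peer-discovery");
        lock.acquire();
        try {
            MulticastSocket ms = new MulticastSocket(_port);
            ms.joinGroup(_ia);
            // ... receive loop as in run() above ...
            ms.leaveGroup(_ia);
            ms.close();
        } finally {
            if (lock.isHeld()) lock.release();
        }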

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v.2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
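    The Subversion half of the plan is mechanical; a hedged sketch of the release flow described above (URLs and the revision number are illustrative, and the ^/ shorthand needs a 1.6+ client):

        # Tag the release and cut a maintenance branch from the same point.
        svn copy ^/trunk ^/tags/2.0 -m "Tag 2.0"
        svn copy ^/tags/2.0 ^/branches/2.0.x -m "Maintenance branch for 2.0"

        # Later: cherry-pick a specific trunk fix into the maintenance branch.
        cd 2.0.x-working-copy
        svn merge -c 12345 ^/trunk .
        svn commit -m "Backport r12345 into 2.0.x"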

    Read the article

  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and what I experience myself :-S I'm trying to get Cyrus imap working for virtual hosting on CentOS with this authorisation backend and really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like pam completely ignores it. Here's what I do for testing (done more tests but all with similar results): [root@testserv ~]# imtest -u [email protected] -a [email protected] WARNING: no hostname supplied, assuming localhost S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready C: C01 CAPABILITY S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH S: C01 OK Completed Please enter your password: C: L01 LOGIN [email protected] {6} S: + go ahead C: <omitted> S: L01 NO Login failed: authentication failure Authentication failed. generic failure Security strength factor: 0 C: Q01 LOGOUT * BYE LOGOUT received Q01 OK Completed Connection closed. The LDAP entry does exist (and so does the mailbox in Cyrus): [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected] Enter LDAP Password: # extended LDIF # # LDAPv3 # base <> with scope subtree # filter: [email protected] # requesting: ALL # # myuser, accounts, testserv.mydomain.com, mydomain, com dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uidNumber: 16 uid: myuser gidNumber: 5 givenName: My sn: Name mail: [email protected] cn: My Name userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk= shadowLastChange: 15176 homeDirectory: /dev/null # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This is what I get in /var/log/messages Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] ... /var/adm/auth.log Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error] (AFAIK I can ignore the auxprop msg) ... 
and /var/log/slapd.log: Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389) Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128 Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text= Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])" Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text= Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed These are the settings in In /etc/imapd.conf: sasl_mech_list: PLAIN LOGIN sasl_pwcheck_method: saslauthd ## sasl_auxprop_plugin: sasldb sasl_auto_transition: no and my sasl config: [root@testserv ~]# cat /etc/sysconfig/saslauthd # Directory in which to place saslauthd's listening socket, pid file, and so # on. This directory must already exist. SOCKETDIR=/var/run/saslauthd # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list # of which mechanism your installation was compiled with the ablity to use. MECH=pam # Additional flags to pass to saslauthd on the command line. See saslauthd(8) # for the list of accepted flags. FLAGS="-c -r -O /etc/saslauthd.conf" [root@testserv ~]# cat /etc/saslauthd.conf ldap_servers: ldap://127.0.0.1/ ldap_search_base: dc=%d,o=mydomain,c=com ldap_auth_method: bind #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active))) ldap_filter: (&(mail=%u@%d)(accountStatus=active)) ldap_debug: 1 ldap_version: 3 The accountStatus=active is not in ldap yet, but that doesn't make a difference since I don't see it in the filter... that's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf. Apart from some timers, that only contains: host 127.0.0.1 base o=mydomain,c=com pam_login_attribute mail Outcommenting the pam_login_attribute results in this filter in slapd.log: filter="([email protected])" Pam-imap looks like this: [root@testserv ~]# cat /etc/pam.d/imap auth required pam_ldap.so debug account required pam_ldap.so debug #auth sufficient pam_unix.so likeauth nullok #auth sufficient pam_ldap.so use_first_pass #auth required pam_deny.so #account sufficient pam_unix.so #account sufficient pam_ldap.so The outcommented stuff is because I don't have the cyrus admin user in Ldap; that's a Linux user. That works fine when uncommented, but I still need to play around with that a little and first I wanna get imap working. Finally nsswitch: [root@testserv ~]# cat /etc/nsswitch.conf # # /etc/nsswitch.conf # # An example Name Service Switch config file. This file should be # sorted with the most-used services at the beginning. # # The entry '[NOTFOUND=return]' means that the search for an # entry should stop if the search in the previous entry turned # up nothing. Note that if the search failed due to some other reason # (like no NIS server responding) then the search continues with the # next entry. 
# # Legal entries are: # # nisplus or nis+ Use NIS+ (NIS version 3) # nis or yp Use NIS (NIS version 2), also called YP # dns Use DNS (Domain Name Service) # files Use the local files # db Use the local database (.db) files # compat Use NIS on compat mode # hesiod Use Hesiod for user lookups # [NOTFOUND=return] Stop searching if not found so far # # To use db, put the "db" in front of "files" for entries you want to be # looked up first in the databases # # Example: #passwd: db files nisplus nis #shadow: db files nisplus nis #group: db files nisplus nis passwd: compat ldap group: compat ldap shadow: compat ldap hosts: files dns bootparams: nisplus [NOTFOUND=return] files ethers: files netmasks: files networks: files protocols: files rpc: files services: files netgroup: nisplus publickey: nisplus automount: files nisplus aliases: files nisplus Any info where to start looking will be greatly appreciated! Thnx in advance
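    A hedged reading of those logs: with MECH=pam, saslauthd hands the check to PAM, so the LDAP settings in /etc/saslauthd.conf, including ldap_filter, are never consulted; pam_ldap builds its own filter from /etc/ldap.conf, which matches the filter="(mail=...)" seen in slapd.log. Letting saslauthd query LDAP directly would make the configured filter take effect (sketch; credentials are placeholders):

        # /etc/sysconfig/saslauthd: use the ldap mechanism instead of pam;
        # this makes saslauthd read /etc/saslauthd.conf (already named by -O).
        MECH=ldap

        # After restarting saslauthd, verify outside of Cyrus:
        testsaslauthd -u myuser -p 'secret' -r testserv.mydomain.com -s imap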

    Read the article

  • Why does IHttpAsyncHandler leak memory under load?

    - by Anton
    I have noticed that the .NET IHttpAsyncHandler (and the IHttpHandler, to a lesser degree) leak memory when subjected to concurrent web requests. In my tests, the development web server (Cassini) jumps from 6MB memory to over 100MB, and once the test is finished, none of it is reclaimed. The problem can be reproduced easily. Create a new solution (LeakyHandler) with two projects: An ASP.NET web application (LeakyHandler.WebApp) A Console application (LeakyHandler.ConsoleApp) In LeakyHandler.WebApp: Create a class called TestHandler that implements IHttpAsyncHandler. In the request processing, do a brief Sleep and end the response. Add the HTTP handler to Web.config as test.ashx. In LeakyHandler.ConsoleApp: Generate a large number of HttpWebRequests to test.ashx and execute them asynchronously. As the number of HttpWebRequests (sampleSize) is increased, the memory leak is made more and more apparent. LeakyHandler.WebApp TestHandler.cs namespace LeakyHandler.WebApp { public class TestHandler : IHttpAsyncHandler { #region IHttpAsyncHandler Members private ProcessRequestDelegate Delegate { get; set; } public delegate void ProcessRequestDelegate(HttpContext context); public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { Delegate = ProcessRequest; return Delegate.BeginInvoke(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { Delegate.EndInvoke(result); } #endregion #region IHttpHandler Members public bool IsReusable { get { return true; } } public void ProcessRequest(HttpContext context) { Thread.Sleep(10); context.Response.End(); } #endregion } } LeakyHandler.WebApp Web.config <?xml version="1.0"?> <configuration> <system.web> <compilation debug="false" /> <httpHandlers> <add verb="POST" path="test.ashx" type="LeakyHandler.WebApp.TestHandler" /> </httpHandlers> </system.web> </configuration> LeakyHandler.ConsoleApp Program.cs namespace LeakyHandler.ConsoleApp { class Program { private static int sampleSize = 10000; private static int startedCount = 0; private static int completedCount = 0; static void Main(string[] args) { Console.WriteLine("Press any key to start."); Console.ReadKey(); string url = "http://localhost:3000/test.ashx"; for (int i = 0; i < sampleSize; i++) { HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url); request.Method = "POST"; request.BeginGetResponse(GetResponseCallback, request); Console.WriteLine("S: " + Interlocked.Increment(ref startedCount)); } Console.ReadKey(); } static void GetResponseCallback(IAsyncResult result) { HttpWebRequest request = (HttpWebRequest)result.AsyncState; HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result); try { using (Stream stream = response.GetResponseStream()) { using (StreamReader streamReader = new StreamReader(stream)) { streamReader.ReadToEnd(); System.Console.WriteLine("C: " + Interlocked.Increment(ref completedCount)); } } response.Close(); } catch (Exception ex) { System.Console.WriteLine("Error processing response: " + ex.Message); } } } }
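    Separate from the leak measurement, there is a hedged correctness issue in TestHandler itself: IsReusable returns true, so one instance serves concurrent requests, and storing the delegate in the instance-level Delegate property lets parallel requests overwrite each other's state before EndProcessRequest runs. The delegate can instead be recovered from the IAsyncResult, removing the shared field (sketch):

        using System.Runtime.Remoting.Messaging;

        public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            ProcessRequestDelegate d = ProcessRequest;
            return d.BeginInvoke(context, cb, extraData);
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            // The delegate that started the call travels inside the framework's
            // AsyncResult, so no instance field (and no race) is needed.
            var d = (ProcessRequestDelegate)((AsyncResult)result).AsyncDelegate;
            d.EndInvoke(result);
        }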

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a PC running Windows 7 and a Mac running OSX Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and Samba client through the firewall and set the ethernet to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but choosing a share gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which, when authenticated, gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf: [global] workgroup = workgroup server string = Server log file = /var/log/samba/log.%m max log size = 50 security = user load printers = yes cups options = raw printcap name = lpstat printing = cups [homes] comment = Home Directories browseable = no writable = yes [printers] comment = All Printers path = /var/spool/samba browseable = yes printable = yes [FileServ] comment = FileShare path = /media/FileServ read only = no browseable = yes valid users = user1, user2 [webdev] comment = Web development path = /var/www/html/webdev read only = no browseable = yes valid users = user1 How do I get Samba sharing working? UPDATE: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba? UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine), and 2) making a share that only included one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address. UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed.
    I believe this could very well be the problem. Any suggestions? UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I am receiving the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something. UPDATE 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.) UPDATE 6: Figured it out. See the answer below.
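    Since the fix in updates 4-6 hinges on the fstab entry and the SELinux label, here is a sketch of that setup (the device UUID and filesystem type are hypothetical; the mountpoint matches the share above). The semanage form makes the label persistent across relabels, unlike a plain chcon:
        # /etc/fstab - automount the second disk at the share path (UUID hypothetical)
        UUID=0000-0000  /media/FileServ  ext4  defaults  0 2

        # give the whole tree the label smbd is allowed to serve
        semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        restorecon -Rv /media/FileServ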

    Read the article

  • Bacula windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each client can see each other, can share files and can ping. On my local server I could run bconsole and the server responds but when I run bconsole or bat on my windows client the server does not respond. Here are my configuration files: bacula-dir.conf: # # Default Bacula Director Configuration file # # The only thing that MUST be changed is to add one or more # file or directory names in the Include directive of the # FileSet resource. # # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid # # You might also want to change the default email address # from root to your address. See the "mail" and "operator" # directives in the Messages resource. # Director { # define myself Name = nima-desktop-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 1 Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" # Console password Messages = Daemon DirAddress = 127.0.0.1 # DirAddress = 72.16.208.1 } JobDefs { Name = "DefaultJob" Type = Backup Level = Incremental Client = nima-desktop-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultJob" } #Job { # Name = "BackupClient2" # Client = nima-desktop2-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=nima-desktop-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /nonexistant/path/to/file/archive/dir/bacula-restores } # job for vmware windows host Job { Name = "nimaxp-fd" Type = Backup Client = nimaxp-fd FileSet = "nimaxp-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # job for vmware windows host Job { Name = "arg-michael-fd" Type = Backup Client = nimaxp-fd FileSet = "arg-michael-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } # # Put your list of files here, preceded by 'File =', one per line # or include an external list with: # # File = <file-name # # Note: / backs up everything on the root partition. # if you have other partitions such as /usr or /home # you will probably want to add them too. 
# # By default this is defined to point to the Bacula binary # directory to give a reasonable FileSet to backup to # disk storage during initial testing. # File = /usr/sbin } # # If you backup the root directory, the following two excluded # files can be useful # Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "nimaxp-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # List of files to be backed up FileSet { Name = "arg-michael-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # # When to do the backups, full backup on first sunday of the month, # differential (i.e. incremental since full) every other sunday, # and incremental backups other days Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = nima-desktop-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = nimaxp-fd Address = nimaxp FDPort = 9102 Catalog = MyCatalog Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = arg-michael-fd Address = 192.168.0.61 FDPort = 9102 Catalog = MyCatalog Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = nima-desktop2-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" Device = FileStorage Media Type = File } # Definition of DDS tape storage device #Storage { # Name = DDS-4 # Do not use "localhost" here # Address = localhost # N.B. 
Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # password for Storage daemon # Device = DDS-4 # must be same as Device in Storage daemon # Media Type = DDS-4 # must be same as MediaType in Storage daemon # Autochanger = yes # enable for autochanger device #} # Definition of 8mm tape storage device #Storage { # Name = "8mmDrive" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "Exabyte 8mm" # MediaType = "8mm" #} # Definition of DVD storage device #Storage { # Name = "DVD" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "DVD Writer" # MediaType = "DVD" #} # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; dbuser = ""; dbpassword = "" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard # # NOTE! If you send to two email or more email addresses, you will need # to replace the %r in the from field (-f part) with a single valid # email address in both the mailcommand and the operatorcommand. # What this does is, it sets the email address that emails would display # in the FROM field, which is by default the same email as they're being # sent to. However, if you send email to more than one address, then # you'll have to set the FROM address manually, to a single address. # for example, a '[email protected]', is better since that tends to # tell (most) people that its coming from an automated source. # mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = nima-desktop-mon Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c" CommandACL = status, .status } bacula-fd.conf on client: # # Default Bacula File Daemon Configuration file # # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32 # # There is not much to change here except perhaps the # File daemon Name # # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = nimaxp-fd FDport = 9102 # where we listen for the director WorkingDirectory = "C:\\Program Files\\Bacula\\working" Pid Directory = "C:\\Program Files\\Bacula\\working" # Plugin Directory = "C:\\Program Files\\Bacula\\plugins" Maximum Concurrent Jobs = 10 } # # List Directors who are permitted to contact this File daemon # Director { Name = Nima-desktop-dir Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = nimaxp-mon Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3" Monitor = yes } # Send all messages except skipped files back to Director Messages { Name = Standard director = Nima-desktop = all, !skipped, !restored } I have checked my firewall and disabled the firewall but it doesn't work.
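    Two things stand out in the configs above, plus a quick connectivity check (hostnames copied from the configs; adjust as needed). First, bacula-dir.conf sets DirAddress = 127.0.0.1, which binds the director to loopback only, so a console on the Windows client can never reach it. Second, Bacula resource names and passwords must match exactly between daemons, and the director is defined as nima-desktop-dir while the client's bacula-fd.conf references Nima-desktop-dir; if name matching is case-sensitive (worth verifying), that alone would break authentication. A minimal sketch of the checks:
        # from the Windows client: is the director port reachable at all?
        telnet nima-desktop 9101
        # from the Debian server: is the client's file daemon reachable?
        telnet nimaxp 9102
        # on the server: confirm which address the director is actually bound to
        netstat -ltnp | grep -E '910[123]'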

    Read the article

  • Hudson + Jboss AS 7.1 + Maven 3 Project Build Error

    - by Zehra Gül Çabuk
    I wanted to prepare an environment in which I can build my project and run my tests on an integration server. So I installed a JBoss Application Server 7.1 and deployed hudson.war to it. Then I created a project to trigger "mvn clean install", and when I built it I got the following exception. [INFO] Using bundled Maven 3 installation [INFO] Checking Maven 3 installation environment [workspace] $ /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn --help [INFO] Checking Maven 3 installation version [INFO] Detected Maven 3 installation version: 3.0.3 [workspace] $ /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn clean install -V -B -Dmaven.ext.class.path=/disk7/hudson_home/maven/slavebundle/resources:/disk7/hudson_home/maven/slavebundle/lib/maven3-eventspy-3.0.jar:/disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/contents -Dhudson.eventspy.port=37183 -f pom.xml [DEBUG] Waiting for connection on port: 37183 Apache Maven 3.0.3 (r1075438; 2011-02-28 19:31:09+0200) Maven home: /disk7/hudson_home/maven/slavebundle/bundled-maven Java version: 1.7.0_04, vendor: Oracle Corporation Java home: /usr/lib/jvm/jdk1.7.0_04/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.2.0-24-generic", arch: "amd64", family: "unix" [ERROR] o.h.m.e.DelegatingEventSpy - Init failed java.lang.NoClassDefFoundError: hudson/remoting/Channel at org.hudsonci.maven.eventspy.common.RemotingClient.open(RemotingClient.java:103) ~[maven3-eventspy-runtime.jar:na] at org.hudsonci.maven.eventspy_30.RemotingEventSpy.openChannel(RemotingEventSpy.java:86) ~[maven3-eventspy-3.0.jar:na] at org.hudsonci.maven.eventspy_30.RemotingEventSpy.init(RemotingEventSpy.java:114) ~[maven3-eventspy-3.0.jar:na] at org.hudsonci.maven.eventspy_30.DelegatingEventSpy.init(DelegatingEventSpy.java:128) ~[maven3-eventspy-3.0.jar:na] at org.apache.maven.eventspy.internal.EventSpyDispatcher.init(EventSpyDispatcher.java:84) [maven-core-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.container(MavenCli.java:403) [maven-embedder-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:191) [maven-embedder-3.0.3.jar:3.0.3] at org.apache.maven.cli.MavenCli.main(MavenCli.java:141) [maven-embedder-3.0.3.jar:3.0.3] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_04] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_04] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_04] at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_04] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409) [plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352) [plexus-classworlds-2.4.jar:na] Caused by: java.lang.ClassNotFoundException: hudson.remoting.Channel at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50) ~[plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244) ~[plexus-classworlds-2.4.jar:na] at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
    ~[plexus-classworlds-2.4.jar:na] ... 16 common frames omitted [ERROR] ABORTED [ERROR] Failed to initialize [ERROR] Caused by: hudson/remoting/Channel [ERROR] Caused by: hudson.remoting.Channel [ERROR] Failure: java.net.SocketException: Connection reset FATAL: Connection reset java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:189) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.io.FilterInputStream.read(FilterInputStream.java:133) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at hudson.remoting.Channel.<init>(Channel.java:385) at hudson.remoting.Channel.<init>(Channel.java:347) at hudson.remoting.Channel.<init>(Channel.java:320) at hudson.remoting.Channel.<init>(Channel.java:315) at hudson.slaves.Channels$1.<init>(Channels.java:71) at hudson.slaves.Channels.forProcess(Channels.java:71) at org.hudsonci.maven.plugin.builder.internal.PerformBuild.doExecute(PerformBuild.java:174) at org.hudsonci.utils.tasks.PerformOperation.execute(PerformOperation.java:58) at org.hudsonci.maven.plugin.builder.MavenBuilder.perform(MavenBuilder.java:169) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:630) at hudson.model.Build$RunnerImpl.build(Build.java:175) at hudson.model.Build$RunnerImpl.doRun(Build.java:137) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:429) at hudson.model.Run.run(Run.java:1366) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:145) I want to point out the command that Hudson tries to execute: /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn clean install -V -B -Dmaven.ext.class.path=/disk7/hudson_home/maven/slavebundle/resources:/disk7/hudson_home/maven/slavebundle/lib/maven3-eventspy-3.0.jar:/disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/contents -Dhudson.eventspy.port=37183 -f pom.xml It tries to find "hudson-remoting-2.2.0.jar" to put it on the build path, but it searches in the wrong place: when I looked for the hudson-remoting JAR, I found it for this build at /disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/hudson-remoting-2.2.0.jar (not in contents). So how can I configure Hudson to make it look in the right place for JARs? Does anyone have an idea? Thanks in advance.
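    The log above shows the eventspy classpath pointing at .../jar-407215e5de02980f/contents while the JAR itself sits one level up. A hedged way to confirm that diagnosis (reusing the exact command from the log) is to inspect what the classpath entry resolves to, then rerun the build by hand with maven.ext.class.path pointing at the real JAR:
        # check what the classpath entry actually contains
        ls -l /disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/
        # rerun with the JAR itself on the extension classpath
        /disk7/hudson_home/maven/slavebundle/bundled-maven/bin/mvn clean install -V -B \
          -Dmaven.ext.class.path=/disk7/hudson_home/maven/slavebundle/resources:/disk7/hudson_home/maven/slavebundle/lib/maven3-eventspy-3.0.jar:/disk7/jboss-as-7.1.0.Final/standalone/tmp/vfs/deploymentdf2a6dfa59ee3407/hudson-remoting-2.2.0.jar-407215e5de02980f/hudson-remoting-2.2.0.jar \
          -Dhudson.eventspy.port=37183 -f pom.xml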

    Read the article

  • Linux networking crash: best steps to find out the cause?

    - by Aron Rotteveel
    One of our Linux (CentOS) servers was unreachable last night. The server was not reachable in any way except for the remote console. After logging in with the remote console, it turned out I could not ping any outside hosts either. A simple service network restart solved the issue, but I am still wondering what could have caused this. My log files seem to indicate no error at all (except for the various daemons that need a network connection and failed after the network failure). Are there any additional steps I can take to find out the cause of this problem? EDIT: this just happened again. The server was completely unresponsive until I issued a networking service restart. Any advice is welcome. Could this be caused by a faulty hardware component? As per Madhatter's request, here are some excerpts from the log at the time (the network crashed at 20:13): /var/log/messages: Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=100 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:01:05 graviton kernel: Firewall: *TCP_IN Blocked* IN=eth0 OUT= MAC=<stripped> SRC=<stripped> DST=<stripped> LEN=40 TOS=0x00 PREC=0x00 TTL=101 ID=256 PROTO=TCP SPT=6000 DPT=3306 WINDOW=16384 RES=0x00 SYN URGP=0 Dec 2 20:13:34 graviton junglediskserver: Connection to gateway failed: xGatewayTransport - Connection to gateway failed. The first three messages are simple responses to iptables rules I have set up through the LFD firewall. The last message indicates that JungleDisk, which I use for backups, can no longer connect to the gateway. Apart from this, there are no interesting messages around this time. EDIT 4 dec: as per Mattdm's request, here is the output of ethtool eth0. (Please note that these are the settings that currently work. If things go wrong again, I will be sure to post this again if necessary.)
Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on Supports Wake-on: g Wake-on: d Link detected: yes As per Joris' request, here is also the output of route -n: aron@graviton [~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface xx.xx.xx.58 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.42 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.43 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.41 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.46 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.47 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.44 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.45 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.50 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.51 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.48 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.49 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.54 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.52 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.53 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0 xx.xx.xx.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 xx.xx.xx.62 0.0.0.0 UG 0 0 0 eth0 The bottom xx.62 is my gateway. EDIT december 28th: the problem occurred again and I got the chance to compare some of the outputs of the above tests. What I found out is that arp -an returns an incomplete MAC address for my gateway (which is not under my control; the server is in a shared rack): During failure: ? (xx.xx.xx.62) at <incomplete> on eth0 After service network restart: ? (xx.xx.xx.62) at 00:00:0C:9F:F0:30 [ether] on eth0 Is this something I can fix or is it time for me to contact the data centre?
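    Given the <incomplete> ARP entry during the outage, a hedged stopgap while waiting on the data centre is to pin the gateway's MAC statically and log neighbour-table events so the next failure is caught in the act (addresses as redacted above):
        # pin the gateway MAC observed while the link is healthy
        arp -s xx.xx.xx.62 00:00:0C:9F:F0:30
        # log neighbour state changes to catch the next failure live
        ip monitor neigh > /var/log/neigh-monitor.log &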

    Read the article

  • Mysql - Help me change this single complex query to use temporary tables

    - by sandeepan-nath
    About the system: there are tutors who create classes and packs, and a tags-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details please check the section "How tags work in this system?" below. Following is the concerned query. Can anybody help me suggest an approach using temporary tables? We have indexed all the relevant fields, and it looks like this is the least time possible with this approach: SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%Dictatorship%" )) AS key_1_total_matches , SUM(DISTINCT( t.tag LIKE "%democracy%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%democracy%" )) AS key_2_total_matches , COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published' , CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td . *, u . * FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN tags AS t ON t.id_tag = ttagrels.id_tag LEFT JOIN tags AS tt ON tt.id_tag = lptagrels.id_tag LEFT JOIN tags AS ttt ON ttt.id_tag = wtagrels.id_tag WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( tt.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( ttt.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%" OR tt.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%democracy%" ) GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem: The results returned by the above query are correct (AND logic working as per expectation), but the time taken by
    the query rises alarmingly for heavier data; for the current data I have, it is around 10 seconds, as against normal query timings on the order of 0.005-0.0002 seconds, which makes it totally unusable. Somebody suggested in my previous question to do the following: (1) create a temporary table and insert into it all relevant data that might end up in the final result set; (2) run several updates on this table, joining the required tables one at a time instead of all of them at the same time; (3) finally perform a query on this temporary table to extract the end result. All this was done in a stored procedure; the end result passed unit tests and is blazing fast. I have never worked with temporary tables till now. If only I could get some hints, some kind of schematic representation, so that I can get started... Is there something faulty with the query? What can be the reason behind 10+ seconds of execution time? How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details like name, surname etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details like pack name, description etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table. The EXPLAIN query output: please see this screenshot - http://www.test.examvillage.com/Explain_query_improved.jpg
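    Since the question asks for a schematic, here is a rough sketch of the suggested temporary-table approach, reusing table and column names from the query above (everything else is hypothetical, and the real procedure would need one UPDATE per joined source):
        -- 1) seed the temp table with the cheap, highly selective part
        CREATE TEMPORARY TABLE tmp_tutors (
          id_tutor INT PRIMARY KEY,
          key_1_matches TINYINT DEFAULT 0,
          key_2_matches TINYINT DEFAULT 0,
          tutor_popularity INT DEFAULT 0
        );
        INSERT INTO tmp_tutors (id_tutor)
        SELECT td.id_tutor
        FROM tutor_details td JOIN users u ON u.id_user = td.id_user
        WHERE u.country IN ('IE', 'INT');

        -- 2) one small UPDATE per source instead of one giant join
        UPDATE tmp_tutors tr
        JOIN tutors_tag_relations ttagrels ON ttagrels.id_tutor = tr.id_tutor
        JOIN tags t ON t.id_tag = ttagrels.id_tag
        SET tr.key_1_matches = 1
        WHERE t.tag LIKE '%Dictatorship%';
        -- ...similar UPDATEs for pack/class tag matches and for popularity...

        -- 3) the final query is a plain scan of the small temp table
        SELECT * FROM tmp_tutors
        WHERE key_1_matches = 1 AND key_2_matches = 1
        ORDER BY tutor_popularity DESC
        LIMIT 0, 20;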

    Read the article

  • Arquillian - Weld SE - getting NullPointerException

    - by Walter White
    I am new to Arquillian and want to get some basic testing working (inject a bean and assert it does something). Exception: ------------------------------------------------------------------------------- Test set: com.walterjwhite.test.TestCase ------------------------------------------------------------------------------- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.231 sec <<< FAILURE! test(com.walterjwhite.test.TestCase) Time elapsed: 0.02 sec <<< ERROR! java.lang.RuntimeException: Could not inject members at org.jboss.arquillian.testenricher.cdi.CDIInjectionEnricher.injectClass(CDIInjectionEnricher.java:113) at org.jboss.arquillian.testenricher.cdi.CDIInjectionEnricher.enrich(CDIInjectionEnricher.java:61) at org.jboss.arquillian.impl.enricher.ClientTestEnricher.enrich(ClientTestEnricher.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.arquillian.impl.core.ObserverImpl.invoke(ObserverImpl.java:90) at org.jboss.arquillian.impl.core.EventContextImpl.invokeObservers(EventContextImpl.java:98) at org.jboss.arquillian.impl.core.EventContextImpl.proceed(EventContextImpl.java:80) at org.jboss.arquillian.impl.client.ContainerDeploymentContextHandler.createContext(ContainerDeploymentContextHandler.java:133) at org.jboss.arquillian.impl.client.ContainerDeploymentContextHandler.createBeforeContext(ContainerDeploymentContextHandler.java:115) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.arquillian.impl.core.ObserverImpl.invoke(ObserverImpl.java:90) at org.jboss.arquillian.impl.core.EventContextImpl.proceed(EventContextImpl.java:87) at org.jboss.arquillian.impl.TestContextHandler.createTestContext(TestContextHandler.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.arquillian.impl.core.ObserverImpl.invoke(ObserverImpl.java:90) at org.jboss.arquillian.impl.core.EventContextImpl.proceed(EventContextImpl.java:87) at org.jboss.arquillian.impl.TestContextHandler.createClassContext(TestContextHandler.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.arquillian.impl.core.ObserverImpl.invoke(ObserverImpl.java:90) at org.jboss.arquillian.impl.core.EventContextImpl.proceed(EventContextImpl.java:87) at org.jboss.arquillian.impl.TestContextHandler.createSuiteContext(TestContextHandler.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at 
org.jboss.arquillian.impl.core.ObserverImpl.invoke(ObserverImpl.java:90) at org.jboss.arquillian.impl.core.EventContextImpl.proceed(EventContextImpl.java:87) at org.jboss.arquillian.impl.core.ManagerImpl.fire(ManagerImpl.java:126) at org.jboss.arquillian.impl.core.ManagerImpl.fire(ManagerImpl.java:106) at org.jboss.arquillian.impl.EventTestRunnerAdaptor.before(EventTestRunnerAdaptor.java:85) at org.jboss.arquillian.junit.Arquillian$4.evaluate(Arquillian.java:210) at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:303) at org.jboss.arquillian.junit.Arquillian.access$300(Arquillian.java:45) at org.jboss.arquillian.junit.Arquillian$5.evaluate(Arquillian.java:228) at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.jboss.arquillian.junit.Arquillian$2.evaluate(Arquillian.java:173) at org.jboss.arquillian.junit.Arquillian.multiExecute(Arquillian.java:303) at org.jboss.arquillian.junit.Arquillian.access$300(Arquillian.java:45) at org.jboss.arquillian.junit.Arquillian$3.evaluate(Arquillian.java:187) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:127) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103) at $Proxy0.invoke(Unknown Source) at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150) at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69) Caused by: java.lang.NullPointerException at org.jboss.arquillian.testenricher.cdi.CDIInjectionEnricher.getBeanManager(CDIInjectionEnricher.java:51) at org.jboss.arquillian.testenricher.cdi.CDIInjectionEnricher.injectClass(CDIInjectionEnricher.java:100) ... 71 more TestCase class @RunWith(Arquillian.class) public class TestCase { @Deployment public static JavaArchive createDeployment() { return ShrinkWrap.create(JavaArchive.class).addClasses(TestEntity.class, Implementation.class) .addAsManifestResource(EmptyAsset.INSTANCE, ArchivePaths.create("beans.xml")); } @Inject Implementation implementation; @Test public void test() throws Exception { final TestEntity testEntity = implementation.create(); Assert.assertNotNull(testEntity); } } When I run this, I get a NullPointerException, the bean manager is null. 
    It looks like I am missing a step, but from the examples this should be all I need. Any ideas? Walter
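    For what it's worth, a null BeanManager in CDIInjectionEnricher is the classic symptom of no container adapter being on the test classpath. A hedged sketch of the missing piece for an embedded Weld SE setup (artifact coordinates and versions are from the Arquillian 1.0.0.CR era and should be verified against the Arquillian BOM):
        <!-- pom.xml: an embedded Weld container for Arquillian to run the tests in -->
        <dependency>
            <groupId>org.jboss.arquillian.container</groupId>
            <artifactId>arquillian-weld-se-embedded-1.1</artifactId>
            <version>1.0.0.CR3</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.weld</groupId>
            <artifactId>weld-core</artifactId>
            <version>1.1.5.Final</version>
            <scope>test</scope>
        </dependency>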

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about the "current picture" far more than history (though history matters, too). We are a government entity that does not have costs or related calculations - mostly just counts of people within given locations and with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible and would like to rearchitect everything to as closely resemble the star schema as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller)? I think my problem is that our data and use of it does not fit with the commonly used examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS a voter) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (And in case anyone is worried - no, we don't know HOW people vote, just that they do. LOL) I hope that clarifies things a bit.
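    A compact way to see the dual role described here, with hypothetical table and column names: the same person table serves as the fact of the registration star and as a dimension of the much larger voting-history fact, and a table can legitimately play both parts:
        -- person as the fact: registration counts by its dimensions
        SELECT p.party, l.county, COUNT(*) AS voters
        FROM person p
        JOIN location l ON l.location_id = p.location_id
        GROUP BY p.party, l.county;

        -- person as a dimension: voting_history is the fact
        SELECT e.election_date, p.party, COUNT(*) AS ballots
        FROM voting_history vh
        JOIN person p ON p.person_id = vh.person_id
        JOIN election e ON e.election_id = vh.election_id
        GROUP BY e.election_date, p.party;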

    Read the article

  • maven sonar problem

    - by senzacionale
    I want to use Sonar for analysis, but I can't get any data in localhost:9000. <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <artifactId>KIS</artifactId> <groupId>KIS</groupId> <version>1.0</version> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.4</version> <executions> <execution> <id>compile</id> <phase>compile</phase> <configuration> <tasks> <property name="compile_classpath" refid="maven.compile.classpath"/> <property name="runtime_classpath" refid="maven.runtime.classpath"/> <property name="test_classpath" refid="maven.test.classpath"/> <property name="plugin_classpath" refid="maven.plugin.classpath"/> <ant antfile="${basedir}/build.xml"> <target name="maven-compile"/> </ant> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Output when running Sonar (the JAR file is empty): [INFO] Executed tasks [INFO] [resources:testResources {execution: default-testResources}] [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] No sources to compile [INFO] [surefire:test {execution: default-test}] [INFO] No tests to run. [INFO] [jar:jar {execution: default-jar}] [WARNING] JAR will be empty - no content was marked for inclusion! [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar [INFO] [install:install {execution: default-install}] [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar [INFO] ------------------------------------------------------------------------ [INFO] Building Unnamed - KIS:KIS:jar:1.0 [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] Database dialect class org.sonar.api.database.dialect.Oracle [INFO] ------------- Analyzing Unnamed - KIS:KIS:jar:1.0 [INFO] Selected quality profile : KIS, language=java [INFO] Configure maven plugins... [INFO] Sensor SquidSensor... [INFO] Sensor SquidSensor done: 16 ms [INFO] Sensor JavaSourceImporter... [INFO] Sensor JavaSourceImporter done: 0 ms [INFO] Sensor AsynchronousMeasuresSensor... [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms [INFO] Sensor SurefireSensor... [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports [INFO] Sensor SurefireSensor done: 47 ms [INFO] Sensor ProfileSensor... [INFO] Sensor ProfileSensor done: 16 ms [INFO] Sensor ProjectLinksSensor... [INFO] Sensor ProjectLinksSensor done: 0 ms [INFO] Sensor VersionEventsSensor... [INFO] Sensor VersionEventsSensor done: 31 ms [INFO] Sensor CpdSensor... [INFO] Sensor CpdSensor done: 0 ms [INFO] Sensor Maven dependencies... [INFO] Sensor Maven dependencies done: 16 ms [INFO] Execute decorators... [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000 [INFO] Database optimization... [INFO] Database optimization done: 172 ms [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ [INFO] Total time: 6 minutes 16 seconds [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010 [INFO] Final Memory: 24M/43M [INFO] ------------------------------------------------------------------------ Any idea why? I compile the Java project successfully with the Maven AntRun plugin.
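    The log itself explains the empty dashboard: "No sources to compile" and an empty JAR mean the real compilation happens inside build.xml, outside Maven's project model, so Sonar finds nothing to measure. A hedged sketch (the path is hypothetical) of the minimum the POM needs so that Maven, and therefore Sonar, can see the code:
        <build>
            <!-- hypothetical path: the directory build.xml actually compiles from -->
            <sourceDirectory>src/main/java</sourceDirectory>
        </build>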

    Read the article

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as search backend because it has all search-engine features I was looking for, knows about Unicode, stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments), documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches, the machine is RAM- and CPU-constrained), or Lucene (slightly less headaches, but still). Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: Did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements: must be able to search for 10-100 terms in one query must handle + (term must be present) and - (term must not be present), AND/OR must handle arbitrary grouping (i.e. parentheses around AND/OR) must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about) alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet should be light on the machine, so preferably no humongous JVM and Java-based app-server installation Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts, mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. 
    Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies the minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem: the exact Linux distribution and release version; the exact release version of Haystack (or equivalent) and the release version of the search backend; the exact release version of the search engine; and publicly (!) available documentation on how to set up all components exactly in the way that your installation was set up, such that the minimal requirements above are met. Thank you.
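    To make the requirements concrete, this is roughly what they look like in Haystack's query API (a sketch only: SQ is from later Haystack 1.x releases, Note is a hypothetical model, and given the version trouble described above it should be read as the target behaviour, not a verified recipe):
        from haystack.query import SearchQuerySet, SQ
        from myapp.models import Note  # hypothetical model

        # +term, -term, AND/OR with arbitrary grouping via nested SQ objects
        sqs = (SearchQuerySet().models(Note)
               .filter(SQ(content='dictatorship') & (SQ(content='democracy') | SQ(content='republic')))
               .exclude(content='monarchy'))
        # bulk-fetch the hits and post-process with normal Django-ORM filters
        ids = [result.pk for result in sqs]
        notes = Note.objects.filter(pk__in=ids, published=True)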

    Read the article

  • Process spawned by exec-maven-plugin blocks the maven process

    - by Arnab Biswas
    I am trying to execute the following scenario using Maven: pre-integration-test phase: start a Java-based application using a main class (using exec-maven-plugin); integration-test phase: run the integration test cases (using maven-failsafe-plugin); post-integration-test phase: stop the application gracefully (using exec-maven-plugin). Here is a pom.xml snippet: <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <id>launch-myApp</id> <phase>pre-integration-test</phase> <goals> <goal>exec</goal> </goals> </execution> </executions> <configuration> <executable>java</executable> <arguments> <argument>-DMY_APP_HOME=/usr/home/target/local</argument> <argument>-Djava.library.path=/usr/home/other/lib</argument> <argument>-classpath</argument> <classpath/> <argument>com.foo.MyApp</argument> </arguments> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>2.12</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> </execution> </executions> <configuration> <forkMode>always</forkMode> </configuration> </plugin> </plugins> If I execute mvn post-integration-test, my application is started as a child process of the Maven process, but the application process blocks the Maven process from executing the integration tests, which come in the next phase. Later I found that there is a bug (or missing functionality?) in exec-maven-plugin, because of which the application process blocks the Maven process. To address this issue, I have encapsulated the invocation of MyApp.java in a shell script and then appended "> /dev/null 2>&1 &" to spawn a separate background process. Here is a snippet (not the actual script) from runTest.sh: java -DMY_APP_HOME=$2 com.foo.MyApp > /dev/null 2>&1 & Although this solves my issue, is there any other way to do it? Am I missing any argument for exec-maven-plugin?
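    For completeness, a fuller sketch of that wrapper approach (paths and argument positions hypothetical), writing a PID file so the post-integration-test phase has something to kill for the graceful stop:
        #!/bin/sh
        # runTest.sh - start the app detached so Maven is not blocked
        nohup java -DMY_APP_HOME="$2" -cp "$1" com.foo.MyApp > /dev/null 2>&1 &
        echo $! > /tmp/myapp.pid

        # stopTest.sh - bound to the post-integration-test phase
        kill "$(cat /tmp/myapp.pid)" && rm -f /tmp/myapp.pid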

    Read the article

  • Why do UInt16 arrays seem to add faster than int arrays?

    - by scraimer
    It seems that C# is faster at adding two arrays of UInt16[] than it is at adding two arrays of int[]. This makes no sense to me, since I would have assumed the arrays would be word-aligned, and thus int[] would require less work from the CPU, no? I ran the test-code below, and got the following results: Int for 1000 took 9896625613 tick (4227 msec) UInt16 for 1000 took 6297688551 tick (2689 msec) The test code does the following: Creates two arrays named a and b, once. Fills them with random data, once. Starts a stopwatch. Adds a and b, item-by-item. This is done 1000 times. Stops the stopwatch. Reports how long it took. This is done for int[] a, b and for UInt16 a,b. And every time I run the code, the tests for the UInt16 arrays take 30%-50% less time than the int arrays. Can you explain this to me? Here's the code, if you want to try if for yourself: public static UInt16[] GenerateRandomDataUInt16(int length) { UInt16[] noise = new UInt16[length]; Random random = new Random((int)DateTime.Now.Ticks); for (int i = 0; i < length; ++i) { noise[i] = (UInt16)random.Next(); } return noise; } public static int[] GenerateRandomDataInt(int length) { int[] noise = new int[length]; Random random = new Random((int)DateTime.Now.Ticks); for (int i = 0; i < length; ++i) { noise[i] = (int)random.Next(); } return noise; } public static int[] AddInt(int[] a, int[] b) { int len = a.Length; int[] result = new int[len]; for (int i = 0; i < len; ++i) { result[i] = (int)(a[i] + b[i]); } return result; } public static UInt16[] AddUInt16(UInt16[] a, UInt16[] b) { int len = a.Length; UInt16[] result = new UInt16[len]; for (int i = 0; i < len; ++i) { result[i] = (ushort)(a[i] + b[i]); } return result; } public static void Main() { int count = 1000; int len = 128 * 6000; int[] aInt = GenerateRandomDataInt(len); int[] bInt = GenerateRandomDataInt(len); Stopwatch s = new Stopwatch(); s.Start(); for (int i=0; i<count; ++i) { int[] resultInt = AddInt(aInt, bInt); } s.Stop(); Console.WriteLine("Int for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)"); UInt16[] aUInt16 = GenerateRandomDataUInt16(len); UInt16[] bUInt16 = GenerateRandomDataUInt16(len); s = new Stopwatch(); s.Start(); for (int i=0; i<count; ++i) { UInt16[] resultUInt16 = AddUInt16(aUInt16, bUInt16); } s.Stop(); Console.WriteLine("UInt16 for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)"); }

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >