Search Results

Search found 3773 results on 151 pages for 'args'.

Page 106/151 | < Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import pytest

        class testsomething:
            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here

            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here

            def test1(self):
                # Do some test1 related stuff

            def test2(self):
                # Do some test2 related stuff

            ...

            def test40(self):
                # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but the individual tests within each testsuite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This leads to the 'setup_class' method being executed for every test within a 'testsuitex.py' module, which defeats my purpose. I want setup_class to be executed only once per class and the tests executed serially after that. Essentially, what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the pytest-xdist framework? The only option that I can think of is to kick off a different process to execute each test suite individually within runner.py:

        def test_execute_func(testsuite_path):
            subprocess.process('py.test %s' % testsuite_path)

        if __name__ == '__main__':
            # Gather all the testsuite names
            for each testsuite:
                multiprocessing.Process(test_execute_func, (testsuite_path,))

    Read the article

  • extension methods with generics - when does the caller need to include type parameters?

    - by Greg
    Hi, Is there a rule for knowing when the client code has to pass the generic type parameters explicitly when calling an extension method? For example, in the Program class below, why can I (a) omit the type parameters for top.AddNode(node), whereas (b) for the later top.AddRelationship line I have to supply them?

        class Program
        {
            static void Main(string[] args)
            {
                // Create Graph
                var top = new TopologyImp<string>();

                // Add Node
                var node = new StringNode();
                node.Name = "asdf";
                var node2 = new StringNode();
                node2.Name = "test child";
                top.AddNode(node);
                top.AddNode(node2);

                top.AddRelationship<string, RelationshipsImp>(node, node2);  // *** HERE ***
            }
        }

        public static class TopologyExtns
        {
            public static void AddNode<T>(this ITopology<T> topIf, INode<T> node)
            {
                topIf.Nodes.Add(node.Key, node);
            }

            public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey)
            {
                return topIf.Nodes[searchKey];
            }

            public static void AddRelationship<T, R>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode)
                where R : IRelationship<T>, new()
            {
                var rel = new R();
                rel.Child = childNode;
                rel.Parent = parentNode;
            }
        }

        public class TopologyImp<T> : ITopology<T>
        {
            public Dictionary<T, INode<T>> Nodes { get; set; }

            public TopologyImp()
            {
                Nodes = new Dictionary<T, INode<T>>();
            }
        }
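    A note on the likely cause (not part of the original question): C# infers generic type arguments only from the method's arguments, so inference works for AddNode<T> because T appears in the parameter list, while AddRelationship<T, R> uses R only in its constraint and in "new R()", so R can never be inferred; and since C# has no partial inference, both type arguments must then be written out. The following is a minimal illustrative sketch of that rule; the names are hypothetical and not from the post.

        using System;

        static class InferenceDemo
        {
            // T appears in the parameter list, so the compiler can infer it.
            public static void Inferred<T>(T value)
            {
                Console.WriteLine(typeof(T));
            }

            // R appears only in the constraint and in "new R()", never in a
            // parameter, so it cannot be inferred; the caller must supply it,
            // and type arguments are all-or-nothing, so T must be given too.
            public static R NotInferred<T, R>(T value) where R : new()
            {
                return new R();
            }

            static void Main()
            {
                Inferred(42);                          // OK: T = int is inferred
                // NotInferred(42);                    // error: R cannot be inferred
                var r = NotInferred<int, object>(42);  // both type arguments required
            }
        }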

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signature of the methods varies between the client and the service, and I get some unwanted parameters in the method. Note: I have used IsRequired = true for the DataMember.

    Service:

        [OperationContract]
        int GetInt();

    Client:

        proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me make the schema non-varying for both the WCF client and the non-WCF client? Do we have any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService
        {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService
            {
                [DataMember(IsRequired = true)]
                int resultIntVal = 1;
                int firstVal = 1;

                public int GetInt()
                {
                    return firstVal;
                }

                public int SecondTestInt()
                {
                    return resultIntVal;
                }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());

                    Console.ReadLine();
                }
            }
        }

    Please help. Thanks, Lijo

    Read the article

  • Project Euler, Problem 10 java solution not working

    - by Dennis S
    Hi, I'm trying to find the sum of the prime numbers below 2,000,000. This is my solution in Java, but I can't seem to get the correct answer. Please give some input on what could be wrong; general advice on the code is also appreciated. Printing 'sum' gives 1308111344, which is incorrect.

        /*
        The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
        Find the sum of all the primes below two million.
        */
        class Helper {
            public void run() {
                Integer sum = 0;
                for (int i = 2; i < 2000000; i++) {
                    if (isPrime(i))
                        sum += i;
                }
                System.out.println(sum);
            }

            private boolean isPrime(int nr) {
                if (nr == 2)
                    return true;
                else if (nr == 1)
                    return false;
                if (nr % 2 == 0)
                    return false;
                for (int i = 3; i < Math.sqrt(nr); i += 2) {
                    if (nr % i == 0)
                        return false;
                }
                return true;
            }
        }

        class Problem {
            public static void main(String[] args) {
                Helper p = new Helper();
                p.run();
            }
        }
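    For what it is worth, the two usual suspects in this kind of solution are (a) accumulating the sum in a 32-bit integer, which overflows, and (b) stopping the trial division strictly below sqrt(n), which misclassifies perfect squares such as 9. The sketch below illustrates both fixes; it is written in C# rather than Java and is not the original poster's code.

        using System;

        class PrimeSum
        {
            static bool IsPrime(int n)
            {
                if (n < 2) return false;
                if (n % 2 == 0) return n == 2;
                for (int i = 3; (long)i * i <= n; i += 2)  // note: <=, not <
                    if (n % i == 0) return false;
                return true;
            }

            static void Main()
            {
                long sum = 0;                              // 64-bit accumulator
                for (int i = 2; i < 2000000; i++)
                    if (IsPrime(i)) sum += i;
                Console.WriteLine(sum);
            }
        }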

    Read the article

  • Why does TrimStart trim one char more when asked to trim "PRN.NUL"?

    - by James
    Here is the code:

        namespace TrimTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string ToTrim = "PRN.NUL";
                    Console.WriteLine(ToTrim);
                    string Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "PRN.AUX";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("PRN.".ToCharArray());
                    Console.WriteLine(Trimmed);

                    ToTrim = "AUX.NUL";
                    Console.WriteLine(ToTrim);
                    Trimmed = ToTrim.TrimStart("AUX.".ToCharArray());
                    Console.WriteLine(Trimmed);
                }
            }
        }

    The output is like this:

        PRN.NUL
        UL
        PRN.AUX
        AUX
        AUX.NUL
        NUL

    As you can see, TrimStart took out the N from NUL, but it doesn't do that for the other strings, even one that also starts with PRN. I tried with .NET Framework 3.5 and 4.0 and the results are the same. Is there any explanation of what causes this behavior?
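    The likely explanation (not part of the original question) is that TrimStart(params char[]) treats its argument as a set of characters rather than as a prefix string: it keeps removing leading characters while each one is a member of the set. "PRN." contributes the set {'P','R','N','.'}, and after "PRN." has been removed from "PRN.NUL" the next character 'N' is still in that set, so it is removed too. A small hedged sketch:

        using System;

        class TrimStartAsSet
        {
            static void Main()
            {
                char[] set = "PRN.".ToCharArray();            // the set {'P','R','N','.'}

                // 'P','R','N','.' go, then the 'N' of "NUL" is also in the set,
                // so it goes as well; 'U' is not, so trimming stops there.
                Console.WriteLine("PRN.NUL".TrimStart(set));  // UL

                // After "PRN." the next character is 'A', which is not in the set.
                Console.WriteLine("PRN.AUX".TrimStart(set));  // AUX

                // To strip a literal prefix instead, test for the prefix explicitly.
                string s = "PRN.NUL", prefix = "PRN.";
                string stripped = s.StartsWith(prefix, StringComparison.Ordinal)
                    ? s.Substring(prefix.Length)
                    : s;
                Console.WriteLine(stripped);                  // NUL
            }
        }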

    Read the article

  • C++ data member initializer is not allowed

    - by user1435915
    I'm totally new to C++ so bear with me. I want to make a class with a static array and access this array from main. Here is what I want to do in C#:

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Class a = new Class();
                    Console.WriteLine(a.arr[1]);
                }
            }
        }

        =====================

        namespace ConsoleApplication1
        {
            class Class
            {
                public static string[] s_strHands = new string[] { "one", "two", "three" };
            }
        }

    Here is what I have tried:

        // justfoolin.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include <string>
        #include <iostream>
        using namespace std;

        class Class
        {
        public:
            static string arr[3] = {"one", "two", "three"};
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            Class x;
            cout << x.arr[2] << endl;
            return 0;
        }

    But I got: IntelliSense: data member initializer is not allowed

    Read the article

  • Broken console in Maven project using Netbeans

    - by Maciek Sawicki
    Hi, I have a strange problem with my Netbeans+Maven installation. This is the shortest code that reproduces the problem:

        import java.util.Scanner;

        public class App
        {
            public static void main( String[] args )
            {
                // Create a scanner to read from keyboard
                Scanner scanner = new Scanner(System.in);
                Scanner s = new Scanner(System.in);
                String param = s.next();
                System.out.println(param);
            }
        }

    When I'm running it as a Maven project inside Netbeans, the console seems to be broken. It just ignores my input; it looks like an infinite loop in System.out.println(param);. However, this project works fine when it's compiled as a "Java Application" project. It also works OK if I build and run it from cmd.

    System info:

        OS: Vista
        IDE: Netbeans 6.8
        Maven: apache-maven-2.2.1

    Edit: The built program (using Maven from Netbeans) works fine; I just can't test it using Netbeans. And I think I forgot to ask the question ;). So of course my first question is: how can I fix this problem? And second: is there any workaround for this, for example configuring Netbeans to run an external command-line app instead of using the built-in console?

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. It has already been profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar. I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true? However, I don't know how to think in actors, so I am not sure where to start. To give a better idea of the problem, here is some sample code:

        case class Trade(price: Double, volume: Int, stock: String) {
          def value(priceCalculator: PriceCalculator) =
            (priceCalculator.priceFor(stock) -> price) * volume
        }

        class PriceCalculator {
          def priceFor(stock: String) = {
            Thread.sleep(20) // a slow operation which can be cached
            50.0
          }
        }

        object ValueTrades {
          def valueAll(trades: List[Trade],
                       priceCalculator: PriceCalculator): List[(Trade, Double)] = {
            trades.map { trade => (trade, trade.value(priceCalculator)) }
          }

          def main(args: Array[String]) {
            val trades = List(
              Trade(30.5, 10, "Foo"),
              Trade(30.5, 20, "Foo") // usually much longer
            )
            val priceCalculator = new PriceCalculator
            val values = valueAll(trades, priceCalculator)
          }
        }

    I'd appreciate it if someone with experience using actors could suggest how this would map on to actors.

    Read the article

  • What does the windbg command "kd" do?

    - by Oskar
    I ran kd by mistake and got some output that interested me: a reference to a line of code in my module that I can't see on the call stack of any thread. The lines weren't the beginning of the method, so I don't think the reference is to a function pointer, but possibly the result of an exception being stored in memory??? Of course, that happens to be what I'm looking for...

    Update: The stack trace of the exception is:

        0:000> kb
        *** Stack trace for last set context - .thread/.cxr resets it
        ChildEBP RetAddr  Args to Child
        0174f168 734ea84f 2cb9e950 00000000 2cb9e950 kernel32!LoadTimeZoneInformation+0x2b
        0174f1c4 734ead92 00000022 00000001 000685d0 msvbvm60!RUN_INSTMGR::ExecuteInitTerm+0x178
        0174f1f8 734ea9ee 00000000 0000002f 2dbc2abc msvbvm60!RUN_INSTMGR::CreateObjInstanceWithParts+0x1e4
        0174f278 7350414e 2cb9e96c 00000000 0174f2f0 msvbvm60!RUN_INSTMGR::CreateObjInstance+0x14d
        0174f2e4 734fa071 00000000 2cb9e96c 0174f2fc msvbvm60!RcmConstructObjectInstance+0x75
        0174f31c 00976ef1 2cb9e950 00591bc0 0174fddc msvbvm60!__vbaNew+0x21

    and into our code (create a new Form derived class). The dds output:

        0:000> dds esp-0x40 esp+0x100
        0174f05c 00000000
        0174f060 00000000
        0174f064 00000000
        0174f068 00000000
        0174f06c 00000000
        0174f070 00000000
        0174f074 00000000
        0174f078 00000000
        0174f07c 00000000
        0174f080 00000000
        0174f084 00000000
        0174f088 00000000
        0174f08c 00000000
        0174f090 00000000
        0174f094 00000000
        0174f098 00000000
        0174f09c 007f4f9b ourDll!formDerivedClass::Form_Initialize+0x10b [C:\Buildbox\formDerivedClass.frm @ 1452]

    etc., which seems to indicate that Initialize is being called even though it isn't on the stack trace of this exception or of any of the threads. As suggested, it might all be a mismatch between PDBs and DLLs, but it seems a coincidence that we end up in the right classes and methods.

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms

        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string s;
                    while ((s = Console.ReadLine()) != null)
                    {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when the same API is called from PowerShell instead of C#, many parts of StdIn get 'eaten'. Is the PowerShell host reading strings from StdIn even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • Unity in C# for Platform Specific Implementations

    - by DxCK
    My program interacts heavily with the operating system through Win32 API functions. Now I want to migrate the program to run under Mono on Linux (no Wine), and this requires different implementations of the interaction with the operating system. I started designing code that can have a different implementation for each platform and is extensible for future platforms.

        public interface ISomeInterface
        {
            void SomePlatformSpecificOperation();
        }

        [PlatformSpecific(PlatformID.Unix)]
        public class SomeImplementation : ISomeInterface
        {
            #region ISomeInterface Members
            public void SomePlatformSpecificOperation()
            {
                Console.WriteLine("From SomeImplementation");
            }
            #endregion
        }

        public class PlatformSpecificAttribute : Attribute
        {
            private PlatformID _platform;

            public PlatformSpecificAttribute(PlatformID platform)
            {
                _platform = platform;
            }

            public PlatformID Platform
            {
                get { return _platform; }
            }
        }

        public static class PlatformSpecificUtils
        {
            public static IEnumerable<Type> GetImplementationTypes<T>()
            {
                foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
                {
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(T).IsAssignableFrom(type) && type != typeof(T) && IsPlatformMatch(type))
                        {
                            yield return type;
                        }
                    }
                }
            }

            private static bool IsPlatformMatch(Type type)
            {
                return GetPlatforms(type).Any(platform => platform == Environment.OSVersion.Platform);
            }

            private static IEnumerable<PlatformID> GetPlatforms(Type type)
            {
                return type.GetCustomAttributes(typeof(PlatformSpecificAttribute), false)
                    .Select(obj => ((PlatformSpecificAttribute)obj).Platform);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Type first = PlatformSpecificUtils.GetImplementationTypes<ISomeInterface>().FirstOrDefault();
            }
        }

    I see two problems with this design:

        1. I can't force the implementations of ISomeInterface to have a PlatformSpecificAttribute.
        2. Multiple implementations can be marked with the same PlatformID, and I don't know which to use in Main. Using the first one is, um, ugly.

    How do I solve these problems? Can you suggest another design?
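    One commonly suggested alternative (an assumption on my part, not something proposed in the question) is to drop the attribute scan and register exactly one factory per platform in a dictionary: a duplicate registration then fails at registration time, and there is never a "first match" to pick. A minimal sketch with hypothetical names:

        using System;
        using System.Collections.Generic;

        public interface ISomeInterface
        {
            void SomePlatformSpecificOperation();
        }

        public static class PlatformRegistry
        {
            private static readonly Dictionary<PlatformID, Func<ISomeInterface>> factories =
                new Dictionary<PlatformID, Func<ISomeInterface>>();

            public static void Register(PlatformID platform, Func<ISomeInterface> factory)
            {
                factories.Add(platform, factory);   // throws if a platform is registered twice
            }

            public static ISomeInterface Resolve()
            {
                return factories[Environment.OSVersion.Platform]();
            }
        }

        class UnixImplementation : ISomeInterface
        {
            public void SomePlatformSpecificOperation() { Console.WriteLine("Unix"); }
        }

        class Win32Implementation : ISomeInterface
        {
            public void SomePlatformSpecificOperation() { Console.WriteLine("Win32"); }
        }

        class Program
        {
            static void Main()
            {
                PlatformRegistry.Register(PlatformID.Unix, () => new UnixImplementation());
                PlatformRegistry.Register(PlatformID.Win32NT, () => new Win32Implementation());

                PlatformRegistry.Resolve().SomePlatformSpecificOperation();
            }
        }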

    Read the article

  • Importing into an Exported object with MEF

    - by Nathan W
    I'm sorry if this question has already been asked 100 times, but I'm really struggling to get it to work. Say I have three projects:

        Core.dll    - has common interfaces
        Shell.exe   - loads all modules in the assembly folder; references Core.dll
        ModuleA.dll - exports Name, Version of the module; references Core.dll

    Shell.exe has an [Export] that contains a single instance of a third-party application that I need to inject into all loaded modules. So far the code that I have in Shell.exe is:

        static void Main(string[] args)
        {
            ThirdPartyApp map = new ThirdPartyApp();
            var ad = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var dircatalog = new DirectoryCatalog(".");
            var a = new AggregateCatalog(dircatalog, ad);
            // Not too sure what to do here.
        }

        class Test
        {
            [Export(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }

            [Import(typeof(IModule))]
            public IModule Module { get; set; }
        }

    I need to create an instance of Test, load Instance with map from the Main method, then load the Module from ModuleA.dll that is in the executing directory and [Import] Instance into the loaded module. In ModuleA I have a class like this:

        [Export(typeof(IModule))]
        class Module : IModule
        {
            [Import(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }
        }

    I know I'm halfway there, I just don't know how to put it all together, mainly loading up Test with an instance of map from Main. Could anyone help me with this?
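    A hedged sketch of one way this is often wired up with MEF in .NET 4 (this is an assumption, not code from the post, and it reuses the question's ThirdPartyApp and IModule types): add the existing ThirdPartyApp instance to the container as an export, then ask the container for the IModule so that its imports are satisfied from the same container.

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        static class ShellComposition
        {
            static void Main()
            {
                var map = new ThirdPartyApp();          // type from the question's Shell.exe

                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(Assembly.GetExecutingAssembly()),
                    new DirectoryCatalog("."));

                using (var container = new CompositionContainer(catalog))
                {
                    // Make the single ThirdPartyApp instance available to all parts.
                    container.ComposeExportedValue(map);

                    // Building the module from the catalog satisfies its
                    // [Import(typeof(ThirdPartyApp))] property with 'map'.
                    IModule module = container.GetExportedValue<IModule>();

                    // ... hand 'module' to the shell from here.
                }
            }
        }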

    Read the article

  • Visual Studio Unit Test failure to start

    - by swmi
    Hi, I am having an issue when starting tests in debug mode in Visual Studio 2008 Team Test, where it gives the following error: "Failed to queue test run '{user@machinename}': Object reference not set to an instance of an object." I googled for the error but no joy. I don't even understand what it means, as it is too brief. Has anyone come across this? Note that I can run tests fine if I am not debugging, and I get the same error irrespective of the test I run. Thank you, Swati

    ETA: Being new to Visual Studio Team Test, I didn't know there was a better exception log than what I was seeing. Anyhow, here it is:

        <Exception>
        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.ShowToolWindow[T](T& toolWindow, String errorMessage, Boolean show)
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.OpenTestResultsToolWindow()
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.SolutionIntegrationManager.DebugTarget(DebugInfo debugInfo, Boolean prepareEnvironment)
           at Microsoft.VisualStudio.TestTools.TestManagement.DebugProcessLauncher.Launch(String exeFileName, String args, String workingDir, EventHandler processExitedHandler, Process& process)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.StartProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.RestartProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.PrepareProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.InitializeController(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.QueueTestRunWorker(Object state)
        </Exception>

    Read the article

  • Django: DatabaseLockError exception with Djapian

    - by jul
    Hi, I've got the exception shown below when executing indexer.update(). I have no idea what to do: it used to work, and now the index database seems "locked". Can anybody help? Thanks

        Environment:
        Request Method: POST
        Request URL: http://piem.org:8000/restaurant/add/
        Django Version: 1.1.1
        Python Version: 2.5.2
        Installed Applications:
        ['django.contrib.auth',
         'django.contrib.contenttypes',
         'django.contrib.sessions',
         'django.contrib.comments',
         'django.contrib.sites',
         'django.contrib.admin',
         'registration',
         'djapian',
         'resto',
         'multilingual']
        Installed Middleware:
        ('django.middleware.common.CommonMiddleware',
         'django.contrib.sessions.middleware.SessionMiddleware',
         'django.contrib.auth.middleware.AuthenticationMiddleware',
         'django.middleware.locale.LocaleMiddleware',
         'multilingual.middleware.DefaultLanguageMiddleware')

        Traceback:
        File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
          639. Restaurant.indexer.update()
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/indexer.py" in update
          181. database = self._db.open(write=True)
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/database.py" in open
          20. xapian.DB_CREATE_OR_OPEN,
        File "/usr/lib/python2.5/site-packages/xapian.py" in __init__
          2804. _xapian.WritableDatabase_swiginit(self, _xapian.new_WritableDatabase(*args))

        Exception Type: DatabaseLockError at /restaurant/add/
        Exception Value: Unable to acquire database write lock on /home/jul/atable/djapian_spaces/resto/restaurant/resto.index.restaurantindexer: already locked

    Read the article

  • Why two subprocesses created by Java behave differently?

    - by Lily
    I use Java Runtime.getRuntime().exec(command) to create a subprocess and print its pid as follows:

        public static void main(String[] args) {
            Process p2;
            try {
                p2 = Runtime.getRuntime().exec(cmd);
                Field f2 = p2.getClass().getDeclaredField("pid");
                f2.setAccessible(true);
                System.out.println(f2.get(p2));
            } catch (Exception ie) {
                System.out.println("Yikes, you are not supposed to be here");
            }
        }

    I tried both a C++ executable and a Java executable (.jar file). Both executables continuously print "Hello World" to stdout. When cmd is the C++ executable, the pid is printed to the console but the subprocess gets killed as soon as main() returns. However, when I call the .jar executable in cmd, the subprocess does not get killed, which is the desired behavior. I don't understand why the same Java code, with different executables, can behave so differently. How should I modify my code so that I can have persistent subprocesses in Java? I'm a newbie in this field; any suggestion is welcome. Lily

    Read the article

  • What's keeping this timer in scope? The anonymous method?

    - by Andy
    OK, so I have a method which fires when someone clicks on our icon in a Silverlight application, seen below:

        private void Logo_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            e.Handled = true;
            ShowInfo(true);

            DispatcherTimer autoCloseTimer = new DispatcherTimer();
            autoCloseTimer.Interval = new TimeSpan(0, 0, 10);
            autoCloseTimer.Tick += new EventHandler((timerSender, args) =>
            {
                autoCloseTimer.Stop();
                ShowInfo(false);
            });
            autoCloseTimer.Start();
        }

    What's meant to happen is that the method ShowInfo() opens up a box with the company info in it, and the dispatcher timer auto-closes it after the given timespan. And this all works... But what I'm not sure about is: because the dispatcher timer is a local variable, after the Logo_MouseLeftButtonUp method finishes, what keeps the dispatcher timer referenced and not available for GC collection before the anonymous method is fired? Is it the reference to the ShowInfo() method in the anonymous method? It just feels like something I should understand more deeply, as I can imagine that with events etc. it can be very easy to create a leak with something like this. Hope this all makes sense! Andy.
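    A hedged explanation (not part of the original question): two things keep the chain alive. The lambda captures autoCloseTimer (and this, because ShowInfo is an instance method) into a compiler-generated closure object that the Tick handler references, and a started DispatcherTimer is itself held by its Dispatcher until Stop() is called, so nothing in the chain is eligible for collection while the timer is running. Roughly what the compiler generates, as an illustrative sketch with hypothetical names:

        using System;
        using System.Windows.Threading;

        public partial class MainPage
        {
            // Stand-in for the compiler-generated closure class.
            private sealed class Closure
            {
                public DispatcherTimer AutoCloseTimer;
                public MainPage Owner;                 // captured 'this' for ShowInfo

                public void OnTick(object sender, EventArgs args)
                {
                    AutoCloseTimer.Stop();             // after this, the Dispatcher lets go of the timer
                    Owner.ShowInfo(false);
                }
            }

            private void Logo_MouseLeftButtonUp_Expanded()
            {
                var closure = new Closure { Owner = this };
                closure.AutoCloseTimer = new DispatcherTimer { Interval = new TimeSpan(0, 0, 10) };
                closure.AutoCloseTimer.Tick += closure.OnTick;
                closure.AutoCloseTimer.Start();        // the Dispatcher now roots the timer
            }

            private void ShowInfo(bool show) { /* ... */ }
        }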

    Read the article

  • Different behaviour of method overloading in C#

    - by Wondering
    Hi All, I was going through C# Brainteasers (http://www.yoda.arachsys.com/csharp/teasers.html) and came across one question: what should be the output of the code below?

        class Base
        {
            public virtual void Foo(int x)
            {
                Console.WriteLine("Base.Foo(int)");
            }
        }

        class Derived : Base
        {
            public override void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Test
        {
            static void Main()
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // it prints "Derived.Foo(object)"
            }
        }

    But if I change the code to:

        class Derived
        {
            public void Foo(int x)
            {
                Console.WriteLine("Derived.Foo(int)");
            }

            public void Foo(object o)
            {
                Console.WriteLine("Derived.Foo(object)");
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Derived d = new Derived();
                int i = 10;
                d.Foo(i); // prints "Derived.Foo(int)"
                Console.ReadKey();
            }
        }

    I want to know why the output changes when we are inheriting vs. not inheriting, and why method overloading behaves differently in the two cases.
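    The short, hedged explanation (not part of the original post): when the compiler resolves d.Foo(i), member lookup starts at the most derived type and ignores methods marked override, so in the first version the only candidate declared directly on Derived is Foo(object), while Foo(int) counts as declared on Base; because an applicable candidate exists on the more derived type, Foo(object) wins. In the second version there is no inheritance, both overloads live on Derived, and the more specific Foo(int) is chosen. A small sketch showing how to reach the other overload:

        using System;

        class Base
        {
            public virtual void Foo(int x) { Console.WriteLine("Base.Foo(int)"); }
        }

        class Derived : Base
        {
            public override void Foo(int x) { Console.WriteLine("Derived.Foo(int)"); }
            public void Foo(object o)       { Console.WriteLine("Derived.Foo(object)"); }
        }

        class Demo
        {
            static void Main()
            {
                var d = new Derived();
                int i = 10;

                d.Foo(i);          // Derived.Foo(object): the override is not a candidate here
                ((Base)d).Foo(i);  // Derived.Foo(int): resolved against Base, dispatched virtually
            }
        }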

    Read the article

  • Java downcasting dilemma

    - by Shades88
    Please have a look at this code:

        class Vehicle {
            public void printSound() {
                System.out.print("vehicle");
            }
        }

        class Car extends Vehicle {
            public void printSound() {
                System.out.print("car");
            }
        }

        class Bike extends Vehicle {
            public void printSound() {
                System.out.print("bike");
            }
        }

        public class Test {
            public static void main(String[] args) {
                Vehicle v = new Car();
                Bike b = (Bike) v;
                v.printSound();
                b.printSound();

                Object myObj = new String[]{"one", "two", "three"};
                for (String s : (String[]) myObj)
                    System.out.print(s + ".");
            }
        }

    Executing this code gives a ClassCastException saying inheritance.Car cannot be cast to inheritance.Bike. Now look at the line Object myObj = new String[]{"one", "two", "three"};. This line is the same as Vehicle v = new Car(); right? In both lines we are assigning a subclass object to a superclass reference variable. But downcasting (String[]) myObj is allowed while (Bike) v is not. Please help me understand what is going on here.

    Read the article

  • How to get SimpleRpcClient.Call() to be a blocking call to achieve synchronous communication with RabbitMQ?

    - by Nick Josevski
    In the .NET version (2.4.1) of RabbitMQ, the RabbitMQ.Client.MessagePatterns.SimpleRpcClient has a Call() method with these signatures:

        public virtual object[] Call(params object[] args);
        public virtual byte[] Call(byte[] body);
        public virtual byte[] Call(IBasicProperties requestProperties, byte[] body, out IBasicProperties replyProperties);

    The problem: with various attempts, the method still does not block where I expect it to, so it is never able to handle the response.

    The question: am I missing something obvious in the setup of the SimpleRpcClient, or earlier with the IModel, IConnection, or even PublicationAddress?

    More info: I've also tried various parameter configurations of the QueueDeclare() method too, with no luck:

        string QueueDeclare(string queue, bool durable, bool exclusive, bool autoDelete, IDictionary arguments);

    Some more reference code for my setup of these:

        IConnection conn = new ConnectionFactory { Address = "127.0.0.1" }.CreateConnection();
        using (IModel ch = conn.CreateModel())
        {
            var client = new SimpleRpcClient(ch, queueName);
            var queueName = ch.QueueDeclare("t.qid", true, true, true, null);
            ch.QueueBind(queueName, "exch", "", null);

            // HERE: does not block?
            var replyMessageBytes = client.Call(prop, msgToSend, out replyProp);
        }

    Looking elsewhere: or is it likely there's an issue in my "server side" code? With and without the use of BasicAck(), it appears the client has already continued execution.

    Read the article

  • Nose2 multiprocess error on Windows7

    - by tt293
    I was looking into nose2 as a way to get around the restrictions of having both xunit output and multiprocessing in nose 1.3. However, when always-on is set to False in the [multiprocess] section, I can only get a single process running, while when running with always-on set to True, I get the following error:

        ----------------------------------------------------------------------
        Ran 0 tests in 0.043s

        OK
        Traceback (most recent call last):
          File "C:\dev\testing\Tests\PythonTests\venv\Scripts\nose2-script.py", line 8, in <module>
            load_entry_point('nose2==0.4.7', 'console_scripts', 'nose2')()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 284, in discover
            return main(*args, **kwargs)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 98, in __init__
            super(PluggableTestProgram, self).__init__(**kw)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\unittest2-0.5.1-py2.7.egg\unittest2\main.py", line 98, in __init__
            self.runTests()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 260, in runTests
            self.result = runner.run(self.test)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\runner.py", line 53, in run
            executor(test, result)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\plugins\mp.py", line 60, in _runmp
            ready, _, _ = select.select(rdrs, [], [], self.testRunTimeout)
        select.error: (10038, 'An operation was attempted on something that is not a socket')

    This is running Python 2.7.5 (32-bit) on Windows 7 in a virtualenv with six 1.1.0, unittest2 0.5.1 and nose2 0.4.7 (I get the same behavior outside of the venv, so I don't think that is the issue here).

    Read the article

  • Java: creating objects of arrays with different names at runtime and accessing/updating them

    - by scriptingalias
    Hello, I'm trying to create a class that can instantiate arrays at runtime by giving each array a "name" created by the createtempobjectname() method. I'm having trouble making this program run. I would also like to see how I could access specific objects that were created during runtime and access those arrays to change or read their values. This is my mess so far, which compiles but gets a runtime exception:

        import java.lang.reflect.Array;

        public class arrays {
            private static String temp;
            public static int name = 0;
            public static Object o;
            public static Class c;

            public static void main(String... args) {
                assignobjectname();
                // getclassname(); // this is supposed to get the name of the object and somehow
                //                 // allow the arrays to become updated using more code?
            }

            public static void getclassname() {
                String s = c.getName();
                System.out.println(s);
            }

            // this creates the object by the name returned by createtempobjectname()
            public static void assignobjectname() {
                try {
                    String object = createtempobjectname();
                    c = Class.forName(object);
                    o = Array.newInstance(c, 20);
                } catch (ClassNotFoundException exception) {
                    exception.printStackTrace();
                }
            }

            public static String createtempobjectname() {
                name++;
                temp = Integer.toString(name);
                return temp;
            }
        }

    Read the article

  • Wrong answer on C# application, finding the n-th prime of a list of prime numbers.

    - by user300484
    Hello, I am new to C# and I am practicing by trying to solve this easy C# problem. The application receives a number "n". After receiving this number, the program has to show the n-th prime from the list of primes. For example, if the user enters "3", the program is supposed to display "5", because 5 is the third prime starting at 2. I know that something is wrong with my code, but I don't know where the problem is or how to fix it. Can you please tell me? Thank you :P

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("Determinar el n-esimo primo.");
                    long n = Convert.ToInt64(Console.ReadLine()); // N lugar de primos
                    long[] array = new long[n];
                    long c = 0;
                    while (c >= 2)
                    {
                        if (siprimo(c++) == true)
                            for (long i = 0; i < n; i++)
                            {
                                array[i] = c;
                            }
                    }
                    Console.WriteLine(array[n - 1]);
                    Console.ReadLine();
                }

                static private bool siprimo(long x)
                {
                    bool sp = true;
                    for (long k = 2; k <= x / 2; k++)
                        if (x % k == 0)
                            sp = false;
                    return sp;
                }
            }
        }
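    Two hedged observations (not part of the original post): c starts at 0, so the condition while (c >= 2) is false immediately and the loop body never runs; and the inner for loop writes the same candidate into every slot of the array instead of storing each prime at the next free position. A minimal sketch of one way to structure the search; this is illustrative code, not the original author's:

        using System;

        class NthPrime
        {
            static bool IsPrime(long x)
            {
                if (x < 2) return false;
                for (long k = 2; k * k <= x; k++)
                    if (x % k == 0) return false;
                return true;
            }

            static void Main()
            {
                Console.WriteLine("Determinar el n-esimo primo.");
                long n = Convert.ToInt64(Console.ReadLine());

                long found = 0;
                long candidate = 1;
                while (found < n)               // count primes until the n-th one is reached
                {
                    candidate++;
                    if (IsPrime(candidate))
                        found++;
                }

                Console.WriteLine(candidate);   // n = 3 prints 5
                Console.ReadLine();
            }
        }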

    Read the article

  • How to: generate UnhandledException?

    - by serhio
    I use this code to catch the WinForms application's unhandled exceptions:

        [STAThread]
        static void Main(string[] args)
        {
            // Add the event handler for handling UI thread exceptions to the event.
            Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);

            // Set the unhandled exception mode to force all Windows Forms errors
            // to go through our handler.
            Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

            // Add the event handler for handling non-UI thread exceptions to the event.
            AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

            try
            {
                Application.Run(new MainForm());
            }
            catch...

    There I will try to restart the application. Now my problem is simulating an exception like this. Before the try (in Main), I tried:

        throw new NullReferenceException("test");

    VS caught it. I also tried in the MainForm code with a button:

        private void button1_Click(object sender, EventArgs ev)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(TestMe), null);
        }

        protected void TestMe(object state)
        {
            string s = state.ToString();
        }

    It did not help; VS caught it, even in Release mode. How should I finally force the application to generate an UnhandledException? Will I be able to restart the application in CurrentDomain_UnhandledException?
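    A hedged note and sketch (not from the original post): the handlers are easiest to observe outside the debugger (run the built .exe or use Start Without Debugging), because under the debugger the IDE breaks on the exception first. An exception thrown on a plain worker thread, with no try/catch around it, reaches AppDomain.UnhandledException; the CLR still terminates the process after the handler runs, so "restarting" usually means launching a fresh process from inside the handler.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class CrashDemo
        {
            [STAThread]
            static void Main()
            {
                Application.ThreadException += (s, e) =>
                    MessageBox.Show("UI thread: " + e.Exception.Message);
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                {
                    // The process exits once this handler returns, so a restart would
                    // mean something like Process.Start(Application.ExecutablePath).
                    MessageBox.Show("Non-UI thread: " + ((Exception)e.ExceptionObject).Message);
                };

                var form = new Form { Text = "Click to fail on a worker thread" };
                form.Click += (s, e) =>
                    new Thread(() => { throw new InvalidOperationException("test"); }).Start();
                Application.Run(form);
            }
        }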

    Read the article

  • exception in thread "main" java.lang.NoclassDefFoundError: cal/class

    - by Gaurav
    import java.io.*;

        class eval {
            double add(double a, double b) {
                return (a + b);
            }
            double sub(double a, double b) {
                return (a - b);
            }
            double mul(double a, double b) {
                return (a * b);
            }
            double div(double a, double b) {
                return (a / b);
            }
        }

        class cal extends eval {
            public static void main(String args[]) throws IOException {
                eval a1 = new eval();
                try {
                    System.out.println("1) Add");
                    System.out.println("2) Subtract");
                    System.out.println("3) Multiply");
                    System.out.println("4) Divide");
                    System.out.println("5) Enter your choice");
                    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
                    int ch; ch = Integer.parseInt(br.readLine());
                    System.out.println("Enter two number");
                    double a; a = Integer.parseInt(br.readLine());
                    double b; b = Integer.parseInt(br.readLine());
                    switch (ch) {
                        case 1: a1.add(a, b); break;
                        case 2: a1.sub(a, b); break;
                        case 3: a1.mul(a, b); break;
                        case 4: a1.div(a, b); break;
                    }
                } catch (IOException e) {
                    System.out.println("Error occured, please restart application.");
                }
            }
        }

    Read the article

  • Is this a hole in dynamic binding in C# 4?

    - by Galilyou
    I've seen a very interesting post on Fabio Maulo's blog. Here's the code and the bug if you don't want to jump to the URL. I defined a new generic class like so:

        public class TableStorageInitializer<TTableEntity> where TTableEntity : class, new()
        {
            public void Initialize()
            {
                InitializeInstance(new TTableEntity());
            }

            public void InitializeInstance(dynamic entity)
            {
                entity.PartitionKey = Guid.NewGuid().ToString();
                entity.RowKey = Guid.NewGuid().ToString();
            }
        }

    Note that InitializeInstance accepts one parameter, which is of type dynamic. Now to test this class, I defined another class that is nested inside my main Program class, like so:

        class Program
        {
            static void Main(string[] args)
            {
                TableStorageInitializer<MyClass> x = new TableStorageInitializer<MyClass>();
                x.Initialize();
            }

            private class MyClass
            {
                public string PartitionKey { get; set; }
                public string RowKey { get; set; }
                public DateTime Timestamp { get; set; }
            }
        }

    Note: the inner class MyClass is declared private. Now if I run this code, I get a Microsoft.CSharp.RuntimeBinder.RuntimeBinderException on the line "entity.PartitionKey = Guid.NewGuid().ToString()". The interesting part, though, is that the message of the exception says "Object doesn't contain a definition for PartitionKey". Also note that if you change the modifier of the nested class to public, the code executes with no problems. So what do you think is really happening under the hood? Please refer to any documentation (of course, if this is documented anywhere) that you may find.
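    A hedged explanation and sketch (not part of the original post): the C# runtime binder applies the same accessibility rules the compiler would, binding against the most derived type that is accessible from the call site. A class that is private to Program is invisible inside TableStorageInitializer, so from that call site the object might as well be a plain System.Object, which is why the message says "Object" does not contain PartitionKey. The sketch below, with hypothetical names, contrasts an accessible nested type with a private one:

        using System;

        public class Toucher
        {
            public void Touch(dynamic entity)
            {
                // Binds against whatever view of the runtime type Toucher can see.
                entity.PartitionKey = Guid.NewGuid().ToString();
            }
        }

        class Program
        {
            internal class VisibleEntity       // same assembly: Toucher can see it
            {
                public string PartitionKey { get; set; }
            }

            private class HiddenEntity         // private to Program: invisible to Toucher
            {
                public string PartitionKey { get; set; }
            }

            static void Main()
            {
                var toucher = new Toucher();
                toucher.Touch(new VisibleEntity());        // works
                try
                {
                    toucher.Touch(new HiddenEntity());     // RuntimeBinderException
                }
                catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
        }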

    Read the article
