Search Results

Search found 473 results on 19 pages for 'karma runner'.

  • Selenium Test Runner and variables problem

    - by quilovnic
    Hi, in my Selenium test suite (HTML), I define a first test case to initialize a variable that is called in the next test case. Sample: in the first script: store|//div[@id="myfield"]|myvar and in the second script: type|${myvar}|myvalue. But when I start the test runner (from Maven), it returns an error telling me that ${myvar} is not found; the value contained in the stored variable is not used. Any suggestion? Thanks a lot
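
    For reference, a minimal Selenese sketch of the intended flow (hedged: plain store saves a literal string, while storeText reads the element's text from the DOM, which is usually what is wanted here; whether storedVars survive across test cases depends on the runner, and the id=target locator below is a placeholder):

        storeText | //div[@id="myfield"] | myvar      <!-- case 1: capture the element's text, not the locator string -->
        type      | id=target            | ${myvar}   <!-- case 2: reuse the stored value -->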

  • Load Runner Connection time out

    - by user1662008
    Our performance testing team is running tests on our WPF/WCF/SQL Server application, and they are facing connection timeouts once the load goes above 75 users:

        Error -27796: Failed to connect to server "81.171.180.119:4567": [10060] Connection timed out

    I would like to know what steps to take to find the bottlenecks that may be causing this, whether that is some setting in LoadRunner or identifying bottlenecks in our code. Thanks

  • How to handle "circular dependency" in dependency injection

    - by Roel
    The title says "circular dependency", but it is not quite the correct wording, because to me the design seems solid. However, consider the following scenario, where the blue parts are given by an external partner and the orange part is my own implementation. Also assume there is more than one ConcreteMain, but I want to use a specific one. (In reality, each class has some more dependencies, but I tried to simplify it here.) I would like to instantiate all of this with dependency injection (Unity), but I obviously get a StackOverflowException on the following code, because Runner tries to instantiate ConcreteMain, and ConcreteMain needs a Runner:

        IUnityContainer ioc = new UnityContainer();
        ioc.RegisterType<IMain, ConcreteMain>()
           .RegisterType<IMainCallback, Runner>();
        var runner = ioc.Resolve<Runner>();

    How can I avoid this? Is there any way to structure this so that I can use it with DI? What I'm doing now is setting everything up manually, but that puts a hard dependency on ConcreteMain in the class which instantiates it, and that is what I'm trying to avoid (with Unity registrations in configuration). All source code below (very simplified example!):

        public class Program {
            public static void Main(string[] args) {
                IUnityContainer ioc = new UnityContainer();
                ioc.RegisterType<IMain, ConcreteMain>()
                   .RegisterType<IMainCallback, Runner>();
                var runner = ioc.Resolve<Runner>();
                Console.WriteLine("invoking runner...");
                runner.DoSomethingAwesome();
                Console.ReadLine();
            }
        }

        public class Runner : IMainCallback {
            private readonly IMain mainServer;

            public Runner(IMain mainServer) {
                this.mainServer = mainServer;
            }

            public void DoSomethingAwesome() {
                Console.WriteLine("trying to do something awesome");
                mainServer.DoSomething();
            }

            public void SomethingIsDone(object something) {
                Console.WriteLine("hey look, something is finally done.");
            }
        }

        public interface IMain {
            void DoSomething();
        }

        public interface IMainCallback {
            void SomethingIsDone(object something);
        }

        public abstract class AbstractMain : IMain {
            protected readonly IMainCallback callback;

            protected AbstractMain(IMainCallback callback) {
                this.callback = callback;
            }

            public abstract void DoSomething();
        }

        public class ConcreteMain : AbstractMain {
            public ConcreteMain(IMainCallback callback) : base(callback) { }

            public override void DoSomething() {
                Console.WriteLine("starting to do something...");
                var task = Task.Factory.StartNew(() => {
                    Thread.Sleep(5000); /* very long-running task */
                });
                task.ContinueWith(t => callback.SomethingIsDone(true));
            }
        }
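
    One common way out, sketched below under the assumption that Unity 2.0+ is in use (its automatic factories inject Func<T> out of the box): defer one side of the cycle behind a factory, so neither constructor needs the other object to already exist. The factory fires later, inside DoSomething, after both singletons are built; everything else is a placeholder matching the question's names, not a definitive implementation:

        // Hedged sketch: ConcreteMain takes a deferred Func<IMainCallback>
        // instead of the callback itself, so Unity can construct it without
        // first constructing a Runner.
        public class ConcreteMain : IMain {
            private readonly Func<IMainCallback> callbackFactory;

            public ConcreteMain(Func<IMainCallback> callbackFactory) {
                this.callbackFactory = callbackFactory;
            }

            public void DoSomething() {
                var task = Task.Factory.StartNew(() => Thread.Sleep(5000));
                // Resolve the callback only when it is actually needed:
                task.ContinueWith(t => callbackFactory().SomethingIsDone(true));
            }
        }

        // Registration stays declarative; singleton lifetimes stop the
        // factory from spinning up a second object graph when it fires:
        ioc.RegisterType<IMain, ConcreteMain>(new ContainerControlledLifetimeManager())
           .RegisterType<IMainCallback, Runner>(new ContainerControlledLifetimeManager());
        var runner = (Runner)ioc.Resolve<IMainCallback>();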

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx

    I volunteered myself to give a short 30-minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with Yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack. I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes; they aren't very well formatted, but I wanted to share them anyway.

    What is it?
    "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested?
    - another popular tool that can help you get the job done
    - you can use the command prompt!
    - can be run at build or release time to automate tasks

    What are some uses?
    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc.)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want
    - alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    - https://www.npmjs.org/package/generator-cg-angular - PhantomJS, LESS, ...

    git is needed for bower (http://git-scm.com/): run the installer in Windows before you can use bower, and select "Run Git from the Windows Command Prompt" in the installer (requires a reboot). See http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error

        npm install -g git
        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

    There are many generators (generator-angular is another one); I like the NuGet HotTowel-Angular from John Papa myself. I needed IIS Node for Express -> prompt from WebMatrix.

    More uses:
    - bat file to start up Karma - see below
    - image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line
    - LESS compiling
    - JS and CSS combining and minification at build time, with Gulp, for RequireJS apps
    - quick lightweight HTTP server - "Express"

    Build pipeline with Grunt or Gulp:
    - http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is the newer one, improved over Grunt; supposed to be easier to use, but Grunt is more established
    - https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp
    - https://github.com/assetgraph/assetgraph-builder - does a lot of the minimizing, combining, image optimization, etc. using Node. Looks interesting...

    Links:
    - http://nodejs.org
    - http://nodeschool.io/
    - http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/
    - https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/
    - https://codio.com/
    - http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

    Run unit tests - Karma in MSBuild. karma-start.bat:

        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karma.conf.js
        REM jsHint comes from NuGet
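
    Since the bat file points at UnitTests\karma.conf.js, here is a minimal sketch of what such a config might contain (the contents are assumed, not from the original post; the jasmine/PhantomJS choices and file globs are placeholders):

        // UnitTests/karma.conf.js - hypothetical minimal config
        module.exports = function (config) {
          config.set({
            basePath: '..',                    // resolve paths from the project root
            frameworks: ['jasmine'],           // assumption: jasmine-style specs
            files: [
              'app/**/*.js',
              'UnitTests/**/*.spec.js'
            ],
            browsers: ['PhantomJS'],
            singleRun: false                   // matches the non --single-run line above
          });
        };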

  • How to get SpecFlow working with xUnit.net as the test runner

    - by Mike Scott
    I'm trying to use xUnit.net as the test runner for SpecFlow. The SpecFlow 1.2 binaries from the official download area don't contain an xUnit.net provider, but the master branch on GitHub has one, so I built SpecFlow.Core.dll from that. I'm using xUnit.net 1.5. However, when I change the unitTestProvider name in the app.config in my spec project, I get a null reference custom tool error and the generated .feature.cs file contains only the single line: "Object reference not set to an instance of an object." Has anyone succeeded in getting SpecFlow to work with xUnit.net? If so, how?
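
    For reference, the provider switch in question; a minimal app.config sketch (the exact provider name string expected for xUnit.net may differ between SpecFlow builds, and a mismatch there could itself be the source of the null reference):

        <configuration>
          <configSections>
            <section name="specFlow"
                     type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
          </configSections>
          <specFlow>
            <unitTestProvider name="xUnit" />
          </specFlow>
        </configuration>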

  • .NET unit test runner outputting FaultException.Detail

    - by Adam
    Hello, I am running some unit tests on a WCF service. The service is configured to include exception details in the fault response, with the following in my service configuration file:

        <serviceDebug includeExceptionDetailInFaults="true" />

    If a test causes an unhandled exception on the server, the fault is received by the client with a fully populated server stack trace. I can see this by calling the exception's ToString() method. The problem is that this doesn't seem to be output by any of the test runners that I have tried (xUnit, Gallio, MSTest); they appear to output just the Message and StackTrace properties of the exception. To illustrate what I mean, the following unit test run by MSTest would output three sections: Error Message, Error Stack Trace, and Standard Console Output (the last contains the information I would like, e.g. "Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: ..."):

        try {
            service.CallMethodWhichCausesException();
        }
        catch (Exception ex) {
            Console.WriteLine(ex); // this outputs the information I would like
            throw;
        }

    Having this information would make the initial phase of testing and deployment a lot less painful. I know I could wrap each unit test in a generic exception handler, write the exception to the console and rethrow (as above) within all my unit tests, but that seems a very long-winded way of achieving this (and would look pretty awful). Does anyone know if there's any way to get this information included for free whenever an unhandled exception occurs? Is there a setting that I am missing? Is my service configuration lacking in proper fault handling? Perhaps I could write some kind of plug-in/adapter for some unit testing framework? Perhaps there's a different unit testing framework I should be using instead! My actual set-up is xUnit unit tests executed via Gallio for the development environment, but I do have a separate suite of "smoke tests" written which I would like our engineers to be able to run via the xUnit GUI test runner (or Gallio or whatever) to simplify the final deployment. Thanks. Adam
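
    Short of runner support, one hedged middle ground: move the try/catch into a single shared helper so it is written once rather than in every test. ServiceCall is a made-up name; this is a sketch, not a known runner feature:

        public static class ServiceCall
        {
            // Runs a service call and dumps the full exception text (which
            // includes FaultException.Detail when the server allows it) to
            // the console before rethrowing for the test runner.
            public static void Dump(Action call)
            {
                try { call(); }
                catch (Exception ex)
                {
                    Console.WriteLine(ex); // full ToString(), server stack trace included
                    throw;
                }
            }
        }

        // usage inside a test:
        ServiceCall.Dump(() => service.CallMethodWhichCausesException());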

  • Gathering all data in single iteration vs using functions for readable code

    - by user828584
    Say I have an array of runners, from which I need to find the tallest runner, the fastest runner, and the lightest runner. It seems like the most readable solution would be:

        runners = getRunners();
        tallestRunner = getTallestRunner(runners);
        fastestRunner = getFastestRunner(runners);
        lightestRunner = getLightestRunner(runners);

    ...where each function iterates over the runners and keeps track of the greatest height, greatest speed, and lowest weight. Iterating over the array three times, however, doesn't seem like a very good idea. It would instead be better to do:

        int greatestHeight, greatestSpeed, leastWeight;
        Runner tallestRunner, fastestRunner, lightestRunner;
        for (runner in runners) {
            if (runner.height > greatestHeight) {
                greatestHeight = runner.height;
                tallestRunner = runner;
            }
            if (runner.speed > ...
        }

    While this isn't too unreadable, it can get messy when there is more logic for each piece of information being extracted in the iteration. What's the middle ground here? How can I use only a single iteration while still keeping the code divided into logical units?
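
    One hedged middle ground, sketched in Java (all names invented; the original snippet is pseudocode): keep the single pass, but give each statistic its own named comparator and funnel the bookkeeping through one tiny helper, so each rule stays a separate logical unit:

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;

        class Runner {
            final double height, speed, weight;
            Runner(double h, double s, double w) { height = h; speed = s; weight = w; }
            double getHeight() { return height; }
            double getSpeed()  { return speed; }
            double getWeight() { return weight; }
        }

        class Leaders {
            // One comparator per statistic keeps each rule separate and named.
            static final Comparator<Runner> BY_HEIGHT = Comparator.comparingDouble(Runner::getHeight);
            static final Comparator<Runner> BY_SPEED  = Comparator.comparingDouble(Runner::getSpeed);
            static final Comparator<Runner> BY_LIGHT  = Comparator.comparingDouble(Runner::getWeight).reversed();

            static Runner better(Runner best, Runner candidate, Comparator<Runner> cmp) {
                return best == null || cmp.compare(candidate, best) > 0 ? candidate : best;
            }

            public static void main(String[] args) {
                List<Runner> runners = Arrays.asList(new Runner(1.9, 8, 70), new Runner(1.7, 9, 60));
                Runner tallest = null, fastest = null, lightest = null;
                for (Runner r : runners) {           // single pass over the data
                    tallest  = better(tallest,  r, BY_HEIGHT);
                    fastest  = better(fastest,  r, BY_SPEED);
                    lightest = better(lightest, r, BY_LIGHT);
                }
                System.out.println(tallest.getHeight() + " " + fastest.getSpeed() + " " + lightest.getWeight());
            }
        }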

  • junit4 test runner

    - by lamisse
    Hello, how could we use JUnit runner-related methods? Could we launch setUpOnce from each Java test? If in my test I launch the application by calling setUpOnce, is that correct?
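
    If "setUpOnce" means one-time initialization, a hedged JUnit 4 sketch (class and method names invented): the runner calls it for you via @BeforeClass, once per test class, so the individual tests never call it themselves:

        import org.junit.BeforeClass;
        import org.junit.Test;

        public class AppLaunchTest {
            @BeforeClass
            public static void setUpOnce() {
                // launch the application once, before any @Test in this class
            }

            @Test
            public void firstTest() { /* the application is already running here */ }

            @Test
            public void secondTest() { /* same shared launch, not a second one */ }
        }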

  • Creating parallel selenium tests in C# and using Nunit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX-driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests. The actual tests are dynamically created at run time from a spreadsheet: each column in an Excel spreadsheet relates to a new test, and each row relates to a Selenium command (but in plain English; the DLL converts this into Selenium code). The problem I have at the moment is getting the tests running in parallel. There will be 1000+ tests, so it is too time-consuming to have one test run at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly, because I can run their demo. From what I have read I need to use PNUnit, but I cannot find any documentation on what a test in PNUnit should look like. NUnit tests are [SetUp], [TearDown], [Test]. Can anyone point me in the right direction? Thanks in advance.

  • How do I remedy "Error: Cannot find module 'child-process-close'"?

    - by Tyler Sloan
    I was going about business as usual and about to check out generator-angular-fullstack. I got no red errors, but a message at the end saying Error: Cannot find module 'child-process-close'. I tried many things: uninstalling Node, reinstalling, manually getting rid of files and directories in local and/or global paths, and trying to make sure Homebrew was the one that installed everything, and somehow I've made things worse. (Also, I initially saw errors regarding karma. Everything looked right, but it doesn't seem I did any good by throwing commands at it.) I am at a loss. All the Stack Overflow questions have been clicked through, and I'm afraid I've probably tried too many of the suggestions. I cannot install any Yeoman generator. I cannot install anything with npm; when inside the project directory, running npm install throws the error. I really have no clue. Is there a way I can basically start over altogether? A simple uninstall and install isn't cutting it. Something in the system needs to change but I don't know what. Any ideas?
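
    A hedged "start over" sketch for a Homebrew-managed Node (the paths are Homebrew defaults and may not match your machine; note that this wipes all globally installed npm packages):

        npm cache clean                          # clear any corrupted cached modules
        brew uninstall node
        rm -rf /usr/local/lib/node_modules       # remove leftover global packages
        rm -rf ~/.npm                            # remove the per-user npm cache
        brew install node
        npm install -g yo generator-angular-fullstack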

  • Error after second spec run with rspec and autospec

    - by Sean Chambers
    After installing rspec/ZenTest and running autospec, it runs my specs the first time as expected. After making a change to one of my specs, upon running the second time I get the following results:

        /usr/bin/ruby1.8 /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/bin/spec --autospec /home/schambers/Projects/notebook/spec/models/user_spec.rb -O spec/spec.opts
        /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/formatter/progress_bar_formatter.rb:17:in `flush': Broken pipe (Errno::EPIPE)
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/formatter/progress_bar_formatter.rb:17:in `example_passed'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/reporter.rb:136:in `example_passed'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/reporter.rb:136:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/reporter.rb:136:in `example_passed'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/reporter.rb:31:in `example_finished'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/example/example_methods.rb:55:in `execute'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/example/example_group_methods.rb:214:in `run_examples'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/example/example_group_methods.rb:212:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/example/example_group_methods.rb:212:in `run_examples'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/example/example_group_methods.rb:103:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/example_group_runner.rb:23:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/example_group_runner.rb:22:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/example_group_runner.rb:22:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/options.rb:152:in `run_examples'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/lib/spec/runner/command_line.rb:9:in `run'
            from /usr/lib/ruby/gems/1.8/gems/rspec-1.3.0/bin/spec:5

    Has anyone run into this, or know what the heck is going on here? Thanks

  • How to handle jumping up a slope in a runner game?

    - by you786
    In a 2D endless runner, what should happen when the player is running "too fast" up a slope and jumps? For example, in a "normal" case:

        [ASCII diagram: the player jumps from partway up a slope, arcs over the crest, and lands on the flat part of the surface]

    If he is moving to the right slowly enough, he will jump upwards and land on the flat part of the surface. However, if he is moving too fast, the jump will have no effect, as his forward motion will bring him back into contact with the slope before he can get high enough to pass over it. When the speed is sufficiently high, there will effectively be no jump:

        [ASCII diagram: at high speed the jump arc stays below the slope, so the player never visibly leaves the surface]

    Are there any known ways to solve this issue? I know it's physically correct*, but are there techniques that other games use to overcome this in a reasonable manner? As a last resort I'll have to just remove all slopes that are too slanted.

    *If you constrain the player to never jumping backwards.
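
    One common trick (a hedged sketch with invented names, not from the question): let the jump briefly win over slope collision, either by suspending ground snapping for a few frames after takeoff, by scaling the jump impulse with the slope's upward tangent, or both:

        class Player { double vx, vy; int ignoreGroundFrames; }

        interface Surface {
            double slopeTangentY();        // upward tangent of the incline at the player
            void snapToSurface(Player p);  // the usual ground-glue for a runner
        }

        class JumpPhysics {
            static final double JUMP_SPEED = 12.0; // tune per game; vy is positive-up here

            void jump(Player p, Surface ground) {
                // Fold some of the slope-driven upward motion into the impulse...
                p.vy = JUMP_SPEED + Math.max(0, ground.slopeTangentY() * p.vx);
                // ...and skip slope snapping briefly so the arc can develop.
                p.ignoreGroundFrames = 6;
            }

            void updateCollision(Player p, Surface ground) {
                if (p.ignoreGroundFrames > 0) {
                    p.ignoreGroundFrames--;
                    return;                // airborne: do not snap back onto the incline
                }
                ground.snapToSurface(p);   // normal running behaviour
            }
        }

    Alternatively, some runners simply cap horizontal speed while airborne so the arc can clear the crest.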

  • Pass string between two threads in java

    - by geeta
    I have to search for a string in a file and write the matched lines to another file. I have a thread to read the file and a thread to write to the other file. I want to send the StringBuffer from the read thread to the write thread, but I am getting a null value passed. Please help me pass this.

    The write thread:

        class OutputThread extends Thread {
            /* Writes the line with the search string to the output file */
            Thread runner1, runner;
            File Out_File;

            public OutputThread() {
            }

            public OutputThread(Thread runner, File Out_File) {
                runner1 = new Thread(this, "writeThread"); // (1) Create a new thread.
                this.Out_File = Out_File;
                this.runner = runner;
                runner1.start();                           // (2) Start the thread.
            }

            public void run() {
                try {
                    BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(Out_File, true));
                    System.out.println("inside write");
                    synchronized (runner) {
                        System.out.println("inside wait");
                        runner.wait();
                    }
                    System.out.println("outside wait");
                    // bufferedWriter.write(line.toString());
                    Buffer Buf = new Buffer();
                    bufferedWriter.write(Buf.buffers);
                    System.out.println(Buf.buffers);
                    bufferedWriter.flush();
                } catch (Exception e) {
                    System.out.println(e);
                    e.printStackTrace();
                }
            }
        }

    The read thread:

        class FileThread extends Thread {
            Thread runner;
            File dir;
            String search_string, stats;
            File Out_File, final_output;
            StringBuffer sb = new StringBuffer();

            public FileThread() {
            }

            public FileThread(CountDownLatch latch, String threadName, File dir, String search_string,
                              File Out_File, File final_output, String stats) {
                runner = new Thread(this, threadName); // (1) Create a new thread.
                this.dir = dir;
                this.search_string = search_string;
                this.Out_File = Out_File;
                this.stats = stats;
                this.final_output = final_output;
                this.latch = latch;
                runner.start();                        // (2) Start the thread.
            }

            public void run() {
                try {
                    Enumeration entries;
                    ZipFile zipFile;
                    String source_file_name = dir.toString();
                    File Source_file = dir;
                    String extension;
                    OutputThread out = new OutputThread(runner, Out_File);
                    int dotPos = source_file_name.lastIndexOf(".");
                    extension = source_file_name.substring(dotPos + 1);
                    if (extension.equals("zip")) {
                        zipFile = new ZipFile(source_file_name);
                        entries = zipFile.entries();
                        while (entries.hasMoreElements()) {
                            ZipEntry entry = (ZipEntry) entries.nextElement();
                            if (entry.isDirectory()) {
                                (new File(entry.getName())).mkdir();
                                continue;
                            }
                            searchString(runner, entry.getName(),
                                    new BufferedInputStream(zipFile.getInputStream(entry)),
                                    Out_File, final_output, search_string, stats);
                        }
                        zipFile.close();
                    } else {
                        searchString(runner, Source_file.toString(),
                                new BufferedInputStream(new FileInputStream(Source_file)),
                                Out_File, final_output, search_string, stats);
                    }
                } catch (Exception e) {
                    System.out.println(e);
                    e.printStackTrace();
                }
            }

            /* Reads the input files and searches for the string */
            public void searchString(Thread runner, String Source_File, BufferedInputStream in,
                                     File output_file, File final_output, String search, String stats) {
                int count = 0;
                int countw = 0;
                int countl = 0;
                String s;
                String[] str;
                String newLine = System.getProperty("line.separator");
                try {
                    BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
                    // OutputFile outfile = new OutputFile();
                    BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(output_file, true));
                    Buffer Buf = new Buffer();
                    // StringBuffer sb = new StringBuffer();
                    StringBuffer sb1 = new StringBuffer();
                    while ((s = br2.readLine()) != null) {
                        str = s.split(search);
                        count = str.length - 1;
                        countw += count;
                        if (s.contains(search)) {
                            countl++;
                            sb.append(s);
                            sb.append(newLine);
                        }
                        if (countl % 100 == 0) {
                            System.out.println("inside count");
                            Buf.setBuffers(sb.toString());
                            sb.delete(0, sb.length());
                            System.out.println("outside notify");
                            synchronized (runner) {
                                runner.notify();
                            }
                            // outfile.WriteFile(sb, bufferedWriter);
                            // sb.delete(0, sb.length());
                        }
                    }
                    synchronized (runner) {
                        runner.notify();
                    }
                    br2.close();
                    in.close();
                    if (countw == 0) {
                        System.out.println("Input File : " + Source_File);
                        System.out.println("Word not found");
                        System.exit(0);
                    } else {
                        System.out.println("Input File : " + Source_File);
                        System.out.println("Matched word count : " + countw);
                        System.out.println("Lines with Search String : " + countl);
                        System.out.println("Output File : " + output_file.toString());
                        System.out.println();
                    }
                } catch (Exception e) {
                    System.out.println(e);
                    e.printStackTrace();
                }
            }
        }

  • How to profile the execution of an OSGi deployment?

    - by Jaime Soriano
    I'm starting the development of an OSGi bundle for an application that will be deployed on a device with some hardware limitations. I'd like to know how I could profile the execution of that bundle, to always be sure that it, together with its dependencies, is going to fit on the final device. It would be nice to have a profiler to know how much memory each bundle is using, to localize bottlenecks, and to compare different implementations of the same service. Is there any profiler for OSGi deployments, or should I use a general Java profiler? For developing I'm using Pax Runner with Apache Felix to run the bundle, and Maven to manage project dependencies and building.

  • How to debug a GWT application running on OSGi?

    - by Jaime Soriano
    I'm developing a web UI using GWT. While working only with the widgets I could debug from Eclipse using the Firefox extension, but now that I'm integrating the UI with other OSGi bundles I cannot use this solution. For deploying the GWT application I create the .war and convert it to an OSGi bundle using BND. Then I launch the OSGi container with all the bundles using Pax Runner and Pax Web, and the application works correctly; but when something fails in the generated JavaScript code I don't have any decent error output or debugging facility. Is there any way to launch the GWT application in "debug mode" from OSGi? Any other idea that could help in this scenario?
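
    One hedged possibility (standard GWT 2.x development mode, nothing OSGi-specific): keep serving the page from Pax Web, but attach GWT's development-mode code server by adding its parameter to the URL, so the Java code runs under the Eclipse debugger while OSGi serves everything else. Host, port and path below are placeholders:

        http://localhost:8080/myapp/index.html?gwt.codesvr=127.0.0.1:9997

    This assumes the Eclipse GWT launch configuration is running in dev mode so the browser plugin can connect; failures that only appear in the compiled JavaScript would still need the compiler's -style PRETTY or DETAILED output to diagnose.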

  • How to access project files from NUnit tests

    - by Daren Thomas
    I have some tests that I run with ReSharper's "Run All Tests from Solution" feature. One of the classes being tested has a dependency on a file in the same folder as the assembly containing it. This file is copied to the output directory via MSBuild ("Copy To Output Directory" is set to "Copy always"). Problem: the tests are not being run from the normal assembly output directory, but instead from some temporary location in my user profile. Therefore, I don't really know where to look for the file; the test runner does not copy it there. Can I force it to?
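
    Two hedged options that usually come up for this: disable shadow copying in the runner (nunit-console has /noshadow; ReSharper has an equivalent setting whose name varies by version, so treat that as an assumption), or make the lookup independent of where the runner executes by resolving the assembly's original location through CodeBase. A sketch:

        using System;
        using System.IO;
        using System.Reflection;

        static class TestFiles
        {
            // Directory the test assembly was built to, even when the runner
            // shadow-copies the DLL somewhere under the user profile.
            public static string AssemblyDirectory
            {
                get
                {
                    var uri = new UriBuilder(Assembly.GetExecutingAssembly().CodeBase);
                    string path = Uri.UnescapeDataString(uri.Path);
                    return Path.GetDirectoryName(path);
                }
            }

            public static string Resolve(string fileName)
            {
                return Path.Combine(AssemblyDirectory, fileName);
            }
        }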

  • Stringing multiple ShellExecute calls

    - by IVlad
    Consider the following code and its executable, runner.exe:

        #include <iostream>
        #include <string>
        #include <windows.h>
        using namespace std;

        int main(int argc, char *argv[]) {
            SHELLEXECUTEINFO shExecInfo;
            shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
            shExecInfo.fMask = NULL;
            shExecInfo.hwnd = NULL;
            shExecInfo.lpVerb = "open";
            shExecInfo.lpFile = argv[1];
            string Params = "";
            for ( int i = 2; i < argc; ++i )
                Params += argv[i] + ' ';
            shExecInfo.lpParameters = Params.c_str();
            shExecInfo.lpDirectory = NULL;
            shExecInfo.nShow = SW_SHOWNORMAL;
            shExecInfo.hInstApp = NULL;
            ShellExecuteEx(&shExecInfo);
            return 0;
        }

    These two batch files both do what they're supposed to, which is (1) run notepad.exe, and (2) run notepad.exe and tell it to try to open test.txt:

        1. runner.exe notepad.exe
        2. runner.exe notepad.exe test.txt

    Now, consider this batch file:

        3. runner.exe runner.exe notepad.exe

    This one should run runner.exe and send notepad.exe as one of its command line arguments, shouldn't it? Then, that second instance of runner.exe should run notepad.exe, which doesn't happen. If I print the argc argument, it's 14 for the second instance of runner.exe, and the arguments are all weird stuff like Files\Microsoft, SQL, Files\Common and so on. I can't figure out why this happens. I want to be able to string together as many runner.exe calls via command line arguments as possible, or at least 2. How can I do that? I am using Windows 7, if that makes a difference.
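
    A hedged observation about the snippet above: argv[i] + ' ' is pointer arithmetic on a char* (it advances the pointer by 32 characters, which would explain the garbage arguments), not string concatenation; and arguments that contain spaces also need quoting so the child's command line re-parses into the same argv entries. A sketch of the loop with both fixed:

        // Convert to std::string before appending, and quote each argument so
        // the child process re-parses them as the same argv entries.
        std::string Params;
        for (int i = 2; i < argc; ++i) {
            Params += '"';
            Params += argv[i];   // std::string += const char* is real concatenation
            Params += "\" ";
        }
        shExecInfo.lpParameters = Params.c_str();

    (ShellExecuteEx also reads every field of SHELLEXECUTEINFO, so zero-initializing the struct, SHELLEXECUTEINFO shExecInfo = {0}; followed by setting cbSize, is a common companion fix.)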

  • Rails passenger production cache definition

    - by mark
    Hi, I'm having a bit of a problem with storing data in the Rails cache under production. What I currently have is this, though I have been trying to work this out for hours already:

        # production.rb
        config.cache_store = :mem_cache_store

        if defined?(PhusionPassenger)
          PhusionPassenger.on_event(:starting_worker_process) do |forked|
            if forked
              Rails.cache.instance_variable_get(:@data).reset
            end
          end
        end

    I am using a cron job to (try to) save remote data to the cache for display. It is logged as being written to the cache, but it is reportedly null when read back. If anyone could point me toward a decent current tutorial on the subject or offer guidance I'd be extremely grateful. This is really, really frustrating me. :(

  • sonar code coverage issue

    - by user1490244
    Hi, I am running Sonar for my impl class. I have written JUnit tests for all the methods of the impl class, but when I ran Sonar the code coverage is just 11% and the whole file is in red, stating that the code is not covered. I really don't understand why, in spite of writing test methods for all the impl methods, it is showing such a low percentage. Any help, tips or guidelines will be greatly appreciated. Thanks

  • Can running object be garbage collected?

    - by Kugel
    I have a simple class:

        public class Runner
        {
            public void RunAndForget(RunDelegate method)
            {
                ThreadPool.QueueUserWorkItem(new WaitCallback(Run), method);
            }

            private void Run(object o)
            {
                ((RunDelegate)o).Invoke();
            }
        }

    And if I use this like so:

        private void RunSomethingASync()
        {
            Runner runner = new Runner();
            runner.RunAndForget(new RunDelegate(Something));
        }

    Is there any danger using it like this? My C++ guts tell me that the runner object should be destroyed after RunSomethingASync is finished. Am I right? What happens then to the method running on a different thread? Or perhaps it is the other way around and runner will not be collected? That would be a problem, considering I may call RunSomethingASync() many times.
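
    For what it's worth, a hedged sketch of why the lifetime works out: the queued WaitCallback is a delegate whose Target is the Runner instance (Run is an instance method), so the thread pool's work item roots the object until Run returns, after which it becomes collectible again (demo names invented):

        using System;
        using System.Threading;

        class Demo
        {
            static void Main()
            {
                var runner = new Demo();
                WaitCallback cb = runner.Run;
                // The delegate holds a reference to its target instance:
                Console.WriteLine(ReferenceEquals(cb.Target, runner)); // True
                ThreadPool.QueueUserWorkItem(cb);   // the queue now roots 'runner'
                Thread.Sleep(100);                  // crude wait, for the demo only
            }

            void Run(object state)
            {
                Console.WriteLine("still reachable while running");
            }   // once this returns and the work item is dropped, the GC may collect
        }

    So many short-lived Runner instances are fine; each becomes reclaimable as soon as its queued work completes.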

  • puppet rspec no such file to load -- rspec-puppet (LoadError)

    - by Vorsprung
    I have no prior experience at all of Ruby. I am not interested in Ruby as such (and so have no knowledge of Rails etc.), but am using Puppet to manage a group of servers. I have written some modules, and the rspec-puppet system looks like it would be very useful. However, I cannot get rspec-puppet to work. I am using Ubuntu LTS 10.04, and I installed rspec-puppet using the directions on their web page. What I actually did:

        apt-get install rubygems   # installs 1.8
        gem install rspec-expectations
        gem install rspec-puppet

    I also installed librspec-ruby1.8. Then I ran rspec-puppet-init in a Puppet module directory I'd already made (it's a working Puppet module). I made a file as defined in the tutorial:

        $ more spec/defines/rule_spec.rb
        require 'spec_helper'

        describe 'vanusers::rule' do
          let(:title) { 't1' }
          it { should contain_class('vanusers::JamieA') }
        end

    but when I try to run it there is a mysterious dependency issue:

        $ spec spec/defines/rule_spec.rb
        /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1:in `require': no such file to load -- rspec-puppet (LoadError)
            from /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1
            from ./spec/defines/rule_spec.rb:1:in `require'
            from ./spec/defines/rule_spec.rb:1
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load_files'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `each'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `load_files'
            from /usr/lib/ruby/1.8/spec/runner/options.rb:132:in `run_examples'
            from /usr/lib/ruby/1.8/spec/runner/command_line.rb:9:in `run'
            from /usr/bin/spec:3

    Here is the solution I came up with in the end:

        apt-get install rubygems
        gem install rspec-expectations rspec-puppet puppet-lint puppetlabs_spec_helper

    so your path picks up the gem binaries:

        export PATH=/var/lib/gems/1.8/bin:$PATH

    Then cd into the module and:

        rm spec/spec_helper.rb
        rspec-puppet-init

    Replace the Rakefile contents with:

        require 'rake'
        require 'rspec/core/rake_task'
        require 'puppetlabs_spec_helper/rake_tasks'

    Then "rake spec" runs the tests and "rake lint" checks the files. http://sysadvent.blogspot.co.uk/2013/12/day-22-getting-started-testing-your.html was an excellent source of info.
