Search Results

Search found 5214 results on 209 pages for 'j unit 122'.


  • How to use QCOMPARE Macro to compare events

    - by vels
    Hi, I have a MyWindow class which pops up a blank window that accepts a mouse click, and I need to unit test the mouse click event. Code snippet: void TestGui::testGUI_data() { QTest::addColumn<QTestEventList>("events"); QTest::addColumn<QTestEventList>("expected"); Mywindow mywindow; QSize editWidgetSize = mywindow.size(); QPoint clickPoint(editWidgetSize.rwidth()-2, editWidgetSize.rheight()-2); QTestEventList events, expected ; events.addMouseClick( Qt::LeftButton, 0, clickPoint); expected.addMouseClick( Qt::LeftButton, 0, clickPoint); QTest::newRow("mouseclick") << events << expected ; } void TestGui::testGUI() { QFETCH(QTestEventList, events); QFETCH(QTestEventList, expected); Mywindow mywindow; mywindow.show(); events.simulate(&mywindow); QCOMPARE(events, expected); } // prints FAIL! : TestGui::testGUI(mouseclick) Compared values are not the same How do I test the mouse click on mywindow? Is there any better approach to unit testing mouse events? Thanks, vels
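    A minimal alternative sketch, assuming a hypothetical clicked() signal on the window class: two separately built QTestEventList objects generally never compare equal, since the comparison looks at the stored event objects rather than the effect of replaying them, so asserting on an observable outcome of the simulated click (a signal, or a state change) is usually more meaningful.

      // Simulate the click and verify its effect; QSignalSpy comes from the QtTest module.
      // The clicked() signal is an assumption - any observable state change would do.
      void TestGui::testMouseClick()
      {
          Mywindow mywindow;
          mywindow.show();

          QSignalSpy spy(&mywindow, SIGNAL(clicked()));
          QPoint clickPoint(mywindow.width() - 2, mywindow.height() - 2);
          QTest::mouseClick(&mywindow, Qt::LeftButton, 0, clickPoint);

          QCOMPARE(spy.count(), 1);   // exactly one click reached the widget
      }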


  • Suppress output from the Visual Studio output pane (C++)

    - by Ryan Ginstrom
    When I run my Win32 project in the Visual Studio debugger, I get this huge screed of output about which DLLs were loaded, first-chance exceptions, and so on. Is there a way that I can suppress this output? Some day, I might want to know when 'C:\Windows\SysWOW64\ntdll.dll' was loaded, but normally I don't care. This is especially true when I'm running unit tests, and just want to be told whether any of the tests failed. This stuff isn't output with console applications, but it is with windows applications. To give an example of what I mean, here are the first lines from the output of a recent unit-test run. 'MyProject.exe': Loaded 'C:\dev\MyProject\Testing\MyProject.exe', Symbols loaded. 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\ntdll.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\kernel32.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\KernelBase.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\dbghelp.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\msvcrt.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\user32.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\gdi32.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\lpk.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\usp10.dll' 'MyProject.exe': Loaded 'C:\Windows\SysWOW64\advapi32.dll' ... and on and on ...


  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: WCF Data service using internally EF4 CTP5 Code-First approach. I configured entities with inheritance (TPH). See previous question on this topic: Previous question about multiple entities- same table The mapping works well, and unit test over EF4 confirms that queries runs smoothly. My entities looks like this: ContactBase (abstract) Customer (inherits from ContactBase), this entity has also several Navigation properties toward other entities Resource (inherits from ContactBase) I have configured a discriminator, so both Customer and Resource map to the same table. Again, everythings works fine on the Ef4 point of view (unit tests all greens!) However, when exposing this DBContext over WCF Data services, I get: - CustomerBases sets exposed (Customers and Resources sets seems hidden, is it by design?) - When I query over Odata on Customers, I get this error: Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'. Stacktrace: at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) Seems like a limitation of WCF Data services... is it? Not much documentation can be found on the web about WCF Data services (OData) and inheritance specifications. How can I overpass this exception? I need these navigation properties on derived entities, and inheritance seems the only way to provide mapping of 2 entites on the same table with Ef4 CTP5... Any thoughts?


  • ASP.NET - Accessing copied content

    - by James Kolpack
    I have a class library project which contains some content files configured with the "Copy if newer" copy build action. This results in the files being copied to a folder under ...\bin\ for every project in the solution. In this same solution, I've got a ASP.NET web project (which is MVC, by the way). In the library I have a static constructor load the files into data structures accessible by the web project. Previously I've been including the content as an embedded resource. I now need to be able to replace them without recompiling. I want to access the data in three different contexts: Unit testing the library assembly Debugging the web application Hosting the site in IIS For unit testing, Environment.CurrentDirectory points to a path containing the copied content. When debugging however, it points to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. I've also looked at Assembly.GetExecutingAssembly().Location which points to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c44f9da4\9238ccc\assembly\dl3\eb4c23b4\9bd39460_f7d4ca01\. What I need is to the physical location of the webroot \bin folder, but since I'm in a static constructor in the library project, I don't have access to a Request.PhysicalApplicationPath. Is there some other environment variable or structure where I can always find my "Copy if newer" files?


  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in Unit testing, which is why the following issue is giving me pause: One area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (Which keeps tabs on Domain Object state). //Not actual code, but it should demonstrate the point class Monitor{//singleton construction omitted for brevity static $members = array();//keeps record of all objects static $dirty = array();//keeps record of all modified objects static $clean = array();//keeps record of all clean objects } class Mapper{//queries database, maps values to object fields public function find($id){ if(isset(Monitor::members[$id]){ return Monitor::members[$id]; } $values = $this->selectStmt($id); //field mapping process omitted for brevity $Object = new Object($values); Monitor::new[$id]=$Object return $Object; } $User = $UserMapper->find(1);//domain object is registered in Id Map $User->changePropertyX();//object is marked "dirty" in UoW // at this point, I can save by passing the Domain Object back to the Mapper $UserMapper->save($User);//object is marked clean in UoW //but a nicer API would be something like this $User->save(); //but if I want to do this - it has to make a call to the mapper/db somehow $User->getBlogPosts(); //or else have to generate specific collection/object graphing methods in the mapper $UserPosts = $UserMapper->getBlogPosts(); $User->setPosts($UserPosts); Any advice on how you might handle this situation? I would be loathe to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - At the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour then a facility to do so is required in its construction. Perhaps it's a problem with responsibility, the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern - it would be nice to implement it in some way.
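    One possible shape for this, sketched under the assumption of a PDO-backed UserMapper and a plain User domain class: give the mapper an injected identity map instance instead of the static Monitor, and keep save() on the mapper (or a repository) rather than on the domain object, so the domain object never needs a reference to the database.

      <?php
      // Sketch only: an instance-based identity map injected into the mapper, so
      // neither the map nor the mapper needs to be a singleton or use static state.
      class IdentityMap
      {
          private $members = array();

          public function has($id)          { return isset($this->members[$id]); }
          public function get($id)          { return $this->members[$id]; }
          public function add($id, $object) { $this->members[$id] = $object; }
      }

      class UserMapper
      {
          private $pdo;
          private $map;

          public function __construct(PDO $pdo, IdentityMap $map)
          {
              $this->pdo = $pdo;
              $this->map = $map;
          }

          public function find($id)
          {
              if ($this->map->has($id)) {
                  return $this->map->get($id);
              }
              $stmt = $this->pdo->prepare('SELECT * FROM users WHERE id = ?');
              $stmt->execute(array($id));
              $user = new User($stmt->fetch(PDO::FETCH_ASSOC));  // User class assumed
              $this->map->add($id, $user);
              return $user;
          }
      }

    Saving then stays a mapper concern ($userMapper->save($user)), which trades the $user->save() convenience of Active Record for a domain object with no persistence dependencies.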


  • Some questions about special operators I've never seen in C++ code

    - by toto
    I have downloaded the Phoenix SDK June 2008 (Tools for compilers) and when I'm reading the code of the Hello sample, I really feel lost. public ref class Hello { //-------------------------------------------------------------------------- // // Description: // // Class Variables. // // Remarks: // // A normal compiler would have more flexible means for holding // on to all this information, but in our case it's simplest (if // somewhat inelegant) if we just keep references to all the // structures we'll need to access as classstatic variables. // //-------------------------------------------------------------------------- static Phx::ModuleUnit ^ module; static Phx::Targets::Runtimes::Runtime ^ runtime; static Phx::Targets::Architectures::Architecture ^ architecture; static Phx::Lifetime ^ lifetime; static Phx::Types::Table ^ typeTable; static Phx::Symbols::Table ^ symbolTable; static Phx::Phases::PhaseConfiguration ^ phaseConfiguration; 2 Questions : What's that ref keyword? What is that sign ^ ? What is it doing protected: virtual void Execute ( Phx::Unit ^ unit ) override; }; override is a C++ keyword too? It's colored as such in my Visual Studio. I really want to play with this framework, but this advanced C++ is really an obstacle right now. Thank you.
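    These are C++/CLI language extensions rather than standard C++ (override later became a standard keyword in C++11, but here it is the C++/CLI contextual keyword). A minimal sketch of what they mean; the class names are made up:

      // C++/CLI (compiled with /clr): "ref class" declares a .NET reference type,
      // "^" is a handle (a garbage-collected tracking pointer), "gcnew" allocates
      // on the managed heap, and "override" marks an explicit virtual override.
      ref class Greeter
      {
      public:
          virtual System::String^ Message() { return "hello"; }
      };

      ref class LoudGreeter : public Greeter
      {
      public:
          virtual System::String^ Message() override { return "HELLO"; }
      };

      int main()
      {
          Greeter^ g = gcnew LoudGreeter();
          System::Console::WriteLine(g->Message());
          return 0;
      }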


  • I want to tell the VC++ compiler to compile all code. Can it be done?

    - by KGB
    I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regards to unit tests: See how much of my referenced code under test is getting executed See how many methods of my code under test (if any) are not unit tested at all Setting up the VSTS code coverage tool (see the link text) and accomplishing task #1 was straightforward. However #2 has been a surprising challenge for me. Here is my test code. class CodeCoverageTarget { public: std::string ThisMethodRuns() { return "Running"; } std::string ThisMethodDoesNotRun() { return "Not Running"; } }; #include <iostream> #include "CodeCoverageTarget.h" using namespace std; int main() { CodeCoverageTarget cct; cout<<cct.ThisMethodRuns()<<endl; } When both methods are defined within the class as above the compiler automatically eliminates the ThisMethodDoesNotRun() from the obj file. If I move it's definition outside the class then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output) but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB


  • Export sheet from Excel to CSV

    - by Mike Wills
    I am creating a spreadsheet to help ease the entry of data into one of our systems. They are entering inventory items into this spreadsheet to calculate the unit cost of the item (item cost + tax + S&H). The software we purchased cannot do this. An invoice can have one or more lines (duh!) and I calculate the final unit cost. This is working fine. I then want to take that data and create a CSV from it so they can load it into our inventory system. I currently have a second tab that is laid out like I want the CSV, and I use an equals-cell reference (=Sheet!A3) to get the values onto the "export sheet". The problem is that when they save this to a CSV, there are many blank lines that need to be deleted before they can upload it. I want a file that only contains the data that is needed. I am sure this could be done in VBA, but I don't know where to start or how to search for an example. Any direction or other options would be appreciated.


  • Writing functions of tuples conveniently in Scala

    - by Alexey Romanov
    Quite a few functions on Map take a function on a key-value tuple as the argument. E.g. def foreach(f: ((A, B)) => Unit): Unit. So I looked for a short way to write an argument to foreach: > val map = Map(1 -> 2, 3 -> 4) map: scala.collection.immutable.Map[Int,Int] = Map(1 -> 2, 3 -> 4) > map.foreach((k, v) => println(k)) error: wrong number of parameters; expected = 1 map.foreach((k, v) => println(k)) ^ > map.foreach({(k, v) => println(k)}) error: wrong number of parameters; expected = 1 map.foreach({(k, v) => println(k)}) ^ > map.foreach(case (k, v) => println(k)) error: illegal start of simple expression map.foreach(case (k, v) => println(k)) ^ I can do > map.foreach(_ match {case (k, v) => println(k)}) 1 3 Any better alternatives?
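    A shorter form that avoids the explicit match, as a sketch: a brace-delimited case clause is an anonymous function literal that destructures the tuple directly.

      val map = Map(1 -> 2, 3 -> 4)
      map.foreach { case (k, v) => println(k) }

      // When the value is not needed at all, iterating the keys is simpler still:
      map.keys.foreach(println)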


  • Overriding check box in JavaScript with jQuery

    - by Gutzofter
    Help with unit testing checkbox behavior. I have this page: <!DOCTYPE html> <html> <head> <title></title> <script type="text/javascript" src="../js/jquery-1.4.2.min.js"></script> <script type="text/javascript"> $(function() { $('<div><input type="checkbox" name="makeHidden" id="makeHidden" checked="checked" />Make Hidden</div>').appendTo('body'); $('<div id="displayer" style="display:none;">Was Hidden</div>').appendTo('body'); $('#makeHidden').click(function() { var isChecked = $(this).is(':checked'); if (isChecked) { $('#displayer').hide(); } else { $('#displayer').show(); } return false; }); }); </script> </head> <body> </body> </html> This doesn't work it is because of the return false; in the click handler. If I remove it it works great. The problem is if I pull the click function out into it's own function and unit test it with qunit it will not work without the return false;
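    A sketch of one way to keep the behaviour testable without the return false (in jQuery, returning false from a handler calls both preventDefault() and stopPropagation(), and preventing the default is what stops the checkbox from toggling): extract the logic into a named function that the qunit test calls directly.

      function toggleDisplayer(isChecked) {
          if (isChecked) {
              $('#displayer').hide();
          } else {
              $('#displayer').show();
          }
      }

      $(function () {
          $('#makeHidden').click(function () {
              toggleDisplayer($(this).is(':checked'));
              // no "return false" - the checkbox keeps its native toggle behaviour
          });
      });

      // In a qunit test, assuming #displayer exists in the test fixture:
      // toggleDisplayer(false);
      // ok($('#displayer').is(':visible'), 'unchecking shows the div');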


  • After assembling a jar - No Persistence provider for EntityManager named ....

    - by alabalaa
    I'm developing a standalone application and it works fine when starting it from my IDE (IntelliJ IDEA), but after creating an uberjar and starting the application from it, a javax.persistence.PersistenceException is thrown saying "No Persistence provider for EntityManager named testPU". Here is my persistence.xml, which is placed under the META-INF directory: <persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <class>test.model.Configuration</class> <properties> <property name="hibernate.connection.username" value="root"/> <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/> <property name="hibernate.connection.password" value="root"/> <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/test"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/> <property name="hibernate.c3p0.timeout" value="300"/> <property name="hibernate.hbm2ddl.auto" value="update"/> </properties> </persistence-unit> And here is how I'm creating the entity manager factory: emf = Persistence.createEntityManagerFactory("testPU"); I'm using Maven and tried the assembly plugin with the default configuration for it. I don't have much experience with assembling jars and I don't know if I'm missing something, so if you have any ideas I'll be glad to hear them.
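    A small diagnostic sketch that can narrow this down before blaming the persistence configuration itself: check whether META-INF/persistence.xml is actually visible on the uberjar's classpath at runtime (a null URL means the assembly step dropped or relocated it).

      import java.net.URL;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public class PersistenceCheck {
          public static void main(String[] args) {
              // Ask the context class loader for the descriptor JPA will look for.
              URL pu = Thread.currentThread().getContextClassLoader()
                             .getResource("META-INF/persistence.xml");
              System.out.println("persistence.xml found at: " + pu);

              EntityManagerFactory emf = Persistence.createEntityManagerFactory("testPU");
              System.out.println("EntityManagerFactory created: " + emf);
          }
      }

    If the descriptor is present but the provider is still not found, the same getResource() check against the provider's files under META-INF/services is a reasonable next step, since fat-jar tools can drop or overwrite same-named resources while merging jars.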


  • Performance issue: difference between select s.* and select *

    - by kamil
    Recently I had a performance problem with my query. The situation is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I've finally discovered that a query with a select prefix like: select sth.* from Something as sth... is 300 times slower than a query started this way: select * from Something as sth.. Could somebody help me and explain why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name. The only association is to table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. No association with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were: select s.* from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null select * from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null Same query; both use a left join and the only difference is the select.
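    A sketch of how to see what the database does differently with the two statements (EXPLAIN syntax shown for MySQL; other engines have an equivalent plan command): comparing index choice, join order and estimated rows usually shows whether the column list changes the access path.

      EXPLAIN
      select s.* from sales_unit s
      left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
      where r.sales_unit_child_id is null;

      EXPLAIN
      select * from sales_unit s
      left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
      where r.sales_unit_child_id is null;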


  • How to configure multiple mappings using FluentHibernate?

    - by chris.baglieri
    First time rocking it with NHibernate/Fluent so apologies in advance if this is a naive question. I have a set of Models I want to map. When I create my session factory I'm trying to do all mappings at once. I am not using auto-mapping (though I may if what I am trying to do ends up being more painful than it ought to be). The problem I am running into is that it seems only the top map is taking. Given the code snippet below and running a unit test that attempts to save 'bar', it fails and checking the logs I see NHibernate is trying to save a bar entity to the foo table. While I suspect it's my mappings it could be something else that I am simply overlooking. Code that creates the session factory (note I've also tried separate calls into .Mappings): Fluently.Configure().Database(MsSqlConfiguration.MsSql2008 .ConnectionString(c => c .Server(@"localhost\SQLEXPRESS") .Database("foo") .Username("foo") .Password("foo"))) .Mappings(m => { m.FluentMappings.AddFromAssemblyOf<FooMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "foos")); m.FluentMappings.AddFromAssemblyOf<BarMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "bars")); }) .BuildSessionFactory(); Unit test snippet: using (var session = Data.SessionHelper.SessionFactory.OpenSession()) { var bar = new Bar(); session.Save(bar); Assert.NotNull(bar.Id); }
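    A sketch of the more usual single-scan setup, assuming Foo and Bar each expose an Id property: since FooMap and BarMap live in the same assembly, one AddFromAssemblyOf<T>() registers both, and each map declares its own table name instead of relying on two assembly scans with competing table conventions.

      using FluentNHibernate.Mapping;

      public class FooMap : ClassMap<Foo>
      {
          public FooMap()
          {
              Table("foos");       // per-entity table name lives in the map itself
              Id(x => x.Id);
          }
      }

      public class BarMap : ClassMap<Bar>
      {
          public BarMap()
          {
              Table("bars");
              Id(x => x.Id);
          }
      }

      // In the session factory configuration a single call picks up both maps:
      // .Mappings(m => m.FluentMappings.AddFromAssemblyOf<FooMap>())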


  • Methods to see the result of a code change faster

    - by Can't Tell
    This question came to me when developing using Eclipse. I use JBoss Application Server and use hot code replacement. But this option requires that the 'build automatically' option to be enabled. This makes Eclipse build the workspace automatically (periodically or when a file is saved?) and for a large code base this takes too much time and processing which makes the machine freeze for a while. Also sometimes an error message is shown saying that hot code replacement failed. The question that I have is: is there a better way to see the result of a code change? Currently I have the following two suggestions: Have unit tests - this will allow to run a single test and see the result of a code change. ( But for a JavaEE application that uses EJBs is it easy to setup unit tests?) Use OSGi - which allows to add jars to the running system without bringing down the JVM. Any ideas on above suggestions or any other suggestion or a framework that allows to do this is welcome.


  • NetBeans, JPA Entity Beans in separate projects. Unknown entity bean class

    - by Stu
    I am working in Netbeans and have my entity beans and web services in separate projects. I include the entity beans in the web services project however the ApplicaitonConfig.java file keeps getting over written and removing the entries I make for the entity beans in the associated jar file. My question is: is it required to have both the EntityBeans and the WebServices share the same project/jar file? If not what is the appropriate way to include the entity beans which are in the jar file? <?xml version="1.0" encoding="UTF-8"?> <persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"> <persistence-unit name="jdbc/emrPool" transaction-type="JTA"> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <jta-data-source>jdbc/emrPool</jta-data-source> <exclude-unlisted-classes>false</exclude-unlisted-classes> <properties> <property name="eclipselink.ddl-generation" value="none"/> <property name="eclipselink.cache.shared.default" value="false"/> </properties> </persistence-unit> </persistence> Based on Melc's input i verified that that the transaction type is set to JTA and the jta-data-source is set to the value for the Glassfish JDBC Resource. Unfortunately the problem still persists. I have opened the WAR file and validated that the EntityBean.jar file is the latest version and is located in the WEB-INF/lib directory of the War file. I think it is tied to the fact that the entities are not being "registered" with the entity manager. However i do not know why they are not being registered.


  • Can I transform this asynchronous java network API into a monadic representation (or something else

    - by AlecZorab
    I've been given a java api for connecting to and communicating over a proprietary bus using a callback based style. I'm currently implementing a proof-of-concept application in scala, and I'm trying to work out how I might produce a slightly more idiomatic scala interface. A typical (simplified) application might look something like this in Java: DataType type = new DataType(); BusConnector con = new BusConnector(); con.waitForData(type.getClass()).addListener(new IListener<DataType>() { public void onEvent(DataType t) { //some stuff happens in here, and then we need some more data con.waitForData(anotherType.getClass()).addListener(new IListener<anotherType>() { public void onEvent(anotherType t) { //we do more stuff in here, and so on } }); } }); //now we've got the behaviours set up we call con.start(); In scala I can obviously define an implicit conversion from (T = Unit) into an IListener, which certainly makes things a bit simpler to read: implicit def func2Ilistener[T](f: (T => Unit)) : IListener[T] = new IListener[T]{ def onEvent(t:T) = f } val con = new BusConnector con.waitForData(DataType.getClass).addListener( (d:DataType) => { //some stuff, then another wait for stuff con.waitForData(OtherType.getClass).addListener( (o:OtherType) => { //etc }) }) Looking at this reminded me of both scalaz promises and f# async workflows. My question is this: Can I convert this into either a for comprehension or something similarly idiomatic (I feel like this should map to actors reasonably well too) Ideally I'd like to see something like: for( d <- con.waitForData(DataType.getClass); val _ = doSomethingWith(d); o <- con.waitForData(OtherType.getClass) //etc )
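    One way to get exactly that for-comprehension, sketched against the BusConnector/IListener API described above and assuming Scala 2.10+ Futures (scalaz promises or actors could play the same role): wrap each callback registration in a method that completes a Promise.

      import scala.concurrent.{Future, Promise}
      import scala.concurrent.ExecutionContext.Implicits.global

      // Adapt callback-style waitForData(...).addListener(...) into a Future.
      def await[T](con: BusConnector, clazz: Class[T]): Future[T] = {
        val p = Promise[T]()
        con.waitForData(clazz).addListener(new IListener[T] {
          def onEvent(t: T): Unit = p.trySuccess(t)
        })
        p.future
      }

      // The nested listeners then flatten out; the second wait is only registered
      // once the first value has arrived, mirroring the original nesting.
      val done = for {
        d <- await(con, classOf[DataType])
        o <- await(con, classOf[OtherType])
      } yield doSomethingWith(d, o)   // doSomethingWith is an assumed helper

      con.start()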


  • C# Casting System.__ComObject to class type

    - by ijrufus
    I have an Excel Add-In that I'm currently trying to set up a unit test framework for. For the unit tests I've followed this guide: http://blogs.msdn.com/b/varsha/archive/2010/08/17/writing-automated-test-cases-for-vsto-application.aspx It seems to work fine, until I want to return a class object from my interface. Specifying the class object as the return type throws a "return argument has an invalid type" exception when calling the method. Changing the return type from the class to an object allows me to call the method and get the object, but now I'm unable to cast it as the class and use it as intended, getting this exception message when I try: > Unable to cast COM object of type 'System.__ComObject' to class type > 'anaplan.Utility.XYCoordinates'. Instances of types that represent COM > components cannot be cast to types that do not represent COM > components; however they can be cast to interfaces as long as the > underlying COM component supports QueryInterface calls for the IID of > the interface. I've retrieved the Type name using VisualBasic.Information.TypeName and it's showing it as the class I expect. Is there any way to get the comobject cast back to the class? Or another way to access the properties it has? Or am I just being a bit stupid here?
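    A sketch of the interface-based route the exception message itself points at: expose a ComVisible interface from the add-in and have the test cast to that interface instead of the concrete class (the X/Y properties and the GetCoordinates call are assumptions, not the add-in's real API).

      using System.Runtime.InteropServices;

      [ComVisible(true)]
      public interface IXYCoordinates
      {
          double X { get; }
          double Y { get; }
      }

      [ComVisible(true)]
      [ClassInterface(ClassInterfaceType.None)]
      public class XYCoordinates : IXYCoordinates
      {
          public XYCoordinates(double x, double y) { X = x; Y = y; }
          public double X { get; private set; }
          public double Y { get; private set; }
      }

      // In the test project, the object returned across the COM boundary is cast
      // to the interface, which QueryInterface can satisfy:
      // IXYCoordinates point = (IXYCoordinates)addIn.GetCoordinates();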


  • Can't we use a Set or collection as a return type in GAE?

    - by user273422
    In my code i have used Set<Employees> as a return type to my function addEmp(). So, i m gettin an Compilation error. The Error is: Compiling module com.employeedepartmentgae.Employeedepartmentgae Refreshing module from source Validating newly compiled units Removing units with errors [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingServiceAsync.java' [ERROR] Line 6: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 18: Employee cannot be resolved to a type [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingService.java' [ERROR] Line 6: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 20: Employee cannot be resolved to a type [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/EmployeeWidget.java' [ERROR] Line 12: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 75: The method addEmp(String, String, String, AsyncCallback) from the type GreetingServiceAsync refers to the missing type Employee [ERROR] Line 75: The type new AsyncCallback(){} must implement the inherited abstract method AsyncCallback.onSuccess(Set) [ERROR] Line 75: Employee cannot be resolved to a type [ERROR] Line 94: The method onSuccess(Set) of type new AsyncCallback(){} must override or implement a supertype method [ERROR] Line 94: Employee cannot be resolved to a type [ERROR] Line 96: Employee cannot be resolved to a type [ERROR] Line 96: Employee cannot be resolved to a type [ERROR] Line 98: Employee cannot be resolved to a type Removing invalidated units [WARN] Compilation unit 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/Employeedepartmentgae.java' is removed due to invalid reference(s): [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/EmployeeWidget.java [WARN] Compilation unit 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/DepartmentWidget.java' is removed due to invalid reference(s): [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingService.java [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingServiceAsync.java Computing all possible rebind results for 'com.employeedepartmentgae.client.Employeedepartmentgae' Rebinding com.employeedepartmentgae.client.Employeedepartmentgae Checking rule [ERROR] Unable to find type 'com.employeedepartmentgae.client.Employeedepartmentgae' [ERROR] Hint: Previous compiler errors may have made this type unavailable [ERROR] Hint: Check the inheritance chain from your module; it may not be inheriting a required module or a module may not be adding its source path entries properly So please help me.....
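    The errors all trace back to the client code importing com.employeedepartmentgae.server.domainobject.Employee: the GWT compiler can only translate types that live in client (or shared) source packages, so a server-side entity cannot appear in a GWT-RPC signature. A sketch of the usual fix, with illustrative names, is a small serializable DTO on the client side and a mapping from the entity on the server.

      // Lives in the client (or a shared) package so the GWT compiler can translate it.
      package com.employeedepartmentgae.client;

      import com.google.gwt.user.client.rpc.IsSerializable;

      public class EmployeeDto implements IsSerializable {
          private String name;

          public EmployeeDto() { }                      // GWT-RPC needs a no-arg constructor
          public EmployeeDto(String name) { this.name = name; }

          public String getName() { return name; }
      }

      // The service signature then becomes, for example:
      // Set<EmployeeDto> addEmp(String name, String department, String id);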


  • Django Admin Running Same Query Thousands of Times for Model

    - by Tom
    Running into an odd . . . loop when trying to view a model in the Django admin. I have three related models (code trimmed for brevity, hopefully I didn't trim something I shouldn't have): class Association(models.Model): somecompany_entity_id = models.CharField(max_length=10, db_index=True) name = models.CharField(max_length=200) def __unicode__(self): return self.name class ResidentialUnit(models.Model): building = models.CharField(max_length=10) app_number = models.CharField(max_length=10) unit_number = models.CharField(max_length=10) unit_description = models.CharField(max_length=100, blank=True) association = models.ForeignKey(Association) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) def __unicode__(self): return '%s: %s, Unit %s' % (self.association, self.building, self.unit_number) class Resident(models.Model): unit = models.ForeignKey(ResidentialUnit) type = models.CharField(max_length=20, blank=True, default='') lookup_key = models.CharField(max_length=200) jenark_id = models.CharField(max_length=20, blank=True) user = models.ForeignKey(User) is_association_admin = models.BooleanField(default=False, db_index=True) show_in_contact_list = models.BooleanField(default=False, db_index=True) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) _phones = {} home_phone = None work_phone = None cell_phone = None app_number = None account_cache_key = None def __unicode__(self): return '%s' % self.user.get_full_name() It's the last model that's causing the problem. Trying to look at a Resident in the admin takes 10-20 seconds. If I take 'self.association' out of the __unicode__ method for ResidentialUnit, a resident page renders pretty quickly. Looking at it in the debug toolbar, without the association name in ResidentialUnit (which is a foreign key on Resident), the page runs 14 queries. With the association name put back in, it runs a far more impressive 4,872 queries. The strangest part is the extra queries all seem to be looking up the association name. They all come from the same line, the __unicode__ method for ResidentialUnit. Each one is the exact same thing, e.g., SELECT `residents_association`.`id`, `residents_association`.`jenark_entity_id`, `residents_association`.`name` FROM `residents_association` WHERE `residents_association`.`id` = 1096 ORDER BY `residents_association`.`name` ASC I assume I've managed to create a circular reference, but if it were truly circular, it would just die, not run 4000x and then return. Having trouble finding a good Google or StackOverflow result for this.
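    The pattern of thousands of identical SELECTs is an N+1: the Resident change form renders a dropdown of every ResidentialUnit, each label calls ResidentialUnit.__unicode__, and each call follows the association foreign key with its own query. A sketch of one fix for that era of Django, assuming the models are importable from the app's models module:

      from django.contrib import admin
      from residents.models import Resident, ResidentialUnit   # module path assumed

      class ResidentAdmin(admin.ModelAdmin):
          def formfield_for_foreignkey(self, db_field, request, **kwargs):
              # Pre-join the association so each __unicode__ call in the "unit"
              # dropdown does not issue its own SELECT against residents_association.
              if db_field.name == 'unit':
                  kwargs['queryset'] = ResidentialUnit.objects.select_related('association')
              return super(ResidentAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)

      admin.site.register(Resident, ResidentAdmin)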


  • Writing a generic function that can take a Writer as well as an OutputStream

    - by ebruchez
    I wrote a couple of functions that look like this: def myWrite(os: OutputStream) = {} def myWrite(w: Writer) = {} Now both are very similar and I thought I would try to write a single parametrized version of the function. I started with a type with the two methods that are common in the Java OutputStream and Writer: type Writable[T] = { def close() : Unit def write(cbuf: Array[T], off: Int, len: Int): Unit } One issue is that OutputStream writes Byte and Writer writes Char, so I parametrized the type with T. Then I write my function: def myWrite[T, A[T] <: Writable[T]](out: A[T]) = {} and try to use it: val w = new java.io.StringWriter() myWrite(w) Result: <console>:9: error: type mismatch; found : java.io.StringWriter required: ?A[ ?T ] Note that implicit conversions are not applicable because they are ambiguous: both method any2ArrowAssoc in object Predef of type [A](x: A)ArrowAssoc[A] and method any2Ensuring in object Predef of type [A](x: A)Ensuring[A] are possible conversion functions from java.io.StringWriter to ?A[ ?T ] myWrite(w) I tried a few other combinations of types and parameters, to no avail so far. My question is whether there is a way of achieving this at all, and if so how. (Note that the implementation of myWrite will need, internally, to know the type T that parametrizes the write() method, because it needs to create a buffer as in new ArrayT.)
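    One alternative, as a sketch: a small typeclass instead of a structural type. It avoids reflective calls, sidesteps the Array[Byte] versus Array[Char] mismatch by fixing the payload to bytes, and leaves the character encoding (UTF-8 here) as an explicit assumption.

      import java.io.{OutputStream, Writer}

      // Evidence that myWrite can write to S, one instance per supported target type.
      trait Writable[S] {
        def write(out: S, data: Array[Byte]): Unit
        def close(out: S): Unit
      }

      object Writable {
        implicit val forStream: Writable[OutputStream] = new Writable[OutputStream] {
          def write(out: OutputStream, data: Array[Byte]): Unit = out.write(data, 0, data.length)
          def close(out: OutputStream): Unit = out.close()
        }
        implicit val forWriter: Writable[Writer] = new Writable[Writer] {
          def write(out: Writer, data: Array[Byte]): Unit = out.write(new String(data, "UTF-8"))
          def close(out: Writer): Unit = out.close()
        }
      }

      def myWrite[S](out: S)(implicit w: Writable[S]): Unit = {
        w.write(out, "hello".getBytes("UTF-8"))
        w.close(out)
      }

      // myWrite(new java.io.StringWriter())
      // myWrite(new java.io.ByteArrayOutputStream())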


  • How can I [simply] consume JSON Data in a Line of Business Web Application

    - by Atomiton
    I usually use JSON with jQuery to just return a string with html. However, I want to start to use Javascript objects in my code. What's the simplest way to get started using json objects on my page? Here's a sample Ajax call ( after $(document).ready( { ... }) of course: $('#btn').click(function(event) { event.preventDefault(); var out = $('#result'); $.ajax({ url: "CustomerServices.asmx/GetCustomersByInvoiceCount", success: function(msg) { // // Iterate through the json results and spit them out to a page? // }, data: "{ 'invoiceCount' : 100 }" }); }); My WebMethod: [WebMethod(Description="Gets customers with more than n invoices")] public List<Customer> GetCustomersByInvoiceCount(int? invoiceCount) { using (dbDataContext db = new dbDataContext()) { return db.Customers.Where(c => c.InvoiceCount >= invoiceCount); } } What gets returned: {"d":[{"__type":"Customer","Account":"1116317","Name":"SOME COMPANY","Address":"UNit 1 , 392 JOHN ST. ","LastTransaction":"\/Date(1268294400000)\/","HighestBalance":13922.34},{"__type":"Customer","Account":"1116318","Name":"ANOTHER COMPANY","Address":"UNIT #345 , 392 JOHN ST. ","LastTransaction":"\/Date(1265097600000)\/","HighestBalance":549.42}]} What I'd LIKE to know, is what are people generally doing with this returned json? Do you iterate through the properties and create an html table on the fly? Is there way to "bind" JSON data using a javascript version of reflection ( something like the .Net GridView Control ) Do you throw this returned data into a Javascript Object and then do something with it? An example of what I want to achieve is to have an plain ol' html page ( on a mobile device )with a list of a Salesperson's Customers. When one of those customers are clicked, the customer id gets sent to a webservice which retrieves the customer details that are relevant to a sales person. I know the SO talent pool is quite deep so I figured you all here would be able to guide in the right direction and give me a few ideas on the best way to approach this.
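    A sketch of the plain "iterate and render" option, reusing the field names visible in the payload above (ASMX services wrap the result in a d property, and need a JSON content type on the request to return JSON rather than XML):

      $('#btn').click(function (event) {
          event.preventDefault();
          $.ajax({
              url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
              type: "POST",
              contentType: "application/json; charset=utf-8",
              dataType: "json",
              data: "{ 'invoiceCount' : 100 }",
              success: function (msg) {
                  var rows = [];
                  $.each(msg.d, function (i, c) {
                      rows.push('<tr><td>' + c.Account + '</td><td>' + c.Name +
                                '</td><td>' + c.Address + '</td></tr>');
                  });
                  $('#result').html('<table>' + rows.join('') + '</table>');
              }
          });
      });

    For anything much richer than this, a client-side templating approach (for example the jQuery Templates plugin of that era) plays the role of the "binding" step, rather than hand-built HTML strings.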


  • Python OOP - object has no attribute

    - by user1744269
    I am attempting to learn how to program. I really do want to learn how to program; I love the building and design aspect of it. However, in Java and Python, I have tried and failed with programs as they pertain to objects, classes, methods.. I am trying to develop some code for a program, but im stumped. I know this is a simple error. However I am lost! I am hoping someone can guide me to a working program, but also help me learn (criticism is not only expected, but APPRECIATED). class Converter: def cTOf(self, numFrom): numFrom = self.numFrom numTo = (self.numFrom * (9/5)) + 32 print (str(numTo) + ' degrees Farenheit') return numTo def fTOc(self, numFrom): numFrom = self.numFrom numTo = ((numFrom - 32) * (5/9)) return numTo convert = Converter() numFrom = (float(input('Enter a number to convert.. '))) unitFrom = input('What unit would you like to convert from.. ') unitTo = input('What unit would you like to convert to.. ') if unitFrom == ('celcius'): convert.cTOf(numFrom) print(numTo) input('Please hit enter..') if unitFrom == ('farenheit'): convert.fTOc(numFrom) print(numTo) input('Please hit enter..')
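    A corrected sketch: the failure is an AttributeError because self.numFrom is never stored on the instance, and the calling code prints numTo, a name that only exists inside the methods. Using the parameter directly and keeping the returned value fixes both (method names are simplified here).

      class Converter:
          def c_to_f(self, celsius):
              # use the argument; nothing needs to be stored on self
              return celsius * 9.0 / 5.0 + 32

          def f_to_c(self, fahrenheit):
              return (fahrenheit - 32) * 5.0 / 9.0

      convert = Converter()
      value = float(input('Enter a number to convert.. '))
      unit_from = input('What unit would you like to convert from.. ')

      if unit_from == 'celsius':
          print(convert.c_to_f(value), 'degrees Fahrenheit')
      elif unit_from == 'fahrenheit':
          print(convert.f_to_c(value), 'degrees Celsius')
      input('Please hit enter..')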


  • How can I [simply] consume JSON Data to display to the page

    - by Atomiton
    I usually use JSON with jQuery to just return a string with html. However, I want to start to use Javascript objects in my code. What's the simplest way to get started using json objects on my page? Here's a sample Ajax call ( after $(document).ready( { ... }) of course: $('#btn').click(function(event) { event.preventDefault(); var out = $('#result'); $.ajax({ url: "CustomerServices.asmx/GetCustomersByInvoiceCount", success: function(msg) { // // Iterate through the json results and spit them out to a page? // }, data: "{ 'invoiceCount' : 100 }" }); }); My WebMethod: [WebMethod(Description="Gets customers with more than n invoices")] public List<Customer> GetCustomersByInvoiceCount(int? invoiceCount) { using (dbDataContext db = new dbDataContext()) { return db.Customers.Where(c => c.InvoiceCount >= invoiceCount); } } What gets returned: {"d":[{"__type":"Customer","Account":"1116317","Name":"SOME COMPANY","Address":"UNit 1 , 392 JOHN ST. ","LastTransaction":"\/Date(1268294400000)\/","HighestBalance":13922.34},{"__type":"Customer","Account":"1116318","Name":"ANOTHER COMPANY","Address":"UNIT #345 , 392 JOHN ST. ","LastTransaction":"\/Date(1265097600000)\/","HighestBalance":549.42}]} What I'd LIKE to know, is what are people generally doing with this returned json? Do you iterate through the properties and create an html table on the fly? Is there way to "bind" JSON data using a javascript version of reflection ( something like the .Net GridView Control ) Do you throw this returned data into a Javascript Object and then do something with it? An example of what I want to achieve is to have an plain ol' html page ( on a mobile device )with a list of a Salesperson's Customers. When one of those customers are clicked, the customer id gets sent to a webservice which retrieves the customer details that are relevant to a sales person. I know the SO talent pool is quite deep so I figured you all here would be able to guide in the right direction and give me a few ideas on the best way to approach this.


  • Set tier price on different products (special conditions) in Magento?

    - by user1547625
    Let’s say I sell mobile phones. I might sell phones with the following tiered pricing iPhone4 (1x$200ea. 3x$150ea. 10x$100ea.) iPhone4s (1x$300ea. 3x$250ea. 10x$200ea.) iPhone5 (1x$500ea. 3x$400ea. 10x$1300ea.) Plus maybe I sell Samsungs, Nokias, accessories, etc. So let’s say a customer wants to buy 3 iphones, but they want 1x iPhone 4, 1x iPhone 4s and 1x iPhone 5. Instead of getting the single unit price for each, we would want to group iphones, so that they would get the 3x price for each unit. So they would spend $150 on the iPhone 4, $250 on the iPhone 4s and $400 on the iPhone 5.... We’d have several categories and could create some type of spreadsheet for you, but would want to also have the ability to set up this categorization in the future. So please tell me how to get this done from magento admin or any other way?


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: –  Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ”       SELECT  @Output = @Output + Email +Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE ‘%_@__%.__%’       If @Output > ”         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail@Output         End   END;   Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control,  it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

