Search Results

Search found 16930 results on 678 pages for 'entity model'.


  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing.  Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain.  This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate.  It is effective, to be sure, but not very readable.  Nevertheless, I will try to explain what was accomplished in my custom tt file, though the details are not really the point of this article (my way of saying: don't criticize my crappy code, and certainly don't use it in any somewhat real application.  You may become dumber just by looking at this code.  You have been warned; really, that's the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one or more Entity Framework files, and using the entities that were found to write one or more class files.  Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property of each entity, so we can generate classes and methods for each.  The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template; the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
    The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end.  This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post.  Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0.

    The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me.  I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later).  For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed.  Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive.  I will definitely put that to the test after we develop a user interface for getting at this data.

    Speaking of the data: where IS the data?  We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of.  We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly.  To help with this, I've built in a method to generate XML at any given layer in the hierarchy.  So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file.  Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (I haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway).  Let's manually remedy that for now to give us a decent starter set of data.
    Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    OK, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data.  Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.
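    Coming back to the lazy-loading mentioned earlier: for illustration only, here is a rough C# sketch of how a generated entity class and its IO 'get' method could pair up to give that behaviour. The _players field, the property body and the MFL.IO.GetPlayersByTeamId method name are assumptions made for this sketch; the excerpt does not show the actual generated code, and Player and MFL.IO stand for types produced by the template.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shape of a generated entity class.
        public partial class Team
        {
            private List<Player> _players;

            public int Id { get; set; }
            public string Name { get; set; }

            // The child collection is only read from the XML file on first access,
            // which is all that "lazy-loading" needs to mean here.
            public List<Player> Players
            {
                get
                {
                    if (_players == null)
                    {
                        _players = MFL.IO.GetPlayersByTeamId(this.Id).ToList();
                    }
                    return _players;
                }
            }
        }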

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSMO's called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are: 1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, esp. if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess. 2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold those just those details. 3) Recognize that Play objects are actually fulfilling 2 functions: one to be a template for a Down, and the other to record what template was used. These 2 functions have a different relationship with a Down. The first (template) has a to-many relationship. But the second (record) has a one-to-one relationship. This would mean creating a second object, something like "Play-Template" which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem. Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.
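    Just to make the shape of option 3 concrete, here is a rough sketch in plain C# classes (the real thing would of course be NSManagedObject subclasses and Core Data relationships; the PlayTemplate/PlayRecord names and the Direction property are invented for the example):

        // Option 3: the editable template and the historical record are two
        // different things with two different relationships to a Down.
        public class PlayTemplate            // to-many with Downs, freely editable
        {
            public string Name { get; set; }
            public string Direction { get; set; }   // e.g. "pass left" / "pass right"
        }

        public class PlayRecord              // one-to-one with a Down, never edited
        {
            public string Name { get; private set; }
            public string Direction { get; private set; }

            public PlayRecord(PlayTemplate template)
            {
                // Snapshot the template's details at execution time, so later
                // edits to the template cannot change the stored history.
                Name = template.Name;
                Direction = template.Direction;
            }
        }

        public class Down
        {
            public PlayTemplate Play { get; set; }       // which template to run
            public PlayRecord Record { get; private set; }

            public void Execute()
            {
                Record = new PlayRecord(Play);
            }
        }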

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this:

      - Scene (e.g. MainMenu): Contains multiple entities and delegates update()/draw() to each
      - Entity: Base class for all things in a scene (e.g. MenuItem or Alien)
      - Sprite: Base class for all entities that just draw a texture, i.e. don't have their own drawing logic

    Does it make sense to split entities and sprites up like that? I think in a 2D game, the terms entity and sprite are somewhat synonymous, right? But I do believe that I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication. Most entities are like that. One weird case is my Text class: it derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.
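    For concreteness, here is a minimal C# sketch of the hierarchy described above. The question does not name a language or engine, so Texture and SpriteBatch below are stand-in types invented for the sketch, not any particular API.

        using System.Collections.Generic;

        // Minimal stand-ins so the sketch is self-contained; a real engine
        // would supply its own texture and renderer types.
        public class Texture { }
        public class SpriteBatch { public void Draw(Texture t, float x, float y) { /* render */ } }

        public abstract class Entity
        {
            public virtual void Update(float deltaSeconds) { }
            public abstract void Draw(SpriteBatch batch);
        }

        // Covers the common case of "just draw a texture at a position".
        public class Sprite : Entity
        {
            protected readonly Texture Texture;
            public float X, Y;

            public Sprite(Texture texture) { Texture = texture; }

            public override void Draw(SpriteBatch batch)
            {
                batch.Draw(Texture, X, Y);
            }
        }

        public class Scene
        {
            private readonly List<Entity> _entities = new List<Entity>();

            public void Add(Entity entity) { _entities.Add(entity); }

            public void Update(float dt) { foreach (var e in _entities) e.Update(dt); }
            public void Draw(SpriteBatch batch) { foreach (var e in _entities) e.Draw(batch); }
        }

    In this arrangement Sprite earns its place simply by owning the texture-drawing code that most entities would otherwise duplicate; anything with custom drawing derives from Entity directly.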

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc., and now I'm ready to ask some questions (if you don't mind answering them) because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, create some logic by being able to link published events to published actions (e.g. a level "activate" event throws a door "open" action), etc. Because of all of this, I guess my entity system should be written in scripting, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here start my questions: 1) Should I define my components in C++? If I do this in C++, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua? 2) Where should I place the message system for my game engine? Lua? C++? I'm tempted to just have C++ objects behave as servers, offering services to Lua business logic: things like the physics system, rendering system, input system, World class, etc. And for all the other things, Lua: creation/composition of entities based on components, game logic, etc. Could anyone give any insight on how to accomplish this? And which approach is better? Thanks in advance, HexDump.

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence: "The idea is to have no game methods embedded in the entity." And: "The component consists of a minimal set of data needed for a specific purpose. Systems are single purpose functions that take a set of entities which have a specific component." I think this is not object oriented because a big part of being object oriented is combining your data and behavior. As evidence: "In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data." ECS, on the other hand, seems to be all about separating your data from your behavior.
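    A bare-bones C# sketch of the separation being described: components are plain data, and a system is effectively a free function over whichever entities carry those components. This is an illustration of the idea only, not any particular ECS library, and the dictionary-based component storage is deliberately naive.

        using System;
        using System.Collections.Generic;

        // Components: pure data, no behavior.
        public struct Position { public float X, Y; }
        public struct Velocity { public float X, Y; }

        // Entity: little more than a bag of components.
        public class Entity
        {
            private readonly Dictionary<Type, object> _components = new Dictionary<Type, object>();
            public void Set<T>(T component) { _components[typeof(T)] = component; }
            public bool Has<T>() { return _components.ContainsKey(typeof(T)); }
            public T Get<T>() { return (T)_components[typeof(T)]; }
        }

        // System: all the behavior, living outside the data it operates on.
        public static class MovementSystem
        {
            public static void Update(IEnumerable<Entity> entities, float dt)
            {
                foreach (var e in entities)
                {
                    if (!e.Has<Position>() || !e.Has<Velocity>()) continue;
                    var p = e.Get<Position>();
                    var v = e.Get<Velocity>();
                    p.X += v.X * dt;
                    p.Y += v.Y * dt;
                    e.Set(p);   // structs are copied, so store the updated value back
                }
            }
        }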

    Read the article

  • How to access Youtube_it ruby query results?

    - by spectro
    I am trying to implement the youtube_it youtube api wrapper for ruby and have it working except I'm stumped as to how the query results should be accessed. Here is my query: client.videos_by(:query => "penguin", :max_results => 1) Submitting request [url=http://gdata.youtube.com/feeds/api/videos?max-results=1&start-index=1&vq=penguin]. => #<YouTubeIt::Response::VideoSearch:0xb6c41b14 @feed_id="http://gdata.youtube.com/feeds/api/videos", @updated_at=Wed Nov 03 18:01:39 UTC 2010, @videos=[#<YouTubeIt::Model::Video:0xb6c424d8 @thumbnails=[#<YouTubeIt::Model::Thumbnail:0xb6c6b694 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/2.jpg", @width=120, @height=90, @time="00:01:34">, #<YouTubeIt::Model::Thumbnail:0xb6c6b248 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/1.jpg", @width=120, @height=90, @time="00:00:47">, #<YouTubeIt::Model::Thumbnail:0xb6c6a988 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/3.jpg", @width=120, @height=90, @time="00:02:21">, #<YouTubeIt::Model::Thumbnail:0xb6c69e34 @url="http://i.ytimg.com/vi/oSbLpQEZP1Y/0.jpg", @width=320, @height=240, @time="00:01:34">], @categories=[#<YouTubeIt::Model::Category:0xb6ca5d6c @term="Music", @label="Music">], @noembed=false, @racy=false, @favorite_count=7862, @duration=188, @author=#<YouTubeIt::Model::Author:0xb6c9942c @name="wili", @uri="http://gdata.youtube.com/feeds/api/users/wili">, @updated_at=Tue Nov 02 08:45:25 UTC 2010, @longitude=nil, @position=nil, @view_count=1682350, @html_content="penguin", @media_content=[#<YouTubeIt::Model::Content:0xb6c770d4 @url="http://www.youtube.com/v/oSbLpQEZP1Y?f=videos&app=youtube_gdata", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d108 @name=:swf, @format_code=5>, @default=true, @mime_type="application/x-shockwave-flash">, #<YouTubeIt::Model::Content:0xb6c766d4 @url="rtsp://v5.cache3.c.youtube.com/CiILENy73wIaGQlWPxkBpcsmoRMYDSANFEgGUgZ2aWRlb3MM/0/0/0/video.3gp", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d11c @name=:rtsp, @format_code=1>, @default=false, @mime_type="video/3gpp">, #<YouTubeIt::Model::Content:0xb6c75d38 @url="rtsp://v8.cache3.c.youtube.com/CiILENy73wIaGQlWPxkBpcsmoRMYESARFEgGUgZ2aWRlb3MM/0/0/0/video.3gp", @duration=188, @format=#<YouTubeIt::Model::Video::Format:0xb656d0f4 @name=:three_gpp, @format_code=6>, @default=false, @mime_type="video/3gpp">], @description="penguin", @latitude=nil, @title="penguin", @published_at=Mon May 08 18:11:01 UTC 2006, @player_url="http://www.youtube.com/watch?v=oSbLpQEZP1Y&feature=youtube_gdata_player", @rating=#<YouTubeIt::Model::Rating:0xb6c5eb4c @min=1, @max=5, @average=4.676985, @rater_count=2746>, @keywords=["pigloo", "penguin"], @video_id="http://gdata.youtube.com/feeds/api/videos/oSbLpQEZP1Y", @where=nil>], @total_result_count=291282, @offset=1, @max_result_count=1> I would like to retrieve the URL and thumbnail links. Any ideas?

    Read the article

  • Of Models / Entities and N-tier applications

    - by Jonn
    I only discovered a month ago the folly of directly accessing entities/models from the data access layer of an n-tier app. After reading about ViewModels while studying ASP.NET MVC, I've come to understand that to make a truly extensible application, the model that the UI layer interacts with must be different from the one that the Data Access layer has access to. But what about the Business layer? Should I have a different set of models for my business layer as well? For true separation of concerns, should I have a specific set of models that are relevant only to my business layer, so as not to mess around with any entities (generated by, for example, Entity Framework or EJB) in the DAL, or would that be overkill?
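    One way this separation is often expressed, sketched here in C# with invented names (CustomerEntity/CustomerModel/CustomerService), is to keep the DAL entity inside the business layer and hand callers a purpose-built model instead:

        // The DAL entity never leaves the business layer; callers get a
        // purpose-built model. Names and fields are illustrative only.
        public class CustomerEntity            // generated by EF/EJB, lives in the DAL
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public byte[] RowVersion { get; set; }   // persistence concern, not for callers
        }

        public class CustomerModel             // what the business layer exposes
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerService
        {
            public CustomerModel GetCustomer(int id)
            {
                CustomerEntity entity = LoadFromDal(id);
                return new CustomerModel { Id = entity.Id, Name = entity.Name };
            }

            private CustomerEntity LoadFromDal(int id)
            {
                // stand-in for the real repository / EF query
                return new CustomerEntity { Id = id, Name = "Sample" };
            }
        }

    Whether this mapping layer is worth the extra classes is exactly the judgment call the question raises; for small domains many teams let the DAL entities flow up to the business layer and reserve separate models for the UI only.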

    Read the article

  • Jqplot - How to have vertical lines and make xaxis go up in plotted values?

    - by Beginner
    I have plotted a line graph using jqplot. What I would like is vertical grid lines, and for the x-axis to start at 3 and go up in the plotted values along the bottom, so 3, 6, 9, 12, 15, 29, 36. Also, a dash marker along the left. This is what I have at the moment:

        $(document).ready(function(){
            $.jqplot('chart2',
                [[[3, @(Model.LearnerWeek[0])], [6, @(Model.LearnerWeek[1])], [9, @(Model.LearnerWeek[2])], [12, @(Model.LearnerWeek[3])], [15, @(Model.LearnerWeek[4])], [29, @(Model.LearnerWeek[5])], [36, @(Model.LearnerWeek[6])]],
                 [[3, @(Model.ManagerWeek[0])], [6, @(Model.ManagerWeek[1])], [9, @(Model.ManagerWeek[2])], [12, @(Model.ManagerWeek[3])], [15, @(Model.ManagerWeek[4])], [29, @(Model.ManagerWeek[5])], [36, @(Model.ManagerWeek[6])]]],
                {
                    axes: {
                        yaxis: { tickOptions: { show: false }, min: 0, max: 100, label: 'Participation Rate', labelRenderer: $.jqplot.CanvasAxisLabelRenderer },
                        xaxis: { min: 3, max: 36, label: 'Week', tickOptions: { formatString: '%d' } }
                    },
                    seriesDefaults: {
                        showMarker: false,
                        rendererOptions: {
                            diameter: undefined, // diameter of pie, auto computed by default.
                            padding: 10,         // padding between pie and neighboring legend or plot margin.
                            fill: true,          // render solid (filled) slices.
                            shadowOffset: 2,     // offset of the shadow from the chart.
                            shadowDepth: 15,     // number of strokes to make when drawing shadow; each stroke offset by shadowOffset from the last.
                            shadowAlpha: 1       // opacity of the shadow
                        }
                    },
                    seriesColors: ['#3591cf', '#ef4058', '#73C774', '#C7754C', '#17BDB8']
                });
        });

    I have played around with the renderer options and the x-axis but can't seem to work it out.

    Read the article

  • Custom bean instantiation logic in Spring MVC

    - by Michal Bachman
    I have a Spring MVC application trying to use a rich domain model, with the following mapping in the Controller class: @RequestMapping(value = "/entity", method = RequestMethod.POST) public String create(@Valid Entity entity, BindingResult result, ModelMap modelMap) { if (entity== null) throw new IllegalArgumentException("An entity is required"); if (result.hasErrors()) { modelMap.addAttribute("entity", entity); return "entity/create"; } entity.persist(); return "redirect:/entity/" + entity.getId(); } Before this method gets executed, Spring uses BeanUtils to instantiate a new Entity and populate its fields. It uses this: ... ReflectionUtils.makeAccessible(ctor); return ctor.newInstance(args); Here's the problem: My entities are Spring managed beans. The reason for this is to inject DAOs on them. Instead of calling new, I use EntityFactory.createEntity(). When they're retrieved from the database, I have an interceptor that overrides the public Object instantiate(String entityName, EntityMode entityMode, Serializable id) method and hooks the factories into that. So the last piece of the puzzle missing here is how to force Spring to use the factory rather than its own BeanUtils reflective approach? Any suggestions for a clean solution? Thanks very much in advance.

    Read the article

  • Why do I get Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5

    - by mmeany
    On updating Spring Boot from 1.1.4 to 1.1.5 a simple web application started generating detached entity exceptions. Specifically, a post authentication inteceptor that bumped number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data has been updated from 1.6.1 to 1.6.2 and a further check of the change log shows a couple of issues relating to optimistic locking, version fields and JPA issues that have been fixed. Well I am using a version field and it starts out as Null following recommendation to not set in the specification. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1 however then I do not get these exceptions. Is this expected behaviour or is there still something amiss? Below is the test scenario I have for this condition. In the scenario the service layer that has been annotated @Transactional. Each test case makes multiple calls to the service layer - the tests are working with detached entities as this is the scenario I am working with in the full blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is Null. This is how I would usually create the object prior to passing to the service. This test fails with a Detached Entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to value Long(0L). This is an edge case test and included because I found reference to Zero values being used for version field in the Spring Data change log. This test fails with a Detached Entity exception. Of interest simply because the following two tests pass leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. Would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3 I pushed the scenario a step further and perform multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a useable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ? 
0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create 
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with detached entity exception. The last two tests pass as expected. Now change Spring Boot version to 1.1.4 and rerun, all tests pass. Are my expectations wrong? Edit: This code saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue

    Read the article

  • IPMI sdr entity 8 (memory module) only showing 3 records?

    - by thinice
    I've got two Dell PE R710's - A has a single socket and 3 DIMMs in one bank B has both sockets and 6 (2 banks @ 3 DIMMs) filled The output from "ipmitool sdr entity 8" confuses me - according to the OpenIPMI documentation these are supposed to represent DIMM slots. Output from A (1 CPU, 3 DIMMS, 1 bank.): ~#: ipmitool sdr entity 8 Temp | 0Ah | ok | 8.1 | 27 degrees C Temp | 0Bh | ns | 8.1 | Disabled Temp | 0Ch | ucr | 8.1 | 52 degrees C Output from B (2 CPUs, 3 DIMMS in both banks, 6 total): ~#: ipmitool sdr entity 8 Temp | 0Ah | ok | 8.1 | 26 degrees C Temp | 0Bh | ok | 8.1 | 25 degrees C Temp | 0Ch | ucr | 8.1 | 51 degrees C Now, I'm starting to think this output isn't DIMMS themselves, but maybe a sensor for each bank and something else? (Otherwise, shouldn't I see 6 readings for the one with both banks active?) The CPU's aren't near 50 deg C, so I doubt the significantly higher reading is due to proximity - Is anyone able to explain what I'm seeing? Does the output from my ipmitool sdr entity 8 -v here on pastebin seem to hint at different sensors? The sensor naming conventions are poor - seems like a dell thing. Here is output from racadm racdump

    Read the article

  • Binding from View-Model to View-Model of a child User Control in Silverlight? 2 sources - 1 target..

    - by andrej351
    Hi there, so I have a UserControl for one of my Views and another 'child' UserControl inside that. The outer 'parent' UserControl has a Collection on its View-Model and a Grid control on it to display a list of Items. I want to place another UserControl inside this UserControl to display a form representing the details of one Item. The outer/parent UserControl's View-Model already has a property on it to hold the currently selected Item, and I would like to bind this to a DependencyProperty on the inner/child UserControl. I would then like to bind that DependencyProperty to a property on the child UserControl's View-Model. I can then set the DependencyProperty once in XAML with a binding expression and have the child UserControl do all its work in its View-Model like it should. The code I have looks like this.

    Parent UserControl:

        <UserControl x:Class="ItemsListView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel}">
            <!-- Grid Control here... -->
            <ItemDetailsView Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel.SelectedItem}" />
        </UserControl>

    Child UserControl:

        <UserControl x:Class="ItemDetailsView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel}"
            ItemDetailsView.Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel.Item, Mode=TwoWay}">
            <!-- Form controls here... -->
        </UserControl>

    The selected Item is bound to the DependencyProperty fine; however, the binding from the DependencyProperty to the child View-Model does not work. It appears to be a situation where there are two concurrent bindings which need to work, but with the same target for two sources. Why won't the second binding (in the child UserControl) work? Is there a way to achieve the behaviour I'm after? Cheers.
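    Not an authoritative answer, but one workaround that fits the setup described: keep the parent's binding onto the Item DependencyProperty, and forward the value to the child's view-model in the property-changed callback rather than through a second binding in the child's XAML. The ItemDetailsViewModel type and its Item property are assumed from the question; the object type on the property would be the real Item type in practice.

        using System.Windows;
        using System.Windows.Controls;

        public partial class ItemDetailsView : UserControl
        {
            public static readonly DependencyProperty ItemProperty =
                DependencyProperty.Register(
                    "Item",
                    typeof(object),                      // substitute the real Item type
                    typeof(ItemDetailsView),
                    new PropertyMetadata(null, OnItemChanged));

            public object Item
            {
                get { return GetValue(ItemProperty); }
                set { SetValue(ItemProperty, value); }
            }

            private static void OnItemChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var view = (ItemDetailsView)d;
                var viewModel = view.DataContext as ItemDetailsViewModel;  // assumed view-model type
                if (viewModel != null)
                {
                    viewModel.Item = e.NewValue;         // push the value instead of binding to it
                }
            }
        }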

    Read the article

  • Storage model for various user setting and attributes in database?

    - by dvd
    I'm currently trying to upgrade a user management system for one web application. This web application is used to provide remote access to various networking equipment for educational purposes. All equipment is assigned to various pods, which users can book for periods of time. The current system is very simple - just 2 user types: administrators and students. All their security and other attributes are mostly hardcoded. I want to change this to something like the following model: user <-- (1..n)profile <--- (1..n) attributes. I.e. user can be assigned several profiles and each profile can have multiple attributes. At runtime all profiles and attributes are merged into single active profile. Some examples of attributes i'm planning to implement: EXPIRATION_DATE - single value, value type: date, specifies when user account will expire; ACCESS_POD - single value, value type: ref to object of Pod class, specifies which pod the user is allowed to book, user profile can have multiple such attributes with different values; TIME_QUOTA - single value, value type: integer, specifies maximum length of time for which student can reserve equipment. CREDIT_CHARGING - multi valued, specifies how much credits will be assigned to user over period of time. (Reservation of devices will cost credits, which will regenerate over time); Security permissions and user preferences can end up as profile or user attributes too: i.e CAN_CREATE_USERS, CAN_POST_NEWS, CAN_EDIT_DEVICES, FONT_SIZE, etc.. This way i could have, for example: students of course A will have profiles STUDENT (with basic attributes) and PROFILE A (wich grants acces to pod A). Students of course B will have profiles: STUDENT, PROFILE B(wich grants to pod B and have increased time quotas). I'm using Spring and Hibernate frameworks for this application and MySQL for database. For this web application i would like to stay within boundaries of these tools. The problem is, that i can't figure out how to best represent all these attributes in database. I also want to create some kind of unified way of retrieveing these attributes and their values. Here is the model i've come up with. Base classes. public abstract class Attribute{ private Long id; Attribute() {} abstract public String getName(); public Long getId() {return id; } void setId(Long id) {this.id = id;} } public abstract class SimpleAttribute extends Attribute{ public abstract Serializable getValue(); abstract void setValue(Serializable s); @Override public boolean equals(Object obj) { ... } @Override public int hashCode() { ... } } Simple attributes can have only one value of any type (including object and enum). Here are more specific attributes: public abstract class IntAttribute extends SimpleAttribute { private Integer value; public Integer getValue() { return value; } void setValue(Integer value) { this.value = value;} void setValue(Serializable s) { setValue((Integer)s); } } public class MaxOrdersAttribute extends IntAttribute { public String getName() { return "Maximum outstanding orders"; } } public final class CreditRateAttribute extends IntAttribute { public String getName() { return "Credit Regeneration Rate"; } } All attributes stored stored using Hibenate variant "table per class hierarchy". 
Mapping: <class name="ru.mirea.rea.model.abac.Attribute" table="ATTRIBUTES" abstract="true" > <id name="id" column="id"> <generator class="increment" /> </id> <discriminator column="attributeType" type="string"/> <subclass name="ru.mirea.rea.model.abac.SimpleAttribute" abstract="true"> <subclass name="ru.mirea.rea.model.abac.IntAttribute" abstract="true" > <property name="value" column="intVal" type="integer"/> <subclass name="ru.mirea.rea.model.abac.CreditRateAttribute" discriminator-value="CreditRate" /> <subclass name="ru.mirea.rea.model.abac.MaxOrdersAttribute" discriminator-value="MaxOrders" /> </subclass> <subclass name="ru.mirea.rea.model.abac.DateAttribute" abstract="true" > <property name="value" column="dateVal" type="timestamp"/> <subclass name="ru.mirea.rea.model.abac.ExpirationDateAttribute" discriminator-value="ExpirationDate" /> </subclass> <subclass name="ru.mirea.rea.model.abac.PodAttribute" abstract="true" > <many-to-one name="value" column="podVal" class="ru.mirea.rea.model.pods.Pod"/> <subclass name="ru.mirea.rea.model.abac.PodAccessAttribute" discriminator-value="PodAccess" lazy="false"/> </subclass> <subclass name="ru.mirea.rea.model.abac.SecurityPermissionAttribute" discriminator-value="SecurityPermission" lazy="false"> <property name="value" column="spVal" type="ru.mirea.rea.db.hibernate.customTypes.SecurityPermissionType"/> </subclass> </subclass> </class> SecurityPermissionAttribute uses enumeration of various permissions as it's value. Several types of attributes imlement GrantedAuthority interface and can be used with Spring Security for authentication and authorization. Attributes can be created like this: public final class AttributeManager { public <T extends SimpleAttribute> T createSimpleAttribute(Class<T> c, Serializable value) { Session session = HibernateUtil.getCurrentSession(); T att = null; ... att = c.newInstance(); att.setValue(value); session.save(att); session.flush(); ... return att; } public <T extends SimpleAttribute> List<T> findSimpleAttributes(Class<T> c) { List<T> result = new ArrayList<T>(); Session session = HibernateUtil.getCurrentSession(); List<T> temp = session.createCriteria(c).list(); result.addAll(temp); return result; } } And retrieved through User Profiles to which they are assigned. I do not expect that there would be very large amount of rows in the ATTRIBUTES table, but are there any serious drawbacks of such design?

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts n all approach to set up a web role on Azure. In this post I’ll describe how to add an SQL Azure database to the project. This will be described with an as minimal as possible amount of code and screen dumps. All questions are welcome in the comments area. Please don’t email since questions answered in the comments field is made available to other visitors. As an example we will add a comments section to the site we used in the previous post (Länk här). Steps: 1. Create a Comments entity and then use Scaffolding to set up controller and view, and add ConnectionString to web.config. 2. Create SQL Azure database in Management Portal and link the new database 3. Test it online!   1. Right click Models folder, choose add, choose “class…” . Name the Class Comment. 1.1 Replace the Code in the class with the following: using System.Data.Entity; namespace MvcWebRole1.Models { public class Comment {    public int CommentId { get; set; }    public string Name { get; set; }      public string Content { get; set; } } public class CommentsDb : DbContext { public DbSet<Comment> CommentEntries { get; set; } } } Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.   1.2 Right click Controllers folder, choose add, choose “class…” . Name the Class CommentController and fill out the values as in the example below.     1.3 Click Add. Visual Studio now creates default View for CRUD operations and a Controller adhering to these and opens them. 1.3 Open Web.config and add the following connectionstring in <connectionStrings> node. <add name="CommentsDb” connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />   1.4 Save All and press F5 to start the application. 1.5 Go to http://127.0.0.1:81/Comments which will redirect you through CommentsController to the Index View which looks like this:     Click Create new. In the Create-view, add name and content and press Create.   1: // 2: // POST: /Comments/Create 3:  4: [HttpPost] 5: public ActionResult Create(Comment comment) 6: { 7: if (ModelState.IsValid) 8: { 9: db.CommentEntries.Add(comment); 10: db.SaveChanges(); 11: return RedirectToAction("Index"); 12: } 13:  14: return View(comment); 15: } 16:    The default View() is Index so that is the View you will come to. Looking like this: 1: // 2: // GET: /Comments/ 3: 4: public ActionResult Index() 5: { 6: return View(db.CommentEntries.ToList()); 7: } Resulting in the following screen dump(success!):   2. Now, go to the Management portal and Create a new db.   2.1 With the new database created. Click the DB icon in the left most menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally click Connections strings in the right menu to get the connection string we need to add in our web.debug.config file.   2.2 Now, take a copy of the connection String earlier added to the web.config and paste in web.debug.conifg in the connectionstrings node. Replace everything within “ “ in the copied connectionstring with that you got from SQL Azure. You will have something like this:   2.3 Rebuild the application, right click the cloud project and choose “Package…” (if you haven’t set up publishing profile which we will do in our next blog post). 
Remember to choose the right config file, use debug for staging and release for production so your databases won’t collide. You should see something like this:   2.4 Go to Management Portal and click the Web Services menu, choose your service and click update in the bottom menu.   2.5 Link the newly created database to your application. Click the LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this. 3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • SharePoint 2010 BDC Model Deployment Issue: “The default web application could not be determined.”

    - by Jan Tielens
    Yesterday I tried to deploy a Business Data Connectivity Model project created in Visual Studio 2010 to my SharePoint 2010 test server (all RTM versions), but during the deployment of the solution, SharePoint threw my following error: Add Solution:  Adding solution 'BCSDemo2.wsp'...  Deploying solution 'BCSDemo2.wsp'...Error occurred in deployment step 'Add Solution': The default web application could not be determined. Set the SiteUrl property in feature BCSDemo2_Feature1 to the URL of the desired site and retry activation.Parameter name: properties A little bit of searching on the internet taught me that I was not the only one having this issue, actually Paul Andrew describes how to solve it in this post. Although Paul describes what to do, his explanation is not, let’s say, very elaborate. :-) So let’s describe the steps a little bit more in detail: Create a new Business Data Connectivity Model project in Visual Studio 2010 and (optionally) implement all your code, change the model etc. When you try to deploy you get the error mentioned above. To fix it, in the Solution Explorer, navigate to and open the Feature1.Template.xml file (the name could be different if you decided to give your feature a different name of course). Add the following XML in the Feature element that’s already there (replace the Value with the URL of your site of course):  <Properties>    <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>  </Properties>The resulting XML should look like:<?xml version="1.0" encoding="utf-8" ?><Feature xmlns="http://schemas.microsoft.com/sharepoint/">  <Properties>    <Property Key='SiteUrl' Value='http://spf.u2ucourse.com'/>  </Properties></Feature> Deploy the solution, now without any issues. :-) What happens now, is that when Visual Studio creates the SharePoint Solution (the WSP file), it will use the Feature template XML to generate the Feature manifest, which will now include the missing property.

    Read the article

  • Help find correct alsa model for onboard sound (alc887?) that will work with jack and have correct mixer setup

    - by Jazz
    I have a Gigabyte GA-MA74GMT-S2 motherboard. I am using Jack for sound - connected to ALSA. I am running Ubuntu 12.04. aplay -l reports card 0: SB [HDA ATI SB], device 0: ALC887 Analog [ALC887 Analog] The problem is that the default setup, that alsa decides to use, causes stuttering and xruns no matter how generous I set the frames/period or periods/buffer etc. Also, Jack works fine if I plug in an external USB sound system and use that. My processor is an AMD phenom x4 945, and I have 8GB ram, and Video card is Geforce GTX550 Ti, all of which should be quite capable enough. I also tried Pulseaudio and that works fine, but I need to use Jack At first I thought it might be an interrupt conflict, but I have found that adding "options snd-hda-intel model=generic" to /etc/modprobe.d/alsa-base.conf causes it to play correctly, but the limited mixer setup lacks controls I need - so this setup isn't good enough. Still, it seems to prove it isn't a hardware conflict. I have tried many other models, such as 3stack, 6stack, auto and even basic, and they all suffer from the stuttering. I eventually found "options snd-hda-intel model=3stack-6ch-intel" works without stuttering, and mixer is much closer to what it needs to be. Can anyone help on how to get a correct and accurate model for ALSA to use? More info on the hardware that might help... *-multimedia description: Audio device product: SBx00 Azalia (Intel HDA) vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 14.2 bus info: pci@0000:00:14.2 version: 00 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=snd_hda_intel latency=32 resources: irq:16 memory:fe024000-fe027fff

    Read the article

  • DDD and Value Objects. Are mutable Value Objects a good candidate for Non Aggr. Root Entity?

    - by Tony
    Here is a little problem. I have an entity with a value object. Not a problem. I replace the value object with a new one; then NHibernate inserts the new value, orphans the old one, then deletes it. OK, that's a problem. Insured is my entity in my domain. He has a collection of Addresses (value objects). One of the addresses is the MailingAddress. When we want to update the mailing address, let's say because the zipcode was wrong, then following Mr. Evans' doctrine we must replace the old object with a new one, since it's immutable (a value object, right?). But we don't want to delete the row, though, because that address's PK is a FK in a MailingHistory table. So, following Mr. Evans' doctrine, we are pretty much screwed here. Unless I make my addresses Entities, so I don't have to "replace" them, and can simply update the zipcode member, like in the good old days. What would you suggest in this case? The way I see it, ValueObjects are only useful when you want to encapsulate a group of database table columns (a component in NHibernate). Everything that has a persistence id in the database is better off made an Entity (not necessarily an aggregate root), so you can update its members without recreating the whole object graph, especially if it's a deeply nested object. Do you concur? Is it allowed by Mr. Evans to have a mutable value object? Or is a mutable value object a candidate for an Entity? Thanks
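    Purely to make the two options concrete, here is a small C# sketch of the shapes being weighed up (illustrative classes, not the poster's actual model): an immutable value object that is corrected by replacement, versus a mutable entity with its own identity that MailingHistory can keep referencing.

        using System;

        // Value object: no identity, equality by value, corrections create a new instance.
        public class Address
        {
            public string Street { get; private set; }
            public string ZipCode { get; private set; }

            public Address(string street, string zipCode)
            {
                Street = street;
                ZipCode = zipCode;
            }

            public Address WithZipCode(string zipCode)
            {
                // The old instance is left behind, which is exactly what produces
                // the orphan-and-delete behaviour described above.
                return new Address(Street, zipCode);
            }
        }

        // Entity: has a persistent identity, so MailingHistory rows can keep
        // pointing at it while the zipcode is corrected in place.
        public class AddressEntity
        {
            public Guid Id { get; private set; }
            public string Street { get; set; }
            public string ZipCode { get; set; }
        }

    The deciding question is usually whether an address has identity and a lifecycle of its own in this domain; once history tables reference it by key, it is behaving like an entity whatever we call it.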

    Read the article

  • Do support sites like Stack Overflow upset the paid-support open source model?

    - by ajax81
    In order to stay relevant in the marketplace, I'm researching new business models for my software company. The open source model with paid support seems like a good fit for our product, but I have concerns about whether or not a paid support model is viable in an era where top-notch help is readily available for free on sites like those in the Stack Exchange network. Case in point: I moved my employees to Ubuntu last year because I didn't want to pay for Win 7 licenses and new hardware (plus, the Mono platform was highly attractive). My staff had no Linux experience, but were able to achieve relative competency in about 120 days with the help of AskUbuntu, Stack Overflow, and a few "For Dummies" books. We did employ an Ubuntu consultant for 7 days to provide training and support, but beyond that spent $0.00 on any kind of paid expertise. As for my due diligence, I ran a 3-month beta of the freemium-paid-support model with one of our smaller customers, and achieved mediocre results. I'd like to think it's because our software is so stable and easy to use that the customer didn't need much paid support, but I suspect that they circumvented the terms of our SLA in the same manner that we did with the move to Ubuntu. Does anyone out there have any thoughts, advice, or experience relevant to the move I'm considering? What worked, what didn't, etc.?

    Read the article

  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view is presenting things? I always end up giving the Model a position that holds x and y, but invariably these values end up being in units of pixels, and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to? This idea was suggested: Don't think of it in units of pixels; think of them as arbitrary distance units that just happen to map to pixels at a 1:1 ratio. Oh, the resolution is half of what it was? We are now taking the x/y coordinates at 50% value for screen display, and your spell's casting range is still 300 units long, which is now 150 pixels. But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating points are unsafe. I think allowing decimal places would eventually cause really weird bugs in my game. *It'd let me write the model once and write different views depending on the device.
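    A minimal C# sketch of the arrangement the quoted suggestion points at, assuming a simple 2D setup (the ModelPosition and Camera names are invented for the example): the model stores abstract world units, and a small view-side helper owns the conversion to pixels, so rounding happens only at the screen boundary and never feeds back into the model.

        using System;

        // Model-side position: abstract world units, never pixels.
        public class ModelPosition
        {
            public float X;
            public float Y;
        }

        // View-side helper: the only place that knows about pixels.
        public class Camera
        {
            private readonly float _pixelsPerUnit;

            public Camera(int screenWidthPixels, int worldWidthUnits)
            {
                // e.g. an 800-unit-wide world on a 400px-wide screen gives 0.5 px/unit
                _pixelsPerUnit = (float)screenWidthPixels / worldWidthUnits;
            }

            public void ToScreen(ModelPosition position, out int pixelX, out int pixelY)
            {
                // Rounding happens here, at the last moment, so fractional results
                // never accumulate in (or leak back into) the model.
                pixelX = (int)Math.Round(position.X * _pixelsPerUnit);
                pixelY = (int)Math.Round(position.Y * _pixelsPerUnit);
            }
        }

    Under this split the model can use fractional world coordinates freely; only the view decides how they land on whole pixels for the current device.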

    Read the article

  • Observing MVC, can/should the Model be instantiated in the ViewController? Or where?

    - by user19410
    I'm writing an experimental iPhone app to learn about the MVC paradigm. I instantiate my Model class in the ViewController class. Is this stupid? I'm asking because storing the id of the Model class, and using it works where it's initialized, but referring to it later (in response to an interface action) crashes. Seemingly, the pointer address of my Model class instance changes, but how can that be? The code in question: @interface Soundcheck_Tone_GeneratorViewController : UIViewController { IBOutlet UIPickerView * frequencyWheel; @public Sinewave_Generation * sineGenerator; } @property(nonatomic,retain) Sinewave_Generation * sineGenerator; @end @implementation Soundcheck_Tone_GeneratorViewController @synthesize sineGenerator; - (void)viewDidLoad { [super viewDidLoad]; [self setSineGenerator:[[Sinewave_Generation alloc] initWithFrequency:20.0]]; // using reference -> fine } // pickerView handling is omitted here... - (void)pickerView:(UIPickerView *)thePickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { [[self sineGenerator] setFrequency:20.0]; // using reference -> crash } @end // the Sinewave_Generation class... only to be thorough. Works fine so far. @interface Sinewave_Generation : NSObject { AudioComponentInstance toneUnit; @public double frequency,theta; } @property double frequency; - (Sinewave_Generation *) initWithFrequency: (int) f; @end @implementation Sinewave_Generation @synthesize frequency; - (Sinewave_Generation *) initWithFrequency: (int) f { self = [super init]; if ( self ) { [self setFrequency: f]; } return self; } @end

    Read the article

  • Activation Error while testing Exception Handling Application Block

    - by CletusLoomis
    I'm getting the following error while testing my EHAB implementation: {"Activation error occured while trying to get instance of type ExceptionPolicyImpl, key "LogPolicy""} System.Exception Stack Trace: StackTrace " at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53 at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.GetExceptionPolicy(Exception exception, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 131 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException(Exception exceptionToHandle, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 55 at Blackbox.Exception.ExceptionMain.LogException(Exception pException) in C:_Work_Black Box\Blackbox.Exception\ExceptionMain.vb:line 14 at BlackBox.Business.BusinessMain.TestExceptionHandling() in C:_Work_Black Box\BlackBox.Business\BusinessMain.vb:line 16 at Blackbox.Service.Service1.TestExceptionHandling() in C:_Work_Black Box\Blackbox.Service\Service.svc.vb:line 43" String Inner Exception: InnerException {"Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl", name = "LogPolicy". Exception occurred while: Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter). 
Exception is: ArgumentException - Event log names must consist of printable characters and cannot contain \, *, ?, or spaces At the time of the exception, the container was: Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl,LogPolicy Resolving parameter "policyEntries" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl(System.String policyName, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] policyEntries) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry,LogPolicy.All Exceptions Resolving parameter "handlers" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry(System.Type exceptionType, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.PostHandlingAction postHandlingAction, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] handlers, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Instrumentation.IExceptionHandlingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler,LogPolicy.All Exceptions.Logging Exception Handler (mapped from Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, LogPolicy.All Exceptions.Logging Exception Handler) Resolving parameter "writer" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler(System.String logCategory, System.Int32 eventId, System.Diagnostics.TraceEventType severity, System.String title, System.Int32 priority, System.Type formatterType, Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter writer) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl,LogWriter.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter, (none)) Resolving parameter "structureHolder" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl(Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder structureHolder, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator updateCoordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder,LogWriterStructureHolder.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder, (none)) Resolving parameter "traceSources" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder(System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.Filters.ILogFilter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] filters, System.Collections.Generic.IEnumerable1[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceSourceNames, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.LogSource, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, 
PublicKeyToken=31bf3856ad364e35]] traceSources, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource allEventsTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource notProcessedTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource errorsTraceSource, System.String defaultCategory, System.Boolean tracingEnabled, System.Boolean logWarningsWhenNoCategoriesMatch, System.Boolean revertImpersonation) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogSource,General Resolving parameter "traceListeners" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogSource(System.String name, System.Collections.Generic.IEnumerable1[[System.Diagnostics.TraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceListeners, System.Diagnostics.SourceLevels level, System.Boolean autoFlush, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper,Event Log Listener (mapped from System.Diagnostics.TraceListener, Event Log Listener) Resolving parameter "wrappedTraceListener" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper(System.Diagnostics.TraceListener wrappedTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator coordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener,Event Log Listener?implementation (mapped from System.Diagnostics.TraceListener, Event Log Listener?implementation) Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter) "} System.Exception My web.config is as follows: <?xml version="1.0"?> <configuration> <configSections> <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> <section name="exceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration.ExceptionHandlingSettings, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> </configSections> <loggingConfiguration name="" tracingEnabled="true" defaultCategory="General"> <listeners> <add name="Event Log Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" source="Enterprise Library Logging" formatter="Text Formatter" log="C:\Blackbox.log" machineName="." 
traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" /> </listeners> <formatters> <add type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" template="Timestamp: {timestamp}{newline}&#xA;Message: {message}{newline}&#xA;Category: {category}{newline}&#xA;Priority: {priority}{newline}&#xA;EventId: {eventid}{newline}&#xA;Severity: {severity}{newline}&#xA;Title:{title}{newline}&#xA;Machine: {localMachine}{newline}&#xA;App Domain: {localAppDomain}{newline}&#xA;ProcessId: {localProcessId}{newline}&#xA;Process Name: {localProcessName}{newline}&#xA;Thread Name: {threadName}{newline}&#xA;Win32 ThreadId:{win32ThreadId}{newline}&#xA;Extended Properties: {dictionary({key} - {value}{newline})}" name="Text Formatter" /> </formatters> <categorySources> <add switchValue="All" name="General"> <listeners> <add name="Event Log Listener" /> </listeners> </add> </categorySources> <specialSources> <allEvents switchValue="All" name="All Events" /> <notProcessed switchValue="All" name="Unprocessed Category" /> <errors switchValue="All" name="Logging Errors &amp; Warnings"> <listeners> <add name="Event Log Listener" /> </listeners> </errors> </specialSources> </loggingConfiguration> <exceptionHandling> <exceptionPolicies> <add name="LogPolicy"> <exceptionTypes> <add name="All Exceptions" type="System.Exception, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="NotifyRethrow"> <exceptionHandlers> <add name="Logging Exception Handler" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" logCategory="General" eventId="100" severity="Error" title="Enterprise Library Exception Handling" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling" priority="0" /> </exceptionHandlers> </add> </exceptionTypes> </add> <add name="WcfExceptionShielding"> <exceptionTypes> <add name="InvalidOperationException" type="System.InvalidOperationException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="ThrowNewException"> <exceptionHandlers> <add type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.FaultContractExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" exceptionMessageResourceType="" exceptionMessageResourceName="This is the message" exceptionMessage="This is the exception" faultContractType="Blackbox.Service.WCFFault, Blackbox.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Fault Contract Exception Handler"> <mappings> <add source="{Guid}" name="Id" /> <add source="{Message}" name="MessageText" /> </mappings> </add> </exceptionHandlers> </add> </exceptionTypes> </add> </exceptionPolicies> </exceptionHandling> <connectionStrings> <add name="CompassEntities" connectionString="metadata=~\bin\CompassModel.csdl|~\bin\CompassModel.ssdl|~\bin\CompassModel.msl;provider=Devart.Data.Oracle;provider connection string=&quot;User Id=foo;Password=foo;Server=foo64mo;Home=OraClient11g_home1;Persist Security Info=True&quot;" providerName="System.Data.EntityClient" /> <add name="BlackboxEntities" 
connectionString="metadata=~\bin\BlackboxModel.csdl|~\bin\BlackboxModel.ssdl|~\bin\BlackboxModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=sqldev1\cps;Initial Catalog=FundServ;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> </configuration> My code is as follows: Public Shared Function LogException(ByVal pException As System.Exception) As Boolean Return ExceptionPolicy.HandleException(pException, "LogPolicy") End Function Any assistance is appreciated.

    Read the article

  • EF4 - possible to mock ObjectContext for unit testing?

    - by steve.macdonald
    Can it be done without using TypeMock Isolator? I've found a few suggestions online, such as passing in a metadata-only connection string, but nothing I've come across besides TypeMock seems to truly allow for a mock ObjectContext that can be injected into services for unit testing. Do I plunk down the $$ for TypeMock, or are there alternatives? Has nobody managed to create anything comparable to TypeMock that is open source?
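
    One commonly suggested free alternative is to stop trying to mock ObjectContext itself and instead hide it behind a narrow interface that the services depend on, which a hand-rolled fake (or any free mocking library) can then replace. A hedged C# sketch of that seam follows; ICustomerRepository, Customer, and MyEntities are illustrative names, not types from the question:

        using System.Collections.Generic;
        using System.Linq;

        // The seam the services are written against.
        public interface ICustomerRepository
        {
            IQueryable<Customer> Customers { get; }
            void Add(Customer customer);
            void SaveChanges();
        }

        // Production implementation wraps the generated EF 4 ObjectContext (MyEntities is assumed).
        public class EfCustomerRepository : ICustomerRepository
        {
            private readonly MyEntities _context = new MyEntities();
            public IQueryable<Customer> Customers { get { return _context.Customers; } }
            public void Add(Customer customer) { _context.Customers.AddObject(customer); }
            public void SaveChanges() { _context.SaveChanges(); }
        }

        // Test double: no database, no TypeMock.
        public class InMemoryCustomerRepository : ICustomerRepository
        {
            private readonly List<Customer> _store = new List<Customer>();
            public IQueryable<Customer> Customers { get { return _store.AsQueryable(); } }
            public void Add(Customer customer) { _store.Add(customer); }
            public void SaveChanges() { /* no-op in tests */ }
        }

    The trade-off is that LINQ to Objects in the fake will not behave identically to LINQ to Entities in every case, so a few integration tests against a real database are still worth keeping.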

    Read the article

  • EF 4, POCO and AddOrUpdate

    - by Eric J.
    I'm trying to create a method on an EF 4 POCO repository called AddOrUpdate. The idea is that the business layer can pass in a POCO object and the persistence framework will add the object if it is new (not yet in the database), else will update the database (once SaveChanges() is called) with the new value. This is similar to some other questions I have asked about EF, but I'm only about 80% there in understanding this so please forgive partial duplication. The part I'm missing is how to update the object graph in my ObjectContext/associated ObjectSet for the passed-in business object once I have determined that the business object indeed already exists in the database (and now has been loaded thanks to TryGetObjectByKey). ApplyCurrentValues sounds sort of like what I want, but it only copies scalar values and doesn't seem intended to update the object graph in the ObjectContext/ObjectSet. Because of my particular use case I don't care about concurrency right now. public void AddOrUpdate(BO biz) { object obj; EntityKey ek = Ctx.CreateEntityKey(mySetName, biz); bool found = Ctx.TryGetObjectByKey(ek, out obj); if (found) { // How do I do what this method name implies? Biz is a parent with children. mySet.TellTheSetToUpdateThisObject(biz); } else { mySet.AddObject(biz); } Ctx.DetectChanges(); }
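
    For what it's worth, here is a hedged sketch of how this often gets fleshed out in EF 4: ApplyCurrentValues copies the scalars from the detached object onto the attached copy that TryGetObjectByKey just loaded, and the child collections still have to be walked by hand, since EF 4 has no single call that updates a whole detached graph. The Children property below is hypothetical:

        public void AddOrUpdate(BO biz)
        {
            object existing;
            EntityKey ek = Ctx.CreateEntityKey(mySetName, biz);

            if (Ctx.TryGetObjectByKey(ek, out existing))
            {
                // Scalars: copy from the detached 'biz' onto the attached entity with the same key.
                Ctx.ApplyCurrentValues(mySetName, biz);

                // Graph: handled explicitly. 'Children' is a made-up navigation property;
                // each child gets the same lookup-then-ApplyCurrentValues-or-AddObject treatment.
                foreach (var child in biz.Children)
                {
                    // repeat the pattern per child set, recursing as deep as the graph goes
                }
            }
            else
            {
                mySet.AddObject(biz);
            }
            Ctx.DetectChanges();
        }

    Whether the children are applied one by one or the whole graph is reloaded and re-mapped is mostly a question of how deep the graph goes and how much concurrency matters later.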

    Read the article

  • LINQ to Entities exceptions (ElementAtOrDefault and CompareObjectEqual)

    - by OffApps Cory
    I am working on a shipping platform which will eventually automate shipping through several major carriers. I have a ShipmentsView UserControl which displays a list of Shipments (returned by Entity Framework), and when a user clicks on a shipment item, it spawns a ShipmentEditView and passes the ShipmentID (RecordKey) to that view. I initially wrestled with trying to get the context from the parent (ShipmentsView) and finally gave up, resolving to get to it later. I wanted to do this to keep a single instance of the context. Anyhow, I now create a new instance of the context in my ShipmentEditViewModel and query against it for the Shipment record. I know I could just pass the record, but I wanted to use the Ocean Framework that Karl Shifflett wrote and don't want to muck about writing new transition methods. So anyhow, I query, and when stepping through I can see that it returns a record, but as soon as execution reaches the point where the query result is assigned to the e.Result property, it throws up one of the following exceptions, depending on the query I used. LINQ to Entities: Dim RecordID As Decimal = CDec(e.Argument) Dim myResult = From ship In _Context.Shipment _ Where ship.ShipID = e.Argument _ Select ship Select Case myResult.Count Case 0 e.Result = New Shipment Case 1 e.Result = myResult(0) Case Else e.Result = Nothing End Select "LINQ to Entities does not recognize the method 'System.Object.CompareObjectEqual(System.Object, System.Object, Boolean)' method, and this method cannot be translated into a store expression." LINQ to Entities via method calls: Dim RecordID As Decimal = CDec(e.Argument) Dim myResult = _Context.Shipment.Where(Function(s) s.ShipID = RecordID) Select Case myResult.Count Case 0 e.Result = New Shipment Case 1 e.Result = myResult(0) Case Else e.Result = Nothing End Select "LINQ to Entities does not recognize the method 'SnazzyShippingDAL.Shipment ElementAtOrDefault[Shipment] (System.Linq.IQueryable`1[SnazzyShippingDAL.Shipment], Int32)' method, and this method cannot be translated into a store expression." I have been trying to get this thing to display a record for like three days. I am seriously thinking about going back and re-engineering it without the MVVM pattern (which I realize I am only starting to learn and understand) if only to make the &$^%ed thing work. Any help will be much appreciated. Cory
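
    Both messages usually trace back to VB details in the snippets above rather than to MVVM: comparing ship.ShipID against the Object-typed e.Argument makes the compiler emit CompareObjectEqual, and indexing the query with myResult(0) is compiled as a call to ElementAtOrDefault; neither can be translated to SQL, and counting the query before indexing it also round-trips the database twice. Below is a hedged C# rendition of the usual workaround (typed local plus FirstOrDefault); ShipID's type and collapsing the zero/one/many cases into a single null check are assumptions made for the example:

        // Assumes ShipID is a Decimal and _Context/Shipment are as in the question.
        decimal recordId = Convert.ToDecimal(e.Argument);  // compare against a typed local,
                                                           // never the Object-typed e.Argument
        Shipment shipment = _Context.Shipment
            .Where(s => s.ShipID == recordId)              // translates cleanly to a store query
            .FirstOrDefault();                             // instead of Count + indexer, which
                                                           // VB compiles to ElementAtOrDefault

        e.Result = shipment ?? new Shipment();             // null -> fresh Shipment, as in Case 0

    In the VB version the same effect comes from comparing against the already-converted RecordID variable and calling .FirstOrDefault() instead of myResult(0).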

    Read the article
