Search Results

Search found 30780 results on 1232 pages for 'object oriented modeling'.

Page 398/1232 | < Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.Xml namespace internally; I then interrogate the relevant properties of the XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress to more complex input files. Is there a pattern for how to test a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and an XmlNamespaceManager and create an XmlElement by hand, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
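
    One way to keep such tests small is to drive the parser from tiny inline XML fragments rather than full input files. Below is a minimal NUnit sketch assuming a hypothetical MyFormatParser with a Parse(XmlDocument) entry point and a Widgets collection on the resulting graph; all of these names are illustrative, not taken from the question.

        using System.Xml;
        using NUnit.Framework;

        [TestFixture]
        public class MyFormatParserTests
        {
            [Test]
            public void Parse_SingleWidgetElement_AddsOneWidgetToGraph()
            {
                // A small, focused fragment instead of a full input file.
                const string xml =
                    "<root xmlns:m='urn:my-format'><m:widget id='42' name='gauge' /></root>";

                var doc = new XmlDocument();
                doc.LoadXml(xml);

                var parser = new MyFormatParser();   // hypothetical parser under test
                var graph = parser.Parse(doc);       // hypothetical entry point

                Assert.AreEqual(1, graph.Widgets.Count);
                Assert.AreEqual(42, graph.Widgets[0].Id);
            }
        }

    As for the refactoring question: extracting a method per element type that accepts a single XmlElement tends to make each handler testable in isolation in exactly this way.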

    Read the article

  • Should I forward a call to .Equals onto .Equals<T>?

    - by Jaimal Chohan
    So, I've got your bog-standard C# object, overriding Equals and implementing IEquatable<Tag>:

        public override int GetHashCode()
        {
            return _name.GetHashCode();
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as Tag);
        }

        #region IEquatable<Tag> Members
        public bool Equals(Tag other)
        {
            if (other == null)
                return false;
            else
                return _name == other._name;
        }
        #endregion

    Now, for some reason, I used to think that forwarding the call from Equals(object) into Equals(Tag) was bad, no idea why; perhaps I read it a long time ago. Anyway, I'd write separate (but logically the same) code for each method. Now I think forwarding Equals(object) to Equals(Tag) is okay, for obvious reasons, but for the life of me I can't remember why I thought it wasn't before. Any thoughts?
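
    For reference, a minimal sketch of the shape this pattern usually takes when Equals(object) forwards to Equals(T); the Tag class and _name field come from the snippet above, while the ReferenceEquals checks are an addition:

        using System;

        public sealed class Tag : IEquatable<Tag>
        {
            private readonly string _name;

            public Tag(string name) { _name = name; }

            public override int GetHashCode()
            {
                return _name.GetHashCode();
            }

            // Forward the untyped overload to the strongly typed one.
            public override bool Equals(object obj)
            {
                return Equals(obj as Tag);
            }

            public bool Equals(Tag other)
            {
                if (ReferenceEquals(other, null)) return false;
                if (ReferenceEquals(other, this)) return true;
                return _name == other._name;
            }
        }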

    Read the article

  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model can be directly related to the accuracy and complexity of a project's requirements. For example, there is no need to work on very detailed data models when the details surrounding a specific data model have not been defined or even clarified. Developing data models while the clarity of project requirements is limited tends to introduce design issues, because the details needed to create an effective data model are not yet known. One way to avoid this issue is to create data models that correspond to the complexity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries about the requirements at a finer-grained level. This allows data models to be composed of general entities initially, when a project's requirements are still vague, and then the entities are refined as new and more substantial requirements are defined or redefined. It also promotes communication amongst all stakeholders within a project as they go through the process of defining and finalizing project requirements. In addition, here are some general tips that can be applied to projects with regard to data modeling:
    - Initially model all data generally and slowly refactor the data model as new requirements and business constraints are applied to the project.
    - Ensure that data modelers have the proper tools and training they need to design a data model accurately.
    - Create a common location for all project documents so that everyone will be able to review a project's data models along with any other project documentation.
    - All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it will be applied within the project.

    Read the article

  • Syncing client and server CRUD operations using JSON and PHP

    - by Justin
    I'm working on some code to sync the state of models between client (a JavaScript application) and server. Often I end up writing redundant code to track the client and server objects so I can map the client-supplied data to the server models. Below is some code I am thinking about implementing to help. What I don't like about the code below is that this method won't handle nested relationships very well; I would have to create multiple object trackers. One workaround is, for each server model after creating or loading, to simply do $model->clientId = $clientId; IMO this is a nasty hack and I want to avoid it. Adding a setClientId method to all my model objects would be another way to make it less hacky, but this seems like overkill to me. Really, clientIds are only good for inserting/updating data in some scenarios. I could go with a decorator pattern, but auto-generating a proxy class seems a bit involved. I could use a generic proxy class that uses a __call function to allow the original object data to be accessed, but this seems wrong too. Any thoughts or comments?

        $clientData = '[{name: "Bob", action: "update", id: 1, clientId: 200},
                        {name: "Susan", action: "create", clientId: 131}]';
        $jsonObjs = json_decode($clientData);

        $objectTracker = new ObjectTracker();
        $objectTracker->trackClientObjs($jsonObjs);

        $query = $this->em->createQuery("SELECT x FROM Application_Model_User x WHERE x.id IN (:ids)");
        $query->setParameters("ids", $objectTracker->getClientSpecifiedServerIds());
        $models = $query->getResults();

        // Apply client data to server model
        foreach ($models as $model) {
            $clientModel = $objectTracker->getClientJsonObj($model->getId());
            ...
        }

        // Create new models and persist
        foreach ($objectTracker->getNewClientObjs() as $newClientObj) {
            $model = new Application_Model_User();
            ....
            $em->persist($model);
            $objectTracker->trackServerObj($model);
        }

        $em->flush();

        $resourceResponse = $objectTracker->createResourceResponse();

        // Id mappings will be an associative array representing server id resources with the client-side id.
        // This method doesn't seem too flexible if we want to return additional data with each resource...
        // Would have to modify the returned data structure; seems like tight coupling...
        // Ex return value:
        // [{clientId: 200, id: 1}, {clientId: 131, id: 33}];

    Read the article

  • How to create realistic 2d lighting using colour temperature

    - by Truncheon
    I'm looking for a lighting algorithm that produces realistic lights expressed in kelvin, from about 2500K to 6500K. What I'm confused about is how to make the lights properly interact with the colors of game objects. If a whole level is fully lit (overcast daylight), then it would seem that I should use just the color of the object. But what if I'm in a closed room with no windows, and there is an incandescent bulb shining in the room? How would that light properly light up the objects in the room? There does not seem to be an obvious solution to the problem, and simply mixing the color of the light with the colors of the object seems an inaccurate approach.
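
    A minimal sketch of the usual approach: convert the colour temperature to an RGB tint and multiply it per channel with the surface colour, normalised to 0..1. The kelvin-to-RGB values below are rough approximations and purely illustrative; a real implementation would use a black-body approximation curve.

        using System;

        public struct Rgb
        {
            public byte R, G, B;
            public Rgb(byte r, byte g, byte b) { R = r; G = g; B = b; }
        }

        public static class Lighting
        {
            // Placeholder lookup; values are approximate and only for illustration.
            public static Rgb KelvinToRgb(int kelvin)
            {
                if (kelvin <= 3000) return new Rgb(255, 177, 110);   // warm, incandescent-ish
                if (kelvin <= 5000) return new Rgb(255, 228, 206);   // neutral-warm
                return new Rgb(255, 255, 255);                       // ~6500 K daylight white
            }

            // Multiplicative tint: each surface channel is scaled by the corresponding
            // light channel (normalised to 0..1) and an intensity factor.
            public static Rgb Lit(Rgb surface, Rgb light, float intensity)
            {
                return new Rgb(
                    (byte)Math.Min(255f, surface.R * (light.R / 255f) * intensity),
                    (byte)Math.Min(255f, surface.G * (light.G / 255f) * intensity),
                    (byte)Math.Min(255f, surface.B * (light.B / 255f) * intensity));
            }
        }

    Under a ~6500 K white light at full intensity the multiply leaves the surface colour unchanged, which matches the fully lit overcast-daylight case described above.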

    Read the article

  • When to use a functional programming approach and when not? (in Java)

    - by john smith optional
    Let's assume I have a task to create a Set of class names. To remove the duplication of .getName() method calls for each class, I used org.apache.commons.collections.CollectionUtils and org.apache.commons.collections.Transformer as follows:

    Snippet 1:

        Set<String> myNames = new HashSet<String>();
        CollectionUtils.collect(
            Arrays.<Class<?>>asList(My1.class, My2.class, My3.class, My4.class, My5.class),
            new Transformer() {
                public Object transform(Object o) {
                    return ((Class<?>) o).getName();
                }
            }, myNames);

    An alternative would be this code:

    Snippet 2:

        Collections.addAll(myNames,
            My1.class.getName(),
            My2.class.getName(),
            My3.class.getName(),
            My4.class.getName(),
            My5.class.getName());

    So, when is using the functional programming approach overhead and when is it not, and why? Isn't my usage of the functional approach in snippet 1 overhead, and why?

    Read the article

  • Making an efficient collision detection system

    - by Sri Harsha Chilakapati
    I'm very new to game development (I just started 3 months ago) and I'm learning by creating a game engine. It's located here. In terms of collision, I only know brute-force detection, in which case the game slows down when there are a large number of objects. So my question is: how should I program the collisions? I want them to happen automatically for every object and to call the object's collision(GObject other) method on each collision. Are there any new algorithms which can make this fast? If so, can anybody shed some light on this topic?
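
    For illustration, one common broad-phase technique is a uniform-grid spatial hash: objects are bucketed by cell, and only objects sharing a cell are tested pairwise. The sketch below is generic C#, not tied to the engine linked above; the interface and names are illustrative.

        using System;
        using System.Collections.Generic;

        public interface ICollidable
        {
            float X { get; }
            float Y { get; }
            float Width { get; }
            float Height { get; }
            void Collision(ICollidable other);    // analogous to collision(GObject other)
        }

        public class SpatialHash
        {
            private readonly int cellSize;
            private readonly Dictionary<long, List<ICollidable>> cells =
                new Dictionary<long, List<ICollidable>>();

            public SpatialHash(int cellSize) { this.cellSize = cellSize; }

            private static long Key(int cx, int cy) { return ((long)cx << 32) ^ (uint)cy; }

            // Register an object in every cell its bounding box touches.
            public void Insert(ICollidable obj)
            {
                int minX = (int)Math.Floor(obj.X / cellSize);
                int maxX = (int)Math.Floor((obj.X + obj.Width) / cellSize);
                int minY = (int)Math.Floor(obj.Y / cellSize);
                int maxY = (int)Math.Floor((obj.Y + obj.Height) / cellSize);
                for (int cx = minX; cx <= maxX; cx++)
                    for (int cy = minY; cy <= maxY; cy++)
                    {
                        List<ICollidable> bucket;
                        if (!cells.TryGetValue(Key(cx, cy), out bucket))
                            cells[Key(cx, cy)] = bucket = new List<ICollidable>();
                        bucket.Add(obj);
                    }
            }

            // Only objects sharing a cell are tested pairwise (simple AABB overlap here).
            public void RunCollisions()
            {
                foreach (var bucket in cells.Values)
                    for (int i = 0; i < bucket.Count; i++)
                        for (int j = i + 1; j < bucket.Count; j++)
                            if (Overlaps(bucket[i], bucket[j]))
                            {
                                bucket[i].Collision(bucket[j]);
                                bucket[j].Collision(bucket[i]);
                            }
            }

            private static bool Overlaps(ICollidable a, ICollidable b)
            {
                return a.X < b.X + b.Width && b.X < a.X + a.Width
                    && a.Y < b.Y + b.Height && b.Y < a.Y + a.Height;
            }

            // Rebuild the grid each tick: Clear, re-Insert all objects, then RunCollisions.
            public void Clear() { cells.Clear(); }
        }

    A real implementation would also de-duplicate pairs that share more than one cell; quadtrees and sweep-and-prune are the other broad-phase structures commonly suggested for this.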

    Read the article

  • API always returns JSONObject or JSONArray Best practices

    - by Michael Laffargue
    I'm making an API that will return data in JSON. I also wanted to make a utility class on the client side to call this API. Something like:

        JSONObject sendGetRequest(Url url);
        JSONObject sendPostRequest(Url url, HashMap postData);

    However, sometimes the API sends back an array of objects: [{id:1},{id:2}]. I now have a few choices:
    - Make the method test for JSONArray or JSONObject and send back an Object that I will have to cast in the caller
    - Make a method that returns JSONObject and one for JSONArray (like sendGetRequestAndReturnAsJSONArray)
    - Make the server always send arrays, even for one element
    - Make the server always send objects wrapping my array

    I'm going with the two last options, since I think it would be a good thing to force the API to send a consistent type of data. But what would be the best practice (if one exists)? Always send arrays? Or always send objects?

    Read the article

  • What is the optimum number of admins to server?

    - by monocasa
    Hello all. I'm starting a business, and I'd like to know what you guys think the optimum admin-to-server ratio is, for financial modeling reasons. Or is there a better metric to use? I come from an embedded programming background, so this is an area that I'm pretty squishy on knowledge-wise. : \ Additional info: there will be a lot of servers, mainly Linux boxes, with about 10% Windows boxes. Thanks in advance!

    Read the article

  • Ubuntu 10.10: getting the appropriate monitor resolution for an LCD HDTV

    - by lurscher
    I'm running Ubuntu 10.10 x86_64 with an Nvidia 9800 GT, and I installed the 270.41.06 Nvidia drivers following this guide. I have an LG42LH30FR LCD TV connected via the DVI link (RGB PC input). I'm able to get 1024x768 without overscan (I can get 1080i = 1366x768, but there is a lot of hidden screen space to the right and I don't know what to do about it). I want to get full HD. I can get full HD (1080p = 1920x1080) on Windows XP 64-bit with a custom resolution created in the Nvidia Control Panel. From reading over xorg.conf configurations it seems I need to add a certain modeline to the monitor configuration, but I don't know where to get the appropriate values for this task. Any suggestions?
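
    For illustration only: the timing numbers for a custom mode should be generated for your own panel (e.g. with cvt 1920 1080 60 or gtf), and the resulting Modeline then goes into the Monitor section of xorg.conf. A hypothetical example, assuming the CVT-generated 1920x1080 line is appropriate for this TV:

        $ cvt 1920 1080 60
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

        Section "Monitor"
            Identifier "LG42LH30FR"
            # Paste the Modeline produced by cvt/gtf for your panel here:
            Modeline   "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
            Option     "PreferredMode" "1920x1080_60.00"
        EndSection

    With the proprietary NVIDIA driver the same custom mode can usually be created through nvidia-settings instead.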

    Read the article

  • SQL Rally Nordic & Amsterdam slides & demos

    - by Davide Mauri
    Last week I had the pleasure to speak at two GREAT conferences (as you can see from the wordclouds I've posted, here for Stockholm and here for Amsterdam; I used two different filtering techniques to produce the wordclouds, that's why they look different. I'm playing a lot with R these days... so I like to experiment with different stuff). The workshop with my friend Thomas Kejser on "Data Warehouse Modeling – Making the Right Choices" and my sessions on "Automating DWH Patterns through Metadata" have been really appreciated by attendees, given the amount of good feedback I had on Twitter and in some blog posts (here and here). Of course many asked for the slides & demos to download, so here you are!
    Automating DWH Patterns through Metadata – Stockholm: http://sdrv.ms/1bcRAaW
    Automating DWH Patterns through Metadata – Amsterdam: http://sdrv.ms/1cNDAex
    I'm still trying to understand if and how I can publicly post the slides & demos of the workshop, so for that you'll have to wait a couple of days. I will post about it as soon as possible. Anyway, if you were in the workshop and would like to get the slides & demos ASAP, just send me an email and I'll happily send the protected link to my SkyDrive folder to you. Enjoy!

    Read the article

  • Lookup Viewer

    - by Geertjan
    The Maven integrated view that I showed yesterday I was able to create because I happened to know that implementations of SubprojectProvider and LogicalViewProvider are in the Lookup of Maven projects. With that knowledge, I was able to use and even delegate to those implementations. But what if you don't know that those implementations are in the Lookup of the Project object? In the case of the Maven Project implementation, you could look in the source code of the Maven Project implementation, at the "getLookup" method. However, any other module could be putting its own objects into that Lookup dynamically, i.e., at runtime. So there's no way of knowing what's in the Lookup of any Project object, or of any other object with a Lookup. But now imagine that you have a Lookup Viewer, as a tool during development, which you would exclude when distributing the application. Whenever new objects are found in the Lookup, the viewer displays them. You could install the Lookup Viewer into NetBeans IDE, or any other NetBeans Platform application, and then get a quick impression of what's actually in the Lookup when you select a different item in the application during development. Here it is (though I vaguely remember someone else writing something similar): Above, a Maven Project is selected. The Lookup Window shows that, among many other classes, implementations of SubprojectProvider and LogicalViewProvider are found in the Lookup when the Maven Project is selected. If an item in the Lookup Window has its own Lookup, the content of that Lookup is displayed as child nodes of the Lookup, etc.; i.e., you can explore all the way down the Lookup of each item found within objects found within the current selection. (What's especially fun is seeing the SaveCookieImpl being added to and removed from the Lookup Window when you make/save a change in a document.) Another example is below, showing the Lookup Window installed in a custom application created during a course at MIT in Boston: A small trick I had to apply is that I always show the previous Lookup, since the current Lookup, when you select one of the Nodes in the Lookup Window, would be the Lookup of the Lookup Window itself! If anyone is interested in this, I can publish the NetBeans module providing the above window to the NetBeans update center.

    Read the article

  • Introducing Code Map for Visual Studio 2012 September CTP

    - by krislankford
    As part of the Visual Studio 2012 CTP for September, Visual Studio got a little sexier at helping you discover and visualize your code. The introduction of the Code Map feature helps complement the variety of other tools included with Visual Studio to help you analyze and visualize your projects and solutions. Code Map leverages the dgml format within Visual Studio that is currently used by the Architecture and Modeling tools. This is a nice addition that gets us from point A to point B a little faster. The great thing about Code Map is that you can access the functionality directly from within your code via the context menu. The Code Map functionality is also context-specific, based on your cursor position. You can evaluate and add items such as methods and variables directly to the Code Map window. As you add items, the Code Map surface is updated to show your new item plus any relationships and dependencies that have been introduced in your code. Something that is also very nice is that the Code Map surface is interactive and allows you to use the F12 key (Go To Definition), which can help you navigate your code, especially if you are adding items that span multiple files or projects. To get started, all you have to do is go out and download the September CTP for Visual Studio 2012, located here. Happy Coding!
    Code Map Window

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background
    The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when:
    - There are no natural or business keys
    - Natural or business keys are bad (change often)
    - The value of the natural or business key is not known at the time of inserting the record
    - Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose

    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changed, so we will have to use the RDBMS's cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers.

    The questions
    - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • How to get the Dash and HUD to appear (and stop Unity spewing error messages)

    - by Ubuntiac
    I just installed Ubuntu 12.04 on my wife's Dell Inspiron 1501, which uses an R300 ATI graphics chip. Neither the Dash nor the HUD appears when I press the appropriate key. When I try unity --reset & in the terminal, I see that over and over it's spitting out:

        r300: CS space validation failed. (not enough memory?) Skipping rendering.

    This is just after starting Ubuntu with no apps open, so I find it hard to believe that just rendering the Dash/HUD is completely blowing out the VRAM. Any suggestions on getting this working? /usr/lib/nux/unity_support_test -p shows:

        OpenGL vendor string: X.Org R300 Project
        OpenGL renderer string: Gallium 0.4 on ATI RS480
        OpenGL version string: 2.1 Mesa 8.0.2
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: yes

    All sections say "yes".

    Read the article

  • The performance implications of IEnumerable vs. IQueryable

    It all started innocently enough. I was implementing an "Older Posts/Newer Posts" feature for my new web site and was writing code like this:

        IEnumerable<Post> FilterByCategory(IEnumerable<Post> posts, string category)
        {
            if (!string.IsNullOrEmpty(category))
            {
                return posts.Where(p => p.Category.Contains(category));
            }
            return posts;
        }
        ...
        var posts = FilterByCategory(db.Posts, category);
        int count = posts.Count();
        ...

    The "db" was an EF object context object, but it could just as...
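
    A sketch of the change the article is driving at: keeping the pipeline typed as IQueryable<Post> lets Where and Count compose into the provider's query (e.g. SQL) rather than filtering an already-materialised sequence in memory. Post is reduced here to the one property used above.

        using System.Linq;

        public class Post { public string Category { get; set; } }

        public static class PostQueries
        {
            // Where/Count now compose into the LINQ provider's query instead of
            // being evaluated in memory over an already-fetched sequence.
            public static IQueryable<Post> FilterByCategory(IQueryable<Post> posts, string category)
            {
                if (!string.IsNullOrEmpty(category))
                    return posts.Where(p => p.Category.Contains(category));
                return posts;
            }
        }

        // Usage, assuming db.Posts is an IQueryable<Post> source such as an EF object set:
        //   var posts = PostQueries.FilterByCategory(db.Posts, category);
        //   int count = posts.Count();   // translated to a COUNT query, not a full fetch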

    Read the article

  • Delay command execution over sockets

    - by David
    I've been trying to fix the game loop in a real-time (tick delay) MUD. I realized that using Thread.Sleep would seem clunky when the user spammed commands through their choice of client (zMUD, etc.); e.g. east;south;southwest would wait three move ticks and then output everything from the past couple of rooms. The game loop basically calls a Flush and Fill method for each socket during each tick (50 ms):

        private void DoLoop()
        {
            Stopwatch stopWatch = new Stopwatch();
            stopWatch.Start();
            while (running)
            {
                // for each socket, flush and fill
                ConnectionMonitor.Update();
                stopWatch.Stop();
                WaitIfNeeded(stopWatch.ElapsedMilliseconds);
                stopWatch.Reset();
            }
        }

    The Fill method fires the command events, but as mentioned before, they currently block using Thread.Sleep. I tried adding a "ready" flag to the state object that attempts to execute the command, along with a queue of spammed commands, but it ends up executing one command and queuing up the rest, i.e. each subsequent command executes something that got queued up that should've been executed before. I must be missing something about the timer.

        private readonly Queue<SpammedCommand> queuedCommands = new Queue<SpammedCommand>();
        private bool ready = true;

        private void TryExecuteCommand(string input)
        {
            var commandContext = CommandContext.Create(input);
            var player = Server.Current.Database.Get<Player>(Session.Player.Key);
            var commandInfo = Server.Current.CommandLookup
                .FindCommand(commandContext.CommandName, player.IsAdmin);

            if (commandInfo != null)
            {
                if (!ready)
                {
                    // queue command
                    queuedCommands.Enqueue(new SpammedCommand()
                    {
                        Context = commandContext,
                        Info = commandInfo
                    });
                    return;
                }

                if (queuedCommands.Count > 0)
                {
                    // queue the incoming command
                    queuedCommands.Enqueue(new SpammedCommand()
                    {
                        Context = commandContext,
                        Info = commandInfo,
                    });

                    // dequeue and execute
                    var command = queuedCommands.Dequeue();
                    command.Info.Command.Execute(Session, command.Context);
                    setTimeout(command.Info.TickLength);
                    return;
                }

                commandInfo.Command.Execute(Session, commandContext);
                setTimeout(commandInfo.TickLength);
            }
            else
            {
                Session.WriteLine("Command not recognized");
            }
        }

    Finally, setTimeout was supposed to set the execution delay (TickLength) for that command, and makeReady just sets the ready flag on the state object to true.

        private void setTimeout(TickDelay tickDelay)
        {
            ready = false;
            var t = new System.Timers.Timer()
            {
                Interval = (long) tickDelay,
                AutoReset = false,
            };
            t.Elapsed += makeReady;
            t.Start(); // fire this in tickDelay ms
        }

        // MAKE READYYYYY!!!!
        private void makeReady(object sender, System.Timers.ElapsedEventArgs e)
        {
            ready = true;
        }

    Am I missing something about the System.Timers.Timer created in setTimeout? How can I execute (and output) spammed commands per TickLength without using Thread.Sleep?

    Read the article

  • Is passing the Model around in this way considered bad practice?

    - by Theomax
    If I have a view called, for example, ViewDetails that displays user information in labels, and it has a model called ViewDetailsModel, and I want to allow the user to click a button to edit some of these details, is it considered bad practice if I pass the entire model in the markup to a controller method, which then assigns the values for another model using the values stored in the model that was passed in as a parameter to that action method? If so, should there instead be a service method that gets the data required for the edit view? For example: in the ViewDetails view, the user clicks the edit button, which calls an action method in the controller (and passes in the model object). The action method then uses the data in the model object to populate another model, which will be used for the EditDetails view that will be returned.
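
    A hedged sketch of the alternative the question mentions: post only the key, and let a service rebuild the edit model from server-side data. IUserService, GetEditDetails and EditDetailsModel are illustrative names, not from the question.

        using System.Web.Mvc;

        public class EditDetailsModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IUserService
        {
            EditDetailsModel GetEditDetails(int id);
        }

        public class DetailsController : Controller
        {
            private readonly IUserService _userService;

            public DetailsController(IUserService userService)
            {
                _userService = userService;
            }

            // The ViewDetails view posts only the user's id, not the whole ViewDetailsModel.
            [HttpPost]
            public ActionResult Edit(int id)
            {
                // Rebuild the edit model from trusted server-side data.
                EditDetailsModel model = _userService.GetEditDetails(id);
                return View("EditDetails", model);
            }
        }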

    Read the article

  • Proper response for a REST insert - full new record, or just the record id value?

    - by Keith Palmer
    I'm building a REST API which allows inserts (POST, not idempotent) and updates (PUT, idempotent) to add/update data in our application. I'm wondering if there are any standards or best practices regarding what data we send back to the client in the response to a POST (insert) operation. We need to send back at least a record ID value (e.g. your new record is record #1234). The options as I see them:
    - Should we respond with the full object? (essentially the same response they'd get back from a "GET /object_type/1234" request)
    - Should we respond with only the new ID value? (e.g. "{ id: 1234 }", which means that if they want to fetch the whole record they need to do an additional HTTP GET request to grab the full record)
    - A redirect header pointing them to the URL for the full object?
    - Something else entirely?
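
    One widely used convention (not mandated by any standard) is to return 201 Created with a Location header pointing at the new resource and the created representation in the body, so clients that only need the id still get it without a second round trip. A sketch using ASP.NET Core conventions, purely as an assumed illustration:

        using Microsoft.AspNetCore.Mvc;

        public class ObjectTypeDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        [ApiController]
        [Route("object_type")]
        public class ObjectTypeController : ControllerBase
        {
            [HttpPost]
            public IActionResult Create(ObjectTypeDto input)
            {
                ObjectTypeDto created = Save(input);   // hypothetical persistence call

                // 201 Created, Location: /object_type/1234, body = the created representation.
                return CreatedAtAction(nameof(GetById), new { id = created.Id }, created);
            }

            [HttpGet("{id}")]
            public IActionResult GetById(int id)
            {
                return Ok(new ObjectTypeDto { Id = id });
            }

            private static ObjectTypeDto Save(ObjectTypeDto input)
            {
                input.Id = 1234;   // stand-in for the real insert
                return input;
            }
        }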

    Read the article

  • Making a collision detection system

    - by Sri Harsha Chilakapati
    I'm very new to game development (I just started 3 months ago) and I've been learning by creating a game engine. It's located here. In terms of collision, I only know brute-force detection, in which case the game slows down when there are a large number of objects. So my question is: how should I program the collisions? I want them to happen automatically for every object and to call the object's collision(GObject other) method on each collision. Are there any new algorithms which can make this fast? If so, can anybody shed some light on this topic? And I'm thinking of making it like Game Maker. Thanks

    Read the article

  • Identify "non-secure" content IE warns about [on hold]

    - by Doug Harris
    As many know, if you serve a page over https and the content loads resources (images, stylesheets, js, SWF objects, etc) over http, older versions of Internet Explorer will show the user a warning saying "This page contains both secure and non-secure items". This is discomforting to many non-technical users. Usually, I can look at the HTML source and identify which item(s) are triggering this error. Sometimes a Flash object will load something else or some embedded javascript will put a new object in the DOM and trigger this. What tools are good for quickly tracking down the source of the warning?

    Read the article

  • design practice for business layer when supporting API versioning

    - by user1186065
    Is there any design pattern or practice recommended for the business layer when dealing with multiple API versions? For example, I have something like this: http://site.com/blogs/v1/?count=10, which calls the business object method GetAllBlogs(int count) to get information, and http://site.com/blogs/v2/?blog_count=20, which calls the business object method GetAllBlogs_v2(int blogCounts). Since the parameter name changed, I created another business method for version 2. This is just one example, but there could be other breaking changes that would require me to create another method to support both versions. Is there any design pattern or best practice for the business/data access layer that I should follow when supporting API versioning?
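
    One common option is to keep a single business-layer method and translate each API version's parameter names/shape in a thin per-version adapter, so breaking changes stay at the API edge. A sketch with illustrative names:

        using System.Collections.Generic;

        public class Blog { public string Title { get; set; } }

        // Single business-layer method shared by every API version.
        public class BlogService
        {
            public IList<Blog> GetAllBlogs(int count)
            {
                // ... data access ...
                return new List<Blog>();
            }
        }

        // Thin per-version adapters translate each version's parameter names/shape.
        public class BlogsApiV1
        {
            private readonly BlogService _service;
            public BlogsApiV1(BlogService service) { _service = service; }

            // /blogs/v1/?count=10
            public IList<Blog> Get(int count) { return _service.GetAllBlogs(count); }
        }

        public class BlogsApiV2
        {
            private readonly BlogService _service;
            public BlogsApiV2(BlogService service) { _service = service; }

            // /blogs/v2/?blog_count=20
            public IList<Blog> Get(int blogCount) { return _service.GetAllBlogs(blogCount); }
        }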

    Read the article

  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement to check whether the left arrow was pressed, it should update only once. I have tested with a print statement that, once pressed, it gets called twice. Problem: the position is updated twice when the key is pressed only once. Below is the structure of my code:

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.LeftArrow))
            {
                print("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD");
            }
        }

    What I found suggested on the web is this:

        if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow)
        {
            print("pressed");
        }

    But it gives me an error that says: Object reference not set to an instance of an object. How can I fix this?
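
    The Event.current variant only works inside OnGUI: outside the IMGUI event pipeline (e.g. in Update), Event.current is not populated, which is the likely source of that null-reference error. A minimal sketch of where that check would live:

        using UnityEngine;

        public class PlayerInput : MonoBehaviour
        {
            // Event.current is only populated while the IMGUI event pipeline runs,
            // i.e. inside OnGUI, not inside Update.
            void OnGUI()
            {
                Event e = Event.current;
                if (e != null && e.type == EventType.KeyDown && e.keyCode == KeyCode.LeftArrow)
                {
                    Debug.Log("left arrow pressed (IMGUI event)");
                }
            }
        }

    If Input.GetKeyDown itself logs twice per press, it is also worth checking whether the script is attached to more than one GameObject in the scene.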

    Read the article
