Search Results

Search found 9916 results on 397 pages for 'entity component'.


  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose a scene contains, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture and is modeled as a long triangle strip that might bifurcate at some point into smaller strips; the meadow is covered with grass. What can be done so the two graphical entities look less like they were cut out of a photo and pasted one on top of the other at the edges? To better picture the problem, imagine a strip of asphalt on a plane covered with grass. The grass texture should also "enter" the tarmac strip a little at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that imposes serious restrictions on how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could fail badly to achieve correct results. So, is there a better, or at least more obvious, way that's widely used in the game dev industry?

    Read the article

  • One Step-Ahead A-Star

    - by Jonathan Dickinson
    I am attempting to create a server-centric RTS (as opposed to the usual parallel synchronised-simulation route of most RTS games today) - however I am still leveraging the discrete N-turns-ahead paradigm discussed by one of the AOE developers on Gamasutra. I have [possibly questionably?] decided that the pathfinding should only ever find the next cell the entity needs to move to, and was wondering if anyone has any clever ideas on how to optimize the algorithm for this specific scenario - or any other ideas on how to keep the pathfinding as lean as possible on the server. I have investigated a few possible algorithms but could only come up with one adaptation: tiered A-Star - relatively large T1 tiles, working out (and caching) each cell as you enter it. Other than that: doing the full A-Star pass and caching the entire path, which might use too much memory if a large number of units are present. I know about naive progressive pathfinding algorithms (if you hit a block, turn in the direction closer to your target, etc.) but they suffer from infinite feedback loops - and very poor pathing even if visited blocks are memorised. Not an option. Many thanks.
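
    [Editor's sketch] Nothing below is from the original post - it is a minimal sketch of the "cache the full path, hand out one cell at a time" idea, with the planner injected as a delegate (whatever full A-Star you already have). All names are illustrative. Replanning happens only when the goal changes or the cached next step becomes blocked, so most turns cost a queue peek rather than a search:

        using System;
        using System.Collections.Generic;
        using System.Drawing; // Point stands in for a grid cell

        public class StepAheadPathfinder
        {
            private readonly Func<Point, Point, IEnumerable<Point>> plan; // full planner, e.g. A-Star
            private readonly Func<Point, bool> isWalkable;
            private Queue<Point> cached = new Queue<Point>();
            private Point cachedGoal;

            public StepAheadPathfinder(Func<Point, Point, IEnumerable<Point>> plan,
                                       Func<Point, bool> isWalkable)
            {
                this.plan = plan;
                this.isWalkable = isWalkable;
            }

            // Returns only the next cell to step to, or null if no path exists.
            public Point? NextCell(Point current, Point goal)
            {
                // Invalidate the cache if the goal moved or the next step is now blocked.
                if (cached.Count == 0 || goal != cachedGoal || !isWalkable(cached.Peek()))
                {
                    cached = new Queue<Point>(plan(current, goal));
                    cachedGoal = goal;
                    if (cached.Count > 0 && cached.Peek() == current)
                        cached.Dequeue(); // drop the start cell if the planner includes it
                }
                return cached.Count > 0 ? cached.Dequeue() : (Point?)null;
            }
        }

    The memory cost is one queue per moving unit, and stationary units hold nothing once their queue drains.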

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. MyData is defined similarly for the Azure SDK and the Enzo Azure API, except that the classes inherit from different base classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes used with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the variables used: _tableName stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery handles them for you. Also note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities with dozens of properties your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. Note that the mapping being performed is optional; it is desirable when you want to retrieve specific properties of the entities (not all of them) to reduce network traffic. If you do not specify the properties you want, all the properties are returned; in this example we return the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. It also makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls with appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …) and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but performance tests show that the Enzo Azure API delivers the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object-mapping approach is different, so the slight performance increase is likely due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data, the Enzo Azure API appears to deliver faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, so here is a test that returned about 10,000 entities (1KB each), averaged over 5 runs. The Azure SDK performed a sequential fetch while the Enzo Azure API used the List fetch strategy; the fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the fetch strategy's results are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD, as well as a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Now Available: Profit November 2012

    - by user462779
    The November 2012 issue of Profit is now available. In the five years I've worked on Profit, there has been measurable interest in content related to project management. Stories featuring project management as a key component have resulted in extra clicks, likes, and RTs (for you Twitter users) from our readers. I've chatted about this with Oracle customers, partners, and experts and received an assortment of ideas about why this might be. This issue of Profit is a bit of a culmination of those conversations, and the trends that are driving interest in project management best practices.

    Also, two online developments for Profit: check out my newly relaunched blog, Editor's Notebook, at blogs.oracle.com/profit, where readers can get a peek at the development of each issue of Profit as it happens. We've also launched a new LinkedIn group for our social media-inclined readers.

    In this issue:

    - Three Keys to Project Management: What can organizations with world-class project management teach the rest of us?
    - Strong Medicine: Gilead Sciences simplifies business processes to establish a foundation for continued growth.
    - Architects of Reform: Enterprise architecture plays an essential role in establishing Oregon as a leader in healthcare reform.
    - Answering the Call: Turkcell CIO Ilker Kuruoz finds IT-powered growth and innovation to be the calling card for success.
    - Projected Results: Sound project management practices and technology can have an immediate impact on the bottom line.
    - Preparing for Impact: Plans for dealing with enterprise information will define the big data winners.

    Is one issue of Profit not enough to get you through to February? Visit the Profit archives, or follow @OracleProfit on Twitter for a daily dose of enterprise technology news from Profit.

    Read the article

  • Oracle Identity Manager ADF Customization

    - by Arda Eralp
    This blog entry includes an example of customizing the Oracle Identity Manager (OIM) Self Service screen. Before customization, all users who can log in to OIM Self Service can see the "Administration" tab in the left menu. In this example we create a "Manager" role so that only users who have the Manager role can see the "Administration" tab.

    Step 1: Create the "Manager" role.
    Step 2: Create a sandbox.
    Step 3: Customize the ADF view:
    - Select "Customize" on the top menu.
    - Select "Source" instead of "Design" at the top.
    - Select the "Administration" tab (highlighted with a blue rectangle) and edit the component.
    - Edit "visible" with the expression builder: #{oimcontext.currentUser.roles['Manager'] != null}
    - Apply.
    Step 4: Apply to all and publish the sandbox.

    Notes: the following objects can be used in expressions:
    - #{oimcontext.currentUser['ATTRIBUTE_NAME']}
    - #{oimcontext.currentUser['UDF_NAME']}
    - #{oimcontext.currentUser.roles}
    - #{oimcontext.currentUser.roles['SYSTEM ADMINISTRATORS'] != null} (Boolean)
    - #{oimcontext.currentUser.adminRoles['OrclOIMSystemAdministrator'] != null} (Boolean)

    Read the article

  • NetBeans 7.1 RC1 now available - JavaFX 2, Enhanced Java Editor, Improved JavaEE, WebLogic 12 support

    - by arungupta
    NetBeans 7.1 RC1 is now available! What's new in NetBeans 7.1?

    - Support for JavaFX 2: full compile/debug/profile development cycle, many editor enhancements, deployment tools, customized UI controls using CSS3
    - Enhanced Java editor: upgrade projects completely to JDK 7, import statement organizer, rectangular block selection, getters/setters included in refactoring
    - Java EE: 50+ CDI improvements, RichFaces4 and ICEFaces2 component libraries, EJB Timer creation wizard, code completion for table, column, and PU names
    - CSS3, GUI Builder, Git, Maven3, and several other features listed at New and Noteworthy

    Download and give us your feedback using NetBeans Community Acceptance Testing by Dec 7th. Check out the latest tutorials. To me the best part was creating a Java EE 6 application, deploying it on GlassFish, and then re-deploying the same application by changing the target to Oracle WebLogic Server 12c (internal build) - and seeing the same application deployed to both servers. Don't miss the Oracle WebLogic Server 12c Launch Event on Dec 1. You can provide additional feedback about NetBeans on mailing lists and forums, file reports, and contact us via Twitter. The final release of NetBeans IDE 7.1 is planned for December.

    Read the article

  • Submitting software to a competition, it becomes their property?

    - by myrkos
    So I'm about to submit a game to a competition, but as I looked through the rules a chunk grabbed my attention:

        "All Entries become the sole and exclusive property of Sponsor and will not be acknowledged or returned. Sponsor shall own all right, title and interest in and to each Entry, including without limitation all results and proceeds thereof and all elements or constituent parts of Entry (including without limitation the Mobile App, the Design Documents, the Video Trailer, the Playable and all illustrations, logos, mechanicals, renderings, characters, graphics, designs, layouts or other material therein) and all copyrights and renewals and extensions of copyrights therein and thereto. Without limitation of the foregoing, each Eligible Entrant shall and hereby does absolutely and irrevocably assign and transfer all of his or her right, title and interest in his or her Entry to Sponsor, and Sponsor shall have the right and may authorize others to use, copy, sublicense, transmit, modify, manipulate, publish, delete, reproduce, perform, distribute, display and otherwise exploit the Entry (and to create and exploit derivative works thereof) in any manner, including without limitation to embody the Entry, in whole or in part, in apps and other works of any kind or nature created, developed, published or distributed by Sponsor and to and register as a trademark in any country in Sponsor's name any component of the Entry, without such Eligible Entrant reserving any rights or claims with respect thereto. Sponsor shall have the exclusive right, in perpetuity, throughout the Territory to change, adapt, modify, use, combine with other material and otherwise exploit the Entry in all media now known or hereafter devised and in any manner, in its sole and absolute discretion, without the need for any payment or credit to Entrant."

    So the game will become the sponsor's property; however, they don't ask for source code. So will I still own the rights to the source code, whatever that means? And if it doesn't win said competition, will I be able to publish it myself without their trademarks? I am very new to software legality stuff, so I would appreciate any clarification. Since there's a possibility I won't even own the source, is it possible to make the game's core engine open source software with a not-very-restrictive license and include that in the project, so I at least still own the game engine? Or does it not work that way?

    Read the article

  • Do objects maintain identity under all non-cloning conditions in PHP?

    - by Buttle Butkus
    PHP 5.5. I'm doing a bunch of passing around of objects on the assumption that they will all maintain their identities - that any changes made to their state from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? Here is my basic structure:

        class builder {
            protected $foo_ids = array(); // set in construct
            protected $foo_collection;
            protected $bar_ids = array(); // set in construct
            protected $bar_collection;

            protected function initFoos() {
                $this->foo_collection = new FooCollection();
                foreach($this->foo_ids as $id) {
                    $this->foo_collection->addFoo(new foo($id));
                }
            }

            protected function initBars() {
                // same idea as initFoos
            }

            protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) {
                // arguments are passed in using $this->foo_collection and $this->bar_collection
                foreach($foos as $foo_obj) { // (foo_collection implements IteratorAggregate)
                    $bar_ids = $foo_obj->getAssociatedBarIds();
                    if(!empty($bar_ids)) {
                        $bar_collection = new barCollection(); // sub-collection to be a component of each foo
                        foreach($bar_ids as $bar_id) {
                            $bar_collection->addBar(new bar($bar_id));
                        }
                        $foo_obj->addBarCollection($bar_collection);
                        // now each foo_obj has a collection of bar objects, each of which is
                        // also in the main collection. Are they the same objects?
                    }
                }
            }
        }

    What has me worried is that foreach supposedly works on a copy of its arrays. I want all the $foo and $bar objects to maintain their identities no matter which collection object they become a part of. Does that make sense?

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:

    - Support for Managing Co-administrators: set up account co-administrators to allow others to share service management duties for each Azure subscription
    - Import/Export support for SQL Databases: export existing SQL Azure databases to blob storage using SQL Server 2012's BACPAC format; create a new SQL Azure database from an existing BACPAC stored in blob storage
    - Storage Container Management and Access Control: create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them
    - Improved Cloud Service Status Notifications: detailed health status information about cloud services and roles as they transition between states
    - Virtual Machine Experience Enhancements: option to automatically delete corresponding VHD files from blob storage when deleting VM disks
    - Service Bus Management and Monitoring: ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions; rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics; entity status (Topic, Queue, or Subscription) can be changed interactively via the dashboard; direct links to the Access Control Services (ACS) namespaces when working with service bus access keys
    - Media Services Monitoring Support: monitor encoding jobs that are queued for processing, as well as active, failed and queued tasks for encoding jobs

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted. Reference ID: P7VVJCM38V8R

    Read the article

  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit, and these changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE.

    - The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units.
    - Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units.
    - Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings.
    - My code uses assertions to check for out-of-bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions.

    The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero's version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn't get any other assertions or exceptions if your code is correct.

    Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit, which I will continue to make available for those using older versions of Delphi.

    Read the article

  • SOA Composite Sensors : Good Practice

    - by angelo.santagata
    I was discussing an interesting design problem with a colleague of mine, Niall (his blog), on the topic of how to cancel an in-flight SOA Composite process. Obviously one way to do this is to cancel the process from Enterprise Manager (http://host:port/em), however we were thinking this isn't a "user friendly" way of doing it. If you look at Niall's blog you'll see he's highlighted a number of different APIs which enable you to manipulate the SCA instance, e.g.:

    - a code snippet to purge (delete) an instance
    - how to determine the instanceId from a composite sensor value using the "composite_sensor_value" table
    - how to determine a BPEL process status using the cube_instance table

    Now, all of these require that you know the instanceId of your SOA Composite - how does one find this out? The easiest way is to create a composite sensor on the SCA component. A composite sensor is simply a way of publishing a piece of business data as part of your composite. The magic here is that you can later query composites based on this value. So a good best practice is that for any composites you create, consider publishing a composite sensor value using a primary key of some sort, e.g. orderId; that way, if you need to manipulate or query composites, you can easily look up the instanceId using the sensor value. For information on how to create a composite sensor, see this documentation link.

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

        {
            "type": "Bike",
            "color": "Red",
            "commentIds": [3193, 3194],
            "attachmentIds": [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution, while allowing a service access to a 'service fetching helper' seems wrong - that should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving this up to the right service object.
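
    [Editor's sketch] For illustration, here is what constructor injection of those two collaborators looks like - shown in C# for consistency with the other snippets on this page, but the shape is identical in PHP, and a Doctrine EntityManager can be injected the same way. All names are mine, mirroring the post's classes:

        using System.Collections.Generic;

        public class Comment { }
        public class Upload { }

        public interface ICommentService { IList<Comment> FetchByIds(int[] ids); }
        public interface IFileUploadService { IList<Upload> FetchByIds(int[] ids); }

        public class ServiceLineItem
        {
            private readonly ICommentService comments;
            private readonly IFileUploadService uploads;

            // Collaborators are supplied by whoever builds the service (application
            // bootstrap or a DI container), so this class never reaches for a locator.
            public ServiceLineItem(ICommentService comments, IFileUploadService uploads)
            {
                this.comments = comments;
                this.uploads = uploads;
            }

            public void StoreBike(string color, int[] commentIds, int[] attachmentIds)
            {
                IList<Comment> c = comments.FetchByIds(commentIds);
                IList<Upload> u = uploads.FetchByIds(attachmentIds);
                // ...persist the line item and associate c and u with it...
            }
        }

    Constructor injection (rather than setters) also keeps the object valid from birth: it can never exist in a half-wired state.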

    Read the article

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but my shadow map is all white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:

        float4x4 xWorld;
        float4x4 xView;
        float4x4 xProjection;

        struct VertexToPixel
        {
            float4 Position  : POSITION;
            float4 ScreenPos : TEXCOORD1;
            float  Depth     : TEXCOORD2;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Technique: ShadowMap --------
        VertexToPixel MyVertexShader(float4 inPos : POSITION0, float3 inNormal : NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;

            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

            Output.Position = mul(inPos, mul(xWorld, preViewProjection));
            Output.Depth = Output.Position.z / Output.Position.w;
            Output.ScreenPos = Output.Position;

            return Output;
        }

        float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
            return Output.Color;
        }

        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 MyVertexShader();
                PixelShader = compile ps_2_0 MyPixelShader();
            }
        }

    Read the article

  • ASP.NET MVVM Handling multiple Data Transfer Objects on a single page

    - by meffect
    I have an ASP.NET MVC "edit" page which allows the user to make edits to the parent entity, and also to "create" child entities on the same page. Note: I'm making these data transfer objects up.

        public class CustomerViewModel
        {
            public int Id { get; set; }
            public Byte[] Timestamp { get; set; }
            public string CustomerName { get; set; }
            // etc...
            public CustomerOrderCreateViewModel CustomerOrderCreateViewModel { get; set; }
        }

    In my view I have two HTML forms: one for Customer "edit" HTTP posts, and the other for CustomerOrder "create" HTTP posts. In the view page, I load the CustomerOrder "create" form using:

        <div id="CustomerOrderCreate">
            @Html.Partial("Vendor/_CustomerOrderCreatePartial", Model.CustomerOrderCreateViewModel)
        </div>

    The CustomerOrder HTML form posts to a different controller action than the Customer "edit" form does. My concern is the CustomerOrder controller's HttpPost action:

        [HttpPost]
        public ActionResult Create(CustomerOrderCreateViewModel vm)
        {
            if (!ModelState.IsValid)
            {
                return [What Do I Return Here]
            }
            ...[Persist to database code]...
        }

    I don't know what to return if the model state isn't valid. Right now it's not a problem, because jQuery unobtrusive validation handles validation on the client. But what if I need more complex validation (i.e., the server needs to handle the validation)?
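
    [Editor's sketch] One common answer (a suggestion, not something from the post) is to re-render the same partial with the invalid model, typically with the form posted via AJAX and the response swapped into the #CustomerOrderCreate div; ModelState still holds the errors, so the validation messages appear next to the fields. The CustomerId property here is illustrative:

        [HttpPost]
        public ActionResult Create(CustomerOrderCreateViewModel vm)
        {
            if (!ModelState.IsValid)
            {
                // Send the form back with its validation messages populated.
                return PartialView("Vendor/_CustomerOrderCreatePartial", vm);
            }
            // ...persist to database...
            return RedirectToAction("Edit", "Customer", new { id = vm.CustomerId }); // illustrative
        }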

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves:

    - a dependent consumer,
    - a declaration of a component's dependencies, defined as interface contracts,
    - an injector that creates instances of classes that implement a given dependency interface on request.

    Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the inverse direction, can dependency injection be compared to using some kind of higher-order function?
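
    [Editor's sketch] To make the parallel concrete, here is the same consumer written both ways in C# (my own illustration, not from the post) - first with an interface contract injected at the call site, then with the contract shrunk to a bare function signature, exactly like filter's (a -> Bool) argument:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class HofAsDi
        {
            // OO-style dependency injection: the consumer depends on a contract.
            interface IPredicate { bool Test(int x); }
            class IsEven : IPredicate { public bool Test(int x) { return x % 2 == 0; } }

            static List<int> Filter(IPredicate p, List<int> xs)
            {
                return xs.Where(p.Test).ToList(); // consumer uses the injected contract
            }

            static void Main()
            {
                var xs = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

                // The call site is the "injector", choosing the IsEven implementation.
                var evens1 = Filter(new IsEven(), xs);

                // Higher-order style: the contract is just the function's signature.
                var evens2 = xs.Where(x => x % 2 == 0).ToList();

                Console.WriteLine(string.Join(",", evens1)); // 2,4,6,8,10
                Console.WriteLine(string.Join(",", evens2)); // 2,4,6,8,10
            }
        }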

    Read the article

  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX. Now I want to code the AI. I haven't written the AI code yet; I've simply designed it. The AI will have two components: one focused on tactics and resource management (create troops, determine who has a strategic advantage, detect important objectives, etc.) and an individual component focused on assigning work to each unit, examining its possibilities, and moving the unit. Now I'm facing an important problem. The map where the action takes place is a grid-based map, and each terrain type has a different movement cost. I read about pathfinding and I think A* is a very good option to determine a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1), and my tactical AI has found an objective at a distance of d = 20 tiles (Manhattan distance) from the unit. My problem is the following: the unit won't be able to reach the objective in one turn, so the AI will have to store a list of positions and execute them over several turns (see the sketch below). I don't know how to solve this. PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
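
    [Editor's sketch] One standard way to handle this: run A* once, keep the whole path on the unit, and each turn consume as much of it as the movement budget allows. A minimal illustration (C# for consistency with the other snippets on this page; it ports directly to Java - all names are mine):

        using System;
        using System.Collections.Generic;
        using System.Drawing; // Point stands in for a grid cell

        public class UnitMover
        {
            public Queue<Point> PlannedPath = new Queue<Point>(); // output of one A* run

            // Spend this turn's movement points walking along the cached path.
            public void ExecuteTurn(int movementPoints, Func<Point, int> tileCost,
                                    Action<Point> moveTo)
            {
                while (PlannedPath.Count > 0)
                {
                    Point next = PlannedPath.Peek();
                    int cost = tileCost(next);          // terrain-dependent cost
                    if (cost > movementPoints) break;   // save the rest for next turn
                    movementPoints -= cost;
                    moveTo(PlannedPath.Dequeue());      // advance one tile
                }
                // If the map changed since planning (a tile became blocked, an enemy
                // appeared), discard the stale path and re-run A* next activation.
            }
        }

    This way the expensive search happens once per objective rather than once per turn, and the per-turn "selectionMarks" flood fill stays available for player-facing movement previews.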

    Read the article

  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes Datomic as a system which implicitly deals with the timestamps associated with data storage. From my experience, data is often imperfectly stored in systems, and on many occasions needs to be retroactively corrected (i.e., often the question "was a True on Tuesday at 12:00pm?" will have an incorrect answer stored in the database). This seems like a spot where the abstractions behind Datomic might break - do they? If they don't, how does the system handle such corrections?

    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries.

    However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?

    Read the article

  • I need an approach to the problem of preventing inserting duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A web service that I call returns a list of data. The data from the web service is updated periodically, so a call to the web service done in one hour could return the same data as a call done an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the web service search, and duplicate data is very likely to be returned (especially for historical data). However, I don't want to insert this duplicate data into the database. I've created a db table in which the data is stored; the most important columns are:

        Id int autoincrement PK
        Date date not null        -- The date to which the data set belongs.
        LastUpdate date not null  -- The date the data set was last updated.
        UserName varchar(50)      -- The name of the user doing the search.

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, that would be a plus. Another way of interpreting what I'm asking for is as follows: I have a million unique records in my table. A user does a new search, and the search results contain around 300k records that are already in the db. An efficient solution for finding and inserting only the unique records is needed (see the sketch below).
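
    [Editor's sketch] One approach, assuming some combination of columns forms a natural key for a record (all type and property names below are mine, not from the question): back the key with a UNIQUE index in SQL Server as the real safety net, then filter each fetched batch against the keys already stored so you never attempt the duplicate inserts at all:

        using System.Collections.Generic;
        using System.Linq;

        // MyContext/Record are illustrative EF types; Add is DbContext-style
        // (with an EF 4.0 ObjectContext it would be AddObject).
        public void InsertOnlyNewRecords(MyContext db, IList<Record> fetched)
        {
            var dates = fetched.Select(r => r.Date).Distinct().ToList();

            // Load only the keys that could collide (rows for the same dates),
            // not the whole million-row table.
            var existing = new HashSet<string>(
                db.Records
                  .Where(r => dates.Contains(r.Date))
                  .AsEnumerable()
                  .Select(r => r.Date.ToString("yyyyMMdd") + "|" + r.NaturalKey));

            foreach (var r in fetched)
            {
                // HashSet.Add returns false for duplicates, which also catches
                // duplicates within the fetched batch itself.
                if (existing.Add(r.Date.ToString("yyyyMMdd") + "|" + r.NaturalKey))
                    db.Records.Add(r);
            }
            db.SaveChanges();
        }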

    Read the article

  • Efficient path-finding in free space

    - by DeadMG
    I've got a game situated in space, and I'd like to issue movement orders, which requires pathfinding. Now, it's my understanding that A* and such mostly apply to trees, and not empty space which does not have pathfinding nodes. I have some obstacles, which are currently expressed as fixed AABBs - that is, there is no unbounded "terrain" obstacle. In addition, I expect most obstacles to be reasonably approximable as cubes or spheres. So I've been thinking of applying a much simpler pathfinding algorithm: simply cast a ray from the current position to the target position, and then get a list of obstacles relatively quickly using spatial partitioning. What I'm not so sure about is how to determine the part where the ordered unit manoeuvres around the obstacles. What I've been thinking so far is that I will simply use potential fields - that is, all units will feel a strong repulsive force away from each other and a moderate force towards the desired point (a small sketch of this force computation appears below). This also has the advantage that to issue group orders, I can simply apply a mid-level attractive force towards another entity. But this obviously won't achieve the optimal solution. Will potential fields achieve a reasonable approximation given my parameters, or do I need another solution?
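
    [Editor's sketch] For concreteness, one steering step of such a potential field might look like this (illustrative names and constants, not from the question): a constant-magnitude pull toward the goal, plus a push away from each nearby obstacle or unit that falls off with the square of the distance.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class Steering
        {
            public static Vector3 FieldForce(Vector3 pos, Vector3 goal,
                                             IEnumerable<Vector3> repulsors,
                                             float attract = 1f, float repel = 50f)
            {
                Vector3 toGoal = goal - pos;
                Vector3 force = toGoal.LengthSquared() > 1e-6f
                    ? Vector3.Normalize(toGoal) * attract
                    : Vector3.Zero;                  // already at the goal

                foreach (Vector3 r in repulsors)
                {
                    Vector3 away = pos - r;
                    float d2 = away.LengthSquared();
                    if (d2 < 1e-3f) continue;        // coincident; skip to avoid NaN
                    force += Vector3.Normalize(away) * (repel / d2);
                }
                return force; // integrate into the unit's velocity each tick
            }
        }

    The classic caveat with pure potential fields is local minima: a unit can stall in a pocket where the attractive and repulsive forces cancel, which is why they are usually paired with at least a coarse global planner.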

    Read the article

  • Unified data source for k2 installed Joomla websites

    - by Özkan ÖZLÜ
    I am responsible for a few web sites of my organization. I use Joomla! 2.5.9 for those web sites, and they all run on the same server. I use the K2 component for content management. I have a general website which shows all the staff information on its 'Staff' page; some of those people and their content are also shown on other departments' websites. There is a separate database for each web site. For example: on the general website (let's say general.org), when I click on the 'Staff' menu item, the page shows all of the people who work at my organization, across different departments. On another web site (e.g. education.general.org), when I click on the 'Staff' menu item, it shows the people who work in the education department. But each web site has its own user accounts, which means a modification on one of them does not affect the others. If one of the education staff changes his profile picture on the education web site, he also has to do it on the general web site. And sometimes one person might work in two departments, so he has to edit his data three times. Is it possible to merge the records for all websites? In other words, I want everyone to insert/update their data on the general web site, and have the other web sites updated automatically.

    Read the article

  • Pros and Cons of Facebook's React vs. Web Components (Polymer)

    - by CletusW
    What are the main benefits of Facebook's React over the upcoming Web Components spec, and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)? According to this JSConf EU talk and the React homepage, the main benefits of React are:

    - Decoupling and increased cohesion using a component model
    - Abstraction, composition and expressivity
    - Virtual DOM & synthetic events (which basically means they completely re-implemented the DOM and its event system)
    - Enables modern HTML5 event stuff on IE 8
    - Server-side rendering
    - Testability
    - Bindings to SVG, VML, and <canvas>

    Almost everything mentioned is being integrated into browsers natively through Web Components, except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel?

    Here are some things I think React is missing that Web Components will take care of. Correct me if I'm wrong:

    - Native browser support (read "guaranteed to be faster")
    - Write script in a scripting language, write styles in a styling language, write markup in a markup language
    - Style encapsulation using Shadow DOM (React instead has this, which requires writing CSS in JavaScript; not pretty)
    - Two-way binding

    Read the article

  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices with regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following:

        [presentation layer] - [business layer] - [data layer] - (database)

    Getting information from the database into the presentation layer is simple and abstracted: it's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object will save itself on various operations (which seems inefficient) or on disposal (which seems like it just wouldn't work, or may be unwieldy and unreliable). Should my "savable" business objects all implement a particular "ISavable" or "IDatabaseObject" interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TLDR question, I suppose, is: how does the presentation layer trigger database changes?
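
    [Editor's sketch] One widely used shape for this (a sketch of the repository idea, not the only answer) is for the business layer to expose explicit Save/Delete operations, so the presentation layer triggers changes by calling them and nothing persists as a side effect of disposal. All names below are illustrative:

        public interface IRepository<T>
        {
            T GetById(int id);
            void Save(T entity);   // insert or update
            void Delete(T entity);
        }

        public class Order { public int Id; public string Name; }

        public class OrderService
        {
            private readonly IRepository<Order> orders;
            public OrderService(IRepository<Order> orders) { this.orders = orders; }

            public void RenameOrder(int id, string newName)
            {
                Order order = orders.GetById(id);
                order.Name = newName;
                orders.Save(order); // the explicit trigger the question asks about
            }
        }

    With Entity Framework or LINQ to SQL, the repository is typically a thin wrapper over the context, and a unit-of-work object decides when SaveChanges/SubmitChanges actually fires.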

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms, the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout the series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!
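
    [Editor's sketch] In outline, the MVC pattern the article builds toward looks like this - the controller hands the records to a strongly typed view, and the view writes the <table> markup itself. Names and data access are illustrative, not the article's actual walkthrough:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Product { public string Name { get; set; } public decimal Price { get; set; } }

        public class ProductsController : Controller
        {
            public ActionResult Index()
            {
                List<Product> products = GetProducts(); // e.g. via LINQ to SQL or EF
                return View(products);                  // strongly typed view renders the grid
            }

            private List<Product> GetProducts() { /* data access code */ return new List<Product>(); }
        }

        // And in the view, the grid markup is written by hand, e.g.:
        //   <table>
        //   <% foreach (var p in Model) { %>
        //     <tr><td><%: p.Name %></td><td><%: p.Price %></td></tr>
        //   <% } %>
        //   </table>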

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined the question to focus on the core issue.

    Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume information collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:
    - Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house?
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper insights like building type, year built, features)?

    In addition to the average price, the application should support other property info - maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be:
    - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    - What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW-related, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further.

    My findings so far (I could be wrong on the following, so please correct me if you think so):
    - I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues.
    - DB design: as I understand it, an RDBMS works well if you know the data model at design time, but I expect the attributes of a property, or of other entities (user), to evolve quickly, so maintenance would be an issue. And with multiple users executing queries at the same time, performance would be a bottleneck.
    - Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using those general-purpose tools to solve my problem feels like assembly programming).
    - Big-data solutions are aimed at analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for property listing or similar portals.)

    Read the article

  • One-way platform collision

    - by TheBroodian
    I hate asking questions that are specific to my own code like this, but I've run into a pesky roadblock and could use some help getting around it. I'm coding floating platforms into my game that will allow a player to jump onto them from underneath, but will not allow the player to fall through them once they are on top, which requires some custom collision detection code. The code I have written so far isn't working: the character passes through the platform on the way up, and on the way down stops for a moment on the platform, then falls right through it. Here is the code that handles collisions with floating platforms:

        protected void HandleFloatingPlatforms(Vector2 moveAmount)
        {
            // if character is traveling downward.
            if (moveAmount.Y > 0)
            {
                Rectangle afterMoveRect = collisionRectangle;
                afterMoveRect.Offset((int)moveAmount.X, (int)moveAmount.Y);

                foreach (World_Objects.GameObject platform in gameplayScreen.Entities)
                {
                    if (platform is World_Objects.Inanimate_Objects.FloatingPlatform)
                    {
                        // wideProximityArea is just a rectangle surrounding the collision
                        // box of an entity to check for nearby entities.
                        if (wideProximityArea.Intersects(platform.CollisionRectangle) ||
                            wideProximityArea.Contains(platform.CollisionRectangle))
                        {
                            if (afterMoveRect.Intersects(platform.CollisionRectangle))
                            {
                                // This, in my mind, would denote that after the character is
                                // moved, its feet have fallen below the top of the platform,
                                // but before the move its feet were above it...
                                if (collisionRectangle.Bottom <= platform.CollisionRectangle.Top)
                                {
                                    if (afterMoveRect.Bottom > platform.CollisionRectangle.Top)
                                    {
                                        // And then after detecting that he has fallen through
                                        // the platform, reposition him on top of it...
                                        worldLocation.Y = platform.CollisionRectangle.Y - frameHeight;
                                        hasCollidedVertically = true;
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

    In case you are curious, the parameter moveAmount is computed by this code:

        elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

        float totalX = 0;
        float totalY = 0;
        foreach (Vector2 vector in velocities)
        {
            totalX += vector.X;
            totalY += vector.Y;
        }
        velocities.Clear();

        velocity.X = totalX;
        velocity.Y = totalY;
        velocity.Y = Math.Min(velocity.Y, 1000);

        Vector2 moveAmount = velocity * elapsed;

    Read the article
