Search Results

Search found 33194 results on 1328 pages for 'development approach'.

  • Legal concerns with orchestrating a music submission contest

    - by Amplify91
    My team and I are getting pretty far along in the development of our latest game and have been thinking about audio. We decided to host an audio submission contest where we will offer a little cash and some equity stake in the game as prizes. We are also giving away copies of the game to participants. We hope not only to find audio for our game, but also to meet some cool sound artists and promote the game a bit through the process. First of all, is this even a good idea? What are some potential dangers in doing this? Will it even be well received among artists? Secondly, I wrote up some Terms and Conditions in my best legal-speak to try to protect us and clarify how the contest will be run. Are these sufficient to make sure everyone involved is treated fairly and legally protected? They are as follows:
    1. All submissions (The Submission) must be licensed under a Creative Commons Attribution 3.0 Unported License (CC-BY-3.0).
    2. By applying a CC-BY-3.0 license, you (The Submitter) expressly give Detour Games (and all members wherein) permission to copy, distribute, transmit, modify, adapt, and make commercial use of The Submission.
    3. The Submitter must own all rights to The Submission and be within their rights to license it as specified and submit it.
    4. The Submitter claims responsibility for the legality of The Submission. If The Submission is found to infringe on the rights of a person or entity other than The Submitter, Detour Games will not be held liable; all responsibility and liability for the legality of The Submission rests with The Submitter.
    5. No more than two free copies of The Game per submitter.
    6. All flat cash prizes will only be disbursed pending the success of our first $5,000 Kickstarter campaign. These prizes will be disbursed 30 days after Detour Games receives the Kickstarter funds.
    7. All equity prizes (percentage of profits) are defined as the given percent of total profits after costs for a period of one year (12 months) after the release of RAW. These prizes will be disbursed semi-annually.
    8. All prize money will be disbursed through either an electronic fund transfer through a service such as PayPal or by a mailed money order. It is The Submitter's responsibility to cooperate with Detour Games in the disbursement of the funds.
    9. Detour Games reserves the right to change these Terms and Conditions at any time without notice.
    10. By participating in the contest, The Submitter agrees to and accepts all terms and conditions listed.
    What else could I do (legally) to protect everyone involved?

  • How to implement a component-based system for items in a web game

    - by Landstander
    After reading several other questions and answers on using a component-based system to define items, I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use a DB schema suggested in this series (part 5 describes the schema): http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/ This means I'll have an items table with generic item properties, a table listing all of the components for an item, and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to do N queries for each component type. I'm kind of fine with this because I can cache the data into memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in, so the implementation needs to be on the lean side even if they're pulled from memcache. But right there is where my confidence in implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively without ending up writing a lot of specialized code to deal with each component. For example, an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent which allows the item to be used and apply some effect onto a different set of targets, filtered differently, from the same battle context. Then again, not every part of an item is an active part; an AttributeBonusComponent might need to kick in only when the item is in an equipped state or when displaying the item details page. Ultimately, how should I bring all of the components together into the container so that when I use an item as a weapon I get the correct list of targets? Or know when a weapon can also be used as an item? Or apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and can't grasp the simple solution in front of me. (If that makes any sense at all.) Likewise, if I were to implement the best answer to "How to model multiple 'uses' (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database", I feel like I'd have a lot of the same questions.
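    For what it's worth, here is a minimal sketch of the capability-query idea being described: the container aggregates components and exposes them by interface, so "use as a weapon" just means asking the item for its target filters and behaviors. The question targets PHP; this is sketched in C# (the language most posts on this page use) and every name in it (Item, ITargetFilter, AttackComponent, ...) is hypothetical, not from the t-machine schema:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    interface IComponent { }

    // Capabilities the container can be queried for; hypothetical names.
    interface ITargetFilter { IEnumerable<string> FilterTargets(IEnumerable<string> candidates); }
    interface IUseBehavior { void Apply(string target); }

    class AttackComponent : IComponent, ITargetFilter, IUseBehavior
    {
        public IEnumerable<string> FilterTargets(IEnumerable<string> candidates) =>
            candidates.Where(t => t.StartsWith("enemy"));
        public void Apply(string target) => Console.WriteLine("Attacking " + target);
    }

    class Item
    {
        readonly List<IComponent> components = new List<IComponent>();
        public void Add(IComponent c) => components.Add(c);

        // The container never special-cases concrete component types;
        // callers ask for a capability and get every component providing it.
        public IEnumerable<T> GetAll<T>() => components.OfType<T>();

        public void Use(IEnumerable<string> battleContext)
        {
            foreach (var filter in GetAll<ITargetFilter>())
                if (filter is IUseBehavior behavior)
                    foreach (var target in filter.FilterTargets(battleContext))
                        behavior.Apply(target);
        }
    }

    class Demo
    {
        static void Main()
        {
            var katana = new Item();
            katana.Add(new AttackComponent());
            katana.Use(new[] { "enemy-goblin", "friendly-npc" });  // "Attacking enemy-goblin"
        }
    }
    ```

    The same shape would cover the UsableComponent and AttributeBonusComponent cases: each is just another capability interface the container can be queried for, with something like an "is equipped" predicate gating the bonus.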

  • How do I keep user input and rendering independent of the implementation environment?

    - by alex
    I'm writing a Tetris clone in JavaScript. I have a fair amount of experience in programming in general, but am rather new to game development. I want to separate the core game code from the code that would tie it to one environment, such as the browser. My quick thoughts led me to having the rendering and input functions external to my main game object. I could pass the current game state to the rendering method, which could render using canvas, elements, text, etc. I could also map input to certain game input events, such as move piece left, rotate piece clockwise, etc. I am having trouble designing how this should be implemented in my object. Should I pass references to functions that the main object will use to render and process user input? For example...

    ```javascript
    var TetrisClone = function(renderer, inputUpdate) {
        this.renderer = renderer || function() {};
        this.inputUpdate = inputUpdate || function() {};
        this.state = {};
    };

    TetrisClone.prototype = {
        update: function() {
            // Get user input via function passed to constructor.
            var inputEvents = this.inputUpdate();

            // Update game state.

            // Render the current game state via function passed to constructor.
            this.renderer(this.state);
        }
    };

    var renderer = function(state) {
        // Render blocks to browser page.
    };

    var inputEvents = {};
    var charCodesToEvents = { 37: "move-piece-left" /* ... */ };

    document.addEventListener("keypress", function(event) {
        inputEvents[event.which] = true;
    });

    var inputUpdate = function() {
        var translatedEvents = [], event, translatedEvent;
        for (event in inputEvents) {
            if (inputEvents.hasOwnProperty(event)) {
                translatedEvent = charCodesToEvents[event];
                translatedEvents.push(translatedEvent);
            }
        }
        inputEvents = {};
        return translatedEvents;
    };

    var game = new TetrisClone(renderer, inputUpdate);
    ```

    Is this a good game design? How would you modify this to suit best practice in regard to making a game as platform/input-independent as possible?

  • How to do proper Alpha in XNA?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code.

    ```csharp
    public void Draw(SpriteBatch batch, Camera camera, float alpha)
    {
        int tileMapWidth = Width;
        int tileMapHeight = Height;

        batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
            SamplerState.PointWrap, DepthStencilState.Default,
            RasterizerState.CullNone, null, camera.TransformMatrix);

        for (int x = 0; x < tileMapWidth; x++)
        {
            for (int y = 0; y < tileMapHeight; y++)
            {
                int tileIndex = _map[y, x];
                if (tileIndex != -1)
                {
                    Texture2D texture = _tileTextures[tileIndex];
                    batch.Draw(
                        texture,
                        new Rectangle(
                            (x * Engine.TileWidth),
                            (y * Engine.TileHeight),
                            Engine.TileWidth,
                            Engine.TileHeight),
                        new Color(new Vector4(1f, 1f, 1f, alpha)));
                }
            }
        }

        batch.End();
    }
    ```

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
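    A hedged aside, since the post pins the blend state as the suspect: in XNA 4.0, BlendState.AlphaBlend expects premultiplied alpha, so tinting with new Color(1f, 1f, 1f, alpha) (straight alpha) can discolor whatever is underneath. Two commonly suggested variations, written as drop-in changes to the Draw method above (same fields assumed), not a confirmed fix for this exact case:

    ```csharp
    // Option 1: keep BlendState.AlphaBlend but premultiply the tint.
    // Color.White * alpha scales all four channels, which is what the
    // premultiplied AlphaBlend state expects; at alpha == 0 the sprite
    // contributes nothing to the pixels below it.
    batch.Draw(texture,
        new Rectangle(x * Engine.TileWidth, y * Engine.TileHeight,
            Engine.TileWidth, Engine.TileHeight),
        Color.White * alpha);

    // Option 2: tell SpriteBatch the textures use straight alpha instead.
    batch.Begin(SpriteSortMode.Texture, BlendState.NonPremultiplied,
        SamplerState.PointWrap, DepthStencilState.Default,
        RasterizerState.CullNone, null, camera.TransformMatrix);
    ```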

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw & update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. However, I'm not sure what the best way to chain animations / sound effects / other timed general events is. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so:

    ```csharp
    Update()
    {
        Animation anim1 = new Animation(.....);
        Animation anim2 = new Animation(.....);
        anim1.Run();
        if (anim1.State == AnimationState.Complete)
            anim2.Run();
        if (anim2.State == AnimationState.Complete)
            // Load 'PlayScreen' screen
    }
    ```

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it? To signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad. I've also tried using events - call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (and so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just the paradigm shift from web to game development that is making me break out in a serious case of the stupids.
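    Since the question is really "how do I chain timed steps without a heavyweight manager", here is one minimal sketch of that idea: a screen-owned queue of steps, each pumped until it reports completion. All names (Sequence, ISequenceStep, Delay) are hypothetical, not from any XNA API:

    ```csharp
    using System;
    using System.Collections.Generic;

    // A step gets Update(dt) until it reports done; then the next step starts.
    interface ISequenceStep { bool Update(float dt); }

    class Delay : ISequenceStep
    {
        float remaining;
        public Delay(float seconds) { remaining = seconds; }
        public bool Update(float dt) { remaining -= dt; return remaining <= 0f; }
    }

    class Sequence
    {
        readonly Queue<ISequenceStep> steps = new Queue<ISequenceStep>();
        public Action Completed;                       // fired once, when the queue empties
        public void Add(ISequenceStep step) => steps.Enqueue(step);

        public void Update(float dt)
        {
            if (steps.Count == 0) return;
            if (steps.Peek().Update(dt))               // current step finished?
            {
                steps.Dequeue();
                if (steps.Count == 0) Completed?.Invoke();
            }
        }
    }
    ```

    A PrePlayScreen could enqueue its drop-in animations, sound cues and delays, then switch to PlayScreen from the Completed callback - the screen keeps the only reference, so there is no "how do I get back to it" problem.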

  • April 14th Links: ASP.NET, ASP.NET MVC, ASP.NET Web API and Visual Studio

    - by ScottGu
    Here is the latest in my link-listing blog series:

    ASP.NET
    - Easily overlooked features in VS 11 Express for Web: Good post by Scott Hanselman that highlights a bunch of easily overlooked improvements that are coming to VS 11 (and specifically the free Express editions) for web development: unit testing, browser chooser/launcher, IIS Express, CSS Color Picker, Image Preview in Solution Explorer and more.
    - Get Started with ASP.NET 4.5 Web Forms: Good 5-part tutorial that walks through building an application using ASP.NET Web Forms and highlights some of the nice improvements coming with ASP.NET 4.5.
    - What is New in Razor V2 and What Else is New in Razor V2: Great posts by Andrew Nurse, a dev on the ASP.NET team, about some of the new improvements coming with ASP.NET Razor v2.
    - ASP.NET MVC 4 AllowAnonymous Attribute: Nice post from David Hayden that talks about the new [AllowAnonymous] filter introduced with ASP.NET MVC 4.
    - Introduction to the ASP.NET Web API: Great tutorial by Stephen Walther that covers how to use the new ASP.NET Web API support built into ASP.NET 4.5 and ASP.NET MVC 4.
    - Comprehensive List of ASP.NET Web API Tutorials and Articles: Tugberk Ugurlu links to a huge collection of articles, tutorials, and samples about the new ASP.NET Web API capability.
    - Async Mashups using ASP.NET Web API: Nice post by Henrik on how you can use the new async language support coming with .NET 4.5 to easily and efficiently make asynchronous network requests that do not block threads within ASP.NET.

    ASP.NET and Front-End Web Development
    - Visual Studio 11 and Front End Web Development - JavaScript/HTML5/CSS3: Nice post by Scott Hanselman that highlights some of the great improvements coming with VS 11 (including the free Express edition) for front-end web development.
    - HTML5 Drag/Drop and Async Multi-file Upload with ASP.NET Web API: Great post by Filip W. that demonstrates how to implement an async file drag/drop uploader using HTML5 and ASP.NET Web API.
    - Device Emulator Guide for Mobile Development with ASP.NET: Good post from Rachel Appel that covers how to use various device emulators with ASP.NET and VS to develop cross-platform mobile sites.
    - Fixing these jQuery: A Guide to Debugging: Great presentation by Adam Sontag on debugging with JavaScript and jQuery. Some really good tips, tricks and gotchas that can save a lot of time.

    ASP.NET and Open Source
    - Getting Started with ASP.NET Web Stack Source on CodePlex: Fantastic post by Henrik (an architect on the ASP.NET team) that provides step-by-step instructions on how to work with the ASP.NET source code we recently open sourced.
    - Contributing to ASP.NET Web Stack Source on CodePlex: Follow-on to the post above (also by Henrik) that walks through how you can submit a code contribution to the ASP.NET MVC, Web API and Razor projects.
    - Overview of the WebApiContrib project: Nice post by Pedro Reys on the new open source WebApiContrib project that has been started to deliver cool extensions and libraries for use with ASP.NET Web API.

    Entity Framework
    - Entity Framework 5 Performance Improvements and Performance Considerations for EF5: Good articles that describe some of the big performance wins coming with EF5 (which will ship with both .NET 4.5 and ASP.NET MVC 4). Automatic compilation of LINQ queries will yield some significant performance wins (up to 600% faster).
    - ASP.NET MVC 4 and EF Database Migrations: Good post by David Hayden that covers the new database migrations support within EF 4.3, which allows you to easily update your database schema during development - without losing any of the data within it.

    Visual Studio
    - What's New in Visual Studio 11 Unit Testing: Nice post by Peter Provost (from the VS team) that talks about some of the great improvements coming to VS 11 for unit testing - including built-in VS tooling support for a broad set of unit test frameworks (including NUnit, XUnit, Jasmine, QUnit and more).

    Hope this helps, Scott

  • 2d movement solution

    - by Phil
    Hi! I'm making a simple top-down tank game on the iPad where the user controls the movement of the tank with the left "joystick" and the rotation of the turret with the right one. I've spent several hours just trying to get it to work decently, but now I turn to the pros :) I have two referential objects, one for the movement and one for the rotation. The referential objects always stay at most two units away from the tank, and I use them to tell the tank in what direction to move. I chose this approach to decouple movement and rotational behaviour from the raw input of the joysticks; I believe this will make it simpler to implement whatever behaviour I want for the tank. My first problem: the turret rotates the long way to the target. By this I mean that the target can be -5 degrees away in rotation and still it rotates 355 degrees instead of -5 degrees. I can't figure out why. The other problem is with the movement. It just doesn't feel right to have the tank turn while moving. I'd like a solution that would work as well for the AI as for the player: a black-box function for the movement where the caller only specifies in what direction the tank should move, and it moves there under the constraints imposed on it. I am using the standard joystick class found in the Unity iPhone package. This is the code I'm using for the movement:

    ```csharp
    public class TankFollow : MonoBehaviour
    {
        // Check angle difference and turn accordingly
        public GameObject followPoint;
        public float speed;
        public float turningSpeed;

        void Update()
        {
            transform.position = Vector3.Slerp(transform.position,
                followPoint.transform.position, speed * Time.deltaTime);

            // Calculate angle
            var forwardA = transform.forward;
            var forwardB = (followPoint.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());

            if (angleDiff > 5)
            {
                // Rotate to
                transform.Rotate(new Vector3(0, (-turningSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else if (angleDiff < 5)
            {
                transform.Rotate(new Vector3(0, (turningSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }

            transform.position = new Vector3(transform.position.x, 0, transform.position.z);
        }
    }
    ```

    And this is the code I'm using to rotate the turret:

    ```csharp
    void LookAt()
    {
        var forwardA = -transform.right;
        var forwardB = (toLookAt.transform.position - transform.position);
        var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
        var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
        var angleDiff = Mathf.DeltaAngle(angleA, angleB);
        //print(angleDiff.ToString());

        if (angleDiff - 180 > 1)
        {
            // Rotate to
            transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
            //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
        }
        else if (angleDiff - 180 < -1)
        {
            transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
            //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            print((angleDiff - 180).ToString());
        }
    }
    ```

    Since I want the turret reference point to turn in relation to the tank (when you rotate the body, the turret should follow and not stay locked on, since that makes it impossible to control when you've got two thumbs to work with), I've made the TurretFollowPoint a child of the Turret object, which in turn is a child of the body. I'm thinking that I'm making it too difficult for myself with the reference points, but I'm imagining that it's a good idea. Please be honest about this point. So I'll be grateful for any help I can get! I'm using Unity3D iPhone. Thanks!
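    On the first problem, one hedged observation plus a sketch: Mathf.DeltaAngle already returns the signed shortest difference, and Unity also ships Mathf.MoveTowardsAngle, which steps an angle toward a target through the ±180° wraparound. This is not a drop-in replacement for the exact axis conventions above (the turret code measures against -transform.right and steers on angleDiff - 180), but it shows the usual short-way pattern; toLookAt and turretSpeed are stand-in names:

    ```csharp
    using UnityEngine;

    public class TurretAim : MonoBehaviour
    {
        public Transform toLookAt;        // aim target (hypothetical field)
        public float turretSpeed = 180f;  // degrees per second

        void Update()
        {
            Vector3 toTarget = toLookAt.position - transform.position;
            float targetAngle = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;

            // MoveTowardsAngle interpolates through the wraparound at +/-180 degrees,
            // so the turret always rotates the short way (-5 degrees, never 355).
            float newAngle = Mathf.MoveTowardsAngle(
                transform.eulerAngles.y, targetAngle, turretSpeed * Time.deltaTime);

            transform.rotation = Quaternion.Euler(0f, newAngle, 0f);
        }
    }
    ```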

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. However, for fun and educational purposes, I would love to find a way to make the engine even faster.

    I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture: the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World::SpatialBase* through SpatialInfo objects, owned by the bodies themselves. There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. Here's how its architecture looks: the Grid class owns a 2D array of Cell*. The Cell class contains two collections of (not owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in.

    As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells. After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. How broad-phase collision detection works:

    Part 1: spatial info update. For each Body body:
    - The top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and refilled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell).
    - body is now guaranteed to know what cells it occupies. For a performance boost, it stores a pointer to every map<int, vector<Body*>> of every cell it occupies, where the int is a group from body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>.
    - body is now guaranteed to have a pointer to every vector<Body*> of bodies of groups it needs to check collision against. These pointers are stored in gridInfo->queries.

    Part 2: actual collision checks. For each Body body:
    - body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking whether bodiesToCheck already contains the body we're trying to add.

    ```cpp
    const vector<Body*>& GridInfo::getBodiesToCheck()
    {
        bodiesToCheck.clear();
        for(const auto& q : queries)
            for(const auto& b : *q)
                if(!contains(bodiesToCheck, b))
                    bodiesToCheck.push_back(b);
        return bodiesToCheck;
    }
    ```

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks if the vector already contains a body with std::find. Collision is then checked and resolved for every body in bodiesToCheck. That's it. So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned, or I make assumptions about the simulation that are later proven false. My question is: how can I optimize the broad-phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned to allow for more performance? Actual implementation: SSVSCollsion - Body.h, Body.cpp; World.h, World.cpp; Grid.h, Grid.cpp; Cell.h, Cell.cpp; GridInfo.h, GridInfo.cpp
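    One classic broad-phase trick for exactly this duplicate check (hedged - it is not from the linked SSVSCollsion code): stamp each body with the id of the current query, so "already collected?" becomes an O(1) integer compare instead of a std::find scan. Sketched in C# like the other examples on this page, though the engine itself is C++; all names are hypothetical:

    ```csharp
    using System.Collections.Generic;

    class Body
    {
        // Stamp of the last query that collected this body. Comparing stamps
        // replaces an O(n) contains() scan with a single integer compare.
        public int LastQueryStamp = -1;
    }

    class GridInfo
    {
        static int queryCounter;                  // unique id per getBodiesToCheck call
        readonly List<List<Body>> queries = new List<List<Body>>();
        readonly List<Body> bodiesToCheck = new List<Body>();

        public List<Body> GetBodiesToCheck()
        {
            int stamp = ++queryCounter;
            bodiesToCheck.Clear();
            foreach (var group in queries)
                foreach (var body in group)
                    if (body.LastQueryStamp != stamp)   // not yet seen in this query
                    {
                        body.LastQueryStamp = stamp;
                        bodiesToCheck.Add(body);
                    }
            return bodiesToCheck;
        }
    }
    ```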

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an Exception, and there is often nothing I can do to check the state before executing them to prevent these Exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the Monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen as much a question of log granularity as of exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic Approach

    ```csharp
    class Program {
        public static void Main() {
            try { Step1(); Step2(); Step3(); }
            catch (Exception e) { Log(e); }
            finally { CleanUp(); }
        }
        public static void Step1() { ExternalApp.Dangerous1(); ExternalApp.Dangerous2(); }
        public static void Step2() { ExternalApp.Dangerous3(); ExternalApp.Dangerous4(); }
        public static void Step3() { ExternalApp.Dangerous5(); ExternalApp.Dangerous6(); }
    }
    ```

    Delegated Approach

    ```csharp
    class Program {
        public static void Main() {
            try { Step1(); Step2(); Step3(); }
            finally { CleanUp(); }
        }
        public static void Step1() {
            try { ExternalApp.Dangerous1(); ExternalApp.Dangerous2(); }
            catch (Exception e) { Log(e); throw; }
        }
        public static void Step2() {
            try { ExternalApp.Dangerous3(); ExternalApp.Dangerous4(); }
            catch (Exception e) { Log(e); throw; }
        }
        public static void Step3() {
            try { ExternalApp.Dangerous5(); ExternalApp.Dangerous6(); }
            catch (Exception e) { Log(e); throw; }
        }
    }
    ```

    Obsessive-Compulsive Approach

    ```csharp
    class Program {
        public static void Main() {
            try { Step1(); Step2(); Step3(); }
            finally { CleanUp(); }
        }
        public static void Step1() {
            try { ExternalApp.Dangerous1(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; }
        }
        public static void Step2() {
            try { ExternalApp.Dangerous3(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; }
        }
        public static void Step3() {
            try { ExternalApp.Dangerous5(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; }
        }
    }
    ```

    Other approaches welcomed and encouraged. Above are examples only.
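    Since other approaches are invited: a hedged sketch of a fourth option that keeps the O-C approach's log granularity without its repetition, by funneling each dangerous call through one logging wrapper. Guarded and its names are made up for illustration; Console.Error stands in for the post's Log(e):

    ```csharp
    using System;

    static class Guarded
    {
        // Runs one step, logging (and rethrowing) anything it throws.
        // The 'name' gives each log entry O-C-level granularity without
        // scattering try/catch blocks through every Step method.
        public static void Run(string name, Action step)
        {
            try { step(); }
            catch (Exception e)
            {
                Console.Error.WriteLine($"{name} failed: {e}");
                throw;
            }
        }
    }

    // Usage inside the batch program:
    //   Guarded.Run("Dangerous1", ExternalApp.Dangerous1);
    //   Guarded.Run("Dangerous2", () => ExternalApp.Dangerous2());
    ```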

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day. Dan attended Toorcon, a security conference, where he saw a talk on security patching titled "Stop Patching, for Stronger PCI Compliance". I realize that speakers will often use a headline-grabbing title to create interest in their talk, and this one certainly got my attention. I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me. Clearly patching is important to all systems management and should be a part of any organization's security hygiene. Further, PCI does require the patching of systems to maintain compliance. So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.

    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software." Notice the word "appropriate" in the requirement. This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question. Haven't we all seen a vulnerability scanner throw a false positive, flagging some module and pointing to a recommended patch, only to realize that the module doesn't exist on our system? Applying such a patch would obviously not be appropriate. This does not mean an organization can ignore the fact they need to apply security patches. It's pretty clear they must. Of course, organizations have other options in terms of compliance when it comes to patching. For example, they could remove a system from scope and make sure that system does not process or contain cardholder data. [This may or may not be a significant undertaking. I just wanted to point out that there are always options available.]

    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months." Notice there is no mention of stopping the patching of one's systems. And the note also states an organization may apply a risk-based approach. [A smart approach, but also not mandated.] Such a risk-based approach is not intended to remove the requirement to patch one's systems. It is meant, as stated, to allow one to prioritize patch installations.

    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness? I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture. If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

  • Collision Detection, player correction

    - by DoomStone
    I am having some problems with collision detection. I have two types of objects excluding the player: Tiles and what I call MapObjects. The tiles are all 16x16, whereas the MapObjects can be any size, but in my case they are all 16x16 too. When my player runs along the MapObjects or tiles, it gets very jaggy: the player is unable to move right, and will get warped forward when moving left. I have found the problem: my collision detection will move the player left/right if it collides with an object from the side, and up/down if it collides from above or below. Now imagine that my player is sitting on two tiles, at (10,12) and (11,12), and is mostly standing on the (11,12) tile. The collision detection will first run on the (10,12) tile; it calculates the collision depth, finds that it is a collision from the side, and therefore moves the player to the right. Afterwards, it will do the collision detection with (11,12) and move the character up. So the player will not fall down, but is unable to move right. And when moving left, the same problem will make the player warp forward. This problem has been bugging me for a few days now, and I just can't find a solution! Here is the code that does the collision detection:

    ```csharp
    public void ApplyObjectCollision(IPhysicsObject obj, List<IComponent> mapObjects, TileMap map)
    {
        PhysicsVariables physicsVars = GetPhysicsVariables();
        Rectangle bounds = ((IComponent)obj).GetBound();
        int leftTile = (int)Math.Floor((float)bounds.Left / map.GetTileSize());
        int rightTile = (int)Math.Ceiling(((float)bounds.Right / map.GetTileSize())) - 1;
        int topTile = (int)Math.Floor((float)bounds.Top / map.GetTileSize());
        int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / map.GetTileSize())) - 1;

        // Reset flag to search for ground collision.
        obj.IsOnGround = false;

        // For each potentially colliding tile,
        for (int y = topTile; y <= bottomTile; ++y)
        {
            for (int x = leftTile; x <= rightTile; ++x)
            {
                IComponent tile = map.Get(x, y);
                if (tile != null)
                {
                    bounds = HandelCollision(obj, tile, bounds, physicsVars);
                }
            }
        }

        // Handle collision for all moving objects
        foreach (IComponent mo in mapObjects)
        {
            if (mo == obj) continue;
            if (mo.GetBound().Intersects(((IComponent)obj).GetBound()))
            {
                bounds = HandelCollision(obj, mo, bounds, physicsVars);
            }
        }
    }

    private Rectangle HandelCollision(IPhysicsObject obj, IComponent objb, Rectangle bounds, PhysicsVariables physicsVars)
    {
        // If this tile is collidable,
        SpriteCollision collision = ((IComponent)objb).GetCollisionType();
        if (collision != SpriteCollision.Passable)
        {
            // Determine collision depth (with direction) and magnitude.
            Rectangle tileBounds = ((IComponent)objb).GetBound();
            Vector2 depth = bounds.GetIntersectionDepth(tileBounds);
            if (depth != Vector2.Zero)
            {
                float absDepthX = Math.Abs(depth.X);
                float absDepthY = Math.Abs(depth.Y);

                // Resolve the collision along the shallow axis.
                if (absDepthY <= absDepthX || collision == SpriteCollision.Platform)
                {
                    // If we crossed the top of a tile, we are on the ground.
                    if (obj.PreviousBound.Bottom <= tileBounds.Top)
                        obj.IsOnGround = true;

                    // Ignore platforms, unless we are on the ground.
                    if (collision == SpriteCollision.Impassable || obj.IsOnGround)
                    {
                        // Resolve the collision along the Y axis.
                        ((IComponent)obj).Position = new Vector2(((IComponent)obj).Position.X, ((IComponent)obj).Position.Y + depth.Y);

                        // If we hit something above us, remove all upward velocity.
                        if (depth.Y > 0 && obj.IsJumping)
                        {
                            obj.Velocity = new Vector2(obj.Velocity.X, 0);
                            obj.JumpTime = physicsVars.MaxJumpTime;
                        }

                        // Perform further collisions with the new bounds.
                        return ((IComponent)obj).GetBound();
                    }
                }
                else if (collision == SpriteCollision.Impassable) // Ignore platforms.
                {
                    // Resolve the collision along the X axis.
                    ((IComponent)obj).Position = new Vector2(((IComponent)obj).Position.X + depth.X, ((IComponent)obj).Position.Y);

                    // Perform further collisions with the new bounds.
                    return ((IComponent)obj).GetBound();
                }
            }
        }
        return bounds;
    }
    ```

    Update: I have uploaded the source code, if you want to look it through. I think that my general approach might be wrong when working with small tiles; I have also been unable to find any good information on physics and collision detection in platform games. http://dl.dropbox.com/u/3181816/Sogaard.Games.SuperMario.rar
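    One commonly suggested remedy for exactly this straddling case - offered as a hedged sketch against hypothetical helpers, not a verified fix for the code above - is to resolve the deepest overlap first, so the tile the player mostly stands on corrects Y before the neighbouring tile gets a chance to push sideways:

    ```csharp
    // Hypothetical replacement for the tile loop in ApplyObjectCollision:
    // collect candidates, then resolve in order of decreasing overlap area.
    var candidates = new List<IComponent>();
    for (int y = topTile; y <= bottomTile; ++y)
        for (int x = leftTile; x <= rightTile; ++x)
        {
            IComponent tile = map.Get(x, y);
            if (tile != null) candidates.Add(tile);
        }

    candidates.Sort((a, b) =>
        IntersectionArea(bounds, b.GetBound()).CompareTo(
        IntersectionArea(bounds, a.GetBound())));

    foreach (IComponent tile in candidates)
        bounds = HandelCollision(obj, tile, bounds, physicsVars);

    // Overlap area of two rectangles; Rectangle.Intersect returns an
    // empty rectangle (zero area) when they do not touch.
    static int IntersectionArea(Rectangle a, Rectangle b)
    {
        Rectangle i = Rectangle.Intersect(a, b);
        return i.Width * i.Height;
    }
    ```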

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs. Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already:
    - Provided training/consulting for 3 government departments
    - Created and taught courseware for delivering Scrum to teams within a high-profile multinational company
    - Started presenting Microsoft's ALM Engagement Program
    So if you are interested in helping companies build better software more efficiently, then enquire at [email protected].

    Application Lifecycle Management (ALM) Consultant
    An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. The candidate must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects.

    Job Requirements
    Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred).
    Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the continental US (a valid national passport must be maintained). This is a full-time position and will be based in the Kirkland office.
    Preferred Education: Master's Degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD, or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience.
    Minimum Experience and Skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment.
    Essential Duties & Responsibilities:
    - Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM.
    - Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees.
    - Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    - Communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises.
    - Define technical solution requirements.
    - Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution.
    - Ensure that the solution is designed, developed and deployed in accordance with the agreed-upon development work plan.
    - Create and deliver weekly status reports of training and/or consulting progress.
    Engagement Responsibilities:
    - Have a strong desire to provide thought leadership related to technology and to help grow the business.
    - Work effectively and professionally with employees at all levels of a customer's organization.
    - Have strong verbal and written communication skills.
    - Have effective presentation, organizational and planning skills.
    - Have effective interpersonal skills and the ability to work in a team environment.
    Enquire at [email protected]

  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months; I have been creating a Planet Generator of sorts, and after more than 6 months of work I am no closer to finishing it than I was 4 months ago. My problem: the terrain does not subdivide in the correct locations. It almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 (the best example of the problem occurs around 0:36). For detail limiting, I am going for a chunked LOD approach, which subdivides the terrain based on how far you are away from it. I use a "depth table" to determine how many subdivisions should take place:

    ```cpp
    void PQuad::construct_depth_table(float distance)
    {
        tree[0] = -1;
        for (int i = 1; i < MAX_DEPTH; i++)
        {
            tree[i] = distance;
            distance /= 2.0f;
        }
    }
    ```

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad:

    ```cpp
    void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
    {
        for (size_t i = 0; i < children.size(); i++)
        {
            children[i].get_recursive(player_pos, out_children);
        }
        if (this->should_draw(player_pos) || this->depth == 0)
        {
            out_children.emplace_back(this);
        }
    }

    bool PQuad::should_draw(glm::vec3 player_position)
    {
        float distance = distance3(player_position, centre);
        if (distance < tree[depth])
        {
            return true;
        }
        return false;
    }
    ```

    The root quad has four children, which could be visualized as a 2x2 grid of [] cells. Each child has the same number of children, up until the detail limit; the quads which are 6 iterations deep are leaf nodes and have no children. Each node has a corresponding Mesh; each Mesh structure has 16x16 Quad-shapes, and the Mesh's Quad-shapes halve in size at each deeper detail level - creating more detail.

    ```cpp
    void PQuad::construct_children()
    {
        // Calculate the position of the Quad based on the parent's location
        calculate_position();

        if (depth < (int)MAX_DEPTH)
        {
            children.reserve((int)NUM_OF_CHILDREN);
            for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
            {
                children.emplace_back(PQuad(this->face_direction, this->radius));
                PQuad *child = &children.back();
                child->set_depth(depth + 1);
                child->set_child_index(i);
                child->set_parent(this);
                child->construct_children();
            }
        }
        else
        {
            leaf = true;
        }
    }
    ```

    The following function creates the vertices for each quad. I feel that it may play a role in the problem - I just can't determine what is causing it:

    ```cpp
    void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
    {
        vertices->reserve(quad_width * quad_height);
        for (int y = 0; y < quad_height; y++)
        {
            for (int x = 0; x < quad_width; x++)
            {
                switch (face_direction)
                {
                case YIncreasing:
                    vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                    break;
                case YDecreasing:
                    vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                    break;
                case XIncreasing:
                    vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                    break;
                case XDecreasing:
                    vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                    break;
                case ZIncreasing:
                    vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                    break;
                case ZDecreasing:
                    vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                    break;
                }
                // Position the bottom, right, front vertex of the cube from being (0,0,0) to (-16, -16, 16)
                (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
                colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
            }
        }

        switch (face_direction)
        {
        case YIncreasing:
            this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
            break;
        case YDecreasing:
            this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
            break;
        case XIncreasing:
            this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
            break;
        case XDecreasing:
            this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
            break;
        case ZIncreasing:
            this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
            break;
        case ZDecreasing:
            this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
            break;
        }
        this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
    }
    ```

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.
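    For contrast, a hedged, self-contained sketch of chunked-LOD selection in isolation (C#, all names hypothetical; note it recurses into children only when the parent passes the distance test, a slightly different arrangement from get_recursive above). If a toy tree like this behaves correctly when fed hand-computed world-space centres, the "ghost camera" symptom usually means the stored centre and the player position are not expressed in the same space:

    ```csharp
    using System;
    using System.Numerics;

    class QuadNode
    {
        public Vector3 Centre;          // must be world space, same space as the camera
        public int Depth;               // assumed < tree.Length
        public QuadNode[] Children = Array.Empty<QuadNode>();

        // tree[d] = subdivision distance for depth d, halving per level,
        // mirroring construct_depth_table above.
        public static float[] BuildDepthTable(float baseDistance, int maxDepth)
        {
            var tree = new float[maxDepth];
            tree[0] = float.MaxValue;   // the root always passes
            for (int i = 1; i < maxDepth; i++) { tree[i] = baseDistance; baseDistance /= 2f; }
            return tree;
        }

        public void Collect(Vector3 cameraPos, float[] tree, Action<QuadNode> draw)
        {
            if (Depth != 0 && Vector3.Distance(cameraPos, Centre) >= tree[Depth]) return;
            if (Children.Length == 0) { draw(this); return; }
            foreach (var child in Children) child.Collect(cameraPos, tree, draw);
        }
    }
    ```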

  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello everyone, this is my first post here; I hope it's not a false start. To begin with: I have done part 1 of Ray's tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But what happens when I want a more complex body with a texture? The body itself is simple: just attach n b2FixtureDefs to the same body. But what about the texture? If I take the code from that tutorial as-is, it only fills the last fixture; it probably doesn't gather every b2Vec2 point. I was right, it did not. So, a quick refactor. Starting from this:

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // gather all the vertices from our Box2D shape
            b2Fixture *originalFixture = body->GetFixtureList();
            b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
            int vertexCount = shape->GetVertexCount();
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            for (int i = 0; i < vertexCount; i++) {
                CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                shape->GetVertex(i).y * PTM_RATIO);
                [points addObject:[NSValue valueWithCGPoint:p]];
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    I came up with this:

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original
        {
            // first pass: count the b2Vec2 points across all fixtures
            int vertexCount = 0;
            b2Fixture *currentFixture = body->GetFixtureList();
            while (currentFixture) { // new
                b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape();
                vertexCount += shape->GetVertexCount();
                currentFixture = currentFixture->GetNext();
            }

            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];

            // second pass: gather the vertices from every fixture, not just the first
            b2Fixture *originalFixture = body->GetFixtureList();
            while (originalFixture) { // new
                NSLog((NSString*)@"-");
                b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
                int currentVertexCount = shape->GetVertexCount();
                for (int i = 0; i < currentVertexCount; i++) {
                    CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO,
                                    shape->GetVertex(i).y * PTM_RATIO);
                    [points addObject:[NSValue valueWithCGPoint:p]];
                }
                originalFixture = originalFixture->GetNext();
            }

            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    This works for a simple two-fixture body like:

        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        bodyDef.position = position;
        bodyDef.angle = rotation;
        b2Body *body = world->CreateBody(&bodyDef);

        b2FixtureDef fixtureDef;
        fixtureDef.density = 1.0;
        fixtureDef.friction = 0.5;
        fixtureDef.restitution = 0.2;
        fixtureDef.filter.categoryBits = 0x0001;
        fixtureDef.filter.maskBits = 0x0001;

        b2Vec2 vertices[] = {
            b2Vec2( 0.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2( 0.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(50.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(60.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        b2PolygonShape shape;
        shape.Set(vertices, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

        b2Vec2 vertices2[] = {
            b2Vec2(20.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2(20.0/PTM_RATIO,  0.0/PTM_RATIO),
            b2Vec2(70.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(80.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        shape.Set(vertices2, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

    But if I place the second shape above the first, things get weird and the texture goes crazy, and that's before trying more complex shapes. What's more, if the shapes share a common point, the texture does not render for them at all. (For the shapes I use PhysicsEditor, as in part 1 of the tutorial.) By the way, I use PolygonSprite, and I add the other shapes in the createWithWorld... method.

    Question: why are the texture coordinates in such a mess? Is it my modified method, or simply the wrong approach? Should I perhaps remove duplicates from the points array?
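    On that last point: stripping duplicated points before handing them to the sprite is easy to try with a small helper. The sketch below is hypothetical C++ (GatherUniqueVertices and the tolerance value are invented, not part of the tutorial), and deduplication only addresses the shared-point case; it does not make a concave union of fixtures triangulate correctly as a single polygon.

        // Hypothetical helper: collect vertices from every fixture on a body,
        // skipping near-duplicates where two fixtures touch at the same point.
        #include <cmath>
        #include <vector>
        #include <Box2D/Box2D.h>

        std::vector<b2Vec2> GatherUniqueVertices(b2Body *body, float tolerance = 0.001f)
        {
            std::vector<b2Vec2> points;
            for (b2Fixture *f = body->GetFixtureList(); f; f = f->GetNext()) {
                b2PolygonShape *shape = (b2PolygonShape *)f->GetShape();
                for (int i = 0; i < shape->GetVertexCount(); ++i) {
                    b2Vec2 v = shape->GetVertex(i);
                    bool seen = false;
                    for (const b2Vec2 &p : points) {
                        if (std::fabs(p.x - v.x) < tolerance &&
                            std::fabs(p.y - v.y) < tolerance) {
                            seen = true;
                            break;
                        }
                    }
                    if (!seen) points.push_back(v);
                }
            }
            return points;
        }

    If the combined outline is concave, a more robust route may be one sprite per fixture, all attached to the same b2Body, so each fixture's texture coordinates are computed from its own convex polygon.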

    Read the article

  • Noob Objective-C/C++ - Linker Problem/Method Signature Problem

    - by Josh
    There is a static class Pipe, defined in a C++ header that I'm including. The static method I'm interested in calling (from Objective-C) is:

        static ERC SendUserGet(const UserId &_idUser, const GUID &_idStyle,
                               const ZoneId &_idZone, const char *_pszMsg);

    I have access to an Objective-C data structure that appears to store a copy of userID and zoneID. It looks like:

        @interface DataBlock : NSObject {
            GUID userID;
            GUID zoneID;
        }

    I looked up the GUID definition, and it's a struct with a bunch of overloaded operators for equality. UserId and ZoneId from the first signature are typedefs of GUID.

    Now, when I try to call the method, no matter how I cast the arguments ((const UserId), (UserId), etc.), I get the following linker error:

        Ld build/Debug/Seeker.app/Contents/MacOS/Seeker normal i386
        cd /Users/josh/Development/project/Mac/Seeker
        setenv MACOSX_DEPLOYMENT_TARGET 10.5
        /Developer/usr/bin/g++-4.2 -arch i386 -isysroot /Developer/SDKs/MacOSX10.5.sdk
            -L/Users/josh/Development/TS/Mac/Seeker/build/Debug
            -L/Users/josh/Development/TS/Mac/Seeker/../../../debug
            -L/Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/gcc/i686-apple-darwin10/4.2.1
            -F/Users/josh/Development/TS/Mac/Seeker/build/Debug
            -filelist /Users/josh/Development/TS/Mac/Seeker/build/Seeker.build/Debug/Seeker.build/Objects-normal/i386/Seeker.LinkFileList
            -mmacosx-version-min=10.5 -framework Cocoa -framework WebKit -lSAPI -lSPL
            -o /Users/josh/Development/TS/Mac/Seeker/build/Debug/Seeker.app/Contents/MacOS/Seeker

        Undefined symbols:
          "SocPipe::SendUserGet(_GUID const&, _GUID const&, _GUID const&, char const*)", referenced from:
              -[PeoplePaneController clickGet:] in PeoplePaneController.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Is this a type/method-signature error, or truly some sort of linker error? I have the headers where all these types and static classes are defined #imported; I tried #include too, just in case, since I'm already stumbling :P

    Forgive me, I come from a web-tech background, so this C-style memory management and immutability stuff is super hazy.

    Edit: Added full linker error text. Changed "function" to "method".

    Thanks, Josh
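    When the undefined symbol is printed with full C++ argument types, as here, the call itself compiled and the signature was accepted; what's missing is the definition at link time. A minimal, self-contained C++ sketch of that failure mode follows; every name in it is an illustrative stand-in, not the real SAPI/SPL code.

        // Pipe.h: the declaration the calling file sees via #import/#include.
        struct GUID { unsigned int data[4]; };   // stand-in; the real GUID lives elsewhere
        typedef GUID UserId;
        typedef GUID ZoneId;
        enum ERC { ERC_OK, ERC_FAIL };

        class SocPipe {
        public:
            static ERC SendUserGet(const UserId &_idUser, const GUID &_idStyle,
                                   const ZoneId &_idZone, const char *_pszMsg);
        };

        // Pipe.cpp: this definition has to land in an object file or library
        // (here, presumably -lSAPI or -lSPL) that the link line actually
        // receives; if it doesn't, ld reports exactly the error above.
        ERC SocPipe::SendUserGet(const UserId &, const GUID &,
                                 const ZoneId &, const char *)
        {
            return ERC_OK;   // stub body for illustration only
        }

    Two usual suspects worth checking: the calling file must be Objective-C++ (renamed to a .mm extension) for the C++ call to compile at all, and the library containing the definition must be built for the same architecture the target links against (i386 here).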

    Read the article

  • Grails + GAE - Issue using app.servlet.version=2.5

    - by Taylor L
    Updating the servlet version in application.properties to 2.5 has no effect on the generated web.xml; it is still version 2.4.

        app.servlet.version=2.5

    Also, if I try to execute "run-app" I get the exception below:

        Running Grails application..
        Starting AppEngine generated indices thread.
        Starting reload monitor thread.
        [java] Jan 26, 2010 5:27:05 AM com.google.apphosting.utils.jetty.JettyLogger warn
        [java] WARNING: Failed startup of context com.google.apphosting.utils.jetty.DevAppEngineWebAppContext@4178460d{/,C:\Users\Taylor Leese\workspace\test-gae\web-app}
        [java] java.lang.IllegalStateException: No such servlet: grails
        [java]     at org.mortbay.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:953)
        [java]     at org.mortbay.jetty.servlet.ServletHandler.setServletMappings(ServletHandler.java:1037)
        [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.initialize(WebXmlConfiguration.java:305)
        [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.configure(WebXmlConfiguration.java:222)
        [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.configureWebApp(WebXmlConfiguration.java:180)
        [java]     at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1215)
        [java]     at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500)
        [java]     at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
        [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
        [java]     at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
        [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
        [java]     at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
        [java]     at org.mortbay.jetty.Server.doStart(Server.java:217)
        [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
        [java]     at com.google.appengine.tools.development.JettyContainerService.startContainer(JettyContainerService.java:188)
        [java]     at com.google.appengine.tools.development.AbstractContainerService.startup(AbstractContainerService.java:120)
        [java]     at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:217)
        [java]     at com.google.appengine.tools.development.DevAppServerMain$StartAction.apply(DevAppServerMain.java:162)
        [java]     at com.google.appengine.tools.util.Parser$ParseResult.applyArgs(Parser.java:48)
        [java]     at com.google.appengine.tools.development.DevAppServerMain.<init>(DevAppServerMain.java:113)
        [java]     at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:89)
        [java] The server is running at http://localhost:8080/

    Any ideas how to resolve these issues?

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens.

    One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer not depend on System.Web (or any other implementation/environment dependency), as it will be consumed in different environments; nor do we want our websites communicating directly with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above?

    Thanks in advance for your thoughts and advice. Regards, Zac
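    The layering being proposed is not .NET-specific, so a language-agnostic sketch may help frame the review. Below is a hypothetical C++ rendering (every name is invented for illustration): the domain consumes only the abstract repository, the concrete implementation owns the data access, and the custom "DomainMembershipProvider" would then sit at the web edge, translating Membership API calls into domain operations.

        #include <string>

        // Domain layer: no System.Web-style environment dependencies.
        struct Member {
            std::string id;
            std::string userName;
        };

        class IMembershipRepository {                  // hypothetical name
        public:
            virtual ~IMembershipRepository() = default;
            virtual Member FindByUserName(const std::string &userName) = 0;
            virtual void   Save(const Member &member) = 0;
        };

        // Infrastructure layer: the only place that knows how membership
        // data is actually stored (the as-yet-undecided ORM/data-access tool).
        class OrmMembershipRepository : public IMembershipRepository {
        public:
            Member FindByUserName(const std::string &userName) override {
                // ...query the membership store through the ORM here...
                return Member{"0", userName};          // placeholder result
            }
            void Save(const Member &member) override {
                // ...persist through the ORM...
                (void)member;
            }
        };

    Validation and business rules then live on the domain objects, so replacing the membership database later means writing a new repository implementation rather than new business logic.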

    Read the article

  • Sorting, Filtering and Paging in ASP.NET MVC

    - by ali62b
    What is the best approach to implementing these features, and which parts of the project would be involved? I have seen some examples of JavaScript grids, but I'm talking about a general approach which best fits the MVC architecture.

    I've considered configuring routes and models to implement these features, but I don't have a clear idea whether this is the right approach. On the one hand, if we put the logic in routes (item/page/sort/), we get benefits like bookmarking and avoiding JavaScript. On the other hand, if we use JavaScript grids, we can have behavior like the old-school grid views in ASP.NET Web Forms.

    I find that HTML helpers may be useful for paging, but I have no idea whether they are good for sorting or not. I've looked at jQuery and the tableSorter and quick-search plug-ins, but they work only on the currently-fetched data and won't help with real sorting and filtering that may need to touch the database. I have some thoughts on using these tools side by side with AJAX to get something that works, but I have no idea whether similar efforts have been made anywhere yet.

    Another approach I looked at was using Dynamic Data on Web Forms, but I didn't find any suggestions out there as to whether or not it is a good idea to integrate MVC and DD.

    I know implementing filtering and sorting for an individual case is simple (although it has some issues, like relying on Dynamic LINQ, which is not yet a standard approach), but what I'm looking for is a sorting or filtering tool that works in all cases. (Maybe this is because I want to have something in hand when Web Forms developers ask why I'm writing the same code each time I want to implement a sort scenario for different entities.)

    Read the article

  • Setting multiple SMTP settings in web.config?

    - by alphadogg
    I am building an app that needs to dynamically/programmatically know of and use different SMTP settings when sending email. I'm used to the system.net/mailSettings approach but, as I understand it, that only allows one SMTP connection definition at a time, which is then used by SmtpClient(). What I need is more of a connectionStrings-like approach, where I can pull a set of settings based on a key/name. Any recommendations? I'm open to skipping the traditional SmtpClient/mailSettings approach, and I think I will have to...
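    The keyed-lookup idea itself is simple; here is a hedged, language-agnostic sketch in C++ (all names are hypothetical, and a real app would load the profiles from configuration rather than hard-coding them):

        #include <map>
        #include <stdexcept>
        #include <string>

        struct SmtpSettings {                 // one named SMTP profile
            std::string host;
            int         port;
            std::string userName;
            std::string password;
        };

        class SmtpSettingsRegistry {          // connectionStrings-like lookup by key
        public:
            void Add(const std::string &key, SmtpSettings settings) {
                profiles_[key] = std::move(settings);
            }
            const SmtpSettings &Get(const std::string &key) const {
                auto it = profiles_.find(key);
                if (it == profiles_.end())
                    throw std::out_of_range("no SMTP profile named " + key);
                return it->second;
            }
        private:
            std::map<std::string, SmtpSettings> profiles_;
        };

    At send time the code resolves a profile by key ("alerts", "newsletter", and so on) and hands the host, port, and credentials to whatever mail client it uses, which is exactly the move connectionStrings makes for databases.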

    Read the article

  • Creating an SQL Compact file: Template or script?

    - by David Veeneman
    I am writing an application that writes to SQL Compact files that have a specific schema, and I am now implementing the New File use case. The simplest approach seems to be a Template pattern: first, create a template file that lives in the application directory; then, when the user selects New File, copy the template to the name and destination specified by the user in a New File dialog.

    The alternative is a scripted approach: use the same New File dialog, but dispense with the template file. Instead, create an empty SQL Compact file using the name/destination specified by the user, and then execute a T-SQL script on it from managed code.

    At this point, I am leaning toward the template approach, because it is simpler. Is there any reason I should not use that approach? Thanks for your help.
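    In miniature, the template approach is nothing more than a file copy, which is much of its appeal. A hedged C++ sketch of the idea (file names and paths are hypothetical; the real app is .NET, so this only illustrates the shape of the operation):

        #include <filesystem>
        namespace fs = std::filesystem;

        // Copy the schema-bearing template shipped with the app to the
        // name/destination the user chose in the New File dialog.
        bool CreateNewDatabaseFile(const fs::path &appDir, const fs::path &target)
        {
            // With copy_options::none, copy_file throws if the target already
            // exists, so an accidental overwrite is refused.
            return fs::copy_file(appDir / "Template.sdf", target,
                                 fs::copy_options::none);
        }

    The scripted approach trades that one-liner for an empty-file creation plus a T-SQL script run: more moving parts, but it keeps the schema in versionable text instead of a binary artifact.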

    Read the article

  • Could the cause of recent Toyota computing problems be an interface mismatch?

    - by Spux
    Any ideas whether the recent Toyota computing errors had something to do with the fact that they were using an object-oriented approach and then took a data-oriented approach, thus causing user-interface errors? I'm studying programming languages within interface/robotics design, and I wondered whether the computing glitch Toyota has been having could have something to do with adopting a different programming approach without reprogramming the whole system from scratch.

    Read the article

  • Spring 3.0 vs J2EE 6.0

    - by StudiousJoseph
    Hi everybody, I'm confronted with a situation... I've been asked to give advice on which approach to take for J2EE development: Spring 3.0 or J2EE 6.0.

    I was, and still am, a promoter of Spring 2.5 over classic J2EE 5 development, especially with JBoss. I even migrated old apps to Spring, influenced the redefinition of the development policy here to include Spring-specific APIs, and helped develop a strategic plan to foster more lightweight solutions like Spring + Tomcat instead of the heavier JBoss ones. Right now we're using JBoss merely as a web container, giving us what I call the "container inside the container" paradox: Spring apps, with most of their APIs, running inside JBoss. So we're in the process of migrating to Tomcat.

    However, with the coming of J2EE 6.0, many of the features that made Spring attractive at the time (easy deployment, looser coupling, even some sort of DI) seem to have been mimicked in one way or another: JSF 2.0, JPA 2.0, WebBeans, web profiles, etc.

    So, the question goes... From your point of view, how safe and logical is it to continue to invest in a non-standard J2EE development framework like Spring, given the new perspectives offered by J2EE 6.0? Can we talk about maybe 3 or 4 more years of Spring development, or do you recommend early adoption of J2EE 6.0 APIs and practices? I'll appreciate any insights on this...

    Read the article

< Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >