Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes efficiently, I need to cut down the large number of draw calls I'm making, and what has been suggested is that I create a "mesh" of my cubes. I already have them stored in a single vertex buffer, but the issue lies in my draw method, where I am still looping through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate dramatically. What's the easiest way to go about this? I have a class CubeChunk that inherits from Microsoft.Xna.Framework.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would keep me going in circles, drawing each cube individually. The goal is a draw method that draws the chunk as a whole instead of drawing individual cubes as I've been doing.
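
    For illustration, a minimal sketch of the shape such a component could take, with one draw call covering the whole pre-built vertex buffer (the class layout and member names are assumed, not taken from the question, and the view/projection setup is left out):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Draws an entire chunk from one vertex buffer instead of looping over cubes.
        public class ChunkRenderer : DrawableGameComponent
        {
            private readonly VertexBuffer vertexBuffer; // every cube baked in, already in world space
            private readonly int triangleCount;
            private BasicEffect effect;

            public ChunkRenderer(Game game, VertexBuffer buffer, int triangles) : base(game)
            {
                vertexBuffer = buffer;
                triangleCount = triangles;
            }

            protected override void LoadContent()
            {
                effect = new BasicEffect(GraphicsDevice); // World/View/Projection assumed to be set elsewhere
                base.LoadContent();
            }

            public override void Draw(GameTime gameTime)
            {
                GraphicsDevice.SetVertexBuffer(vertexBuffer);
                foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // a single call submits the whole chunk
                    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, triangleCount);
                }
                base.Draw(gameTime);
            }
        }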

    Read the article

  • Bukkit inventory saving: crashing somewhere

    - by HcgRandon
    I'm working on a command for a Bukkit plugin that lets you transfer worlds. In the section that saves the player's inventory, I'm getting a runtime error. My question is: why is the error happening, and how can I prevent it?

    The plugin code

        public void savePlayerInv(Player p, World w) {
            File playerInvConfigFile = new File(plugin.getDataFolder() + File.separator + "players"
                    + File.separator + p.getName(), "inventory.yml");
            FileConfiguration pInv = YamlConfiguration.loadConfiguration(playerInvConfigFile);
            PlayerInventory inv = p.getInventory();

            int i = 0;
            for (ItemStack stack : inv.getContents()) {
                //increment integer
                i++;
                String startInventory = w.getName() + ".inv." + Integer.toString(i);
                //save inv
                pInv.set(startInventory + ".amount", stack.getAmount());
                pInv.set(startInventory + ".durability", Short.toString(stack.getDurability()));
                pInv.set(startInventory + ".type", stack.getTypeId());
                //pInv.set(startInventory + ".enchantment", stack.getEnchantments());
                //TODO add enchant saving
            }

            i = 0;
            for (ItemStack armor : inv.getArmorContents()) {
                i++;
                String startArmor = w.getName() + ".armor." + Integer.toString(i);
                //save armor
                pInv.set(startArmor + ".amount", armor.getAmount());
                pInv.set(startArmor + ".durability", armor.getDurability());
                pInv.set(startArmor + ".type", armor.getTypeId());
                //pInv.set(startArmor + ".enchantment", armor.getEnchantments());
            }

            //save exp
            if (p.getExp() != 0) {
                pInv.set(w.getName() + ".exp", p.getExp());
            }
        }

    The offending line

    The stack trace complains about line 130, which is this line:

        pInv.set(startInventory + ".amount", stack.getAmount());

    The stack trace

        2012-03-21 13:23:25 [SEVERE] null
        org.bukkit.command.CommandException: Unhandled exception executing command 'wtp' in plugin Needs v1.0
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:42)
            at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:166)
            at org.bukkit.craftbukkit.CraftServer.dispatchCommand(CraftServer.java:461)
            at net.minecraft.server.NetServerHandler.handleCommand(NetServerHandler.java:818)
            at net.minecraft.server.NetServerHandler.chat(NetServerHandler.java:778)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:761)
            at net.minecraft.server.Packet3Chat.handle(Packet3Chat.java:33)
            at net.minecraft.server.NetworkManager.b(NetworkManager.java:229)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:112)
            at net.minecraft.server.NetworkListenThread.a(NetworkListenThread.java:78)
            at net.minecraft.server.MinecraftServer.w(MinecraftServer.java:554)
            at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:452)
            at net.minecraft.server.ThreadServerApplication.run(SourceFile:490)
        Caused by: java.lang.NullPointerException
            at com.devoverflow.improved.needs.commands.CommandWorldtp.savePlayerInv(CommandWorldtp.java:130)
            at com.devoverflow.improved.needs.commands.CommandWorldtp.onCommand(CommandWorldtp.java:60)
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:40)
            ... 12 more

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the wildbunny blog to learn about collision detection, and I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not just any vector that does this; it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance. I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me that N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems like a movement that one of the circles would perform to get un-penetrated.
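
    For reference, a small sketch of what those quantities look like in code, using XNA-style Vector2 values (the variable names and the sign convention are assumptions based on the formulas above, not taken from the blog's source):

        // A, B are the circle centres; r1, r2 their radii
        Vector2 delta = A - B;
        float distance = delta.Length();
        float p = distance - (r1 + r2);   // negative when the circles overlap
        Vector2 N = delta / distance;     // unit vector pointing from B towards A
        Vector2 P = N * p;                // shortest translation that makes them just touch
        // e.g. moving A by -P (or splitting -P between A and B) resolves the overlap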

    Read the article

  • Objects disappear when zoomed out in Unity

    - by Starkers
    Ignore the palm trees here. I have some oak-like trees that are visible when I'm zoomed in, but they disappear when I zoom out. Is this normal? Is it something to do with draw distance? How can I change this so my trees don't disappear? The reason I ask is that my installation had a weird terrain glitch, and if this isn't normal I'm going to reinstall right away, because I keep wondering "is that a feature, or a glitch?"

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player: he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that the dynamic loops a shader would need to accumulate the lighting tend to be very slow. My idea is to compile the shader n times, where n is the number of lights; if n is known at compile time, the loop can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
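
    For illustration, a rough sketch of how such variants might be selected on the C# side, assuming an XNA-style content pipeline and that each "Lighting_n" effect was compiled with a different NUM_LIGHTS define so its light loop unrolls (the asset names, MaxLights constant, and surrounding variables are made up):

        // One precompiled variant per supported light count.
        Dictionary<int, Effect> lightingVariants = new Dictionary<int, Effect>();
        for (int n = 1; n <= MaxLights; n++)
            lightingVariants[n] = Content.Load<Effect>("Lighting_" + n);

        // At draw time, pick the variant matching the lights that touch this part of the level.
        int count = Math.Min(lightsAffectingThisArea.Count, MaxLights);
        if (count > 0)
            currentEffect = lightingVariants[count];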

    Read the article

  • Octrees as data structure

    - by Christian Frantz
    In my cube world, I want to use octrees to represent my chunks of 20x20x20 cubes for frustum and occlusion culling. I understand how octrees work; I just don't know if I'm going about this the right way. My base octree class is taken from here: http://www.xnawiki.com/index.php/Octree What I'm wondering is how to apply occlusion culling using this class. Does it make sense to have one octree for each cube chunk, or should I make the octree bigger? Since I'm using cubes, each cube should fit into a node without overlap, so that won't be an issue.

    Read the article

  • Why isn't the light continuous on my model?

    - by nosferat
    I created a basic textured cube model in Blender to practice modeling, and then I imported it into Unity. After I put up some lighting it looks pretty ugly: the light is not continuous across a row of textured cubes. What is more odd, the light on the blocks that make up the floor is continuous. What am I doing wrong? UPDATE: This is how it looks without textures: https://dl.dropbox.com/u/45620018/without%20textures.PNG If I didn't know these were perfect cubes, I'd say there is a slight curve on the surface. I also tried lightening the texture, but that didn't help either: https://dl.dropbox.com/u/45620018/lighter%20texture.PNG I simply exported the model from Blender and did not set up any normals or anything like that. However, I also didn't do anything special with the floor brick model.

    Read the article

  • Calculating the correct particle angle in an outwards explosion

    - by Sun
    I'm creating a simple particle explosion but am stuck on finding the correct angle to rotate each particle. The effect I'm going for is one where each particle travels outwards from the point of origin at the correct angle. What I currently have instead is every particle facing the same angle, and I'm having a little difficulty figuring out the correct one. I have the vector for the point of emission and the new vector for each particle; how can I use these to calculate the angle? Some code for reference:

        private Particle CreateParticle()
        {
            ...
            Vector2 velocity = new Vector2(2.0f * (float)(random.NextDouble() * 2 - 1),
                                           2.0f * (float)(random.NextDouble() * 2 - 1));
            direction = velocity - ParticleLocation;
            float angle = (float)Math.Atan2(direction.Y, direction.X);
            ...
            return new Particle(texture, position, velocity, angle, angularVelocity, color, size, ttl, EmitterLocation);
        }

    I am then using that angle in my particle's Draw method:

        spriteBatch.Draw(Texture, Position, null, Color, Angle, origin, Size, SpriteEffects.None, 0f);
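
    As a point of comparison, one common way to orient a sprite along its direction of travel is to take the angle of the velocity vector itself, rather than a difference between a velocity and a position (a sketch, not necessarily the fix for the code above):

        // velocity is the per-particle velocity computed above
        float angle = (float)Math.Atan2(velocity.Y, velocity.X);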

    Read the article

  • Low CPU/Memory/Memory-bandwidth Pathfinding (maybe like in Warcraft 1)

    - by Valmond
    Dijkstra and A* are both nice and popular, but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns, which means there were (most probably) no full-path calculations from "start to end". If I recall correctly, the algorithm could be something like this:
    A) Move towards the enemy until you reach it or hit a wall.
    B) If blocked by a wall, follow the wall until you can move towards the enemy without being blocked, then do A) again.
    But I'd like to know, if someone knows :-) [edit] As explained to Byte56, I'm searching for a low CPU/memory/memory-bandwidth algorithm and wanted to know if Warcraft had some special secrets to deliver (I've never seen that kind of pathfinding elsewhere). I hope that is more in keeping with the Stack Exchange rules.
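
    For illustration, the two rules described above might look something like this per tick (all types and helper functions here are hypothetical placeholders, not Warcraft's actual code):

        // One movement step of the "head straight, wall-follow when blocked" heuristic.
        void Step(Unit unit, Point target)
        {
            Point straight = NextCellToward(unit.Position, target);   // hypothetical helper
            if (!IsBlocked(straight))
            {
                unit.MoveTo(straight);            // rule A: walk directly at the target
                unit.FollowingWall = false;
            }
            else
            {
                // rule B: hug the wall until a direct step toward the target opens up again
                unit.MoveTo(NextWallFollowCell(unit));
                unit.FollowingWall = true;
            }
        }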

    Read the article

  • Grid Based Lighting in XNA/Monogame

    - by sm81095
    I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down grid-based world in MonoGame and am starting on the lighting system soon. How I want to do lighting is to have a grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 system of "subtiles". I would like to use a flow-like system to spread light across the tiles by reducing the light by a small amount each step. This is the kind of effect I'm going for: http://i.imgur.com/rv8LCxZ.png The black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is very exaggerated. I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, then rendering the lighting grid to a separate target and overlaying the two. Basically, my questions are:
    1. What would be the algorithm for a flow-style lighting system like this? (A sketch of one approach follows below.)
    2. Would there be a more efficient way of rendering this?
    3. How would I handle the darkening of the light with colors: by reducing the RGB values in each light cell, or by reducing the alpha in each cell, assuming I render the light map over the grid using blending?
    4. Assuming the former is possible, what BlendState would I use for that?
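
    For the first question, a minimal sketch of one common flow approach is a breadth-first flood fill over the light grid where every step outward loses a fixed amount (the array and helper names here are assumed, not from the question):

        // lightLevel is the fine "subtile" grid; lightSources are the emitting cells.
        Queue<Point> frontier = new Queue<Point>();
        foreach (Point source in lightSources)
        {
            lightLevel[source.X, source.Y] = maxLight;
            frontier.Enqueue(source);
        }
        while (frontier.Count > 0)
        {
            Point cell = frontier.Dequeue();
            float next = lightLevel[cell.X, cell.Y] - falloffPerCell;
            if (next <= 0)
                continue;
            foreach (Point n in Neighbours(cell))      // 4-connected, bounds-checked (assumed helper)
            {
                if (lightLevel[n.X, n.Y] < next)       // only ever brighten a cell
                {
                    lightLevel[n.X, n.Y] = next;
                    frontier.Enqueue(n);
                }
            }
        }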

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, though this one is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++:

        int start = 0;
        for (int i = 0; i < amount_of_pixels; ++i)
        {
            if (pixels[i] == border)
            {
                start = i;
                break;
            }
        }

    When it finds one, it starts the marching squares algorithm and traces the contour of whatever object that pixel belongs to. Let's say I have an image where everything except the color white is a border, and I have found the contour points of the first blob. For the general algorithm it's over: it found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
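
    One way to extend the scan so it keeps going past blobs that have already been traced is to remember which border pixels belong to a finished contour; a sketch in the same spirit (the tracing routine is left as an assumed function):

        bool[] visited = new bool[amount_of_pixels];   // border pixels already on a traced contour
        for (int i = 0; i < amount_of_pixels; ++i)
        {
            if (pixels[i] == border && !visited[i])
            {
                List<int> contour = MarchSquares(i);   // assumed: returns the contour's pixel indices
                foreach (int p in contour)
                    visited[p] = true;                 // never start a new trace on this blob again
            }
        }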

    Read the article

  • Avoiding orbiting in pursuit steering behavior

    - by bobobobo
    I have a missile that uses pursuit steering to track (and try to impact) its stationary target. It works fine as long as you are not strafing when you launch the missile; if you are strafing, the missile tends to orbit its target. I fixed this by first accelerating against the tangential component of the velocity to kill it, then beelining for the target. So I accelerate in -vT until vT is nearly 0, then accelerate in the direction of vN. While that works, I'm looking for a more elegant solution where the missile is able to impact the target without explicitly killing the tangential component first.
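
    For reference, a small sketch of the decomposition referred to above, splitting the missile's velocity into the component toward the target (vN) and the tangential component (vT), using XNA-style Vector2 math (variable names assumed):

        Vector2 toTarget = Vector2.Normalize(targetPosition - missilePosition);
        float closingSpeed = Vector2.Dot(missileVelocity, toTarget);
        Vector2 vN = toTarget * closingSpeed;    // component aimed straight at the target
        Vector2 vT = missileVelocity - vN;       // tangential component; this is what causes orbiting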

    Read the article

  • What are the steps taken by this GLSL code?

    - by user827992
        void main(void)
        {
            vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0);
            float dist_squared = dot(pos, pos);

            gl_FragColor = (dist_squared < 400.0)
                ? vec4(.90, .90, .90, 1.0)
                : vec4(.20, .20, .40, 1.0);
        }

    Taken from http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html Now, this looks really trivial and simple, but my problem is with the mod function. This function is taking two vec2 values as inputs, but according to the official documentation it is supposed to take just two atomic arguments; it also makes implicit use of the floor function, which again only accepts one atomic argument. Can someone explain this to me step by step and point out what I'm not getting here? Is it some kind of OpenGL trick, or an OpenGL math trick? In the GLSL docs I always find an explicit reference to the type accepted by a function, and vec2 is not listed there.

    Read the article

  • How can I ease the work of getting pixel coordinates from a spritesheet?

    - by ThePlan
    Spritesheets are usually easier to use and very efficient memory-wise, but the problem I always have is getting the actual position of a sprite within the sheet. Usually I have to throw in some approximate values and modify them several times until I get it right. My question: is there a tool that can show you the coordinates of the mouse relative to the image you have opened? Or is there a simpler method of getting the exact rectangle that a sprite is contained in?

    Read the article

  • Material usage, one per model or per object?

    - by WSkid
    Is it better (in developer time, memory, and disk space) to use a single model that is unwrapped and uses a single material, or to break a model down into appropriate parts, each with its own smaller texture/material? Or does what is acceptable depend on the target platform, i.e. PC vs. tablet? An example: say you have a typical house with a tiled roof. Option one: model it, make sure everything is attached, and unwrap the walls and roof so that in your UV template they sit side by side in one texture file, say a 512x512. Option two: model the roof and walls as separate objects, unwrap them individually, and have two UV templates; you could then use a 256x256 file for each one.

    Read the article

  • HTML5 Canvas Converting between cartesian and isometric coordinates

    - by Amir
    I'm having issues wrapping my head around the Cartesian-to-isometric coordinate conversion in HTML5 canvas. As I understand it, the process is two-fold:
    (1) Scale down the y-axis by 0.5, i.e. ctx.scale(1, 0.5); or ctx.setTransform(1, 0, 0, 0.5, 0, 0); This supposedly produces the matrix [x; y] x [1, 0; 0, 0.5].
    (2) Rotate the context by 45 degrees, i.e. ctx.rotate(Math.PI/4); This should produce the matrix [x; y] x [cos(45), -sin(45); sin(45), cos(45)].
    This (somehow) results in the final matrix of ctx.setTransform(2, -1, 1, 0.5, 0, 0); which I cannot seem to understand. How is this matrix derived? I cannot seem to produce it by multiplying the scaling and rotation matrices above. Also, if I write out the equations for the final transformation matrix, I get newX = 2x + y and newY = -x + y/2, but this doesn't seem to be correct. For example, the following code draws an isometric tile at Cartesian coordinates (500, 100):

        ctx.setTransform(2, -1, 1, 0.5, 0, 0);
        ctx.fillRect(500, 100, width * 2, height);

    When I check the result on the screen, the actual coordinates are (285, 215), which do not satisfy the equations I produced earlier. So what is going on here? I would be very grateful if you could: (1) help me understand how the final isometric transformation matrix is derived; (2) help me produce the correct equation for finding the on-screen coordinates of an isometric projection. Many thanks and kind regards.
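
    For reference, simply multiplying the two matrices quoted above in the order the calls accumulate on the canvas transform (scale, then rotation) works out to:

        [1, 0; 0, 0.5] x [cos(45), -sin(45); sin(45), cos(45)] = [0.707, -0.707; 0.354, 0.354]

    (using cos(45) = sin(45) ≈ 0.707), which indeed does not match (2, -1, 1, 0.5), so any derivation of that final matrix has to involve more than just these two steps.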

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL Cookbook 4.0. I have rendered a tessellated quad and am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. The program runs perfectly on the built-in Intel HD graphics of my processor, but my laptop also has Nvidia GT 555M graphics (with switchable graphics), and when I run the program on that card, the OpenGL shader compilation fails. It fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error is coming from the sin(u) call you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card, and it runs fine with sin(u) on the Intel HD 3000 graphics. Also, the program is almost unusable on the Intel HD 3000: I am getting only 9 FPS, which is not enough; it's too much load for that chip. So, is the sin() function not defined in the OpenGL specification implemented by Nvidia's drivers, or is it something else?

    Read the article

  • Amazon GameCircle Integration

    - by user1095509
    I'm trying to integrate Amazon GameCircle and have been able to successfully initialize it in my app. The problem is that when I click the button that displays achievements, the GameCircle achievement list comes up but says "You have unlocked 0 of 0 achievements". The same happens with leaderboards, i.e. there are no leaderboards for this app. I have created a leaderboard and a few achievements on Amazon's online developer portal, but for some reason they don't show up. Can someone help me with this? Any links or resources that help with integrating GameCircle would be appreciated. Thanks.

    Read the article

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and came across an issue. I have a 3D vector that is already within the viewport and need to transform it to a 2D vector (let's say my screen is 10x10). Does that work just like a regular transformation, or is something different here? For example, I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does it work like that, and if so, how do I handle the "0" in the third component?
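
    For context, a sketch of the usual steps for turning a point into screen coordinates in an XNA-style pipeline (the view and projection matrices and the 10x10 screen size are assumptions for illustration):

        Vector4 clip = Vector4.Transform(new Vector4(2, 1, 0, 1), view * projection);
        Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;   // perspective divide
        float screenX = (ndc.X + 1f) * 0.5f * 10f;                    // map [-1, 1] to [0, 10]
        float screenY = (1f - ndc.Y) * 0.5f * 10f;                    // screen Y grows downward
        // the remaining Z only matters for depth testing; it drops out of the 2D result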

    Read the article

  • Simple 2D Flight Physics with Box2D

    - by MarkPowell
    I'm trying to build a simple side-scroller with an airplane as the player. I want simple flight controls with simple but realistic-feeling physics, and I'm using cocos2d and Box2D. I have a basic system working but just can't get the physics feeling right. I am applying force to the plane (which is a b2CircleShape) based on the user's input: basically, if the user pushes up, body_->ApplyForce(b2Vec2(10, 30), body_->GetPosition()) is called, and for down, -30 is used instead. This works, and the plane flies along with up/down causing it to climb or dive. But it just doesn't feel right: there is no slowdown on climbs, nor speedup during dives. My solution is far too simple. How can I get a better feel for a plane climbing and diving? Thanks!

    Read the article

  • Making a clone of Starcraft legal?

    - by user782220
    My question is similar to a previous question. Consider the following clone of Starcraft: change the artwork, sound, and music, and change the names of the units, but leave unit hit points, damage, and movement speed unchanged; change ability names but not ability effects. Is that considered illegal? In other words, is copying the unit hit points, damage, etc. considered illegal even if everything else is changed?

    Read the article

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    Hi. I've read lots of articles about data-oriented design (DOD) and I understand it, but I can't design an object-oriented system with DOD in mind; I think my OOP education is blocking me. How should I think about mixing the two? The objective is to have a nice OO interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface
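
    For illustration, one common compromise is to keep the data in contiguous, cache-friendly arrays and expose it through thin object-style handles; a minimal sketch (the particle example and all names are made up, not from the question):

        public class ParticleSystem
        {
            // data-oriented core: structure-of-arrays, updated in one tight loop
            private readonly float[] posX, posY, velX, velY;
            private int count;   // managed by spawn/recycle code omitted here

            public ParticleSystem(int capacity)
            {
                posX = new float[capacity];
                posY = new float[capacity];
                velX = new float[capacity];
                velY = new float[capacity];
            }

            public void Update(float dt)
            {
                for (int i = 0; i < count; i++)   // contiguous access, friendly to the cache
                {
                    posX[i] += velX[i] * dt;
                    posY[i] += velY[i] * dt;
                }
            }

            // object-oriented face: a handle that is really just an index into the arrays
            public struct Particle
            {
                private readonly ParticleSystem owner;
                private readonly int index;
                internal Particle(ParticleSystem owner, int index) { this.owner = owner; this.index = index; }
                public float X { get { return owner.posX[index]; } }
                public float Y { get { return owner.posY[index]; } }
            }

            public Particle this[int i] { get { return new Particle(this, i); } }
        }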

    Read the article

  • XNA - 2D Rotation of an object to a selected direction

    - by lobsterhat
    I'm trying to figure out the best way of rotating an object towards the directional input of the user. I'm attempting to mimic making turns on ice skates. For instance, if the player is moving right and the input is down and left, the player should start rotating to the right a set amount each tick. I'll calculate a new vector based on current velocity and rotation and apply that to the current velocity. That should give me nice arcing turns, correct? At the moment I've got eight if/else statements, one for each key combination, which in turn check the current rotation:

        // Rotate to 225
        if (keyboardState.IsKeyDown(Keys.Up) && keyboardState.IsKeyDown(Keys.Left))
        {
            // Rotate right
            if (rotation >= 45 || rotation < 225)
            {
                rotation += ROTATION_PER_TICK;
            }
            // Rotate left
            else if (rotation < 45 || rotation > 225)
            {
                rotation -= ROTATION_PER_TICK;
            }
        }

    This seems like a sloppy way to do it, and eventually I'll need to do this check about 10 times a tick. Any help toward a more efficient solution is appreciated.
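
    For comparison, a sketch of a more general approach that derives the target angle from the input vector and always turns the shorter way (this version works in radians, so ROTATION_PER_TICK would need to be expressed in radians too):

        Vector2 input = new Vector2(
            (keyboardState.IsKeyDown(Keys.Right) ? 1 : 0) - (keyboardState.IsKeyDown(Keys.Left) ? 1 : 0),
            (keyboardState.IsKeyDown(Keys.Down) ? 1 : 0) - (keyboardState.IsKeyDown(Keys.Up) ? 1 : 0));

        if (input != Vector2.Zero)
        {
            float targetRotation = (float)Math.Atan2(input.Y, input.X);
            float difference = MathHelper.WrapAngle(targetRotation - rotation);  // shortest signed turn
            rotation += MathHelper.Clamp(difference, -ROTATION_PER_TICK, ROTATION_PER_TICK);
        }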

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene with many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how should I organise the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing while switching from one buffer to another would carry a performance penalty. Another way is to create one big buffer per object type, but I don't understand how to update such buffers: they are too big to update as a whole (for example, the buffer for all walls), and what do I need to do if I want to delete an object from the middle of a buffer? Actually, I have a similar question here: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models: they tend to create a single vertex buffer with their list of points, which is then just manipulated by matrix transformations. I, on the other hand, will be updating the scene very often.

    Read the article

  • Storing a Hex Grid

    - by Pedro Caetano
    I've been creating a small hex grid framework for Unity3D and have come to the following dilemma. This is my coordinate system (taken from here): Link because I'm a new user. It all works pretty nicely except for the fact that I have no idea how to store it. I originally intended to store this in a 2D array and use images to generate my maps. One problem was that it had negative values (this was easily fixed by offsetting the coordinates a bit). However, due to this coordinate system, such an image or bitmap would have to be diamond shaped, and since these structures are square shaped, this would cause a lot of headaches even if I hacked something together. Is there anything I'm missing that could fix this? I recall seeing a forum post about this on the Unity forums, but I can no longer find the link. Is writing a set of coordinate translators the best solution here? If you think it would be helpful, I can post code and images of my problem.
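
    For what it's worth, a tiny sketch of the kind of coordinate translator often used for this, converting axial coordinates (q, r) into "odd-r offset" coordinates so the diamond-shaped range packs into an ordinary rectangular array (this assumes an axial layout like the one referenced; the exact formula depends on the convention in the diagram):

        int col = q + (r - (r & 1)) / 2;   // shift alternate rows back so the map becomes rectangular
        int row = r;
        HexTile tile = tiles[col, row];    // tiles is a plain 2D array (type name assumed)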

    Read the article
