Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to implement hand/finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked around, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the pre-set ranges found in the DepthImageStream. What I would like to do is make this modular, so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that shows me how to do it. Could someone please help me out? I have already managed to render the standard depth view with the Kinect, and the method I made for converting the depth frame is as follows (I have a feeling it's something in here I need to set):

        private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
        {
            int tooNearDepth = depthStream.TooNearDepth;
            int tooFarDepth = depthStream.TooFarDepth;
            int unknownDepth = depthStream.UnknownDepth;
            byte[] depthFrame32 = new byte[depthFrame32Length];

            for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                // transform 13-bit depth information into an 8-bit intensity appropriate
                // for display (we disregard information in the most significant bit)
                byte intensity = (byte)(~(realDepth >> 8));

                if (player == 0 && realDepth == 0)
                {
                    // white
                    depthFrame32[i32 + RedIndex] = 255;
                    depthFrame32[i32 + GreenIndex] = 255;
                    depthFrame32[i32 + BlueIndex] = 255;
                }
                // other if statements omitted; they simply change the color of the
                // pixels when they fall outside the pre-set depth values
                else
                {
                    // tint the intensity by dividing by per-player values
                    depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
                    depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
                    depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }

    I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
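
    One way to make the range modular (a hedged sketch, not from the original post: minDepth, maxDepth and IntensityForDepth are hypothetical names) is to replace the fixed right-shift with a normalisation against your own near/far window, in millimetres:

        // Hypothetical helper: map an adjustable depth window to display intensity.
        private int minDepth = 800;   // assumed near limit of the window, in mm
        private int maxDepth = 1200;  // assumed far limit of the window, in mm

        private byte IntensityForDepth(int realDepth)
        {
            // Clamp the raw depth into the window, then normalise to 0-255,
            // so nearer surfaces come out brighter.
            int clamped = Math.Max(minDepth, Math.Min(maxDepth, realDepth));
            return (byte)(255 - 255 * (clamped - minDepth) / (maxDepth - minDepth));
        }

    Anything outside [minDepth, maxDepth] then clamps to pure white or black, which is effectively the adjustable range asked about.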

    Read the article

  • Draw order in XNA

    - by Petr Abdulin
    It is possible to set the draw order of a DrawableGameComponent by setting its DrawOrder property, but is it possible to set the draw order of the "main" Game class? I have two DrawableGameComponents, and the Draw method of the main Game class is called first, while I want it to be called last. Should I just move all the "main" draw code into another component and set its DrawOrder? Answer: it seems I just confused myself a little. Black on black, that's why I didn't see it. The main Draw is called last, as expected.
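
    For reference, a minimal sketch of how component ordering is usually wired up (BackgroundComponent and ForegroundComponent are made-up names; which code ends up on top also depends on where base.Draw is called):

        public class MyGame : Microsoft.Xna.Framework.Game
        {
            protected override void Initialize()
            {
                // Higher DrawOrder draws later, i.e. on top.
                Components.Add(new BackgroundComponent(this) { DrawOrder = 0 });
                Components.Add(new ForegroundComponent(this) { DrawOrder = 1 });
                base.Initialize();
            }

            protected override void Draw(Microsoft.Xna.Framework.GameTime gameTime)
            {
                GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.Black);
                base.Draw(gameTime); // all components draw here, in DrawOrder order
                // anything drawn after base.Draw appears on top of the components
            }
        }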

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking about how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered; those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

        class IHealth
        {
        public:
            virtual float get_health( void ) const = 0;
            virtual void do_damage( float amount ) = 0;
        };

        class Health : public Component, public IHealth
        {
        public:
            void do_damage( float amount ) { m_health -= amount; }
        private:
            float m_health;
        };

        class Armor : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return next<IHealth>().get_health(); }
            void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
        };

        entity.add( new Health( 100 ) );
        entity.add( new Armor() );
        assert( entity.get<IHealth>().get_health() == 100 );
        entity.get<IHealth>().do_damage( 10 );
        assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?

    Read the article

  • How to implement soft edge areas with particles

    - by OpherV
    My game is created using Phaser, but the question itself is engine-agnostic. In my game I have several environments: essentially polygonal areas that player characters can move into and be affected by, for example ice, fire or poison. The graphic element of these areas is the color-filled polygon area itself, plus particles of the suitable type (in this example, ice shards). This is how I'm currently implementing it, with a polygon mask covering a tilesprite that holds the particle pattern. The hard edge looks bad. I'd like to improve it by doing two things:

    1. Making the polygon fill area have a soft edge and blend into the background.
    2. Having some of the shards extend outside the polygon area, so that they are not cut in the middle and the area doesn't end in a straight line, for example (mockup).

    I think 1 can be achieved by blurring the polygon, but I'm not sure how to go about 2. How would you go about implementing this?

    Read the article

  • Is StringToHash() safe to use in Unity?

    - by Sebastian Krysmanski
    I'm currently browsing through the Unity tutorials and saw that they recommend using Animator.StringToHash("some string") to create unique IDs for animation properties (see here). Since I'm a programmer, to me the word "hash" doesn't represent something unique. As the Java documentation for hashCode() states: "It is not required that if two objects are unequal [...], then calling the hashCode method on each of the two objects must produce distinct integer results." So, according to this (and my definition of "hash"), two strings may have the same hash value. (You can also argue that there are an infinite number of possible strings but only 2^32 possible int values.) So, is there a possibility that StringToHash() will give me an ID that actually belongs to another property than the one I requested the hash for?
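
    For context, the pattern the tutorials recommend looks roughly like this (a minimal sketch; the "Speed" parameter name and the PlayerAnimation class are assumptions):

        using UnityEngine;

        public class PlayerAnimation : MonoBehaviour
        {
            // Hash the parameter name once instead of re-hashing the string every frame.
            private static readonly int SpeedId = Animator.StringToHash("Speed");
            private Animator animator;

            void Awake()
            {
                animator = GetComponent<Animator>();
            }

            void Update()
            {
                animator.SetFloat(SpeedId, 1.0f); // the int overload skips the hashing
            }
        }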

    Read the article

  • How to teach game programming at school?

    - by jokoon
    I'm at a private school right now, and on top of gradually coming off anti-depressants, I'm having a hard time focusing on what the school wants me to do. The school has a professional contract for a game we have to make with Unity. I don't really learn anything new while using Unity, so I don't like using it. We recently learned how to use DirectX, and we have to make some sort of Gradius-precursor clone (Parsec) with DirectX, in 3D; this annoys me, and I'm currently teaching myself Ogre3D by making a game with it. The teacher is an engineer, but none of us will be engineers. How would you teach game programming?

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, though probably not in the best of ways. For a 2D top-down game (simply rendering the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered; it's that simple. There's no need to render a tile that won't be seen. But since you have up to 1000x1000 objects in your map, you probably don't want to loop through all 1,000,000 tiles just to see whether they're supposed to be rendered or not. Question: what is the best way to implement this efficiently, so the renderer can determine quickly which tiles should be drawn? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points with a texture inside that shape. Question: how do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle: just check whether its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put: how do you manage the code and the data efficiently so as not to loop through a million objects at the same time?
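
    For the uniform-tile part, the standard trick (a hedged sketch, assuming square tiles of tileSize world units and a camera rectangle in world space; camera, tiles and DrawTile are made-up names) is to convert the camera bounds into tile indices and loop over that window only:

        // Visible tile window: O(visible tiles) instead of O(mapWidth * mapHeight).
        int firstX = Math.Max(0, (int)(camera.Left / tileSize));
        int firstY = Math.Max(0, (int)(camera.Top  / tileSize));
        int lastX  = Math.Min(mapWidth  - 1, (int)(camera.Right  / tileSize));
        int lastY  = Math.Min(mapHeight - 1, (int)(camera.Bottom / tileSize));

        for (int y = firstY; y <= lastY; y++)
            for (int x = firstX; x <= lastX; x++)
                DrawTile(tiles[x, y]);

    For the irregular shapes, a common answer is to store each shape's axis-aligned bounding box in a coarse spatial structure (a grid or quadtree) and test box-against-camera first, so the per-point work only happens for the few candidates near the view.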

    Read the article

  • Multiple setInterval in a HTML5 Canvas game

    - by kushsolitary
    I'm trying to achieve multiple animations in a game that I am creating using canvas (it is a simple ping-pong game). This is my first game and I am new to canvas, but I have created a few experiments before, so I have a good knowledge of how canvas works. First, take a look at the game here. The problem is: when the ball hits the paddle, I want a burst of n particles at the point of contact, but that doesn't come out right. Even if I set the particle count to 1, the particles just keep coming from the point of contact and then hide automatically after some time. Also, I want the burst on every collision, but it occurs on the first collision only. I am pasting the code here:

        //Initialize canvas
        var canvas = document.getElementById("canvas"),
            ctx = canvas.getContext("2d"),
            W = window.innerWidth,
            H = window.innerHeight,
            particles = [],
            ball = {},
            paddles = [2],
            mouse = {},
            points = 0,
            fps = 60,
            particlesCount = 50,
            flag = 0,
            particlePos = {};

        canvas.addEventListener("mousemove", trackPosition, true);

        //Set its height and width to full screen
        canvas.width = W;
        canvas.height = H;

        //Function to paint canvas
        function paintCanvas() {
            ctx.globalCompositeOperation = "source-over";
            ctx.fillStyle = "black";
            ctx.fillRect(0, 0, W, H);
        }

        //Create two paddles
        function createPaddle(pos) {
            //Height and width
            this.h = 10;
            this.w = 100;
            this.x = W/2 - this.w/2;
            this.y = (pos == "top") ? 0 : H - this.h;
        }

        //Push two paddles into the paddles array
        paddles.push(new createPaddle("bottom"));
        paddles.push(new createPaddle("top"));

        //Setting up the parameters of the ball
        ball = {
            x: 2, y: 2, r: 5, c: "white", vx: 4, vy: 8,
            draw: function() {
                ctx.beginPath();
                ctx.fillStyle = this.c;
                ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
                ctx.fill();
            }
        };

        //Function for creating particles
        function createParticles(x, y) {
            this.x = x || 0;
            this.y = y || 0;
            this.radius = 0.8;
            this.vx = -1.5 + Math.random()*3;
            this.vy = -1.5 + Math.random()*3;
        }

        //Draw everything on canvas
        function draw() {
            paintCanvas();
            for (var i = 0; i < paddles.length; i++) {
                p = paddles[i];
                ctx.fillStyle = "white";
                ctx.fillRect(p.x, p.y, p.w, p.h);
            }
            ball.draw();
            update();
        }

        //Mouse position tracking
        function trackPosition(e) {
            mouse.x = e.pageX;
            mouse.y = e.pageY;
        }

        //Function to increase speed every 4 points
        function increaseSpd() {
            if (points % 4 == 0) {
                ball.vx += (ball.vx < 0) ? -1 : 1;
                ball.vy += (ball.vy < 0) ? -2 : 2;
            }
        }

        //Function to update positions
        function update() {
            //Move the paddles on mouse move
            if (mouse.x && mouse.y) {
                for (var i = 1; i < paddles.length; i++) {
                    p = paddles[i];
                    p.x = mouse.x - p.w/2;
                }
            }

            //Move the ball
            ball.x += ball.vx;
            ball.y += ball.vy;

            //Collision with paddles
            p1 = paddles[1];
            p2 = paddles[2];
            if (ball.y >= p1.y - p1.h) {
                if (ball.x >= p1.x && ball.x <= (p1.x - 2) + (p1.w + 2)) {
                    ball.vy = -ball.vy;
                    points++;
                    increaseSpd();
                    particlePos.x = ball.x, particlePos.y = ball.y;
                    flag = 1;
                }
            } else if (ball.y <= p2.y + 2*p2.h) {
                if (ball.x >= p2.x && ball.x <= (p2.x - 2) + (p2.w + 2)) {
                    ball.vy = -ball.vy;
                    points++;
                    increaseSpd();
                    particlePos.x = ball.x, particlePos.y = ball.y;
                    flag = 1;
                }
            }

            //Collide with walls
            if (ball.x >= W || ball.x <= 0)
                ball.vx = -ball.vx;
            if (ball.y > H || ball.y < 0) {
                clearInterval(int);
            }

            if (flag == 1) {
                setInterval(emitParticles(particlePos.x, particlePos.y), 1000/fps);
            }
        }

        function emitParticles(x, y) {
            for (var k = 0; k < particlesCount; k++) {
                particles.push(new createParticles(x, y));
            }
            counter = particles.length;
            for (var j = 0; j < particles.length; j++) {
                par = particles[j];
                ctx.beginPath();
                ctx.fillStyle = "white";
                ctx.arc(par.x, par.y, par.radius, 0, Math.PI*2, false);
                ctx.fill();
                par.x += par.vx;
                par.y += par.vy;
                par.radius -= 0.02;
                if (par.radius < 0) {
                    counter--;
                    if (counter < 0)
                        particles = [];
                }
            }
        }

        var int = setInterval(draw, 1000/fps);

    My function for emitting particles is on line 156 of my file, and I call it on line 151. The problem may be that I am not resetting the flag variable, but I tried doing that and got even weirder results; you can check that out here. By resetting the flag variable, the problem of infinite particles gets resolved, but then the particles only animate and appear when the ball collides with the paddles. So I am now out of ideas.

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body), the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joints apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code:

        private RevoluteJoint joinBodyParts(
                Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) {
            RevoluteJointDef jointDef = new RevoluteJointDef();
            jointDef.initialize(a, b, a.getWorldPoint(anchor));
            jointDef.enableLimit = true;
            jointDef.lowerAngle = lowerAngle;
            jointDef.upperAngle = upperAngle;
            return (RevoluteJoint) world.createJoint(jointDef);
        }

        private Body createRectangleBodyPart(
                float x, float y, float width, float height) {
            PolygonShape shape = new PolygonShape();
            shape.setAsBox(width, height);

            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.y = y;
            bodyDef.position.x = x;

            Body body = world.createBody(bodyDef);

            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = shape;
            fixtureDef.density = 10;
            fixtureDef.filter.groupIndex = -1;
            fixtureDef.filter.categoryBits = FILTER_BOY;
            fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL;
            body.createFixture(fixtureDef);

            shape.dispose();
            return body;
        }

    I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so:

        torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f);
        Body head = createRoundBodyPart(x, y + 7.4f, 1);
        Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);
        Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);

        joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle);
        leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
        rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, either in top or side views, it would be fairly simple to do most things. But since this editor is meant for third parties, I have no clue how big a world someone will want to make, and I need to keep in mind that it will eventually become simply too much to check, handle and compare things that are happening completely away from the player's position. I know the solution to this is to subdivide my world into subregions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I'd hoped to get answered for some hints on the topic:

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or would you let the user subdivide areas by taste, with irregularly sized rectangles?

    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all the blocks in a simple array?

    3. A region being a different data structure from its owning "game world": when streaming a region, would you hand its objects over to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?

    4. Introducing the subdivision approach to the project, which already has a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node hold the layers as children, and replicate in each layer node a node per region? Or the opposite, where the parent node owns all possible regions, and each region has multiple layers as children? Or would you keep the region logic outside the graph completely (compatible with the first suggestion in question 3)?

    5. When I say virtually infinite worlds, I mean it under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? I think it's OK to stick to this limit, since it will never be reached easily.

    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system consults to know what to load/unload. The problem is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to keep in memory any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question; any answers are helpful!
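
    For the even-grid variant in question 1, a minimal sketch of the usual bookkeeping (hedged; Chunk, chunkSize, LoadChunk and the field names are all assumptions) keys fixed-size chunks by integer coordinates and streams a square window around each watcher:

        // Loaded chunks, keyed by integer grid coordinates.
        Dictionary<(int X, int Y), Chunk> active = new Dictionary<(int X, int Y), Chunk>();

        (int X, int Y) ChunkAt(float worldX, float worldY) =>
            ((int)Math.Floor(worldX / chunkSize), (int)Math.Floor(worldY / chunkSize));

        void StreamAround(float watcherX, float watcherY, int radius)
        {
            var c = ChunkAt(watcherX, watcherY);
            for (int dy = -radius; dy <= radius; dy++)
                for (int dx = -radius; dx <= radius; dx++)
                {
                    var key = (c.X + dx, c.Y + dy);
                    if (!active.ContainsKey(key))
                        active[key] = LoadChunk(key);
                }
            // chunks outside every watcher's window are unloaded elsewhere
        }

    One way to handle the teleport case in question 6 is to treat a warp as a short-lived watcher placed at the destination a moment before the jump, so the target region is already resident when the player arrives.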

    Read the article

  • Markup format or script for data files?

    - by Aaron
    The game I'm designing will be written mainly in a high-level scripting language (leaning towards either Lua or Squirrel) with a C++ core. In addition to scripts, I'm also going to need various data files. Many data files will hold static information such as graphical assets and monster types. I'd also want to create and update data files at runtime for user information like option settings and game saves. Can I get away with using plain script files (i.e. .lua or .nut files) for my data files, or is it better to use dedicated markup formats like XML or YAML? If I use script files, loaded separately from my true scripts, then I wouldn't need an extra library to read those files. Scripting languages like Lua also have table syntax that lends itself to data definition. On the other hand, I'd have to write my own schema-checking code, and these languages don't seem to support serialization "out of the box" the way the markup-format libraries do.

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position, on the X-Z plane, of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is: unproject the mouse into the scene to get the ray:

        Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
        Vector3 dir = p2 - p1;
        dir.Normalize();
        Ray ray = new Ray(p1, dir);

    Then get the Y-intercept with algebra:

        float t = -ray.Position.Y / ray.Direction.Y;
        Vector3 p = ray.Position + t * ray.Direction;

    The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse, and the problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements by doing some computations at double precision, and possibly by abusing the fact that floating-point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering whether there's an algorithmic change I can make to improve this. EDIT: a little snooping around in .NET Reflector on SlimDX.dll:

        public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection)
        {
            Vector3 coordinate = new Vector3();
            Matrix result = new Matrix();
            Matrix.Invert(ref worldViewProjection, out result);
            coordinate.X = (float)((((vector.X - x) / ((double)width)) * 2.0) - 1.0);
            coordinate.Y = (float)-((((vector.Y - y) / ((double)height)) * 2.0) - 1.0);
            coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
            TransformCoordinate(ref coordinate, ref result, out coordinate);
            return coordinate;
        }

        // ...

        public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
        {
            Vector3 vector;
            Vector4 vector2 = new Vector4 {
                X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
                Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
                Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
            };
            float num = (float)(1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
            vector2.W = num;
            vector.X = vector2.X * num;
            vector.Y = vector2.Y * num;
            vector.Z = vector2.Z * num;
            result = vector;
        }

    ...which seems to be a pretty standard method of unprojecting a point through a projection matrix; however, it does introduce another possible source of instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
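
    One small variant of the double-precision idea mentioned above (a hedged sketch, not a complete fix) is to promote just the ray-plane intersection to doubles before casting back to floats:

        // Intersect the mouse ray with the Y = 0 plane in double precision.
        double t = -(double)ray.Position.Y / (double)ray.Direction.Y;
        Vector3 p = new Vector3(
            (float)(ray.Position.X + t * ray.Direction.X),
            0f,
            (float)(ray.Position.Z + t * ray.Direction.Z));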

    Read the article

  • Win API Basic Paint program

    - by Tom Burman
    Just trying to learn a bit of the Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away rather than waiting for the left click, and 2) the line isn't solid, it's very dotty.

        case WM_MOUSEMOVE:
        {
            if (MK_LBUTTON) {
                hdc = GetDC(hwnd);
                hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, x, y, NULL);
                LineTo(hdc, LOWORD(lParam), HIWORD(lParam));
                ReleaseDC(hwnd, hdc);
            }
            else
                break;
        }

    Thanks for any help!

    Read the article

  • Precise Touch Screen Dragging Issue: Trouble Aligning with the Finger due to Different Screen Resolution

    - by David Dimalanta
    Please, I need your help. I'm trying to make a game that drags and drops a sprite/image with my finger following the image precisely, without being offset. When I try it on the 900x1280 (X [900] by Y [1280]) screen of a Google Nexus 7 tablet, it follows precisely. However, if I test on a phone smaller than 900x1280, my finger and the image are not aligned correctly, although dragging still works. This is the code I use to make a sprite follow my finger, in touchDragged():

        x = ((screenX + Gdx.input.getX()) / 2) - (fruit.width / 2);
        y = ((camera_2.viewportHeight * multiplier)
                - ((screenY + Gdx.input.getY()) / 2) - (fruit.width / 2));

    This code keeps the finger and the image/sprite together while dragging, but only at 900x1280. You may be wondering why camera_2.viewportHeight appears in my code. It's there for two reasons: to prevent inverted dragging (e.g. when you swipe your finger downwards, the sprite moves upwards instead), and as a baseline for reading coordinates... I think. Then I added another orthographic camera, named camera_1, and changed its settings; I use it for scaling the falling object in metres per pixel. It also seems to work independently on smartphones with smaller resolutions, and this is what I used:

    show():

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 280; // a smaller viewport height, so the object falls
                                       // faster, decreasing the chance of drag force
        camera_1.viewportWidth = 196;  // keep it as proportional to the original
                                       // screen view size as possible
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    touchDragged():

        x = ((screenX + (camera_1.viewportWidth / Gdx.input.getX())) / 2) - (fruit.width / 2);
        y = ((camera_1.viewportHeight * multiplier)
                - ((screenY + (camera_1.viewportHeight / Gdx.input.getY())) / 2) - (fruit.width / 2));

    But with this, instead of the image/sprite closely following my finger, there is still a gap between the sprite/image and the finger, which possibly depends on the screen resolution. I'm trying to drag the blueberry sprite with my finger, and this doesn't meet my expectation, since I want my finger and the sprite/image (blueberry) to stay close together while dragging until I release it. Here's what it looks like. I still have to figure out how to make the image/sprite closely follow my finger while dragging on all screen sizes, not just one.

    Read the article

  • Everything turning black when pitching down

    - by Gordon
    Just a quick question about something that's occurring in my world. Every time I pitch my camera downwards, everything starts turning black, and if I pitch upwards, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normals and light direction in world space, everything ends up fine. I thought putting them both in view space would not cause these weird things to happen?

    Read the article

  • Can SpriteBatch be used to fill a polygon with a texture?

    - by can poyrazoglu
    I basically need to fill a polygon with a texture using the SpriteBatch. I've done some research but couldn't find anything useful except the polygon triangulation method, which works well only with convex polygons (without diving into heavy math, which is definitely not something I'm good at). Are there any solutions for filling in a polygon in a basic way? I of course need something dynamic: I'll have a map editor where you can define polygons, and the game will render them (collision detection will also use them, but that's off topic). Basically, I can't accept solutions like pre-calculated bitmaps or anything like that. I need to draw a polygon, with the segments provided, to the screen using the SpriteBatch.

    Read the article

  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene in OpenGL to an FBO as my render target, whose size is 8 times the size of the original screen. Now I want to downsample the texture generated by the FBO to the size of the screen, so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.

    Read the article

  • Comparing angles and working out the difference

    - by Thomas O
    I want to compare angles and get an idea of the distance between them. For this application I'm working in degrees, but it would also work for radians and grads. The problem with angles is that they depend on modular arithmetic, i.e. 0-360 degrees. Say one angle is at 15 degrees and one is at 45. The difference is 30 degrees, and the 45-degree angle is greater than the 15-degree one. But this breaks down when you have, say, 345 degrees and 30 degrees. Although they compare properly, the difference between them comes out as 315 degrees instead of the correct 45 degrees. How can I solve this? I could write algorithmic code:

        if (angle1 > angle2)
            delta_theta = 360 - angle2 - angle1;
        else
            delta_theta = angle2 - angle1;

    But I'd prefer a solution that avoids compares/branches and relies entirely on arithmetic.
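
    A branch-free formulation that handles the wrap-around (a hedged sketch in C#, assuming inputs in degrees; note that C#'s % operator keeps the sign of the dividend):

        // Signed smallest difference from a to b, in [-180, 180).
        static float DeltaAngle(float a, float b)
        {
            float d = (b - a) % 360f;        // raw difference, in (-360, 360)
            return (d + 540f) % 360f - 180f; // re-centre into [-180, 180)
        }

        // DeltaAngle(345f, 30f) ==  45  (not 315)
        // DeltaAngle(15f, 45f)  ==  30
        // DeltaAngle(45f, 15f)  == -30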

    Read the article

  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so that it looks like it is being zoomed towards a point, or "absorbed" by that point. I'm not really sure what the technical name of this effect is, so I cannot describe it precisely. Here you can see a video of what I'm talking about; it is the effect shown when the character absorbs the three glyphs: http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
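
    A crude starting point (a hedged sketch, not the exact effect in the video; it just slides the sprite into the absorption point while shrinking and fading it, and startPosition, absorbPoint, glyphTexture and t are all made-up names):

        // t runs from 0 (start) to 1 (fully absorbed); advance it in Update.
        Vector2 pos   = Vector2.Lerp(startPosition, absorbPoint, t);
        float   scale = 1f - t; // shrink to nothing as it reaches the point

        spriteBatch.Draw(glyphTexture, pos, null,
            Color.White * (1f - t),                 // fade out on the way in
            t * 6f,                                 // optional swirl (radians)
            new Vector2(glyphTexture.Width / 2f, glyphTexture.Height / 2f),
            scale, SpriteEffects.None, 0f);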

    Read the article

  • Unity3D - Android pause screen - double click issue

    - by user3666251
    I made a pause script for the game I'm developing for Android. I added the script to the GUITexture I created and placed at the top right of the screen. The issue arises when the player taps the pause button, taps resume, and then wants to pause the game again: on the second tap of pause, the buttons don't show up unless he taps once more. This is the script:

        #pragma strict

        var paused = false;
        var isButtonVisible : boolean = true;

        function OnMouseDown() {
            this.paused = !this.paused;
            Time.timeScale = 0;
            isButtonVisible = true;
        }

        function OnGUI() {
            if (isButtonVisible) {
                if (this.paused) {
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 + 3, 200, 50), "Restart")) {
                        Application.LoadLevel(Application.loadedLevel);
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 - 50, 200, 50), "Resume")) {
                        Time.timeScale = 1;
                        isButtonVisible = false;
                    }
                    // Insert the rest of the pause menu logic
                    if (GUI.Button(Rect(Screen.width/2 - 100, Screen.height/2 + 56, 200, 50), "Main Menu")) {
                        Application.LoadLevel("MainMenu");
                        isButtonVisible = false;
                        Time.timeScale = 1;
                    }
                }
            }
        }

    Thank you.

    Read the article

  • Multiplayer Network Game - Interpolation and Frame Rate

    - by J.C.
    Consider the following scenario. Let's say, for the sake of example and simplicity, that you have an authoritative game server that sends state to its clients every 45 ms. The clients interpolate state with an interpolation delay of 100 ms. Finally, the clients render a new frame every 15 ms. When state is updated on the client, the client time is set from the incoming state update. Each time a frame renders, we take the render time (client time minus interpolation delay) and identify a previous and a target state to interpolate between. To calculate the interpolation factor, we take the difference between the render time and the previous state time, and divide by the difference between the target state and previous state times:

        var factor = (renderTime - previousStateTime) / (targetStateTime - previousStateTime);

    Problem: in the example above, we are effectively displaying the same interpolated state for 3 frames before we collect the next server update and a new client (render) time is set. The rendering is mostly smooth, but there is a dash of jaggedness to it. Question: given the example above, I'd like to think that the interpolation factor should increase with each frame render to smooth out the movement. Should this be considered and, if so, what is the best way to achieve it given the information above?
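
    One common remedy (a hedged sketch; clientTimeMs, interpolationDelayMs and the state times are assumed fields, and the 0.1 smoothing constant is arbitrary) is to advance the client clock locally every frame and only nudge it when a server update arrives, so renderTime moves on every 15 ms frame instead of every 45 ms update:

        // Called once per rendered frame.
        void OnFrame(double frameDeltaMs)
        {
            clientTimeMs += frameDeltaMs; // the clock advances even between updates
            double renderTime = clientTimeMs - interpolationDelayMs;
            double factor = (renderTime - previousStateTime)
                          / (targetStateTime - previousStateTime);
            ApplyInterpolatedState(factor);
        }

        // Called whenever a state update arrives from the server.
        void OnServerUpdate(double serverTimeMs)
        {
            // Nudge rather than snap, to smooth out clock drift and jitter.
            clientTimeMs += (serverTimeMs - clientTimeMs) * 0.1;
        }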

    Read the article

  • Collision detection with entities/AI

    - by James Williams
    I'm making my first game in Java, a top-down 2D RPG. I've handled basic collision detection and rendering, and have added an NPC, but I'm stuck on how to handle interaction between the player and the NPC. Currently I draw out my level and then draw characters, NPCs and animated tiles on top of it. The problem is keeping track of the NPCs so that my Character class can call methods on the NPC classes on collision. I'm not sure my method of drawing the level and drawing everything else on top is a good one; can anyone shed any light on this topic?

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (the flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that, at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient; that is why I am trying to do it through a blendShader, so Flash determines when it needs rendering automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned, without explanation, in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • Logging library for (C++) games

    - by Klaim
    I know of a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess), and I have good reasons to look for a logging library (for example, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

    - performance
    - ease of use (allows streaming or formatting or something like that)
    - reliability (doesn't leak or crash!)
    - cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on its performance and usage. I've used my own library for a long time, but I know it doesn't handle multithreading, so that's a big problem, even though it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libraries and more, your advice would be of good use. Games are often performance-demanding while being complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

    Read the article

  • OpenGL textures trigger error 1281 if SFML is not called

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures, I could change the background and display black (default color) VBOs easily, but now, with textures, OpenGL reports error 1281, the background is black, and some textures are not applied. The first texture IS applied, but nothing else works. The strange thing is: if I create a dummy texture with SFML in the same program, all the other textures do work. So I guess there is something I forgot in the texture creation/application; if someone could enlighten me. Here is the code I use to load textures (once loaded, a texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
        );
        if( texture > 0 ) {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        } else {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS
            );
            if( texture < 1 ) {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
                );
            }
            if( texture > 0 ) {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            } else {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    ...and here is how I apply a texture when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was the best, but I can't anymore), but if I can get it to work it would be great.

    Read the article
