Search Results

Search found 24037 results on 962 pages for 'game design'.


  • 3DS Max exporting too many vertices for a model

    - by Juan Pablo
    I have a sample model of a cube and a Buddha downloaded from the internet in .3ds format, both of which I can load correctly into my program and view without problems, but I wanted to try creating my own model. I created a simple box mesh in 3ds Max and exported it as .3ds (converted to mesh, exported as .3ds). When inspecting the .3ds file with a hex viewer, I expected to see 8 vertices and 12 faces declared (as in the model downloaded from the internet), but what I found was 26 vertices and 12 faces! And when I try to load that file with my .3ds viewer, my parser doesn't detect the face block (0x4120), which is strange, because it worked for the other objects downloaded from the internet. Do I have to set any special property in order to export a .3ds file with the minimum number of vertices and a vertex-index list?
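
    For reference when debugging a parser like this: a .3ds file is a tree of chunks, each starting with a 2-byte ID and a 4-byte length that includes the 6-byte header, and exporters routinely duplicate vertices along smoothing-group and texture-seam boundaries, so a box exporting with more than 8 vertices is normal (a cube with per-face normals needs 24). Below is a minimal chunk-walker sketch in C#; the chunk IDs come from the published .3ds chunk tables, but the reader itself is an illustrative assumption, not the asker's parser.

        using System;
        using System.IO;

        class ChunkWalker
        {
            const ushort MAIN3DS = 0x4D4D, EDIT3DS = 0x3D3D, OBJECT = 0x4000;
            const ushort TRI_MESH = 0x4100, VERT_LIST = 0x4110, FACE_LIST = 0x4120;

            static void Walk(BinaryReader r, long end, int depth)
            {
                while (r.BaseStream.Position < end)
                {
                    long start = r.BaseStream.Position;
                    ushort id = r.ReadUInt16();   // 2-byte chunk ID
                    uint length = r.ReadUInt32(); // total chunk length, header included
                    Console.WriteLine($"{new string(' ', depth * 2)}chunk 0x{id:X4} ({length} bytes)");

                    // Container chunks hold sub-chunks; descend into the known ones.
                    if (id == MAIN3DS || id == EDIT3DS || id == TRI_MESH)
                        Walk(r, start + length, depth + 1);
                    else if (id == OBJECT)
                    {
                        while (r.ReadByte() != 0) { } // skip the ASCIIZ object name
                        Walk(r, start + length, depth + 1);
                    }
                    else
                        r.BaseStream.Seek(start + length, SeekOrigin.Begin); // skip payload
                }
            }

            static void Main(string[] args)
            {
                using (var r = new BinaryReader(File.OpenRead(args[0])))
                    Walk(r, r.BaseStream.Length, 0);
            }
        }

    Dumping the tree this way makes it easy to see whether the 0x4120 face block is actually present and merely being skipped by the parser.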

    Read the article

  • Unity3D: Box colliders attached to animated FBX models through scripts at run-time have the wrong dimensions

    - by Heisenbug
    I have several scripts attached to static and non-static models in my scene. All models are instantiated at run-time (and must be instantiated at run-time, because I'm building the scene procedurally). I'd like to add a BoxCollider or SphereCollider to my FBX models at runtime. With non-animated models it works simply by requiring the BoxCollider component from the script attached to my GameObject; the BoxCollider is created with the right dimensions. Something like:

        [RequireComponent(typeof(BoxCollider))]
        public class AScript : MonoBehaviour
        {
        }

    If I do the same thing with animated models, the BoxCollider is created with the wrong dimensions. For example, if I attach the script above to the Penelope FBX model from the standard assets, the BoxCollider is created smaller than the mesh itself. How can I solve this?
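
    One approach worth sketching (an assumption on my part, not necessarily the fix the asker settled on): compute the combined world-space bounds of the instantiated model's renderers, skinned ones included, and size the collider from that instead of relying on the automatic fit. The conversion back to local space below ignores any rotation between the object and its bounds, so treat it as a starting point.

        using UnityEngine;

        public class FitBoxCollider : MonoBehaviour
        {
            void Start()
            {
                // Merge the bounds of every renderer under this object;
                // skinned meshes report world-space bounds here.
                var renderers = GetComponentsInChildren<Renderer>();
                if (renderers.Length == 0) return;

                Bounds bounds = renderers[0].bounds;
                for (int i = 1; i < renderers.Length; i++)
                    bounds.Encapsulate(renderers[i].bounds);

                var box = gameObject.AddComponent<BoxCollider>();
                // BoxCollider.center/size are local, so convert from world space.
                box.center = transform.InverseTransformPoint(bounds.center);
                box.size = Vector3.Scale(bounds.size,
                    new Vector3(1f / transform.lossyScale.x,
                                1f / transform.lossyScale.y,
                                1f / transform.lossyScale.z));
            }
        }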

    Read the article

  • How to properly render a Frame Buffer to the BackBuffer in Stage3D / AGAL

    - by bigp
    After doing a render pass with RenderToTarget (RTT), how do you properly render that texture buffer to the screen while maintaining the original scale/proportions, so it doesn't stretch or lose quality? Can an AGAL vertex shader and fragment shader be written so they adapt to any texture size and viewport dimensions? I'm getting some "blocky" effects in my first attempts at "ping-ponging" between two texture buffers (to create trailing effects). Perhaps I'm not using the UVs correctly between the render-to-target and/or the backbuffer? Is there a simpler way just to "splash" the texture onto the backbuffer, or is a quad absolutely necessary (4 vertices, 2 triangles)? If it needs the quad, should the texture buffer be fully drawn (0.0 to 1.0 for both vertical and horizontal UVs), or only a percentage of it, like the example below?

        Texture Buffer U: 0.0 to viewport.width / texturebuffer.width
        Texture Buffer V: 0.0 to viewport.height / texturebuffer.height

    Thanks!

    Read the article

  • Creating a bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes, one for each cube drawn, so I can intersect the boxes with a ray cast from my mouse position, but I have no idea how. I've created a list to store the boxes, but how do I get the values for each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;
            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));
            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?
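
    Since the Cube constructor builds a unit cube from (0, 0, 0) to (1, 1, 1) that is then placed at (x, map[x, z], z), one sketch of a fill-in for "something here" (hedged: it assumes the cube's world position is its minimum corner) is to build an XNA BoundingBox directly from that corner:

        // Hypothetical fill-in: each box spans one unit from the cube's
        // position, matching the unit cube built in the Cube constructor.
        Vector3 min = new Vector3(x, map[x, z], z);
        boxList.Add(new BoundingBox(min, min + Vector3.One));

        // Later, picking with the mouse ray (Ray.Intersects returns null on a miss):
        // float? hit = mouseRay.Intersects(boxList[i]);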

    Read the article

  • I am looking for a graduation project idea for a bachelor of computer engineering [closed]

    - by project idea
    I am interested in computer graphics and I have developed many hobby projects, mostly 2D and 3D games/scenes in DirectX and OpenGL, but for a graduation project professors won't allow games. I browsed many similar questions here and I am convinced the project should be something I am really interested in, as I will give considerable time to it. But apart from games I am not able to decide on a topic. I am also open to ideas for social apps and Android.

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal-map textures. Even though I've got it working using borrowed code, I want to understand it. I have a terrain based on a heightmap; I generate a mesh of triangles at load time and render that mesh. For each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?

    The normal is a unit vector facing outwards from the surface of the triangle. For a vertex, I take the average of the normals of the triangles using that vertex.

    The tangent is a unit vector in the direction of the 'u' coordinate of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, this vector is simply the vector along the surface in the x direction, so I should be able to calculate it as the difference between vertices in the x direction, normalized.

    The bitangent is a unit vector in the direction of the 'v' coordinate of the texture map. By the same reasoning, this should simply be the vector along the surface in the y direction, again normalized.

    However, the code I borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I described above and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
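
    For context, the general per-triangle computation (very likely what the borrowed code implements) solves for the tangent and bitangent from two triangle edges and their UV deltas. A sketch using XNA-style C# math, with hypothetical parameter names:

        // Standard tangent-space derivation for one triangle.
        static void ComputeTangent(
            Vector3 p0, Vector3 p1, Vector3 p2,
            Vector2 uv0, Vector2 uv1, Vector2 uv2,
            out Vector3 tangent, out Vector3 bitangent)
        {
            Vector3 e1 = p1 - p0, e2 = p2 - p0;     // position edges
            Vector2 d1 = uv1 - uv0, d2 = uv2 - uv0; // matching UV deltas

            float r = 1.0f / (d1.X * d2.Y - d2.X * d1.Y); // inverse UV determinant
            tangent = (e1 * d2.Y - e2 * d1.Y) * r;        // direction in which u grows
            bitangent = (e2 * d1.X - e1 * d2.X) * r;      // direction in which v grows
            tangent.Normalize();
            bitangent.Normalize();
        }

    When u grows exactly with x and v with y, this reduces to the axis-difference intuition described above, which suggests the lighting problem lies elsewhere (for example in the normal averaging or in a handedness mismatch).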

    Read the article

  • Rotation of a bitmap using frame-by-frame animation

    - by pengume
    Hey everyone, I know this has probably been asked a ton of times, but I just wanted to check whether I am approaching this correctly, since I ran into some problems rotating a bitmap. Basically I have one large bitmap that has four frames drawn on it, and I draw one at a time by looping through the bitmap in increments to animate walking. I can get the bitmap to rotate correctly when it is not moving, but once the animation starts it cuts off a lot of the image and sometimes becomes very fuzzy. I have tried:

        public void draw(Canvas canvas, int pointerX, int pointerY) {
            Matrix m;
            if (setRotation) {
                // canvas.save();
                m = new Matrix();
                m.reset();
                // spriteWidth and spriteHeight are for just the current frame shown
                m.setTranslate(spriteWidth / 2, spriteHeight / 2);
                // get and set rotation for ninja based off of joystick
                m.preRotate((float) GameControls.getRotation());
                // create the rotated bitmap
                flipedSprite = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);
                // set new bitmap to rotated ninja
                setBitmap(flipedSprite);
                // canvas.restore();
                Log.d("Ninja View", "angle of rotation= " + (float) GameControls.getRotation());
                setRotation = false;
            }
        }

    And then the draw method here:

        // create the destination rectangle for the ninja's current animation frame;
        // pointerX and pointerY are from the joystick moving the ninja around
        destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);

    The animation is four frames long and gets incremented by 66 (the size of one frame on the bitmap) every frame, then back to 0 at the end of the loop.

    Read the article

  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that a short int takes up less memory (2 bytes, as opposed to the 4 bytes of a long int, which is the default for int), I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it uses the Swap function. The problem is that I receive this error the moment I change everything from int to short int:

        error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *'
        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    Since my code is so small, here is the whole thing:

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;
            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;
            return 0;
        }

    Read the article

  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children, "FrontWheel" and "BackWheel" (both wheels are cylinders), how should I set everything up so that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis rolls on the cylinders and eventually falls off. How can I prevent it from doing that?
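
    A common way to set this up (offered as a hedged sketch, since the post doesn't say which solution was adopted) is to connect each wheel to the chassis with a HingeJoint whose axis is the wheel's spin axis, then drive the joint's motor instead of applying raw torque; adding the joint also gives the wheel its own Rigidbody automatically.

        using UnityEngine;

        // Attach to each wheel cylinder; 'chassis' is a hypothetical field
        // to be assigned the vehicle body's Rigidbody in the inspector.
        public class DrivenWheel : MonoBehaviour
        {
            public Rigidbody chassis;
            public float targetVelocity = 500f; // degrees per second
            public float maxTorque = 100f;

            void Start()
            {
                var joint = gameObject.AddComponent<HingeJoint>();
                joint.connectedBody = chassis; // wheel now spins relative to the chassis
                joint.axis = Vector3.right;    // spin axis; depends on the cylinder's orientation
                joint.useMotor = true;

                var motor = joint.motor;
                motor.targetVelocity = targetVelocity;
                motor.force = maxTorque;
                joint.motor = motor;
            }
        }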

    Read the article

  • Cocos2D 2.0 - masking a sprite

    - by Desperate Developer
    I have read this tutorial about how to mask sprites using Cocos2D 2.0: http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0. But the author talks about OpenGL ES textures and vertices as if they were common knowledge, and my knowledge of OpenGL is zero raised to infinity. All I want is to use a rectangle to mask a sprite, the way I would in Photoshop with a rectangle as a mask (yes, I want to clip a sprite to the rectangle's bounds, and no, I do not want to use the ClippingNode solution, which doesn't work for animation/scaling etc.). So, can you translate the Klingon used in this tutorial and tell me how a solid rectangle can be used to mask a sprite in Cocos2D? I am desperate, as my username states. I have been searching for a week and have tried several solutions without satisfactory results. Please help me. Thanks!

    Read the article

  • Open Source AI Bot interfaces

    - by David Young
    What are some open-source AI bot interfaces, similar to Pogamut 3 / GameBots2004 for custom Unreal Tournament bots, or the Brood War API for StarCraft bots? Please post one AI bot interface per answer (make sure to provide a link) and give a brief summary of it. Please include what type of bot interface structure it is (client/server, server/server, etc.), e.g. BWAPI is client/server and emulates a real player.

    Read the article

  • GLSL Bokeh using Quads and Textures

    - by Notoriousaur
    I'm trying to create a depth-of-field effect with bokeh sprites in GLSL. Specifically, what I would like to do for each pixel is: see if the pixel is out of the focal range, and if it is, draw a quad and apply a texture to provide a bokeh sprite. This kind of implementation is seen in the Unreal Engine and in Matt Pettineo's work; however, both implementations are in DX11 and I'm using OpenGL. I'm a bit stuck on the drawing-a-quad-and-applying-a-texture bit. Does anyone know how I can do this, or can you provide any relevant links? Thanks

    Read the article

  • How to create an array of unique sprites in cocos2d-iphone?

    - by prakash s
    I wrote the code below. It displays only one sprite (a red-coloured bubble) a number of times, moving down, but I actually want to display a different sprite (a different-coloured bubble) each time, moving down. I have added a number of .png images to my project's resource folder; here I used only 3.png, but I need to display all of the *.png images (different-coloured bubbles) in my project, and I don't know how to achieve this. Please help me, thank you. Here is the code:

        -(void)addTarget {
            CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)];
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            int minY = target.contentSize.height/2;
            int maxY = winSize.height - target.contentSize.height/2;
            int rangeY = maxY - minY;
            int actualY = (arc4random() % rangeY) + minY;

            // Create the target slightly off-screen along the right edge,
            // and along a random position along the Y axis as calculated above
            target.position = ccp(winSize.width + (target.contentSize.width/2), actualY);
            [self addChild:target];

            // Determine speed of the target
            int minDuration = 4.0;
            int maxDuration = 12.0;
            int rangeDuration = maxDuration - minDuration;
            int actualDuration = (arc4random() % rangeDuration) + minDuration;

            // Create the actions
            id actionMove = [CCMoveTo actionWithDuration:actualDuration
                                                position:ccp(-target.contentSize.width/2, actualY)];
            id actionMoveDone = [CCCallFuncN actionWithTarget:self
                                                     selector:@selector(spriteMoveFinished:)];
            [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

            // Add to targets array
            target.tag = 2;
            [_targets addObject:target];
        }

        -(void)gameLogic:(ccTime)dt {
            [self addTarget];
        }

        -(id)init {
            if ((self = [super initWithColor:ccc4(255,255,255,255)])) {
                // Enable touch events
                self.isTouchEnabled = YES;
                // Initialize arrays
                _targets = [[NSMutableArray alloc] init];
                _projectiles = [[NSMutableArray alloc] init];
                // Get the dimensions of the window for calculation purposes
                CGSize winSize = [[CCDirector sharedDirector] winSize];
                [self schedule:@selector(gameLogic:) interval:1.0];
                [self schedule:@selector(update:)];
            }
            return self;
        }

        -(void)update:(ccTime)dt {
            NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];
            for (CCSprite *projectile in _projectiles) {
                CGRect projectileRect = CGRectMake(projectile.position.x - (projectile.contentSize.width/2),
                                                   projectile.position.y - (projectile.contentSize.height/2),
                                                   projectile.contentSize.width,
                                                   projectile.contentSize.height);
                NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];
                for (CCSprite *target in _targets) {
                    CGRect targetRect = CGRectMake(target.position.x - (target.contentSize.width/2),
                                                   target.position.y - (target.contentSize.height/2),
                                                   target.contentSize.width,
                                                   target.contentSize.height);
                    if (CGRectIntersectsRect(projectileRect, targetRect)) {
                        [targetsToDelete addObject:target];
                    }
                }
                for (CCSprite *target in targetsToDelete) {
                    [_targets removeObject:target];
                    [self removeChild:target cleanup:YES];
                    _projectilesDestroyed++;
                    if (_projectilesDestroyed > 30) {
                        // GameOverScene *gameOverScene = [GameOverScene node];
                        // [gameOverScene.layer.label setString:@"You Win!"];
                        // [[CCDirector sharedDirector] replaceScene:gameOverScene];
                    }
                }
                if (targetsToDelete.count > 0) {
                    [projectilesToDelete addObject:projectile];
                }
                [targetsToDelete release];
            }
            for (CCSprite *projectile in projectilesToDelete) {
                [_projectiles removeObject:projectile];
                [self removeChild:projectile cleanup:YES];
            }
            [projectilesToDelete release];
        }

    Read the article

  • Best way to compute vertex normals from a triangle list

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if this is a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object containing a list of Triangles. For now I compute normals inside the Triangle class, this way:

        triangle.computeFaceNormals() {
            Vec3D u = v1.sub(v3)
            Vec3D v = v1.sub(v2)
            Vec3D normal = Vec3D.cross(u, v)
            normal.normalized()
            this.n1 = this.n2 = this.n3 = normal
        }

    and when building the surface:

        t = new Triangle(v1, v2, v3).computeFaceNormals()
        surface.addTriangle(t)

    I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: flipcode vertex normal. But hey, doesn't this algorithm have exponential complexity? (If my memory doesn't fail my computer science background...) By the way, it has 3 nested loops; I don't think it's the best way to do it. Any suggestions?
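
    For reference, the usual replacement for the nested-loop approach runs in one pass over the triangles plus one over the vertices: accumulate each face normal into its three vertices, then normalize the sums. A C# sketch with minimal hypothetical types (not the asker's Vec3D API):

        using System.Numerics; // Vector3

        // Indexed mesh: 'positions' holds unique vertices and each triangle is
        // three indices into it. Accumulation is O(triangles + vertices).
        static Vector3[] ComputeVertexNormals(Vector3[] positions, int[] indices)
        {
            var normals = new Vector3[positions.Length];
            for (int i = 0; i < indices.Length; i += 3)
            {
                int a = indices[i], b = indices[i + 1], c = indices[i + 2];
                // Un-normalized cross product: weights the sum by triangle area.
                Vector3 face = Vector3.Cross(positions[b] - positions[a],
                                             positions[c] - positions[a]);
                normals[a] += face;
                normals[b] += face;
                normals[c] += face;
            }
            for (int i = 0; i < normals.Length; i++)
                normals[i] = Vector3.Normalize(normals[i]);
            return normals;
        }

    The flipcode approach is slow only because it re-scans every triangle for every vertex; sharing vertices through an index buffer removes that cost entirely.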

    Read the article

  • Bohemia Interactive's bio2s format

    - by Jaime Soto
    Does anyone have specifications for the bio2s scripting language from Bohemia Interactive? They develop Operation Flashpoint, Armed Assault (ArmA), and Virtual Battlespace. These scripts are sometimes called O2 or Oxygen scripts and are used in their terrain and modeling tools. Oxygen is Bohemia Interactive's modeling tool. I found additional examples of the format in this VBS2 tutorial and this ArmA forum thread. EDIT: I clarified the purpose of the bio2s format and provided some links to examples.

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    Do you have any reliable face normal calculation code? I'm using this but it fails when faces are 90 degrees upright or similar.

        // the normal point
        var x:Number = 0;
        var y:Number = 0;
        var z:Number = 0;

        // if it is a triangle with 3 points
        if (points.length == 3) {
            // read vertices of triangle
            var Ax:Number, Bx:Number, Cx:Number;
            var Ay:Number, By:Number, Cy:Number;
            var Az:Number, Bz:Number, Cz:Number;
            Ax = points[0].x; Bx = points[1].x; Cx = points[2].x;
            Ay = points[0].y; By = points[1].y; Cy = points[2].y;
            Az = points[0].z; Bz = points[1].z; Cz = points[2].z;

            // calculate normal of a triangle
            x = (By - Ay) * (Cz - Az) - (Bz - Az) * (Cy - Ay);
            y = (Bz - Az) * (Cx - Ax) - (Bx - Ax) * (Cz - Az);
            z = (Bx - Ax) * (Cy - Ay) - (By - Ay) * (Cx - Ax);

        // if it is a polygon with 4+ points
        } else if (points.length > 3) {
            // calculate normal of a polygon using all points
            var n:int = points.length;
            x = 0;
            y = 0;
            z = 0;

            // ensure all points are above 0
            var minx:Number = 0, miny:Number = 0, minz:Number = 0;
            for (var p:int = 0, pl:int = points.length; p < pl; p++) {
                var po:_Point3D = points[p] = points[p].clone();
                if (po.x < minx) { minx = po.x; }
                if (po.y < miny) { miny = po.y; }
                if (po.z < minz) { minz = po.z; }
            }
            if (minx > 0 || miny > 0 || minz > 0) {
                for (p = 0; p < pl; p++) {
                    po = points[p];
                    po.x -= minx;
                    po.y -= miny;
                    po.z -= minz;
                }
            }

            var cur:int = 1, prev:int = 0, next:int = 2;
            for (var i:int = 1; i <= n; i++) {
                // using Newell's method
                x += points[cur].y * (points[next].z - points[prev].z);
                y += points[cur].z * (points[next].x - points[prev].x);
                z += points[cur].x * (points[next].y - points[prev].y);
                cur = (cur + 1) % n;
                next = (next + 1) % n;
                prev = (prev + 1) % n;
            }
        }

        // length of the normal
        var length:Number = Math.sqrt(x * x + y * y + z * z);

        // if area is 0
        if (length == 0) {
            return null;
        } else {
            // turn large values into a unit vector
            x = x / length;
            y = y / length;
            z = z / length;
        }

    Read the article

  • Camera changes view when controller connected

    - by ChocoMan
    I have a weird situation. I have a model set to 0 for X, Y, and Z. My camera's position is set to: 0 on X (but it updates when the model moves around), the model's height + 20f on Y (about the same level as the model's shoulders), and 25f on Z (behind the model). Without the controller plugged in, everything looks exactly as I want it. But as soon as I plug the controller in, the camera aims at the sky! And when I unplug the controller, the camera is back to what it should be. Does anyone have any insight as to what may cause this just from plugging in a controller?

    Read the article

  • Entity component system -> handling components that depend on one another

    - by jtedit
    I really like the idea of an entity component system and feel it has great flexibility, but I have a question: how should dependent components be handled? I'm not talking about how components should communicate with the components they depend on (I have that sorted), but rather how to ensure that components are present. For example, an entity cannot have a "velocity" component if it doesn't have a "position" component, in the same way it can't have an "acceleration" component if it doesn't have a "velocity" component.

    My first idea was that every component class overrides an "onAddedToEntity(Entity ent)" function, and in that function it checks that prerequisite components are also added to the entity, e.g.:

        struct EntCompVelocity : public EntityComponent {
            // member variables here
            void onAddedToEntity(Entity ent) {
                if (!ent.hasComponent(EntCompPosition::Id)) {
                    ent.addComponent(new EntCompPosition());
                }
            }
        };

    This has the nice property that if the acceleration component adds the velocity component, the velocity component will itself add the position component, so dependency "trees" sort themselves out. However, my concern is that components will silently be added with default values and, in the example of adding position, many entities will appear at the origin.

    Another idea was to simply have the "Entity.addComponent()" function return false if the component's prerequisite components aren't already on the entity; this would force you to manually add the position component, and set its value, before adding the velocity component.

    Finally, I could simply not ensure that a component's prerequisite components are added. The "UpdatePosition" system only deals with entities that have both a position and a velocity component, so adding a velocity component without a position component won't cause crashes due to null pointers and the like; but it does mean entities will carry useless, unused data if you add components without their prerequisites.

    Does anyone have experience with this problem and/or any of these methods to solve it? How did you solve it?
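
    For what it's worth, the second option is straightforward to express as a guard in the add call. A C#-flavoured sketch of the idea (hypothetical types, not from the post):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Each component type declares its prerequisites; AddComponent refuses
        // to attach a component whose prerequisites are missing, instead of
        // silently adding defaulted ones.
        abstract class Component
        {
            public virtual Type[] Prerequisites => Array.Empty<Type>();
        }

        class Position : Component { public float X, Y; }

        class Velocity : Component
        {
            public float Dx, Dy;
            public override Type[] Prerequisites => new[] { typeof(Position) };
        }

        class Entity
        {
            readonly Dictionary<Type, Component> components = new Dictionary<Type, Component>();

            public bool AddComponent(Component c)
            {
                if (!c.Prerequisites.All(components.ContainsKey))
                    return false; // fail loudly: caller must add Position first
                components[c.GetType()] = c;
                return true;
            }
        }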

    Read the article

  • Creating a dynamic buffer in SharpDX

    - by fedab
    I want a buffer that is updated every frame, but I can't figure out what I have to do. The only working thing I have is this:

        mdescription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length,
            ResourceUsage.Dynamic, BindFlags.VertexBuffer,
            CpuAccessFlags.Write, ResourceOptionFlags.None, 0);
        instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdescription);
        vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
        DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);

    Draw:

        // Change Matrices (Matrix[]) every frame...
        instanceBuffer.Dispose();
        instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdescription);
        vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
        DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);

    I guess Dispose() followed by creating a new buffer is slow and can be done much faster. I've read about DataStream, but I don't know how to set it up properly. What steps do I have to take to set up a DataStream and achieve a fast every-frame update?
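
    Since the buffer is already created with ResourceUsage.Dynamic and CpuAccessFlags.Write, the standard D3D11 pattern (sketched here reusing the question's variable names; treat it as an outline rather than tested code) is to create the buffer once and map it with WriteDiscard each frame instead of recreating it:

        // Per frame, after updating Matrices:
        DataStream stream;
        DeviceContext.MapSubresource(instanceBuffer, 0,
            MapMode.WriteDiscard,             // discard old contents, avoids a GPU stall
            SharpDX.Direct3D11.MapFlags.None,
            out stream);
        stream.WriteRange(Matrices);          // copy the updated Matrix[] in place
        DeviceContext.UnmapSubresource(instanceBuffer, 0);
        stream.Dispose();

    The VertexBufferBinding and SetVertexBuffers call can then stay exactly as they are, set up once at startup.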

    Read the article

  • Combining two shader programs

    - by Siddharth
    For my Android application, I want to apply brightness and contrast shaders to the same image. At present I am using the gpuimage plugin, in which I found two separate programs for brightness and contrast, as follows.

    Contrast shader:

        varying highp vec2 textureCoordinate;
        uniform sampler2D inputImageTexture;
        uniform lowp float contrast;

        void main()
        {
            lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
            gl_FragColor = vec4(((textureColor.rgb - vec3(0.5)) * contrast + vec3(0.5)), textureColor.w);
        }

    Brightness shader:

        varying highp vec2 textureCoordinate;
        uniform sampler2D inputImageTexture;
        uniform lowp float brightness;

        void main()
        {
            lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
            gl_FragColor = vec4((textureColor.rgb + vec3(brightness)), textureColor.w);
        }

    To apply both effects I wrote the following:

        varying highp vec2 textureCoordinate;
        uniform sampler2D inputImageTexture;
        varying highp vec2 textureCoordinate2;
        uniform sampler2D inputImageTexture2;
        uniform lowp float contrast;
        uniform lowp float brightness;

        void main()
        {
            lowp vec4 textureColorForContrast = texture2D(inputImageTexture, textureCoordinate);
            lowp vec4 contastVec4 = vec4(((textureColorForContrast.rgb - vec3(0.5)) * contrast + vec3(0.5)), textureColorForContrast.w);
            lowp vec4 textureColorForBrightness = texture2D(inputImageTexture2, textureCoordinate2);
            lowp vec4 brightnessVec4 = vec4((textureColorForBrightness.rgb + vec3(brightness)), textureColorForBrightness.w);
            gl_FragColor = contastVec4 + brightnessVec4;
        }

    I wasn't able to get the desired result, and I can't figure out what to do next. So please, friends, help me with this: what program do I have to write?

    Read the article

  • (LibGDX) Move a Vector2 along an angle?

    - by gemurdock
    I have seen several answers on here about moving along an angle, but I can't seem to get this to work properly, and I am new to LibGDX... just trying to learn. These are the Vector2s that I am using for this function:

        public Vector2 position = new Vector2();
        public Vector2 velocity = new Vector2();
        public Vector2 movement = new Vector2();
        public Vector2 direction = new Vector2();

    Here is the function that I use to move the position vector along an angle; setLocation() just sets the new location of the image:

        public void move(float delta, float degrees) {
            position.set(image.getX() + image.getWidth() / 2, image.getY() + image.getHeight() / 2);
            direction.set((float) Math.cos(degrees), (float) Math.sin(degrees)).nor();
            velocity.set(direction).scl(speed);
            movement.set(velocity).scl(delta);
            position.add(movement);
            setLocation(position.x, position.y); // Sets location of image
        }

    I get a lot of different angles with this, just not the correct angles. How should I change this function to move a Vector2 along an angle using the Vector2 class from com.badlogic.gdx.math.Vector2 in the LibGDX library? I found this answer, but I'm not sure how to implement it.

    Update: I figured out part of the issue: I should convert degrees to radians. However, the angle of 0 degrees points towards the right. Is there any way to fix this? I shouldn't have to add 90 to degrees in order to get the correct heading. The new code is below:

        public void move(float delta, float degrees) {
            degrees += 90; // Set degrees to correct heading; shouldn't have to do this
            position.set(image.getX() + image.getWidth() / 2, image.getY() + image.getHeight() / 2);
            direction.set(MathUtils.cos(degrees * MathUtils.degreesToRadians),
                          MathUtils.sin(degrees * MathUtils.degreesToRadians)).nor();
            velocity.set(direction).scl(speed);
            movement.set(velocity).scl(delta);
            position.add(movement);
            setLocation(position.x, position.y);
        }

    Read the article

  • With 2 superposed cameras at different depths, switching their culling masks between layers to implement object-selective antialiasing

    - by user36845
    We superposed two cameras, one of which uses AA as a post-processing effect (AA filtering is cancelled). The camera with the AA effect has depth 0 and the camera with no effect has depth 1, as can be seen in the 5th and 6th pictures. The objects seen on the left are in layer 1 and the ones on the right are in layer 2. We then wrote a script that switches the culling masks of the cameras between the two layers at the push of buttons 1 and 2 respectively, and accomplishes object-selective antialiasing, as seen in the first three pictures. (The way the two cameras separately switch culling masks between layers is illustrated in pictures 7, 8 and 9.)

    HOWEVER, after making the environment 3D (see pictures 1-4) by parenting the 2 cameras under a First-Person Controller, we started moving around in the environment and stumbled upon a big issue: when we look at the objects from an angle such as in the 4th picture and we want to apply antialiasing to the first object (the object on the left), which now stands closer to our cameras, the culling mask of the 1st camera (at depth 0) has to be switched to that object's layer while the second object has to be in the culling mask of the 2nd camera (at depth 1). And since the image outputs of the two superposed cameras are laid on top of one another, we obtain the erroneous/unrealistic result of the object farther in the back appearing closer to the camera than the front object (see the 4th picture).

    We already tried switching the depths of the cameras, so that the 1st camera (with AA) has depth 1 and the second has depth 0, BUT the camera with the AA effect applies the effect to its full view. So the camera with the AA effect always has to remain at the lowest depth, and the layer of the object to be antialiased then has to be assigned to the culling mask of the AA camera; otherwise all objects in the AA camera's view (the two cubes in our case) become antialiased, which we don't want. So, how can we resolve this?

    The pictures are below and in the comments, since each post can have 2 pictures:

    Pic 1. No button is pushed: both objects seem aliased.
    Pic 2. Button 1 is pushed: the left (1st) object is antialiased; the 2nd object remains aliased.
    Pic 3. Button 2 is pushed: the right (2nd) object is antialiased; the 1st object remains aliased.
    Pic 4. The problematic result in 3D, when using two superposed cameras with different depths.
    Pic 5. Camera 1's properties: using AA post-processing, and its depth is 0.
    Pic 6. Camera 2's properties: NOT using AA post-processing, and its depth is 1.
    Pic 7. When no button is pushed, both objects are in the culling mask of Camera 2 and are aliased.
    Pic 8. When 1 is pushed, camera 1 (bottom) shows the 1st object and camera 2 (top) shows the 2nd.
    Pic 9. When 2 is pushed, camera 1 (bottom) shows the 2nd object and camera 2 (top) shows the 1st.

    Read the article

  • How to prevent 2D camera rotation if it would violate the bounds of the camera?

    - by Andrew Price
    I'm working on a Camera class, and I have a rectangle field named Bounds that determines the bounds of the camera. I have it working for zooming and moving so that the camera cannot exit its bounds. However, I'm a bit confused about how to do the same for rotation. Currently I allow rotating the camera's Z-axis; however, if sufficiently zoomed out, rotating the camera can show areas of the screen outside the camera's bounds. I'd like to deny the rotation if the newly rotated camera would expose areas outside the camera's bounds, but I'm not quite sure how. I'm still new to matrix and vector math, and I'm not sure how to test whether the newly rotated camera would see outside of its bounds and, if so, undo the rotation.

    Here's an image showing the problem: http://i.stack.imgur.com/NqprC.png The red is out of bounds, and as a result the camera should never be allowed to rotate itself like this. This seems like it would be a problem with all rotated values, but it is not when the camera is zoomed in enough.

    Here are the current member variables for the Camera class:

        private Vector2 _position = Vector2.Zero;
        private Vector2 _origin = Vector2.Zero;
        private Rectangle? _bounds = Rectangle.Empty;
        private float _rotation = 0.0f;
        private float _zoom = 1.0f;

    Is this possible to do? If so, could someone give me some guidance on how to accomplish it? Thanks.

    EDIT: I forgot to mention that I am using a transformation-matrix style camera whose matrix I pass to SpriteBatch.Begin. I am using the same transformation matrix as in this tutorial.
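
    One way to frame the check (a hedged sketch; the viewport parameter is an assumption, not from the post): transform the four screen corners through the inverse of the candidate view matrix to get their world-space positions, and only commit the new rotation when all four stay inside Bounds.

        // True if the view described by 'transform' (the matrix passed to
        // SpriteBatch.Begin) stays entirely inside 'bounds'.
        private bool ViewInsideBounds(Matrix transform, Rectangle bounds, Viewport viewport)
        {
            Matrix inverse = Matrix.Invert(transform);
            Vector2[] corners =
            {
                Vector2.Transform(Vector2.Zero, inverse),
                Vector2.Transform(new Vector2(viewport.Width, 0), inverse),
                Vector2.Transform(new Vector2(0, viewport.Height), inverse),
                Vector2.Transform(new Vector2(viewport.Width, viewport.Height), inverse),
            };
            foreach (Vector2 c in corners)
                if (!bounds.Contains((int)c.X, (int)c.Y))
                    return false;
            return true;
        }

    The rotation setter can then build the would-be transformation first and simply refuse to store the new angle when the check fails.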

    Read the article

  • Slick2D + LWJGL collision system

    - by Connor W
    So I've been learning Java for a while and have explored Slick and LWJGL before, but went away from using Slick for a while. I've recently gone back to it (as I'm making a platformer, and Tiled will be really helpful). But here's where my problems begin: collision. I have a player polygon, and I check to see if it's colliding with my tiled map with this method:

        public static boolean playerCollisionWith() {
            for (int i = 0; i < Blockmap.entities.size(); i++) {
                Block entity1 = (Block) Blockmap.entities.get(i);
                if (playerPoly.intersects(entity1.poly)) {
                    return true;
                }
            }
            return false;
        }

    This would work normally, but I'm using a different method for movement. Instead of just adding a speed variable to the player's x axis, I move like this:

        if (Keyboard.isKeyDown(Keyboard.KEY_RIGHT)) {
            speedX = Math.min(5, speedX + 1);
            moving = true;
            playerPoly.setX(x);
            if (playerCollisionWith()) {
                speedX = -5;
                playerPoly.setX(x);
            }
        }

    That Math.min call is what is messing me up. I can't just set speedX = -5, because when I do, the player "bounces" when the right key is held down while colliding. Bounces as in flashes back and forth REALLY quickly. I also don't really know how I would make collisions on the y axis work, whether the player is jumping or not. If I could get some help with how to fix this problem, that would be great. Thank you for the help!
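
    The usual cure for the bouncing (shown as a C# sketch of the idea, since it is the same in Slick2D; the fields mirror the question's code and are hypothetical) is to resolve each axis separately: apply the move, and on collision step back out and zero the speed rather than reversing it. The same Y-axis branch also handles jumping.

        // Axis-separated movement and collision resolution.
        void Move(float delta)
        {
            x += speedX * delta;
            playerPoly.SetX(x);
            if (PlayerCollisionWith())
            {
                x -= speedX * delta; // step back out of the wall
                playerPoly.SetX(x);
                speedX = 0;          // stop instead of reversing, so no oscillation
            }

            y += speedY * delta;
            playerPoly.SetY(y);
            if (PlayerCollisionWith())
            {
                y -= speedY * delta;
                playerPoly.SetY(y);
                speedY = 0;          // ends a jump cleanly against floors and ceilings
            }
        }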

    Read the article

  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(
                    targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from my Draw(GameTime gameTime) function: first drawScene is called, then drawPostProcessing. The thing is, when I run this code I get an error on the spriteBatch.Draw call:

        The render target must not be set on the device when it is used as a texture.

    I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't hold a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(outputTarget);
        SpriteBatch.Draw(inputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which encounters the same exact problem I'm having right now. So the question I'm asking is: how would I render inputTarget to outputTarget without reference issues?
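
    For the closing question, the standard pattern (a hedged sketch, not necessarily the article's answer) is to keep two render targets and alternate between them, so the target being sampled is never the one currently bound:

        // Ping-pong between two targets: read from 'src', write to 'dst', then
        // swap the references. Assumes both targets share size and format.
        private void Blit(RenderTarget2D src, RenderTarget2D dst)
        {
            GraphicsDevice.SetRenderTarget(dst); // 'src' is no longer bound, so
            spriteBatch.Begin();                 // sampling it as a texture is legal
            spriteBatch.Draw(src, dst.Bounds, Color.White);
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);
        }

        // Usage per effect pass:
        //   Blit(targetA, targetB);
        //   var tmp = targetA; targetA = targetB; targetB = tmp;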

    Read the article
