Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 543/1169 | < Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >

  • Where can I find free or buy "next-gen" 3D Assets?

    - by Valmond
    Usually I buy 3D assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps, and reflection maps, but I can't find any models that use those techniques. So where can I find or buy "next-gen" assets (at least models/items with a normal map)? I have checked for similar posts, but those I found are about free assets only, 2D, or 'ordinary' 3D, so I hope this is not a duplicate.

    Read the article

  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been frequently used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com. Your answer may or may not include a coding sample; a language-agnostic explanation is considered enough.
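    A minimal sketch of the idea, with illustrative names: treat the wipe as a per-pixel circular mask centered on the focal point, whose radius shrinks to zero over the effect's duration.

      // Iris wipe as a per-pixel mask: everything outside a shrinking
      // circle is painted black. A CPU-side sketch for clarity; the same
      // distance test ports directly to a fragment shader.
      #include <cmath>

      struct Pixel { unsigned char r, g, b; };

      void irisWipe(Pixel* frame, int w, int h,
                    float cx, float cy,   // focal point of the wipe
                    float radius)         // current radius, in pixels
      {
          for (int y = 0; y < h; ++y) {
              for (int x = 0; x < w; ++x) {
                  float dx = x - cx, dy = y - cy;
                  if (std::sqrt(dx * dx + dy * dy) > radius)
                      frame[y * w + x] = Pixel{0, 0, 0}; // outside the iris
              }
          }
      }

    Animating radius from the screen diagonal down to 0 closes the iris; reversing it opens the scene.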

    Read the article

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is how far I have got, but it is totally wrong; any help or resources to fix this issue would be great. I've had a complete mental block with this for some reason.

      private Point ScreenToIso(int mouseX, int mouseY)
      {
          int offsetX = WorldBuilder.STARTX;
          int offsetY = WorldBuilder.STARTY;
          Vector2 startV = new Vector2(offsetX, offsetY);
          int mapX = offsetX - mouseX;
          int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2);
          mapY = -1 * (mapY / WorldBuilder.tileHeight);
          mapX = (mapX / WorldBuilder.tileHeight) + mapY;
          return new Point(mapX, mapY);
      }
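    For reference, a common screen-to-grid conversion for a 2:1 "diamond" isometric map looks like the sketch below. It is not a drop-in fix: the origin and tile dimensions are assumptions that would have to be mapped onto the asker's WorldBuilder values.

      // Screen-to-grid for a 2:1 isometric layout: undo the origin offset,
      // then invert the diamond projection. Assumes tileWidth == 2 * tileHeight.
      struct Point { int x, y; };

      Point screenToIso(int mouseX, int mouseY,
                        int originX, int originY,       // screen position of tile (0,0)
                        int tileWidth, int tileHeight)  // e.g. 64 x 32
      {
          float sx = (float)(mouseX - originX);
          float sy = (float)(mouseY - originY);
          int mapX = (int)((sx / (tileWidth / 2.0f) + sy / (tileHeight / 2.0f)) / 2.0f);
          int mapY = (int)((sy / (tileHeight / 2.0f) - sx / (tileWidth / 2.0f)) / 2.0f);
          return Point{mapX, mapY};
      }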

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it, and I'm not really sure what VAOs really do and how I could employ them.
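    A minimal sketch of typical usage may make the purpose clearer: the VAO records which buffers feed which vertex attributes and how, so a single bind restores all of that state at draw time instead of re-specifying every attribute pointer each frame.

      // Setup: record vertex-input state into a VAO once.
      float vertices[] = { 0.0f, 0.5f, 0.0f,  -0.5f, -0.5f, 0.0f,  0.5f, -0.5f, 0.0f };
      GLuint vao, vbo;
      glGenVertexArrays(1, &vao);
      glGenBuffers(1, &vbo);

      glBindVertexArray(vao);                 // start recording
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
      glEnableVertexAttribArray(0);           // attribute 0: position, 3 floats
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
      glBindVertexArray(0);                   // stop recording

      // Per frame: one bind brings back the whole configuration.
      glBindVertexArray(vao);
      glDrawArrays(GL_TRIANGLES, 0, 3);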

    Read the article

  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I used collision physics (i.e. Box2D, Physics Body Editor) and implemented it in the Java code. I'm trying to make the fall speed higher according to these examples: a light object (e.g. a feather) falls slower; a heavier object (e.g. a pebble, rock, or car) falls faster. I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Check my code, which I put in the input processor's touchUp() method, under the same roof as the class that implements InputProcessor and Screen:

      @Override
      public boolean touchUp(int screenX, int screenY, int pointer, int button) {
          // TODO Touch Up Event
          if(is_Next_Fruit_Touched) {
              BodyEditorLoader Fruit_Loader = new BodyEditorLoader(
                      Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
              Fruit_BD.type = BodyType.DynamicBody;
              Fruit_BD.position.set(x, y);
              FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
              Fruit_FD.density = 1.0f;
              Fruit_FD.friction = 0.7f;
              Fruit_FD.restitution = 0.2f;
              MassData mass = new MassData();
              mass.mass = 5f;
              Fruit_Body[n] = world.createBody(Fruit_BD);
              Fruit_Body[n].setActive(true); // --> Let your dragon fall.
              Fruit_Body[n].setMassData(mass);
              Fruit_Body[n].setGravityScale(1.0f);
              System.out.println("Eggs... " + n);
              Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
              Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
              is_Next_Fruit_Touched = false;
              up = y;
              Gdx.app.log("Initial Y-coordinate", "Y at " + up);
              // Once it's touched, the next fruit will be set to drag.
              if(n < 50) {
                  n++;
              } else {
                  System.exit(0);
              }
          }
          return true;
      }

    And take note: in the show() method, the view size from the camera is 720x1280:

      camera_1 = new OrthographicCamera();
      camera_1.viewportHeight = 1280;
      camera_1.viewportWidth = 720;
      camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
      camera_1.update();

    I know it seems like a good idea to add weight so that the object falls faster once I release my finger in touchUp(), after picking the object from the upper right of the screen, but the speed remains either constant or slow. How can I solve this? Can you help?
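    One physical detail worth noting: in Box2D, gravity accelerates every dynamic body at the same rate regardless of its mass (just as in real free fall), so raising MassData alone will never change the fall speed. The per-body gravity scale is the intended knob. A sketch against Box2D's native C++ API, which libgdx wraps with the same method names:

      // Mass does not affect fall rate in Box2D; gravity scale does.
      #include <box2d/box2d.h>

      void makeFallFaster(b2Body* body)
      {
          body->SetGravityScale(2.0f);   // fall twice as fast as under normal gravity
          // Or give it an immediate downward kick instead:
          // body->SetLinearVelocity(b2Vec2(0.0f, -10.0f));
      }

    In the posted code that would mean replacing setGravityScale(1.0f) with a value greater than one.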

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3, trying to do the following (as it is done in D3D):

    1. Create texture of width, height, pixel format
    2. Map texture memory
    3. Loop: write pixels
    4. Unmap texture memory
    5. Set texture
    6. Render

    Right now, though, it renders as if the entire texture is black. I can't find a reliable source of information on how to do this. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does (in this case it is generating a 1-byte alpha-only texture, but it is rendered through the red channel for debugging):

      GLuint pixelBufferID;
      glGenBuffers(1, &pixelBufferID);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

      GLuint textureID;
      glGenTextures(1, &textureID);
      glBindTexture(GL_TEXTURE_2D, textureID);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
      glBindTexture(GL_TEXTURE_2D, 0);

      glBindTexture(GL_TEXTURE_2D, textureID);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
      // Memory copied here; I know this is valid because it is the same loop
      // as in my working D3D version
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    And then here is the render loop:

      // This chunk left in for completeness
      glUseProgram(glProgramId);
      glBindVertexArray(glVertexArrayId);
      glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId);
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0);
      glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12);
      GLuint transformLocationID = glGetUniformLocation(3, "transform");
      glUniformMatrix4fv(transformLocationID, 1, true, somematrix);

      // Not sure if this is all I need to do
      glBindTexture(GL_TEXTURE_2D, pTex->glTextureId);
      GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture");
      glUniform1i(textureLocationID, 0);

      glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3);

    Vertex shader:

      #version 330 core
      in vec3 Position;
      in vec2 TexCoords;
      out vec2 TexOut;
      uniform mat4 transform;
      void main() {
          TexOut = TexCoords;
          gl_Position = vec4(Position, 1.0) * transform;
      }

    Pixel shader:

      #version 330 core
      uniform sampler2D texture;
      in vec2 TexCoords;
      out vec4 fragColor;
      void main() {
          // Output color
          fragColor.r = texture2D(texture, TexCoords).r;
          fragColor.g = 0.0f;
          fragColor.b = 0.0f;
          fragColor.a = 1.0;
      }
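    Worth noting as an observation on the snippet above: mapping and unmapping the PBO only fills the buffer; nothing shown actually transfers those bytes into the texture. The usual upload path ends with a glTexSubImage2D call while the PBO is still bound, at which point its data argument becomes an offset into the buffer rather than a client pointer. A sketch for the same 512x512 GL_R8 texture:

      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      void *mem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
      // ... write 512 * 512 bytes into mem ...
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

      glBindTexture(GL_TEXTURE_2D, textureID);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                      GL_RED, GL_UNSIGNED_BYTE, (void*)0); // offset 0 into the bound PBO
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);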

    Read the article

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to build a model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered; those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

      class IHealth
      {
      public:
          virtual float get_health( void ) const = 0;
          virtual void do_damage( float amount ) = 0;
      };

      class Health : public Component, public IHealth
      {
      public:
          float get_health( void ) const { return m_health; }
          void do_damage( float amount ) { m_health -= amount; }
      private:
          float m_health;
      };

      class Armor : public Component, public IHealth
      {
      public:
          float get_health( void ) const { return next<IHealth>().get_health(); }
          void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
      };

      entity.add( new Health( 100 ) );
      entity.add( new Armor() );
      assert( entity.get<IHealth>().get_health() == 100 );
      entity.get<IHealth>().do_damage( 10 );
      assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?

    Read the article

  • How do I adjust the origin of rotation for a group of sprites?

    - by Jon
    I am currently grouping sprites together, then applying a rotation transformation on draw:

      private void UpdateMatrix(ref Vector2 origin, float radians)
      {
          Vector3 matrixorigin = new Vector3(origin, 0);
          _rotationMatrix = Matrix.CreateTranslation(-matrixorigin) *
                            Matrix.CreateRotationZ(radians) *
                            Matrix.CreateTranslation(matrixorigin);
      }

    where the origin is the centermost point of my group of sprites. I apply this transformation to each sprite in the group. My problem is that when I adjust the point of origin, my entire sprite group repositions itself on screen. How can I differentiate the point of rotation used in the transformation from the position of the sprite group? Is there a better way of creating this transformation matrix?

    EDIT: Here is the relevant part of the Draw() function:

      Matrix allTransforms = _rotationMatrix * camera.GetTransformation();
      spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null, null, null, allTransforms);
      for (int i = 0; i < _map.AllParts.Count; i++)
      {
          for (int j = 0; j < _map.AllParts[0].Count; j++)
          {
              spriteBatch.Draw(_map.AllParts[i][j].Texture, _map.AllParts[i][j].Position,
                               null, Color.White, 0, _map.AllParts[i][j].Origin,
                               1.0f, SpriteEffects.None, 0f);
          }
      }

    This all works fine; again, the problem is that when a rotation is set and the point of origin is changed, the sprite group's position is offset on screen. I am trying to figure out a way to adjust the point of origin without causing a shift in position.

    EDIT 2: At this point I'm looking for workarounds, as this is not working. Does anyone know of a better way to rotate a group of sprites in XNA? I need a method that will allow me to modify the point of rotation (origin) without affecting the position of the sprite group on screen.
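    For intuition: the translate-rotate-translate sandwich leaves only the pivot itself fixed, so moving the pivot while a rotation is active necessarily moves every other point. A language-agnostic sketch of the underlying math (plain C++, no XNA types):

      // Rotate a point about a pivot: only the pivot stays put, so changing
      // the pivot under a non-zero rotation shifts every other point.
      #include <cmath>

      struct Vec2 { float x, y; };

      Vec2 rotateAboutPivot(Vec2 p, Vec2 pivot, float radians)
      {
          float c = std::cos(radians), s = std::sin(radians);
          float dx = p.x - pivot.x, dy = p.y - pivot.y;  // offset from pivot
          return Vec2{ pivot.x + dx * c - dy * s,        // rotated offset,
                       pivot.y + dx * s + dy * c };      // re-anchored at the pivot
      }

    One workaround consistent with this: when the pivot moves from a to b under rotation R, translate the group by (R - I)(b - a), which exactly cancels the on-screen jump.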

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles. I have had some success with this:

      Vector3 gunToCam = cam.cameraPosition - getWorld.Translation;
      Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam);
      beamRight.Normalize();
      Vector3 beamUp = Vector3.Cross(beamRight, torpDirection);

      shipeffect.beamWorld = Matrix.Identity;
      shipeffect.beamWorld.Forward = (torpDirection) * 1f;
      shipeffect.beamWorld.Right = beamRight;
      shipeffect.beamWorld.Up = beamUp;
      shipeffect.beamWorld.Translation = shipeffect.beamPosition;

    (Note: this logic was not written by me; I just found it rather useful.) It seems to only face the camera at certain angles. For example, if I place the camera behind the plane, it only rolls around the axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? Angles are not my specialty. shipeffect is an object that holds these class variables:

      public Texture2D altBeam;
      public Model beam;
      public Matrix beamWorld;
      public Matrix[] gunTransforms;
      public Vector3 beamPosition;
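    The snippet above is an axial billboard: it locks the quad's forward axis to torpDirection and can only roll around it, which matches the screenshot when the camera lines up with that axis. A fully camera-facing (spherical) billboard instead builds its basis from the camera's own axes. A sketch in C++ with GLM-style math, names illustrative:

      #include <glm/glm.hpp>

      // Screen-aligned billboard: derive the quad's basis from the camera,
      // so it faces the viewer from every angle instead of rolling around
      // a fixed travel direction.
      glm::mat4 billboard(const glm::vec3& objectPos,
                          const glm::vec3& cameraPos,
                          const glm::vec3& cameraUp)
      {
          glm::vec3 look  = glm::normalize(cameraPos - objectPos); // toward the camera
          glm::vec3 right = glm::normalize(glm::cross(cameraUp, look));
          glm::vec3 up    = glm::cross(look, right);

          glm::mat4 m(1.0f);
          m[0] = glm::vec4(right,     0.0f); // column-major basis vectors
          m[1] = glm::vec4(up,        0.0f);
          m[2] = glm::vec4(look,      0.0f);
          m[3] = glm::vec4(objectPos, 1.0f);
          return m;
      }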

    Read the article

  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere, at a high order of tessellation, during a visibility pass onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (render target), so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified, and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096); this helped and more edges became visible, but it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied; the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. Update: fixed the problem by sampling the item buffer with 8 points.

    Read the article

  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

      std::string op, param1, param2;
      // parse line, identify op, param1, param2
      ...
      // call command-specific code
      if(op == "MOV") ExecuteMov(AsNumber(param1));
      else if(op == "ROT") ExecuteRot(AsNumber(param1));
      else if(op == "SZE") ExecuteSze(AsNumber(param1));
      else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
      else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
      else if(op == "SET") ExecuteSet(param1, AsNumber(param2));
      else if(op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
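    A common alternative, sketched under the assumption that the Execute* and AsNumber functions exist as shown: build a hash table from opcode to handler once, so each line costs a single O(1) average lookup instead of a chain of comparisons.

      #include <functional>
      #include <string>
      #include <unordered_map>

      using Handler = std::function<void(const std::string&, const std::string&)>;

      // One entry per opcode; constructed once at startup.
      std::unordered_map<std::string, Handler> dispatch = {
          { "MOV", [](const std::string& p1, const std::string&)    { ExecuteMov(AsNumber(p1)); } },
          { "ROT", [](const std::string& p1, const std::string&)    { ExecuteRot(AsNumber(p1)); } },
          { "POS", [](const std::string& p1, const std::string& p2) { ExecutePos(AsNumber(p1), AsNumber(p2)); } },
          { "SET", [](const std::string& p1, const std::string& p2) { ExecuteSet(p1, AsNumber(p2)); } },
          // ... one entry per remaining opcode
      };

      void execute(const std::string& op, const std::string& p1, const std::string& p2)
      {
          auto it = dispatch.find(op);
          if (it != dispatch.end())
              it->second(p1, p2);   // dispatch without a linear scan
      }

    Since the opcodes are short and fixed, a perfect hash or a switch on a packed integer code are further refinements, but the map already removes the linear chain.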

    Read the article

  • Workaround the flip queue (AKA pre-rendered frames) in OpenGL?

    - by user41500
    It appears that some drivers implement a "flip queue" such that, even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). It is only after this queue is filled that buffer swaps will block to synchronize with vblank. This behavior is detrimental to my application. It creates latency. Does anyone know of a way to disable it or a workaround for dealing with it? The OpenGL Wiki on Swap Interval suggests a call to glFinish after the swap but I've had no such luck with that trick.
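    One mitigation that is sometimes more reliable than glFinish, offered as a sketch (driver behavior still varies): place a fence right after the swap and block on it, which keeps the driver from queuing more than one frame ahead.

      // After presenting, wait until the GPU has actually consumed the frame.
      // SwapBuffers(hdc) is the Windows/WGL call; substitute the equivalent
      // swap for your windowing layer.
      SwapBuffers(hdc);
      GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
      glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000); // up to 1s
      glDeleteSync(fence);

    This trades a little throughput for latency, which sounds like the right trade for the application described.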

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broadphase:

    Unsorted arrays: check every object against every other object: O(n^2) time, O(1) extra space. It's so slow that it's useless if n isn't really small.

      for (i = 1; i < objects; i++) {
          for (j = 0; j < i; j++)
              narrowPhase(i, j);
      }

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D and O(n^1.67) for 3D) and O(n) space. This assumes the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check a bucket, its children, and its parents: O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
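    For comparison, a bare-bones 1D sweep-and-prune (a sketch of the "sorted arrays" family): sort boxes by their minimum x, then only pair up boxes whose x-intervals overlap. The sort is O(n log n), the structure needs only O(n) space, and with temporal coherence the per-frame re-sort is nearly free.

      #include <algorithm>
      #include <vector>

      struct Box { float minX, maxX, minY, maxY; };

      void broadPhase(std::vector<Box>& boxes,
                      void (*narrowPhase)(const Box&, const Box&))
      {
          // Sort on the x-axis; with frame-to-frame coherence an insertion
          // sort amortizes to near O(n).
          std::sort(boxes.begin(), boxes.end(),
                    [](const Box& a, const Box& b) { return a.minX < b.minX; });

          for (size_t i = 0; i < boxes.size(); ++i)
              for (size_t j = i + 1;
                   j < boxes.size() && boxes[j].minX <= boxes[i].maxX;
                   ++j)
                  narrowPhase(boxes[i], boxes[j]);  // x-overlap; refine further
      }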

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the z component maps to blue, and since normals point out of the surface in the z direction, we see blue as the predominant component. If the above is true, then why aren't normal maps just one color, i.e. blue, with no other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal at any point, we should have the normal always pointing in the Z (blue) direction, with no X (red) component and no Y (green) component. Thus the normal map (since it is a "normal map") should only contain the color of such normals, which is pure blue (Z = blue component = 1, R = 0, G = 0), with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
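    A worked example of the encoding may help frame answers: each component of a unit normal is remapped from [-1, 1] to [0, 1] and then to a byte, so even the "flat" normal is not pure blue, and any tilt bleeds into red and green.

      // Standard normal-map encoding: n * 0.5 + 0.5, per channel.
      // A flat tangent-space normal (0, 0, 1) encodes to RGB (128, 128, 255),
      // the familiar lavender blue; a tilted normal such as (0.7, 0, 0.714)
      // encodes to roughly (217, 128, 218), a visibly different shade.
      struct Rgb { unsigned char r, g, b; };

      Rgb encodeNormal(float nx, float ny, float nz)  // unit-length normal
      {
          return Rgb{
              (unsigned char)((nx * 0.5f + 0.5f) * 255.0f),
              (unsigned char)((ny * 0.5f + 0.5f) * 255.0f),
              (unsigned char)((nz * 0.5f + 0.5f) * 255.0f)
          };
      }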

    Read the article

  • How can I downsample a texture using FBOs?

    - by snape
    I am rendering a scene to an FBO as my render target, whose size is 8 times the size of the original screen in OpenGL. Now I want to downsample the texture generated by the FBO to the size of the screen, so as to achieve spatial anti-aliasing. How do I achieve the downsampling? Please provide implementation details. Note: if there is a better way of doing anti-aliasing with FBOs, please mention that too. I am trying to remove the aliasing in the image attached below.
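    One straightforward route, sketched under the assumption of GL 3.0+ where glBlitFramebuffer is core: blit from the oversized FBO to the default framebuffer with linear filtering.

      // Downsample by blitting; the driver filters during the copy.
      glBindFramebuffer(GL_READ_FRAMEBUFFER, bigFbo);      // oversized source FBO
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // default framebuffer
      glBlitFramebuffer(0, 0, bigWidth, bigHeight,         // source rectangle
                        0, 0, screenWidth, screenHeight,   // destination rectangle
                        GL_COLOR_BUFFER_BIT, GL_LINEAR);

    Caveat: a single linear blit across an 8x scale only samples a few texels per output pixel, so quality improves if you blit down in halves. Better still, render to a multisampled FBO and resolve it with the same call; that is the more idiomatic way to anti-alias with FBOs.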

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL), and I'm trying to draw a simple model I've made with a 3D tool and exported to the .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer, and a texture coordinate buffer, but this model also has a "UVIndex", and I'm not sure where I'm supposed to put this UVIndex. My code looks like this:

      GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
      GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

      GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
      GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

      GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
      GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

      GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

      GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
      GL.activeTexture(GL.TEXTURE0);

      GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
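    For background, FBX commonly stores UVs in "IndexToDirect" form: the UV array holds the unique coordinate pairs, and UVIndex holds one entry per polygon vertex pointing into that array. A sketch of flattening the indirection into a plain per-vertex attribute array (shown in C++; the same two-line loop ports directly to JavaScript):

      #include <vector>

      // Expand indexed UVs: out gets one (u, v) pair per polygon vertex.
      std::vector<float> expandUVs(const std::vector<float>& uv,     // packed u,v pairs
                                   const std::vector<int>& uvIndex)  // one per poly-vertex
      {
          std::vector<float> out;
          out.reserve(uvIndex.size() * 2);
          for (int i : uvIndex) {
              out.push_back(uv[2 * i]);      // u
              out.push_back(uv[2 * i + 1]);  // v
          }
          return out;
      }

    The catch is that positions and UVs then disagree on indexing, so vertices shared by faces with different UVs have to be duplicated before a single ELEMENT_ARRAY_BUFFER can drive both attributes.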

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

      #version 430

      layout (location = 0, rgba16f) uniform image3D ping;
      layout (location = 1, rgba16f) uniform image3D pong;

      layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

      void main()
      {
          ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
          imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
      }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

      #version 430

      layout (location = 0, rgba16f) uniform image3D ping;
      layout (location = 1, rgba16f) uniform image3D pong;

      layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

      void main()
      {
          ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
          imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
      }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots. What am I doing wrong here?

    Read the article

  • Models with more than one mesh in JMonkeyEngine

    - by Andrea Tucci
    I'm a new jMonkeyEngine developer and I'm beginning to import models. I tried importing simple models and no problems appeared, but when I export models with more than one mesh in the OgreXML format, Blender saves multiple meshes with their own materials (e.g. one mesh for the face, another for the body, etc.). Can I export all the meshes as one? I've tried joining all the meshes into a major one with Blender (face joins body), but when I export the model and then create the Spatial in jME (loading the path of the "merged" mesh), all the meshes that were joined to the major one don't have their materials! Let me give a clearer example: I have an .obj model with 3 meshes and I export it. I get mesh1.mesh.xml, mesh2.mesh.xml, mesh3.mesh.xml and their materials mesh1.material, mesh2.material, mesh3.material. So I import the folder into Assets/Models/Test, and now I have to create something like:

      Spatial head = assetManager.loadModel( [path] );
      Spatial face = assetManager.loadModel( [path] );

    one for each mesh, and then attach them to a common node. I think there should be a way to merge those meshes while maintaining their materials! What do you think? Thanks

    Read the article

  • Using MVC with a retained mode renderer

    - by David Gouveia
    I am using a retained-mode renderer similar to the display lists in Flash. In other words, I have a scene graph data structure called the Stage, to which I add the graphical primitives I would like to see rendered, such as images, animations, and text. For simplicity I'll refer to them as Sprites. Now I'm implementing an architecture which is becoming very similar to MVC, but I feel that instead of having to create View classes, the sprites already behave pretty much like Views (except for not being explicitly connected to the Model). And since the Model is only changed through the Controller, I could simply update the view together with the Model in the controller, as in the example below:

    Example 1

      class Controller
      {
          Model model;
          Sprite view;

          void TeleportTo(Vector2 position)
          {
              model.Position = view.Position = position;
          }
      }

    The alternative, I think, would be to create View classes that wrap the sprites, make the model observable, and make the view react to changes in the model. This seems like a lot of extra work and boilerplate code, and I'm not seeing the benefits if I'm just going to have one view per controller.

    Example 2

      class Controller
      {
          Model model;
          View view;

          void TeleportTo(Vector2 position)
          {
              model.Position = position;
          }
      }

      class View
      {
          Model model;
          Sprite sprite;

          View()
          {
              model.PropertyChanged += UpdateView;
          }

          void UpdateView()
          {
              sprite.Position = model.Position;
          }
      }

    So, how is MVC, or more specifically the View, usually implemented when using a retained-mode renderer? And is there any reason why I shouldn't stick with Example 1?

    Read the article

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I attempt to update the instance buffer. The buffer is declared as a member variable:

      Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;

    Then I create it (which succeeds):

      D3D11_BUFFER_DESC instanceDesc;
      ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
      instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
      instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
      instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
      instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
      instanceDesc.MiscFlags = 0;
      instanceDesc.StructureByteStride = 0;
      DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));

    However, when I try to map it:

      D3D11_MAPPED_SUBRESOURCE inst;
      DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));

    the Map call fails with E_INVALIDARG. Nothing is NULL incorrectly, and this being one of my first D3D apps, I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.
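    A detail of the D3D11 API that fits these symptoms: resources created with D3D11_USAGE_DYNAMIC can only be mapped with D3D11_MAP_WRITE_DISCARD (or D3D11_MAP_WRITE_NO_OVERWRITE for vertex/index buffers); plain D3D11_MAP_WRITE is for staging resources and yields E_INVALIDARG here. A sketch of the adjusted call:

      D3D11_MAPPED_SUBRESOURCE inst;
      DX::ThrowIfFailed(
          d3dContext->Map(m_instanceBuffer.Get(), 0,
                          D3D11_MAP_WRITE_DISCARD,   // not D3D11_MAP_WRITE
                          0, &inst));
      // memcpy the instance data into inst.pData here ...
      d3dContext->Unmap(m_instanceBuffer.Get(), 0);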

    Read the article

  • Simple project - make a 3D box tumble and fall to the ground [closed]

    - by Dominic Bou-Samra
    Possible Duplicate: Resources to learn programming rigid body simulation

    Hi guys, I want to try learning rigid-body dynamics simulation. I have done a fluid and a cloth simulation before, but never anything rigid. My maths knowledge is limited in that I don't know the notation that well. Are there any good cliff notes, tutorials, or guides on how I would accomplish a simple task like this? I don't want a super-complex PDF that's only a little relevant. Thanks.

    Read the article

  • Libgdx ParallaxScrolling and TiledMaps

    - by kirchhoff
    I implemented parallax scrolling for my side-scroller project, and everything is working except the tiled map (the most important part!). I've tried everything, but it doesn't work (see the code below). I'm using ParallaxCamera from GdxTests, and it's working perfectly for the background layers. I can't explain myself properly in English, so I recorded 2 videos: before parallax scrolling, and after parallax scrolling. As you can see, now the platforms appear in the middle of the Y-axis. I've got a Map class with 2 tiled maps, so I need two renderers too:

      private TiledMapRenderer renderer1;
      private TiledMapRenderer renderer2;

      public void update(GameCamera camera) {
          renderer1.setView(camera.calculateParallaxMatrix(1f, 0f),
                  camera.position.x - camera.viewportWidth / 2,
                  **camera.position.y - camera.viewportHeight/2**,
                  camera.viewportWidth, camera.viewportHeight);
          renderer2.setView(camera.calculateParallaxMatrix(1f, 0f),
                  camera.position.x - camera.viewportWidth / 2,
                  **camera.position.y - camera.viewportHeight/2**,
                  camera.viewportWidth, camera.viewportHeight);
      }

    The arguments marked with asterisks are the ones I think I should change. I've tried changing parameters, even adding hardcoded values, but one of two things happens: 1. Nothing happens. 2. The platforms disappear. Here is some additional code, from the render method:

      world.update(delta);
      parallaxBackground.update(camera);

      clear(0.5f, 0.7f, 1.0f, 1);

      batch.setProjectionMatrix(camera.calculateParallaxMatrix(0, 0));
      batch.disableBlending();
      batch.begin();
      batch.draw(background, -(int)background.getRegionWidth()/2, -(int)background.getRegionHeight()/2);
      batch.end();
      batch.enableBlending();

      parallaxBackground.draw(batch, camera);
      renderer.render(batch);

    Read the article

  • Render full-screen gradient or texture

    - by Filip Skakun
    What's the simplest way to fill the background of the screen with a gradient or a texture in Direct3D 10/11? I'm building a Windows 8 Metro app in which the camera never moves; I render some content in D3D, but I need to fill the background with something other than a solid color. Do I need to figure out the size and position of a rectangle and position it in 3D space, or is there a simpler solution? I don't care about depth at all, and I don't use a depth buffer since all my content is sorted back to front, so I could just start by drawing the background.
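    One common trick in D3D10/11, sketched here with the shader source as a C++ string (names illustrative, compilation and error handling omitted): skip vertex buffers entirely, expand SV_VertexID into a fullscreen triangle in the vertex shader, and compute the gradient in the pixel shader from the interpolated coordinates.

      // HLSL for a bufferless fullscreen triangle plus a vertical gradient.
      const char* shaderSrc = R"hlsl(
      struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

      VSOut VS(uint id : SV_VertexID) {
          VSOut o;
          float2 uv = float2((id << 1) & 2, id & 2);        // (0,0) (2,0) (0,2)
          o.pos = float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
          o.uv = uv;
          return o;
      }

      float4 PS(VSOut i) : SV_Target {
          return lerp(float4(0.1, 0.2, 0.4, 1),             // color at the top
                      float4(0.8, 0.9, 1.0, 1), i.uv.y);    // color at the bottom
      }
      )hlsl";

      // After compiling and binding the shaders, drawing the background is just:
      //   context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
      //   context->Draw(3, 0);   // no vertex or index buffer needed

    Sampling a texture in PS instead of calling lerp gives the textured variant, and drawing this first (with depth writes off) matches the back-to-front ordering described.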

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to the simplest form: I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and the actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
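    The likely gap, as a worked example: the aspect ratio scales the tangent of the half-angle, not the angle itself, so fovx = 2 * atan(tan(fovy / 2) * aspect). For fovy = 45 and aspect = 2 that gives about 79.3 degrees, not 90, and the screen edge at z = 50 then sits at 50 * tan(39.6 degrees), about 41.4, which closely matches the observed (41.1, 0, 0).

      #include <cmath>

      // Horizontal field of view implied by a vertical fov and aspect ratio.
      float fovxDegrees(float fovyDegrees, float aspect)
      {
          const float d2r = 3.14159265f / 180.0f;
          float halfX = std::atan(std::tan(fovyDegrees * 0.5f * d2r) * aspect);
          return 2.0f * halfX / d2r;   // ~79.3 for (45, 2)
      }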

    Read the article
