Search Results

Search found 24037 results on 962 pages for 'game design'.

Page 516/962 | < Previous Page | 512 513 514 515 516 517 518 519 520 521 522 523  | Next Page >

  • Java - Tile engine: changing a number in the array does not change the texture

    - by Corey
    I draw my map from a txt file. Would I have to write to the text file to notice the changes I made? Right now it changes the number in the array but the tile texture doesn't change. Do I have to do more than just change the number in the array?

        public class Tiles {
            public Image[] tiles = new Image[5];
            public int[][] map = new int[64][64];
            private Image grass, dirt, fence, mound;
            private SpriteSheet tileSheet;
            public int tileWidth = 32;
            public int tileHeight = 32;
            Player player = new Player();

            public void init() throws IOException, SlickException {
                tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
                grass = tileSheet.getSprite(0, 0);
                dirt = tileSheet.getSprite(7, 7);
                fence = tileSheet.getSprite(2, 0);
                mound = tileSheet.getSprite(2, 6);
                tiles[0] = grass;
                tiles[1] = dirt;
                tiles[2] = fence;
                tiles[3] = mound;
                int x = 0, y = 0;
                BufferedReader in = new BufferedReader(new FileReader("assets/map.dat"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] values = line.split(",");
                    for (String str : values) {
                        int str_int = Integer.parseInt(str);
                        map[x][y] = str_int;
                        //System.out.print(map[x][y] + " ");
                        y = y + 1;
                    }
                    //System.out.println("");
                    x = x + 1;
                    y = 0;
                }
                in.close();
            }

            public void update(GameContainer gc) {
            }

            public void render(GameContainer gc) {
                for (int x = 0; x < map.length; x++) {
                    for (int y = 0; y < map.length; y++) {
                        int textureIndex = map[y][x];
                        Image texture = tiles[textureIndex];
                        texture.draw(x * tileWidth, y * tileHeight);
                    }
                }
            }

    Mouse picking:

            public void checkDistance(GameContainer gc) {
                Input input = gc.getInput();
                float mouseX = input.getMouseX();
                float mouseY = input.getMouseY();
                double mousetileX = Math.floor((double) mouseX / tiles.tileWidth);
                double mousetileY = Math.floor((double) mouseY / tiles.tileHeight);
                double playertileX = Math.floor(playerX / tiles.tileWidth);
                double playertileY = Math.floor(playerY / tiles.tileHeight);
                double lengthX = Math.abs((float) playertileX - mousetileX);
                double lengthY = Math.abs((float) playertileY - mousetileY);
                double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));
                if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
                    System.out.println("Clicked");
                    if (tiles.map[(int) mousetileX][(int) mousetileY] == 1) {
                        tiles.map[(int) mousetileX][(int) mousetileY] = 0;
                    }
                }
                System.out.println(tiles.map[(int) mousetileX][(int) mousetileY]);
            }
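
    For what it's worth, nothing more than changing the array should be needed, as long as render() re-reads the array every frame and every piece of code agrees on the axis order. In the code above, the loader writes map[x][y] with x as the row (one row per line of map.dat), render() reads map[y][x] with y as the row, but the picking code indexes map[mousetileX][mousetileY] with the column first, so the click may be editing the transposed cell rather than the one under the cursor. A minimal sketch of the idea in C++ (TileMap, tileAt and setTile are illustrative names, not from the post):

        // A sketch of keeping tile indexing consistent; all names are illustrative.
        #include <array>
        #include <cstdio>

        constexpr int MAP_W = 64, MAP_H = 64;

        struct TileMap {
            std::array<std::array<int, MAP_W>, MAP_H> cells{}; // cells[row][col]

            // Single accessor pair: every reader and writer goes through these,
            // so loading, rendering and mouse picking can never disagree on axis order.
            int  tileAt(int col, int row) const { return cells[row][col]; }
            void setTile(int col, int row, int value) { cells[row][col] = value; }
        };

        int main() {
            TileMap map;
            map.setTile(3, 5, 1);                  // picking writes a cell...
            std::printf("%d\n", map.tileAt(3, 5)); // ...rendering reads the same cell: 1
        }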

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. Additionally, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is loaded to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file with their extensions omitted, zipped in a folder that gets encrypted. At first using XML was working fine, however as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be the better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, so that it is easier to operate on. The data is mostly primitives and strings; the only ArrayList of values I have, I can just concat into a single field and parse later.

    Read the article

  • loading 3d model data into buffers

    - by mulletdevil
    I am using assimp to load 3D model data. I have noticed that each loaded model is made up of different meshes. I was wondering: should each mesh have its own vertex/index buffer, or should there just be one for the whole model? From looking through the index data that is loaded, it seems to suggest that I will need a vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX 9. Thank you, Mark
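
    For what it's worth, the index data assimp hands back is local to each aiMesh (every mesh's face indices start again at 0), which is why one vertex/index buffer pair per mesh is the simplest mapping; a single buffer for the whole model also works if you offset each mesh's indices by the running vertex count. A hedged C++ sketch of the per-mesh extraction (the D3D9 buffer creation itself is omitted; MeshData and loadMeshes are illustrative names):

        // A sketch of pulling per-mesh data out of assimp, assuming assimp is installed.
        #include <assimp/Importer.hpp>
        #include <assimp/scene.h>
        #include <assimp/postprocess.h>
        #include <cstdint>
        #include <vector>

        struct MeshData {
            std::vector<float>    positions; // x,y,z per vertex
            std::vector<uint32_t> indices;   // indices are local to this mesh
        };

        std::vector<MeshData> loadMeshes(const char* path) {
            Assimp::Importer importer;
            const aiScene* scene = importer.ReadFile(path, aiProcess_Triangulate);
            std::vector<MeshData> result;
            if (!scene) return result;

            for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
                const aiMesh* mesh = scene->mMeshes[m];
                MeshData data;
                for (unsigned v = 0; v < mesh->mNumVertices; ++v) {
                    data.positions.push_back(mesh->mVertices[v].x);
                    data.positions.push_back(mesh->mVertices[v].y);
                    data.positions.push_back(mesh->mVertices[v].z);
                }
                // Face indices restart at 0 for every mesh, hence one
                // vertex/index buffer pair per mesh is the natural fit.
                for (unsigned f = 0; f < mesh->mNumFaces; ++f)
                    for (unsigned i = 0; i < mesh->mFaces[f].mNumIndices; ++i)
                        data.indices.push_back(mesh->mFaces[f].mIndices[i]);
                result.push_back(data);
            }
            return result;
        }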

    Read the article

  • How to transform gameObjects in an array?

    - by user1764781
    I have an array of available gameObjects in the scene. The array of GOs should be transformed according to floats received through a UDP connection. I know it is simple, but I can't figure out how to accomplish the transformation of an array of GOs according to the unique floats received for each GO; any help will be appreciated. Here is how the transformation floats are read, which might be helpful I guess:

        x = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        y = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        z = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        alpha = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        theta = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        phi = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
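
    Not from the original post, but the per-object routing usually falls straight out of the payload layout: if each GameObject owns six big-endian floats (x, y, z, alpha, theta, phi), that is 24 bytes per object, so object i starts at offset i * 24 and a loop over the GameObject array can hand each one its own slice. A C++ sketch of the decoding step, equivalent to the C# ReadSingleBigEndian above:

        // A sketch of decoding big-endian floats from a UDP payload.
        #include <cstddef>
        #include <cstdint>
        #include <cstring>

        float readSingleBigEndian(const uint8_t* data, size_t offset) {
            uint32_t bits = (uint32_t(data[offset])     << 24) |
                            (uint32_t(data[offset + 1]) << 16) |
                            (uint32_t(data[offset + 2]) <<  8) |
                             uint32_t(data[offset + 3]);
            float value;
            std::memcpy(&value, &bits, sizeof value); // type-pun safely
            return value;
        }

        // Per object: 6 floats * 4 bytes = 24 bytes, so object i reads its six
        // values starting at data + i * 24 and applies them to its own transform.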

    Read the article

  • Android Swipe In Unity 3D World with AR

    - by Christian
    I am working on an AR application using Unity3D and the Vuforia SDK for Android. The way the application works is that when the designated image (a frame marker, in our case) is recognized by the camera, a 3D island is rendered at that spot. Currently I am able to detect when/which objects are touched on the model by raycasting. I am also able to successfully detect a swipe using this code:

        if (Input.touchCount > 0)
        {
            Touch touch = Input.touches[0];
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    couldBeSwipe = true;
                    startPos = touch.position;
                    startTime = Time.time;
                    break;
                case TouchPhase.Moved:
                    if (Mathf.Abs(touch.position.y - startPos.y) > comfortZoneY)
                    {
                        couldBeSwipe = false;
                    }
                    //track points here for raycast if it is swipe
                    break;
                case TouchPhase.Stationary:
                    couldBeSwipe = false;
                    break;
                case TouchPhase.Ended:
                    float swipeTime = Time.time - startTime;
                    float swipeDist = (touch.position - startPos).magnitude;
                    if (couldBeSwipe && (swipeTime < maxSwipeTime) && (swipeDist > minSwipeDist))
                    {
                        // It's a swiiiiiiiiiiiipe!
                        float swipeDirection = Mathf.Sign(touch.position.y - startPos.y);
                        // Do something here in reaction to the swipe.
                        swipeCounter.IncrementCounter();
                    }
                    break;
            }
            touchInfo.SetTouchInfo(Time.time - startTime, (touch.position - startPos).magnitude, Mathf.Abs(touch.position.y - startPos.y));
        }

    Thanks to andeeeee for the logic. But I want to have some interaction in the 3D world based on the swipe on the screen, i.e. if the user swipes over unoccluded enemies, they die. My first thought was to track all the points in the Moved TouchPhase, and then, if it is a swipe, raycast into all those points and kill any enemy that is hit. Is there a better way to do this? What is the best approach? Thanks for the help!

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack, and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
            s_scissorStack.push_back(rect);
            drawStencil(rect, s_scissorStack.size());
        }

        void popScissor()
        {
            // undo what was done in the previous push,
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip)
            pushScissor(m_rect);
        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);
        if (m_clip)
            popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value, it doesn't draw over them. But I can't figure out what's wrong with my code that would cause that to happen.
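
    Not the poster's code, but for comparison, a hedged C++ sketch of the same scissor stack with the increment/decrement state set inside the stamping function, immediately before the rectangle is drawn (drawRect is an assumed helper). One practical check in this situation: the pop must stamp with exactly the same rectangle, transform, and reference level as its matching push; any asymmetry leaves residue in the buffer exactly like the artifact described.

        // Assumes GL_STENCIL_TEST is enabled and the stencil buffer cleared to 0.
        #include <GL/gl.h>
        #include <vector>

        struct Rect { float x, y, w, h; };
        static std::vector<Rect> s_stack;

        static void stampRect(const Rect& r, GLenum op, GLint level) {
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glDepthMask(GL_FALSE);
            glStencilFunc(GL_EQUAL, level, 0xFF); // touch only the current level
            glStencilOp(GL_KEEP, op, op);         // GL_INCR on push, GL_DECR on pop
            // drawRect(r);                       // assumed helper: draws the quad
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glDepthMask(GL_TRUE);
        }

        void pushScissor(const Rect& r) {
            stampRect(r, GL_INCR, (GLint)s_stack.size());
            s_stack.push_back(r);
            // children render only where the stencil equals the new depth
            glStencilFunc(GL_EQUAL, (GLint)s_stack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void popScissor() {
            Rect r = s_stack.back();
            s_stack.pop_back();
            stampRect(r, GL_DECR, (GLint)s_stack.size() + 1); // undo that exact increment
            glStencilFunc(GL_EQUAL, (GLint)s_stack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }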

    Read the article

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my custom OpenGL engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera in my 3D world and at Z=0 there is a plane showing me my tiles. Everything is working correctly, but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show; otherwise, though, they are there 99% of the time, in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE. I've looked through many articles on the internet and nothing helped. I tried UV correction, so the UVs fall at half of the texel rather than at the end of the texel, to prevent interpolating with the neighbour value (which is transparency). I tried debugging my geometry, and I verified that with no texture and a random color in each tile I don't see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset, both to the UVs and to the vertices, to see if the gaps would cease to exist. I disabled multisampling too. Nothing has fixed it so far. Thanks.
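
    One avenue worth double-checking (not from the original post): the half-texel inset only works if it is computed against the full atlas dimensions, and even then minification without mipmaps can still sample the neighbour tile; the robust fix is an atlas whose tiles are padded with a border of duplicated edge pixels. A hedged C++ sketch of the inset computation (UVRect and tileUV are illustrative names):

        // Half-texel UV inset for a tile in an atlas; names are illustrative.
        struct UVRect { float u0, v0, u1, v1; };

        UVRect tileUV(int tileX, int tileY, int tileSize, int atlasW, int atlasH) {
            const float halfU = 0.5f / atlasW; // half a texel, in UV units of the atlas
            const float halfV = 0.5f / atlasH;
            UVRect r;
            r.u0 = (tileX * tileSize) / (float)atlasW + halfU;
            r.v0 = (tileY * tileSize) / (float)atlasH + halfV;
            r.u1 = ((tileX + 1) * tileSize) / (float)atlasW - halfU;
            r.v1 = ((tileY + 1) * tileSize) / (float)atlasH - halfV;
            return r;
        }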

    Read the article

  • What's wrong with this turn-to-face algorithm?

    - by Chan
    I implemented a torpedo object that chases a rotating planet. Specifically, it will turn toward the planet each update. Initially my implementation was:

        void move()
        {
            vector3<float> to_target = target - get_position();
            to_target.normalize();
            position += (to_target * speed);
        }

    which works perfectly for a torpedo that is a solid sphere. Now my torpedo is actually a model, which has a forward vector, so using this method looks odd, because it doesn't actually turn toward the target but jumps toward it. So I revised it a bit to get:

        double get_rotation_angle(vector3<float> u, vector3<float> v) const
        {
            u.normalize();
            v.normalize();
            double cosine_theta = u.dot(v);
            // domain of arccosine is [-1, 1]
            if (cosine_theta > 1) {
                cosine_theta = 1;
            }
            if (cosine_theta < -1) {
                cosine_theta = -1;
            }
            return math3d::to_degree(acos(cosine_theta));
        }

        vector3<float> get_rotation_axis(vector3<float> u, vector3<float> v) const
        {
            u.normalize();
            v.normalize();
            // fix linear case
            if (u == v || u == -v) {
                v[0] += 0.05;
                v[1] += 0.0;
                v[2] += 0.05;
                v.normalize();
            }
            vector3<float> axis = u.cross(v);
            return axis.normal();
        }

        void turn_to_face()
        {
            vector3<float> to_target = (target - position);
            vector3<float> axis = get_rotation_axis(get_forward(), to_target);
            double angle = get_rotation_angle(get_forward(), to_target);
            double distance = math3d::distance(position, target);
            gl_matrix_mode(GL_MODELVIEW);
            gl_push_matrix();
            {
                gl_load_identity();
                gl_translate_f(position.get_x(), position.get_y(), position.get_z());
                gl_rotate_f(angle, axis.get_x(), axis.get_y(), axis.get_z());
                gl_get_float_v(GL_MODELVIEW_MATRIX, OM);
            }
            gl_pop_matrix();
            move();
        }

        void move()
        {
            vector3<float> to_target = target - get_position();
            to_target.normalize();
            position += (get_forward() * speed);
        }

    The logic is simple: I find the rotation axis by cross product and the angle to rotate by dot product, then turn toward the target position each update. Unfortunately, it looks extremely odd, since the rotation happens so fast that it always turns back and forth. The forward vector for the torpedo is from the ModelView matrix, the third column A:

        MODELVIEW MATRIX
        --------------------------------------------------
        R    U    A    T
        --------------------------------------------------
        1    0    0    0
        0    1    0    0
        0    0    1    0
        0    0    0    1
        --------------------------------------------------

    Any suggestion or idea would be greatly appreciated.
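
    Not from the original post, but the classic cure for this back-and-forth oscillation is to cap how many degrees the object may turn in a single update, and to apply that small rotation to its current orientation instead of rebuilding the matrix from identity every frame. A self-contained sketch of the rate-limiting part (the turn rate and frame time are illustrative numbers):

        // Cap the rotation applied per update; repeated updates then converge on
        // the target smoothly instead of snapping the full angle at once.
        #include <algorithm>
        #include <cstdio>

        double stepAngle(double fullAngleDeg, double turnRateDegPerSec, double dt) {
            return std::min(fullAngleDeg, turnRateDegPerSec * dt);
        }

        int main() {
            double remaining = 170.0;          // angle between forward and to_target
            for (int frame = 0; frame < 5; ++frame) {
                double step = stepAngle(remaining, 90.0, 1.0 / 60.0); // 90 deg/s at 60 fps
                remaining -= step;             // rotate by `step` about the cross-product axis
                std::printf("frame %d: %.2f deg left\n", frame, remaining);
            }
        }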

    Read the article

  • D3D11: how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve:

    - Rendering a first pass of objects in my scene, using standard depth comparison.
    - Rendering another pass of objects in the same scene, but with the following rules: a pixel of the 2nd pass always overrides the first pass (no depth compare between them); depth comparison is used between pixels written from the second pass.

    In English: I want depth comparison made inside each pass, but I always want the second-pass pixels to override the first-pass ones. Some things I've thought of:

    - I tried to think about using stencil to solve this, but I couldn't find a way.
    - I know I could render the second pass into a separate target and then composite the result into the first, but I'd like to avoid that.
    - I could use two separate depth buffers, one dedicated to each pass. (I never tried it, but I figure it's possible to switch the depth buffer in a render target "on the fly".)

    Any idea of the best solution? Thanks
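
    One more route, not on the list above, that satisfies both rules with a single depth buffer: clear the depth buffer (not the color target) between the two passes. Every pass-2 pixel then wins against pass 1, while pass-2 pixels still depth-test against each other. A hedged D3D11 sketch; ctx, rtv, dsv and the draw helpers are assumed to come from the usual device setup:

        #include <d3d11.h>

        void renderScene(ID3D11DeviceContext* ctx,
                         ID3D11RenderTargetView* rtv,
                         ID3D11DepthStencilView* dsv) {
            ctx->OMSetRenderTargets(1, &rtv, dsv);

            // Pass 1: normal depth testing among first-pass objects.
            // drawFirstPassObjects(ctx);            // assumed helper

            // Wiping only the depth means pass-2 pixels always overwrite pass 1,
            // while pass 2 still depth-tests against itself.
            ctx->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);

            // Pass 2: depth testing among second-pass objects only.
            // drawSecondPassObjects(ctx);           // assumed helper
        }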

    Read the article

  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first: when my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem? Source code:

        public void Update()
        {
            Position += Velocity;
            Velocity += World.Gravity;
            List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);
            for (int i = 0; i < toCheck.Count; i++)
            {
                SSSPBody body = toCheck[i];
                body.Test.Color = Color.White;
                if (body != this && body.Static)
                {
                    float left = (body.CornerMin.X - CornerMax.X);
                    float right = (body.CornerMax.X - CornerMin.X);
                    float top = (body.CornerMin.Y - CornerMax.Y);
                    float bottom = (body.CornerMax.Y - CornerMin.Y);
                    if (SSSPUtils.AABBIsOverlapping(this, body))
                    {
                        body.Test.Color = Color.Yellow;
                        Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                        Position += overlapVector;
                    }
                    if (SSSPUtils.AABBIsCollidingTop(this, body))
                    {
                        if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                            (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                        {
                            body.Test.Color = Color.Red;
                            Velocity = new Vector2(Velocity.X, 0);
                        }
                    }
                }
            }
        }

        public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
                return true;
            return false;
        }

        public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
        {
            Vector2 result = new Vector2(0, 0);
            if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
                return result;
            if (Math.Abs(mLeft) < mRight)
                result.X = mLeft;
            else
                result.X = mRight;
            if (Math.Abs(mTop) < mBottom)
                result.Y = mTop;
            else
                result.Y = mBottom;
            if (Math.Abs(result.X) < Math.Abs(result.Y))
                result.Y = 0;
            else
                result.X = 0;
            return result;
        }
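
    A common way out (not from the original post): integrate and resolve one axis at a time, so the push-out axis is determined by which axis was just moved, rather than by comparing penetration depths that fast diagonal motion can distort. A hedged C++ sketch with illustrative types:

        // Move and resolve X first, then Y; a fast horizontal jump can then
        // never be stopped vertically. Types and names are illustrative.
        #include <cmath>
        #include <vector>

        struct AABB { float x, y, hw, hh; }; // center plus half extents

        static bool overlaps(const AABB& a, const AABB& b) {
            return std::fabs(a.x - b.x) < a.hw + b.hw &&
                   std::fabs(a.y - b.y) < a.hh + b.hh;
        }

        void step(AABB& body, float velX, float velY, const std::vector<AABB>& solids) {
            body.x += velX;                   // X axis: push out along X only
            for (const AABB& s : solids)
                if (overlaps(body, s))
                    body.x = (velX > 0) ? s.x - s.hw - body.hw
                                        : s.x + s.hw + body.hw;

            body.y += velY;                   // Y axis: push out along Y only
            for (const AABB& s : solids)
                if (overlaps(body, s))
                    body.y = (velY > 0) ? s.y - s.hh - body.hh
                                        : s.y + s.hh + body.hh;
        }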

    Read the article

  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image there is a plane with two triangles, and each of them is illuminated differently (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex:

        normal vector    =  0, 1,  0  (red lines on image)
        tangent vector   =  0, 0, -1  (green lines on image)
        bitangent vector = -1, 0,  0  (blue lines on image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried giving the tangents other values, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace,1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
            EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity.
            vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;

            // You can use dot products instead of building this matrix and transposing it. See References for details.
            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            ));

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Ouput data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main(){
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1,1,1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb;
            vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5);

            // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0);

            // Distance to the light
            float distance = length( LightPosition_worldspace - Position_worldspace );

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;

            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);

            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp( dot( n,l ), 0,1 );

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);

            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l,n);

            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp( dot( E,R ), 0,1 );

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?

    Read the article

  • Why does calling CreateDXGIFactory prevent my program from exiting?

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays open because it keeps debugging. Does CreateDXGIFactory create an extra thread and I'm not closing it? I don't understand. The only thing I would suspect is that in the documentation it says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11. Here is the function that initializes DirectX. I haven't gotten past retrieving the refresh rate because of this problem. I commented everything out to pinpoint the problem. bool CGraphicsManager::InitDirectX(HWND hWnd, int width, int height) { HRESULT result; IDXGIFactory* factory; IDXGIOutput* output; IDXGIAdapter* adapter; DXGI_MODE_DESC* displayModes; DXGI_ADAPTER_DESC adapterDesc; unsigned int modeCount = 0; unsigned int refreshNum = 0; unsigned int refreshDen = 0; //First, we need to get the monitors refresh rater result = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory); //if(FAILED(result)) //{ //MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to create DXGI factory\nError:\n%s"), DXGetErrorDescription(result)); //return false; //} /*//Create a graphics card adapter result = factory->EnumAdapters(0, &adapter); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get graphics adapters\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Get the output result = adapter->EnumOutputs(0, &output); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter output\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Get the modes result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, 0); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get mode count\nError:\n%s"), DXGetErrorDescription(result)); return false; } displayModes = new DXGI_MODE_DESC[modeCount]; result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, displayModes); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get display modes\nError:\n%s"), DXGetErrorDescription(result)); return false; } //Now we need to find one for our screen size for(unsigned int i = 0; i < modeCount; i++) { if(displayModes[i].Width == (unsigned int)width) { if(displayModes[i].Height == (unsigned int)height) { refreshNum = displayModes[i].RefreshRate.Numerator; refreshDen = displayModes[i].RefreshRate.Denominator; break; } } } //Store the video card data result = adapter->GetDesc(&adapterDesc); if(FAILED(result)) { MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter description\nError:\n%s"), DXGetErrorDescription(result)); return false; } m_videoCard = new CVideoCard(); MemoryUtil::CreateGameObject(m_videoCard); m_videoCard->VideoCardMemory = (unsigned int)(adapterDesc.DedicatedVideoMemory); wcstombs_s(0, m_videoCard->VideoCardDescription, 128, adapterDesc.Description, 128);*/ //ReleaseCOM(output); //ReleaseCOM(adapter); ReleaseCOM(factory); //DeletePointerArray(displayModes); return true; } Also, I don't know if this means anything, but this is some of the output log when the function is commented out: //... 
'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msvcr100d.dll', Symbols loaded. 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\uxtheme.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Program Files (x86)\Common Files\microsoft shared\ink\tiptsf.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file The program '[6560] LostRock.exe: Native' has exited with code 0 (0x0). And when it isn't commented out... //... 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\wintrust.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\crypt32.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msasn1.dll', Cannot find or open the PDB file 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll' 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll' 'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll' 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file 'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file The thread 'Win32 Thread' (0xb94) has exited with code 0 (0x0). The program '[8096] LostRock.exe: Native' has exited with code 0 (0x0). //This is called when I click "Stop Debugging" P.S. I know it is CreateDXGIFactory because if I comment it out, the program exits correctly.

    Read the article

  • Clear-edged sprite

    - by Ananth
    I am a newbie to cocos2d. I would like to let the user draw, similar to what a painting brush would do. I am using CCSprite for that. I have almost implemented the velocity, color and opacity factors for that tool, but I couldn't get the sprite to be as clear as it should be. I can only draw as in this image: http://i.imgur.com/KBe0L.png which has blunt edges. But I want harder / clearer outside edges, as in: http://i.stack.imgur.com/GrFlv.png I have no idea how to make it clear-edged. The piece of code I'm using is:

        glEnable(GL_BLEND);
        [brush.texture setAliasTexParameters];
        [brush setBlendFunc:(ccBlendFunc){GL_ONE, GL_ONE_MINUS_SRC_ALPHA}];
        [brush visit];

    I suspect the problem is in the blending mode. I tried some blend modes, but without the expected results. I have been trying this for the past five days and am so confused. Can someone help me sort this out? Thanks in advance.

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of OpenGL code, and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by rotating 180 degrees around Z: is that in the view matrix itself, or something done to the camera before building the view matrix? Any code would be helpful, thanks.
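
    Not from the original question, but concretely the rotation they describe is just a sign flip. With V the view matrix you already build:

        R_z(180^\circ) =
        \begin{pmatrix}
        \cos 180^\circ & -\sin 180^\circ & 0 & 0 \\
        \sin 180^\circ &  \cos 180^\circ & 0 & 0 \\
        0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1
        \end{pmatrix}
        =
        \begin{pmatrix}
        -1 & 0 & 0 & 0 \\
        0 & -1 & 0 & 0 \\
        0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1
        \end{pmatrix},
        \qquad
        V' = R_z(180^\circ)\,V

    Applied on the left, it negates the first two rows of V, flipping x and y so that a y-up right-handed view maps onto the API's top-left origin. (Hedged: whether it goes on the left or the right of V depends on the library's row/column-vector convention, so try both if the scene comes out mirrored.)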

    Read the article

  • how to add a water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, the top 3/4 of the screen. The remaining 1/4 would be a reflection of it, with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach:

    - render the given texture to another texture, called the mirror texture (maybe FBOs can help me?)
    - invert the mirror texture (scale it by -1 along Y)
    - render the mirror texture at height = 3/4 of the screen
    - add some sense of noise to it, or, using a pixel shader and time, set pixel.z = sin(time) to make it wavy

    (Tech: C++/OpenGL/GLSL) Is my approach correct? Is there a better way to do this? Also, can someone please tell me whether using FrameBuffer Objects would be the right thing here? Thanks
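
    The FBO route is the standard one for the first step. A hedged C++ sketch of the render-target setup (GL 3-style calls, assuming a GLEW-style loader; makeReflectionTarget is an illustrative name, and error checking is omitted):

        #include <GL/glew.h>

        GLuint makeReflectionTarget(int w, int h, GLuint* outTex) {
            GLuint fbo, tex;
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);

            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            *outTex = tex;
            return fbo;
        }

        // Draw the image into this FBO, then draw a Y-flipped quad in the lower
        // quarter of the screen sampling the texture, offsetting the UV lookup by
        // sin(time + y * frequency) in the fragment shader to get the waves.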

    Read the article

  • Errors happen when using World.destroyBody( Body body )

    - by minami
    On an Android application using libgdx, when I use the World.destroyBody( Body body ) method, once in a while the application suddenly shuts down. Is there some setting I need to apply to body collisions or Box2DDebugRenderer before I destroy bodies? Below is the source I use for destroying bodies:

        private void deleteUnusedObject() {
            for (Iterator<Body> iter = mWorld.getBodies(); iter.hasNext(); ) {
                Body body = iter.next();
                if (body.getUserData() != null) {
                    Box2DUserData data = (Box2DUserData) body.getUserData();
                    if (!data.getActFlag()) {
                        if (body != null) {
                            mWorld.destroyBody(body);
                        }
                    }
                }
            }
        }

    Thanks
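
    The usual cause of these crashes (hedged, since the post doesn't show the call site): Box2D, which libgdx wraps, does not allow destroying a body while the world is locked mid-step, and mutating the body list while iterating over it is equally unsafe. The standard pattern is to queue doomed bodies and destroy them after the step; a C++ Box2D sketch, structurally identical in Java:

        // Include path varies by Box2D version; v2.4+ uses <box2d/box2d.h>.
        #include <box2d/box2d.h>
        #include <vector>

        static std::vector<b2Body*> s_toDestroy;

        // Called from collision callbacks or game logic: never destroy here.
        // (A real implementation should also guard against duplicate entries.)
        void markForDestruction(b2Body* body) {
            s_toDestroy.push_back(body);
        }

        void update(b2World& world, float dt) {
            world.Step(dt, 8, 3);             // the world is locked during this call

            for (b2Body* b : s_toDestroy)     // safe: the step has finished, and we
                world.DestroyBody(b);         // iterate our own list, not the world's
            s_toDestroy.clear();
        }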

    Read the article

  • how to make HLSL effect just for lighning without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but with the default effect that XNA creates you have to do texture mapping or the model appears red, because of these lines of code in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1,0,0,1);
            return output;
        }

    If I want to see my model (appearing like it does when I use BasicEffect) I must do texture mapping with UV coordinates, but my model does not have UV coordinates assigned, or its UV coordinates were not exported, and if I do texture mapping I get an error. (I do texture mapping with this line of code in VertexShaderFunction, plus the other necessary code: output.UV = input.UV.) I have many of these models and want to work with them (my models are in .FBX format). When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, not do texture mapping (because I have no UV coordinates in my models), and have my model look like it does when I use BasicEffect? If you need my complete code, here it is: http://www.mediafire.com/?4jexhd4ulm2icm2 Here is the inside of my model using BasicEffect: http://i.imgur.com/ygP2h.jpg?1 And this is my code for drawing with or without BasicEffect, inside my draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
            Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
            Matrix.CreateTranslation(Position);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;
                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }
            mesh.Draw();
        }

    setEffectParameter is another method that sets an effect parameter when I use my custom effect.

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process, and put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would:

    1. Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
    2. Draw all the objects to a render target in memory.
    3. Get the color from the target and use it to look up the selected object.

    What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?

    Read the article

  • What is the correct and most efficient approach to streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS):

    Initialization:
    1. Create two VBOs and one index buffer (since I will use the same indices), same size.
    2. Create two VAOs and configure them, both bound to the same index buffer.

    Each frame:
    1. Choose a VBO/VAO couple (different from the previous frame, so I'm alternating).
    2. Bind that VBO.
    3. Upload new data using glBufferSubData(GL_ARRAY_BUFFER, ...).
    4. Bind the VAO.
    5. Render my stuff using glDrawElements(GL_***, ...).
    6. Unbind the VAO.

    However, someone told me to avoid uploading data (step 3) and then immediately rendering the new data (step 5). I should avoid this because the glDrawElements call will stall until the buffer is effectively uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame, and uploading in the current frame what will be drawn in the next frame. Thus, everything is rendered with a delay of one frame. Is this true, or am I using the right approach to work with streaming vertex data? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
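
    Both approaches are legitimate; the stall the commenter describes happens when a buffer still referenced by in-flight draws is overwritten. Besides double buffering, the other common streaming idiom is buffer orphaning: re-specifying the buffer store right before the sub-data upload lets the driver hand back fresh memory instead of blocking on the pending draw. A hedged sketch (the iOS include path is an assumption; use <GLES2/gl2.h> elsewhere):

        #include <OpenGLES/ES2/gl.h>

        void streamVertices(GLuint vbo, const void* data, GLsizeiptr size) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW); // orphan the old store
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);           // fill the new one
        }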

    Read the article

  • Light shaped like a line

    - by Michael
    I am trying to figure out how line-shaped lights fit into the standard point light/spotlight/directional light scheme. The way I see it, there are two options:

    - Seed the line with regular point lights and just deal with the artifacts. Easy, but seems wasteful.
    - Make the line some kind of emissive material and apply a bloom effect. Sounds like it could work, but I can't test it in my engine yet.

    Is there a standard way to do this? Or for non-point lights in general?

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be PNG, JPEG, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.
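
    Since the question explicitly prefers a portable C/C++ route, a hedged sketch using the public-domain stb_image library (an external single-header dependency, not an Android API): it decodes PNG/JPEG from a memory buffer and can be forced to return exactly one 8-bit channel, so nothing gets widened to 5_6_5 behind your back:

        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h" // assumption: stb_image.h is vendored into the project

        unsigned char* decodeGray8(const unsigned char* raw, int rawLen,
                                   int* w, int* h) {
            int channelsInFile = 0;
            // Last argument = desired channels; 1 keeps the data at 8 bpp,
            // ready for glTexImage2D(..., GL_ALPHA, ...).
            return stbi_load_from_memory(raw, rawLen, w, h, &channelsInFile, 1);
        }

        // Free the returned buffer with stbi_image_free() after uploading it.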

    Read the article

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall will appear stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble is in the matrix class? Also, the cube follows the mouse, but it's only vertically aligned with it, and it runs ahead of the mouse when going left or right from the center of the screen. Perspective function call:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15
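
    For reference (not from the original post), the standard OpenGL-style perspective matrix with v = cot(fov/2), a = aspect, and n/f = near/far, written row-major for row-times-column multiplication, is:

        P =
        \begin{pmatrix}
        v/a & 0 & 0 & 0 \\
        0 & v & 0 & 0 \\
        0 & 0 & \frac{f+n}{n-f} & \frac{2fn}{n-f} \\
        0 & 0 & -1 & 0
        \end{pmatrix}

    The code above stores -1 at _AP_MCD and 2fn/(n-f) at _AP_MDC, i.e. the transpose of the matrix written here, which is only correct if the matrix class multiplies column vectors with column-major storage (or row vectors with row-major storage); a mismatch in that convention produces exactly this kind of skewed, mouse-offset behaviour. Note also that some stretching of cubes near the edges of a very wide frustum is inherent to perspective projection, so part of the effect may be normal.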

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    - Broad phase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    - Narrow phase: determines whether pairs collide, and sometimes how; no false positives.
    - Resolution: resolves the collision.

    The question I'm asking is about the narrow phase. There are multiple algorithms, differing in complexity and accuracy:

    - Hitbox intersection: an a-posteriori algorithm that has the lowest complexity, but also isn't too accurate.
    - Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate in regards to time, higher complexity.
    - Separating axis theorem: used more often, accurate for triangles; however it is a-posteriori, as it can't find the edge; when taking the last frame into account, it's more stable.
    - Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity.
    - Spline interpolation: a-priori, even more accurate than linear rays, even more complexity.

    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, and when splines, and is there anything better?

    Read the article

  • Tile map collision is not working properly

    - by Sigh-AniDe
    I am having problems setting up collision between my sprite and the tiles. I have only written the collision code for moving upwards, but in some places on the map the sprite moves up and in some places it doesn't. Here is what I have so far:

        Vector2 position;
        private static float scalingFactor = 32;
        static int tileWidth = 32;
        static int tileHeight = 32;

        int[ , ] map = {
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
        };

        // This is in turtle.update
        if ( keyboardState.IsKeyDown( Keys.Up ) )
        {
            if ( position.Y > screenHeight / 4 )
            {
                //// current position of the turtle on the tiles
                int mapX = ( int )( position.X / scalingFactor );
                int mapY = ( int )( position.Y / scalingFactor ) - 1;
                if ( isMovable( mapX , mapY , map ) )
                {
                    position.Y = position.Y - scalingFactor;
                }
            }
            else
            {
                MoveUp();
            }
        }

        private void MoveUp()
        {
            motion.Y = -1;
        }

        public bool isMovable( int mapX , int mapY , int[ , ] map )
        {
            if ( mapX < 0 || mapX > 19 || mapY < 0 || mapY > 20 )
            {
                return false;
            }
            int tile = map[ mapX , mapY ];
            if ( tile == 0 )
            {
                return false;
            }
            return true;
        }

        protected override void Update( GameTime gameTime )
        {
            turtle.Update( screenHeight , scalingFactor , map );
            base.Update( gameTime );
        }

    Read the article

  • iOS : Creating a 3D Compass

    - by Md. Abdul Munim
    Originally posted here: iOS : Creating a 3D Compass Hi everybody, I'm quite new to this forum. I posted the same question on Stack Overflow, and some people there suggested moving it here so that I could get quick help from more specialists in this area. So what's the big matter? Actually, I want to make a 3D metal compass in iOS which will have a movable cover. That is, when you touch it with 3 fingers and move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it gets opened. Once you pull it down with 3 fingers again, it gets closed. I cannot attach an image here, as I don't have that much reputation, so I request that you check the original question at Stack Overflow that I linked at the top. Is it possible using Core Animation and CALayers, or would I have to use OpenGL ES? Please, someone help me out; I am badly in need of it, and I need to complete it asap!

    Read the article
