Search Results

Search found 39156 results on 1567 pages for 'device driver development'.


  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now Im trying to do the same in directx. I'm using the same math library and all and I'm pretty much using the alghorithm straight off. I am using right-handed, column major matrices from GLM. The light is looking (-1, -1, -1). The problem I have is twofolds; For some reason, the ground floor is causing alot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this when I disabled the ground for the depth pass, but thats a hack more than anything else The shadows are inverted compared to the shadowmap. If you squint you can see the chairs shadows should be mirrored instead. This is the first cascade shadow map, in range of the alien and the chair: I can't figure out why this is. This is the depth pass: for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++) { mShadowmap.BindDepthView(context, cascadeIndex); CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix); lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir); mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]); lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex]; farDistArr[cascadeIndex] = -farDistArr[cascadeIndex]; } CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix) { CameraFrustrum ret = { Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; } And this is the pixel shader used for the lighting stage: #define DEPTH_BIAS 0.0005 #define NUM_CASCADES 4 cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL) { float4x4 gSplitVPMatrices[NUM_CASCADES]; float4x4 gCameraViewMatrix; float4 gSplitDistances; float4 gLightColor; float4 gLightDirection; }; Texture2D 
gPositionTexture : register(TEXTURE_REGISTER_POSITION); Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE); Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL); Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH); SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH); float4 ps_main(float4 position : SV_Position) : SV_Target0 { float4 worldPos = gPositionTexture[uint2(position.xy)]; float4 diffuse = gDiffuseTexture[uint2(position.xy)]; float4 normal = gNormalTexture[uint2(position.xy)]; float4 camPos = mul(gCameraViewMatrix, worldPos); uint index = 3; if (camPos.z > gSplitDistances.x) index = 0; else if (camPos.z > gSplitDistances.y) index = 1; else if (camPos.z > gSplitDistances.z) index = 2; float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos); float viewDepth = projCoords.z - DEPTH_BIAS; projCoords.z = float(index); float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth); float angleNormal = clamp(dot(normal, gLightDirection), 0, 1); return visibilty * diffuse * angleNormal * gLightColor; } As you can see I am using depth bias and a bias matrix. Any hints on why this behaves so wierdly?
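
    One thing worth checking when porting cascaded shadow maps from OpenGL to Direct3D (a hedged guess, not a confirmed diagnosis): GL clip-space z runs from -1 to 1 while D3D's runs from 0 to 1, and the shadow-map V axis is flipped, so a GL-style gBiasMatrix can both shift the comparison depth (producing large false-shadow regions like the floor) and mirror the lookup. A minimal GLM-style sketch of a D3D-oriented bias matrix, assuming your OrthographicMatrix already emits [0,1] clip z; the function name is illustrative:

        #include <glm/glm.hpp>

        // Hypothetical helper: remap clip space to shadow-map texture space for D3D.
        // x: [-1,1] -> [0,1]; y: [-1,1] -> [1,0] (V points down in D3D); z is left
        // untouched because D3D clip z is already [0,1]. GLM is column-major: m[col][row].
        glm::mat4 MakeD3DShadowBiasMatrix()
        {
            glm::mat4 bias(1.0f);
            bias[0][0] =  0.5f;
            bias[1][1] = -0.5f;
            bias[3][0] =  0.5f;   // translate x
            bias[3][1] =  0.5f;   // translate y
            return bias;
        }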


  • Making XNA Play Nice With 3DS Max, Bounding Spheres

    - by Jason R. Mick
    I'm using 3DS Max 2010 with the KW x-porter plugin, which outputs a .X file (I just downloaded the very latest version). I've been getting some odd results: http://www.picvalley.net/u/2930/2265240220441812321333990933PAStFeSONWQslOrMQC5q.PNG It looks like the culling is screwed up. Note that models I make in Milkshape don't seem to have these problems. I've also tried exporting an FBX file from 3DS Max 2010 and have been getting similar results. What are your suggestions for exporting *.3DS models to a workable XNA form? What tools do you use? To be clear, the model in question has none of these defects when viewed from similar angles in 3DS Max 2010. http://www.picvalley.net/u/2563/151728957814855401111333991302mSvEJ03Zv22GwHFgIhiV.PNG Any ideas on this oddity would also be appreciated! Edit 1 -- Additional issue: I forgot to mention that the model otherwise seems alright, but rotation seems to double -- in other words, when I scroll my camera view left to right, the model (whose draw call I give the camera's view and perspective matrices via BasicEffect) seems to rotate twice as much as models I draw natively in XNA.


  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an openGL text rendering class using FreeType. I was originally following this code but it doesn't seem to work out for me. I get nothing reguardless of what parameters I put for Draw(). class Font { public: Font() { if (FT_Init_FreeType(&ftLibrary)) { printf("Could not initialize FreeType library\n"); return; } glGenBuffers(1,&iVerts); } bool Load(std::string sFont, unsigned int Size = 12.0f) { if (FT_New_Face(ftLibrary,sFont.c_str(),0,&ftFace)) { printf("Could not open font: %s\n",sFont.c_str()); return true; } iSize = Size; FT_Set_Pixel_Sizes(ftFace,0,(int)iSize); FT_GlyphSlot gGlyph = ftFace->glyph; //Generating the texture atlas. //Rather than some amazing rectangular packing method, I'm just going //to have one long strip of letters with the height being that of the font size. int width = 0; int height = 0; for (int i = 32; i < 128; i++) { if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) { printf("Error rendering letter %c for font %s.\n",i,sFont.c_str()); } width += gGlyph->bitmap.width; height += std::max(height,gGlyph->bitmap.rows); } //Generate the openGL texture glActiveTexture(GL_TEXTURE0); //if I texture exists then delete it. iTexture ? glDeleteBuffers(1,&iTexture):0; glGenTextures(1,&iTexture); glBindTexture(GL_TEXTURE_2D,iTexture); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glTexImage2D(GL_TEXTURE_2D,0,GL_ALPHA,width,height,0,GL_ALPHA,GL_UNSIGNED_BYTE,0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); //load the glyphs and set the glyph data int x = 0; for (int i = 32; i < 128; i++) { if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) { //if it cant load the character continue; } //load the glyph map into the texture glTexSubImage2D(GL_TEXTURE_2D,0,x,0, gGlyph->bitmap.width, gGlyph->bitmap.rows, GL_ALPHA, GL_UNSIGNED_BYTE, gGlyph->bitmap.buffer); //move the "pen" down the strip x += gGlyph->bitmap.width; chars[i].ax = (float)(gGlyph->advance.x >> 6); chars[i].ay = (float)(gGlyph->advance.y >> 6); chars[i].bw = (float)gGlyph->bitmap.width; chars[i].bh = (float)gGlyph->bitmap.rows; chars[i].bl = (float)gGlyph->bitmap_left; chars[i].bt = (float)gGlyph->bitmap_top; chars[i].tx = (float)x/width; } printf("Loaded font: %s\n",sFont.c_str()); return true; } void Draw(std::string sString,Vector2f vPos = Vector2f(0,0),Vector2f vScale = Vector2f(1,1)) { struct pPoint { pPoint() { x = y = s = t = 0; } pPoint(float a,float b,float c,float d) { x = a; y = b; s = c; t = d; } float x,y; float s,t; }; pPoint* cCoordinates = new pPoint[6*sString.length()]; int n = 0; for (const char *p = sString.c_str(); *p; p++) { float x2 = vPos.x() + chars[*p].bl * vScale.x(); float y2 = -vPos.y() - chars[*p].bt * vScale.y(); float w = chars[*p].bw * vScale.x(); float h = chars[*p].bh * vScale.y(); float x = vPos.x() + chars[*p].ax * vScale.x(); float y = vPos.y() + chars[*p].ay * vScale.y(); //skip characters with no pixels //still advances though if (!w || !h) { continue; } //triangle one cCoordinates[n++] = pPoint( x2 , -y2 , chars[*p].tx , 0); cCoordinates[n++] = pPoint( x2+w , -y2 , chars[*p].tx + chars[*p].bw / w , 0); cCoordinates[n++] = pPoint( x2 , -y2-h , chars[*p].tx , chars[*p].bh / h); cCoordinates[n++] = pPoint( x2+w , -y2 , chars[*p].tx + chars[*p].bw / w , 0); cCoordinates[n++] = pPoint( x2 , -y2-h , chars[*p].tx , chars[*p].bh / h); cCoordinates[n++] = pPoint( x2+w 
, -y2-h , chars[*p].tx + chars[*p].bw / w , chars[*p].bh / h); } glBindBuffer(GL_ARRAY_BUFFER,iVerts); glBindBuffer(GL_TEXTURE_2D,iTexture); //Vertices glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].x); //TexCoord 0 glClientActiveTexture(GL_TEXTURE0); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glTexCoordPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].s); glCullFace(GL_NONE); glBufferData(GL_ARRAY_BUFFER,6*sString.length(),cCoordinates,GL_DYNAMIC_DRAW); glDrawArrays(GL_TRIANGLES,0,n); glCullFace(GL_BACK); glBindBuffer(GL_ARRAY_BUFFER,0); glBindBuffer(GL_TEXTURE_2D,0); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } ~Font() { glDeleteBuffers(1,&iVerts); glDeleteBuffers(1,&iTexture); } private: unsigned int iSize; //openGL texture atlas unsigned int iTexture; //openGL geometry buffer; unsigned int iVerts; FT_Library ftLibrary; FT_Face ftFace; struct Character { float ax,ay;//Advance float bw,bh;//bitmap size float bl,bt;//bitmap left and top float tx; } chars[128]; };
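
    A few OpenGL details in the Draw() above are worth double-checking. A minimal sketch of the upload/draw sequence under the same assumptions (same pPoint layout and member names as above, with n counting the vertices actually written): glBufferData needs the size in bytes, the gl*Pointer arguments become byte offsets once a VBO is bound, textures are bound with glBindTexture rather than glBindBuffer, and face culling is toggled with glEnable/glDisable (GL_NONE is not a valid argument to glCullFace):

        glBindBuffer(GL_ARRAY_BUFFER, iVerts);
        glBufferData(GL_ARRAY_BUFFER, n * sizeof(pPoint), cCoordinates, GL_DYNAMIC_DRAW);

        glBindTexture(GL_TEXTURE_2D, iTexture);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(pPoint), (const void*)0);                      // x,y at offset 0
        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, sizeof(pPoint), (const void*)(2 * sizeof(float)));  // s,t after x,y

        glDisable(GL_CULL_FACE);
        glDrawArrays(GL_TRIANGLES, 0, n);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);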


  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise culled objects that are supposed to represent hollow objects and display their inside faces only. The image below shows a CounterClockwise culled object (left) and a Clockwise culled object (right). The Clockwise culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces of Clockwise culled objects while continuing to light the outer CounterClockwise faces as normal? My lighting method is below: private void DeferredLighting(GameTime gameTime) { // Set the render target for the lights game.GraphicsDevice.SetRenderTarget(lightMap); // Clear the render target to (0, 0, 0, 0) game.GraphicsDevice.Clear(Color.Transparent); // Set the render states game.GraphicsDevice.BlendState = BlendState.Additive; game.GraphicsDevice.DepthStencilState = DepthStencilState.None; game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; // Set sampler state to Point as the Surface type requires it in XNA 4.0 game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp; // Set the camera properties for all lights BaseLight.SetCameraProperties(game.ActiveCamera); // Draw the lights int numLights = lights.Count; for (int i = 0; i < numLights; ++i) { if (lights[i].Diffuse.W > 0f) { lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap); } } // Resolve the render target game.GraphicsDevice.SetRenderTarget(null); } I have tried adjusting the render states, but no combination works for both objects.


  • which platform to choose for designing a game

    - by Pramod
    I am new to game development and don't have any experience in gaming either. I want to develop a small shooting game, but I have no idea where to start or which platform to use. I have some experience in Java and .NET. Can anyone help me get started? I don't mind even if this question is voted down or closed, but please do help me. I've tried searching other similar questions, but everyone there is already into gaming and I can't follow any of the terminology. Please refer me to some books or tutorials.


  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors, which are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps... As I understand it, for normal maps the normal vectors are stored in each texel of a normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?


  • Rotate around the centre of the screen

    - by Dan Scott
    I want my camera to rotate around the centre of the screen and I'm not sure how to achieve that. I have a rotation in the camera, but I'm not sure what it's rotating around. (I think it might be rotating around the camera's position.X, but I'm not sure.) If you look at these two images: http://imgur.com/E9qoAM7,5qzyhGD#0 http://imgur.com/E9qoAM7,5qzyhGD#1 The first one shows how the camera looks normally, and the second shows how I want the level to look when I rotate the camera 90 degrees left or right. My camera: public class Camera { private Matrix transform; public Matrix Transform { get { return transform; } } private Vector2 position; public Vector2 Position { get { return position; } set { position = value; } } private float rotation; public float Rotation { get { return rotation; } set { rotation = value; } } private Viewport viewPort; public Camera(Viewport newView) { viewPort = newView; } public void Update(Player player) { position.X = player.PlayerPos.X + (player.PlayerRect.Width / 2) - viewPort.Width / 4; if (position.X < 0) position.X = 0; transform = Matrix.CreateTranslation(new Vector3(-position, 0)) * Matrix.CreateRotationZ(Rotation); if (Keyboard.GetState().IsKeyDown(Keys.D)) { rotation += 0.01f; } if (Keyboard.GetState().IsKeyDown(Keys.A)) { rotation -= 0.01f; } } } (I'm assuming you would need to rotate around the centre of the screen to achieve this.)
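
    For what it's worth, the usual way to pivot a 2D camera about the middle of the screen is to translate by the negated camera position, rotate, then translate back out by half the screen size. The idea is sketched below with GLM/C++ purely for illustration (the code in the question is XNA, where the same composition is built from Matrix.CreateTranslation and Matrix.CreateRotationZ); all names are placeholders:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Builds a view transform that spins the world around the screen centre.
        glm::mat4 MakeCameraTransform(glm::vec2 cameraPos, float rotationRadians,
                                      glm::vec2 screenSize)
        {
            glm::mat4 m(1.0f);
            m = glm::translate(m, glm::vec3(screenSize * 0.5f, 0.0f));   // move pivot back to screen centre
            m = glm::rotate(m, rotationRadians, glm::vec3(0, 0, 1));     // spin around that pivot
            m = glm::translate(m, glm::vec3(-cameraPos, 0.0f));          // world -> camera space
            return m;
        }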


  • How to integrate the .gdf with a specific exe for Games Explorer

    - by Kraemer
    Hello, I want to create an installer for a game and, after that, an icon to be put in Games Explorer for Windows Vista and Windows 7. I have created the GDF (game definition file), then built the script for the project and obtained the .h, GDF and .rc files. But I can't compile the .rc file into an executable using Visual Studio 2010, to be used afterwards to create the installer. An error pops up after I set the executable path: "Could not load file or assembly'Microsoft.VisualStudio.HpcDebugger.Impl, Version 10.0.0.0, Culture=neutral, PublickKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong? I should mention that I've never worked with the GDF Editor or Visual Studio before. Any answer would be highly appreciated. Thanks!


  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and then the black dot tries to avoid it and get to the green target. I'm using a very simple avoidance method: if the black dot is close to the red obstacle, it changes direction and moves for a while, then it moves towards the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: Smoothness is not the main point; the main point is to avoid the red blocking "wall" without crashing into it. How could I implement a path-finding algorithm if I basically just have 3 points? (And would it make things much more complicated if you could place multiple obstacles?)
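
    One low-tech option, offered as a sketch rather than a full path-finder: steer toward the target while adding a repulsive push away from the obstacle, scaled by proximity. With only a few points this gives a smooth curve around the block; with many obstacles a grid plus A* or a navigation mesh becomes the better tool. Names and weights below are illustrative:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Returns this frame's velocity: seek the target, but push away from the
        // obstacle when inside avoidRadius.
        Vec2 Steer(Vec2 pos, Vec2 target, Vec2 obstacle, float avoidRadius, float speed)
        {
            Vec2 dir  { target.x - pos.x,   target.y - pos.y };
            Vec2 away { pos.x - obstacle.x, pos.y - obstacle.y };

            float dist = std::sqrt(away.x * away.x + away.y * away.y);
            if (dist < avoidRadius && dist > 0.0f)
            {
                float push = (avoidRadius - dist) / avoidRadius;   // 0..1, stronger when closer
                dir.x += (away.x / dist) * push * 4.0f;            // 4.0f is an arbitrary weight
                dir.y += (away.y / dist) * push * 4.0f;
            }

            float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
            if (len > 0.0f) { dir.x = dir.x / len * speed; dir.y = dir.y / len * speed; }
            return dir;
        }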


  • Problem displaying tiles using tiled map loader with SFML

    - by user1905192
    I've been searching fruitlessly for what I did wrong for the past couple of days and I was wondering if anyone here could help me. My program loads my tile map, but then crashes with an assertion error. The program breaks at this line: spacing = atoi(tilesetElement-Attribute("spacing")); Here's my main game.cpp file. #include "stdafx.h" #include "Game.h" #include "Ball.h" #include "level.h" using namespace std; Game::Game() { gameState=NotStarted; ball.setPosition(500,500); level.LoadFromFile("meow.tmx"); } void Game::Start() { if (gameState==NotStarted) { window.create(sf::VideoMode(1024,768,320),"game"); view.reset(sf::FloatRect(0,0,1000,1000));//ball drawn at 500,500 level.SetDrawingBounds(sf::FloatRect(view.getCenter().x-view.getSize().x/2,view.getCenter().y-view.getSize().y/2,view.getSize().x, view.getSize().y)); window.setView(view); gameState=Playing; } while(gameState!=Exiting) { GameLoop(); } window.close(); } void Game::GameLoop() { sf::Event CurrentEvent; window.pollEvent(CurrentEvent); switch(gameState) { case Playing: { window.clear(sf::Color::White); window.setView(view); if (CurrentEvent.type==sf::Event::Closed) { gameState=Exiting; } if ( !ball.IsFalling() &&!ball.IsJumping() &&sf::Keyboard::isKeyPressed(sf::Keyboard::Space)) { ball.setJState(); } ball.Update(view); level.Draw(window); ball.Draw(window); window.display(); break; } } } And here's the file where the error happens: /********************************************************************* Quinn Schwab 16/08/2010 SFML Tiled Map Loader The zlib license has been used to make this software fully compatible with SFML. See http://www.sfml-dev.org/license.php This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. *********************************************************************/ #include "level.h" #include <iostream> #include "tinyxml.h" #include <fstream> int Object::GetPropertyInt(std::string name) { int i; i = atoi(properties[name].c_str()); return i; } float Object::GetPropertyFloat(std::string name) { float f; f = strtod(properties[name].c_str(), NULL); return f; } std::string Object::GetPropertyString(std::string name) { return properties[name]; } Level::Level() { //ctor } Level::~Level() { //dtor } using namespace std; bool Level::LoadFromFile(std::string filename) { TiXmlDocument levelFile(filename.c_str()); if (!levelFile.LoadFile()) { std::cout << "Loading level \"" << filename << "\" failed." << std::endl; return false; } //Map element. This is the root element for the whole file. TiXmlElement *map; map = levelFile.FirstChildElement("map"); //Set up misc map properties. 
width = atoi(map->Attribute("width")); height = atoi(map->Attribute("height")); tileWidth = atoi(map->Attribute("tilewidth")); tileHeight = atoi(map->Attribute("tileheight")); //Tileset stuff TiXmlElement *tilesetElement; tilesetElement = map->FirstChildElement("tileset"); firstTileID = atoi(tilesetElement->Attribute("firstgid")); spacing = atoi(tilesetElement->Attribute("spacing")); margin = atoi(tilesetElement->Attribute("margin")); //Tileset image TiXmlElement *image; image = tilesetElement->FirstChildElement("image"); std::string imagepath = image->Attribute("source"); if (!tilesetImage.loadFromFile(imagepath))//Load the tileset image { std::cout << "Failed to load tile sheet." << std::endl; return false; } tilesetImage.createMaskFromColor(sf::Color(255, 0, 255)); tilesetTexture.loadFromImage(tilesetImage); tilesetTexture.setSmooth(false); //Columns and rows (of tileset image) int columns = tilesetTexture.getSize().x / tileWidth; int rows = tilesetTexture.getSize().y / tileHeight; std::vector <sf::Rect<int> > subRects;//container of subrects (to divide the tilesheet image up) //tiles/subrects are counted from 0, left to right, top to bottom for (int y = 0; y < rows; y++) { for (int x = 0; x < columns; x++) { sf::Rect <int> rect; rect.top = y * tileHeight; rect.height = y * tileHeight + tileHeight; rect.left = x * tileWidth; rect.width = x * tileWidth + tileWidth; subRects.push_back(rect); } } //Layers TiXmlElement *layerElement; layerElement = map->FirstChildElement("layer"); while (layerElement) { Layer layer; if (layerElement->Attribute("opacity") != NULL)//check if opacity attribute exists { float opacity = strtod(layerElement->Attribute("opacity"), NULL);//convert the (string) opacity element to float layer.opacity = 255 * opacity; } else { layer.opacity = 255;//if the attribute doesnt exist, default to full opacity } //Tiles TiXmlElement *layerDataElement; layerDataElement = layerElement->FirstChildElement("data"); if (layerDataElement == NULL) { std::cout << "Bad map. No layer information found." << std::endl; } TiXmlElement *tileElement; tileElement = layerDataElement->FirstChildElement("tile"); if (tileElement == NULL) { std::cout << "Bad map. No tile information found." << std::endl; return false; } int x = 0; int y = 0; while (tileElement) { int tileGID = atoi(tileElement->Attribute("gid")); int subRectToUse = tileGID - firstTileID;//Work out the subrect ID to 'chop up' the tilesheet image. if (subRectToUse >= 0)//we only need to (and only can) create a sprite/tile if there is one to display { sf::Sprite sprite;//sprite for the tile sprite.setTexture(tilesetTexture); sprite.setTextureRect(subRects[subRectToUse]); sprite.setPosition(x * tileWidth, y * tileHeight); sprite.setColor(sf::Color(255, 255, 255, layer.opacity));//Set opacity of the tile. 
//add tile to layer layer.tiles.push_back(sprite); } tileElement = tileElement->NextSiblingElement("tile"); //increment x, y x++; if (x >= width)//if x has "hit" the end (right) of the map, reset it to the start (left) { x = 0; y++; if (y >= height) { y = 0; } } } layers.push_back(layer); layerElement = layerElement->NextSiblingElement("layer"); } //Objects TiXmlElement *objectGroupElement; if (map->FirstChildElement("objectgroup") != NULL)//Check that there is atleast one object layer { objectGroupElement = map->FirstChildElement("objectgroup"); while (objectGroupElement)//loop through object layers { TiXmlElement *objectElement; objectElement = objectGroupElement->FirstChildElement("object"); while (objectElement)//loop through objects { std::string objectType; if (objectElement->Attribute("type") != NULL) { objectType = objectElement->Attribute("type"); } std::string objectName; if (objectElement->Attribute("name") != NULL) { objectName = objectElement->Attribute("name"); } int x = atoi(objectElement->Attribute("x")); int y = atoi(objectElement->Attribute("y")); int width = atoi(objectElement->Attribute("width")); int height = atoi(objectElement->Attribute("height")); Object object; object.name = objectName; object.type = objectType; sf::Rect <int> objectRect; objectRect.top = y; objectRect.left = x; objectRect.height = y + height; objectRect.width = x + width; if (objectType == "solid") { solidObjects.push_back(objectRect); } object.rect = objectRect; TiXmlElement *properties; properties = objectElement->FirstChildElement("properties"); if (properties != NULL) { TiXmlElement *prop; prop = properties->FirstChildElement("property"); if (prop != NULL) { while(prop) { std::string propertyName = prop->Attribute("name"); std::string propertyValue = prop->Attribute("value"); object.properties[propertyName] = propertyValue; prop = prop->NextSiblingElement("property"); } } } objects.push_back(object); objectElement = objectElement->NextSiblingElement("object"); } objectGroupElement = objectGroupElement->NextSiblingElement("objectgroup"); } } else { std::cout << "No object layers found..." << std::endl; } return true; } Object Level::GetObject(std::string name) { for (int i = 0; i < objects.size(); i++) { if (objects[i].name == name) { return objects[i]; } } } void Level::SetDrawingBounds(sf::Rect<float> bounds) { drawingBounds = bounds; cout<<tileHeight; //Adjust the rect so that tiles are drawn just off screen, so you don't see them disappearing. drawingBounds.top -= tileHeight; drawingBounds.left -= tileWidth; drawingBounds.width += tileWidth; drawingBounds.height += tileHeight; } void Level::Draw(sf::RenderWindow &window) { for (int layer = 0; layer < layers.size(); layer++) { for (int tile = 0; tile < layers[layer].tiles.size(); tile++) { if (drawingBounds.contains(layers[layer].tiles[tile].getPosition().x, layers[layer].tiles[tile].getPosition().y)) { window.draw(layers[layer].tiles[tile]); } } } } I really hope that one of you can help me and I'm sorry if I've made any formatting issues. Thanks!
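
    For what it's worth, a likely cause of the assertion on the spacing line: TiXmlElement::Attribute() returns NULL when an attribute is absent (Tiled typically omits spacing and margin when they are zero), and passing NULL to atoi is undefined behaviour. A small guarded helper, with illustrative names, keeps the loader from crashing on such maps:

        #include <cstdlib>
        #include "tinyxml.h"

        // Reads an integer attribute, falling back to a default when it is missing.
        static int AttributeIntOr(const TiXmlElement* element, const char* name,
                                  int defaultValue)
        {
            const char* value = element ? element->Attribute(name) : NULL;
            return value ? std::atoi(value) : defaultValue;
        }

        // Possible usage in Level::LoadFromFile:
        //   spacing = AttributeIntOr(tilesetElement, "spacing", 0);
        //   margin  = AttributeIntOr(tilesetElement, "margin",  0);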


  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite a difference between 3D and 2D rendering using OpenGL; the techniques are different - pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?
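
    Not a reference, but as a starting point for the pixel-perfect part of the question: a 1-unit-per-pixel orthographic projection removes most 2D placement surprises. A fixed-function sketch (windowWidth/windowHeight are placeholders; the same numbers go into an ortho matrix in modern GL):

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);  // origin top-left, 1 unit = 1 pixel
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();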


  • Detecting long held keys on keyboard

    - by Robinson Joaquin
    I just want to ask whether I can check for a keyboard key that is held/pressed for a long time, because I am creating a clone of Breakout crossed with air hockey for 2 different human players. Here's the list of my concerns: Do I need a third-party library to detect held keys? Is multi-threading needed? I don't know anything about multi-threading and wasn't planning on using it (I'm just a newbie). One more thing: what if the two players press their respective keys at the same time? How can I avoid errors, or worse, one player's key being prioritized before the other's? Example: Player 1 = W for UP & S for DOWN, Player 2 = O for UP & L for DOWN (say W & L are pressed at the same time). PS: I use GLUT for the visuals of the game.
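
    GLUT can handle this without extra libraries or threads: register both a key-down and a key-up callback, record the state in an array, and poll that array once per frame so simultaneous keys are handled identically for both players. A minimal sketch (registration shown in the trailing comment; movement code is left as placeholders):

        #include <GL/glut.h>

        static bool gKeyDown[256] = { false };

        static void OnKeyDown(unsigned char key, int, int) { gKeyDown[key] = true;  }
        static void OnKeyUp  (unsigned char key, int, int) { gKeyDown[key] = false; }

        static void OnIdle()
        {
            // Poll every frame: both paddles can move in the same frame,
            // so neither player's key is "prioritized" over the other's.
            if (gKeyDown['w']) { /* move player 1 up   */ }
            if (gKeyDown['s']) { /* move player 1 down */ }
            if (gKeyDown['o']) { /* move player 2 up   */ }
            if (gKeyDown['l']) { /* move player 2 down */ }
            glutPostRedisplay();
        }

        // In main(): glutKeyboardFunc(OnKeyDown); glutKeyboardUpFunc(OnKeyUp);
        //            glutIgnoreKeyRepeat(1); glutIdleFunc(OnIdle);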


  • Android-Libgdx-ProGuard: Usefulness without DexGuard? [on hold]

    - by Rico Pablo Mince
    So I'm developing a game for Android - using LibGDX - and noticed that the Android SDK (HDK, MDK, WhatTheHellEvarDK) has ProGuard built-in. Browsing the ProGuard page is like searching Google: you get that the idea is to sell some product (in this case, it's DexGuard). That leaves me wondering what features are left out of ProGuard that a game developer targeting Android should worry about. For instance, the ProGuard FAQs answer the question: "Does ProGuard encrypt string constants?" by saying: "No. String encryption in program code has to be perfectly reversible by definition, so it only improves the obfuscation level. It increases the footprint of the code. However, by popular demand, ProGuard's closed-source sibling for Android, DexGuard, does provide string encryption, along with more protection techniques against static and dynamic analysis." Alright. OK. But isn't "...improves the obfuscation level" EXACTLY what ProGuard is supposed to do? Are there better options that can be implemented at build-time in Eclipse using the Gradle options and Libgdx? In particular, the assets folder and res-specific folders will need some protection. The code itself doesn't cure cancer, but I'd prefer if nobody could copy/paste it with different game art and call it "IhAxEdUrGamE"....


  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing an input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command; this seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed later is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
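
    For illustration only (a rough sketch, not Quake 3's actual code): the core of such a relay is a key-to-command map plus a timestamped event queue, and the '+'/'-' prefix convention is what makes press/release pairs and complex bindings fall out naturally:

        #include <cstdint>
        #include <map>
        #include <queue>
        #include <string>

        struct InputEvent {
            std::string command;   // e.g. "+forward", "-fire", "switch_weapon 2"
            uint64_t    timeMs;    // when the key went down or up
        };

        class BindingRelay {
        public:
            void Bind(int key, std::string command) { bindings[key] = std::move(command); }

            void OnKey(int key, bool pressed, uint64_t timeMs) {
                auto it = bindings.find(key);
                if (it == bindings.end()) return;
                std::string cmd = it->second;
                // Commands starting with '+' get a matching '-' command on release.
                if (!pressed && !cmd.empty() && cmd[0] == '+') cmd[0] = '-';
                queue.push({ cmd, timeMs });
            }

            std::queue<InputEvent>& Pending() { return queue; }   // consumed by the command parser

        private:
            std::map<int, std::string> bindings;
            std::queue<InputEvent>     queue;
        };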


  • I prefer C/C++ over Unity and other tools: is it such a big downer for a game developer?

    - by jokoon
    We have a big game project in Unity at school, with 12 of us working on it. My teacher seems convinced it's an important tool to teach students, since it makes students look from the high level down to the lower level. I can understand his view, and I'm wondering: Is Unity such an important engine at game development companies? Are there a lot of companies using it because they can't afford to use something else? He talks as if Unity were a big player in game making, but I only see it fitting small indie companies that want to make a game as fast as possible. Do you think Unity is that important in the industry? Does it endanger the value of C++ skills? It's not that I don't like Unity; it's just that I don't learn anything with it. I prefer to achieve small steps with Ogre or SFML instead. We also have C++ practice exercises, but those are just practice with theory, nothing much.


  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue in color because the Z component of the normal is represented by blue. Normals point out of the surface in the Z direction so we should see blue as the predominant colour since the Z component is dominant. By definition tangent space is perpendicular to the surface. At any point we should have the normal always pointing in the Z (blue direction) with no X (red direction) or Y (green direction). Thus the normal map (since it is a "normal map") should have the colour of the normals which is just blue (R = x = 0, G = y = 0, B = z = 1) with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?
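
    A short sketch of the packing convention may make the colours concrete: each component is remapped from [-1,1] to [0,1] before being stored, so a perfectly flat tangent-space normal (0,0,1) lands on the familiar light blue (128,128,255), and any tilt in X or Y pushes red and green away from 128, which is exactly where the gradients in a real normal map come from:

        struct RGB8 { unsigned char r, g, b; };

        // Packs a unit tangent-space normal (components in [-1,1]) into an RGB texel.
        RGB8 PackNormal(float x, float y, float z)
        {
            RGB8 texel;
            texel.r = static_cast<unsigned char>((x * 0.5f + 0.5f) * 255.0f + 0.5f);
            texel.g = static_cast<unsigned char>((y * 0.5f + 0.5f) * 255.0f + 0.5f);
            texel.b = static_cast<unsigned char>((z * 0.5f + 0.5f) * 255.0f + 0.5f);
            return texel;   // (0,0,1) -> (128,128,255); tilted normals shift r/g
        }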


  • Why use a 3D matrix and camera in a 2D world for 2D geometric figures?

    - by Navy Seal
    I'm working in XNA on a 2D isometric world/game and I'm using DrawUserPrimitives to draw some geometric figures... I saw some tutorials about creating dynamic shadows, but I didn't understand why they use a "3D" matrix to control the transformations, since the figure I'm drawing is in a 2D perspective. I know I'm drawing a 2D figure in 3D, but I still can't tell whether I really need to work with the matrix. Is there any advantage in using a 3D matrix to control the camera and view? Is there any reason why I can't just update my vertices' positions with a regular method, since the view is always the same? And since I want to work only with single figures, won't this cause all the geometric figures to have the same transformations simultaneously? To better understand what I mean, here's a video: http://www.youtube.com/watch?v=LjvsGHXaGEA&feature=player_embedded
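
    One way to see the appeal of the matrix approach, sketched with GLM rather than XNA's Matrix type purely for illustration: pan, zoom and rotation collapse into a single camera transform that the pipeline applies to everything you draw, instead of rewriting each figure's vertices by hand. Per-figure motion then lives in separate world matrices, so the figures do not all have to share the camera's transformation:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Builds one "camera" matrix for a 2D scene; multiply with an orthographic
        // projection (and each figure's own world matrix) before drawing.
        glm::mat4 Make2DCamera(glm::vec2 cameraPos, float zoom, float rotationRadians)
        {
            glm::mat4 view(1.0f);
            view = glm::scale(view, glm::vec3(zoom, zoom, 1.0f));
            view = glm::rotate(view, rotationRadians, glm::vec3(0, 0, 1));
            view = glm::translate(view, glm::vec3(-cameraPos, 0.0f));
            return view;
        }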


  • Ways to earn money through Flash games

    - by Maged
    If you like developing Flash games just for fun, why not make money from them? There are different ways you can monetize your Flash game: In-game ads: some common examples are Mochi Ads, GameJacket, Ad4Game, CPMStar and InviziAds. You can make money by helping online gaming companies test and evaluate new games; many of those companies are seeking feedback and reviews of their newest games. Find a sponsor and license your game: one of the quickest yet hardest ways to make money from the Flash games you create is to find a website willing to sponsor them. With a single sponsorship, an individual can make anywhere from $1000-$7000 for a game. Which of these ad networks are best? If the game will be on social websites like Facebook and MySpace, will it still be useful to try other sites? Are there any other ways to earn money from a Flash game?


  • What method replaces GL_SELECT for box selection?

    - by Jake
    Two weeks ago I started working on a box selection that selects shapes using GL_SELECT, and I finally just got it working. When looking up resources online, a significant number of posts say GL_SELECT is deprecated in OpenGL 3.0, but there is no mention of what has replaced it. I learnt OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we already have OpenGL 4.0, and I am unaware of what I need to do to keep myself up to date. So, in the meantime, what would be the preferred method for box selection these days? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
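
    The usual modern replacements are CPU-side ray/box intersection tests or colour picking. A sketch of the latter (legacy-GL calls kept for brevity): every object is drawn flat-coloured with its id into an offscreen pass and glReadPixels recovers what is under the cursor; for a box selection you read the whole rectangle instead of one pixel. DrawObject, objectCount and the mouse/viewport arguments are placeholders:

        #include <GL/gl.h>

        void DrawObject(unsigned int id);   // placeholder: issue the object's draw calls

        unsigned int PickAt(int mouseX, int mouseY, int viewportHeight,
                            unsigned int objectCount)
        {
            // Draw every object with its id encoded in a flat RGB colour
            // (no lighting or texturing, so the colour survives untouched).
            glDisable(GL_LIGHTING);
            glDisable(GL_TEXTURE_2D);
            for (unsigned int id = 0; id < objectCount; ++id)
            {
                glColor3ub((GLubyte)((id >> 16) & 0xFF),
                           (GLubyte)((id >> 8) & 0xFF),
                           (GLubyte)(id & 0xFF));
                DrawObject(id);
            }

            // Read back the pixel under the cursor (GL's window origin is bottom-left).
            GLubyte pixel[3];
            glReadPixels(mouseX, viewportHeight - 1 - mouseY, 1, 1,
                         GL_RGB, GL_UNSIGNED_BYTE, pixel);
            return ((unsigned int)pixel[0] << 16) | ((unsigned int)pixel[1] << 8) | pixel[2];
        }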


  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
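
    In XNA the helper for this is Viewport.Unproject (and Project for the other direction). The same idea written with GLM, purely to show the pieces involved: the combined view-projection is inverted and the window y is flipped before the viewport mapping is undone. A minimal sketch under those assumptions:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Maps a screen point back to world space. With an orthographic camera any
        // depth works; 0 picks the near plane.
        glm::vec3 ScreenToWorld(glm::vec2 screen, glm::mat4 view, glm::mat4 projection,
                                glm::vec4 viewport /* x, y, width, height */)
        {
            glm::vec3 windowPos(screen.x, viewport.w - screen.y, 0.0f);   // flip y: GL window origin is bottom-left
            return glm::unProject(windowPos, view, projection, viewport);
        }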


  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain that has a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and similar features that simply won't work with height-maps. Using voxels: yes, but I think this would be too much, since I don't even want to support changing terrain. Splitting into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: Upload and render the chunks depending on the current player position. [No problem] The player walks straight forward. Now we may have to swap one of the low-poly chunks with the high-poly one. So, remove the low-poly chunk and load the high-poly chunk. [Already too much traffic here, I think] I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And what about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)


  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogenous coordinate"... does that mean anything?
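
    A common convention, offered here as a hedged sketch since rasterisers differ: W is the homogeneous w output by the perspective transform (for a standard perspective matrix it is the view-space depth, up to sign), and the rasteriser wants u/w, v/w and 1/w per vertex so it can interpolate them linearly in screen space and divide per pixel. The struct and function names below are illustrative:

        struct ProjectedVertex {
            float x, y;   // screen-space position
            float u, v;   // texture coordinates in [0,1]
            float w;      // homogeneous w from the projection (roughly view-space depth)
        };

        void PrepareForPerspectiveTexturing(ProjectedVertex& vtx)
        {
            float invW = 1.0f / vtx.w;
            vtx.u *= invW;   // store u/w
            vtx.v *= invW;   // store v/w
            vtx.w  = invW;   // store 1/w; per pixel: u = (u/w) / (1/w), likewise for v
        }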


  • Why does my game loop speed vary on different platforms with the same hardware?

    - by Sri Harsha Chilakapati
    I've got a serious issue with my game loop. The loop's timing varies with the platform even on the same hardware. This is the list of FPS achieved: Windows: 140 to 150; Linux: 120 to 125; Windows (WINE): 125 to 135. And since my game loop is a fixed timestep, the speed of the game is not stable. Here's my game loop: public final void run() { // Initialize the resources Map.initMap(); initResources(); // Start the timer GTimer.startTimer(); GTimer.refresh(); long elapsedTime = 0; // The game loop while (running) { // Update the game update(elapsedTime); if (state == GameState.GAME_PLAYING) { Map.updateObjects(elapsedTime); } // Show or hide the cursor if (Global.HIDE_CURSOR) { setCursor(GInput.INVISIBLE_CURSOR); } else { setCursor(Cursor.getDefaultCursor()); } // Repaint the game and sync repaint(); elapsedTime = GTimer.sync(); Toolkit.getDefaultToolkit().sync(); } } The timer package. How could I improve it?
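
    For comparison, a minimal fixed-timestep shape, sketched in C++ only because the idea is language-independent (IsRunning, Update and Render are placeholders): updates always advance the simulation by a constant step, and only rendering runs at whatever rate the platform manages, so differing FPS no longer changes the game's speed:

        #include <chrono>

        bool IsRunning();                             // placeholder: game's own state
        void Update(std::chrono::nanoseconds step);   // placeholder: advance simulation
        void Render();                                // placeholder: draw a frame

        void RunGameLoop()
        {
            using clock = std::chrono::steady_clock;
            const std::chrono::nanoseconds step(16666667);   // 60 updates per second
            auto previous = clock::now();
            std::chrono::nanoseconds accumulator(0);

            while (IsRunning())
            {
                auto now = clock::now();
                accumulator += std::chrono::duration_cast<std::chrono::nanoseconds>(now - previous);
                previous = now;

                while (accumulator >= step) { Update(step); accumulator -= step; }
                Render();                                     // runs at whatever FPS the platform allows
            }
        }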


  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine) the game objects are divided into two types: world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, every frame, loop through it and call that function. At first glance this idea is reasonable, but it can take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
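
    A rough sketch of the usual pattern (not Source's actual code): entities register when they next want to think, so the per-frame loop only compares a timestamp for each entity and calls the virtual think for the few that are due, which keeps large entity counts affordable:

        #include <vector>

        struct Entity {
            float nextThinkTime = 0.0f;             // 0 = never think
            virtual void Think(float now) {}
            virtual ~Entity() = default;
        };

        void RunThinks(std::vector<Entity*>& entities, float now)
        {
            for (Entity* e : entities)
            {
                if (e->nextThinkTime > 0.0f && now >= e->nextThinkTime)
                {
                    e->nextThinkTime = 0.0f;         // Think() may reschedule itself
                    e->Think(now);
                }
            }
        }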


  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably will rotate the x,y axis to the correct side, but the z-axis is never rotating (See photos of before and after rotation). When I'm using the code below I always get '0' for my rotationVector.z. What am I missing here? // Define lookAt vector lookAtVector = GLKVector3Make(0,0,1); // Define axes vectors axes[0] = GLKVector3Make(0,0,1); axes[1] = GLKVector3Make(-1,0,0); axes[2] = GLKVector3Make(0,1,0); axes[3] = GLKVector3Make(1,0,0); axes[4] = GLKVector3Make(0,-1,0); axes[5] = GLKVector3Make(0,0,-1); CGFloat highest_dot = -1.0; GLKVector3 closest_axis; for(int i = 0; i < 6; i++) { // multiply cube's axes by existing matrix GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]); CGFloat dot = GLKVector3DotProduct(axis, lookAtVector); if(dot > highest_dot) { closest_axis = axis; highest_dot = dot; } } GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector); // Get angle between vectors CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector)); // normalize the rotation vector rotationVector = GLKVector3Normalize(rotationVector); // Create transform CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z); // add rotation transform to existing transformation baseTransform = CATransform3DConcat(baseTransform, rotationTransform); return baseTransform; Before 3d Rotation After 3d Rotation Implementation based on this post

