Search Results

Search found 25969 results on 1039 pages for 'rom development'.

Page 429/1039 | < Previous Page | 425 426 427 428 429 430 431 432 433 434 435 436  | Next Page >

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm wondering which acceleration data structure would be most efficient in this case, since all objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as a collection of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box?

    A few other questions I have are about photon mapping. I've read the original paper by Jensen and much other material. In the compact data structure they introduce for the photon, power is stored as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle of an object) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
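
    For reference, Jensen's compact photon record looks roughly like the sketch below (a hedged reconstruction from the paper, written in C# for illustration, not authoritative). The four power bytes are Ward's shared-exponent RGBE packing of the photon power, not three colour bytes plus one flux byte; the shared exponent is what lets a single byte per channel cover the large dynamic range of 1/n-scale flux values.

        // Rough sketch of the compact photon from Jensen's photon mapping paper.
        struct Photon
        {
            public float X, Y, Z;                       // position, 12 bytes
            public byte PowerR, PowerG, PowerB, PowerE; // power packed as RGBE (3 mantissas + shared exponent)
            public byte Phi, Theta;                     // quantised incident direction
            public short Flag;                          // kd-tree splitting plane
        }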

    Read the article

  • Handling different screen densities in Android Devices?

    - by DevilWithin
    Well, I know there are plenty of different-sized screens on devices that run Android. The SDK I code with deploys to all major desktop platforms and Android. I am aware I must take special care to handle the different screen sizes and densities, but I just had an idea that should work in theory, and my question is exactly about that method: how could it FAIL? What I do is use an ortho camera of the same size for all devices, with possible tweaks, but in any case that would guarantee the proper positioning of all elements on all devices, right? We can assume everything is drawn in OpenGL ES and input handling is converted to the proper camera coordinates. If you need me to improve the question, please tell me.
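
    For what it's worth, the usual failure mode of a fixed-size ortho camera is aspect ratio rather than density: a 4:3 screen and a 16:9 screen either stretch or crop the same virtual rectangle. A common mitigation is a fixed virtual resolution with letterboxing; a rough C# sketch, all numbers illustrative:

        using System;

        // Scale an 800x480 virtual camera uniformly to fit the real screen and
        // centre it, leaving bars instead of stretching.
        static void ComputeViewport(int screenW, int screenH,
                                    out int vx, out int vy, out int vw, out int vh)
        {
            const float VirtualWidth = 800f, VirtualHeight = 480f;
            float scale = Math.Min(screenW / VirtualWidth, screenH / VirtualHeight);
            vw = (int)(VirtualWidth * scale);
            vh = (int)(VirtualHeight * scale);
            vx = (screenW - vw) / 2;  // horizontal bars if the screen is wider
            vy = (screenH - vh) / 2;  // vertical bars if the screen is taller
        }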

    Read the article

  • Automatically zoom out the camera to show all players

    - by user36159
    I am building a game in XNA that takes place in a rectangular arena. The game is multiplayer, and each player may go where they like within the arena. The camera is a perspective camera that looks directly downwards. The camera should be automatically repositioned based on the game state. Currently, its xy position is a weighted sum of the xy positions of important entities. I would like the camera's z position to be calculated from the xy coordinates so that it zooms out to the point where all important entities are visible. My current approach is:

    hw = the greatest x distance from the camera to an important entity
    hh = the greatest y distance from the camera to an important entity
    z = max(hw / tan(FoVx), hh / tan(FoVy))

    My code seems to almost work as it should, but the resulting z values are always too low by a factor of about 4. Any ideas?
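
    A factor of roughly 4 is the classic symptom of using the full field of view where the half-angle belongs, compounded by a mis-derived horizontal FOV. A hedged sketch of the corrected computation in XNA-style C# (the half-extents hw/hh and the names are the question's own; deriving FoVx from FoVy and the aspect ratio is an assumption about the setup):

        using System;

        // fovY is the full vertical field of view in radians; hw/hh are the
        // half-extents from the camera's xy position to the farthest entity.
        static float RequiredHeight(float hw, float hh, float fovY, float aspect)
        {
            float tanHalfY = (float)Math.Tan(fovY / 2);  // half-angle, not full angle
            float tanHalfX = tanHalfY * aspect;          // horizontal half-angle from aspect
            return Math.Max(hw / tanHalfX, hh / tanHalfY);
        }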

    Read the article

  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to implement hand/finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked around, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the preset ranges found in the DepthImageStream. What I would like to do is make it modular so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that can show me how to do it. Could someone please help me out? I have managed to render the standard depth view with the Kinect, and the method I use for converting the depth frame is as follows (I have a feeling it's something in here I need to set):

        private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
        {
            int tooNearDepth = depthStream.TooNearDepth;
            int tooFarDepth = depthStream.TooFarDepth;
            int unknownDepth = depthStream.UnknownDepth;
            byte[] depthFrame32 = new byte[depthFrame32Length];

            for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                // transform 13-bit depth information into an 8-bit intensity appropriate
                // for display (we disregard information in most significant bit)
                byte intensity = (byte)(~(realDepth >> 8));

                if (player == 0 && realDepth == 0)
                {
                    // white
                    depthFrame32[i32 + RedIndex] = 255;
                    depthFrame32[i32 + GreenIndex] = 255;
                    depthFrame32[i32 + BlueIndex] = 255;
                }
                // omitted other if statements; they simply change the color of the
                // pixels if they go outside the preset depth values
                else
                {
                    // tint the intensity by dividing by per-player values
                    depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
                    depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
                    depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }

    I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
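
    For a caller-defined depth window, one hedged approach is to replace the fixed ~(realDepth >> 8) intensity mapping with a linear remap over your own near/far values; a sketch (the method name and millimetre units are assumptions, not SDK constants):

        // Map realDepth (mm) into 0..255 within [minDepth, maxDepth];
        // everything outside the window renders black.
        static byte DepthToIntensity(int realDepth, int minDepth, int maxDepth)
        {
            if (realDepth < minDepth || realDepth > maxDepth)
                return 0;
            return (byte)(255 - 255 * (realDepth - minDepth) / (maxDepth - minDepth));
        }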

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements which are drawn using paths. So far I have stored each element's path in a JavaScript data structure, and I then check whether the coordinates of a click fall within an element's coordinates. Re-testing every element's path on each hit would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project of a simple roguelike style. For the purposes of this question, I'm not going to be using a scripting engine, but will rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise.

    So, in order to expose the moddable content, I have created a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project, and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem. These are more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin.

    Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementations of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time, load any IPlugins it finds using reflection, and override anything it finds from the default; for example, if it finds a new implementation of the DungeonGenerator it will use that one instead (see the sketch below).

    Now my question is: in order to get this far, I effectively have 2 projects that contain nothing but interfaces, which seems a little... strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write 1 input system, I can use it in any game I write. If I create 3 roguelike games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project. I come from a more web-based C# role, so having empty interface projects doesn't seem wrong; it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following.
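
    For reference, the mod scan described above usually comes down to a few lines of reflection; a minimal sketch (IPlugin is the question's own interface, everything else is assumed):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        static IEnumerable<IPlugin> LoadPlugins(string modsDirectory)
        {
            foreach (string file in Directory.GetFiles(modsDirectory, "*.dll"))
            {
                Assembly assembly = Assembly.LoadFrom(file);
                foreach (Type type in assembly.GetTypes())
                {
                    // Concrete classes implementing IPlugin get instantiated;
                    // a real game would also catch load/instantiation failures.
                    if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
                        yield return (IPlugin)Activator.CreateInstance(type);
                }
            }
        }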

    Read the article

  • 3D Vector "End Point" Calculation for procedural Vector Graphics

    - by FrostFlame64
    Alright, so I need some help with some vector math. I've been developing some game engines that use procedural fractal generation for some graphics, such as Lindenmayer systems for generating trees and plants. L-systems are drawn using turtle graphics, which is a form of vector graphics. I first created a system to draw in 2D, which works perfectly fine. But now I want to make a 3D equivalent, and I've run into an issue. For my 2D version, I created a method for quickly determining the "end point" of a vector-like movement. Given a starting point (X, Y), a direction (between 0 and 360 degrees), and a distance, the end point is calculated by these formulas:

    newX = startX + distance * Sin((PI * direction) / 180)
    newY = startY + distance * Cos((PI * direction) / 180)

    Now I need something similar for performing this calculation in 3D, but I haven't been able to Google anything that could show me how to do it. I'm flexible enough to provide whatever information is needed for this calculation, in any reasonable form (Vector3, Quaternion, etc.). To summarize: given a starting point/vector position in 3D space (X, Y, Z), a direction in 3D space (Vector3, Quaternion, etc.), and a distance, I need to find the "end point" in 3D space. Thank you for your time and help.
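
    For the record, if the direction is already a Vector3, the whole problem reduces to start + distance * Vector3.Normalize(direction). With angles, the direct 3D extension of the 2D formulas above looks like this hedged XNA-style sketch (the axis convention is an assumption; yaw plays the role of the 2D "direction" and pitch adds elevation):

        using System;
        using Microsoft.Xna.Framework;

        static Vector3 EndPoint(Vector3 start, float yawDegrees, float pitchDegrees, float distance)
        {
            double yaw = Math.PI * yawDegrees / 180.0;
            double pitch = Math.PI * pitchDegrees / 180.0;
            return start + distance * new Vector3(
                (float)(Math.Cos(pitch) * Math.Sin(yaw)),  // x, mirrors Sin() in the 2D case
                (float)Math.Sin(pitch),                    // y, elevation
                (float)(Math.Cos(pitch) * Math.Cos(yaw))); // z, mirrors Cos() in the 2D case
        }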

    Read the article

  • Toon shader with Texture. Can this be optimized?

    - by Alex
    I am quite new to OpenGL. After long trial and error I have managed to integrate NeHe's cel-shading rendering with my model loaders, and to have the models drawn using the toon shade and outline AND their original texture at the same time. The result is actually a very nice cel-shading effect on the model texture, but it is halving the speed of the program; it's very slow even with just 3 models on screen... Since the result was kind of hacked together, I am thinking that maybe I am performing some extra steps or extra rendering tasks that are not needed and are slowing down the game. Something unnecessary that maybe you guys could spot? Both the MD2 and 3DS loaders have an initToon() function called upon creation to load the shader:

        initToon(){
            int i;                                           // Looping Variable ( NEW )
            char Line[255];                                  // Storage For 255 Characters ( NEW )
            float shaderData[32][3];                         // Storage For The 96 Shader Values ( NEW )
            FILE *In = fopen ("Shader.txt", "r");            // Open The Shader File ( NEW )

            if (In)                                          // Check To See If The File Opened ( NEW )
            {
                for (i = 0; i < 32; i++)                     // Loop Though The 32 Greyscale Values ( NEW )
                {
                    if (feof (In))                           // Check For The End Of The File ( NEW )
                        break;
                    fgets (Line, 255, In);                   // Get The Current Line ( NEW )
                    shaderData[i][0] = shaderData[i][1] = shaderData[i][2] = float(atof (Line)); // Copy Over The Value ( NEW )
                }
                fclose (In);                                 // Close The File ( NEW )
            }
            else
                return false;                                // It Went Horribly Horribly Wrong ( NEW )

            glGenTextures (1, &shaderTexture[0]);            // Get A Free Texture ID ( NEW )
            glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind This Texture. From Now On It Will Be 1D ( NEW )

            // For Crying Out Loud Don't Let OpenGL Use Bi/Trilinear Filtering! ( NEW )
            glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

            glTexImage1D (GL_TEXTURE_1D, 0, GL_RGB, 32, 0, GL_RGB, GL_FLOAT, shaderData); // Upload ( NEW )
        }

    This is the drawing for the animated MD2 model:

        void MD2Model::drawToon()
        {
            float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

            // ORIGINAL PART OF THE FUNCTION
            // Figure out the two frames between which we are interpolating
            int frameIndex1 = (int)(time * (endFrame - startFrame + 1)) + startFrame;
            if (frameIndex1 > endFrame)
            {
                frameIndex1 = startFrame;
            }
            int frameIndex2;
            if (frameIndex1 < endFrame)
            {
                frameIndex2 = frameIndex1 + 1;
            }
            else
            {
                frameIndex2 = startFrame;
            }
            MD2Frame* frame1 = frames + frameIndex1;
            MD2Frame* frame2 = frames + frameIndex2;

            // Figure out the fraction that we are between the two frames
            float frac = (time - (float)(frameIndex1 - startFrame) / (float)(endFrame - startFrame + 1)) * (endFrame - startFrame + 1);

            // I ADDED THESE FROM NEHE'S TUTORIAL FOR FIRST PASS (TOON SHADE)
            glHint (GL_LINE_SMOOTH_HINT, GL_NICEST);         // Use The Good Calculations ( NEW )
            glEnable (GL_LINE_SMOOTH);                       // Cel-Shading Code
            glEnable (GL_TEXTURE_1D);                        // Enable 1D Texturing ( NEW )
            glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW )
            glColor3f (1.0f, 1.0f, 1.0f);                    // Set The Color Of The Model ( NEW )

            // ORIGINAL DRAWING CODE
            // Draw the model as an interpolation between the two frames
            glBegin(GL_TRIANGLES);
            for(int i = 0; i < numTriangles; i++)
            {
                MD2Triangle* triangle = triangles + i;
                for(int j = 0; j < 3; j++)
                {
                    MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                    MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                    Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                    Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                    if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0)
                    {
                        normal = Vec3f(0, 0, 1);
                    }
                    glNormal3f(normal[0], normal[1], normal[2]);
                    MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                    glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                    glVertex3f(pos[0], pos[1], pos[2]);
                }
            }
            glEnd();

            // ADDED THESE FROM NEHE'S FOR SECOND PASS (OUTLINE)
            glDisable (GL_TEXTURE_1D);                       // Disable 1D Textures ( NEW )
            glEnable (GL_BLEND);                             // Enable Blending ( NEW )
            glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);// Set The Blend Mode ( NEW )
            glPolygonMode (GL_BACK, GL_LINE);                // Draw Backfacing Polygons As Wireframes ( NEW )
            glLineWidth (outlineWidth);                      // Set The Line Width ( NEW )
            glCullFace (GL_FRONT);                           // Don't Draw Any Front-Facing Polygons ( NEW )
            glDepthFunc (GL_LEQUAL);                         // Change The Depth Mode ( NEW )
            glColor3fv (&outlineColor[0]);                   // Set The Outline Color ( NEW )

            // HERE I AM PARSING THE VERTICES AGAIN (NOT IN THE ORIGINAL FUNCTION) FOR THE OUTLINE AS PER NEHE'S TUT
            glBegin (GL_TRIANGLES);                          // Tell OpenGL What We Want To Draw
            for(int i = 0; i < numTriangles; i++)
            {
                MD2Triangle* triangle = triangles + i;
                for(int j = 0; j < 3; j++)
                {
                    MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                    MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                    Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                    Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                    if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0)
                    {
                        normal = Vec3f(0, 0, 1);
                    }
                    glNormal3f(normal[0], normal[1], normal[2]);
                    MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                    glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                    glVertex3f(pos[0], pos[1], pos[2]);
                }
            }
            glEnd ();                                        // Tell OpenGL We've Finished

            glDepthFunc (GL_LESS);                           // Reset The Depth-Testing Mode ( NEW )
            glCullFace (GL_BACK);                            // Reset The Face To Be Culled ( NEW )
            glPolygonMode (GL_BACK, GL_FILL);                // Reset Back-Facing Polygon Drawing Mode ( NEW )
            glDisable (GL_BLEND);
        }

    Whereas this is the drawToon function in the 3DS loader:

        void Model_3DS::drawToon()
        {
            float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

            // ORIGINAL CODE
            if (visible)
            {
                glPushMatrix();

                // Move the model
                glTranslatef(pos.x, pos.y, pos.z);

                // Rotate the model
                glRotatef(rot.x, 1.0f, 0.0f, 0.0f);
                glRotatef(rot.y, 0.0f, 1.0f, 0.0f);
                glRotatef(rot.z, 0.0f, 0.0f, 1.0f);

                glScalef(scale, scale, scale);

                // Loop through the objects
                for (int i = 0; i < numObjects; i++)
                {
                    // Enable texture coordinates, normals, and vertices arrays
                    if (Objects[i].textured)
                        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                    if (lit)
                        glEnableClientState(GL_NORMAL_ARRAY);
                    glEnableClientState(GL_VERTEX_ARRAY);

                    // Point them to the objects arrays
                    if (Objects[i].textured)
                        glTexCoordPointer(2, GL_FLOAT, 0, Objects[i].TexCoords);
                    if (lit)
                        glNormalPointer(GL_FLOAT, 0, Objects[i].Normals);
                    glVertexPointer(3, GL_FLOAT, 0, Objects[i].Vertexes);

                    // Loop through the faces as sorted by material and draw them
                    for (int j = 0; j < Objects[i].numMatFaces; j++)
                    {
                        // Use the material's texture
                        Materials[Objects[i].MatFaces[j].MatIndex].tex.Use();

                        // AFTER THE TEXTURE IS APPLIED I INSERT THE TOON FUNCTIONS HERE (FIRST PASS)
                        glHint (GL_LINE_SMOOTH_HINT, GL_NICEST); // Use The Good Calculations ( NEW )
                        glEnable (GL_LINE_SMOOTH);               // Cel-Shading Code
                        glEnable (GL_TEXTURE_1D);                // Enable 1D Texturing ( NEW )
                        glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW )
                        glColor3f (1.0f, 1.0f, 1.0f);            // Set The Color Of The Model ( NEW )

                        glPushMatrix();

                        // Move the model
                        glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z);

                        // Rotate the model
                        glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f);
                        glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f);
                        glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f);

                        // Draw the faces using an index to the vertex array
                        glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces);

                        glPopMatrix();
                    }

                    glDisable (GL_TEXTURE_1D);               // Disable 1D Textures ( NEW )

                    // THIS IS AN ADDED SECOND PASS AT THE VERTICES FOR THE OUTLINE
                    glEnable (GL_BLEND);                     // Enable Blending ( NEW )
                    glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW )
                    glPolygonMode (GL_BACK, GL_LINE);        // Draw Backfacing Polygons As Wireframes ( NEW )
                    glLineWidth (outlineWidth);              // Set The Line Width ( NEW )
                    glCullFace (GL_FRONT);                   // Don't Draw Any Front-Facing Polygons ( NEW )
                    glDepthFunc (GL_LEQUAL);                 // Change The Depth Mode ( NEW )
                    glColor3fv (&outlineColor[0]);           // Set The Outline Color ( NEW )

                    for (int j = 0; j < Objects[i].numMatFaces; j++)
                    {
                        glPushMatrix();

                        // Move the model
                        glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z);

                        // Rotate the model
                        glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f);
                        glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f);
                        glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f);

                        // Draw the faces using an index to the vertex array
                        glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces);

                        glPopMatrix();
                    }

                    glDepthFunc (GL_LESS);                   // Reset The Depth-Testing Mode ( NEW )
                    glCullFace (GL_BACK);                    // Reset The Face To Be Culled ( NEW )
                    glPolygonMode (GL_BACK, GL_FILL);        // Reset Back-Facing Polygon Drawing Mode ( NEW )
                    glDisable (GL_BLEND);

                glPopMatrix();
            }

    Finally, this is the tex.Use() function, which loads a BMP texture and somehow gets blended perfectly with the toon shading:

        void GLTexture::Use()
        {
            glEnable(GL_TEXTURE_2D);                    // Enable texture mapping
            glBindTexture(GL_TEXTURE_2D, texture[0]);   // Bind the texture as the current one
        }

    Read the article

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having trouble setting up its values from JS. The structure is as follows:

        struct Light {
            vec4 position;
            vec4 ambient;
            vec4 diffuse;
            vec4 specular;
            vec3 spotDirection;
            float spotCutOff;
            float constantAttenuation;
            float linearAttenuation;
            float quadraticAttenuation;
            float spotExponent;
            float spotLightCosCutOff;
        };

        uniform Light lights[numLights];

    After testing LOTS of things I made it work, but I'm not happy with the code I wrote:

        program.uniform.lights = [];
        program.uniform.lights.push({
            position: "",
            diffuse: "",
            specular: "",
            ambient: "",
            spotDirection: "",
            spotCutOff: "",
            constantAttenuation: "",
            linearAttenuation: "",
            quadraticAttenuation: "",
            spotExponent: "",
            spotLightCosCutOff: ""
        });
        program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position");
        program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse");
        program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular");
        program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient");
        ... and so on

    I'm sorry for making you look at this code, I know it's horrible, but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?

    Read the article

  • Android: get contact phone number

    - by ng93
    Hi, I'm trying to get a contact's name and phone number from the contacts list. I'm using:

        contactname = Curser.getString(Curser.getColumnIndex(Contacts.DISPLAY_NAME));

    to get their name, and it works fine. But using:

        contactphone = Curser.getString(Curser.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER));

    causes no warnings or errors and builds fine, yet force closes on both the emulator (2.1) and my HTC Desire (Nexus One 2.2 ROM / HTC Desire 2.1 ROM). Any ideas how to fix it? Oh, and contactname and contactphone are both Strings. Thanks, ng93

    Read the article

  • How do I properly use multithreading with Nvidia PhysX?

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions within a new thread) and FetchResults() (which waits until the physics computations are done). In between Simulate() and FetchResults() you may not "compute new physics". It is proposed (in a sample) that we create a game loop as such:

    1. Logic (you may calculate physics here and other stuff)
    2. Render, with Simulate() at the start of the Render() call and FetchResults() at the end of the Render() call

    However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. Does anyone have a solution to this?
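
    One common arrangement, shown below as a sketch only (method names are placeholders, not the PhysX API), is to kick off the step right after game logic, overlap it with non-physics work, and fetch the results immediately before rendering, so the frame you draw is the one just computed:

        while (running)
        {
            UpdateLogic(dt);             // reads/writes physics state: safe, no step in flight
            physics.Simulate(dt);        // physics starts computing on its own thread
            DoNonPhysicsWork();          // audio, AI, culling: overlaps the physics step
            physics.FetchResults(true);  // block until this frame's step is done
            Render();                    // draws the state computed this frame
        }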

    Read the article

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side. I just can't figure out how to turn it 90 degrees in Y once so it faces the target correctly. I've checked the pivot in Maya, and it doesn't matter how I change it. This is the code I use to calculate how to face the target:

        void LookAt()
        {
            var forwardA = transform.forward;
            var forwardB = (toLookAt.transform.position - transform.position);

            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());

            if (angleDiff > 20)
            {
                //Rotate to
                transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else if (angleDiff < 20)
            {
                transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else
            {
            }
        }

    I'm using Unity3D and would appreciate any help I can get! Thanks!
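
    A hedged Unity sketch of the usual fix for a sideways-modelled turret: compose a constant 90-degree yaw offset after the aim rotation instead of re-pivoting the mesh (the sign of the offset may need flipping; variable names follow the question's own):

        Vector3 dir = toLookAt.transform.position - transform.position;
        dir.y = 0f;                                    // keep the turret level
        Quaternion aim = Quaternion.LookRotation(dir); // face the target
        transform.rotation = aim * Quaternion.Euler(0f, 90f, 0f); // cancel the model's sideways pivot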

    Read the article

  • Win API Basic Paint program

    - by Tom Burman
    Just trying to learn a bit of the Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away rather than waiting for the left click; 2) the line isn't solid, it's very dotty.

        case WM_MOUSEMOVE:
        {
            if (MK_LBUTTON)
            {
                hdc = GetDC(hwnd);
                hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, x, y, NULL);
                LineTo(hdc, LOWORD(lParam), HIWORD(lParam));
                ReleaseDC(hwnd, hdc);
            }
            else
                break;
        }

    Thanks for any help!

    Read the article

  • Black Screen: How to set Projection/View Matrix

    - by Lisa
    I have a Windows Phone 8 C#/XAML with DirectX component project. I'm rendering some particles, but each particle is a rectangle versus a square (as I've set the vertices to be positions equally offset from each other). I used an identity matrix for the view and projection matrix. I decided to add the window's aspect ratio to prevent the rectangles. But now I get a black screen. None of the particles are rendered now. I don't know what's wrong with my matrices. Can anyone see the problem? These are the default matrices in Microsoft's project example.

    View matrix:

        XMVECTOR eye = XMVectorSet(0.0f, 0.7f, 1.5f, 0.0f);
        XMVECTOR at = XMVectorSet(0.0f, -0.1f, 0.0f, 0.0f);
        XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
        XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));

    Projection matrix:

        void CubeRenderer::CreateWindowSizeDependentResources()
        {
            Direct3DBase::CreateWindowSizeDependentResources();
            float aspectRatio = m_windowBounds.Width / m_windowBounds.Height;
            float fovAngleY = 70.0f * XM_PI / 180.0f;
            if (aspectRatio < 1.0f)
            {
                fovAngleY /= aspectRatio;
            }
            XMStoreFloat4x4(&m_constantBufferData.projection, XMMatrixTranspose(XMMatrixPerspectiveFovRH(fovAngleY, aspectRatio, 0.01f, 100.0f)));
        }

    I've tried modifying them to use cocos2d-x's WP8 example:

        XMMATRIX identityMatrix = XMMatrixIdentity();
        float fovy = 60.0f;
        float aspect = m_windowBounds.Width / m_windowBounds.Height;
        float zNear = 0.1f;
        float zFar = 100.0f;
        float xmin, xmax, ymin, ymax;
        ymax = zNear * tanf(fovy * XM_PI / 360);
        ymin = -ymax;
        xmin = ymin * aspect;
        xmax = ymax * aspect;
        XMMATRIX tmpMatrix = XMMatrixPerspectiveOffCenterRH(xmin, xmax, ymin, ymax, zNear, zFar);
        XMMATRIX projectionMatrix = XMMatrixMultiply(tmpMatrix, identityMatrix);

        // View Matrix
        float fEyeX = m_windowBounds.Width * 0.5f;
        float fEyeY = m_windowBounds.Height * 0.5f;
        float fEyeZ = m_windowBounds.Height / 1.1566f;
        float fLookAtX = m_windowBounds.Width * 0.5f;
        float fLookAtY = m_windowBounds.Height * 0.5f;
        float fLookAtZ = 0.0f;
        float fUpX = 0.0f;
        float fUpY = 1.0f;
        float fUpZ = 0.0f;
        XMMATRIX tmpMatrix2 = XMMatrixLookAtRH(XMVectorSet(fEyeX,fEyeY,fEyeZ,0.f), XMVectorSet(fLookAtX,fLookAtY,fLookAtZ,0.f), XMVectorSet(fUpX,fUpY,fUpZ,0.f));
        XMMATRIX viewMatrix = XMMatrixMultiply(tmpMatrix2, identityMatrix);
        XMStoreFloat4x4(&m_constantBufferData.view, viewMatrix);

    Vertex shader:

        cbuffer ModelViewProjectionConstantBuffer : register(b0)
        {
            //matrix model;
            matrix view;
            matrix projection;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float4 color : COLOR;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float4 color : COLOR;
        };

        PixelInputType main(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            //=====================================
            // TODO: ADDED for testing
            input.position.z = 0.0f;
            //=====================================

            // Calculate the position of the vertex against the world, view, and projection matrices.
            //output.position = mul(input.position, model);
            output.position = mul(input.position, view);
            output.position = mul(output.position, projection);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Store the particle color for the pixel shader.
            output.color = input.color;

            return output;
        }

    Before I render the shader, I set the view/projection matrices into the constant buffer:

        void ParticleRenderer::SetShaderParameters()
        {
            ViewProjectionConstantBuffer* dataPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            DX::ThrowIfFailed(m_d3dContext->Map(m_constantBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
            dataPtr = (ViewProjectionConstantBuffer*)mappedResource.pData;
            dataPtr->view = m_constantBufferData.view;
            dataPtr->projection = m_constantBufferData.projection;
            m_d3dContext->Unmap(m_constantBuffer.Get(), 0);

            // Now set the constant buffer in the vertex shader with the updated values.
            m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());

            // Set shader texture resource in the pixel shader.
            m_d3dContext->PSSetShaderResources(0, 1, &m_textureView);
        }

    Nothing, black screen... I tried so many different look-at, eye, and up vectors. I tried transposing the matrices. I've set the particle center position to always be (0, 0, 0); I tried different positions too, just to make sure they're not being rendered offscreen.

    Read the article

  • If statement causing XNA sprites to draw frame by frame

    - by user1489599
    I'm a bit new to XNA, but I wanted to write a simple program that would fire a cannonball from a cannon at a 45-degree angle. It works fine outside of my keyboard I/O if statement, but when I wrap the code in an if statement checking whether the user hits the space bar, the sprite will draw one frame at a time, every time the space bar is hit. This is the code in question:

        if (currentKeyboardState.IsKeyUp(Keys.Space) &&
            previousKeyboardState.IsKeyDown(Keys.Space) &&
            !skullBall.Alive)
        {
            //works outside the keyboard input if statement
            //{
            skullBall.Position = cannon.Position;
            skullBall.DeltaY = -(float)(Math.Sin(MathHelper.ToRadians(45)) * 50/*39.7577*/ * time + 0.5 * (gravity * (time * time)));
            skullBall.DeltaX = (float)(Math.Cos(MathHelper.ToRadians(45)) * 50/*39.7577*/ * time);
            skullBall.Alive = true;
            //}
        }

    The skull ball represents the cannonball, and the cannon is just the starting point. DeltaX and DeltaY are the values I'm using to update the cannonball's position per update. I know it's dumb to have the cannonball start at the cannon's position every time the update is called, but it's not really noticeable right now. I was wondering if, after examining my code, anyone notices any errors that would cause the sprite to display frame by frame instead of drawing it as a full animation of the cannonball leaving the cannon and moving from there.

    Read the article

  • Curiosity on any Smartphones that Run on Android 2.3.3 with Different Screen Resolutions

    - by David Dimalanta
    I have a question about smartphones that run Android 2.3.3. Is the screen size always HVGA, or is this OS (Android 2.3.3) capable of running on a big screen (4" to 5") at about 720x1280? I'm thinking of the game's compatibility depending on the version of the Android OS and the screen resolution, which affects the change of coordinates, especially for assigning touch buttons and drag-and-drop at exact locations, before I decide to make one. My program works on Android 4 ICS and Jelly Bean; however, will it work on Android 2.3.3 with precise touch coordinates, or is that just dependent on the screen resolution (regardless of how large it is) as the X and Y coordinates? And take note, I'm using the Eclipse IDE for Java developers.

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling. Make an array of broadphase checks, then filter them with narrowphase checks, then resolve them:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollitionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])) {
                    collisions->push(possibleCollisions[i]);
                };
            };
            for(i = 0; i < collitionsNumber; i++){
                //CODE FOR RESOLUTION
            };
        };

    Or make the broadphase call the narrowphase, and the narrowphase call the resolution:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            broadphase(b, a->get(index));
        };

        function broadphase(thingy * with, thingy * what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            };
        };

    Events vs. in-the-loop. Fire an event. This abstracts the check away, but it's trickier to make an equal interaction:

        a[index] -> collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution goes here
        }

    Or resolve the collision inside the loop. This glues narrowphase and resolution into one layer:

        if(narrowphase(possibleCollisions[i], a[index])) {
            //CODE GOES HERE
        };

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, such that when the game is started again, a calculation is done for each still-running timer to see if it has finished, in which case the game changes the materials appropriately. I am still trying to figure out what type of timer to use, and I would also welcome any suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
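
    For durations of minutes to days, per-frame timer objects are usually the wrong tool; a common pattern is to store an absolute UTC end time and compare against the clock, which also makes the save/load arithmetic trivial. A minimal sketch (illustrative names, not a Unity API):

        using System;

        [Serializable]
        class LongTimer
        {
            public DateTime EndUtc;  // absolute end time; serialise this to XML

            public bool Done
            {
                get { return DateTime.UtcNow >= EndUtc; }
            }

            public static LongTimer Start(TimeSpan duration)
            {
                return new LongTimer { EndUtc = DateTime.UtcNow + duration };
            }
        }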

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • How do I get the child of a unique parent in ActionScript?

    - by Koen
    My question is about targeting a child with a unique parent. For example, let's say I have a box people can move called box_mc, and 3 platforms it can jump on called Platform_1, Platform_2 and Platform_3. All of these platforms have a child element called hit:

    Platform_1
        Hit
    Platform_2
        Hit
    Platform_3
        Hit

    I use an array and a for each statement to detect whether box_mc hits one of the platforms' children:

        var obj_arr:Array = [Platform_1, Platform_2, Platform_3];
        for each(obj in obj_arr){
            if(box_mc.hitTestObject(obj.hit)){
                trace(obj + " " + obj.hit);
                box_mc.y = obj.hit.y - box_mc.height;
            }
        }

    obj seems to output the unique parent being hit, but obj.hit outputs hit, so my theory is that the change of y is being applied to all the children called hit on the stage. Would it be possible to only detect the child of that specific parent?

    Read the article

  • LWJGL or OpenGL double pixels

    - by Philippe Paré
    I'm working in Java with LWJGL and trying to double all my pixels. I want to draw in an area of 800x450 and then stretch the whole frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot.

    EDIT: I've tried drawing to a texture created in OpenGL by attaching it to the framebuffer, but I can't find a way to use glGenTextures() in Java... so this is not working. Also, I thought about using a shader, but then I would not be able to draw only in the smaller region...

    Read the article

  • How To Smoothly Animate From One Camera Position To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras, and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but more for the camera moving and rotating its view in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime*smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime*smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime*smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime*smooth);

    But the result is actually not that good.
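
    A common alternative, sketched below with the question's own variable names: interpolate the position as one Vector3 and the rotation with Quaternion.Slerp, rather than lerping individual quaternion components, which does not interpolate rotation correctly:

        float t = Time.deltaTime * smooth;
        firstCamera.transform.position = Vector3.Lerp(firstCamera.transform.position, nextCamera.transform.position, t);
        firstCamera.transform.rotation = Quaternion.Slerp(firstCamera.transform.rotation, nextCamera.transform.rotation, t);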

    Read the article

  • How do I draw anti-aliased holes in a bitmap

    - by gyozo kudor
    I have an artillery game (a hobby/learning project), and when the projectile hits, it leaves a hole in the ground. I want this hole to have antialiased edges. I'm using System.Drawing for this. I've tried clipping paths, and drawing with a transparent color using gfx.CompositingMode = CompositingMode.SourceCopy, but both give me the same result. If I draw a circle with a solid color it works fine, but I need a hole: a circle with 0 alpha values. I have enabled these, but they work only with solid colors:

        gfx.CompositingQuality = CompositingQuality.HighQuality;
        gfx.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gfx.SmoothingMode = SmoothingMode.AntiAlias;

    In the two pictures, consider black as being transparent. This is what I have (zoomed in), and what I need is something like the second image (made with Photoshop). This will be just a visual effect; in the code for collision detection I still treat everything with alpha above 128 as solid.

    Edit: I'm using OpenTK for this game, but for this question I don't think it really matters; it is probably GDI+ related.
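
    One hedged System.Drawing approach (a sketch, not a definitive fix): since a SourceCopy erase cannot be antialiased, redraw the terrain onto a fresh transparent bitmap through a path that excludes the crater, letting AntiAlias soften the rim:

        using System.Drawing;
        using System.Drawing.Drawing2D;

        static Bitmap PunchHole(Bitmap terrain, float cx, float cy, float r)
        {
            var result = new Bitmap(terrain.Width, terrain.Height); // starts fully transparent
            using (var gfx = Graphics.FromImage(result))
            using (var path = new GraphicsPath(FillMode.Alternate))
            using (var brush = new TextureBrush(terrain))           // paints the old terrain pixels
            {
                gfx.SmoothingMode = SmoothingMode.AntiAlias;
                path.AddRectangle(new RectangleF(0, 0, terrain.Width, terrain.Height));
                path.AddEllipse(cx - r, cy - r, 2 * r, 2 * r);      // Alternate fill => the ellipse becomes a hole
                gfx.FillPath(brush, path);                          // terrain everywhere except the crater, AA rim
            }
            return result;
        }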

    Read the article

  • Collision Detection for a 2D RPG

    - by PHMitrious
    First of all, I have done some research on this topic before asking, and I'm asking this question as a means of getting some opinions, so that I don't make a decision on my own but take other people's experience into account as well. I'm starting a 2D online RPG project. I am using SFML for graphics and input, and I'm creating a basic game structure, with modules for each part of the game. Well, let me get to the point; I just wanted to give you guys some context.

    I want to decide how I'm going to handle collision detection. I'm going to work with tile maps divided into layers (as usual) and add 2 extra layers, not exactly in the map, for objects. So I'll have collisions between objects and agents (players, NPCs, monsters, spells, etc.) and between agents and tiles. The second kind can be solved easily; the first one needs a bit of work. I considered creating a basic collision-test engine using polygons and a quadtree to reduce the number of tests, since I'm going to be working with big maps with lots of objects, creating both a physical and a graphical world representation. I also considered using a physics engine like Box2D just for collision tests. I think the first approach would take more work on my part, but the second one would have the overhead of using a whole physics engine for collision detection alone, with no physics. What do you guys think?

    Read the article

  • Event Driven Communication in Game Engine - Yes or No?

    - by Bunkai.Satori
    As I am reading the book Game Coding Complete (http://www.amazon.com/Game-Coding-Complete-Third-McShaffry/dp/1584506806/ref=sr_1_1?ie=UTF8&qid=1295978774&sr=8-1), the author recommends event-driven communication among all game objects and modules. Basically, all the living game actors and objects should communicate with the key modules (physics, AI, game logic, game view, etc.) via an internal event messaging system. This would mean designing an efficient event manager as well. My question is whether this is a proven and recommended approach. If it is not properly designed, it might consume a lot of CPU cycles which could be used elsewhere. This is especially true if the game is targeted at a mobile platform. What is your opinion and recommendation, please?
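
    For context, the kind of internal event bus the book describes can be very small; a minimal C# sketch (names are illustrative, not from the book), with handlers keyed by event type so modules never reference each other directly:

        using System;
        using System.Collections.Generic;

        class EventBus
        {
            private readonly Dictionary<Type, List<Action<object>>> handlers =
                new Dictionary<Type, List<Action<object>>>();

            public void Subscribe<T>(Action<T> handler)
            {
                List<Action<object>> list;
                if (!handlers.TryGetValue(typeof(T), out list))
                {
                    list = new List<Action<object>>();
                    handlers[typeof(T)] = list;
                }
                list.Add(e => handler((T)e)); // wrap so all handlers share one list type
            }

            public void Publish<T>(T evt)
            {
                List<Action<object>> list;
                if (handlers.TryGetValue(typeof(T), out list))
                    foreach (var h in list) h(evt);
            }
        }

    A common refinement on constrained (e.g. mobile) targets is to queue published events and drain the queue once per frame, which bounds the per-frame cost of dispatch.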

    Read the article

< Previous Page | 425 426 427 428 429 430 431 432 433 434 435 436  | Next Page >