Search Results

Search found 32277 results on 1292 pages for 'module development'.

Page 552/1292

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now wanted to add some buttons. Before adding the buttons everything seemed to work fine, and I was already using an ID3DXSprite for the text as well (ID3DXFont). Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but being too big it just doesn't fit well. For example, the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels on the left and right; the left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

        // initially the viewport was set to the width/height of the client area

        // clear device
        m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

        // begin scene
        m_d3dDevice->BeginScene();

        // render movie surface (just two triangles to which the movie is rendered)
        m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
        m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
        m_d3dDevice->SetTexture( 0, m_movieTexture );
        m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
        m_d3dDevice->SetFVF(Vertex::FVF_Flags);
        m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

        // render sprites
        m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

        // text drop shadow
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );

        // text
        m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                          &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

        // control object
        m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this

        m_sprite->End();

        // end scene
        m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide) everything seems fine: the objects are positioned where I expect them, just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axes to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple, I just wanted to draw my 2D objects 1:1 at the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...
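
    One hedged thing to check for a uniform ~1.2x stretch like this is whether the back buffer size still matches the window's client area: ID3DXSprite draws in back-buffer pixels, so a 768x432 back buffer presented into a 640x360 client area would scale everything by exactly 1.2. A minimal diagnostic sketch (illustrative, not from the post above; device and hwnd stand for the existing D3D device and window handle):

        #include <windows.h>
        #include <d3d9.h>
        #include <cstdio>

        // Compare the back buffer size with the client area; if they differ,
        // everything drawn in back-buffer pixels is stretched at present time.
        void CheckBackBufferMatchesClient(IDirect3DDevice9* device, HWND hwnd)
        {
            IDirect3DSurface9* backBuffer = NULL;
            if (SUCCEEDED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
            {
                D3DSURFACE_DESC desc;
                backBuffer->GetDesc(&desc);

                RECT client;
                GetClientRect(hwnd, &client);
                const LONG clientW = client.right - client.left;
                const LONG clientH = client.bottom - client.top;

                if ((LONG)desc.Width != clientW || (LONG)desc.Height != clientH)
                {
                    printf("Back buffer %ux%u does not match client area %ldx%ld\n",
                           desc.Width, desc.Height, clientW, clientH);
                }
                backBuffer->Release();
            }
        }

    If the sizes differ, one fix is to reset the device with BackBufferWidth/BackBufferHeight set to the client size (or 0, which in windowed mode takes the size from the window), so sprites map 1:1 to screen pixels again.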

    Read the article

  • Writing a Master's Thesis on evaluating visual scripting systems

    - by user1107412
    I am thinking of writing my Master's thesis around theorizing, and then implementing, a PlayMaker- or Kismet-like tool in Unity (building game logic by visually arranging FSMs). The only thing I am still unsure about is the actual research question I should pose. I was hoping that the more experienced game designers out there might know. Update: What about narrowing the use of visual programming to graphically designing FSM-Action-Transition flows, which can then be attached to game entities (very much like http://playmaker.com does it)?

    Read the article

  • Bump mapping Problem GLSL

    - by jmfel1926
    I am having a slight problem with my bump mapping project. Although everything works OK (at least as far as I know), there is a slight mistake somewhere and I get incorrect shading on the brick wall when the light moves to one side or the other, as seen in the picture below: the light is on the right side, so the shading on the wall should be the other way. I have provided the shaders to help find the issue (I do not have much experience with shaders). Shaders (vertex first, then fragment):

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        attribute vec3 tangent;
        attribute vec3 binormal;

        uniform vec3 lightpos;
        uniform mat4 cameraMat;

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
            position = vec3(gl_ModelViewMatrix * gl_Vertex);
            lightvec = vec3(cameraMat * vec4(lightpos, 1.0)) - position;
            vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex);
            viewVec = normalize(-eyeVec);
        }

        uniform sampler2D colormap;
        uniform sampler2D normalmap;

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        vec3 vv;

        uniform float diffuset;
        uniform float specularterm;
        uniform float ambientterm;

        void main()
        {
            vv = viewVec;
            vec3 normals = normalize(texture2D(normalmap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
            normals.y = -normals.y;
            //normals = (normals * gl_NormalMatrix).xyz;
            vec3 distance = lightvec;
            float dist_number = length(distance);
            float final_dist_number = 2.0 / pow(dist_number, diffuset);
            vec3 light_dir = normalize(lightvec);
            vec3 Halfvector = normalize(light_dir + vv);
            float angle = max(dot(Halfvector, normals), 0.0);
            angle = pow(angle, specularterm);
            vec3 specular = vec3(angle, angle, angle);
            float diffuseterm = max(dot(light_dir, normals), 0.0);
            vec3 diffuse = diffuseterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 ambient = ambientterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 diffusefinal = diffuse * final_dist_number;
            vec3 finalcolor = diffusefinal + specular + ambient;
            gl_FragColor = vec4(finalcolor, 1.0);
        }
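
    Hedged observation only: the tangent and binormal attributes above are declared but never used, so the sampled normal stays in tangent space while lightvec and viewVec are in eye space, which typically makes the lighting flip as the light moves. The usual fix is to rotate the sampled normal by the per-vertex TBN basis before doing the dot products; in GLSL that is essentially normalize(n.x * T + n.y * B + n.z * N), with T, B and N passed as eye-space varyings. The same math as a self-contained C++ sketch (illustrative types, not the project's code):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  scale(Vec3 v, float s) { return Vec3{v.x * s, v.y * s, v.z * s}; }
        static Vec3  add(Vec3 a, Vec3 b)    { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }
        static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

        // Rotate a tangent-space normal (from the normal map) into the space the
        // light and view vectors live in, using tangent T, binormal B and the
        // geometric normal N -- all assumed to already be in that same space.
        Vec3 tangentToEye(Vec3 nTangent, Vec3 T, Vec3 B, Vec3 N)
        {
            Vec3 n = add(add(scale(T, nTangent.x), scale(B, nTangent.y)), scale(N, nTangent.z));
            return normalize(n);
        }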

    Read the article

  • Using "screenshots" in a game, is it allowed?

    - by DevilWithin
    Let's say I have a game that is some kind of quiz, and its questions are themed around gaming. For it to be interesting, I would need to make references to well-known games and game-related material. In a copyright-infringement sense, could I have problems with this? Imagine a question such as "What was the currency used in game X?" or "Which company made game Y?". Also, the same applies to screenshots of known games with a question near them, such as "What game is this image from?". Thoughts? Thanks

    Read the article

  • On Screen Coin Animation

    - by Siddharth
    I am working on a side-scrolling skater game. I want to perform a coin animation: when the player collects a coin, it should move up the screen and attach to the currency sprite. The main character and the coins are in the game scene, while the currency sprite is in the HUD layer, and this situation creates the problem for me. I cannot apply a modifier directly to the collected coin, because it is a side-scrolling game and the coin is reached at a different position depending on the main character's speed (that much I have checked). So I have to spawn another coin in the HUD layer at the same position the game-layer coin had, and move that one upward. I can get its x position correctly, but I am not able to get its y position right; the main character often moves downward, so I frequently get negative values. I also tried the following code, but I get back the same coordinates I pass in, with no difference:

        float[] position = GameHUD.this.convertSceneCoordinatesToLocalCoordinates(
                GameManager.getInstance().getCoinX(),
                GameManager.getInstance().getCoinY());

    Please can someone provide me some guidance on this? I am close to completing my game.

    EDIT: The game layer and the HUD layer are totally separate. The actual coin that the player collects is in the game layer, and at the same position I want to spawn another coin in the HUD layer to perform the animation. Spawning the coin in the HUD layer is the recommended approach, because only through that can I achieve what I am after.

    Read the article

  • Calculate vector direction

    - by Starkers
    Is the direction angle always measured from the positive x axis? Does a vector in the +,+ quadrant always have a direction between 0 and 90, in -,+ between 90 and 180, in -,- between 180 and 270, and in +,- between 270 and 360? Also, how should we calculate the direction using tan? Would that mean nested if statements to find out which quadrant we're in, and then applying the appropriate "workarounds"? E.g. if we were in the -,+ quadrant (like in the diagram), would the angle from the +x axis be 90 + tan^-1(y/x), with the 90 added only because we're in the -,+ quadrant? (That's just a quick guess and may be off; I just want to know whether we use nested if statements to get the angle from the +x axis.) Finally, should we express the direction in degrees or radians?
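
    A hedged sketch of the usual approach: std::atan2(y, x) already handles the quadrant logic internally, returning an angle in radians in (-pi, pi] measured counter-clockwise from the +x axis, so no nested if statements are needed; the result can be shifted into [0, 360) degrees if preferred (illustrative code, not from the question):

        #include <cmath>

        // Direction of the vector (x, y), measured counter-clockwise from the
        // +x axis, in degrees in [0, 360). atan2 accounts for the signs of both
        // components, so no per-quadrant special cases are required.
        double directionDegrees(double x, double y)
        {
            const double pi = 3.14159265358979323846;
            double degrees = std::atan2(y, x) * 180.0 / pi;   // (-180, 180]
            if (degrees < 0.0)
                degrees += 360.0;                             // [0, 360)
            return degrees;
        }

    For example, directionDegrees(-1.0, 1.0) returns 135, which falls in the 90-180 range expected for the -,+ quadrant. Degrees versus radians is purely a convention; most math library functions take and return radians.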

    Read the article

  • Unity falling body pendulum behaviour

    - by user3447980
    I wonder if someone could provide some guidance. I'm attempting to create pendulum-like behaviour in 2D space in Unity without using a hinge joint. Essentially I want a falling body to act as though it were restrained at the radius of a point, while being subject to gravity, friction, etc. I've tried many modifications of this code, and have come up with some cool 'strange-attractor'-like behaviour, but I cannot for the life of me create a realistic pendulum-like action. This is what I have so far:

        startingposition = transform.position;                                      // get start position
        newposition = startingposition + velocity;                                  // add old velocity
        newposition.y -= gravity * Time.deltaTime;                                  // add gravity
        newposition = pivot + Vector2.ClampMagnitude(newposition - pivot, radius);  // clamp body at radius???
        velocity = newposition - startingposition;                                  // get new velocity
        transform.Translate(velocity * Time.deltaTime, Space.World);                // apply to transform

    So I'm working out the new position based on the old velocity plus gravity, then constraining it to a distance from a point, which is the part of the code I cannot get right. Is this a logical way to go about it?
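
    For comparison, here is a minimal Verlet-style sketch of the same idea in plain C++ (chosen only to keep the example self-contained; the math carries over to the Unity code above). The velocity is implicit in the difference between the current and previous positions, so projecting the point back onto the circle also removes the radial part of the velocity:

        #include <cmath>

        struct Vec2 { float x, y; };

        static Vec2  sub(Vec2 a, Vec2 b)  { return Vec2{a.x - b.x, a.y - b.y}; }
        static Vec2  add(Vec2 a, Vec2 b)  { return Vec2{a.x + b.x, a.y + b.y}; }
        static Vec2  mul(Vec2 a, float s) { return Vec2{a.x * s, a.y * s}; }
        static float len(Vec2 a)          { return std::sqrt(a.x * a.x + a.y * a.y); }

        struct Pendulum {
            Vec2  pivot;
            float radius;
            Vec2  pos;
            Vec2  prevPos;   // position one step ago (encodes the velocity)

            void step(float dt, float gravity, float damping = 0.999f)
            {
                Vec2 velocity = mul(sub(pos, prevPos), damping);  // implicit velocity
                prevPos = pos;
                pos = add(pos, velocity);
                pos.y -= gravity * dt * dt;                       // acceleration * dt^2 in Verlet

                // Constraint: keep the bob exactly `radius` away from the pivot.
                Vec2 toBob = sub(pos, pivot);
                float d = len(toBob);
                if (d > 0.0001f)
                    pos = add(pivot, mul(toBob, radius / d));
            }
        };

    One hedged reading of the snippet above: the clamped newposition is never applied directly; the transform is moved by velocity * Time.deltaTime instead, so the body never actually lands on the constraint circle, which may explain the strange-attractor look.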

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques I have seen so far for controlling the FPS use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when rendering started, and I then have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.
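
    Just to illustrate the CPU-side timing the question refers to: OpenGL only runs the shaders when the application issues draw calls, so the frequency is whatever the render loop (plus vsync, if enabled) imposes; glutGet(GLUT_ELAPSED_TIME) is simply the conventional way to measure that from the CPU. A hedged sketch of a typical frame cap built on it (assumes a GLUT program; names are illustrative):

        #include <GL/glut.h>

        static const int kTargetFps   = 60;
        static int       lastRenderMs = 0;

        // Registered with glutIdleFunc(); requests a redraw only when enough
        // time has passed, so the display callback (and thus the shaders)
        // runs roughly kTargetFps times per second.
        void onIdle(void)
        {
            int now = glutGet(GLUT_ELAPSED_TIME);        // milliseconds since glutInit
            if (now - lastRenderMs >= 1000 / kTargetFps) {
                lastRenderMs = now;
                glutPostRedisplay();
            }
        }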

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context

    In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager that is structured a bit like the image shows. So for instance, if the result of examining an object should make the character say hello, walk a bit and then sit down, I simply spawn the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (an entire line in the image above) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed.

    Problem

    Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem, but I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to being simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I'm also having a few other concerns related to it (i.e. all possible implications of the same problem above):

    - If I use a server/client architecture (with one of the players acting as both a server and a client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group?
    - What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed on a client could lead to inconsistent states of the world. How do I safeguard against that sort of problem?
    - What if I add too many actions at once, won't that cause problems for the connection? Any way to alleviate that?
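
    A hedged sketch of the server-authoritative variant described in the first concern above, with made-up message types and no real socket code (the transport is assumed reliable and ordered, e.g. TCP): clients only send spawn requests, the server stamps each accepted group with a sequence number and broadcasts it, and every client applies groups strictly in sequence order, which also covers the ordering worry.

        #include <cstdint>
        #include <map>
        #include <string>
        #include <vector>

        // Hypothetical wire message: the server assigns `sequence` monotonically.
        struct ActionGroupMessage {
            uint32_t sequence;
            std::vector<std::string> actions;   // serialized actions, in order
        };

        class ReplicatedActionManager {
        public:
            // Called on every client (including the host's own local client)
            // when a broadcast arrives. Groups received early are buffered
            // until the gap in sequence numbers is filled.
            void OnMessage(const ActionGroupMessage& msg)
            {
                pending_[msg.sequence] = msg.actions;
                while (!pending_.empty() && pending_.begin()->first == nextExpected_) {
                    groups_.push_back(pending_.begin()->second);  // i.e. add to the action manager
                    pending_.erase(pending_.begin());
                    ++nextExpected_;
                }
            }

        private:
            uint32_t nextExpected_ = 0;
            std::map<uint32_t, std::vector<std::string>> pending_;
            std::vector<std::vector<std::string>> groups_;        // the List<List<string>> above
        };

    With TCP the pending buffer is largely redundant (delivery is already ordered); it matters mainly if the transport is UDP-based. Sending many groups at once is usually handled by batching them into one message per tick rather than one message per group.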

    Read the article

  • Any technical references for game-oriented icons and symbols?

    - by willc2
    To make localizing easier, I'm using icons to show in-game information like achievements and bonuses. Coming up with good designs isn't easy, especially when it has to be integrated into the rest of the game's art style. Can I do better than looking at some random selection of existing games? Are there any reference books or sites that cover game graphics specifically? I'm looking for more theory and best-practices rather than pre-made graphics.

    Read the article

  • Design leaderboard ratings for quiz games

    - by PeterK
    Back in March 2011 I started the following post: How to design a leaderboard? Now my quiz game has been out for approximately a year and has sold pretty decently. I am working on updating the game design and am again looking into the leaderboard design to make it better, as I am not happy with it. Currently I rate players on the number of correct answers, which is not good as it does not consider things like the number of games, difficulty levels, etc. I also have "extended" stats behind the UITableView (leaderboard). The setup is:

    - A player can play at three levels of difficulty: hard, medium or easy
    - Difficulty levels can be mixed between players in a game
    - Each game can have one to six players, so there can be single games or duels
    - Between 2 and 30 questions per game

    As I am considering integrating a Game Center leaderboard, I need to design a better rating system, so I would like to ask for some ideas on how to do the rating based on the above. I am thinking about how much a point would be worth and what it includes.
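
    Purely as an illustration of one way to fold those factors into a single leaderboard number (the weights are invented, not a recommendation), a per-game score could weight each correct answer by its difficulty and temper it with accuracy, so long sloppy games do not automatically beat short clean ones:

        // Hypothetical rating sketch: every constant here is an assumption.
        // difficultyMultiplier: e.g. 1.0 easy, 1.5 medium, 2.0 hard (made up).
        double GameScore(int correct, int questions, double difficultyMultiplier)
        {
            if (questions <= 0 || correct < 0) return 0.0;
            double accuracy = static_cast<double>(correct) / questions;
            return correct * difficultyMultiplier * (0.5 + 0.5 * accuracy);
        }

    The leaderboard value could then be a running total (rewards playing a lot) or an average over the last N games (rewards consistency); Game Center stores a single 64-bit integer per score, so whatever formula is chosen has to collapse to one number.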

    Read the article

  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering:

        double currTime;
        double prevTime = System.nanoTime() / NANO_TO_SEC;
        double FPSTIMER = System.nanoTime();
        double maxTimeDiff = 100.0 / 1000.0;
        double delta = 1.0 / 60.0;
        int processes = 0, frames = 0;

        while (true) {
            currTime = System.nanoTime() / NANO_TO_SEC;
            if (currTime - prevTime > maxTimeDiff)
                prevTime = currTime;
            if (currTime >= prevTime) {
                process();
                processes++;
                prevTime += delta;
                if (currTime < prevTime) {
                    render();
                    frames++;
                }
            } else {
                try {
                    Thread.sleep((long) (1000 * (prevTime - currTime)));
                } catch (Exception e) {}
            }
            if (System.nanoTime() - FPSTIMER > 1000000000.0) {
                System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms");
                processes = frames = 0;
                FPSTIMER += 1000000000.0;
            }
        }

    But for this game loop, I get really minor stuttering where the movement does not look smooth:

        long prevTime = System.currentTimeMillis();
        long prevRenderTime = 0;
        long currRenderTime = 0;
        long delta = 0;
        long msPerTick = 1000 / 60;
        int frames = 0;
        int ticks = 0;
        double FPSTIMER = System.currentTimeMillis();

        while (true) {
            long currTime = System.currentTimeMillis();
            delta += (currTime - prevTime) / msPerTick;
            prevTime = currTime;
            while (delta >= 1) {
                ticks++;
                process();
                delta -= 1;
            }
            prevRenderTime = System.currentTimeMillis();
            render();
            frames++;
            currRenderTime = System.currentTimeMillis();
            try {
                Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime)));
            } catch (Exception e) {}
            if (System.currentTimeMillis() - FPSTIMER > 1000.0) {
                System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms");
                ticks = frames = 0;
                FPSTIMER += 1000.0;
            }
        }

    Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the FPS for the second game loop, the stuttering goes away, which doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code, with just my thread-sleeping code added in.
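
    One classic cause of this kind of barely visible stutter is rendering only whole ticks: with an uncapped frame rate the error is hidden, but once frames and ticks run at nearly the same capped rate, each frame shows a state that is anywhere from 0 to 1 tick old. A hedged sketch of the usual remedy, interpolating between the last two simulation states at render time (written in C++ only to keep it self-contained; process/render/state handling are placeholders):

        #include <chrono>

        // Fixed-timestep update with render-time interpolation: the simulation
        // advances in constant dt steps, and rendering blends the previous and
        // current states by `alpha`, so motion looks smooth even when frames
        // and ticks do not line up.
        void runLoop(bool& running)
        {
            using clock = std::chrono::steady_clock;
            const double dt = 1.0 / 60.0;            // simulation step in seconds
            double accumulator = 0.0;
            auto previous = clock::now();

            while (running) {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                while (accumulator >= dt) {          // run zero or more whole ticks
                    // previousState = currentState; process(dt);
                    accumulator -= dt;
                }

                double alpha = accumulator / dt;     // 0..1, fraction of the next tick
                // render(lerp(previousState, currentState, alpha));
                (void)alpha;
            }
        }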

    Read the article

  • Car brands and models licensing

    - by Ju-v
    We are a small team working on a car racing game, but we don't know about the licensing process for branded cars like Nissan, Lamborghini, Chevrolet, etc. Do we need to buy a licence to use real car brand names, models, logos, and so on, or can we use them for free? A second option we are thinking about is using made-up brand names with real car models; is that possible? If someone has experience with this, feel free to share it. Any information about it is welcome.

    Read the article

  • Jitter during wall collisions with Bullet Physics: contact/penetration tolerance?

    - by Niriel
    I use the Bullet physics engine through Panda3D. My scene is still very simple, think 'Wolfenstein3D': tile-based, walls are solid cubes. I expect walls to block the player, and I expect the player to slide along the walls in case of non-normal incidence. What I get is what I expect, with one difference: there is some jitter. If I try to force myself into the wall, then I see the frames blinking quickly between two positions. These differ by about 0.04 units of distance, which corresponds to 4 cm in my game. I noticed 4 cm elsewhere too: the bottom of my player capsule is 4 cm below ground when at rest. Does that mean that there is somewhere in the Bullet engine a default 0.04-units-long tolerance to differentiate contact from collision? If so, what should I do? Should I change the scale of my game so that these 0.04 units correspond to 0.4 cm, making the jitter ten times smaller? Or can I ask Bullet to change its tolerance to a smaller value?

    Edit: This is the jitter I get (6.155 - 6.118 = 0.036):

        LPoint3f(0, 6.11694, 0.835)
        LPoint3f(0, 6.15499, 0.835)
        LPoint3f(0, 6.11802, 0.835)
        LPoint3f(0, 6.15545, 0.835)
        LPoint3f(0, 6.11817, 0.835)
        LPoint3f(0, 6.15726, 0.835)
        LPoint3f(0, 6.11876, 0.835)
        LPoint3f(0, 6.15911, 0.835)
        LPoint3f(0, 6.11937, 0.835)

    I found a setMargin method. I set it to 5 mm both on the BoxShape for the walls and on the capsule shape for the player. It still jitters by about 35 mm, as illustrated by this log (11.117 - 11.082 = 0.035):

        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1169, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1175, 0.905)
        LPoint3f(0, 11.0822, 0.905)
        LPoint3f(0, 11.1178, 0.905)
        LPoint3f(0, 11.0823, 0.905)
        LPoint3f(0, 11.1183, 0.905)

    The margin on the capsule did change my penetration with the floor, though; I'm a bit higher (0.905 instead of 0.835). However, it did not change anything when colliding with the walls. How can I make the collisions against the walls less jittery?

    Edit, the day after: After more investigation, it appears that dynamic objects behave well. My problem comes from the btKinematicCharacterController that I use for moving my character; that stuff is totally bugged, according to the whole Internet :/.

    Read the article

  • Store and create game objects at positions along terrain

    - by Alex
    I have a circular character that rolls down terrain like that shown in the picture below. The terrain is created from an array holding 1000 points. The ground is drawn one screen width in front and one screen width behind, so as the character moves, edges are created in front and removed behind. My problem is that I want to create Box2D bodies at certain locations along the path, and I need a way to store these creator methods or objects: some way to store the position at which each body should be created, plus some pointer to a function that creates it once the character is in range. I guess this would be an array of some sort that is checked each time the ground is updated; if an entry is in range, its function is executed and it is removed from the array. But I'm not sure if it's even possible to store pointers to functions with parameters included... any help is much appreciated!
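
    It is possible: std::function plus lambdas (or boost::function/bind on older compilers) lets a creation call be stored together with all of its parameters already bound. A hedged sketch with invented names (world, makeCrate, etc. are placeholders for whatever the game actually uses):

        #include <cstddef>
        #include <functional>
        #include <vector>

        // Each entry remembers where along the terrain it belongs and a callable
        // that builds the Box2D body when the player gets close enough.
        struct PendingSpawn {
            float x;                        // world x at which to spawn
            std::function<void()> create;   // creation call with parameters bound
        };

        std::vector<PendingSpawn> pendingSpawns;

        // Registering a spawn (makeCrate is a hypothetical factory function):
        //   pendingSpawns.push_back({ 250.0f, [] { makeCrate(world, 250.0f, 80.0f); } });

        // Called from the ground-update step: create everything now in range.
        void spawnInRange(float playerX, float screenWidth)
        {
            for (std::size_t i = 0; i < pendingSpawns.size(); ) {
                if (pendingSpawns[i].x < playerX + screenWidth) {
                    pendingSpawns[i].create();
                    pendingSpawns[i] = pendingSpawns.back();   // unordered erase
                    pendingSpawns.pop_back();
                } else {
                    ++i;
                }
            }
        }

    The key idea is that the lambda captures the parameters at registration time; if the project is Objective-C++ (cocos2d plus Box2D, say), blocks work the same way.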

    Read the article

  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my RightHand joint touches the 3D model. It goes like this: even though the code somehow works, it is not very accurate. For example, it will play the sound when my hand is slightly under my 3D object, not exactly on it. How do I make it more accurate? Here is the code (handX and handY are the values coming from the skeleton data, RightHand.Joint.X, etc.); also, this calculation doesn't work with animated sprites, which I need it to do:

        foreach (_3DModel s in Solar)
        {
            float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X));
            float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y));
            float z = (float)Math.Floor((handZ) / 4 * 20000);

            if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15)
            {
                //Exit();
                PlaySound("hyperspace_activate");
                Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " "
                    + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
            else
            {
                Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " "
                    + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
            }
        }

    Read the article

  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup like lights. Please have a look at my vertex shader code. Now, light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup:

        - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera
        {
            aCamera.aspect = (float)width / (float)height;
            float aspect = aCamera.aspect;
            float far = aCamera.far;
            float near = aCamera.near;
            float vFOV = aCamera.fieldOfView;
            float top = near * tanf(M_PI * vFOV / 360.0f);
            float bottom = -top;
            float right = aspect * top;
            float left = -right;

            // projection
            GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far));

            // identity modelview
            GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity);

            // switch to left handed coord system (forward = z+)
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1));

            // transform camera
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation)));
            GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z);
        }

        - (GLKMatrix4)modelviewMatrix
        {
            return GLKMatrixStackGetMatrix4(modelviewStack);
        }

        - (GLKMatrix4)projectionMatrix
        {
            return GLKMatrixStackGetMatrix4(projectionStack);
        }

        - (GLKMatrix4)modelviewProjectionMatrix
        {
            return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]);
        }

        - (GLKMatrix3)normalMatrix
        {
            return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL);
        }

    After that, I save the lightMatrix like this:

        [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera];
        self.lightMatrix = [self.renderer modelviewProjectionMatrix];

    And just before I render a 3D entity of the scene graph, I set up the light config for its shader with the lightMatrix like this:

        - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix
        {
            N3DLight transformedLight = N3DLightMakeDisabled();
            if (N3DLightIsDirectional(light)) {
                GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v);
                direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right!
                transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular);
            } else {
                ...
            }
            return transformedLight;
        }

    You see the line where I negate the direction!? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me to get rid of the hack. I'm scared that this has something to do with my switch to a left-handed coordinate system. My vertex shader looks like this:

        attribute highp vec4 inPosition;
        attribute lowp  vec4 inNormal;
        ...
        uniform highp mat4 MVP;
        uniform highp mat4 MV;
        uniform lowp  mat3 N;
        uniform lowp  vec4 constantColor;
        uniform lowp  vec4 ambient;
        uniform lowp  vec4 light0Position;
        uniform lowp  vec4 light0Diffuse;
        uniform lowp  vec4 light0Specular;

        varying lowp vec4 vColor;
        varying lowp vec3 vTexCoord0;

        vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal)
        {
            float NdotL = max(dot(normal, dir), 0.0);
            return NdotL * diffuse;
        }
        ...

        vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal)
        {
            if (pos.w == 0.0) {
                // Directional Light
                return calcDirectional(normalize(pos.xyz), diffuse, specular, normal);
            } else {
                ...
            }
        }

        void main(void)
        {
            // position
            highp vec4 position = MVP * inPosition;
            gl_Position = position;

            // normal
            lowp vec3 normal = inNormal.xyz / inNormal.w;
            normal = N * normal;
            normal = normalize(normal);

            // colors
            vColor = constantColor * ambient;

            // add lights
            vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal);
            ...
        }

    Read the article

  • OpenGL lighting with dynamic geometry

    - by Tank
    I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (a fixed 3D grid with custom geometry in each cell) and needs some light to get more depth and in general look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can reach. Here's a render of a typical situation: the lamp is positioned behind the wall at the top, and in the hollow cube there's a hole in the back, so that light can shine through. (I don't want soft shadows; this is just for illustration.) While spending the whole day searching through Google, I stumbled upon keywords like deferred rendering, forward rendering, ambient occlusion, screen space ambient occlusion, etc. Some articles/tutorials even refer to "normal shading", but to be honest I don't really know how to do even simple shading. OpenGL of course has a fixed lighting pipeline with 8 possible light sources; however, they just illuminate all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers in the right direction. I don't need complete solutions or similar, just good sources with information understandable for someone with nearly no lighting experience (preferably with OpenGL).

    Read the article

  • One-way platforms in UDK

    - by Jordaan Mylonas
    I'm looking to make a multiplayer platforming game using UDK. I'm currently doing feasibility research to make sure I will reasonably be able to do all of the technical things I want to do. The first major hurdle I've come across without being able to find an answer is one-way platforms, that is to say, platforms through which a player can jump up but not fall through (unless they choose to). These are commonly seen in games like Mario, Kirby and Smash Bros. Does anyone know how such a system would work within UDK? I can think of solutions that might work for single-player, but not multi.

    Read the article

  • Determining relative velocities on impact?

    - by meds
    I'm trying to figure out a way to determine the relative velocity of a body colliding with another in a 2D environment. For example, if one body is moving at (1,0) and another travelling behind it collides with it from behind at (2,0), the velocity of the impact relative to the first body was (1,0). I need a method which takes in two velocities, one belonging to the body the velocity is being measured against and the other belonging to the impacting body, and returns the relative velocity.
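
    If I read the setup right, this is plain vector subtraction; a minimal sketch (illustrative types):

        struct Vec2 { float x, y; };

        // Velocity of `impactor` as seen from `reference`. With reference = (1, 0)
        // and impactor = (2, 0) this returns (1, 0), matching the example above.
        Vec2 relativeVelocity(Vec2 reference, Vec2 impactor)
        {
            return Vec2{ impactor.x - reference.x, impactor.y - reference.y };
        }

    For collision response, the relative velocity is usually also projected onto the contact normal (a dot product) to get the approach speed along the normal.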

    Read the article

  • Movement in RPG

    - by user1264811
    I want to make an RPG in which I move tile by tile, so when I hit up, the tile row that I am on decreases by one, for example. Also, it's supposed to be slow movement so that I can see the change in tile, i.e. I can see my sprite move from tile to tile. Currently, with the code I have, when I hit a direction on my keyboard I move several blocks within seconds, and by the time I release the button I have already gotten a NullPointerException because I have left the map. How can I slow down the movement?
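
    A hedged sketch of the usual pattern (plain C++ for illustration; the same structure works in Java): a key press only starts a move, input is ignored while a move is in progress, and the drawn position is interpolated between the old and new tile, so one press moves exactly one tile and the transition is visible. Checking the target tile against the map bounds before starting the move also avoids walking off the map.

        #include <algorithm>

        struct TileMover {
            int   tileX = 5, tileY = 5;      // logical position, in tiles
            int   fromX = 5, fromY = 5;      // tile the current move started from
            float progress = 1.0f;           // 1.0 means "not moving"
            float tilesPerSecond = 4.0f;

            bool moving() const { return progress < 1.0f; }

            void tryMove(int dx, int dy, int mapW, int mapH) {   // call on key press
                if (moving()) return;                            // ignore input mid-step
                int nx = tileX + dx, ny = tileY + dy;
                if (nx < 0 || ny < 0 || nx >= mapW || ny >= mapH) return;
                fromX = tileX; fromY = tileY;
                tileX = nx;    tileY = ny;
                progress = 0.0f;
            }

            void update(float dtSeconds) {
                if (moving())
                    progress = std::min(1.0f, progress + tilesPerSecond * dtSeconds);
            }

            // Pixel position for rendering (tileSize = pixels per tile).
            float drawX(float tileSize) const { return (fromX + (tileX - fromX) * progress) * tileSize; }
            float drawY(float tileSize) const { return (fromY + (tileY - fromY) * progress) * tileSize; }
        };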

    Read the article

  • HLSL What you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's vertex shaders (the metal one) I found the following code:

        // transform object normals, tangents, & binormals to world-space:
        float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;

        // provide tranform from "view" or "eye" coords back to world-space:
        float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;
        ...
        float4 Po = float4(IN.Position.xyz, 1);   // homogeneous location coordinates
        float4 Pw = mul(Po, WorldXf);             // convert to "world" space
        OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz);

    The term OUT.WorldView is subsequently used in a pixel shader to compute lighting:

        float3 Ln = normalize(IN.LightVec.xyz);
        float3 Nn = normalize(IN.WorldNormal);
        float3 Vn = normalize(IN.WorldView);
        float3 Hn = normalize(Vn + Ln);
        float4 litV = lit(dot(Ln, Nn), dot(Hn, Nn), SpecExpon);
        DiffuseContrib = litV.y * Kd * LightColor + AmbiColor;
        SpecularContrib = litV.z * LightColor;

    Can anyone tell me what exactly WorldView is here? And why do they add it to the normal?

    Read the article

  • What's the best way to compare blocks in a matching game that can be multiple colors?

    - by Ryan Detzel
    I have a match 3-4 game and the blocks can be one of 7 colors. There are an additional 7 blocks that are a mix of the original 7 colors, so for example there is a red block and a blue block, and there is also a red/blue block which can be matched with either the red or the blue. My original thought is just to use binary operations, so:

        int red     = 0x000000001;
        int blue    = 0x000000010;
        int redblue = 0x000000011;

    Then just do an & operation to see if they match. Does this sound like a decent plan, or am I overcomplicating it?

    Edit: Better yet, so it's more readable:

        int red        = 1;
        int blue       = 2;
        int red_blue   = 3;
        int yellow     = 4;
        int red_yellow = 5;

    Maybe as defines or static vars?
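
    A hedged sketch of the same scheme with one bit per base color, which reads a little more clearly than the hex constants above and extends naturally to all 7 base colors (names are illustrative):

        // Each base color gets its own bit; mixed blocks are the OR of their two
        // bases, and two blocks match when they share at least one bit.
        enum Color : unsigned {
            Red    = 1 << 0,
            Blue   = 1 << 1,
            Yellow = 1 << 2,
            // ... up to 7 base colors, then the mixes:
            RedBlue    = Red | Blue,
            RedYellow  = Red | Yellow,
            BlueYellow = Blue | Yellow,
        };

        inline bool matches(unsigned a, unsigned b)
        {
            return (a & b) != 0;   // matches(RedBlue, Red) -> true, matches(RedBlue, Yellow) -> false
        }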

    Read the article

  • Difference between Sound and Music

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situations would one use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specific issues with channel limitations, and do one or both have the potential to be dropped from their channel unreliably? What are the risks of playing music as a Sound?

    Read the article

  • How do i make a minecraft server mod? [closed]

    - by Simon
    Possible Duplicate: Mods for Minecraft Server - how does it work?

    I have made some Minecraft client mods, but I started a server a month ago and I want to make a mod for it, and I can't find any tutorials on the internet. How do the other guys making those mods for Minecraft servers know what to do? Do they just push forward experimenting, as I tried, or are they doing something else? I would be glad if someone could tell me how to do it, or find tutorials for me, because I have tried to find them in nearly a week of searching. But I guess I'm searching in the wrong spot of the internet, what do I know :o

    Read the article
