Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction: I'll start by explaining my setup, showing samples as I go. I'm using these tools: OpenGL 3.3, GLSL 330, C++.

    Problem: when I render the Wavefront .obj 3D model, I get a very strange visual glitch. The model was supposed to be a square, but instead it's a triangulated mess, with parts of the vertices stretched in massive amounts towards the bottom-left side of the frustum.

    Explanation: I'm using std::vector to store my Wavefront .obj model data, using sscanf to get the floating-point values into the structure members x, y, z, and storing them into the Points structure variable p:

        int index = IndexAssigner(1, 1);
        ifstream file(list[index].c_str());
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof()) {
            char modelbuffer[10000];
            file.getline(modelbuffer, 10000);
            switch (modelbuffer[0]) {
            case 'v':
                sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                points.push_back(p);
                break;
            case 'f':
                sscanf(modelbuffer, "f %d %d %d %d", face, face + 1, face + 2, face + 3);
                faces.push_back(face[0]);
                faces.push_back(face[1]);
                faces.push_back(face[2]);
                faces.push_back(face[3]);
            }
            // Turn on FileReader aka "RENDER CODE"
            FileReader = true;
        }

    Then I render the points vector using the .data() member of std::vector. Other declarations:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);

    Vector declarations:

        struct Point { float x, y, z; };
        std::vector<int> faces;
        std::vector<Point> points;

    Render code:

        glGenBuffers(1, &vertexbuffer);
        glGenTextures(1, &ModelTexture);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBindTexture(GL_TEXTURE_3D, ModelTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());
        glEnableVertexAttribArray(3);

        // Translation process
        GLfloat TranslationMatrix[] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 1.0,
            0.0, 0.0, 0.0, 1.0
        };
        // Send translation matrix up to the vertex shader
        glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix);
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    I tried looking at what was causing this and went through every function and every parameter, and looked at the man pages. Then I found out that it could be my glVertexAttribPointer. Here is the man page for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml

    The last 2 parameters are my problem. How do I write those last 2 parameters? Do I try putting the data from points into them? How does it work with vectors? Is it fast? If you can't be bothered to look at the man pages, here is the relevant text coming directly from them:

    stride: Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0.

    pointer: Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0.

    If you want my full source: http://ideone.com/fPfkg Thanks again if you do read this.
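    A minimal sketch of how those two parameters are usually filled in when the data lives in a bound VBO rather than in client memory (attribute location 3 and the Point struct are carried over from the question; this is one common fix, not necessarily the only issue):

        // Upload the whole vector; sizeof(points) is only the size of the
        // std::vector object itself (a few pointers), not the vertex data.
        glBufferData(GL_ARRAY_BUFFER,
                     points.size() * sizeof(Point),  // total bytes of vertex data
                     points.data(), GL_STATIC_DRAW);

        // With a VBO bound to GL_ARRAY_BUFFER, the last parameter is a byte
        // *offset* into that buffer, not a client-side pointer.
        glVertexAttribPointer(3,              // attribute location
                              3,              // components per vertex (x, y, z)
                              GL_FLOAT, GL_FALSE,
                              sizeof(Point),  // stride: bytes between consecutive vertices
                              (void*)0);      // offset of the first component in the buffer
        glEnableVertexAttribArray(3);

    Since Point holds exactly three tightly packed floats, a stride of 0 would also work here; passing points.data() as the final argument while a VBO is bound is what makes the draw read vertex data from the wrong place, which produces exactly this kind of stretched triangle soup.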

  • Proportional speed movement between mouse and cube

    - by user1350772
    Hi, I'm trying to move a cube with the freeglut mouse callback glutMotionFunc(processMouseActiveMotion). My problem is that the movement is not proportional between the mouse speed and the cube movement.

    MouseButton function:

        #define MOVE_STEP 0.04
        float g_x = 0.0f;

        glutMouseFunc(MouseButton);
        glutMotionFunc(processMouseActiveMotion);

        void MouseButton(int button, int state, int x, int y) {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
                initial_x = x;
            }
        }

    When the left button gets clicked, the x coordinate is stored in the initial_x variable.

        void processMouseActiveMotion(int x, int y) {
            if (x > initial_x) {
                g_x -= MOVE_STEP;
            } else {
                g_x += MOVE_STEP;
            }
            initial_x = x;
        }

    When I move the mouse, I check which way it moved by comparing the mouse's new x coordinate with initial_x: if x > initial_x the cube moves one way, otherwise the other. Any idea how I can move the cube according to the mouse movement speed? Thanks.

    EDIT 1: The idea is that when you click on any point of the screen and drag left/right, the cube moves proportionally to the mouse movement speed.
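    A minimal sketch of the usual fix (the 0.01f sensitivity factor is an assumption to tune): instead of moving a fixed MOVE_STEP per event, scale the step by how far the mouse actually travelled since the last event. Fast drags then produce large per-event deltas and slow drags small ones, which is exactly the proportionality being asked for.

        void processMouseActiveMotion(int x, int y) {
            int delta = x - initial_x;   // pixels moved since the last motion event
            g_x += delta * 0.01f;        // cube movement proportional to mouse speed
            initial_x = x;
        }

    Flip the sign of the factor if the cube should move opposite to the drag, as in the original code.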

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what the correct and fastest way to do this is. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame, but I realized I will face the same problem if I go for C/SDL or OpenGL/DirectX.
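    A minimal SDL2 sketch of the "lock and write raw pixels" approach (window/renderer setup omitted; the 320x240 size is an assumption): create one streaming texture up front, then each frame lock it, write one 32-bit value per pixel, unlock, and copy it to the screen.

        #include <SDL.h>
        #include <stdlib.h>

        // Called once after the renderer exists.
        SDL_Texture* make_stream_texture(SDL_Renderer* r) {
            return SDL_CreateTexture(r, SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING, 320, 240);
        }

        // Called every frame: write raw pixels, then blit.
        void draw_noise(SDL_Renderer* r, SDL_Texture* tex) {
            void* pixels; int pitch;                   // pitch = bytes per row
            SDL_LockTexture(tex, NULL, &pixels, &pitch);
            for (int y = 0; y < 240; ++y) {
                Uint32* row = (Uint32*)((Uint8*)pixels + y * pitch);
                for (int x = 0; x < 320; ++x)
                    row[x] = 0xFF000000u | (rand() & 0x00FFFFFF); // opaque random ARGB
            }
            SDL_UnlockTexture(tex);
            SDL_RenderCopy(r, tex, NULL, NULL);
        }

    The same pattern exists in Pygame (PixelArray / surfarray) and in OpenGL (glTexSubImage2D into a texture, or a pixel buffer object); the common idea is to build the frame in CPU memory and upload it once per frame rather than plotting pixels one at a time through the API.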

  • How to derive Euler angles from a matrix or quaternion?

    - by KlashnikovKid
    Currently working on steering behavior for my AI, and I've just hit a little mathematical bump. I'm in the process of writing an align function, which basically tries to match the agent's orientation with a target orientation. I've got good source material for implementing this behavior, but it uses Euler angles to calculate the rotational delta, acceleration, and so on. This is nice; however, I store orientation as a quaternion, and the math library I'm using doesn't provide any functionality for deriving the Euler angles. If it helps, I also have rotation matrices at my disposal. What would be the best way to decompose the quaternion or rotation matrix to get the Euler information? I found one source for decomposing the matrix, but I'm not quite getting the correct results. I'm thinking it may be a difference in the column/row ordering of my matrices, but then again, math isn't my strong point. http://nghiaho.com/?page_id=846
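    A minimal sketch of the standard quaternion-to-Euler conversion (roll/pitch/yaw about X/Y/Z, right-handed, radians; the w/x/y/z field names are assumptions about the quaternion layout, and a unit quaternion is assumed):

        #include <cmath>

        struct Quat  { float w, x, y, z; };
        struct Euler { float roll, pitch, yaw; };

        Euler toEuler(const Quat& q) {
            Euler e;
            // roll (rotation about x)
            e.roll = std::atan2(2.0f * (q.w * q.x + q.y * q.z),
                                1.0f - 2.0f * (q.x * q.x + q.y * q.y));
            // pitch (rotation about y), clamped to avoid NaN at the poles
            float s = 2.0f * (q.w * q.y - q.z * q.x);
            s = s > 1.0f ? 1.0f : (s < -1.0f ? -1.0f : s);
            e.pitch = std::asin(s);
            // yaw (rotation about z)
            e.yaw = std::atan2(2.0f * (q.w * q.z + q.x * q.y),
                               1.0f - 2.0f * (q.y * q.y + q.z * q.z));
            return e;
        }

    Note the convention trap the question suspects: this assumes a particular axis order, and a row-major matrix decomposition looks transposed next to a column-major one, which is the usual cause of "almost right" results from formulas like the linked one.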

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server.

    Situation:

    - Multiple clients are connected to a server, which is doing hit detection / physics.
    - Clients have different latency for their connections to the server, ranging from 50ms to 500ms.
    - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds).
    - It is important that they get to shoot these weapons close to the cool-down time: if some clients manage to shoot sooner than others (either because they are "early" or the others are "late"), they gain a significant advantage.
    - I need to show the time remaining for reload on the player's screen.
    - Clients can have clocks which are flat-out wrong (bad timezones, etc.).

    What I'm currently doing to deal with latency:

    - The client collects server-side state in a history, tagged with server timestamps.
    - The client assesses its time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
    - The client renders all state received from the server 200 ms behind its current time, adjusted by what it believes its time difference with server time is (whether due to wrong clocks or lag). If it has server states on both sides of that calculated time, it (mostly LERP) interpolates between them; if not, it (LERP) extrapolates.
    - No other client-side prediction of movement (e.g., to make the vehicle seem more responsive) is done so far, but maybe that will be added later.

    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update. The client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client time (perhaps also accounting for the latency of the shoot message from client to server?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, it sends the shoot event. The server would get the shoot event and treat the time of the shot as the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early.

    It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether players with lower ping will get better reload rates) and whether there are more elegant solutions to this problem.
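    A minimal sketch of the server-authoritative variant described above (C++ for illustration; the 10 s cooldown comes from the question, the small early-acceptance window is an assumption). The server owns the reload clock, accepts shots that arrive slightly early by holding them, and echoes the next unlock time to the client so it can draw the countdown.

        #include <cstdint>
        #include <algorithm>

        struct PlayerWeapon {
            int64_t nextShotAtNs = 0;                   // server time when reload completes
        };

        const int64_t RELOAD_NS      = 10'000'000'000;  // 10 s cool-down
        const int64_t EARLY_GRACE_NS =     50'000'000;  // accept shots up to 50 ms early

        // Returns the server time at which the shot actually fires, or -1 if rejected.
        int64_t onShootRequest(PlayerWeapon& w, int64_t serverNowNs) {
            if (serverNowNs + EARLY_GRACE_NS < w.nextShotAtNs)
                return -1;                               // nowhere near reloaded: discard
            int64_t fireAt = std::max(serverNowNs, w.nextShotAtNs); // hold if slightly early
            w.nextShotAtNs = fireAt + RELOAD_NS;         // next reload anchored to actual fire time
            return fireAt;                               // echo to clients with the state update
        }

    Because the cool-down chain is anchored to the server-side fire time rather than to message arrival, a low-ping player cannot compress the 10-second cycle; latency only shifts when they see the countdown, not how fast it runs.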

  • Isometric layer moving inside map

    - by gronzzz
    I've created an isometric map and now I'm trying to limit layer movement. The main idea is that I have left-bottom, right-bottom, left-top and right-top points that the camera cannot move outside, so the player will not see the map out of bounds. But I cannot work out an algorithm to do that. This is my layer scale/move code:

        - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            _isTouchBegin = YES;
        }

        - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            NSArray *allTouches = [[event allTouches] allObjects];
            UITouch *touchOne = [allTouches objectAtIndex:0];
            CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
            CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];

            // Scaling
            if ([allTouches count] == 2) {
                _isDragging = NO;
                UITouch *touchTwo = [allTouches objectAtIndex:1];
                CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
                CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];

                CGFloat currentDistance = sqrt(pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
                                               pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
                CGFloat previousDistance = sqrt(pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
                                                pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
                CGFloat distanceDelta = currentDistance - previousDistance;
                CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
                pinchCenter = [self convertToNodeSpace:pinchCenter];
                CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER);
                if ([self predictionScaleInBounds:predictionScale]) {
                    [self scale:predictionScale scaleCenter:pinchCenter];
                }
            } else {
                // Dragging
                _isDragging = YES;
                CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne];
                CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne];
                CGPoint delta = ccpSub(current, previous);
                self.position = ccpAdd(self.position, delta);
            }
        }

        - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
            _isDragging = NO;
            _isTouchBegin = NO;
            // Check if I need to bounce
            _touchLoc = [touch locationInNode:self];
        }

        #pragma mark - Update

        - (void)update:(CCTime)delta {
            CGPoint position = self.position;
            float scale = self.scale;
            static float friction = 0.92f; //0.96f;

            if (_isDragging && !_isScaleBounce) {
                _velocity = ccp((position.x - _lastPos.x) / 2, (position.y - _lastPos.y) / 2);
                _lastPos = position;
            } else {
                _velocity = ccp(_velocity.x * friction, _velocity.y * friction);
                position = ccpAdd(position, _velocity);
                self.position = position;
            }

            if (_isScaleBounce && !_isTouchBegin) {
                float min = fabsf(self.scale - MIN_SCALE);
                float max = fabsf(self.scale - MAX_SCALE);
                int dif = max > min ? 1 : -1;
                if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) ||
                    (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) {
                    CGFloat newSscale = scale + dif * (delta * friction);
                    [self scale:newSscale scaleCenter:_touchLoc];
                } else {
                    _isScaleBounce = NO;
                }
            }
        }
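    A minimal sketch of the usual clamping approach (plain C-style code for illustration; a rectangular map of mapW x mapH points and a layer anchored at its lower-left corner are assumptions): after applying the drag delta or the pinch scale, clamp the layer position so the visible window always stays inside the scaled map rectangle.

        struct Vec2 { float x, y; };

        static float clampf(float v, float lo, float hi) {
            return v < lo ? lo : (v > hi ? hi : v);
        }

        // Clamp the layer position so no out-of-map area becomes visible.
        Vec2 clampLayerPos(Vec2 pos, float scale,
                           float mapW, float mapH,     // unscaled map size
                           float screenW, float screenH) {
            // Rightmost/topmost allowed position is 0; the leftmost/bottommost
            // position is where the map's far edge meets the screen edge.
            float minX = screenW - mapW * scale;
            float minY = screenH - mapH * scale;
            pos.x = clampf(pos.x, minX, 0.0f);
            pos.y = clampf(pos.y, minY, 0.0f);
            return pos;
        }

    Calling this at the end of touchMoved and once per update tick (after the inertia step) keeps both the drag and the momentum/bounce phases inside the four corner points. For a diamond-shaped isometric map the same idea applies, but the min/max bounds become functions of the diamond's edges rather than a plain rectangle.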

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut in the process of optimizing the 3D geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case, the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, etc., but the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also cut. To use complex shader techniques, must IBOs not be used? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. So I'm creating a floating platform that I would like to simply travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel multiples of whole tile values of world data: if the platform is not stationary, it travels at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon coming within one tile length of its destination, I would like it to slow to a stop exactly at the destination tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially, I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate. Since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it comes within one tile's length of the target (assuming the platform is traveling more than one tile-length, but to keep things simple, let's just assume it is) - but then the question is: what would be the correct acceleration value to produce this effect? How would I find that value?
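    A minimal worked sketch of the missing value (1D, constant deceleration; tileLength and maxSpeed are whatever the platform uses): the kinematic relation v^2 = u^2 - 2*a*d with a final speed of 0 gives the braking acceleration directly, so to stop from maxSpeed within exactly one tile length, a = maxSpeed^2 / (2 * tileLength).

        // Deceleration magnitude needed to stop from 'speed' within 'distance'.
        float brakingAccel(float speed, float distance) {
            return (speed * speed) / (2.0f * distance);  // from v^2 = u^2 - 2*a*d, v = 0
        }

        // Example policy: start braking once the remaining distance to the
        // target tile is no more than the current stopping distance.
        bool shouldBrake(float speed, float accel, float remaining) {
            float stoppingDistance = (speed * speed) / (2.0f * accel);
            return remaining <= stoppingDistance;
        }

    In practice the per-frame integration drifts a little, so a common trick is to recompute the braking value each frame from the *remaining* distance (a = v^2 / (2 * remaining)) and snap the platform to the exact tile coordinate once it is within a small epsilon; that guarantees it stops on the coordinate regardless of timestep jitter.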

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes - or allows - players to interfere with geometry in old games. Famously, id's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck, and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case-study essay for university in which I can tackle this issue in games, drawing on early exploits and on how techniques have changed, both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist, and specifically looking at id Software clearly shows they've massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology, mainly) so I can use the right terms and ask the right questions within the right contexts. In practical application I know what it is and how to do it, but I don't yet have the benefit of level-design knowledge and its technical widgety knick-knack terms =) Many thanks in advance, AJ

  • Making AI jump on a spot effectively

    - by Pasquale Sada
    How do I calculate, in a 3D environment, the closest point from which an AI character can jump onto a platform?

    Setup: I have an initial velocity V(Vx, Vy, Vz) and a spot where the character stands still at S(Sx, Sy, Sz). What I'm trying to achieve is a successful jump onto a spot E(Ex, Ey, Ez) that you have clicked on (only lower or higher spots, because I have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that can make him jump precisely onto a spot, but you need to declare an angle: the problem arises when the selected spot is straight above his head. It's pretty lame that the character hangs there and can't reach a thing that is 1cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position;  // get target direction
        float h = dir.y;                            // get height difference
        dir.y = 0;                                  // retain only the horizontal direction
        float dist = dir.magnitude;                 // get horizontal distance
        float a = angle * Mathf.Deg2Rad;            // convert angle to radians
        dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                   // correct for small height differences
        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;

    I ended up using the lowest angle (20 degrees) and checking for collisions along the trajectory; if any are found, increase the angle. Here is some code (to improve it, maybe the check should stop at the highest point of the curve):

        Vector3 BallisticVel(Vector3 target, float angle) {
            Vector3 dir = target - transform.position;  // get target direction
            float h = dir.y;                            // get height difference
            dir.y = 0;                                  // retain only the horizontal direction
            float dist = dir.magnitude;                 // get horizontal distance
            float a = angle * Mathf.Deg2Rad;            // convert angle to radians
            dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
            dist += h / Mathf.Tan(a);                   // correct for small height differences
            // calculate the velocity magnitude
            float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
            return vel * dir.normalized;
        }

        Vector3 TrajectoryPoint(Vector3 startingPosition, Vector3 startingVelocity, float n) {
            float t = 1.0f / 60.0f;  // seconds per time step (1/60 was integer division, yielding 0)
            Vector3 stepVelocity = t * startingVelocity;    // m/s
            Vector3 stepGravity = t * t * Physics.gravity;  // m/s/s
            return startingPosition + n * stepVelocity + 0.5f * (n * n + n) * stepGravity;
        }

        bool CheckTrajectory(Vector3 startingPosition, Vector3 target, float angle_jump) {
            Debug.Log("checking");
            if (angle_jump < 80f) {
                Debug.Log("if");
                Vector3 startingVelocity = BallisticVel(target, angle_jump);
                for (int i = 0; i < 180; i++) {
                    //Debug.Log(i);
                    Vector3 trajectoryPosition = TrajectoryPoint(startingPosition, startingVelocity, i);
                    if (Physics.Raycast(trajectoryPosition, Vector3.forward, safeDistance)) {
                        angle_jump += 10;
                        break; // restart loop with the new angle
                    }
                    else continue;
                }
                JumpVelocity = BallisticVel(target, angle_jump);
                return true;
            }
            return false;
        }

  • Issue with a point's coordinates, which creates an unwanted triangle

    - by Paul
    I would like to connect the points from the red path to the y-axis, in blue. I figured out that the problem with my triangles comes from the first point (V0): it is not located where it should be. The console says its location is 0,0, but in the emulator it is not. The code:

        for(int i = 1; i < 2; i++) {
            CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y);
            CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y);
            ccDrawLine(_polyVertices[i-1], _polyVertices[i]);
        }

    The output:

        _polyVertices[i-1].x : 0.000000, _polyVertices[i-1].y : 0.000000
        _polyVertices[i].x : 50.000000, _polyVertices[i].y : 0.000000

    And the result (the layer goes up; I could not take the screenshot before the layer started going up, but the first red point starts at y=0). Then it creates an unwanted triangle when the code continues. Would you have any idea about this? (That is, how to force the first blue point to start at 0,0, and not at 50,0 as it seems to be now.) Here is the code:

        - (void)generatePath {
            float x = 50; // first red point
            float y = 0;
            for(int i = 0; i < kMaxKeyPoints+1; i++) {
                if (i < 3) {
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else if (i < 20) { // going right
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x += (random() % (int) 30);
                    y += -40;
                } else if (i < 25) { // stabilize
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else if (i < 30) { // going left
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    //x -= (random() % (int) 10);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                } else { // back to normal
                    _hillKeyPoints[i] = CGPointMake(x, y);
                    x = 150 + (random() % (int) 30);
                    y += -40;
                }
            }
        }

        - (void)generatePolygons {
            static int prevFromKeyPointI = -1;
            static int prevToKeyPointI = -1;

            // key points interval for drawing
            while (_hillKeyPoints[_fromKeyPointI].y > -_offsetY + winSizeTop) {
                _fromKeyPointI++;
            }
            while (_hillKeyPoints[_toKeyPointI].y > -_offsetY - winSizeBottom) {
                _toKeyPointI++;
            }

            if (prevFromKeyPointI != _fromKeyPointI || prevToKeyPointI != _toKeyPointI) {
                _nPolyVertices = 0;
                float x1 = 0;
                int keyPoints = _fromKeyPointI;
                for (int i = _fromKeyPointI; i < _toKeyPointI; i++) {
                    // V0: at (0,0)
                    _polyVertices[_nPolyVertices] = CGPointMake(x1, y1); // first blue point
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1);

                    // V1: to the first "point"
                    _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    keyPoints++; // from point at index 0 to 1

                    // V2, same y as point n°2:
                    _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y);

                    // V1 again
                    _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2];
                    _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2];

                    // V2 again
                    _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices-2];
                    _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices-2];
                    //CCLOG(@"_nPolyVertices V2 again : %i", _nPolyVertices);

                    // V3 = same x,y as point at index 1
                    _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                    y1 = _polyVertices[_nPolyVertices].y;
                    _nPolyVertices++;
                }
                prevFromKeyPointI = _fromKeyPointI;
                prevToKeyPointI = _toKeyPointI;
            }
        }

        - (void)draw {
            // RED
            glColor4f(1, 1, 1, 1);
            for(int i = MAX(_fromKeyPointI, 1); i <= _toKeyPointI; ++i) {
                glColor4f(1.0, 0, 0, 1.0);
                ccDrawLine(_hillKeyPoints[i-1], _hillKeyPoints[i]);
            }

            // BLUE
            glColor4f(0, 0, 1, 1);
            for(int i = 1; i < 2; i++) {
                CCLOG(@"_polyVertices[i-1].x : %f, _polyVertices[i-1].y : %f", _polyVertices[i-1].x, _polyVertices[i-1].y);
                CCLOG(@"_polyVertices[i].x : %f, _polyVertices[i].y : %f", _polyVertices[i].x, _polyVertices[i].y);
                ccDrawLine(_polyVertices[i-1], _polyVertices[i]);
            }
        }

    Thanks

  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes, and I'm having trouble with ray intersection versus a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and got hung up on some of the variables used within their explanations. Here is a snippet of my code:

        bool Picking() {
            D3DXVECTOR3 vec;
            D3DXVECTOR3 vRayDir;
            D3DXVECTOR3 vRayOrig;
            D3DXVECTOR3 vROO, vROD; // vect ray obj orig, vec ray obj dir
            D3DXMATRIX m;
            D3DXMATRIX mInverse;
            D3DXMATRIX worldMat;

            // Obtain projection matrix
            D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj();

            // Obtain mouse position
            D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos;

            // Get window width & height
            float w = CDirectXRenderer::GetInstance()->GetWidth();
            float h = CDirectXRenderer::GetInstance()->GetHeight();

            // Transform vector from screen to 3D space
            vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11;
            vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22;
            vec.z = 1.0f;

            // Create a view inverse matrix
            D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View());

            // Determine our ray's direction
            vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31;
            vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32;
            vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33;

            // Determine our ray's origin
            vRayOrig.x = m._41;
            vRayOrig.y = m._42;
            vRayOrig.z = m._43;

            D3DXMatrixIdentity(&worldMat);
            //worldMat = aliveActors[0]->GetTrans();
            D3DXMatrixInverse(&mInverse, NULL, &worldMat);

            D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse);
            D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse);
            D3DXVec3Normalize(&vROD, &vROD);

    When using this code I'm able to detect a ray intersection via a sphere, but I have questions when determining an intersection via a plane. First off, should I be using my vRayOrig & vRayDir variables for the plane intersection tests, or should I be using the new vectors that are created for use in object space? When looking at a site like this, for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation Ax + By + Cz + D = 0 and how it factors into determining a plane intersection. Any help will be appreciated, thanks.
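    A minimal sketch of the ray/plane test itself (plain C++ with a small vector struct, since only fragments of the math library appear above). On the D question: in Ax + By + Cz + D = 0, (A, B, C) is the plane normal N and D = -dot(N, P0) for any point P0 on the plane, i.e. D is the plane's signed offset from the origin along the normal.

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // Returns true and the parametric hit distance t along the ray if the
        // ray (orig + t * dir, t >= 0) crosses the plane dot(N, X) + D = 0.
        bool rayPlane(const Vec3& orig, const Vec3& dir,
                      const Vec3& N, float D, float& t) {
            float denom = dot(N, dir);
            if (std::fabs(denom) < 1e-6f)   // ray parallel to the plane
                return false;
            t = -(dot(N, orig) + D) / denom;
            return t >= 0.0f;               // hit only in front of the origin
        }

    On the other question: use whichever space is consistent. Either test vRayOrig/vRayDir against a world-space plane, or vROO/vROD against the plane expressed in the object's local space; mixing the two (world-space ray, object-space plane) is the usual reason sphere tests pass while plane tests fail.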

  • Unable to Call Instantiate in Class Member Function

    - by onguarde
    The following JavaScript is attached to a gameObject:

        var instance : GameObject;

        class eg_class {
            function eg_func() {
                var thePrefab : GameObject;
                instance = Instantiate(thePrefab);
            }
        }

    Errors:

        Unknown identifier: 'instance'.
        Unknown identifier: 'Instantiate'.

    Questions:
    1) Why is it that "instance" cannot be accessed within a class? Isn't it supposed to be a public variable?
    2) The "Instantiate" function works in the Start()/Update() root functions. Is there a way to make it work from within member functions? Thanks in advance!

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are more techniques (like Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal.

    So far, I have adapted a simple shadow mapping that works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture). [screenshot: glitchy shadow mapping]

    So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every use of framebufferTexture2D slows down execution, as I've seen nicely described elsewhere) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know whether I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position, and check whether it's deeper than the depth information read from the earlier-rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use the cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries - more to get an idea of what exists than how they compare. For example, we have:

    - DirectX (graphics) and its open counterpart, OpenGL
    - DirectX (sound) and OpenAL

    But there are other DirectX libraries that I can't find open alternatives to, such as:

    - DirectInput
    - DXGI
    - Direct2D
    - DirectWrite

    Does anyone have any lists or comparisons between DirectX components and their open counterparts?

  • How does a collision engine work?

    - by JXPheonix
    How exactly does a collision engine work? This is an extremely broad question. What code keeps things bouncing against each other? What code makes the player walk into a wall instead of through it? How does the code constantly refresh the player's position and the objects' positions to keep gravity and collision working as they should? If you don't know what a collision engine is: basically, it's generally used in platform games to make the player actually hit walls and the like. There's the 2D type and the 3D type, but they all accomplish the same thing: collision. So, what keeps a collision engine ticking?
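    A minimal sketch of the core loop most simple 2D collision engines run every frame (axis-aligned boxes only; the names are illustrative): integrate movement, test the moving box against each solid, and push overlapping boxes back out along the shallower axis. Gravity "works" simply because the push-out cancels the accumulated downward velocity each frame.

        #include <cmath>

        struct Box { float x, y, w, h, vx, vy; };

        bool overlaps(const Box& a, const Box& b) {
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }

        void step(Box& player, const Box* walls, int n, float dt, float gravity) {
            player.vy += gravity * dt;       // integrate forces
            player.x  += player.vx * dt;     // integrate velocity
            player.y  += player.vy * dt;
            for (int i = 0; i < n; ++i) {    // resolve penetrations
                if (!overlaps(player, walls[i])) continue;
                float pushX = (player.x < walls[i].x)
                    ? walls[i].x - (player.x + player.w)      // push left
                    : (walls[i].x + walls[i].w) - player.x;   // push right
                float pushY = (player.y < walls[i].y)
                    ? walls[i].y - (player.y + player.h)      // push up
                    : (walls[i].y + walls[i].h) - player.y;   // push down
                if (std::fabs(pushX) < std::fabs(pushY)) {    // smallest correction wins
                    player.x += pushX; player.vx = 0;
                } else {
                    player.y += pushY; player.vy = 0;
                }
            }
        }

    Real engines layer a broad phase (grids, quadtrees) on top so not every pair is tested, and swap the push-out for impulse responses when things should bounce rather than stop.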

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg

    A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg

    Is there an equally simple way to determine if those squares are touching?
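    A minimal sketch of the standard generalization, the separating axis theorem (SAT), for convex polygons such as rotated squares (plain C++; corners are assumed to be listed in order around each square): project both squares onto each candidate axis (the edge normals of both shapes), and if the projected intervals are disjoint on any axis, the squares don't touch. The axis-aligned test in the question is exactly this with the two axes fixed to x and y.

        struct V2 { float x, y; };
        float dot(V2 a, V2 b) { return a.x * b.x + a.y * b.y; }

        // Project a polygon onto an axis, returning the [lo, hi] interval.
        void project(const V2* pts, int n, V2 axis, float& lo, float& hi) {
            lo = hi = dot(pts[0], axis);
            for (int i = 1; i < n; ++i) {
                float p = dot(pts[i], axis);
                if (p < lo) lo = p;
                if (p > hi) hi = p;
            }
        }

        bool satTouching(const V2* a, int na, const V2* b, int nb) {
            const V2* polys[2] = { a, b };
            int counts[2] = { na, nb };
            for (int k = 0; k < 2; ++k) {
                for (int i = 0; i < counts[k]; ++i) {
                    V2 p0 = polys[k][i];
                    V2 p1 = polys[k][(i + 1) % counts[k]];
                    V2 axis = { -(p1.y - p0.y), p1.x - p0.x };  // edge normal
                    float alo, ahi, blo, bhi;
                    project(a, na, axis, alo, ahi);
                    project(b, nb, axis, blo, bhi);
                    if (ahi < blo || bhi < alo) return false;   // separating axis found
                }
            }
            return true;  // no separating axis: the polygons overlap
        }

    For two squares this tests at most four distinct axes, so it stays nearly as cheap as the axis-aligned version while handling any rotation.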

  • How can I achieve strong typing with a component messaging system?

    - by Vaughan Hilts
    I'm looking at implementing a messaging system in my entity component system. I've deduced that I can use an event queue for passing messages, but right now I just use a generic object and cast out the data I want. I've also considered using a dictionary. I see a lot of information on this, but it all involves a lot of casting and guessing. Is there any way to do this elegantly and keep strong typing on my messages?
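    A minimal sketch of one common answer (C++17 std::variant shown for illustration; the message types are invented examples): make the set of message payloads a closed sum type, so handlers receive a concrete struct and there is no casting or dictionary lookup anywhere.

        #include <variant>
        #include <vector>

        struct DamageMsg { int amount; };
        struct MoveMsg   { float x, y; };
        using Message = std::variant<DamageMsg, MoveMsg>;

        struct MessageQueue {
            std::vector<Message> pending;
            void post(Message m) { pending.push_back(m); }

            template <typename Handler>
            void dispatch(Handler&& h) {
                for (auto& m : pending) std::visit(h, m);  // statically typed dispatch
                pending.clear();
            }
        };

        // Usage: one operator() overload per message type, checked at compile time.
        struct HealthComponent {
            int hp = 100;
            void operator()(const DamageMsg& d) { hp -= d.amount; }
            void operator()(const MoveMsg&)     { /* not interested */ }
        };

    The trade-off is that every message type must be listed in the variant up front; in languages with generics plus interfaces (C#, Java), the equivalent trick is a generic Subscribe<T>(Action<T>) that keys handler lists by typeof(T), which keeps the strong typing at the subscription site.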

  • OpenGL textures trigger error 1281 if SFML is not called

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now, with textures, OpenGL is giving error 1281: the background is black and some textures are not applied. The first texture IS applied, though (nothing else is working). The strange thing is: if I create a dummy texture with SFML in the same program, all the other textures do work. So I guess there is something I forgot in the texture creation/application; if someone could enlighten me. Here is the code I use to load textures (once loaded, a texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
        );
        if (texture > 0) {
            glEnable(GL_TEXTURE_CUBE_MAP);
            glEnable(GL_TEXTURE_GEN_S);
            glEnable(GL_TEXTURE_GEN_T);
            glEnable(GL_TEXTURE_GEN_R);
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
            glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        } else {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS
            );
            if (texture < 1) {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
                );
            }
            if (texture > 0) {
                // enable texturing
                glEnable(GL_TEXTURE_2D);
                // bind an OpenGL texture ID
                glBindTexture(GL_TEXTURE_2D, texture);
                std::cout << "the loaded texture ID was " << texture << std::endl;
            } else {
                glDisable(GL_TEXTURE_2D);
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    and how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if (!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if (SFML)
            sf::Texture::bind(sfml_texture);
        else {
            glBindTexture(GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure that SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was ideal, but I can't anymore), but if I can get it to work it would be great.
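    A minimal sketch for localizing error 1281 (GL_INVALID_VALUE) at the failing call rather than discovering it a frame later (plain OpenGL; the macro name is illustrative): check glGetError after each suspect call.

        #include <cstdio>
        // plus GL/gl.h or your loader's header

        #define GL_CHECK(call)                                          \
            do {                                                        \
                call;                                                   \
                GLenum err = glGetError();                              \
                if (err != GL_NO_ERROR)                                 \
                    fprintf(stderr, "GL error 0x%04X (%d) after %s\n",  \
                            err, (int)err, #call);                      \
            } while (0)

        // Usage on the suspect sequence from the question:
        // GL_CHECK(glBindTexture(GL_TEXTURE_2D, texture));
        // GL_CHECK(glUniform1i(TextureID, 0));

    Two things in the posted code are worth wrapping first: glGetUniformLocation returns -1, not 0, for "not found" (and 0 is a perfectly valid location), so if (!TextureID) misreports; and if a texture ID was created through the cubemap path (SOIL_load_OGL_single_cubemap), later binding that same ID to GL_TEXTURE_2D in the draw code is an error, since a texture object keeps the target it was first bound to.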

  • Increasing speed of circle over time as linear with Box2d

    - by Whispered
    Assume that there is a circle and it can be moved using the keyboard arrows. It is required that the speed increases over time, like a car accelerating. For example: the max speed is 25 and the time to reach max speed is 5 seconds, so over those 5 seconds the speed should climb to the maximum. Does Box2D handle that situation? I tried setting the linear velocity, but that seems to give the circle a constant speed instead of a speed that increases over time. Thank you! Note: I'm using Box2DWeb, the JavaScript port of Box2D.
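    A minimal sketch of the usual answer (Box2D 2.3-style C++ API shown; Box2DWeb mirrors the same names): Box2D does not ramp velocity for you, so either apply a force each step and let the mass integrate it, or set the linear velocity yourself from an explicit acceleration. The 25 m/s cap and 5 s ramp are the question's numbers; dir is assumed to be a unit vector.

        #include <Box2D/Box2D.h>

        const float MAX_SPEED = 25.0f;             // m/s
        const float ACCEL     = MAX_SPEED / 5.0f;  // reach max speed in 5 s

        // Call once per simulation step while the arrow key is held.
        void accelerate(b2Body* body, float dt, const b2Vec2& dir) {
            b2Vec2 v = body->GetLinearVelocity();
            float speed = v.Length();
            if (speed < MAX_SPEED) {
                // Velocity-based ramp: exact and frame-rate independent.
                float newSpeed = b2Min(speed + ACCEL * dt, MAX_SPEED);
                b2Vec2 nv = dir;
                nv *= newSpeed;
                body->SetLinearVelocity(nv);
            }
            // Alternative: body->ApplyForceToCenter(forceMag * dir, true);
            // then clamp the speed afterwards the same way.
        }

    SetLinearVelocity called once gives the constant speed observed in the question; the ramp comes from increasing the value a little every step until the cap is reached.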

  • CW/CCW Rotation of a Vector

    - by user23132
    Consider that I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this same rotation operation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating A x B). This vector C is the axis that I'll need to rotate around. To find the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need to do is then: rotate ang degrees around the C axis. The problem is that I don't know whether this rotation is a CW or a CCW rotation, since the cos from the dot product gives me no information about the sign of the angle. There's a trick in 2D (A.x * B.y - A.y * B.x) that you can use to discover whether vector A is to the left or to the right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
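    A minimal sketch of the 3D analogue of that 2D sign trick (plain C++): the cross product itself carries the sign. If you rotate by +ang around C = normalize(cross(A, B)), the rotation is always the "short way" from A to B by the right-hand rule, so no CW/CCW test is needed. The ambiguity only appears when you fix some other axis and then need the signed angle, which atan2 of the cross and dot gives directly:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }
        float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        float length(Vec3 a)      { return std::sqrt(dot(a, a)); }

        // Signed angle from A to B around a chosen axis.
        float signedAngle(Vec3 A, Vec3 B, Vec3 axis) {
            Vec3 c = cross(A, B);
            float unsignedAng = std::atan2(length(c), dot(A, B)); // always in [0, pi]
            return dot(c, axis) >= 0.0f ? unsignedAng : -unsignedAng;
        }

    Here dot(cross(A, B), axis) plays exactly the role of A.x * B.y - A.y * B.x in 2D: positive means CCW around the axis, negative means CW, by the right-hand rule.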

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say that I have 3 programs, and in each of those programs there is a view matrix uniform, which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform which is constant for all programs linked to it, so I could just upload the matrix once? I tried creating a globalUniforms object which looked kinda like this:

        var globalUniforms = {
            program: {},
            // (...)
            vMatrixUniform: null,
            // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    So I could just link it to the proper programs like this:

        program.vMatrixUniform = globalUniforms.vMatrixUniform;

    and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    but unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
        vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders which follows my logic?
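    A minimal sketch of the mechanism designed for exactly this, uniform buffer objects (desktop OpenGL 3.1+ / WebGL 2; C-style calls shown, and the block name and binding point are illustrative): each shader declares the same named uniform block, every program's block is bound to one binding point, and the matrix is uploaded once to the buffer at that binding point.

        // In every shader:  layout(std140) uniform Camera { mat4 uVMatrix; };

        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // binding point 0

        // Once per program, at init time:
        GLuint idx = glGetUniformBlockIndex(program, "Camera");
        glUniformBlockBinding(program, idx, 0);        // attach block to binding point 0

        // Once per frame, when the camera is dirty - updates *all* programs:
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(float), viewMatrix);

    In WebGL 1, which the snippet appears to target, uniform blocks don't exist, so the fallback is what the question sketches: cache the location per program at link time and loop over the programs when the camera moves. The thrown error likely comes from calling getUniformLocation while this.program is still the placeholder empty object {} rather than a linked program.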

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for vertex positions and UV coordinates. It works fine, but now I want to read, for each polygon, which texture it belongs to, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for a polygon? My code to read in vertex positions and UV coordinates is the following:

        int i, j, lPolygonCount = pMesh->GetPolygonCount();
        FbxVector4* lControlPoints = pMesh->GetControlPoints();
        int vertexId = 0;
        for (i = 0; i < lPolygonCount; i++) {
            int lPolygonSize = pMesh->GetPolygonSize(i);
            for (j = 0; j < lPolygonSize; j++) {
                int lControlPointIndex = pMesh->GetPolygonVertex(i, j);
                FbxVector4 pos = lControlPoints[lControlPointIndex];
                current_model[vertex_index].x = pos.mData[0] - pivot_offset[0];
                current_model[vertex_index].y = pos.mData[1] - pivot_offset[1];
                current_model[vertex_index].z = pos.mData[2] - pivot_offset[2];

                FbxVector4 vertex_normal;
                pMesh->GetPolygonVertexNormal(i, j, vertex_normal);
                current_model[vertex_index].nx = vertex_normal.mData[0];
                current_model[vertex_index].ny = vertex_normal.mData[1];
                current_model[vertex_index].nz = vertex_normal.mData[2];

                // read in UV data
                FbxStringList lUVSetNameList;
                pMesh->GetUVSetNames(lUVSetNameList);

                // get the lUVSetIndex-th UV set
                const char* lUVSetName = lUVSetNameList.GetStringAt(0);
                const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName);
                if (!lUVElement)
                    continue;

                // only support mapping modes eByPolygonVertex and eByControlPoint
                if (lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex &&
                    lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint)
                    return;

                // index array, which holds the index referenced to the UV data
                const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect;
                const int lIndexCount = (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0;
                FbxVector2 lUVValue;

                // get the index of the current vertex in the control points array
                int lPolyVertIndex = pMesh->GetPolygonVertex(i, j);
                // the UV index depends on the reference mode
                //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
                int lUVIndex = pMesh->GetTextureUVIndex(i, j);
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
                current_model[vertex_index].tu = (float)lUVValue.mData[0];
                current_model[vertex_index].tv = (float)lUVValue.mData[1];

                vertex_index++;
            }
        }

        float v1[3], v2[3], v3[3];
        v1[0] = current_model[vertex_index - 3].x;
        v1[1] = current_model[vertex_index - 3].y;
        v1[2] = current_model[vertex_index - 3].z;
        v2[0] = current_model[vertex_index - 2].x;
        v2[1] = current_model[vertex_index - 2].y;
        v2[2] = current_model[vertex_index - 2].z;
        v3[0] = current_model[vertex_index - 1].x;
        v3[1] = current_model[vertex_index - 1].y;
        v3[2] = current_model[vertex_index - 1].z;
        collision_model->addTriangle(v1, v2, v3);
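    A sketch of the usual route to the file name, hedged because FBX SDK versions differ slightly in the GetSrcObject signature: polygons map to materials through the mesh's material element, and each material's diffuse property owns the file texture whose GetFileName() is the texture path.

        // Inside the polygon loop, for polygon i (assumes a recent FBX SDK):
        FbxGeometryElementMaterial* lMatElem = pMesh->GetElementMaterial();
        if (lMatElem) {
            int lMatIndex = (lMatElem->GetMappingMode() == FbxGeometryElement::eByPolygon)
                ? lMatElem->GetIndexArray().GetAt(i)   // per-polygon material
                : lMatElem->GetIndexArray().GetAt(0);  // eAllSame: one material for the mesh

            FbxSurfaceMaterial* lMat = pMesh->GetNode()->GetMaterial(lMatIndex);
            if (lMat) {
                FbxProperty lProp = lMat->FindProperty(FbxSurfaceMaterial::sDiffuse);
                FbxFileTexture* lTex = lProp.GetSrcObject<FbxFileTexture>(0);
                if (lTex) {
                    const char* lFileName = lTex->GetFileName(); // absolute texture path
                    // map polygon i -> lFileName (or use lTex->GetRelativeFileName())
                }
            }
        }

    Note that some exporters leave the index array with a single entry (or use eAllSame mapping) when the whole mesh shares one material, in which case material 0 on the node is the safe default.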

  • Adding 2D vector movement with rotation applied

    - by Michael Zehnich
    I am trying to apply a slight sine-wave movement to objects that float around the screen, to make them a little more interesting. I would like the objects to oscillate from side to side, not front to back (so the oscillation does not affect their forward velocity). After reading various threads and tutorials, I have come to the conclusion that I need to create and add vectors, but I simply cannot come up with a solution that works. This is where I'm at right now, in the object's update method (updated based on comments):

        Vector2 oldPosition = new Vector2(spritePos.X, spritePos.Y);
        // note: newPosition is initially set in the constructor to spritePos.X/Y
        Vector2 direction = newPosition - oldPosition;
        Vector2 perpendicular = new Vector2(direction.Y, -direction.X);
        perpendicular.Normalize();

        sinePosAng += 0.1f;
        perpendicular.X += 2.5f * (float)Math.Sin(sinePosAng);

        spritePos.X += velocity * (float)Math.Cos(radians);
        spritePos.Y += velocity * (float)Math.Sin(radians);
        spritePos += perpendicular;
        newPosition = spritePos;
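    A minimal sketch of the usual decomposition (plain C++-style math for illustration; the amplitude and frequency values are placeholders): keep the forward motion and the oscillation separate. Advance a base position along the heading each frame, then *display* the sprite at the base position plus a perpendicular offset, instead of feeding the sine back into the position that the next frame differentiates.

        #include <cmath>

        struct V2 { float x, y; };

        // basePos: integrated forward motion only; t: accumulated time.
        V2 floaterPosition(V2 basePos, float headingRad, float t) {
            V2 forward = { std::cos(headingRad), std::sin(headingRad) };
            V2 perp    = { -forward.y, forward.x };    // 90 degrees to the heading
            float off  = 2.5f * std::sin(t * 4.0f);    // amplitude 2.5, frequency 4
            return { basePos.x + perp.x * off,
                     basePos.y + perp.y * off };
        }

    Because the offset is recomputed from scratch each frame and never added back into basePos, the side-to-side wave cannot leak into the forward velocity, which is the feedback loop the posted snippet runs into (it derives direction from positions that already contain the previous frame's sine offset).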

  • Constrained/penalized distance function

    - by sigma.z.1980
    Assume a character is located on an n by n grid and has to reach a certain entry on that grid. Its current position is (x1, y1). Also on the same grid is an enemy with coordinates (x2, y2). Each step, the algorithm randomly generates new candidate locations for the hero (if there are k candidates, there is a k x 2 matrix of new potential locations). What I need is some distance objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in pixels along each axis), d2 is the distance to the enemy, and c is some coefficient (this set-up is very much like a Lagrangian). It's not working very well, though. I'd be quite keen to learn what constrained distance functions are used for similar cases. Any suggestions are very much appreciated.
