Search Results

Search found 6 results on 1 page for 'freefallr'.


  • matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data: the rotation is held in the 3x3 sub-matrix at the top left of the matrix, and the position is held in the last column, e.g.

        glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] );

    My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (column-major rather than row-major, or something along those lines), so should mParent be accessed as follows instead?

        glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] );

    Thanks!
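
    For reference only (not part of the original post), a minimal sketch of pulling the translation out of a glm::mat4: GLM matrices are indexed by column, so mParent[3] is the fourth column, which carries the translation. The helper name is illustrative.

        // minimal sketch: extracting the translation from a column-major glm::mat4
        #include <glm/glm.hpp>

        glm::vec3 GetTranslation( const glm::mat4& mParent )
        {
            // mParent[3] is the fourth *column* (tx, ty, tz, 1); drop the w component
            return glm::vec3( mParent[3] );
        }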

    Read the article

  • how can I specify interleaved vertex attributes and vertex indices

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];   // x, y, z        offset 0, size = 3*sizeof(float)
            float m_TexCoords[2];  // u, v           offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];     // nx, ny, nz
            float colour[4];       // r, g, b, a
            float padding[20];     // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the Vertex Array Object though. My code for creating the Vertex Array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship between
        // the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using interleaved array, since its faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n++ )
        {
            glEnableVertexAttribArray(n);
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *) NULL + vShaderArgs[n].nFieldOffset
            );

            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;

            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
                return;
            }

            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );

            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );

            glBindVertexArray(0);
            glUseProgram(0);
        }

    Can anyone see errors or omissions in either the VAO creation code or rendering code? thanks!
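
    For reference only (not from the original post), a minimal sketch of binding interleaved attributes with explicit component counts and offsetof-based offsets. Assumptions: a current GL 3.x context loaded via GLEW, the CustomVertex layout shown above, and attribute locations 0-3 bound to position, texcoord, normal and colour in the shader.

        // minimal sketch: interleaved vertex attributes recorded into a VAO
        #include <GL/glew.h>
        #include <cstddef>   // offsetof

        struct CustomVertex
        {
            float m_Position[3];
            float m_TexCoords[2];
            float m_Normal[3];
            float colour[4];
            float padding[20];
        };

        void SetupInterleavedAttribs( GLuint vao, GLuint vbo, GLuint ibo )
        {
            glBindVertexArray( vao );
            glBindBuffer( GL_ARRAY_BUFFER, vbo );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );

            const GLsizei stride = sizeof(CustomVertex);

            // position: 3 floats at offset 0
            glEnableVertexAttribArray( 0 );
            glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, stride,
                                   (const void*)offsetof(CustomVertex, m_Position) );

            // texcoords: 2 floats at offset 12
            glEnableVertexAttribArray( 1 );
            glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, stride,
                                   (const void*)offsetof(CustomVertex, m_TexCoords) );

            // normal: 3 floats at offset 20
            glEnableVertexAttribArray( 2 );
            glVertexAttribPointer( 2, 3, GL_FLOAT, GL_FALSE, stride,
                                   (const void*)offsetof(CustomVertex, m_Normal) );

            // colour: 4 floats at offset 32
            glEnableVertexAttribArray( 3 );
            glVertexAttribPointer( 3, 4, GL_FLOAT, GL_FALSE, stride,
                                   (const void*)offsetof(CustomVertex, colour) );

            glBindVertexArray( 0 );
        }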

    Read the article

  • vector rotations for branches of a 3d tree

    - by freefallr
    I'm attempting to create a 3d tree procedurally, and I'm hoping that someone can check my vector rotation maths, as I'm a bit confused. I'm using an l-system (a recursive algorithm for generating branches). The trunk of the tree is the root node; its orientation is aligned to the y axis. In the next iteration of the tree (e.g. the first branches), I might create a branch that is oriented, say, by +10 degrees in the X axis and a similar amount in the Z axis, relative to the trunk. I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch. My questions then:

    - for the trunk, is the rotation matrix just the identity matrix * initial orientation vector?
    - for the first branch (and subsequent branches), I'll "inherit" the rotation matrix of the parent branch, and apply x and z rotations to that also, e.g.

        using glm::normalize;
        using glm::rotateX;
        using glm::vec4;
        using glm::mat4;
        using glm::rotate;

        vec4 vYAxis   = vec4( 0.0f, 1.0f, 0.0f, 0.0f );
        vec4 vInitial = normalize( rotateX( vYAxis, 10.0f ) );

        mat4 mRotation = mat4(1.0);

        // trunk rotation matrix = identity * initial orientation vector
        mRotation *= vInitial;

        // first branch = parent rotation matrix * this branch's rotations
        mRotation *= rotate( 10.0f, 1.0f, 0.0f, 0.0f );  // x rotation
        mRotation *= rotate( 10.0f, 0.0f, 0.0f, 1.0f );  // z rotation

    Are my maths and approach correct, or am I completely wrong? Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of the x rotation and z rotation important?
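
    For comparison only (not the poster's code), a minimal sketch of composing a child branch's rotation from its parent with GLM. Note that recent GLM versions take angles in radians, whereas the calls in the post appear to use degrees; the function and variable names here are illustrative.

        // minimal sketch: child rotation = parent rotation, then local x and z rotations
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>   // glm::rotate( mat4, angle, axis )

        glm::mat4 ChildRotation( const glm::mat4& mParentRot, float xDegrees, float zDegrees )
        {
            glm::mat4 m = mParentRot;
            // order matters: matrix multiplication does not commute, so rotating
            // about x then z generally gives a different result from z then x
            m = glm::rotate( m, glm::radians( xDegrees ), glm::vec3( 1.0f, 0.0f, 0.0f ) );
            m = glm::rotate( m, glm::radians( zDegrees ), glm::vec3( 0.0f, 0.0f, 1.0f ) );
            return m;
        }

        // the branch's direction is the trunk's y axis carried through its accumulated rotation
        glm::vec3 BranchDirection( const glm::mat4& mRotation )
        {
            return glm::vec3( mRotation * glm::vec4( 0.0f, 1.0f, 0.0f, 0.0f ) );
        }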

    Read the article

  • Switching VS2010 to use Windows 7.1 SDK

    - by freefallr
    I've used VS2008 on my development machine for some years now, with the Windows SDK v7.1. I've installed VS2010, and it's using the Windows SDK v7.0a, but I need it to use the Windows 7.1 SDK (which I had installed prior to installing VS2010). When I run the Windows SDK 7.1 configuration tool to switch the Windows SDK in use, the tool updates VS2008 but not VS2010. The message it reports is: "The Windows SDK Configuration Tool has successfully set Windows SDK version v7.1 as the current version for Visual Studio 2008". The configuration tool is installed with the Windows 7.1 SDK and is found here: "C:\Program Files\Microsoft SDKs\Windows\v7.1\Setup\WindowsSdkVer.exe". VS2010 continues to use WSDK 7.0a, which is extremely frustrating, as I need to do DirectShow development (so I need to build the baseclasses, which aren't included in the 7.0a release of the WSDK). Would I be correct in assuming that it's not updating the VS2010 settings because VS2010 wasn't installed at the time that I installed the Windows 7.1 SDK? Can I fix this manually, or should I uninstall the Windows 7.1 SDK and then reinstall it? Any other suggestions / workarounds for this?

    Read the article

  • a floating toolbar in WTL

    - by freefallr
    I've created a multimedia app that uses DirectShow to display multiple media streams simultaneously. The app is a WTL MDI application. For the video windows, I use a CWindowImpl-derived class - one per CChildFrame. I'd like to add controls to the video windows (volume controls etc.). I'd initially thought about adding a slider (volume) control and a couple of buttons to a context menu, but later thought that this might not be the best approach. I was looking at MS Word 2007, which has a floating toolbar that allows you to change options on highlighted text, and I'd like to implement a similar floating toolbar for the video controls. I googled around a bit and found an old post about floating toolbars in WTL; the response was that for a floating toolbar you should create a popup window and make its parent the main window. I think that sounds like a reasonable approach. My questions:

    - Is this a good approach, or is there a more standard approach for a floating toolbar in WTL now?
    - Should I make the toolbar a child of the video window, or of the CChildFrame that contains the video window, in order to ensure that it always remains on top of the video?
    - How can I implement transparency in the floating toolbar, as in the floating toolbar in MS Word?
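
    For reference only (untested, not from the original post, class and method names are illustrative), a minimal sketch of the popup-window approach described above: a CWindowImpl popup owned by the main frame, made semi-transparent with a layered-window alpha. It assumes the usual WTL app scaffolding (CAppModule _Module etc.) and Windows 2000 or later for WS_EX_LAYERED.

        // minimal sketch: a floating, semi-transparent popup "toolbar" window in WTL
        #include <atlbase.h>
        #include <atlapp.h>
        #include <atlwin.h>

        class CFloatingToolbar : public CWindowImpl<CFloatingToolbar>
        {
        public:
            DECLARE_WND_CLASS(_T("FloatingToolbar"))

            BEGIN_MSG_MAP(CFloatingToolbar)
            END_MSG_MAP()

            HWND CreateFloating( HWND hWndOwner, RECT rc )
            {
                // WS_POPUP keeps it out of any client area; passing the main frame as
                // the owner keeps it above the owner's z-order; WS_EX_TOOLWINDOW keeps
                // it off the taskbar; WS_EX_LAYERED enables per-window transparency.
                HWND hWnd = Create( hWndOwner, rc, NULL,
                                    WS_POPUP | WS_VISIBLE,
                                    WS_EX_TOOLWINDOW | WS_EX_LAYERED );
                if( hWnd != NULL )
                {
                    // roughly 70% opaque; LWA_ALPHA applies to the whole window
                    ::SetLayeredWindowAttributes( hWnd, 0, (255 * 70) / 100, LWA_ALPHA );
                }
                return hWnd;
            }
        };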

    Read the article

  • VMRMixerControl9 & GDI problem!

    - by freefallr
    I'm attempting to overlay a bitmap on some video. I create a bitmap in memory, then I call SelectObject to select it into a memoryDC, so I can perform a DrawText operation on it. I'm not getting any results on screen at all - can anyone suggest why? thanks

        HRESULT MyGraph::MixTextOverVMR9(LPCTSTR szText)
        {
            // create a bitmap object using GDI, render the text to it accordingly
            // then Sets the bitmap as an alpha bitmap to the VMR9, so that it can be overlayed.
            HRESULT         hr = S_OK;
            CBitmap         bmpMem;
            CFont           font;
            LOGFONT         logicfont;
            CRect           rcText;
            CRect           rcVideo;
            VMR9AlphaBitmap alphaBmp;
            HWND            hWnd     = this->GetFirstRendererWindow();
            COLORREF        clrText  = RGB(255, 255, 0);
            COLORREF        clrBlack = RGB(0,0,0);
            HDC             hdcHwnd  = NULL;
            CDC             dcMem;
            LONG            lWidth;
            LONG            lHeight;

            if( ! m_spVideoRenderer.p )
                return E_NOINTERFACE;

            if( ! m_spWindowlessCtrl.p )
                return E_NOINTERFACE;

            if( ! m_spIMixerBmp9.p )
            {
                m_spIMixerBmp9 = m_spVideoRenderer;
                if( ! m_spIMixerBmp9.p )
                    return E_NOINTERFACE;
            }

            // create the font..
            LPCTSTR sFont = _T("Times New Roman");
            memset(&logicfont, 0, sizeof(LOGFONT));
            logicfont.lfHeight         = 42;
            logicfont.lfWidth          = 20;
            logicfont.lfStrikeOut      = 0;
            logicfont.lfUnderline      = 0;
            logicfont.lfItalic         = FALSE;
            logicfont.lfWeight         = FW_NORMAL;
            logicfont.lfEscapement     = 0;
            logicfont.lfCharSet        = ANSI_CHARSET;
            logicfont.lfQuality        = ANTIALIASED_QUALITY;
            logicfont.lfPitchAndFamily = DEFAULT_PITCH | FF_DONTCARE;
            wcscpy_s( &logicfont.lfFaceName[0], wcslen(sFont)*2, sFont );
            font.CreateFontIndirectW(&logicfont);

            // create a compatible memDC from the video window's HDC
            if( hWnd == NULL )
                return E_FAIL;

            hdcHwnd = GetDC(hWnd);
            dcMem   = CreateCompatibleDC(hdcHwnd);

            // get the required bitmap metrics from the MediaBuffer
            if( ! SUCCEEDED(m_spWindowlessCtrl->GetNativeVideoSize(&lWidth, &lHeight, NULL, NULL)) )
                return E_FAIL;

            // create a bitmap for the text
            bmpMem.CreateCompatibleBitmap(dcMem.m_hDC, lWidth, lHeight);

            SelectBitmap (dcMem.m_hDC, bmpMem);
            SetBkMode    (dcMem.m_hDC, TRANSPARENT);
            SetTextColor (dcMem.m_hDC, clrText);
            SelectFont   (dcMem.m_hDC, font.m_hFont);

            // draw the text
            DrawTextW(dcMem.m_hDC, szText, wcslen(szText), rcText, DT_CALCRECT | DT_NOPREFIX );
            DrawTextW(dcMem.m_hDC, szText, wcslen(szText), rcText, DT_NOPREFIX );

            // Set the alpha bitmap on the VMR9 renderer
            memset(&alphaBmp, 0, sizeof(VMR9AlphaBitmap));
            alphaBmp.rDest.left   = 0;
            alphaBmp.rDest.top    = 0.5;
            alphaBmp.rDest.right  = 0.5;
            alphaBmp.rDest.bottom = 1;
            alphaBmp.dwFlags      = VMR9AlphaBitmap_hDC;
            alphaBmp.hdc          = dcMem.m_hDC;
            alphaBmp.pDDS         = NULL;
            alphaBmp.rSrc         = rcText;     // rect to copy from the source image
            alphaBmp.fAlpha       = 0.5f;       // transparency value (1.0 is opaque, 0.0 is transparent)
            alphaBmp.clrSrcKey    = clrText;
            // alphaBmp.dwFilterMode = MixerPref9_AnisotropicFiltering;

            hr = m_spIMixerBmp9->SetAlphaBitmap(&alphaBmp);

            DeleteDC(hdcHwnd);
            dcMem.DeleteDC();
            bmpMem.DeleteObject();
            font.DeleteObject();

            return hr;
        }
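
    For reference only (illustrative names, not the fix from the thread, and the VMR9/DirectShow side is left out), a minimal sketch of the usual GDI pattern the post describes: render text into an offscreen bitmap via a memory DC, keeping and restoring the objects the DC originally held.

        // minimal sketch: draw text into an offscreen GDI bitmap
        #include <windows.h>
        #include <tchar.h>

        HBITMAP RenderTextToBitmap( HWND hWnd, LPCTSTR szText, int nWidth, int nHeight )
        {
            HDC hdcWnd = ::GetDC(hWnd);
            HDC hdcMem = ::CreateCompatibleDC(hdcWnd);

            // the bitmap is made compatible with the *window* DC; a bitmap compatible
            // with a fresh memory DC would be monochrome
            HBITMAP hbm = ::CreateCompatibleBitmap(hdcWnd, nWidth, nHeight);

            HGDIOBJ hbmOld = ::SelectObject(hdcMem, hbm);
            ::SetBkMode(hdcMem, TRANSPARENT);
            ::SetTextColor(hdcMem, RGB(255, 255, 0));

            RECT rc = { 0, 0, nWidth, nHeight };
            ::DrawText(hdcMem, szText, -1, &rc, DT_NOPREFIX);

            // deselect the bitmap before the DC is destroyed
            ::SelectObject(hdcMem, hbmOld);
            ::DeleteDC(hdcMem);
            ::ReleaseDC(hWnd, hdcWnd);

            return hbm;   // caller owns the HBITMAP (DeleteObject when done)
        }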

    Read the article
