Search Results

Search found 3950 results on 158 pages for 'float'.


  • CUDA linking error - Visual Express 2008 - nvcc fatal due to (null) configuration file

    - by Josh
    Hi, I've been searching extensively for a solution to this error for the past 2 weeks. I have successfully installed the 64-bit CUDA compiler (tools) and SDK, as well as the 64-bit version of Visual Studio Express 2008 and the Windows 7 SDK with Framework 3.5. I'm using Windows XP 64-bit. I have confirmed that VSE is able to compile in 64-bit, having followed the steps on the following website (since Visual Express does not inherently include the 64-bit packages):

    http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/

    I have confirmed the 64-bit compile ability since "x64" is available from the pull-down menu under "Tools > Options > VC++ Directories", and compiling in 64-bit does not result in the entire project being "skipped". I have included all the needed directories for the 64-bit CUDA tools, the 64-bit SDK and Visual Express (\VC\bin\amd64). Here's the error message I receive when trying to compile in 64-bit:

    1>------ Build started: Project: New, Configuration: Release x64 ------
    1>Compiling with CUDA Build Rule...
    1>"C:\CUDA\bin64\nvcc.exe" -arch sm_10 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -maxrregcount=32 --compile -o "x64\Release\template.cu.obj" "c:\Documents and Settings\All Users\Application Data\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\src\CUDA_Walkthrough_DeviceKernels\template.cu"
    1>nvcc fatal : Visual Studio configuration file '(null)' could not be found for installation at 'C:/Program Files (x86)/Microsoft Visual Studio 9.0/VC/bin/../..'
    1>Linking...
    1>LINK : fatal error LNK1181: cannot open input file '.\x64\Release\template.cu.obj'
    1>Build log was saved at "file://c:\Documents and Settings\Administrator\My Documents\Visual Studio 2008\Projects\New\New\x64\Release\BuildLog.htm"
    1>New - 1 error(s), 0 warning(s)
    ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Here's the simple code I'm trying to compile/run in 64-bit:

    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>
    #include <math.h>
    #include <cuda.h>

    void mypause()
    {
        printf("Press [Enter] to continue . . .");
        fflush(stdout);
        getchar();
    }

    __global__ void VecAdd1_Kernel(float* A, float* B, float* C, int N)
    {
        int i = blockDim.x*blockIdx.x+threadIdx.x;
        if (i<N)
            C[i] = A[i] + B[i]; //result should be a 16x1 array of 250s
    }

    __global__ void VecAdd2_Kernel(float* B, float* C, int N)
    {
        int i = blockDim.x*blockIdx.x+threadIdx.x;
        if (i<N)
            C[i] = C[i] + B[i]; //result should be a 16x1 array of 400s
    }

    int main()
    {
        int N = 16;
        float A[16]; float B[16];
        size_t size = N*sizeof(float);
        for(int i=0; i<N; i++)
        {
            A[i] = 100.0;
            B[i] = 150.0;
        }
        // Allocate input vectors h_A and h_B in host memory
        float* h_A = (float*)malloc(size);
        float* h_B = (float*)malloc(size);
        float* h_C = (float*)malloc(size);
        // Initialize input vectors
        memset(h_A,0,size); memset(h_B,0,size);
        h_A = A; h_B = B;
        printf("SUM = %f\n", A[1]+B[1]); // simple check for initialization
        // Allocate vectors in device memory
        float* d_A; cudaMalloc((void**)&d_A,size);
        float* d_B; cudaMalloc((void**)&d_B,size);
        float* d_C; cudaMalloc((void**)&d_C,size);
        // Copy vectors from host memory to device memory
        cudaMemcpy(d_A,h_A,size,cudaMemcpyHostToDevice);
        cudaMemcpy(d_B,h_B,size,cudaMemcpyHostToDevice);
        // Invoke kernel
        int threadsPerBlock = 256;
        int blocksPerGrid = (N+threadsPerBlock-1)/threadsPerBlock;
        VecAdd1(blocksPerGrid, threadsPerBlock,d_A,d_B,d_C,N);
        VecAdd2(blocksPerGrid, threadsPerBlock,d_B,d_C,N);
        // Copy results from device memory to host memory
        // h_C contains the result in host memory
        cudaMemcpy(h_C,d_C,size,cudaMemcpyDeviceToHost);
        for(int i=0; i<N; i++) // output result from the kernel "VecAdd"
        {
            printf("%f ", h_C[i]);
            printf("\n");
        }
        printf("\n");
        cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
        free(h_A); free(h_B); free(h_C);
        mypause();
        return 0;
    }
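
    Two side notes, independent of the nvcc configuration error. First, the -ccbin path in the failing command points at the 32-bit compiler directory; for a 64-bit build one would expect it to point at the amd64 cross tools the post itself mentions (\VC\bin\amd64), so the CUDA build rule's compiler path is worth checking (an educated guess, not a confirmed fix). Second, once nvcc runs, the two kernel calls will not compile as written: CUDA kernels are launched with the <<<grid, block>>> execution configuration, and the names must match the __global__ functions. A sketch of what the launches would presumably look like:

    // Hypothetical corrected launches for the two kernels above;
    // the following cudaMemcpy will synchronize before reading results.
    VecAdd1_Kernel<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);
    VecAdd2_Kernel<<<blocksPerGrid, threadsPerBlock>>>(d_B, d_C, N);

    Also note that h_A = A; h_B = B; overwrites (and leaks) the malloc'd pointers rather than copying the array contents; memcpy(h_A, A, size) is more likely what was intended.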

  • Access violation reading location 0x00184000.

    - by numerical25
    I'm having trouble with the following line:

    HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB));

    It appears the CreateBuffer method is having trouble reading &mVB. mVB is defined in Box.h and looks like this:

    ID3D10Buffer* mVB;

    Below is the code in its entirety; these are all the files that mVB appears in.

    //Box.cpp
    #include "Box.h"
    #include "Vertex.h"
    #include <vector>

    Box::Box()
    : mNumVertices(0), mNumFaces(0), md3dDevice(0), mVB(0), mIB(0)
    {
    }

    Box::~Box()
    {
        ReleaseCOM(mVB);
        ReleaseCOM(mIB);
    }

    float Box::getHeight(float x, float z)const
    {
        return 0.3f*(z*sinf(0.1f*x) + x*cosf(0.1f*z));
    }

    void Box::init(ID3D10Device* device, float m, float n, float dx)
    {
        md3dDevice = device;
        mNumVertices = m*n;
        mNumFaces = 12;
        float halfWidth = (n-1)*dx*0.5f;
        float halfDepth = (m-1)*dx*0.5f;
        std::vector<Vertex> vertices(mNumVertices);
        for(DWORD i = 0; i < m; ++i)
        {
            float z = halfDepth - (i * dx);
            for(DWORD j = 0; j < n; ++j)
            {
                float x = -halfWidth + (j * dx);
                float y = getHeight(x,z);
                vertices[i*n+j].pos = D3DXVECTOR3(x, y, z);
                if(y < -10.0f)
                    vertices[i*n+j].color = BEACH_SAND;
                else if(y < 5.0f)
                    vertices[i*n+j].color = LIGHT_YELLOW_GREEN;
                else if(y < 12.0f)
                    vertices[i*n+j].color = DARK_YELLOW_GREEN;
                else if(y < 20.0f)
                    vertices[i*n+j].color = DARKBROWN;
                else
                    vertices[i*n+j].color = WHITE;
            }
        }
        D3D10_BUFFER_DESC vbd;
        vbd.Usage = D3D10_USAGE_IMMUTABLE;
        vbd.ByteWidth = sizeof(Vertex) * mNumVertices;
        vbd.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        vbd.CPUAccessFlags = 0;
        vbd.MiscFlags = 0;
        D3D10_SUBRESOURCE_DATA vinitData;
        vinitData.pSysMem = &vertices;
        HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mVB));

        // create the index buffer
        std::vector<DWORD> indices(mNumFaces*3); // 3 indices per face
        int k = 0;
        for(DWORD i = 0; i < m-1; ++i)
        {
            for(DWORD j = 0; j < n-1; ++j)
            {
                indices[k]   = i*n+j;
                indices[k+1] = i*n+j+1;
                indices[k+2] = (i*1)*n+j;
                indices[k+3] = (i*1)*n+j;
                indices[k+4] = i*n+j+1;
                indices[k+5] = (i*1)*n+j+1;
                k += 6;
            }
        }
        D3D10_BUFFER_DESC ibd;
        ibd.Usage = D3D10_USAGE_IMMUTABLE;
        ibd.ByteWidth = sizeof(DWORD) * mNumFaces*3;
        ibd.BindFlags = D3D10_BIND_INDEX_BUFFER;
        ibd.CPUAccessFlags = 0;
        ibd.MiscFlags = 0;
        D3D10_SUBRESOURCE_DATA iinitData;
        iinitData.pSysMem = &indices;
        HR(md3dDevice->CreateBuffer(&ibd, &iinitData, &mIB));
    }

    void Box::Draw()
    {
        UINT stride = sizeof(Vertex);
        UINT offset = 0;
        md3dDevice->IASetVertexBuffers(0, 1, &mVB, &stride, &offset);
        md3dDevice->IASetIndexBuffer(mIB, DXGI_FORMAT_R32_UINT, 0);
        md3dDevice->DrawIndexed(mNumFaces*3, 0, 0);
    }

    //Box.h
    #ifndef _BOX_H
    #define _BOX_H
    #include "d3dUtil.h"

    class Box
    {
    public:
        Box();
        ~Box();
        void init(ID3D10Device* device, float m, float n, float dx);
        void Draw();
        float getHeight(float x, float z)const;
    private:
        DWORD mNumVertices;
        DWORD mNumFaces;
        ID3D10Device* md3dDevice;
        ID3D10Buffer* mVB;
        ID3D10Buffer* mIB;
    };
    #endif

    Thanks again for the help.
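
    A likely culprit, offered as a guess rather than a confirmed fix: pSysMem is given the address of the std::vector objects themselves (&vertices, &indices) instead of the arrays they manage, so CreateBuffer reads the vector's internal bookkeeping as vertex and index data and then runs past the end of it, which would produce exactly this kind of access violation. A sketch of the usual pattern:

    // Point pSysMem at the vector's contiguous element storage,
    // not at the vector object itself.
    vinitData.pSysMem = &vertices[0];   // or vertices.data() in C++11
    iinitData.pSysMem = &indices[0];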

  • OpenGL, drawing two polygons at the same time (by mouse clicks)

    - by YoungSalafi
    I'm trying to draw 2 polygons at the same time, depending on user input from the OpenGL screen, so I made 2 arrays, each of which will carry the vertices of one polygon. I think my logic is right, but the program still prints only one polygon, and deletes the old polygon if you draw a polygon again. It's acting weird too. Please check the code yourself; here it is. (P.S. Don't mind the delete function right now; I know it's missing something.)

    #include <windows.h>
    #include <gl/gl.h>
    #include <gl/glut.h>

    void Draw();
    void Set_Transformations();
    void Initialize(int argc, char *argv[]);
    void OnKeyPress(unsigned char key, int x, int y);
    void DeleteVer();
    void MouseClick(int bin, int state, int x, int y);
    void GetOGLPos(int x, int y, float* arrY, float* arrX);
    void DrawPolygon(float* arrX, float* arrY);

    float xPos[20];
    float yPos[20];
    float xPos2[20];
    float yPos2[20];
    float fx = 0, fy = 0;
    float size = 10;
    int count = 0;
    bool done = false;
    bool flag = true;

    void Initialize(int argc, char *argv[])
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA);
        glutInitWindowPosition(100, 100);
        glutInitWindowSize(600, 600);
        glutCreateWindow("OpenGL Lab1");
        Set_Transformations();
        glutDisplayFunc(Draw);
        glutMouseFunc(MouseClick);
        glutKeyboardFunc(OnKeyPress);
        glutMainLoop();
    }

    void Set_Transformations()
    {
        glClearColor(1, 1, 1, 1);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(-200, 200, -200, 200);
    }

    void OnKeyPress(unsigned char key, int x, int y)
    {
        if (key == 27)
            exit(0);
        switch(key)
        {
        case 13: // enter key: it will draw
            done = true;
            glutPostRedisplay();
            flag = !flag; // this flag switches to the other array that the vertices
                          // will be stored in, in order to draw the second polygon
            break;
        }
    }

    void MouseClick(int button, int state, int x, int y)
    {
        switch (button)
        {
        case GLUT_RIGHT_BUTTON:
            if (state == GLUT_DOWN)
            {
                if (count > 0)
                {
                    DeleteVer(); // don't mind this right now
                }
            }
            break;
        case GLUT_LEFT_BUTTON:
            if (state == GLUT_DOWN)
            {
                if (count < 20)
                {
                    if (flag =true) // drawing first polygon
                        GetOGLPos(x, y, xPos, yPos);
                    if (flag=false) // drawing second polygon after Enter is pressed
                        GetOGLPos(x, y, xPos2, yPos2);
                }
            }
            break;
        }
    }

    void GetOGLPos(int x, int y, float* arrY, float* arrX) // getting the vertices from the user
    {
        GLint viewport[4];
        GLdouble modelview[16];
        GLdouble projection[16];
        GLfloat winX, winY, winZ;
        GLdouble posX, posY, posZ;
        glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
        glGetDoublev(GL_PROJECTION_MATRIX, projection);
        glGetIntegerv(GL_VIEWPORT, viewport);
        winX = (float)x;
        winY = (float)viewport[3] - (float)y;
        glReadPixels(x, int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
        gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
        arrX[count] = posX;
        arrY[count] = posY;
        count++;
        glPointSize(6.0);
        glBegin(GL_POINTS);
        glVertex2f(posX, posY);
        glEnd();
        glFlush();
    }

    void DeleteVer() // don't mind this
    {
        glColor3f(1, 1, 1);
        glBegin(GL_POINTS);
        glVertex2f(xPos[count-1], yPos[count-1]);
        glEnd();
        glFlush();
        xPos[count] = NULL;
        yPos[count] = NULL;
        count--;
        glColor3f(0, 0, 0);
    }

    void DrawPolygon(float* arrX, float* arrY)
    {
        int n = 0;
        glColor3f(0, 0, 0);
        glBegin(GL_POLYGON);
        while (n < count)
        {
            glVertex2f(arrX[n], arrY[n]);
            n++;
        }
        count = 0;
        glEnd();
        glFlush();
    }

    void Draw() // main drawing func
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glColor3f(0, 0, 0);
        if (done)
        {
            DrawPolygon(xPos, yPos);
            DrawPolygon(xPos2, yPos2);
        }
        glFlush();
    }

    int main(int argc, char *argv[])
    {
        Initialize(argc, argv);
        return 0;
    }
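
    One thing that jumps out, offered as a guess: the flag tests in MouseClick use assignment rather than comparison, so every left click sets flag back to true and the second array is never filled. A sketch:

    // '=' assigns; '==' compares. As written, "if (flag =true)" resets
    // flag to true on every click, so all vertices land in xPos/yPos.
    if (flag == true)
        GetOGLPos(x, y, xPos, yPos);    // first polygon
    else
        GetOGLPos(x, y, xPos2, yPos2);  // second polygon

    The shared count variable is a second problem: DrawPolygon() zeroes it, so the second DrawPolygon call sees count == 0 and draws nothing; each polygon would presumably need its own vertex count.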

  • Inverse Kinematics with OpenGL/Eigen3: unstable Jacobian pseudoinverse

    - by SigTerm
    I'm trying to implement a simple inverse kinematics test using OpenGL, Eigen3 and the "jacobian pseudoinverse" method. The system works fine using the "jacobian transpose" algorithm; however, as soon as I attempt to use "pseudoinverse", the joints become unstable and start jerking around (eventually they freeze completely, unless I use the "jacobian transpose" fallback computation). I've investigated the issue, and it turns out that in some cases jacobian.transpose()*jacobian has zero determinant and cannot be inverted. However, I've seen other demos on the internet (YouTube) that claim to use the same method, and they do not seem to have this problem, so I'm uncertain of the cause of the issue. Code is attached below.

    *.h:

    struct Ik{
        float targetAngle;
        float ikLength;
        VectorXf angles;
        Vector3f root, target;
        Vector3f jointPos(int ikIndex);
        size_t size() const;
        Vector3f getEndPos(int index, const VectorXf& vec);
        void resize(size_t size);
        void update(float t);
        void render();
        Ik(): targetAngle(0), ikLength(10){
        }
    };

    *.cpp:

    size_t Ik::size() const{
        return angles.rows();
    }

    Vector3f Ik::getEndPos(int index, const VectorXf& vec){
        Vector3f pos(0, 0, 0);
        while(true){
            Eigen::Affine3f t;
            float radAngle = pi*vec[index]/180.0f;
            t = Eigen::AngleAxisf(radAngle, Vector3f(-1, 0, 0))
              * Eigen::Translation3f(Vector3f(0, 0, ikLength));
            pos = t * pos;
            if (index == 0)
                break;
            index--;
        }
        return pos;
    }

    void Ik::resize(size_t size){
        angles.resize(size);
        angles.setZero();
    }

    void drawMarker(Vector3f p){
        glBegin(GL_LINES);
        glVertex3f(p[0]-1, p[1], p[2]);
        glVertex3f(p[0]+1, p[1], p[2]);
        glVertex3f(p[0], p[1]-1, p[2]);
        glVertex3f(p[0], p[1]+1, p[2]);
        glVertex3f(p[0], p[1], p[2]-1);
        glVertex3f(p[0], p[1], p[2]+1);
        glEnd();
    }

    void drawIkArm(float length){
        glBegin(GL_LINES);
        float f = 0.25f;
        glVertex3f(0, 0, length);
        glVertex3f(-f, -f, 0);
        glVertex3f(0, 0, length);
        glVertex3f(f, -f, 0);
        glVertex3f(0, 0, length);
        glVertex3f(f, f, 0);
        glVertex3f(0, 0, length);
        glVertex3f(-f, f, 0);
        glEnd();
        glBegin(GL_LINE_LOOP);
        glVertex3f(f, f, 0);
        glVertex3f(-f, f, 0);
        glVertex3f(-f, -f, 0);
        glVertex3f(f, -f, 0);
        glEnd();
    }

    void Ik::update(float t){
        targetAngle += t * pi*2.0f/10.0f;
        while (t > pi*2.0f)
            t -= pi*2.0f;
        target << 0, 8 + 3*sinf(targetAngle), cosf(targetAngle)*4.0f+5.0f;
        Vector3f tmpTarget = target;
        Vector3f targetDiff = tmpTarget - root;
        float l = targetDiff.norm();
        float maxLen = ikLength*(float)angles.size() - 0.01f;
        if (l > maxLen){
            targetDiff *= maxLen/l;
            l = targetDiff.norm();
            tmpTarget = root + targetDiff;
        }
        Vector3f endPos = getEndPos(size()-1, angles);
        Vector3f diff = tmpTarget - endPos;
        float maxAngle = 360.0f/(float)angles.size();
        for(int loop = 0; loop < 1; loop++){
            MatrixXf jacobian(diff.rows(), angles.rows());
            jacobian.setZero();
            float step = 1.0f;
            for (int i = 0; i < angles.size(); i++){
                Vector3f curRoot = root;
                if (i)
                    curRoot = getEndPos(i-1, angles);
                Vector3f axis(1, 0, 0);
                Vector3f n = endPos - curRoot;
                float l = n.norm();
                if (l)
                    n /= l;
                n = n.cross(axis);
                if (l)
                    n *= l*step*pi/180.0f;
                //std::cout << n << "\n";
                for (int j = 0; j < 3; j++)
                    jacobian(j, i) = n[j];
            }
            std::cout << jacobian << std::endl;
            MatrixXf jjt = jacobian.transpose()*jacobian;
            //std::cout << jjt << std::endl;
            float d = jjt.determinant();
            MatrixXf invJ;
            float scale = 0.1f;
            if (!d /*|| true*/){
                invJ = jacobian.transpose();
                scale = 5.0f;
                std::cout << "fallback to jacobian transpose!\n";
            }
            else{
                invJ = jjt.inverse()*jacobian.transpose();
                std::cout << "jacobian pseudo-inverse!\n";
            }
            //std::cout << invJ << std::endl;
            VectorXf add = invJ*diff*step*scale;
            //std::cout << add << std::endl;
            float maxSpeed = 15.0f;
            for (int i = 0; i < add.size(); i++){
                float& cur = add[i];
                cur = std::max(-maxSpeed, std::min(maxSpeed, cur));
            }
            angles += add;
            for (int i = 0; i < angles.size(); i++){
                float& cur = angles[i];
                if (i)
                    cur = std::max(-maxAngle, std::min(maxAngle, cur));
            }
        }
    }

    void Ik::render(){
        glPushMatrix();
        glTranslatef(root[0], root[1], root[2]);
        for (int i = 0; i < angles.size(); i++){
            glRotatef(angles[i], -1, 0, 0);
            drawIkArm(ikLength);
            glTranslatef(0, 0, ikLength);
        }
        glPopMatrix();
        drawMarker(target);
        for (int i = 0; i < angles.size(); i++)
            drawMarker(getEndPos(i, angles));
    }

    Any help will be appreciated.
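
    A singular J^T J near outstretched or aligned configurations is expected, and demos that stay stable under the pseudoinverse typically use the damped least squares (Levenberg-Marquardt) variant rather than the plain inverse. A minimal sketch with Eigen, where lambda is a hypothetical damping constant to tune:

    #include <Eigen/Dense>
    using Eigen::MatrixXf;
    using Eigen::Vector3f;
    using Eigen::VectorXf;

    // Damped least squares: delta = (J^T J + lambda^2 I)^-1 J^T e.
    // The lambda^2 I term keeps the matrix invertible even when
    // J^T J alone is singular, at the cost of slightly slower steps.
    VectorXf solveDLS(const MatrixXf& J, const Vector3f& error, float lambda)
    {
        const MatrixXf JtJ = J.transpose() * J;
        const MatrixXf damped =
            JtJ + lambda * lambda * MatrixXf::Identity(JtJ.rows(), JtJ.cols());
        return damped.inverse() * J.transpose() * error;
    }

    With damping, the zero-determinant fallback branch (and the jerking when the determinant crosses zero) should no longer be needed; a lambda somewhere around 0.1 to 1.0 is a common starting range.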

  • Why can't I store a float value - it's always zero!

    - by just_another_coder
    I have a view controller that is created by the app delegate - it's the first one shown in the app. In its interface I declare:

    float lengthOfTime;

    I also set it as a property:

    @property (nonatomic) float lengthOfTime;

    And in its implementation:

    @synthesize lengthOfTime;

    In the class's viewDidLoad method, I set the value:

    self.lengthOfTime = 3.0f;

    However, after this, the value is always zero. No errors, no compile warnings, nothing. Just zero. The class is instantiated and it is showing in the view, so I'm pretty sure it's not a nil reference. I've searched all over Google and can't figure it out. What's going on?!? :(

  • Can I float a block of text like an image?

    - by george.entenman.name
    If you change "float:right" to "float:left" in this W3schools example, you'll get an image floating to the left of the paragraph. I want to do the same thing with a block of text. The purpose is to be able to have little annotations to the left of paragraphs. If you know of any way to do this, I'd be very grateful. I'd be really grateful (and amazed) if there were a way to place this annotation midway in a paragraph and have text flow around it. I've searched all over for an answer but possibly don't know how to ask the question so that search engines can help me. So now I'm appealing to humans!!

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights.

    // Shadow term (1 = no shadow)
    float shadow = 1;

    // [Light Space -> Shadow Map Space]
    // Transform the surface into light space and project
    // NB: Could be done in the vertex shader, but doing it here keeps the
    // "light shader" abstraction and doesn't limit the number of shadowed lights
    float4x4 LightViewProjection = mul(LightView, LightProjection);
    float4 surf_tex = mul(position, LightViewProjection);

    // Re-homogenize
    // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized)
    surf_tex.xyz /= surf_tex.w;

    // Rescale viewport to be [0,1] (texture coordinate system)
    float2 shadow_tex;
    shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
    shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

    // Half texel offset
    //shadow_tex += (0.5 / 512);

    // Scaled distance to light (instead of 'surf_tex.z')
    float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
    //float rescaled_dist_to_light = surf_tex.z;

    // [Variance Shadow Map Depth Calculation]
    // No filtering
    float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

    // Flip the moments values to bring them back to their original values
    moments.x = 1.0 - moments.x;
    moments.y = 1.0 - moments.y;

    // Compute variance
    float E_x2 = moments.y;
    float Ex_2 = moments.x * moments.x;
    float variance = E_x2 - Ex_2;
    variance = max(variance, Bias.y);

    // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
    // One-tailed inequality valid if
    float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

    // Compute probabilistic upper bound (mean distance)
    float m_d = moments.x - rescaled_dist_to_light;

    // Chebychev's inequality
    float p = variance / (variance + m_d * m_d);
    p = ReduceLightBleeding(p, Bias.z);

    // Adjust the light color based on the shadow attenuation
    shadow *= max(lit_factor, p);

    This is what I know for certain so far:

    - The lighting is correct if I do not try to calculate the shadow term (no shadows).
    - The shadow term is correct when calculated using forward rendered lighting (VSM works with forward rendered lights).

    With the current rescaled light distance (LightAttenuation.y is the far plane value):

    float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

    the light is correct, but the shadow appears to be zoomed in and misses the blurring. When I do not rescale the light and instead use the homogenized 'surf_tex':

    float rescaled_dist_to_light = surf_tex.z;

    the shadows are blurred correctly, but the lighting is incorrect and the cube model is no longer lit. Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows:

    // [Position]
    float4 position;

    // [Screen Position]
    position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
    position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
    position.z = 1.0 - position.z;
    position.w = 1.0; // 1.0 = position.w / position.w

    // [World Position]
    position = mul(position, CameraViewProjectionInverse);

    // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
    position.xyz /= position.w;
    position.w = 1.0;

    Using the inverse matrix of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for shadow, including 'dist_to_light':

    // Work out the light position and direction in world space
    float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);
    // Direction might need to be negated
    float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);
    // Unnormalized light vector
    float3 dir_to_light = light_position - position;
    // Direction from vertex
    float dist_to_light = length(dir_to_light);
    // Normalise 'toLight' vector for lighting calculations
    dir_to_light = normalize(dir_to_light);

    EDIT2: These are the calculations for the moments (depth):

    //=============================================
    //---[Vertex Shaders]--------------------------
    //=============================================
    DepthVSOutput depth_VS(
        float4 Position : POSITION,
        uniform float4x4 shadow_view,
        uniform float4x4 shadow_view_projection)
    {
        DepthVSOutput output = (DepthVSOutput)0;
        // First transform position into world space
        float4 position_world = mul(Position, World);
        output.position_screen = mul(position_world, shadow_view_projection);
        output.light_vec = mul(position_world, shadow_view).xyz;
        return output;
    }

    //=============================================
    //---[Pixel Shaders]---------------------------
    //=============================================
    DepthPSOutput depth_PS(DepthVSOutput input)
    {
        DepthPSOutput output = (DepthPSOutput)0;
        // Work out the depth of this fragment from the light, normalized to [0, 1]
        float2 depth;
        depth.x = length(input.light_vec) / FarPlane;
        depth.y = depth.x * depth.x;
        // Flip depth values to avoid floating point inaccuracies
        depth.x = 1.0f - depth.x;
        depth.y = 1.0f - depth.y;
        output.depth = depth.xyxy;
        return output;
    }

    EDIT 3: I have tried the following:

    float4 pp;
    pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
    pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
    pp.z = 1.0 - pp.z;
    pp.w = 1.0; // 1.0 = position.w / position.w

    // Determine the depth of the pixel with respect to the light
    float4x4 LightViewProjection = mul(LightView, LightProjection);
    float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);
    float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
    float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

    // Transform from light space to shadow map texture space.
    float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
    vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

    // Offset the coordinate by half a texel so we sample it correctly
    vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize

    This suffers the same problem as the second picture. I have also tried storing the depth based on the view x projection matrix:

    output.position_screen = mul(position_world, shadow_view_projection);
    //output.light_vec = mul(position_world, shadow_view);
    output.light_vec = output.position_screen;

    depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to horrible floating point precision errors. Everything is lit correctly, though.

    EDIT 4: I found an OpenGL-based tutorial here. I have followed it to the letter, and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler:

    /// <summary>
    /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
    /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
    /// </summary>
    const float4x4 ScaleMatrix = float4x4
    (
        0.5,  0.0, 0.0, 0.0,
        0.0, -0.5, 0.0, 0.0,
        0.0,  0.0, 0.5, 0.0,
        0.5,  0.5, 0.5, 1.0
    );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

    float2 shadow_tex;
    shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
    shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool (D3DXUVAtlasCreate()). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube:

    struct sVertexPosNormTex
    {
        D3DXVECTOR3 vPos, vNorm;
        D3DXVECTOR2 vUV;
        sVertexPosNormTex(){}
        sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
        {
            vPos = v; vNorm = n; vUV = uv;
        }
        ~sVertexPosNormTex()
        {
        }
    };

    // create a light map texture to fill programatically
    hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                            D3DPOOL_MANAGED, &pLightmap );
    if( FAILED( hr ) )
    {
        DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
        return hr;
    }

    // get the zero level surface from the texture
    IDirect3DSurface9 *pS = NULL;
    pLightmap->GetSurfaceLevel( 0, &pS );

    // clear surface
    pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

    // load a sample mesh
    DWORD dwcMaterials = 0;
    LPD3DXBUFFER pMaterialBuffer = NULL;
    V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                 &pAdjacency, &pMaterialBuffer, NULL,
                                 &dwcMaterials, &g_pMesh ) );

    // generate adjacency
    DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
    g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

    // create light map coordinates
    LPD3DXMESH pMesh = NULL;
    LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
    FLOAT resultStretch = 0;
    UINT numCharts = 0;
    hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                            NULL, NULL, NULL, NULL, NULL, 0,
                            &pMesh, &pFacePartitioning, &pVertexRemapArray,
                            &resultStretch, &numCharts );
    if( SUCCEEDED( hr ) )
    {
        // release and set mesh
        SAFE_RELEASE( g_pMesh );
        g_pMesh = pMesh;

        // write mesh to file
        hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                              ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                              NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        // fill the light map
        hr = BuildLightmap( pS, g_pMesh );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
        }
    }
    else
    {
        DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
    }
    SAFE_RELEASE( pS );
    SAFE_DELETE_ARRAY( pdwAdjacency );
    SAFE_RELEASE( pFacePartitioning );
    SAFE_RELEASE( pVertexRemapArray );
    SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

    HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
    {
        HRESULT hr = S_OK;

        // validate lightmap texture surface and mesh
        if( !pS || !pMesh )
            return E_POINTER;

        // lock the mesh vertex buffer
        sVertexPosNormTex *pV = NULL;
        pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

        // lock the mesh index buffer
        WORD *pI = NULL;
        pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

        // get the lightmap texture surface description
        D3DSURFACE_DESC desc;
        pS->GetDesc( &desc );

        // lock the surface rect to fill with color data
        D3DLOCKED_RECT rct;
        hr = pS->LockRect( &rct, NULL, 0 );
        if( FAILED( hr ) )
        {
            DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
            return hr;
        }

        // iterate the pixels of the lightmap texture
        // check each pixel to see if it lies between the uv coordinates of a cube face
        BYTE *pBuffer = ( BYTE* )rct.pBits;
        for( UINT y = 0; y < desc.Height; ++y )
        {
            BYTE* pBufferRow = ( BYTE* )pBuffer;
            for( UINT x = 0; x < desc.Width * 4; x += 4 )
            {
                // determine the pixel's uv coordinate
                D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                               y / ( float )desc.Height + 0.5f / 128.0f );

                // for each face of the mesh,
                // check to see if the pixel lies within the face's uv coordinates
                for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                {
                    sVertexPosNormTex v[ 3 ];
                    v[ 0 ] = pV[ pI[ i + 0 ] ];
                    v[ 1 ] = pV[ pI[ i + 1 ] ];
                    v[ 2 ] = pV[ pI[ i + 2 ] ];
                    if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                    {
                        // the pixel lies b/t the uv coordinates of a cube face
                        // light contribution functions aren't needed yet
                        //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                        //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                        //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                        // set the color of this pixel red (for demo)
                        BYTE ba[] = { 0, 0, 255, 255, };
                        //ComputeContribution( vPos, vNormal, g_sLight, ba );

                        // copy the byte array into the light map texture
                        memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                    }
                }
            }
            // go to next line of the texture
            pBuffer += rct.Pitch;
        }

        // unlock the surface rect
        pS->UnlockRect();

        // unlock mesh vertex and index buffers
        pMesh->UnlockIndexBuffer();
        pMesh->UnlockVertexBuffer();

        // write the surface to file
        hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
        if( FAILED( hr ) )
            DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

        return hr;
    }

    bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                 const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
    {
        // compute vectors
        D3DXVECTOR2 v0 = t1 - t0,
                    v1 = t2 - t0,
                    v2 = p - t0;
        float f00 = D3DXVec2Dot( &v0, &v0 );
        float f01 = D3DXVec2Dot( &v0, &v1 );
        float f02 = D3DXVec2Dot( &v0, &v2 );
        float f11 = D3DXVec2Dot( &v1, &v1 );
        float f12 = D3DXVec2Dot( &v1, &v2 );

        // Compute barycentric coordinates
        float invDenom = 1 / ( f00 * f11 - f01 * f01 );
        float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
        float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

        // Check if point is in triangle
        if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
            return true;
        return false;
    }

    (Images: screenshot, lightmap.)

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture:

    v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
    v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
    v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

    float halfPixel = 0.5 / 128; // = 0.00390625
    D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
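
    For what it's worth, the usual texel-center convention is that texel (x, y) of a W x H surface has its center at ((x + 0.5) / W, (y + 0.5) / H). The loop above divides the byte offset by 4 and then adds 0.5f / 128.0f, which only matches that convention while the surface really is 128 x 128. A sketch of the size-independent form, reusing the desc already queried above:

    // uv at the center of the current texel, valid for any surface size;
    // x is a byte offset into a 4-byte-per-pixel row, hence the / 4.
    D3DXVECTOR2 p( ( ( float )( x / 4 ) + 0.5f ) / ( float )desc.Width,
                   ( ( float )y + 0.5f ) / ( float )desc.Height );

    Visible seams at chart borders are then normally hidden by dilating each chart a few texels outward (a gutter; the 3.5f passed to D3DXUVAtlasCreate above is its fGutter parameter) so that bilinear samples near an edge never read unwritten texels.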

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader so I could check that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/

    Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture:

    "#version 330 \n \
    \n \
    layout(std140) uniform; \n \
    \n \
    const int MAX_LIGHTS = 8; \n \
    \n \
    struct Light \n \
    { \n \
        vec4 mLightColor; \n \
        vec4 mLightPosition; \n \
        vec4 mLightDirection; \n \
    \n \
        int mLightType; \n \
        float mLightIntensity; \n \
        float mLightRadius; \n \
        float mMaxDistance; \n \
    }; \n \
    \n \
    uniform UnifLighting \n \
    { \n \
        vec4 mGamma; \n \
        vec3 mViewDirection; \n \
        int mNumLights; \n \
    \n \
        Light mLights[MAX_LIGHTS]; \n \
    } Lighting; \n \
    \n \
    uniform UnifMaterial \n \
    { \n \
        vec4 mDiffuseColor; \n \
        vec4 mAmbientColor; \n \
        vec4 mSpecularColor; \n \
        vec4 mEmissiveColor; \n \
    \n \
        bool mHasDiffuseTexture; \n \
        bool mHasNormalTexture; \n \
        bool mLightingEnabled; \n \
        float mSpecularShininess; \n \
    } Material; \n \
    \n \
    uniform sampler2D unifDiffuseTexture; \n \
    uniform sampler2D unifNormalTexture; \n \
    \n \
    in vec3 frag_position; \n \
    in vec3 frag_normal; \n \
    in vec2 frag_texcoord; \n \
    in vec3 frag_tangent; \n \
    in vec3 frag_bitangent; \n \
    \n \
    out vec4 finalColor; "

    " \n \
    \n \
    void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \
    { \n \
        vec3 viewDirection = normalize(Lighting.mViewDirection); \n \
        vec3 halfAngle = normalize(dirToLight + viewDirection); \n \
    \n \
        float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \
        float exponent = angleNormalHalf / Material.mSpecularShininess; \n \
        exponent = -(exponent * exponent); \n \
    \n \
        gaussianTerm = exp(exponent); \n \
    } \n \
    \n \
    vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \
    { \n \
        if (light.mLightType == 1) // point light \n \
        { \n \
            vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \
            float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \
    \n \
            float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \
            attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \
    \n \
            vec3 dirToLight = normalize(positionDiff); \n \
            float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
    \n \
            float gaussianTerm = 0.0; \n \
            if (angleNormal > 0.0) \n \
                CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
    \n \
            return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                   (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
        } \n \
        else if (light.mLightType == 2) // directional light \n \
        { \n \
            vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \
            float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
    \n \
            float gaussianTerm = 0.0; \n \
            if (angleNormal > 0.0) \n \
                CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
    \n \
            return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                   (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
        } \n \
        else if (light.mLightType == 4) // ambient light \n \
            return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \
        else \n \
            return vec4(0.0); \n \
    } \n \
    \n \
    void main() \n \
    { \n \
        vec4 diffuseTexture = vec4(1.0); \n \
        if (Material.mHasDiffuseTexture) \n \
            diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \
    \n \
        vec3 normal = frag_normal; \n \
        if (Material.mHasNormalTexture) \n \
        { \n \
            diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \
            // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \
            //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \
    \n \
            // normal = tangentToWorldSpace * normalTangentSpace; \n \
        } \n \
    \n \
        if (Material.mLightingEnabled) \n \
        { \n \
            vec4 accumLighting = vec4(0.0); \n \
    \n \
            for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \
                accumLighting += Material.mEmissiveColor * diffuseTexture + \n \
                                 CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \
    \n \
            finalColor = pow(accumLighting, Lighting.mGamma); \n \
        } \n \
        else { \n \
            finalColor = pow(diffuseTexture, Lighting.mGamma); \n \
        } \n \
    } \n";

    Here is my wrapper around a texture:

    OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth,
                                 uint32_t textureHeight, TextureFormat textureFormat,
                                 TextureType textureType, Logger& logger)
        : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
    {
        glGenTextures(1, &mTexture);
        CHECK_GL_ERROR(mLogger);
        glBindTexture(GL_TEXTURE_2D, mTexture);
        CHECK_GL_ERROR(mLogger);
        GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB :
                                 textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED);
        glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0,
                     glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
        CHECK_GL_ERROR(mLogger);
        glGenerateMipmap(GL_TEXTURE_2D);
        CHECK_GL_ERROR(mLogger);
        glBindTexture(GL_TEXTURE_2D, 0);
        CHECK_GL_ERROR(mLogger);
    }

    OpenGLTexture::~OpenGLTexture()
    {
        glDeleteBuffers(1, &mTexture);
        CHECK_GL_ERROR(mLogger);
    }

    And here is the sampler I create, which is shared between diffuse and normal textures:

    // texture sampler setup
    glGenSamplers(1, &mTextureSampler);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
    CHECK_GL_ERROR(mLogger);
    glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy);
    CHECK_GL_ERROR(mLogger);
    glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE);
    CHECK_GL_ERROR(mLogger);
    glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL);
    CHECK_GL_ERROR(mLogger);
    glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler);
    CHECK_GL_ERROR(mLogger);
    glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler);
    CHECK_GL_ERROR(mLogger);
    SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks so weird. Why is this?
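
    Two tentative observations. First, a tangent-space normal map viewed as diffuse color is supposed to look blue/purple: a normal near +Z encodes as RGB roughly (0.5, 0.5, 1.0), so the smurf look alone does not prove the texture is loaded wrongly. Second, the destructor above releases the texture with glDeleteBuffers, which frees buffer objects, not texture names. A sketch of the corrected destructor:

    // Texture names created by glGenTextures must be freed
    // with glDeleteTextures, not glDeleteBuffers.
    OpenGLTexture::~OpenGLTexture()
    {
        glDeleteTextures(1, &mTexture);
        CHECK_GL_ERROR(mLogger);
    }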

  • Ragdoll continuous movement

    - by Siddharth
    I have created a ragdoll for my game, but the problem I found is that the ragdoll joints are not perfectly implemented, so they are continuously moving; the ragdoll does not stand in a fixed place. I paste my work for it here; please suggest some guidance so that it can stand in a fixed place.

    chest = new Chest(pX, pY, gameObject.getmChestTextureRegion(), gameObject);
    head = new Head(pX, pY - 16, gameObject.getmHeadTextureRegion(), gameObject);
    leftHand = new Hand(pX - 6, pY + 6, gameObject.getmHandTextureRegion().clone(), gameObject);
    rightHand = new Hand(pX + 12, pY + 6, gameObject.getmHandTextureRegion().clone(), gameObject);
    rightHand.setFlippedHorizontal(true);
    leftLeg = new Leg(pX, pY + 18, gameObject.getmLegTextureRegion().clone(), gameObject);
    rightLeg = new Leg(pX + 7, pY + 18, gameObject.getmLegTextureRegion().clone(), gameObject);
    rightLeg.setFlippedHorizontal(true);

    gameObject.getmScene().registerTouchArea(chest);
    gameObject.getmScene().attachChild(chest);
    gameObject.getmScene().registerTouchArea(head);
    gameObject.getmScene().attachChild(head);
    gameObject.getmScene().registerTouchArea(leftHand);
    gameObject.getmScene().attachChild(leftHand);
    gameObject.getmScene().registerTouchArea(rightHand);
    gameObject.getmScene().attachChild(rightHand);
    gameObject.getmScene().registerTouchArea(leftLeg);
    gameObject.getmScene().attachChild(leftLeg);
    gameObject.getmScene().registerTouchArea(rightLeg);
    gameObject.getmScene().attachChild(rightLeg);

    // head revolute joint
    revoluteJointDef = new RevoluteJointDef();
    revoluteJointDef.enableLimit = true;
    revoluteJointDef.initialize(head.getHeadBody(), chest.getChestBody(),
            chest.getChestBody().getWorldCenter());
    revoluteJointDef.localAnchorA.set(0f, 0f);
    revoluteJointDef.localAnchorB.set(0f, -0.5f);
    revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
    revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
    headRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld().createJoint(revoluteJointDef);

    // left leg revolute joint
    revoluteJointDef.initialize(leftLeg.getLegBody(), chest.getChestBody(),
            chest.getChestBody().getWorldCenter());
    revoluteJointDef.localAnchorA.set(0f, 0f);
    revoluteJointDef.localAnchorB.set(-0.15f, 0.75f);
    revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
    revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
    leftLegRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld().createJoint(revoluteJointDef);

    // right leg revolute joint
    revoluteJointDef.initialize(rightLeg.getLegBody(), chest.getChestBody(),
            chest.getChestBody().getWorldCenter());
    revoluteJointDef.localAnchorA.set(0f, 0f);
    revoluteJointDef.localAnchorB.set(0.15f, 0.75f);
    revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
    revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
    rightLegRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld().createJoint(revoluteJointDef);

    // left hand revolute joint
    revoluteJointDef.initialize(leftHand.getHandBody(), chest.getChestBody(),
            chest.getChestBody().getWorldCenter());
    revoluteJointDef.localAnchorA.set(0f, 0f);
    revoluteJointDef.localAnchorB.set(-0.25f, 0.1f);
    revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
    revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
    leftHandRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld().createJoint(revoluteJointDef);

    // right hand revolute joint
    revoluteJointDef.initialize(rightHand.getHandBody(), chest.getChestBody(),
            chest.getChestBody().getWorldCenter());
    revoluteJointDef.localAnchorA.set(0f, 0f);
    revoluteJointDef.localAnchorB.set(0.25f, 0.1f);
    revoluteJointDef.lowerAngle = (float) (0f / (180 / Math.PI));
    revoluteJointDef.upperAngle = (float) (0f / (180 / Math.PI));
    rightHandRevoluteJoint = (RevoluteJoint) gameObject.getmPhysicsWorld().createJoint(revoluteJointDef);
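
    A guess at the cause, since the joint setup above actually locks every limit to zero: in Box2D ragdolls the limbs usually overlap near the joints, and unless their fixtures are told not to collide, the contact solver keeps pushing the overlapping parts apart every step, which shows up as endless twitching. In Box2D's native C++ API, the standard cure is a shared negative collision group, plus real joint limits instead of [0, 0]:

    // Fixtures that share a negative groupIndex never collide with each
    // other, so overlapping limbs stop fighting the joints every step.
    b2FixtureDef fixtureDef;
    fixtureDef.filter.groupIndex = -1;   // same group for every body part

    // enableLimit with lowerAngle == upperAngle == 0 freezes the joint;
    // a ragdoll joint would normally allow a range, e.g. +/- 45 degrees.
    b2RevoluteJointDef jointDef;
    jointDef.enableLimit = true;
    jointDef.lowerAngle = -0.25f * b2_pi;
    jointDef.upperAngle =  0.25f * b2_pi;

    Some angular damping on each body (SetAngularDamping) also helps the figure settle instead of oscillating.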

  • What's wrong with this turn to face algorithm?

    - by Chan
    I'm implementing a torpedo object that chases a rotating planet; specifically, it turns toward the planet on each update. Initially my implementation was:

    void move()
    {
        vector3<float> to_target = target - get_position();
        to_target.normalize();
        position += (to_target * speed);
    }

    which works perfectly for a torpedo that is a solid sphere. Now my torpedo is actually a model, which has a forward vector, so using this method looks odd because it doesn't actually turn toward the target but jumps toward it. So I revised it a bit to get:

    double get_rotation_angle(vector3<float> u, vector3<float> v) const
    {
        u.normalize();
        v.normalize();
        double cosine_theta = u.dot(v);
        // domain of arccosine is [-1, 1]
        if (cosine_theta > 1)
        {
            cosine_theta = 1;
        }
        if (cosine_theta < -1)
        {
            cosine_theta = -1;
        }
        return math3d::to_degree(acos(cosine_theta));
    }

    vector3<float> get_rotation_axis(vector3<float> u, vector3<float> v) const
    {
        u.normalize();
        v.normalize();
        // fix linear case
        if (u == v || u == -v)
        {
            v[0] += 0.05;
            v[1] += 0.0;
            v[2] += 0.05;
            v.normalize();
        }
        vector3<float> axis = u.cross(v);
        return axis.normal();
    }

    void turn_to_face()
    {
        vector3<float> to_target = (target - position);
        vector3<float> axis = get_rotation_axis(get_forward(), to_target);
        double angle = get_rotation_angle(get_forward(), to_target);
        double distance = math3d::distance(position, target);
        gl_matrix_mode(GL_MODELVIEW);
        gl_push_matrix();
        {
            gl_load_identity();
            gl_translate_f(position.get_x(), position.get_y(), position.get_z());
            gl_rotate_f(angle, axis.get_x(), axis.get_y(), axis.get_z());
            gl_get_float_v(GL_MODELVIEW_MATRIX, OM);
        }
        gl_pop_matrix();
        move();
    }

    void move()
    {
        vector3<float> to_target = target - get_position();
        to_target.normalize();
        position += (get_forward() * speed);
    }

    The logic is simple: I find the rotation axis by cross product and the angle to rotate by dot product, then turn toward the target position on each update. Unfortunately, it looks extremely odd, since the rotation happens so fast that it always turns back and forth. The forward vector for the torpedo comes from the ModelView matrix, the third column A:

    MODELVIEW MATRIX
    --------------------------------------------------
    R U A T
    --------------------------------------------------
    1 0 0 0
    0 1 0 0
    0 0 1 0
    0 0 0 1
    --------------------------------------------------

    Any suggestion or idea would be greatly appreciated.
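
    A pattern that usually removes this kind of back-and-forth flipping (a sketch of the idea, not a drop-in fix): cap the rotation applied per update at a maximum turn rate instead of rotating through the full angle to the target in one step, so the nose converges smoothly instead of overshooting every frame:

    // Clamp each update's rotation to a maximum turn rate.
    // maxTurnRate is a hypothetical tuning constant (degrees per update).
    const double maxTurnRate = 5.0;
    double angle = get_rotation_angle(get_forward(), to_target);
    if (angle > maxTurnRate)
        angle = maxTurnRate;

    Accumulating this clamped rotation into the stored orientation matrix, rather than rebuilding it from identity with gl_load_identity() every frame, also keeps earlier turns from being thrown away.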

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position. I mean that the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

    VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
    {
        this->Position.z = this->Position.z * 0.9f;
        D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
    }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position, but I can't find out how to calculate the offset to move my drawing origin by. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED: OK, I solved my problem. Here is the code if anyone is interested:

    VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
    {
        // Get the setting of the current view port.
        D3DVIEWPORT9 ViewPort;
        this->Direct3DDevice->GetViewport(&ViewPort);

        // Convert the screen coordinates of the mouse to world space coordinates.
        D3DXVECTOR3 VectorOne;
        D3DXVECTOR3 VectorTwo;
        D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                          &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
        D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                          &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

        // Calculate the resulting vector components.
        float WorldZ = 0.0f;
        float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
        float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

        // Move the camera into the screen.
        this->Position.z = this->Position.z * 0.9f;
        D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

        // Calculate the world space vector again based on the new view matrix.
        D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                          &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
        D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                          &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

        // Calculate the resulting vector components.
        float WorldZ2 = 0.0f;
        float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
        float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

        // Create a temporary translation matrix for calculating the origin offset.
        D3DXMATRIX TranslationMatrix;
        D3DXMatrixIdentity(&TranslationMatrix);

        // Calculate the origin offset.
        D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

        // Add the offset to the camera's world matrix.
        this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
    }

    Maybe someone has an even better solution than mine.

  • Stored Procedure call with parameters in ASP.NET MVC

    - by cc0
    I have a working controller for another stored procedure in the database, but now I am trying to test a new one. When I request the URL

    http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50

    I get the following error message, which is understandable, but I can't seem to find out why it occurs:

    Procedure or function 'esp_GetPlacesWithinGeoSpan' expects parameter '@MinLat', which was not supplied.

    This is the code I am using:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Mvc.Ajax;
    using System.Data;
    using System.Text;
    using System.Data.SqlClient;

    namespace prototype.Controllers
    {
        public class MapController : Controller
        {
            // Initial variable definitions

            // Array with chars to be used with the Trim() methods
            char[] lastComma = { ',' };

            // Minimum and maximum lat/longs for queries
            float _minLat;
            float _maxLat;
            float _minLng;
            float _maxLng;

            // Creates stringbuilder object to store SQL results
            StringBuilder json = new StringBuilder();

            // Defines which SQL server to connect to, which database, and which user
            SqlConnection con = new SqlConnection(...connection string here...);

            //
            // HTTP-GET: /Map/
            public string CallProcedure_getPlaces(float minLat, float maxLat, float minLng, float maxLng)
            {
                con.Open();
                using (SqlCommand cmd = new SqlCommand("esp_GetPlacesWithinGeoSpan", con))
                {
                    cmd.CommandType = CommandType.Text;
                    cmd.Parameters.AddWithValue("@MinLat", _minLat);
                    cmd.Parameters.AddWithValue("@MaxLat", _maxLat);
                    cmd.Parameters.AddWithValue("@MinLng", _minLng);
                    cmd.Parameters.AddWithValue("@MaxLng", _maxLng);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            json.AppendFormat("\"{0}\":{{\"c\":{1},\"f\":{2}}},",
                                reader["PlaceID"], reader["PlaceName"], reader["SquareID"]);
                        }
                    }
                    con.Close();
                }
                return "{" + json.ToString().TrimEnd(lastComma) + "}";
            }

            // http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50
            public ActionResult Index(float minLat, float maxLat, float minLng, float maxLng)
            {
                _minLat = minLat;
                _maxLat = maxLat;
                _minLng = minLng;
                _maxLng = maxLng;
                return Content(CallProcedure_getPlaces(_minLat, _maxLat, _minLng, _maxLng));
            }
        }
    }

    Any help on resolving this problem would be greatly appreciated.
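
    The usual cause of exactly this error (offered as a likely fix, not a verified one) is the command type: the code executes the bare procedure name with CommandType.Text, so SQL Server runs it as an ad-hoc batch that calls the procedure without binding the added parameters. The conventional one-line change, in this entry's own C#:

    // Bind @MinLat etc. as stored-procedure parameters instead of
    // executing "esp_GetPlacesWithinGeoSpan" as a plain text batch.
    cmd.CommandType = CommandType.StoredProcedure;

    Note also that CallProcedure_getPlaces ignores its own minLat/maxLat/minLng/maxLng arguments and reads the _minLat/_maxLat/_minLng/_maxLng fields instead; that only works because Index happens to assign the fields first.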

  • Issues with HLSL and lighting

    - by numerical25
    I am trying figure out whats going on with my HLSL code but I have no way of debugging it cause C++ gives off no errors. The application just closes when I run it. I am trying to add lighting to a 3d plane I made. below is my HLSL. The problem consist when my Pixel shader method returns the struct "outColor" . If I change the return value back to the struct "psInput" , everything goes back to working again. My light vectors and colors are at the top of the fx file // PS_INPUT - input variables to the pixel shader // This struct is created and fill in by the // vertex shader cbuffer Variables { matrix Projection; matrix World; float TimeStep; }; struct PS_INPUT { float4 Pos : SV_POSITION; float4 Color : COLOR0; float3 Normal : TEXCOORD0; float3 ViewVector : TEXCOORD1; }; float specpower = 80.0f; float3 camPos = float3(0.0f, 9.0, -256.0f); float3 DirectLightColor = float3(1.0f, 1.0f, 1.0f); float3 DirectLightVector = float3(0.0f, 0.602f, 0.70f); float3 AmbientLightColor = float3(1.0f, 1.0f, 1.0f); /*************************************** * Lighting functions ***************************************/ /********************************* * CalculateAmbient - * inputs - * vKa material's reflective color * lightColor - the ambient color of the lightsource * output - ambient color *********************************/ float3 CalculateAmbient(float3 vKa, float3 lightColor) { float3 vAmbient = vKa * lightColor; return vAmbient; } /********************************* * CalculateDiffuse - * inputs - * material color * The color of the direct light * the local normal * the vector of the direct light * output - difuse color *********************************/ float3 CalculateDiffuse(float3 baseColor, float3 lightColor, float3 normal, float3 lightVector) { float3 vDiffuse = baseColor * lightColor * saturate(dot(normal, lightVector)); return vDiffuse; } /********************************* * CalculateSpecular - * inputs - * viewVector * the direct light vector * the normal * output - specular highlight *********************************/ float CalculateSpecular(float3 viewVector, float3 lightVector, float3 normal) { float3 vReflect = reflect(lightVector, normal); float fSpecular = saturate(dot(vReflect, viewVector)); fSpecular = pow(fSpecular, specpower); return fSpecular; } /********************************* * LightingCombine - * inputs - * ambient component * diffuse component * specualr component * output - phong color color *********************************/ float3 LightingCombine(float3 vAmbient, float3 vDiffuse, float fSpecular) { float3 vCombined = vAmbient + vDiffuse + fSpecular.xxx; return vCombined; } //////////////////////////////////////////////// // Vertex Shader - Main Function /////////////////////////////////////////////// PS_INPUT VS(float4 Pos : POSITION, float4 Color : COLOR, float3 Normal : NORMAL) { PS_INPUT psInput; float4 newPosition; newPosition = Pos; newPosition.y = sin((newPosition.x * TimeStep) + (newPosition.z / 3.0f)) * 5.0f; // Pass through both the position and the color psInput.Pos = mul(newPosition , Projection ); psInput.Color = Color; psInput.ViewVector = normalize(camPos - psInput.Pos); return psInput; } /////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////// //Anthony!!!!!!!!!!! 
Find out how color works when multiplying them float4 PS(PS_INPUT psInput) : SV_Target { float3 normal = -normalize(psInput.Normal); float3 vAmbient = CalculateAmbient(psInput.Color, AmbientLightColor); float3 vDiffuse = CalculateDiffuse(psInput.Color, DirectLightColor, normal, DirectLightVector); float fSpecular = CalculateSpecular(psInput.ViewVector, DirectLightVector, normal); float4 outColor; outColor.rgb = LightingCombine(vAmbient, vDiffuse, fSpecular); outColor.a = 1.0f; //Below is where the error begins return outColor; } // Define the technique technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); } } Below is some of my c++ code. Reason I am showing this is because it is pretty much what creates the surface normals for my shaders to evaluate. for the lighting for(int z=0; z < NUM_ROWS; ++z) { for(int x = 0; x < NUM_COLS; ++x) { int curVertex = x + (z * NUM_VERTSX); indices[curIndex] = curVertex; indices[curIndex + 1] = curVertex + NUM_VERTSX; indices[curIndex + 2] = curVertex + 1; D3DXVECTOR3 v0 = vertices[indices[curIndex]].pos; D3DXVECTOR3 v1 = vertices[indices[curIndex + 1]].pos; D3DXVECTOR3 v2 = vertices[indices[curIndex + 2]].pos; D3DXVECTOR3 normal; D3DXVECTOR3 cross; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex]].normal = normal; vertices[indices[curIndex + 1]].normal = normal; vertices[indices[curIndex + 2]].normal = normal; indices[curIndex + 3] = curVertex + 1; indices[curIndex + 4] = curVertex + NUM_VERTSX; indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; v0 = vertices[indices[curIndex + 3]].pos; v1 = vertices[indices[curIndex + 4]].pos; v2 = vertices[indices[curIndex + 5]].pos; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex + 3]].normal = normal; vertices[indices[curIndex + 4]].normal = normal; vertices[indices[curIndex + 5]].normal = normal; curIndex += 6; } } and below is my c++ code, in it's entirety. showing the drawing and also calling on the passes #include "MyGame.h" //#include "CubeVector.h" /* This code sets a projection and shows a turning cube. What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. 
The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; //grid information #define NUM_COLS 16 #define NUM_ROWS 16 #define CELL_WIDTH 32 #define CELL_HEIGHT 32 #define NUM_VERTSX (NUM_COLS + 1) #define NUM_VERTSY (NUM_ROWS + 1) // timer variables LARGE_INTEGER timeStart; LARGE_INTEGER timeEnd; LARGE_INTEGER timerFreq; double currentTime; float anim_rate; // Variable to hold how long since last frame change float lastElaspedFrame = 0; // How long should the frames last float frameDuration = 0.5; bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } // Get the timer frequency QueryPerformanceFrequency(&timerFreq); float freqSeconds = 1.0f / timerFreq.QuadPart; lastElaspedFrame = 0; D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(200.0f, 60.0f, -20.0f), new D3DXVECTOR3(200.0f, 50.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); pTimeVariable = NULL; if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngle = 0.0f; // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&WorldMatrix, rotationAngle); rotationAngle += (float)D3DX_PI * 0.0f; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0); pTimeVariable->SetFloat((float)currentTime); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles 
mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0); } } //Render actually encapsulates GameDraw, so you can call data before you actually clear the buffer or after you //present data void MyGame::Render() { // Get the start timer count QueryPerformanceCounter(&timeStart); currentTime += anim_rate; DX3dApp::Render(); QueryPerformanceCounter(&timeEnd); anim_rate = ( (float)timeEnd.QuadPart - (float)timeStart.QuadPart ) / timerFreq.QuadPart; } bool MyGame::CreateObject() { VertexPos vertices[NUM_VERTSX * NUM_VERTSY]; for(int z=0; z < NUM_VERTSY; ++z) { for(int x = 0; x < NUM_VERTSX; ++x) { vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH; vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT; vertices[x + z * NUM_VERTSX].pos.y = (float)(rand() % CELL_HEIGHT); vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f); } } DWORD indices[NUM_VERTSX * NUM_VERTSY * 6]; int curIndex = 0; for(int z=0; z < NUM_ROWS; ++z) { for(int x = 0; x < NUM_COLS; ++x) { int curVertex = x + (z * NUM_VERTSX); indices[curIndex] = curVertex; indices[curIndex + 1] = curVertex + NUM_VERTSX; indices[curIndex + 2] = curVertex + 1; D3DXVECTOR3 v0 = vertices[indices[curIndex]].pos; D3DXVECTOR3 v1 = vertices[indices[curIndex + 1]].pos; D3DXVECTOR3 v2 = vertices[indices[curIndex + 2]].pos; D3DXVECTOR3 normal; D3DXVECTOR3 cross; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex]].normal = normal; vertices[indices[curIndex + 1]].normal = normal; vertices[indices[curIndex + 2]].normal = normal; indices[curIndex + 3] = curVertex + 1; indices[curIndex + 4] = curVertex + NUM_VERTSX; indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; v0 = vertices[indices[curIndex + 3]].pos; v1 = vertices[indices[curIndex + 4]].pos; v2 = vertices[indices[curIndex + 5]].pos; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex + 3]].normal = normal; vertices[indices[curIndex + 4]].normal = normal; vertices[indices[curIndex + 5]].normal = normal; curIndex += 6; } } //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 28, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; modelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer); if(FAILED(hr)) return false; ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL,
"fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); pTimeVariable = modelObject.pEffect->GetVariableByName("TimeStep")->AsScalar(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; }

    Read the article

  • subfig package in latex

    - by Tim
    Hi, when I am using the subfig package in LaTeX, it gives some errors: Package subfig Warning: Your document class has a bad definition of \endfigure, most likely \let\endfigure=\end@float which has now been changed to \def\endfigure{\end@float} because otherwise subsequent changes to \end@float (like done by several packages changing float behaviour) can't take effect on \endfigure. Please complain to your document class author. Package subfig Warning: Your document class has a bad definition of \endtable, most likely \let\endtable=\end@float which has now been changed to \def\endtable{\end@float} because otherwise subsequent changes to \end@float (like done by several packages changing float behaviour) can't take effect on \endtable. Please complain to your document class author. (/usr/share/texmf/tex/latex/caption/caption.sty `rotating' package detected `float' package detected ) LaTeX Warning: You have requested, on input line 139, version `2005/06/26' of package caption, but only version `1995/04/05 v1.4b caption package (AS)' is available. ! Undefined control sequence. l.163 \DeclareCaptionOption {listofformat}{\caption@setlistofformat{#1}} How can I solve it? Thanks and regards!
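    The \endfigure/\endtable warnings are harmless; the fatal part is the last two messages: an outdated caption.sty (1995/04/05 v1.4b) is being picked up from /usr/share/texmf even though subfig requires the 2005/06/26 rewrite, so \DeclareCaptionOption does not exist. Updating the caption package, or removing the stale copy from the old texmf tree, is the likely fix. A minimal preamble sketch that at least surfaces the version problem at the right place:

        \documentclass{article}
        % request the caption release that subfig depends on; an old copy on
        % the search path now fails loudly here instead of inside subfig
        \usepackage{caption}[2005/06/26]
        \usepackage{subfig}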

    Read the article

  • Code golf: the Mandelbrot set

    - by Stefano Borini
    Usual rules for the code golf. Here is an implementation in python as an example from PIL import Image im = Image.new("RGB", (300,300)) for i in xrange(300): print "i = ",i for j in xrange(300): x0 = float( 4.0*float(i-150)/300.0 -1.0) y0 = float( 4.0*float(j-150)/300.0 +0.0) x=0.0 y=0.0 iteration = 0 max_iteration = 1000 while (x*x + y*y <= 4.0 and iteration < max_iteration): xtemp = x*x - y*y + x0 y = 2.0*x*y+y0 x = xtemp iteration += 1 if iteration == max_iteration: value = 255 else: value = iteration*10 % 255 print value im.putpixel( (i,j), (value, value, value)) im.save("image.png", "PNG") The result should look like this Use of an image library is allowed. Alternatively, you can use ASCII art. This code does the same for i in xrange(40): line = [] for j in xrange(80): x0 = float( 4.0*float(i-20)/40.0 -1.0) y0 = float( 4.0*float(j-40)/80.0 +0.0) x=0.0 y=0.0 iteration = 0 max_iteration = 1000 while (x*x + y*y <= 4.0 and iteration < max_iteration): xtemp = x*x - y*y + x0 y = 2.0*x*y+y0 x = xtemp iteration += 1 if iteration == max_iteration: line.append(" ") else: line.append("*") print "".join(line) The result ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** **************************************** *************************************** **************************************** *************************************** **************************************** *************************************** **************************************** *************************************** **************************************** *************************************** **************************************** *************************************** **************************************** *************************************** *************************************** ************************************** ************************************* ************************************ ************************************ *********************************** *********************************** ********************************** ************************************ *********************************** ************************************* ************************************ *********************************** ********************************** ******************************** ******************************* **************************** *************************** ***************************** **************************** **************************** *************************** ************************ * * *********************** *********************** * * ********************** ******************** ******* ******* ******************* **************************** *************************** ****************************** ***************************** 
***************************** * * * **************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ******************************************************************************** ********************************************************************************
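    For comparison, a rough (ungolfed) C++ port of the ASCII variant above, using the same mapping of the 40x80 grid onto x0 in [-3,1) and y0 in [-2,2):

        #include <cstdio>

        int main()
        {
            for (int i = 0; i < 40; ++i)
            {
                for (int j = 0; j < 80; ++j)
                {
                    double x0 = 4.0 * (i - 20) / 40.0 - 1.0;
                    double y0 = 4.0 * (j - 40) / 80.0;
                    double x = 0.0, y = 0.0;
                    int iteration = 0;
                    const int max_iteration = 1000;
                    // z -> z^2 + c until escape (|z| > 2) or the budget runs out
                    while (x * x + y * y <= 4.0 && iteration < max_iteration)
                    {
                        double xtemp = x * x - y * y + x0;
                        y = 2.0 * x * y + y0;
                        x = xtemp;
                        ++iteration;
                    }
                    std::putchar(iteration == max_iteration ? ' ' : '*');
                }
                std::putchar('\n');
            }
            return 0;
        }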

    Read the article

  • Why a graphics overflow problem as a result of a for loop?

    - by sonny5
    using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; using System.Drawing.Imaging; using System.Drawing.Drawing2D; public class Form1 : System.Windows.Forms.Form { public static float WXmin; public static float WYmin; public static float WXmax; public static float WYmax; public static int VXmin; public static int VYmin; public static int VXmax; public static int VYmax; public static float Wx; public static float Wy; public static float Vx; public static float Vy; public Form1() { InitializeComponent(); } private void InitializeComponent() { this.ClientSize = new System.Drawing.Size(400, 300); this.Text="Pass Args"; this.Paint += new System.Windows.Forms.PaintEventHandler(this.doLine); //this.Paint += new System.PaintEventHandler(this.eachCornerPix); //eachCornerPix(out Wx, out Wy, out Vx, out Vy); } static void Main() { Application.Run(new Form1()); } private void doLine(object sender, System.Windows.Forms.PaintEventArgs e) { Graphics g = e.Graphics; g.FillRectangle(Brushes.White, this.ClientRectangle); Pen p = new Pen(Color.Black); g.DrawLine(p, 0, 0, 100, 100); // draw DOWN in y, which is positive since no matrix called eachCornerPix(sender, e, out Wx, out Wy, out Vx, out Vy); p.Dispose(); } private void eachCornerPix (object sender, System.EventArgs e, out float Wx, out float Wy, out float Vx, out float Vy) { Wx = 0.0f; Wy = 0.0f; Vx = 0.0f; Vy = 0.0f; Graphics g = this.CreateGraphics(); Pen penBlu = new Pen(Color.Blue, 2); SolidBrush redBrush = new SolidBrush(Color.Red); int width = 2; // 1 pixel wide in x int height = 2; float [] Wxc = {0.100f, 5.900f, 5.900f, 0.100f}; float [] Wyc = {0.100f, 0.100f, 3.900f, 3.900f}; Console.WriteLine("Wxc[0] = {0}", Wxc[0]); Console.WriteLine("Wyc[3] = {0}", Wyc[3]); /* for (int i = 0; i<3; i++) { Wx = Wxc[i]; Wy = Wyc[i]; Vx = ((Wx - WXmin)*((VXmax-VXmin)+VXmin)/(WXmax-WXmin)); Vy = ((Wy - WYmin)*(VYmax-VYmin)/(WYmax-WYmin)+VYmin); Console.WriteLine("eachCornerPix Vx= {0}", Vx); Console.WriteLine("eachCornerPix Vy= {0}", Vy); g.FillRectangle(redBrush, Vx, Vy, width, height); } */ // What is there about this for loop that will not run? // When the comments above and after the for loop are removed, it gets an overflow? g.Dispose(); } }
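    A likely culprit: the window extents WXmin/WXmax/WYmin/WYmax are declared but never assigned, so they are all 0.0f and the loop divides by (WXmax - WXmin) = 0; the resulting infinite coordinate is what makes GDI+ report an overflow in FillRectangle. The Vx line also misplaces a parenthesis, adding VXmin inside the scale factor instead of after it. A small C++ sketch of the intended window-to-viewport mapping; the names mirror the question, and the extents in main are example values:

        #include <cstdio>

        // maps a window coordinate w in [wMin, wMax] onto [vMin, vMax]
        float mapWindowToViewport(float w, float wMin, float wMax,
                                  float vMin, float vMax)
        {
            float range = wMax - wMin;
            if (range == 0.0f)      // uninitialized extents make this zero,
                return vMin;        // and dividing by it yields infinity
            return (w - wMin) * (vMax - vMin) / range + vMin;  // +vMin belongs outside
        }

        int main()
        {
            // example: world window [0,6]x[0,4] onto a 400x300 viewport
            float vx = mapWindowToViewport(5.9f, 0.0f, 6.0f, 0.0f, 400.0f);
            float vy = mapWindowToViewport(3.9f, 0.0f, 4.0f, 0.0f, 300.0f);
            std::printf("Vx=%f Vy=%f\n", vx, vy);
            return 0;
        }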

    Read the article

  • Opengl Iphone SDK: How to tell if you're touching an object on screen?

    - by TheGambler
    First is my touchesBegan function, and then the struct that stores the values for my objects. I have an array of these objects, and when I touch the screen I'm trying to figure out whether I'm touching one of them. I don't know if I need to do this by iterating through all my objects and testing each one, or if there is an easier, more efficient way. How is this usually handled? -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{ [super touchesEnded:touches withEvent:event]; UITouch* touch = ([touches count] == 1 ? [touches anyObject] : nil); CGRect bounds = [self bounds]; CGPoint location = [touch locationInView:self]; location.y = bounds.size.height - location.y; float xTouched = location.x/20 - 8 + ((int)location.x % 20)/20; float yTouched = location.y/20 - 12 + ((int)location.y % 20)/20; } typedef struct object_tag // Create A Structure Called Object { int tex; // Integer Used To Select Our Texture float x; // X Position float y; // Y Position float z; // Z Position float yi; // Y Increase Speed (Fall Speed) float spinz; // Z Axis Spin float spinzi; // Z Axis Spin Speed float flap; // Flapping Triangles :) float fi; // Flap Direction (Increase Value) } object;
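    Iterating over the object array and testing the touch point against each object's position is the usual approach for a small scene; a spatial structure (grid, quadtree) only pays off with many objects. A brute-force C++ sketch, with the struct reduced to the fields being tested and the pick radius an assumed tuning value:

        #include <cmath>

        // minimal stand-in for the object struct above (only the fields we test)
        struct object { float x, y, z; };

        // returns the index of the first object whose XY distance to the
        // touch point is within pickRadius, or -1 if none was hit
        int pickObject(const object* objs, int count,
                       float xTouched, float yTouched, float pickRadius)
        {
            for (int i = 0; i < count; ++i)
            {
                float dx = objs[i].x - xTouched;
                float dy = objs[i].y - yTouched;
                if (std::sqrt(dx * dx + dy * dy) <= pickRadius)
                    return i;
            }
            return -1;
        }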

    Read the article

  • Why does floating-point precision differ in C# when separated by parentheses and when separated by statements?

    - by Andreas Larsen
    I am aware of how floating point precision works in the regular cases, but I stumbled on an odd situation in my C# code. Why aren't result1 and result2 the exact same floating point value here? const float A; // Arbitrary value const float B; // Arbitrary value float result1 = (A*B)*dt; float result2 = (A*B); result2 *= dt; From this page I figured float arithmetic was left-associative and that this means values are evaluated and calculated in a left-to-right manner. The full source code involves XNA's Quaternions. I don't think it's relevant what my constants are and what the VectorHelper.AddPitchRollYaw() does. The test passes just fine if I calculate the delta pitch/roll/yaw angles in the same manner, but as the code is below it does not pass: X Expected: 0.275153548f But was: 0.275153786f [TestFixture] internal class QuaternionPrecisionTest { [Test] public void Test() { JoystickInput input; input.Pitch = 0.312312432f; input.Roll = 0.512312432f; input.Yaw = 0.912312432f; const float dt = 0.017001f; float pitchRate = input.Pitch * PhysicsConstants.MaxPitchRate; float rollRate = input.Roll * PhysicsConstants.MaxRollRate; float yawRate = input.Yaw * PhysicsConstants.MaxYawRate; Quaternion orient1 = Quaternion.Identity; Quaternion orient2 = Quaternion.Identity; for (int i = 0; i < 10000; i++) { float deltaPitch = (input.Pitch * PhysicsConstants.MaxPitchRate) * dt; float deltaRoll = (input.Roll * PhysicsConstants.MaxRollRate) * dt; float deltaYaw = (input.Yaw * PhysicsConstants.MaxYawRate) * dt; // Add deltas of pitch, roll and yaw to the rotation matrix orient1 = VectorHelper.AddPitchRollYaw( orient1, deltaPitch, deltaRoll, deltaYaw); deltaPitch = pitchRate * dt; deltaRoll = rollRate * dt; deltaYaw = yawRate * dt; orient2 = VectorHelper.AddPitchRollYaw( orient2, deltaPitch, deltaRoll, deltaYaw); } Assert.AreEqual(orient1.X, orient2.X, "X"); Assert.AreEqual(orient1.Y, orient2.Y, "Y"); Assert.AreEqual(orient1.Z, orient2.Z, "Z"); Assert.AreEqual(orient1.W, orient2.W, "W"); } } Granted, the error is small and only presents itself after a large number of iterations, but it has caused me some great headaches.
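    On paper both forms perform the same two float multiplications. They can diverge when the compiler or JIT keeps the (A*B) intermediate in a register wider than 32 bits inside the single expression, but has to round it to a float when it is stored to a variable, which is exactly what the extra statement forces. A C++ sketch of the same effect; the values are placeholders, and whether r1 and r2 actually differ depends on the target (x87 extended precision versus SSE, where both round identically):

        #include <cstdio>

        int main()
        {
            const float A  = 0.312312432f;  // example values only
            const float B  = 2.71828182f;
            const float dt = 0.017001f;

            // single expression: A*B may legally stay at a wider precision
            // until the multiply by dt
            float r1 = (A * B) * dt;

            // forced store: volatile makes the product round to 32 bits
            // before the second multiply
            volatile float t = A * B;
            float r2 = t * dt;

            std::printf("r1=%.9g r2=%.9g equal=%d\n", r1, r2, r1 == r2);
            return 0;
        }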

    Read the article

  • Calculate new position of player

    - by user1439111
    Edit: I will summarize my question since it is very long (thanks Len for pointing it out). What I'm trying to find out is how to get the new position of a player after an X amount of time. The following variables are known: - Speed - Length between the 2 points - Source position (X, Y) - Destination position (X, Y) How can I calculate a position between the source and destination with these variables given? For example: source: 0, 0 destination: 10, 0 speed: 1 so after 1 second the player's position would be 1, 0. The code below works, but it's quite long, so I'm looking for something shorter/more logical. ====================================================================== I'm having a hard time figuring out how to calculate a new position of a player in game. This code is server-sided, used to track a player (it's an emulator, so I don't have access to the client's code). The collision detection of the server works fine; I'm using Bresenham's line algorithm and a raycast to determine at which point a collision happens. Once I've determined the collision, I calculate the length of the path the player is about to walk and also the total time. I would like to know the new position of a player each second. This is the code I'm currently using. It's in C++, but I am porting the server to C# and I haven't written the code in C# yet. // Difference between the source X - destination X // and source Y - destination Y float xDiff, yDiff; xDiff = xDes - xSrc; yDiff = yDes - ySrc; float walkingLength = 0.00F; float NewX = xDiff * xDiff; float NewY = yDiff * yDiff; walkingLength = NewX + NewY; walkingLength = sqrt(walkingLength); const float PI = 3.14159265F; float Angle = 0.00F; if(xDes >= xSrc && yDes >= ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; } else if(xDes < xSrc && yDes >= ySrc) { Angle = atanf((-xDiff / yDiff)); Angle = Angle * 180 / PI; Angle += 90.00F; } else if(xDes < xSrc && yDes < ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; Angle += 180.00F; } else if(xDes >= xSrc && yDes < ySrc) { Angle = atanf((xDiff / -yDiff)); Angle = Angle * 180 / PI; Angle += 270.00F; } float WalkingTime = (float)walkingLength / (float)speed; bool Done = false; float i = 0; while(i < walkingLength) { if(Done == true) { break; } if(WalkingTime >= 1000) { Sleep(1000); i += speed; WalkingTime -= 1000; } else { Sleep(WalkingTime); i += speed * WalkingTime; WalkingTime -= 1000; Done = true; } if(Angle >= 0 && Angle < 90) { float xNew = cosf(Angle * PI / 180) * i; float yNew = sinf(Angle * PI / 180) * i; float NewCharacterX = xSrc + xNew; float NewCharacterY = ySrc + yNew; } I have cut the last part of the loop since it's just 3 other else if statements with 3 other angle conditions, and the only change is the sin and cos. The given speed parameter is the speed/second. The code above works, but as you can see it's quite long, so I'm looking for a new way to calculate this. By the way, don't mind the while loop used to calculate each new position; I'm going to use a timer in C#. Thank you very much
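    The whole angle/quadrant dance can be replaced by moving along the normalized direction vector: the position after t seconds is src + (dst - src) * (speed * t / length). A short C++ sketch:

        #include <cmath>

        struct Vec2 { float x, y; };

        // position after 'elapsed' seconds moving from src toward dst at
        // 'speed' units per second, clamped at the destination
        Vec2 positionAfter(Vec2 src, Vec2 dst, float speed, float elapsed)
        {
            float dx = dst.x - src.x;
            float dy = dst.y - src.y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len == 0.0f)
                return dst;                   // already there
            float t = speed * elapsed / len;  // fraction of the path covered
            if (t > 1.0f) t = 1.0f;           // don't walk past the destination
            return Vec2{ src.x + dx * t, src.y + dy * t };
        }
        // e.g. src=(0,0), dst=(10,0), speed=1: after 1 second this gives (1,0)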

    Read the article

  • Calculix Data Visualiser using Qt

    - by Ann
    I am doing a project on a CalculiX data visualizer, using Qt. I have to draw the structure, and after applying force the displacement should be shown as a variation in color. I chose HSV coloring, but while executing I got an error message: "QColor::fromHsv: HSV parameters out of range". The code is: DataViz1::DataViz1(QWidget *parent) : QWidget(parent), ui(new Ui::DataViz1) { DArea = new QGLScreen(this); DArea->setGeometry(QRect(10,10,700,600)); //TODO These values are fed by the user dfile="/home/41407/color.txt";//input file with displacement mfile="/home/41407/mesh21.txt";//input file nodeId="*NODE"; elId="*ELEMENT"; DataId="displ"; parseMfile(); parseDfile(); DArea->Nodes=Nodes; DArea->Elements=Elements; DArea->Data=Data; DArea->fillColorArray(); //printf("Colr is %d",DArea->pickColor(-11.02,0));fflush(stdout); ui->setupUi(this); } DataViz1::~DataViz1() { delete ui; } void DataViz1::parseMfile() { QFile file(mfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(nodeId))//Node block in Mfile { while(1) { line = in.readLine(); if(line.startsWith(elId)) { break; } Nodes << line; } while(1) { line = in.readLine(); Elements << line; //printf("Element is %s\n",line.toLocal8Bit().constData());fflush(stdout); if(in.atEnd()) break; } } } } void DataViz1::parseDfile() { QFile file(dfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(DataId)) { continue; } line = in.readLine(); Data << line; } } /*......................................................................*/ #include "qglscreen.h" #include GLfloat LightAmbient[]= { 0.5f, 0.5f, 0.5f, 1.0f }; GLfloat LightDiffuse[]= { 1.0f, 1.0f, 1.0f, 1.0f }; GLfloat LightPosition[]= { 0.0f, 0.0f, 2.0f, 1.0f }; QGLScreen::QGLScreen(QWidget *parent):QGLWidget(QGLFormat(QGL::SampleBuffers), parent) { clearColor = Qt::black; xRot = 0; yRot = 0; zRot = 0; #ifdef QT_OPENGL_ES_2 program = 0; #endif //TODO user input ElType="HE8"; DType="SolidFrame"; axis="X"; } QGLScreen::~QGLScreen() { } QSize QGLScreen::minimumSizeHint() const { return QSize(50, 50); } QSize QGLScreen::sizeHint() const { return QSize(200, 200); } void QGLScreen::setClearColor(const QColor &color) { clearColor = color; updateGL(); } void QGLScreen::initializeGL() { xRot=0; yRot=0; zRot=0; scaling = 1.0; /* select clearing (background) color */ glClearColor (0.0, 0.0, 0.0, 0.0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); // glViewport(0,0,10,10); glOrtho(-10.0, +10.0, -10.0, +10.0, -10.0,+10.0); glEnable (GL_LINE_SMOOTH); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); } void QGLScreen::wheel1() { scaling1 += .0025; count2++; update(); } void QGLScreen::wheel2() { if(count2>-14) { scaling1 -= .0025; count2--; update(); } } void QGLScreen::drawModel(int x1,int y1,int x2,int y2) { makeCurrent(); QStringList Cnode,Celement; for (int i = 0; i < Elements.size(); ++i) { Celement=Elements.at(i).split(","); // printf("Element is %s",Celement.at(0).toLocal8Bit().constData());fflush(stdout); //printf("Node at el is %s\n",(findNode(Celement.at(1).toInt())).at(1).toLocal8Bit().constData()); fflush(stdout); if(ElType=="HE8") { //First four nodes float ENX1=(findNode(Celement.at(1).toInt())).at(1).toDouble(); float ENX2=(findNode(Celement.at(2).toInt())).at(1).toDouble(); float ENX3=(findNode(Celement.at(3).toInt())).at(1).toDouble(); float
ENX4=(findNode(Celement.at(4).toInt())).at(1).toDouble(); float ENY1=(findNode(Celement.at(1).toInt())).at(2).toDouble(); float ENY2=(findNode(Celement.at(2).toInt())).at(2).toDouble(); float ENY3=(findNode(Celement.at(3).toInt())).at(2).toDouble(); float ENY4=(findNode(Celement.at(4).toInt())).at(2).toDouble(); float ENZ1=(findNode(Celement.at(1).toInt())).at(3).toDouble(); float ENZ2=(findNode(Celement.at(2).toInt())).at(3).toDouble(); float ENZ3=(findNode(Celement.at(3).toInt())).at(3).toDouble(); float ENZ4=(findNode(Celement.at(4).toInt())).at(3).toDouble(); //Second four Nodes float ENX5=(findNode(Celement.at(5).toInt())).at(1).toDouble(); float ENX6=(findNode(Celement.at(6).toInt())).at(1).toDouble(); float ENX7=(findNode(Celement.at(7).toInt())).at(1).toDouble(); float ENX8=(findNode(Celement.at(8).toInt())).at(1).toDouble(); float ENY5=(findNode(Celement.at(5).toInt())).at(2).toDouble(); float ENY6=(findNode(Celement.at(6).toInt())).at(2).toDouble(); float ENY7=(findNode(Celement.at(7).toInt())).at(2).toDouble(); float ENY8=(findNode(Celement.at(8).toInt())).at(2).toDouble(); float ENZ5=(findNode(Celement.at(5).toInt())).at(3).toDouble(); float ENZ6=(findNode(Celement.at(6).toInt())).at(3).toDouble(); float ENZ7=(findNode(Celement.at(7).toInt())).at(3).toDouble(); float ENZ8=(findNode(Celement.at(8).toInt())).at(3).toDouble(); //Identify Colors GLfloat ENC[8][3]; for(int k=1;k<8;k++) { int hsv=pickColor(findData(Celement.at(k).toInt()).toDouble(),0); //printf("hsv is %d=",hsv);fflush(stdout); getRGB(hsv); //printf("%d*%d*%d\n",red,green,blue); //ENC[k]={red,green,blue}; ENC[k][0]=red; ENC[k][1]=green; ENC[k][2]=blue; } //Plot the first four direct loop if(DType=="WireFrame"){ glBegin(GL_LINE_LOOP); glColor3f(255,0,0); glVertex3f(ENX1,ENY1,ENZ1); glColor3f(255,0,0); glVertex3f(ENX2,ENY2,ENZ2); glColor3f(255,0,0); glVertex3f(ENX3,ENY3,ENZ3); glColor3f(255,0,0); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //Plot the second four direct loop glBegin(GL_LINE_LOOP); glColor3f(0,0,255); glVertex3f(ENX5,ENY5,ENZ5); glColor3f(0,0,255); glVertex3f(ENX6,ENY6,ENZ6); glColor3f(0,0,255); glVertex3f(ENX7,ENY7,ENZ7); glColor3f(0,0,255); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); //Plot the interconnections glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX1,ENY1,ENZ1); glVertex3f(ENX5,ENY5,ENZ5); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX2,ENY2,ENZ2); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX3,ENY3,ENZ3); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX4,ENY4,ENZ4); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } if(DType=="SolidFrame") { glBegin(GL_QUADS); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //break; glBegin(GL_QUADS); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]);
glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } } } } QStringList QGLScreen::findNode(int element) { QStringList Temp; for (int i = 0; i < Nodes.size(); ++i) { Temp=Nodes.at(i).split(","); if(Temp.at(0).toInt()==element) { break; } } return Temp; } QString QGLScreen::findData(int Node) { QString Temp; QRegExp sep("\\s+"); for (int i = 0; i < Data.size(); ++i) { if((Data.at(i).split("\t")).at(0).section(sep,1,1).toInt()==Node) { if(axis=="X") { Temp=Data.at(i).split("\t").at(0).section(sep,2,2); } if(axis=="Y") { Temp=Data.at(i).split("\t").at(0).section(sep,3,3); } if(axis=="Z") { Temp=Data.at(i).split("\t").at(0).section(sep,4,4); } break; } } return Temp; } void QGLScreen::fillColorArray() { QString Temp1,Temp2,Temp3; double d1s=0,d2s=0,d3s=0,d1l=0,d2l=0,d3l=0,diff=0; QRegExp sep("\\s+"); for (int i = 0; i < Data.size(); ++i) { Temp1=(Data.at(i).split("\t")).at(0).section(sep,2,2); if(d1s>Temp1.toDouble()) { d1s=Temp1.toDouble(); } if(d1l<Temp1.toDouble()) { d1l=Temp1.toDouble(); } Temp2=(Data.at(i).split("\t")).at(0).section(sep,3,3); if(d2s>Temp2.toDouble()) { d2s=Temp2.toDouble(); } if(d2l<Temp2.toDouble()) { d2l=Temp2.toDouble(); } Temp3=(Data.at(i).split("\t")).at(0).section(sep,4,4); if(d3s>Temp3.toDouble()) { d3s=Temp3.toDouble(); } if(d3l<Temp3.toDouble()) { d3l=Temp3.toDouble(); } // printf("data is %s",Temp.toLocal8Bit().constData());fflush(stdout); } color[0][0]=d1l; for(int i=1;i<360;i++) { //printf("Large is%f small is %f",d1l,d1s); diff=d1l-d1s; if(d1l==0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; else if(d1l>0&&d1s==0) color[0][i]=color[0][i-1]+diff/360; else if(d1l>0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; diff=d2l-d2s; if(d2l==0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; else if(d2l>0&&d2s==0) color[1][i]=color[1][i-1]+diff/360; else if(d2l>0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; diff=d3l-d3s; if(d3l==0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; else if(d3l>0&&d3s==0) color[2][i]=color[2][i-1]+diff/360; else if(d3l>0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; } //for(int i=0;i<360;i++) printf("%d %f %f %f\n",i,color[0][i],color[1][i],color[2][i]); } int QGLScreen::pickColor(double data,int Did) { int i,pos; if(axis=="X")Did=0; if(axis=="Y")Did=1; if(axis=="Z")Did=2; //printf("%f data is",data);fflush(stdout); for(int i=0;i<360;i++) { if(color[Did][i]<data && data>color[Did][i+1]) { //printf("Orginal dat is %f Data found is %f and pos %d\n",data,color[Did][i],i);fflush(stdout); pos=i; break; } } return pos; } void QGLScreen::getRGB(int hsv) { QColor c; c.setHsv(hsv,255,255,255); QColor r=QColor::fromHsv(hsv,255,255); red=r.red(); green=r.green(); blue=r.blue(); } void QGLScreen::paintGL() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(xRot, 1.0, 0.0, 0.0); glRotatef(yRot, 0.0, 1.0, 0.0);
glRotatef(zRot, 0.0, 0.0, 1.0); drawModel(0,0,1,1); /* don't wait! * start processing buffered OpenGL routines */ glFlush (); } /*void QGLScreen::zoom1() { scaling+=.05; update(); }*/ void QGLScreen::resizeGL(int width, int height) { int side = qMin(width, height); glViewport((width - side) / 2, (height - side) / 2, side, side); #if !defined(QT_OPENGL_ES_2) glMatrixMode(GL_PROJECTION); glLoadIdentity(); #ifndef QT_OPENGL_ES glOrtho(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #else glOrthof(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #endif glMatrixMode(GL_MODELVIEW); #endif } void QGLScreen::mousePressEvent(QMouseEvent *event) { lastPos = event->pos(); } void QGLScreen::mouseMoveEvent(QMouseEvent *event) { GLfloat dx = GLfloat(event->x() - lastPos.x()) / width(); GLfloat dy = GLfloat(event->y() - lastPos.y()) / height(); if (event->buttons() & Qt::LeftButton) { xRot+= 180 * dy; yRot += 180 * dx; update(); } else if (event->buttons() & Qt::RightButton) { xRot += 180 * dy; yRot += 180 * dx; update(); } lastPos = event->pos(); } void QGLScreen::mouseReleaseEvent(QMouseEvent * /* event */) { emit clicked(); }
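    QColor::fromHsv expects the hue in 0..359 (or -1 for achromatic), and pickColor can hand it an out-of-range value: pos is never initialized, so when the loop finds no match the function returns garbage. A small C++/Qt sketch of guarding the hue before the conversion (the wrap policy here is an assumption):

        #include <QColor>

        // clamp/wrap a hue into the 0..359 range QColor::fromHsv accepts;
        // out-of-range input is what triggers "HSV parameters out of range"
        QColor colorFromHue(int hue)
        {
            if (hue < 0)
                hue = 0;        // e.g. pickColor returning an uninitialized pos
            else if (hue > 359)
                hue %= 360;     // wrap anything past the top of the wheel
            return QColor::fromHsv(hue, 255, 255);
        }

    Initializing pos (for example to 0) in pickColor, and bounding its loop so i+1 stays inside the filled color table, would address the same warning at its source.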

    Read the article

  • java quaternion 3D rotation implementation

    - by MRM
    I made a method to rotate a list of points using quaternions, but all I get back as output is the same list I gave it to rotate. Maybe I did not understand the math for 3D rotations correctly, or my code is not implemented the right way; could you give me a hand? This is the method I use: public static ArrayList<Float> rotation3D(ArrayList<Float> points, double angle, int x, int y, int z) { ArrayList<Float> newpoints = points; for (int i=0;i<points.size();i+=3) { float x_old = points.get(i).floatValue(); float y_old = points.get(i+1).floatValue(); float z_old = points.get(i+2).floatValue(); double[] initial = {1,0,0,0}; double[] total = new double[4]; double[] local = new double[4]; //components for local quaternion //w local[0] = Math.cos(0.5 * angle); //x local[1] = x * Math.sin(0.5 * angle); //y local[2] = y * Math.sin(0.5 * angle); //z local[3] = z * Math.sin(0.5 * angle); //components for final quaternion Q1*Q2 //w = w1w2 - x1x2 - y1y2 - z1z2 total[0] = local[0] * initial[0] - local[1] * initial[1] - local[2] * initial[2] - local[3] * initial[3]; //x = w1x2 + x1w2 + y1z2 - z1y2 total[1] = local[0] * initial[1] + local[1] * initial[0] + local[2] * initial[3] - local[3] * initial[2]; //y = w1y2 - x1z2 + y1w2 + z1x2 total[2] = local[0] * initial[2] - local[1] * initial[3] + local[2] * initial[0] + local[3] * initial[1]; //z = w1z2 + x1y2 - y1x2 + z1w2 total[3] = local[0] * initial[3] + local[1] * initial[2] - local[2] * initial[1] + local[3] * initial[0]; //new x,y,z of the 3d point using rotation matrix made from the final quaternion float x_new = (float)((1 - 2 * total[2] * total[2] - 2 * total[3] * total[3]) * x_old + (2 * total[1] * total[2] - 2 * total[0] * total[3]) * y_old + (2 * total[1] * total[3] + 2 * total[0] * total[2]) * z_old); float y_new = (float) ((2 * total[1] * total[2] + 2 * total[0] * total[3]) * x_old + (1 - 2 * total[1] * total[1] - 2 * total[3] * total[3]) * y_old + (2 * total[2] * total[3] + 2 * total[0] * total[1]) * z_old); float z_new = (float) ((2 * total[1] * total[3] - 2 * total[0] * total[2]) * x_old + (2 * total[2] * total[3] - 2 * total[0] * total[1]) * y_old + (1 - 2 * total[1] * total[1] - 2 * total[2] * total[2]) * z_old); newpoints.set(i, x_new); newpoints.set(i+1, y_new); newpoints.set(i+2, z_new); } return newpoints; } For rotation3D(points, 50, 0, 1, 0), where points is: 0.0, 0.0, -9.0; 0.0, 0.0, -11.0; 20.0, 0.0, -11.0; 20.0, 0.0, -9.0; I get back the same list.
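    Two things are worth checking here: newpoints = points copies the reference rather than the list, so the method mutates and returns the very list that was passed in, and Math.sin/Math.cos take radians, so an angle of 50 is not 50 degrees. For cross-checking the matrix entries (the signs on the wx terms in y_new and z_new look off against the standard form), a reference C++ sketch of rotating a point by an axis-angle quaternion, q v q*:

        #include <cmath>
        #include <cstdio>

        struct Quat { double w, x, y, z; };
        struct Vec3 { double x, y, z; };

        // unit quaternion from a unit axis and an angle in radians
        Quat fromAxisAngle(double ax, double ay, double az, double angleRad)
        {
            double s = std::sin(0.5 * angleRad);
            return Quat{ std::cos(0.5 * angleRad), ax * s, ay * s, az * s };
        }

        // matrix form of q * v * conj(q) for a unit quaternion
        Vec3 rotate(const Quat& q, const Vec3& v)
        {
            double xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
            double xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
            double wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
            return Vec3{
                (1 - 2*(yy + zz))*v.x + 2*(xy - wz)*v.y + 2*(xz + wy)*v.z,
                2*(xy + wz)*v.x + (1 - 2*(xx + zz))*v.y + 2*(yz - wx)*v.z,
                2*(xz - wy)*v.x + 2*(yz + wx)*v.y + (1 - 2*(xx + yy))*v.z
            };
        }

        int main()
        {
            const double PI = 3.14159265358979323846;
            // 90 degrees about Y should take (1,0,0) to (0,0,-1)
            Quat q = fromAxisAngle(0, 1, 0, PI / 2);
            Vec3 r = rotate(q, Vec3{1, 0, 0});
            std::printf("%.3f %.3f %.3f\n", r.x, r.y, r.z);
            return 0;
        }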

    Read the article

  • Filling in gaps for outlines

    - by user146780
    I'm using an algorithm to generate quads. These become outlines. The algorithm is: void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input, std::vector<GLfloat> &output, int width) { output.clear(); if(input.size() < 2) { return; } int temp; float dirlen; float perplen; POINTFLOAT start; POINTFLOAT end; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; POINTFLOAT p0, p1, p2, p3; for(unsigned int i = 0; i < input.size() - 1; ++i) { start.x = static_cast<float>(input[i][0]); start.y = static_cast<float>(input[i][1]); end.x = static_cast<float>(input[i + 1][0]); end.y = static_cast<float>(input[i + 1][1]); dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = static_cast<float>(dir.x * 1.0 / dirlen); ndir.y = static_cast<float>(dir.y * 1.0 / dirlen); perp.x = dir.y; perp.y = -dir.x; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = static_cast<float>(perp.x * 1.0 / perplen); nperp.y = static_cast<float>(perp.y * 1.0 / perplen); perpoffset.x = static_cast<float>(nperp.x * width * 0.5); perpoffset.y = static_cast<float>(nperp.y * width * 0.5); diroffset.x = static_cast<float>(ndir.x * 0 * 0.5); diroffset.y = static_cast<float>(ndir.y * 0 * 0.5); // p0 = start + perpoffset - diroffset //p1 = start - perpoffset - diroffset //p2 = end + perpoffset + diroffset // p3 = end - perpoffset + diroffset p0.x = start.x + perpoffset.x - diroffset.x; p0.y = start.y + perpoffset.y - diroffset.y; p1.x = start.x - perpoffset.x - diroffset.x; p1.y = start.y - perpoffset.y - diroffset.y; p2.x = end.x + perpoffset.x + diroffset.x; p2.y = end.y + perpoffset.y + diroffset.y; p3.x = end.x - perpoffset.x + diroffset.x; p3.y = end.y - perpoffset.y + diroffset.y; output.push_back(p2.x); output.push_back(p2.y); output.push_back(p0.x); output.push_back(p0.y); output.push_back(p1.x); output.push_back(p1.y); output.push_back(p3.x); output.push_back(p3.y); } } The problem is that there are then gaps, as seen here: http://img816.imageshack.us/img816/2882/eeekkk.png There must be a way to fix this. I see a pattern, but I just can't figure it out. There must be a way to fill in the missing pieces between segments. Thanks
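    The gaps appear at the joints: each quad is offset along its own segment's perpendicular, so two segments meeting at an angle leave a wedge uncovered between them. The usual fix is join geometry; for a miter join, the shared corners of both quads are moved along the averaged perpendicular, scaled so that its projection onto each segment normal is still width/2. A C++ sketch of that computation, where nperpPrev and nperpNext correspond to nperp from two consecutive loop iterations:

        #include <cmath>

        struct P2 { float x, y; };

        // miter offset at the shared vertex of two segments, given the unit
        // perpendiculars of the segments before and after the joint
        P2 miterOffset(P2 nperpPrev, P2 nperpNext, float width)
        {
            // average the two perpendiculars and renormalize
            P2 m{ nperpPrev.x + nperpNext.x, nperpPrev.y + nperpNext.y };
            float len = std::sqrt(m.x * m.x + m.y * m.y);
            if (len < 1e-6f)  // segments double back on themselves: fall back
                return P2{ nperpPrev.x * width * 0.5f, nperpPrev.y * width * 0.5f };
            m.x /= len; m.y /= len;
            // scale so the projection onto either perpendicular is width/2
            float scale = (width * 0.5f) / (m.x * nperpPrev.x + m.y * nperpPrev.y);
            return P2{ m.x * scale, m.y * scale };
        }

    Using this offset for p2/p3 of one quad and p0/p1 of the next makes adjacent quads share their corner vertices, which closes the gaps; very sharp angles are usually capped with a bevel instead, since the miter length grows without bound.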

    Read the article

  • C# generics when T could be an array

    - by bufferz
    I am writing a C# wrapper for a 3rd party library that reads both single values and arrays from a hardware device, but always returns an object[] array even for one value. This requires repeated calls to object[0] when I'd like the end user to be able to use generics to receive either an array or a single value. I want to use generics so the caller can use the wrapper in the following ways: MyWrapper<float> mw = new MyWrapper<float>( ... ); float value = mw.Value; //should return float; MyWrapper<float[]> mw = new MyWrapper<float[]>( ... ); float[] values = mw.Value; //should return float[]; In MyWrapper I have the Value property currently as the following: public T Value { get { if(_wrappedObject.Values.Length > 1) return (T)_wrappedObject.Value; //T could be float[]. this doesn't compile. else return (T)_wrappedObject.Values[0]; //T could be float. this compiles. } } I get a compile error in the first case: Cannot convert type 'object[]' to 'T' If I change MyWrapper.Value to T[] I receive: Cannot convert type 'object[]' to 'T[]' Any ideas of how to achieve my goal? Thanks!
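    The underlying shape of the problem - one unwrapping path when T is a scalar and another when T is a sequence - is what specialization handles cleanly. As a sketch of the idea in C++ (an analog, not the C# answer: C# generics have no partial specialization, so there it typically becomes a typeof(T) branch or two wrapper types):

        #include <vector>

        // primary template: T is a single value, unwrap element 0
        template <typename T>
        struct MyWrapper
        {
            std::vector<T> values;   // stand-in for the library's object[]
            T value() const { return values[0]; }
        };

        // partial specialization: T is itself a sequence, return everything
        template <typename T>
        struct MyWrapper<std::vector<T>>
        {
            std::vector<T> values;
            std::vector<T> value() const { return values; }
        };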

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >