Search Results

Search found 60978 results on 2440 pages for 'web development'.


  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn-Torrance-Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin-Shirley and common modifications where applicable, such as: Beckmann distribution, Blinn distribution, Schlick's approximation, etc. in your shader code utilised in a commercial product? Or is it a non-issue?

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }
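
    Reference sketch: the following is a minimal GLM version (not the asker's Math:: library, so the type and function names are assumptions) of moving a world-space light into view space once per frame. The position is a point, so it needs the full view matrix with w = 1; the direction is a vector, so it only needs the rotation part with w = 0. If the position is transformed with w = 0, or simply not re-uploaded when the camera moves, the light stays fixed in eye space and appears to follow the camera, which matches the flashlight symptom described above.

        #include <glm/glm.hpp>

        // Sketch: lightPos and lightLookAt are hypothetical world-space values.
        void lightToViewSpace(const glm::mat4& view,
                              const glm::vec3& lightPos, const glm::vec3& lightLookAt,
                              glm::vec3& eyePos, glm::vec3& eyeDir)
        {
            // Point: w = 1 so the view matrix's translation applies.
            eyePos = glm::vec3(view * glm::vec4(lightPos, 1.0f));

            // Direction: w = 0 so only the rotation part applies.
            glm::vec3 dirWorld = glm::normalize(lightLookAt - lightPos);
            eyeDir = glm::normalize(glm::vec3(view * glm::vec4(dirWorld, 0.0f)));
        }

    These two values would then be written to Light_Position and Light_Direction every time the camera's view matrix changes.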

  • Finding Z given X & Y coordinates on terrain?

    - by mrky
    I need to know what the most efficient way of finding Z given X & Y coordinates on terrain. My terrain is set up as a grid, each grid block consisting of two triangles, which may be flipped in any direction. I want to move game objects smoothly along the floor of the terrain without "stepping." I'm currently using the following method with unexpected results: double mapClass::getZ(double x, double y) { int vertexIndex = ((floor(y))*width*2)+((floor(x))*2); vec3ray ray = {glm::vec3(x, y, 2), glm::vec3(x, y, 0)}; vec3triangle tri1 = { glmFrom(vertices[vertexIndex].v1), glmFrom(vertices[vertexIndex].v2), glmFrom(vertices[vertexIndex].v3) }; vec3triangle tri2 = { glmFrom(vertices[vertexIndex+1].v1), glmFrom(vertices[vertexIndex+1].v2), glmFrom(vertices[vertexIndex+1].v3) }; glm::vec3 intersect; if (!intersectRayTriangle(tri1, ray, intersect)) { intersectRayTriangle(tri2, ray, intersect); } return intersect.z; } intersectRayTriangle() and glmFrom() are as follows: bool intersectRayTriangle(vec3triangle tri, vec3ray ray, glm::vec3 &worldIntersect) { glm::vec3 barycentricIntersect; if (glm::intersectLineTriangle(ray.origin, ray.direction, tri.p0, tri.p1, tri.p2, barycentricIntersect)) { // Convert barycentric to world coordinates double u, v, w; u = barycentricIntersect.x; v = barycentricIntersect.y; w = 1 - (u+v); worldIntersect.x = (u * tri.p0.x + v * tri.p1.x + w * tri.p2.x); worldIntersect.y = (u * tri.p0.y + v * tri.p1.y + w * tri.p2.y); worldIntersect.z = (u * tri.p0.z + v * tri.p1.z + w * tri.p2.z); return true; } else { return false; } } glm::vec3 glmFrom(s_point3f point) { return glm::vec3(point.x, point.y, point.z); } My convenience structures are defined as: struct s_point3f { GLfloat x, y, z; }; struct s_triangle3f { s_point3f v1, v2, v3; }; struct vec3ray { glm::vec3 origin, direction; }; struct vec3triangle { glm::vec3 p0, p1, p2; }; vertices is defined as: std::vector<s_triangle3f> vertices; Basically, I'm trying to get the intersect of a ray (which is positioned at the x, and y coordinates specified facing pointing downwards toward the terrain) and one of the two triangles on the grid. getZ() rarely returns anything but 0. Other times, the numbers it generates seem to be completely off. Am I taking the wrong approach? Can anyone see a problem with my code? Any help or critique is appreciated!
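
    For reference, a common alternative to casting a ray at the terrain is to interpolate the height directly from the triangle under (x, y) using barycentric coordinates; the sketch below (heightInTriangle is an illustrative helper, not part of the asker's code) assumes each vertex stores (x, y, z) with z as the height, matching the question.

        #include <glm/glm.hpp>

        // Interpolate z at (x, y) inside triangle (a, b, c), projected onto the XY plane.
        // Assumes (x, y) actually lies inside the triangle.
        float heightInTriangle(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                               float x, float y)
        {
            float det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
            float u   = ((b.y - c.y) * (x - c.x) + (c.x - b.x) * (y - c.y)) / det;
            float v   = ((c.y - a.y) * (x - c.x) + (a.x - c.x) * (y - c.y)) / det;
            float w   = 1.0f - u - v;
            return u * a.z + v * b.z + w * c.z;   // weighted average of the three heights
        }

    Which of the cell's two triangles contains (x, y) can be decided by testing which side of the cell's diagonal the point falls on, so no ray/triangle intersection is needed at all.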

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in three different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me: let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, but the second it attempted to travel diagonally it could only travel X/2! So, like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green: d = purple - green = (purple.x, purple.y) - (green.x, green.y) = (2, 2) - (3, 3) = <-1,-1>. Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement: euc_magnitude = sqrt((x*x)+(y*y)) = sqrt((-1*-1)+(-1*-1)) = sqrt(1+1) = sqrt(2) = 1.41. Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not. It's 1.41. Travelling 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!
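
    A small worked example may make the missing step concrete (a GLM sketch; the asker isn't necessarily using GLM): the magnitude says how far apart the points are, and to actually travel that distance you scale the normalized direction (length 1) by the magnitude, so there is no overshoot.

        #include <glm/glm.hpp>
        #include <cstdio>

        int main() {
            glm::vec2 green(3.0f, 3.0f), purple(2.0f, 2.0f);
            glm::vec2 d = purple - green;                        // <-1, -1>
            float magnitude = glm::length(d);                    // sqrt(2) ~= 1.414
            glm::vec2 direction = glm::normalize(d);             // <-0.707, -0.707>, length 1
            glm::vec2 reached = green + direction * magnitude;   // lands exactly on purple
            std::printf("%f %f\n", reached.x, reached.y);        // 2.0 2.0 (within float error)
            return 0;
        }

    Stepping "1.41 units along <-1, -1>" only overshoots if <-1, -1> is treated as a unit direction; its own length is already 1.41, which is exactly what the magnitude reports.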

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

  • Sounds to describe the weather?

    - by Matthew
    I'm trying to think of sounds that will help convey the time of day and weather condition. I'm not even sure of all the weather conditions I would consider, and some are obvious. Like if it's raining, the sound of rain. But then I'm thinking, what about for a calm day? If it's morning time, I could do birds chirping or something. Night time could be an owl or something. What are some good combinations of sounds/weather/time to have a good effect?

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants
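
    One detail in the render code above that may be worth checking, independently of anything else: the index buffer is filled with GLubyte values, but the draw call divides by sizeof(GLushort) and passes GL_UNSIGNED_SHORT, so OpenGL reads half as many indices and interprets the bytes two at a time. A sketch of the matching call (a hedged suggestion, not a guaranteed fix for every symptom):

        // Indices were uploaded as GLubyte, so describe them as unsigned bytes.
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject);

        int bufferSize = 0;
        glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize);

        glDrawElements(GL_TRIANGLES,
                       bufferSize / sizeof(GLubyte),   // count in byte-sized indices
                       GL_UNSIGNED_BYTE,               // not GL_UNSIGNED_SHORT
                       0);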

  • Component-wise GLSL vector branching

    - by Gustavo Maciel
    I'm aware that it is usually a BAD idea to operate on a GLSL vec's components separately. For example: //use intrinsic functions, they do the calculation on 4 components at a time. float dot = v1.x*v2.x + v1.y * v2.y + v1.z * v2.z; //NEVER float dot = dot(v1, v2); //YES //Multiplying one by one is not good either, since the ALU can do the 4 components at a time. vec3 mul = vec3(v1.x * v2.x, v1.y * v2.y, v1.z * v2.z); //NEVER vec3 mul = v1 * v2; I've been struggling to work out whether there are equivalent operations for branching. For example: vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity) { bvec4 less = lessThan(v1, vec4(0.5)); vec4 blend; for(int i = 0; i < 4; ++i) { if(less[i]) blend[i] = 2.0 * v1[i]*v2[i]; else blend[i] = 1.0 - 2.0 * (1.0 - v1[i])*(1.0 - v2[i]); } return v1 + (blend-v1)*opacity; } This is an Overlay operator that works component-wise. I'm not sure if this is the best way to do it, since I'm afraid the for and if can become a bottleneck later. Tl;dr: can I branch component-wise? If yes, how can I optimize that Overlay function with it?
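
    One branch-free way to keep Overlay component-wise is to compute both variants for all four components and let a per-component select pick between them. Below is a CPU-side sketch using GLM (an assumption for illustration; the same lessThan and mix built-ins exist in GLSL, so it translates almost line for line, and some GLSL versions also accept the bvec4 directly as mix()'s third argument):

        #include <glm/glm.hpp>

        // Branchless, component-wise Overlay: both blend variants are evaluated,
        // then mix() selects per component based on the v1 < 0.5 test.
        glm::vec4 overlay(glm::vec4 v1, glm::vec4 v2, glm::vec4 opacity)
        {
            glm::vec4 low   = 2.0f * v1 * v2;                                 // used where v1 < 0.5
            glm::vec4 high  = 1.0f - 2.0f * (1.0f - v1) * (1.0f - v2);        // used elsewhere
            glm::vec4 pick  = glm::vec4(glm::lessThan(v1, glm::vec4(0.5f)));  // 1.0 where v1 < 0.5
            glm::vec4 blend = glm::mix(high, low, pick);                      // per-component select
            return v1 + (blend - v1) * opacity;
        }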

  • OpenGL Tessellation makes points

    - by urza57
    A little problem with my tessellation shader. I try to implement a simple tessellation shader but it only makes points. Here's my vertex shader : out vec4 ecPosition; out vec3 ecNormal; void main( void ) { vec4 position = gl_Vertex; gl_Position = gl_ModelViewProjectionMatrix * position; ecPosition = gl_ModelViewMatrix * position; ecNormal = normalize(gl_NormalMatrix * gl_Normal); } My tessellation control shader : layout(vertices = 3) out; out vec4 ecPosition3[]; in vec3 ecNormal[]; in vec4 ecPosition[]; out vec3 myNormal[]; void main() { gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position; myNormal[gl_InvocationID] = ecNormal[gl_InvocationID]; ecPosition3[gl_InvocationID] = ecPosition[gl_InvocationID]; gl_TessLevelOuter[0] = float(4.0); gl_TessLevelOuter[1] = float(4.0); gl_TessLevelOuter[2] = float(4.0); gl_TessLevelInner[0] = float(4.0); } And my Tessellation Evaluation shader: layout(triangles, equal_spacing, ccw) in; in vec3 myNormal[]; in vec4 ecPosition3[]; out vec3 ecNormal; out vec4 ecPosition; void main() { float u = gl_TessCoord.x; float v = gl_TessCoord.y; float w = gl_TessCoord.z; vec3 position = vec4(gl_in[0].gl_Position.xyz * u + gl_in[1].gl_Position.xyz * v + gl_in[2].gl_Position.xyz * w ); vec3 position2 = vec4(ecPosition3[0].xyz * u + ecPosition3[1].xyz * v + ecPosition3[2].xyz * w ); vec3 normal = myNormal[0] * u + myNormal[1] * v + myNormal[2] * w ); ecNormal = normal; gl_Position = vec4(position, 1.0); ecPosition = vec4(position2, 1.0); } Thank you !
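
    Nothing in the shaders above obviously asks for point output, so one thing worth checking (an assumption about the application side, which isn't shown) is the draw call: once a tessellation control/evaluation pair is active, geometry has to be submitted as patches rather than triangles. Also note that, as posted, the evaluation shader assigns vec4 constructors to vec3 variables and has an unmatched ')' on the normal line, which would stop it compiling. A minimal sketch of the application-side draw:

        #include <GL/glew.h>   // any loader exposing the GL 4.x entry points

        // layout(vertices = 3) in the control shader => 3 control points per patch.
        void drawPatches(GLsizei vertexCount)
        {
            glPatchParameteri(GL_PATCH_VERTICES, 3);
            glDrawArrays(GL_PATCHES, 0, vertexCount);   // GL_TRIANGLES will not feed the TCS/TES
        }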

  • HLSL - Creating Shadows in 2D

    - by richard
    The way that I create shadows is with the following technique: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ But I have questions about the HLSL. The way I currently do it is: I have a black and white image, where black means 'object' and white means 'nothing'. I then distort the image as in the tutorial. I do this with a pixel shader, but instead of rendering to the screen, I render to a texture and read it back into my application. I then take this, create the shadows, and send it back to the graphics card to undo the distortion after the shadow has been added - this comes back and I have a stencil of shadow. I can put this on top of the original image and send them back to the graphics card, which then puts them on the screen. To me this is a lot of back and forth. Is there a way I can avoid this? The problem I am having is that I basically need to go through all positions in the texture 3 times, and use the new texture every time instead of the original one. I tried to read up on passes, but I don't think I am heading in the right direction there. Help?

  • Need help dragging a sprite with animation (Cocos2d)

    - by Zishan
    I want to play an animation when someone drags a sprite from its default position to another selected position. If he drags halfway to the selected position, then half of the animation should play. For example, I have 15 frames of an animation and a projectile arm. The projectile arm can be rotated a maximum of 30°; if someone rotates the arm 2° then the animation sprite should show the 2nd frame, if they rotate it 12° it should show the 6th frame... and so on. Also, when he releases the arm, the arm should reverse back to its default position and the animation frames should also reverse back to the default first frame. I am new to cocos2d. I know how to make an animation and how to drag a sprite, but I have no idea how to do this. Can anyone please give me an idea or a tutorial on how to do this? It would be very helpful for me. Thank you in advance.
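
    The mapping described is just a proportion: with a 30° range and 15 frames, each frame covers 2° of rotation. A small sketch of the lookup (the names and defaults are made up for illustration); calling it every frame while the arm is dragged, and again while the arm animates back to 0° on release, gives the forward and reverse playback automatically.

        #include <algorithm>
        #include <cmath>

        // Map the arm's current rotation (0..30 degrees) onto animation frames 0..14.
        int frameForAngle(float angleDegrees, float maxAngle = 30.0f, int frameCount = 15)
        {
            float t = std::min(std::max(angleDegrees / maxAngle, 0.0f), 1.0f);  // clamp to [0, 1]
            return static_cast<int>(std::round(t * (frameCount - 1)));          // 0 .. 14
        }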

  • stdexcept On Android

    - by David R.
    I'm trying to compile SoundTouch on Android. I started with this configure line: ./configure CPPFLAGS="-I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/" LDFLAGS="-Wl,-rpath-link=/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -L/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -nostdlib -lc" --host=arm-eabi --enable-shared=yes CFLAGS="-nostdlib -O3 -mandroid" host_alias=arm-eabi --no-create --no-recursion Because the Android NDK targets ARM, I also had to change the Makefile to remove the -msse2 flags to progress. When I run 'make', I get: /bin/sh ../../libtool --tag=CXX --mode=compile arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c -o FIRFilter.lo FIRFilter.cpp libtool: compile: arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c FIRFilter.cpp -o FIRFilter.o FIRFilter.cpp:46:21: error: stdexcept: No such file or directory FIRFilter.cpp: In member function 'virtual void soundtouch::FIRFilter::setCoefficients(const soundtouch::SAMPLETYPE*, uint, uint)': FIRFilter.cpp:177: error: 'runtime_error' is not a member of 'std' FIRFilter.cpp: In static member function 'static void* soundtouch::FIRFilter::operator new(size_t)': FIRFilter.cpp:225: error: 'runtime_error' is not a member of 'std' make[2]: *** [FIRFilter.lo] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all-recursive] Error 1 This isn't very surprising, since the -nostdlib flag was required. Android seems to have neither stdexcept nor stdlib. How can I get past this block of compiling SoundTouch? At a guess, there may be some flag I don't know about that I should use. I could refactor the code not to use stdexcept. There may be a way to pull in the original stdexcept source and reference that. I might be able to link to a precompiled stdexcept library.

  • Bullet Physics: Transform body after adding

    - by Mathias Hölzl
    I would like to transform a rigidbody after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag I am able to transform it but it's static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag the transform doesn't gets applied. So how to I transform non-static objects in bullet? DemoCode: btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1,SCALING*1,SCALING*1)); /// Create Dynamic Objects btTransform startTransform; startTransform.setIdentity(); btScalar mass(1.f); //rigidbody is dynamic if and only if mass is non zero, otherwise static bool isDynamic = (mass != 0.f); btVector3 localInertia(0,0,0); if (isDynamic) colShape->calculateLocalInertia(mass,localInertia); btDefaultMotionState* myMotionState = new btDefaultMotionState(); btRigidBody::btRigidBodyConstructionInfo rbInfo(mass,myMotionState,colShape,localInertia); btRigidBody* body = new btRigidBody(rbInfo); body->setCollisionFlags(body->getCollisionFlags()|btCollisionObject::CF_KINEMATIC_OBJECT); body->setActivationState(DISABLE_DEACTIVATION); m_dynamicsWorld->addRigidBody(body); startTransform.setOrigin(SCALING*btVector3( btScalar(0), btScalar(20), btScalar(0) )); body->getMotionState()->setWorldTransform(startTransform);
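
    For what it's worth, for a dynamic (non-kinematic) body Bullet only reads the motion state when the body is created or when it is kinematic; afterwards the transform has to be set on the body itself. A minimal sketch of repositioning a body that is already in the world (details can vary a little between Bullet versions, so treat this as an outline rather than the one true API):

        #include <btBulletDynamicsCommon.h>

        // Sketch: teleport a dynamic body that has already been added to the world.
        void teleportBody(btRigidBody* body, const btVector3& newOrigin)
        {
            btTransform t;
            t.setIdentity();
            t.setOrigin(newOrigin);

            body->setWorldTransform(t);                         // move the body itself
            if (body->getMotionState())
                body->getMotionState()->setWorldTransform(t);   // keep interpolation in sync
            body->setLinearVelocity(btVector3(0, 0, 0));        // optional: drop old motion
            body->setAngularVelocity(btVector3(0, 0, 0));
            body->activate(true);                               // wake it so gravity applies
        }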

  • Ruby: implementing alpha-beta pruning for tic-tac-toe

    - by DerNalia
    So, alpha-beta pruning seems to be the most efficient algorithm out there aside from hard coding (for tic tac toe). However, I'm having problems converting the algorithm from the C++ example given in the link: http://www.webkinesia.com/games/gametree.php #based off http://www.webkinesia.com/games/gametree.php # (converted from C++ code from the alpha - beta pruning section) # returns 0 if draw LOSS = -1 DRAW = 0 WIN = 1 @next_move = 0 def calculate_ai_next_move score = self.get_best_move(COMPUTER, WIN, LOSS) return @next_move end def get_best_move(player, alpha, beta) best_score = nil score = nil if not self.has_available_moves? return false elsif self.has_this_player_won?(player) return WIN elsif self.has_this_player_won?(1 - player) return LOSS else best_score = alpha NUM_SQUARES.times do |square| if best_score >= beta break end if self.state[square].nil? self.make_move_with_index(square, player) # set to negative of opponent's best move; we only need the returned score; # the returned move is irrelevant. score = -get_best_move(1-player, -beta, -alpha) if (score > bestScore) @next_move = square best_score = score end undo_move(square) end end end return best_score end the problem is that this is returning nil. some support methods that are used above: WAYS_TO_WIN = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [0, 3, 6], [1, 4, 7], [2, 5, 8],[0, 4, 8], [2, 4, 6]] def has_this_player_won?(player) result = false WAYS_TO_WIN.each {|solution| result = self.state[solution[0]] if contains_win?(solution) } return (result == player) end def contains_win?(ttt_win_state) ttt_win_state.each do |pos| return false if self.state[pos] != self.state[ttt_win_state[0]] or self.state[pos].nil? end return true end def make_move(x, y, player) self.set_square(x,y, player) end

  • How can I do Mouse Selection In OpenGL 3.0?

    - by NoobScratcher
    Hello. I'm a pretty good programmer - I've made my own 2D games in SDL and made a GUI in 3D using both old and modern OpenGL - but I'm having problems trying to click 3D models with OpenGL, and I have no idea what to do, to be honest. Do I read the area that I've clicked, or what do I do? I'm 100% sure this has been asked before, but I just don't know what to do. Using: OpenGL 3.0, Win32 API, C++.
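
    With the old selection buffer gone from OpenGL 3.0 core, the two usual options are colour-ID picking (render each object in a unique flat colour to an offscreen buffer and read back the pixel under the cursor) and ray picking. A minimal ray-picking sketch with GLM (window size, matrices and mouse coordinates are assumed inputs, not anything from the question):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>   // glm::unProject

        // Build a world-space ray from a mouse position, then intersect it with
        // the bounding boxes or triangles of the models to see what was clicked.
        void mouseRay(float mouseX, float mouseY, float winW, float winH,
                      const glm::mat4& view, const glm::mat4& projection,
                      glm::vec3& rayOrigin, glm::vec3& rayDir)
        {
            // GL's window origin is bottom-left; most mouse APIs report top-left.
            glm::vec3 winNear(mouseX, winH - mouseY, 0.0f);
            glm::vec3 winFar (mouseX, winH - mouseY, 1.0f);
            glm::vec4 viewport(0.0f, 0.0f, winW, winH);

            glm::vec3 pNear = glm::unProject(winNear, view, projection, viewport);
            glm::vec3 pFar  = glm::unProject(winFar,  view, projection, viewport);

            rayOrigin = pNear;
            rayDir    = glm::normalize(pFar - pNear);
        }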

  • Using LINQ to query database through a proxy server of some kind?

    - by Mustafakidd
    Hey All Sorry for using (perhaps) the wrong lingo, but my question may be clearer if you view this diagram as you read it. http://dl.dropbox.com/u/13256/DIAGRAM.PNG Our client is requiring us to adhere to the server configuration (poorly) diagrammed in the above image. The web server is accessible over port 80 and is where our web application is hosted - a second firewall permits this web server to access a second server which in turn is the only server permitted to access the database server. My question is: How do I deploy a web application that uses LINQ-to-SQL in this environment? Is there a way to proxy my LINQ queries through the app server so that the database connection goes through that server? This is uncharted territory for me, as we always have had access to the DB server directly from our web server in the past. Any help is appreciated. Thanks Mustafa

  • Mesh with quads to triangle mesh

    - by scape
    I want to use Blender for making models, yet I realize some of the polygons are not triangles but quads or larger polygons (example: a cylinder's top and bottom). I could export the mesh as a basic mesh file, import it into an OpenGL application and work out rendering the quads as tris, but anything with more than 4 vertex indices is beyond me. Is it typical to convert the mesh to a triangle-based mesh inside Blender before exporting it? I actually tried this through the quads_convert_to_tris method within a Blender Python script, and the top of the cylinder does not look symmetrical. What is typically done to render a loaded mesh as triangles?
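
    If the mesh is exported as-is and the loader has to cope with quads or larger faces, convex faces can be fan-triangulated at load time; a short sketch (it assumes each face's vertex indices are already stored in winding order, which is how most mesh formats export them):

        #include <vector>
        #include <cstdint>

        // Fan-triangulate one face with N >= 3 vertex indices. Valid for convex
        // faces, which covers Blender quads and a cylinder cap's n-gon.
        void triangulateFace(const std::vector<uint32_t>& face,
                             std::vector<uint32_t>& indicesOut)
        {
            for (size_t i = 1; i + 1 < face.size(); ++i) {
                indicesOut.push_back(face[0]);
                indicesOut.push_back(face[i]);
                indicesOut.push_back(face[i + 1]);
            }
        }

    Triangulating inside Blender before export is still the usual route; an asymmetric-looking cap after quads_convert_to_tris is expected, because the cap has to be cut into triangles somehow and those cuts are rarely rotationally symmetric.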

  • Example of DOD design (on a generic Zombie game)

    - by Jeffrey
    I can't seem to find a nice explanation of the Data Oriented Design for a generic zombie game (it's just an example, pretty common example). Could you make an example of the Data Oriented Design on creating a generic zombie class? Is the following good? Zombie list class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie } Or would creating a generic World container that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector); and contains: std::vector<zombie> std::vector<player> ... std::vector<mapchunk> ... std::vector<vbobufferid> player_run_animation; ... be a good example? Whats the proper way to organize a game with DOD?
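
    To make the "data, not objects" part concrete, here is a small sketch of the per-frame update such a ZombieList might run: every position lives in one contiguous array and the work is a single linear pass over it. The names are illustrative only, not a prescribed design.

        #include <vector>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Structure-of-arrays storage: each component stream is contiguous.
        struct ZombieData {
            std::vector<Vec3>  positions;
            std::vector<float> speeds;
        };

        // Move every zombie toward the player in one cache-friendly pass,
        // touching only the data this particular update needs.
        void updateZombies(ZombieData& z, const Vec3& player, float dt)
        {
            for (size_t i = 0; i < z.positions.size(); ++i) {
                Vec3& p = z.positions[i];
                float dx = player.x - p.x, dy = player.y - p.y, dz = player.z - p.z;
                float len = std::sqrt(dx * dx + dy * dy + dz * dz);
                if (len > 0.0001f) {
                    float s = z.speeds[i] * dt / len;
                    p.x += dx * s; p.y += dy * s; p.z += dz * s;
                }
            }
        }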

  • Better way to do AI Behavior in AS3/Flixel

    - by joon
    I'm making a game in Flixel and I need to program an NPC. It's rapidly becoming more complex than I expected. I was wondering if there are any best practices, tutorials or examples that you can refer me to, to see how this is done. I can probably hack it together, which is what I always do, but it would be nice if I could make it maintainable and add stuff later on. Here's a screenshot to give you an idea: The butler will be an NPC that will follow you, or guide you, and talk to you the whole time. EDIT: More specifically: what I have now is a long list of IF statements in the update loop of the butler (about 8 different cases), and all I have covered is his walking behavior. I want him to comment on things and sometimes switch his main behavior to be more aggressive or distant... Is there any way to keep track of this, or is complex code with many nested if statements the way to go?

  • matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix. The position data is held in the last column of the matrix. e.g. glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] ); My question is - am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (column order instead of row order, or something), so should mParent be accessed as follows instead? glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] ); Thanks!
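
    For GLM in particular, the first form is the right one: GLM follows GLSL's column-major convention, operator[] returns a column, and glm::translate writes the translation into column 3. A tiny sketch:

        #include <glm/glm.hpp>

        // GLM matrices are column-major; the translation of a transform is column 3.
        glm::vec3 positionOf(const glm::mat4& m)
        {
            return glm::vec3(m[3]);   // same as vec3(m[3][0], m[3][1], m[3][2])
        }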

  • 2D XNA C#: Texture2D Wrapping Issue

    - by Kieran
    Working in C#/XNA for a Windows game: I'm using Texture2D to draw sprites. All of my sprites are 16 x 32. The sprites move around the screen as you would expect, by changing the top X/Y position of them when they're being drawn by the spritebatch. Most of the time when I run the game, the sprites appear like this: and when moved, they move as I expect, as one element. Infrequently they appear like this: and when moved it's like there are two sprites with a gap in between them - it's hard to describe. It only seems to happen sometimes - is there something I'm missing? I'd really like to know why this is happening. [Edit:] Adding Draw code as requested: This is the main draw routine - it first draws the sprite to a RenderTarget then blows it up by a scale of 4: protected override void Draw(GameTime gameTime) { // Draw to render target GraphicsDevice.SetRenderTarget(renderTarget); GraphicsDevice.Clear(Color.CornflowerBlue); Texture2D imSprite = null; spriteBatch.Begin(SpriteSortMode.FrontToBack, null, SamplerState.PointWrap, null, null); ManSprite.Draw(spriteBatch); base.Draw(gameTime); spriteBatch.End(); // Draw render target to screen GraphicsDevice.SetRenderTarget(null); imageFrame = (Texture2D)renderTarget; GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0); spriteBatch.Begin(SpriteSortMode.FrontToBack, null, SamplerState.PointClamp, null, null); spriteBatch.Draw(imageFrame, new Vector2(0, 0), null, Color.White, 0, new Vector2(0, 0), IM_SCALE, SpriteEffects.None, 0); spriteBatch.End(); } This is the draw routine for the Sprite class: public virtual void Draw(SpriteBatch spriteBatch) { spriteBatch.Draw(Texture, new Vector2(PositionX, PositionY), null, Color.White, 0.0f, Vector2.Zero, Scale, SpriteEffects.None, 0.3f); }

  • GoogleAppEngine : ClassNotFoundException : javax.jdo.metadata.ComponentMetadata

    - by James.Elsey
    I'm trying to deploy my application to a locally running GoogleAppEngine development server, but I'm getting the following stack trace when I start the server Apr 23, 2010 9:03:33 PM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clientDao' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot resolve reference to bean 'entityManagerFactory' while setting bean property 'entityManagerFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: javax/jdo/metadata/ComponentMetadata: java.lang.ClassNotFoundException: javax.jdo.metadata.ComponentMetadata at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at com.google.appengine.tools.development.IsolatedAppClassLoader.loadClass(IsolatedAppClassLoader.java:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:264) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at javax.jdo.JDOHelper$18.run(JDOHelper.java:2009) at javax.jdo.JDOHelper$18.run(JDOHelper.java:2007) at java.security.AccessController.doPrivileged(Native Method) at javax.jdo.JDOHelper.forName(JDOHelper.java:2006) at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1155) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698) at org.datanucleus.jpa.EntityManagerFactoryImpl.initialisePMF(EntityManagerFactoryImpl.java:482) at org.datanucleus.jpa.EntityManagerFactoryImpl.<init>(EntityManagerFactoryImpl.java:255) at org.datanucleus.store.appengine.jpa.DatastoreEntityManagerFactory.<init>(DatastoreEntityManagerFactory.java:68) at org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider.createContainerEntityManagerFactory(DatastorePersistenceProvider.java:45) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1369) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:269) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:530) at org.mortbay.jetty.servlet.Context.startContext(Context.java:135) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) at org.mortbay.jetty.Server.doStart(Server.java:217) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at 
com.google.appengine.tools.development.JettyContainerService.startContainer(JettyContainerService.java:181) at com.google.appengine.tools.development.AbstractContainerService.startup(AbstractContainerService.java:116) at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:217) at com.google.appengine.tools.development.DevAppServerMain$StartAction.apply(DevAppServerMain.java:162) at com.google.appengine.tools.util.Parser$ParseResult.applyArgs(Parser.java:48) at com.google.appengine.tools.development.DevAppServerMain.<init>(DevAppServerMain.java:113) at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:89) The server is running at http://localhost:1234/ I'm a little confused over this, since I have the same application running locally on GlassFish/MySQL. All I have done is to swap in the relevant jar files, and change the persistence.xml. My applicationContext.xml looks as follows : <context:annotation-config/> <bean id="clientDao" class="com.jameselsey.salestracker.dao.jpa.JpaDaoClient"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"/> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean id="org.springframework.context.annotation.internalPersistenceAnnotationProcessor" class="com.jameselsey.salestracker.util.GaeFixInternalPersistenceAnnotationProcessor" /> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <tx:annotation-driven/> <bean id="clientService" class="com.jameselsey.salestracker.service.ClientService"/> </beans> My JPA DAO looks like this public class JpaDao extends JpaDaoSupport { protected <T> List<T> findAll(Class<T> clazz) { return getJpaTemplate().find("select c from " + clazz.getName() + " c"); } protected <T> T findOne(String jpql, Map params) { List<T> results = getJpaTemplate().findByNamedParams(jpql, params); if(results.isEmpty()) { return null; } if(results.size() > 1) { throw new IncorrectResultSizeDataAccessException(1, results.size()); } return results.get(0); } } And an example implemented method looks like this : @Override public Client getClientById(Integer clientId) { String jpql = "SELECT c " + "FROM com.jameselsey.salestracker.domain.Client c " + "WHERE c.id = " + clientId; return (Client) getJpaTemplate().find(jpql).get(0); } Like I say, this works ok on Glassfish/MySQL, is it possible this error could be a red herring to something else?

  • Rotating an object about a point (2D) using box2d

    - by noob
    I just started developing using Box2D on Flixel, and I realise the pivot point of an object's rotation in Box2D is set to the centre of the object. I read on forums that SetAsBox can change the pivot point of the object; however, I cannot seem to get it to work to rotate about a point. What I would like to achieve is to rotate an object about a point, like the Earth revolving around the Sun. Can anyone help me with it? Thanks a lot, and sorry for the bad English.
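
    One common way to get the earth-around-the-sun motion without fighting the shape's pivot is to pin the orbiting body to a small static pivot body with a revolute joint and drive the joint's motor. Below is a C++-style Box2D sketch (the Flixel/AS3 port mirrors these names fairly closely, but treat the exact calls as assumptions):

        #include <Box2D/Box2D.h>

        // pivot: a static body at the centre of the orbit ("the sun").
        // orbiter: the dynamic body that should revolve around it ("the earth").
        b2Joint* makeOrbit(b2World& world, b2Body* pivot, b2Body* orbiter)
        {
            b2RevoluteJointDef jd;
            jd.Initialize(pivot, orbiter, pivot->GetWorldCenter());  // hinge at the pivot
            jd.enableMotor = true;
            jd.motorSpeed = 0.5f;           // radians per second around the pivot
            jd.maxMotorTorque = 1000.0f;    // enough torque to keep it turning
            return world.CreateJoint(&jd);
        }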

  • SpriteFont Exception, no such character?

    - by Michal Bozydar Pawlowski
    I have such spriteFont: <?xml version="1.0" encoding="utf-8"?> <!-- This file contains an xml description of a font, and will be read by the XNA Framework Content Pipeline. Follow the comments to customize the appearance of the font in your game, and to change the characters which are available to draw with. --> <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics"> <Asset Type="Graphics:FontDescription"> <!-- Modify this string to change the font that will be imported. --> <FontName>Segoe UI</FontName> <!-- Size is a float value, measured in points. Modify this value to change the size of the font. --> <Size>20</Size> <!-- Spacing is a float value, measured in pixels. Modify this value to change the amount of spacing in between characters. --> <Spacing>0</Spacing> <!-- UseKerning controls the layout of the font. If this value is true, kerning information will be used when placing characters. --> <UseKerning>true</UseKerning> <!-- Style controls the style of the font. Valid entries are "Regular", "Bold", "Italic", and "Bold, Italic", and are case sensitive. --> <Style>Regular</Style> <!-- If you uncomment this line, the default character will be substituted if you draw or measure text that contains characters which were not included in the font. --> <!-- <DefaultCharacter>*</DefaultCharacter> --> <!-- CharacterRegions control what letters are available in the font. Every character from Start to End will be built and made available for drawing. The default range is from 32, (ASCII space), to 126, ('~'), covering the basic Latin character set. The characters are ordered according to the Unicode standard. See the documentation for more information. --> <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#32;</Start> <End>&#1200;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> It has the character regions (32-1200) And I get this exception: A first chance exception of type 'System.ArgumentException' occurred in Microsoft.Xna.Framework.Graphics.ni.dll The character '?' (0x0441) is not available in this SpriteFont. If applicable, adjust the font's start and end CharacterRegions to include this character. Parameter name: character Why? I'm drawing the string like this: spriteBatch.DrawString(font24, zasadyText, zasadyTextPos, kolorCzcionki1, -0.05f, Vector2.Zero, 1.0f, SpriteEffects.None, 0.5f) I even changed the spriteFont to cyrillic: <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#0032;</Start> <End>&#0383;</End> </CharacterRegion> <CharacterRegion> <Start>&#1040;</Start> <End>&#1111;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> and it still doesn't work. I got the (0x441 = char) exception -- EDIT -- Ok, I got the solution. It was a letter mistake in language. 
    I had this: if (jezyk == "ru_RU") { font14 = Content.Load<SpriteFont>("ru_font14"); font24 = Content.Load<SpriteFont>("ru_font24"); font12 = Content.Load<SpriteFont>("ru_czcionkaFloty"); font10 = Content.Load<SpriteFont>("ru_font10"); font28 = Content.Load<SpriteFont>("ru_font28"); font20 = Content.Load<SpriteFont>("ru_font20"); } else { font14 = Content.Load<SpriteFont>("font14"); font24 = Content.Load<SpriteFont>("font24"); font12 = Content.Load<SpriteFont>("czcionkaFloty"); font10 = Content.Load<SpriteFont>("font10"); font28 = Content.Load<SpriteFont>("font28"); font20 = Content.Load<SpriteFont>("font20"); } and the culture string should have been "ru-RU", not "ru_RU".

  • How to move objects smoothly, like swimming around

    - by philipp
    I have a Box2D project in which I'm creating a view where the user looks down from the sky onto water, or perhaps onto a bathtub filled with water or something like that. The object which holds the fluid doesn't actually matter; what matters is the movement of the bodies, because they should move like drops of grease on a soup, or wood on water. I can even imagine the fluid being mercurial, extremely heavy and "lazy". How can I manipulate the bodies (every frame, or from time to time) to make them move like this? I started by randomly manipulating their linear velocity, but it turned out that this is not very smooth and looks quite harsh. Is it a better idea to check their velocity and apply impulses? Is there any example? Greetings, philipp
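
    One approach that tends to look "heavy" is to leave the velocities alone and instead apply small, slowly varying forces each step, combined with fairly high linear damping on the bodies, so they accelerate and coast lazily instead of snapping to new velocities. A hedged C++/Box2D sketch (the AS3 port's calls are similar; the exact ApplyForce signature differs between Box2D versions):

        #include <Box2D/Box2D.h>
        #include <cmath>

        // Gentle "wander" force: a per-body phase offset makes each body drift on
        // its own slow curve rather than jumping to random velocities every frame.
        void applyDrift(b2Body* body, float time, float phase, float strength)
        {
            b2Vec2 force(std::cos(time * 0.30f + phase), std::sin(time * 0.23f + phase));
            force *= strength;                                // keep it small relative to mass
            body->ApplyForce(force, body->GetWorldCenter());  // newer Box2D adds a wake flag here
        }

    Giving each body something like SetLinearDamping(2.0f) when it is created is what produces the greasy, reluctant settling.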
