
  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi, I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers which supports the UIGestureRecognizer within a CCLayer in a cocos2d scene, like so:

        @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate> {
        }

    Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d:

        #import <Foundation/Foundation.h>
        #import <UIKit/UIGestureRecognizerSubclass.h>

        @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
        @optional
        - (void) rotation: (CGFloat) angle;
        - (void) finalAngle: (CGFloat) angle;
        @end

        @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer {
            CGPoint midPoint;
            CGFloat innerRadius;
            CGFloat outerRadius;
            CGFloat cumulatedAngle;
            id <OneFingerRotationGestureRecognizerDelegate> target;
        }

        - (id) initWithMidPoint: (CGPoint) midPoint
                    innerRadius: (CGFloat) innerRadius
                    outerRadius: (CGFloat) outerRadius
                         target: (id) target;
        - (void)reset;
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
        @end

        #include <math.h>
        #import "OneFingerRotationGestureRecognizer.h"

        @implementation OneFingerRotationGestureRecognizer

        // private helper functions
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB);

        - (id) initWithMidPoint: (CGPoint) _midPoint
                    innerRadius: (CGFloat) _innerRadius
                    outerRadius: (CGFloat) _outerRadius
                         target: (id <OneFingerRotationGestureRecognizerDelegate>) _target {
            if ((self = [super initWithTarget: _target action: nil])) {
                midPoint = _midPoint;
                innerRadius = _innerRadius;
                outerRadius = _outerRadius;
                target = _target;
            }
            return self;
        }

        /** Calculates the distance between point1 and point2. */
        CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) {
            CGFloat dx = point1.x - point2.x;
            CGFloat dy = point1.y - point2.y;
            return sqrt(dx*dx + dy*dy);
        }

        CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                           CGPoint beginLineB, CGPoint endLineB) {
            CGFloat a = endLineA.x - beginLineA.x;
            CGFloat b = endLineA.y - beginLineA.y;
            CGFloat c = endLineB.x - beginLineB.x;
            CGFloat d = endLineB.y - beginLineB.y;
            CGFloat atanA = atan2(a, b);
            CGFloat atanB = atan2(c, d);
            // convert radians to degrees
            return (atanA - atanB) * 180 / M_PI;
        }

        #pragma mark - UIGestureRecognizer implementation

        - (void)reset {
            [super reset];
            cumulatedAngle = 0;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            if ([touches count] != 1) {
                self.state = UIGestureRecognizerStateFailed;
                return;
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesMoved:touches withEvent:event];
            if (self.state == UIGestureRecognizerStateFailed) return;

            CGPoint nowPoint  = [[touches anyObject] locationInView: self.view];
            CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view];

            // make sure the new point is within the area
            CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
            if (innerRadius <= distance && distance <= outerRadius) {
                // calculate rotation angle between two points
                CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);

                // fix value, if the 12 o'clock position is between prevPoint and nowPoint
                if (angle > 180) {
                    angle -= 360;
                } else if (angle < -180) {
                    angle += 360;
                }

                // sum up single steps
                cumulatedAngle += angle;

                // call delegate
                if ([target respondsToSelector: @selector(rotation:)]) {
                    [target rotation:angle];
                }
            } else {
                // finger moved outside the area
                self.state = UIGestureRecognizerStateFailed;
            }
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesEnded:touches withEvent:event];
            if (self.state == UIGestureRecognizerStatePossible) {
                self.state = UIGestureRecognizerStateRecognized;
                if ([target respondsToSelector: @selector(finalAngle:)]) {
                    [target finalAngle:cumulatedAngle];
                }
            } else {
                self.state = UIGestureRecognizerStateFailed;
            }
            cumulatedAngle = 0;
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesCancelled:touches withEvent:event];
            self.state = UIGestureRecognizerStateFailed;
            cumulatedAngle = 0;
        }

        @end

    Header file for the view controller:

        #import "OneFingerRotationGestureRecognizer.h"

        @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>
        @property (nonatomic, strong) IBOutlet UIImageView *image;
        @property (nonatomic, strong) IBOutlet UITextField *textDisplay;
        @end

    Then this is in the .m file:

        gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc]
                                initWithMidPoint: midPoint
                                     innerRadius: outRadius / 3
                                     outerRadius: outRadius
                                          target: self];
        [self.view addGestureRecognizer: gestureRecognizer];

    Now my question is: is it possible to add this custom gesture to the cocos2d project found on that GitHub, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d? At the moment it is set up in a standard iOS project rather than a cocos2d project, and I do not know enough about UIViews and classing/subclassing in Objective-C to get this to work. It also seems to attach to a UIView, where cocos2d uses CCLayer. Kind regards, Lewis.
    I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo


  • Center directional light shadow to the cameras eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection:

        XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll);
        XMFLOAT3 center(0.0f, 0.0f, 0.0f);

        XMVECTOR lightDir  = XMLoadFloat3(&dir);
        XMVECTOR lightPos  = radius * lightDir;
        XMVECTOR targetPos = XMLoadFloat3(&center);
        XMVECTOR up        = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
        XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view

        // Transform bounding sphere to light space.
        XMFLOAT3 sphereCenterLS;
        XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V));

        // Ortho frustum in light space encloses scene.
        float l = sphereCenterLS.x - radius;
        float b = sphereCenterLS.y - radius;
        float n = sphereCenterLS.z - radius;
        float r = sphereCenterLS.x + radius;
        float t = sphereCenterLS.y + radius;
        float f = sphereCenterLS.z + radius;
        XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection

    This works perfectly as long as the center of my scene is at (0, 0, 0). What I would like to do is move the center of the scene relative to the camera's position. How can I do that?
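
    A common approach (not from the post itself, but standard practice for whole-scene directional shadows) is to recentre the bounding sphere on the camera, or on a point a little ahead of it, every frame and rebuild both matrices from that moving centre. A sketch of the idea in XNA-style C#, since the maths carries over one-to-one to DirectXMath; cameraPosition, lightDir and radius are assumed inputs:

        using Microsoft.Xna.Framework;

        static void BuildShadowMatrices(Vector3 cameraPosition, Vector3 lightDir, float radius,
                                        out Matrix view, out Matrix proj)
        {
            // The sphere centre follows the camera instead of sitting at the origin.
            Vector3 center   = cameraPosition;
            Vector3 lightPos = center - lightDir * radius; // back the light off along its direction
            view = Matrix.CreateLookAt(lightPos, center, Vector3.Up);

            // Same ortho box as in the post, but built around the transformed moving centre.
            // Near/far signs here follow the LH convention used in the post; flip for RH.
            Vector3 centerLS = Vector3.Transform(center, view);
            proj = Matrix.CreateOrthographicOffCenter(
                centerLS.X - radius, centerLS.X + radius,
                centerLS.Y - radius, centerLS.Y + radius,
                centerLS.Z - radius, centerLS.Z + radius);
        }

    Since the centre now moves with the camera, snapping it to shadow-map-texel-sized increments in light space avoids edge shimmer as the camera pans.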


  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:

    - No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture).
    - The textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object. Some other faces have upside-down letters too).
    - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off).
    - The model is closed (all faces are completely enclosed by other faces).

    A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and that automatically mapping textures is probably a hard problem in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach that I can apply in general, not a tool that does it. What approach would work for this, and is there an existing one? (I assume so.)


  • Dynamic obstacles avoidance in navigation mesh system

    - by Variable
    I've built my pathfinding system with Unreal Engine; the pathfinding part works just fine, but I can't find a proper way to solve the dynamic obstacle avoidance problem. My characters walk all over the map and collide with each other while they're moving. I've tried steering them when a collision occurs, but this doesn't work well. For example, two characters block the road while the third one's path runs right between them, and he gets stuck. Can someone tell me the most popular way of doing dynamic avoidance? Thanks a lot.
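
    For reference, the two families of answers this usually gets are local steering (boids-style separation forces) and velocity-obstacle methods (RVO/ORCA, which is what Unreal's built-in RVO avoidance and detour crowd manager are based on). A minimal, engine-agnostic separation sketch follows (C# purely for illustration; Agent is a hypothetical type, not UE API):

        using System.Collections.Generic;
        using System.Numerics;

        class Agent
        {
            public Vector2 Position;
            public Vector2 Velocity;
            public float Radius = 0.5f;
        }

        static class Steering
        {
            // Push an agent away from neighbours closer than their combined radii
            // plus a margin; blend the result into the path-following velocity.
            public static Vector2 Separation(Agent self, List<Agent> neighbours, float margin = 0.5f)
            {
                Vector2 push = Vector2.Zero;
                foreach (var other in neighbours)
                {
                    if (other == self) continue;
                    Vector2 away = self.Position - other.Position;
                    float dist = away.Length();
                    float minDist = self.Radius + other.Radius + margin;
                    if (dist > 0f && dist < minDist)
                        push += away / dist * (minDist - dist); // stronger when closer
                }
                return push;
            }
        }

    Pure separation still deadlocks in exactly the two-blockers case described above; velocity-based planners like RVO/ORCA resolve it because each agent picks a new velocity that stays collision-free over a time horizon rather than just pushing away after contact.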


  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using the render-to-texture concept with glFramebufferTexture1D. I am drawing a cube on a non-default FBO with all the vertices at -1/1 (the maximum) in the X, Y and Z directions. I am setting the viewport while rendering on the non-default FBO. My background is blue and the cube is white. For the default FBO, I have created a 1D texture and attached it to the above FBO as a color attachment. I am setting the width of the texture equal to width*height of the FBO viewport. Now, when I render this texture onto another cube, I can see a continuous white color at the start or end of each face of the cube. That means part of the face is white and the rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels to be white, as I am using -1 and 1 coordinates for the cube rendered on the non-default FBO. Code:

        #define WIDTH 3
        #define HEIGHT 3

        GLfloat vertices8[] = {
            1.0f,1.0f,1.0f,   -1.0f,1.0f,1.0f,   -1.0f,-1.0f,1.0f,  1.0f,-1.0f,1.0f,  // face 1
            1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f,  1.0f,1.0f,-1.0f,  // face 2
            1.0f,1.0f,1.0f,   1.0f,-1.0f,1.0f,   1.0f,-1.0f,-1.0f,  1.0f,1.0f,-1.0f,  // face 3
            -1.0f,1.0f,1.0f,  -1.0f,1.0f,-1.0f,  -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f, // face 4
            1.0f,1.0f,1.0f,   1.0f,1.0f,-1.0f,   -1.0f,1.0f,-1.0f,  -1.0f,1.0f,1.0f,  // face 5
            -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f,  1.0f,-1.0f,1.0f   // face 6
        };

        GLfloat vertices[] = {
            0.5f,0.5f,0.5f,   -0.5f,0.5f,0.5f,   -0.5f,-0.5f,0.5f,  0.5f,-0.5f,0.5f,  // face 1
            0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f,  0.5f,0.5f,-0.5f,  // face 2
            0.5f,0.5f,0.5f,   0.5f,-0.5f,0.5f,   0.5f,-0.5f,-0.5f,  0.5f,0.5f,-0.5f,  // face 3
            -0.5f,0.5f,0.5f,  -0.5f,0.5f,-0.5f,  -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f, // face 4
            0.5f,0.5f,0.5f,   0.5f,0.5f,-0.5f,   -0.5f,0.5f,-0.5f,  -0.5f,0.5f,0.5f,  // face 5
            -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f,  0.5f,-0.5f,0.5f   // face 6
        };

        GLuint indices[] = {
            0, 2, 1,    0, 3, 2,
            4, 5, 6,    4, 6, 7,
            8, 9, 10,   8, 10, 11,
            12, 15, 14, 12, 14, 13,
            16, 17, 18, 16, 18, 19,
            20, 23, 22, 20, 22, 21
        };

        GLfloat texcoord[] = {
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0,
            0.0, 0.0,  1.0, 0.0,  1.0, 1.0,  0.0, 1.0
        };

        glGenTextures(1, &id1);
        glBindTexture(GL_TEXTURE_1D, id1);
        glGenFramebuffers(1, &Fboid);

        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, Fboid);
        glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_1D, id1, 0);
        draw_cube();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        draw();

        draw_cube()
        {
            glViewport(0, 0, WIDTH, HEIGHT);
            glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(temp.psId, "position"));
            glVertexAttribPointer(glGetAttribLocation(temp.psId, "position"), 3, GL_FLOAT, GL_FALSE, 0, vertices8);
            glDrawArrays(GL_TRIANGLE_FAN, 0, 24);
        }

        draw()
        {
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "tk_position"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "tk_position"), 3, GL_FLOAT, GL_FALSE, 0, vertices);
            nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, vertices);"));
            glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId, "inputtexcoord"));
            glVertexAttribPointer(glGetAttribLocation(shader_data.psId, "inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0, texcoord);
            glBindTexture(*target11, id1);
            glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);
        }

    When I change WIDTH and HEIGHT to 2 and call glReadPixels with height and width equal to 4 in draw_cube(), I can see the first 2 pixels in white, the next two in blue (the glClearColor), the next two white, and then blue, and so on. Now when I change the width parameter in glTexImage1D to 16, ideally I should see alternating patches of white and blue, right? But that's not the case here. Why so?


  • Unity: Assigning a key to perform an action in the inspector

    - by Marc Pilgaard
    I am trying to write a simple piece of code in JavaScript where a button toggles the activation of a shield by dragging in a prefab with Resources.Load("ActivateShieldPreFab") and destroying it again (I haven't implemented that part yet). I wish to assign this button through the Inspector, so I have created a string variable which appears as intended in the Inspector. But it doesn't seem to register the Inspector input, even though I changed the value through the Inspector. It only produces the error: "Input Key named: is unknown". When the button name is assigned within the code, there are no issues. Code as follows:

        var ShieldOn = false;
        var stringbutton : String;

        function Start() {
        }

        function Update() {
            if (Input.GetKey(stringbutton) && ShieldOn != true) {
                Instantiate(Resources.Load("ActivateShieldPreFab"), Vector3(0, 0, 0), Quaternion.identity);
                ShieldOn = true;
            }
        }
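
    That error text, with nothing between "named:" and "is", suggests the string the script sees at runtime is empty: the Inspector value was set on a different instance (a prefab vs. the scene copy, say), or the typed name isn't one Unity recognizes ("space", "left shift", "a", ...). One way to sidestep the whole class of problem is to expose a KeyCode instead of a string, which the Inspector renders as a dropdown. A C# sketch of that approach (names here are mine, not from the post):

        using UnityEngine;

        public class ShieldToggle : MonoBehaviour
        {
            // Shown as a dropdown in the Inspector, so no misspelled key names.
            public KeyCode shieldKey = KeyCode.S;

            private bool shieldOn;

            void Update()
            {
                if (Input.GetKeyDown(shieldKey) && !shieldOn)
                {
                    Instantiate(Resources.Load("ActivateShieldPreFab"),
                                Vector3.zero, Quaternion.identity);
                    shieldOn = true;
                }
            }
        }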


  • Why is my Simplex Noise appearing in four columns?

    - by Joe the Person
    I'm trying to make a texture out of simplex noise, but it keeps appearing like this regardless of how big or small the scale is. The following code is used to produce the image's color data:

        private Color[,] GetSimplex()
        {
            Color[,] colors = new Color[800, 600];
            float scale = colors.GetLength(0);

            for (int x = 0; x < 800; x++)
            {
                for (int y = 0; y < 600; y++)
                {
                    byte noise = (byte)(Noise.Generate(x / scale, y / scale) * 255);
                    colors[x, y] = new Color(noise, noise, noise);
                }
            }
            return colors;
        }
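
    Most simplex implementations return values in roughly [-1, 1]; if Noise.Generate does too, casting a negative result straight to byte wraps around to large values, which produces exactly this kind of hard banding. Under that assumption, remapping to [0, 1] before quantising should fix it; a sketch of the inner loop:

        // Remap from [-1, 1] to [0, 1] so negative noise doesn't wrap when cast to byte.
        float n = Noise.Generate(x / scale, y / scale);
        byte shade = (byte)MathHelper.Clamp((n + 1f) * 127.5f, 0f, 255f);
        colors[x, y] = new Color(shade, shade, shade);

    Separately, scale = colors.GetLength(0) means x / scale only spans [0, 1), so the whole image samples a single noise period; dividing by something like 64 to 128 instead gives far more visible variation.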


  • Box2D Bicycle Wheels Motor Problem - Flash 2.1a

    - by Craig
    I have made a bicycle with Box2D, using several polygons for the frame at different angles connected with weld joints, and revolute joints with motors on the wheels. I have made some basic terrain (straight ground and a small ramp) and added keyboard input to control the bicycle, with torque to balance it. All of this is done with Box2D's debug draw. When the bicycle is on its back wheel but tilted diagonally forward (kinda like this position: /) the motors just cause it to spin over backwards, when in reality it should either stay on its back wheel or come down onto both wheels. Here's my code for the revolute joints:

        // Front Wheel Joint
        var frontWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        frontWheelJointDef.Initialize(frontWheelBody, secondFrameBody, frontWheelBody.GetWorldCenter());
        frontWheelJointDef.enableMotor = true;
        frontWheelJointDef.maxMotorTorque = 10000;
        frontWheelJoint = _world.CreateJoint(frontWheelJointDef) as b2RevoluteJoint;

        // Rear Wheel Joint
        var rearWheelJointDef:b2RevoluteJointDef = new b2RevoluteJointDef();
        rearWheelJointDef.Initialize(rearWheelBody, firstFrameBody, rearWheelBody.GetWorldCenter());
        rearWheelJointDef.enableMotor = true;
        rearWheelJointDef.maxMotorTorque = 10000;
        rearWheelJoint = _world.CreateJoint(rearWheelJointDef) as b2RevoluteJoint;

    And here's the relevant part of my update function:

        // up and down control wheels motor
        if (up)   { motorSpeed -= 0.5; }
        if (down) { motorSpeed += 0.5; }

        // left and right control cart torque
        if (left) {
            middleCentreFrameBody.ApplyTorque( -3);
            gearBody.ApplyTorque( -3);
            firstFrameBody.ApplyTorque( -3);
            secondFrameBody.ApplyTorque( -3);
            rearWheelToChainBody.ApplyTorque( -3);
            chainToFrontFrameBody.ApplyTorque( -3);
            topMiddleFrameBody.ApplyTorque( -3);
        }
        if (right) {
            middleCentreFrameBody.ApplyTorque( 3);
            gearBody.ApplyTorque( 3);
            firstFrameBody.ApplyTorque( 3);
            secondFrameBody.ApplyTorque( 3);
            rearWheelToChainBody.ApplyTorque( 3);
            chainToFrontFrameBody.ApplyTorque( 3);
            topMiddleFrameBody.ApplyTorque( 3);
        }

        // motor friction
        motorSpeed *= 0.99;

        // motor max speed
        if (motorSpeed > 100) { motorSpeed = 100; }

        rearWheelJoint.SetMotorSpeed(motorSpeed);
        frontWheelJoint.SetMotorSpeed(motorSpeed);

    Any ideas what might be causing this? Thanks


  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formula I am using for calculating the ball's position at any one point in time is:

        x = x0 + vx0*t
        y = y0 + vy0*t - 0.5*g*t*t

    where g is gravity, t is time, x0 is the initial x position, and vx0 is the initial x velocity. What I would like to do is change the speed of this ball without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, and finishes in the lower right corner, and this takes 5s. What I would like to be able to do is change this so it takes 10s or 20s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t, but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
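
    For what it's worth, manipulating t is exactly the right instinct, as long as it's done uniformly. To make the flight take k times as long, substitute t -> t/k in both equations; absorbing the substitution into the constants means dividing both initial velocities by k and gravity by k squared. Every point on the curve is unchanged, it is just reached later. A sketch:

        // Stretch the flight time by a factor k without changing the trajectory:
        //   x(t) = x0 + (vx0 / k) * t
        //   y(t) = y0 + (vy0 / k) * t - 0.5 * (g / (k * k)) * t * t
        // traces the same curve as the original, traversed k times slower.
        static void StretchFlightTime(ref float vx0, ref float vy0, ref float g, float k)
        {
            vx0 /= k;
            vy0 /= k;
            g   /= k * k;
        }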


  • how to do partial updates in OpenGL?

    - by Will
    It is general wisdom that you redraw the entire viewport on each frame. I would like to use partial updates instead; what are the various ways I can do that, and what are their pros, cons and relative performance? (Using textures, FBOs, the accumulation buffer, any kind of scissors that can affect swapbuffers, etc.?) A scenario: a scene with a fair few thousand visible trees; although the textures are mipmapped and they are drawn via VBOs roughly front-to-back and so on, it's still a lot of polys. Would streaming a single screen-sized texture be better than throwing them at the screen every frame? You'd have to redraw and recapture them only on camera movement, or as often as your wind model updates or whatever, which need not be every frame.


  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background offsets itself a bit from where it should draw, and so I get this line. I have some trouble understanding why I get this bug when I set a speed that is different from 1, 2, 4, 8, 16, ... In the main class I set the speed depending on the player speed:

        bgSpeed = -(int)playerMoveSpeed.X / 10;

    and here's my background class:

        class ParallaxingBackground
        {
            Texture2D texture;
            Vector2[] positions;

            public int Speed { get; set; }

            public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
            {
                texture = content.Load<Texture2D>(texturePath);
                this.Speed = speed;
                positions = new Vector2[screenWidth / texture.Width + 2];
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i] = new Vector2(i * texture.Width, 0);
                }
            }

            public void Update()
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += Speed;
                    if (Speed <= 0)
                    {
                        if (positions[i].X <= -texture.Width)
                        {
                            positions[i].X = texture.Width * (positions.Length - 1);
                        }
                    }
                    else
                    {
                        if (positions[i].X >= texture.Width * (positions.Length - 1))
                        {
                            positions[i].X = -texture.Width;
                        }
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    spriteBatch.Draw(texture, positions[i], Color.White);
                }
            }
        }
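
    A plausible culprit in the posted Update(): when a tile crosses the wrap threshold it is snapped to a fixed X, which discards however far past the threshold it travelled that frame. If Speed divides texture.Width evenly (as 1, 2, 4, 8, 16 presumably do) the overshoot is always exactly zero and the bug hides; for any other speed a fraction of a frame's movement is lost on every wrap, the tiles drift apart, and a gap opens. Wrapping by the full strip width instead preserves the overshoot; a sketch against the posted code:

        public void Update()
        {
            // Total width of the repeating strip of tiles.
            int strip = texture.Width * positions.Length;

            for (int i = 0; i < positions.Length; i++)
            {
                positions[i].X += Speed;

                // Shift by a full strip width so the overshoot is preserved
                // (Speed no longer needs to divide texture.Width).
                if (positions[i].X <= -texture.Width)
                    positions[i].X += strip;
                else if (positions[i].X >= strip - texture.Width)
                    positions[i].X -= strip;
            }
        }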


  • "Unable to create OpenGL 3.3 context (flags 0, profile 1)"

    - by Tsvetan
    Trying to run any of the well-known McKesson tutorials on a friend's laptop results in the aforementioned exception. I read that in order to run applications which use OpenGL 3.3 you must have at least an ATI HD or Nvidia 8xxx series GPU. He has an ATI HD-class graphics processor, which (maybe) eliminates this as the issue. I also read that this error may result from old drivers; he updated his drivers, but that didn't solve the problem. The tutorials are built as described in the book's introduction and glsdk is installed. If more information is needed, say so and I will provide it. What are the other possible causes of this kind of exception, and how can I fix them?


  • Examples of interesting implementations of character stats?

    - by Tchalvak
    I've got this BBG going (http://ninjawars.net), and the character stats are currently simplistic. I'm looking to add a few stats to the current two (strength and maximum hitpoints, essentially). I've come up with strength (unchanged), speed, stamina, and some other somewhat interesting wildcard stats. However, I'm not satisfied with how boring the effects of some of these stats are, because they're very linear: better stat, better effects, but the stats don't interact with each other, there's no rock-paper-scissors interaction, and having more is always better all the time. So what I'd really like is to see examples of interesting character stats or effects of stats. Examples that I can think of offhand: Call of Cthulhu's Sanity stat (things get really weird/chaotic if you start losing sanity), and White Wolf stats, to a certain extent (the stats themselves have some basic effects, and all skills base their effectiveness on stats as well). What are some other ways people have used stats that are worth checking out?


  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a tunnel effect like the ones below in cocos2d (iOS). Could anyone suggest any pointers? (ref Video 1, ref Video 2.) So far I have tried several ring-shaped sprites with decreasing scale, all positioned around the same centre point, with Z decreasing as well for each smaller sprite. With that, I animate them with CCScaleTo, changing the size to 2.0 over the animation duration, but it does not come anywhere near the tunnel effect shown in the references. Thanks, sam


  • Realistic planetary terrain generation with weights

    - by Programmdude
    I need terrain generation for a planet. The planet will be divided up into several hundred hexes, and I need the generation to be realistic and based on weights. I have dabbled in terrain generation before, but nothing like this, so I figure it would be a good idea to ask the community for answers, recommended articles or the like. By realistic, I mean not just random hexes, but continent-shaped things with a few islands, more desert around the equator, and more ice around the poles. I also have two weights I need to base it around: an ice percentage and a water percentage. That means that around XX% of the planet will need to be water. Does anyone have any advice or places to start? Generating arbitrary terrain is easy, but something a bit more "organic" like this seems rather difficult. It also needs to be seamless; that should be obvious since it's a planet, but there's no harm in pointing it out.
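
    One concrete way to honour an exact water percentage: give every hex a low-frequency noise elevation, then set the sea level at the matching quantile of the sorted elevations, so continents fall out of the noise while the water weight is met exactly; ice and desert can then come from a latitude term plus a little noise. A sketch of the quantile step (plain C#; this whole approach is a suggestion on my part, not something from the post):

        using System.Linq;

        static class PlanetGen
        {
            // elevation: one low-frequency noise sample per hex, any range.
            // Returns the sea level that makes waterFraction of the hexes water.
            public static float SeaLevelFor(float[] elevation, float waterFraction)
            {
                float[] sorted = elevation.OrderBy(e => e).ToArray();
                int index = (int)(waterFraction * (sorted.Length - 1));
                return sorted[index]; // hexes at or below this elevation become water
            }
        }

    Blending the latitude bands with a small noise jitter keeps the polar caps and equatorial desert from ending in perfectly straight lines, which helps the "organic" look.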


  • Best practices with Vertices in Open GL

    - by Darestium
    What is the best practice for storing vertex data in OpenGL? I.e.:

        struct VertexColored
        {
        public:
            GLfloat position[];
            GLfloat normal[];
            byte colours[];
        };

        class Terrian
        {
        private:
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
            VertexColored vertices[];
        };

    or having them stored separately in the required class, like:

        class Terrian
        {
        private:
            GLfloat vertices[];
            GLfloat normals[];
            GLfloat colors[];
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
        };


  • What light attenuation function does UDK use?

    - by ananamas
    I'm a big fan of the light attenuation in UDK. Traditionally I've always used the constant-linear-quadratic falloff function to control how "soft" the falloff is, which gives three values to play with. In UDK you can get similar results, but you only need to tweak one value: FalloffExponent. I'm interested in what the actual mathematical function here is. The UDK lighting reference describes it as follows: FalloffExponent: This allows you to modify the falloff of a light. The default falloff is 2. The smaller the number, the sharper the falloff and the more the brightness is maintained until the radius is reached. Does anyone know what it's doing behind the scenes?
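
    Epic doesn't publish the exact curve in that reference, but the documented behaviour (lower exponent means brightness is held longer and then drops more sharply at the radius) matches a simple power of the complemented normalised distance. As an educated guess rather than confirmed UDK source, something of this shape:

        attenuation(d) = saturate(1 - d / radius) ^ FalloffExponent

    where saturate clamps to [0, 1]. With FalloffExponent = 2 this is a smooth quadratic-style falloff, and as the exponent shrinks toward 0 the light stays near full brightness until just inside the radius, which is exactly what the documentation describes.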


  • Making a surface transparent from blackness of texture

    - by Dan the Man
    I am making a "halo" shader in unity using GLSL. And I've come to a roadblock. What I need to do is take a texture, like the following, and make it transparent according to the darkness of it. And I don't want a cutout, because that cuts it off at a hard edge. This line of code doesn't seem to work. gl_FragColor = texture2D( vec4( _MainTex.r, _MainTex.g, _MainTex.b, _MainTex.a), vec2(textureCoordinates));


  • Algorithm allowing a good waypoint path following?

    - by Thierry Savard Saucier
    I'm looking more for how I should implement this: either a tutorial or even the name of the concept I'm missing. I'm pretty sure some basic pathfinding algorithm could help me here, but I don't know which one... I have a world map with different cities on it. The player can choose a city from a menu, or click on an available city on the world map, and the toon should walk over there. But I want him to follow a predefined path. Let's say our hero is in city 1. He clicks on city 4. I want him to follow the path to city 2 and from there to city 4. I was handling this easily with arrow movement (left, right, up, down) since it's a single check. Now I'm not sure how I should do this. Should I loop through each possible path and check which one leads to the destination (city 4) the fastest... and if I do, how do I avoid running in circles forever with cities 1-5-2?
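
    The concept name here is graph search: treat cities as nodes and predefined paths as edges, then run breadth-first search for the fewest hops, or Dijkstra/A* if edges have different lengths. The "running in circles with 1-5-2" problem is exactly what a visited set prevents. A compact BFS sketch (C#; the adjacency dictionary is a stand-in for however the map stores its connections):

        using System.Collections.Generic;

        static class CityGraph
        {
            // Returns the chain of cities from start to goal inclusive, or null if unreachable.
            public static List<int> FindPath(Dictionary<int, int[]> neighbours, int start, int goal)
            {
                var cameFrom = new Dictionary<int, int> { [start] = start };
                var frontier = new Queue<int>();
                frontier.Enqueue(start);

                while (frontier.Count > 0)
                {
                    int city = frontier.Dequeue();
                    if (city == goal) break;

                    foreach (int next in neighbours[city])
                    {
                        if (cameFrom.ContainsKey(next)) continue; // visited: this kills the 1-5-2 loop
                        cameFrom[next] = city;
                        frontier.Enqueue(next);
                    }
                }

                if (!cameFrom.ContainsKey(goal)) return null;

                // Walk the breadcrumbs back from goal to start, then reverse.
                var path = new List<int> { goal };
                while (path[path.Count - 1] != start)
                    path.Add(cameFrom[path[path.Count - 1]]);
                path.Reverse();
                return path;
            }
        }

    Feeding the returned chain to the existing tile-walking code one leg at a time then makes the toon walk 1 -> 2 -> 4 without any extra global logic.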


  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    Things I initialize at the beginning of the program (I realize not all of these pertain to my issue; I just copied and pasted what I had):

        // overall initialized
        // things openGL related I initialize earlier on in the project
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClearDepth( 1.0f );
        glEnable(GL_ALPHA_TEST);
        glEnable( GL_STENCIL_TEST );
        glEnable(GL_DEPTH_TEST);
        glDepthFunc( GL_LEQUAL );
        glEnable(GL_CULL_FACE);
        glFrontFace( GL_CCW );
        glEnable(GL_COLOR_MATERIAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );

    We also initialize our shader programs (I added some shader program function definitions below). This enum list lives elsewhere in the code; I figured it would help explain the shader compile/creation function right under it:

        /* enum eSHADER_ATTRIB_LOCATION
        {
            VERTEX_ATTRIB = 0,
            NORMAL_ATTRIB = 2,
            COLOR_ATTRIB,
            COLOR2_ATTRIB,
            FOG_COORD,
            TEXTURE_COORD_ATTRIB0 = 8,
            TEXTURE_COORD_ATTRIB1,
            TEXTURE_COORD_ATTRIB2,
            TEXTURE_COORD_ATTRIB3,
            TEXTURE_COORD_ATTRIB4,
            TEXTURE_COORD_ATTRIB5,
            TEXTURE_COORD_ATTRIB6,
            TEXTURE_COORD_ATTRIB7
        }; */

        // if we fail making our shader, leave
        if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3,
                VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal",
                TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

        if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3,
                VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal",
                TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

    Shader program functions:

        bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... )
        {
            // here are our handles for the openGL shaders
            int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1;

            // get our shader data
            char *vData = 0, *fData = 0;
            int vLength = 0, fLength = 0;
            LoadShaderFile( szVertexShaderName, &vData, &vLength );
            LoadShaderFile( szFragmentShaderName, &fData, &fLength );

            // data
            if( !vData )
                return false;

            // data
            if( !fData )
            {
                delete[] vData;
                return false;
            }

            // create both our shader objects
            iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER );
            iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER );

            // well we got this far so we have dynamic data to clean up
            // load vertex shader
            glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength );
            // load fragment shader
            glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength );

            // we are done with our data, delete it
            delete[] vData;
            delete[] fData;

            // compile them both
            glCompileShader( iGLVertexShaderHandle );

            // get shader status
            int iShaderOk;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                // get what happened with our shader
                glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK );
                // delete our dynamic data
                free( buffer );
                glDeleteShader(iGLVertexShaderHandle);
                return false;
            }

            glCompileShader( iGLFragmentShaderHandle );
            // get shader status
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                // get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                // delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            // lets check to see if the vertex shader compiled
            int iCompiled = 0;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                // this shader did not compile, leave
                return false;
            }

            // lets check to see if the fragment shader compiled
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                char* buffer;
                // get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                // delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            // make our new shader program
            m_iShaderProgramHandle = glCreateProgram();
            glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle );
            glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle );
            glLinkProgram( m_iShaderProgramHandle );

            int iLinked = 0;
            glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked );
            if( !iLinked )
            {
                // we didn't link
                return false;
            }

            // NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING
            // start from this parameter
            va_list parseList;
            va_start( parseList, szFragmentShaderName );

            // read in number of variables, if any
            unsigned uiNum = 0;
            uiNum = va_arg( parseList, unsigned );

            // for loop through our attribute pairs
            int enumType = 0;
            for( unsigned x = 0; x < uiNum; ++x )
            {
                // specify our attribute locations
                enumType = va_arg( parseList, int );
                char* name = va_arg( parseList, char* );
                glBindAttribLocation( m_iShaderProgramHandle, enumType, name );
            }

            // end our list parsing
            va_end( parseList );

            // relink: we have custom specified our attribute locations
            glLinkProgram( m_iShaderProgramHandle );

            // fill our handles
            InitializeHandles( );

            // everything went great
            return true;
        }

        void CShaderProgram::InitializeHandles( void )
        {
            m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" );
            m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" );
            m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" );
            m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" );

            /////////////////////////////////////////////////////////////////////////////////
            // texture handles
            m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
            if( m_uihDiffuseMap != -1 )
            {
                // store what texture index this handle will be in the shader
                glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); // (0)+
            }

            m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
            if( m_uihNormalMap != -1 )
            {
                // store what texture index this handle will be in the shader
                glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); // (1)+
            }
        }

        void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap )
        {
            // (0)+
            glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
        }

        void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap )
        {
            // (1)+
            glActiveTexture( RM_NORMAL+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihNormalMap );
        }

    My two test shaders. (Also, my math order is correct; it matches the matrix ordering in my math library, and once again I've tested the basic rendering: rendering to the screen works fine.)

    ---------------------------------------- SIMPLE SHADER ----------------------------------------

    The vertex shader looks like this:

        #version 330
        in vec3 vVertexPos;
        in vec3 vNormal;
        in vec2 vTexCoord;

        uniform mat4 mWorld;       // Model Matrix
        uniform mat4 mView;        // Camera View Matrix
        uniform mat4 mProjection;  // Camera Projection Matrix

        out vec2 vTexCoordVary;    // Texture coord to the fragment program
        out vec3 vNormalColor;

        void main( void )
        {
            // pass the texture coordinate
            vTexCoordVary = vTexCoord;
            vNormalColor = vNormal;

            // calculate our model view projection matrix
            mat4 mMVP = (( mWorld * mView ) * mProjection );

            // result our position
            gl_Position = vec4( vVertexPos, 1 ) * mMVP;
        }

    The fragment shader looks like this:

        #version 330
        in vec2 vTexCoordVary;
        in vec3 vNormalColor;

        uniform sampler2D diffuseMap;
        uniform sampler2D normalMap;

        out vec4 fragColor[2];

        void main( void )
        {
            // CORRECT
            fragColor[0] = texture( normalMap, vTexCoordVary );
            fragColor[1] = vec4( vNormalColor, 1.0 );
        }

    ---------------------------------------- SCREEN SHADER ----------------------------------------

    The vertex shader looks like this:

        #version 330
        in vec3 vVertexPos;     // This is the position of the vertex coming in
        in vec2 vTexCoord;      // This is the texture coordinate....

        out vec2 vTexCoordVary; // Texture coord to the fragment program

        void main( void )
        {
            vTexCoordVary = vTexCoord;

            // set our position
            gl_Position = vec4( vVertexPos.xyz, 1.0f );
        }

    The fragment shader looks like this:

        #version 330
        in vec2 vTexCoordVary;        // Incoming "varying" texture coordinate

        uniform sampler2D diffuseMap; // the tile detail texture
        uniform sampler2D normalMap;  // the normal map from earlier

        out vec4 vTheColorOfThePixel;

        void main( void )
        {
            // CORRECT
            vTheColorOfThePixel = texture( normalMap, vTexCoordVary );
        }

    CRenderTarget class main functions. Here is my render target's create function:

        bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight,
                                    int iInternalFormat, bool bDepthWanted )
        {
            if( uiNumTextures <= 0 )
                return false;

            // generate our variables
            glGenFramebuffers(1, &m_uifboHandle);

            // Initialize FBO
            glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);

            m_uiNumTextures = uiNumTextures;
            if( bDepthWanted )
                m_uiNumTextures += 1;

            m_uiTextureHandle = new unsigned int[uiNumTextures];
            glGenTextures( uiNumTextures, m_uiTextureHandle );

            for( unsigned x = 0; x < uiNumTextures-1; ++x )
            {
                glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]);

                // Reserve space for our 2D render target
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x, GL_TEXTURE_2D, m_uiTextureHandle[x], 0);
            }

            // if we need one for depth testing
            if( bDepthWanted )
            {
                /* glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
                glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0); */

                // Must attach texture to framebuffer. Has Stencil and depth
                glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8, TEXTURE_WIDTH, TEXTURE_HEIGHT );
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            }

            glBindFramebuffer(GL_FRAMEBUFFER, 0);

            // everything went fine
            return true;
        }

        void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture, const bool bBindFrameBuffer )
        {
            if( bBindFrameBuffer )
                glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );

            if( uiWhichTexture < m_uiNumTextures )
                glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc, m_uiTextureHandle[uiWhichTexture], 0);
        }

        void CRenderTarget::UnBind( void )
        {
            // default our binding
            glBindFramebuffer( GL_FRAMEBUFFER, 0 );
        }

    This is all in a test project, so here's my straightforward rendering function for testing. This render function does the basic rendering steps. Keep in mind I have already tested my textures, and I have already tested the box that's being rendered; all basic rendering works fine. It's just when I try to render to a texture and then display it on a render surface that it does not work. Also, I have tested my render surface; it is bound exactly to the screen coordinate space.

        void TestRenderSteps( void )
        {
            // Clear the color and the depth
            glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            // bind the shader program
            glUseProgram( testShader.m_iShaderProgramHandle );

            // 1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() );

            // 2) how our stream will be split here ( 4 bytes position, ..ext )
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride();

            // 3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            // send the needed information into the shader
            testShader.SetWorldMatrix( boxPosition );
            testShader.SetViewMatrix( Static_Camera.GetView( ) );
            testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) );
            testShader.SetDiffuseMap( iTextureID );
            testShader.SetNormalMap( iTextureID2 );

            GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
            glDrawBuffers(2, buffers);

            // bind to our render target
            // RM_DIFFUSE, RM_NORMAL are enums (0 && 1)
            renderTarget.Bind( RM_DIFFUSE, 1, true );
            renderTarget.Bind( RM_NORMAL, 1, false ); // false because buffer is already bound

            // i clear here just to clear the texture to make it a default value of white
            // by doing this i can see if what im rendering to my screen is just drawing to the screen
            // or if its my render target defaulted
            glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            // i have this box object which i draw
            testBox.Draw();

            // the draw call looks like this
            // my normal rendering works just fine so i know this draw is fine
            // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(),
            //                           m_sides[x].GetPrimitiveCount() * 3,
            //                           GL_UNSIGNED_INT,
            //                           BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()),
            //                           m_sides[x].GetStartVertex( ) );

            // we unbind the target back to default
            renderTarget.UnBind();

            // i stop mapping my vertex format
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride();

            // i go back to default in using no shader program
            glUseProgram( 0 );

            // now that everything is drawn to the textures,
            // lets draw our screen surface and pass it our 2 filled-out textures
            // NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD

            // bind the shader program
            glUseProgram( testScreenShader.m_iShaderProgramHandle );

            // 1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() );

            // 2) how our stream will be split here
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride();

            // 3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            // pass our 2 filled-out textures (in the shader im just using the diffuse;
            // i wanted to see if i was rendering anything before getting into other techniques)
            testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); // SetDiffuseMap definition in shader program class
            testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) );  // SetNormalMap definition in shader program class

            // DO the draw call drawing our screen rectangle
            glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(),
                                      m_ScreenRect.GetPrimitiveCount() * 3,
                                      GL_UNSIGNED_INT,
                                      BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()),
                                      m_ScreenRect.GetStartVertex( ) );

            // unbind our vertex mapping
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride();

            // default to no shader program
            glUseProgram( 0 );
        }

    Last words:
    1) I can render my box just fine.
    2) I can render my screen rect just fine.
    3) I cannot render my box into a texture and then display it on my screen rect.
    4) This entire project is just a test project I made to try out different rendering practices, so excuse any ugly-ish, unclean code; it was thrown together on the fly while trying new test cases.


  • How to use UILongPressGestureRecognizer with sprite drag & wait?

    - by ganesh
    Maybe this has been asked before, but I couldn't find a good answer. Please tell me how this can be implemented with UILongPressGestureRecognizer. A user drags a sprite from location X to location Y. Then it waits at location Y (the touch has not ended yet) for 1 or 2 seconds and releases the touch, i.e. the touch ends. In this case, shouldn't the following states be triggered, in this order, for UILongPressGestureRecognizer: UIGestureRecognizerStateBegan, UIGestureRecognizerStateChanged, UIGestureRecognizerStateEnded? My problem is that if a UIPanGestureRecognizer is also implemented to handle drags, the UILongPressGesture is never triggered, even after long waits. Any thoughts?


  • Does Unity's "Transparent Bumped Specular" translate to "semi-shiny must be semi-transparent"?

    - by Shivan Dragon
    Unity's documentation for the "Transparent Bumped Specular" shader/material type is simply a concatenation of the descriptions for its Transparent and Specular shader types (and also Bumped, but that doesn't apply to the question):

        Transparent Properties
        This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent while 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque. (...)

        Specular Properties
        (...) Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called "gloss map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection.

    To me this translates to:

    - I have a mesh representing a car tire.
    - The texture needs to be very shiny on the rims and almost not shiny at all on the rubber parts.
    - Since the rim is really complex (with cut-out decorations and such), I will not build that into the mesh, but fake it with transparency in the texture.

    I can't do all this using Unity's "Transparent Bumped Specular" shader, because the "rubber" part of the texture will become semi-transparent due to me painting the alpha channel dark grey (because I want it to also be less shiny). Is this correct? If not, how can I make this work?


  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article:

        In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).

    As I understand it, the DirectX "swap chain" is merely a render-ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple-buffered vertical synchronisation in DirectX?


  • XNA - Finding boundaries in isometric tilemap

    - by Yheeky
    I have an issue with my 2D isometric engine. I'm using my own 2D camera class, which works with matrices, and I need to find the tilemap's boundaries so the user always sees the map. Currently my map size is 100x100 (with 128x128 tiles), so the calculation (e.g. for the right boundary) is:

        var maxX = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2) - ViewSize.X;
        // = (100 + 1) * (128 / 2) - 1360 = 5104 pixels

    This works fine at a scale factor of 1.0f, but not for any other zoom factor. When I zoom out to 0.9f, the right border should be at approximately 4954. I'm using the following code for the transformation, but I always get a wrong value:

        var maxXVector = new Vector2(maxX, 0);
        var maxXTransformed = Vector2.Transform(maxXVector, tempTransform).X;

    The result is 4593. Does anyone have an idea what I'm doing wrong? Thanks for your help! Yheeky
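
    The numbers in the post actually pin down the fix. The boundary lives in world space, so only the viewport extent needs un-zooming: at scale s the camera sees ViewSize.X / s world pixels, hence maxX = worldWidth - ViewSize.X / s. Check: worldWidth = (100 + 1) * 64 = 6464, and 6464 - 1360 / 0.9 = 4953, which matches the expected ~4954. Pushing maxX through the full camera matrix also applies the camera's own translation, which is why 4593 comes out instead. A sketch (Zoom is assumed to be the camera's scale factor):

        // World width of the isometric map in pixels, as in the post.
        float worldWidth = (TileMap.MapWidth + 1) * (TileMap.TileWidth / 2f);

        // At zoom s the viewport covers ViewSize / s world pixels,
        // so clamp the camera no further right than:
        float maxX = worldWidth - ViewSize.X / Zoom;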


  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2D lookup texture into a 3D LUT. When I simply use:

        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);

    I get this. I tried building my 2D atlas horizontally, but it did not help; the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas?

    Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:

        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i * 32, 32, 32), data, 0, data.Length);

            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);
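
    One thing worth making explicit: Texture2D.GetData returns texels row-major across the whole atlas, while Texture3D.SetData expects slice-major order (all of slice z = 0 row by row, then z = 1, and so on). For a vertical stack of 32x32 tiles those two orders happen to coincide, but for a horizontal 1024x32 atlas they don't, and an explicit index remap is needed. A sketch for the horizontal case, assuming tile z occupies columns [z*32, z*32 + 32), which is my guess at the layout rather than something stated in the post:

        // Horizontal atlas: 1024 x 32, tile z in columns [z*32, z*32 + 32).
        Color[] atlasData = new Color[1024 * 32];
        ColorAtlas.GetData(atlasData); // row-major: index = y * 1024 + x

        Color[] volume = new Color[32 * 32 * 32];
        for (int z = 0; z < 32; z++)
            for (int y = 0; y < 32; y++)
                for (int x = 0; x < 32; x++)
                    // 3D order: slice z, then row y, then x.
                    volume[(z * 32 + y) * 32 + x] = atlasData[y * 1024 + (z * 32 + x)];

        ColorAtlas3D.SetData(volume);

    If the vertical 32-wide atlas (whose linear order is already identical to the 3D order) still renders scrambled, the atlas layout itself is the thing to double-check, e.g. which colour channel varies along tiles versus within a tile.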

