Search Results

Search found 24037 results on 962 pages for 'game design'.

  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20x80 px and ball.png is 25x25 px. Code:

      @Override
      public void create() {
          // ...
          batch = new SpriteBatch();
          playerTex = new Texture(Gdx.files.internal("data/player.png"));
          ballTex = new Texture(Gdx.files.internal("data/ball.png"));

          player = new Rectangle();
          player.width = 20;
          player.height = 80;
          player.x = Gdx.graphics.getWidth() - player.width - 10;
          player.y = 300;

          ball = new Circle();
          ball.x = Gdx.graphics.getWidth() / 2;
          ball.y = Gdx.graphics.getHeight() / 2;
          ball.radius = ballTex.getWidth() / 2;
      }

      @Override
      public void render() {
          Gdx.gl.glClearColor(1, 1, 1, 1);
          Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
          camera.update();

          // draw player, ball
          batch.setProjectionMatrix(camera.combined);
          batch.begin();
          batch.draw(ballTex, ball.x, ball.y);
          batch.draw(playerTex, player.x, player.y);
          batch.end();

          // update player position
          if (Gdx.input.isKeyPressed(Keys.DOWN))  player.y -= 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.UP))    player.y += 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.LEFT))  player.x -= 250 * Gdx.graphics.getDeltaTime();
          if (Gdx.input.isKeyPressed(Keys.RIGHT)) player.x += 250 * Gdx.graphics.getDeltaTime();

          // don't let the player leave the field
          if (player.y < 0) player.y = 0;
          if (player.y > 600 - 80) player.y = 600 - 80;

          // check collision
          if (Intersector.overlaps(ball, player)) Gdx.app.log("overlaps", "yes");
      }
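    One mismatch worth checking (an assumption on my part, not a confirmed diagnosis): a Circle's x/y is its centre, while corner-anchored sprite draw calls take the bottom-left corner of the texture, so the rendered ball and the collision circle end up offset from each other by one radius. A minimal, language-agnostic sketch of the correction:

      #include <cstdio>

      struct Circle { float x, y, radius; };

      int main() {
          Circle ball{400.0f, 300.0f, 12.5f};
          // The circle's (x, y) is its centre; a corner-anchored draw call
          // needs the bottom-left corner, shifted by one radius on each axis.
          float cornerX = ball.x - ball.radius;
          float cornerY = ball.y - ball.radius;
          std::printf("draw the ball texture at (%.1f, %.1f)\n", cornerX, cornerY);
          return 0;
      }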

  • Move model forward based on model orientation

    - by ChocoMan
    My model rotates on its own Y-axis regardless of where it is in the world. Here are the controls for the left thumbstick: UP (move model forward on the Z-axis), DOWN (move model backward on the Z-axis), LEFT and RIGHT (strafe to either side). The problem is adjusting the model's direction of travel when the player rotates it while moving forward or backward. An example of what I'm trying to achieve would be a car doing donuts: the car always moves along whatever direction it currently interprets as forward (or backward) relative to its local rotation. Here is how I'm calling the movement:

      // Rotate model with Right Thumbstick along X-Axis
      modelRotation -= pController.ThumbSticks.Right.X * mRotSpeed;

      // Move Forward
      if (pController.IsButtonDown(Buttons.LeftThumbstickUp))
      {
          modelPosition.Z -= -pController.ThumbSticks.Left.Y * speed;
      }
      // Move Backward
      if (pController.IsButtonDown(Buttons.LeftThumbstickDown))
      {
          modelPosition.Z += pController.ThumbSticks.Left.Y * speed;
      }
      // Strafe Left
      if (pController.IsButtonDown(Buttons.LeftThumbstickLeft))
      {
          modelPosition.X += -pController.ThumbSticks.Left.X * speed;
      }
      // Strafe Right
      if (pController.IsButtonDown(Buttons.LeftThumbstickRight))
      {
          modelPosition.X -= pController.ThumbSticks.Left.X * speed;
      }
      // DeadZone
      if (!pController.IsButtonDown(Buttons.LeftThumbstickUp) &&
          !pController.IsButtonDown(Buttons.LeftThumbstickDown) &&
          !pController.IsButtonDown(Buttons.LeftThumbstickLeft) &&
          !pController.IsButtonDown(Buttons.LeftThumbstickRight))
      {
      }
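    The usual remedy (a sketch of the standard technique, not the poster's code) is to derive a forward vector from the model's current yaw and add that to the position, instead of always moving along the world Z axis. A minimal C++ sketch, assuming the yaw is in radians and the model faces +Z at zero rotation:

      #include <cmath>
      #include <cstdio>

      int main() {
          float yaw = 0.5f;          // model's rotation around the Y (up) axis, radians
          float speed = 2.0f;        // units per update
          float x = 0.0f, z = 0.0f;  // model position on the ground plane

          // Forward in world space depends on the current yaw.
          float forwardX = std::sin(yaw);
          float forwardZ = std::cos(yaw);

          // Moving "forward" follows the facing, so turning while moving
          // traces an arc (the car-doing-donuts behaviour).
          x += forwardX * speed;
          z += forwardZ * speed;
          std::printf("new position: (%f, %f)\n", x, z);
          return 0;
      }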

  • Can I use DllImport/PInvoke in libraries loaded as Assets in Unity Free?

    - by sebf
    I am interested in utilising third-party libraries in Unity Free. I know Unity can use managed libraries as Assets, but only the Pro version supports using native libraries (DllImport within scripts). This thread, however, suggests that it is possible to import DLLs in the free version. I would like to use native libraries (as a hobbyist I cannot afford Pro), but I want to do it the supported way so I don't have to worry about Unity 'fixing' this hole, if that is what it is. Is there any supported way to use native libraries with Unity Free? (i.e. does that thread suggest a workaround, or is it a bug?) Is it supported to use DllImport/PInvoke in libraries loaded as Assets? (Could I create a wrapper myself?)

  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and four walls (not that complicated) in Blender. I added a basic material and a cloud texture so they have something to look at other than gray. When I import them into JMonkey, they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene; I can remove that light or adjust its intensity and it has no effect on the scene. I exported all the Blender files to OgreXML format, then converted them to .j3o format in JMonkey. I renamed the textures to match their corresponding meshes, and this didn't do anything. Does anybody know how to create a flat object and put it into JMonkey with a texture? This sounds simple, and there is absolutely no information on it. This should be step 1!

  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described in this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with a solid colour and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA etc., but couldn't make it work. Thanks.
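    One direction worth experimenting with (a sketch under the assumption that the goal is an opaque destination alpha, and noting that ID3DXSprite may reset render states around Begin/End) is D3D9's separate alpha blending, which blends colour as usual while accumulating the alpha channel toward opaque:

      #include <d3d9.h>

      // Sketch: colour blends normally, but destination alpha accumulates as
      // destAlpha = srcAlpha * 1 + destAlpha * (1 - srcAlpha), which stays
      // opaque wherever something opaque was already drawn.
      void setupSeparateAlphaBlend(IDirect3DDevice9* device) {
          device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
          device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
          device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

          device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
          device->SetRenderState(D3DRS_SRCBLENDALPHA, D3DBLEND_ONE);
          device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_INVSRCALPHA);
      }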

  • Extra fire simulation on iPad device

    - by Nezam
    I have an iOS app for iPad which creates a few fire simulations over a PNG. It works exactly how we wanted in the simulator, but when we test it on a device, we get an extra fire simulation. Here are the screens: iPad Simulator: this is how it should display (iPad Simulation). iPad Device: this is how it actually displays (iPad Device). I'm ready to share whichever portion of my code would help once someone has an idea. Thanks in advance.

  • Island Generation Library

    - by thatguy
    Can anyone recommend a tile map generator (written in Java is a plus) where one can control the land types? For example: islands, large continents, a single large continent, an archipelago, etc. I've been reading through many posts on the subject, and it almost seems like everyone is rolling their own. Before creating my own, I'm wondering if there's already an open-source implementation that I might not be finding. If not, it seems like using Perlin noise is a popular choice. Some articles I've been reading:

      http://simblob.blogspot.com/2010/01/simple-map-generation.html
      Generate islands/continents with simplex noise
      https://sites.google.com/site/minecraftlandgenerator/
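    If rolling your own turns out to be the answer, the usual trick for island shapes (a sketch of the common technique, not from any particular library) is to combine a noise field with a radial falloff so heights sink below sea level toward the map edges. The noise2 function here is a cheap deterministic stand-in for a real Perlin/simplex implementation:

      #include <cmath>
      #include <vector>

      // Stand-in for real Perlin/simplex noise; returns a value in [0, 1).
      float noise2(float x, float y) {
          float s = std::sin(x * 12.9898f + y * 78.233f) * 43758.5453f;
          return s - std::floor(s);
      }

      std::vector<float> makeIslandHeightmap(int w, int h) {
          std::vector<float> height(w * h);
          for (int y = 0; y < h; ++y) {
              for (int x = 0; x < w; ++x) {
                  float nx = x / float(w) - 0.5f;  // [-0.5, 0.5]
                  float ny = y / float(h) - 0.5f;
                  float n = noise2(nx * 8.0f, ny * 8.0f);
                  // Distance from the map centre: 0 in the middle, ~0.7 in corners.
                  float d = std::sqrt(nx * nx + ny * ny);
                  // Subtracting the falloff pushes the edges below "sea level"
                  // (zero), leaving an island in the middle; tune the 2.0 for size.
                  height[y * w + x] = n - d * 2.0f;
              }
          }
          return height;
      }

      int main() { auto hm = makeIslandHeightmap(64, 64); return hm.empty(); }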

  • Corona SDK: Animation takes a long time to play after "prepare" step

    - by Michael Taufen
    First off, I'm using the current publicly available build, version 2011.704. I'm building a platformer with a character that runs along and jumps when the screen is tapped. While jumping, the animation code has him assume a svelte jumping pose, and upon detecting a collision with the ground, he returns to running. All of this works. The problem is that there is a strange gap of time, about half a second by the feel of it, where the character sits on the first frame of the run animation after landing before it actually starts playing. This leads me to believe the problem is somewhere between the "prepare" step of loading up a sprite set's animation sequence and the "play" step. Thanks in advance for any help. My code for when my character lands is as follows:

      local function collisionHandler ( event )
          if (event.object1.myName == "character") and (event.object2.type == "terrain") then
              inAir = false
              characterInstance:prepare( "run" ) -- TODO: time between prepare and play is curiously long...
              characterInstance:play()
          end
      end

  • Basic tutorial/introduction for 3D matrices, ideally in C++, without OpenGL or DirectX

    - by René Nyffenegger
    I am wondering if there is a simple tutorial that covers the basics of how to initialize rotation, translation and projection matrices, how to multiply them, and how to get the screen coordinates of a 3D point afterwards. Ideally, the tutorial comes with compilable code and does not depend on any third-party library. Searching the internet, I found lots of tutorials, so that is not the problem; yet all of them seemed to either cover OpenGL or DirectX, or they were theoretical in nature.
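    For a taste of what such a tutorial would contain, here is a minimal self-contained sketch of the whole pipeline (my own construction, using column vectors and a gluPerspective-style projection; conventions vary between sources):

      #include <cmath>
      #include <cstdio>

      struct Vec4 { float x, y, z, w; };
      struct Mat4 { float m[4][4]; };  // m[row][col]

      Mat4 identity() {
          Mat4 r{};
          for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
          return r;
      }

      // Rotation about the Y axis by 'a' radians.
      Mat4 rotationY(float a) {
          Mat4 r = identity();
          r.m[0][0] =  std::cos(a); r.m[0][2] = std::sin(a);
          r.m[2][0] = -std::sin(a); r.m[2][2] = std::cos(a);
          return r;
      }

      Mat4 translation(float tx, float ty, float tz) {
          Mat4 r = identity();
          r.m[0][3] = tx; r.m[1][3] = ty; r.m[2][3] = tz;
          return r;
      }

      // Perspective projection (fov in radians), same shape as gluPerspective.
      Mat4 perspective(float fov, float aspect, float znear, float zfar) {
          Mat4 r{};
          float f = 1.0f / std::tan(fov / 2.0f);
          r.m[0][0] = f / aspect;
          r.m[1][1] = f;
          r.m[2][2] = (zfar + znear) / (znear - zfar);
          r.m[2][3] = (2 * zfar * znear) / (znear - zfar);
          r.m[3][2] = -1.0f;
          return r;
      }

      Mat4 mul(const Mat4& a, const Mat4& b) {
          Mat4 r{};
          for (int i = 0; i < 4; ++i)
              for (int j = 0; j < 4; ++j)
                  for (int k = 0; k < 4; ++k)
                      r.m[i][j] += a.m[i][k] * b.m[k][j];
          return r;
      }

      Vec4 mul(const Mat4& a, const Vec4& v) {
          return {
              a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w,
              a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w,
              a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w,
              a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w };
      }

      int main() {
          // Model-view-projection: rotate the model, push it back, project.
          Mat4 mvp = mul(perspective(1.0f, 16.0f / 9.0f, 0.1f, 100.0f),
                         mul(translation(0, 0, -5), rotationY(0.3f)));
          Vec4 p = mul(mvp, Vec4{1, 1, 0, 1});
          // Perspective divide, then map [-1, 1] to a 1920x1080 screen.
          float sx = (p.x / p.w * 0.5f + 0.5f) * 1920.0f;
          float sy = (1.0f - (p.y / p.w * 0.5f + 0.5f)) * 1080.0f;
          std::printf("screen: (%.1f, %.1f)\n", sx, sy);
          return 0;
      }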

  • Transparency in XNA-4 primitives

    - by Shashwat
    I'm using XNA 4 with Visual Studio 2010. I'm trying to create a simple 3D world with walls and doors in which the user is free to roam around. A wall is just a rectangle, currently rendered from four vertices as a triangle strip. But to create a door, I'd have to split the wall into three rectangles, as shown in the figure (four quadrilaterals if I want the other door style). It becomes more complex with multiple doors on the same wall, or with windows. Is there any shorter way to handle this? I am looking for something that will just make the wall transparent wherever I want. I found a solution but am facing a problem with it here.
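    The other common route is a single wall quad with an alpha-tested texture, discarding pixels inside the opening. But if the wall does have to be split, the bookkeeping is small enough to automate; a sketch (my own construction, not from the post) that cuts an axis-aligned wall rectangle into up to four pieces around an opening:

      #include <cstdio>
      #include <vector>

      struct Rect { float x, y, w, h; };  // x, y = bottom-left corner

      // Split 'wall' into the pieces left over around 'hole'
      // (assumed to lie fully inside the wall).
      std::vector<Rect> splitAroundHole(const Rect& wall, const Rect& hole) {
          std::vector<Rect> parts;
          float hr = hole.x + hole.w, ht = hole.y + hole.h;
          float wr = wall.x + wall.w, wt = wall.y + wall.h;
          if (hole.x > wall.x) parts.push_back({wall.x, wall.y, hole.x - wall.x, wall.h});  // left strip
          if (hr < wr)         parts.push_back({hr, wall.y, wr - hr, wall.h});              // right strip
          if (ht < wt)         parts.push_back({hole.x, ht, hole.w, wt - ht});              // above the hole
          if (hole.y > wall.y) parts.push_back({hole.x, wall.y, hole.w, hole.y - wall.y});  // below (windows)
          return parts;
      }

      int main() {
          Rect wall{0, 0, 10, 3}, door{4, 0, 1, 2};
          for (const Rect& r : splitAroundHole(wall, door))
              std::printf("piece at (%g, %g), size %g x %g\n", r.x, r.y, r.w, r.h);
          return 0;
      }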

  • My grid-based collision detection is slow

    - by Fibericon
    Something about my implementation of a basic 2x4 grid for collision detection is slow: so slow, in fact, that it's actually faster to simply check every bullet from every enemy to see if its BoundingSphere intersects with that of my ship. It becomes noticeably slow when I have approximately 1000 bullets on screen (36 enemies shooting 3 bullets every 0.5 seconds). By commenting it out bit by bit, I've determined that the code used to add the bullets to the grid is what's slowest. Here's how I add them to the grid:

      for (int i = 0; i < enemy[x].gun.NumBullets; i++)
      {
          if (enemy[x].gun.bulletList[i].isActive)
          {
              enemy[x].gun.bulletList[i].Update(timeDelta);
              int bulletPosition = 0;
              if (enemy[x].gun.bulletList[i].position.Y < 0)
              {
                  bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450);
              }
              else
              {
                  bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450) + 4;
              }
              GridItem bulletItem = new GridItem();
              bulletItem.index = i;
              bulletItem.type = 5;
              bulletItem.parentIndex = x;
              if (bulletPosition > -1 && bulletPosition < 8)
              {
                  if (!grid[bulletPosition].Contains(bulletItem))
                  {
                      for (int j = 0; j < grid.Length; j++)
                      {
                          grid[j].Remove(bulletItem);
                      }
                      grid[bulletPosition].Add(bulletItem);
                  }
              }
          }
      }

    And here's how I check whether it collides with the ship (the snippet is truncated in the original):

      if (ship.isActive && !ship.invincible)
      {
          BoundingSphere shipSphere = new BoundingSphere(
              ship.Position, ship.Model.Meshes[0].BoundingSphere.Radius * 9.0f);
          for (int i = 0; i < grid.Length; i++)
          {
              if (grid[i].Contains(shipItem))
              {
                  for (int j = 0; j < grid[i].Count; j++)
                  {
                      // Other collision types omitted
                      else if (grid[i][j].type == 5)
                      {
                          if (enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive)
                          {
                              BoundingSphere bulletSphere = new BoundingSphere(
                                  enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].position,
                                  enemy[grid[i][j].parentIndex].gun.bulletModel.Meshes[0].BoundingSphere.Radius);
                              if (shipSphere.Intersects(bulletSphere))
                              {
                                  ship.health -= enemy[grid[i][j].parentIndex].gun.damage;
                                  enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive = false;
                                  grid[i].RemoveAt(j);
                                  break; // no need to check other bullets
                              }
                          }
                          else
                          {
                              grid[i].RemoveAt(j);
                          }
                      }

    What am I doing wrong here? I thought a grid implementation would be faster than checking each one.
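    For context, a sketch of the usual remedy (my own construction, not the poster's code): the Contains/Remove calls above scan the cell lists linearly every frame, and a fresh GridItem is allocated per bullet per update, so the grid upkeep costs more than it saves. Caching each bullet's current cell and touching the grid only when the cell actually changes removes nearly all of that work:

      #include <vector>

      struct Bullet {
          float x = 0, y = 0;
          int cell = -1;  // grid cell this bullet is currently registered in
      };

      const int kCells = 8;  // 2 rows x 4 columns
      std::vector<std::vector<Bullet*>> grid(kCells);

      int cellFor(const Bullet& b) {
          int col = static_cast<int>((b.x + 900.0f) / 450.0f);
          if (col < 0 || col > 3) return -1;  // off the playfield
          int row = (b.y < 0) ? 0 : 1;
          return row * 4 + col;
      }

      // Re-bucket one bullet; does work only when its cell changes.
      void updateCell(Bullet& b) {
          int c = cellFor(b);
          if (c == b.cell) return;  // common case: nothing to do
          if (b.cell >= 0) {
              auto& old = grid[b.cell];
              for (size_t i = 0; i < old.size(); ++i) {
                  if (old[i] == &b) {  // swap-and-pop removal, O(one cell)
                      old[i] = old.back();
                      old.pop_back();
                      break;
                  }
              }
          }
          if (c >= 0) grid[c].push_back(&b);
          b.cell = c;
      }

      int main() {
          Bullet b;
          b.x = 100; b.y = -5;
          updateCell(b);  // inserted into its cell
          b.x = 600;
          updateCell(b);  // moved: removed from the old cell, added to the new
          return 0;
      }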

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code; it's breaking on the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)". inParms is just a struct containing a handle to a Panel.

      public Renderer(Parameters inParms, ref DataRectangle[] inShapes)
      {
          Texture2DDescription description = new Texture2DDescription()
          {
              Width = 500,
              Height = 500,
              MipLevels = 1,
              ArraySize = inShapes.Length,
              Format = Format.R32G32B32_Float,
              SampleDescription = new SampleDescription(1, 0),
              Usage = ResourceUsage.Default,
              BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
              CpuAccessFlags = CpuAccessFlags.None,
              OptionFlags = ResourceOptionFlags.None
          };

          SwapChainDescription chainDescription = new SwapChainDescription()
          {
              BufferCount = 1,
              IsWindowed = true,
              Usage = Usage.RenderTargetOutput,
              ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
              SampleDescription = new SampleDescription(1, 0),
              Flags = SwapChainFlags.None,
              OutputHandle = inParms.Handle,
              SwapEffect = SwapEffect.Discard
          };

          Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, chainDescription, out mDevice, out mSwapChain);
          Texture2D texture = new Texture2D(Device, description, inShapes);
      }

    EDIT: Running with the Debug flag set, I got:

      D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT]
      D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
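    The debug layer is pointing at the root cause: three-channel float formats are commonly not renderable, and it suggests querying CheckFormatSupport. A sketch of that query (in C++ rather than SlimDX, purely as an illustration); the usual workarounds are switching to DXGI_FORMAT_R32G32B32A32_FLOAT, or dropping the RenderTarget bind flag if the texture is only ever sampled:

      #include <d3d11.h>

      // Returns true if 'format' can be bound as a render target on this device.
      bool canRenderTo(ID3D11Device* device, DXGI_FORMAT format) {
          UINT support = 0;
          if (FAILED(device->CheckFormatSupport(format, &support)))
              return false;
          return (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET) != 0;
      }

      // Usage sketch:
      //   if (!canRenderTo(device, DXGI_FORMAT_R32G32B32_FLOAT))
      //       /* fall back to DXGI_FORMAT_R32G32B32A32_FLOAT */;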

  • add collision detection to sprite?

    - by xBroak
    Basically I'm trying to add collision detection to the sprite below, using the following:

      self.rect = bounds_rect
      collide = pygame.sprite.spritecollide(self, wall_list, False)
      if collide:
          # yes
          print("collide")

    However, it seems that when the collision is triggered it continuously prints 'collide' over and over, when instead I want the creeps to simply not be able to walk through the object. Any help?

      def update(self, time_passed):
          """ Update the creep.

              time_passed: The time passed (in ms) since the previous update.
          """
          if self.state == Creep.ALIVE:
              # Maybe it's time to change the direction ?
              #
              self._change_direction(time_passed)

              # Make the creep point in the correct direction.
              # Since our direction vector is in screen coordinates
              # (i.e. right bottom is 1, 1), and rotate() rotates
              # counter-clockwise, the angle must be inverted to
              # work correctly.
              #
              self.image = pygame.transform.rotate(
                  self.base_image, -self.direction.angle)

              # Compute and apply the displacement to the position
              # vector. The displacement is a vector, having the angle
              # of self.direction (which is normalized to not affect
              # the magnitude of the displacement)
              #
              displacement = vec2d(
                  self.direction.x * self.speed * time_passed,
                  self.direction.y * self.speed * time_passed)
              self.pos += displacement

              # When the image is rotated, its size is changed.
              # We must take the size into account for detecting
              # collisions with the walls.
              #
              self.image_w, self.image_h = self.image.get_size()
              global bounds_rect
              bounds_rect = self.field.inflate(
                  -self.image_w, -self.image_h)

              if self.pos.x < bounds_rect.left:
                  self.pos.x = bounds_rect.left
                  self.direction.x *= -1
              elif self.pos.x > bounds_rect.right:
                  self.pos.x = bounds_rect.right
                  self.direction.x *= -1
              elif self.pos.y < bounds_rect.top:
                  self.pos.y = bounds_rect.top
                  self.direction.y *= -1
              elif self.pos.y > bounds_rect.bottom:
                  self.pos.y = bounds_rect.bottom
                  self.direction.y *= -1

              self.rect = bounds_rect
              collide = pygame.sprite.spritecollide(self, wall_list, False)
              if collide:
                  # yes
                  print("collide")

          elif self.state == Creep.EXPLODING:
              if self.explode_animation.active:
                  self.explode_animation.update(time_passed)
              else:
                  self.state = Creep.DEAD
                  self.kill()

          elif self.state == Creep.DEAD:
              pass

      #------------------ PRIVATE PARTS ------------------#

      # States the creep can be in.
      #
      # ALIVE: The creep is roaming around the screen
      # EXPLODING:
      #     The creep is now exploding, just a moment before dying.
      # DEAD: The creep is dead and inactive
      #
      (ALIVE, EXPLODING, DEAD) = range(3)

      _counter = 0

      def _change_direction(self, time_passed):
          """ Turn by 45 degrees in a random direction once per
              0.4 to 0.5 seconds.
          """
          self._counter += time_passed
          if self._counter > randint(400, 500):
              self.direction.rotate(45 * randint(-1, 1))
              self._counter = 0

      def _point_is_inside(self, point):
          """ Is the point (given as a vec2d) inside our creep's body?
          """
          img_point = point - vec2d(
              int(self.pos.x - self.image_w / 2),
              int(self.pos.y - self.image_h / 2))
          try:
              pix = self.image.get_at(img_point)
              return pix[3] > 0
          except IndexError:
              return False

      def _decrease_health(self, n):
          """ Decrease my health by n (or to 0, if it's currently
              less than n)
          """
          self.health = max(0, self.health - n)
          if self.health == 0:
              self._explode()

      def _explode(self):
          """ Starts the explosion animation that ends the Creep's life.
          """
          self.state = Creep.EXPLODING
          pos = (
              self.pos.x - self.explosion_images[0].get_width() / 2,
              self.pos.y - self.explosion_images[0].get_height() / 2)
          self.explode_animation = SimpleAnimation(
              self.screen, pos, self.explosion_images, 100, 300)

          global remainingCreeps
          remainingCreeps -= 1
          if remainingCreeps == 0:
              print("all dead")

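    For the "walk through the object" part, the usual pattern (sketched generically here in C++, not pygame-specific) is to test the would-be position before committing the move and keep the old position when it overlaps a wall, rather than only printing on collision:

      #include <cstdio>

      struct Rect { float x, y, w, h; };

      bool overlaps(const Rect& a, const Rect& b) {
          return a.x < b.x + b.w && b.x < a.x + a.w &&
                 a.y < b.y + b.h && b.y < a.y + a.h;
      }

      int main() {
          Rect wall{5, 0, 2, 10};
          Rect creep{3, 4, 1, 1};
          float vx = 1.5f, vy = 0.0f;

          Rect moved = creep;
          moved.x += vx;
          moved.y += vy;
          if (!overlaps(moved, wall)) {
              creep = moved;  // free to move
          }                   // else: blocked, keep the old position
          std::printf("creep at (%g, %g)\n", creep.x, creep.y);
          return 0;
      }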
  • tunnel effect cocos2d

    - by samfisher
    I am looking to create a tunnel effect similar to the reference videos in cocos2d (iOS). Could anyone suggest any pointers? (ref Video 1, ref Video 2) So far I have tried several ring-shaped sprites with decreasing scale, all positioned at the same centre point, with Z decreasing for each smaller sprite. I then animated them with CCScaleTo, scaling to 2.0 over the animation duration, but it does not come anywhere near the tunnel effect shown in the references. Thanks, sam

  • Custom Music in Skyrim's Creation Kit?

    - by CptSupermrkt
    Can you bring in external music such as MP3s? If so, how? I didn't see anything about this in the wiki Bethesda released. Also, how does this work with regard to the Steam Workshop? I don't imagine they would appreciate uploads of copyrighted content. I don't particularly care about making a public mod; I just want to screw around privately and create dungeons/towns using music from some of my favorite games. Thanks.

  • GLSL: is it possible to offset vertices based on heightmap colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32x32 grid and a corresponding heightmap. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, with white vertices being higher than black ones.

      // Vertex Shader Code
      #version 330

      uniform mat4 modelMatrix;
      uniform mat4 viewMatrix;
      uniform mat4 projectionMatrix;
      uniform sampler2D heightmap;

      layout (location=0) in vec4 vertexPos;
      layout (location=1) in vec4 vertexColour;
      layout (location=3) in vec2 vertexTextureCoord;
      layout (location=4) in float offset;

      out vec4 fragCol;
      out vec4 fragPos;
      out vec2 fragTex;

      void main()
      {
          // Retrieve the current texel's colour
          vec4 hmColour = texture(heightmap, vertexTextureCoord);

          // Offset the y position by the current texel's colour value
          vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

          // Final Position
          gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

          // Data sent to Fragment Shader.
          fragCol = vertexColour;
          fragPos = vertexPos;
          fragTex = vertexTextureCoord;
      }

    However, the code I have produced only creates a flat grid, with none of the Y vertices higher than any others.
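    Two host-side causes are worth ruling out (assumptions on my part; the shader itself looks reasonable): the heightmap sampler uniform must point at the texture unit the heightmap is actually bound to, and the texture must be mipmap-complete or use a non-mipmap min filter, because sampling an incomplete texture returns zero, which would produce exactly this flat grid. A sketch of the corresponding GL calls:

      #include <GL/glew.h>

      // Assumed names: 'program' is the linked shader, 'heightmapTex' the texture.
      void bindHeightmap(GLuint program, GLuint heightmapTex) {
          glUseProgram(program);

          // Bind the heightmap to texture unit 0...
          glActiveTexture(GL_TEXTURE0);
          glBindTexture(GL_TEXTURE_2D, heightmapTex);

          // ...and point the sampler uniform at unit 0.
          glUniform1i(glGetUniformLocation(program, "heightmap"), 0);

          // The default min filter expects mipmaps; without them the texture
          // is "incomplete" and samples as zero. GL_LINEAR avoids that.
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      }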

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi, I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers, which supports UIGestureRecognizer within a CCLayer in a cocos2d scene, like so:

      @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate>
      {
      }

    Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d:

      #import <Foundation/Foundation.h>
      #import <UIKit/UIGestureRecognizerSubclass.h>

      @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
      @optional
      - (void) rotation: (CGFloat) angle;
      - (void) finalAngle: (CGFloat) angle;
      @end

      @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer
      {
          CGPoint midPoint;
          CGFloat innerRadius;
          CGFloat outerRadius;
          CGFloat cumulatedAngle;
          id <OneFingerRotationGestureRecognizerDelegate> target;
      }

      - (id) initWithMidPoint: (CGPoint) midPoint
                  innerRadius: (CGFloat) innerRadius
                  outerRadius: (CGFloat) outerRadius
                       target: (id) target;

      - (void)reset;
      - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
      - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
      - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
      - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
      @end

      #include <math.h>
      #import "OneFingerRotationGestureRecognizer.h"

      @implementation OneFingerRotationGestureRecognizer

      // private helper functions
      CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
      CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                         CGPoint beginLineB, CGPoint endLineB);

      - (id) initWithMidPoint: (CGPoint) _midPoint
                  innerRadius: (CGFloat) _innerRadius
                  outerRadius: (CGFloat) _outerRadius
                       target: (id <OneFingerRotationGestureRecognizerDelegate>) _target
      {
          if ((self = [super initWithTarget: _target action: nil]))
          {
              midPoint = _midPoint;
              innerRadius = _innerRadius;
              outerRadius = _outerRadius;
              target = _target;
          }
          return self;
      }

      /** Calculates the distance between point1 and point2. */
      CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2)
      {
          CGFloat dx = point1.x - point2.x;
          CGFloat dy = point1.y - point2.y;
          return sqrt(dx*dx + dy*dy);
      }

      CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                         CGPoint beginLineB, CGPoint endLineB)
      {
          CGFloat a = endLineA.x - beginLineA.x;
          CGFloat b = endLineA.y - beginLineA.y;
          CGFloat c = endLineB.x - beginLineB.x;
          CGFloat d = endLineB.y - beginLineB.y;
          CGFloat atanA = atan2(a, b);
          CGFloat atanB = atan2(c, d);
          // convert radians to degrees
          return (atanA - atanB) * 180 / M_PI;
      }

      #pragma mark - UIGestureRecognizer implementation

      - (void)reset
      {
          [super reset];
          cumulatedAngle = 0;
      }

      - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
      {
          [super touchesBegan:touches withEvent:event];
          if ([touches count] != 1)
          {
              self.state = UIGestureRecognizerStateFailed;
              return;
          }
      }

      - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
      {
          [super touchesMoved:touches withEvent:event];
          if (self.state == UIGestureRecognizerStateFailed) return;

          CGPoint nowPoint  = [[touches anyObject] locationInView: self.view];
          CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view];

          // make sure the new point is within the area
          CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
          if (innerRadius <= distance && distance <= outerRadius)
          {
              // calculate rotation angle between two points
              CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);

              // fix value, if the 12 o'clock position is between prevPoint and nowPoint
              if (angle > 180)
              {
                  angle -= 360;
              }
              else if (angle < -180)
              {
                  angle += 360;
              }

              // sum up single steps
              cumulatedAngle += angle;

              // call delegate
              if ([target respondsToSelector: @selector(rotation:)])
              {
                  [target rotation:angle];
              }
          }
          else
          {
              // finger moved outside the area
              self.state = UIGestureRecognizerStateFailed;
          }
      }

      - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
      {
          [super touchesEnded:touches withEvent:event];
          if (self.state == UIGestureRecognizerStatePossible)
          {
              self.state = UIGestureRecognizerStateRecognized;
              if ([target respondsToSelector: @selector(finalAngle:)])
              {
                  [target finalAngle:cumulatedAngle];
              }
          }
          else
          {
              self.state = UIGestureRecognizerStateFailed;
          }
          cumulatedAngle = 0;
      }

      - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
      {
          [super touchesCancelled:touches withEvent:event];
          self.state = UIGestureRecognizerStateFailed;
          cumulatedAngle = 0;
      }

      @end

    Header file for the view controller:

      #import "OneFingerRotationGestureRecognizer.h"

      @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>

      @property (nonatomic, strong) IBOutlet UIImageView *image;
      @property (nonatomic, strong) IBOutlet UITextField *textDisplay;

      @end

    Then this is in the .m file:

      gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc]
                              initWithMidPoint: midPoint
                                   innerRadius: outRadius / 3
                                   outerRadius: outRadius
                                        target: self];
      [self.view addGestureRecognizer: gestureRecognizer];

    Now my question is: is it possible to add this custom gesture to the cocos2d project found on that GitHub page, and if so, what do I need to change in OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d? At the minute it is set up in a standard iOS project rather than a cocos2d project, and I do not know enough about UIViews and subclassing in Objective-C to get this working. It also seems to inherit from a UIView, where cocos2d uses CCLayer. Kind regards, Lewis.
    I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

  • Arcball 3D camera - how to convert from camera to object coordinates

    - by user38873
    I have checked multiple threads before posting, but I haven't been able to figure this one out. I have been following this tutorial, but I'm not using glm; I've been implementing everything myself up until now (lookAt etc.): http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Arcball

    I can rotate with a click and drag of the mouse, but when I rotate 90 degrees around Y and then move the mouse up or down, it rotates on the wrong axis. This problem is described in this part of the tutorial:

    "An extra trick is converting the rotation axis from camera coordinates to object coordinates. It's useful when the camera and object are placed differently. For instance, if you rotate the object by 90° on the Y axis ('turn its head' to the right), then perform a vertical move with your mouse, you make a rotation on the camera X axis, but it should become a rotation on the Z axis (plane barrel roll) for the object. By converting the axis in object coordinates, the rotation will respect that the user works in camera coordinates (WYSIWYG). To transform from camera to object coordinates, we take the inverse of the MV matrix (from the MVP matrix triplet)."

    What I have to do, according to the tutorial, is convert my axis_in_camera_coord to object coordinates; then the rotation is done correctly. But I'm confused about which matrix to use for that. The tutorial talks about converting the axis from camera to object coordinates using the inverse of the MV matrix, and then shows these lines of code, which I haven't been able to understand:

      glm::mat3 camera2object = glm::inverse(glm::mat3(transforms[MODE_CAMERA]) * glm::mat3(mesh.object2world));
      glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;

    So what do I apply to my calculated axis? The inverse of what, the model-view? My question is: how do you transform a camera-space axis into an object-space axis? Do I apply the inverse of the lookAt matrix? My code:

      if (cur_mx != last_mx || cur_my != last_my)
      {
          va = get_arcball_vector(last_mx, last_my);
          vb = get_arcball_vector(cur_mx, cur_my);
          angle = acos(min(1.0f, dotProduct(va, vb))) * 20;
          axis_in_camera_coord = crossProduct(va, vb);
          axis.x = axis_in_camera_coord[0];
          axis.y = axis_in_camera_coord[1];
          axis.z = axis_in_camera_coord[2];
          axis.w = 1.0f;
          last_mx = cur_mx;
          last_my = cur_my;
      }

      Quaternion q = qFromAngleAxis(angle, axis);
      Matrix m;
      qGLMatrix(q, m);
      vi = mMultiply(m, vi);
      up = mMultiply(m, up);
      ViewMatrix = ogLookAt(vi.x, vi.y, vi.z, 0, 0, 0, up.x, up.y, up.z);
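    A sketch of the conversion those two lines perform, written with GLM (and assuming "view" and "model" are the matrices that take object space through world space into camera space; for the code above, the view would be the lookAt matrix and the model the object's own transform):

      #include <glm/glm.hpp>

      // Convert an axis expressed in camera space into object space.
      glm::vec3 cameraAxisToObject(const glm::mat4& view,
                                   const glm::mat4& model,
                                   const glm::vec3& axisInCamera) {
          // The upper-left 3x3 carries the rotation; its inverse undoes the
          // object -> camera mapping for direction vectors.
          glm::mat3 camera2object = glm::inverse(glm::mat3(view) * glm::mat3(model));
          return camera2object * axisInCamera;
      }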

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE: code updated below, but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM, I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter which direction it is currently facing (i.e. if it's rotated 30 degrees to the right, moving forwards should travel along that 30-degree heading on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

      glm::vec3 vel; // velocity vector

      void renderMovingCube()
      {
          glUseProgram(movingCubeShader.handle());
          GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
          glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

          glm::mat4 viewMatrixMovingCube;
          viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

          vel.x = cos(rotX);
          vel.y = sin(rotX);
          vel *= moveCube;

          // move cube
          ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos * vel);

          // bring ground and cube to bottom of screen
          ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
          ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn

          glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader

          movingCube.render(); // draw
          glUseProgram(0);
      }

    Keyboard input:

      void keyboard()
      {
          char BACKWARD  = keys['S'];
          char FORWARD   = keys['W'];
          char ROT_LEFT  = keys['A'];
          char ROT_RIGHT = keys['D'];

          if (FORWARD) // W - move forwards
          {
              globalPos += vel;
              //globalPos.z -= moveCube;
              BACKWARD = false;
          }
          if (BACKWARD) // S - move backwards
          {
              globalPos.z += moveCube;
              FORWARD = false;
          }
          if (ROT_LEFT) // A - turn left
          {
              rotX += 0.01f;
              ROT_LEFT = false;
          }
          if (ROT_RIGHT) // D - turn right
          {
              rotX -= 0.01f;
              ROT_RIGHT = false;
          }
      }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which works) and then move forwards in that direction.
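    For reference, the standard mapping from a yaw angle to a ground-plane heading (a sketch using GLM, assuming the usual OpenGL convention that the cube faces -Z at zero rotation; note that the cos/sin in renderMovingCube go into vel.x and vel.y, which mixes the vertical axis into the heading):

      #include <glm/glm.hpp>
      #include <cmath>
      #include <cstdio>

      int main() {
          float rotX = glm::radians(30.0f);  // yaw around the Y (up) axis
          float moveCube = 0.1f;             // movement speed

          // The heading lives in the XZ plane; Y stays untouched.
          glm::vec3 forward(-std::sin(rotX), 0.0f, -std::cos(rotX));
          glm::vec3 globalPos(0.0f);
          globalPos += forward * moveCube;   // move along the current facing

          std::printf("pos: (%f, %f, %f)\n", globalPos.x, globalPos.y, globalPos.z);
          return 0;
      }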

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light-indexed deferred rendering here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand, and it is in fact one of the most interesting ones, as it explains how to implement transparency with this approach:

    "Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a 'shadow volume like' technique of rendering back faces – incrementing stencil where depth is greater than – then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment."

    Can anyone explain how exactly you need to render only the front faces? And another question: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at a given pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?
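    My reading of the stencil dance described in the quote, written as a desktop-GL sketch (not code from the article): pass 1 draws the light volume's back faces and marks pixels where the scene surface lies in front of the volume's far side; pass 2 draws the front faces and keeps only pixels where the near side is in front of the scene surface and the stencil was marked, i.e. surfaces inside the volume:

      #include <GL/glew.h>

      // Stand-in for the caller's actual light-volume draw call.
      void drawLightVolume() {}

      // Assumes the opaque scene's depth buffer is bound and depth func is GL_LESS.
      void markLightVolume() {
          glEnable(GL_STENCIL_TEST);
          glEnable(GL_CULL_FACE);
          glDepthMask(GL_FALSE);

          // Pass 1: back faces only. The depth test fails where the scene
          // surface is in front of the volume's far side; increment there.
          glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
          glClear(GL_STENCIL_BUFFER_BIT);
          glCullFace(GL_FRONT);
          glStencilFunc(GL_ALWAYS, 0, 0xFF);
          glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);  // increment on depth fail
          drawLightVolume();

          // Pass 2: front faces only. The depth test passes where the near
          // side is in front of the scene surface; require a marked stencil.
          glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
          glCullFace(GL_BACK);
          glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
          glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
          drawLightVolume();  // this pass writes the light index

          glDepthMask(GL_TRUE);
          glDisable(GL_STENCIL_TEST);
      }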

  • OpenGL stars are not rendering

    - by Darestium
    I'm doing NeHe's OpenGL lesson 9, using SFML for windowing. The strange thing is that no stars are rendering.

      #include <SFML/System.hpp>
      #include <SFML/Window.hpp>
      #include <SFML/Graphics.hpp>
      #include <iostream>

      void processEvents(sf::Window *app);
      void processInput(sf::Window *app);
      void renderGlScene(sf::Window *app);
      void init();
      int loadResources();

      const int NUM_OF_STARS = 50;

      float triRot = 0.0f;
      float quadRot = 0.0f;
      bool twinkle = false;
      bool tKey = false;
      float zoom = 15.0f;
      float tilt = 90.0f;
      float spin = 0.0f;
      unsigned int loop;
      unsigned int texture_handle[1];

      typedef struct
      {
          int r, g, b;
          float distance;
          float angle;
      } stars;

      stars star[NUM_OF_STARS];

      int main()
      {
          sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 9");
          app.UseVerticalSync(false);

          init();
          if (loadResources() == -1)
          {
              return EXIT_FAILURE;
          }

          while (app.IsOpened())
          {
              processEvents(&app);
              processInput(&app);
              renderGlScene(&app);
              app.Display();
          }

          return EXIT_SUCCESS;
      }

      int loadResources()
      {
          sf::Image img_data;

          // Load Texture
          if (!img_data.LoadFromFile("data/images/star.bmp"))
          {
              std::cout << "Could not load data/images/star.bmp";
              return -1;
          }

          // Generate 1 texture
          glGenTextures(1, &texture_handle[0]);

          // Linear filtering
          glBindTexture(GL_TEXTURE_2D, texture_handle[0]);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img_data.GetWidth(), img_data.GetHeight(),
                       0, GL_RGBA, GL_UNSIGNED_BYTE, img_data.GetPixelsPtr());

          return 0;
      }

      void processInput(sf::Window *app)
      {
          const sf::Input& input = app->GetInput();

          if (input.IsKeyDown(sf::Key::T) && !tKey)
          {
              tKey = true;
              twinkle = !twinkle;
          }
          if (!input.IsKeyDown(sf::Key::T))
          {
              tKey = false;
          }
          if (input.IsKeyDown(sf::Key::Up))
          {
              tilt -= 0.05f;
          }
          if (input.IsKeyDown(sf::Key::Down))
          {
              tilt += 0.05f;
          }
          if (input.IsKeyDown(sf::Key::PageUp))
          {
              zoom -= 0.02f;
          }
          if (input.IsKeyDown(sf::Key::Up))
          {
              zoom += 0.02f;
          }
      }

      void init()
      {
          glClearDepth(1.f);
          glClearColor(0.f, 0.f, 0.f, 0.f);

          // Enable texturing
          glEnable(GL_TEXTURE_2D);
          //glDepthMask(GL_TRUE);

          // Setup a perspective projection
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          gluPerspective(45.f, 1.f, 1.f, 500.f);

          glShadeModel(GL_SMOOTH);
          glBlendFunc(GL_SRC_ALPHA, GL_ONE);
          glEnable(GL_BLEND);

          for (loop = 0; loop < NUM_OF_STARS; loop++)
          {
              star[loop].distance = (float)loop / NUM_OF_STARS * 5.0f; // Calculate distance from the centre

              // Give stars random rgb value
              star[loop].r = rand() % 256;
              star[loop].g = rand() % 256;
              star[loop].b = rand() % 256;
          }
      }

      void processEvents(sf::Window *app)
      {
          sf::Event event;
          while (app->GetEvent(event))
          {
              if (event.Type == sf::Event::Closed)
              {
                  app->Close();
              }
              if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape)
              {
                  app->Close();
              }
          }
      }

      void renderGlScene(sf::Window *app)
      {
          app->SetActive();

          // Clear color depth buffer
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

          // Apply some transformations
          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();

          // Select texture
          glBindTexture(GL_TEXTURE_2D, texture_handle[0]);

          for (loop = 0; loop < NUM_OF_STARS; loop++)
          {
              glLoadIdentity();                                // Reset The View Before We Draw Each Star
              glTranslatef(0.0f, 0.0f, zoom);                  // Zoom Into The Screen (Using The Value In 'zoom')
              glRotatef(tilt, 1.0f, 0.0f, 0.0f);               // Tilt The View (Using The Value In 'tilt')
              glRotatef(star[loop].angle, 0.0f, 1.0f, 0.0f);   // Rotate To The Current Stars Angle
              glTranslatef(star[loop].distance, 0.0f, 0.0f);   // Move Forward On The X Plane
              glRotatef(-star[loop].angle, 0.0f, 1.0f, 0.0f);  // Cancel The Current Stars Angle
              glRotatef(-tilt, 1.0f, 0.0f, 0.0f);              // Cancel The Screen Tilt

              if (twinkle)
              {
                  glColor4ub(star[(NUM_OF_STARS - loop) - 1].r,
                             star[(NUM_OF_STARS - loop) - 1].g,
                             star[(NUM_OF_STARS - loop) - 1].b, 255);
                  glBegin(GL_QUADS); // Begin Drawing The Textured Quad
                      glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
                      glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
                      glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
                      glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
                  glEnd(); // Done Drawing The Textured Quad
              }

              glRotatef(spin, 0.0f, 0.0f, 1.0f); // Rotate The Star On The Z Axis

              // Assign A Color Using Bytes
              glColor4ub(star[loop].r, star[loop].g, star[loop].b, 255);
              glBegin(GL_QUADS); // Begin Drawing The Textured Quad
                  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
                  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
                  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
                  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
              glEnd(); // Done Drawing The Textured Quad

              spin += 0.01f;                                  // Used To Spin The Stars
              star[loop].angle += (float)loop / NUM_OF_STARS; // Changes The Angle Of A Star
              star[loop].distance -= 0.01f;                   // Changes The Distance Of A Star

              if (star[loop].distance < 0.0f)
              {
                  star[loop].distance += 5.0f;  // Move The Star 5 Units From The Center
                  star[loop].r = rand() % 256;  // Give It A New Red Value
                  star[loop].g = rand() % 256;  // Give It A New Green Value
                  star[loop].b = rand() % 256;  // Give It A New Blue Value
              }
          }
      }

    I've looked over the code at least 10 times now and I can't figure out the problem. Any help would be much appreciated.
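    Comparing against NeHe's original lesson 9, one detail stands out (an observation to verify, not a guaranteed fix): the original initialises the zoom to a negative value, so the glTranslatef call pushes the stars into the screen along -Z. With a positive zoom of 15.0f the stars end up behind the camera, and nothing is drawn:

      // NeHe lesson 9 starts the camera zoomed out along -Z; a positive
      // value places every star behind the viewer, so nothing renders.
      float zoom = -15.0f;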

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop from three 1920x1080 monitors, so the desktop resolution is 5760x1080. I get a 0x80070057 error while calling CreateRenderTarget to create the RT surface:

      IDirect3DSurface9* _render_surface;
      HRESULT hr = _device->CreateRenderTarget(
          _desktop_width * 2, _desktop_height + 1,
          D3DFMT_A8R8G8B8, D3DMULTISAMPLE_NONE, 0, TRUE,
          &_render_surface, NULL);

    It works fine with a desktop resolution of 1024x768 (total resolution 3072x768). According to http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx, "If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY." There is no description of 0x80070057, which decodes as:

      HRESULT: 0x80070057 (2147942487)
      Name: E_INVALIDARG
      Description: An invalid parameter was passed to the returning function

    Can somebody please help me?
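    One parameter worth checking (an assumption, not a confirmed diagnosis): the requested width here is 5760 * 2 = 11520 pixels, which exceeds the maximum surface size many D3D9 GPUs report, and exceeding the cap yields E_INVALIDARG. The device caps state the limit:

      #include <d3d9.h>
      #include <cstdio>

      // Print the largest surface dimensions this device supports.
      void printMaxSurfaceSize(IDirect3DDevice9* device) {
          D3DCAPS9 caps;
          if (SUCCEEDED(device->GetDeviceCaps(&caps))) {
              std::printf("max texture size: %lu x %lu\n",
                          caps.MaxTextureWidth, caps.MaxTextureHeight);
          }
      }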

  • How can I view an R32G32B32 texture?

    - by bobobobo
    I have a texture with R32G32B32 floats. I create this texture in-program on D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I need to see the texture data for debugging purposes, but it will not save to anything but DDS, showing the error in the debug output: "Can't find matching WIC format, please save this file to a DDS". So I write it to DDS, but now I can't open that file either! The DirectX Texture Tool says "An error occurred trying to open that file". I know the texture is working because I can read it on the GPU and the colors seem correct. How can I view an R32G32B32 texture in an image viewer?
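    If no viewer cooperates, one fallback (a sketch assuming the float pixels have already been copied to a CPU buffer via a staging texture) is to clamp the data down to an 8-bit binary PPM, which almost any image viewer can open:

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      // Write RGB float data (w*h*3 values, assumed roughly in [0, 1])
      // as a binary PPM for viewing.
      bool writePpm(const char* path, const float* rgb, int w, int h) {
          std::FILE* f = std::fopen(path, "wb");
          if (!f) return false;
          std::fprintf(f, "P6\n%d %d\n255\n", w, h);
          std::vector<unsigned char> row(w * 3);
          for (int y = 0; y < h; ++y) {
              for (int i = 0; i < w * 3; ++i) {
                  float v = std::clamp(rgb[y * w * 3 + i], 0.0f, 1.0f);
                  row[i] = static_cast<unsigned char>(v * 255.0f + 0.5f);
              }
              std::fwrite(row.data(), 1, row.size(), f);
          }
          std::fclose(f);
          return true;
      }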

  • Loading Wavefront data into a VAO and rendering it

    - by Jordan LaPrise
    I have successfully loaded a triangulated Wavefront (.obj) file into six vectors. The first three vectors contain the locations for the vertices, the UV coords, and the normals; the last three hold the indices stored for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make the first three vectors the same size. The only way I could think of doing this is to build three new vertex arrays of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated. Thanks in advance, Jordan
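    The standard answer (a sketch of the common technique, not Jordan's code) is to build one vertex per unique position/uv/normal triple and a single index list referring to those vertices; a map keeps duplicate triples shared, so most of the benefit of indexing survives:

      #include <array>
      #include <cstdint>
      #include <map>
      #include <vector>

      struct Vertex { float px, py, pz, u, v, nx, ny, nz; };

      // Build a single interleaved vertex buffer and one index buffer
      // from per-attribute OBJ indices ({v, vt, vn} per face corner).
      void buildIndexed(const std::vector<std::array<float, 3>>& positions,
                        const std::vector<std::array<float, 2>>& uvs,
                        const std::vector<std::array<float, 3>>& normals,
                        const std::vector<std::array<int, 3>>& faceTriples,
                        std::vector<Vertex>& outVerts,
                        std::vector<std::uint32_t>& outIndices) {
          std::map<std::array<int, 3>, std::uint32_t> seen;
          for (const auto& t : faceTriples) {
              auto it = seen.find(t);
              if (it == seen.end()) {
                  // First time this exact triple appears: emit a new vertex.
                  const auto& p = positions[t[0]];
                  const auto& uv = uvs[t[1]];
                  const auto& n = normals[t[2]];
                  outVerts.push_back({p[0], p[1], p[2], uv[0], uv[1], n[0], n[1], n[2]});
                  it = seen.emplace(t, static_cast<std::uint32_t>(outVerts.size() - 1)).first;
              }
              outIndices.push_back(it->second);  // reuse shared vertices
          }
      }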

  • How many vertices are needed to draw reasonably good-looking terrain?

    - by bobbaluba
    I have some pretty expensive code in my terrain vertex shader, and I am trying to figure out if it will still be fast enough. I haven't yet developed a level-of-detail system for my terrain rendering, but I can easily benchmark my code by just drawing mock triangles. My problem is: how do I know how many vertices to test with? Are there, for example, rendering engines that will tell me how many terrain vertices are currently on screen? Or maybe it is possible to create a formula that gives an estimate based on screen resolution?
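    A back-of-the-envelope estimate (my own arithmetic, with an assumed target triangle density, not an engine-reported figure): if triangles should average around p pixels on screen, a w x h frame holds roughly w*h/p triangles, and a regular indexed grid has about half as many vertices as triangles:

      #include <cstdio>

      int main() {
          int w = 1920, h = 1080;
          int pixelsPerTriangle = 64;  // assumed target density; tune to taste

          int triangles = (w * h) / pixelsPerTriangle;  // ~32k at 1080p
          int vertices  = triangles / 2;                // grid: ~2 triangles per vertex

          std::printf("~%d triangles, ~%d vertices on screen\n", triangles, vertices);
          return 0;
      }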
