Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 427/1022

  • How do I draw anti-aliased holes in a bitmap

    - by gyozo kudor
    I have an artillery game (hobby-learning project) and when the projectile hits it leaves a hole in the ground. I want this hole to have anti-aliased edges. I'm using System.Drawing for this. I've tried with clipping paths, and drawing with a transparent color using gfx.CompositingMode = CompositingMode.SourceCopy, but it gives me the same result. If I draw a circle with a solid color it works fine, but I need a hole, a circle with 0 alpha values. I have enabled these but they work only with solid colors: gfx.CompositingQuality = CompositingQuality.HighQuality; gfx.InterpolationMode = InterpolationMode.HighQualityBicubic; gfx.SmoothingMode = SmoothingMode.AntiAlias; In the two pictures consider black as being transparent. This is what I have (zoomed in): And what I need is something like this (made with Photoshop): This will be just a visual effect; in code, for collision detection, I still treat everything with alpha 128 as solid. Edit: I'm using OpenTK for this game, but for this question I don't think it really matters; it's probably GDI+ related.
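    One way this is often handled (a hedged C# sketch, not the asker's code; HolePuncher and PunchHole are made-up names): render the hole as an anti-aliased white circle into a separate mask bitmap, then scale the terrain's alpha by the inverse of that mask, so the circle's anti-aliased edge becomes a soft alpha falloff instead of a hard cut.

        using System;
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class HolePuncher
        {
            // Punch a soft-edged transparent hole into a terrain bitmap.
            // 'terrain' is assumed to be PixelFormat.Format32bppArgb.
            public static void PunchHole(Bitmap terrain, float cx, float cy, float radius)
            {
                // 1) Render the hole as an anti-aliased white circle on a black mask.
                using (var mask = new Bitmap(terrain.Width, terrain.Height, PixelFormat.Format32bppArgb))
                {
                    using (var g = Graphics.FromImage(mask))
                    {
                        g.Clear(Color.Black);
                        g.SmoothingMode = SmoothingMode.AntiAlias;
                        g.FillEllipse(Brushes.White, cx - radius, cy - radius, radius * 2f, radius * 2f);
                    }

                    // 2) Multiply the terrain's alpha by (1 - mask): fully white mask pixels
                    //    become fully transparent, AA edge pixels become a soft falloff.
                    var rect = new Rectangle(0, 0, terrain.Width, terrain.Height);
                    BitmapData td = terrain.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
                    BitmapData md = mask.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
                    int bytes = td.Stride * td.Height;
                    byte[] t = new byte[bytes];
                    byte[] m = new byte[bytes];
                    Marshal.Copy(td.Scan0, t, 0, bytes);
                    Marshal.Copy(md.Scan0, m, 0, bytes);

                    for (int i = 0; i < bytes; i += 4)
                    {
                        int hole = m[i];                                       // mask intensity (blue byte), 0..255
                        t[i + 3] = (byte)(t[i + 3] * (255 - hole) / 255);      // scale terrain alpha
                    }

                    Marshal.Copy(t, 0, td.Scan0, bytes);
                    terrain.UnlockBits(td);
                    mask.UnlockBits(md);
                }
            }
        }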

    Read the article

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing an little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've though about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong, surely fancy 3D games that run fine on my machine are more intensive than this. My question is how can I improve the FPS and am I doing something wrong? Or does it actually take that much power to render those polygons? -- Here is the source code for the render method in my game state. It iterates through a 2d array or heights and draws polygons based on the height. public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException { gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2); gfx.scale(d, d); for (int y = 0; y < placeholder.length; y++) {// x & y are isometric // diag for (int x = 0; x < placeholder[0].length; x++) { Polygon poly; int hor = TestState.TILE_WIDTH * (x - y);// hor and ver are orthagonal int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x];//points to go off of int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1]; int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1]; int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x]; if (placeholder[y][x] == null) { poly = new Polygon();//Create actual surface polygon poly.addPoint(-TestState.TILE_WIDTH + hor, W); poly.addPoint(hor, S + TestState.TILE_HEIGHT); poly.addPoint(TestState.TILE_WIDTH + hor, E); poly.addPoint(hor, N - TestState.TILE_HEIGHT); float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f; placeholder[y][x] = new Tile(poly, new Color(z, z, z)); //ShapeRenderer.fill(placeholder[y][x]); } if (true) {//ONLY draw tile if it's on screen gfx.setColor(placeholder[y][x].getColor()); ShapeRenderer.fill(placeholder[y][x]); //gfx.fill(placeholder[y][x]); //placeholder[y][x]. //DRAW EDGES if (y + 1 == placeholder.length) {//draw South foundation edges gfx.setColor(Color.gray); Polygon found = new Polygon(); found.addPoint(-TestState.TILE_WIDTH + hor, W); found.addPoint(hor, S + TestState.TILE_HEIGHT); found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1)); found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y)); gfx.fill(found); } if (x + 1 == placeholder[0].length) {//north gfx.setColor(Color.darkGray); Polygon found = new Polygon(); found.addPoint(TestState.TILE_WIDTH + hor, E); found.addPoint(hor, S + TestState.TILE_HEIGHT); found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1)); found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y)); gfx.fill(found); }//*/ } } } }

    Read the article

  • Dynamic 2D texture creation in Unity from script

    - by gman
    I'm coming from HTML5 and I'm used to having the 2D Canvas API I can use to generate textures. Is there anything similar in Unity3D? For example, let's say at runtime I want to render a circle, put 3 initials in the middle and then take the result and put that in a texture. In HTML5 I'd do this var initials = "GAT"; var textureWidth = 256; var textureHeight = 256; // create a canvas var c = document.createElement("canvas"); c.width = textureWidth; c.height = textureHeight; var ctx = c.getContext("2d"); // Set the origin to the center of the canvas ctx.translate(textureWidth / 2, textureHeight / 2); // Draw a yellow circle ctx.fillStyle = "rgb(255,255,0)"; // yellow ctx.beginPath(); var radius = (Math.min(textureWidth, textureHeight) - 2) / 2; ctx.arc(0, 0, radius, 0, Math.PI * 2, true); ctx.fill(); // Draw some black initials in the middle. ctx.fillStyle = "rgb(0,0,0)"; ctx.font = "60pt Arial"; ctx.textAlign = "center"; ctx.fillText(initials, 0, 30); // now I can make a texture from that var tex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, c); gl.generateMipmap(gl.TEXTURE_2D); I know I can edit individual pixels in a Unity texture but is there any higher level API for drawing to texture in unity?
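    For the shape part, a plain Texture2D filled from script is usually enough; the text part has no fillText equivalent, and a common workaround is rendering text with a dedicated camera into a RenderTexture (not shown here). A hedged C# sketch of building the circle texture (TextureBuilder and MakeCircle are made-up names):

        using UnityEngine;

        public static class TextureBuilder
        {
            // Build a size x size texture containing a filled circle from script.
            public static Texture2D MakeCircle(int size, Color circleColor)
            {
                var tex = new Texture2D(size, size, TextureFormat.ARGB32, false);
                var pixels = new Color[size * size];
                float radius = (size - 2) * 0.5f;
                var center = new Vector2(size * 0.5f, size * 0.5f);

                for (int y = 0; y < size; y++)
                {
                    for (int x = 0; x < size; x++)
                    {
                        bool inside = (new Vector2(x + 0.5f, y + 0.5f) - center).magnitude <= radius;
                        pixels[y * size + x] = inside ? circleColor : Color.clear;
                    }
                }

                tex.SetPixels(pixels);
                tex.Apply();   // uploads the pixel data to the GPU
                return tex;
            }
        }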

    Read the article

  • Pygame: Save a list of objects/classes/surfaces

    - by Sam Tubb
    I am working on a game, in which you can create mazes. You place blocks on a 16x16 grid, while choosing from a variety of block to make the level with. Whenever you create a block, it adds this class: class Block(object): def __init__(self,x,y,spr): self.x=x self.y=y self.sprite=spr self.rect=self.sprite.get_rect(x=self.x,y=self.y) to a list called instances. I tried shelving it to a .bin file, but it returns some error dealing with surfaces. How can I go about saving and loading levels? Any help is appreciated! :) Here is the whole code for reference: import pygame from pygame.locals import * #initstuff pygame.init() screen=pygame.display.set_mode((640,480)) pygame.display.set_caption('PiMaze') instances=[] #loadsprites menuspr=pygame.image.load('images/menu.png').convert() b1spr=pygame.image.load('images/b1.png').convert() b2spr=pygame.image.load('images/b2.png').convert() currentbspr=b1spr curspr=pygame.image.load('images/curs.png').convert() curspr.set_colorkey((0,255,0)) #menu menuspr.set_alpha(185) menurect=menuspr.get_rect(x=-260,y=4) class MenuItem(object): def __init__(self,pos,spr): self.x=pos[0] self.y=pos[1] self.sprite=spr self.pos=(self.x,self.y) self.rect=self.sprite.get_rect(x=self.x,y=self.y) class Block(object): def __init__(self,x,y,spr): self.x=x self.y=y self.sprite=spr self.rect=self.sprite.get_rect(x=self.x,y=self.y) while True: #menu items b1menu=b1spr.get_rect(x=menurect.left+32,y=48) b2menu=b2spr.get_rect(x=menurect.left+64,y=48) menuitems=[MenuItem(b1menu,b1spr),MenuItem(b2menu,b2spr)] screen.fill((20,30,85)) mse=pygame.mouse.get_pos() key=pygame.key.get_pressed() placepos=((mse[0]/16)*16,(mse[1]/16)*16) if key[K_q]: if mse[0]<260: if menurect.right<255: menurect.right+=1 else: if menurect.left>-260: menurect.left-=1 else: if menurect.left>-260: menurect.left-=1 for e in pygame.event.get(): if e.type==QUIT: exit() if menurect.right<100: if e.type==MOUSEBUTTONUP: if e.button==1: to_remove = [i for i in instances if i.rect.collidepoint(placepos)] for i in to_remove: instances.remove(i) if not to_remove: instances.append(Block(placepos[0],placepos[1],currentbspr)) for i in instances: screen.blit(i.sprite,i.rect) if not key[K_q]: screen.blit(curspr,placepos) screen.blit(menuspr,menurect) for item in menuitems: screen.blit(item.sprite,item.pos) if item.rect.collidepoint(mse): if pygame.mouse.get_pressed()==(1,0,0): currentbspr=item.sprite pygame.draw.rect(screen, ((255,0,0)), item, 1) pygame.display.flip()

    Read the article

  • Skyrim Creation Kit with Xbox 360

    - by funseiki
    I posted this on stackoverflow but was advised to post here (here is a link to the stackoverflow question). I'm hoping for constructive feedback on its plausibility. Update on progress: It looks like there are ways to stuff files back onto the console (horizon, modio, xplorer360, etc) and they do require some form of signing. As of now, though, I've had no luck. I was hoping I could get away with just placing the ".esp" into the directory containing marketplace downloads for Skyrim, along with the signed ".bsa" file (basically a zipped up file containing any extra content the .esp will need to refer that doesn't exist in the basic game). This doesn't work, at least not in the ways I've tried, so next I'm going to try install the entire game to my flash drive (if possible) and attempt to traverse through the game's directory (this is probably unlikely). If anyone else has suggestions or luck or wants more detail on my failures comment/answer away. Here is the question: I'm thinking about buying the PC version of Skyrim to get the Creation Kit (I already own a copy for the Xbox). I have read the faq and scoured plenty of forums to see if there was some way to mod Skyrim for a console (Xbox 360, in particular), but they are generally coming up negative. I realize the CreationKit is on the PC, but I was wondering if there was a way to set up the '.esp' (hopefully I'm interpreting this correctly) files to be placed on the Xbox 360 file system in a similar manner to how game add-ons are downloaded from the Xbox Live Marketplace. I believe it is possible to transfer saves between the console and the PC (e.g. google: 'skyrim mod xbox360'), but these are referencing items that already exist in the game (e.g. a console command for maximum carry weight does not require reference to new animations or models). It would probably be easier if one could navigate through the xbox's file system to see where the games' files are placed, but with the current setup, the file system is abstracted away. Any help or insight on the matter would be much appreciated. I would love to work on a project that would make it possible to let console gamers experience and enjoy all the great mods available to the PC community.

    Read the article

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically with a destination rectangle of say: (left, top, right, bottom) (300, 300, 310, 320) The bitmap is wider than it is tall (20 x 10), but when I draw it vertically, it will be appear taller than it is wide (10 x 20). I know that I can use a rotation matrix like so: m_pRenderTarget->SetTransform( D2D1::Matrix3x2F::Rotation( 90.0f, D2D1::Point2F(<center of shape>)) ); But when I use this method to rotate my shape, the destination rectangle is still wider than it is tall. Maybe it would look something like this: (left, top, right, bottom) (280, 290, 300, 300) The destination rectangle is 20 x 10 but the bitmap appears on the screen as 10 x 20. I can't look at the destination rectangle in the debugger and compare it to: (left, top, right, bottom) (300, 300, 310, 320) Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation?" In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location:" (left, top, right, bottom) (300, 300, 310, 320) If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
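    One way to reason about it, worked through with the numbers from the question (a sketch that uses System.Numerics for the matrix math; it is not the Direct2D API itself): pick the center of the rectangle you want on screen, build an unrotated destination rect with the bitmap's own width and height around that center, and rotate about that same center.

        using System;
        using System.Numerics;

        // The rect we want to see on screen: (300,300)-(310,320), i.e. 10 wide, 20 tall.
        float screenLeft = 300f, screenTop = 300f, screenRight = 310f, screenBottom = 320f;
        var center = new Vector2((screenLeft + screenRight) / 2f,    // (305, 310)
                                 (screenTop + screenBottom) / 2f);

        // The bitmap is 20 x 10, so the *pre-rotation* destination rect keeps those
        // dimensions, centered on the same point: (295, 305)-(315, 315).
        float bmpW = 20f, bmpH = 10f;
        float destLeft = center.X - bmpW / 2f, destTop = center.Y - bmpH / 2f;
        float destRight = center.X + bmpW / 2f, destBottom = center.Y + bmpH / 2f;

        // Rotating that rect 90 degrees about 'center' maps it onto the 10 x 20 screen
        // rect; in Direct2D this corresponds to SetTransform(Matrix3x2F::Rotation(90.0f, center))
        // before drawing into destRect. (Direct2D takes degrees, CreateRotation takes radians.)
        var rotation = Matrix3x2.CreateRotation((float)(Math.PI / 2.0), center);

        // Sanity check: one corner of the pre-rotation rect...
        var corner = Vector2.Transform(new Vector2(destLeft, destTop), rotation);
        // ...lands at (310, 300), which is a corner of the desired on-screen rect.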

    Read the article

  • How exactly does XNA's SpriteBatch work?

    - by David Gouveia
    To be more precise, if I needed to recreate this functionality from scratch in another API (e.g. in OpenGL) what would it need to be capable of doing? I do have a general idea of some of the steps, such as how it prepares an orthographic projection matrix and creates a quad for each draw call. I'm not too familiar, however, with the batching process itself. Are all quads stored in the same vertex buffer? Does it need an index buffer? How are different textures handled? If possible I'd be grateful if you could guide me through the process from when SpriteBatch.Begin() is called until SpriteBatch.End(), at least when using the default Deferred mode.
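    For what it's worth, here is a conceptual sketch (C#, not XNA's actual source) of the shape such a batcher usually takes: Draw() only records sprites, and End() walks them in submission order, filling a dynamic vertex buffer and issuing one draw call per run of sprites that share a texture; a single static index buffer with the 0,1,2 / 0,2,3 pattern per quad can be reused for every batch. IDrawBackend stands in for whatever actually talks to the GPU.

        using System.Collections.Generic;

        public struct Sprite { public object Texture; public float X, Y, W, H; }

        public interface IDrawBackend
        {
            void SetOrthographicProjection(int screenW, int screenH);
            void UploadQuads(IReadOnlyList<Sprite> sprites, int start, int count); // fills a dynamic vertex buffer
            void DrawQuads(object texture, int count);                             // one draw call per texture run
        }

        public class SpriteBatcher
        {
            readonly List<Sprite> sprites = new List<Sprite>();
            readonly IDrawBackend backend;

            public SpriteBatcher(IDrawBackend backend) { this.backend = backend; }

            public void Begin() => sprites.Clear();

            // Draw() only records the sprite; nothing is sent to the GPU yet.
            public void Draw(object texture, float x, float y, float w, float h)
                => sprites.Add(new Sprite { Texture = texture, X = x, Y = y, W = w, H = h });

            // End() walks the recorded list in submission order (like Deferred mode)
            // and flushes a batch every time the texture changes.
            public void End(int screenW, int screenH)
            {
                backend.SetOrthographicProjection(screenW, screenH);
                int start = 0;
                while (start < sprites.Count)
                {
                    int end = start + 1;
                    while (end < sprites.Count && ReferenceEquals(sprites[end].Texture, sprites[start].Texture))
                        end++;

                    backend.UploadQuads(sprites, start, end - start);  // 4 vertices per quad
                    backend.DrawQuads(sprites[start].Texture, end - start);
                    start = end;
                }
            }
        }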

    Read the article

  • How can I test if a point lies between two parallel lines?

    - by Harold
    In the game I'm designing there is a blast that shoots out from an origin point towards the direction of the mouse. The width of this blast is always going to be the same. Along the bottom of the screen (what's currently) squares move about which should be affected by the blast that the player controls. Currently I am trying to work out a way to discover if the corners of these squares are within the blast's two bounding lines. I thought the best way to do this would be to rotate the corners of the square around an origin point as if the blast were completely horizontal and see if the Y values of the corners were less than or equal to the width of the blast which would mean that they lie within the affected region, but I can't work out
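    For what it's worth, the corner test can be done without rotating anything: project the corner onto the blast direction and onto its perpendicular, and compare against half the blast width. A C# sketch (PointInBlast and halfWidth are made-up names):

        using System;
        using System.Numerics;

        static class BlastMath
        {
            // True if 'point' lies between the blast's two bounding lines
            // (within halfWidth of the center line) and in front of the origin.
            public static bool PointInBlast(Vector2 origin, Vector2 direction, float halfWidth, Vector2 point)
            {
                Vector2 dir = Vector2.Normalize(direction);   // along the blast
                Vector2 normal = new Vector2(-dir.Y, dir.X);  // perpendicular to it
                Vector2 toPoint = point - origin;

                float along = Vector2.Dot(toPoint, dir);      // distance in front of the origin
                float perp = Vector2.Dot(toPoint, normal);    // signed distance from the center line

                return along >= 0f && Math.Abs(perp) <= halfWidth;
            }
        }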

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is towards learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferrable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somewhere unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." yet at the same time also makes the case that fixed-functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: Do I spend a lot of time learning (and then later unlearning) the ways of fixed-functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR == As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed-functionality or modern shader-based programming?

    Read the article

  • Split vector vs matrix notation for transformation

    - by seahorse
    Some rendering engines like Ogre prefer to use an individual vector-based notation for transformations, like the following. Split vector notation: the net transformation is represented by a scale vector (sx, sy, sz), a translation vector (tx, ty, tz), and a rotation quaternion (w, x, y, z). Matrix notation: other engines simply use a net combined transformation matrix. What are the advantages of the first notation over the second? Also, for animation interpolation, does it work in the first notation that we interpolate across the individual components and use the interpolated parts to get the net transformation? Is this another advantage?
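    As a concrete illustration of the interpolation point (a C# sketch with System.Numerics types, not Ogre's API; the Keyframe struct is made up): with the split notation each component is interpolated on its own and the matrix is rebuilt afterwards, which keeps the rotation valid, whereas lerping two full matrices generally does not.

        using System.Numerics;

        struct Keyframe { public Vector3 Scale, Translation; public Quaternion Rotation; }

        static class SplitTransform
        {
            // Interpolate each component separately, then rebuild the matrix.
            public static Matrix4x4 Interpolate(Keyframe a, Keyframe b, float t)
            {
                Vector3 s = Vector3.Lerp(a.Scale, b.Scale, t);
                Quaternion r = Quaternion.Slerp(a.Rotation, b.Rotation, t); // stays a valid rotation
                Vector3 p = Vector3.Lerp(a.Translation, b.Translation, t);

                // Net transform = scale, then rotate, then translate
                // (System.Numerics uses row-vector order, so S * R * T applies scale first).
                return Matrix4x4.CreateScale(s)
                     * Matrix4x4.CreateFromQuaternion(r)
                     * Matrix4x4.CreateTranslation(p);
            }
        }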

    Read the article

  • Unity Occlusion Portals: What and How?

    - by Nick Wiggill
    (Here I eat my words on Meta about posting Unity questions on Unity Answers... since that site is less responsive than this one.) Unity provides cell-based Occlusion Culling (via Umbra, I believe). However, a newer feature that it supports is Occlusion Portals. The question is, if BSP-based occlusion culling is already a feature of Unity, what do portals add, and how? PS. This question is not "What are portals?" -- I'm aware of the original Quake BSP-style portals -- which is partly why I find the explicit portal concept in Unity odd, since it uses BSP anyway.

    Read the article

  • Implementing an FSM with ActionScript 2 without using classes?

    - by Up2u
    I have seen several references to A.I. and FSMs, but sadly I still can't understand the point of an FSM in AS2.0. Is it a must to create a class for each state? I have a game project which also has an A.I.; the A.I. has 3 states: distanceCheck, ChaseTarget, and Hit the target. It's an FPS game played via mouse. I have created the A.I. successfully, but I want to convert it to the FSM method. My first state is CheckDistanceState(), and in that function I look for the nearest target and trigger the function ChaseState(), where I insert the Hit() function to destroy the enemy. The 3 functions that I created are being called in AI_cursor.onEnterFrame. Is there any chance to implement an FSM without the need to create a class? From what I've read before, you have to create a class. I prefer to write the code on frames in Flash and I still don't understand how to use external classes.
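    The state-per-class pattern is only one way to write an FSM; the same three states can be kept as a value plus a table of plain functions. A C# sketch of the shape (the AS2 equivalent would store function references and call them from onEnterFrame; all names here are made up):

        using System;
        using System.Collections.Generic;

        enum AiState { CheckDistance, Chase, Hit }

        class EnemyAi
        {
            AiState state = AiState.CheckDistance;
            readonly Dictionary<AiState, Action> update;

            public EnemyAi()
            {
                // The table plays the role of onEnterFrame dispatching to the current state.
                update = new Dictionary<AiState, Action>
                {
                    { AiState.CheckDistance, CheckDistance },
                    { AiState.Chase, Chase },
                    { AiState.Hit, Hit },
                };
            }

            public void Update() => update[state]();   // call once per frame

            void CheckDistance() { if (TargetInRange()) state = AiState.Chase; }   // find nearest target
            void Chase()         { if (TargetReached()) state = AiState.Hit; }     // move toward it
            void Hit()           { state = AiState.CheckDistance; }                // apply damage, reset

            bool TargetInRange() => false;    // placeholders for the game's own checks
            bool TargetReached() => false;
        }

    Adding a state then means adding one enum value and one function, with no new class.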

    Read the article

  • How do I prevent other dynamic bodies from affecting the player's velocity with Box2D?

    - by Milo
    I'm working on my player object for my game. PhysicsBodyDef def; def.fixedRotation = true; def.density = 1.0f; def.position = Vec2(200.0f, 200.0f); def.isDynamic = true; def.size = Vec2(50.0f,200.0f); m_player.init(def,&m_physicsEngine.getWorld()); This is how he moves: b2Vec2 vel = getBody()->GetLinearVelocity(); float desiredVel = 0; if (m_keys[ALLEGRO_KEY_A] || m_keys[ALLEGRO_KEY_LEFT]) { desiredVel = -5; } else if (m_keys[ALLEGRO_KEY_D] || m_keys[ALLEGRO_KEY_RIGHT]) { desiredVel = 5; } else { desiredVel = 0; } float velChange = desiredVel - vel.x; float impulse = getBody()->GetMass() * velChange; //disregard time factor getBody()->ApplyLinearImpulse( b2Vec2(impulse,0), getBody()->GetWorldCenter(),true); This creates a few problems. First, to move the player at a constant speed he must be given a high velocity. The problem with this is if he just comes in contact with a small box, he makes it move a lot. Now, I can fix this by lowering his density, but then comes my main issue: I need other objects to be able to run into him, but when they do, he should be like a static wall and not move. I'm not sure how to do that without high density. I cannot use collision groups since I still need him to be solid toward other dynamic things. How can this be done? Essentially, how do I prevent other dynamic bodies from affecting the player's velocity?

    Read the article

  • 3D Paint tool in Maya

    - by Joris
    Hey everyone, could someone help me out with this question? When I am trying to texture objects in Maya (for example a barrel), I want to texture them with a brush. When I try to use the 3D paint tool, it all gets black even when I have an image file selected. Also, the resolution of the models and textures in Maya is very low, even though the texture image is very high quality. Can someone please help me? Thanks so much for helping.

    Read the article

  • Why does the camera's aspect ratio look good on the computer but not on Android devices?

    - by Pooya Fayyaz
    I'm developing a game for Android devices and I have a script that solves the aspect-ratio problem for computer screens, but not for my intended target platform. It looks perfect on the computer, even when re-sizing the game screen, but not when running my game in landscape mode on mobile phones. This is my script: using UnityEngine; using System.Collections; using System.Collections.Generic; public class reso : MonoBehaviour { void Update() { // set the desired aspect ratio (the values in this example are // hard-coded for 16:9, but you could make them into public // variables instead so you can set them at design time) float targetaspect = 16.0f / 9.0f; // determine the game window's current aspect ratio float windowaspect = (float)Screen.width / (float)Screen.height; // current viewport height should be scaled by this amount float scaleheight = windowaspect / targetaspect; // obtain camera component so we can modify its viewport Camera camera = GetComponent<Camera>(); // if scaled height is less than current height, add letterbox if (scaleheight < 1.0f && Screen.width <= 490 ) { Rect rect = camera.rect; rect.width = 1.0f; rect.height = scaleheight; rect.x = 0; rect.y = (1.0f - scaleheight) / 2.0f; camera.rect = rect; } else // add pillarbox { float scalewidth = 1.0f / scaleheight; Rect rect = camera.rect; rect.width = scalewidth; rect.height = 1.0f; rect.x = (1.0f - scalewidth) / 2.0f; rect.y = 0; camera.rect = rect; } } } I figured that my problem occurs in this part of the script: if (scaleheight < 1.0f) { Rect rect = camera.rect; rect.width = 1.0f; rect.height = scaleheight; rect.x = 0; rect.y = (1.0f - scaleheight) / 2.0f; camera.rect = rect; } It looks like this on my mobile phone (portrait): and in landscape mode:

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered. Those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI). class IHealth { public: virtual float get_health( void ) const = 0; virtual void do_damage( float amount ) = 0; }; class Health : public Component, public IHealth { public: float get_health( void ) const { return m_health; } void do_damage( float amount ) { m_health -= amount; } private: float m_health; }; class Armor : public Component, public IHealth { public: float get_health( void ) const { return next<IHealth>().get_health(); } void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); } }; entity.add( new Health( 100 ) ); entity.add( new Armor() ); assert( entity.get<IHealth>().get_health() == 100 ); entity.get<IHealth>().do_damage( 10 ); assert( entity.get<IHealth>().get_health() == 95 ); Is there anything particularly naive about the way I'm proposing to do this?

    Read the article

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I'm attempting to update the instance buffer. The buffer is declared as a member variable: Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer; Then I create it (which succeeds): D3D11_BUFFER_DESC instanceDesc; ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC)); instanceDesc.Usage = D3D11_USAGE_DYNAMIC; instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT; instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; instanceDesc.MiscFlags = 0; instanceDesc.StructureByteStride = 0; DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer)); However, when I try to map it: D3D11_MAPPED_SUBRESOURCE inst; DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst)); The map call fails with E_INVALIDARG. Nothing is NULL incorrectly, and this being one of my first D3D apps I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.

    Read the article

  • Typical collision detection

    - by marcg11
    I would like to know how the typical collision detection of most games works. For example, you control a character which can move in 2 dimensional directions (except up and down). Now let's assume he walks into a wall: most games, depending on the character's angle and the BB's normal face, will only stop the player on one axis, but let him continue moving in the other, along the wall axis. How is that done? I've only managed to stop the character from going through the wall by setting the position to the one from the previous frame if the new position collides with the bounding box. But this just makes the player stop sharply and unrealistically.
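    The usual trick behind that sliding behaviour is to resolve movement one axis at a time, so a wall only cancels the component that actually points into it. A framework-free C# sketch (CollidesAt stands in for whatever bounding-box test the game uses):

        using System;

        static class PlayerMovement
        {
            // Move on each axis separately so a wall only cancels the axis that
            // points into it; the player keeps sliding along the other one.
            public static void Move(ref float x, ref float y, float dx, float dy,
                                    Func<float, float, bool> collidesAt)
            {
                if (!collidesAt(x + dx, y))
                    x += dx;                   // horizontal move is free

                if (!collidesAt(x, y + dy))    // uses the possibly updated x
                    y += dy;                   // vertical move is free

                // For a less abrupt stop, step in small increments (or clamp to the
                // wall face) instead of rejecting the whole per-frame displacement.
            }
        }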

    Read the article

  • Is there any advantage in using DX10/11 for a 2D game?

    - by David Gouveia
    I'm not entirely familiar with the feature set introduced by DX10/11 class hardware. I'm vaguely familiar with the new stages added to the programmable graphics pipeline, such as the geometry shader, the compute shader, and the new tessellation stages. I don't see how any of these make much of a difference for a 2D game though. Is there any compelling reason to make the switch to DX10/11 (or the OpenGL equivalents) for a 2D game, or would it be wiser to stick with DX9 considering that a significant share of the market still runs on older technologies (e.g. the February 2012 Steam survey lists around 17% of users as still using Windows XP)?

    Read the article

  • XNA shield effect with a Primitive sphere problem

    - by Sparky41
    I'm having issue with a shield effect i'm trying to develop. I want to do a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png I currently got this: http://i.imgur.com/Jdin7.png (The red likes are a simple texture a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point) This sphere is drawn via a primitive (DrawIndexedPrimitives) This is how i calculate the pieces of the sphere using a class i've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d) public class Sphere { // During the process of constructing a primitive model, vertex // and index data is stored on the CPU in these managed lists. List vertices = new List(); List indices = new List(); // Once all the geometry has been specified, the InitializePrimitive // method copies the vertex and index data into these buffers, which // store it on the GPU ready for efficient rendering. VertexBuffer vertexBuffer; IndexBuffer indexBuffer; BasicEffect basicEffect; public Vector3 position = Vector3.Zero; public Matrix RotationMatrix = Matrix.Identity; public Texture2D texture; /// <summary> /// Constructs a new sphere primitive, /// with the specified size and tessellation level. /// </summary> public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback) { texture = text; if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation"); int verticalSegments = tessellation; int horizontalSegments = tessellation * 2; float radius = diameter / 2; // Start with a single vertex at the bottom of the sphere. AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero);//bottom position5 // Create rings of vertices at progressively higher latitudes. for (int i = 0; i < verticalSegments - 1; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude / up);//(up)5 float dxz = (float)Math.Cos(latitude); // Create a single ring of vertices at this latitude. for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)(Math.Cos(longitude) * dxz) / portstar;//port and starboard (right)2 float dz = (float)(Math.Sin(longitude) * dxz) * frontback;//front and back1.4 Vector3 normal = new Vector3(dx, dy, dz); AddVertex(normal * radius, normal, new Vector2(j, i)); } } // Finish with a single vertex at the top of the sphere. AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One);//top position5 // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; AddIndex(1 + i * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { AddIndex(CurrentVertex - 1); AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); AddIndex(CurrentVertex - 2 - i); } //InitializePrimitive(graphicsDevice); } /// <summary> /// Adds a new vertex to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate) { vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate)); } /// <summary> /// Adds a new index to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); indices.Add((ushort)index); } /// <summary> /// Queries the index of the current vertex. This starts at /// zero, and increments every time AddVertex is called. /// </summary> protected int CurrentVertex { get { return vertices.Count; } } public void InitializePrimitive(GraphicsDevice graphicsDevice) { // Create a vertex declaration, describing the format of our vertex data. // Create a vertex buffer, and copy our vertex data into it. vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None); vertexBuffer.SetData(vertices.ToArray()); // Create an index buffer, and copy our index data into it. indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None); indexBuffer.SetData(indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. basicEffect = new BasicEffect(graphicsDevice); //basicEffect.EnableDefaultLighting(); } /// <summary> /// Draws the primitive model, using the specified effect. Unlike the other /// Draw overload where you just specify the world/view/projection matrices /// and color, this method does not set any renderstates, so you must make /// sure all states are set to sensible values before you call it. /// </summary> public void Draw(Effect effect) { GraphicsDevice graphicsDevice = effect.GraphicsDevice; // Set our vertex declaration, vertex buffer, and index buffer. graphicsDevice.SetVertexBuffer(vertexBuffer); graphicsDevice.Indices = indexBuffer; graphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass effectPass in effect.CurrentTechnique.Passes) { effectPass.Apply(); int primitiveCount = indices.Count / 3; graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount); } graphicsDevice.BlendState = BlendState.Opaque; } /// <summary> /// Draws the primitive model, using a BasicEffect shader with default /// lighting. Unlike the other Draw overload where you specify a custom /// effect, this method sets important renderstates to sensible values /// for 3D model rendering, so you do not need to set these states before /// you call it. /// </summary> public void Draw(Camera camera, Color color) { // Set BasicEffect parameters. basicEffect.World = GetWorld(); basicEffect.View = camera.view; basicEffect.Projection = camera.projection; basicEffect.DiffuseColor = color.ToVector3(); basicEffect.TextureEnabled = true; basicEffect.Texture = texture; GraphicsDevice device = basicEffect.GraphicsDevice; device.DepthStencilState = DepthStencilState.Default; if (color.A < 255) { // Set renderstates for alpha blended rendering. 
device.BlendState = BlendState.AlphaBlend; } else { // Set renderstates for opaque rendering. device.BlendState = BlendState.Opaque; } // Draw the model, using BasicEffect. Draw(basicEffect); } public virtual Matrix GetWorld() { return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position); } } public struct VertexPositionNormal : IVertexType { public Vector3 Position; public Vector3 Normal; public Vector2 TextureCoordinate; /// <summary> /// Constructor. /// </summary> public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor) { Position = position; Normal = normal; TextureCoordinate = textCoor; } /// <summary> /// A VertexDeclaration object, which contains information about the vertex /// elements contained within this struct. /// </summary> public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0), new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0) ); VertexDeclaration IVertexType.VertexDeclaration { get { return VertexPositionNormal.VertexDeclaration; } } } A simple call to the class to initialise it. The Draw method is called in the master draw method in the Gamecomponent. My current thoughts on this are: The direction of the weapon hitting the ship is used to get the middle position for the texture Wrap a texture around the drawn sphere based on this point of contact Problem is i'm not sure how to do this. Can anyone help or if you have a better idea please tell me i'm open for opinion? :-) Thanks.

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }

    Read the article

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all of the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch); float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch); float centerZ = (float)Math.sin(pitch); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } I might just be bad at the math, but I dont get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, this is what I get: Watch video in browser | Download video (for those who cant download the video, everything shakes in a clockwise/anticlockwise action, depending on if I increase or decrease the Yaw value) -Thank you. [edit] and with this code: public void render() { GL11 gl = Gdx.gl11; float centerX = (float)(MathUtils.cosDeg(yaw)*4); float centerY = 0; float centerZ = (float)(MathUtils.sinDeg(yaw)*4); System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z); Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0); if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; } if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; } } it slowly swings from the left to the right. This approach worked for turning left and right for 2d games though. What am I doing wrong?
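    Two things commonly go wrong with this kind of setup: Math.cos/Math.sin expect radians while yaw is being stepped in degrees, and the center passed to gluLookAt is a point to look at, not a direction, so the eye position has to be added to the direction. A framework-free C# sketch of the intended math (y-up convention assumed; the LibGDX call itself is omitted, and the names are illustrative):

        using System;

        static class LookAtMath
        {
            // Convert yaw/pitch in degrees to a gluLookAt-style target point.
            public static void ComputeCenter(float eyeX, float eyeY, float eyeZ,
                                             float yawDeg, float pitchDeg,
                                             out float cx, out float cy, out float cz)
            {
                double yaw = yawDeg * Math.PI / 180.0;     // degrees -> radians
                double pitch = pitchDeg * Math.PI / 180.0;

                // Forward direction on the unit sphere (y up).
                double dirX = Math.Cos(pitch) * Math.Sin(yaw);
                double dirY = Math.Sin(pitch);
                double dirZ = Math.Cos(pitch) * Math.Cos(yaw);

                // The 'center' argument is a point to look at, not a direction,
                // so the direction is added to the eye position.
                cx = eyeX + (float)dirX;
                cy = eyeY + (float)dirY;
                cz = eyeZ + (float)dirZ;
            }
        }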

    Read the article

  • How to cull liquids

    - by Cyral
    I use culling on my tiles in my 2D tile-based platformer, so only the ones needed are drawn on screen. That's easy to do. However, my liquid tiles (water, lava, etc.) require an update method as well as the normal draw, which does checks against tiles, makes them flow, etc. So how should I cull liquid updates in my game? Not culling is too slow, and culling only what is on screen looks awkward when you move. What do you think would be best for the player? Maybe some way of culling the visible tiles PLUS also adding the width/height of the viewport, so tiles start updating far enough ahead of the player that it doesn't look awkward when moving? (Not sure how to do this though; something with the player's max speed and the width of the screen.)
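    One way to phrase the "viewport plus margin" idea (a hedged C# sketch; the tile and viewport names are made up): update every liquid tile inside the visible rectangle inflated by a margin, sized so that at the player's maximum speed the flow has settled before those tiles scroll into view (roughly maxSpeed * settleTime / tileSize tiles).

        using System;

        static class LiquidCulling
        {
            // Update liquid tiles in the visible area plus a margin, so the flow has
            // settled before those tiles scroll onto the screen. All values in tiles.
            public static void UpdateLiquids(int camX, int camY, int viewW, int viewH,
                                             int margin, int worldW, int worldH,
                                             Action<int, int> updateTile)
            {
                int left   = Math.Max(0, camX - margin);
                int top    = Math.Max(0, camY - margin);
                int right  = Math.Min(worldW - 1, camX + viewW + margin);
                int bottom = Math.Min(worldH - 1, camY + viewH + margin);

                for (int y = top; y <= bottom; y++)
                    for (int x = left; x <= right; x++)
                        updateTile(x, y);
            }
        }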

    Read the article

  • In-Game Encyclopedias

    - by SHiNKiROU
    Some games have an in-game encyclopedia where you can learn many things about the characters and settings of the game, for example the Codex in Mass Effect. I want to know if this is exclusive to BioWare, and to get inspired by other encyclopedia systems. What are some other examples of in-game encyclopedias? How effective are they? I would also like some examples where the in-game encyclopedia is not effective at all, or is an ignored feature.

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm concerned as to what acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as a collection of triangles, so I also have access to the individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and a lot of other material. In the compact data structure for the photon they introduce, they store photon power as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux on the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
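    On the 4-char power question: one common reading of Jensen's compact photon is that it is not "3 chars for color and 1 for flux" but Ward's shared-exponent RGBE format, where the fourth byte is a common exponent for all three channels, so each channel can span a huge range, including values on the order of 1/n. A C# sketch of the encode step, assuming that interpretation (the class name is made up, and this is not Jensen's code):

        using System;

        static class Rgbe
        {
            // Ward-style shared-exponent encoding: three mantissa bytes plus one
            // common exponent byte. Tiny per-photon powers (order 1/n) still fit.
            public static byte[] Encode(float r, float g, float b)
            {
                float v = Math.Max(r, Math.Max(g, b));
                if (v < 1e-32f)
                    return new byte[] { 0, 0, 0, 0 };

                int e = (int)Math.Floor(Math.Log(v, 2.0)) + 1;      // v / 2^e is in [0.5, 1)
                float scale = 256.0f / (float)Math.Pow(2.0, e);

                return new byte[]
                {
                    (byte)(r * scale),
                    (byte)(g * scale),
                    (byte)(b * scale),
                    (byte)(e + 128),                                // biased exponent
                };
            }

            // Decoding: channel = mantissaByte * 2^(exponent - 128 - 8).
        }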

    Read the article
