Search Results

Search found 12338 results on 494 pages for 'game ai'.

Page 340 of 494

  • IndexOutOfRangeException on World.Step after enabling/disabling a Farseer physics body?

    - by WilHall
    Earlier, I posted a question asking how to swap fixtures on the fly in a 2D side-scroller using the Farseer Physics Engine. The ultimate goal is that the player's physical body changes when the player is in different states (i.e. standing, walking, jumping, etc.). After reading this answer, I changed my approach to the following:

    1. Create a physical body for each state when the player is loaded
    2. Save those bodies and their corresponding states in parallel lists
    3. Swap those physical bodies out when the player state changes (which causes an exception, see below)

    The following is my function to change states and swap physical bodies:

        new protected void SetState(object nState)
        {
            // If mBody == null, the player is being loaded for the first time
            if (mBody == null)
            {
                mBody = mBodies[mStates.IndexOf(nState)];
                mBody.Enabled = true;
            }
            else
            {
                // Get the body for the given state
                Body nBody = mBodies[mStates.IndexOf(nState)];
                // Enable the new body
                nBody.Enabled = true;
                // Disable the current body
                mBody.Enabled = false;
                // Copy the current body's attributes to the new one
                nBody.SetTransform(mBody.Position, mBody.Rotation);
                nBody.LinearVelocity = mBody.LinearVelocity;
                nBody.AngularVelocity = mBody.AngularVelocity;
                mBody = nBody;
            }
            base.SetState(nState);
        }

    Using the above method causes an IndexOutOfRangeException when calling World.Step:

        mWorld.Step(Math.Min((float)nGameTime.ElapsedGameTime.TotalSeconds, (1f / 30f)));

    I found that the problem is related to changing the .Enabled setting on a body. I tried the above function without setting .Enabled, and no error was thrown. Turning on the debug views, I saw that the bodies were updating positions/rotations/etc. properly when the state was changed, but since they were all enabled, they were just colliding wildly with each other. Does enabling/disabling a body remove it from the world's body list, which then causes the error because the list is shorter than expected? Update: For such a straightforward issue, I feel this question has not received enough attention. Has anyone else experienced this? Would anyone try a quick test case? I know this issue can be sidestepped - i.e. by not disabling a body during the simulation - but it seems strange that this issue would exist in the first place, especially when I see no mention of it in the documentation for Farseer or Box2D. I can't find any other reports of this issue online, even though my usage seems more or less kosher. Any leads would be helpful.
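
    A workaround sketch, not a confirmed Farseer behaviour: Box2D-style engines generally forbid structural changes (enabling, disabling, adding, destroying bodies) while the world is mid-step, so one conservative pattern is to queue the swap and apply it only after World.Step returns. The mPendingSwaps queue and helper names below are hypothetical:

        // Queue of (oldBody, newBody) swaps requested during the frame (hypothetical helper)
        private readonly Queue<Tuple<Body, Body>> mPendingSwaps = new Queue<Tuple<Body, Body>>();

        private void RequestSwap(Body oldBody, Body newBody)
        {
            mPendingSwaps.Enqueue(Tuple.Create(oldBody, newBody));
        }

        private void ApplyPendingSwaps()
        {
            while (mPendingSwaps.Count > 0)
            {
                var swap = mPendingSwaps.Dequeue();
                // Copy state across, then toggle Enabled outside of the step
                swap.Item2.SetTransform(swap.Item1.Position, swap.Item1.Rotation);
                swap.Item2.LinearVelocity = swap.Item1.LinearVelocity;
                swap.Item2.AngularVelocity = swap.Item1.AngularVelocity;
                swap.Item2.Enabled = true;
                swap.Item1.Enabled = false;
            }
        }

        // In the game loop:
        // mWorld.Step(dt);
        // ApplyPendingSwaps(); // Enabled only changes when the world is not stepping

    If the exception disappears with this ordering, the crash was the mid-step mutation rather than anything wrong with the state bookkeeping.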


  • Which jar has JBox2d's p5 package

    - by Brantley Blanchard
    Using Eclipse, I'm trying to write a simple hello-world program in Processing that draws a rectangle on the screen and then has gravity drop it, as seen in this tutorial. The problem is that when I try to import the p5 package, it doesn't resolve, so I can't declare my Physics object. I tried two things:

    1. Download the zip, unzip it, then import the 3 jars (library, serialization, and testbed)
       a. import org.jbox2d.p5.*; doesn't resolve, but the others do
       b. Physics physics; doesn't resolve
    2. Download the older standalone testbed jar, then import it
       a. Physics physics; doesn't resolve

    Here is basically where I'm starting:

        import org.jbox2d.util.nonconvex.*;
        import org.jbox2d.dynamics.contacts.*;
        import org.jbox2d.testbed.*;
        import org.jbox2d.collision.*;
        import org.jbox2d.common.*;
        import org.jbox2d.dynamics.joints.*;
        import org.jbox2d.p5.*;
        import org.jbox2d.dynamics.*;
        import processing.core.PApplet;

        public class MyFirstJBox2d extends PApplet {
            Physics physics;

            public void setup() {
                size(640, 480);
                frameRate(60);
                initScene();
            }

            public void draw() {
                background(0);
                if (keyPressed) {
                    // Reset everything
                    physics.destroy();
                    initScene();
                }
            }

            public void initScene() {
                physics = new Physics(this, width, height);
                physics.setDensity(1.0f);
                physics.createRect(300, 200, 340, 300);
            }
        }
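
    A hedged note rather than a definitive answer: the org.jbox2d.p5 Processing wrapper shipped with older JBox2D distributions and appears to have been dropped from the later split jars, which would explain why none of the three jars resolves it. If no jar you can find still contains it, one commonly used alternative for Processing is Daniel Shiffman's Box2D-for-Processing wrapper; the sketch below assumes that library is installed and uses its API, not JBox2D's p5 API:

        import shiffman.box2d.*;          // Box2D-for-Processing wrapper (assumed installed)
        import processing.core.PApplet;

        public class MyFirstBox2d extends PApplet {
            Box2DProcessing box2d;

            public void setup() {
                size(640, 480);
                box2d = new Box2DProcessing(this);  // wraps world creation
                box2d.createWorld();                // default gravity
            }

            public void draw() {
                background(0);
                box2d.step();  // advance the physics world each frame
            }
        }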


  • Can someone explain the (reasons for the) implications of column- vs row-major order in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and I keep running into difficulties in my implementation owing to my confusion about the two conventions for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand, though, is what's meant by it being only a 'notational convention': from the articles here and here, the authors appear to assert that it makes no difference to how the matrix is stored or transferred to the GPU, but on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program, I see the translation components occupying the 4th, 8th and 12th elements. Given that "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices", why, in the following snippet of code:

        Matrix4 r = t3 * t2 * t1;
        Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose();

    does r != r2, and why does pos3 != pos for:

        Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1);
        Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1);

    Does the multiplication process change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer is that when provided to DirectX, my column-major WVP matrix is used successfully to transform vertices with the HLSL call mul(vector, matrix), which should result in the vector being treated as row-major - so how can the column-major matrix provided by my math library work?
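
    A short worked identity closes the gap in the question's reasoning (this is standard linear algebra, not specific to any library): the transpose of a product reverses the order of its factors,

        (T_3 T_2 T_1)^T = T_1^T T_2^T T_3^T

    so r2 above equals r.Transpose(), not r - the two only coincide when the result is symmetric. Likewise, for a matrix M and column vector v,

        M^T v = (v^T M)^T

    so multiplying by the transpose is equivalent to moving the vector to the other side of the product. That is exactly why pos3 differs from pos, and why HLSL's mul(vector, matrix) - vector on the left, treated as a row - agrees with a column-major matrix that your math library would write as M * v with the vector on the right.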


  • Syntax error in Maya Python Script [on hold]

    - by Enchanter
    OK, this error is immensely frustrating, as it is obviously a simple syntax issue. Basically, I've written two lines of Maya script in Python designed to create a list of the names of all the joints of the model currently selected in the model viewer. Here are the two lines of script:

        import maya.cmds
        joints = ls(selection = true, type = 'joint')

    Upon running the code, the script editor says there is a syntax error in the second line, but I do not see any reason why this code should not execute.
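
    Two fixes are likely needed here (standard Python and Maya behaviour, not a guess about this particular scene): Python's boolean literal is capitalized True, and ls lives in the maya.cmds module, so it must be called through the module name (or explicitly imported):

        import maya.cmds as cmds

        # list the names of all currently selected joints
        joints = cmds.ls(selection=True, type='joint')
        print(joints)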


  • Why is my animation getting aborted?

    - by Homer_Simpson
    I have a class named Animation which handles my animations. The animation class can be called from multiple other classes. For example, the class Player.cs can call the animation class like this:

        Animation Playeranimation;
        Playeranimation = new Animation(TimeSpan.FromSeconds(2.5f), 80, 40, Animation.Sequences.forwards, 0, 5, false, true);

        // updating the animation
        public void Update(GameTime gametime)
        {
            Playeranimation.Update(gametime);
        }

        // drawing the animation
        public void Draw(SpriteBatch batch)
        {
            Playeranimation.Draw(batch, PlayerAnimationSpritesheet, PosX, PosY, 0, SpriteEffects.None);
        }

    The class Lion.cs can call the animation class with the same code; only the animation parameters change, because a different animation should be played:

        Animation Lionanimation;
        Lionanimation = new Animation(TimeSpan.FromSeconds(2.5f), 100, 60, Animation.Sequences.forwards, 0, 8, false, true);

    Other classes can call the animation class with the same code as the Player class. But sometimes I have some trouble with the animations. If an animation is running and shortly afterwards another class calls the animation class too, the second animation starts but the first animation gets aborted. In this case, the first animation couldn't run to its end because another class started a new instance of the animation class. Why does an animation sometimes get aborted when another animation starts? How can I solve this problem? My animation class:

        public class Animation
        {
            private int _animIndex, framewidth, frameheight, start, end;
            private TimeSpan PassedTime;
            private List<Rectangle> SourceRects = new List<Rectangle>();
            private TimeSpan Duration;
            private Sequences Sequence;
            public bool Remove;
            private bool DeleteAfterOneIteration;

            public enum Sequences { forwards, backwards, forwards_backwards, backwards_forwards }

            private void forwards()
            {
                for (int i = start; i < end; i++)
                    SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
            }

            private void backwards()
            {
                for (int i = start; i < end; i++)
                    SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
            }

            private void forwards_backwards()
            {
                for (int i = start; i < end - 1; i++)
                    SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
                for (int i = start; i < end; i++)
                    SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
            }

            private void backwards_forwards()
            {
                for (int i = start; i < end - 1; i++)
                    SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
                for (int i = start; i < end; i++)
                    SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
            }

            public Animation(TimeSpan duration, int frame_width, int frame_height, Sequences sequences,
                             int start_interval, int end_interval, bool remove, bool deleteafteroneiteration)
            {
                Remove = remove;
                DeleteAfterOneIteration = deleteafteroneiteration;
                framewidth = frame_width;
                frameheight = frame_height;
                start = start_interval;
                end = end_interval;
                switch (sequences)
                {
                    case Sequences.forwards: { forwards(); break; }
                    case Sequences.backwards: { backwards(); break; }
                    case Sequences.forwards_backwards: { forwards_backwards(); break; }
                    case Sequences.backwards_forwards: { backwards_forwards(); break; }
                }
                Duration = duration;
                Sequence = sequences;
            }

            public void Update(GameTime dt)
            {
                PassedTime += dt.ElapsedGameTime;
                if (PassedTime > Duration)
                {
                    PassedTime -= Duration;
                }
                var percent = PassedTime.TotalSeconds / Duration.TotalSeconds;
                if (DeleteAfterOneIteration == true)
                {
                    if (_animIndex >= SourceRects.Count)
                        Remove = true;
                    _animIndex = (int)Math.Round(percent * (SourceRects.Count));
                }
                else
                {
                    _animIndex = (int)Math.Round(percent * (SourceRects.Count - 1));
                }
            }

            public void Draw(SpriteBatch batch, Texture2D Textures, float PositionX, float PositionY,
                             float Rotation, SpriteEffects Flip)
            {
                if (DeleteAfterOneIteration == true)
                {
                    if (_animIndex >= SourceRects.Count)
                        return;
                }
                batch.Draw(Textures, new Rectangle((int)PositionX, (int)PositionY, framewidth, frameheight),
                           SourceRects[_animIndex], Color.White, Rotation,
                           new Vector2(framewidth / 2.0f, frameheight / 2.0f), Flip, 0f);
            }
        }


  • AS3 - At exactly 23 empty alpha channels, images below stop drawing

    - by user46851
    I noticed, while trying to draw large numbers of circles, that occasionally there would be some kind of visual bug where some circles wouldn't draw properly. Well, I narrowed it down, and have noticed that if there are 23 or more objects with 00 for an alpha value on the same spot, then the objects below don't draw. It appears to be on a pixel-by-pixel basis, since parts of the image still draw. Originally, this problem was noticed with a class that inherited Sprite. It was confirmed to also be a problem with Sprites, and now Bitmaps, too. If anyone can find a lower-level class than Bitmap which doesn't have this problem, please speak up so we can try to find the origin of the problem. I prepared a small test class that demonstrates what I mean. You can change the integer passed to Test() in init() in order to see the three tests I came up with to clearly show the problem. Is there any kind of workaround, or is this just a limit that I have to work with? Has anyone experienced this before? Is it possible I'm doing something wrong, despite the bare-bones implementation?

        package {
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.display.Bitmap;
            import flash.display.BitmapData;

            public class Main extends Sprite {
                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    // entry point
                    Test(3);
                }

                private function Test(testInt:int):void {
                    if (testInt == 1) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var i:int = 0; i < 22; i++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 2) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var j:int = 0; j < 23; j++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                    }
                    if (testInt == 3) {
                        addChild(new Bitmap(new BitmapData(200, 200, true, 0xFFFF0000)));
                        for (var k:int = 0; k < 22; k++) {
                            addChild(new Bitmap(new BitmapData(100, 100, true, 0x00000000)));
                        }
                        var M:Bitmap = new Bitmap(new BitmapData(100, 100, true, 0x00000000));
                        M.x += 50;
                        M.y += 50;
                        addChild(M);
                    }
                }
            }
        }
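
    A workaround sketch (standard display-list flattening; whether it sidesteps this particular player limit is an assumption): composite the stack into a single BitmapData with BitmapData.draw(), so only one display object carries all the transparent layers:

        // draw each layer into one offscreen buffer instead of stacking children
        var composite:BitmapData = new BitmapData(200, 200, true, 0x00000000);
        composite.draw(someLayer);        // repeat per layer, with an optional Matrix offset
        addChild(new Bitmap(composite));  // a single child replaces the 23+ stacked ones

    someLayer here stands for whichever Sprite or Bitmap you would otherwise have addChild'ed.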


  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a bare-bones pair of shaders which should illustrate the problem. Vertex shader:

        struct Vertex {
            float3 position : POSITION;
        };

        struct Pixel {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        cbuffer Matrices {
            matrix projection;
        };

        Pixel RenderVertexShader(Vertex input) {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), projection);
            output.light_position = output.position; // We simply pass the same vector through two different semantics.
            return output;
        }

    And a simple pixel shader to go along with it:

        struct Pixel {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        float4 RenderPixelShader(Pixel input) : SV_Target {
            // At this point, (input.position.z / input.position.w) is a normal depth value.
            // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
            // If the primitive is touching the near plane, it very quickly goes to 0.
            return (0.0f).rrrr;
        }

    How is it possible to make the hardware treat light_position in the same way that position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z), without dividing by w, is the same as (input.light_position.z / input.light_position.w). Not sure why this is.
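
    The EDIT is the clue, and the explanation is standard Direct3D behaviour rather than a quirk: SV_Position is special-cased by the rasterizer. Between the vertex and pixel shader it undergoes the perspective divide and viewport transform, so in the pixel shader its z is already z/w (the depth-buffer value) and its xy are pixel coordinates. Every other semantic, POSITION included, is merely perspective-correct interpolated clip-space data, so you perform the divide yourself:

        float4 RenderPixelShader(Pixel input) : SV_Target {
            // Manual perspective divide makes light_position behave like SV_Position's z.
            float light_depth = input.light_position.z / input.light_position.w;
            // light_depth now matches input.position.z, which the hardware already divided.
            return light_depth.rrrr;
        }

    For shadow mapping with a separate light matrix, the same divide applies to the light-space position before comparing it against the shadow map.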


  • Changing location after CommitAnimations

    - by Will Youmans
    I'm using the following code to move a UIImageView:

        shootImg.image = [UIImage imageNamed:@"Projectile Left 1.png"];
        [UIView beginAnimations:nil context:nil];
        shootImg.center = CGPointMake(shootImg.center.x+1000, shootImg.center.y);
        [UIView commitAnimations];

    This works, but what I want to do is set the location of shootImg using CGPointMake after [UIView commitAnimations];. If I just put it after commitAnimations, then the animation doesn't fully complete. Any suggestions? I'm not using any frameworks like cocos2d, and if you need to see any more code, just ask.
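
    commitAnimations returns immediately and the animation runs asynchronously, so setting center right afterwards overwrites the animation's target. The standard UIKit answer is to reposition in a completion callback; a sketch using the block-based API (the reset point is a made-up example):

        [UIView animateWithDuration:0.3
                         animations:^{
                             shootImg.center = CGPointMake(shootImg.center.x + 1000, shootImg.center.y);
                         }
                         completion:^(BOOL finished) {
                             // Runs only after the animation has fully played out.
                             shootImg.center = CGPointMake(50, 200); // example reset position
                         }];

    With the older begin/commit style, the equivalent is [UIView setAnimationDelegate:self] plus [UIView setAnimationDidStopSelector:...] before commitAnimations.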


  • Problem playing repeat animation/action?

    - by Beast
    I'm calling this function on multiple sprites (after checking numberOfRunningActions()) to play the same animation, but it's not working: only the first tagged sprite plays the animation. What am I doing wrong?

        void CGame::playAnimation(const char* filename, int tag, CCLayer* target)
        {
            CCAnimation* animation = CCAnimation::animation();
            CCSprite* spriteSheet = CCSprite::spriteWithFile(filename);

            for (int i = 0; i < spriteSheet->getTexture()->getPixelsWide() / SIZE; i++) // SIZE is an int value
            {
                animation->addFrameWithTexture(spriteSheet->getTexture(), CCRect(SIZE * i, 0, SIZE, SIZE));
            }

            CCActionInterval* action = CCAnimate::actionWithDuration(1, animation, true);
            CCRepeatForever* repeatAction = CCRepeatForever::actionWithAction(action);
            target->getChildByTag(tag)->runAction(repeatAction);
        }


  • How do professional games avoid showing pixel seams in adjacent mesh boundaries due to decimal imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise, so when meshes are joined at their borders, the graphics card often makes mistakes and decides that some pixels at the seam represent neither object, and unwanted pixels appear. It's natural behaviour on all graphics cards. How are such worries avoided in professional games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? (Screencap of a fix that ended up not working: image omitted.)


  • Particles are not moving correctly [closed]

    - by cr33p
    I want to make a particle explosion after something gets destroyed, but somehow only one line of mixed colors shows up on the screen. Here's the header: http://pastebin.com/JW5bPLj2 Here's the source: http://pastebin.com/KHmFqytD I don't get what's wrong, as it's nearly the same as in "Programming Linux Games". Can somebody help me fix that? PS: "Uint32 delta" is needed to update the pixels based on time. PPS: Maybe I should add that it's programmed in C and uses SDL. EDIT: Found the problem. It was the "drawParticles" function. The problem was that I passed a double to "offset" (as particles[i].x, etc. are all doubles), so I ended up with values like ~MAX_INT because I didn't cast the doubles properly to ints.
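
    For readers hitting the same symptom, a sketch of the fix the EDIT describes (the field and variable names are guesses at the pastebin code, which isn't reproduced here): compute the framebuffer offset from explicitly truncated integer coordinates instead of letting a double flow into the pointer arithmetic.

        /* hypothetical drawParticles inner loop, SDL 1.2-style direct pixel access */
        int px = (int)particles[i].x;   /* truncate the double position ...        */
        int py = (int)particles[i].y;   /* ... before any offset arithmetic        */
        if (px >= 0 && px < screen->w && py >= 0 && py < screen->h) {
            Uint32 *pixels = (Uint32 *)screen->pixels;
            pixels[py * (screen->pitch / 4) + px] = color;  /* pitch is in bytes */
        }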


  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning, I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader; I could then use the image to change the opacity, and easily animate the smoke by changing the offset of the texture lookup. I could also generate noise in the shader and use it the same way the texture would be used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!
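
    For comparison, a minimal procedural option - the widely circulated sin/dot hash, cheap enough to animate by feeding a time offset into the coordinate (a generic GLSL sketch, not tied to any particular engine):

        // pseudo-random value in [0,1) from a 2D coordinate; add time to p to animate
        float hash(vec2 p) {
            return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
        }

    The texture route costs one upload (once, not per frame) plus a cached lookup per pixel; the procedural route costs a few ALU instructions per pixel with zero memory traffic. On most hardware either is fine for a single laser beam, so the deciding factor is usually whether you want art-directable noise (texture) or resolution-independent noise (procedural).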


  • SDL: How would I add tile layers with my area class as a singleton?

    - by Tony
    I'm trying to wrap my head around how to get this done, if at all possible. Basically I have an Area class, a Map class and a Tile class. My Area class is a singleton, and this is causing some confusion. I'm trying to draw in this order: Background / Tiles / Entities / Overlay Tiles / UI.

        void C_Application::OnRender()
        {
            // Fill the screen black
            SDL_FillRect(Surf_Screen, &Surf_Screen->clip_rect,
                         SDL_MapRGB(Surf_Screen->format, 0x00, 0x00, 0x00));

            // Draw background

            // Draw tiles
            C_Area::AreaControl.OnRender(Surf_Screen,
                                         -C_Camera::CameraControl.GetX(),
                                         -C_Camera::CameraControl.GetY());

            // Draw entities
            for (unsigned int i = 0; i < C_Entity::EntityList.size(); i++) {
                if (!C_Entity::EntityList[i]) { continue; }
                C_Entity::EntityList[i]->OnRender(Surf_Screen);
            }

            // Draw overlay tiles

            // Draw UI

            // Update the Surf_Screen surface
            SDL_Flip(Surf_Screen);
        }

    Would be nice if someone could give a little input. Thanks.
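
    One hedged way to slot overlay tiles into that ordering without giving up the singleton: have the area render one layer at a time, and call it twice. OnRenderLayer and the layer constants below are hypothetical additions to C_Area, not existing methods:

        // Inside C_Application::OnRender(), replacing the single OnRender call:
        int camX = -C_Camera::CameraControl.GetX();
        int camY = -C_Camera::CameraControl.GetY();

        // Base tiles first, under the entities
        C_Area::AreaControl.OnRenderLayer(Surf_Screen, camX, camY, LAYER_BASE);

        // ... entity loop here, unchanged ...

        // Overlay tiles afterwards, on top of the entities
        C_Area::AreaControl.OnRenderLayer(Surf_Screen, camX, camY, LAYER_OVERLAY);

    Internally, OnRenderLayer would draw only the tiles whose layer field matches, which implies each map tile stores a layer id (or each map holds separate base and overlay tile grids).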


  • Triangles in a C++ STL Vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When drawing fails, a vertex is dislocated, and the polygon is consistently drawn with the same wrong vertex. I checked that the vector is correct during initialization, even when the polygon is wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?
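
    One concrete bug worth checking (a property of the glDrawArrays signature, not a guess about the rest of the code): the third argument is a count of vertices, but v.size() is the number of floats. With two floats per vertex, passing v.size() asks GL to read twice as many vertices as exist, so the tail of the draw reads past the end of the vector - memory whose contents vary run to run, which fits the intermittent dislocated vertex:

        // 2 floats per vertex, so the vertex count is half the float count
        glVertexPointer(2, GL_FLOAT, 0, &v[0]);
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(v.size() / 2));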


  • What's wrong with my lookAt and moveForward code?

    - by alaslipknot
    So, I am still in the process of getting familiar with libGDX, and one of the fun things I love to do is write basic methods for reuse in future projects. For now I am stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I am using is this:

        // set angle
        public void lookAt(Vector2 target) {
            float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x);
            angle = (float) (angle * (180 / Math.PI));
            setAngle(angle);
        }

        // move forward
        public void moveForward() {
            this.position.x += Math.cos(getAngle()) * this.speed;
            this.position.y += Math.sin(getAngle()) * this.speed;
        }

    And this is my render method:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0, 0, 0.0f, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            // groupUpdate();

            Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            ball.lookAt(new Vector2(mousePos.x, mousePos.y));

            if (Gdx.input.isTouched()) {
                ball.moveForward();
            }

            batch.begin();
            batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y,
                       ball.getSprite().getOriginX(), ball.getSprite().getOriginY(),
                       ball.getSprite().getWidth(), ball.getSprite().getHeight(),
                       .5f, .5f, ball.getAngle());
            batch.end();
        }

    The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I am also using this camera:

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);

    Aaaand the result was so creepy, lol. Thank you.
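
    A likely root cause, independent of the rest of the setup: lookAt stores the angle in degrees (it multiplies by 180/PI for the sprite), but Math.cos and Math.sin expect radians, so moveForward advances in an essentially arbitrary direction. A sketch of the fix - convert back, or use libGDX's degree-based helpers:

        // move forward, treating the stored angle as degrees
        public void moveForward() {
            float rad = (float) Math.toRadians(getAngle());
            this.position.x += Math.cos(rad) * this.speed;
            this.position.y += Math.sin(rad) * this.speed;
        }

    Equivalently, com.badlogic.gdx.math.MathUtils.cosDeg(getAngle()) and MathUtils.sinDeg(getAngle()) do the conversion for you.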


  • Rendering only a part of text FTGL, OpenGL

    - by Mosquito
    I'm using the FTGL library to render text in my C++ project. I can easily render text by using:

        CFontManager::Instance().renderWrappedText(font, lineLength, position, text);

    Unfortunately, there is a situation in which the Button displaying this text is partly hidden because the container it sits in has been resized. I'm able to draw the Button's background to fit the container without any problem, but I have trouble doing the same with the text. Is it possible to draw only the part of the text that fits a given width and simply ignore the rest? As the screenshot attached to the original post showed, the "Click here" button background is drawn properly, but I can't do the same with the "Click here" text.
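
    Since FTGL draws through plain OpenGL, the usual trick is to clip with a scissor rectangle rather than change the text itself - a sketch, assuming your button knows its on-screen rectangle (bx, by, bw, bh in window coordinates, with y measured from the bottom of the window):

        glEnable(GL_SCISSOR_TEST);
        glScissor(bx, by, bw, bh);  // pixels outside this rect are discarded
        CFontManager::Instance().renderWrappedText(font, lineLength, position, text);
        glDisable(GL_SCISSOR_TEST);

    The glyphs that fall outside the button are still submitted, but the rasterizer discards their pixels, so the text appears cut off exactly at the container's edge.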


  • Any recommended books/resources on component-based design?

    - by user1163640
    I come from a background with heavy use of the classical object-oriented paradigm for software development. The company I am a part of switched to Unity not too long ago, and we're all very excited to get started using it. However, one aspect that has sparked my interest, and which I think will become a very important part of our future development, is Unity's approach to component-based design with scripting, with less focus on the typical hierarchical approach. Question: I was wondering if anyone could recommend any good books on this subject? I have had trouble finding books with reliable reviews, and was wondering if anyone more experienced here has something to say on the issue. Any other kind of resource would be excellent too; I'm just interested in learning everything I can about it. This is not meant as a discussion about the best books or resources on the topic, but simply a question regarding any resources that any of you have found useful. Thank you all for your time!


  • How would I detect if two 2D arrays of any shape collided?

    - by user2104648
    Say there are two or more movable objects of any shape in a 2D plane. Each object has its own 2D boolean array acting as a bounding box, which can range from 10 to 100 pixels per side. The program reads each pixel from an image that represents the object and sets the corresponding array entry to true (the pixel's alpha is 1 or more) or false (the pixel's alpha is less than 1). Each time one of these objects moves, what would be the best, accurate way to test whether it hit another object, in Java, using as few APIs/libraries as possible?
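
    The textbook two-phase approach fits this setup directly: a cheap rectangle-overlap test first, then a per-pixel check only over the overlapping region. A self-contained sketch against the boolean masks described above (mask[y][x] layout assumed):

        /** True if two masks, placed at (ax,ay) and (bx,by), share a solid pixel. */
        static boolean collides(boolean[][] a, int ax, int ay,
                                boolean[][] b, int bx, int by) {
            // Broad phase: intersection of the two bounding rectangles
            int left   = Math.max(ax, bx);
            int top    = Math.max(ay, by);
            int right  = Math.min(ax + a[0].length, bx + b[0].length);
            int bottom = Math.min(ay + a.length,    by + b.length);
            if (left >= right || top >= bottom) return false; // rectangles don't touch

            // Narrow phase: scan only the overlap, comparing both masks pixel by pixel
            for (int y = top; y < bottom; y++) {
                for (int x = left; x < right; x++) {
                    if (a[y - ay][x - ax] && b[y - by][x - bx]) return true;
                }
            }
            return false;
        }

    With masks of at most 100x100, the worst-case narrow phase is 10,000 boolean ANDs, which is cheap; if it ever isn't, packing each row into long bitmasks turns the inner loop into a handful of & operations.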


  • Application window as polygon texture?

    - by nekome
    Is there a way, or method, to have some application rendered as a texture on a polygon in a 3D scene, and also have full interactivity with it? I'm talking about the Windows platform, and maybe OpenGL, but I guess it doesn't matter whether it's OGL or DX. For example: I run Calculator using WINAPI functions (preferably hidden, not showing on the desktop) and I want to render it inside a 3D scene on some polygon, but still be able to type or click buttons and have it respond. My idea for realizing this is to have WINAPI take a screenshot (or render it to memory if possible) of the Calculator and pass it to OpenGL as a texture for each frame (I'm experimenting with SDL through pygame), and for mouse interactivity to use coordinate translation to calculate where on the application window the event would act, and then use WINAPI functions such as SetCursorPos to set the cursor, and others to simulate a click or something else. I haven't found any tutorials on a topic similar to this one. Am I on the right track? Is there a better way to do this, if it's possible at all?
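
    The capture half of that plan is at least feasible with stock WinAPI: PrintWindow can render a window (even an obscured one) into a memory DC backed by a DIB, whose pixels you then upload with glTexSubImage2D. A compressed sketch of that path - error handling omitted, and whether a given application paints correctly via PrintWindow varies:

        HDC winDC = GetWindowDC(hwnd);
        HDC memDC = CreateCompatibleDC(winDC);

        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth = width;
        bmi.bmiHeader.biHeight = -height;       /* negative: top-down rows */
        bmi.bmiHeader.biPlanes = 1;
        bmi.bmiHeader.biBitCount = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        void *pixels = NULL;
        HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
        SelectObject(memDC, dib);

        PrintWindow(hwnd, memDC, 0);            /* ask the window to paint into memDC */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_BGRA, GL_UNSIGNED_BYTE, pixels);

    For the input half, SendMessage with WM_LBUTTONDOWN/WM_LBUTTONUP and translated client coordinates (rather than moving the real cursor with SetCursorPos) is the usual way to drive a hidden window, though many applications ignore synthesized input messages.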


  • How to implement custom texture formats in Android?

    - by random1337
    What I know: Android can load PNG, BMP, WEBP, etc. via BitmapFactory.
    What I want to achieve: load my own 2D file format (e.g. a 1-bit texture with a 1-bit alpha channel) and output an RGBA8888 texture.
    Question: is there any interface (or any other way) to achieve this? The resulting image is used as a texture for a 3D model.
    Why would you do that? Saving phone memory and download bandwidth, while expanding the texture into RAM at runtime, seems reasonable for very simple textures.
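
    BitmapFactory has no plug-in decoder interface, but nothing requires one: you can parse the bytes yourself and hand Android a finished pixel array. A sketch under an assumed layout (a full 1-bit color plane followed by a full 1-bit alpha plane, rows padded to whole bytes - an assumption about your format, not a standard):

        // Decode a hypothetical 1-bit color + 1-bit alpha format into ARGB_8888.
        static Bitmap decode(byte[] data, int w, int h) {
            int stride = (w + 7) / 8;             // bytes per 1-bit row
            int alphaPlane = stride * h;          // alpha plane follows the color plane
            int[] argb = new int[w * h];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int bit = 7 - (x & 7);        // MSB-first within each byte
                    boolean on    = ((data[y * stride + x / 8] >> bit) & 1) != 0;
                    boolean solid = ((data[alphaPlane + y * stride + x / 8] >> bit) & 1) != 0;
                    int rgb = on ? 0xFFFFFF : 0x000000;   // 1-bit color: white or black
                    argb[y * w + x] = (solid ? 0xFF000000 : 0) | rgb;
                }
            }
            return Bitmap.createBitmap(argb, w, h, Bitmap.Config.ARGB_8888);
        }

    The resulting Bitmap uploads like any other via GLUtils.texImage2D; alternatively, skip Bitmap entirely and pass a ByteBuffer of RGBA bytes straight to glTexImage2D.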


  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using sprite groups in pygame. In my situation I have 2 sprite groups, one for humans, one for "infected". A human is represented by a blue circle:

        image = pygame.Surface((32,32))
        image.fill((255,255,255))
        pygame.draw.circle(image,(0,0,255),(16,16),16)
        image = image.convert()
        image.set_colorkey((255,255,255))

    An infected by a red one (same code, different color). I update my sprite groups as follows:

        self.humans.clear(self.screen, self.bg)
        self.humans.update(time_passed)
        self.humans.draw(self.screen)
        self.infected.clear(self.screen, self.bg)
        self.infected.update(time_passed)
        self.infected.draw(self.screen)

    self.bg is defined as:

        self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT))
        self.bg.fill((255,255,255))
        self.bg.convert()

    This all works, except that when a red circle overlaps with a blue one, you can see the white corners of the bounding box around the actual circle. Within a single sprite group it works, thanks to the set_colorkey call; the artifact does not appear for overlapping blue circles or overlapping red circles. I tried adding a colorkey to self.bg, but that did not work. Same for adding a colorkey to self.screen.
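
    A plausible mechanism for corners appearing only across groups: clear() restores a background rectangle over each sprite's previous position, so the second group's clear can stamp a solid rect on top of circles the first group has already drawn this frame. Clearing every group before drawing any of them avoids that ordering hazard, and is a reasonable first thing to try:

        # clear both groups first ...
        self.humans.clear(self.screen, self.bg)
        self.infected.clear(self.screen, self.bg)
        # ... then update both ...
        self.humans.update(time_passed)
        self.infected.update(time_passed)
        # ... and only then draw both
        self.humans.draw(self.screen)
        self.infected.draw(self.screen)

    Independently, per-pixel alpha sidesteps colorkey corner cases altogether: create the circle surface with pygame.Surface((32, 32), pygame.SRCALPHA), skip set_colorkey, and use convert_alpha() instead of convert().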


  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains some transparent/semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of "grey background". In this case, it obviously impacts the R, G, B color components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
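
    Two standard levers for this situation (generic OpenGL practice; which one applies depends on the filter pass): make sure the offscreen target actually starts fully transparent, and blend the alpha channel separately from color so destination alpha stays meaningful:

        // 1. Clear the FBO to transparent black, not grey
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // 2. Blend color with source alpha, but accumulate coverage in the alpha channel
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                            GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);

    The more thorough fix is to work in premultiplied alpha end to end - glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) - so RGB can never pick up a background tint that alpha doesn't account for.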


  • How to draw a spotlight in 3D

    - by RecursiveCall
    To be clear, I am not talking about the light result (the lit area) but the spotlight itself, like this. The two common suggestions that I tried are a 2D image and a 3D cone. The problem with the pre-rendered 2D image is that it always looks 2D and flat no matter how it is rotated in world space. The cone, on the other hand, is next to impossible to control when it comes to fade distance, it doesn't look soft (smooth), and it is expensive to compute.


  • How can I keep straight alpha when rendering particles?

    - by April
    Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas turn lighter, and some even go totally white. I mainly focus on the additive and blend modes, that is:

        ADDITIVE: srcAlpha vs 1
        BLEND: srcAlpha vs 1-srcAlpha

    I tried a technique called premultiplied alpha. This technique gets you the right RGB values - it's all you need on screen. As for the alpha value, it works well with BLEND mode, but not ADDITIVE mode. As you can see in the parameters, BLEND mode always keeps the value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it just gets too big or too small relative to the RGB. What can I do now? Any help will be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software package called "Particle Illusion", where you can create various particles and then save the scene to a texture, choosing to remove the background of the particles. Now I have changed the title. In software like Maya or AE, what I want is called [straight alpha].
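
    The conversion itself is a one-liner once the render target holds premultiplied RGBA - divide color by coverage, guarding against zero (generic image-processing math, shown as GLSL):

        // premultiplied -> straight: recoverable only where alpha > 0
        vec4 unpremultiply(vec4 c) {
            return (c.a > 0.0) ? vec4(c.rgb / c.a, c.a) : vec4(0.0);
        }

    The whitening you describe is what this division does when alpha is small relative to RGB, which is exactly the additive case: additive contributions add light without adding coverage, so no single straight-alpha value can represent them faithfully. A common compromise is to derive a synthetic alpha, e.g. a = max(r, max(g, b)), before dividing - an approximation, not an exact inverse.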


  • Is it safe to run multiple XNA ContentManager instances on multiple threads?

    - by Boinst
    My XNA project currently uses one ContentManager instance and one dedicated background thread for loading all content. I wonder, would it be safe to have multiple ContentManager instances, each in its own dedicated thread, loading different content at the same time? I'm prompted to ask this question because this article makes the following statement: "If there are two textures created at the same time on different threads, they will clobber the other and you will end up with some garbage in the textures." I think what the author is saying here is that if I access one ContentManager simultaneously on two threads, I'll get garbage. But what if I have separate ContentManager instances for each thread? If no one knows the answer already from experience, I'll go ahead and try it and see what happens.
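
    A cautious reading of that quote: the hazard it describes is concurrent resource creation on the shared GraphicsDevice, which separate ContentManager instances do not remove, since they all create textures on the same device. If you experiment anyway, one conservative sketch is to let each thread do its own file I/O but serialize the device-touching Load call behind a single lock (a pattern, not an official XNA guarantee):

        static readonly object DeviceLock = new object();

        // Wrap every load, whichever ContentManager it goes through.
        public static T LoadSynchronized<T>(ContentManager content, string assetName)
        {
            lock (DeviceLock)
            {
                return content.Load<T>(assetName);
            }
        }

    This forfeits parallelism inside Load itself, but keeps multiple loader threads from interleaving device calls, which is the failure mode the quoted article warns about.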

