Search Results

Search found 19281 results on 772 pages for 'blender game engine'.

Page 406/772

  • How to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created a custom effect and just want lighting, but with the
    default effect that XNA creates you have to do texture mapping or the model
    appears red, because of these lines in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1, 0, 0, 1);
            return output;
        }

    To see my model (looking the way it does with BasicEffect) I would have to do
    texture mapping with UV coordinates, but my models have no UV coordinates
    assigned (or their UV coordinates were not exported), and if I do the texture
    mapping (with output.UV = input.UV in the vertex shader function, plus the
    other necessary code) I get an error. I have many of these models, all in
    .FBX format, and want to work with them. When I use BasicEffect I have no
    problem and the model appears correctly. How can I use just lighting in my
    custom effect, without texture mapping (because my models have no UV
    coordinates), and make the model look the way it does with BasicEffect? My
    complete code is here: http://www.mediafire.com/?4jexhd4ulm2icm2 and here is
    my model rendered with BasicEffect: http://i.imgur.com/ygP2h.jpg?1 This is my
    drawing code, with or without BasicEffect, inside my Draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
                           Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
                           Matrix.CreateTranslation(Position);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;
                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }
            mesh.Draw();
        }

    setEffectParameter is another method that sets an effect parameter when I use
    my custom effect.
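
    A minimal C#-side sketch of the lighting-only idea, assuming a custom effect
    that declares hypothetical WorldInverseTranspose, LightDirection and
    DiffuseColor parameters and whose pixel shader returns something like
    DiffuseColor * saturate(dot(normal, -LightDirection)) instead of sampling a
    texture, so no UV coordinates are needed:

        // Hypothetical parameter names; the .fx file must declare them.
        Matrix worldInverseTranspose = Matrix.Transpose(Matrix.Invert(localWorld));
        setEffectParameter(effect, "WorldInverseTranspose", worldInverseTranspose);
        setEffectParameter(effect, "LightDirection", Vector3.Normalize(new Vector3(1f, -1f, 0f)));
        setEffectParameter(effect, "DiffuseColor", Color.White.ToVector4());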

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to
    the fact that swapping the effect on a mesh has a relatively high cost, which
    surprises me: I thought swapping an effect was simply a case of copying the
    replacement shader program to the GPU along with the appropriate parameters.
    Could someone explain exactly what is costly about this process, and, if
    possible, put 'relatively' into context? For example, say I wanted to use a
    short shader to help with picking. I would:

        1. Change the effect on every object, calculating a unique color to
           identify it and providing it to the shader.
        2. Draw all the objects to a render target in memory.
        3. Read the color back from the target and use it to look up the
           selected object.

    What portion of the total time taken to complete that process would be spent
    swapping the shaders? My instinct says that rendering the scene again, no
    matter how simple the shader, would be an order of magnitude slower than any
    other part of the process, so why all the concern over effects?
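
    For reference, a sketch of that picking pass in XNA terms (PickColor is a
    hypothetical effect parameter, and objects[i].Draw(...) stands in for
    however the scene submits geometry):

        // Render each object in a unique flat color, then read the pixel
        // under the cursor to identify what was clicked.
        RenderTarget2D pickTarget = new RenderTarget2D(device, width, height);
        device.SetRenderTarget(pickTarget);
        device.Clear(Color.Black); // 0 = "nothing"
        for (int i = 0; i < objects.Count; i++)
        {
            int id = i + 1; // reserve 0 for the background
            Color key = new Color(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
            pickEffect.Parameters["PickColor"].SetValue(key.ToVector4());
            objects[i].Draw(pickEffect);
        }
        device.SetRenderTarget(null);

        Color[] pixel = new Color[1];
        pickTarget.GetData(0, new Rectangle(mouseX, mouseY, 1, 1), pixel, 0, 1);
        int picked = (pixel[0].R | (pixel[0].G << 8) | (pixel[0].B << 16)) - 1;
        // picked == -1 means the click hit nothing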

    Read the article

  • loading 3d model data into buffers

    - by mulletdevil
    I am using Assimp to load 3D model data, and I have noticed that each loaded
    model is made up of several meshes. I was wondering: should each mesh have
    its own vertex/index buffer, or should there just be one for the whole model?
    Looking through the loaded index data, it seems to suggest that I will need a
    vertex buffer per mesh, but I'm not 100% sure. I am using C++ and DirectX 9.
    Thank you, Mark
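
    Both layouts can work; a common middle ground is one shared vertex/index
    buffer for the whole model plus a small record per mesh, so each mesh remains
    individually drawable. A sketch of the bookkeeping, written in XNA-style C#
    for brevity (D3D9's DrawIndexedPrimitive takes the same kind of offsets):

        // Per-mesh view into the shared buffers.
        struct MeshRange
        {
            public int BaseVertex;     // added to every index of this mesh
            public int StartIndex;     // first index in the shared index buffer
            public int PrimitiveCount; // number of triangles in this mesh
        }

        // Drawing each mesh out of the single shared buffer pair.
        foreach (MeshRange m in meshes)
        {
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                m.BaseVertex, 0, vertexCount, m.StartIndex, m.PrimitiveCount);
        }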

    Read the article

  • What is the correct and most efficient approach to streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS).

    Initialization:

        1. Create two VBOs and one index buffer (since I will use the same
           indices), both VBOs the same size.
        2. Create two VAOs and configure them, both bound to the same index
           buffer.

    Each frame:

        1. Choose a VBO/VAO pair (different from the previous frame, so I'm
           alternating).
        2. Bind that VBO.
        3. Upload the new data using glBufferSubData(GL_ARRAY_BUFFER, ...).
        4. Bind the VAO.
        5. Render my stuff using glDrawElements(GL_***, ...).
        6. Unbind the VAO.

    However, someone told me to avoid uploading data (step 3) and then
    immediately rendering the new data (step 5), because the glDrawElements call
    will stall until the buffer has actually been uploaded to VRAM. He suggested
    instead drawing all the geometry uploaded during the previous frame, and
    uploading in the current frame what will be drawn in the next frame, so that
    everything is rendered with a delay of one frame. Is this true, or is my
    approach a good way to work with streaming vertex data? (I do know that the
    pipeline will stall the other way around, i.e. when you draw and immediately
    try to change the buffer data. But I'm not doing that, since I implemented
    double buffering.)
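
    A sketch of the alternating pattern in OpenTK-style C# bindings (shown for
    illustration only; the question is about ordering, not the binding API). An
    easy variant of the same scheme is a pool of three buffers instead of two,
    which gives the driver one extra frame of slack:

        // Pool of pre-created, equally sized VBO/VAO pairs (filled at init).
        int[] vbos, vaos;
        int frame;

        void RenderFrame(float[] verts, int indexCount)
        {
            int slot = frame % vbos.Length; // round-robin through the pool
            GL.BindBuffer(BufferTarget.ArrayBuffer, vbos[slot]);
            GL.BufferSubData(BufferTarget.ArrayBuffer, IntPtr.Zero,
                             (IntPtr)(verts.Length * sizeof(float)), verts);
            GL.BindVertexArray(vaos[slot]);
            GL.DrawElements(PrimitiveType.Triangles, indexCount,
                            DrawElementsType.UnsignedShort, 0);
            GL.BindVertexArray(0);
            frame++;
        }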

    Read the article

  • how to add a water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image occupies, say, the top 3/4
    of the screen, and the remaining 1/4 shows a reflection of it with some waves
    (a water effect). I'm not sure how to do this, but here's my approach:

        1. Render the given texture to another texture, called the mirror
           texture (maybe FBOs can help me here?).
        2. Invert the mirror texture (scale it by -1 along Y).
        3. Render the mirror texture at 3/4 of the screen height.
        4. Add some noise to it, or, using a pixel shader and a time value, set
           pixel.z = sin(time) to make it wavy.

    (Tech: C++/OpenGL/GLSL.) Is my approach correct? Is there a better way to do
    this? Also, can someone please tell me whether frame buffer objects would be
    the right choice here? Thanks
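
    The approach is sound, and an FBO (render-to-texture) is exactly the right
    tool for step 1. A sketch of the compositing, written in XNA-style C# for
    brevity (DrawScene is a hypothetical helper; the same steps map onto an FBO
    and flipped texture coordinates in OpenGL):

        // Draw the image into an off-screen target, then present it twice:
        // upright in the top 3/4, and mirrored in the bottom 1/4.
        device.SetRenderTarget(sceneTarget);
        DrawScene();
        device.SetRenderTarget(null);

        spriteBatch.Begin();
        spriteBatch.Draw(sceneTarget,
            new Rectangle(0, 0, width, height * 3 / 4), Color.White);
        // Mirror the strip just above the waterline, dimmed a little.
        spriteBatch.Draw(sceneTarget,
            new Rectangle(0, height * 3 / 4, width, height / 4),
            new Rectangle(0, sceneTarget.Height * 3 / 4,
                          sceneTarget.Width, sceneTarget.Height / 4),
            Color.White * 0.6f, 0f, Vector2.Zero,
            SpriteEffects.FlipVertically, 0f);
        spriteBatch.End();
        // A wave shader would then offset the reflection's texture x-coordinate
        // by something like sin(y * frequency + time) * amplitude.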

    Read the article

  • Clear-edged sprite

    - by Ananth
    I am a newbie to cocos2d. I would like to let the user draw, similar to what
    a painting brush would do. I am using CCSprite for that. I have almost
    implemented the velocity, color and opacity factors for the tool, but I
    cannot get the sprite to be as clear as it should be. I can only draw strokes
    with soft edges, as in this image: http://i.imgur.com/KBe0L.png. But I want
    harder, clearer outside edges, as in http://i.stack.imgur.com/GrFlv.png. I
    have no idea how to get clean edges. The piece of code I'm using is:

        glEnable(GL_BLEND);
        [brush.texture setAliasTexParameters];
        [brush setBlendFunc:(ccBlendFunc){GL_ONE, GL_ONE_MINUS_SRC_ALPHA}];
        [brush visit];

    I suspect the problem is the blending mode. I tried some blending modes, but
    without the expected results. I have been trying this for the past five days
    and am quite confused. Can someone help me sort this out? Thanks in advance.

    Read the article

  • Errors happen when using World.destroyBody( Body body )

    - by minami
    In an Android application using libgdx, when I use World.destroyBody(Body
    body), the application occasionally shuts down suddenly. Is there some
    setting I need to apply to body collision or to Box2DDebugRenderer before I
    destroy bodies? Below is the source I use for destroying bodies:

        private void deleteUnusedObjects() {
            // Destroying a body while iterating over the world's bodies
            // invalidates the iterator, and crashes natively if the world is
            // locked (e.g. during step()), so collect first, destroy after.
            List<Body> toDestroy = new ArrayList<Body>();
            for (Iterator<Body> iter = mWorld.getBodies(); iter.hasNext(); ) {
                Body body = iter.next();
                if (body.getUserData() != null) {
                    Box2DUserData data = (Box2DUserData) body.getUserData();
                    if (!data.getActFlag()) {
                        toDestroy.add(body);
                    }
                }
            }
            // Safe now, as long as this runs outside world.step().
            for (Body body : toDestroy) {
                mWorld.destroyBody(body);
            }
        }

    Thanks

    Read the article

  • Tile map collision is not working properly

    - by Sigh-AniDe
    I am having problems setting up collision between my sprite and the tiles. So
    far I have only written the collision code for moving upwards, but in some
    places on the map the sprite moves up and in other places it doesn't. Here is
    what I have so far:

        Vector2 position;
        private static float scalingFactor = 32;
        static int tileWidth = 32;
        static int tileHeight = 32;

        int[,] map =
        {
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
            { 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
        };

        // This is in turtle.update
        if (keyboardState.IsKeyDown(Keys.Up))
        {
            if (position.Y > screenHeight / 4)
            {
                // current position of the turtle on the tiles
                int mapX = (int)(position.X / scalingFactor);
                int mapY = (int)(position.Y / scalingFactor) - 1;
                if (isMovable(mapX, mapY, map))
                {
                    position.Y = position.Y - scalingFactor;
                }
            }
            else
            {
                MoveUp();
            }
        }

        private void MoveUp()
        {
            motion.Y = -1;
        }

        public bool isMovable(int mapX, int mapY, int[,] map)
        {
            if (mapX < 0 || mapX > 19 || mapY < 0 || mapY > 20)
            {
                return false;
            }
            int tile = map[mapX, mapY];
            if (tile == 0)
            {
                return false;
            }
            return true;
        }

        protected override void Update(GameTime gameTime)
        {
            turtle.Update(screenHeight, scalingFactor, map);
            base.Update(gameTime);
        }
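
    A likely culprit (a sketch of a possible fix, not a confirmed diagnosis): a
    C# int[,] initialized row by row is indexed [row, column], i.e. [mapY, mapX],
    and this map has 19 rows of 20 columns, so the bounds checks are swapped too:

        // Hypothetical corrected lookup for the row-major int[,] map above.
        public bool isMovable(int mapX, int mapY, int[,] map)
        {
            if (mapX < 0 || mapX >= map.GetLength(1)) return false; // 20 columns
            if (mapY < 0 || mapY >= map.GetLength(0)) return false; // 19 rows
            return map[mapY, mapX] != 0; // row (Y) indexes first
        }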

    Read the article

  • Distance between two 3D objects' faces

    - by Arthur Gibraltar
    I'm really new to programming and I'm doing some tests. I couldn't find
    anywhere on the Internet how to calculate the distance between two 3D
    objects' faces. Is there a way? In detail, as an example, I have two 3D
    cubes. Each one has a Vector3 position designating its center in 3D space, an
    orientation matrix, and a size (float width, float height and float length).
    I can get a simple distance between them by calling Vector3.Distance(), but
    that only considers the positions, not the sizes, so the distance is between
    their centers. Is there any way to calculate the distance between their
    faces? Thanks for any reply.
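
    For the axis-aligned case there is a closed form: clamp the per-axis gap
    between the centers by the summed half-sizes, then take the length of what
    remains. A sketch (for oriented cubes you would first transform one cube into
    the other's local frame, which this sketch does not do):

        using System;
        using Microsoft.Xna.Framework;

        static float AabbSurfaceDistance(Vector3 centerA, Vector3 halfA,
                                         Vector3 centerB, Vector3 halfB)
        {
            // Per-axis gap between the surfaces; zero where the boxes overlap.
            float dx = Math.Max(0f, Math.Abs(centerA.X - centerB.X) - (halfA.X + halfB.X));
            float dy = Math.Max(0f, Math.Abs(centerA.Y - centerB.Y) - (halfA.Y + halfB.Y));
            float dz = Math.Max(0f, Math.Abs(centerA.Z - centerB.Z) - (halfA.Z + halfB.Z));
            return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
        }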

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

        1. Broad phase: loops over all objects that can interact; false
           positives are allowed if they speed up the loop.
        2. Narrow phase: determines whether two objects collide and, sometimes,
           how; no false positives.
        3. Resolution: resolves the collision.

    My question is about the narrow phase. There are multiple algorithms,
    differing in complexity and accuracy:

        - Hitbox intersection: an a posteriori algorithm with the lowest
          complexity, but not very accurate.
        - Color intersection: hitbox intersection for each pixel; a posteriori
          and pixel-perfect, but not accurate with respect to time, and of
          higher complexity.
        - Separating axis theorem: used more often and accurate for triangles,
          but a posteriori, as it can't find the contact edge; taking the last
          frame into account makes it more stable.
        - Linear raycasting: an a priori algorithm, useful for
          semi-realistic-looking physics; finds the intersection point, more
          accurate than SAT, but with more complexity.
        - Spline interpolation: a priori, even more accurate than linear rays,
          with even more complexity.

    There are probably many more that I've forgotten. The question is: when is
    it better to use SAT, when rays, and when splines, and is there anything
    better?
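
    For reference, the core of a 2D SAT overlap test is small; a minimal sketch
    for convex polygons, using XNA-style Vector2:

        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Xna.Framework;

        static class Sat
        {
            // Two convex polygons overlap iff no edge normal of either
            // polygon separates their projections.
            public static bool Overlap(Vector2[] a, Vector2[] b)
            {
                foreach (Vector2 axis in EdgeNormals(a).Concat(EdgeNormals(b)))
                {
                    Project(a, axis, out float minA, out float maxA);
                    Project(b, axis, out float minB, out float maxB);
                    if (maxA < minB || maxB < minA)
                        return false; // found a separating axis
                }
                return true;
            }

            static IEnumerable<Vector2> EdgeNormals(Vector2[] poly)
            {
                for (int i = 0; i < poly.Length; i++)
                {
                    Vector2 e = poly[(i + 1) % poly.Length] - poly[i];
                    yield return new Vector2(-e.Y, e.X); // perpendicular to edge
                }
            }

            static void Project(Vector2[] poly, Vector2 axis, out float min, out float max)
            {
                min = max = Vector2.Dot(poly[0], axis);
                for (int i = 1; i < poly.Length; i++)
                {
                    float d = Vector2.Dot(poly[i], axis);
                    if (d < min) min = d;
                    if (d > max) max = d;
                }
            }
        }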

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (which may be PNG, JPG, ...) and I want it decoded in
    Android without changing its pixel depth (bpp). In particular, when I load a
    grayscale (8 bpp) image that I want to use as alpha (glTexImage() with
    GL_ALPHA), Android converts it to 16 bpp (presumably 5_6_5). While I do have
    a plan B (actually, I'm probably on plan E by now; this is really becoming
    annoying), I would really like to find an easy way to do this using what is
    readily available in the API. So far, I'm using
    BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a
    native environment via JNI (passing the buffer in from C, and a new buffer
    back to C from Java). A portable solution in C/C++ would be preferable, but I
    don't want to introduce anything that might break in future versions of
    Android, etc.

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of my OpenGL code, and I cannot
    get my matrices correct. The API states: "We're right handed by default, and
    we treat y as up by convention. Since IwGx's coordinate system has (0,0) as
    the top left, you typically need a 180 degree rotation around Z in your view
    matrix. I think the viewer does this by default." In my OpenGL app I have
    access to the view and projection matrices separately. How can I convert them
    to fit the criteria used by the third-party rendering API? I don't understand
    what they mean by rotating 180 degrees around Z: does that go in the view
    matrix itself, or is it something applied to the camera before building the
    view matrix? Any code would be helpful, thanks.
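
    A sketch of the usual reading of that note, in XNA-style row-vector math
    (with column-vector conventions the factors swap sides):

        // Append a 180-degree roll about Z to the finished view matrix, so a
        // y-up scene lands in the API's (0,0)-top-left, y-down convention.
        Matrix flip = Matrix.CreateRotationZ(MathHelper.Pi);
        Matrix apiView = view * flip; // ordinary view first, then the flip
        // The projection matrix typically passes through unchanged.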

    Read the article

  • Solving 2D Collision Detection Issues with Relative Velocities

    - by Jengerer
    Imagine a situation where two objects are moving parallel to one another and
    are both within range to collide with a static wall. A common method in
    dynamic collision detection is to loop through all objects in arbitrary
    order, solve the pair-wise collision detection using relative velocities, and
    then move each object to its nearest collision, if any. However, in this
    case, if the red object is checked against the blue one first, it sees that
    the relative velocity toward the blue object is -20 m/s (and so they would
    not collide this time frame). Then it sees that it would collide with the
    static wall, so the "solution" is to move it to the wall, and the red object
    passes through the blue one. So it appears to be a matter of choosing the
    right order in which to check collisions; but how can you determine which
    order is correct? How can this passing-through of objects be avoided? Is
    ignoring relative velocity, and treating every object as static during the
    pair-wise checks, a better idea for this reason?
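
    One order-independent alternative (a sketch; Collision, FindEarliestCollision
    and Resolve are hypothetical placeholders for your own pair-wise test):
    resolve by earliest time of impact rather than by array order, advancing the
    whole scene in sub-steps so no object can tunnel through a collision that
    happens earlier in the frame.

        float remaining = dt;
        while (remaining > 0f)
        {
            // Earliest contact among all pairs within the remaining time,
            // or null when nothing collides.
            Collision first = FindEarliestCollision(objects, remaining);
            float step = (first != null) ? first.Time : remaining;

            foreach (var obj in objects)
                obj.Position += obj.Velocity * step; // advance everyone together

            if (first != null)
                Resolve(first); // then handle that single contact

            remaining -= step;
        }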

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the thread "most efficient AABB vs Ray collision algorithms" I have
    seen some very fast algorithms for computing the ray/AABB intersection point.
    Unfortunately, most of the recent algorithms gain their speed by omitting the
    "output" intersection point of the box. In my application I am interested in
    both distances along the ray: t0, from the ray source to the input point of
    the bounding box, and t1, from the source to the output point. For instance,
    Eisemann designed a very fast version compared with Plücker, Smits, and
    others, but it does not cover the case where both the input and output
    distances are needed; see http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/.
    Does someone know where I can find more information on algorithm performance
    for this specific input/output problem? Thank you in advance
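
    For reference, the classic slab method already produces both distances at no
    extra cost; a sketch (invD holds the precomputed component-wise reciprocals
    of the ray direction, so axis-parallel rays fall out of the IEEE infinities,
    though an origin lying exactly on a slab plane can still yield NaN):

        using System;

        // Returns true plus the entry/exit distances t0 <= t1 along the ray
        // origin + t * direction; t0 is negative if the origin is inside.
        static bool RayAabb(float[] origin, float[] invD,
                            float[] min, float[] max,
                            out float t0, out float t1)
        {
            t0 = float.NegativeInfinity;
            t1 = float.PositiveInfinity;
            for (int i = 0; i < 3; i++)
            {
                float tNear = (min[i] - origin[i]) * invD[i];
                float tFar  = (max[i] - origin[i]) * invD[i];
                if (tNear > tFar) { float tmp = tNear; tNear = tFar; tFar = tmp; }
                if (tNear > t0) t0 = tNear; // latest entry across the slabs
                if (tFar  < t1) t1 = tFar;  // earliest exit across the slabs
                if (t0 > t1) return false;  // slab intervals stopped overlapping
            }
            return t1 >= 0f; // otherwise the whole box lies behind the ray
        }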

    Read the article

  • Pixel Collision - Detecting corners

    - by Milkboat
    How would I go about detecting the corners of a texture when I use per-pixel
    collision detection? I have read about corner collision with rectangles, but
    I am unsure how to adapt it to my situation. Right now my map is tile-based,
    and I do rectangular collision until the player intersects a blocked tile,
    then I switch to pixel collision. The effect I would like to achieve is that
    when the player hits the corner of an object, he is pushed around the side,
    so he doesn't just hit the edge and stop. Any ideas?
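
    One simple scheme worth trying (a sketch; CollidesPixels is a hypothetical
    wrapper around your per-pixel test): when the straight move is blocked, probe
    a few laterally offset positions and slide into the first free one. Near a
    corner only a thin sliver of pixels overlaps, so a small nudge succeeds,
    while a flat wall stays blocked on both sides.

        Vector2 desired = position + motion;
        if (!CollidesPixels(desired))
        {
            position = desired; // path is clear
        }
        else
        {
            for (int nudge = 1; nudge <= 4; nudge++)
            {
                Vector2 right = desired + new Vector2(nudge, 0);
                if (!CollidesPixels(right)) { position = right; break; }

                Vector2 left = desired - new Vector2(nudge, 0);
                if (!CollidesPixels(left)) { position = left; break; }
            }
        }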

    Read the article

  • Component-based design: handling object interaction

    - by Milo
    I'm not sure how exactly objects do things to other objects in a
    component-based design. Say I have an Obj class, and I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could another object then not only move the ball, but also have those
    physics applied? I'm not looking for implementation details, but rather,
    abstractly, how objects communicate. In an entity-based design, you might
    just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that would give me a better grasp of
    component-driven design, and of how to do basic things with it, would be
    really helpful.
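
    One common pattern (a sketch, not the only option; all names here are
    hypothetical): components never reach into each other directly, but react to
    messages routed through their owning entity, so a sender only needs the
    target Obj, not its concrete components.

        using System.Collections.Generic;

        interface IMessage { }

        class ApplyForce : IMessage
        {
            public float X, Y, Z;
        }

        abstract class Component
        {
            public virtual void Receive(IMessage m) { } // override to react
        }

        class Obj
        {
            readonly List<Component> components = new List<Component>();
            public void Add(Component c) { components.Add(c); }

            // Broadcast to every component; e.g. Physics handles ApplyForce
            // by adding the force to its accumulator, others ignore it.
            public void Send(IMessage m)
            {
                foreach (Component c in components) c.Receive(m);
            }
        }

        // Roughly the equivalent of obj1.emitForceOn(obj2, 5.0, 0.0, 0.0):
        // obj2.Send(new ApplyForce { X = 5f, Y = 0f, Z = 0f });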

    Read the article

  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an
    entity (e.g. the left missile battery). Therefore I started implementing
    sub-entities as follows: entities are objects with three arrays:

        - pointers to components,
        - pointers to sub-entities, and
        - communication subscribers (a temporary implementation).

    Now, when an entity is built it has a few components, as you might expect,
    and I can also attach sub-entities, which are handled by some dedicated code
    in the Entity and Component classes. I noticed the sub-entities share data
    with the parent in three places:

        - position: a sub-entity uses the parent's position, plus its own as an
          offset;
        - scripts: sub-entities drain ammo and energy from the parent;
        - physics: sub-entities add weight to the parent.

    I did this to move forward quickly, but as I slowly fix the current
    implementation I wonder if it was a mistake. Is my current implementation
    something commonly done? Will it paint me into a corner? I thought it might
    be better to create some sort of SubEntityComponent where sub-entities are
    attached and handled, but before changing anything I wanted to seek the
    community's wisdom.

    Read the article

  • My server is behind a router. How can I see my website correctly? [closed]

    - by Tokyo Dan
    I'm running a web server (Ubuntu) on my local home network, behind a router.
    On the WAN side I have a direct (public) IP. When I'm not on my home network,
    accessing my website via the public IP displays it correctly and everything
    works. On my home LAN, behind the router, accessing the website via the
    public IP gets me to my router's admin login page; this is normal. But
    accessing the website via its LAN IP address from another computer on my home
    LAN gets me to the website with a broken layout, and clicking on any link
    takes me to the public IP (my router's admin login page). How can I get my
    website to display properly, with working links, when accessing it from my
    home LAN?
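
    The broken layout and dead links suggest the pages reference absolute URLs
    built from the public address. If the router doesn't support NAT loopback
    (hairpinning), a common workaround is to point the site's hostname at the
    LAN address on the machines inside the LAN, for example with a hosts entry
    (example values shown; substitute your real hostname and LAN IP):

        # /etc/hosts on each LAN machine
        # (C:\Windows\System32\drivers\etc\hosts on Windows)
        192.168.1.10    www.example.com example.com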

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress: massive, procedurally
    generated wilderness and land, like the examples in this very useful article.
    I was wondering how I could apply this on a much larger scale; the scale of
    Minecraft comes to mind (isn't that something like 8x the surface of the
    Earth?). Pseudo-infinite is perhaps the best term. The article talks about
    fractal Perlin noise. I am by no means an expert on it, but I get the general
    idea (it's a kind of randomly generated noise that is semi-coherent, so not
    just random pixel values). I could define regions X by X in size, add some
    region-loading machinery, and have one patch of noise generate each region,
    but that would result in huge numbers of islands. At the other extreme, I
    don't think I can generate one supermassive sheet of Perlin noise, and that
    would just be one big island anyway. I'm pretty sure Perlin noise, or some
    noise, is the answer in some way. The map looks really nice, and you could
    replace the ASCII with tiles and get something very nice-looking. Anyone have
    any ideas? Thanks. :D
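
    The usual trick (a sketch; Noise2 stands in for any coherent-noise function
    such as Perlin or simplex): sample a single global noise field at world-space
    coordinates, so regions generated independently still agree along their
    shared borders instead of forming separate islands.

        // Generates one region (chunk) of a pseudo-infinite heightmap.
        float[,] GenerateChunk(int chunkX, int chunkY, int size)
        {
            var height = new float[size, size];
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                {
                    // World-space sample position: continuous across chunks.
                    float wx = (chunkX * size + x) * 0.01f;
                    float wy = (chunkY * size + y) * 0.01f;
                    height[x, y] = FractalNoise(wx, wy);
                }
            return height;
        }

        // Sums several octaves of the base noise for fractal detail.
        float FractalNoise(float x, float y)
        {
            float sum = 0f, amplitude = 1f, frequency = 1f;
            for (int octave = 0; octave < 5; octave++)
            {
                sum += amplitude * Noise2(x * frequency, y * frequency);
                amplitude *= 0.5f;
                frequency *= 2f;
            }
            return sum;
        }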

    Read the article

  • iOS : Creating a 3D Compass

    - by Md. Abdul Munim
    Originally posted here: iOS : Creating a 3D Compass. Hi everybody, I'm quite
    new to this forum. I posted the same question on Stack Overflow, and some
    people there suggested moving it here so I could get quick help from more
    specialists in this area. So what's the big matter? I want to make a 3D metal
    compass in iOS which will have a movable cover. That is, when you touch it
    with three fingers and move your fingers upward, the cover moves with them,
    and after a certain distance it opens; when you pull it down with three
    fingers again, it closes. I cannot attach an image here, as I don't have
    enough reputation, so I ask you to check the original question linked at the
    top. Is this possible using Core Animation and CALayers, or would I have to
    use OpenGL ES? Please help me out; I badly need this, and I need to complete
    it ASAP!

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side, and a freely movable
    stick (or bar, or tube). This stick can be inserted through the hole into the
    box, and the hole is exactly as wide as the stick's diameter. In reality, if
    you held the end of the stick in your hand and moved the hand left/right or
    up/down, the other end of the stick, inside the box, would move in the
    opposite direction, because the stick is fixed at the pivot point where it
    enters the box through the hole. (I hope you understand what I mean so far.)
    Now I need to simulate such a setup in a 3D program. I have already developed
    an Ogre3D framework for this application, including Bullet, but I don't know
    how to implement what I have described above. The application must include
    two more features:

        1. The scene camera is attached to the end of the stick inside the box,
           so when the user moves the mouse (to control his end of the stick
           outside the box), the camera attached to the stick moves in the
           opposite direction, as described above.
        2. The stick has some length, and the user can push it further into the
           box or pull it back out. That means the maximum radius on which the
           camera end of the stick can move depends on how far the stick has
           been pushed in: the further in it is, the larger that radius becomes.

    I understand this may be quite complex, so I don't expect real source code
    here. As I said, I already have the Ogre and Bullet parts up and running,
    including a camera attached to the stick; that works fine. What I don't know
    is how to simulate the setup described above, especially the requirement that
    the stick is fixed at the position of the hole where it enters the box. Any
    ideas how I could approach implementing this?
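
    If driving this through Bullet proves awkward, the kinematics alone are a
    small computation; a sketch with XNA-style vectors for brevity, assuming the
    hole acts as a fixed pivot and the stick stays straight:

        // The inside end mirrors the hand's offset about the pivot, and its
        // reach grows with how much of the stick is inside the box.
        Vector3 handOffset = handPosition - pivot;        // outside segment
        float outsideLength = handOffset.Length();
        float insideLength = stickLength - outsideLength; // remainder is inside
        Vector3 direction = Vector3.Normalize(handOffset);
        Vector3 insideEnd = pivot - direction * insideLength; // opposite side
        // Attach the camera at insideEnd, looking back along -direction.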

    Read the article

  • Split vector vs matrix notation for transformation

    - by seahorse
    Some rendering engines, like Ogre, prefer an individual vector-based notation
    for transformations: a split vector notation, where the net transformation is
    represented by

        - a scale vector (sx, sy, sz),
        - a translation vector (tx, ty, tz), and
        - a rotation quaternion (w, x, y, z).

    Other engines simply use one combined transformation matrix. What are the
    advantages of the first notation over the second? Also, for animation
    interpolation, does the first notation work by interpolating the individual
    components and then combining the interpolated parts into the net
    transformation? Is that another advantage?
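
    That is indeed the usual payoff: each part interpolates cleanly on its own,
    whereas lerping whole matrices shears and shrinks the rotation part. A sketch
    in XNA-style C#:

        // Blend scale and translation linearly, rotation spherically.
        Vector3 scale = Vector3.Lerp(scale0, scale1, t);
        Vector3 trans = Vector3.Lerp(trans0, trans1, t);
        Quaternion rot = Quaternion.Slerp(rot0, rot1, t);

        // Rebuild the net transformation from the blended parts.
        Matrix world = Matrix.CreateScale(scale)
                     * Matrix.CreateFromQuaternion(rot)
                     * Matrix.CreateTranslation(trans);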

    Read the article

  • Tangent basis calculation problem

    - by Kirill Daybov
    I have a problem with seams when calculating the tangent basis in my
    application. I'm using a seemingly correct algorithm, but it gives wrong
    results on the seams. What am I doing wrong? Is there a problem with the
    algorithm, or with the model? The designer says that our models, with our
    normal maps, render correctly in the Xoliul Shader plugin in 3ds Max, so
    there should be a way to calculate a correct tangent basis programmatically.
    Here's an example of the problem I'm talking about. Steps I've already taken:

        - Tried a different algorithm (from Gamasutra; I can't post the link
          because I don't have enough reputation yet). I got wrong, much worse,
          results.
        - Tried averaging the basis vectors of vertices that are used in
          multiple faces.
        - Tried averaging the basis vectors of vertices that have the same world
          coordinates (this would obviously be a wrong solution, but I tried it
          anyway).

    Read the article

  • Where can I find (for free) or buy "next-gen" 3D assets?

    - by Valmond
    Usually I buy 3D assets from sites like turbosquid.com or similar. My problem
    is that I have lately implemented glow, normal maps, specular (and specular
    power) maps, and reflection maps, and I can't find any models that use those
    techniques. So where can I find or buy "next-gen" assets (at least
    models/items with a normal map)? I have checked for similar posts, but those
    I found are about either free-only assets, 2D, or "ordinary" 3D, so I hope
    this is not a duplicate.

    Read the article

  • XNA C# How to draw fonts in different colors

    - by XNA newbie
    I'm doing a simple chat system with XNA C#: a chat box that contains five
    lines of text typed by the users, something like an MMORPG chat system:

        [User1name] says: Hi
        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.

    When the user presses Enter, the text he entered is added to an ArrayList:

        chatList.Add(s);

    To display the entered text, I use:

        for (int i = 0; i < chatList.Count(); i++)
        {
            spriteBatch.DrawString(font, chatList[i], new Vector2(40, arr1[i]), Color.Yellow);
        }

    (arr1[i] contains the five y-axis points at which the five chat lines are
    drawn in the chat box.)

    Question 1: What if I have another type of message that gets added to
    chatList (something like a system message)? I need system messages printed
    in red. And if the users keep chatting, the chat box must update accordingly
    (max five lines): the newest line is shown at the bottom, and the oldest is
    deleted once five lines are reached.

        [User2name] says: Hello
        [User1name] says: What are you doing?
        [User2name] says: I'm fine
        [System] The time is now 1:03AM.
        [User1name] says: Ok, great to hear that!

    I'm having trouble printing each line in the color matching its message
    type: yellow for normal messages, red for system messages.

    Question 2: Next, I need the chat text to be white while the users' names
    stay yellow (like the Warcraft III chat system). How do I do that? I'm having
    a hard time thinking of a solution for these two problems. Advice needed.
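
    A sketch of one way to handle both questions (ChatLine and its fields are
    hypothetical names): store the color and the speaker's name with each line
    instead of a bare string, draw the name and the body as two DrawString
    calls, and use SpriteFont.MeasureString to place the second call. White
    versus red then simply falls out of the stored color.

        // Each line carries its own color; Name is null for system messages.
        struct ChatLine
        {
            public string Name;      // e.g. "[User1name]", or null for [System]
            public string Text;
            public Color TextColor;  // Color.White normally, Color.Red for system
        }

        List<ChatLine> chatList = new List<ChatLine>();

        void AddLine(ChatLine line)
        {
            chatList.Add(line);
            while (chatList.Count > 5)  // keep only the newest five lines
                chatList.RemoveAt(0);
        }

        void DrawChat(SpriteBatch spriteBatch, SpriteFont font, float[] arr1)
        {
            for (int i = 0; i < chatList.Count; i++)
            {
                ChatLine line = chatList[i];
                Vector2 pos = new Vector2(40, arr1[i]);
                if (line.Name != null)
                {
                    string prefix = line.Name + " says: ";
                    spriteBatch.DrawString(font, prefix, pos, Color.Yellow);
                    pos.X += font.MeasureString(prefix).X; // body starts after name
                }
                spriteBatch.DrawString(font, line.Text, pos, line.TextColor);
            }
        }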

    Read the article
