Search Results

Search found 12087 results on 484 pages for 'game mechanics'.


  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand tutorial 16 at http://ogldev.atspace.co.uk, which requires the image processing library ImageMagick. But when I run the tutorial, I encounter the following error:

        freeglut: failed to change screen settings
        Error loading textures 'test.png': no decode delegates for this image format 'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552

    I searched on Google and found that this error appears when the ImageMagick library does not have a PNG delegate, but when I check my ImageMagick configuration, PNG appears in its delegate list.

        Command line: convert -configure
        Result: LIB_VERSION 0x687
                DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib

    Could you explain this error to me? Thanks so much!

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and at the main draw phase it gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To make it clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. So now I need one more piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried:

    - saving the textures in PNG/TGA formats,
    - turning off pre-computed alpha in the model's properties.

    Out of every texture available to me (we use a Diffuse Map, Normal Map and Specular Map), I was only able to read alpha successfully from the Diffuse Map! Here is how I add specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    So data.a is always 1 in my case. Could you suggest a reason?

  • UIView with IrrlichtScene - iOS

    - by user1459024
    I have a UIViewController in a storyboard and want to draw an Irrlicht scene in this view controller. My code:

    WWSViewController.h:

        #import <UIKit/UIKit.h>

        @interface WWSViewController : UIViewController
        {
            IBOutlet UILabel *errorLabel;
        }
        @end

    WWSViewController.mm:

        #import "WWSViewController.h"
        #include "../../ressources/irrlicht/include/irrlicht.h"

        using namespace irr;
        using namespace core;
        using namespace scene;
        using namespace video;
        using namespace io;
        using namespace gui;

        @interface WWSViewController ()
        @end

        @implementation WWSViewController

        - (void)awakeFromNib
        {
            errorLabel = [[UILabel alloc] init];
            errorLabel.text = @"";

            IrrlichtDevice *device = createDevice(video::EDT_OGLES1,
                dimension2d<u32>(640, 480), 16, false, false, false, 0);

            /* Set the caption of the window to some nice text. Note that there is
               an 'L' in front of the string. The Irrlicht Engine uses wide character
               strings when displaying text. */
            device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");

            /* Get a pointer to the VideoDriver, the SceneManager and the graphical
               user interface environment, so that we do not always have to write
               device->getVideoDriver(), device->getSceneManager(), or
               device->getGUIEnvironment(). */
            IVideoDriver* driver = device->getVideoDriver();
            ISceneManager* smgr = device->getSceneManager();
            IGUIEnvironment* guienv = device->getGUIEnvironment();

            /* We add a hello world label to the window, using the GUI environment.
               The text is placed at the position (10,10) as top left corner and
               (260,22) as lower right corner. */
            guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!",
                rect<s32>(10,10,260,22), true);

            /* To show something interesting, we load a Quake 2 model and display it.
               We only have to get the Mesh from the Scene Manager with getMesh() and
               add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We
               check the return value of getMesh() to become aware of loading problems
               and other errors. Instead of writing the filename sydney.md2, it would
               also be possible to load a Maya object file (.obj), a complete Quake3
               map (.bsp) or any other supported file format. By the way, that cool
               Quake 2 model called sydney was modelled by Brian Collins. */
            IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2");
            if (!mesh)
            {
                device->drop();
                if (!errorLabel)
                {
                    errorLabel = [[UILabel alloc] init];
                }
                errorLabel.text = @"Konnte Mesh nicht laden."; // "Could not load mesh."
                return;
            }
            IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode(mesh);

            /* To let the mesh look a little bit nicer, we change its material. We
               disable lighting because we do not have a dynamic light in here, and
               the mesh would be totally black otherwise. Then we set the frame loop,
               such that the predefined STAND animation is used. And last, we apply a
               texture to the mesh. Without it the mesh would be drawn using only a
               color. */
            if (node)
            {
                node->setMaterialFlag(EMF_LIGHTING, false);
                node->setMD2Animation(scene::EMAT_STAND);
                node->setMaterialTexture(0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp"));
            }

            /* To look at the mesh, we place a camera into 3d space at the position
               (0, 30, -40). The camera looks from there to (0,5,0), which is
               approximately the place where our md2 model is. */
            smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));

            /* Ok, now we have set up the scene, lets draw everything: We run the
               device in a while() loop, until the device does not want to run any
               more. This would be when the user closes the window or presses ALT+F4
               (or whatever keycode closes a window). */
            while (device->run())
            {
                /* Anything can be drawn between a beginScene() and an endScene()
                   call. The beginScene() call clears the screen with a color and the
                   depth buffer, if desired. Then we let the Scene Manager and the GUI
                   Environment draw their content. With the endScene() call everything
                   is presented on the screen. */
                driver->beginScene(true, true, SColor(255,100,101,140));
                smgr->drawAll();
                guienv->drawAll();
                driver->endScene();
            }

            /* After we are done with the render loop, we have to delete the Irrlicht
               Device created before with createDevice(). In the Irrlicht Engine, you
               have to delete all objects you created with a method or function which
               starts with 'create'. The object is simply deleted by calling ->drop().
               See the documentation at irr::IReferenceCounted::drop() for more
               information. */
            device->drop();
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
        }

        - (void)viewDidUnload
        {
            [super viewDidUnload];
            // Release any retained subviews of the main view.
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
        {
            return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
        }

        @end

    Sadly, the result is just a black view in the Simulator. :( I hope someone here can explain how to draw the scene in a UIView. Furthermore, I'm getting this error:

        Could not load sprite bank because the file does not exist: #DefaultFont

    How can I fix it?

  • Determining relative velocities on impact?

    - by meds
    I'm trying to figure out a way to determine the relative velocity of a body colliding with another in a 2D environment. For example, if one body is moving at (1,0) and another travelling behind it collides with it from behind at (2,0), the velocity of the impact relative to the first body was (1,0). I need a method which takes two velocities (one belonging to the body the velocity is being measured against, the other belonging to the impacting body) and returns the relative velocity.
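
    As described, this reduces to componentwise vector subtraction: the impact velocity relative to a body is the impactor's velocity minus that body's own velocity. A minimal C++ sketch, with a hypothetical Vector2 type standing in for whatever the engine provides:

        struct Vector2 {
            float x, y;
            Vector2 operator-(const Vector2& o) const { return { x - o.x, y - o.y }; }
        };

        // Velocity of the impacting body as seen from the reference body.
        // For the example above: (2,0) - (1,0) = (1,0).
        Vector2 relativeVelocity(const Vector2& reference, const Vector2& impactor)
        {
            return impactor - reference;
        }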

  • Are there any open source projects for car engine sound simulation?

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of engine (i.e. inline-4 vs. V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: Something like http://www.sonory.org/examples.html

  • Why are trees shining in the background?

    - by Kinected
    Currently I am creating a forest scene in the dark, and the trees shine when they are far away, but when I get close they are fine. I have the shaders set to "Nature/Tree Soft Occlusion [bark/leaves]", but they still render strangely at a distance, while up close they are fine. I tried placing the trees in a folder named "Ambient-Occlusion" as suggested here, but no luck. Also, fog is turned off. Thanks in advance.

  • Algorithm to map an area [on hold]

    - by user37843
    I want to create a crawler that starts in a room and from there moves North, East, West and South until there aren't any new rooms left to visit. I don't want duplicates, and I want the output format per line to be something like this: current room, neighbour 1, neighbour 2, ... In the end I want to apply the BFS algorithm to find the shortest path between two rooms. Can anyone offer a suggestion on what to use? Thanks!
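
    A minimal C++ sketch of the crawl: a breadth-first search over rooms with a visited set, printing each room with its neighbours exactly once. Room and its four exits are hypothetical stand-ins for however the map is actually stored:

        #include <iostream>
        #include <queue>
        #include <set>
        #include <string>

        struct Room {
            std::string name;
            Room* exits[4] = {};  // N, E, W, S; null where there is no door
        };

        // Visits every reachable room once and prints
        // "current room, neighbour 1, neighbour 2, ..." per line.
        void crawl(Room* start)
        {
            std::set<Room*> seen { start };
            std::queue<Room*> frontier;
            frontier.push(start);
            while (!frontier.empty()) {
                Room* room = frontier.front();
                frontier.pop();
                std::cout << room->name;
                for (Room* next : room->exits) {
                    if (!next) continue;
                    std::cout << ", " << next->name;
                    if (seen.insert(next).second)  // true when not seen before
                        frontier.push(next);
                }
                std::cout << '\n';
            }
        }

    The shortest-path step is the same loop started from one of the two rooms, additionally recording each room's predecessor and stopping when the other room is dequeued; following predecessors back gives the path.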

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture; i.e. a point at (x,y) in the depth texture will be rendered to (x, y, texture(depth, uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out which slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.
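
    For what it's worth, a sketch of the scatter formulation in C against desktop OpenGL 3.2+ (a function loader such as GLEW or gl3w is assumed): attaching the 3D texture with glFramebufferTexture, without a layer index, makes the framebuffer layered, and a geometry shader can then route each primitive to a slice by writing gl_Layer. Drawing one point per depth-map texel lets the shaders look up the depth and pick the slice themselves; the names are illustrative:

        // Scatter pass setup: one point per texel of the depth map.
        void scatterIntoVolume(GLuint volumeTexture3D, GLuint emptyVao,
                               int depthMapWidth, int depthMapHeight)
        {
            GLuint fbo;
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            // No layer index: the whole 3D texture becomes a layered attachment,
            // so the geometry shader may choose the slice via gl_Layer.
            glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, volumeTexture3D, 0);

            // The vertex shader derives the texel coordinate from gl_VertexID,
            // samples the depth texture to find the slice, and the geometry
            // shader writes that slice to gl_Layer before emitting the point.
            glBindVertexArray(emptyVao);
            glDrawArrays(GL_POINTS, 0, depthMapWidth * depthMapHeight);
        }

    This point-scatter is in fact the usual injection step in Light Propagation Volumes implementations: each reflective-shadow-map texel becomes one point scattered into the volume.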

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 in total instead of 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy; the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.

  • Create a kind of interface in C++ [migrated]

    - by Liuka
    I'm writing a little 2D rendering framework with managers for input and for resources like textures and meshes (for 2D geometry models, like quads). They are all contained in an "engine" class that interacts with them and with a DirectX class. Each manager has some public methods like init or update, which are called by the engine class to create and render the resources, but a lot of them should not be called by the user:

        // in pseudo C++
        // the texture manager class
        class TManager
        {
        private:
            vector textures;
            ....
        public:
            init();
            update();
            renderTexture(); // called by the engine class
            loadtexture();
            gettexture();    // called by the user
        }

        class Engine
        {
        private:
            Tmanager texManager;
        public:
            Init() { /* initialize all the managers */ }
            Render() {...}
            Update() {...}
            // to get a pointer to the manager
            // if I want to create or get textures
            Tmanager* GetTManager() { return &texManager; }
        }

    In this way the user, by calling Engine::GetTManager, will have access to all the public methods of Tmanager, including init, update and renderTexture, which must be called only by Engine inside its init, render and update functions. So, is it a good idea to implement a user interface in the following way?

        // in pseudo C++
        // the texture manager class
        class TManager
        {
        private:
            vector textures;
            ....
        public:
            init();
            update();
            renderTexture(); // called by the engine class
            friend class Tmanager_UserInterface;
            operator Tmanager_UserInterface*()
            {
                return reinterpret_cast<Tmanager_UserInterface*>(this);
            }
        }

        class Tmanager_UserInterface : private Tmanager
        {
            // delete constructor
            // in this class there will be only methods like:
            loadtexture();
            gettexture();
        }

        class Engine
        {
        private:
            Tmanager texManager;
        public:
            Init()
            Render()
            Update()
            Tmanager_UserInterface* GetTManager() { return texManager; }
        }

        // in the main function: I need to load a texture,
        // and I always have access to the Engine class
        engine->GetTManager()->LoadTexture(...);
        // I can access only loadtexture and gettexture

    In this way I can implement several interfaces for each object, keeping visible only the functions I (and the user) will need. Are there better ways to do the same? Or is it just useless (I don't hide the framework-private functions, and the user will learn not to call them)? Before, I used this method:

        class manager
        {
        public:
            // engine functions
            userfunction();
        }

        class engine
        {
        private:
            manager m;
        public:
            init() { /* call manager init function */ }
            manageruserfunction() { /* call manager::userfunction() */ }
        }

    In this way I have no access to the manager class, but it's a bad approach, because if I add a new feature to the manager I need to add a new method to the engine class, and that takes a lot of time. Sorry for the bad English.
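
    A note on the sketch above: reinterpret_cast between two unrelated class types and calling methods through the result is undefined behaviour in C++, even if the classes happen to share a layout. For contrast, a minimal sketch of the conventional way to split the two audiences, with illustrative names: expose only a small abstract interface to the user and keep the full manager type private to the engine:

        #include <string>

        // The user-facing surface: only what callers outside the engine may use.
        struct ITextureStore
        {
            virtual void loadTexture(const std::string& path) = 0;
            virtual int  getTexture(const std::string& name) = 0;
            virtual ~ITextureStore() = default;
        };

        // The full manager implements the interface. init/update/renderTexture
        // are not part of ITextureStore, so only the engine, which owns the
        // concrete type, can reach them.
        class TManager : public ITextureStore
        {
        public:
            void init();
            void update();
            void renderTexture();
            void loadTexture(const std::string& path) override;
            int  getTexture(const std::string& name) override;
        };

        class Engine
        {
            TManager texManager;  // the engine sees the whole class
        public:
            ITextureStore* Textures() { return &texManager; }  // users see only the interface
        };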

  • Debugging HLSL for Windows 8 application [migrated]

    - by Shervanator
    I'm currently in the process of creating a Windows 8 application using SharpDX (the managed C# DirectX wrapper). However, I have run into problems with one of my shaders, and I want to know if it's possible to debug such applications. PIX doesn't seem to work for these DirectX apps, as the executable doesn't like being opened directly, and the new Visual Studio graphics debugging toolkit in VS2012 always states "unable to start the experiment" when I try to capture any information about my session. Thanks!

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and am currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) a 3D position to 2D has to produce completely different numbers for small changes in the input (especially because I am sampling in texture space, so the sample values differ very, very little), and it has to be shader-friendly, so forget about branches, loops, etc. I'm looking forward to your nice and fast solutions.
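
    One common branch-free construction is the sin-dot hash, sketched here in C++ with GLM so it translates line-for-line to GLSL. The magic constants are the conventional ones seen in shader code, not anything canonical:

        #include <glm/glm.hpp>

        // Maps a 3D position to a pseudo-random 2D value in [0,1). Small input
        // changes swing the sin() argument across many periods, so nearby
        // inputs decorrelate; fract() keeps only the fast-varying part.
        glm::vec2 hash32(const glm::vec3& p)
        {
            float a = glm::dot(p, glm::vec3(127.1f, 311.7f,  74.7f));
            float b = glm::dot(p, glm::vec3(269.5f, 183.3f, 246.1f));
            return glm::fract(glm::sin(glm::vec2(a, b)) * 43758.5453f);
        }

    One caveat: sin-based hashes can degrade for large inputs on some GPUs, but since the sampling here stays in texture space (small values), that mostly does not bite.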

  • How to deal with animated doors in isometric tiles

    - by George Profenza
    I've got a tricky issue I'm not sure how best to tackle: I have an animated tile of a door. When it's closed it should be sorted one way, but when it's opened it needs to be sorted a different way, as if it belonged to a different (neighbouring) tile. Here's the door closed: and the door opened: I imagine it would be possible to override the sorting system for such tiles and adjust the sorting based on the frame, but it feels a bit hacky. Has anyone encountered a similar scenario? Any elegant solutions?

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically, the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get around this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            // [0,1] measured along the length of the spline, where 0 is the
            // initial point and 1 is the end point of the spline
            public float Alpha;
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can then give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, though I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object can be closer to it. The other problem is figuring out the proportion between the two points it's between, i.e. if there is a point a at coordinate (0,0,0) and a point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be 0.5 (as it is an equal distance away from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
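
    On the proportion question: projecting the object's position onto the segment between the two cached points gives exactly that blend factor, and the projection ignores perpendicular offset, so (0.5,3,0) between (0,0,0) and (1,0,0) still yields 0.5. A sketch in C++ with GLM (illustrative, not from the engine in question):

        #include <glm/glm.hpp>

        // How far along the segment a->b the closest point to p lies, in [0,1].
        float projectOnSegment(const glm::vec3& p, const glm::vec3& a, const glm::vec3& b)
        {
            glm::vec3 ab = b - a;
            float t = glm::dot(p - a, ab) / glm::dot(ab, ab);
            return glm::clamp(t, 0.0f, 1.0f);
        }

    The same function helps with the first problem: find the nearest cached point, compute t for the segments to its previous and next neighbours, and keep the segment whose unclamped t falls inside [0,1]; that is the pair the object is between.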

  • How does this snippet of code create a ray direction vector?

    - by Isaac Waller
    In the Minecraft source code, this code is used to create a direction vector for a ray from pitch and yaw:

        float f1 = MathHelper.cos(-rotationYaw * 0.01745329F - 3.141593F);
        float f3 = MathHelper.sin(-rotationYaw * 0.01745329F - 3.141593F);
        float f5 = -MathHelper.cos(-rotationPitch * 0.01745329F);
        float f7 = MathHelper.sin(-rotationPitch * 0.01745329F);
        return Vec3D.createVector(f3 * f5, f7, f1 * f5);

    I was wondering how it works, and what the constant 0.01745329F is.
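
    The constant is pi/180 (0.0174532925...), the degrees-to-radians factor: Minecraft stores rotations in degrees, so each angle is converted before the trig calls. The rest is the standard spherical-coordinates construction of a unit direction vector. A C++ restatement of the same arithmetic with the steps named (Vec3 is a stand-in type):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 rayDirection(float yawDegrees, float pitchDegrees)
        {
            const float DEG_TO_RAD = 0.01745329f;  // pi / 180
            // The extra -pi offset and the negations adapt the maths to
            // Minecraft's particular yaw and axis conventions.
            float yaw   = -yawDegrees * DEG_TO_RAD - 3.141593f;
            float pitch = -pitchDegrees * DEG_TO_RAD;

            float horizontal = -std::cos(pitch);   // scales the x/z components
            float vertical   =  std::sin(pitch);   // the y component
            return { std::sin(yaw) * horizontal,   // f3 * f5
                     vertical,                     // f7
                     std::cos(yaw) * horizontal }; // f1 * f5
        }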

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad/iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implementing a similar behaviour in the fragment shader? Some of my ideas:

    1. Render the object completely in white, then count all the colors together using a two-pass shader: first a vertical line is rendered, and for each of its fragments the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient.

    2. Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between texels, until reaching a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels in their corresponding region. Is this even accurate enough?

    ...?
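
    For what it's worth, a sketch of the second idea in C against the ES 2.0 API. The helpers are illustrative, not GL calls; the real code would manage two FBO/texture pairs and a trivial pass-through shader. Each half-resolution pass with GL_LINEAR filtering averages 2x2 blocks, and the final 1x1 read-back is the mean coverage:

        #include <GLES2/gl2.h>

        void   bindTargetFbo(int w, int h);         // illustrative: FBO + glViewport(0,0,w,h)
        void   drawFullscreenQuad(GLuint srcTex);   // illustrative: textured quad, GL_LINEAR
        GLuint swapTargets();                       // illustrative: ping-pong the two textures

        // Returns the approximate number of white pixels in srcTex,
        // which starts as the full-resolution white-on-black rendering.
        float countCoverage(GLuint srcTex, int viewportWidth, int viewportHeight)
        {
            int w = viewportWidth, h = viewportHeight;
            while (w > 1 || h > 1) {
                w = (w + 1) / 2;
                h = (h + 1) / 2;
                bindTargetFbo(w, h);
                drawFullscreenQuad(srcTex);
                srcTex = swapTargets();
            }
            GLubyte pixel[4];
            glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
            return (pixel[0] / 255.0f) * viewportWidth * viewportHeight;
        }

    On the accuracy question: with 8-bit render targets every pass quantises the running average, so the final count is approximate; doing fewer, wider reduction passes (e.g. 4x4 box filters in the shader) or using higher-precision formats where available reduces the error.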

  • Finding the shorter turning direction towards a target

    - by A.B.
    I'm trying to implement a type of movement where the object gradually turns to face the target. The problem I've run into is figuring out which turning direction is faster. The following code works until the object's orientation crosses the -PI or PI threshold, at which point it starts turning in the opposite direction:

        void moveToPoint(sf::Vector2f destination)
        {
            if (destination == position) return;

            auto distance = distanceBetweenPoints(position, destination);
            auto direction = angleBetweenPoints(position, destination);

            /// Decides whether incrementing or decrementing orientation is faster.
            /// The next line is the problem:
            if (atan2(sin(direction - rotation), cos(direction - rotation)) > 0) {
                /// Increment rotation
                rotation += rotation_speed;
            } else {
                /// Decrement rotation
                rotation -= rotation_speed;
            }

            if (distance < movement_speed) {
                position = destination;
            } else {
                position.x = position.x + movement_speed * cos(rotation);
                position.y = position.y + movement_speed * sin(rotation);
            }
            updateGraphics();
        }

    'rotation' and 'rotation_speed' are implemented as a custom data type for radians which cannot hold values lower than -PI or greater than PI; any excess or deficit "wraps around". For example, -3.2 becomes ~3.08.
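
    The atan2(sin, cos) test is the standard shortest-direction formula, so the failure at the PI seam may come from the custom wrapping type evaluating "direction - rotation" rather than from the formula itself. A sketch, assuming plain float radians, that computes the signed shortest delta with std::remainder and also avoids overshooting the target:

        #include <cmath>

        constexpr float kPi = 3.14159265f;

        // Signed shortest angular difference in [-pi, pi]: positive means
        // incrementing (turning counter-clockwise) is the shorter way around.
        float shortestDelta(float target, float current)
        {
            return std::remainder(target - current, 2.0f * kPi);
        }

        // Turn at most rotation_speed toward the target without overshooting.
        float turnToward(float current, float target, float rotation_speed)
        {
            float delta = shortestDelta(target, current);
            if (std::fabs(delta) <= rotation_speed)
                return target;  // close enough: snap to the target angle
            return current + std::copysign(rotation_speed, delta);
        }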

  • Calculate vector direction

    - by Starkers
    Is the direction angle always measured from the positive x axis? Does a vector in the +,+ quadrant always have a direction between 0 and 90, in -,+ between 90 and 180, in -,- between 180 and 270, and in +,- between 270 and 360? Also, how should we calculate the direction using tan? Would that mean nested if statements to find out which quadrant we're in, and then applying the appropriate "workarounds"? E.g. if we were in the -,+ quadrant (like in the diagram), the angle from the positive x axis would be 90 + tan^-1(y/x), the "90 +" only being used because we're in the -,+ quadrant. That's just a quick solution and may be off; I just want to know whether we use nested if statements to get the angle from the positive x axis. Finally, should we express the direction in degrees or radians?
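
    No nested if statements are needed if atan2 is available: it takes the signs of both components into account and returns the angle from the positive x axis directly, in radians. A small C++ example that also normalises the result to [0, 360) degrees:

        #include <cmath>

        // Angle of (x, y) measured counter-clockwise from the +x axis.
        // std::atan2 already resolves the quadrant from the two signs.
        float directionDegrees(float x, float y)
        {
            float radians = std::atan2(y, x);                // in (-pi, pi]
            float degrees = radians * 180.0f / 3.14159265f;  // radians -> degrees
            return (degrees < 0.0f) ? degrees + 360.0f : degrees;
        }

        // directionDegrees(1.0f, 1.0f)  == 45
        // directionDegrees(-1.0f, 1.0f) == 135   (the -,+ case from the question)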

  • Help with converting an XML into a 2D level (ActionScript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except that the level my code generates is not the same as what I see in Ogmo. I've checked the array, and it fits with the level in Ogmo, but when I loop through it with my code I get the wrong thing. I've included my code for creating the level as well as an image of what I get and what I'm supposed to get. EDIT: I tried to add the image, but I couldn't get it to display properly. Also, if any of you know of better level editors, please let me know.

        xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
        xmlLoader.load(new URLRequest("Level1.oel"));

        function LoadXML(e:Event):void
        {
            levelXML = new XML(e.target.data);
            xmlFilter = levelXML.*;
            for each (var levelTest:XML in levelXML.*)
            {
                crack = levelTest;
            }
            levelArray = crack.split('');
            trace(levelArray);
            count = 0;
            for (i = 0; i <= 23; i++)
            {
                for (j = 0; j <= 35; j++)
                {
                    if (levelArray[i*36 + j] == 1)
                    {
                        block = new Platform;
                        s.addChild(block);
                        block.x = j*20;
                        block.y = i*20;
                        count++;
                        trace(i);
                        trace(block.x);
                        trace(j);
                        trace(block.y);
                    }
                }
            }
            trace(count);
        }

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pac-Man-styled dungeon crawler, using the free Oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, and the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tiles), it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor((newPos.x / 32));
            int col = (int) Math.floor((newPos.y / 32));
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem is around indexing, but I'm totally confused, because the origin of libGDX's coordinate system is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles and looks like the following (the starting position of the player is marked in blue).

  • How can I get accurate collision resolution on the corners of rectangles?

    - by ssb
    I have a working collision system implemented, and it's based on minimum translation vectors. This works fine in most cases, except when the minimum translation vector is not actually in the direction of the collision. For example: when a rectangle is on the far edge on any side of another rectangle, a force can be applied (in this example, downward) that pushes one rectangle into the other, particularly into a static object like a wall or a floor. As in the picture, the collision is coming from above, but because it's on the very edge, it translates to the left instead of back up. I've searched for a while to find an approach, but everything I can find deals with general corner collisions, whereas my problem is only in this one limited case. Any suggestions?

  • How to determine if a 3D voxel-based room is sealed, efficiently

    - by NigelMan1010
    I've been having some issues with efficiently determining whether large rooms are sealed in a voxel-based 3D world. I'm at a point where I have tried my hardest to solve the problem without asking for help, but not hard enough to give up, so I'm asking for help. To clarify, "sealed" means that there are no holes in the room. There are oxygen sealers, which check whether the room is sealed and seal it depending on the oxygen input level. Right now, this is how I'm doing it:

    - Starting at the block above the sealer tile (the vent is on the sealer's top face), recursively loop through all 6 adjacent directions.
    - If the adjacent tile is a full, non-vacuum tile, continue through the loop.
    - If the adjacent tile is not full, or is a vacuum tile, check its adjacent blocks recursively.
    - Each time a tile is checked, decrement a counter.
    - If the counter hits zero and the last block is adjacent to a vacuum tile, return that the area is unsealed.
    - If the counter hits zero and the last block is not a vacuum tile, or the recursive loop ends (no vacuum tiles left) before the counter reaches zero, the area is sealed.
    - If the area is not sealed, run the loop again with some changes: check adjacent blocks for "breathable air" tiles instead of vacuum tiles, and instead of using a decrementing counter, continue until no adjacent "breathable air" tiles are found. Once the loop is finished, set each checked block to a vacuum tile.

    Here's the code I'm using: http://pastebin.com/NimyKncC

    The problem: I'm running this check every 3 seconds. Sometimes a sealer will have to loop through hundreds of blocks, and in a large world with many oxygen sealers these multiple recursive loops every few seconds can be very hard on the CPU. I was wondering if anyone with more experience in optimization could give me a hand, or at least point me in the right direction. Thanks a bunch.
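
    For comparison, a compact C++ sketch of the same bounded fill done iteratively (a breadth-first search), which avoids deep recursion and stops as soon as vacuum is reached or the budget runs out. isSolid and isVacuum are stand-ins for the game's own tile queries:

        #include <queue>
        #include <set>
        #include <tuple>

        using Pos = std::tuple<int, int, int>;

        bool isSolid(const Pos& p);   // illustrative world queries
        bool isVacuum(const Pos& p);

        // Flood fill outward from the vent, giving up after `budget` tiles.
        // Returns true if the region closed off without touching vacuum.
        bool isSealed(const Pos& vent, int budget)
        {
            static const int d[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0},
                                         {0,-1,0}, {0,0,1}, {0,0,-1} };
            std::set<Pos> seen { vent };
            std::queue<Pos> frontier;
            frontier.push(vent);
            while (!frontier.empty()) {
                auto [x, y, z] = frontier.front();
                frontier.pop();
                for (const auto& dir : d) {
                    Pos next { x + dir[0], y + dir[1], z + dir[2] };
                    if (isSolid(next) || !seen.insert(next).second)
                        continue;                      // wall, or already visited
                    if (isVacuum(next)) return false;  // hole to space: unsealed
                    if (--budget <= 0) return false;   // too large to seal
                    frontier.push(next);
                }
            }
            return true;
        }

    An explicit queue like this also makes it easy to spread the work across frames: process a fixed number of tiles per tick and resume from the saved frontier, instead of doing the whole fill every 3 seconds.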

  • Bump mapping problem in GLSL

    - by jmfel1926
    I am having a slight problem with my bump mapping project. Although everything works OK (at least from what I know), there is a slight mistake somewhere, and I get incorrect shading on the brick wall when the light goes to one side or the other, as seen in the picture below: The light is on the right side, so the shading on the wall should be the other way. I have provided the shaders to help find the issue (I do not have much experience with shaders).

    Vertex shader:

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        attribute vec3 tangent;
        attribute vec3 binormal;

        uniform vec3 lightpos;
        uniform mat4 cameraMat;

        void main()
        {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
            position = vec3(gl_ModelViewMatrix * gl_Vertex);
            lightvec = vec3(cameraMat * vec4(lightpos, 1.0)) - position;
            vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex);
            viewVec = normalize(-eyeVec);
        }

    Fragment shader:

        uniform sampler2D colormap;
        uniform sampler2D normalmap;

        varying vec3 viewVec;
        varying vec3 position;
        varying vec3 lightvec;

        vec3 vv;

        uniform float diffuset;
        uniform float specularterm;
        uniform float ambientterm;

        void main()
        {
            vv = viewVec;
            vec3 normals = normalize(texture2D(normalmap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
            normals.y = -normals.y;
            //normals = (normals * gl_NormalMatrix).xyz;
            vec3 distance = lightvec;
            float dist_number = length(distance);
            float final_dist_number = 2.0 / pow(dist_number, diffuset);
            vec3 light_dir = normalize(lightvec);
            vec3 Halfvector = normalize(light_dir + vv);
            float angle = max(dot(Halfvector, normals), 0.0);
            angle = pow(angle, specularterm);
            vec3 specular = vec3(angle, angle, angle);
            float diffuseterm = max(dot(light_dir, normals), 0.0);
            vec3 diffuse = diffuseterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 ambient = ambientterm * texture2D(colormap, gl_TexCoord[0].st).rgb;
            vec3 diffusefinal = diffuse * final_dist_number;
            vec3 finalcolor = diffusefinal + specular + ambient;
            gl_FragColor = vec4(finalcolor, 1.0);
        }
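
    One thing that stands out in the shaders above: the fragment shader samples a tangent-space normal, but lightvec and viewVec are computed in view space, and the tangent/binormal attributes declared in the vertex shader are never used. The lighting basis and the normal then live in different spaces, which is a classic cause of shading that flips as the light moves to one side. A hedged sketch of the missing transform, in C++ with GLM so it maps directly onto the vertex-shader code:

        #include <glm/glm.hpp>

        // Express a view-space vector in tangent space using the per-vertex
        // basis. In the vertex shader this would be applied to both the light
        // vector and the view vector before passing them to the fragment stage.
        glm::vec3 toTangentSpace(const glm::vec3& v,
                                 const glm::vec3& tangent,
                                 const glm::vec3& binormal,
                                 const glm::vec3& normal)
        {
            // Dotting against each basis vector is multiplication by the
            // transpose of the TBN matrix, i.e. its inverse for an
            // orthonormal basis.
            return glm::vec3(glm::dot(v, tangent),
                             glm::dot(v, binormal),
                             glm::dot(v, normal));
        }

    With both vectors in tangent space, the dot products against the sampled normal become consistent, and the ad-hoc normals.y flip can be revisited, since it may only be compensating for the mismatch.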

  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do some collision detection between objects and tiles, but the tiles do not have their own x and y positions; they are just rendered at the x and y positions given. There is an array of integers which holds the ids of the tiles to use (the ids are taken from an image, with the different colors assigned to different tiles):

        int x0 = camera.x / 16;
        int y0 = camera.y / 16;
        int x1 = (camera.x + screen.width) / 16;
        int y1 = (camera.y + screen.height) / 16;
        for (int y = y0; y < y1; y++) {
            if (y < 0 || y >= height) continue; // height is the height of the level
            for (int x = x0; x < x1; x++) {
                if (x < 0 || x >= width) continue; // width is the width of the level
                getTile(x, y).render(screen, x * 16, y * 16);
            }
        }

    I tried using the level's getTile method to check whether the tile the object is about to move into is a certain tile, but it only seems to work in some directions. Any ideas on what I'm doing wrong, and fixes, would be greatly appreciated. What's wrong is that it doesn't collide properly in every direction. This is how I tested for a collision in the object's class:

        if (!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) {
            x += xa;
            y += ya;
        }

    EDIT: xa and ya represent the direction as well as the movement: if xa is negative the object is moving left, if it's positive it is moving right, and the same for ya, except negative is up and positive is down.
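
    Two things in the test above plausibly explain the direction-dependence: it checks only the sprite's top-left point, and integer division truncates toward zero, so for negative coordinates (x + xa) / 16 picks the wrong tile. A C++ sketch of a corner-based test with a proper floor division, with Level and Tile as stand-ins for the question's own types:

        struct Tile  { bool solid; bool isSolid() const { return solid; } };
        struct Level { Tile getTile(int tx, int ty) const; };  // stand-in

        // Floor division: -1 / 16 should be tile -1, not tile 0.
        int floorDiv(int a, int b)
        {
            return (a >= 0) ? a / b : -((-a + b - 1) / b);
        }

        // True if any corner of the w-by-h box at (nx, ny) lands in a solid tile.
        bool blocked(const Level& level, int nx, int ny, int w, int h)
        {
            for (int cx = 0; cx <= 1; ++cx)
                for (int cy = 0; cy <= 1; ++cy) {
                    int tx = floorDiv(nx + cx * (w - 1), 16);
                    int ty = floorDiv(ny + cy * (h - 1), 16);
                    if (level.getTile(tx, ty).isSolid())
                        return true;
                }
            return false;
        }

    The move then becomes: if (!blocked(level, x + xa, y + ya, w, h)) { x += xa; y += ya; }, checking the box the sprite would occupy rather than a single point.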

  • Falling CCSprites

    - by Coder404
    I'm trying to make CCSprites fall from the top of the screen, and I'm planning to use a touch delegate to determine when they fall. How could I make CCSprites fall, starting from something like this:

        -(void)addTarget
        {
            Monster *target = nil;
            if ((arc4random() % 2) == 0) {
                target = [WeakAndFastMonster monster];
            } else {
                target = [StrongAndSlowMonster monster];
            }

            // Determine where to spawn the target along the Y axis
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            int minY = target.contentSize.height / 2;
            int maxY = winSize.height - target.contentSize.height / 2;
            int rangeY = maxY - minY;
            int actualY = (arc4random() % rangeY) + minY;

            // Create the target slightly off-screen along the right edge,
            // and along a random position along the Y axis as calculated above
            target.position = ccp(winSize.width + (target.contentSize.width / 2), actualY);
            [self addChild:target z:1];

            // Determine speed of the target
            int minDuration = target.minMoveDuration; // 2.0;
            int maxDuration = target.maxMoveDuration; // 4.0;
            int rangeDuration = maxDuration - minDuration;
            int actualDuration = (arc4random() % rangeDuration) + minDuration;

            // Create the actions
            id actionMove = [CCMoveTo actionWithDuration:actualDuration
                                                position:ccp(-target.contentSize.width / 2, actualY)];
            id actionMoveDone = [CCCallFuncN actionWithTarget:self
                                                     selector:@selector(spriteMoveFinished:)];
            [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

            // Add to targets array
            target.tag = 1;
            [_targets addObject:target];
        }

    This code makes CCSprites move from the right side of the screen to the left. How could I change it to make the CCSprites move from the top of the screen to the bottom?
