Search Results

Search found 28914 results on 1157 pages for 'cloud development'.


  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as primitive lines: it says I cannot render more than around 1 million. I wonder how I can do this, and also whether there is a better way to render them other than with lines; I'm comfortable with 1-pixel points or anything else, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
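
    For what it's worth, a minimal sketch of the usual workaround, assuming the limit is a per-draw-call cap (XNA enforces one) rather than a buffer-size cap: keep the points in one buffer and submit them in chunks. Written in Java as a language-neutral illustration; drawLines() is a hypothetical stand-in for the engine's draw call, e.g. XNA's GraphicsDevice.DrawPrimitives.

      public class ChunkedDraw {
          static final int MAX_PER_CALL = 1_000_000; // engine-dependent per-call primitive cap

          static void drawLines(int startVertex, int primitiveCount) {
              // stand-in: issue one GPU draw call covering this chunk of the buffer
          }

          static void drawAll(int totalPrimitives) {
              for (int start = 0; start < totalPrimitives; start += MAX_PER_CALL) {
                  drawLines(start, Math.min(MAX_PER_CALL, totalPrimitives - start));
              }
          }
      }

    Chunking only lifts the API limit; whether 134 million primitives per frame is feasible is a separate fill-rate and bandwidth question.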

    Read the article

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago, using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now: I don't know whether learning OpenGL 2.0 would be overdoing it, or whether I should just stick to the old 1.0. There is a good intro to various tutorials written on Steffen Itterheim's blog (see this post). I would like to add to my game: a blur effect on the bullets (here is a tutorial for OpenGL 1.0); a waveform (see above); some realistic water ripples (here is a nice sample code). So, given that I don't want to overdo things but at the same time I want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials, or should I use only OpenGL 1.0 code? How can I avoid confusion? The compiler seems to accept both, but there are conflicting calls in some circumstances. I am fairly sure this has some explanation; is it documented somewhere?

    Read the article

  • How many players can UDK support without Networking

    - by N0xus
    I've been looking for the answer to this for some time now, but cannot find anything helpful online. What I want to know is how many players the UDK can support on one single machine. An example of this would be GoldenEye on the N64: on that, you could get 4 players all playing the same game at the same time using split screen. Does anyone know if the UDK is capable of doing something similar?

    Read the article

  • libgdx draw issue and animation

    - by johnny-b
    I can't seem to get the draw method to work: bullet.draw(batcher) does nothing, and I can't understand why, since the bullet is a sprite. I made a Sprite[] and used it for an animation; could that be it? I tried

      batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY(),
              bullet.getOriginX() / 2, bullet.getOriginY() / 2, bullet.getWidth(), bullet.getHeight(),
              1, 1, bullet.getRotation());

    but that doesn't work; the only call that draws anything is

      batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY());

    The code is below.

      // this is in an Asset class
      texture = new Texture(Gdx.files.internal("SpriteN1.png"));
      texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
      bullet1 = new Sprite(texture, 380, 350, 45, 20);
      bullet1.flip(false, true);
      bullet2 = new Sprite(texture, 425, 350, 45, 20);
      bullet2.flip(false, true);
      Sprite[] bullets = { bullet1, bullet2 };
      bulletAnimation = new Animation(0.06f, bullets);
      bulletAnimation.setPlayMode(Animation.PlayMode.LOOP);

      // this is the GameRenderer class
      public class GameRenderer {
          private Bullet bullet;
          private Ball ball;

          public GameRenderer(GameWorld world) {
              myWorld = world;
              cam = new OrthographicCamera();
              cam.setToOrtho(true, 480, 320);
              batcher = new SpriteBatch();
              // Attach batcher to camera
              batcher.setProjectionMatrix(cam.combined);
              shapeRenderer = new ShapeRenderer();
              shapeRenderer.setProjectionMatrix(cam.combined);
              // Call helper methods to initialize instance variables
              initGameObjects();
              initAssets();
          }

          private void initGameObjects() {
              ball = GameWorld.getBall();
              bullet = myWorld.getBullet();
              scroller = myWorld.getScroller();
          }

          private void initAssets() {
              ballAnimation = AssetLoader.ballAnimation;
              bulletAnimation = AssetLoader.bulletAnimation;
          }

          public void render(float runTime) {
              Gdx.gl.glClearColor(0, 0, 0, 1);
              Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);

              batcher.begin();
              // Disable transparency: good for performance when drawing
              // images that do not require it.
              batcher.disableBlending();
              // The ball needs transparency, so we enable it again.
              batcher.enableBlending();
              batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime),
                      ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight());
              batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                      bullet.getX(), bullet.getY());
              // End SpriteBatch
              batcher.end();
          }
      }

      // this is the GameWorld class
      public class GameWorld {
          public static Ball ball;
          private Bullet bullet;
          private ScrollHandler scroller;

          public GameWorld() {
              ball = new Ball(480, 273, 32, 32);
              bullet = new Bullet(10, 10);
              scroller = new ScrollHandler(0);
          }

          public void update(float delta) {
              ball.update(delta);
              bullet.update(delta);
              scroller.update(delta);
          }

          public static Ball getBall() { return ball; }
          public ScrollHandler getScroller() { return scroller; }
          public Bullet getBullet() { return bullet; }
      }

    Is there any way to make the sprite work?
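
    For what it's worth, a sketch of the full-signature call, assuming libgdx's SpriteBatch.draw(TextureRegion, x, y, originX, originY, width, height, scaleX, scaleY, rotation): the origin arguments are the pivot for rotation and scale, usually half the frame's width and height. If Bullet.getOriginX() already returns a centre (or defaults to 0), dividing it by 2 puts the pivot in the wrong place; this is a guess at the culprit, not a confirmed fix.

      batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
              bullet.getX(), bullet.getY(),
              bullet.getWidth() / 2f, bullet.getHeight() / 2f, // pivot at the frame's centre
              bullet.getWidth(), bullet.getHeight(),
              1f, 1f,                                          // no extra scaling
              bullet.getRotation());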

    Read the article

  • How do I apply 2 rotations about different points to a single primitive using OpenGL

    - by Fenoec
    I'm working on a 2D top-down shooter with a rotation feature like Realm of the Mad God: pressing E rotates the camera clockwise around the character, and Q rotates it counterclockwise. I have this working for my floors and walls by translating to the character, applying the screen rotation, and drawing everything with the character as the origin. The problem arises when I shoot projectiles, which need to both rotate around the character and rotate around themselves (shooting uses the mouse cursor, so I can shoot at any angle). For example, if the screen is not rotated and I'm shooting rectangular projectiles, I want them to face the direction I'm shooting (rotation around themselves). But if I only apply this rotation, then when I rotate the screen the projectiles will always shoot toward the same position, because my cursor position does not change. So I would need to either also rotate the projectiles around the character, or rotate the mouse cursor position to get the correct position (which would totally screw up all of the collision detection). Likewise, if I only apply the screen rotation to the projectiles, the rectangles will always face the same way, and they would only look correct when shooting straight up or straight down. So my question is: how can I perform two rotations on a primitive around two different points? The only way I can think of is to translate to the character and apply the screen rotation, then somehow calculate the translation required to move back to the middle of the projectile (seeing as my axes are now rotated) and apply its own rotation. Or am I thinking about this the wrong way, and is there a different solution to accomplishing this effect?
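
    One way to make the composition concrete: bake both anchored rotations into a single transform and run the projectile's vertices through it. A small Java sketch using java.awt.geom (all coordinates and angles are hypothetical placeholders); with AffineTransform, the rotation appended last is applied to the point first, so the projectile's own spin is appended after the camera rotation:

      import java.awt.geom.AffineTransform;
      import java.awt.geom.Point2D;

      public class TwoPivotRotation {
          public static void main(String[] args) {
              double charX = 100, charY = 100, screenAngle = Math.PI / 4; // camera rotation about the character
              double projX = 140, projY = 100, projAngle = Math.PI / 6;   // projectile's facing about its own centre

              AffineTransform t = new AffineTransform();
              t.rotate(screenAngle, charX, charY); // applied second: rotate the scene about the character
              t.rotate(projAngle, projX, projY);   // applied first: spin the projectile about its centre

              // transform any projectile-local vertex, e.g. one corner of its rectangle:
              Point2D corner = t.transform(new Point2D.Double(projX + 5, projY + 2), null);
              System.out.println(corner);
          }
      }

    The same composition works with OpenGL matrix stacks: apply the camera rotation about the character, then translate to the projectile's centre and rotate about it. The "translation back to the middle of the projectile" falls out of the second anchored rotation, and the collision world never needs to know about either rotation.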

    Read the article

  • fragment shader directional light positioning with camera

    - by meWantToLearn
    I'm trying to set up directional lighting in the fragment shader, so that the direction of my light moves with the camera position.

      #version 150 core
      uniform sampler2D diffuseTex;
      uniform vec4 lightColour;
      uniform vec3 lightDirection;

      vec3 LNorm = normalize(lightDirection);
      vec3 normal = normalize(IN.normal);
      vec3 calColour = lightColour[i].rgb * intensity;
      gl_FragColor = vec4(diffuse.rbg * calColour, diffuse.a);

    It lights the entire scene.
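
    For reference, the fragment as pasted never computes intensity. A sketch of the usual directional-light (Lambertian) term, assuming n is the surface normal and l the normalized light direction, both expressed in the same space:

      \[ I = \max\bigl(0,\ \hat{n}\cdot(-\hat{l})\bigr), \qquad \text{colour} = I \cdot \text{lightColour} \cdot \text{diffuse} \]

    If the whole scene comes out uniformly lit, the usual suspects are a missing or constant dot product, or the normal and light direction living in different spaces; for a light that follows the camera, lightDirection would be transformed by the camera/view rotation before the dot product.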

    Read the article

  • How can I log key presses in Game Maker?

    - by skeletalmonkey
    I'm trying to create a log of a player's actions as they play a game of Spelunky. The easiest way I've found to do this is to log which keys are pressed each frame. What I don't know is how to integrate this with the Game Maker source code of Spelunky. Is there a specific way to create a script that is checked every frame/tick (I don't know the right term), and a command to find out which buttons are pressed?

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I read a lot about flow theory and its applications to video games, and an idea has stuck in my mind: if weight values are applied to different parameters of a game level (i.e. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333, or, say, the golden ratio, would it be a good idea to arrange the levels in an order such that overall complexity increases by roughly that ratio from level to level? Has anybody tried it?
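
    As a sketch of the idea (weights and parameters hypothetical, not taken from any particular game):

      \[ C_{\text{level}} = w_1\,\text{size} + w_2\,\text{enemies} + w_3\,\text{strength} + w_4\,\text{variance}, \qquad C_{n+1} = r\,C_n, \quad r \approx 1.3333 \]

    The open question is empirical: whether a constant ratio r matches how player skill actually grows, which is what keeps players in flow.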

    Read the article

  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world-space coordinates for the shading calculations. When filling the g-buffer, the vertex shader output looks like this:

      worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
      normal = normalize(normalModelMatrix * inNormal);
      gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation, I render the scene as a full-screen quad and save an occlusion parameter to a texture. (Vertex positions in world space: link. Normals in world space: link.) The SSAO implementation:

      subroutine (RenderPassType)
      void ssao() {
          vec2 texCoord = CalcTexCoord();
          vec3 worldPos = texture(texture0, texCoord).xyz;
          vec3 normal = normalize(texture(texture1, texCoord).xyz);
          vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
          vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
          vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
          vec3 bitangent = cross(normal, tangent);
          mat3 tbn = mat3(tangent, bitangent, normal);
          float occlusion = 0.0;
          float radius = 4.0;

          for (int i = 0; i < kernelSize; ++i) {
              vec3 pix = tbn * kernel[i];
              pix = pix * radius + worldPos;
              vec4 offset = vec4(pix, 1.0);
              offset = ProjectionMatrix * ViewMatrix * offset;
              offset.xy /= offset.w;
              offset.xy = offset.xy * 0.5 + 0.5;
              float sample_depth = texture(texture0, offset.xy).z;
              float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
              occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
          }
          outputColor = vec4(occlusion, occlusion, occlusion, 1);
      }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link. Is it even possible to use world coordinates in the above code? When I move the camera I get different results, even though the world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should have white areas only near the object.
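
    One detail worth checking, sketched as math rather than offered as a confirmed fix: the tutorial's range check compares view-space depths, whereas worldPos.z is just a world-axis coordinate, so its meaning changes as the camera turns. A per-sample view-space depth can be recovered from the stored world position as

      \[ d_{\text{view}} = -\Bigl( V \cdot \begin{pmatrix} p_{\text{world}} \\ 1 \end{pmatrix} \Bigr)_{z} \]

    where V is the ViewMatrix. Comparing the view depth of the kernel sample against the view depth rebuilt from the fetched world position keeps the test consistent under camera motion, even though the g-buffer itself stores world coordinates.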

    Read the article

  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand tutorial 16 at http://ogldev.atspace.co.uk, which requires the image-processing library ImageMagick. But when I run the tutorial, I encounter the following error:

      freeglut: failed to change screen settings
      Error loading textures 'test.png': no decode delegates for this image format
      'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552

    I searched Google and found that this happens when ImageMagick does not have the PNG delegate, but when I check my ImageMagick configuration, PNG appears in its delegate list. Command line:

      convert -configure

    Result:

      LIB_VERSION 0x687
      DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib

    Could you explain this error to me? Thanks!

    Read the article

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and during the main draw phase it gathers plenty of data from the objects it draws. To save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To be clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the render target. Now I need another piece of information stored in the alpha channel, but I cannot make the shader read it properly: no matter what I write into alpha, it ends up being 1 (255). I tried saving the textures in PNG/TGA formats, and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a diffuse map, a normal map and a specular map), I was only able to read alpha successfully from the diffuse map! Here is how I add the specular and normal maps to my model's material in the content processor:

      if (geometry.Material.Textures.ContainsKey(normalMapKey))
      {
          ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
          geometry.Material.Textures.Remove("NormalMap");
          geometry.Material.Textures.Add("NormalMap", texRef);
      }
      ...
      foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
      {
          if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
              mat.Textures.Add(texture.Key, texture.Value);
      }

    In the shader I obviously use:

      float4 data = tex2D(specularMapSampler, TexCoords);

    So data.a is always 1 in my case. Could you suggest a reason?

    Read the article

  • Top down space game control problem

    - by Phil
    As the title suggests, I'm developing a top-down space game. I'm not looking to use Newtonian physics for the player-controlled ship. I'm trying to achieve a control scheme somewhat similar to that of FlatSpace 2 (awesome game). I can't figure out how to achieve this feeling with keyboard controls as opposed to mouse controls, though. Any suggestions? I'm using Unity3D, and C# or JavaScript (UnityScript, or whatever the correct term is) works fine if you want to drop some code examples.

    Edit: Of course I should describe FlatSpace 2's control scheme, sorry. You hold the mouse button down and move the mouse in the direction you want the ship to move in. But it's not the controls I don't know how to do; it's the feeling of a mix of driving a car and flying an aircraft. It's really well made. YouTube link: FlatSpace 2 on iPhone. I'm not developing an iPhone game, but the video shows the principle of the movement style.

    Edit 2: As there seems to be a slight interest, I'll post the version of the code I've used to continue. It works well enough. Sometimes good enough is sufficient!

      using UnityEngine;
      using System.Collections;

      public class ShipMovement : MonoBehaviour
      {
          public float directionModifier;
          float shipRotationAngle;
          public float shipRotationSpeed = 0;
          public double thrustModifier;
          public double accelerationModifier;
          public double shipBaseAcceleration = 0;
          public Vector2 directionVector;
          public Vector2 accelerationVector = new Vector2(0, 0);
          public Vector2 frictionVector = new Vector2(0, 0);
          public int shipFriction = 0;
          public Vector2 shipSpeedVector;
          public Vector2 shipPositionVector;
          public Vector2 speedCap = new Vector2(0, 0);

          void Update()
          {
              directionModifier = -Input.GetAxis("Horizontal");
              shipRotationAngle += (shipRotationSpeed * directionModifier) * Time.deltaTime;
              thrustModifier = Input.GetAxis("Vertical");
              accelerationModifier = (shipBaseAcceleration * thrustModifier) * Time.deltaTime;
              directionVector = new Vector2(Mathf.Cos(shipRotationAngle), Mathf.Sin(shipRotationAngle));
              //accelerationVector = Vector2(directionVector.x * System.Convert.ToDouble(accelerationModifier), directionVector.y * System.Convert.ToDouble(accelerationModifier));
              accelerationVector.x = directionVector.x * (float)accelerationModifier;
              accelerationVector.y = directionVector.y * (float)accelerationModifier;

              // Set friction based on how "floaty" you want the controls
              shipSpeedVector.x *= 0.9f; // Use a variable here
              shipSpeedVector.y *= 0.9f; // <-- as well
              shipSpeedVector += accelerationVector;
              shipPositionVector += shipSpeedVector;
              gameObject.transform.position = new Vector3(shipPositionVector.x, 0, shipPositionVector.y);
          }
      }

    Read the article

  • What game systems exist which uses camera input?

    - by Marc Pilgaard
    My group and I are in the middle of a semester project, and we are currently researching which game systems use a camera as input or as an interactive medium. We would like some help listing game systems that use camera input, as it seems hard to find other examples. Currently we know that webcam browser games use camera input (Newgrounds webcam games), as well as the Xbox Kinect. I know this question seems rather vague, but I still hope some people are able to help.

    Read the article

  • Libgdx 2D Game, Random generated World of random size, how to get mouse coordinates?

    - by Solom
    I'm a noob and English is not my mother tongue, so please bear with me! I'm generating a map for a side-scroller out of a 2D array: the array holds different values, and I create blocks based on those values. Now, my problem is to match mouse coordinates on screen with the actual block the mouse is pointing at.

      public class GameScreen implements Screen {
          private static final int WIDTH = 100;
          private static final int HEIGHT = 70;
          private OrthographicCamera camera;
          private Rectangle glViewport;
          private SpriteBatch spriteBatch;
          private Map map;
          private Block block;
          ...

          @Override
          public void show() {
              camera = new OrthographicCamera(WIDTH, HEIGHT);
              camera.position.set(WIDTH / 2, HEIGHT / 2, 0);
              glViewport = new Rectangle(0, 0, WIDTH, HEIGHT);
              map = new Map(16384, 256);
              map.printTileMap(); // Debugging only
              spriteBatch = new SpriteBatch();
          }

          @Override
          public void render(float delta) {
              // Clear previous frame
              Gdx.gl.glClearColor(1, 1, 1, 1);
              Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
              GL30 gl = Gdx.graphics.getGL30();
              // gl.glViewport((int) glViewport.x, (int) glViewport.y, (int) glViewport.width, (int) glViewport.height);
              spriteBatch.setProjectionMatrix(camera.combined);
              camera.update();

              spriteBatch.begin();
              // Draw map
              this.drawMap();
              // spriteBatch.flush();
              spriteBatch.end();
          }

          private void drawMap() {
              for (int a = 0; a < map.getHeight(); a++) {
                  // Bounds check (y)
                  if (camera.position.y + camera.viewportHeight < a) // || camera.position.y - camera.viewportHeight > a)
                      break;
                  for (int b = 0; b < map.getWidth(); b++) {
                      // Bounds check (x)
                      if (camera.position.x + camera.viewportWidth < b) // || camera.position.x > b)
                          break;
                      // Dynamic rendering via BlockManager
                      int id = map.getTileMap()[a][b];
                      Block block = BlockManager.map.get(id);
                      if (block != null) // Check if air
                      {
                          block.setPosition(b, a);
                          spriteBatch.draw(block.getTexture(), b, a, 1, 1);
                      }
                  }
              }
          }
      }

    As you can see, I don't use the viewport anywhere; I'm not sure if I'll need it somewhere down the road. So, the map is 16384 blocks wide, and one block is 16 pixels in size. One of my naive approaches was this:

      if (Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
          Vector3 mousePos = new Vector3();
          mousePos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
          camera.unproject(mousePos);
          System.out.println(Math.round(mousePos.x)); // *16); // Debugging
          // TODO: round
          // map.getTileMap()[mousePos.x][mousePos.y] = 2; // Draw at mouse position
      }

    I've confused myself somewhere down the road, I fear. What I want to do is update the block (or rather the information in the Map/2D array) so that on the next render() a different block is drawn there. If anyone could point me in the right direction, it would be highly appreciated. Thanks!
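
    For reference, a minimal sketch under the same assumptions as the code above (blocks are one world unit, drawn at integer coordinates, and the tile map is indexed [row][column] as in drawMap()); note that the commented-out line in the question indexes it the other way around:

      if (Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
          Vector3 mouse = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
          camera.unproject(mouse);               // screen -> world units, honouring the y-flip
          int tileX = (int) Math.floor(mouse.x); // column index, b in drawMap()
          int tileY = (int) Math.floor(mouse.y); // row index, a in drawMap()
          if (tileY >= 0 && tileY < map.getHeight() && tileX >= 0 && tileX < map.getWidth()) {
              map.getTileMap()[tileY][tileX] = 2; // update the array; the next render() draws the new block
          }
      }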

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game in a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm leaning toward game logic that runs on both the client and the server, written in different languages; still, that is not the main problem. My previous game (a pretty big one, implemented by about 5 programmers over 1.5 years) was mainly "written" in spreadsheets as structured objects with inheritance: we wrote a standalone tool that generated AS3 and C++ (the languages of the platforms the game was published on) from a spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each, and was mostly filled in by game designers who do not know any programming language. But that game was single-player. Given that, for my current MMO I'm looking for some broad pipeline that resolves problems like:

    - game object descriptions (which starships exist in the game, how much HP they have, how fast they move, what damage they deal, ...)
    - action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells); actions are transmitted through the server between clients
    - influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: the HP field of Ship B is reduced by the amount of the damage field of Ship A")

    Influences can be much more complicated, of course, e.g. "damage is doubled when the ship has ≥5 allies within a 200-unit range during night", and so on. If such logic could be written in a "design document", it would be easily possible to:

    - let designers do their job without programmer intervention or any bug-prone programming
    - validate the described logic
    - transfer (transform, convert) it to any programming language where it will be executed

    Has somebody worked on something like that? Are there tools/engines/pipelines concerned with it? What is the best way to handle all of these problems at once, and am I even framing my tasks and problems correctly?
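
    As a toy illustration of the "influences as data" idea (every name here is hypothetical, not an existing tool): if an influence row names a trigger, a target field, a source field and a multiplier, then the AS3 client, the C++ server and a validator can all interpret the same table row instead of hard-coding the rule three times.

      import java.util.HashMap;
      import java.util.Map;

      public class InfluenceDemo {
          // one row of the designers' "influences" table
          record Influence(String trigger, String targetField, String sourceField, double multiplier) {}

          static void apply(Influence inf, Map<String, Double> source, Map<String, Double> target) {
              double delta = source.getOrDefault(inf.sourceField(), 0.0) * inf.multiplier();
              target.merge(inf.targetField(), -delta, Double::sum); // "HP of B reduced by damage of A"
          }

          public static void main(String[] args) {
              Map<String, Double> shipA = new HashMap<>(Map.of("damage", 12.0));
              Map<String, Double> shipB = new HashMap<>(Map.of("hp", 100.0));
              apply(new Influence("attack", "hp", "damage", 1.0), shipA, shipB);
              System.out.println(shipB); // {hp=88.0}
          }
      }

    Conditional modifiers ("damage doubled at night with ≥5 allies in range") then become predicates attached to the row, which is exactly the part worth validating mechanically.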

    Read the article

  • andEngine dynamic sprites

    - by Blucreation
    I've just started with andEngine this past week, and I only started learning Java/Android 3 weeks ago. I can use a for loop to add multiple sprites to the screen, but when I try to check collisions on them, it only checks one and not the rest. I want to be able to add a specific number of sprites made from the same texture to the scene, add collision detection to them, and also make them slide across the screen (I'm making a game where you avoid the obstacles). My simple code:

      private void createobstacle(float pX, float pY) {
          obstacle = new AnimatedSprite(pX, pY, this.mObjTextureRegion.deepCopy(),
                  getVertexBufferObjectManager());
          obstacle.setScale(MathUtils.random(0.5f, 3f));
          scene.attachChild(obstacle);
      }

      private void createobstacle(int num) {
          for (int i = 0; i <= num; i++) {
              final float xPos = MathUtils.random(30.0f, (CAMERA_WIDTH - 30.0f));
              final float yPos = MathUtils.random(30.0f, (CAMERA_HEIGHT - 30.0f));
              createobstacle(xPos, yPos);
          }
      }

    I've read about arrays, but I cannot find any tutorials about what I'm stuck on. Any help would be great!
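
    A sketch of the list-based version (assuming andEngine's collidesWith() overlap test; player and the slide speed are hypothetical): keep every obstacle in a collection instead of overwriting a single field, then walk the collection each update for movement and collision.

      // requires: import java.util.ArrayList; import java.util.List;
      private final List<AnimatedSprite> obstacles = new ArrayList<>();

      private void createobstacle(float pX, float pY) {
          AnimatedSprite obstacle = new AnimatedSprite(pX, pY,
                  this.mObjTextureRegion.deepCopy(), getVertexBufferObjectManager());
          obstacle.setScale(MathUtils.random(0.5f, 3f));
          obstacles.add(obstacle);            // remember it so it can be checked later
          scene.attachChild(obstacle);
      }

      private void onUpdate(float secondsElapsed) {
          for (AnimatedSprite obstacle : obstacles) {
              obstacle.setX(obstacle.getX() - 100f * secondsElapsed); // slide leftwards
              if (player.collidesWith(obstacle)) {
                  // handle the hit (lose a life, remove the obstacle, ...)
              }
          }
      }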

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    I have a UIViewController in a Storyboard and want to draw an Irrlicht scene in this view controller. My code:

    WWSViewController.h:

      #import <UIKit/UIKit.h>

      @interface WWSViewController : UIViewController
      {
          IBOutlet UILabel *errorLabel;
      }
      @end

    WWSViewController.mm:

      #import "WWSViewController.h"
      #include "../../ressources/irrlicht/include/irrlicht.h"

      using namespace irr;
      using namespace core;
      using namespace scene;
      using namespace video;
      using namespace io;
      using namespace gui;

      @interface WWSViewController ()
      @end

      @implementation WWSViewController

      - (void)awakeFromNib
      {
          errorLabel = [[UILabel alloc] init];
          errorLabel.text = @"";

          IrrlichtDevice *device = createDevice(video::EDT_OGLES1,
                  dimension2d<u32>(640, 480), 16, false, false, false, 0);

          /* Set the caption of the window to some nice text. Note the 'L' in
             front of the string: the Irrlicht Engine uses wide character
             strings when displaying text. */
          device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");

          /* Get a pointer to the VideoDriver, the SceneManager and the
             graphical user interface environment, so that we do not always
             have to write device->getVideoDriver(), device->getSceneManager(),
             or device->getGUIEnvironment(). */
          IVideoDriver* driver = device->getVideoDriver();
          ISceneManager* smgr = device->getSceneManager();
          IGUIEnvironment* guienv = device->getGUIEnvironment();

          /* We add a hello world label to the window, using the GUI
             environment. The text is placed at (10,10) as the top-left corner
             and (260,22) as the lower-right corner. */
          guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!",
                  rect<s32>(10,10,260,22), true);

          /* To show something interesting, we load a Quake 2 model and display
             it. We only have to get the mesh from the scene manager with
             getMesh() and add a scene node to display it with
             addAnimatedMeshSceneNode(). We check the return value of getMesh()
             to become aware of loading problems and other errors. Instead of
             sydney.md2, it would also be possible to load a Maya object file
             (.obj), a complete Quake 3 map (.bsp) or any other supported file
             format. By the way, that cool Quake 2 model called sydney was
             modelled by Brian Collins. */
          IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2");
          if (!mesh)
          {
              device->drop();
              if (!errorLabel) {
                  errorLabel = [[UILabel alloc] init];
              }
              errorLabel.text = @"Konnte Mesh nicht laden."; // "Could not load mesh."
              return;
          }
          IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode(mesh);

          /* To make the mesh look a little nicer, we change its material. We
             disable lighting because we do not have a dynamic light in here,
             and the mesh would be totally black otherwise. Then we set the
             frame loop, so that the predefined STAND animation is used. And
             last, we apply a texture to the mesh; without it the mesh would be
             drawn using only a color. */
          if (node)
          {
              node->setMaterialFlag(EMF_LIGHTING, false);
              node->setMD2Animation(scene::EMAT_STAND);
              node->setMaterialTexture(0,
                      driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp"));
          }

          /* To look at the mesh, we place a camera into 3D space at
             (0, 30, -40), looking at (0, 5, 0), which is approximately where
             our md2 model is. */
          smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));

          /* Ok, now we have set up the scene, let's draw everything: we run
             the device in a while() loop until it does not want to run any
             more. This would be when the user closes the window or presses
             ALT+F4 (or whatever keycode closes a window). */
          while (device->run())
          {
              /* Anything can be drawn between a beginScene() and an endScene()
                 call. The beginScene() call clears the screen with a color and
                 the depth buffer, if desired. Then we let the scene manager
                 and the GUI environment draw their content. With the
                 endScene() call everything is presented on the screen. */
              driver->beginScene(true, true, SColor(255,100,101,140));
              smgr->drawAll();
              guienv->drawAll();
              driver->endScene();
          }

          /* After we are done with the render loop, we have to delete the
             Irrlicht device created before with createDevice(). In the
             Irrlicht Engine, you have to delete all objects you created with a
             method or function that starts with 'create'. The object is simply
             deleted by calling ->drop(). See the documentation at
             irr::IReferenceCounted::drop() for more information. */
          device->drop();
      }

      - (void)viewDidLoad
      {
          [super viewDidLoad];
          // Do any additional setup after loading the view, typically from a nib.
      }

      - (void)viewDidUnload
      {
          [super viewDidUnload];
          // Release any retained subviews of the main view.
      }

      - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
      {
          return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
      }

      @end

    Sadly, the result is just a black view in the simulator. I hope someone can explain how to draw the scene in a UIView. Furthermore, I'm getting this error:

      Could not load sprite bank because the file does not exist: #DefaultFont

    How can I fix it?

    Read the article

  • Open source management game in java

    - by jcw
    I am trying to find an open-source sports management game, much like the question linked below, but am failing to do so. There are two links provided in that question that are both fine, except for one minor problem: I only know Java!

      Is there an open source sports manager project?

    After some googling, I have been unsuccessful in finding a sports management game written in Java. I do not particularly care about the type of sport, because I am mostly interested in the mechanics. Does anyone know of any such projects, or am I out of luck with Java?

    Read the article

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and am currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) a 3D position to 2D has to produce completely different numbers for small changes (especially because I am sampling in texture space, so sample values differ very, very little) and has to be "shader-wise": forget about branches, loops, etc. I'm looking forward to your nice and fast solutions.
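
    A sketch of the usual branch-free answer, written in Java for illustration; in GLSL it maps directly onto dot/sin/fract. The constants are the folklore values from the classic fract(sin(dot(p, k)) * 43758.5453) one-liner, nothing canonical, and sin-based hashes are known to lose quality on some GPUs at large arguments:

      public class Hash3to2 {
          static float hash(float x, float y, float z) {
              double d = Math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453;
              return (float) (d - Math.floor(d)); // fract(): keep only the fast-varying fractional part
          }

          // two decorrelated outputs form the 2D noise-texture coordinate
          static float[] hash2(float x, float y, float z) {
              return new float[] { hash(x, y, z), hash(z + 1.31f, x + 2.07f, y + 3.53f) };
          }
      }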

    Read the article

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments that passed the depth test. However, on the iPad/iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implementing similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader: first a vertical line is rendered, and for each of its fragments the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background, then downsample recursively, abusing the hardware's linear interpolation between texels, until the texture is at a reasonably small resolution. This leads to fragments whose greyscale level depends on the number of white pixels within their corresponding region. Is this even accurate enough? ... ?
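
    On the accuracy of the downsampling idea: after k halving passes, each remaining texel holds the average coverage of its 2^k x 2^k source region, so the fragment count is approximately

      \[ \text{count} \approx \bar{c}\,W H \]

    where \bar{c} is the mean value of the final small texture and W x H is the original resolution. With an 8-bit render target, every averaging pass quantizes to multiples of 1/255, so exact counts likely need a higher-precision target, or reading back a slightly larger final level and summing on the CPU.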

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines that define the path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically, the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' axis. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get around this I cache a finite number of positions in what I call 'SplineDetails':

      public class SplineDetails
      {
          public SplineDetails()
          {
              Forward = Vector3.forward;
              Position = Vector3.one * float.MaxValue;
              Alpha = -1;
          }

          public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 the end point
          public Vector3 Position; // the point of the spline at this alpha
          public Vector3 Forward;  // the forward tangent of the spline at this alpha
      }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, but I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object can be closer to it. The other problem is figuring out the proportion between the two points it's between, i.e. if there is point A at coordinate (0,0,0) and point B at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be 0.5 (as it is an equal distance away from point A and point B). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
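
    A sketch of the standard segment-projection answer to both problems, in plain arrays to stay engine-neutral: for consecutive cached samples A and B (ordered by Alpha), the proportion is the clamped projection parameter t, and the bracketing pair is the one with the smallest distance from the object to its projected point A + t(B - A). A perpendicular offset, as in (0.5, 3, 0), changes that distance but not t, which still comes out 0.5.

      // t = ((P - A) . (B - A)) / |B - A|^2, clamped to [0, 1]
      static float segmentT(float[] p, float[] a, float[] b) {
          float abx = b[0] - a[0], aby = b[1] - a[1], abz = b[2] - a[2];
          float apx = p[0] - a[0], apy = p[1] - a[1], apz = p[2] - a[2];
          float t = (apx * abx + apy * aby + apz * abz) / (abx * abx + aby * aby + abz * abz);
          return Math.max(0f, Math.min(1f, t));
      }

      // then blend the cached samples component-wise with that t
      static float lerp(float a, float b, float t) { return a + (b - a) * t; }

    Position, Forward (renormalized afterwards) and Alpha can all be interpolated with the same t.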

    Read the article

  • Debugging HLSL for Windows 8 application

    - by Shervanator
    I'm currently in the process of creating a Windows 8 application using SharpDX (the managed C# DirectX wrapper). However, I have run into problems with one of my shaders, and I want to know if it's possible to debug such applications. PIX doesn't seem to work on DirectX apps whose executable does not like being opened directly, and the new Visual Studio graphics debugging toolkit in VS2012 always states "unable to start the experiment" when I try to capture any information about my session. Thanks!

    Read the article

  • Designing a game - Where to start?

    - by OghmaOsiris
    A friend of mine and I are planning a game to work on together in our free time. It's not an extensive game, but it's not a simple one either. He's working on the story behind the game while I'm working on the graphics and code. I don't really know where to start with the game. We know the basic type of game it's going to be and how it would be played, but I'm having a hard time actually knowing where to begin. I have Xcode open, but I don't really even know what I should be designing first. What is some advice for this writer's block? Where is a good place to start with a game? Should I design all the graphics and layout before even touching Xcode? Should I program the things I know I'll have difficulty with first, before getting to the easy stuff?

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about reusing each vertex (having 8 total vs. 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy: the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version has me quite befuddled.
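
    For context: to the GPU, a vertex is the whole attribute tuple (position plus UV plus anything else), so a corner shared by three faces with three different atlas UVs must be stored three times; only identical tuples can be shared through the index buffer. Hence the usual arithmetic for an atlas-textured cube:

      \[ 4\ \text{verts/face} \times 6\ \text{faces} = 24\ \text{vertices}, \qquad 6\ \text{faces} \times 2\ \text{tris} \times 3 = 36\ \text{indices} \]

    The 8-vertex version only works when every face can reuse the same UV at a given corner, which an atlas by definition breaks.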

    Read the article

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think Final Fantasy Tactics), and I am having trouble coming up with the design of the AI. My main problem is determining the optimal thing for it to do.

    First, let me describe the priority of the actions I would like the AI to take:

      1. Kill the nearest player unit
      2. Fulfill the primary directive (kill all player units, kill a target unit, survive for x turns)
      3. Heal an ally unit / cast buffs

    The AI can do the following on its turn:

      - Move, then attack / use an ability / use an item
      - Attack / use an ability / use an item, then move
      - Move closer (if no targets are in range)
      - Attack / use an ability / use an item (if a move is not available)

    Notes: abilities have various ranges, effects and costs. Each AI unit has maybe 5-10 abilities to choose from. The AI will prioritize killing over safety unless its directive is to survive for x turns. It also doesn't care much about ability cost; while a player may want to save a big spell for later, the AI will most likely use it as soon as possible. Movement is on a (hex) grid. Number of player units: 3-6; number of AI units: 3-7 or more, probably 10 at most. The AI and the player take turns controlling ONE unit at a time, instead of all at the same time. The platform is Android (if a program doesn't respond for some time, a popup appears offering to Force Quit or Wait, which looks really bad!).

    Now come the questions. The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has a different range, I won't know whether targets are in range without exploring each possible place I can move to. One solution would be to go through each possible position to move to and determine the optimal attack at that location, which gives me a list of optimal moves for each location; then choose the best out of the list and execute it. But this would take a lot of CPU time. Is there a better solution?

    My current idea is to move as close as possible to the closest, largest group of enemies and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow wide-range attacks. It's sub-optimal, but the AI would still seem 'smart'.

    Other notes/questions: Am I over-thinking/over-complicating it? Is there a better solution? I am open to all sorts of suggestions. I have taken a look at the spell-casting question, but it doesn't take movement into account; perhaps I could use that algorithm for each possible move location? The top answer mentioned it wasn't great for area-of-effect and group fights, so maybe it requires more tweaking? Please, if you mention a graph/tree, let me know basically how to use it, e.g. a node means an ability, the level corresponds to damage, then search for the deepest node.
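
    A sketch of the "score every reachable tile x ability" search discussed above; every type and helper here is hypothetical scaffolding, not an existing API. With up to 10 units, 5-10 abilities and a few hundred reachable hexes, the loop is small (tens of thousands of evaluations per turn), and running it on a worker thread keeps Android's force-quit dialog away:

      import java.util.List;

      public class TacticalAi {
          record Tile(int q, int r) {}
          record Ability(String name, int range) {}
          record Plan(Tile moveTo, Ability use, Tile target, double score) {}

          // stubs standing in for real game queries
          static List<Tile> reachableTiles() { return List.of(new Tile(0, 0), new Tile(1, 0)); } // hex flood fill to move range
          static List<Ability> abilities() { return List.of(new Ability("strike", 1)); }
          static List<Tile> targetsFrom(Tile from, Ability a) { return List.of(new Tile(1, 1)); }
          static double expectedScore(Ability a, Tile target) { return 10; } // damage, kills, heals, directive bonus...
          static double riskPenalty(Tile at) { return 1; } // drop this term when the directive is pure aggression

          static Plan choose() {
              Plan best = null;
              for (Tile tile : reachableTiles())
                  for (Ability a : abilities())
                      for (Tile target : targetsFrom(tile, a)) {
                          double score = expectedScore(a, target) - riskPenalty(tile);
                          if (best == null || score > best.score()) best = new Plan(tile, a, target, score);
                      }
              return best;
          }
      }

    The "move toward the largest cluster first" heuristic then amounts to shrinking reachableTiles() to a corridor around that cluster, trading optimality for CPU time.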

    Read the article
