Search Results

Search found 16473 results on 659 pages for 'game logic'.


  • Vertex Array Object (OpenGL)

    - by user5140
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it and I'm not really sure what VAOs really do and how I could employ them.
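
    For context, a minimal sketch of the usual usage pattern (OpenGL 3.x core profile; `vertices` and `vertexCount` are placeholders, not from the question): the VAO remembers the attribute layout set below, including which VBO each attribute reads from, so a single bind restores all of that state at draw time.

        // Sketch only: set up once, then one glBindVertexArray per draw restores everything.
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);                        // subsequent attribute setup is recorded in the VAO
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);                  // attribute 0 = position (x, y, z)
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(1);                  // attribute 1 = texture coords (u, v)
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
        glBindVertexArray(0);                          // done recording

        // later, per frame:
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);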

    Read the article

  • Simulating a sine wave/oscillating pattern for enemies

    - by Sun
    I'm creating a simple top-down shooter. Right now I have an enemy which simply follows the player. I'd like to change things up and have the enemies move towards the player, but in a wave-like motion. I have looked at some similar questions like this, but they don't take the changing Y into account. How can I simulate a wave-like pattern for my enemies while they are homing in on their target? Edit: sample code. In my update method I have the following: Vector2 trackingPos = position - target; trackingPos.Normalize(); position -= trackingPos * elaspedTime * speed;
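
    One common approach, shown as a rough C-style sketch (variable names invented, not the asker's code; the same idea maps directly onto Vector2 in XNA): keep moving along the homing direction, and add a lateral sway along the perpendicular of that direction.

        // Sketch: homing movement plus a sideways sine sway.
        float dirX = targetX - posX, dirY = targetY - posY;
        float len  = sqrtf(dirX * dirX + dirY * dirY);
        if (len > 0.0f) { dirX /= len; dirY /= len; }      // normalised homing direction
        float perpX = -dirY, perpY = dirX;                 // 90-degree rotation of that direction

        t += elapsedTime;                                  // running time for this enemy
        float sway = amplitude * sinf(frequency * t);      // lateral offset, oscillates over time

        posX += dirX * speed * elapsedTime;                // homing movement
        posY += dirY * speed * elapsedTime;
        drawX = posX + perpX * sway;                       // draw position = path + sideways sway
        drawY = posY + perpY * sway;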

    Read the article

  • Entity System and rendering

    - by hayer
    Okay, what I know so far: the entity contains a component (data storage) which holds information like texture/sprite, shader, etc., and then I have a renderer system which draws all this. But what I don't understand is how the renderer should be designed. Should I have one component for each "visual type" - one component without a shader, one with a shader, etc.? I just need some input on what's the "correct way" to do this, plus tips and pitfalls to watch out for.
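
    For what it's worth, one common pattern (sketch only, every name below is invented) is a single renderable component with an optional shader handle rather than one component type per visual variant; the render system falls back to a default shader when none is set.

        #include <vector>
        #include <cstdint>

        struct Renderable {
            uint32_t texture = 0;   // texture id
            uint32_t shader  = 0;   // 0 = "no custom shader, use the default"
        };

        struct RenderSystem {
            uint32_t defaultShader = 1;

            void draw(const std::vector<Renderable>& items) {
                for (const Renderable& r : items) {
                    uint32_t shader = (r.shader != 0) ? r.shader : defaultShader;
                    bindShader(shader);       // hypothetical helpers, stand-ins for real API calls
                    drawSprite(r.texture);
                }
            }

            void bindShader(uint32_t) {}      // stubs so the sketch compiles standalone
            void drawSprite(uint32_t) {}
        };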

    Read the article

  • XNA 4.0: 2D Camera Y and X are going in the wrong direction

    - by Setheron
    I asked this question on stackoverflow but assumed this might be a better area to ask it as well for a more informed answer. My problem is that I am trying to create a camera class and have it so that my camera follows the proper RHS, however the Y axis seems to be inverted since on the screen the 0 starts at the top. Here is my Camera2D Class: class Camera2D { private Vector2 _position; private float _zoom; private float _rotation; private float _cameraSpeed; private Viewport _viewport; private Matrix _viewMatrix; private Matrix _viewMatrixIverse; public static float MinZoom = float.Epsilon; public static float MaxZoom = float.MaxValue; public Camera2D(Viewport viewport) { _viewMatrix = Matrix.Identity; _viewport = viewport; _cameraSpeed = 4.0f; _zoom = 1.0f; _rotation = 0.0f; _position = Vector2.Zero; } public void Move(Vector2 amount) { _position += amount; } public void Zoom(float amount) { _zoom += amount; _zoom = MathHelper.Clamp(_zoom, MaxZoom, MinZoom); UpdateViewTransform(); } public Vector2 Position { get { return _position; } set { _position = value; UpdateViewTransform(); } } public Matrix ViewMatrix { get { return _viewMatrix; } } private void UpdateViewTransform() { Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0)) * Matrix.CreateScale(new Vector3(1f, 1f, 1f)); _viewMatrix = Matrix.CreateRotationZ(_rotation) * Matrix.CreateScale(new Vector3(_zoom, _zoom, 1.0f)) * Matrix.CreateTranslation(_position.X, _position.Y, 0.0f); _viewMatrix = proj * _viewMatrix; } } I test it using SpriteBatch in the following way: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); Vector2 position = new Vector2(0, 0); // TODO: Add your drawing code here spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null, camera.ViewMatrix); Texture2D circle = CreateCircle(100); spriteBatch.Draw(circle, position, Color.Red); spriteBatch.End(); base.Draw(gameTime); }

    Read the article

  • Basic Use of ApplyImpulse

    - by nycynik
    I am trying to apply a force to a bunch of b2_dynamicBodys, but it seems to only work for a random number of items and then stops with an error. //create some items to move bodyDef.type = b2Body.b2_dynamicBody; for(var i = 0; i < 5; ++i) { fixDef.shape = new b2PolygonShape; fixDef.shape.SetAsBox(1,1); fixDef.friction = 1; fixDef.restitution = .1; bodyDef.position.x = Math.random() * 10; bodyDef.position.y = Math.random() * 10; bodyDef.linearDamping=1; bodyDef.angularDamping=.8; itemsArray.push(world.CreateBody(bodyDef).CreateFixture(fixDef)); // store for later } then i try to apply a force later with: angle = 20; for (var xIdx=0; xIdx<itemArray.length; xIdx++) { itemsArray[xIdx].GetBody().ApplyImpulse(new b2Vec2(50*Math.cos(angle*Math.PI/180),50*Math.sin(angle*Math.PI/180));); } the error I receive is TypeError: 'undefined' is not an object (evaluating 'c.x') Is there something wrong with saving the items for later use when I am creating them? Does anyone know what is causing this.
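
    For reference only (not a diagnosis of the code above): in the C++ Box2D API the impulse call also takes the world point at which to apply the impulse, typically the body's world center, and ports generally mirror that signature. A sketch of the equivalent C++ call, with `body` assumed to be a b2Body*:

        // In recent Box2D versions the method is ApplyLinearImpulse and also takes a
        // final "wake" bool; older versions omit it.
        float angleRad = 20.0f * b2_pi / 180.0f;
        b2Vec2 impulse(50.0f * cosf(angleRad), 50.0f * sinf(angleRad));
        body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);   // impulse + point of application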

    Read the article

  • How do I convert a partially transparent image into polygons?

    - by user82779
    I'm using GLEE2D, a level editor allowing me to import images, scale them, rotate them, and position them onto layers and export the data into XML format. However, it does not tell me objects' boundaries. I can calculate them, but only given the original image's polygons. How do I get polygons of objects in a transparent image? An example object (I outlined it): How would I turn the object, knowing the scaled size of the image, into polygons? Is there an algorithm for this? I'll use OpenGL to draw them.
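
    A common pipeline for this (a sketch, not specific to GLEE2D): threshold the alpha channel into a solid/empty mask, trace the boundary of the solid region (marching squares or Moore neighbour tracing), then simplify the resulting contour (for example with Ramer-Douglas-Peucker) to get a small polygon. The mask step might look like this, assuming 8-bit RGBA pixel data:

        #include <vector>
        #include <cstdint>

        // Build a binary "solid" mask from RGBA8 pixels by thresholding alpha.
        // The contour of this mask is what gets traced and simplified into a polygon.
        std::vector<uint8_t> buildAlphaMask(const uint8_t* rgba, int width, int height,
                                            uint8_t alphaThreshold = 16)
        {
            std::vector<uint8_t> mask(width * height, 0);
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    uint8_t alpha = rgba[(y * width + x) * 4 + 3];   // A of RGBA
                    mask[y * width + x] = (alpha >= alphaThreshold) ? 1 : 0;
                }
            return mask;
        }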

    Read the article

  • How many views can be bound to a 2D texture at a time?

    - by Recker
    I am a newbie trying to learn DX11.x. While reading about resources and views on MSDN, I thought of this question: for a given 2D texture created with the ID3D11Texture2D interface (or, for that matter, any kind of resource), how many of the following views can be bound to it? 1) DepthStencilView 2) RenderTargetView 3) ShaderResourceView 4) UnorderedAccessView Thanks in advance. PS: I know the answer would be app-specific, but any insight into this would still be helpful.

    Read the article

  • XNA 4.0 - container with content that can slide (C#)

    - by DijkeMark
    I have an idea, but no idea how to make it. Okay, so here is the deal. I want a container which can contain certain objects (these objects will draw the sprites/graphics). Because of different screen sizes, I want to be able to scale the container's width and height, but I do not want objects that end up outside of the container because of the scaling to be visible. I want the objects all to be positioned horizontally next to each other, with a horizontal slider bar so I can slide from left to right within the container. I wonder if anyone could point me in the right direction. Thanks in advance, Mark
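
    The clipping part of this is usually done with a scissor rectangle; XNA 4.0 has an equivalent (GraphicsDevice.ScissorRectangle together with a RasterizerState whose ScissorTestEnable is set). A rough illustration of the idea in raw OpenGL, where containerX/Y/W/H, scrollX, items and drawItem are all placeholders:

        // Pixels outside the container rectangle are discarded, and the children are
        // drawn offset by the scroll value so they slide horizontally.
        glEnable(GL_SCISSOR_TEST);
        glScissor(containerX, containerY, containerW, containerH);   // clip region (window coords)

        for (const Item& item : items)
            drawItem(item, item.x - scrollX + containerX, containerY);

        glDisable(GL_SCISSOR_TEST);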

    Read the article

  • TexturePacker ignores extensions

    - by The Oddler
    I'm using TexturePacker in one of my games, but when packing a bunch of textures their extension is kept in the data file. So when I want to find a texture I need to search for "image.png" instead of just "image". Is there an option to make TexturePacker ignore the extensions of my source images in the data file? Solved: if anyone else wants this, here's the exporter I made: https://www.box.com/s/bf12q1i1yc9jr2c5yehd Just extract it into "C:\Program Files (x86)\CodeAndWeb\TexturePacker\bin\exporters\UIToolkit No Extensions" (or something similar) and it should show up as an exporter.

    Read the article

  • Sprite batching seems slow

    - by Dekowta
    I have implemented a sprite batching system in OpenGL which batches sprites based on their texture. However, when I'm rendering ~5000 sprites, all using the same texture, I'm getting roughly 30fps. The process is as follows: create the sprite batch, which also creates a VBO with a set size and creates the shaders; call begin, which initialises the render mode (at the moment just setting alpha on); call Draw with a sprite - this checks whether the texture of the sprite has already been loaded, and if so it just creates a pointer to the batch item and adds the new sprite coords, otherwise it creates a new batch item, adds the sprite coords to that, and adds the batch item to the main batch; if the max sprite count is reached, render will be called; call end, which calls render to draw the leftover sprites in the batch and also resets the buffer offset. Render loops through each item in the batch, binds the texture of the batch item, maps the data to the buffer and then draws the array; the buffer is then offset by the number of sprites drawn. I have a feeling it could be the method I'm using to store the batched sprites, or it could be something else I'm missing, but I still can't work it out. The cpp and h files are here: http://pastebin.com/ZAytErGB http://pastebin.com/iCB608tA On top of this I'm also getting a weird issue where, when two sprites are batched one after the other, the second sprite will use the same coordinates as the first, yet when one is drawn after it, it is fine. I can't seem to find what is causing this issue. Any help would be appreciated; I've been sat trying to work this all out for a while now and can't seem to put my finger on what's causing it all.
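
    For comparison, the flush step of a texture-sorted batch is often written roughly like this (a sketch only, not the asker's code; `Vertex` and the six-vertices-per-sprite layout are assumptions), with the VBO orphaned each flush instead of offset across draws so the driver does not stall on the previous frame's data:

        void flushBatch(GLuint vbo, GLuint texture, const Vertex* verts, int spriteCount)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // orphan the buffer, then upload only this batch's vertices
            glBufferData(GL_ARRAY_BUFFER, spriteCount * 6 * sizeof(Vertex), NULL, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, spriteCount * 6 * sizeof(Vertex), verts);

            glBindTexture(GL_TEXTURE_2D, texture);
            glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6);   // two triangles per sprite
        }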

    Read the article

  • Is It More Efficient To Make Games In Languages I Like?

    - by Dsfsdfsdfsdfsd Fsdf
    Is it more "efficient" to develop games with languages you're good with and like best, rather than the "best" language? An example: I like C# (it's my first language), I'm really good at it and used to how it works. I'm not as good with C++ and I'm kinda slow at it, because I don't prefer how its systems work; for instance, I think int a[] is not as good as int[] a. Would it be better to go with what I know best or with what's the "best" available? Thanks for reading!

    Read the article

  • What class to use in order to have a number move around the screen?

    - by AllenZ41
    What I am trying to accomplish is to have a randomly created number move around the screen while remaining touchable. I am planning to have lots of numbers on the screen, so my question is: what class is appropriate to use so that I can set a number randomly at run time and display it while it moves around the screen? I was planning to use a TextView, since I want to use a custom font of mine, but I think creating a bunch at a time could cause a memory problem, and to my understanding they can't move around the screen at runtime.

    Read the article

  • Using bone joints

    - by raser
    I am trying to save bone joints to a file, and am using this format. I was wondering if anyone could clear up a few questions I have: why do I need to provide rotation data for the bone if I already gave it the location? How do I calculate the rotation of each axis if I have the relative location from the parent joint? ** EDIT ** After doing some more digging, I think it has something to do with quaternions, so could someone point me to a good resource on using quaternions for bone joints? ** EDIT AGAIN ** I think I've solved it, but I don't understand how it works. I can't seem to find any Google results explaining it. I'd appreciate it if anyone could send me resources explaining it.
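
    For anyone landing here, the quaternion piece boils down to this (a generic sketch, not tied to any particular skeleton format): an axis-angle rotation with unit axis a and angle theta becomes the quaternion q = (cos(theta/2), a * sin(theta/2)), and a child joint is placed by rotating its local offset by the parent's accumulated rotation, which is why the format stores rotation as well as position.

        #include <cmath>

        struct Quat { float w, x, y, z; };

        // Build a quaternion from a unit rotation axis and an angle in radians:
        // q = (cos(a/2), axis * sin(a/2)).
        Quat quatFromAxisAngle(float ax, float ay, float az, float angle)
        {
            float half = 0.5f * angle;
            float s = std::sin(half);
            return { std::cos(half), ax * s, ay * s, az * s };
        }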

    Read the article

  • Drawing chunks, and positioning the camera

    - by Troubleshoot
    I've seen many questions and answers regarding how to draw tiled maps, but I can't really get my head around it. Many answers suggest either loading the visible part of the map, or loading and unloading chunks of the map. I've decided the best option would be to load chunks, but I'm slightly confused as to how this would be implemented. Currently I'm loading the full map into a 2D array of buffered images, then drawing it every time repaint is called. Q1: If I were to load chunks of the map, would I load the map as a whole and then draw the necessary chunk(s), or load and unload the chunks as the player moves along, and if so, how? My second question regards the camera. I want the player to be in the centre of the X axis and the camera to follow it. I've thought of drawing everything in relation to the map and calculating the position of the camera in relation to the player's coordinates on the map. So, to calculate the camera's X position I understand that I should use cameraX = playerX - (canvasWidth/2), but how should I calculate the Y position? I want the camera to only move up when the player reaches cameraHeight/2, but to move down when the player reaches 3/4(cameraHeight). Q2: Should I check for this in the same way I check for collision, and move the camera relative to the movement of the player until the player stops moving, or am I thinking about it in the wrong way?
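
    A sketch of both calculations in plain C-style code (every name below is invented; coordinates are screen-style, with Y increasing downwards): the visible chunk range falls straight out of the camera position and chunk size, and the vertical dead-zone only moves the camera when the player leaves the middle band.

        // Q1: which chunks are visible - derive first/last chunk index from the camera.
        int firstChunk = cameraX / chunkWidthPixels;
        int lastChunk  = (cameraX + canvasWidth) / chunkWidthPixels;
        for (int c = firstChunk; c <= lastChunk; ++c)
            drawChunk(c, /*screenX=*/ c * chunkWidthPixels - cameraX);

        // Q2: vertical dead-zone - camera follows only when the player drifts outside
        // the band between 1/2 and 3/4 of the screen height.
        int playerScreenY = playerY - cameraY;
        if (playerScreenY < canvasHeight / 2)
            cameraY = playerY - canvasHeight / 2;          // player too high: move camera up
        else if (playerScreenY > (canvasHeight * 3) / 4)
            cameraY = playerY - (canvasHeight * 3) / 4;    // player too low: move camera down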

    Read the article

  • What is the w component? [duplicate]

    - by Tifa
    This question already has an answer here: "What does the graphics card do with the fourth element of a vector as the final position?" (3 answers). What is the w component in graphics programming? I read a blog about OpenGL that says that w must be equal to either 0 or 1, but the book I am currently reading uses more than one value for the w component, so I'm kinda confused about what it really does. The book I am reading is OpenGL ES, a quick start guide.
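
    For readers skimming past, the usual convention is: w = 1 marks a position and w = 0 marks a direction (so translations do not affect directions), and a projection matrix is free to write other values into w because the GPU then performs the perspective divide, (x, y, z, w) -> (x/w, y/w, z/w), which is what produces foreshortening.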

    Read the article

  • Function for building an isosurface (a sphere cut by planes)

    - by GameDevEnthusiast
    I want to build an octree over a quarter of a sphere (for debugging and testing). The octree generator relies on the AIsosurface interface to compute the density and normal at any given point in space. For example, for a full sphere the corresponding code is: // returns <0 if the point is inside the solid virtual float GetDensity( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); Float3 v = Float3_Subtract( P, m_origin ); float l = Float3_LengthSquared( v ); float d = Float_Sqrt(l) - m_radius; return d; } // estimates the gradient at the given point virtual Float3 GetNormal( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); float d = this->AIsosurface::GetDensity( P ); float Nx = this->GetDensity( _x + 0.001f, _y, _z ) - d; float Ny = this->GetDensity( _x, _y + 0.001f, _z ) - d; float Nz = this->GetDensity( _x, _y, _z + 0.001f ) - d; Float3 N = Float3_Normalized( Float3_Set( Nx, Ny, Nz ) ); return N; } What is a nice and fast way to compute those values when the shape is bounded by a low number of half-spaces?
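
    One widely used trick, shown below in the same style as the question's code (Float3_Dot, Float_Max, m_planes and m_numPlanes are assumed helpers and members, not from the original): treat the cut sphere as a CSG intersection and return the maximum of the individual signed distances. The max of signed distance functions is only an approximation of the true distance near the cut edges, but it is usually good enough for octree and isosurface generation.

        // Sketch: density for "sphere intersected with half-spaces" as max() of the
        // individual signed distances. Plane normals are assumed to be unit length
        // and to point out of the solid.
        virtual float GetDensity( float _x, float _y, float _z ) const override
        {
            Float3 P = Float3_Set( _x, _y, _z );
            Float3 v = Float3_Subtract( P, m_origin );
            float d = Float_Sqrt( Float3_LengthSquared( v ) ) - m_radius;   // sphere distance
            for( int i = 0; i < m_numPlanes; i++ )
            {
                // signed distance to the plane dot(N, X) = m_planes[i].d
                float dp = Float3_Dot( P, m_planes[i].normal ) - m_planes[i].d;
                d = Float_Max( d, dp );                                     // CSG intersection
            }
            return d;
        }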

    Read the article

  • UnityEngine.Vector2 does not contain a definition for "Set"... using Futile

    - by FreshJays
    I am a bit lost. I am using Futile and I am just trying to run the demo, but I keep getting "UnityEngine.Vector2 does not contain a definition for 'Set'" in just one class. My using statements are: using System; using UnityEngine; using System.Collections; using System.Collections.Generic; When I look at the documentation, I see that Set is a function: http://docs.unity3d.com/Documentation/ScriptReference/Vector2.html I am using version 3.4.2 (in Futile it's happening in just the FAtlas class).

    Read the article

  • Cycling through ItemStacks while supplying data... lost [on hold]

    - by user3251606
    Ok so i am working on a plugin for my server that will open and inventory and when closed it will pass items to this class... object of this class is to cycle through the inventory and use a cfg file to define items and prices and then grab that info in a for loop and add it all up... heres what i have thus far... public void sell(Player p, Inventory inv) { ListIterator<ItemStack> it = inv.iterator(); double total = 0; for (ItemStack is : inv) { is = it.next(); if (is.getType() != null) { String type = is.getType().toString(); //short dur = is.getDurability(); String check = ChestSell.plugin.getConfig().getString(type); p.sendMessage("Item Type: " + type); if (check != null) { int amou = is.getAmount(); double value = ChestSell.plugin.getConfig().getDouble(type + ".price"); double tv = amou * value; p.sendMessage("Items in chest: Type " + type + " Ammount: " + amou + " Value: $" + tv); } //TODO Add return Items } } p.sendMessage("You got paid $" + total + " for your items!"); inv.clear(); }

    Read the article

  • What is the easiest and fastest way to display an SDL_Surface in a window with SDL2?

    - by Semmu
    I would like to have an SDL_Surface representing the contents of the window, just like in the old days with SDL1.2. What is the best and fastest way to do it in SDL2? What I found is that I need an SDL_Window, an SDL_Renderer for that window, an SDL_Texture to render, and an SDL_Surface to create a texture from. This seems a bit too much to me, since I just want to display a single image on the screen. Not to mention the impact on the performance. On my machine (Lenovo Y510p laptop) this whole procedure takes 9ms, without any memory allocation, only using pre-allocated variables and totally black SDL_Surface. Is there a way I could speed up things?
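
    For the record, SDL2 still supports the SDL 1.2-style software path with no renderer or texture objects involved; a minimal sketch, assuming `window` is an already-created SDL_Window* and `image` is the SDL_Surface to display:

        SDL_Surface* screen = SDL_GetWindowSurface(window);  // surface owned by the window
        SDL_BlitSurface(image, NULL, screen, NULL);          // copy pixels
        SDL_UpdateWindowSurface(window);                      // present to the screen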

    Read the article

  • libgdx - #iterator() cannot be used nested

    - by TimSim
    I'm getting this error when I try to check if any of the targets overlap each other: iterTargets = targets.iterator(); while (iterTargets.hasNext()) { Target target = iterTargets.next(); for (Target otherTarget:targets) { if (target.rectangle.overlaps(otherTarget.rectangle)) { // do something } } } So I can't do that? How am I supposed to check each member of an array to see if it overlaps any other member?

    Read the article

  • Artificial Neural Networks

    - by user1724140
    I have an Artificial Neural Network which needs to recognize 130 different types of moves, encoded in terms of 1s and 0s. Therefore the number of outputs I used is 8, so that all my patterns can be distinguished. However, with 8 outputs the number of possible patterns is 256, leaving 126 of them unused. Do these extra 126 patterns ruin my ANN's ability? Is there a better way that avoids these unused holes?

    Read the article

  • How do you deal with the monotony of certain tasks? [on hold]

    - by aaronmallen
    I love programming methods and functions. The if {}, while {}, etc. logic behind them is so much fun. I also love making commits, merging branches, and solving merge conflicts. Unfortunately, these activities usually require that I create classes, which I find tedious and monotonous. The simple act of defining properties gets in the way of me writing the logic for what to do with those properties. I can't be alone here; there has to be a part of coding that everyone dreads, or at least severely dislikes, compared to other parts of coding. How do you deal with the code-based tasks that you find tedious?

    Read the article

  • D20 Java engine

    - by javydreamercsw
    I'm trying to incorporate PCGen into my application to avoid reinventing the wheel for the D20 system. (Are there any other libraries out there?) In other words, I would like to use PCGen as a library and do things like: CRUD characters (and all related information), experience/level management. I don't need the GUI part of it, just the information to pass to my application. This is a scenario I can think of: 1. Load available custom classes 2. Create a new Character/NPC 3. Transfer the stats to my system 4. Player keeps playing, so I need to update experience 5. Save the player in my system and recreate it later. I'm trying to start by creating a character from PCClass, with no luck. Looking at the code, it seems it'll try to load it - I assume from a file generated by the GUI. Is there a way of bypassing the GUI for all this? I was looking for a tutorial of some sort, without luck. I was able to figure out how to use the dice within PCGen, but that's about it. Any ideas?

    Read the article

  • How to add two textures: one used as the background and another used on a rotating cube

    - by VampirEMufasa
    I am working in OpenGL ES 2.0. I am writing a demo for my project, and I load two PNG images as textures with libSOIL. Now I need to use one of them as the texture of my demo's background and the other as the texture of a rotating cube. In OpenGL ES 2.0 the texturing is done in the shader, but I don't know how to apply the different textures to the different places from a shader. Who can help me? Thank you very much!
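
    In case it helps other readers, the usual mechanics look like this (a sketch; the uniform names "u_background" and "u_cubeTexture" and the variables are invented): bind each texture to its own texture unit and point one sampler2D uniform at each unit, then the fragment shader decides which sampler to read. The background and the cube can also simply be drawn with different shaders or different uniform values.

        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, backgroundTex);
        glUniform1i(glGetUniformLocation(program, "u_background"), 0);   // sampler -> unit 0

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, cubeTex);
        glUniform1i(glGetUniformLocation(program, "u_cubeTexture"), 1);  // sampler -> unit 1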

    Read the article

  • Camera not working

    - by user17548
    I made a camera in DX9. To move forward I press the Up arrow. To rotate on the Y axis I use the mouse. When I perform these movements on their own the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time then the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere but not sure what and not sure where. Have tried various combinations without success so if anyone can point me in the right direction that would be great. Thanks. My code #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0) LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam ) { if( KEY_DOWN(VK_UP)) MovePlayer(D3DXVECTOR3(0, 0, -1.0f)); if( KEY_DOWN(VK_DOWN)) MovePlayer(D3DXVECTOR3(0, 0, 1.0f)); switch( msg ) { case WM_MOUSEMOVE: ProcessMouseInput(); } } void MovePlayer( D3DXVECTOR3 in_vec ) { D3DXMATRIX CameraRot; D3DXMatrixRotationY(&CameraRot,D3DXToRadian(AngleY)); D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&in_vec,&CameraRot); CameraPos += (m_timeElapsed * CameraRotTarget); } void ProcessMouseInput() { GetCursorPos( &CurrentMouseState ); if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y)) { int dx = CurrentMouseState.x - GameMouseState.x; int dy = CurrentMouseState.y - GameMouseState.y; AngleY+=m_timeElapsed*dx*7.0f; } GameMouseState = CurrentMouseState; // Set back to window center in Render function } VOID UpdateCamera() { D3DXVECTOR3 CameraOrigTarget(0, 0, -1); D3DXVECTOR3 CameraOrigUp(0, 1, 0); D3DXMATRIX CameraRot; D3DXMATRIX CameraRotX; D3DXMatrixRotationX(&CameraRotX,D3DXToRadian(AngleX)); D3DXMATRIX CameraRotY; D3DXMatrixRotationY(&CameraRotY,D3DXToRadian(AngleY)); CameraRot = CameraRotX * CameraRotY; D3DXVECTOR3 CameraRotTarget; D3DXVec3TransformNormal(&CameraRotTarget,&CameraOrigTarget,&CameraRot); D3DXVECTOR3 CameraTarget; CameraTarget = CameraPos + CameraRotTarget; D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f ); D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec ); g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView ); D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f ); g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj ); }
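
    As a general note on input handling rather than a diagnosis of this specific code: keyboard movement applied from the window procedure runs once per Windows message, so mixing it with per-message mouse handling changes how often movement is applied. A common pattern is to poll input exactly once per frame and let the per-frame elapsed-time scaling set the speed, roughly like this (a sketch reusing the question's MovePlayer):

        // Called once per frame from the update/render loop, not from MsgProc.
        void PollInput()
        {
            if (GetAsyncKeyState(VK_UP)   & 0x8000) MovePlayer(D3DXVECTOR3(0.0f, 0.0f, -1.0f));
            if (GetAsyncKeyState(VK_DOWN) & 0x8000) MovePlayer(D3DXVECTOR3(0.0f, 0.0f,  1.0f));
        }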

    Read the article
