Search Results

Search found 35689 results on 1428 pages for 'development mode'.

  • Developing for iOS on Linux

    - by Jay
    I am looking for an engine or library to develop a game for iOS on Linux. High level or low level, GUI or no GUI, it does not matter too much; I am really looking for anything. I'm not talking about deploying to iOS from Linux or anything like that. I just want to do the bulk of the work on Linux, with minimal changes required to run it on iOS. Edit: Yes, I do have access to a Mac, but it is limited, so I want to be able to work on the project on my regular Ubuntu box. Also, I am in the paid developer program, so I can deploy to iOS devices from the Mac.

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices, and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this:

        cbuffer ConstantBuffer : register(b0)
        {
            matrix World;
            matrix View;
            matrix Projection;
        }

        struct VIn
        {
            float4 position : POSITION;
            matrix instance : INSTANCE;
            float4 color    : COLOR;
        };

        struct VOut
        {
            float4 position : SV_POSITION;
            float4 color    : COLOR;
        };

        VOut VShader(VIn input)
        {
            VOut output;
            output.position = mul(input.position, input.instance);
            output.position = mul(output.position, View);
            output.position = mul(output.position, Projection);
            output.color = input.color;
            return output;
        }

    The input layout looks like this:

        var elements = new[]
        {
            new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
            new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
            new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0)
        };
        InputLayout = new InputLayout(device, signature, elements);

    The buffer initialization looks like this:

        public ModelDeviceData(Model model, Device device)
        {
            Model = model;
            var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices);
            var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer,
                Model.Instances.Select(m => m.WorldMatrix).ToArray());
            VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0);
            InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0);
            IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles);
        }

    The buffer creation helper method looks like this:

        public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct
        {
            var len = Utilities.SizeOf(items);
            var stream = new DataStream(len, true, true);
            foreach (var item in items)
                stream.Write(item);
            stream.Position = 0;
            var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags,
                CpuAccessFlags.None, ResourceOptionFlags.None, 0);
            return buffer;
        }

    The line that instantiates the InputLayout object throws this exception:

        HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.

    Note that the data for each model instance is simply an instance of SharpDX.Matrix.
    EDIT: Based on Tordin's answer, it seems like I have to modify my code like so:

        var elements = new[]
        {
            new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
            new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1),
            new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0)
        };

    and in the shader:

        struct VIn
        {
            float4 position  : POSITION;
            float4 instance0 : INSTANCE0;
            float4 instance1 : INSTANCE1;
            float4 instance2 : INSTANCE2;
            float4 instance3 : INSTANCE3;
            float4 color     : COLOR;
        };

        VOut VShader(VIn input)
        {
            VOut output;
            matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 };
            output.position = mul(input.position, world);
            output.position = mul(output.position, View);
            output.position = mul(output.position, Projection);
            output.color = input.color;
            return output;
        }

    However, I still get an exception.
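
    For reference, here is a sketch of how a per-instance matrix layout is typically declared in SharpDX. Two details matter: the four matrix rows need distinct aligned byte offsets (0, 16, 32, 48; the all-zero offsets above would overlap), and the instance elements must point at the input slot the instance buffer is actually bound to. The slot numbers below assume the vertex buffer is bound in slot 0 and the instance buffer in slot 1, matching the two VertexBufferBindings shown above; treat it as a starting point, not a guaranteed fix:

        var elements = new[]
        {
            // Slot 0: per-vertex data (float3 position at byte 0, float4 color at byte 12).
            new InputElement("POSITION", 0, Format.R32G32B32_Float,    0,  0, InputClassification.PerVertexData,   0),
            new InputElement("COLOR",    0, Format.R32G32B32A32_Float, 12, 0, InputClassification.PerVertexData,   0),

            // Slot 1: per-instance world matrix, one float4 row per element,
            // same semantic name with indices 0-3, rows 16 bytes apart.
            new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0,  1, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE", 1, Format.R32G32B32A32_Float, 16, 1, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE", 2, Format.R32G32B32A32_Float, 32, 1, InputClassification.PerInstanceData, 1),
            new InputElement("INSTANCE", 3, Format.R32G32B32A32_Float, 48, 1, InputClassification.PerInstanceData, 1),
        };

    With that layout, the HLSL input struct can declare the matrix as a single field (float4x4 instance : INSTANCE;), since a matrix input consumes four consecutive semantic indices; both buffers are then bound together before drawing.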

  • Softbody with complex geometry

    - by philipp
    I have modeled a handball, based on the tutorial here, with a custom texture. Now I am trying to animate this model with the reactor module as a soft body. For that I have watched and tried a lot of tutorials, and animating a simple sphere works fine. But if I try to use the model I have created, it either crashes Max or produces an animation that shows a crystal-like structure transforming itself into another crystal. Is it possible to animate this kind of complex geometry as a soft body, and am I just setting the values wrong? If yes, which are the important ones I should check? Thanks in advance! Greetings, philipp

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the phase of designing an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling to find a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice, since I am also concerned with different processes.

    Solution 1: Tie a process to a map location (like a map cell) and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically).

      Pros:
      - Easier to implement
      Cons:
      - Must divide the map into zones
      - Player reconnection when moving into a different zone is probably annoying
      - If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable
      - There shouldn't be any visible borders

    Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact.

    Solution 2: Spawn processes on demand, unrelated to a location. Have one special process keep track of all connected player handles, their locations, and the processes they're connected to. When a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node) and instructs their matching processes to relay the action.

      Pros:
      - Easy load balancing: spawn more processes
      - Avoids player reconnecting / borders between zones
      Cons:
      - Harder to implement and test
      - Additional steps of finding players and relaying events/actions to another process
      - If the player-location-process tracking process fails, all the others fail too

    I would like to hear if I'm missing something, or if I'm completely off track.

  • Geometry instancing in OpenGL ES 2.0

    - by seahorse
    I am planning to do geometry instancing in OpenGL ES 2.0. Basically, I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0? I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass this same mat4 three times for each vertex of the same triangle (since the model-view matrix remains constant across the vertices of a triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices * 16 floats * number of triangles). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as "per vertex" data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?

  • Separating entities from their actions or behaviours

    - by Jamie Dixon
    Hi everyone, I'm having a go at creating a very simple text-based game and am wondering what the standard design patterns are when it comes to entities (characters, sentient scenery) and the actions those entities can perform. As an example, I have an entity that is a 'person' with various properties such as age, gender, height, etc. This 'person' can also perform some actions, such as speaking, walking, jumping, flying, and so on. How would you separate the entity from the actions it can perform, and what are some common design patterns that solve this kind of problem?
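
    One common way to split the two (not the only one) is to keep the entity as a plain bag of properties and model each action as its own object behind a shared interface, essentially the Command pattern, which also plays well with component systems. A minimal sketch in C#; every name here is hypothetical:

        using System;
        using System.Collections.Generic;

        public interface IAction
        {
            string Name { get; }
            void Perform(Person actor);
        }

        public class Person
        {
            public int Age;
            public string Gender;
            public float Height;

            // Actions are attached at runtime, not hard-coded into the entity.
            private readonly Dictionary<string, IAction> actions = new Dictionary<string, IAction>();

            public void Learn(IAction action) => actions[action.Name] = action;

            public void Do(string name)
            {
                // The entity knows nothing about how any action works.
                if (actions.TryGetValue(name, out var action))
                    action.Perform(this);
            }
        }

        public class SpeakAction : IAction
        {
            public string Name => "speak";
            public void Perform(Person actor) => Console.WriteLine("The person speaks.");
        }

    Usage would be person.Learn(new SpeakAction()); person.Do("speak");, and new behaviours (walking, flying) become new IAction implementations rather than changes to the entity class.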

  • Collision library for bullet hell in Python

    - by darkfeline
    I am making a bullet hell game in Python and am looking for a suitable collision library, taking the following into consideration:

      - The library should do 2D polygon collision.
      - It should be very fast. As a bullet hell game, I expect to do collision checks between hundreds, likely thousands, of objects every frame at a consistent 60fps.
      - Good documentation
      - Permissive license (like MIT, not GPL)

    I am also considering writing my own library in C/C++ and wrapping it with Python ctypes in the event that no such library exists, though I do not have experience with collision detection algorithms, so I am not sure if this would be more trouble than it's worth. Could someone provide some guidance on this matter?

  • Performance of pixel shaders vs. SpriteBatch: XNA

    - by ashes999
    Precondition: I read this question/answer about using shaders, or SpriteBatch, to render and mark a sprite. I need to do something like that. I also have a 2D lighting proof of concept which I need to write. The way it will work will basically be something like this:

      - Draw all the sprites
      - Draw lighting gradients to create a lighting texture
      - Multiply/add the lighting texture to achieve different effects (I use multiple add/multiply passes of the lighting texture)

    My question is really about a generalization: can I say with certainty that pixel shaders are always faster than adding/multiplying textures with SpriteBatch? Or that adding/multiplying is always faster? Or, if it's not generalizable, how do I decide which approach to take, given that I can probably code either of them? (If it matters, I'm using MonoGame 3.0 beta for Windows games.)
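
    For the SpriteBatch half of the comparison, it's worth noting that the multiply and add passes need no pixel shader at all: a BlendState does the compositing. A sketch in XNA/MonoGame terms, assuming the lighting texture has already been rendered into a RenderTarget2D called lightMap (a hypothetical name):

        // Multiply: framebuffer = framebuffer * lightMap color.
        var multiplyBlend = new BlendState
        {
            ColorSourceBlend      = Blend.Zero,
            ColorDestinationBlend = Blend.SourceColor,
            AlphaSourceBlend      = Blend.Zero,
            AlphaDestinationBlend = Blend.SourceColor,
        };

        // 1) Draw all sprites normally (omitted).
        // 2) Composite the light map over the back buffer.
        spriteBatch.Begin(SpriteSortMode.Immediate, multiplyBlend);
        spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
        spriteBatch.End();

        // An additive pass uses the built-in state instead:
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Additive);
        spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
        spriteBatch.End();

    Which of the two approaches wins is hardware- and driver-dependent, so profiling both on the target machine is the only reliable answer to the "always faster" question.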

  • Pygame Tile Based Character movement speed

    - by Ryan
    Thanks for taking the time to read this. Right now I'm making a really basic tile-based game. The map is a large grid of 16x16 tiles, and the character image is 16x16 as well. My character has its own class that extends the sprite class, and its x and y position is saved in terms of the tile position. Note that I am fairly new to Pygame. My question is: I am planning to have character movement restricted to one tile at a time, and I'm not sure how to make it so that, even if the player hits a directional key (WASD or arrow keys) dozens of times quickly, it will only move from tile to tile at a certain speed. How could I implement this generally with Pygame? (Similar to the movement in games like Pokemon or NexusTK.) Edit: I should probably note that I want the player to only be able to end a movement on a tile. He couldn't stop moving halfway between two tiles, for example. Thanks for your time! Ryan

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview: I'm currently using a method which, it has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm now looking into another method, based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position. This is how I'm obtaining an interpolation value:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            interpolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(interpolation);
        }

    I am then applying it like so (in my rendering call):

        render(float interpolation) {
            spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
            spritePreviousX = spriteScreenX; // update and store this for next time
        }

    Results: This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work, and I honestly can't find any decent resources which explain this in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward). And yet this (interpolation) is moving the sprite back, so how can this produce smooth results? Any advice on this would be very much appreciated.

    Edit: I've implemented the code from OriginalDaemon's answer like so:

        @Override
        public void onDrawFrame(GL10 gl) {
            newTime = System.currentTimeMillis() * 0.001;
            frameTime = newTime - currentTime;
            if (frameTime > (dt * 25))
                frameTime = (dt * 25);
            currentTime = newTime;
            accumulator += frameTime;
            while (accumulator >= dt) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                previousState = currentState;
                t += dt;
                accumulator -= dt;
            }
            interpolation = (float) (accumulator / dt);
            render();
        }

    Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop); however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect (the interpolated position does appear to be in between the previous and current positions). However, the sprites are most definitely choppy when the render() skipping happens.
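
    As a point of comparison, in the usual fixed-timestep pattern (Gaffer-style, like OriginalDaemon's loop) the previous state is snapshotted inside the update step, and the render step only reads it; the render() above writes the interpolated value back into spritePreviousX, which collapses the two states it is meant to blend between. A minimal sketch of the intended separation, shown in C# with illustrative names:

        class InterpolatedSprite
        {
            float previousX, currentX;
            const float VelocityX = 120f; // pixels per second, illustrative only

            // Called at the fixed rate, inside the accumulator loop (dt in seconds).
            public void UpdateLogic(float dt)
            {
                previousX = currentX;       // snapshot BEFORE advancing
                currentX += VelocityX * dt; // advance the simulation
            }

            // Called once per rendered frame with alpha = accumulator / dt, in [0, 1).
            public float DrawPositionX(float alpha)
            {
                // Pure read: blend the last two simulated states and
                // never write the result back into the simulation.
                return previousX + (currentX - previousX) * alpha;
            }
        }

    Rendering one step "behind" the newest state is what makes the motion smooth: the drawn position always lies between two states that really were simulated.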

  • Libgdx - 2D Mesh rendering overlap glitch

    - by user46858
    I am trying to render a 2D circle segment mesh (a quarter circle) using libgdx/OpenGL ES 2.0, but I seem to be getting an overlapping issue, as seen in the picture attached. I can't seem to find the cause of the problem, but the overlapping disappears/reappears if I drag and resize the window to random sizes. The problem occurs on both PC and Android. The strange thing is that the first two segments at least don't seem to cause any overlapping, only the third and/or fourth segment, even though they are all rendered using the same mesh object. I have spent ages trying to find the cause of the problem before posting here for help, so ANY help/advice in finding the cause of this problem would be really appreciated.

        public class MyGdxGame extends Game {
            private SpriteBatch batch;
            private Texture texture;
            private OrthographicCamera myCamera;
            private float w;
            private float h;
            private ShaderProgram circleSegShader;
            private Mesh circleScaleSegMesh;
            private Stage stage;
            private float TotalSegments;
            Vector3 virtualres;

            @Override
            public void create() {
                w = Gdx.graphics.getWidth();
                h = Gdx.graphics.getHeight();
                batch = new SpriteBatch();
                ViewPortsize = new Vector2();
                TotalSegments = 4.0f;
                virtualres = new Vector3(1280.0f, 720.0f, 0.0f);
                myCamera = new OrthographicCamera();
                myCamera.setToOrtho(false, w, h);
                texture = new Texture(Gdx.files.internal("data/libgdx.png"));
                texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
                circleScaleSegMesh = createCircleMesh_V3(0.0f, 0.0f, 200.0f, 30.0f, 3, (360.0f / TotalSegments));
                circleSegShader = loadShaderFromFile(new String("circleseg.vert"), new String("circleseg.frag"));
                shaderProgram.pedantic = false;
                stage = new Stage();
                stage.setViewport(new ExtendViewport(w, h));
                Gdx.input.setInputProcessor(stage);
            }

            @Override
            public void render() {
                // ...
                renderInit();
                renderCircleScaledSegment();
            }

            @Override
            public void resize(int width, int height) {
                stage.getViewport().update(width, height, true);
                myCamera.position.set(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                myCamera.update();
            }

            public void renderInit() {
                Gdx.gl20.glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
                Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
                batch.setShader(null);
                batch.setProjectionMatrix(myCamera.combined);
            }

            public void renderCircleScaledSegment() {
                Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST);
                Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
                Gdx.gl20.glEnable(GL20.GL_BLEND);
                batch.begin();
                circleSegShader.begin();
                Matrix4 modelMatrix = new Matrix4();
                Matrix4 cameraMatrix = new Matrix4();
                Matrix4 cameraMatrix2 = new Matrix4();
                Matrix4 cameraMatrix3 = new Matrix4();
                Matrix4 cameraMatrix4 = new Matrix4();

                cameraMatrix = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f))
                        .trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix.mul(modelMatrix);

                cameraMatrix2 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (360.0f / TotalSegments))
                        .trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix2.mul(modelMatrix);

                cameraMatrix3 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (2 * (360.0f / TotalSegments)))
                        .trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix3.mul(modelMatrix);

                cameraMatrix4 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (3 * (360.0f / TotalSegments)))
                        .trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix4.mul(modelMatrix);

                Vector3 box2dpos = new Vector3(0.0f, 0.0f, 0.0f);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix2);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix3);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix4);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.end();
                batch.flush();
                batch.end();
                Gdx.gl20.glDisable(GL20.GL_DEPTH_TEST);
                Gdx.gl20.glDisable(GL20.GL_BLEND);
            }

            public Mesh createCircleMesh_V3(float cx, float cy, float r_out, float r_in, int num_segments, float segmentSizeDegrees) {
                float theta = (float) (2.0f * MathUtils.PI / (num_segments * (360.0f / segmentSizeDegrees)));
                float c = MathUtils.cos(theta); // precalculate the sine and cosine
                float s = MathUtils.sin(theta);
                float t, t2;
                float x = r_out; // we start at angle = 0
                float y = 0;
                float x2 = r_in; // we start at angle = 0
                float y2 = 0;
                float[] meshCoords = new float[num_segments * 2 * 3 * 7];
                int arrayIndex = 0;
                // array for triangles without indices
                for (int ii = 0; ii < num_segments; ii++) {
                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    t = x;
                    x = c * x - s * y;
                    y = s * t + c * y;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    t2 = x2;
                    x2 = c * x2 - s * y2;
                    y2 = s * t2 + c * y2;

                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;
                }

                Mesh myMesh = new Mesh(VertexDataType.VertexArray, false, meshCoords.length, 0,
                        new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"),
                        new VertexAttribute(VertexAttributes.Usage.Color, 4, "a_color"));
                myMesh.setVertices(meshCoords);
                return myMesh;
            }
        }

  • Monogame/SharpDX - Shader parameters missing

    - by Layoric
    I am currently working on a simple game that I am building in Windows 8 using MonoGame (develop3d). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter, as it appears to be missing. (Edit: I have also tried 'Texture2D' and using it with register(t0); still no luck.) I'm not well versed in writing shaders, so this might be caused by a more obvious problem. I have debugged through MonoGame's content processor to see how this shader is being parsed; all the non-texture parameters are there and look to be loading correctly. (Edit: This seems to go back to the D3D compiler.) Shader code below:

        #include "PPVertexShader.fxh"

        float2 lightScreenPosition;
        float4x4 matVP;
        float2 halfPixel;
        float SunSize;
        texture flare;

        sampler2D Scene : register(s0)
        {
            AddressU = Clamp;
            AddressV = Clamp;
        };

        sampler Flare = sampler_state
        {
            Texture = (flare);
            AddressU = CLAMP;
            AddressV = CLAMP;
        };

        float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
        {
            texCoord -= halfPixel;
            // Get the scene
            float4 col = 0;
            // Find the sun's position in the world and map it to the screen space.
            float2 coord;
            float size = SunSize / 1;
            float2 center = lightScreenPosition;
            coord = .5 - (texCoord - center) / size * .5;
            col += (pow(tex2D(Flare, coord), 2) * 1) * 2;
            return col * tex2D(Scene, texCoord);
        }

        technique LightSourceMask
        {
            pass p0
            {
                VertexShader = compile vs_4_0 VertexShaderFunction();
                PixelShader = compile ps_4_0 LightSourceMaskPS();
            }
        }

    I've removed default values, as they are currently not supported in MonoGame, and also changed ps and vs to 4_0 instead of 2_0. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project) I find that the 'flare' parameter does not exist. All others seem to be getting created fine. Any help would be appreciated.

    Update 1: I have discovered that the SharpDX D3D compiler is what seems to be ignoring this parameter (perhaps by design?). The ConstantBufferDescription.VariableCount seems not to count the texture variable.

    Update 2: The SharpDX function 'GetConstantBuffer(int index)' returns the parameters (minus textures), which makes it impossible to set values for these variables within the shader. Does anyone know if this is normal for DX11 / Shader Model 4.0? Or am I missing something else?

  • Dynamic Memory Allocation and Memory Management

    - by Bunkai.Satori
    In an average game, there are hundreds or maybe thousands of objects in the scene. Is it completely correct to allocate memory for all objects, including gun shots (bullets), dynamically via the default new()? Should I create a memory pool for dynamic allocation, or is there no need to bother with this? What if the target platform is a mobile device? Is there a need for a memory manager in a mobile game? Thank you. Language used: C++; currently developed under Windows, but planned to be ported later.
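
    For the pooling option, the core idea is small enough to sketch. Shown in C# for consistency with the other examples on this page (the question targets C++, where the same shape is usually built on a free list or placement new); all names are illustrative:

        using System.Collections.Generic;

        // A minimal fixed-capacity pool: short-lived objects such as bullets
        // are recycled instead of being allocated and freed on every shot.
        class Pool<T> where T : new()
        {
            private readonly Stack<T> free = new Stack<T>();

            public Pool(int capacity)
            {
                // Pay the allocation cost once, up front.
                for (int i = 0; i < capacity; i++) free.Push(new T());
            }

            public T Acquire() => free.Count > 0 ? free.Pop() : new T(); // grow only if exhausted

            public void Release(T item) => free.Push(item); // caller resets the object's state
        }

    A game would create something like new Pool<Bullet>(512) at load time; whether that effort pays off on mobile depends on allocation frequency, so measuring first is the sane default.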

  • “Advanced” talk to text program [on hold]

    - by Rocky
    So, I have asked this question on 3 sites before without getting a good answer. Basically, what I need is:

      - being able to put recorded voice in a file (preferably .txt)
      - automatic recording when saying a keyword
      - automatic stop of the recording after a bit of silence

    If you have any idea how this is possible I would be very happy :) I tried Dragon NaturallySpeaking before, as someone said it would work (it did not), so unless you know how that is possible, don't say it ;) (Not sure what site to ask this on.)

  • Isometric drawing "Not Tile Stuff" on isometric map?

    - by Icebone1000
    So I got my isometric renderer working; it can draw diamond or jagged maps. Now I want to move on: how do I draw characters/objects on it in an optimal way? What I'm doing now, as one can imagine, is traversing my grid (map) and drawing the tiles in an order such that alpha blending works correctly. So anything I draw on this map must be drawn at the same time the map is being drawn, which sucks a lot and screws up your very modular map drawer, because now everything in the game (except the HUD) must be included in the drawer. I was thinking about the best approach to this. Comparing the positions of all objects (non-tile stuff) on the grid against the current tile being drawn seems stupid. Would it be better to add an id ON the grid (map)? That also seems terrible, because objects can move freely, not in per-tile steps (an object can occupy 2 tiles if it's between them, etc.). Don't know if it matters, but my grid is 3D, so it's not a plane with objects popping out; it's a bunch of piled cubes.
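
    One way to keep the map drawer modular (a judgment call, sketched in C# with hypothetical names) is to have it consume a flat, per-frame list of drawables: tiles and free-moving objects alike carry a depth key, get sorted once, and are drawn in order, so nothing has to live inside the grid or snap to whole tiles:

        using System.Collections.Generic;

        struct Drawable
        {
            public float X, Y, Z;   // grid coordinates; fractional values are fine
            public int SpriteId;    // whatever the renderer needs to draw it

            // Painter's key for a diamond iso view: cells farther from the camera
            // draw first, and height breaks ties. The exact formula depends on
            // your projection, so treat this as an assumption to tune.
            public float DepthKey => (X + Y) + 0.001f * Z;
        }

        static class IsoRenderer
        {
            public static void RenderFrame(List<Drawable> tiles, List<Drawable> objects)
            {
                var frame = new List<Drawable>(tiles.Count + objects.Count);
                frame.AddRange(tiles);    // emitted by the map drawer
                frame.AddRange(objects);  // characters, items, free positions
                frame.Sort((a, b) => a.DepthKey.CompareTo(b.DepthKey));
                // foreach (var d in frame) Draw(d);  // Draw() is the renderer's own call
            }
        }

    The sort costs O(n log n) per frame on the visible set, which is usually cheap next to the draw calls themselves, and alpha blending still works because the order is globally correct.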

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map. In my vertex shader I am trying to offset the position on the Y axis based upon the colour of the heightmap, white vertices being higher than black ones.

        //Vertex Shader Code
        #version 330

        uniform mat4 modelMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 projectionMatrix;
        uniform sampler2D heightmap;

        layout (location=0) in vec4 vertexPos;
        layout (location=1) in vec4 vertexColour;
        layout (location=3) in vec2 vertexTextureCoord;
        layout (location=4) in float offset;

        out vec4 fragCol;
        out vec4 fragPos;
        out vec2 fragTex;

        void main()
        {
            // Retrieve the current pixel's colour
            vec4 hmColour = texture(heightmap, vertexTextureCoord);

            // Offset the y position by the value of the current texel's colour value?
            vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

            // Final position
            gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

            // Data sent to the fragment shader
            fragCol = vertexColour;
            fragPos = vertexPos;
            fragTex = vertexTextureCoord;
        }

    However, the code I have produced only creates a grid with none of the y vertices higher than any others. This is the C++ code that generates the grid and texture coordinates, which I believe to be correct, as the texture is mapped to the grid, hence the white blob in the middle. (The grid lines are generated in the fragment shader, sorry for any confusion.) I have tried multiplying the r value of hmColour by 1000; unfortunately that had no effect. The only other problem it could be is that the texture coordinate data is incorrect?

        for (int z = 0; z < MAP_Z; z++)
        {
            for (int x = 0; x < MAP_X; x++)
            {
                // Generate vertex buffer
                vertexData[iVertex++] = float(x) * MAP_X;
                vertexData[iVertex++] = 0;
                vertexData[iVertex++] = -(float)(z) * MAP_Z;

                // Colour buffer (NOT NEEDED)
                colourData[iColour++] = 255.0f; // R
                colourData[iColour++] = 1.0f;   // G
                colourData[iColour++] = 0.0f;   // B

                // Texture buffer
                textureData[iTexture++] = (float)x * (1.0f / MAP_X);
                textureData[iTexture++] = (float)z * (1.0f / MAP_Z);
            }
        }

    The heightmap texture I am trying to use appears like so (without grid lines). This is the corresponding fragment shader:

        // Fragment Shader Code
        #version 330

        uniform sampler2D hmTexture;

        layout (location=0) out vec4 fragColour;

        in vec2 fragTex;
        in vec4 pos;

        void main(void)
        {
            vec2 line = fragTex * 32;
            // Without grid lines
            fragColour = texture(hmTexture, fragTex);
            // With grid lines:
            // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0),
            //       smoothstep(0.05, fract(line.y), 0.99) * smoothstep(0.05, fract(line.x), 0.99));
        }

  • How to handle Nifty initialization in a Slick2D state-based game?

    - by nathan
    I'm using Slick2D and Nifty GUI. I decided to use a state-based approach for my game, and since I want to use Nifty GUI, I use the class NiftyStateBasedGame for the main game and NiftyOverlayBasicGameState for the states. As the description says, I'm supposed to initialize the GUI in the initGameAndGUI method of my states, no problem:

        @Override
        protected void initGameAndGUI(GameContainer gc, StateBasedGame sbg) throws SlickException {
            initNifty(gc, sbg);
        }

    It works great when I have only one state, but if I make a call to initNifty several times from different states, it raises the following exception:

        org.bushe.swing.event.EventServiceExistsException: An event service by the name NiftyEventBus already exists. Perhaps multiple threads tried to create a service about the same time?
            at org.bushe.swing.event.EventServiceLocator.setEventService(EventServiceLocator.java:123)
            at de.lessvoid.nifty.Nifty.initalizeEventBus(Nifty.java:221)
            at de.lessvoid.nifty.Nifty.initialize(Nifty.java:201)
            at de.lessvoid.nifty.Nifty.<init>(Nifty.java:142)
            at de.lessvoid.nifty.slick2d.NiftyCarrier.initNifty(NiftyCarrier.java:94)
            at de.lessvoid.nifty.slick2d.NiftyOverlayBasicGameState.initNifty(NiftyOverlayBasicGameState.java:332)
            at de.lessvoid.nifty.slick2d.NiftyOverlayBasicGameState.initNifty(NiftyOverlayBasicGameState.java:299)
            at de.lessvoid.nifty.slick2d.NiftyOverlayBasicGameState.initNifty(NiftyOverlayBasicGameState.java:280)
            at de.lessvoid.nifty.slick2d.NiftyOverlayBasicGameState.initNifty(NiftyOverlayBasicGameState.java:264)

    The initializeEventBus that raises the exception is called from the Nifty constructor, and a new Nifty object is created within the initNifty method:

        public void initNifty(
                final SlickRenderDevice renderDevice,
                final SlickSoundDevice soundDevice,
                final SlickInputSystem inputSystem,
                final TimeProvider timeProvider) {
            if (isInitialized()) {
                throw new IllegalStateException("The Nifty-GUI was already initialized. Its illegal to do so twice.");
            }
            final InputSystem activeInputSystem;
            if (relayInputSystem == null) {
                activeInputSystem = inputSystem;
            } else {
                activeInputSystem = relayInputSystem;
                relayInputSystem.setTargetInputSystem(inputSystem);
            }
            nifty = new Nifty(renderDevice, soundDevice, activeInputSystem, timeProvider);
        }

    Is this a bug in the Nifty for Slick2D implementation, or am I missing something? How am I supposed to handle Nifty initialization over multiple states?

  • Django & google openid authentication with socialauth

    - by Zayatzz
    Hello, I am trying to use django-socialauth (http://github.com/uswaretech/Django-Socialauth) to authenticate users for my Django project. This is the first time I've worked with OpenID, and I've had to figure out how exactly OpenID works. I have more or less understood it by now, but there are a few things that elude me. The authentication process starts when the request is put together in django-socialauth.openid_consumer.views.begin. I can see that the outgoing authentication request is more or less like this:

        https://www.google.com/accounts/o8/ud?openid.assoc_handle=AOQobUckRThPUj3K1byG280Aze-dnfc9Iu6AEYaBwvHE11G0zy8kY8GZ&
        openid.ax.if_available=fname&
        openid.ax.mode=fetch_request&
        openid.ax.required=email&
        openid.ax.type.email=http://axschema.org/contact/email&
        openid.ax.type.fname=http://example.com/schema/fullname&
        openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select&
        openid.identity=http://specs.openid.net/auth/2.0/identifier_select&
        openid.mode=checkid_setup&
        openid.ns=http://specs.openid.net/auth/2.0&
        openid.ns.ax=http://openid.net/srv/ax/1.0&
        openid.ns.sreg=http://openid.net/extensions/sreg/1.1&
        openid.realm=http://localhost/&
        openid.return_to=http://localhost/social/gmail_login/complete/?janrain_nonce=2010-03-20T11%3A19%3A44ZPZCjNc&
        openid.sreg.optional=postcode,country,nickname,email

    This is a lot like the 2nd example here: http://code.google.com/apis/accounts/docs/OpenID.html#Samples. The problem is that the response I get back is nothing like the corresponding example from code.google.com (look at the 3rd example in the example responses). The response dict I get is like this:

        {
            'openid.op_endpoint': 'https://www.google.com/accounts/o8/ud',
            'openid.sig': 'QWMa4x4ruMUvSCfLwKV6CZRuo0E=',
            'openid.ext1.type.email': 'http://axschema.org/contact/email',
            'openid.return_to': 'http://localhost/social/gmail_login/complete/?janrain_nonce=2010-03-20T17%3A54%3A06ZHV4cqh',
            'janrain_nonce': '2010-03-20T17:54:06ZHV4cqh',
            'openid.response_nonce': '2010-03-20T17:54:06ZdC5mMu9M_6O4pw',
            'openid.claimed_id': 'https://www.google.com/accounts/o8/id?id=AItOghawkFz0aNzk91vaQWhD-DxRJo6sS09RwM3SE',
            'openid.mode': 'id_res',
            'openid.ns.ext1': 'http://openid.net/srv/ax/1.0',
            'openid.signed': 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email',
            'openid.ext1.value.email': '[email protected]',
            'openid.assoc_handle': 'AOQobUfssTJ2IxRlxrIvU4Xg8HHQKKTEuqwGxvwwuPR5rNvag0elGlYL',
            'openid.ns': 'http://specs.openid.net/auth/2.0',
            'openid.identity': 'https://www.google.com/accounts/o8/id?id=AItOawkghgfhf1FkvaQWhD-DxRJo6sS09RwMKjASE',
            'openid.ext1.mode': 'fetch_response'
        }

    socialauth itself has been built to accept my email address this way:

        elif request.openid and request.openid.ax:
            email = request.openid.ax.get('email')

    And obviously this fails. The reason I am asking all this is: am I doing something wrong, so that my outgoing request is wrong? Or am I doing everything correctly, and should I change the socialauth module to accept info in the new way and commit the change? Alan

  • Blender Object Appearing Gray when all Lights are Off

    - by celestialorb
    I have an issue with Blender where, when I turn my only light (a sun lamp) off and render the image, my object appears gray rather than black (and thus should not appear to the camera). I can't figure out why this is happening. Here's what I just did in my scene: added a new UV Sphere mesh (to make a total of two spheres), made it visible to the camera, turned off the sun lamp (by setting its energy to 0), and rendered. The result I obtained is below. I discovered this when attempting to render the first sphere with a material/texture on it, and it was too bright. The materials on the spheres (which are different) are very basic; there's no emit, and diffuse and specular are at default values. Could there be an issue with the way my camera is set up? Thanks in advance!

  • 3D collision detection with meshes using only raycasting?

    - by Nick
    I'm building a game using WebGL and Three.js, and so far I have a terrain with a guy walking on it. I simply cast a ray downwards to know the terrain height. How can I do this for other 3D objects, like the inside of a house? Is this possible by casting many rays in every direction of the player? If not, I would like to know how I can achieve the simplest collision detection possible for other meshes. Do you have to cast a ray to every triangle in every mesh nearby?

  • How to wire finite state machine into component-based architecture?

    - by Pup
    State machines seem to cause harmful dependencies in component-based architectures. How, specifically, is communication handled between a state machine and the components that carry out state-related behavior?

    Where I'm at:

      - I'm new to component-based architectures.
      - I'm making a fighting game, although I don't think that should matter.
      - I envision my state machine being used to toggle states like "crouching", "dashing", "blocking", etc.
      - I've found this state-management technique to be the most natural system for a component-based architecture, but it conflicts with techniques I've read about. "Dynamic Game Object Component System for Mutable Behavior Characters" suggests that all components activate/deactivate themselves by continually checking a condition for activation. I think that actions like "running" or "walking" make sense as states, which is in disagreement with the accepted response to "finite state machine used in mario like platform game". I've found "How to implement behavior in a component-based game architecture?" useful but ambiguous: it suggests having a separate component that contains nothing but a state machine. But this necessitates some kind of coupling between the state machine component and nearly all the other components, and I don't understand how this coupling should be handled.

    These are some guesses:

      A. Components depend on the state machine: components receive a reference to the state machine component's getState(), which returns an enumeration constant. Components update themselves regularly and check this as needed.
      B. The state machine depends on components: the state machine component receives references to all the components it's monitoring. It queries their getState() methods to see where they're at.
      C. Some abstraction between them: use an event hub? Command pattern? (A sketch of this appears below.)
      D. Separate state objects that reference components: the State pattern is used. Separate state objects are created, which activate/deactivate a set of components. The state machine switches between state objects.

    I'm looking at components as implementations of aspects. They do everything that's needed internally to make that aspect happen. It seems like components should function on their own, without relying on other components. I know some dependencies are necessary, but state machines seem to want to control all of my components.
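
    The sketch promised above, for guess C: an event hub keeps the coupling one-directional. The state-machine component publishes transitions, interested components subscribe, and neither side holds a reference to the other. A bare-bones C# sketch with hypothetical names:

        using System;
        using System.Collections.Generic;

        enum FighterState { Idle, Crouching, Dashing, Blocking }

        // The hub is the only shared dependency.
        class EventHub
        {
            private readonly List<Action<FighterState, FighterState>> listeners =
                new List<Action<FighterState, FighterState>>();

            public void Subscribe(Action<FighterState, FighterState> listener) => listeners.Add(listener);

            public void Publish(FighterState from, FighterState to)
            {
                foreach (var listener in listeners) listener(from, to);
            }
        }

        // Owns the transition rules; knows nothing about other components.
        class StateMachineComponent
        {
            private readonly EventHub hub;
            public FighterState Current { get; private set; } = FighterState.Idle;

            public StateMachineComponent(EventHub hub) { this.hub = hub; }

            public void RequestTransition(FighterState next)
            {
                if (next == Current) return; // a real machine validates transitions here
                var previous = Current;
                Current = next;
                hub.Publish(previous, next);
            }
        }

        // A behavior component reacts to transitions without referencing the machine.
        class DashMovementComponent
        {
            public bool Enabled { get; private set; }

            public DashMovementComponent(EventHub hub) =>
                hub.Subscribe((from, to) => Enabled = (to == FighterState.Dashing));
        }

    Guess D layers the State pattern on top of this by letting each state object publish which components it enables.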

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
    I've recently gotten into XNA and must say I love it. As sort of a hello-world game I decided to create the classic game "Snake". The 90-degree version was very simple and easy to implement. But as I try to make a version of it that allows 360-degree rotation using the left and right arrows, I've run into a problem. What I'm doing now stems from the 90-degree version: iterating through each snake body part, beginning at the tail and ending right before the head. This works great when moving every 100 milliseconds. The problem is that it makes for a choppy style of gameplay, as technically the game progresses at only 6 fps rather than its potential 60. I would like to move the snake every game loop, but unfortunately, because the snake moves at the rate of its head's size, it goes way too fast. This would mean that the head would need to move in a much smaller increment, such as (2, 2) in its direction, rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school, I think I've been thinking too hard about how to accomplish this. It's probably a simple solution; I'm just not catching it. Here's some pseudocode for what I've tried, based on what makes sense to me. I can't really think of another way to do it.

        for (int i = SnakeLength - 1; i > 0; i--)
        {
            current = SnakePart[i], next = SnakePart[i - 1];
            current.x = next.x - (current.width * cos(next.angle));
            current.y = next.y - (current.height * sin(next.angle));
            current.angle = next.angle;
        }
        SnakeHead.x += cos(SnakeAngle) * SnakeSpeed;
        SnakeHead.y += sin(SnakeAngle) * SnakeSpeed;

    This produces something like this: Code in Action. As you can see, each part always stays right behind the head and doesn't make a "trail" effect. A perfect example of what I'm going for can be found here: Data Worm. Not the viewport rotation, but the trailing effect of the triangles. Thanks for any help!
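
    The usual way to get the Data Worm-style trail (a sketch, not the only approach) is to stop deriving each part from its neighbour's angle and instead record the head's position every update into a history buffer; each body part then sits a fixed number of samples back along that history, so it literally retraces the head's path. In XNA-flavoured C#, with the spacing constant chosen as an assumption so parts sit one head-width (32 px) apart at speed 2:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class Snake
        {
            private readonly List<Vector2> history = new List<Vector2>(); // newest sample last
            public Vector2 Head;
            public float Angle;
            public const float Speed = 2f;  // small per-update step
            public const int Spacing = 16;  // samples between parts: 16 * 2 px = 32 px
            public int PartCount = 10;

            public void Update()
            {
                Head += new Vector2((float)Math.Cos(Angle), (float)Math.Sin(Angle)) * Speed;
                history.Add(Head);

                // Keep only as much history as the whole body needs.
                int needed = PartCount * Spacing + 1;
                if (history.Count > needed)
                    history.RemoveRange(0, history.Count - needed);
            }

            // Body part i sits i * Spacing samples behind the head.
            public Vector2 PartPosition(int i)
            {
                int index = history.Count - 1 - i * Spacing;
                return history[Math.Max(index, 0)];
            }
        }

    Because every part replays the head's exact past positions, turning produces the trailing curve rather than the rigid follow-the-leader line the pseudocode above yields.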

  • Converting 2D coordinates from multiple viewpoints into 3D coordinates

    - by Kirk Smith
    Here's the situation. I've got a set of 2D coordinates that specify a point on an image. These 2D coordinates relate to an event that happened in a 3D space (a video game). I have 5 images with the same event point on them, so I have 5 sets of 2D coordinates for a single 3D coordinate. I've tried everything I can think of to translate these 2D coordinates into 3D coordinates, but the math just escapes me. I have a good estimate of the coordinates from which each image was taken; they're not perfect, but they're close. I tried simplifying this by opening up Cinema 4D, a 3D modeling application: I placed 5 cameras at the coordinates where the pictures were taken, lined up the pictures with the event points for each one, and tried to find a link, but nothing was forthcoming. I know it's a math question, but like I said, I just can't get it. I took physics in high school 6 years ago, but we didn't deal with a whole lot of this sort of thing. Any help will be very much appreciated; I've been thinking about it for quite a while and I just can't come up with anything.
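
    For the record, this is the classic triangulation problem: each image turns the event pixel into a ray in world space, and the 3D point is wherever the five rays (nearly) meet. Assuming each camera position $c_i$ and a unit direction $d_i$ from that camera toward the event pixel can be recovered (the direction needs the camera's orientation and field of view, not just its position), the least-squares meeting point has a closed form:

        p^{*} = \arg\min_{p} \sum_{i=1}^{5} \bigl\| (I - d_i d_i^{\top})(p - c_i) \bigr\|^{2}
        \quad\Longrightarrow\quad
        p^{*} = \Bigl( \sum_{i=1}^{5} (I - d_i d_i^{\top}) \Bigr)^{-1} \sum_{i=1}^{5} (I - d_i d_i^{\top})\, c_i

    Here $(I - d_i d_i^{\top})$ projects onto the plane perpendicular to ray $i$, so each term measures the squared distance from $p$ to that ray. The $3 \times 3$ sum is invertible as long as the rays are not all parallel, and imperfect camera estimates simply make $p^{*}$ a best fit rather than an exact intersection.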

  • How do I limit the game loop?

    - by user1758938
    How do I make a game update at a consistent speed? For example, this would loop too fast:

        while (true)
        {
            GetInput();
            Render();
        }

    And this just won't work (hard to explain):

        while (true)
        {
            GetInput();
            Render();
            Sleep(16);
        }

    How do I sync my game to a certain framerate and still have input and update functions running at a consistent rate?
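
    The standard answer is a fixed-timestep loop: poll input and render as fast as the machine allows, but advance the game state in fixed slices, carrying any remainder in an accumulator. A minimal C#-style sketch built around the question's own GetInput/Render calls plus a hypothetical Update(dt) step:

        using System.Diagnostics;

        var clock = Stopwatch.StartNew();
        const double Dt = 1.0 / 60.0;   // fixed simulation step: 60 updates per second
        double previous = clock.Elapsed.TotalSeconds;
        double accumulator = 0.0;

        while (true)
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previous;
            previous = now;

            GetInput();                  // every frame, so input stays responsive

            while (accumulator >= Dt)    // run 0..n fixed updates to catch up
            {
                Update(Dt);              // game logic always sees the same step size
                accumulator -= Dt;
            }

            Render();                    // as often as the display allows
        }

    Unlike Sleep(16), this never drifts: if a frame takes too long, the inner loop simply runs more than once and the simulation stays on schedule.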

  • Exporting .FBX model into XNA - unorthogonal bones

    - by Sweta Dwivedi
    I created a butterfly model in 3ds Max with some basic animation; however, when trying to export it to the .FBX format I get the following warning. Any idea how I can transform the wings to be orthogonal?

        One or more objects in the scene has local axes that are not perpendicular to each other (non-orthogonal). The FBX plug-in only supports orthogonal (or perpendicular) axes and will not correctly import or export any transformations that involve non-perpendicular local axes. This can create an inaccurate appearance with the affected objects:
        - Right.Wing

    I have attached a picture for reference.
