Search Results


  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It was frequently used in animated short subjects, such as the Looney Tunes and Merrie Melodies cartoons, to signify the end of a story. When used this way, the iris wipe may be centered on a particular focal point and can serve as a device for a "parting shot" joke, a fourth-wall-breaking wink by a character, or other purposes. (Example from flasheff.com.) Your answer may or may not include a code sample; a language-agnostic explanation is considered enough.
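
    A minimal software sketch of one way to do it (assuming a raw 32-bit RGBA framebuffer you can write to; the buffer layout and the linear easing are assumptions, not part of the question):

        #include <cmath>
        #include <cstdint>

        // Black out every pixel outside a circle whose radius shrinks as t goes 0 -> 1.
        // Centering the circle on a focal point (cx, cy) gives the classic
        // "parting shot" framing.
        void irisWipe(uint32_t* pixels, int width, int height,
                      float cx, float cy, float t /* 0 = fully open, 1 = closed */)
        {
            // Radius large enough to cover the whole frame when t == 0.
            float maxRadius = std::sqrt(float(width * width + height * height)) * 0.5f;
            float radius = maxRadius * (1.0f - t);
            float r2 = radius * radius;

            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x) {
                    float dx = x - cx, dy = y - cy;
                    if (dx * dx + dy * dy > r2)
                        pixels[y * width + x] = 0xFF000000; // opaque black outside the iris
                }
        }

    On the GPU the same distance test is a one-liner in a fragment shader, and feathering the edge over a pixel or two of distance avoids hard aliasing.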


  • Using multiple indexes with buffer objects in OpenTK

    - by Rushyo
    I've got multiple buffers in OpenGL holding data on position, normals and texcoords. I also have an equal number of buffers holding distinct index data for each of those buffers. I quite like this format (individual indices for each buffer), which COLLADA uses, since it strikes me as optimally efficient for each buffer. I've set up pointers to the relevant data arrays using VertexPointer, NormalPointer, etc.; however, I have no way to assign pointers to the index buffers, since DrawElements appears to only look at one ElementArrayBuffer. Can I utilise multiple indices in some way, or would I be better off using a different technique that supports this? I'd prefer to keep the distinct indices if at all possible.
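
    Core OpenGL only supports a single index stream, so the usual fix is to expand the COLLADA-style multi-index data into unified vertices at load time. A sketch of that conversion (names are illustrative, assuming one index triple per face corner):

        #include <map>
        #include <tuple>
        #include <vector>

        struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

        // positions/normals/texcoords: the raw COLLADA arrays.
        // pIdx/nIdx/tIdx: the three parallel index streams, one entry per face corner.
        void deindex(const std::vector<float>& positions,
                     const std::vector<float>& normals,
                     const std::vector<float>& texcoords,
                     const std::vector<unsigned>& pIdx,
                     const std::vector<unsigned>& nIdx,
                     const std::vector<unsigned>& tIdx,
                     std::vector<Vertex>& outVertices,
                     std::vector<unsigned>& outIndices)
        {
            std::map<std::tuple<unsigned, unsigned, unsigned>, unsigned> cache;
            for (size_t i = 0; i < pIdx.size(); ++i) {
                auto key = std::make_tuple(pIdx[i], nIdx[i], tIdx[i]);
                auto it = cache.find(key);
                if (it == cache.end()) {
                    // First time this (position, normal, texcoord) combination appears:
                    // emit a new unified vertex and remember its single index.
                    Vertex v;
                    v.px = positions[pIdx[i]*3]; v.py = positions[pIdx[i]*3+1]; v.pz = positions[pIdx[i]*3+2];
                    v.nx = normals[nIdx[i]*3];   v.ny = normals[nIdx[i]*3+1];   v.nz = normals[nIdx[i]*3+2];
                    v.u  = texcoords[tIdx[i]*2]; v.v  = texcoords[tIdx[i]*2+1];
                    it = cache.emplace(key, (unsigned)outVertices.size()).first;
                    outVertices.push_back(v);
                }
                outIndices.push_back(it->second);
            }
        }

    Corners that share all three indices still share a vertex, so the expansion is usually modest; only corners where the streams disagree get duplicated.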


  • Circular motion on low powered hardware

    - by Akroy
    I was thinking about platforms and enemies moving in circles in old 2D games, and I was wondering how that was done. I understand parametric equations, and it's trivial to use sin and cos to do it, but could an NES or SNES make real-time trig calls? I admit heavy ignorance, but I thought those were expensive operations. Is there some clever way to calculate that motion more cheaply? I've been working on deriving an algorithm from the trig sum identities that would only use precalculated trig, but that seems convoluted.
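
    The sum identities do collapse into something very cheap: precompute sin and cos of the per-frame angle step once, then rotate the offset vector incrementally each frame with four multiplies and two adds. A sketch (floating point here for clarity; an NES-era version would run the same recurrence in fixed point with table-lookup constants):

        #include <cmath>

        struct Circler {
            float x, y;          // current offset from the circle's center
            float stepCos, stepSin;

            Circler(float radius, float stepRadians)
                : x(radius), y(0.0f),
                  stepCos(std::cos(stepRadians)),   // the only trig calls, done once
                  stepSin(std::sin(stepRadians)) {}

            // Advance one frame: (x, y) -> rotated by the fixed angle step.
            // This is exactly the angle-sum identity applied incrementally.
            void update() {
                float nx = x * stepCos - y * stepSin;
                y        = x * stepSin + y * stepCos;
                x = nx;
            }
        };

    Rounding error slowly drifts the radius, so either renormalize occasionally or, as many old games did, just index a small precomputed sine table with the current angle.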


  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have a strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on how to solve it. My game is a class assignment where I need to build a simple game in which the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted, and the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that have already been attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue about what is happening. Below is a simplified version of the code that I wrote.

        for (int a = 0; a < blocksList.size(); a++) {
            blocksList[a].BuildConnections();
        }

    And in BuildConnections:

        b2ContactEdge* edge = body->GetContactList();
        while (edge != NULL) {
            if (long_check_to_see_if_theres_a_block_nearby) {
                // add itself to the list to be annihilated
                globalList.push_back(this);
                // if there is one, call BuildConnections again on the adjacent block
                static_cast<Block*>(adjacentBody->GetUserData())->BuildConnections();
            }
            edge = edge->next;
        }

    I know there's another issue related to circular inclusions, but I'm fairly sure that isn't what's causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list


  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for a Unity3D web project. I want to implement something like the approach in the great answer by DMGregory to this question, in order to achieve a final look something like this: metaballs with specular and shading. The steps to make this shader are:

    1. Convert the feathered blobs into a heightmap.
    2. Generate a normal map from the heightmap.
    3. Feed the normal map and heightmap into a standard Unity shader, for instance Transparent/Parallax Specular.

    I pretty much have all the pieces I need assembled, but I am new to shaders and need help putting them together. I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here because I don't know if you can access the brightness):

        half4 frag (v2f i) : COLOR {
            half4 texcol, finalColor;
            texcol = tex2D(_MainTex, i.uv);
            finalColor = _MyColor;
            if (texcol.r < _botmcut) {
                finalColor.r = 0;
            } else if (texcol.r > _topcut) {
                finalColor.r = 0;
            } else {
                float r = _topcut - _botmcut;
                float xpos = _topcut - texcol.r;
                finalColor.r = (_botmcut + sqrt((xpos*xpos) - (r*r))) / _constant;
            }
            return finalColor;
        }

    This turns these blobs into this heightmap. I've also found some CG code that generates a normal map from a heightmap. The bit that builds the normal map from finite differences is here:

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = fixed3(0.5);
            float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex));
            float me = tex2D(_HeightMap, IN.uv_MainTex).x;
            float n = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y + 1.0/_HeightmapDimY)).x;
            float s = tex2D(_HeightMap, float2(IN.uv_MainTex.x, IN.uv_MainTex.y - 1.0/_HeightmapDimY)).x;
            float e = tex2D(_HeightMap, float2(IN.uv_MainTex.x - 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;
            float w = tex2D(_HeightMap, float2(IN.uv_MainTex.x + 1.0/_HeightmapDimX, IN.uv_MainTex.y)).x;
            float3 norm = normal;
            float3 temp = norm; // a temporary vector that is not parallel to norm
            if (norm.x == 1)
                temp.y += 0.5;
            else
                temp.x += 0.5;
            // form a basis with norm being one of the axes:
            float3 perp1 = normalize(cross(norm, temp));
            float3 perp2 = normalize(cross(norm, perp1));
            // use the basis to move the normal in its own space by the offset
            float3 normalOffset = -_HeightmapStrength * (((n - me) - (s - me)) * perp1
                                                       + ((e - me) - (w - me)) * perp2);
            norm += normalOffset;
            norm = normalize(norm);
            o.Normal = norm;
        }

    Also, here is Unity's built-in Transparent/Parallax Specular shader:

        Shader "Transparent/Parallax Specular" {
            Properties {
                _Color ("Main Color", Color) = (1,1,1,1)
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0)
                _Shininess ("Shininess", Range (0.01, 1)) = 0.078125
                _Parallax ("Height", Range (0.005, 0.08)) = 0.02
                _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {}
                _BumpMap ("Normalmap", 2D) = "bump" {}
                _ParallaxMap ("Heightmap (A)", 2D) = "black" {}
            }
            SubShader {
                Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
                LOD 600
                CGPROGRAM
                #pragma surface surf BlinnPhong alpha
                #pragma exclude_renderers flash
                sampler2D _MainTex;
                sampler2D _BumpMap;
                sampler2D _ParallaxMap;
                fixed4 _Color;
                half _Shininess;
                float _Parallax;
                struct Input {
                    float2 uv_MainTex;
                    float2 uv_BumpMap;
                    float3 viewDir;
                };
                void surf (Input IN, inout SurfaceOutput o) {
                    half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w;
                    float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir);
                    IN.uv_MainTex += offset;
                    IN.uv_BumpMap += offset;
                    fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                    o.Albedo = tex.rgb * _Color.rgb;
                    o.Gloss = tex.a;
                    o.Alpha = tex.a * _Color.a;
                    o.Specular = _Shininess;
                    o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
                }
                ENDCG
            }
            FallBack "Transparent/Bumped Specular"
        }


  • dynamic 2d texture creation in unity from script

    - by gman
    I'm coming from HTML5, where I'm used to having the 2D Canvas API to generate textures. Is there anything similar in Unity3D? For example, let's say at runtime I want to render a circle, put 3 initials in the middle, and then take the result and put that in a texture. In HTML5 I'd do this:

        var initials = "GAT";
        var textureWidth = 256;
        var textureHeight = 256;

        // create a canvas
        var c = document.createElement("canvas");
        c.width = textureWidth;
        c.height = textureHeight;
        var ctx = c.getContext("2d");

        // Set the origin to the center of the canvas
        ctx.translate(textureWidth / 2, textureHeight / 2);

        // Draw a yellow circle
        ctx.fillStyle = "rgb(255,255,0)"; // yellow
        ctx.beginPath();
        var radius = (Math.min(textureWidth, textureHeight) - 2) / 2;
        ctx.arc(0, 0, radius, 0, Math.PI * 2, true);
        ctx.fill();

        // Draw some black initials in the middle.
        ctx.fillStyle = "rgb(0,0,0)";
        ctx.font = "60pt Arial";
        ctx.textAlign = "center";
        ctx.fillText(initials, 0, 30);

        // now I can make a texture from that
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, c);
        gl.generateMipmap(gl.TEXTURE_2D);

    I know I can edit individual pixels in a Unity texture, but is there any higher-level API for drawing to a texture in Unity?


  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent with them in regular C++, and in OpenGL it gets even worse.

        const char* vertexSource =
            "#version 150\n"
            "in vec3 position;"
            "void main() {"
            "   gl_Position = vec4(position, 1.0);"
            "}";
        const char* fragmentSource =
            "#version 150\n"
            "out vec4 outColor;"
            "void main() {"
            "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
            "}";

        int main() {
            initializeGLFW();

            // Initialize GLEW
            glewExperimental = GL_TRUE;
            glewInit();

            // Create Vertex Array Object
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // Create a Vertex Buffer Object and copy the vertex data to it
            GLuint vbo;
            glGenBuffers(1, &vbo);

            float vertices[] = {
                 1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
                -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
                -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
                 1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
                 1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
                -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
                -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
                 1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
            };

            GLuint indices[] = {
                0, 1, 1, 2, 2, 3, 3, 0,
                4, 5, 5, 6, 6, 7, 7, 4,
                0, 4, 1, 5, 2, 6, 3, 7
            };

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);
            //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            // Create and compile the vertex shader
            GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexSource, NULL);
            glCompileShader(vertexShader);

            // Create and compile the fragment shader
            GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
            glCompileShader(fragmentShader);

            // Link the vertex and fragment shader into a shader program
            GLuint shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glBindFragDataLocation(shaderProgram, 0, "outColor");
            glLinkProgram(shaderProgram);
            glUseProgram(shaderProgram);

            // Specify the layout of the vertex data
            GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
            glEnableVertexAttribArray(posAttrib);
            glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Main loop
            while (glfwGetWindowParam(GLFW_OPENED)) {
                // Clear the screen to black
                glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);

                // Draw lines from 2 vertices
                glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices);

                // Swap buffers
                glfwSwapBuffers();
            }

            // Clean up
            glDeleteProgram(shaderProgram);
            glDeleteShader(fragmentShader);
            glDeleteShader(vertexShader);
            //glDeleteBuffers( 1, &ebo );
            glDeleteBuffers(1, &vbo);
            glDeleteVertexArrays(1, &vao);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }
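
    Two things stand out in that draw call (observations, not a confirmed diagnosis): sizeof(indices) is the byte size (96), not the element count (24), and in a core profile a VAO expects the indices in a bound GL_ELEMENT_ARRAY_BUFFER rather than a client-side pointer. A sketch of the corrected setup and call:

        // At init, while the VAO is bound: put the indices in their own buffer.
        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // In the main loop: count is the number of indices, and the last
        // argument is an offset into the bound element buffer, not a pointer.
        glDrawElements(GL_LINES, sizeof(indices) / sizeof(indices[0]),
                       GL_UNSIGNED_INT, (void*)0);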


  • Collision Resolution

    - by CiscoIPPhone
    I know quite well how to check for collisions, but I don't know how to handle the collision in a good way. Simplified: if two objects collide, I use some calculations to change the velocity direction. If I don't move the two objects, they will still overlap, and if the velocity is not big enough they will still collide after the next update. This can cause objects to get stuck in each other. But what if I try to move the two objects so they do not overlap? This sounds like a good idea, but I have realised that if there are more than two objects it becomes very complicated: if I move the two objects, one of them may collide with other objects, so I have to move those too, and they may collide with walls, etc. I have a top-down 2D game in mind, but I don't think that matters much. How are collisions usually handled? This question is asked on behalf of Wooh.
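
    A common pattern is to resolve velocity and penetration separately: after changing velocities, push each colliding pair apart along the contact normal, split by inverse mass, and run several short iterations over all pairs per frame so chains of touching objects settle gradually instead of needing one perfect global solution. A sketch for circles (hypothetical types, not from the question):

        #include <cmath>

        struct Body {
            float x, y;
            float invMass;   // 0 for immovable objects like walls
            float radius;
        };

        // Push a pair apart along the contact normal, proportionally to inverse mass.
        void positionalCorrection(Body& a, Body& b) {
            float nx = b.x - a.x, ny = b.y - a.y;
            float dist = std::sqrt(nx * nx + ny * ny);
            float penetration = (a.radius + b.radius) - dist;
            if (penetration <= 0.0f || dist == 0.0f) return;
            nx /= dist; ny /= dist;

            // Correct only a fraction per iteration; repeating the loop a few
            // times per frame lets chains of contacts relax gradually.
            const float percent = 0.5f;
            float share = penetration * percent / (a.invMass + b.invMass);
            a.x -= nx * share * a.invMass;  a.y -= ny * share * a.invMass;
            b.x += nx * share * b.invMass;  b.y += ny * share * b.invMass;
        }

    Walls get invMass = 0, so they never move and their neighbours absorb the full correction; that sidesteps the "I have to move them too" cascade.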


  • Collision detection, stop gravity

    - by Scott Beeson
    I just started using GameMaker: Studio, and so far it seems fairly intuitive. However, I set a room to "Room is Physics World" and set gravity to 10. I then enabled physics on my player object and created a block object to match a platform on my background sprite. I set up a collision event for the player and the block objects that sets the gravity to 0 (and even sets vspeed to 0). I also put a notification in the collision event, and I don't get that either. I have my key-down and key-up events working well, moving the player left and right and changing the sprites appropriately, so I think I understand the event system; I must just be missing something simple with the physics. I've tried making both of the objects "solid", and neither of them. Pretty frustrating, since it looks so easy. The player's starting point is directly above the block object in the grid, and the player does fall through the block. I even made the block sprite solid red so I could see it (initially it was invisible, obviously).


  • Component-based design: handling objects interaction

    - by Milo
    I'm not sure exactly how objects do things to other objects in a component-based design. Say I have an Obj class, and I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could another object then not only move obj but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design you might just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that gives a better grasp of component-driven design and how to do basic things would be really helpful.
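
    One common answer is message passing: components never call each other directly; the entity forwards typed messages to whichever components care. A minimal sketch of that pattern (all names hypothetical; this is one approach among several, not the definitive one):

        #include <vector>

        struct ApplyForce { float x, y, z; };

        class Component {
        public:
            virtual ~Component() {}
            virtual void receive(const ApplyForce&) {} // default: ignore the message
        };

        class Obj {
            std::vector<Component*> components;
        public:
            void add(Component* c) { components.push_back(c); }
            // Broadcast to every component; only interested ones react.
            void send(const ApplyForce& msg) {
                for (Component* c : components) c->receive(msg);
            }
        };

        class Physics : public Component {
            float vx = 0, vy = 0, vz = 0;
        public:
            void receive(const ApplyForce& f) override {
                vx += f.x; vy += f.y; vz += f.z; // integrate the impulse
            }
        };

    Then obj1's gameplay code simply does obj2.send(ApplyForce{5.0f, 0.0f, 0.0f}); without knowing whether obj2 even has a Physics component.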


  • Animation API vs frame animation

    - by Max
    I'm pretty far down the road in my game right now, closing in on the end, and I'm adding little tweaks here and there. I used custom frame animation: a single image with many versions of my sprite on it, where I control which part of the image to show using rectangles. But I'm starting to think that maybe I should have used the Animation API that comes with Android instead. Will this affect my performance in a negative way? Can I still use rectangles to draw my bitmap? Could I add effects from the Animation API, like the fade-out effect, to my current frame-controlled animation? That would mean I won't have to change my current code. I want some of my animations to fade out, and I just noticed that using the Animation API makes things a lot easier. But needless to say, I would prefer not having to change all my animation code. I'm bad at explaining, so I'll show a bit of how I do my animation:

        private static final int BMP_ROWS = 1; // I use top-view, so my sprite only needs 1 direction
        private static final int BMP_COLUMNS = 3;

        public void update(GameControls controls) {
            if (sprite.isMoving) {
                currentFrame = ++currentFrame % BMP_COLUMNS;
            } else {
                this.setFrame(1);
            }
        }

        public void draw(Canvas canvas, int x, int y, float angle) {
            this.x = x;
            this.y = y;
            canvas.save();
            canvas.rotate(angle, x + width / 2, y + height / 2);
            int srcX = currentFrame * width;
            int srcY = 0 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
            canvas.restore();
        }


  • How do I draw anti-aliased holes in a bitmap

    - by gyozo kudor
    I have an artillery game (a hobby/learning project), and when the projectile hits, it leaves a hole in the ground. I want this hole to have anti-aliased edges. I'm using System.Drawing for this. I've tried clipping paths, and drawing with a transparent color using gfx.CompositingMode = CompositingMode.SourceCopy, but both give me the same result. If I draw a circle with a solid color it works fine, but I need a hole: a circle with 0 alpha values. I have enabled these, but they only work with solid colors:

        gfx.CompositingQuality = CompositingQuality.HighQuality;
        gfx.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gfx.SmoothingMode = SmoothingMode.AntiAlias;

    In the two pictures, consider black as being transparent. This is what I have (zoomed in). And what I need is something like this (made with Photoshop). This will be just a visual effect; in the collision-detection code I still treat everything with alpha 128 as solid. Edit: I'm using OpenTK for this game, but for this question I don't think that matters; it's probably GDI+ related.


  • Better data structure for a game like Bubble Witch

    - by CrociDB
    I'm implementing a bubble-witch-like game (http://www.king.com/games/puzzle-games/bubble-witch/), and I was thinking about the best way to store the "bubbles" and work with them. I thought of using a graph, but that might be too complex for a trivial thing. I thought of a matrix, just like a tile map, but that might get too 'workaroundy'. I don't know. I'll be doing it in Flash/AS3, though. Thanks. :)
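
    For bubble grids, a plain 2D array with offset rows usually works well: odd rows are shifted half a bubble, and the six neighbours of a cell depend only on the row's parity. A sketch of that layout (C++ here purely for illustration; the same indexing translates directly to AS3):

        #include <vector>

        const int ROWS = 12, COLS = 8;
        std::vector<std::vector<int>> color(ROWS, std::vector<int>(COLS, 0)); // 0 = empty

        // Neighbour offsets for an "odd rows shifted right" hex layout.
        const int EVEN[6][2] = {{-1,-1},{-1,0},{0,-1},{0,1},{1,-1},{1,0}};
        const int ODD[6][2]  = {{-1,0},{-1,1},{0,-1},{0,1},{1,0},{1,1}};

        // Visit the up-to-six neighbours of (row, col).
        template <typename F>
        void forEachNeighbour(int row, int col, F visit) {
            const int (*d)[2] = (row % 2 == 0) ? EVEN : ODD;
            for (int i = 0; i < 6; ++i) {
                int r = row + d[i][0], c = col + d[i][1];
                if (r >= 0 && r < ROWS && c >= 0 && c < COLS)
                    visit(r, c);
            }
        }

    Flood fills for matching colours, and for dropping clusters disconnected from the top row, are then simple breadth-first searches over forEachNeighbour.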


  • Flickering when accessing texture by offset

    - by TravisG
    I have a simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128x128x128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?


  • OpenGL ES 2 on Android: native window

    - by ThreaderSlash
    According to the OpenGL ES specification, we have the following definition:

        EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config,
                                          NativeWindowType native_window,
                                          EGLint const* attrib_list);

    More details here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html

    And also by definition:

        int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window,
                                                 int32_t width, int32_t height,
                                                 int32_t format);

    More details here: http://mobilepearls.com/labs/native-android-api

    I am running a native Android app on OpenGL ES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene-graph environment, the following variables are defined:

        struct android_app {
            ...
            ANativeWindow* window;
        };

        android_app* mApplication;
        ...
        mApplication = &pApplication;

    And to initialize the app, we run:

        ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);

    Funnily enough, the ANativeWindow_setBuffersGeometry call behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the mApplication->window parameter, as it should according to its definition. Instead, it looks for the following input:

        EGLNativeWindowType hWnd;
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, hWnd, NULL);

    As an alternative, I considered using:

        NativeWindowType hWnd = android_createDisplaySurface();

    But the debugger says:

        Function 'android_createDisplaySurface' could not be resolved

    Is android_createDisplaySurface compatible only with OpenGL ES 1 and not OpenGL ES 2? Can someone tell me if there is a way to convert mApplication->window so that the window from the android_app is accepted as the window surface?


  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples. So far I've got the impression that they are only used for the graphical part of the equation, although the (quite cryptic) Wikipedia article and some other sources lead me to believe that they can be used for more than just graphical effects, i.e., to program the GPU (Wikipedia). So the GPU is still a processor, right? With a larger and different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: by regular programs I mean "applications", i.e., creating window/console programs, or at least having some way of drawing things on the screen, maybe even taking user input.


  • Android Loading Screen: How do I use a stack to load elements?

    - by tom_mai78101
    I have some problems figuring out what value I should put in the call:

        int value_needed_to_figure_out = X;
        ProgressBar.incrementProgressBy(value_needed_to_figure_out);

    I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() inside a Handler.post(new Runnable()) call. I mostly get that concept: the Handler updates the ProgressBar while pretending to do some heavy crunching work. So I kept looking, and I read this thread: How do I load chunks of data from an asset manager during a loading screen? It said that I can try using a stack of the things that need to be loaded, adding a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone can provide some hints, I'll gladly appreciate it. Thanks in advance.
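
    The idea behind that suggestion, sketched with hypothetical stand-ins (C++ for illustration; none of this is Android API): record the stack's size before you start draining it, and the increment per popped element is the total progress divided by that size.

        #include <stack>
        #include <functional>

        int main() {
            std::stack<std::function<void()>> loadStack;
            // Push one closure per thing to load: textures, sounds, levels...
            loadStack.push([] { /* loadTextures(); */ });
            loadStack.push([] { /* loadSounds(); */ });
            loadStack.push([] { /* loadLevel(); */ });

            const int total = (int)loadStack.size();
            const int increment = 100 / total;   // the value_needed_to_figure_out

            while (!loadStack.empty()) {
                loadStack.top()();               // do one chunk of loading work
                loadStack.pop();
                // progressBar.incrementProgressBy(increment); // posted to the UI thread
            }
        }

    On Android the increment call would go through the Handler, exactly as in the examples you found; the stack just makes "how much work is left" a number you can read.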


  • Method for spawning enemies according to player score and game time

    - by Sun
    I'm making a top-down shooter and want to scale the difficulty of the game according to the score and how much time has passed. Along with this, I want to spawn enemies in different patterns and increase the intervals at which these enemies appear. I'm going for an effect similar to Geometry Wars. However, I can't think of a way to do this other than multiple if-else statements, e.g.:

        if (score > 25000) {
            // spawn x amount of enemy types 1 & 2 & 3
            // create patterns with enemies
        } else if (score > 15000) {
            // spawn x amount of enemy types 1 & 2 & 3
        } else if (score > 10000) {
            // spawn x amount of enemy types 1 & 2
        } else if (score > 1000) {
            // spawn x amount of enemies
        }
        // ...etc

    What would be a better method of spawning enemies as I have described?
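
    One step up is to make the thresholds data instead of code: a sorted table of spawn rules, where each frame you pick the highest rule the current score (or elapsed time) has unlocked. A sketch (names and numbers illustrative):

        #include <vector>

        struct SpawnRule {
            int minScore;
            std::vector<int> enemyTypes; // which types this tier may spawn
            float interval;              // seconds between spawns at this tier
        };

        // Sorted ascending by minScore; tuning the game becomes editing this table.
        const std::vector<SpawnRule> rules = {
            { 1000,  {1},       2.0f },
            { 10000, {1, 2},    1.5f },
            { 15000, {1, 2, 3}, 1.0f },
            { 25000, {1, 2, 3}, 0.5f }, // patterns could be an extra field here
        };

        // Return the highest tier unlocked by the current score.
        const SpawnRule* currentRule(int score) {
            const SpawnRule* best = nullptr;
            for (const SpawnRule& r : rules)
                if (score >= r.minScore) best = &r;
            return best; // nullptr until the first threshold is reached
        }

    The same table can key on elapsed time, or on a single "difficulty" value computed from both, and it can live in a data file so balancing doesn't require recompiling.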


  • SharpDX/D3D: How to implement and draw fonts/text

    - by Dmitrij A
    I am playing with SharpDX (Direct3D for .NET) without using the Toolkit, and I've already finished basic rendering of 3D models. But now I am wondering how to create fonts for a 2D game, or simply how to draw variable text to the output with Direct3D. (This is not so much a SharpDX question as a general Direct3D question.) How should I start with game GUIs, and what should I do to program a simple GUI such as a game menu? (I understand, generally, that it comes down to shaders.)


  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.


  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress: massive, procedurally generated wilderness and land, something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the surface of the Earth?). Pseudo-infinite is, I think, the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading machinery, and have one patch of noise generate each region, but that would result in huge numbers of islands. On the other extreme, I don't think I can really generate a supermassive sheet of Perlin noise, and that would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map looks really nice, and you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D
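
    The key trick is that noise is a pure function of world coordinates: each chunk samples the same global noise field at its own offset, so adjacent chunks line up with no seams and no giant allocation. A self-contained sketch using hash-based value noise (a stand-in for Perlin; the hash constants and frequency are arbitrary choices):

        #include <cmath>
        #include <cstdint>

        // Deterministic pseudo-random value in [0, 1) for an integer lattice point.
        float lattice(int32_t x, int32_t y) {
            uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h & 0xFFFFFF) / 16777216.0f;
        }

        // Smoothly interpolated value noise at a world-space point.
        float noise(float x, float y) {
            int32_t xi = (int32_t)std::floor(x), yi = (int32_t)std::floor(y);
            float tx = x - xi, ty = y - yi;
            tx = tx * tx * (3 - 2 * tx);  // smoothstep fade
            ty = ty * ty * (3 - 2 * ty);
            float a = lattice(xi, yi),     b = lattice(xi + 1, yi);
            float c = lattice(xi, yi + 1), d = lattice(xi + 1, yi + 1);
            float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
            return top + (bottom - top) * ty;
        }

        const int CHUNK = 64;

        // Fill one chunk; (cx, cy) are chunk coordinates. The noise is sampled
        // by GLOBAL tile position, which is what makes chunks seamless.
        void generateChunk(int cx, int cy, float heights[CHUNK][CHUNK]) {
            for (int y = 0; y < CHUNK; ++y)
                for (int x = 0; x < CHUNK; ++x) {
                    float wx = (cx * CHUNK + x) * 0.05f;  // world coord * frequency
                    float wy = (cy * CHUNK + y) * 0.05f;
                    heights[y][x] = noise(wx, wy);        // sum octaves here for fractal detail
                }
        }

    Summing several octaves of this at doubling frequencies gives the fractal look, and thresholding the height into water/land avoids the "every region is its own island" problem because the field is continuous across chunk borders.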


  • Correct way to handle path-finding collision matrix

    - by Xander Lamkins
    Here is an example of my path-finding in use. The red grid represents the grid utilized by my A* library to locate a distance. This picture is only an example; currently everything is calculated at the 1x1-pixel level (pretty darn laggy). I want to make it so that the farther I click, the less accurate the search will be (split the map into larger grid pieces). Edit: as mentioned by Eric, this is not a required game mechanic. I am perfectly fine with any method that allows me to make this accurate while still fast; that isn't really the topic of this question, though. The problem I have is that my current library uses a two-dimensional grid of integers. The higher the number in a cell, the more resistance for that grid tile. Currently I'm setting all unwalkable spots to the max integer value. Here is an example of what I want. I'm just not sure how I should set up the arrays of integers for the grid. Every time an element is added to or removed from the game, its collision details are updated in the table. Here is a picture of what the map looks like on my collision layer. I probably shouldn't be creating new arrays every time I have to do a path-find, because my game needs to support tons of path-finds at the same time. Should I have multiple arrays that are all updated when the dynamic elements change (a building is built/a building is destroyed)? The problem I see with this is that it will probably make the creation and destruction of buildings laggier than I would want, because it would mean setting the collision grid at each accuracy level. I would also have to add or remove arrays if I ever changed the map size in the future. Or should I generate a new array based on an accuracy value every time I need to path-find? The problem I see with that is that it will probably make any path-find just as laggy, because it will have to search through MapWidth x MapHeight cells to shrink it all down. Or is there a better way? I'm certainly not the best at optimizing anything, and I've only just started dealing with XNA, so I'm not used to optimization having much of an effect until now... :( If you need code examples, please ask and I'll add them as an edit. EDIT: While this doesn't directly relate to the question, I figure the more information I provide, the better. To keep units from moving less accurately than the player intends, I've decided that once a unit path-finds over the less accurate grid pieces, it will then path-find on a more accurate level to the exact position requested.
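
    One way to get the "larger grid pieces" without rebuilding anything per query is to keep the fine grid authoritative and maintain a small pyramid of coarser grids, where each coarse cell holds the max cost of the k x k fine cells it covers (so one unwalkable pixel makes the whole coarse cell unwalkable). A sketch (illustrative, not the question's library):

        #include <algorithm>
        #include <vector>

        using Grid = std::vector<std::vector<int>>;

        // Downsample a cost grid by `factor`, taking the max cost per block so
        // the coarse grid never claims a blocked area is walkable.
        Grid downsample(const Grid& fine, int factor) {
            int h = (int)fine.size(), w = (int)fine[0].size();
            Grid coarse((h + factor - 1) / factor,
                        std::vector<int>((w + factor - 1) / factor, 0));
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    int& cell = coarse[y / factor][x / factor];
                    cell = std::max(cell, fine[y][x]);
                }
            return coarse;
        }

    When a building changes, only the coarse cells overlapping its footprint need recomputing (a few small block scans), so edits stay cheap, and each path-find just picks the pyramid level that matches the click distance.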


  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (i.e. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with 3 arrays:

    - pointers to components
    - pointers to sub-entities
    - communication subscribers (temporary implementation)

    Now, when an entity is built, it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed sub-entities share data with their parent in 3 areas:

    - position: sub-entities use the parent's position plus their own as an offset
    - scripts: sub-entities drain ammo and energy from the parent
    - physics: sub-entities add weight to the parent

    I made this to move forward quickly, but as I slowly fix the current implementation, I wonder if it wasn't a mistake. Is my current implementation something commonly done? Will it paint me into a corner? I thought it might be better to create some sort of SubEntityComponent in which sub-entities are attached and handled. But before changing anything, I wanted to seek the community's wisdom.


  • Hide individual regions on a territory map

    - by Paul
    I have a map of adjacent, irregularly-shaped territories. It happens to be a county map, but that's not important. In the game, the player only sees a few territories to start; as he progresses to new ones, those adjacent to them are revealed. It's a territory-scale fog of war. I'm using Unity3D, and my inclination is to make a set of planes, each of which has an image of a single territory on it, and then arrange them manually like a jigsaw puzzle. It is then fairly easy to respond to a click on each region, to mark the planes as visible or invisible, and even to do clever things like fade or zoom individual regions. But this sounds like an arduous task, and if we ever need to change the visual design of the territories, we'd have to cut the main map up all over again into the individual pieces. Does anyone have a more elegant solution to this problem?


  • Light shaped like a line

    - by Michael
    I am trying to figure out how line-shaped lights fit into the standard point light/spotlight/directional light scheme. The way I see it, there are two options:

    - Seed the line with regular point lights and just deal with the artifacts. Easy, but seems wasteful.
    - Make the line some kind of emissive material and apply a bloom effect. Sounds like it could work, but I can't test it in my engine yet.

    Is there a standard way to do this? Or for non-point lights in general?
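
    A common third option (one technique among several, not the definitive answer): treat the segment as a point light whose position is recomputed per shaded point as the closest point on the segment; the rest of the point-light math is unchanged. A sketch of the core calculation:

        #include <algorithm>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
            Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
            Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
        };

        float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // For a line light from a to b, the "effective" light position for a
        // surface point p is the closest point on the segment to p.
        Vec3 lineLightPosition(const Vec3& a, const Vec3& b, const Vec3& p) {
            Vec3 ab = b - a;
            float t = dot(p - a, ab) / dot(ab, ab);
            t = std::clamp(t, 0.0f, 1.0f);      // stay on the segment
            return a + ab * t;                  // feed this into normal point-light shading
        }

    Done per pixel in a shader, this gives smooth diffuse falloff along the whole line; specular from such "tube" lights is harder, which is where representative-point tricks from area-light papers come in.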

