Search Results

Search found 23473 results on 939 pages for 'game programming'.


  • Physics-based dynamic audio generation in games

    - by alexc
    I wonder if it is possible to generate audio dynamically without any (!) audio assets, using pure mathematics/physics and some input values like material properties and the spatial distribution of content in scene space. What I have in mind is something like a scene with a concrete floor, a wooden table and a glass on it. Now let's assume a force pushes the glass towards the edge of the table, and the glass falls onto the floor and shatters. The near-realistic glass destruction itself would be possible using voxels and a good physics engine, but what about the sound the glass makes while shattering? I believe there is a way to generate that sound, because the physics of sound is fairly well understood these days, but how computationally costly would that be? Consumer hardware or supercomputers? Do any of you know good resources/videos of such an experiment?
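    For a rough sense of the cost: the usual real-time approach is modal synthesis, where each rigid body is reduced (offline) to a set of resonant modes and an impact excites damped sinusoids at runtime, which is cheap enough for consumer hardware. Below is a minimal Java sketch of the runtime side only; the mode frequencies, decays and amplitudes are made-up placeholder values standing in for data a real modal solver would produce from the glass mesh and material.

      import javax.sound.sampled.AudioFormat;
      import javax.sound.sampled.AudioSystem;
      import javax.sound.sampled.SourceDataLine;

      public class ModalPing {
          public static void main(String[] args) throws Exception {
              int rate = 44100;
              // Hypothetical modes for a small glass: frequency (Hz), decay (1/s), amplitude.
              double[][] modes = { {1800, 6, 0.5}, {3100, 9, 0.3}, {4700, 14, 0.2} };
              byte[] pcm = new byte[rate * 2];               // one second of 16-bit mono PCM
              for (int i = 0; i < rate; i++) {
                  double t = i / (double) rate, s = 0;
                  for (double[] m : modes)                   // sum of exponentially decaying sinusoids
                      s += m[2] * Math.exp(-m[1] * t) * Math.sin(2 * Math.PI * m[0] * t);
                  short v = (short) (Math.max(-1, Math.min(1, s)) * Short.MAX_VALUE);
                  pcm[2 * i] = (byte) v;                     // little-endian 16-bit samples
                  pcm[2 * i + 1] = (byte) (v >> 8);
              }
              AudioFormat fmt = new AudioFormat(rate, 16, 1, true, false);
              SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
              line.open(fmt);
              line.start();
              line.write(pcm, 0, pcm.length);
              line.drain();
              line.close();
          }
      }

    The expensive part is solving for the modes of a specific mesh and material and deciding how a fracture excites them; doing that fully at runtime is what pushes the idea toward research-grade hardware, while precomputed modes plus runtime impact data is the usual consumer-hardware compromise.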

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. After I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call). How do I correct this behaviour? I've tried putting an extra collision check just after extrapolation; this does seem to clear up a lot of the problems, but I've ruled it out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on. This seems a better option, but it still produces some weird effects when colliding with walls, and I'm pretty sure it also isn't the correct way to deal with this. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code:

      public void onDrawFrame(GL10 gl) {
          // Set/re-set loop counter back to 0 to start counting again
          loops = 0;
          while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
              SceneManager.getInstance().getCurrentScene().updateLogic();
              nextGameTick += skipTicks;
              timeCorrection += (1000d / ticksPerSecond) % 1;
              nextGameTick += timeCorrection;
              timeCorrection %= 1;
              loops++;
              tics++;
          }
          extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
          render(extrapolation);
      }

    Applying extrapolation:

      render(float extrapolation) {
          // This example shows extrapolation for the X axis only; spriteScreenY is assumed to be valid.
          extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
          spriteScreenPosX = extrapolatedPosX * screenWidth;
          drawSprite(spriteScreenPosX, spriteScreenY);
      }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with, but this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth; when it stops, it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall and the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). But because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing. Hope this helps.
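    For what it's worth, a common pattern is to keep extrapolation strictly inside rendering: derive a draw-only position each frame from the last logic position and the velocity the logic actually applied that tick, and let it collapse to the true position whenever the logic reports no movement (e.g. because a wall stopped the sprite). A hedged sketch, with names like logicX and velocityX standing in for whatever the real sprite class exposes:

      // Sketch only: logicX is the position owned by updateLogic(), velocityX is the
      // displacement per tick the logic actually applied (zero when a wall blocked it).
      float renderX(float logicX, float velocityX, float extrapolation) {
          // If logic didn't move the sprite this tick, don't invent movement here either;
          // this removes the wall jitter and the wobble when standing still.
          if (velocityX == 0f) {
              return logicX;
          }
          return logicX + velocityX * extrapolation;   // draw-only, never written back
      }

    The important part is that collision still runs only against logicX inside updateLogic(); the extrapolated value is computed, drawn and thrown away, so the render path never needs its own collision pass. The key detail is feeding in the velocity after collision resolution rather than the input-driven velocity, so "holding right against a wall" extrapolates nothing.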

    Read the article

  • Confusion with Libgdx UI

    - by BrotherJack
    I've started with Libgdx and am currently stumbling about, trying to understand how to set up the interface. I have generated the base projects in Eclipse (<proj-name>, <proj-name>-android, <proj-name>-desktop, <proj-name>-html), and can get the program to display a simple background, play a looping sound file, and draw a tank. I have been having some problems implementing the UI, though. I want to make a collapsible interface bar at the bottom of the screen that would contain buttons for movement and for selecting weapons. I'm confused, since there appear to be several ways of doing this and the documentation (or the tutorials explaining it) tends to be obsolete. How would one go about this? Use a stage for the bar and actors for the widgets? I'm a little lost on this.
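    Yes, the usual route in libgdx is scene2d.ui: one Stage, a Table anchored to the bottom of the screen, and TextButton (or ImageButton) actors inside it; collapsing the bar is then just hiding or sliding that table. A rough sketch, assuming a Skin file exists for the widgets (the name uiskin.json is a placeholder) and that constructor details may differ slightly between libgdx versions:

      Stage stage;
      Table bottomBar;

      public void create () {
          stage = new Stage();
          Gdx.input.setInputProcessor(stage);                 // let the stage receive touches
          Skin skin = new Skin(Gdx.files.internal("uiskin.json"));

          bottomBar = new Table(skin);
          bottomBar.setFillParent(true);
          bottomBar.bottom();                                 // pin the row of widgets to the bottom edge

          TextButton left = new TextButton("Left", skin);
          TextButton right = new TextButton("Right", skin);
          TextButton weapon = new TextButton("Weapon", skin);
          left.addListener(new ChangeListener() {
              public void changed (ChangeEvent event, Actor actor) { /* move the tank */ }
          });
          bottomBar.add(left).pad(5);
          bottomBar.add(right).pad(5);
          bottomBar.add(weapon).pad(5);
          stage.addActor(bottomBar);
      }

      public void render () {
          // ...draw the game world first...
          stage.act(Gdx.graphics.getDeltaTime());
          stage.draw();
      }

    For the collapsible behaviour, toggling an Actions.moveTo on the table from a small tab button is usually enough; only the Stage needs to be registered as the InputProcessor, the widgets handle their own hit detection.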

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator; however, on the phone the texture does not show, I see only a white rectangle. This is my code:

      public void onSurfaceCreated(GL10 gl, EGLConfig config) {
          byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
          byteBuffer.order(ByteOrder.nativeOrder());
          vertexBuffer = byteBuffer.asFloatBuffer();
          vertexBuffer.put(cardshape);
          vertexBuffer.position(0);
          byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
          byteBuffer.order(ByteOrder.nativeOrder());
          textureBuffer = byteBuffer.asFloatBuffer();
          textureBuffer.put(textureshape);
          textureBuffer.position(0);
          // Set the background color to black (rgba).
          gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
          // Enable smooth shading (default, not really needed).
          gl.glShadeModel(GL10.GL_SMOOTH);
          // Depth buffer setup.
          gl.glClearDepthf(1.0f);
          // Enable depth testing.
          gl.glEnable(GL10.GL_DEPTH_TEST);
          // The type of depth testing to do.
          gl.glDepthFunc(GL10.GL_LEQUAL);
          // Really nice perspective calculations.
          gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
          gl.glEnable(GL10.GL_TEXTURE_2D);
          loadGLTexture(gl);
      }

      public void onDrawFrame(GL10 gl) {
          gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
          gl.glDisable(GL10.GL_DEPTH_TEST);
          gl.glMatrixMode(GL10.GL_PROJECTION);   // Select projection matrix
          gl.glPushMatrix();                     // Push the matrix
          gl.glLoadIdentity();                   // Reset the matrix
          gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
          gl.glMatrixMode(GL10.GL_MODELVIEW);    // Select modelview matrix
          gl.glPushMatrix();                     // Push the matrix
          gl.glLoadIdentity();                   // Reset the matrix
          gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
          gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
          gl.glLoadIdentity();
          gl.glTranslatef(card.x, card.y, 0.0f);
          gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
          gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
          gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
          gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
          gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
          gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
      }

      public void onSurfaceChanged(GL10 gl, int width, int height) {
          // Sets the current viewport to the new size.
          gl.glViewport(0, 0, width, height);
          // Select the projection matrix
          gl.glMatrixMode(GL10.GL_PROJECTION);
          // Reset the projection matrix
          gl.glLoadIdentity();
          // Calculate the aspect ratio of the window
          GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
          // Select the modelview matrix
          gl.glMatrixMode(GL10.GL_MODELVIEW);
          // Reset the modelview matrix
          gl.glLoadIdentity();
      }

      public int[] texture = new int[1];

      public void loadGLTexture(GL10 gl) {
          // loading texture
          Bitmap bitmap;
          bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
          // generate one texture pointer
          gl.glGenTextures(0, texture, 0); // adds texture id to texture array
          // ...and bind it to our array
          gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
          // create nearest filtered texture
          gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
          gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
          // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
          GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
          // Clean up
          bitmap.recycle();
      }

    As per many other similar issues and resolutions on the web, I have tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and putting them in the raw folder, all of which didn't help. I'm not really sure what else to try.
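    Two things worth double-checking in the snippet above (purely as observations from the outside): glGenTextures is called with a count of 0, so no texture name is actually generated, and real devices are also stricter than the emulator about non-power-of-two bitmaps and density scaling. A hedged sketch of a loader that avoids both, reusing the same fields from the code above:

      // Sketch, assuming R.drawable.image is a power-of-two bitmap such as 256x256
      // (placing it in res/drawable-nodpi or disabling inScaled keeps it that size).
      public void loadGLTexture(GL10 gl) {
          BitmapFactory.Options opts = new BitmapFactory.Options();
          opts.inScaled = false;                        // keep the 256x256 size on all densities
          Bitmap bitmap = BitmapFactory.decodeResource(
                  context.getResources(), R.drawable.image, opts);

          gl.glGenTextures(1, texture, 0);              // count must be 1, not 0
          gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);
          gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
          gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
          GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
          bitmap.recycle();
      }

    Whether either of these is the actual culprit on this particular phone is a guess, but both are cheap to rule out first.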

    Read the article

  • What is the correct step-by-step process for exporting a scene with baked occlusion so it can be loaded at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step process is for exporting a scene with baked occlusion (culling data) so that the scene can be loaded at runtime (on the fly, from the internet for example). Currently my plan looks like this:
    1. Create prefabs.
    2. Place them onto my scene (into the Hierarchy), say 20 buffaloes, some horses and some buildings.
    3. Create an empty prefab and drag all my scene objects from the Hierarchy onto it.
    4. Export the prefab.
    So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get that property turned off when they are loaded at runtime, so no frustum culling and no occlusion culling happens. I wonder what the correct way is to export Scene + Objects + Occlusion (and other culling) data so that such a scene can be loaded at runtime. I'm asking about the current 3.5.2 Pro and the upcoming 4 Pro versions of U3D.

    Read the article

  • OpenGL vs DirectX?

    - by Harold
    I saw the articles that were going around about OpenGL being better than DirectX, and about Microsoft really just trying to get everyone to use DirectX even though it's inferior so that gaming is almost exclusively for Windows and Xbox. But since that article was written in 2006, is it still relevant today? Also, I know plenty of games are written in DirectX, but does anyone have examples of popular games written in OpenGL? Thanks

    Read the article

  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote:

      create() {
          renderer = new ImmediateModeRenderer();
          tiles = Gdx.graphics.newTexture(
              Gdx.files.internal("data/tiles2.png"),
              TextureFilter.MipMap, TextureFilter.Linear,
              TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
          Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
          Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
          Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
          Gdx.gl11.glTexEnvi(
              GL11.GL_POINT_SPRITE_OES,
              GL11.GL_COORD_REPLACE_OES,
              GL11.GL_TRUE);
          Gdx.gl10.glPointSize(s);
          tiles.bind();
      }

      render() {
          Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
          renderer.begin(GL10.GL_POINTS);
          // render 3 point sprites at various 3d points
          renderer.vertex(-.1f, 0, -.1f);
          renderer.vertex(0, 0, 0);
          renderer.vertex(.1f, 0, .1f);
          // ... more vertices here if needed (one for each sprite) ...
          renderer.end();
      }
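    With GL_COORD_REPLACE_OES enabled, a point sprite always samples the full 0..1 range of whatever texture is bound, so one way to pick a single tile out of an atlas without switching to quads is to remap those generated coordinates with the texture matrix before drawing. A hedged sketch in the same fixed-function style as above; tileU/tileV/tileW/tileH (fractions of the atlas) are assumed values:

      // Sketch: make every point sprite drawn afterwards sample the sub-rectangle
      // (tileU, tileV) .. (tileU + tileW, tileV + tileH) of the bound atlas.
      void selectTile(GL10 gl, float tileU, float tileV, float tileW, float tileH) {
          gl.glMatrixMode(GL10.GL_TEXTURE);     // operate on the texture matrix
          gl.glLoadIdentity();
          gl.glTranslatef(tileU, tileV, 0f);    // move to the tile's corner
          gl.glScalef(tileW, tileH, 1f);        // shrink 0..1 down to the tile's footprint
          gl.glMatrixMode(GL10.GL_MODELVIEW);   // restore the mode the renderer expects
      }

    The obvious limitation is that the setting applies to every sprite in the batch, so sprites needing different tiles must be drawn in separate batches; past a handful of tile types, plain textured quads usually end up simpler and faster.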

    Read the article

  • Drawing an outline around an arbitrary group of hexagons

    - by Perky
    Is there an algorithm for drawing an outline around an arbitrary group of hexagons? The polygon outline drawn may be concave. See the images below; the green line is what I am trying to achieve. The hexagons are stored as vertices and drawn as polygons. Edit: I've uploaded images that should explain more. I want to favour convex hulls because a hull conveys an area of control more quickly. Each hexagon is stored in a multidimensional array, so they all have x and y coordinates; I can easily find adjacent hexagons and the opposite vertex, i.e. adjacentHexagon = getAdjacentHexagon( someHexagon, NORTHWEST ). If there isn't a hexagon immediately adjacent, it will continue to search in that direction until it finds one or hits the map edges.
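    Since adjacency is already available, the simplest approach tends to be edge cancellation: every hexagon in the group contributes its six directed edges, any edge shared by two group members shows up twice with opposite winding and is dropped, and whatever survives is exactly the border, which can then be chained head-to-tail into the outline. A rough Java sketch; Point2 is a hypothetical value type whose equals/hashCode compare coordinates, and cornersClockwise() is an assumed helper listing a hexagon's corners in order:

      // Collect border edges: interior edges appear once per neighbouring hexagon and cancel.
      Map<Point2, Point2> boundary = new LinkedHashMap<>();   // edge start -> edge end
      for (Hexagon hex : selectedHexagons) {
          Point2[] c = hex.cornersClockwise();
          for (int i = 0; i < 6; i++) {
              Point2 a = c[i], b = c[(i + 1) % 6];
              if (a.equals(boundary.get(b))) {
                  boundary.remove(b);       // a neighbour already added this edge reversed: interior
              } else {
                  boundary.put(a, b);       // tentatively part of the outline
              }
          }
      }

      // Walk the surviving edges head-to-tail to get the ordered outline.
      List<Point2> outline = new ArrayList<>();
      Point2 start = boundary.keySet().iterator().next();
      Point2 p = start;
      do {
          outline.add(p);
          p = boundary.remove(p);           // consume the chain as we follow it
      } while (p != null && !p.equals(start));

    If the selected region can touch itself at a single corner, that vertex has two outgoing border edges and the simple map above needs to become a multimap, but for ordinary contiguous blobs of territory this is enough, and it naturally handles concave shapes.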

    Read the article

  • Finding closest object to a location within a specific perpendicular distance to direction vector

    - by Sniper
    I have a location and a direction vector indicating facing; I want to find the closest object to that location that is within some tolerance distance (perpendicular distance) of the ray formed by the location and direction vector. Basically I want to get the object that is being aimed at. I have thought about finding all objects within a box and then finding the closest object to my vector from those results, but I am sure that there is a more efficient way. The Z axis is optional; the objects are most likely within a few meters of the search vector.
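    The per-object test itself is cheap: project the vector from the eye to each candidate onto the facing direction, reject anything behind the eye or whose perpendicular distance exceeds the tolerance, and keep the accepted object with the smallest distance along the ray. A sketch in Java; Vec3 is a stand-in for whatever vector type is in use, and minus/dot/scaled/length/normalized are assumed helpers on it:

      // Sketch: returns the object being aimed at, or null if nothing is within tolerance.
      GameObject pickTarget(Vec3 eye, Vec3 dir, List<GameObject> objects, float tolerance) {
          Vec3 d = dir.normalized();
          GameObject best = null;
          float bestAlong = Float.MAX_VALUE;
          for (GameObject obj : objects) {
              Vec3 v = obj.position().minus(eye);
              float along = v.dot(d);                          // distance along the aim ray
              if (along < 0 || along >= bestAlong) continue;   // behind us, or farther than current best
              float perp = v.minus(d.scaled(along)).length();  // perpendicular distance to the ray
              if (perp <= tolerance) {
                  best = obj;
                  bestAlong = along;
              }
          }
          return best;
      }

    If there are many objects, a coarse spatial grid around the ray keeps the candidate list short, but for objects known to sit within a few meters of the search vector the linear scan above is usually already fine.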

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points. They actually represent the separation of land and water on a map. I don't have complete polygons, just the ordered set of points. I'm looking for a neat but efficient way to visually convey Side A and Side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a glow-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there or a visual effect I haven't considered.
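    One cheap option that keeps the vertex count fixed is the ribbon built once on the CPU: for each polyline point, compute a 2D normal perpendicular to the averaged direction of its two neighbouring segments, emit the original point plus one offset point, and draw the pairs as a triangle strip whose offset edge carries alpha 0 so a trivial fragment shader fades it out on the chosen side. A sketch of the geometry side in Java, assuming the points arrive as a flat float array of x,y pairs:

      // Sketch: turn an open polyline into ribbon vertices (x, y, alpha) on one side.
      // 'width' is the ribbon thickness in world units; flip its sign to switch sides.
      float[] buildRibbon(float[] pts, float width) {
          int n = pts.length / 2;
          float[] out = new float[n * 2 * 3];               // two vertices per input point
          for (int i = 0; i < n; i++) {
              int prev = Math.max(i - 1, 0), next = Math.min(i + 1, n - 1);
              float dx = pts[2 * next] - pts[2 * prev];      // averaged segment direction
              float dy = pts[2 * next + 1] - pts[2 * prev + 1];
              float len = (float) Math.sqrt(dx * dx + dy * dy);
              if (len == 0) len = 1;                         // degenerate guard for a sketch
              float nx = -dy / len, ny = dx / len;           // left-hand normal of the line
              int o = i * 6;
              out[o]     = pts[2 * i];                       // vertex on the line: alpha 1
              out[o + 1] = pts[2 * i + 1];
              out[o + 2] = 1f;
              out[o + 3] = pts[2 * i] + nx * width;          // offset vertex: alpha 0
              out[o + 4] = pts[2 * i + 1] + ny * width;
              out[o + 5] = 0f;
          }
          return out;
      }

    Drawn with GL_TRIANGLE_STRIP (2n vertices) and the per-vertex alpha interpolated across the strip, this gives the one-sided glow without ever duplicating the line more than once.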

    Read the article

  • Best way to detect if vec3 is between vec3(x) and vec3(y) in glsl

    - by elect
    As titled, I am sampling from a texture, and if the color is somewhere in the gray range [vec3(.8), vec3(.9)] and a uniform is 1, I need to substitute that color with another one. I am not a GLSL veteran, but I am pretty sure there is a more elegant and compact (not to mention faster) way than this:

      vec3 textureColor = texture(texture0, oUV);
      if(settings.w == 1 && textureColor.r > .8 && textureColor.r < .9
          && textureColor.g > .8 && textureColor.g < .9
          && textureColor.b > .8 && textureColor.b < .9)

    Read the article

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The tile map has 25 tiles across and 25 tiles down, and each tile is 32x32. See below for how I'm accomplishing this.
    World space to screen space rotation (45 degrees): using a 2D rotation matrix, I use the following:

      double rotation = Math.PI / 4;
      double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
      double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

    World space to screen space scale (Y axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5.
    Problem: it works, kind of. There are some tiny 1-2 px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
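    Rather than rotating and scaling in floating point, 2:1 isometric placement can be done exactly in integers: the rotate-by-45-degrees-and-halve-Y transform reduces to a couple of integer additions and multiplications per tile, so every tile corner lands on a whole pixel and seams cannot appear. A sketch using the grid coordinates directly (the 64x32 diamond footprint is an assumed value; use whatever your tile art actually occupies on screen):

      // Sketch: exact integer placement of tile (gridX, gridY) in a 2:1 isometric view.
      int tileWidthOnScreen = 64, tileHeightOnScreen = 32;   // assumed diamond footprint

      int isoX(int gridX, int gridY) {
          return (gridX - gridY) * (tileWidthOnScreen / 2);
      }

      int isoY(int gridX, int gridY) {
          return (gridX + gridY) * (tileHeightOnScreen / 2);
      }

    If the rotation-matrix route is kept instead, the usual mitigation for the 1-2 px gaps is to round only once, at the very last step, and to author tile art that overlaps by a pixel; the integer form above simply sidesteps the precision question.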

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

      Entity {
          id;
          map<id_type, Attribute> attributes;
      }

      System {
          update();
          vector<Entity> entities;
      }

    A system that just moves all entities along at a constant rate might be:

      MovementSystem extends System {
          update() {
              for each entity in entities
                  position = entity.attributes["position"];
                  position += vec3(1,1,1);
          }
      }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of the entities registered with that system.
    Problem: in reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), and sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase in which the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way to avoid them (short of putting everything in a critical section) is to serialize the parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications back until some later point. The fact that this is even possible rests on the assumption that the write-backs are usually very small in complexity and cheap, whereas the computations are (relatively) very expensive. So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (threads spend more of their time working instead of waiting). A concrete example might be a physics system, which needs to both read and write a lot of data to and from entities. Optimally, all available threads would update a subset of the entities registered with the physics system, but because of race conditions this isn't trivially possible. So without a workaround we would have to find other systems to run in parallel (ones that don't modify the same data as the physics system), otherwise the remaining threads sit idle. That has disadvantages: practically, the L3 cache is almost always better utilized when updating one large system with multiple threads than when running multiple systems at once that each act on different sets of data; and finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes it might not even be possible at all, because a system simply depends on data that is touched by every other system.
    Solution? In my thinking, a possible solution would be a design in which reading/updating and writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, the attributes of entities that needed to be modified are finally written back.
    The Question: how might such a system be implemented to achieve optimal performance, while also making the programmer's life easier? What are the implementation details of such a system, and what might have to change in the existing EC architecture to accommodate this solution?
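    One way to make the read/compute versus write split concrete is to double-buffer the component data: every system reads the "front" copy freely from any thread, queues its modifications instead of applying them, and a cheap single-threaded (or partitioned) commit step applies the queue at the end of the frame. A rough Java sketch of the idea; the names are invented here, not taken from any particular engine:

      // Sketch: systems read 'front' in parallel, record writes, and a commit phase applies them.
      class ComponentStore<T> {
          final Map<Integer, T> front = new HashMap<>();           // read by all systems in parallel
          final Queue<Map.Entry<Integer, T>> pending =
                  new ConcurrentLinkedQueue<>();                   // writes queued from worker threads

          T read(int entityId) { return front.get(entityId); }

          void write(int entityId, T value) {                      // called during the parallel phase
              pending.add(Map.entry(entityId, value));
          }

          void commit() {                                          // single-threaded, end of frame
              for (Map.Entry<Integer, T> e; (e = pending.poll()) != null; ) {
                  front.put(e.getKey(), e.getValue());
              }
          }
      }

    Because the parallel phase never mutates front, systems can read each other's data without locks; the price is one frame of latency on cross-system visibility plus the memory for the queued writes, which matches the assumption above that write-backs are small compared to the computation that produces them.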

    Read the article

  • Is it good practice to clamp FPS well above the lower limit that gives the illusion of movement?

    - by rraallvv
    I started out over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little over 15 FPS; in fact my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest rate that still gives fluid movement, and anything lower gives jerky motion. Is there a practical reason to clamp FPS way above the lower limit? Update: the following image could help to clarify. I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is why I should clamp any of those to values greater than the minimum. For instance, to conserve battery life I could just choose the lower limits, yet 60 or 30 FPS seem to be the most commonly used values.

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x = 512:

      glDisable(GL_TEXTURE_2D);
      glBegin(GL_LINES);
      glColor3f((double)R/255, (double)G/255, (double)B/255);
      glVertex3f(SX, -SPosY, 0); // origin of the line
      glVertex3f(SX, -EPosY, 0); // ending point of the line
      glEnd();
      glEnable(GL_TEXTURE_2D);

    This works, but after having a problem where it wouldn't draw the line white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks

    Read the article

  • Checking whether a specific key was pressed in enchantJS

    - by MxyL
    I am using enchantJS and would like to use the letter and number keys as well as the numpad on a keyboard to do different things (e.g. hotkeys). From this page http://users.csc.calpoly.edu/~foaad/enchant/guide/playerInput.html: "By default, enchant.js provides input listeners for six buttons: UP, DOWN, LEFT, RIGHT, A, and B. By default, the directions are bound to the arrow keys. Any of the six buttons may also be bound to any key with an ASCII value. We'll address that later." So enchant provides the ability to bind keys to different inputs such as up, down, left and right, but how can I simply check whether the D or X key was pressed and, if so, perform certain actions based on that event?

    Read the article

  • Stop a rotating object at a specified angle?

    - by Krummelz
    I'm working in JavaScript with HTML5 and the canvas. I have an object which is rotating at a certain speed, and I need the object's rotation to slow down gradually and the front of the object to stop at a specified angle. (I'm using radians, not degrees.) I have a variable to keep track of the angle which the object is facing, as it rotates. How would I go about getting the object to come to rest, facing the direction I want it to?
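    Kinematically this is the same problem as braking a car to a stop line: with current angular speed w and remaining angle to the target (taken the short way around, in the direction of spin), a constant deceleration of w*w / (2 * remaining) brings the rotation to rest exactly on the target. A hedged per-frame sketch (written in Java here, but it maps one-to-one onto a canvas animation loop; angle, angularSpeed and targetAngle are assumed fields in radians and radians per second):

      // Sketch: each frame, ease a spinning object to rest exactly at targetAngle.
      void update(double dt) {
          // remaining angle to the target, wrapped into [0, 2*PI) along the spin direction
          double remaining = ((targetAngle - angle) % (2 * Math.PI) + 2 * Math.PI) % (2 * Math.PI);
          if (remaining < 1e-3 || angularSpeed < 1e-3) {
              angle = targetAngle;          // close enough: snap and stop
              angularSpeed = 0;
              return;
          }
          // constant deceleration that spends all remaining speed over the remaining angle:
          // from w^2 = 2 * a * remaining  =>  a = w^2 / (2 * remaining)
          double decel = (angularSpeed * angularSpeed) / (2 * remaining);
          angularSpeed = Math.max(0, angularSpeed - decel * dt);
          angle += angularSpeed * dt;
      }

    Apply this only once the decision to stop has been made; before that, keep the normal constant-speed rotation, and the handover is seamless because the formula starts from whatever speed the object currently has.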

    Read the article

  • Which Flash 3D particle engine generates an XML file like this?

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

      <effect pos="0 0 0">
          <property cache="1" lifetime="10000"/>
          <mesh blendmode="add">
              <path>
                  <frame y="100" durtime="1000" x="0" z="0"/>
              </path>
              <scale>
                  <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
                  <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
              </scale>
          </mesh>
          <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
          <quad billboard="false">
          </quad>
          <particle global="false" pos="">
              <scale>
                  <frame y="1" durtime="0" x="1" z="1"/>
                  <frame y="1" durtime="2000" x="1.5" z="1.5"/>
              </scale>
          </particle>
      </effect>

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during an experiment in visual psychophysics. This is usually done by synchronizing the main loop with the refresh rate of the screen: for example, if my monitor runs at 60 Hz, I would like to specify that frequency to my framework. For example, if my game loop is the following:

      void gameloop() {
          // do some computation
          printDeltaT();
          // flip buffers
      }

    I would like printDeltaT() to report a constant time interval. Is this possible with GLFW?

    Read the article

  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model like the one above, which I want to rotate around a given location (marked in red), but I can only rotate it around the middle. How can I rotate it around a custom point? Edit: I managed to rotate the model around the position below by getting the radius of the model and applying it to the world matrix:

      Vector3 point = new Vector3(-radius, 0, 0);
      world = Matrix.CreateTranslation(-radius, 0, 0);

    But now I cannot change the position of the object, and it is always centered in the middle of the screen. I think that's because I applied the above code. How can I place it anywhere I want?
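    Rotating about an arbitrary point is always the same sandwich: bring the pivot to the origin, rotate, move the pivot back, and only then apply the translation that places the model in the world; folding the placement into that single CreateTranslation call is what pins the model to the screen centre. A hedged sketch of the composition order using libgdx's Matrix4 (column-vector convention); pivot is the red point in the model's own space and worldPos is wherever the model should sit:

      // Sketch: rotate the model about 'pivot' and then place it at 'worldPos'.
      Matrix4 world = new Matrix4()
              .translate(worldPos)                        // place the model in the world
              .translate(pivot)                           // move the pivot back where it belongs...
              .rotate(Vector3.Y, angleDeg)                // ...rotate about it...
              .translate(-pivot.x, -pivot.y, -pivot.z);   // ...after bringing it to the origin

    In XNA the same composition should read roughly Matrix.CreateTranslation(-pivot) * Matrix.CreateRotationY(angle) * Matrix.CreateTranslation(pivot) * Matrix.CreateTranslation(worldPos), since XNA multiplies row vectors from the left; either way, the world position stays a separate, freely changeable term.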

    Read the article

  • How is the iOS support in UDK compared to Unity?

    - by Joe
    I have some significant experience in Unity for web clients, but I'm skeptical about the $3,000 price tag to create and deploy iOS games. I noticed UDK now supports iOS, appears to have "free" version control, and is only $100 from what I can tell. My primary question is: does UDK make iOS development and deployment easy, or do you have to jump through a couple of hoops to make it work? A few side questions not worth another post: How hard is the transition from Unity to UDK? Is UnrealScript easy to pick up from a C/C# background? Does the UDK have good documentation compared to Unity?

    Read the article

  • How can I use the dualforward parameter in my Unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a bunch of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering or easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader: using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps" and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

      Shader "Bumped Specular Dual Lightmaps" {
          Properties {
              _Color ("Main Color", Color) = (1,1,1,1)
              _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
              _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
              _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
              _BumpMap ("Normalmap", 2D) = "bump" {}
          }
          SubShader {
              Tags { "RenderType"="Opaque" }
              LOD 400
              CGPROGRAM
              #pragma surface surf BlinnPhong dualforward
              sampler2D _MainTex;
              sampler2D _BumpMap;
              fixed4 _Color;
              half _Shininess;
              struct Input {
                  float2 uv_MainTex;
                  float2 uv_BumpMap;
              };
              void surf (Input IN, inout SurfaceOutput o) {
                  fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                  o.Albedo = tex.rgb * _Color.rgb;
                  o.Gloss = tex.a;
                  o.Alpha = tex.a * _Color.a;
                  o.Specular = _Shininess;
                  o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
              }
              ENDCG
          }
          FallBack "Specular"
      }

    This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, they look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • Backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation (is that correct?). (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

      Vector3F view_dir(0.0f, 0.0f, 1.0f);
      std::vector<Triangle> to_remove;
      for (Triangle &t : m_triangles)
      {
          Vector4F e1 = t.v2 - t.v1;
          Vector4F e2 = t.v3 - t.v1;
          Vector3F normal(
              e1.y * e2.z - e1.z * e2.y,
              e1.z * e2.x - e1.x * e2.z,
              e1.x * e2.y - e1.y * e2.x
          );
          normal.Normalize();
          float dot = Dot(view_dir, normal);
          if (dot <= 0)
              to_remove.push_back(t);
      }
      for (Triangle& t : to_remove)
          m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason? For a better explanation I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/
    Update: the error occurs only when the triangle has a non-zero offset from the origin.
    Update 2: if I perform backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/projection transforms? In that case it looks like culling would not depend on the projection, and that doesn't seem right.
    Update 3: I found the answer and will post it in two hours (again, because of the reputation requirement).
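    For reference, the usual world-space test does not use a constant view direction: it dots the face normal with the vector from the camera to (a vertex of) the triangle, which is why a fixed (0,0,1) only behaves for triangles near the origin. A hedged sketch of that variant, written here in Java with minus/cross/dot/normalized as assumed helpers on the vector type from the snippet above:

      // Sketch: world-space backface test against the actual camera position.
      // A triangle faces away when its normal points in the same general direction
      // as the vector from the camera to the triangle (for counter-clockwise front faces).
      boolean isBackFacing(Vector3F cameraPos, Triangle t) {
          Vector3F e1 = t.v2.minus(t.v1);
          Vector3F e2 = t.v3.minus(t.v1);
          Vector3F normal = e1.cross(e2).normalized();   // winding defines which side is "front"
          Vector3F toTriangle = t.v1.minus(cameraPos);   // camera -> triangle, not a fixed axis
          return normal.dot(toTriangle) >= 0;
      }

    Culling in clip space by the sign of the projected triangle's signed area is equally valid (it is effectively what hardware does); what cannot work is culling against a fixed axis before the view transform, because "toward the camera" depends on where the triangle is.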

    Read the article

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, a copy of the structured buffer is made to a CPU-accessible buffer in order to print the contents of the structured buffer to the console. The output is as expected (the numbers 0 to 99 appear on the console). Then I use a compute shader that should change the contents of the structured buffer:

      RWStructuredBuffer<float> Result : register( u0 );

      [numthreads(1, 1, 1)]
      void CS_main( uint3 GroupId : SV_GroupID )
      {
          Result[GroupId.x] = GroupId.x * 10;
      }

    But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default and FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default

    Read the article

  • How can I gain access to a player instance in a Minecraft mod?

    - by Andrew Graber
    I'm creating a Minecraft mod with a pickaxe that takes away experience when you break a block. The method for taking away experience from a player is addExperience on EntityPlayer, so I need to get an instance of EntityPlayer for the player using my pickaxe when the pickaxe breaks a block, so that I can remove the appropriate amount of experience. My pickaxe class currently looks like this:

      public class ExperiencePickaxe extends ItemPickaxe {
          public ExperiencePickaxe(int ItemID, EnumToolMaterial material) {
              super(ItemID, material);
          }

          public boolean onBlockDestroyed(ItemStack par1ItemStack, World par2World, int par3,
                  int par4, int par5, int par6, EntityLiving par7EntityLiving) {
              if ((double) Block.blocksList[par3].getBlockHardness(par2World, par4, par5, par6) != 0.0D) {
                  EntityPlayer e = new EntityPlayer(); // create an instance
                  e.addExperience(-1);
              }
              return true;
          }
      }

    Obviously, I cannot actually create a new EntityPlayer, since it is an abstract class. How can I get access to the player using my pickaxe?
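    For what it's worth, the entity that broke the block is already handed to that method: the last parameter of onBlockDestroyed is the EntityLiving holding the tool, so an instanceof check plus a cast avoids constructing anything. A hedged sketch against the same (older Forge-era) signature used above:

      public boolean onBlockDestroyed(ItemStack stack, World world, int blockId,
              int x, int y, int z, EntityLiving entity) {
          if ((double) Block.blocksList[blockId].getBlockHardness(world, x, y, z) != 0.0D) {
              // The wielder is passed in; take experience only when it is actually a player.
              if (entity instanceof EntityPlayer) {
                  ((EntityPlayer) entity).addExperience(-1);
              }
          }
          return true;
      }

    Whether addExperience accepts a negative amount, or whether the experience fields have to be adjusted directly instead, is worth verifying against the specific Minecraft/Forge version in use.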

    Read the article
