Search Results

Search found 39156 results on 1567 pages for 'device driver development'.


  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading using FBOs and their associated targets (textures, in my case). I have a simple (I think :P) geometry+fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    My render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    Assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated; however, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 output[3];

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
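
    A minimal sketch of the matching application-side setup, assuming a C++/OpenGL 3.3 context and three already-created GL_RGB32F textures (the textures[] array here is hypothetical):

        // Attach the three render targets to an FBO and route fragment
        // shader outputs 0..2 to them: layout(location = n) writes to
        // whatever attachment drawBuffers[n] names.
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

        for (int i = 0; i < 3; ++i) {
            // textures[i] is assumed to be a GL_TEXTURE_2D with GL_RGB32F storage.
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, textures[i], 0);
        }

        const GLenum drawBuffers[3] = {
            GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2
        };
        glDrawBuffers(3, drawBuffers);

        if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // handle the incomplete-framebuffer case
        }

    With this in place, the layout(location = n) outputs land in the corresponding attachments. The textures only need to be attached to the currently bound FBO while drawing; they do not also have to be bound to texture units.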

    Read the article

  • What are some ways to texture map a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to build a proper, nice-looking environment. I followed a tutorial to create a terrain from a heightmap. To texture it, I just apply a grass texture and tile it a number of times. What I want is texturing that looks really realistic but is also generated automatically (for example, if I use Perlin noise to generate a terrain and then texture it). I've already learned about multi-texturing and loading a map file with different colors for different textures, but I don't think that's very efficient: for cliffs or very steep areas it will tile the texture badly, since the map is projected from the top. (Also, I don't know how I'd draw roads or dirt paths with that approach.) I'm looking for an efficient way to realistically texture procedurally generated terrain.
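
    One common automatic scheme is slope-based blending, where the mix between textures is derived from the terrain normal instead of a painted map; a hypothetical sketch of the weight calculation:

        // Hypothetical slope-based blend: flat ground gets grass, steep
        // faces fade to rock, so cliffs stop showing stretched grass.
        struct BlendWeights { float grass; float rock; };

        BlendWeights WeightsFromSlope(float normalY)
        {
            // normalY = Y component of the unit surface normal:
            // 1.0 on flat ground, near 0.0 on a vertical cliff.
            float slope = 1.0f - normalY;

            // Fade from grass to rock between 20% and 50% slope (smoothstep).
            float t = (slope - 0.2f) / 0.3f;
            t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
            float rock = t * t * (3.0f - 2.0f * t);

            return { 1.0f - rock, rock };
        }

    The same weights extend naturally to more layers (height bands for snow or sand, for instance). For the stretched-texture problem on cliffs specifically, triplanar mapping (blending three planar projections weighted by the normal's axes) is the usual companion technique, and roads or paths are typically splatted into the blend data separately, e.g. along a spline.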

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in the texels of a texture when I'm accessing texels at coordinates that are not directly at a texel center. The catch is that the texels store two bands of spherical harmonics coefficients (i.e. 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any further considerations? In other words: if I were to first convert the coefficients back to intensities and then manually interpolate between two intensities, would the result be the same as interpolating between the coefficient vectors directly and then converting the interpolated result to an intensity?
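
    The property in play is that SH reconstruction is linear in the coefficients (the intensity is a dot product of the coefficients with the basis values), so interpolation and reconstruction commute. A small self-contained sketch illustrating the identity, with arbitrary values:

        #include <array>
        #include <cassert>
        #include <cmath>

        using SH4 = std::array<float, 4>;   // 2 SH bands = 4 coefficients

        // Reconstruction: dot product of coefficients with the SH basis
        // evaluated at some direction, i.e. linear in the coefficients.
        float Reconstruct(const SH4& c, const SH4& basis)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k) sum += c[k] * basis[k];
            return sum;
        }

        SH4 Lerp(const SH4& a, const SH4& b, float t)
        {
            SH4 r;
            for (int k = 0; k < 4; ++k) r[k] = a[k] + t * (b[k] - a[k]);
            return r;
        }

        int main()
        {
            SH4 texelA = { 0.80f,  0.10f, -0.20f, 0.05f };
            SH4 texelB = { 0.50f, -0.30f,  0.40f, 0.10f };
            SH4 basis  = { 0.282f, 0.325f, 0.120f, -0.418f }; // y_k(dir) for some dir
            float t = 0.37f;

            // Interpolate coefficients, then reconstruct...
            float viaCoefficients = Reconstruct(Lerp(texelA, texelB, t), basis);

            // ...versus reconstruct first, then interpolate intensities.
            float viaIntensities = Reconstruct(texelA, basis)
                + t * (Reconstruct(texelB, basis) - Reconstruct(texelA, basis));

            // Equal up to floating-point rounding.
            assert(std::fabs(viaCoefficients - viaIntensities) < 1e-5f);
        }

    Bilinear and trilinear filtering are just nested lerps, so the same argument extends to them; the place to be careful is any non-linear step (such as clamping negative reconstructed intensities) applied before the comparison.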

    Read the article

  • As an indie, how do you protect your game?

    - by user16829
    As an indie, you might not work for a company, and you may have a great game idea that you feel is going to be a big success. When you release your game, how do you protect it as your own creation, so that someone else can't steal the title and publish a "sequel" (e.g. Your-Game-Name 2, 3, 4), or produce spin-off products the way Angry Birds has, but without your permission? How can we prevent this from happening through legal means such as copyrights and trademarks? If a professional could fill us in on this, that would be great.

    Read the article

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this:

    - continue any in-progress animations
    - check for user input
    - apply AI
    - move things
    - resolve events such as collisions
    - draw it all to screen

    I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is: if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well-established idea, so there must be an answer to this problem; I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work, and how would you implement it so that the two are in sync but running at different speeds?
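
    A common answer is a fixed-timestep loop: logic advances in discrete steps, and the renderer only ever sees completed states, interpolating between the last two to stay smooth. A minimal single-threaded sketch, with GameState, Update, Render, and QuitRequested as hypothetical stand-ins for your own types:

        #include <chrono>

        struct GameState { /* positions, animation state, ... */ };

        bool QuitRequested();                     // provided elsewhere
        void Update(GameState& s, double dt);     // input, AI, movement, collisions
        void Render(const GameState& prev, const GameState& curr, double alpha);

        void RunGameLoop(GameState previous, GameState current)
        {
            using Clock = std::chrono::steady_clock;
            const double dt = 1.0 / 60.0;         // fixed logic step, seconds
            double accumulator = 0.0;
            auto lastTime = Clock::now();

            while (!QuitRequested()) {
                auto now = Clock::now();
                accumulator += std::chrono::duration<double>(now - lastTime).count();
                lastTime = now;

                // Run as many whole logic steps as real time allows.
                while (accumulator >= dt) {
                    previous = current;           // snapshot the last complete state
                    Update(current, dt);
                    accumulator -= dt;
                }

                // Render a blend of the two completed states; the renderer
                // never observes a half-updated world.
                double alpha = accumulator / dt;  // in [0, 1)
                Render(previous, current, alpha);
            }
        }

    The threaded variant uses the same idea: the logic thread publishes finished snapshots (e.g. via double buffering) and the render thread only reads published ones, so a draw can never catch the AI or collision code mid-update.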

    Read the article

  • Client Side Prediction for a Look Vector

    - by Mike Sawayda
    So I am making a first-person networked shooter. I am working on client-side prediction, where I predict player positions and look vectors client-side based on input messages received from the server. Right now I am only worried about the look vectors. I receive the correct look vector from the server about 20 times per second, and I check it against the look vector I have client-side. I want to interpolate the client's look vector towards the correct server-side one over a period of time, so that no matter how far you are from the server's look vector, you interpolate to it over the same amount of time; e.g. if you were 10 degrees off, it would take the same amount of time to line up with the server copy as if you were 2 degrees off. My code looks something like the snippet below, but the problem is that the amount by which I change the client's copy gets infinitesimally small, so you never actually reach the server's copy. This is because I always calculate the difference and move by only a percentage of it every frame. Does anyone have any suggestions on how to interpolate towards the server's copy correctly?

        if (rotationDiffY > ClientSideAttributes::minRotation)
        {
            if (serverRotY > clientRotY)
            {
                playerObjects[i]->collisionObject->rotation.y += (rotationDiffY * deltaTime);
            }
            else
            {
                playerObjects[i]->collisionObject->rotation.y -= (rotationDiffY * deltaTime);
            }
        }
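
    One way to get the "same duration regardless of error" behavior is to record the client value when a snapshot arrives and lerp from that start value to the target over a fixed window, instead of chasing a fraction of the remaining difference each frame. A hypothetical sketch:

        // Converge to each server snapshot over a fixed window, so a
        // 10-degree error and a 2-degree error both finish in blendTime.
        struct RotationBlend {
            float start   = 0.0f;   // client rotation when the snapshot arrived
            float target  = 0.0f;   // rotation reported by the server
            float elapsed = 0.0f;   // seconds since the snapshot arrived
        };

        const float blendTime = 0.1f;   // seconds; tune to taste

        void OnServerSnapshot(RotationBlend& blend, float clientRotY, float serverRotY)
        {
            blend.start   = clientRotY;
            blend.target  = serverRotY;
            blend.elapsed = 0.0f;
        }

        float UpdatedRotation(RotationBlend& blend, float deltaTime)
        {
            blend.elapsed += deltaTime;
            float t = blend.elapsed / blendTime;
            if (t > 1.0f) t = 1.0f;               // done: sit exactly on the target
            return blend.start + (blend.target - blend.start) * t;
        }

    One detail to add for yaw: wrap (target - start) into [-180, 180] degrees when the snapshot arrives, so the blend always rotates the short way around.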

    Read the article

  • How to adjust the shooting angle of an object

    - by Blue
    I've been trying to add an angle-adjustment feature to a power bar I got from unity3dStudents, but I can't seem to get the code right. I'm using AddForce on a rigidbody; it works, but the power is too great. I also found that rotating the object it shoots from changes the angle, but I don't know how to proceed from there. Can somebody show me the problem with the script below, i.e. how to add height to the AddForce without the shot going too far up or to the side? Or how to change the angle of the object?

        var theAngle : int;
        var maxAngle : int = 130;
        var minAngle : int = 0;
        var angleIncreasing : boolean = false;
        var angleDecreasing : boolean = false;
        var rotationSpeed : float = 10;
        var ball : Rigidbody;
        var spawnPos : Transform;
        var shotForce : float = 25;

        function Update () {
            if (Input.GetKeyDown("k")) {
                angleIncreasing = true;
                angleDecreasing = false;
            }
            if (Input.GetKeyUp("k")) {
                angleIncreasing = false;
            }
            if (Input.GetKeyDown("l")) {
                angleIncreasing = false;
                angleDecreasing = true;
            }
            if (Input.GetKeyUp("l")) {
                angleDecreasing = false;
            }

            -------

            if (angleIncreasing) {
                theAngle += Time.deltaTime * rotationSpeed;
                if (theAngle > maxAngle) { theAngle = maxAngle; }
            }
            if (angleDecreasing) {
                theAngle -= Time.deltaTime * rotationSpeed;
                if (theAngle < minAngle) { theAngle = minAngle; }
            }
        }

        function Shoot (power : float, angle : int) {
            ----
            var forward : Vector3 = spawnPos.forward;
            var upward : Vector3 = spawnPos.up;
            pFab.AddForce(forward * power * shotForce);
            pFab.AddForce(upward * angle * 10);
            ----
        }
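
    A likely culprit is that the forward and upward pushes are applied independently, so both the direction and the total strength drift as the angle changes. Building one launch direction from the angle with cos/sin keeps the strength constant; a language-agnostic sketch in C++ (names hypothetical), which would translate to a single AddForce call in Unity:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Combine forward and up into one launch velocity so the shot's
        // overall strength stays equal to `power` at every angle.
        // Assumes forward and up are unit length and perpendicular.
        Vec3 LaunchVelocity(Vec3 forward, Vec3 up, float angleDegrees, float power)
        {
            float a = angleDegrees * 3.14159265f / 180.0f;
            float c = std::cos(a);
            float s = std::sin(a);
            return { (forward.x * c + up.x * s) * power,
                     (forward.y * c + up.y * s) * power,
                     (forward.z * c + up.z * s) * power };
        }

    At 0 degrees the shot goes flat, at 90 degrees straight up, and the magnitude never exceeds `power`, which addresses the "too much power" symptom of stacking two separate forces.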

    Read the article

  • HTML5 Game (Canvas) - UI Techniques?

    - by Jason L.
    Hi! I'm in the process of building a JavaScript / HTML5 game (using Canvas) for mobile (Android / iPhone / WebOS) with PhoneGap. I'm currently trying to design how the UI and the playing board should be built and how they should interact, but I'm not sure what the best solution is. Here's what I can think of:

    - Build the UI right into the canvas, using things like drawImage and fillText.
    - Build parts of the UI outside the canvas using regular DOM objects, and then float a div over the canvas when UI elements need to overlap the playing-board canvas.

    Are there any other techniques I could use for building the game UI that I haven't thought of? Also, which of these would be considered the "standard" way? (I know HTML5 games are not very popular yet, so there probably isn't a "standard" way.) And finally, which way would YOU recommend or use? Many thanks in advance!

    Read the article

  • How to enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render jBullet's physics world to debug some issues in my game, but I'm not finding much on enabling jBullet's debugDraw facility. Do I need to write my own debug-draw implementation, or is there an easier way to draw the physics models to the screen? If there is a built-in method I would prefer to use that; otherwise I guess I'll start writing my own functions to handle it.
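
    jBullet is a Java port of Bullet, and the debug-render hook follows the same pattern as native Bullet: you supply the line-drawing callback and the engine decides what to draw from the debug-mode flags. A sketch in native Bullet C++ terms (the port's exact class and method names should be verified against your jBullet version):

        #include <btBulletDynamicsCommon.h>

        // You implement the drawing primitives; Bullet walks the world and
        // calls them for wireframes, AABBs, contact points, and so on.
        class MyDebugDrawer : public btIDebugDraw {
            int mode = DBG_DrawWireframe | DBG_DrawAabb;
        public:
            void drawLine(const btVector3& from, const btVector3& to,
                          const btVector3& color) override {
                // Forward to your renderer, e.g. batch into a line vertex buffer.
            }
            void drawContactPoint(const btVector3& pointOnB, const btVector3& normalOnB,
                                  btScalar distance, int lifeTime,
                                  const btVector3& color) override {
                drawLine(pointOnB, pointOnB + normalOnB * 0.1f, color);
            }
            void reportErrorWarning(const char* warning) override { /* log it */ }
            void draw3dText(const btVector3& location, const char* text) override {}
            void setDebugMode(int m) override { mode = m; }
            int getDebugMode() const override { return mode; }
        };

        // Usage: register once, then emit the lines each frame.
        //   world->setDebugDrawer(&drawer);
        //   ...
        //   world->debugDrawWorld();   // invokes drawLine for every debug primitive

    On the Java side the corresponding pieces are the IDebugDraw class in com.bulletphysics.linearmath and dynamicsWorld.setDebugDrawer(...); how completely the port implements debugDrawWorld is worth checking against its source.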

    Read the article

  • Portal View/Projection Matrix near plane

    - by melak47
    For render-to-texture/camera-based portal rendering, the basics seem simple enough. However, with a free camera, you will be looking at such portals at an angle most of the time. A regular near clipping plane will not always work here: it will either intersect the wall the portal sits on, or possibly objects in front of the wall. The desired near clipping plane would instead be aligned with the portal, producing a view volume that is truncated at the portal's plane rather than perpendicular to the view direction. So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
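
    The established technique for this is oblique near-plane clipping (Eric Lengyel's oblique view frustum method), which rewrites the third row of the projection matrix so the near plane coincides with an arbitrary camera-space plane. A sketch for a column-major OpenGL-style matrix, adapted from the published technique; the element indices assume that layout, so treat them as something to verify against your math library:

        #include <cmath>

        inline float sgn(float a)
        {
            return (a > 0.0f) ? 1.0f : ((a < 0.0f) ? -1.0f : 0.0f);
        }

        // Replace the near plane of the column-major projection matrix m
        // (16 floats) with the camera-space plane p (ax + by + cz + d = 0,
        // normal pointing toward the camera). Geometry behind the portal's
        // plane is then clipped instead of geometry behind the old near plane.
        void MakeObliqueProjection(float m[16], const float p[4])
        {
            // Clip-space corner point opposite the plane.
            float qx = (sgn(p[0]) + m[8]) / m[0];
            float qy = (sgn(p[1]) + m[9]) / m[5];
            float qz = -1.0f;
            float qw = (1.0f + m[10]) / m[14];

            // Scale the plane so it lands exactly on the near clip boundary.
            float s = 2.0f / (p[0] * qx + p[1] * qy + p[2] * qz + p[3] * qw);

            // The third row of the matrix becomes the scaled plane.
            m[2]  = p[0] * s;
            m[6]  = p[1] * s;
            m[10] = p[2] * s + 1.0f;
            m[14] = p[3] * s;
        }

    The far plane gets skewed in the process, which is normally acceptable for portal rendering; depth precision is redistributed rather than lost entirely.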

    Read the article

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of about 400 polys each on a high-end GL ES 2.0 mobile device, or should I definitely use pre-rendered keyframe animation instead? The scene has only one light source and precalculated shadow maps, and is viewed from the top, like in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must provide fast and simple skeletal animation for frames with 100+ low-poly meshes. Any ideas?

    Read the article

  • How to divide hex grid evenly among n players?

    - by manabreak
    I'm making a simple hex-based game, and I want the map to be divided evenly among the players. The map is created randomly, and I want the players to have about equal numbers of cells, held in relatively small areas. For example, with four players and 80 cells on the map, each player would have about 20 cells (it doesn't have to be spot-on accurate). Also, each of a player's contiguous chunks should be no more than four cells; that is to say, when the map is generated, the biggest "chunks" cannot be more than four cells each. I know this is not always possible for two or three players (as this resembles the map-coloring problem), and I'm OK with other solutions in those cases (like generating maps that avoid the problem instead). But for four to eight players, how could I approach this? As always, any and all help is appreciated. :)
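
    One starting point, setting aside the four-cell chunk limit at first, is multi-source round-robin growth: drop one seed per player and let each player claim one adjacent free cell per round, which keeps the counts balanced and the regions contiguous. A hypothetical sketch over an adjacency-list representation of the hex map:

        #include <queue>
        #include <vector>

        // adj[c] lists the neighbors of cell c; seeds holds one starting
        // cell per player. Returns owner[c] = player index (-1 if unreached).
        std::vector<int> DivideMap(const std::vector<std::vector<int>>& adj,
                                   const std::vector<int>& seeds)
        {
            std::vector<int> owner(adj.size(), -1);
            std::vector<std::queue<int>> frontier(seeds.size());
            for (size_t p = 0; p < seeds.size(); ++p) {
                owner[seeds[p]] = static_cast<int>(p);
                frontier[p].push(seeds[p]);
            }

            bool grew = true;
            while (grew) {
                grew = false;
                for (size_t p = 0; p < seeds.size(); ++p) {    // round-robin turns
                    while (!frontier[p].empty()) {
                        int cell = frontier[p].front();
                        int next = -1;
                        for (int n : adj[cell]) {
                            if (owner[n] == -1) { next = n; break; }
                        }
                        if (next == -1) { frontier[p].pop(); continue; }
                        owner[next] = static_cast<int>(p);     // claim one cell
                        frontier[p].push(next);
                        grew = true;
                        break;                                 // one cell per turn
                    }
                }
            }
            return owner;
        }

    The chunk-size constraint could then be layered on top, for instance by giving each player several seeds (roughly cellCount / players / 4 of them) and capping each seed's growth at four cells, though seed placement then needs care so the capped regions can still fill the map.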

    Read the article

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open source projects; looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that I should focus on? What should I focus on when I read code? I'm asking this in general terms because, if I asked it about something specific, the question would only apply to one or two projects rather than a whole range of projects covering different types of games and levels of difficulty.

    Read the article

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position; the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

        VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
        {
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position,
                               &this->Target, &this->UpDirection);
        }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position, but I can't work out how to calculate the offset to move the drawing origin by. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED: I solved my problem. Here is the code if anyone is interested:

        VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
        {
            // Get the settings of the current viewport.
            D3DVIEWPORT9 ViewPort;
            this->Direct3DDevice->GetViewport(&ViewPort);

            // Convert the screen coordinates of the mouse to world space coordinates.
            D3DXVECTOR3 VectorOne;
            D3DXVECTOR3 VectorTwo;
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ = 0.0f;
            float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x))
                         / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y))
                         / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Move the camera into the screen.
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position,
                               &this->Target, &this->UpDirection);

            // Calculate the world space vectors again, based on the new view matrix.
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ2 = 0.0f;
            float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x))
                          / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y))
                          / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Create a temporary translation matrix for the origin offset.
            D3DXMATRIX TranslationMatrix;
            D3DXMatrixIdentity(&TranslationMatrix);

            // Calculate the origin offset.
            D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX,
                                  WorldY2 - WorldY, 0.0f);

            // Add the offset to the camera's world matrix.
            this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
        }

    Maybe someone even has a better solution than mine.

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity that will consist of solid square sprites/tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game, and that I can import a single square sprite and shade it appropriately in Unity, as opposed to importing squares in many different colors. My experience adjusting hue and saturation in Photoshop is that white is not an easy color to change: things that are white tend to stay white. My testing in Unity, however, shows that I can change the "color" of a sprite to anything other than white, and the sprite is seemingly shaded appropriately, despite what my Photoshop experience would suggest. Since white objects do take on the appropriate color shading when changed within Unity, my gut tells me white is the best base color to begin with, meaning I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best color of sprite to begin with, and why does something like this work in Unity when adjusting hue and saturation in Photoshop does not?
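
    The difference comes down to the operation involved: sprite tinting in engines like Unity is, as far as the observed behavior suggests, a per-channel multiply of the texel by the tint color, whereas Photoshop's hue/saturation tool rotates hue, and pure white has no hue to rotate. A quick illustration of the multiply:

        struct Color { float r, g, b, a; };

        // Tinting as most sprite renderers implement it: component-wise
        // multiplication of the texture sample by the tint color.
        Color Tint(Color texel, Color tint)
        {
            return { texel.r * tint.r, texel.g * tint.g,
                     texel.b * tint.b, texel.a * tint.a };
        }

        // A white texel (1,1,1,1) times any tint yields exactly that tint,
        // so a white sprite can become any color without losing detail:
        //   Tint({1, 1, 1, 1}, {0.2f, 0.6f, 1.0f, 1}) == {0.2f, 0.6f, 1.0f, 1}
        // A colored texel, by contrast, can only get darker per channel.

    By that arithmetic, a white base sprite is the most flexible starting point for tiles you want to recolor freely at runtime.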

    Read the article

  • How to animate a sprite along with a move action in Cocos2d?

    - by user1201239
    Cocos2d-android: I have an animation with 5 frames of close-cropped images. I want the sprite to run the animation and move in the -x direction at the same time; i.e. I have a running player who collides with an obstacle and falls down, and while falling, the sprite should both animate and MoveBy in the -x direction.

        gameOverAnimation = CCSprite.sprite("gmovr00");
        gameOverAnimation.setAnchorPoint(0, 0);
        gameOverAnimation.setPosition(340.0f, 200.0f);
        addChild(gameOverAnimation, 10);

        CCIntervalAction action1 = CCAnimate.action(mEndAnimation, false);
        action1.setDuration(1.0f);
        CCIntervalAction delay = CCDelayTime.action(0.68f);
        CCMoveBy actionBy = CCMoveBy.action(1.0f, CGPoint.ccp(-340, 0));
        CCIntervalAction seq1 = CCSpawn.actions(action1, actionBy);
        // CCSpawn spawn = CCSpawn.actions(action1, actionBy);
        CCSequence sequence1 = CCSequence.actions(seq1,
                CCCallFuncN.action(this, "gameOver"));
        gameOverAnimation.runAction(sequence1);

    The code above runs the animation first and only then does the move. Thanks for the help. Also, can someone explain the concept of timing with frame animation, or point me to a good example?

    Read the article

  • Should I use float, double, or decimal for stats, position, etc?

    - by Ryan Peschel
    The problem with float and double is that they are not exact. If you are going to do something like store replays, the values need to be exact. The problem with decimal is that it is approximately 16x slower (confirmed by searching and by personal testing) than float and double. Couldn't Vector2 be another problem, since it uses floats internally for all its components? How do other games solve this? I'm sure they must use floats and doubles, but aren't those non-deterministic across platforms and architectures? Replay files for games like SC2 play back linearly, so you cannot skip ahead; how do they solve the determinism issue with floating-point numbers?
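
    The usual approach for replay and lockstep determinism is to keep the simulation in integer arithmetic, most commonly a fixed-point type, since integer math is bit-identical on every platform and compiler. A minimal 16.16 sketch:

        #include <cstdint>

        // 16.16 fixed point: the high 16 bits are the integer part, the low
        // 16 bits the fraction. All operations are integer operations, so
        // results are exactly reproducible everywhere.
        struct Fixed {
            int32_t raw;   // stored as value * 65536

            static Fixed fromInt(int32_t v) { return { v * 65536 }; }
            float toFloat() const { return raw / 65536.0f; }  // display only!

            Fixed operator+(Fixed o) const { return { raw + o.raw }; }
            Fixed operator-(Fixed o) const { return { raw - o.raw }; }
            Fixed operator*(Fixed o) const {
                return { static_cast<int32_t>(
                    static_cast<int64_t>(raw) * o.raw / 65536) };
            }
            Fixed operator/(Fixed o) const {
                return { static_cast<int32_t>(
                    static_cast<int64_t>(raw) * 65536 / o.raw) };
            }
        };

    A replay then stores only the input stream: with a deterministic simulation, re-running the same inputs reproduces the same game exactly. That is also why such replays cannot seek freely; jumping forward would require re-simulating every step up to the target point.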

    Read the article

  • Graphics hardware

    - by Vanangamudi
    Which vendor provides the better GPGPU? My requirements are confined to rendering, utilising the GPU for BSDF building, for example. Intel has started shipping the Ivy Bridge chipset GPUs, which are comparably fast to HD5960 cards. I'm not against nvidia or amd, but I'm a fan of Intel; how does it compare to nvidia in price and performance? And if possible, may I know how all of them perform with OpenCL? I'm not sure this is the right place to ask, but I don't know where else to.

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ "enterprise" coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with the business logic? This needs refactoring... so does this. Refactor, refactorrr."

    This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it reflects exactly the attitude many game devs seem to take when writing production game code:

        Mistake #4: Building a system, not a game ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant, and have it handle every situation. We find that itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code is weighted heavily towards the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be premature optimisation; there are more important performance issues to tackle before you worry about MVC overhead (e.g. the render pipeline, AI algorithms, data structure traversal, etc.).

    The same goes for TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. anything operating within a physics simulation). Perhaps I'm just looking at the wrong source code, but why do we not see more of these "enterprise" practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 where a float4 was expected, or something like that, but I've checked over and over, and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader
        linkage error: Signatures between stages are incompatible. The input stage
        requires Semantic/Index (COLOR,0) as input, but it is not provided by the
        output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask  Register  SysValue  Format  Used
        // -------------------- -----  -----  --------  --------  ------  ----
        // POSITION                 0   xyz          0      NONE   float  xyz
        // NORMAL                   0   xyz          1      NONE   float
        // COLOR                    0   xyzw         2      NONE   float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout is also visible in PIX (screenshot not included here). The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • How can I get a 2D texture to rotate like a compass in XNA?

    - by IronGiraffe
    I'm working on a small maze puzzle game, and I'm trying to add a compass to make it somewhat easier for the player to find their way around the maze. The problem is that I'm using XNA's Draw method to rotate the arrow, and I don't really know how to make it rotate properly. What I need is for the arrow to point towards the exit from the player's position, but I'm not sure how to do that. Does anyone know how I can do this, or is there a better way?
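
    The rotation you want is the angle of the player-to-exit vector, which is one atan2 call; the result can be passed into the rotation parameter of SpriteBatch.Draw, with the sprite's center as the origin argument so the needle pivots in place. A small sketch of the math (C++ here; it maps one-to-one onto the XNA call):

        #include <cmath>

        // Angle in radians that makes a sprite whose art points along +X
        // face from (playerX, playerY) toward (exitX, exitY). Both points
        // must be in the same space (e.g. world coordinates); XNA's
        // downward-growing Y is handled automatically by the shared space.
        float CompassAngle(float playerX, float playerY, float exitX, float exitY)
        {
            return std::atan2(exitY - playerY, exitX - playerX);
        }

        // If the arrow art points "up" instead of "right", add a quarter turn:
        //   rotation = CompassAngle(...) + 3.14159265f / 2.0f;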

    Read the article

  • Should I be using a game engine?

    - by Kyle
    I'm an experienced programmer, but I'm completely new to making games. I'm thinking of making an iPhone game similar to a 2D tower defense game. In the web programming world, it would be a big waste of time to build a website without some sort of web framework (e.g. Ruby on Rails). Is the same true for making games? Do people mostly use some sort of framework or game engine to make a game? If so, what are the popular ones for iOS?

    Read the article

  • Complete Beginner to Game Programming and Unreal Engine 4, Looking For Advice [on hold]

    - by onemic
    I am currently a 2nd-year programming student (I just finished my first year, so I will be starting my second year in September) and have mainly learned C and C++ in my classes. In terms of what I know of C++: general inheritance, polymorphism, operator overloading, iterators, a little about templates (only class and function templates), etc., but not the more advanced topics like linked lists and other sequential containers (containers in general, I guess), enumerations, most of the standard library (other than strings and vectors), and probably a bunch of other things I don't even know about yet.

    I subscribed to Unreal Engine 4, as I was very intrigued by their Unreal Tournament announcement earlier this month, especially after hearing that UE4 is going completely C++. My end goal in doing this programming program is to eventually go into game/graphics programming. Since it's my summer off, I thought: what better way to apply some of my skills than a personal project, so I actually have a firmer understanding of C++ beyond what my professors tell me. My questions are:

    - What would be the best way to start making a small personal game in UE4 as a project for the summer? What should I be aiming for, especially as someone still learning C++?
    - Should I focus on making a simple 2D game rather than a 3D one to get started? Seeing the Flappy Chicken showcase intrigued me, because before that I thought the UE engine was pretty much pigeonholed into FPS games.
    - What should my expectations be going into UE4 and a game engine for the first time? (UE4 will be my first foray into making a game.)
    - What can I expect to gain from making things in UE4, in terms of making games and in terms of further fleshing out my knowledge of C++?
    - Would you recommend starting off 100% using C++ for scripting, or using the visual Blueprints?
    - Since I'm not a designer, how would I be able to add objects and designs to my game?
    - For someone at my level, is retaining the UE4 subscription worth it, or is it better to cancel and resubscribe once I've learned enough about UE4 and C++?
    - Lastly, is there anything to be gained in terms of knowledge/insight from looking at the UE4 source code? I opened it in VS2013, but noticed that most of the files were C# files and not cpp's.

    Thanks in advance for taking the time to answer.

    Read the article

  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screenshot of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png

    The problem I'm running into is that my game is slowing down due to the amount of texture swapping I do during my draw call. Since walls, characters, floors, and all other objects each live on their own sprite sheet, the loaded texture swaps no less than 3 to 5+ times per tile drawn as the draw cycles through the sprites in layering order. Now, I've tried grouping all common objects together into their respective lists and drawing them sorted by layerDepth, which makes things a lot better, but the new problem has to do with the way my doors/windows are drawn on walls. Namely, I was using stencils to clear out a block on the walls in the shape of the door/window, so that when the wall draws, it has a door/window-sized hole in it. This was my draw setup for walls when I was going tile by tile rather than by grouped common objects:

    - First, check whether a door/window is on this wall. If not, skip all the steps below and just draw normally. Otherwise:
    - End the current SpriteBatch.
    - Clear the buffers with a transparent color to preserve what was already drawn.
    - Start a new SpriteBatch with stencil settings.
    - Draw the door area.
    - End the SpriteBatch.
    - Start a new SpriteBatch that takes into account the previously set stencil.
    - Draw the wall, which will now be drawn with a hole in it.
    - End that SpriteBatch.
    - Start a new SpriteBatch with the normal settings to continue drawing tiles.

    In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using any layerDepth to sort what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, the draw call is considerably faster, but I can't figure out a way to keep the stencil system that masks out the area where a door or window is drawn into a wall. The root of the problem is that when I end a SpriteBatch to change the DepthStencilState, it flattens the current RenderTarget, and there is no longer any depth sorting for anything drawn afterwards. This means walls always get drawn on top of everything, regardless of depth or positioning in the game world, and even on top of each other, since the stencil has to happen once for every wall that has a door or window. Does anyone know a way around this? To boil it down: I need a way to draw with layer-depth sorting while also being able to stencil/mask out portions of specific sprites.

    Read the article

  • Move Camera Freely Around Object While Looking at It

    - by Alex_Hyzer_Kenoyer
    I've got a 3D model loaded (a planet), and I have a camera that I want the user to be able to move freely around it. I have no problem getting the camera to orbit the planet around either the x or the y axis. My problem is that when I try to move the camera on a different axis, I have no idea how to go about it. I am using OpenGL on Android with the libGDX library. I want the camera to orbit the planet in whatever direction the user swipes their finger on the screen.
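
    A swipe-direction-agnostic way to do this is to rotate the camera's position (and up vector) about an axis built from the camera's own up and right vectors, weighted by the swipe deltas. A hypothetical sketch using rotation about an arbitrary axis (Rodrigues' formula), with the planet assumed to sit at the origin:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
        float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
        Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        }

        // Rotate p about a unit axis through the origin by angle radians.
        Vec3 rotate(Vec3 p, Vec3 axis, float angle)
        {
            float c = std::cos(angle), s = std::sin(angle);
            return add(add(scale(p, c), scale(cross(axis, p), s)),
                       scale(axis, dot(axis, p) * (1.0f - c)));
        }

        // Swipe (dx, dy) in screen units orbits the camera around the origin.
        // right = the camera's current right vector (e.g. cross(direction, up)).
        void Orbit(Vec3& camPos, Vec3& camUp, Vec3 right, float dx, float dy, float speed)
        {
            // Horizontal swipes rotate about up, vertical ones about right;
            // a diagonal swipe blends the two into a single axis.
            Vec3 axis = add(scale(camUp, dx), scale(right, dy));
            float len = std::sqrt(dot(axis, axis));
            if (len < 1e-6f) return;
            axis = scale(axis, 1.0f / len);

            float angle = len * speed;             // radians per screen unit
            camPos = rotate(camPos, axis, angle);
            camUp  = rotate(camUp,  axis, angle);  // keep the camera frame consistent
        }

    Signs may need flipping to match the expected swipe feel. In libGDX specifically, the Camera class's rotateAround method should do the equivalent rotation for you, though the exact call and its effect on the up vector are worth checking against the API.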

    Read the article
