Search Results

Search found 20376 results on 816 pages for 'online game'.

  • Text contains characters that cannot be resolved by this SpriteFont

    - by Appeltaart
    I'm trying to draw currency symbols in an Arial SpriteFont. According to this site, the font contains these symbols. I've included the Unicode character region for these symbols in the Content Processor SpriteFont file, like so:

        <CharacterRegions>
          <CharacterRegion>
            <Start>&#32;</Start>
            <End>&#126;</End>
          </CharacterRegion>
          <CharacterRegion> <!-- Currency symbols -->
            <Start>&#8352;</Start>
            <End>&#8378;</End>
          </CharacterRegion>
        </CharacterRegions>

    However, whenever I try to draw a € (euro) symbol, I get this exception: "Text contains characters that cannot be resolved by this SpriteFont". Inspecting the SpriteFont in the IDE at that breakpoint shows the € symbol is in there. I'm at a loss as to what could be wrong. Why is this exception thrown?
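
    A quick runtime check can narrow this down (a sketch, not from the post; "Arial" is a placeholder asset name). XNA's SpriteFont exposes the set of glyphs that survived the content build, and setting DefaultCharacter substitutes a fallback instead of throwing. If the check below fails, the glyph never made it through the processor, so the .spritefont regions or the source font are the place to look; another common culprit is the source string containing a lookalike codepoint rather than the real U+20AC.

        // Sketch: verify the euro glyph made it into the built SpriteFont.
        SpriteFont font = Content.Load<SpriteFont>("Arial"); // placeholder asset name
        bool hasEuro = font.Characters.Contains('\u20AC');   // € is U+20AC (decimal 8364)
        font.DefaultCharacter = '?';                         // unresolved glyphs draw '?' instead of throwing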

  • How can I find the right UV coordinates for interpolating a bezier curve?

    - by ssb
    I'll let this picture do the talking. I'm trying to create a mesh from a Bezier curve and then add a texture to it. The problem here is that the interpolation points along the curve do not increase linearly: points farther from the control point (near the endpoints) stretch, and those in the bend contract, causing the texture to be uneven across the curve. This can be problematic when using a pattern like stripes on a road. How can I determine how far along the curve the vertices actually are, so I can give them proper UV coordinates? EDIT: Allow me to clarify that I'm not talking about the trapezoidal distortion of the roads. That, I know, is normal, and I'm not concerned about it. I've updated the image to show more clearly where my concerns are. Interpolating over the curve I get 10 segments, but each of these 10 segments is not spaced at an equal point along the curve, so I have to account for this in assigning UV data to vertices, or else the road texture will stretch/shrink depending on how far apart vertices are at that particular part of the curve.
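
    A standard fix for this (a sketch, not from the post) is arc-length reparameterization: sample the curve, build a cumulative-length table, and assign each vertex a V coordinate proportional to its distance along the curve rather than its parameter t. Evaluate() below stands in for the poster's own Bezier interpolation; sampling finer than the 10 render segments improves the estimate.

        // Sketch: V coordinates by accumulated arc length instead of by t.
        int segments = 10;                        // the post's segment count
        Vector2[] points = new Vector2[segments + 1];
        float[] dist = new float[segments + 1];   // cumulative length table
        for (int i = 0; i <= segments; i++)
            points[i] = Evaluate(i / (float)segments);  // hypothetical Bezier eval
        for (int i = 1; i <= segments; i++)
            dist[i] = dist[i - 1] + Vector2.Distance(points[i - 1], points[i]);
        for (int i = 0; i <= segments; i++)
        {
            float v = dist[i] / dist[segments];   // 0..1 by arc length, not by t
            // assign v as the V texture coordinate of the vertex pair at points[i]
        }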

  • How can I make an object's hitbox rotate with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a rectangular sprite that doesn't rotate, it's easy to get its four corners to make a hitbox. However, when you do a rotation, the points get moved, and I assume there is some kind of math that I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitboxes. I have seen some per-pixel collision examples, but I can foresee they would be hard to draw a box/'convex hull' around. I have also seen physics engines like Farseer, but I'm not sure if there is a quick tutorial to do what I want.
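
    For reference, the math is just rotating each corner around the sprite's center (a sketch under assumed position/size/rotation values, using XNA types):

        // Sketch: the sprite's four corners after rotation about its center.
        Vector2[] GetCorners(Vector2 position, float width, float height, float rotation)
        {
            Vector2 center = position + new Vector2(width, height) / 2f;
            Matrix rot = Matrix.CreateRotationZ(rotation);
            Vector2[] corners =
            {
                position,
                position + new Vector2(width, 0),
                position + new Vector2(width, height),
                position + new Vector2(0, height),
            };
            for (int i = 0; i < corners.Length; i++)
                corners[i] = center + Vector2.Transform(corners[i] - center, rot);
            return corners;
        }

    Testing two such rotated rectangles against each other is then a separating axis theorem (SAT) check over their edge normals.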

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I think it shouldn't make any difference.)

    1. The base path: the Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?

    2. Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMware, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.

    3. The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance AkSoundEngine.LoadBank(…, AkSoundEngine.AK_DEFAULT_POOL_ID), where the int comes from the .h header, I get AK_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?

    4. Converting the ID header to C#: the integration comes with a C# script that seems to fork a process to call Python, in turn, to convert the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this?

    5. The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMware does not see it. This, I'm thinking, might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.

    6. Different versions of the integration plugin: it's not clear to me from the documentation whether I have to manually remove the "Profile" version and install the "Release" version when I'm going to do a release build (or write a script to do it), or whether I should install both versions in Unity and it'll select the right one.

    Thanks!

  • Python library for scripting (C++ integration)

    - by Edward83
    Please advise me on a good wrapper/library for Python. I need to implement simple scripting in a C++ app; by "good" I mean pretty understandable, well documented, without memory leaks, and fast - for creating the base interface of GameObject in Python and C++. Your own experience and useful links would be nice! I found a link about it, but I need something more specific within a gamedev context. What combinations of libraries have you used for Python integration into C++? For example, about ogre-python it is said to be built using Py++ and the Boost.Python library. And one more question: maybe someone knows how Python was integrated into the BigWorld engine (its own port or some library)? Thank you!

  • iPhone image asset recommended resolution/dpi/format

    - by Matthew
    I'm learning iPhone development, and a friend will be doing the graphics/animation. I'll be using cocos2d most likely (if that matters). My friend wants to get started on the graphics, and I don't know what image resolution, dpi, or formats are recommended. This probably depends on whether something is a background vs. a small character. Also, I know I read something about using @2x in image file names to support high-res iPhone screens. Does cocos2d prefer a different way? Or is this not something to worry about at this point? What should I know before they start working on the graphics?

  • Making an interface for input in C - How?

    - by tloszk
    I have a big question. I started to develop a simple 3D engine (or should I call it a framework?). I use OpenGL for rendering, and it is developed for Windows. It is all written in C. But I don't know how to write an "interface" for keyboard/mouse input. I would like to keep it as simple and nice as possible - which the Win32 "native" input system is not. If anyone has suggestions about the topic, please tell me. Thanks to everyone who answers my question!

  • Having a hard time getting consecutive animations for an attack

    - by Kelby Styler
    So I've been trying to figure this out for about 8 hours now... It's driving me nuts because I am pretty sure that it is something dead simple that I am just not understanding. I had everything working fine when I was just cycling through the animation: Idle - Attack - Attack 1 - Attack 2, in an infinite loop. The problem now is that I want it to go: Attack - check if x time passes - if Ctrl is pressed before x passes, move to Attack 1; if not, move back to Idle - then either Attack 1 or Idle, depending on how long has passed. I've almost gotten it a few times, but something always happens where it falls apart if I press Ctrl too fast or after multiple cycles of the animation. Any help would be appreciated; I'm at my wits' end on this one. I've been looking at this so long that I just don't know where to go anymore. Code is below; here is the controller:

        using UnityEngine;
        using System.Collections;

        public class MeleeAttack : MonoBehaviour
        {
            public int damage;
            public bool Attack;
            public bool Attack1;
            public bool Attack2;
            public bool Idle;
            private Animator animator;
            private int attnum = 0;
            private float count = 2f;
            private float timeLeft;

            // Gives value to damage output
            void MAttackDmg ()
            {
                if (Input.GetKeyDown (KeyCode.RightControl) || Input.GetKeyDown (KeyCode.LeftControl)) {
                    switch (attnum) {
                    case (0):
                        Attack = true;
                        damage = 2;
                        animator.SetBool ("Attack", Attack);
                        attnum++;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    case (1):
                        Attack1 = true;
                        damage = 2;
                        animator.SetBool ("Attack1", Attack1);
                        attnum++;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    case (2):
                        Attack2 = true;
                        damage = 2;
                        animator.SetBool ("Attack2", Attack2);
                        attnum = 0;
                        Idle = false;
                        animator.SetBool ("Idle", Idle);
                        timeLeft = count;
                        break;
                    }
                }
                if (Input.GetKeyUp (KeyCode.RightControl) || Input.GetKeyUp (KeyCode.LeftControl)) {
                    switch (attnum) {
                    case (0):
                        Debug.Log ("false");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Attack2 = false;
                            animator.SetBool ("Attack2", Attack2);
                            Debug.Log ("t1");
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    case (1):
                        Debug.Log ("false1");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Debug.Log ("t2");
                            Attack = false;
                            animator.SetBool ("Attack", Attack);
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    case (2):
                        Debug.Log ("false2");
                        damage = 0;
                        if (timeLeft <= 0f) {
                            Attack1 = false;
                            animator.SetBool ("Attack1", Attack1);
                            Debug.Log ("t3");
                            Idle = true;
                            animator.SetBool ("Idle", Idle);
                            attnum = 0;
                            timeLeft = count;
                        }
                        break;
                    }
                }
            }

            // Use this for initialization
            void Awake ()
            {
                animator = GetComponent<Animator> ();
            }

            // Update is called once per frame
            void Update ()
            {
                timeLeft -= Time.deltaTime; // (a stray double semicolon removed here)
                MAttackDmg ();
            }

            void Start ()
            {
                timeLeft = count;
            }
        }

  • Cocos2d-x in-app purchase for iOS

    - by Ahmad dar
    I am trying to integrate in-app purchases in my app, built with cocos2d-x (C++). I am using the easyNdk helper for in-app purchases. My in-app purchases work perfectly in my Objective-C apps, but with cocos2d-x an error is thrown on the following line: if ([[RageIAPHelper sharedInstance] productPurchased:productP.productIdentifier]) The value arrives from the CPP file correctly as an argument and shows up properly in NSLog, but the object is always nil, even though it prints its stored value in NSLog. The @try/@catch condition is not catching it either, and the line finally throws the error. Please help me figure out what I have to do. Thanks.

  • Vertex Normals, Loading Mesh Data

    - by Ramon Johannessen
    My test FBX mesh is a cube. From what I surmise, the cube seems to be on the extreme end of this issue, but I believe the same issue can occur in any mesh: each vertex has 3 normals, each pointing in a different direction. Of course, loading in any type of mesh, potentially ones having thousands of vertices, I need to use indices and not duplicate shared verts. Currently, I'm just writing the normals to the vertex at the index that the FBX data tells me they go to, which has the effect of overwriting any previous normal data. But for lighting calculations I need more info, something that's equivalent to a normal per face, and I have no idea how this should be done. Do I average the 3 different verts' normals together, or what?
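
    For what it's worth, the common baseline (a sketch, not from the post): accumulate each face's cross-product normal into its three indexed vertices and normalize at the end; the unnormalized cross product conveniently weights by triangle area. For a cube specifically, averaging rounds off the hard edges - the usual fix there is to duplicate vertices so each face keeps its own normal.

        // Sketch: smooth per-vertex normals from an indexed triangle list.
        // positions[] and indices[] are the mesh arrays loaded from FBX.
        Vector3[] normals = new Vector3[positions.Length];
        for (int i = 0; i < indices.Length; i += 3)
        {
            Vector3 a = positions[indices[i]];
            Vector3 b = positions[indices[i + 1]];
            Vector3 c = positions[indices[i + 2]];
            Vector3 faceNormal = Vector3.Cross(b - a, c - a); // area-weighted
            normals[indices[i]] += faceNormal;
            normals[indices[i + 1]] += faceNormal;
            normals[indices[i + 2]] += faceNormal;
        }
        for (int i = 0; i < normals.Length; i++)
            normals[i] = Vector3.Normalize(normals[i]);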

  • Download PowerCommands for VS 2008

    - by Editor
    PowerCommands for Visual Studio 2008 is now available for free download, along with source code and a readme document. PowerCommands is a set of useful extensions for Visual Studio 2008, adding additional functionality to various areas of the IDE. The source code, which requires the VS SDK for VS 2008 [...]

  • Calculating adjacent quads on a quad sphere

    - by Caius Eugene
    I've been experimenting with generating a quad sphere. This sphere subdivides into a quadtree structure. Eventually I'm going to be applying some simplex noise to the vertices of each face to create a terrain-like surface. To solve the issue of cracks, I want to be able to apply a geomipmapping technique of triangle fanning on the edges of each quad, but in order to know the subdivision level of the adjacent quads I need to identify which quads are adjacent to each other. Does anyone know any approaches to computing and storing these adjacent quads for quick lookup? Also, it's important that I know which direction they are in, so I can easily adjust the correct edge.
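
    One approach that tends to work (a sketch with made-up names, not a definitive answer): address every quad by (cube face, subdivision level, x, y) and keep them in a dictionary, so a same-level neighbor is a coordinate offset and a coarser neighbor is the parent at (x/2, y/2). Crossing a cube-face edge additionally needs a small hand-built table mapping each face's four edges to (neighbor face, edge, orientation).

        // Sketch: O(1) neighbor lookup for a quadtree-on-a-cube.
        struct QuadKey { public int Face, Level, X, Y; }   // value equality as dictionary key

        Dictionary<QuadKey, Quad> quads = new Dictionary<QuadKey, Quad>(); // Quad = your node type

        Quad GetNeighbor(QuadKey k, int dx, int dy)        // dx, dy in {-1, 0, 1}
        {
            Quad q;
            var n = new QuadKey { Face = k.Face, Level = k.Level, X = k.X + dx, Y = k.Y + dy };
            if (quads.TryGetValue(n, out q))
                return q;                                  // neighbor exists at the same level
            n.Level--; n.X /= 2; n.Y /= 2;                 // else fall back to the coarser parent
            quads.TryGetValue(n, out q);
            return q;                                      // null: subdivided finer, or crossed a cube edge
        }

    Comparing k.Level with the returned neighbor's level then tells you which edge needs the triangle-fan stitching.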

  • Per-pixel based collision detection

    - by pengume
    I was wondering if anyone had any ideas on how to get per-pixel collision detection on Android. I saw that AndEngine has great collision detection on rotation as well, but I couldn't see where the actual per-pixel detection happened. I also noticed a couple of solutions for Java - could these be replicated for use with the Android SDK? Maybe someone here has a clean piece of code I could look at to help me understand what is going on with per-pixel detection, and also why it is a different process when rotating.
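
    For reference, the usual technique (sketched here in C#/XNA style, where it is best documented; the same loop ports directly to the Android SDK with int ARGB colors): pre-extract each sprite's pixels, intersect the two bounding rectangles, and report a hit only where both overlapping pixels are non-transparent.

        // Sketch: axis-aligned per-pixel test over the rectangle overlap.
        static bool IntersectPixels(Rectangle rectA, Color[] dataA,
                                    Rectangle rectB, Color[] dataB)
        {
            int top    = Math.Max(rectA.Top, rectB.Top);
            int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
            int left   = Math.Max(rectA.Left, rectB.Left);
            int right  = Math.Min(rectA.Right, rectB.Right);

            for (int y = top; y < bottom; y++)
                for (int x = left; x < right; x++)
                {
                    Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
                    Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];
                    if (a.A != 0 && b.A != 0)   // both pixels opaque -> collision
                        return true;
                }
            return false;
        }

    Rotation is what makes it "a different process": each pixel of one sprite has to be mapped through the combined inverse world transform into the other sprite's local space before sampling, instead of using a simple rectangle overlap.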

  • Vehicle: Boat accelerating and turning in Unity

    - by Emilios S.
    I'm trying to make a player-controllable boat in Unity and I'm running into problems with my code. 1) I want to make the boat accelerate and decelerate steadily, instead of simply moving at the speed I'm telling it to right away. 2) I want to make the player unable to steer the boat unless it is moving. 3) If possible, I want to simulate the vertical floating of a boat during its movement (it going up and down). My current code (C#) is this:

        using UnityEngine;
        using System.Collections;

        public class VehicleScript : MonoBehaviour
        {
            public float speed = 10;
            public float rotationspeed = 50;

            // Use this for initialization
            // Update is called once per frame
            void Update ()
            {
                // Forward movement (a stray "speed =" assignment removed: Translate returns void)
                if (Input.GetKey(KeyCode.I))
                    transform.Translate (Vector3.left * speed * Time.deltaTime);
                // Backward movement
                if (Input.GetKey(KeyCode.K))
                    transform.Translate (Vector3.right * speed * Time.deltaTime);
                // Left movement
                if (Input.GetKey(KeyCode.J))
                    transform.Rotate (Vector3.down * rotationspeed * Time.deltaTime);
                // Right movement
                if (Input.GetKey(KeyCode.L))
                    transform.Rotate (Vector3.up * rotationspeed * Time.deltaTime);
            }
        }

    In the current state of my code, when I press the specified keys, the boat simply moves at 10 units/sec instantly, and also stops instantly. I'm not really sure how to make the things stated above, so any help would be appreciated. Just to clarify, I don't necessarily need the full code to implement those features, I just want to know what functions to use in order to achieve the desired effects. Thank you very much.
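
    Not the poster's code, but a sketch of points 1) and 2) under made-up tuning values: keep a currentSpeed that ramps toward a target with Mathf.MoveTowards, and scale steering by how fast the boat actually moves. Bobbing (point 3) is usually a small sine offset applied to a child mesh so it doesn't fight the movement code.

        public float maxSpeed = 10f;
        public float acceleration = 5f;     // units/sec^2, made-up value
        public float rotationSpeed = 50f;
        private float currentSpeed = 0f;

        void Update()
        {
            float target = 0f;
            if (Input.GetKey(KeyCode.I)) target = maxSpeed;
            if (Input.GetKey(KeyCode.K)) target = -maxSpeed;

            // 1) steady acceleration/deceleration toward the target speed
            currentSpeed = Mathf.MoveTowards(currentSpeed, target, acceleration * Time.deltaTime);
            transform.Translate(Vector3.left * currentSpeed * Time.deltaTime);

            // 2) no steering while (nearly) stationary, full steering at full speed
            float steer = Mathf.Abs(currentSpeed) / maxSpeed;
            if (Input.GetKey(KeyCode.J)) transform.Rotate(Vector3.down * rotationSpeed * steer * Time.deltaTime);
            if (Input.GetKey(KeyCode.L)) transform.Rotate(Vector3.up * rotationSpeed * steer * Time.deltaTime);
        }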

  • Does XNA 4 support 3D affine transformations for 2D images?

    - by Paul Baker Salt Shaker
    Looooong story short: I'm essentially trying to code Mode 7 in XNA. Before I continue bashing my brains out in research and various failed matrix math equations, I just want to make sure that XNA supports this out-of-the-box (so to speak). I'd prefer not to have to import other libraries, because I want to learn how it works myself - that way I understand the whole thing better. However, that's all for naught if it won't work at all. So no OpenGL, DirectX, etc. if possible (I will eventually use them to optimize everything, but not for now). tl;dr: Can I has Mode 7 in XNA?
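
    For what it's worth (not from the post): XNA 4's SpriteBatch.Begin accepts a full Matrix, so rotating/scaling/translating the whole 2D scene is out-of-the-box; the per-scanline perspective that makes Mode 7 look 3D is the part that typically needs a custom Effect. A minimal affine sketch, with camera, angle, and zoom as assumed variables:

        Matrix transform =
            Matrix.CreateTranslation(-camera.X, -camera.Y, 0f) *   // move the world under the camera
            Matrix.CreateRotationZ(angle) *
            Matrix.CreateScale(zoom) *
            Matrix.CreateTranslation(screenWidth / 2f, screenHeight / 2f, 0f);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, transform);
        spriteBatch.Draw(mapTexture, Vector2.Zero, Color.White);
        spriteBatch.End();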

  • Texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code would be fantastic. Right now the code only renders a teapot, which I have assigned after creating it in 3DS Max.

    3dsloader.cpp

        #include "3dsloader.h"

        Object::Object(std::string filename)
        {
            m_TotalFaces = 0;
            m_model = lib3ds_file_load(filename.c_str());
            // If loading the model failed, we throw an exception
            // (note: strcat into a string literal is undefined behaviour)
            if (!m_model) {
                throw strcat("Unable to load ", filename.c_str());
            }

            // set properties of texture coordinate generation for both x and y coordinates
            glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
            glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);

            // if not already enabled, enable texture generation
            if (!glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S);
            if (!glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T);
        }

        Object::~Object()
        {
            if (m_model)                   // if the file isn't freed yet
                lib3ds_file_free(m_model); // free up memory
            glDisable(GL_TEXTURE_GEN_S);
            glDisable(GL_TEXTURE_GEN_T);
        }

        void Object::GetFaces()
        {
            m_TotalFaces = 0;
            Lib3dsMesh* mesh;
            // Loop through every mesh and add its face count to the total.
            for (mesh = m_model->meshes; mesh != NULL; mesh = mesh->next) {
                m_TotalFaces += mesh->faces;
            }
        }

        void Object::CreateVBO()
        {
            assert(m_model != NULL);

            // Calculate the number of faces we have in total
            GetFaces();

            // Allocate memory for our vertices, normals and texture coordinates
            Lib3dsVector* vertices = new Lib3dsVector[m_TotalFaces * 3];
            Lib3dsVector* normals = new Lib3dsVector[m_TotalFaces * 3];
            Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3];

            Lib3dsMesh* mesh;
            unsigned int FinishedFaces = 0;

            // Loop through all the meshes
            for (mesh = m_model->meshes; mesh != NULL; mesh = mesh->next) {
                lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces * 3]);
                // Loop through every face
                for (unsigned int cur_face = 0; cur_face < mesh->faces; cur_face++) {
                    Lib3dsFace* face = &mesh->faceL[cur_face];
                    for (unsigned int i = 0; i < 3; i++) {
                        memcpy(&texCoords[FinishedFaces * 3 + i], mesh->texelL[face->points[i]], sizeof(Lib3dsTexel));
                        memcpy(&vertices[FinishedFaces * 3 + i], mesh->pointL[face->points[i]].pos, sizeof(Lib3dsVector));
                    }
                    FinishedFaces++;
                }
            }

            // Generate a Vertex Buffer Object and store it with our vertices
            glGenBuffers(1, &m_VertexVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW);

            // Generate another Vertex Buffer Object and store the normals in it
            glGenBuffers(1, &m_NormalVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW);

            // Generate a third VBO and store the texture coordinates in it.
            glGenBuffers(1, &m_TexCoordVBO);
            glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW);

            // Clean up our allocated memory (arrays need delete[], not delete)
            delete[] vertices;
            delete[] normals;
            delete[] texCoords;

            // We no longer need lib3ds
            lib3ds_file_free(m_model);
            m_model = NULL;
        }

        void Object::applyTexture(const char* texfilename)
        {
            float imageWidth;
            float imageHeight;

            glGenTextures(1, &textureObject); // note: SOIL creates its own id below, so this generated name is leaked
            textureObject = SOIL_load_OGL_texture(texfilename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture
            glGetTexLevelParameterfv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &imageWidth);
            glGetTexLevelParameterfv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &imageHeight);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // best result for texture magnification
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // best result for texture minification
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            // note: this passes the texture id's address as pixel data -- SOIL has already
            // uploaded the image above, so this call is almost certainly the bug
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, &textureObject);
        }

        void Object::Draw() const
        {
            // Enable vertex, normal and texture-coordinate arrays.
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            // Bind the VBO with the normals.
            glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO);
            // The pointer for the normals is NULL, which means OpenGL will use the currently bound VBO.
            glNormalPointer(GL_FLOAT, 0, NULL);

            glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO);
            glTexCoordPointer(2, GL_FLOAT, 0, NULL);

            glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO);
            glVertexPointer(3, GL_FLOAT, 0, NULL);

            // Render the triangles.
            glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3);

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    3dsloader.h

        #include "main.h"
        #include "lib3ds/file.h"
        #include "lib3ds/mesh.h"
        #include "lib3ds/material.h"

        class Object
        {
        public:
            Object(std::string filename);
            virtual ~Object();
            virtual void Draw() const;
            virtual void CreateVBO();
            void applyTexture(const char* texfilename);

        protected:
            void GetFaces();
            unsigned int m_TotalFaces;
            Lib3dsFile* m_model;
            Lib3dsMesh* Mesh;
            GLuint textureObject;
            GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO;
        };

    Called in the main cpp file with: VBO, apply texture and draw (pretty simple, how ironic), and that's it. Please help me, forum :)

  • Parse/Write JSON with Unity iOS

    - by DannoEterno
    Does anybody know a tutorial, or can maybe help me develop a JSON parser/reader compatible with Unity iOS Pro? I've already tried different third-party libraries without luck (JSON.NET, JsonFx, LitJSON). I'm in a hurry to get a simple parser/writer that I can also use under iOS, not only on desktop. P.S. I can also use a third-party library, but please, before suggesting one, be sure that it works under iOS! Thank you all.
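
    One pattern that tends to survive iOS AOT compilation (an assumption worth verifying, not a guarantee): avoid reflection-heavy serializers and use a dictionary-based single-file parser in the MiniJSON style, whose whole surface is a Deserialize/Serialize pair. A usage sketch, assuming the common MiniJSON gist is dropped into the project:

        // Sketch: dictionary-based JSON handling (MiniJSON-style API assumed).
        string json = "{\"name\":\"player\",\"score\":42}";
        var data = MiniJSON.Json.Deserialize(json) as Dictionary<string, object>;
        string name = (string)data["name"];
        long score = (long)data["score"];                 // MiniJSON yields long/double for numbers
        string roundTrip = MiniJSON.Json.Serialize(data);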

  • Generating a normal map from an image with a given albedo map

    - by snape
    I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve the above, and apparently CrazyBump is mentioned a lot. I tried it, but it didn't produce the desirable effects. Also, I am looking for a method which draws inspiration from an existing research paper, not some closed-source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects, because of bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focused my attention on generating an albedo map (the above problem would become more solvable if an albedo map were obtained). Fortunately, I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks!
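
    Not from the post, but a common baseline once the albedo is divided out: treat the remaining shading intensity as a height field and take finite differences. It sidesteps the true shape-from-shading problem, but often gives a usable starting normal map for rough surfaces like rust. A sketch (strength is a made-up tuning knob):

        // Sketch: normal map from a height field via central differences.
        Vector3[,] HeightToNormals(float[,] height, float strength)
        {
            int w = height.GetLength(0), h = height.GetLength(1);
            var normals = new Vector3[w, h];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    float dx = height[Math.Min(x + 1, w - 1), y] - height[Math.Max(x - 1, 0), y];
                    float dy = height[x, Math.Min(y + 1, h - 1)] - height[x, Math.Max(y - 1, 0)];
                    normals[x, y] = Vector3.Normalize(new Vector3(-dx * strength, -dy * strength, 1f));
                }
            return normals;
        }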

  • Using textureGrad for anisotropic integration approximation

    - by Amxx
    I'm trying to develop a real-time rendering method using a real-time acquired envmap (cubemap) for lighting. This implies that my envmap can change as often as every frame, and I therefore cannot use any method based on precomputation over the envmap (such as convolution with the BRDF). So far my method has worked well with a Phong BRDF. For the specular contribution I directly read the value in my samplerCube, and I use mipmap levels + linear filtering to simulate the roughness of the material considered:

        int size = textureSize(envmap, 0).x;
        float specular_level = log2(size * sqrt(3.0)) - 0.5 * log2(ns + 1);
        vec3 env_specular = ks * specular_color * textureLod(envmap, l_g, specular_level).rgb;

    From this method I would like to upgrade to a microfacet-based BRDF. I already have an algorithm for evaluating the shape (including the anisotropic direction) of the reflection, but I cannot manage to read the values I want from my samplerCube. I believe I have to use textureGrad(envmap, l_g, X, Y), with l_g being the reflection direction in global space, but I cannot manage to find which values to give to X and Y in order to correctly specify the area I want to consider. What values should I give to X and Y in order for textureGrad(envmap, l_g, X, Y) to give the same result as textureLod(envmap, l_g, specular_level)?

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now, until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.

    Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right, and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I were first creating a rendering from the "front view", then rotating that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.

    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos:  vec3 for the ray starting position in world-space (calculated as below)
        rayDir:  vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos:  vec3 for the camera position in world space
        camRad:  vec3 for camera rotation in radians
        pmat:    typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye
        // and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane
        // plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed
        // from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.

  • Cocos2d: Tongue effect like in Munch Time

    - by Joey Green
    I'm wanting to do a tongue effect for my character like the one in Munch Time (shown in the pic). The player does some action and his tongue attaches to the nearest platform. I'm thinking this is simply "get the distance to the platform and keep the player at that distance" as he moves back and forth, giving him the swinging effect. For the drawing, I want the same effect, where the tongue sprite is skinniest in the middle of the distance between the character and the platform. I know how to do this in a shader (I'm using cocos2d v2, btw), but I'm wondering if there is some built-in functionality that would allow me to do it. First, is the distance approach the right one? Second, is there an easy way to do the tongue sprite effect without a shader? Third, I want the player to spring up at will in the direction of the platform. I'm using Box2D. Would there be a way to do this using forces, or would it be easier to write my own code?

  • Entity-type-specific updates in an entity component system

    - by Nathan
    I am currently familiarizing myself with the entity-component paradigm. As an example, take a collision system that detects when entities collide and, if they do, lets them explode. So the collision system has to test for collision based on the position component and then set the state of those entities to exploding. But what if the "effect" (setting the state to exploding) is different for different entities? For example, a ship fades out, while for an asteroid a particle system must be created. Since entities and components are only data, this must happen in some system. The collision system could do it, but then it must switch over the entity type, which in my opinion is a cumbersome and hard-to-extend solution. So how do I trigger "entity-type-dependent" updates on an entity?
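
    One common answer (a sketch of one option, with made-up names): move the type-specific reaction into data. The collision system only marks entities as exploding; a separate system reads a per-entity DeathEffect component and performs the matching effect, so nothing switches on the entity's type - only on authored data. An event/message queue between the two systems is the other frequent variant.

        // Sketch: effect chosen by component data, not by entity type.
        enum DeathEffect { FadeOut, ParticleExplosion }

        class DeathEffectComponent { public DeathEffect Effect; }   // authored per entity type

        class DeathSystem
        {
            // runs each frame over entities the collision system flagged
            public void Update(IEnumerable<Entity> exploding)       // Entity/Get<T> are hypothetical
            {
                foreach (var e in exploding)
                {
                    switch (e.Get<DeathEffectComponent>().Effect)
                    {
                        case DeathEffect.FadeOut:           /* start fade tween      */ break;
                        case DeathEffect.ParticleExplosion: /* spawn particle system */ break;
                    }
                }
            }
        }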

  • Drawing a beam effect in UDK?

    - by sgrif
    I'm having trouble drawing a particle effect between two actors in UDK - both the source and the target are not static objects, so as far as I can tell I need to do it in code, not in Kismet. Here's what I've got at the moment, and it seems to not be doing anything at all. Ideas?

        BeamEmitter[0] = new(self) class'UTParticleSystemComponent';
        BeamEmitter[0].SetAbsolute(false, false, false);
        BeamEmitter[0].SetTemplate(BeamTemplate[0]);
        BeamEmitter[0].SetTickGroup(TG_PostUpdateWork);
        BeamEmitter[0].bUpdateComponentInTick = true;
        self.AttachComponent(BeamEmitter[0]);
        BeamEmitter[0].SetBeamEndPoint(2, tarPos);
        BeamEmitter[0].ActivateSystem();

  • Sprites are sometimes blurry in Flash

    - by Tim Cooper
    I am playing around with drawing an SVG sprite (imported through [Embed]). Depending on the coordinates of the image, it sometimes appears crisper than at others. The following image shows how it is rendered differently at different locations: (Image link - you may have to download it and zoom in with an image editor to see it.) You'll notice that the middle sprite is blurrier than the ones on the sides. Does anyone know why this is? Any help would be appreciated.
