Search Results

Search found 27 results on 2 pages for 'raycasting'.

Page 1/2 | 1 2  | Next Page >

  • Raycasting tutorial / vector math question

    - by mattboy
    I'm checking out this nice raycasting tutorial at http://lodev.org/cgtutor/raycasting.html and have a probably very simple math question. In the DDA algorithm I'm having trouble understanding the calculation of the deltaDistX and deltaDistY variables, which are the distances that the ray has to travel from one x-side to the next x-side, or from one y-side to the next y-side, in the square grid that makes up the world map (see below screenshot). In the tutorial they are calculated as follows, but without much explanation:

        //length of ray from one x or y-side to next x or y-side
        double deltaDistX = sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX));
        double deltaDistY = sqrt(1 + (rayDirX * rayDirX) / (rayDirY * rayDirY));

    rayDirY and rayDirX are the direction of a ray that has been cast. How do you get these formulas? It looks like the Pythagorean theorem is part of it, but somehow there's division involved here. Can anyone clue me in as to what mathematical knowledge I'm missing here, or "prove" the formula by showing how it's derived?
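
    A short derivation, with a minimal sketch using arbitrary values: moving the ray far enough for x to change by one whole grid cell changes y by rayDirY / rayDirX, and the Pythagorean theorem on that step gives sqrt(1 + (rayDirY / rayDirX)^2), which is exactly the tutorial's expression (equivalently |rayDir| / |rayDirX|).

        #include <cmath>
        #include <cstdio>

        int main() {
            // Arbitrary example ray direction (any non-axis-aligned values behave the same).
            double rayDirX = 0.6, rayDirY = 0.8;

            // The tutorial's expression for the segment length between two vertical grid lines.
            double deltaDistX = std::sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX));

            // Pythagorean derivation: while x changes by exactly 1, y changes by
            // rayDirY / rayDirX, so the hypotenuse is sqrt(1^2 + (dy/dx)^2).
            double dyPerUnitX = rayDirY / rayDirX;
            double byPythagoras = std::sqrt(1.0 + dyPerUnitX * dyPerUnitX);

            // Equivalent closed form: |rayDir| / |rayDirX|.
            double byRatio = std::sqrt(rayDirX * rayDirX + rayDirY * rayDirY) / std::fabs(rayDirX);

            std::printf("%f %f %f\n", deltaDistX, byPythagoras, byRatio); // all three agree
        }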

    Read the article

  • Rectangular Raycasting?

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
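
    For the rectangle-rectangle case, the overlapping region is itself an axis-aligned rectangle and can be computed directly, with no per-pixel work; SDL2's SDL_IntersectRect does the same job for integer SDL_Rects. A minimal sketch (the Rect type here is a stand-in, not an SDL type):

        #include <algorithm>

        struct Rect { float x, y, w, h; };

        // Overlap of two axis-aligned rectangles; returns false if they don't intersect.
        bool intersect(const Rect& a, const Rect& b, Rect& out) {
            float left   = std::max(a.x, b.x);
            float top    = std::max(a.y, b.y);
            float right  = std::min(a.x + a.w, b.x + b.w);
            float bottom = std::min(a.y + a.h, b.y + b.h);
            if (right <= left || bottom <= top)
                return false;
            out = { left, top, right - left, bottom - top };
            return true;
        }

    The circle case can be handled along the same lines by first intersecting with the circle's bounding box and then trimming the result against the radius, rather than testing pixels.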

    Read the article

  • Raycasting "fisheye effect" question

    - by mattboy
    Continuing my exploration of raycasting, I am very confused about how the correction of the fisheye effect works. Looking at the screenshot below from the tutorial at permadi.com, the way I understand the cause of the fisheye effect is that the cast rays measure distances from the player, rather than the distances perpendicular to the screen (or camera plane), which is what really needs to be displayed. The distance perpendicular to the screen then, in my world, should simply be the difference of Y coordinates (Py - Dy), assuming that the player is facing straight upwards. Continuing the tutorial, this is exactly how it seems to be according to the below screenshot. From my point of view, the "distorted distance" below is the same as the distance PD calculated above, and what's labelled the "correct distance" below should be the same as Py - Dy. Yet, this clearly isn't the case according to the tutorial. My question is, WHY is this not the same? How could it not be? What am I understanding and visualizing wrong here?
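
    For reference, the correction step the tutorial is describing usually amounts to one multiplication per ray: projecting the Euclidean ray length onto the viewing direction. A minimal sketch, assuming angles in radians:

        #include <cmath>

        // Standard fisheye correction from ray-casters like the permadi tutorial:
        // project the Euclidean length of the ray onto the viewing direction.
        // 'euclideanDist' is the distance from the player to the hit point along the ray.
        double correctedDistance(double euclideanDist, double rayAngle, double playerAngle) {
            // cos(beta), where beta is the angle between this ray and the centre of the view.
            return euclideanDist * std::cos(rayAngle - playerAngle);
        }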

    Read the article

  • Raycasting mouse coordinates to rotated object?

    - by SPL
    I am trying to cast a ray from my mouse to a plane at a specified position with a known width, length, and height. I know that you can use NDC (Normalized Device Coordinates) to cast a ray, but I don't know how I can detect whether the ray actually hits the plane and where it does. The plane is translated -100 on the Y and rotated 60 on the X, then translated again -100. Can anyone please give me a good tutorial on this? For a complete noob! I am almost new to matrix and vector transformations.
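
    A minimal sketch of the plane test itself, using glm (the plane's point and normal would come from its model matrix: the translated position and the rotated local normal). Whether the hit lies within the plane's width and length can then be checked by transforming the hit point into the plane's local space:

        #include <glm/glm.hpp>
        #include <cmath>

        // Ray/plane intersection: the plane is given by a point on it and its normal.
        // Returns false if the ray is parallel to the plane or the hit is behind the origin.
        bool rayPlane(const glm::vec3& rayOrigin, const glm::vec3& rayDir,
                      const glm::vec3& planePoint, const glm::vec3& planeNormal,
                      glm::vec3& hit)
        {
            float denom = glm::dot(rayDir, planeNormal);
            if (std::fabs(denom) < 1e-6f)
                return false;                       // parallel, no single hit point
            float t = glm::dot(planePoint - rayOrigin, planeNormal) / denom;
            if (t < 0.0f)
                return false;                       // plane is behind the ray origin
            hit = rayOrigin + t * rayDir;
            return true;                            // then test hit against width/length
        }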

    Read the article

  • Raycasting with tags problems in unity3d

    - by user1855858
    I need some help here. I have a piece of code that searches for unblocked neighbours with a raycast, and I need the raycast to register only colliders with the "WP" tag. Both iterations show the right results, and so does the dump; the raycast does successfully collide with something, but when I check what the raycast collided with, no result is shown. Does anyone know what's wrong with this code?

        int flag = 0, flagNeigh = 0;
        for (flag = 0; flag < wayPoints.WPList.Length; flag++) // iteration to seek neighbour nodes
        {
            for (flagNeigh = 0; flagNeigh < wayPoints.WPList.Length; flagNeigh++)
            {
                if (wayPoints.WPList[flag].loc != wayPoints.WPList[flagNeigh].loc) // dump its own node location
                {
                    if (Physics.Raycast(wayPoints.WPList[flag].loc.position, wayPoints.WPList[flagNeigh].loc.position, out hitted)) // raycasting to each node else its self
                    {
                        if (hitted.collider.gameObject.CompareTag("WP")) // check if the ray only collide the node
                        {
                            print(flag + " : " + flagNeigh + " : " + wayPoints.WPList[flagNeigh].loc.position); // debugging to see whether the code works or not (the error comes)
                        }
                    }
                }
            }
        }

    Thanks for the appreciation and answers. Sorry if my English is bad.
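
    One thing worth double-checking, stated here as a possibility rather than a confirmed diagnosis: Physics.Raycast expects an origin and a direction, while the call above passes the neighbour's position as the second argument. The general recipe for a ray between two points, sketched in C++ for illustration (rayBetween is a made-up helper, not a Unity API):

        #include <glm/glm.hpp>

        // A ray between two waypoints is an origin, a *direction*, and a maximum
        // distance, not a pair of positions.
        struct Ray { glm::vec3 origin, dir; float maxDist; };

        Ray rayBetween(const glm::vec3& from, const glm::vec3& to) {
            glm::vec3 delta = to - from;          // caller guarantees from != to
            float dist = glm::length(delta);
            return { from, delta / dist, dist };  // normalized direction, length kept separately
        }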

    Read the article

  • 3D collision detection with meshes using only raycasting?

    - by Nick
    I'm building a game using WebGL and Three.js, and so far I have a terrain with a guy walking on it. I simply cast a ray downwards to know the terrain height. How can I do this for other 3D objects, like the inside of a house? Is this possible by casting many rays in every direction of the player? If not, I would like to know how I can achieve the simplest collision detection possible for other meshes. Do you have to cast a ray to every triangle in every mesh nearby?
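
    A common low-tech compromise between the single downward ray and full mesh-versus-mesh testing is to cast one short ray along the intended movement and clamp the step, rather than casting rays "in every direction". A minimal sketch of that idea (castRay is a placeholder for whatever scene query the engine provides; Three.js exposes a Raycaster that plays the same role):

        #include <glm/glm.hpp>
        #include <algorithm>
        #include <optional>

        // Hypothetical scene query: distance to the first hit, if any, within maxDist.
        std::optional<float> castRay(const glm::vec3& origin, const glm::vec3& dir, float maxDist);

        // "Walk and block": shorten the step if something is closer than the player's radius.
        glm::vec3 tryMove(const glm::vec3& pos, const glm::vec3& step, float playerRadius) {
            float stepLen = glm::length(step);
            if (stepLen <= 0.0f)
                return pos;
            glm::vec3 dir = step / stepLen;
            if (auto hit = castRay(pos, dir, stepLen + playerRadius)) {
                float allowed = std::max(0.0f, *hit - playerRadius);
                return pos + dir * std::min(stepLen, allowed);
            }
            return pos + step;   // nothing in the way, take the full step
        }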

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.

    Read the article

  • Diagonal line of sight with two corners

    - by Ash Blue
    Right now I'm using Bresenham's line algorithm for line of sight. The problem is I've found an edge case where players can look through walls. It occurs when the player looks between two corners of a wall with a gap on the other side, at specific angles. The result I want is for the tile between the two walls to be marked invalid, as shown. What is the fastest way to modify Bresenham's line algorithm to solve this? If there isn't a good solution, is there a better-suited algorithm? Any ideas are welcome. Please note the solution should also be capable of supporting 3D. Edit: For the working source code and an interactive demo of the completed product please see http://ashblue.github.io/javascript-pathfinding/
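
    One commonly used tweak, sketched below under the assumption of a boolean wall grid (blocked is a placeholder the game would supply): whenever Bresenham takes a diagonal step, the line is squeezing between two orthogonally adjacent cells, so treat the sight line as blocked when both of those cells are walls. The same corner rule carries over to a 3D grid walk.

        #include <cstdlib>

        // Placeholder map query, supplied by the game: true if tile (x, y) is a wall.
        bool blocked(int x, int y);

        // Bresenham line-of-sight with a corner rule on diagonal steps.
        bool lineOfSight(int x0, int y0, int x1, int y1) {
            int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
            int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
            int err = dx + dy;
            while (x0 != x1 || y0 != y1) {
                int e2 = 2 * err;
                bool stepX = e2 >= dy;
                bool stepY = e2 <= dx;
                if (stepX && stepY) {
                    // Diagonal step: the ray passes between (x0+sx, y0) and (x0, y0+sy).
                    // Block the sight line if both of those corner neighbours are walls.
                    if (blocked(x0 + sx, y0) && blocked(x0, y0 + sy))
                        return false;
                }
                if (stepX) { err += dy; x0 += sx; }
                if (stepY) { err += dx; y0 += sy; }
                if (blocked(x0, y0))
                    return false;
            }
            return true;
        }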

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Streetview: even though I have a volumetric world representation, it seems -- no matter what I try -- like I would first create a rendering from the "front view", then rotate that flat rendering according to camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos: vec3 for the ray starting position in world-space (calculated as below)
        rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos: vec3 for the camera position in world space
        camRad: vec3 for camera rotation in radians
        pmat: typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also
        // the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
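
    For comparison, a minimal sketch of a conventional primary-ray setup (not the poster's engine): the direction is built in camera space from the pixel, rotated once by the camera's orientation, and the origin is simply the camera position; no projection matrix is applied to the direction, since the perspective comes from the fan of directions itself.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Per-pixel primary ray for a CPU raycaster/raytracer, camYaw in radians.
        void primaryRay(float fx, float fy, float width, float height,
                        float planeDist, float camYaw, const glm::vec3& camPos,
                        glm::vec3& rayPos, glm::vec3& rayDir)
        {
            // Pixel on the view plane in camera space; planeDist controls the FOV.
            glm::vec3 camSpaceDir(fx / width - 0.5f, 0.5f - fy / height, -planeDist);
            // Camera orientation applied once, to the direction only.
            glm::mat3 rot = glm::mat3(glm::rotate(glm::mat4(1.0f), camYaw,
                                                  glm::vec3(0.0f, 1.0f, 0.0f)));
            rayDir = glm::normalize(rot * camSpaceDir);
            rayPos = camPos;   // the origin never leaves the camera
        }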

    Read the article

  • How to raycast select a scaled OBB?

    - by user3254944
    I have OBB picking code to select an OBB, inspired by Real-Time Rendering (3rd edition) and opengl-tutorial.org. I can successfully select objects that have been moved or rotated. However, I can't correctly select an object that has been scaled. The bounding box scales right, but I can only select the object in a thin strip at its center. How do I fix the checkForHits() function to allow it to account for the scaling that I passed to it in the raycast matrix?

        void GLWidget::selectObjRaycast()
        {
            glm::vec2 mouse = (glm::vec2(mousePos.x(), mousePos.y()) / glm::vec2(this->width(), this->height())) * 2.0f - 1.0f;
            mouse.y *= -1;
            glm::mat4 toWorld = glm::inverse(ProjectionM * ViewM);
            glm::vec4 from = toWorld * glm::vec4(mouse, -1.0f, 1.0f);
            glm::vec4 to = toWorld * glm::vec4(mouse, 1.0f, 1.0f);
            from /= from.w;
            to /= to.w;
            fromAABB = glm::vec3(from);
            toAABB = glm::normalize(glm::vec3(to - from));
            checkForHits();
        }

        void GLWidget::checkForHits()
        {
            for (int i = 0; i < myWin.myEtc->allObj.size(); ++i) //check for hits on each obj's bb
            {
                bool miss = 0;
                float tMin = 0.0f;
                float tMax = 100000.0f;
                glm::vec3 bbPos(myWin.myEtc->allObj[i]->raycastM[3].x, myWin.myEtc->allObj[i]->raycastM[3].y, myWin.myEtc->allObj[i]->raycastM[3].z);
                glm::vec3 delta = bbPos - fromAABB;
                for (int j = 0; j < 3; ++j)
                {
                    glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j].x, myWin.myEtc->allObj[i]->raycastM[j].y, myWin.myEtc->allObj[i]->raycastM[j].z);
                    float e = glm::dot(axis, delta);
                    float f = glm::dot(toAABB, axis);
                    if (fabs(f) > 0.001f)
                    {
                        float t1 = (e + myWin.myEtc->allObj[i]->bbMin[j]) / f;
                        float t2 = (e + myWin.myEtc->allObj[i]->bbMax[j]) / f;
                        if (t1 > t2) { float w = t1; t1 = t2; t2 = w; }
                        if (t2 < tMax) tMax = t2;
                        if (t1 > tMin) tMin = t1;
                        if (tMax < tMin) miss = 1;
                    }
                    else
                    {
                        if (-e + myWin.myEtc->allObj[i]->bbMin[j] > 0.0f || -e + myWin.myEtc->allObj[i]->bbMax[j] < 0.0f) miss = 1;
                    }
                }
                if (miss == 0)
                {
                    intersection_distance = tMin;
                    myWin.myEtc->sel.push_back(myWin.myEtc->allObj[i]);
                    myWin.myEtc->allObj[i]->highlight = myWin.myGLHelp->highlight;
                    break;
                }
            }
        }

        void Object::render(glm::mat4 PV)
        {
            scaleM = glm::scale(glm::mat4(), s->val_3);
            r_quat = glm::quat(glm::radians(r->val_3));
            rotationM = glm::toMat4(r_quat);
            translationM = glm::translate(glm::mat4(), t->val_3);
            transLocal1M = glm::translate(glm::mat4(), -rsPivot->val_3);
            transLocal2M = glm::translate(glm::mat4(), rsPivot->val_3);
            raycastM = translationM * transLocal2M * rotationM * scaleM * transLocal1M;
            // MVP = PV * translationM * transLocal2M * rotationM * scaleM * transLocal1M;
        }
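
    One thing to look at, offered as a likely suspect rather than a confirmed fix: with the scale baked into raycastM, each basis column has length equal to its scale factor, so e and f are computed against a scaled axis while bbMin/bbMax stay in unscaled local units. The usual adjustment is to split the column into a unit axis and a scale, and grow the extents by that scale, roughly like this:

        #include <glm/glm.hpp>

        // Per-axis slab test with a scaled model matrix: separate the column into a
        // unit direction and a scale factor, and scale the local extents to match.
        // The caller is expected to handle the near-parallel case (small |f|) as in
        // the original code.
        void slabForAxis(const glm::mat4& model, int j, const glm::vec3& delta,
                         const glm::vec3& rayDir, float bbMinJ, float bbMaxJ,
                         float& t1, float& t2)
        {
            glm::vec3 column(model[j]);            // scaled axis straight from the matrix
            float scale = glm::length(column);     // how much this axis was stretched
            glm::vec3 axis = column / scale;       // unit axis direction

            float e = glm::dot(axis, delta);
            float f = glm::dot(rayDir, axis);
            t1 = (e + bbMinJ * scale) / f;          // extents grow with the scale
            t2 = (e + bbMaxJ * scale) / f;
        }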

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3d game that has a view similar to classic isometric games (diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining which walls should be transparent exactly. I can't use a circle or square mask. There are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works - not perfectly, but only requires some tuning - however this is very slow. As you can see, it requries 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread, "most efficient AABB vs Ray collision algorithms", I have seen very fast algorithms for ray/AABB intersection point computation. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray source to the input of the bounding box, t0, and from the ray source to the output of the bounding box, t1. I have seen, for instance, that Eisemann designed a very fast version compared with the Plücker, Smits, ... approaches, but it does not cover the case where both input/output distances should be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/ Does someone know where I can find more information on algorithm performance for this specific input/output problem? Thank you in advance
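
    For what it's worth, the plain slab test already yields both distances; the fast published variants mostly just stop early once a miss is certain. A minimal sketch that keeps both the entry distance t0 and the exit distance t1:

        #include <glm/glm.hpp>
        #include <algorithm>
        #include <limits>

        // Slab test reporting both entry (t0) and exit (t1) distances along the ray.
        // invDir is 1/rayDir per component, precomputed; IEEE infinities handle
        // axis-parallel rays as usual for this formulation.
        bool rayAABB(const glm::vec3& origin, const glm::vec3& invDir,
                     const glm::vec3& boxMin, const glm::vec3& boxMax,
                     float& t0, float& t1)
        {
            t0 = 0.0f;   // use -infinity instead if hits behind the origin matter
            t1 = std::numeric_limits<float>::max();
            for (int i = 0; i < 3; ++i) {
                float tNear = (boxMin[i] - origin[i]) * invDir[i];
                float tFar  = (boxMax[i] - origin[i]) * invDir[i];
                if (tNear > tFar) std::swap(tNear, tFar);
                t0 = std::max(t0, tNear);
                t1 = std::min(t1, tFar);
                if (t0 > t1) return false;   // slabs no longer overlap: miss
            }
            return true;
        }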

    Read the article

  • PhysX Question about Raycasting - Setting the shape masks

    - by sweet tv
    I am trying to make the raycast give me the distance to the terrain and nothing else, but I'm not sure how to use the groups mask filter.

        virtual NxShape* raycastClosestShape(const NxRay& worldRay, NxShapesType shapeType, NxRaycastHit& hit,
                                             NxU32 groups=0xffffffff, NxReal maxDist=NX_MAX_F32, NxU32 hintFlags=0xffffffff,
                                             const NxGroupsMask* groupsMask=NULL, NxShape** cache=NULL) const = 0;

        \param[in] groups Mask used to filter shape objects.

    Let's say my terrain is set in group 1. How would I use the function above?
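
    If memory serves for PhysX 2.x, the groups parameter is a plain bit mask over the 32 collision groups assigned with NxShape::setGroup, so a terrain placed in group 1 is selected by bit 1. A rough sketch, with gScene and playerPos as assumed variables and the exact SDK details worth verifying against the documentation:

        // Assumption: the terrain's shapes were put in collision group 1 via
        // NxShape::setGroup(1). The 'groups' argument is a bit mask, so pass only
        // that group's bit to ignore every other shape.
        NxRay ray;
        ray.orig = playerPos;                     // NxVec3 start of the ray
        ray.dir  = NxVec3(0.0f, -1.0f, 0.0f);     // straight down, for example
        NxRaycastHit hit;
        NxU32 terrainOnly = 1 << 1;               // bit for group 1
        NxShape* shape = gScene->raycastClosestShape(ray, NX_ALL_SHAPES, hit, terrainOnly);
        if (shape)
        {
            NxReal distanceToTerrain = hit.distance;   // only terrain can be hit now
        }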

    Read the article

  • Extend MySQL implementation of PiP Algorithm?

    - by Mike
    I need to make a point-in-polygon MySQL query. I already found these two great solutions: http://forums.mysql.com/read.php?23,286574,286574 and "MySQL implementation of ray-casting Algorithm?". But these functions can only check whether one point is inside a polygon. I have a query where the PiP part should be only one part of the query, and it should check for x points inside a polygon. Something like this:

        $points = list/array/whatever of points in language of favour
        SELECT d.name
        FROM data AS d
        WHERE d.name LIKE %red%  // just bla bla
        // how to do this ?
        AND $points INSIDE d.polygon
        AND myWithin( $points, d.polygo ) // or

    UPDATE: I tried it with MBR functions like this:

        SET @g1 = GeomFromText('Polygon((13.43971 52.55757,13.41293 52.49825,13.53378 52.49574, 13.43971 52.55757))');
        SET @g2 = GeomFromText('Point(13.497834 52.540489)');
        SELECT MBRContains(@g1,@g2);

    G2 should NOT be inside G1, but MBR says it is.

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have a software that performs volume rendering through a raycasting approach. The actual raycasting shader writes the raycasted volume depth into a framebuffer object, through gl_FragDepth, that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depthmap through something like glReadPixels(0, 0, m_winSize.x , m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and write it to a 2D texture as usual glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels) and then pass this 2D texture that contains a simple depth map to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there anyway to pass the framebuffer that I fill up in my raycasting shader to the other shader?
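
    The usual alternative to the glReadPixels round trip is to make the FBO's depth attachment a texture in the first place, so the raycasting pass writes straight into something the later shader can sample. A minimal sketch, assuming a GL 3.x-style setup (the GLEW loader header is an assumption):

        #include <GL/glew.h>

        // Create a depth texture and attach it to the FBO used by the raycasting
        // pass; the same texture can then be bound to a sampler2D in the second
        // shader, with no CPU readback in between.
        GLuint makeDepthAttachment(GLuint fbo, int width, int height) {
            GLuint depthTex = 0;
            glGenTextures(1, &depthTex);
            glBindTexture(GL_TEXTURE_2D, depthTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                         GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                   GL_TEXTURE_2D, depthTex, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return depthTex;
        }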

    Read the article

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) a 3D position to 2D has to produce completely different numbers for small changes (especially because I am sampling in texture space, so sample values differ very, very little) and has to be "shader-wise" -- so forget about branches, cycles, etc. I'm looking forward to your nice and fast solutions.
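
    For reference, one widely circulated branch-free shader hash fits this description; it is transliterated to C++ below so it can be tested on the CPU, and the constant for the third component is an arbitrary addition to the classic 2D version. Two evaluations with different constant sets give the two texture coordinates.

        #include <cmath>

        // C++ transliteration of the common one-line GLSL hash: a pseudo-random
        // value in [0,1) from a 3D position, branch-free, so it maps directly back
        // to shader code (fract(sin(dot(p, k)) * 43758.5453)).
        float hash3to1(float x, float y, float z) {
            float d = x * 12.9898f + y * 78.233f + z * 37.719f;  // 37.719 is an arbitrary choice
            float s = std::sin(d) * 43758.5453f;
            return s - std::floor(s);                            // fract()
        }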

    Read the article

  • Narrow-phase collision detection algorithms

    - by Marian Ivanov
    There are three phases of collision detection:

    - Broadphase: loops over all objects that can interact; false positives are allowed if they speed up the loop.
    - Narrowphase: determines whether they collide, and sometimes how; no false positives.
    - Resolution: resolves the collision.

    The question I'm asking is about the narrowphase. There are multiple algorithms, differing in complexity and accuracy:

    - Hitbox intersection: an a-posteriori algorithm with the lowest complexity, but also not very accurate.
    - Color intersection: hitbox intersection for each pixel; a-posteriori, pixel-perfect, not accurate with regard to time, higher complexity.
    - Separating axis theorem: used more often, accurate for triangles; however it is a-posteriori, as it can't find the edge; when taking the last frame into account, it's more stable.
    - Linear raycasting: an a-priori algorithm, useful for semi-realistic-looking physics; finds the intersection point, even more accurate than SAT, but with more complexity.
    - Spline interpolation: a-priori, even more accurate than linear rays, even more complexity.

    There are probably many more that I've forgotten about. The question is: when is it better to use SAT, when rays, when splines, and is there anything better?

    Read the article

  • Efficient way to calculate "vision cones" on 2D tile map?

    - by OverMachoGrande
    I'm trying to calculate which tiles a particular unit can "see" if facing a certain direction on a tile map (within a certain range and angle of facing). The easiest way would be to draw a certain number of tiles outward and raycast to each tile. However, I'm hoping for something slightly more efficient. A picture says a thousand words: The red dot is the unit (who's facing upwards). My goal is to calculate the yellow tiles. The green blocks are walls (walls are between tiles, and it's easy to check if you can pass between two tiles). The blue line represents something like the "raycasting" method I was talking about, but I'd rather not have to do this. EDIT: Units can only be facing north/south/east/west (0, 90, 180, or 270 degrees) and FoV is always 90 degrees. Should simplify some calculations. I'm thinking there's some sort of recursive-ish/stack-based/queue-based algorithm, but I can't quite figure it out. Thanks!

    Read the article

  • How is Basic Physics applied in CS/SE?

    - by Wulf
    What basic physics principles do software engineers and/or computer scientists use to help solve specific or common problems? The first one that came to my head was creating a physics engine for a game; physics is involved, as it requires knowledge of Forces and Motion: Kinematics, Dynamics, Circular Motion. However, I need another example, but haven't come across one that involves basic physics. Please consider the following basic physics (grade 12 level) concepts:

    - Energy and Momentum: Work and Energy, Momentum and Collisions, Gravitational and Celestial Mechanics
    - Electric, Gravitational & Magnetic Field: Electric Charges and Electric Field, Magnetic Fields and Electromagnetism
    - The Wave Nature of Light: Waves and Light, Wave Effects of Light
    - Matter-Energy Interface: Einstein’s Special Theory of Relativity, Waves, Photons and Matter, Radioactivity and Elementary Particles

    I will be happy with any response: keywords for Google, names of methods like raycasting, etc.

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on collision event gives you a Collision object that gives you some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is surface normals for the collider that you hit. Here's a screenshot to illustrate. The red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard 3D realtime collision detection techniques just not collect this information? And for the second part of the question, what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to accomplish this for all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to do some average of all the contact points' normals to get the best result). My current method:

    1. Back up the Collision.contacts[0].point along its hit normal.
    2. Raycast down the negated hit normal for float.MaxValue, on Collision.collider.
    3. If that fails, repeat steps 1 and 2 with the non-negated normal.
    4. If that fails, try steps 1 to 3 with Collision.contacts[1].
    5. Repeat 4 until successful or until all contact points are exhausted.
    6. Give up, return Vector3.zero.

    This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works for enough cases. Is there a better way?

    Read the article

  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcadish feel, and implementing physics for it has been a pain in the butt. Our game has a terrain where the character will walk on, and in the future 3D objects (a house, a tree, etc) that will have collisions. In terms of complexity, the physics engine should be like World of Warcraft. We don't need friction, bouncing behaviour or anything more complex like joints, etc. Just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take into account possible 3D objects. Note that these 3D objects need to have convex collisions, so our artists create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs which looks cool, but I fear that it may be overkill to implement that for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain? Or do they treat the terrain like any other convex mesh? A screenshot of our game so far:

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *)hello->pixels + (hit) * hello->format->BytesPerPixel;
        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch)
        {
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            *(Uint32 *)j = *(Uint32 *)q;
            h += s;
        }

    We have void pointers (not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.
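
    A more explicit equivalent of that loop, written as a sketch under the assumptions that canvpitch is canvas->pitch and both surfaces are 32 bits per pixel: every cast is just byte-offset arithmetic on the two pixel buffers, followed by a 32-bit pixel copy per screen row. All variables other than the locals below come from the snippet above.

        // Same behaviour as the original loop, spelled out with byte pointers.
        Uint8* canvasBytes = static_cast<Uint8*>(canvas->pixels);
        Uint8* texBytes    = static_cast<Uint8*>(hello->pixels);

        // First destination pixel: row 'startdraw', column 'curcol' of the canvas.
        Uint8* dst = canvasBytes + startdraw * canvas->pitch
                                 + curcol * canvas->format->BytesPerPixel;
        // Source column inside the texture: column 'hit'.
        Uint8* srcCol = texBytes + hit * hello->format->BytesPerPixel;

        float texRow = h;
        for (int row = 0; row < drawheight; ++row) {
            Uint8* src = srcCol + static_cast<int>(texRow) * hello->pitch;
            *reinterpret_cast<Uint32*>(dst) = *reinterpret_cast<Uint32*>(src);  // copy one pixel
            texRow += s;                    // step through the texture column
            dst += canvas->pitch;           // step down one screen row
        }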

    Read the article

  • OGRE 3D: How to create very basic gameworld [on hold]

    - by skiwi
    I'm considering trying to create an FPS (first-person shooter) using the Ogre 3D engine. I have done the Basic Tutorials (except CEGUI) and have read through the Intermediate Tutorial; I understand some of the more advanced concepts, but I'm stuck on very simple ones. First of all: I would want to use some tiles (square ones, with relatively little height) as the floor, so I guess I need to set up a loop to place those tiles. But how would I go about creating the tiles exactly? Like making them their own mesh, and then I would need to find some texture. Secondly: I guess I can derive the camera and movement functions from the basic tutorial, but I'll be needing a "soldier" (anything will do for now). What is the best way to create a moderately decent-looking soldier? (Or obtain a decent one from an open library?) And thirdly: how can I ensure that the soldier is actually walking on the ground, instead of in mid air? Will raycasting into the ground and adjusting the position based on that suffice?

    Read the article

  • Android Swipe In Unity 3D World with AR

    - by Christian
    I am working on an AR application using Unity3D and the Vuforia SDK for Android. The way the application works is, when the designated image (a frame marker in our case) is recognized by the camera, a 3D island is rendered at that spot. Currently I am able to detect when/which objects are touched on the model by raycasting. I am also able to successfully detect a swipe using this code:

        if (Input.touchCount > 0)
        {
            Touch touch = Input.touches[0];
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    couldBeSwipe = true;
                    startPos = touch.position;
                    startTime = Time.time;
                    break;
                case TouchPhase.Moved:
                    if (Mathf.Abs(touch.position.y - startPos.y) > comfortZoneY)
                    {
                        couldBeSwipe = false;
                    }
                    //track points here for raycast if it is swipe
                    break;
                case TouchPhase.Stationary:
                    couldBeSwipe = false;
                    break;
                case TouchPhase.Ended:
                    float swipeTime = Time.time - startTime;
                    float swipeDist = (touch.position - startPos).magnitude;
                    if (couldBeSwipe && (swipeTime < maxSwipeTime) && (swipeDist > minSwipeDist))
                    {
                        // It's a swiiiiiiiiiiiipe!
                        float swipeDirection = Mathf.Sign(touch.position.y - startPos.y);
                        // Do something here in reaction to the swipe.
                        swipeCounter.IncrementCounter();
                    }
                    break;
            }
            touchInfo.SetTouchInfo(Time.time - startTime, (touch.position - startPos).magnitude, Mathf.Abs(touch.position.y - startPos.y));
        }

    Thanks to andeeeee for the logic. But I want to have some interaction in the 3D world based on the swipe on the screen, i.e. if the user swipes over unoccluded enemies, they die. My first thought was to track all the points in the Moved TouchPhase, and then, if it is a swipe, raycast into all those points and kill any enemy that is hit. Is there a better way to do this? What is the best approach? Thanks for the help!

    Read the article
