Search Results

Search found 4092 results on 164 pages for 'initialization vector'.


  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating camera collision for my room model, which consists of sofas, tables, and other models. The user can move the camera forward and back and rotate it, so I need to make sure the camera does not collide with any of the models in the room. I have wrapped all the models inside the room with a BoundingBox[] array and the camera with a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX,-XXX,-XXX), where X is a digit. I also found the radius of some models was too large (in the thousands; I checked the radius value before converting to a BoundingBox). Do I need to scale the models for collision? Below is my code.

    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In my camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.
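
    One thing that stands out in the loop above: the corners of the whole-model bounding box (worldModel) are transformed by each mesh's bone transform, so every entry in box is the full model's box re-oriented, not a per-mesh box. Here is a hedged illustration of the intended per-mesh idea, written as a Python sketch of the math rather than XNA code (the data layout and function names are hypothetical):

        # Hedged sketch: transform each mesh's *local* AABB corners by that
        # mesh's absolute bone transform, then rebuild a world-space AABB.
        def transform_point(m, p):
            # m is a 4x4 row-major matrix, p a 3-tuple
            x, y, z = p
            return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(3))

        def world_aabb(local_corners, bone_transform):
            pts = [transform_point(bone_transform, c) for c in local_corners]
            mins = [min(p[i] for p in pts) for i in range(3)]
            maxs = [max(p[i] for p in pts) for i in range(3)]
            return mins, maxs

        def sphere_hits_aabb(center, radius, mins, maxs):
            # Squared distance from the sphere center to the closest box point.
            d2 = 0.0
            for i in range(3):
                c = min(max(center[i], mins[i]), maxs[i])
                d2 += (center[i] - c) ** 2
            return d2 <= radius * radius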


  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not fast enough, and it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR, a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly; rather, IRR is a calculation where approximation techniques need to be used. In this blog post we will discuss calculating the "money-weighted rate of return", but in the actual customer proof of concept we used R to calculate both money-weighted and time-weighted rates of return. You can learn more about the money-weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First steps, calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel: a spreadsheet of balances and cash flows for each sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something R does well; it offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet by sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money-weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions, we could easily incorporate the customer's specific variations and business rules into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining our NPV function, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or a maximum; to find the value of r where our NPV function is closest to zero, we wrapped our NPV function inside the abs function and asked optimize to find the minimum, passing low and high scalars that indicate the range to search for an answer. To test this out, we set values for bmv, cf, t, emv, tend, low, and high, with low and high set to some reasonable defaults. For example, one sample account had a negative 2.2% money-weighted rate of return.

    Enhancing and packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range and then, if we did not find one, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we could write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package containing our IRR functions and sample data; for this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. After creating the shell of the package, you need at a minimum to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory and provide at least a value for the "title" section. Once that is done, you can change directory to the IRR directory and build the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: We then tested our IRR function once it was converted to an installable package.

    Calculating IRR for all the accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is how to calculate IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function to calculate the money-weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and next steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:

    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR; it was both fast and allowed us to avoid coding our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could easily be added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform IRR calculations for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space, and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor allows.
    - The actual data needs to be kept secure, which is another reason not to move it out of the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
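
    The R code in the original post was embedded as images, so here is a minimal Python analogue of the approach described: an NPV of the balances and flows, minimized in absolute value over a bounded range, mirroring R's optimize(). The function shapes, sign conventions, and the scipy dependency are my assumptions, not the author's code:

        # Hedged sketch of the money-weighted-return approach described above:
        # find r that drives the net value of balances and flows to zero.
        from scipy.optimize import minimize_scalar

        def npv(r, bmv, cf, t, emv, tend):
            # bmv: beginning market value; cf/t: cash flows and their times;
            # emv: ending market value at tend. Sign conventions are assumptions.
            grow = lambda dt: (1.0 + r) ** dt
            return bmv * grow(tend) + sum(c * grow(tend - ti) for c, ti in zip(cf, t)) - emv

        def mwrr(bmv, cf, t, emv, tend, low=-0.99, high=10.0):
            # Mirror the post's trick: minimize |npv| over a bounded range.
            res = minimize_scalar(lambda r: abs(npv(r, bmv, cf, t, emv, tend)),
                                  bounds=(low, high), method="bounded")
            return res.x

        # e.g. mwrr(bmv=100000, cf=[5000, -2000], t=[0.25, 0.6], emv=104500, tend=1.0)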


  • How can I resolve collisions at different speeds, depending on the direction?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (in actuality, the right side of a parallelogram). My collision detection and resolution code works fine for preventing a game object from entering the space of the triangle, instead directing the movement along the edge. The trouble is, the maximum speeds along the x and y axes are not equal in my game: moving a given distance along the Y axis (up or down) should take twice as long as the same distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path above progresses twice as fast. What can I do in my collision resolution to make sure the speed limit for Y-axis movement is obeyed in the latter case? The collision resolution for this case is below (vecInput and velocity are the position and velocity vectors of the game object):

        // y = mx + c
        // Solve for y. m = 2, x = input's x coord, c = rightYIntercept.
        lowY = 2 * vecInput.x + parag.rightYIntercept;
        ...
        else
        {
            // y = mx + c
            // vecInput.y = 2(x) + rightYIntercept
            // (vecInput.y - rightYIntercept) / 2 = x
            // If velocity.y (positive) is greater than velocity.x (negative),
            // we are pushing from the bottom, so push right.
            if (velocity.y > -1 * velocity.x)
            {
                // Change the input vector's x position to match the y position
                // on the shape's edge. Line formula: y = mx + c.
                // m is 2, c is rightYIntercept, y is the input y; solve for x.
                vecInput = new Vector2((vecInput.y - parag.rightYIntercept) / 2, vecInput.y);
                Debug.Log("adjusted rightwards");
            }
            else
            {
                vecInput = new Vector2(vecInput.x, lowY);
                Debug.Log("adjusted downwards");
            }
        }
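
    One way to keep the Y speed limit honest during resolution, as a hedged sketch (Python pseudocode, not the poster's codebase): resolve the collision first, then clamp the resulting per-frame displacement on each axis before committing it:

        # Hedged sketch: clamp the resolved per-frame displacement per axis so a
        # slide along a diagonal wall can never exceed the Y speed limit.
        def clamp(v, lo, hi):
            return max(lo, min(hi, v))

        def apply_resolved_move(old_pos, resolved_pos, max_dx, max_dy):
            dx = clamp(resolved_pos[0] - old_pos[0], -max_dx, max_dx)
            dy = clamp(resolved_pos[1] - old_pos[1], -max_dy, max_dy)
            return (old_pos[0] + dx, old_pos[1] + dy)

        # With max_dy = max_dx / 2, sliding up the wall obeys the slower Y limit
        # even when the response converts X motion into Y motion.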


  • What is a technique for 2D ray-box intersection that is suitable for old console hardware?

    - by DJCouchyCouch
    I'm working on a Sega Genesis homebrew game (the console has a 7 MHz 68000 CPU). I'm looking for a way to find the intersection between a particle sprite and a background tile. Particles are represented as a point with a movement vector. Background tiles are 8 x 8 pixels, with an (X,Y) position that is always a multiple of 8. So, really, I need to find the intersection point for a ray-box collision; I need to find out where along the edge of the tile the ray/particle hits. I have these two hard constraints: I'm working with pixel locations (integers), because floating point is too expensive, and it doesn't have to be super exact, just close enough. Multiplications, divisions, dot products, et cetera, are incredibly expensive and are to be avoided. So I'm looking for an efficient algorithm that fits those constraints. Any ideas? I'm writing it in C, so that would work, but assembly would be good as well.
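
    One approach that fits both constraints is to step the particle pixel by pixel with a Bresenham-style error accumulator and stop at the first solid tile; the pixel you stop on is the hit point on the tile edge, to within one pixel. A hedged Python sketch of the idea (adds, shifts, and compares only, so it should map onto the 68000; is_solid is a hypothetical tile lookup):

        # Hedged sketch: Bresenham-style integer ray march through 8x8 tiles.
        # x >> 3 and y >> 3 divide by the tile size without a division.
        def march(x, y, dx, dy, is_solid, max_steps=256):
            sx = 1 if dx > 0 else -1
            sy = 1 if dy > 0 else -1
            adx, ady = abs(dx), abs(dy)
            err = adx - ady
            for _ in range(max_steps):
                if is_solid(x >> 3, y >> 3):
                    return (x, y)          # first pixel inside a solid tile
                e2 = err << 1
                if e2 > -ady:
                    err -= ady
                    x += sx
                if e2 < adx:
                    err += adx
                    y += sy
            return None                    # no hit within the step budget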


  • MVC and individual elements of the model under a common base class

    - by Stewart
    Admittedly my experience of using the MVC pattern is limited. It might be argued that I don't really separate the V from the C, though I keep the M separate from the VC to the extent I can manage. I'm considering the scenario in which the application's model includes a number of elements that have a common base class. For example, enemy characters in a video game, or shape types in a vector graphics app. The view wants to render these elements. Of course, the different subclasses call for different rendering. The problem is that the elements are part of the model. Rendering them is conceptually part of the view. But how they are to be rendered depends on parameters of both:

    - Attributes and state of the element are parameters of the model
    - User settings are parameters of the view; to support multiple platforms and/or view modes, different views may be used

    What's your preferred way of dealing with this?

    - Put the rendering code in the model classes, passing in any view parameters?
    - Put the rendering code in the view, using a switch or similar to select the right rendering for the model element type?
    - Have some intermediate classes as a model-view interface, of which the model will create objects on demand and the view will then render them?
    - Something else?


  • Sprites rendering blurry with velocity

    - by ashes999
    After adding velocity to my game, I feel like my textures are twitching. I thought it was just my eyes, until I finally captured it in a screenshot: The one on the left is what renders in my game; the one on the right is the original sprite, pasted over. (This is a screenshot from Photoshop, zoomed in 6x.) Notice the edges are aliasing -- it looks almost like sub-pixel rendering. In fact, if I had not forced my sprites (which have position and velocity as ints) to draw using integer values, I would swear that MonoGame is drawing with floating point values. But it isn't. What could be the cause of these things appearing blurry? It doesn't happen without velocity applied. To be precise, my SpriteComponent class has a Vector2 Position field. When I call Draw, I essentially use new Vector2((int)Math.Round(this.Position.X), (int)Math.Round(this.Position.Y)) for the position. I had a bug before where even stationary objects would jitter -- that was due to me using the straight Position vector and not rounding the values to ints. If I use Floor/Ceiling instead of round, the sprite sinks/hovers (one pixel difference either way) but still draws blurry.


  • How do I classify using SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project on liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in Matlab? To the GLCM program, I gave the tumor-segmented image as input. Was that correct? If so, I think my output should also be correct. I gave the parameters exactly as in the example provided in the documentation itself. The output I obtained was:

        stats =
            autoc: [1.857855266614132e+000 1.857955341199538e+000]
            contr: [5.103143332457753e-002 5.030548650257343e-002]
            corrm: [9.512661919561399e-001 9.519459060378332e-001]
            corrp: [9.512661919561385e-001 9.519459060378338e-001]
            cprom: [7.885631654779597e+001 7.905268525471267e+001]
            cshad: [1.219440700252286e+001 1.220659371449108e+001]
            dissi: [2.037387269065756e-002 1.935418927908687e-002]
            energ: [8.987753042491253e-001 8.988459843719526e-001]
            entro: [2.759187341212805e-001 2.743152140681436e-001]
            homom: [9.930016927881388e-001 9.935307908219834e-001]
            homop: [9.925660617240367e-001 9.930960070222014e-001]
            maxpr: [9.474275457490587e-001 9.474466930429607e-001]
            sosvh: [1.847174384255155e+000 1.846913030238459e+000]
            savgh: [2.332207337361002e+000 2.332108469591401e+000]
            svarh: [6.311174784234007e+000 6.314794324825067e+000]
            senth: [2.663144677055123e-001 2.653725436772341e-001]
            dvarh: [5.103143332457753e-002 5.030548650257344e-002]
            denth: [7.573115918713391e-002 7.073380266499811e-002]
            inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
            inf2h: [5.643539051044213e-001 5.661543271625117e-001]
            indnc: [9.980238521073823e-001 9.981394883569174e-001]
            idmnc: [9.993275086521848e-001 9.993404634013308e-001]

    The thing is, I ran the program for three images, but all three gave me the same output. When I used graycoprops():

        stat =
            Contrast: 4.721877658740964e+005
            Correlation: -3.282870417955449e-003
            Energy: 8.647689474127760e-006
            Homogeneity: 8.194621855726478e-003

        stat =
            Contrast: 2.817160447307697e+004
            Correlation: 2.113032196952781e-005
            Energy: 4.124904827799189e-004
            Homogeneity: 2.513567163994905e-002

        stat =
            Contrast: 7.086638436309059e+004
            Correlation: 2.459637878221028e-002
            Energy: 4.640677159445994e-004
            Homogeneity: 1.158305728309460e-002
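
    On the normalization question specifically, the usual recipe is to scale each feature column independently, with statistics computed from the training set only, using min-max or z-score scaling. A hedged NumPy sketch of the idea (the Matlab translation is mechanical; the sample numbers are illustrative, not from the data above):

        import numpy as np

        # Hedged sketch: X is (n_samples, n_features), e.g. GLCM stats per image.
        def minmax_fit(X):
            return X.min(axis=0), X.max(axis=0)

        def minmax_apply(X, lo, hi):
            span = np.where(hi > lo, hi - lo, 1.0)   # guard constant columns
            return (X - lo) / span

        X_train = np.array([[4.72e5, 8.6e-6], [2.82e4, 4.1e-4], [7.09e4, 4.6e-4]])
        lo, hi = minmax_fit(X_train)          # fit on training data only
        X_scaled = minmax_apply(X_train, lo, hi)
        # Apply the same lo/hi to test vectors before handing them to the SVM.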


  • How do I create a third Person View using DXUTCamera in DX10?

    - by David
    I am creating a 3D flying game and using DXUTCamera for my view. I can get the camera to take on the character's position, but I would like to view my character in the third person. Here is my code for the first-person view:

        // Put the camera on the object.
        D3DXVECTOR3 viewerPos;
        D3DXVECTOR3 lookAtThis;
        D3DXVECTOR3 up ( 5.0f, 1.0f, 0.0f );
        D3DXVECTOR3 newUp;
        D3DXMATRIX matView;

        // Set the viewer's position to the position of the thing.
        viewerPos.x = character->x;
        viewerPos.y = character->y;
        viewerPos.z = character->z;

        // Create a new vector for the direction for the viewer to look
        character->setUpWorldMatrix();
        D3DXVECTOR3 newDir, lookAtPoint;
        D3DXVec3TransformCoord(&newDir, &character->initVecDir, &character->matAllRotations);

        // Set the look-at point
        D3DXVec3Normalize(&lookAtPoint, &newDir);
        lookAtPoint.x += viewerPos.x;
        lookAtPoint.y += viewerPos.y;
        lookAtPoint.z += viewerPos.z;

        g_Camera.SetViewParams(&viewerPos, &lookAtPoint);

    So does anyone have ideas on how I can move the camera to a third-person view? Preferably smoothed over time, so the camera movement is fluid. (I'm hoping I can just edit this code instead of bringing in another camera class.)
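
    A common third-person setup, sketched here in Python as generic vector math rather than DXUT-specific code: place the camera behind and above the character along its forward direction, look at the character, and low-pass filter the camera position each frame for the smooth motion you mention. The names and constants are assumptions:

        # Hedged sketch: third-person chase camera with exponential smoothing.
        def lerp3(a, b, t):
            return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

        def chase_camera(cam_pos, char_pos, forward, dist, height, dt, stiffness=5.0):
            # Desired spot: behind the character along -forward, raised by height.
            desired = (char_pos[0] - forward[0] * dist,
                       char_pos[1] - forward[1] * dist + height,
                       char_pos[2] - forward[2] * dist)
            # Frame-rate-independent smoothing toward the desired spot.
            t = 1.0 - pow(0.5, dt * stiffness)
            new_pos = lerp3(cam_pos, desired, t)
            look_at = char_pos
            return new_pos, look_at   # feed these to SetViewParams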


  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/HTML5, and I have it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check the new x and y positions separately against the corresponding old positions. If either one is out, I assign it the past value from the frame before. The old positions are kept in a variable from a closure I made, like this:

        opos = [x,y]; // old position
        npos = [x,y]; // new position

        if (isOut(bound, npos[0], opos[1])) {
            npos[0] = opos[0]; // assign it the old x position
        }
        if (isOut(bound, opos[0], npos[1])) {
            npos[1] = opos[1]; // assign it the old y position
        }

    It looks nice and works well at certain angles, but if your boundary has diagonal regions it results in jittery motion. What's happening is that the y position exits the area while x doesn't, and x keeps pushing the player to the side; once the player has moved sideways a bit, he can move forward, then y exits again and the whole process repeats. Does anyone know how I can achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (when used with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in backups to translate one to the other if either has been altered manually.
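
    The standard fix for this jitter is to slide along the wall rather than zeroing one axis: project the velocity onto the boundary edge so only the into-the-wall component is removed. A hedged Python sketch of the idea (you would still need the unit normal of the edge the player is touching):

        # Hedged sketch: slide by removing the into-the-wall velocity component.
        def dot(a, b):
            return a[0]*b[0] + a[1]*b[1]

        def slide(vel, wall_normal):
            # v' = v - n * dot(v, n), with wall_normal unit length.
            d = dot(vel, wall_normal)
            if d >= 0:
                return vel                # already moving away from the wall
            return (vel[0] - wall_normal[0] * d,
                    vel[1] - wall_normal[1] * d)

        # On a 45-degree edge this yields a smooth diagonal slide instead of the
        # stop/step/stop pattern produced by per-axis rollback.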


  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:

    - Should an iterator into a custom container survive the container itself being destroyed?
    - Should it be possible to detect when an iterator becomes invalidated?
    - Are the above conditional on "debug builds" in practice?

    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked-list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have this horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL lets you easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this to "idiomatic C++11" avoids charges of subjectivity...)


  • What is wrong with my game loop/mechanic?

    - by elias94xx
    I'm currently working on a 2D side-scrolling game prototype in HTML5 canvas. My implementation so far includes a sprite, vector, loop, and ticker class/object, which can be viewed here: http://elias-schuett.de/apps/_experiments/2d_ssg/js/ My game essentially works well on today's low-spec PCs and laptops, but it does not on an older Windows XP machine I own, or on my Android 2.3 device. I tend to get ~10 FPS on these devices, which results in a delta value that is too high, which then automatically gets clamped to 1.0, which results in a slow loop. Now, I know for a fact that there is a way to implement a super-smooth 60 or 30 FPS loop on both devices. The best example would be: http://playbiolab.com/ I don't need all the chunk and debugging technology impact.js offers. I could even write a super-simple game where you just control a damn square, and it still wouldn't run at an equally smooth 30 or 60 FPS. Here is the Loop class/object I'm using. It requires a requestAnimationFrame unify function. Both devices I've tested my game on support requestAnimationFrame, so there is no interval fallback.

        var Loop = function(callback) {
            this.fps = null;
            this.delta = 1;
            this.lastTime = +new Date;
            this.callback = callback;
            this.request = null;
        };

        Loop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                _this.delta = (now - _this.lastTime);
                _this.fps = 1000 / _this.delta;
                _this.delta = _this.delta / (1000/60) > 2 ? 1 : _this.delta / (1000/60);
                _this.lastTime = now;
                _this.callback();
            });
        };

        Loop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };
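
    The usual cure on slow hardware is a fixed-timestep loop with an accumulator: the simulation always advances in constant steps, as many per frame as needed, and only rendering drops to whatever rate the device manages (this is the pattern popularized by the "Fix Your Timestep" article). A hedged Python transliteration of the pattern, where update() and render() stand in for your game code:

        import time

        # Hedged sketch: fixed-timestep update with an accumulator. On a ~10 FPS
        # device the simulation still advances in 1/60 s steps, just several of
        # them per rendered frame.
        STEP = 1.0 / 60.0
        MAX_FRAME = 0.25              # clamp huge frames (tab switch, GC pause)

        accumulator = 0.0
        last = time.perf_counter()

        def tick(update, render):
            global accumulator, last
            t = time.perf_counter()
            accumulator += min(t - last, MAX_FRAME)
            last = t
            while accumulator >= STEP:
                update(STEP)          # game logic always sees a constant delta
                accumulator -= STEP
            render()                  # draw once per animation frame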


  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread; big thanks to him too! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector.

    I'm developing my first tile-based 2D game with Javascript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he'll just keep moving in the x-direction. The same goes for [up] and [right], as well as downward-diagonal movements; it works as intended: http://i.stack.imgur.com/aiZjI.png (diagonal movement works fine for horizontal walls, for both left and right movement).

    However, this does not work for vertical walls. Instead of keeping movement in the y-direction, he just stops as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls: http://i.stack.imgur.com/XBEKR.png (my character stops because he thinks he is colliding vertically with tiles from the wall on the right). This only occurs when:

    - moving in the top-right direction towards the right wall
    - moving in the top-left direction towards the left wall

    Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended. Is this inherent to the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle, but he's actually a rectangular polygon for the collision detection. Thank you very much for your help.
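
    For readers hitting the same seam problem, here is the neighbour check described above as a hedged Python sketch: reject a tile collision whose minimum translation vector points into an adjacent solid tile, since that shared edge is interior to the wall and cannot be a real separating face (is_solid is a hypothetical tile lookup):

        # Hedged sketch of the fix described in the edit: ignore an MTV that
        # points into a neighbouring solid tile (an interior edge of the wall).
        def accept_collision(mtv, tile_x, tile_y, is_solid):
            nx = tile_x + (1 if mtv[0] > 0 else -1 if mtv[0] < 0 else 0)
            ny = tile_y + (1 if mtv[1] > 0 else -1 if mtv[1] < 0 else 0)
            # If the tile the MTV pushes toward is itself solid, the face is
            # interior; skip this response and let another tile resolve it.
            return not is_solid(nx, ny)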


  • Building View Matrix in Direct3D11

    - by Balls
    Am I doing it right? I converted this:

        m_ViewMatrix = XMMatrixLookAtLH(XMLoadFloat3(&m_Position), lookAtVector, upVector);

    to this:

        XMVECTOR vz = XMVector3Normalize( lookAtVector - XMLoadFloat3(&m_Position) );
        XMVECTOR vx = XMVector3Normalize( XMVector3Cross( upVector, vz ) );
        XMVECTOR vy = XMVector3Cross( vz, vx );
        m_ViewMatrix.r[0] = vx;
        m_ViewMatrix.r[1] = vy;
        m_ViewMatrix.r[2] = vz;
        m_ViewMatrix.r[3] = XMLoadFloat3(&m_Position);
        m_ViewMatrix.r[0].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[1].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[2].m128_f32[3] = 0.0f;
        m_ViewMatrix.r[3].m128_f32[3] = 1.0f;
        m_ViewMatrix = XMMatrixInverse( &XMMatrixDeterminant(m_ViewMatrix), m_ViewMatrix );

    Everything looks fine when I run it. Another question: I saw on this site (http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html) that he subtracted lookAt from position in his vector vz. I tried it, but it gave me the wrong view matrix. Can anyone check my code? I'm studying linear algebra right now; it sucks that my course doesn't cover this. Thank you, Balls
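
    For checking the math independently of DirectXMath, here is a hedged NumPy sketch of the same construction: build the camera's world matrix from the basis vectors and position (row-vector convention, as in Direct3D), then invert it to get the view matrix. The site linked above uses a right-handed convention, where vz = position - target, which is why its subtraction is flipped relative to the left-handed XMMatrixLookAtLH form:

        import numpy as np

        # Hedged sketch: camera world matrix from basis vectors; view = inverse.
        def look_at_lh(position, target, up):
            vz = target - position            # LH: camera looks down +z
            vz = vz / np.linalg.norm(vz)
            vx = np.cross(up, vz)
            vx = vx / np.linalg.norm(vx)
            vy = np.cross(vz, vx)
            world = np.eye(4)
            world[0, :3] = vx                 # rows are the basis vectors
            world[1, :3] = vy
            world[2, :3] = vz
            world[3, :3] = position
            return np.linalg.inv(world)       # the view matrix

        # A right-handed variant would use vz = position - target instead,
        # which is the form the webglfactory article shows.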


  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path polygon for speed. I'm having trouble finding a way to detect collisions with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly. I'm using Cartesian vectors, and they're 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity. They're both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return the collided vertices or the collision point. I'd really like to understand it and write my own so I can have more meaningful collision, but I'll tackle that later. Again, the question is: how does one find a collision against the walls of a poly, given the poly's vertices and the object's position and velocity? If more info is needed, please let me know. And if all anyone can do is point me in the right direction, that's great.
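
    The usual building block here is a segment-segment intersection test: sweep the segment from pos to pos + vel against each polygon edge and keep the earliest hit, which gives you both the edge and the exact contact point. A hedged Python sketch of the math:

        # Hedged sketch: earliest intersection of the motion segment with the
        # polygon's edges. Returns (t, hit_point, edge_index) or None.
        def cross(ax, ay, bx, by):
            return ax * by - ay * bx

        def sweep(pos, vel, poly):
            best = None
            for i in range(len(poly)):
                a, b = poly[i], poly[(i + 1) % len(poly)]
                ex, ey = b[0] - a[0], b[1] - a[1]
                denom = cross(vel[0], vel[1], ex, ey)
                if denom == 0:
                    continue                  # motion parallel to this edge
                qx, qy = a[0] - pos[0], a[1] - pos[1]
                t = cross(qx, qy, ex, ey) / denom          # along the motion
                u = cross(qx, qy, vel[0], vel[1]) / denom  # along the edge
                if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0 and (best is None or t < best[0]):
                    hit = (pos[0] + vel[0] * t, pos[1] + vel[1] * t)
                    best = (t, hit, i)
            return best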


  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API. Back in the old days of the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3d file, as EditBones had an attribute named "matrix": an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute on EditBones, but for some unknown reason it is now read-only. So how do you create EditBones from transform matrices? I could only find one thing: a new transform() function, which also takes a matrix and transforms the bone's head, tail, roll, and envelope (when the matrix has a scale component). Perfect, but you already need to have some values (loc/rot/scale) for your bone; otherwise transforming with a matrix like this will give you nothing, and your bone will be a zero-sized bone, which will be deleted by Blender. If you create default bone values first, like this:

        bone.tail = mathutils.Vector([0,1,0])

    then transform() will work on your bone, and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so when you use transform() you don't get the matrix from your model file on your EditBone, but the multiplication of your matrix with the bone's existing one. This can easily be proven by comparing the matrices read from the file with EditBone.matrix. Again, it might seem correct in Blender, but now export your model and you'll see your animations are messed up, as the bind-pose rotations of the bones are wrong. I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.
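
    For what it's worth, one workaround is to bypass transform() entirely and derive the head, tail, and roll straight from the matrix. This is a hedged sketch, not a verified recipe: the method names follow the 2.5/2.6 mathutils and EditBone docs as I read them (notably align_roll()), and the axis conventions depend on your exporter:

        from mathutils import Vector

        # Hedged sketch: build an EditBone from an armature-space matrix
        # without calling transform(). Verify against your Blender version.
        def bone_from_matrix(armature_data, name, mat, length=1.0):
            bone = armature_data.edit_bones.new(name)
            rot = mat.to_3x3()
            head = mat.to_translation()
            bone.head = head
            # Bones point along their local +Y axis; scale by the wanted length.
            bone.tail = head + (rot * Vector((0.0, 1.0, 0.0))) * length
            # Recover roll by aligning the bone's Z axis to the matrix Z column.
            bone.align_roll(rot * Vector((0.0, 0.0, 1.0)))
            return bone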


  • Setting Krypton Light to Screen Pixels

    - by Adam Jerrett
    So a few days back, I started playing around with Krypton XNA for 2D lighting in my game. I noticed that, in general, spawning a light at (0,0) with Krypton causes the light to appear pretty much in the centre of the game screen. Is there any way to change this so that a Krypton light's "starting point" of [0,0] would spawn at the top left of the screen, and thus follow standard screen coordinates for position? I ask because I'm currently working on my game, where my spawn point is [512,512]. With hard-coded values, the closest I've got to the light being "central" to this point is the vector position [12,-20], which makes no sense and is impossible to derive mathematically if I want the light to move with the camera (the position [480,512] maps roughly to [10,-20]). So, is there any way to "normalise" the Krypton lights to use standard screen coordinates? If you can, play around with the demo from the site and please see if you can find anything out about it. Documentation on the engine is rather scarce, so it's difficult to find anything relevant to my "pixel-perfect" need. It might also just be something in the code with regard to the matrices that I'm not fully understanding. Any help would be useful. Thanks.


  • forward motion car physics - gradual slow

    - by spartan2417
    I'm having trouble creating realistic car movement in XNA 4. Right now I have a car going forward and hitting a terminal velocity, which is fine, but when I release the up key I need the car to slow down gradually and then come to a stop. I'm pretty sure this is easy, but I can't seem to get it to work.

    The code, in Update:

        if (Keyboard.GetState().IsKeyDown(Keys.Up))
        {
            double elapsedTime = gameTime.ElapsedGameTime.Milliseconds;
            CalcTotalForce();
            Acceleration = Vector2.Divide(CalcTotalForce(), MASS);
            Velocity = Vector2.Add(Velocity, Vector2.Multiply(Acceleration, (float)(elapsedTime)));
            Position = Vector2.Add(Position, Vector2.Multiply(Velocity, (float)(elapsedTime)));
        }

    The added functions:

        public Vector2 CalcTraction()
        {
            // Traction force = direction vector * engine force
            return Vector2.Multiply(forwardDirection, ENGINE_FORCE);
        }

        public Vector2 CalcDrag()
        {
            // Drag force = drag constant * velocity * speed
            return Vector2.Multiply(Vector2.Multiply(Velocity, DRAG_CONST), Velocity.Y);
        }

        public Vector2 CalcRoll()
        {
            // Rolling force = roll constant * velocity
            return Vector2.Multiply(Velocity, ROLL_CONST);
        }

        public Vector2 CalcTotalForce()
        {
            // Total force = traction + (-drag) + (-rolling)
            return Vector2.Add(CalcTraction(), Vector2.Add(-CalcDrag(), -CalcRoll()));
        }

    Anyone have any ideas?
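
    The usual structure, as a hedged Python sketch of the same force model: integrate forces every frame and gate only the engine traction on the key state, so drag and rolling resistance keep acting after the key is released and bleed off the speed gradually. The constants below are illustrative, not tuned values:

        # Hedged sketch: integrate every frame; the Up key only gates traction.
        def update_car(pos, vel, forward, dt, up_pressed,
                       engine=12000.0, drag=0.4257, roll=12.8, mass=1200.0):
            traction = engine if up_pressed else 0.0
            speed = (vel[0] ** 2 + vel[1] ** 2) ** 0.5
            # Force = traction - drag * v * |v| - rolling * v, per axis.
            fx = forward[0] * traction - drag * vel[0] * speed - roll * vel[0]
            fy = forward[1] * traction - drag * vel[1] * speed - roll * vel[1]
            vel = (vel[0] + fx / mass * dt, vel[1] + fy / mass * dt)
            pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
            return pos, vel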


  • backface culling error (in world space)

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface-culling stage, but it looks like it has an error (see picture). I perform culling right after the world transformation (is that correct?). (I can't insert the picture in the post because I don't have enough points, so I uploaded it here (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason? For a better explanation I uploaded a picture with screenshots of the cube rotating: http://imageshack.us/photo/my-images/842/bcmove.png/

    UPDATE: The error occurs only when the triangle has a non-zero offset from the origin.

    UPDATE 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling right before the view/projection transforms? In that case it seems culling would not depend on the projection, which doesn't seem right.

    UPDATE 3: I found the answer and will post it in two hours, again because of the reputation requirement.
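
    This matches the updates above: with a perspective camera, a constant view direction is only valid for triangles at the origin. In world space, the test vector must run from the camera position to a vertex of the triangle. A hedged Python sketch (the sign of the comparison depends on your winding convention):

        # Hedged sketch: perspective-correct backface test in world space.
        def sub(a, b):
            return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
        def cross(a, b):
            return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
        def dot(a, b):
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

        def is_backfacing(v1, v2, v3, cam_pos):
            normal = cross(sub(v2, v1), sub(v3, v1))
            to_tri = sub(v1, cam_pos)        # NOT a fixed (0,0,1) direction
            return dot(to_tri, normal) >= 0  # facing away from the camera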


  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, shown in red. I have drawn those tentacles on a 2D circle, positioned at 5 of the circle's vertices. By the way, the circle itself is never drawn; I used it to simplify the problem. Now I want to attach that circle, with its tentacles, underneath the jellyfish. There is a problem with the current code, but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish. I want it shifted so that it sits inside the jellyfish, but I don't know how. I tried multiplying the direction vector to extend it, but that didn't work.

        // One tentacle is constructed from nodes.
        // Get the direction from node 0 to node 39 of the first tentacle.
        Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];

        // Draw the circle with tentacles on it
        Vec3f pos = m_SpherePos;
        drawCircle(pos, dir, 30, m_tentacle.size());
        for (int i = 0; i < m_tentacle.size(); i++)
        {
            m_tentacle[i]->Draw();
        }

        // Draw the jellyfish and orient it on the 2D circle
        gl::pushMatrices();
        Quatf q;
        // Quaternion to rotate the jellyfish around the tentacles
        q.set(Vec3f(0,-1,0), Vec3f(dir.x, dir.y, dir.z));
        // Translate to the position of the whole creature each frame
        gl::translate(m_SpherePos.x, m_SpherePos.y, m_SpherePos.z);
        gl::rotate(q);
        // Draw the jellyfish at center 0,0,0
        drawHemiSphere(Vec3f(0,0,0), m_iRadius, 90);
        gl::popMatrices();


  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

        // The line passes through p1 and p2:
        vec3 p1 = (...);
        vec3 p2 = (...);

        // Sphere center is p3, radius is r:
        vec3 p3 = (...);
        float r = ...;

        float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
        float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
        float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

        float dx = x2 - x1;
        float dy = y2 - y1;
        float dz = z2 - z1;

        float a = dx*dx + dy*dy + dz*dz;
        float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
        float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1 - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r;

        float test = b*b - 4.0*a*c;

        if (test >= 0.0) {
            // Hit (according to Treebeard, "a fine hit").
            float u = (-b - sqrt(test)) / (2.0 * a);
            vec3 hitp = p1 + u * (p2 - p1);
            // Now use hitp.
        }

    It works perfectly! But it seems slow... I'm new at GLSL. You can answer this question in two ways:

    - Tell me there is no better solution, showing some proof or strong evidence.
    - Tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example.

    Thanks a lot!
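
    The scalar expansion is where the bulk of the arithmetic is; the same quadratic can be set up with three dot products, which GLSL evaluates natively on vec3s. A hedged sketch of the algebra in Python (each dot() here translates directly to GLSL's built-in dot()):

        # Hedged sketch: the quadratic coefficients via dot products only.
        # d = p2 - p1 (ray direction), m = p1 - p3 (start relative to center).
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        def sub(a, b):
            return tuple(x - y for x, y in zip(a, b))

        def line_sphere(p1, p2, p3, r):
            d = sub(p2, p1)
            m = sub(p1, p3)
            a = dot(d, d)
            b = 2.0 * dot(m, d)
            c = dot(m, m) - r * r      # same as the long expansion above
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return None            # no hit
            u = (-b - disc ** 0.5) / (2.0 * a)
            return tuple(p1[i] + u * d[i] for i in range(3))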


  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the anti-aliasing; no matter what I do, my graphics appear jagged. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is the same sprites pasted in from Photoshop and scaled to 200%. (This is a screenshot from Photoshop, zoomed in 6x.) Notice the edges are aliasing; it looks almost like sub-pixel rendering. In fact, if I had not forced my sprites (which have position and velocity as ints) to draw using integer values, I would swear that MonoGame is drawing with floating-point values. But it isn't. Note also that partially transparent pixels are turning whitish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small > in the image) or zero (the orb itself).)


  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading… but I am currently travelling and so have had very little time and access to resources to do much fsharping, which means that right now I am missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked 8 key things about the F# language, namely:

    - How well does it do floating-point?
    - Does it allow vector instructions?
    - How friendly is it towards optimizing compilers?
    - How big a memory footprint does it have?
    - Does it allow fine-grained control over memory locality?
    - Does it have capacity for distributed-memory processors, for example Cray?
    - What features does it have that may be of interest to computational science where heavy number processing is involved?
    - Are there actual scientific computing implementations that use it?

    Now, I don't have much time to look into a decent response, and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what was put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything other than what has been covered in the answer section already.


  • Quaternion LookAt for camera

    - by Homar
    I am using the following code to rotate entities to look at points:

        glm::vec3 forwardVector = glm::normalize(point - position);
        float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector);
        float rotationAngle = (float)acos(dot);
        glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector));
        rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle));

    This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works, but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the view matrix, which I pass to my OpenGL shaders:

        glm::mat4 viewMatrix = glm::inverse(cameraTransform->GetTransformationMatrix());

    The orthographic projection matrix is created using glm::ortho. What's going wrong?
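
    Two things worth noting, sketched below as hedged Python rather than GLM specifics: building the quaternion from an axis-angle vector breaks down when forwardVector is parallel or anti-parallel to +Z (the cross product vanishes, so normalize divides by zero), and a camera wants the inverse of the rotation that would orient an entity to face the point (for unit quaternions, the conjugate):

        import math

        # Hedged sketch: quaternion (w, x, y, z) rotating +Z onto `forward`,
        # with the parallel/anti-parallel cases handled explicitly.
        def look_rotation(forward):
            fx, fy, fz = forward                  # assumed unit length
            d = max(-1.0, min(1.0, fz))           # dot((0,0,1), forward)
            if d > 0.999999:
                return (1.0, 0.0, 0.0, 0.0)       # identity
            if d < -0.999999:
                return (0.0, 0.0, 1.0, 0.0)       # 180 degrees about +Y
            angle = math.acos(d)
            ax, ay, az = (-fy, fx, 0.0)           # cross((0,0,1), forward)
            n = math.sqrt(ax*ax + ay*ay + az*az)
            s = math.sin(angle / 2.0) / n
            return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

        # For a camera, use the conjugate (w, -x, -y, -z): the view rotation
        # is the inverse of the entity-facing rotation.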


  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In "When to use C over C++, and C++ over C?" there is a statement with regard to code size and C++ exceptions. Jerry answers (among other points):

    (...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)

    to which I asked why that would be, to which Jerry responded:

    the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)

    I do not really doubt this on a technical, real-world level. Therefore I'm interested (purely out of curiosity) in hearing about real-world examples where a project chose C++ as a language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions.) Why would a project choose to do so (still using C++ and not C, but with no exceptions)? What are/were the (technical) reasons?

    Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:

    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - Constructors cannot fail


  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client, a server, and a third box that sees all packets from the server to the client (but not the other way around). Now, when the client requests a file from the server (over HTTP), the third box sees the response, and I am trying to reconstruct the file there. I am using libpcap to capture the TCP segments and reassemble the file. Here is what I did:

    1. Listen for packets on an interface
    2. Group all packets which have the same ACK number
    3. Sort each group by SEQ number
    4. Extract the data from each packet, concatenate it, and write it to disk

    The problem is that the file generated this way is not exactly the same as the original file. Does everything sound correct here? Some more details: I am using C++, and the packet data is stored as std::vector<char>. I did change the byte order when reading the ACK and SEQ numbers from the packet, using ntohl. I am not sure if I need to change the byte order for the data as well. I tried reversing the data from each packet before combining them; even that did not work. Is there something I am missing?
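
    On the sequencing specifically: grouping by ACK number is fragile, since the server's ACK numbers track data flowing in the direction you cannot see. A more robust hedged sketch keys payloads by SEQ number and walks them in order, which also drops retransmitted duplicates. And the payload bytes themselves need no byte-swapping; only multi-byte header fields like SEQ/ACK do:

        # Hedged sketch: reassemble one direction of a TCP stream by SEQ number.
        def reassemble(segments, isn):
            # segments: iterable of (seq, payload_bytes), server -> client.
            # isn: sequence number of the first payload byte.
            by_seq = {}
            for seq, payload in segments:
                if payload and seq not in by_seq:   # ignore retransmits/dupes
                    by_seq[seq] = payload
            out, expected = bytearray(), isn
            while expected in by_seq:
                chunk = by_seq.pop(expected)
                out += chunk                        # payload: no byte-swapping
                expected = (expected + len(chunk)) % (1 << 32)  # 32-bit wrap
            return bytes(out)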

