Search Results

Search found 1451 results on 59 pages for 'theory'.


  • When to use Euler vs Axis angles vs Quaternions?

    - by manning18
    I understand the theory behind each, but I was wondering if people could share their experiences on when one would use one over the other. For instance, if you were implementing a chase camera, an FPS-style mouse look or writing some kinematic routine, what would be the factors you consider to go with one type over the other, and when might you need to convert from one form of representation to the other? Are there certain things that only one system can do that the others can't? (e.g. smooth interpolation with quaternions)
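    As a concrete illustration of that last point, here is a minimal Python/NumPy sketch, not tied to any particular engine (the axis and angles are made up), of spherical linear interpolation between two quaternions. Slerp gives a smooth, constant-speed blend between two orientations, which is the sort of thing that is awkward to do robustly by interpolating Euler angles directly.

    import numpy as np

    def quat_from_axis_angle(axis, angle):
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))  # (w, x, y, z)

    def slerp(q0, q1, t):
        # Spherical linear interpolation: constant angular velocity from q0 to q1.
        dot = np.dot(q0, q1)
        if dot < 0:            # take the short way around
            q1, dot = -q1, -dot
        if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
            q = (1 - t) * q0 + t * q1
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    q_start = quat_from_axis_angle([0, 1, 0], 0.0)
    q_end = quat_from_axis_angle([0, 1, 0], np.radians(170))
    midpoint = slerp(q_start, q_end, 0.5)   # smoothly halfway: 85 degrees about Y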

    Read the article

  • C++ Tutorial: 10 New STL Algorithms That Will Make You A More Productive Developer

    Unquestionably, the most effective tool for a C++ programmer's productivity is the Standard Library's rich collection of algorithms. In 2008, about 20 new algorithms were voted into the C++0x draft standard. These new algorithms let you, among other things, copy n elements intuitively, perform set theory operations, and handle partitions conveniently. Find out how to use these algorithms to make your code more efficient and intuitive.

    Read the article

  • Microsoft Access Small Business Solutions

    Microsoft Access Small Business Solutions, published by Wiley, proves that Microsoft Access can be used to create valuable business solutions in theory and in practice, by providing both the reasoning behind the database designs and the databases themselves on the CD that accompanies the book.

    Read the article

  • If you had to teach professional development to students that just graduated school, what would be the topics?

    - by user2567
    The idea is to give them more chances to be efficient in a professional environment. Most students are good with theory, and most of them are smart, but they have to learn how to solve common technical problems. They will become better programmers as they practice, but maybe we can help them with some introductory training. Which topics would you select for a two-week full-time training? It's an open question; I don't want to suggest things that would reduce the answers to a particular field.

    Read the article

  • Windows cannot open directory with too long name created by Linux

    - by Tim
    Hello! My laptop has two OSes: Windows 7 and Ubuntu 10.10. An NTFS partition belonging to Windows 7 is mounted in Ubuntu. In Ubuntu, I created a directory under a rather deep path and gave it a long name, specifically "a set of size-measurable subsets ie sigma algebra". Now in Windows, I cannot open the directory, which I guess is because the full path is too long, nor can I rename it. I was wondering if there is some way to access that directory under Windows? Preferably without changing the directory, but I will change it if necessary. Thanks and regards! Update: This is the output of "DIR /X" in cmd.exe, which does not show a shortened name for the directory:
    F:\science\math\Foundations of mathematics\set theory\whether element of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets>DIR /X
    Volume in drive F is Data
    Volume Serial Number is 0492-DD90
    Directory of F:\science\math\Foundations of mathematics\set theory\whether element of a set is also a set\when element is set\when element sets are subsets of a universal set\closed under some set operations\sigma algebra of sets
    03/14/2011  10:43 AM    <DIR>          .
    03/14/2011  10:43 AM    <DIR>          ..
    03/08/2011  10:09 AM    <DIR>          a set of size-measurable subsets ie sigma algebra
    02/12/2011  04:08 AM    <DIR>          example
    02/17/2011  12:30 PM    <DIR>          general
    03/13/2011  02:28 PM    <DIR>          mapping from sigma algebra to R or C i.e. measure
    02/12/2011  04:10 AM    <DIR>          msbl mapping from general msbl space to Borel msbl R or C
    02/12/2011  04:10 AM             4,928 new file~
    03/14/2011  10:42 AM    <DIR>          temp
    03/02/2011  10:58 AM    <DIR>          with Cartesian product of sets
                   1 File(s)          4,928 bytes
                   9 Dir(s)  39,509,340,160 bytes free

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query for TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree.  cteTree is a recursive CTE that expands the parent-child hierarchy of the personnel in the table @emp.  One useful feature of a recursive CTE is that data can be 'passed' from the parent rows to the child rows.  The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie, and Bob manages Carl and Dave. The hierarchyId will represent each person's position in this relationship in a single field.  In this simple example we could append the user IDs together into a varchar field as detailed below. This would enable us to select a branch of the tree by filtering with Where hierarchyId LIKE '1,2%' to select Bob and all his subordinates.  Naturally, this is not comprehensive enough to provide a full solution, but instead of concatenating the IDs together into a varchar-typed column, we can apply the same theory to a varbinary: CAST each ID to varbinary(4) (4 bytes being what is needed to store an integer) and build the hierarchyId from those.  For example: The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset by it and the resulting data will be in the order required. Now would probably be a good time to download the example file and, after the CTE 'cteTree', uncomment the line 'select * from cteTree'.  Mark this and all prior code and execute.  This will show you how the theory directly relates to the actual challenge data.  The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName.  This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below.  Notice also the 'Level' column that contains the depth of the employee within the tree.  I would encourage you to 'play' with the query, changing the order in the row_number() or the length of the cast in the hierarchyId, to see how that affects the outcome.  The next CTE, 'cteTreeWithOrderCount', is a join between cteTree and the @ord table, and COUNTs the number of orders per employee.  A LEFT JOIN is employed here to account for the occasion where an employee has made no sales.  Executing a 'Select * from cteTreeWithOrderCount' will return the result set as below.  The order here is unimportant, as this is only a staging point of the data and only the final result set in a CTE chain needs an ORDER BY clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums), one row per Level occurrence.  So, if Level is 2 then 2 rows are required.  This is done to expand the dataset and to create a new column (PathInc), which is the (n+1) integers contained within the hierarchyId.  For example, with the data for Robert King as given above, the below 3 rows will be returned.
    From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King's superiors within the tree.  Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the 'original' row per employee.
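    To make the 'byte order comparable' point concrete, here is a minimal Python sketch of the idea (it is not the challenge T-SQL; the ids and names are just the simplistic example above): pack each ancestor id into 4 big-endian bytes, exactly as the varbinary(4) CAST does, and plain byte-wise comparison then sorts and prefix-filters the tree correctly.

    import struct

    def hierarchy_id(path):
        # Pack a root-to-node list of integer ids into a byte string.
        # Each id takes 4 big-endian bytes, so deeper paths compare
        # byte-by-byte exactly like the varbinary column in the article.
        return b"".join(struct.pack(">I", n) for n in path)

    albert = hierarchy_id([1])        # Albert
    bob    = hierarchy_id([1, 2])     # Albert -> Bob
    carl   = hierarchy_id([1, 2, 3])  # Albert -> Bob -> Carl
    eddie  = hierarchy_id([1, 5])     # Albert -> Eddie

    # Sorting by the packed value walks the tree with parents before children,
    # which is what lets the final result set ORDER BY hierarchyId.
    print(sorted([eddie, carl, bob, albert]) == [albert, bob, carl, eddie])  # True

    # Selecting Bob's branch is a prefix test, the binary analogue of LIKE '1,2%'.
    print(carl.startswith(bob), eddie.startswith(bob))  # True False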

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba es453, for what it's worth) which is in a local network. I can reach the local network using a gateway. So far I did the following: ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway> Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I tried to install a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP JetDirect), and I provided the proper driver for the printer. In theory, once the tunnel is open I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:
    debug1: Local forwarding listening on ::1 port 19100.
    debug1: channel 0: new [port listener]
    debug1: Local forwarding listening on 127.0.0.1 port 19100.
    debug1: channel 1: new [port listener]
    debug1: Requesting no-more-sessions@openssh.com
    debug1: Entering interactive session.
    debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
    debug1: channel 2: new [direct-tcpip]
    debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
    debug1: channel 3: new [direct-tcpip]
    channel 2: open failed: connect failed: Connection timed out
    debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
    debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
    debug1: channel 2: new [direct-tcpip]
    channel 3: open failed: connect failed: Connection timed out
    debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
    channel 2: open failed: connect failed: Connection timed out
    debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
    debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
    debug1: channel 2: new [direct-tcpip]
    As a further debugging test I tried the following. From a machine inside the local network I did a telnet <IP_printer> 9100, got access, wrote some random thing, closed the connection, and correctly I got a print of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open; the telnet succeeded but, again, the printer didn't print anything, and I got the usual channel x: open failed: errors. I'm not a great expert on the matter, I just thought that in theory it was possible to do something like this, but maybe there is something that I didn't consider or did wrong. Any clue? Thanks! Simone [update] As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine I did ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway> (note that now the machine, the gateway and the printer are all in the same local network), then I tried the telnet test again with telnet localhost 19100. I got access and everything, but I didn't get the printout, just the usual error channel 2: open failed: connect failed: Connection timed out. Maybe I am missing some other connection that needs to be forwarded, or maybe this is not allowed by the administrators.
Of course, if I connect via ssh tunneling to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy...:-), I would like to have a more 'elegant' and transparent way to do that.

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, yet distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.) I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem. Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like Google Streetview: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I would first create a rendering from the "front view", then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated style looks: Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"! Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
    fx and fy: pixel positions x and y
    rayPos: vec3 for the ray starting position in world-space (calculated as below)
    rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
    rayStep: a temporary vec3
    camPos: vec3 for the camera position in world space
    camRad: vec3 for camera rotation in radians
    pmat: typical perspective projection matrix
    The algorithm / pseudocode:
    // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
    rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
    // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
    rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
    // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
    rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
    // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
    rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))
    // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
    rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
    // perspective projection
    rayDir.Normalize()
    rayDir.MultMat(pmat)
    // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
    rayPos.Add(camPos)
    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
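    For comparison, this is a minimal NumPy sketch of the conventional way a yaw rotation is usually folded into a raycaster. It is not the poster's engine code: the names are made up, there is no aspect-ratio correction and no projection matrix. The idea it shows is to build the un-rotated direction through the pixel first, then rotate that single direction vector, leaving the ray origin at the camera.

    import numpy as np

    def make_ray(fx, fy, width, height, cam_pos, yaw, plane_dist=1.0):
        # Direction through the pixel in camera space (camera looks down -z here).
        d = np.array([(fx / width) - 0.5, (fy / height) - 0.5, -plane_dist])
        d /= np.linalg.norm(d)
        # Rotation about the Y axis by the camera yaw.
        c, s = np.cos(yaw), np.sin(yaw)
        rot_y = np.array([[  c, 0.0,   s],
                          [0.0, 1.0, 0.0],
                          [ -s, 0.0,   c]])
        return np.asarray(cam_pos, dtype=float), rot_y @ d  # origin, direction

    # The origin stays at the camera; only the direction is rotated, so a 39-degree
    # yaw exposes the side faces instead of producing a rotated flat render.
    origin, direction = make_ray(320, 240, 640, 480, cam_pos=(0, 0, 5), yaw=np.radians(39))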

    Read the article

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is "Greatness Isn't Born. It's Grown. Here's How." The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate. When a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone-over-copper to high-speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long-held theory that it takes 10,000 hours of practice to achieve mastery over a given subject. 10,000 hours or 10 years, as in Teach Yourself Programming in Ten Years and others. However, it is not just that more hours equal more mastery. The other factor that Coyle identifies is deep practice: practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: "Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy." Challenging yourself, continuing to persist in the face of repeated failures, practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And the balancing contest ensues. I just finished working on a proof of concept (POC) we did for a project we are bidding on. Completely time-boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated as time was ticking; I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan in place (writing the HTML into the response by hand), I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness.

    Read the article

  • Project Euler considered harmful

    - by xxxxxxx
    Hi, I've done some Project Euler problems and I was able to solve them very fast because I already knew all the theory behind them. I learned that "accidentally", because I also had to learn it for university. I used to solve olympiad problems as well; I wasn't very good, but I was solving some of them. I've reached the conclusion that Project Euler problems are taken out of their context (and olympiad problems as well). That's why they are hard. Mathematics and its theory is taught in order to make the problems easy. However, Project Euler apparently is an invitation to make them hard again. Why? I honestly think this is a complete waste of time. Mathematicians had centuries at their disposal to solve math problems, and they developed theories to explain properly why certain things happen. I think olympiad problems and Project Euler problems are really useless. What's your take on Project Euler? Do you get something out of it, or do you just find formulas on some websites, implement the code fast, get the result and solve the problem?

    Read the article

  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections. We build our own fault-tolerating data replication services that use BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side, instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serves that up as the HTTP response. That is the theory. ;) This theory fails in artificial lab scenarios, but I would not let the system deploy in real-life scenarios unless we can overcome that. Lab scenario: I have the BITS client and IIS on the same developer machine, so practically I have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". With each HTTP request, BITS wants to grab greater and greater file ranges (we are talking about downloading CD ISO files and videos), demanding 20-40MB inside a single HTTP request, a size that I am not comfortable pulling into memory on the server side in one go. I can overcome that simply by giving less than demanded. That is OK. However, BITS gets really "confident" and "arrogant", demanding files WITHOUT specifying the download range, i.e. it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600MB file. If I just provide the starting 1MB range of the file, the BITS client keeps sending HTTP requests for the same file without a download range to continue; it hammers its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several tries and reports an error. Any thoughts?
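    For what it's worth, here is a language-neutral Python sketch of the range-capping behaviour the poster says already works. It is not the ASP.NET service itself, MAX_CHUNK and the header handling are illustrative only, and it deliberately does not address the open question of requests that arrive with no Range header at all.

    import os, re

    MAX_CHUNK = 4 * 1024 * 1024  # cap per response, so a greedy client can't force the whole file into memory

    def read_range(path, range_header):
        size = os.path.getsize(path)
        m = re.match(r"bytes=(\d+)-(\d*)", range_header or "")
        start = int(m.group(1)) if m else 0
        end = int(m.group(2)) if m and m.group(2) else size - 1
        end = min(end, start + MAX_CHUNK - 1, size - 1)   # give less than demanded when necessary
        with open(path, "rb") as f:
            f.seek(start)
            body = f.read(end - start + 1)
        # 206 Partial Content plus Content-Range tells the client exactly what it got,
        # so it can come back for the next slice.
        headers = {"Content-Range": f"bytes {start}-{end}/{size}",
                   "Content-Length": str(len(body))}
        return 206, headers, body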

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from Blender into an engine of sorts that I'm working on, but I'm getting the animations wrong. I can tell the basic motion paths are being followed, but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from Blender in my exporter script. I'll explain the theory, the engine animation system and my Blender export script, hoping someone might catch the error in one or all of these. The theory (I'm using column-major ordering since that's what I use in the engine, it being OpenGL-based): Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I were to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix: a transform from j's local space to its parent space, which I'll denote Bj. If j were part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is, to its parent's space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a set of frames, each with a second transform Cj, which works the same as Bj only for a different, arbitrary spatial configuration of joint j. Cj still takes vertices from j space to world space, but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n, I need to: (1) take v from world space to joint j space, (2) modify j (while v stays fixed in j space and is thus taken along in the transformation), and (3) take v back from the modified j space to world space. So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here. I said the mesh to which v belongs has a transform M which takes from model space to world space. I've also read in a couple of textbooks that v needs to be transformed from model space to joint space, but I said in step (1) that v needs to be transformed from world space to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M and not v, but I've tried changing this and it just screws things up in a different way, because there's something else wrong. Finally, if we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But since skinning is defined as v' = Cj * Bj^-1 * v, Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v. Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton. Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image).
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
    skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays with the wrong orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis, according to my transform in the export script). And the rotation angle is counterclockwise instead of clockwise. If I try with more than one joint, then it's almost as if the second joint rotates in its own, different coordinate space and does not follow its parent's transform 100%, which I assume it should given my animation subsystem, which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?
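    For the two-joint case described in the theory section above, here is a minimal NumPy sketch (not the poster's engine code; the joint offsets and the 45-degree pose are made up) of the chained form v' = C1 * B1^-1 * v. It shows a vertex bound to the child joint swinging with the whole chain, which is the behaviour the engine's skinning palette is meant to reproduce.

    import numpy as np

    def translation(v):
        m = np.eye(4); m[:3, 3] = v; return m

    def rotation_z(a):
        c, s = np.cos(a), np.sin(a)
        m = np.eye(4); m[:2, :2] = [[c, -s], [s, c]]; return m

    # Bind pose: joint0 at the origin, joint1 one unit above it (local offsets).
    B0 = translation([0, 0, 0])            # joint0 local -> world
    B1 = B0 @ translation([0, 1, 0])       # joint1 local -> world (child of joint0)

    # Current pose: joint0 rotates 45 degrees about Z, joint1 keeps its local offset.
    C0 = rotation_z(np.pi / 4)
    C1 = C0 @ translation([0, 1, 0])

    # Skinning matrix for a vertex bound to joint1: world -> bind joint1 space -> posed world.
    skin1 = C1 @ np.linalg.inv(B1)

    v = np.array([0, 2, 0, 1.0])           # a world-space vertex above joint1 in bind pose
    print(skin1 @ v)                        # the vertex swings with the whole chain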

    Read the article

  • What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

    - by Tom
    I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's an "obvious" list of such analogies, off the top of my head:
    program/definition | proof
    type/declaration | proposition
    inhabited type | theorem
    function | implication
    function argument | hypothesis/antecedent
    function result | conclusion/consequent
    function application | modus ponens
    recursion | induction
    identity function | tautology
    non-terminating function | absurdity
    tuple | conjunction (and)
    disjoint union | exclusive disjunction (xor)
    parametric polymorphism | universal quantification
    So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list. For example, here are some programming notions for which I'm unaware of pithy names in logic:
    currying | "((a & b) => c) iff (a => (b => c))"
    scope | "known theory + hypotheses"
    And here are some logical concepts which I haven't quite pinned down in programming terms:
    primitive type? | axiom
    set of valid programs? | theory
    ? | disjunction (or)

    Read the article

  • Pecking order of pigeons?

    - by sc_ray
    I was going through problems on graph theory posted by Prof. Ericksson from my alma mater and came across this rather unique question about pigeons and their innate tendency to form pecking orders. The question goes as follows: Whenever groups of pigeons gather, they instinctively establish a pecking order. For any pair of pigeons, one pigeon always pecks the other, driving it away from food or potential mates. The same pair of pigeons always chooses the same pecking order, even after years of separation, no matter what other pigeons are around. Surprisingly, the overall pecking order can contain cycles: for example, pigeon A pecks pigeon B, which pecks pigeon C, which pecks pigeon A. Prove that any finite set of pigeons can be arranged in a row from left to right so that every pigeon pecks the pigeon immediately to its left. Since this is a question on graph theory, the first thing that crossed my mind is that this is just asking for a topological sort of a graph of relationships (the relationships being the pecking order). What made this a little more complex was the fact that there can be cyclic relationships between the pigeons. If we have a cyclic dependency as follows: A-B-C-A, where A pecks B, B pecks C and C goes back and pecks A, and we represent it in the way suggested by the problem, we have something as follows: C B A. But the above row ordering does not factor in the pecking order between C and A. I had another idea of solving it by mathematical induction, where the base case is two pigeons arranged according to their pecking order, assuming the pecking-order arrangement is valid for n pigeons and then proving it to be true for n+1 pigeons. I am not sure if I am going down the wrong track here. Some insight into how I should be analyzing this problem would be helpful. Thanks
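    A minimal Python sketch of that induction step (the pecks predicate and the pigeon names are illustrative): insert each new pigeon immediately to the left of the first pigeon in the row that pecks it, or at the far right if none does. Only adjacent pairs ever need to agree, which is why cycles such as A-B-C-A cause no trouble.

    def arrange(pigeons, pecks):
        # Order pigeons left-to-right so each one pecks its immediate left
        # neighbour. pecks(a, b) is True when a pecks b; for any distinct pair
        # exactly one direction holds (a tournament).
        row = []
        for q in pigeons:
            # First position whose current occupant pecks the newcomer.
            i = next((k for k, p in enumerate(row) if pecks(p, q)), len(row))
            # Left of that spot: q pecks everyone before index i, and row[i] pecks q.
            row.insert(i, q)
        return row

    # The cyclic example from the question: A pecks B, B pecks C, C pecks A.
    order = {("A", "B"), ("B", "C"), ("C", "A")}
    pecks = lambda a, b: (a, b) in order
    row = arrange(["A", "B", "C"], pecks)
    print(row, all(pecks(row[k + 1], row[k]) for k in range(len(row) - 1)))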

    Read the article

  • Easy way to lock a file on a remote machine (windows)?

    - by roufamatic
    I've tracked down an error in my logs and am trying to reproduce it. My theory is that a file sometimes gets locked in a specific folder, and when the application (ASP.NET) tries to delete that folder it hangs. I don't have the application running on my own machine, so I'm debugging this on a remote server. But for the life of me, I can't seem to figure out a way to lock a file that prevents it from being deleted by the process. My first thought was to map the network path to a local drive and just leave a command prompt open to that folder. Locally that always fouls up my folder deletes, but apparently SMB is a bit more robust and doesn't grant me a lock. After that I created an infinite-loop VBScript in the folder and executed it remotely. The file was deleted out from underneath the executing code. Man! I then tried creating a file on the server in that folder and removing all permissions. That didn't do the trick. I don't have access to the IIS settings, so perhaps it's running under a privileged system account. So: what's a program that you know is free that I can quickly use to create an exclusive lock on a file so I can test my delete theory? Like a really, really bad Notepad clone or something. :-)
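    Not from the thread, but if a scripting runtime happens to be available on that server, a few lines of Python are enough to act as that "really bad Notepad clone" (the path below is hypothetical): CPython opens files on Windows without delete sharing, so while this script sleeps the file, and therefore its containing folder, cannot be deleted.

    # hold_lock.py -- keep an open handle on a file so its folder can't be deleted.
    import time

    # Hypothetical path; point it at a file inside the folder under test.
    path = r"C:\inetpub\appdata\uploads\probe.txt"

    with open(path, "a") as handle:   # opened without delete sharing on Windows
        print("holding", path, "- press Ctrl+C to release")
        while True:
            time.sleep(60)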

    Read the article

  • Effective books for learning the intricacies of business application development?

    - by OffApps Cory
    I am a self-taught "developer". I use the term loosely because I only know enough to make myself dangerous. I have no theory background, and I only pick up things to get this little tool to work or make that control do what I want. That said, I am looking for some reading material that explains some of the theory behind application development, especially from a business standpoint. Really, I need to understand what all of these terms that float around actually mean: business logic layer, UI abstraction level and all that. Anyone got a reading list that they feel helped them understand this stuff? I know how to code stuff up so that it works. It is not pretty, mostly because I don't know the elegant way of doing it, and it is not planned out very well (I also don't know how to plan an application). Any help would be appreciated. I have read a number of books on what I thought was the subject, but they all seem to rehash basic coding and whatnot. This doesn't have to be specific to VB.NET or WPF (or Entity Framework), but anything with those items would be quite helpful.

    Read the article

  • Spring MVC: How to get the remaining path after the controller path?

    - by Willis Blackburn
    I've spent over an hour trying to find the answer to this question, which seems like it should reflect a common use case, but I can't figure it out! Basically I am writing a file-serving controller using Spring MVC. The URLs are of the format http://www.bighost.com/serve/the/path/to/the/file.jpg, in which the part after "/serve" is the path to the requested file, which may have an arbitrary number of path segments. I have a controller like this: @Controller class ServerController { @RequestMapping(value = "/serve/???") public void serve(???) { } } What I am trying to figure out is: What do I use in place of "???" to make this work? I have two theories about how this should work. The first theory is that I could replace the first "???" in the RequestMapping with a path variable placeholder that has some special syntax meaning "capture to the end of the path." If a regular placeholder looks like "{path}" then maybe I could use "{path:**}" or "{path:/}" or something like that. Then I could use a @PathVariable annotation to refer to the path variable in the second "???". The other theory is that I could replace the first "???" with "**" (match anything) and that Spring would give me an API to obtain the remainder of the path (the part matching the "**"). But I can't find such an API. Thanks for any help you can provide!

    Read the article

  • How do I set the Transaction Isolation in EJB?

    - by Nitesh Panchal
    Hello, I am not able to find a way to set the transaction isolation level in EJB. Can anybody tell me how to set it? I am using persistence. I have looked at the following classes: EntityManager, EntityManagerFactory, UserTransaction. None of them seems to have any method like setTransactionIsolation or such. Do we need to change persistence.xml? I just read a book named Mastering EJB 3.0, 4th edition. It gives a full 10 pages of theory about isolation levels, covering which problems occur where and such things, but at the end it gives this paragraph: "As we now know, the EJB standard does not deal with isolation levels directly, and rightly so. EJB is a component specification. It defines the behavior and contracts of a business component with clients and middleware infrastructure (containers) such that the component can be rendered as various middleware services properly. EJBs therefore are transactional components that interact with resource managers, such as the JDBC resource manager or JMS resource manager, via JTS, as part of a transaction. They are not, hence, resource components in themselves. Since isolation levels are very specific to the behavior and capabilities of the underlying resources, they should therefore be specified at the resource API levels." What exactly does it mean? What is meant by resource-level APIs? Please help me. If persistence has no way to set the isolation level, then why do they give such a huge theory in an EJB book and make it heavy in weight unnecessarily :(

    Read the article

  • Callbacks on GUI Thread

    - by miguel
    We have an external data provider which, in its constructor, takes a callback thread for returning data upon. There are some issues in the system which I suspect are related to threading; however, in theory they cannot be, due to the fact that the callbacks should all be returned on the same thread. My question is, does code like this require thread synchronisation? class Foo { ExternalDataProvider _provider; public Foo() { // This is the c'tor for the external data provider, taking a callback loop as param _provider = new ExternalDataProvider(UILoop); _provider.DataArrived += ExternalProviderCallbackMethod; } public void ExternalProviderCallbackMethod() { var itemArray = new String[4] { "item1", "item2", "item3", "item4" }; for (int i = 0; i < itemArray.Length; i++) { string s = itemArray[i]; switch(s) { case "item1": DoItem1Action(); break; case "item2": DoItem2Action(); break; default: DoDefaultAction(); break; } } } } The issue is that, very infrequently, DoItem2Action is executing when DoItem1Action should be executing. Is it at all possible threading is at fault here? In theory, as all callbacks arrive on the same thread, they should be serialized, right? So there should be no need for thread sync here?

    Read the article

  • All Callbacks on GUI Thread - Multithreading issues possible?

    - by miguel
    We have an external data provider which, in its constructor, takes a callback thread for returning data upon. There are some issues in the system which I suspect are related to threading; however, in theory they cannot be, due to the fact that the callbacks should all be returned on the same thread. My question is, does code like this require thread synchronisation? class Foo { ExternalDataProvider _provider; public Foo() { // This is the c'tor for the external data provider, taking a callback loop as param _provider = new ExternalDataProvider(UILoop); _provider.DataArrived += ExternalProviderCallbackMethod; } public void ExternalProviderCallbackMethod(...) { //...(code omitted) var itemArray = new String[4] { "item1", "item2", "item3", "item4" }; for (int i = 0; i < itemArray.Length; i++) { string s = itemArray[i]; switch(s) { case "item1": DoItem1Action(); break; case "item2": DoItem2Action(); break; default: DoDefaultAction(); break; } //...(code omitted) } } } The issue is that, very infrequently, DoItem2Action is executing when DoItem1Action should be executing. Is it at all possible threading is at fault here? In theory, as all callbacks arrive on the same thread, they should be serialized, right? So there should be no need for thread sync here?

    Read the article

  • Criteria for triggering garbage collection in .Net

    - by Kennet Belenky
    I've come across some curious behavior with regard to garbage collection in .NET. The following program will throw an OutOfMemoryException very quickly (after less than a second on a 32-bit, 2GB machine). The Foo finalizer is never called.
    class Foo
    {
        static Dictionary<Guid, WeakReference> allFoos = new Dictionary<Guid, WeakReference>();
        Guid guid = Guid.NewGuid();
        byte[] buffer = new byte[1000000];
        static Random rand = new Random();

        public Foo()
        {
            // Uncomment the following line and the program will run forever.
            // rand.NextBytes(buffer);
            allFoos[guid] = new WeakReference(this);
        }

        ~Foo()
        {
            allFoos.Remove(guid);
        }

        static public void Main(string[] args)
        {
            for (; ; )
            {
                new Foo();
            }
        }
    }
    If the rand.NextBytes line is uncommented, it will run ad infinitum, and the Foo finalizer is regularly invoked. Why is that? My best guess is that in the former case, either the CLR or the Windows VMM is lazy about allocating physical memory. The buffer never gets written to, so the physical memory is never used. When the address space runs out, the system crashes. In the latter case, the system runs out of physical memory before it runs out of address space, the GC is triggered and the objects are collected. However, here's the part I don't get. Assuming my theory is correct, why doesn't the GC trigger when the address space runs low? If my theory is incorrect, then what's the real explanation?

    Read the article

  • method to change apache config via the shell with whm/cpanel

    - by Chiggsy
    How can I make changes to the apache config on a whm/cpanel setup from the shell and have them integrate properly? I read the theory of the config system and regardless of my feelings on the matter I understand where they are coming from. Still, in that world view, there must be a way to interact with the system from a command line interface, right?

    Read the article

  • Why do VM snapshots affect performance?

    - by Samselvaprabu
    I read in one of the VMware KB articles that snapshots directly affect VM performance, but my team keeps asking me how snapshots can affect performance. I would like to give them a solid reason behind the statement that snapshots are performance killers. Can anyone explain a little of the theory behind why snapshots actually affect performance? Is it just because the disk I/O rate of the hard disk would be slower?

    Read the article
