Search Results

Search found 555 results on 23 pages for 'projection'.


  • LINQ to Entities Projection of Nested List

    - by Matthew
    Assuming these objects...

        class MyClass {
            int ID { get; set; }
            string Name { get; set; }
            List<MyOtherClass> Things { get; set; }
        }

        class MyOtherClass {
            int ID { get; set; }
            string Value { get; set; }
        }

    How do I perform a LINQ to Entities query, using a projection like the one below, that will give me a List<MyOtherClass>? This works fine with an IEnumerable (assuming MyClass.Things is an IEnumerable), but I need to use List.

        MyClass myClass = (from MyClassTable mct in this.Context.MyClassTableSet
                           select new MyClass {
                               ID = mct.ID,
                               Name = mct.Name,
                               Things = (from MyOtherClass moc in mct.Stuff
                                         where moc.IsActive
                                         select new MyOtherClass { ID = moc.ID, Value = moc.Value })
                                        .AsEnumerable()
                           }).FirstOrDefault();

    Thanks in advance for the help!
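    One pattern that is often suggested for this kind of nested-list materialization (a sketch, not verified against this exact model: it projects to an intermediate shape on the server, then switches to LINQ to Objects so the List can be built in memory) is:

        MyClass myClass = this.Context.MyClassTableSet
            .Select(mct => new {
                mct.ID,
                mct.Name,
                Things = mct.Stuff
                            .Where(moc => moc.IsActive)
                            .Select(moc => new { moc.ID, moc.Value })
            })
            .AsEnumerable()  // everything below runs in memory, so List<T> is allowed
            .Select(x => new MyClass {
                ID = x.ID,
                Name = x.Name,
                Things = x.Things
                          .Select(t => new MyOtherClass { ID = t.ID, Value = t.Value })
                          .ToList()
            })
            .FirstOrDefault();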

    Read the article

  • Build Interactive Floor With Projection !?

    - by Synxmax
    My boss saw an interactive floor at an exhibition and asked me to create one instead of buying it, and I don't have enough time to research it properly. I'm an ActionScript developer with some skills in Java and C# programming. I've done some motion tracking with a simple webcam, and the idea came to mind that if I use an infrared or thermographic camera instead of a simple one, I could get better positioning with the camera placed above the floor. So I came here to ask: is there any resource, tip, library, or API I should know about before getting into this? Even a resource or article for another language (C++, C, ...) could help; I just haven't had enough time to test lots of approaches. If you search for "interactive floors" or "interactive floor projection", you can find companies who provide such a thing. Thanks in advance (and sorry for my poor English; my French would be better :D)

    Read the article

  • Hibernate criterion Projection alias not being used

    - by sbzoom
    Do Hibernate Projection aliases even work? I could swear they just don't. At least, they don't do what I would expect them to do. Here is the Java:

        return sessionFactory.getCurrentSession()
            .createCriteria(PersonProgramActivity.class)
            .setProjection(Projections.projectionList()
                .add(Projections.alias(Projections.sum("numberOfPoints"), "number_of_points"))
                .add(Projections.groupProperty("person.id")))
            .setFirstResult(start)
            .setFetchSize(size)
            .addOrder(Order.desc("numberOfPoints"))
            .list();

    Here is the SQL that it generates:

        select sum(this_.number_of_points) as y0_, this_.person_id as y1_
        from PERSON_PROGRAM_ACTIVITY this_
        group by this_.person_id
        order by this_.number_of_points desc

    It doesn't seem to use the alias at all. I would think setting the alias would mean that sum(this_.number_of_points) would be aliased as "number_of_points" and not "y0_". Is there some trick I am missing? Thanks.

    Read the article

  • How do I create a bounding frustum from a view & projection matrix?

    - by Narf the Mouse
    Given a left-handed Projection matrix, a left-handed View matrix, and a ViewProj matrix (View * Projection): how do I create a bounding frustum comprised of near, far, left, right, top, and bottom planes? The only example I could find on Google (Tutorial 16: Frustum Culling) doesn't seem to work; for example, if the math is used as given, the near plane's distance comes out negative, which places the near plane behind the camera...
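    The Gribb/Hartmann method extracts the six planes directly from the combined matrix. A minimal sketch, assuming a row-vector convention (position * View * Projection, as in XNA/Direct3D) with clip-space z in [0, 1]; the resulting normals point inward:

        static float[] MakePlane(float a, float b, float c, float d) {
            float len = (float)Math.Sqrt(a * a + b * b + c * c);
            return new[] { a / len, b / len, c / len, d / len }; // normalized (a, b, c, d)
        }

        // m[row, col] is the combined View * Projection matrix.
        static float[][] ExtractFrustumPlanes(float[,] m) {
            return new[] {
                MakePlane(m[0,3] + m[0,0], m[1,3] + m[1,0], m[2,3] + m[2,0], m[3,3] + m[3,0]), // left
                MakePlane(m[0,3] - m[0,0], m[1,3] - m[1,0], m[2,3] - m[2,0], m[3,3] - m[3,0]), // right
                MakePlane(m[0,3] + m[0,1], m[1,3] + m[1,1], m[2,3] + m[2,1], m[3,3] + m[3,1]), // bottom
                MakePlane(m[0,3] - m[0,1], m[1,3] - m[1,1], m[2,3] - m[2,1], m[3,3] - m[3,1]), // top
                MakePlane(m[0,2],          m[1,2],          m[2,2],          m[3,2]),          // near (z' >= 0)
                MakePlane(m[0,3] - m[0,2], m[1,3] - m[1,2], m[2,3] - m[2,2], m[3,3] - m[3,2])  // far
            };
        }

    (In XNA specifically, new BoundingFrustum(view * projection) performs this same extraction for you.)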

    Read the article

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected about how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS (
            SELECT p.[ID],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS (
            SELECT p.[ID],
                   (SELECT TOP(1) [Color] FROM [Preference]
                    WHERE p.ID = ID AND [Color] IS NOT NULL
                    ORDER BY [LastModified] DESC) [Color],
                   (SELECT TOP(1) [FirstName] FROM [Preference]
                    WHERE p.ID = ID AND [FirstName] IS NOT NULL
                    ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order in which the subqueries are written. A filter is applied to the result of the outer join for color, to remove rows where the color is not 'Gray'. (It's odd to me that SQL Server would use an outer join for the color subquery, since I have a non-null constraint on its result, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1, because fewer rows are involved in the second join. All reasons for constructing such a statement aside, is this expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order in which the subqueries are written?

    Read the article

  • OpenGL ES 2.0 equivalent of glOrtho()?

    - by Zippo
    In my iPhone app, I need to project a 3D scene into the 2D coordinates of the screen for some calculations. My objects go through various rotations, translations, and scalings, so I figured I need to multiply the vertices by the ModelView matrix first, then multiply by the orthographic projection matrix. First of all, am I on the right track? I have the ModelView matrix, but need the projection matrix. Is there a glOrtho() equivalent in ES 2.0?
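    ES 2.0 drops the fixed-function matrix stack, so the usual answer is to build the matrix yourself and upload it as a uniform. A minimal sketch of the matrix glOrtho would produce (column-major, OpenGL convention; the arithmetic is the same in any language, shown here in C#):

        static float[] Ortho(float l, float r, float b, float t, float n, float f) {
            return new float[] {
                2f / (r - l),       0f,                 0f,                 0f,
                0f,                 2f / (t - b),       0f,                 0f,
                0f,                 0f,                -2f / (f - n),       0f,
                -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1f
            };
        }

    The resulting 16 floats can be passed straight to glUniformMatrix4fv.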

    Read the article

  • PlaneProjection is not working in Silverlight

    - by vishal
    In a Silverlight project, using the Name attribute in PlaneProjection gives:

        Error 1: The type or namespace name 'PlaneProjection' could not be found (are you missing a using directive or an assembly reference?)

    The code I used for that:

        <Image Name="blabla.jpg" Height="200" Width="200">
            <Image.Projection>
                <PlaneProjection Name="pp" />
            </Image.Projection>
        </Image>

    Read the article

  • WebGL: adding projection doesn't display object

    - by dazed3confused
    I am having a look at WebGL and trying to render a cube, but I am having a problem when I try to add projection into the vertex shader. I have added a uniform for the projection matrix, but when I use it to multiply the modelview and position, the cube stops displaying. I'm not sure why, and was wondering if anyone could help? I've tried looking at a few examples but just can't get this to work.

    Vertex shader:

        attribute vec3 aVertexPosition;

        uniform mat4 uMVMatrix;
        uniform mat4 uPMatrix;

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            //gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0);
        }

    Fragment shader:

        #ifdef GL_ES
        precision highp float; // Not sure why this is required, need to google it
        #endif

        uniform vec4 uColor;

        void main() {
            gl_FragColor = uColor;
        }

    JavaScript:

        function init() {
            // Get a reference to our drawing surface
            canvas = document.getElementById("webglSurface");
            gl = canvas.getContext("experimental-webgl");

            /** Create our simple program **/
            // Get our shaders
            var v = document.getElementById("vertexShader").firstChild.nodeValue;
            var f = document.getElementById("fragmentShader").firstChild.nodeValue;

            // Compile vertex shader
            var vs = gl.createShader(gl.VERTEX_SHADER);
            gl.shaderSource(vs, v);
            gl.compileShader(vs);

            // Compile fragment shader
            var fs = gl.createShader(gl.FRAGMENT_SHADER);
            gl.shaderSource(fs, f);
            gl.compileShader(fs);

            // Create program and attach shaders
            program = gl.createProgram();
            gl.attachShader(program, vs);
            gl.attachShader(program, fs);
            gl.linkProgram(program);

            // Some debug code to check for shader compile errors and log them to console
            if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS))
                console.log(gl.getShaderInfoLog(vs));
            if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS))
                console.log(gl.getShaderInfoLog(fs));
            if (!gl.getProgramParameter(program, gl.LINK_STATUS))
                console.log(gl.getProgramInfoLog(program));

            /* Create some simple VBOs */
            // Vertices for a cube
            var vertices = new Float32Array([
                -0.5,  0.5,  0.5, // 0
                -0.5, -0.5,  0.5, // 1
                 0.5,  0.5,  0.5, // 2
                 0.5, -0.5,  0.5, // 3
                -0.5,  0.5, -0.5, // 4
                -0.5, -0.5, -0.5, // 5
                -0.5,  0.5, -0.5, // 6
                -0.5, -0.5, -0.5  // 7
            ]);

            // Indices of the cube
            var indicies = new Int16Array([
                0, 1, 2,  1, 2, 3, // front
                5, 4, 6,  5, 6, 7, // back
                0, 1, 5,  0, 5, 4, // left
                2, 3, 6,  6, 3, 7, // right
                0, 4, 2,  4, 2, 6, // top
                5, 3, 1,  5, 3, 7  // bottom
            ]);

            // Create vertices object on the GPU
            vbo = gl.createBuffer();
            gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
            gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

            // Create indices object on the GPU
            ibo = gl.createBuffer();
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
            gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indicies, gl.STATIC_DRAW);

            gl.clearColor(0.0, 0.0, 0.0, 1.0);
            gl.enable(gl.DEPTH_TEST);

            // Render scene every 33 milliseconds
            setInterval(render, 33);
        }

        var mvMatrix = mat4.create();
        var pMatrix = mat4.create();

        function render() {
            // Set our viewport and clear it before we render
            gl.viewport(0, 0, canvas.width, canvas.height);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            gl.useProgram(program);

            // Bind appropriate VBOs
            gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);

            // Set the color for the fragment shader
            program.uColor = gl.getUniformLocation(program, "uColor");
            gl.uniform4fv(program.uColor, [0.3, 0.3, 0.3, 1.0]);

            // code.google.com/p/glmatrix/wiki/Usage
            program.uPMatrix = gl.getUniformLocation(program, "uPMatrix");
            program.uMVMatrix = gl.getUniformLocation(program, "uMVMatrix");

            mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 1.0, 10.0, pMatrix);
            mat4.identity(mvMatrix);
            mat4.translate(mvMatrix, [0.0, -0.25, -1.0]);

            gl.uniformMatrix4fv(program.uPMatrix, false, pMatrix);
            gl.uniformMatrix4fv(program.uMVMatrix, false, mvMatrix);

            // Set the position for the vertex shader
            program.aVertexPosition = gl.getAttribLocation(program, "aVertexPosition");
            gl.enableVertexAttribArray(program.aVertexPosition);
            gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 3 * 4, 0); // position

            // Render the object
            gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0);
        }

    Thanks in advance for any help.

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although I encourage the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries, and I have this working fairly well already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them; I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help is with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this:

        // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes
        delegate TResult FakeFunc<T, TResult>(T arg);
        delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2);

        abstract class Projection {
            public static Condition operator ==(Projection a, Projection b) {
                return new EqualsCondition(a, b);
            }
            public static Condition operator !=(Projection a, Projection b) {
                throw new NotImplementedException();
            }
        }

        class ColumnProjection : Projection {
            readonly Table table;
            readonly string columnName;

            public ColumnProjection(Table table, string columnName) {
                this.table = table;
                this.columnName = columnName;
            }
        }

        abstract class Condition { }

        class EqualsCondition : Condition {
            readonly Projection a;
            readonly Projection b;

            public EqualsCondition(Projection a, Projection b) {
                this.a = a;
                this.b = b;
            }
        }

        class TableView {
            readonly Table table;
            readonly Projection[] projections;

            public TableView(Table table, Projection[] projections) {
                this.table = table;
                this.projections = projections;
            }
        }

        class Table {
            public Projection this[string columnName] {
                get { return new ColumnProjection(this, columnName); }
            }

            public TableView Select(params Projection[] projections) {
                return new TableView(this, projections);
            }

            public TableView Select(FakeFunc<Table, Projection[]> projections) {
                return new TableView(this, projections(this));
            }

            public Table Join(Table other, Condition condition) {
                return new JoinedTable(this, other, condition);
            }

            public TableView Join(Table inner,
                                  FakeFunc<Table, Projection> outerKeySelector,
                                  FakeFunc<Table, Projection> innerKeySelector,
                                  FakeFunc<Table, Table, Projection[]> resultSelector) {
                Table join = new JoinedTable(this, inner,
                    new EqualsCondition(outerKeySelector(this), innerKeySelector(inner)));
                return join.Select(resultSelector(this, inner));
            }
        }

        class JoinedTable : Table {
            readonly Table left;
            readonly Table right;
            readonly Condition condition;

            public JoinedTable(Table left, Table right, Condition condition) {
                this.left = left;
                this.right = right;
                this.condition = condition;
            }
        }

    This allows me to use a fairly decent syntax in C# 2:

        Table table1 = new Table();
        Table table2 = new Table();

        TableView result = table1
            .Join(table2, table1["ID"] == table2["ID"])
            .Select(table1["ID"], table2["Description"]);

    But an even nicer syntax in C# 3:

        TableView result = from t1 in table1
                           join t2 in table2 on t1["ID"] equals t2["ID"]
                           select new[] { t1["ID"], t2["Description"] };

    This works well and gives me identical results to the first case. The problem is if I want to join in a third table:

        TableView result = from t1 in table1
                           join t2 in table2 on t1["ID"] equals t2["ID"]
                           join t3 in table3 on t1["ID"] equals t3["ID"]
                           select new[] { t1["ID"], t2["Description"], t3["Foo"] };

    Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?
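    For reference, the C# compiler translates the two-join query into chained method calls along these lines (a sketch of the expansion; the transparent identifier is shown as an explicit anonymous type):

        var result = table1
            .Join(table2, t1 => t1["ID"], t2 => t2["ID"],
                  (t1, t2) => new { t1, t2 })              // transparent identifier
            .Join(table3, x => x.t1["ID"], t3 => t3["ID"],
                  (x, t3) => new[] { x.t1["ID"], x.t2["Description"], t3["Foo"] });

    The second Join is invoked on whatever the first result selector returns, so supporting more than one join means the type returned there must itself expose a compatible Join (and Select); a Projection[] or an anonymous type won't.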

    Read the article

  • NHibernate Projection Components

    - by reggieboyYEAH
    Hello guys, I'm trying to hydrate a DTO using projections in NHibernate. This is my code:

        IList<PatientListViewModel> list = y.CreateCriteria<Patient>()
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("Birthdate"), "Birthdate")
                .Add(Projections.Property("Doctor.Id"), "DoctorId")
                .Add(Projections.Property("Gender"), "Gender")
                .Add(Projections.Property("Id"), "PatientId")
                .Add(Projections.Property("Patient.Name.Fullname"), "Fullname"))
            .SetResultTransformer(Transformers.AliasToBean<PatientListViewModel>())
            .List<PatientListViewModel>();

    This code is throwing an exception. Does anyone know what the problem is? Here is the error message:

        Message: could not resolve property: Patient.Name.Fullname of: OneCare.Domain.Entities.Patient
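    A sketch of one likely fix (an assumption read off the error text, not verified against this mapping: property paths in a criteria query are relative to the root entity, so the Patient. prefix should be dropped from the component path):

        .Add(Projections.Property("Name.Fullname"), "Fullname")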

    Read the article

  • Common Mercator Projection formulas for Google Maps not working correctly

    - by Tom Halladay
    I am building a tile overlay server for Google Maps in C#, and have found a few different code examples for calculating Y from latitude. After getting them to work in general, I started to notice certain cases where the overlays were not lining up properly. To test this, I made a test harness to compare Google Maps' Mercator LatToY conversion against the formulas I found online. As you can see below, they do not match in certain cases.

    Case #1, zoomed out: the problem is most evident when zoomed out. Up close, the problem is barely visible.

    Case #2, point proximity to the top & bottom of the viewing bounds: the problem is worse in the middle of the viewing bounds and gets better towards the edges. This behavior can negate the behavior of Case #1.

    The test: I created a Google Maps page to display red lines using the Google Maps API's built-in Mercator conversion, and overlaid it with an image drawn using the reference code for Mercator conversion, represented as black lines. Compare the difference.

    The results: check out the top-most and bottom-most lines. The problem gets visually larger but numerically smaller as you zoom in, and it all but disappears at closer zoom levels, regardless of screen orientation.

    The code. Google Maps client-side:

        var lat = 0;
        for (lat = -80; lat <= 80; lat += 5) {
            map.addOverlay(new GPolyline([new GLatLng(lat, -180), new GLatLng(lat, 0)], "#FF0033", 2));
            map.addOverlay(new GPolyline([new GLatLng(lat, 0), new GLatLng(lat, 180)], "#FF0033", 2));
        }

    Server-side (Tile Cutter: http://mapki.com/wiki/Tile_Cutter, OpenStreetMap wiki: http://wiki.openstreetmap.org/wiki/Mercator):

        protected override void ImageOverlay_ComposeImage(ref Bitmap ZipCodeBitMap)
        {
            Graphics LinesGraphic = Graphics.FromImage(ZipCodeBitMap);
            Int32 MapWidth = Convert.ToInt32(Math.Pow(2, zoom) * 255);
            Point Offset = Cartographer.Mercator2.toZoomedPixelCoords(North, West, zoom);
            TrimPoint(ref Offset, MapWidth);

            for (Double lat = -80; lat <= 80; lat += 5)
            {
                Point StartPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -179, zoom);
                Point EndPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -1, zoom);
                TrimPoint(ref StartPoint, MapWidth);
                TrimPoint(ref EndPoint, MapWidth);

                StartPoint.X = StartPoint.X - Offset.X;
                EndPoint.X = EndPoint.X - Offset.X;
                StartPoint.Y = StartPoint.Y - Offset.Y;
                EndPoint.Y = EndPoint.Y - Offset.Y;

                LinesGraphic.DrawLine(new Pen(Color.Black, 2),
                                      StartPoint.X, StartPoint.Y,
                                      EndPoint.X, EndPoint.Y);
                LinesGraphic.DrawString(lat.ToString(),
                                        new Font("Verdana", 10),
                                        new SolidBrush(Color.Black),
                                        new Point(Convert.ToInt32((width / 3.0) * 2.0), StartPoint.Y));
            }
        }

        protected void TrimPoint(ref Point point, Int32 MapWidth)
        {
            point.X = Math.Max(point.X, 0);
            point.X = Math.Min(point.X, MapWidth - 1);
            point.Y = Math.Max(point.Y, 0);
            point.Y = Math.Min(point.Y, MapWidth - 1);
        }

    So, has anyone ever experienced this? Dare I ask, resolved it? Or do you simply have a better C# implementation of Mercator coordinate conversion? Thanks!
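    For comparison, a minimal sketch of the standard Web Mercator latitude-to-pixel formula with 256-pixel tiles (note that the world width at a given zoom is 256 * 2^zoom, not 255 * 2^zoom):

        static double LatToPixelY(double latDeg, int zoom)
        {
            double sinLat = Math.Sin(latDeg * Math.PI / 180.0);
            double y = 0.5 - Math.Log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
            return y * 256 * Math.Pow(2, zoom); // pixel Y measured from the top edge
        }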

    Read the article

  • How to get visible size of DisplayObject with perspective projection

    - by Ain
    The following is entirely a math question. As we know, PerspectiveProjection delivers perspective transformations in 3D, represented by the interdependent values of fieldOfView and focalLength according to the following formula:

        focalLength = stageWidth/2 * (cos(fieldOfView/2) / sin(fieldOfView/2))

    Bjørn Gunnar Staal has prepared a good image to illustrate the situation. Q: How do I get the visible on-screen size of a DisplayObject (the cube in the above-linked image) to which PerspectiveProjection has been applied?
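    A sketch under one assumption (that Flash scales a point at depth z by focalLength / (focalLength + z), which is how the projection in the linked diagram behaves): the visible length of a screen-parallel segment is then

        // On-screen length of a screen-parallel segment of world length worldSize
        // at depth z, assuming scale = focalLength / (focalLength + z).
        static double ProjectedSize(double worldSize, double z, double focalLength)
        {
            return worldSize * focalLength / (focalLength + z);
        }

    For a rotated object like the cube, apply this per vertex (project each corner, then measure the 2D bounding box of the results).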

    Read the article

  • NHibernate join and projection properties

    - by devgroup
    Hello, I have a simple situation (shown in the image) and a simple SQL query:

        SELECT M.Name, A.Name, B.Name
        FROM Master M
        LEFT JOIN DetailA A ON M.DescA = A.Id
        LEFT JOIN DetailB B ON M.DescB = B.Id

    How do I achieve the same effect in NHibernate using the Criteria API?
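    A minimal sketch of one way to express this (entity, property, and alias names assumed from the SQL; untested):

        // Left outer joins via CreateAlias, projecting the three name columns.
        var rows = session.CreateCriteria<Master>("M")
            .CreateAlias("M.DescA", "A", NHibernate.SqlCommand.JoinType.LeftOuterJoin)
            .CreateAlias("M.DescB", "B", NHibernate.SqlCommand.JoinType.LeftOuterJoin)
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("M.Name"), "MasterName")
                .Add(Projections.Property("A.Name"), "DetailAName")
                .Add(Projections.Property("B.Name"), "DetailBName"))
            .List();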

    Read the article

  • Hibernate Search and projection - different fields on different objects

    - by Odelya
    Hi! I have the following code:

        Criteria crit = sess.createCriteria(Product.class);
        ProjectionList proList = Projections.projectionList();
        proList.add(Projections.property("name"));
        proList.add(Projections.property("price"));
        crit.setProjection(proList);

    But I also have User.class, and I would like "name" to be restricted on User.class and "price" to come from the Product class. How can I restrict different columns on different objects in Hibernate Search?

    Read the article

  • What is going on in this SAT/vector projection code?

    - by ssb
    I'm looking at the example XNA SAT collision code presented here: http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml

    See the following code:

        private int GenerateScalar(Vector2 theRectangleCorner, Vector2 theAxis)
        {
            // Using the formula for vector projection: take the corner being
            // passed in and project it onto the given axis.
            float aNumerator = (theRectangleCorner.X * theAxis.X) + (theRectangleCorner.Y * theAxis.Y);
            float aDenominator = (theAxis.X * theAxis.X) + (theAxis.Y * theAxis.Y);
            float aDivisionResult = aNumerator / aDenominator;
            Vector2 aCornerProjected = new Vector2(aDivisionResult * theAxis.X, aDivisionResult * theAxis.Y);

            // Now that we have our projected vector, calculate a scalar of that
            // projection that can be used to more easily do comparisons.
            float aScalar = (theAxis.X * aCornerProjected.X) + (theAxis.Y * aCornerProjected.Y);
            return (int)aScalar;
        }

    I think the problems I'm having with this come mostly from translating physics concepts into data structures. For example, earlier in the code there is a calculation of the axes to be used, and these are stored as Vector2; they are found by subtracting one point from another, but those points are also stored as Vector2s. So are the axes being stored as slopes in a single Vector2? Next, what exactly does the Vector2 produced by the vector projection code represent? That is, I know it represents the projected vector, but as it pertains to a Vector2, what does this represent? A point on a line? Finally, what does the scalar at the end actually represent? It's fine to tell me that you're getting a scalar value of the projected vector, but none of the information I can find online seems to explain a scalar of a vector as it's used in this context. I don't see angles or magnitudes with these vectors, so I'm a little disoriented when it comes to thinking in terms of physics. If this final scalar calculation is just a dot product, how is that directly applicable to SAT from here on? Is this what I use to calculate maximum/minimum values for overlap? I guess I'm just having trouble figuring out exactly what the dot product is representing in this particular context. Clearly I'm not quite up to date on my elementary physics, but any explanations would be greatly appreciated.
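    Worth noting: algebraically the whole routine collapses to a single dot product. The projection is proj = (dot(c, a) / dot(a, a)) * a, and the final scalar is dot(proj, a) = dot(c, a), so the intermediate Vector2 cancels out. A sketch of the equivalent function (using XNA's Vector2.Dot):

        private int GenerateScalarSimplified(Vector2 corner, Vector2 axis)
        {
            // dot(proj_axis(corner), axis) == dot(corner, axis): the scalar is
            // just the corner's (unnormalized) coordinate along the axis, which
            // is exactly what SAT compares for min/max overlap on each axis.
            return (int)Vector2.Dot(corner, axis);
        }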

    Read the article

  • projective geometry: how do I turn a projection of a rectangle in 3D into a 2D view

    - by bonomo
    So the problem is that I have a 3D projection of a rectangle that I want to turn into 2D. That is, I have a photo of a sheet of paper lying on a table, which I want to transform into a 2D view of that sheet. So what I need is to get an undistorted 2D image by reverting all the 3D/projection transformations and getting a plain view of the sheet from the top. I happened to find some directions on the subject, but they don't offer immediate instruction on how this can be achieved. It would be really helpful to get a step-by-step description of what needs to be done, or alternatively a link to a resource that breaks it down into the details. Thank you.
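    The standard tool here is a planar homography: mark the four corners of the sheet in the photo, solve for the 3x3 matrix H that maps rectified sheet coordinates to photo coordinates (four point correspondences give the eight equations needed), then resample the photo through H. A sketch of the resampling step only, assuming H has already been computed:

        // Map a point (x, y) on the flattened sheet to its pixel (u, v) in the
        // photo via the homography H (3x3, row-major); dividing by w undoes the
        // projective scale.
        static (double u, double v) MapPoint(double[,] H, double x, double y)
        {
            double w = H[2, 0] * x + H[2, 1] * y + H[2, 2];
            return ((H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w,
                    (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w);
        }

    Looping (x, y) over the output image and sampling the photo at (u, v) yields the top-down view.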

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at a different location. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix; there's only one color). Whenever I try to render something, I only end up with slivers of color! I've only done work with 3D rendering in OpenGL before, so I'm having issues with 2D stuff.

    Here's my basic rendering loop, simplified a bit (I'm using the matrix manipulation methods provided by android.opengl.Matrix, and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()):

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);
        program.setUniformMatrix4f("Projection", projection);

    At this point, I render the quads (this is repeated for each quad):

        Matrix.setIdentityM(modelview, 0);
        Matrix.translateM(modelview, 0, quadX, quadY, 0);
        program.setUniformMatrix4f("ModelView", modelview);
        quad.render(); // calls glDrawArrays

    ...and all I see is a sliver of the color each quad is! I'm at my wits' end here; I've tried everything I can think of, and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads, but they would get totally skewed whenever they got moved, which makes sense if I correctly understand the way frustum works (it's more for 3D rendering, anyway). If it makes any difference, I defined my viewport with

        GLES20.glViewport(0, 0, windowWidth, windowHeight);

    where windowWidth and windowHeight are the same values that are passed to orthoM. It might be worth noting that the android.opengl.Matrix methods take an offset as the second parameter so that multiple matrices can be shoved into one array; that's what the first 0 is for. For reference, here's my vertex shader code:

        uniform mat4 ModelView;
        uniform mat4 Projection;

        attribute vec4 vPosition;

        void main() {
            mat4 mvp = Projection * ModelView;
            gl_Position = vPosition * mvp;
        }

    I tried swapping Projection * ModelView with ModelView * Projection, but now I just get some really funky looking shapes...

    EDIT: Okay, I finally figured it out! (Note: since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer to the question.) I changed

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);

    to

        float ratio = windowWidth / windowHeight;
        Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1);

    I then had to scale my projection matrix to make it a lot smaller, with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f). I then added an offset to the modelview translations to simulate a camera, so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);). Thanks for the help, all!

    Read the article

  • NHibernate criteria - how do I implement 'having count'?

    - by AWC
    I have the following table structure, and I want to turn the query below into an NHibernate criteria query, but I'm not sure how to incorporate the correct projection. Does anyone know how? The query I want to turn into a Criteria:

        select ComponentId
        from Table_1
        where [Name] = 'Contact' or [Name] = 'CurrencyPair'
        group by ComponentId
        having count(VersionId) = 2
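    The Criteria API has no first-class HAVING clause; a commonly cited workaround is Projections.SqlGroupProjection, which lets the HAVING ride along in the group-by fragment. A sketch (entity and property names assumed from the SQL; untested):

        var componentIds = session.CreateCriteria<Table1>()
            .Add(Restrictions.In("Name", new[] { "Contact", "CurrencyPair" }))
            .SetProjection(Projections.SqlGroupProjection(
                "ComponentId as componentId",               // select fragment
                "ComponentId having count(VersionId) = 2",  // group-by fragment
                new[] { "componentId" },
                new IType[] { NHibernateUtil.Int32 }))
            .List();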

    Read the article

  • Soft Shadows in Raytracing 3D to 2D

    - by Myx
    Hello: I wish to implement soft shadows produced by area lights in my raytracer, and I'm having trouble generating the random samples. I have a scene containing an area light (represented as a circle) whose world (x, y, z) coordinates of the center are given; the radius, the normal of the plane on which the circle lies, and the color and attenuation factors are given as well. The sampling scheme I wish to use is the following: generate samples on the quadrilateral that encompasses the circle and discard points outside the circle until the required number of samples within the circle has been found. I'm having trouble understanding how I can transform the 3D coordinates of the center of the circle to a 2D representation (I don't think I can assume that the projection of the circle lies on the x-y plane and thus just drop the z-component). I think the plane normal should be used, but I'm not sure how. Any and all suggestions are appreciated.
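    One way to get that 2D frame without any explicit projection: build an orthonormal basis (u, v) in the light's plane from its normal, do the rejection sampling in (u, v) coordinates, and map each accepted sample back to world space. A sketch, assuming System.Numerics-style vector types:

        static Vector3[] SampleDisk(Vector3 center, Vector3 normal, float radius,
                                    int count, Random rng)
        {
            // Pick any vector not parallel to the normal to seed the basis.
            Vector3 seed = Math.Abs(normal.X) > 0.9f ? new Vector3(0, 1, 0)
                                                     : new Vector3(1, 0, 0);
            Vector3 u = Vector3.Normalize(Vector3.Cross(normal, seed));
            Vector3 v = Vector3.Cross(normal, u); // already unit length

            var samples = new List<Vector3>();
            while (samples.Count < count)
            {
                float x = ((float)rng.NextDouble() * 2f - 1f) * radius;
                float y = ((float)rng.NextDouble() * 2f - 1f) * radius;
                if (x * x + y * y <= radius * radius)    // keep points inside the circle
                    samples.Add(center + x * u + y * v); // map back to world space
            }
            return samples.ToArray();
        }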

    Read the article

  • Projecting a 3D point to 2D screen coordinate OpenTK

    - by sinsro
    Using MonoTouch and OpenTK, I am trying to get the screen coordinate of one 3D point. I have my world-view-projection matrix set up, and OpenGL makes sense of it and projects my 3D model perfectly, but how do I use the same matrix to project just one point from 3D to 2D? I thought I could simply use:

        Vector3.Transform(ref input3Dpos, ref matWorldViewProjection, out projected2Dpos);

    Then I'd have the projected screen coordinate in projected2Dpos. But the resulting Vector4 does not seem to represent the proper projected screen coordinate, and I do not know how to calculate it from there on.
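    What's missing after the matrix multiply is the perspective divide and the viewport mapping. A sketch of the full chain (OpenTK types; assumes matWorldViewProjection is the same matrix handed to the shader):

        static Vector2 ProjectToScreen(Vector3 point, Matrix4 worldViewProjection,
                                       float viewportWidth, float viewportHeight)
        {
            // To clip space...
            Vector4 clip = Vector4.Transform(new Vector4(point, 1.0f), worldViewProjection);
            // ...then the perspective divide to normalized device coordinates...
            Vector3 ndc = clip.Xyz / clip.W;
            // ...then NDC [-1, 1] to pixels, flipping Y for a top-left origin.
            return new Vector2((ndc.X * 0.5f + 0.5f) * viewportWidth,
                               (1.0f - (ndc.Y * 0.5f + 0.5f)) * viewportHeight);
        }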

    Read the article

  • How to project a planar polygon onto a plane in 3D space

    - by sum1stolemyname
    I want to project my polygon along a vector onto a plane in 3D space, preferably using a single transformation matrix, but I don't know how to build a matrix of this kind.

    Given:

    - the plane's parameters (ax + by + cz + d)
    - the world coordinates of my polygon (as stated in the headline, all vertices of my polygon lie in another plane)
    - the direction vector along which to project my polygon (currently the polygon's plane's normal vector)

    Goal: a 4x4 transformation matrix which performs the required projection, or some insight on how to construct one myself.
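    This is the classic planar projection ("shadow") matrix. For plane p = (a, b, c, d) and a homogeneous direction l = (lx, ly, lz, 0), the matrix is M = (p . l) * I - l * p^T; applying M and dividing by w moves each point along l onto the plane. A minimal sketch (column-vector convention, not tied to any particular math library):

        static float[,] ProjectionOntoPlane(float a, float b, float c, float d,
                                            float lx, float ly, float lz)
        {
            float[] plane = { a, b, c, d };
            float[] dir = { lx, ly, lz, 0f };      // w = 0: a direction, not a point
            float dot = a * lx + b * ly + c * lz;  // p . l (the w term vanishes)

            var m = new float[4, 4];
            for (int row = 0; row < 4; row++)
                for (int col = 0; col < 4; col++)
                    m[row, col] = (row == col ? dot : 0f) - dir[row] * plane[col];
            return m;                              // use as M * columnVector, then divide by w
        }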

    Read the article

  • How to set up an OpenGL camera for a racing game

    - by vian
    I need the view to show the road polygon (a rectangle 3.f * 100.f) with the road's vanishing point at 3/4 of the viewport height and the nearest road edge at the viewport's bottom side. See the Crazy Taxi games for an example of what I wish to do. I'm using the iPhone SDK 3.1.2 default OpenGL ES project template. I set up the projection matrix as follows:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-2.25f, 2.25f, -1.5f, 1.5f, 0.1f, 1000.0f);

    Then I use glRotatef to adjust for landscape mode and set up the camera:

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(-90, 0.0f, 0.0f, 1.0f);

        const float cameraAngle = 45.0f * M_PI / 180.0f;
        gluLookAt(0.0f, 2.0f, 0.0f,
                  0.0f, 0.0f, 100.0f,
                  0.0f, cos(cameraAngle), sin(cameraAngle));

    My road polygon triangle strip is like this:

        static const GLfloat roadVertices[] = {
            -1.5f, 0.0f, 0.0f,
             1.5f, 0.0f, 0.0f,
            -1.5f, 0.0f, 100.0f,
             1.5f, 0.0f, 100.0f,
        };

    And I can't seem to find the right parameters for gluLookAt. My vanishing point is always at the center of the screen.

    Read the article

  • How to find the relation between the change in latitude at the centre of the map and at the top/bottom

    - by Imran
    Hi, I'm having a little trouble finding the relation between movement at the centre and at the edge of the map when panning a world map. My map extent is 180,89 : -180,-89, and the map pans by adding the change (dX, dY) to its extents, not to its centre. Now a situation has arisen where I have to move the map to a specific centre. Calculating the change in longitude is easy and simple, but it's the change in latitude that has caused the problem. The change at the centre Y of the map is more than the change at the edge of the map Y: if I have to move the map centre from 0 long, 0 lat to 73 long, 33 lat, for dX I simply get 73, and for dY it apparently looks like 33; but if I add 33 to the top of the map, which is 89, it becomes 122, which is incorrect, since latitudes are between 90 and -90. It seems a case of projecting a circle onto a 2D plane, where the edge of the circle, slanting away due to the angle, appears to move less while the centre moves more. Is there a relation between these two factors? I tried converting the difference between originY and destinationY into radians and then adding it to the top and bottom of the map, but it didn't really work for me. Please note that the map is projected on a virtual canvas whose width starts at 256 and increases by 256 * 2^z; z = 0 is the default, and the whole world is visible at that extent of canvas. Code:

        public void moveMapTo(double destinationLongitude, double destinationLattitude) // moves map to the new centre
        {
            double dXLong = destinationLongitude - centreLongitude;
            double atanhsinO = atanh(Math.sin(destinationLattitude * Math.PI / 180.00));
            double atanhsinD = atanh(Math.sin(centreLatitude * Math.PI / 180.00));
            double atanhCentre = (atanhsinD + atanhsinO) / 2;
            double latitudeSpan = destinationLattitude - centreLatitude;
            double radianOfCentreLatitude = Math.atan(Math.sinh(atanhCentre));
            double dXLat = latitudeSpan / Math.cos(radianOfCentreLatitude);
            dXLat *= getLattitudeSpan() * (Math.PI / 180); // <--- HERE IS THE PROBLEM
            System.out.println("dxLong:" + dXLong + "_dxLat:" + dXLat);
            mapLeft += dXLong;
            mapRight += dXLong;
            mapTop += dXLat;
            mapBottom += dXLat;
        }

        // latitude span function
        private double getLattitudeSpan()
        {
            double latitudeSpan = mapTop - mapBottom;
            latitudeSpan = latitudeSpan / Math.cos(radianOfCentreLatitude);
            return Math.abs(latitudeSpan);
        }

        //ht
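    The usual way to sidestep this is to pan in Mercator y rather than in degrees: convert the top and bottom latitudes to Mercator coordinates, add the centre's Mercator-space delta there, and convert back. A sketch of the idea (hypothetical helper names, not taken from the code above):

        static double LatToMercY(double latDeg) =>
            Math.Log(Math.Tan(Math.PI / 4 + latDeg * Math.PI / 360.0));

        static double MercYToLat(double y) =>
            (2 * Math.Atan(Math.Exp(y)) - Math.PI / 2) * 180.0 / Math.PI;

        // Recentre: shift top and bottom by the same Mercator-space delta, so
        // the edges and the centre stay consistent and never leave [-90, 90].
        static (double newTop, double newBottom) Recentre(double top, double bottom,
                                                          double oldCentreLat, double newCentreLat)
        {
            double dy = LatToMercY(newCentreLat) - LatToMercY(oldCentreLat);
            return (MercYToLat(LatToMercY(top) + dy),
                    MercYToLat(LatToMercY(bottom) + dy));
        }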

    Read the article
