Search Results

Search found 563 results on 23 pages for 'vertices'.

Page 19 of 23

  • Problem displaying Vertex Buffer Object (OpenGL and Obj-C)

    - by seaworthy
    Hey, I am having a problem displaying or loading a buffer with an array of vertices. I know the array itself is fine because I am able to render it using a loop and a glVertex command. I can't figure out what's wrong. Your insight is highly appreciated.

        GLuint vboId;
        glGenBuffers(1, &vboId);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, count * sizeof(GLfloat), array, GL_STATIC_DRAW_ARB);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        printf("%d\n", count);
        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glVertexPointer(3, GL_FLOAT, 0, 0);
        glDisableClientState(GL_VERTEX_ARRAY);
        printf("vboId: [%hd]", vboId);
        glDeleteBuffers(1, &vboId);

    Help?
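    As posted, the snippet never issues a draw call while the vertex array is enabled, and it deletes the buffer immediately, so nothing can appear. For contrast, a minimal fixed-function draw sequence might look like this (a sketch; it assumes count is the number of floats, so the vertex count is count / 3, and that the data is meant as triangles):

        // Sketch: render a VBO of tightly packed XYZ floats.
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);        // 3 floats per vertex, no stride
        glDrawArrays(GL_TRIANGLES, 0, count / 3);  // the missing draw call (GL_TRIANGLES assumed)
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        // Delete the buffer only when the mesh is no longer needed,
        // not immediately after setting the pointer.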

    Read the article

  • shortest path search in a map represented as 2d shapes

    - by joe_shmoe
    Hi, I have a small library of a few shortest-path search algorithms. They were developed for simple undirected graphs (the normal representation: vertices and edges). Now I'd like to apply them to a somewhat different scenario, where the maps are represented as 2-dimensional shapes connected by shared edges (edges of the polygons, that is). In this scenario, the search can start/end either at a map object or at some point (x,y). What would be the best approach? Try to apply the algorithms to the shapes directly, or try to extract a 'normal' graph out of the shapes (I have preprocessing time available)? Any advice would be much appreciated, as I'm really not sure which way to go, and I don't have enough time (and skill) to explore many options... Thanks a lot
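    If preprocessing time is available, one common route is to extract an ordinary graph first: one vertex per polygon, one edge wherever two polygons share a side, weighted for instance by the distance between polygon centroids. A rough C++ sketch of the extraction (the Point/Polygon types are hypothetical stand-ins for the real map data; weights are omitted for brevity):

        #include <map>
        #include <utility>
        #include <vector>

        struct Point { double x, y; };
        struct Polygon { std::vector<Point> verts; };

        using EdgeKey = std::pair<std::pair<double,double>, std::pair<double,double>>;

        static EdgeKey makeKey(Point a, Point b) {
            std::pair<double,double> p{a.x, a.y}, q{b.x, b.y};
            if (q < p) std::swap(p, q);   // canonical order so both polygons agree
            return {p, q};
        }

        // One graph vertex per polygon; an edge wherever two polygons share a side.
        std::vector<std::vector<int>> buildGraph(const std::vector<Polygon>& polys) {
            std::map<EdgeKey, int> firstOwner;           // edge -> first polygon seen
            std::vector<std::vector<int>> adj(polys.size());
            for (int i = 0; i < (int)polys.size(); ++i) {
                const auto& v = polys[i].verts;
                for (std::size_t j = 0; j < v.size(); ++j) {
                    EdgeKey k = makeKey(v[j], v[(j + 1) % v.size()]);
                    auto it = firstOwner.find(k);
                    if (it == firstOwner.end()) {
                        firstOwner[k] = i;
                    } else {                             // second polygon on this edge
                        adj[i].push_back(it->second);
                        adj[it->second].push_back(i);
                    }
                }
            }
            return adj;
        }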

    Read the article

  • Why does gcc think that I am trying to make a function call in my template function signature?

    - by nieldw
    GCC seems to think that I am trying to make a function call in my template function signature. Can anyone please tell me what is wrong with the following?

        227 template<class edgeDecor, class vertexDecor, bool dir>
        228 vector<Vertex<edgeDecor,vertexDecor,dir>> Graph<edgeDecor,vertexDecor,dir>::vertices()
        229 {
        230     return V;
        231 };

    GCC is giving the following:

        graph.h:228: error: a function call cannot appear in a constant-expression
        graph.h:228: error: template argument 3 is invalid
        graph.h:228: error: template argument 1 is invalid
        graph.h:228: error: template argument 2 is invalid
        graph.h:229: error: expected unqualified-id before ‘{’ token

    Thanks a lot.
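    A likely culprit, assuming a pre-C++11 compiler: in vector<Vertex<edgeDecor,vertexDecor,dir>> on line 228, the closing >> is lexed as the right-shift operator, so GCC tries to evaluate part of the template argument list as an expression. Separating the brackets with a space fixes it:

        // Pre-C++11, '>>' closing two nested template lists must be written '> >'.
        template<class edgeDecor, class vertexDecor, bool dir>
        vector<Vertex<edgeDecor,vertexDecor,dir> > Graph<edgeDecor,vertexDecor,dir>::vertices()
        {
            return V;
        }   // the trailing semicolon after a function definition is also unnecessary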

    Read the article

  • Finding the heaviest length-constrained path in a weighted Binary Tree

    - by Hristo
    UPDATE: I worked out an algorithm that I think runs in O(n*k) time. Below is the pseudo-code:

        routine heaviestKPath( T, k )
            // create a 2D matrix with n rows and k+1 columns, each element = -∞
            // we make it size k+1 because the 0th column must be all 0s for a later
            // function to work properly, and for simplicity in our algorithm
            matrix = new array[ T.getVertexCount() ][ k + 1 ] (-∞)
            // set all elements in the first column of this matrix = 0
            matrix[ n ][ 0 ] = 0
            // fill our matrix by traversing the tree
            traverseToFillMatrix( T.root, k )
            // consider a path that would arc over a node
            globalMaxWeight = -∞
            findArcs( T.root, k )
            return globalMaxWeight
        end routine

        // node = the current node; k = the path length; node.lc = node's left child;
        // node.rc = node's right child; node.idx = node's index (row) in the matrix;
        // node.lc.wt / node.rc.wt = weight of the edge to the left/right child
        routine traverseToFillMatrix( node, k )
            if (node == null) return
            traverseToFillMatrix( node.lc, k )   // recurse left
            traverseToFillMatrix( node.rc, k )   // recurse right
            // in the case that a left/right child doesn't exist, or both,
            // let's assume the code is smart enough to handle these cases
            matrix[ node.idx ][ 1 ] = max( node.lc.wt, node.rc.wt )
            for i = 2 to k {
                // max returns the heavier of the 2 paths
                matrix[ node.idx ][ i ] = max( matrix[ node.lc.idx ][ i-1 ] + node.lc.wt,
                                               matrix[ node.rc.idx ][ i-1 ] + node.rc.wt )
            }
        end routine

        // node = the current node, k = the path length
        routine findArcs( node, k )
            if (node == null) return
            nodeMax = matrix[ node.idx ][ k ]
            longPath = path[ node.idx ][ k ]
            i = 1
            j = k - 1
            while ( i+j == k AND i < k ) {
                left  = node.lc.wt + matrix[ node.lc.idx ][ i-1 ]
                right = node.rc.wt + matrix[ node.rc.idx ][ j-1 ]
                if ( left + right > nodeMax ) {
                    nodeMax = left + right
                }
                i++
                j--
            }
            // if this node's max weight is larger than the global max weight, update
            if ( globalMaxWeight < nodeMax ) {
                globalMaxWeight = nodeMax
            }
            findArcs( node.lc, k )   // recurse left
            findArcs( node.rc, k )   // recurse right
        end routine

    Let me know what you think. Feedback is welcome.

    I have come up with two naive algorithms that find the heaviest length-constrained path in a weighted Binary Tree. First, a description of the problem: given an n-vertex Binary Tree with weighted edges and some value k, find the heaviest path of length k. For both algorithms I'll need a reference to all vertices, so I'll just do a simple traversal of the Tree, with each vertex holding a reference to its left, right, and parent nodes.

    Algorithm 1: I'm basically planning on running DFS from each node in the Tree, with consideration for the fixed path length. In addition, since the path I'm looking for can potentially go from the left subtree through the root to the right subtree, I will have to consider 3 choices at each node. But this results in an O(n*3^k) algorithm and I don't like that.

    Algorithm 2: I'm essentially thinking about using a modified version of Dijkstra's algorithm in order to consider a fixed path length. Since I'm looking for the heaviest path and Dijkstra's finds the lightest, I'm planning on negating all edge weights before starting the traversal. Actually... this doesn't make sense, since I'd have to run Dijkstra's from each node, and that doesn't seem much more efficient than the above algorithm.

    So my main questions are several. First, do the algorithms I've described above solve the problem at hand? I'm not totally certain the Dijkstra's version will work, as Dijkstra's is meant for non-negative edge weights. Now, I am sure there exist more clever/efficient algorithms for this... what is a better algorithm? I've read about "Using spine decompositions to efficiently solve the length-constrained heaviest path problem for trees" but that is really complicated and I don't understand it at all. Are there other algorithms that tackle this problem, maybe not as efficiently as spine decomposition but easier to understand? Thanks.

    Read the article

  • OpenCV usage: cvFindDominantPoints

    - by rsinha
    Hi, does anyone know how to use the cvFindDominantPoints API of OpenCV? I basically have a 1-channel, binary image from which I get a set of contours. Judging from the image, I seem to be getting the correct contours. Now, I am selecting one of these contours to get the dominant points of. This contour has about 60 vertices. However, the API call to cvFindDominantPoints is giving me a sequence of points (about 15) that does not even lie on the contour. It is quite far from it. Any insight? My usage:

        CvSeq *dominantpoints = cvFindDominantPoints(targetSeq, tristorage, CV_DOMINANT_IPAN, 7, 9, 9, 150);

    Read the article

  • How do I make lines scale when using GLOrtho?

    - by Mason Wheeler
    I'm using GLOrtho to set up a 2D view that I can render textures onto. It works really well, up until I try to zoom in on the image. If I pass half the width and half the height of the viewport to GLOrtho, I end up with all my textures displayed twice as big as normal, which is exactly what I expect. But then I try to draw a box around part of the image and it all falls apart. I call glBegin(GL_LINE_LOOP), place the four vertices, and call glEnd, and I expect to see the same thing I would see if I drew it at normal zoom level, doubled. Instead, I get lines that are all the right length, but they all come out one pixel wide, instead of two, and it looks really bad. What am I missing?
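    One thing the projection matrix never scales is line width: GL_LINE_LOOP rasterizes at the width set by glLineWidth, in pixels, regardless of glOrtho. A sketch of compensating by hand (zoom is a hypothetical variable holding the current magnification; x0/y0/x1/y1 stand for the box corners):

        // Line width is specified in pixels and ignores the projection,
        // so scale it with the zoom factor explicitly.
        glLineWidth(1.0f * zoom);   // e.g. 2 px wide at 2x zoom
        glBegin(GL_LINE_LOOP);
        glVertex2f(x0, y0);
        glVertex2f(x1, y0);
        glVertex2f(x1, y1);
        glVertex2f(x0, y1);
        glEnd();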

    Read the article

  • Loading textures in an Android OpenGL ES App.

    - by Omega
    I was wondering if anyone could advise on a good pattern for loading textures in an Android Java & OpenGL ES app. My first concern is determining how many texture names to allocate and how I can efficiently go about doing this prior to rendering my vertices. My second concern is loading the textures: I have to infer which texture to load based on my game data. This means I'll be playing around with strings, which I understand is something I really shouldn't be doing in my GL thread. Overall I understand what's happening when loading textures; I just want to get the best lifecycle out of it. Are there any other things I should be considering?

    Read the article

  • Plotting 3D Polygons in python-matplotlib

    - by Developer
    I was unsuccessful browsing the web for a solution to the following simple question: how do I draw a 3D polygon (say a filled rectangle or triangle) using vertex values? I have tried many ideas but all failed, see:

        from mpl_toolkits.mplot3d import Axes3D
        from matplotlib.collections import PolyCollection
        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = Axes3D(fig)
        x = [0, 1, 1, 0]
        y = [0, 0, 1, 1]
        z = [0, 1, 0, 1]
        verts = [zip(x, y, z)]
        ax.add_collection3d(PolyCollection(verts), zs=z)
        plt.show()

    I appreciate in advance any idea/comment. Updates based on the accepted answer:

        import mpl_toolkits.mplot3d as a3
        import matplotlib.colors as colors
        import pylab as pl
        import scipy as sp

        ax = a3.Axes3D(pl.figure())
        for i in range(10000):
            vtx = sp.rand(3, 3)
            tri = a3.art3d.Poly3DCollection([vtx])
            tri.set_color(colors.rgb2hex(sp.rand(3)))
            tri.set_edgecolor('k')
            ax.add_collection3d(tri)
        pl.show()

    Here is the result:

    Read the article

  • Triangular bounding volumes

    - by Cheery
    I've come up with an alternative to beziers that might be easier to ray-trace, perhaps even through a plain vertex shader, though there's a piece missing. I need to find the parametric surface equation from the surface normals I have for the edge vertices. I also have to know its peak and valley so I can constrain the depth of my bounding triangle. The image explains the overall idea: I build a bounding volume from a control triangle, then apply a function to each parametric coordinate of the triangle (s+t+u=1 where s,t,u >= 0) to get the height coordinate for that point. Simply put, it produces a procedurally generated height-map for the triangle's surface. I just need to find a function that generates the height-map so I can make it work.

    Read the article

  • Calculating terrain height in 3d-space

    - by Jonas B
    Hi, I'm diving into 3D programming a bit and am currently learning by writing a procedural terrain generator that builds terrain from a heightmap. I also want to implement some physics, and my first attempt at terrain collision was simply checking the current position against the heightmap. This won't work well for small objects, though, as you'd have to calculate the height from the height difference of the nearest vertices, and doing this on every collision check is pretty slow. Believe me, I tried googling for it, but there's simply so much crap and millions of blogs posting ripped-off newbie tutorials everywhere, with basically no real information on the subject; I can't find anything that explains it or even names any generally used techniques. I'm not asking for code or a complete solution, but if anyone knows a particular technique good for calculating a high-res heightmap out of the already generated and smoothed terrain, I would be very happy, as I could look into it further once I know what I'm looking for. Thanks
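    For what it's worth, the technique usually used here is an O(1) lookup rather than a higher-resolution heightmap: find the grid cell under (x, z), pick which of the cell's two triangles the point falls in, and interpolate the corner heights across that triangle's plane. A minimal sketch, assuming a regular grid with spacing 1 and vertex heights stored as height[z][x]:

        #include <vector>

        // O(1) terrain height query on a regular-grid heightmap.
        // Each cell is split into two triangles along the x+z diagonal.
        float terrainHeight(const std::vector<std::vector<float>>& height,
                            float x, float z) {
            int ix = (int)x, iz = (int)z;      // cell indices (bounds clamping omitted)
            float fx = x - ix, fz = z - iz;    // position inside the cell, in [0,1)
            float h00 = height[iz][ix],     h10 = height[iz][ix + 1];
            float h01 = height[iz + 1][ix], h11 = height[iz + 1][ix + 1];
            if (fx + fz <= 1.0f)               // lower-left triangle
                return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
            else                               // upper-right triangle
                return h11 + (h01 - h11) * (1.0f - fx) + (h10 - h11) * (1.0f - fz);
        }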

    Read the article

  • Eliminating cyclic flows from a graph

    - by Jon Harrop
    I have a directed graph with flow volumes along edges and I would like to simplify it by removing all cyclic flows. This can be done by finding the minimum of the flow volumes along each edge in any given cycle and reducing the flows of every edge in the cycle by that minimum volume, deleting edges with zero flow. When all cyclic flows have been removed the graph will be acyclic. For example, if I have a graph with vertices A, B and C with flows of 1 from A→B, 2 from B→C and 3 from C→A then I can rewrite this with no flow from A→B, 1 from B→C and 2 from C→A. The number of edges in the graph has reduced from 3 to 2 and the resulting graph is acyclic. Which algorithm(s), if any, solve this problem?
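    The procedure described is essentially cycle cancelling: repeatedly find any directed cycle among edges that still carry positive flow (a DFS that watches for a back edge suffices), subtract the cycle's minimum flow from every edge on it, and drop emptied edges; each pass zeroes at least one edge, so at most E passes are needed. A rough C++ sketch (the flow map is a hypothetical representation of the graph):

        #include <algorithm>
        #include <cstddef>
        #include <map>
        #include <vector>

        // flow[u][v] > 0 means that much volume flows along edge u -> v.
        using Flow = std::map<int, std::map<int, double>>;

        // DFS that stops when it closes a cycle; colour: 0 new, 1 on stack, 2 done.
        static bool findCycle(const Flow& flow, int u,
                              std::vector<int>& colour, std::vector<int>& stack) {
            colour[u] = 1;
            stack.push_back(u);
            if (flow.count(u))
                for (const auto& [v, f] : flow.at(u)) {
                    if (f <= 0) continue;                         // emptied edge
                    if (colour[v] == 1) { stack.push_back(v); return true; }
                    if (colour[v] == 0 && findCycle(flow, v, colour, stack)) return true;
                }
            stack.pop_back();
            colour[u] = 2;
            return false;
        }

        void cancelCycles(Flow& flow, int n) {
            for (;;) {
                std::vector<int> colour(n, 0), stack;
                bool found = false;
                for (int s = 0; s < n && !found; ++s)
                    if (colour[s] == 0) found = findCycle(flow, s, colour, stack);
                if (!found) return;                               // graph is acyclic
                // The stack ends "... v ... u v": the cycle runs from the first v.
                std::size_t i = 0;
                while (stack[i] != stack.back()) ++i;
                double m = flow[stack[i]][stack[i + 1]];          // cycle's min flow
                for (std::size_t j = i; j + 1 < stack.size(); ++j)
                    m = std::min(m, flow[stack[j]][stack[j + 1]]);
                for (std::size_t j = i; j + 1 < stack.size(); ++j)
                    flow[stack[j]][stack[j + 1]] -= m;            // zeroed edges drop out
            }
        }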

    Read the article

  • How do I get all n sets of three consecutive elements in an array or ArrayList with a for statement?

    - by newba
    Hi, I'm trying out a convex hull approach, and the little problem is that I need to get all sets of three consecutive vertices, like this:

        private void isConvexHull(Ponto[] points) {
            Arrays.sort(points);
            for (int i = 0; i < points.length; i++) {
                isClockWise(points[i], points[i+1], points[i+2]);
            }
            //...
        }

    I always end up with something I don't consider clean code. Could you please help me find one or more ways to do this? I want it to be circular, i.e., if the first point of a set is the last element in the array, the 2nd point of that set should be the 1st element in the array and the 3rd should be the 2nd, and so on. They must be consecutive, that's all.
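    The standard way to make the loop circular is modular indexing, which also removes the out-of-bounds access at the end of the array. Sketched here in C++ for brevity; the Java version has exactly the same shape:

        #include <cstddef>

        struct Point { double x, y; };
        bool isClockWise(const Point&, const Point&, const Point&);   // assumed defined elsewhere

        // Visit every circular triple (p[i], p[i+1], p[i+2]) of n points.
        void visitTriples(const Point* p, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i)
                isClockWise(p[i], p[(i + 1) % n], p[(i + 2) % n]);    // indices wrap around
        }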

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Send only 1 matrix to the GPU (the projectionModelView combination), decreasing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimize GL calls as much as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU-bound when it comes to increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • Chess board position numbers in 6-rooted-binary tree?

    - by HH
    The maximum number of adjacent vertices is 6, which corresponds to the number of roots. By the term root, I mean the number of children of each node. If an adjacent square is empty, fill it with a Z-node, so every square will have 6 nodes. How can you formulate this as a binary tree? Is the structure just a 6-rooted binary tree? What is the structure called if nodes change their positions? Suppose a partially ordered list whose units store a large, randomly expanding board. I want a self-adjusting data structure where it is easy to calculate distances between nodes. What is its name?

    Read the article

  • Full coordinates across streets with google maps

    - by nbv4
    If you go here: http://econym.org.uk/gmap/snap.htm there are some examples of the kind of thing I'm trying to do. I want the user to enter a route on a Google Maps widget and have the map draw the route along roads. Then the user clicks a "submit" button and their route is sent back to the server, where it will be stored in a database. Instead of sending back just the red vertices, I want to send back all the information that makes up the purple lines. Is this possible?

    Read the article

  • How do I get an Iterator over a vector of objects from a Template?

    - by nieldw
    I'm busy implementing a Graph ADT in C++. I have templates for the Edges and the Vertices. At each Vertex I have a vector containing pointers to the Edges that are incident to it. Now I'm trying to get an iterator over those edges. These are the lines of code:

        vector<Edge<edgeDecor, vertexDecor, dir>*> edges = this->incidentEdges();
        vector<Edge<edgeDecor, vertexDecor, dir>*>::const_iterator i;
        for (i = edges.begin(); i != edges.end(); ++i) {

    However, the compiler won't accept the middle line. I'm pretty new to C++. Am I missing something? Why can't I declare an iterator over objects from the Edge template? The compiler isn't giving any useful feedback. Much thanks, niel
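    If these lines live inside a template (the edgeDecor/vertexDecor/dir parameters suggest they do), the iterator type is a dependent name, and C++ requires the typename keyword for it to parse as a type:

        // 'typename' tells the compiler that const_iterator names a type, not a
        // static member, because the element type depends on template parameters.
        typename vector<Edge<edgeDecor, vertexDecor, dir>*>::const_iterator i;
        for (i = edges.begin(); i != edges.end(); ++i) {
            // ...
        }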

    Read the article

  • abstract class needs access to subclass attribute

    - by user1742980
    I have a problem with my code.

        public abstract class SimplePolygon implements Polygon {
            ...
            public String toString() {
                String str = "Polygon: vertices =";
                for (int i = 0; i < varray.length; i++) {
                    str += " ";
                    str += varray[i];
                }
                return str;
            }
        }

        public class ArrayPolygon extends SimplePolygon {
            private Vertex2D[] varray;

            public ArrayPolygon(Vertex2D[] array) {
                varray = new Vertex2D[array.length];
                if (array == null) {}
                for (int i = 0; i < array.length; i++) {
                    if (array[i] == null) {}
                    varray[i] = array[i];
                }
                ...
            }
        }

    The problem is that I'm not allowed to add any attribute or method to the abstract class SimplePolygon, so I can't properly initialize varray. It could simply be solved with a protected attribute in that class, but for some (stupid) reason I can't do that. Does anybody have an idea how to solve it without that? Thanks for all help.

    Read the article

  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene but for some reason I can't get it to update. E.g. any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],  # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],          # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f')
            )
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle() but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just puts up the initial state and does not update. Am I missing something?

    Read the article

  • Find a node in a Graph that minimizes the distance between two other nodes

    - by Andrés
    Here is the thing. I have a directed weighted graph G, with V vertices and E edges. Given two nodes in the graph, let's say A and B, and given the weight of an edge A-B denoted as w(A, B), I need to find a node C so that max(w(A, C), w(B, C)) is minimal among all possibilities. By possibilities I mean all the values C can take. I don't know if this is completely clear; if it's not, I'll try to be more precise. Thanks in advance.
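    If w(X, C) here means shortest-path distance rather than a single direct edge (the question leaves this open), one approach is two shortest-path computations followed by a linear scan over candidates. A sketch, with a hypothetical dist_from that runs Dijkstra from a source and returns the distance to every vertex:

        #include <algorithm>
        #include <limits>
        #include <vector>

        std::vector<double> dist_from(int src);   // hypothetical, defined elsewhere

        int bestMeetingNode(int A, int B, int n) {
            std::vector<double> dA = dist_from(A), dB = dist_from(B);
            int best = -1;
            double bestVal = std::numeric_limits<double>::infinity();
            for (int c = 0; c < n; ++c) {
                double v = std::max(dA[c], dB[c]);   // the quantity to minimise
                if (v < bestVal) { bestVal = v; best = c; }
            }
            return best;
        }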

    Read the article

  • How can I find out how much memory an object (rather, the instance of an object) of a C++ class consumes?

    - by Shadow
    Hi, I am developing a Graph class based on the Boost Graph Library. A Graph object contains a boost graph, that is to say an adjacency_list, and a map. When monitoring the total memory usage of my program, it consumes quite a lot (checked with pmap). Now I would like to know: exactly how much of the memory is consumed by a filled object of this Graph class? By filled I mean that the adjacency_list is full of vertices and edges. I found that using sizeof() doesn't get me far. Using valgrind is also not an alternative, as quite some memory allocation is done previously, which makes it impractical for this purpose. I'm also not interested in what other parts of the program cost in memory; I want to focus on one single object. Thank you.
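    One workable technique is a counting allocator: route every allocation the object makes through an allocator that tallies bytes, then read the tally once the object is filled. A minimal sketch (plugging it into adjacency_list is more involved, since Boost's container selectors control the allocator, but the idea is the same):

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        std::size_t g_bytesAllocated = 0;   // global tally for the measurement

        template <class T>
        struct CountingAllocator {
            using value_type = T;
            CountingAllocator() = default;
            template <class U> CountingAllocator(const CountingAllocator<U>&) {}
            T* allocate(std::size_t n) {
                g_bytesAllocated += n * sizeof(T);
                return static_cast<T*>(::operator new(n * sizeof(T)));
            }
            void deallocate(T* p, std::size_t n) {
                g_bytesAllocated -= n * sizeof(T);
                ::operator delete(p);
            }
            template <class U> bool operator==(const CountingAllocator<U>&) const { return true; }
            template <class U> bool operator!=(const CountingAllocator<U>&) const { return false; }
        };

        int main() {
            std::vector<int, CountingAllocator<int>> v(100000);   // container under test
            // heap usage of v; the total is this plus sizeof(v) itself
            std::printf("heap bytes: %zu\n", g_bytesAllocated);
            return 0;
        }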

    Read the article

  • Finding the total number of distinct shortest paths between 2 nodes in an undirected weighted graph in linear time?

    - by logan
    I was wondering: if there is a weighted graph G(V,E) and I need to find a single shortest path between any two vertices S and T in it, I can use Dijkstra's algorithm. But I am not sure how this can be done when we need to find all the distinct shortest paths from S to T. Is it solvable in O(n) time? I had one more question: if we assume that the weights of the edges in the graph can only take values in a certain range, let's say 1 <= w(e) <= 2, will this affect the time complexity?
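    If "find all" can be relaxed to "count all" (enumerating them can take exponential output size), Dijkstra needs only one extra array: a per-vertex path count that is copied on a strictly shorter relaxation and accumulated on an equally short one. A sketch, assuming positive edge weights (zero-weight edges would make the counts ill-defined):

        #include <functional>
        #include <queue>
        #include <utility>
        #include <vector>

        struct Edge { int to; long long w; };

        // Number of distinct shortest s->t paths (may overflow if there are
        // exponentially many; use a big-integer type if that matters).
        long long countShortestPaths(const std::vector<std::vector<Edge>>& g,
                                     int s, int t) {
            const long long INF = 1LL << 62;
            int n = (int)g.size();
            std::vector<long long> dist(n, INF), cnt(n, 0);
            using State = std::pair<long long, int>;   // (distance, vertex)
            std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
            dist[s] = 0; cnt[s] = 1;
            pq.push({0, s});
            while (!pq.empty()) {
                auto [d, u] = pq.top(); pq.pop();
                if (d > dist[u]) continue;             // stale queue entry
                for (const Edge& e : g[u]) {
                    long long nd = d + e.w;
                    if (nd < dist[e.to]) {             // strictly shorter: restart count
                        dist[e.to] = nd;
                        cnt[e.to] = cnt[u];
                        pq.push({nd, e.to});
                    } else if (nd == dist[e.to]) {     // equally short: accumulate
                        cnt[e.to] += cnt[u];
                    }
                }
            }
            return cnt[t];
        }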

    Read the article

  • Dynamically generate Triangle Lists for a Complex 3D Mesh

    - by Vulcan Eager
    In my application, the shape and dimensions of a complex 3D solid (say a cylinder block) are taken from user input, and I need to construct vertex and index buffers for it. Since the dimensions are taken from user input, I cannot use Blender or 3D Max to manually create my model. What is the textbook method for dynamically generating such a mesh? Edit: I am looking for something that will generate the triangles given the vertices, edges and holes, something like TetGen. As for TetGen itself, I have no way of excluding the triangles which fall in the interior of the solid/mesh.

    Read the article

  • Can I use a vertex shader to display a model's normals?

    - by geowar
    I'm currently using a VBO for the texture coordinates, normals and vertices of a (3DS) model I'm drawing with glDrawArrays(GL_TRIANGLES, ...). For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal, or stuff another VBO with vert and vert+normal pairs to draw all the normals… or is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the vert+normal endpoints used when drawing the normals?

    Read the article
