Search Results

Search found 839 results on 34 pages for 'vertex'.

Page 26/34

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL 4.0 Shading Language Cookbook. I have rendered a tessellated quad, as you can see in the screenshot below, and I am moving the Y coordinate of every vertex with a time-based sine function, as given in the code in the book. As the text in the image shows, the program runs perfectly on my processor's built-in Intel HD graphics, but my laptop also has an Nvidia GT 555M (it has switchable graphics), and when I run the program on that card the OpenGL shader compilation fails. It fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error comes from the sin(u) call in that instruction, but I am not able to understand why. When I remove sin(u) from the code, the program runs fine on the Nvidia card, and it runs fine with sin(u) on the Intel HD 3000. Also, notice that the program is almost unusable on the Intel HD 3000: I am getting only 9 FPS, which is not enough; it is too much load for that GPU. So, is the sin(x) function not defined in the GLSL implementation shipped with the Nvidia drivers, or is it something else?

    Read the article

  • Particle trajectory smoothing: where to do the simulation?

    - by nkint
    I have a particle system in which particles move toward a target, and new targets are received via the network. The list of new targets consists of noisy coordinates of a moving target stored on the server, which I want to smooth on the client. For the smoothing and the particles I wrote a simple particle engine with a standard Euler integration model. My pseudocode is something like this:

        # pseudo code
        class Particle:
            def update():
                # do euler motion model integration:
                # if the distance to the target is more than a limit,
                # add a new force to the acceleration, seeking the target,
                # then add the acceleration to the velocity
                # and the velocity to the position
                positionHistory.push_back(position);
                if history.length > historySize :
                    history.pop_front()

        class ParticleEngine:
            particleById = dict()  # an associative array where the keys are the ids
                                   # and particle instances are stored as values

            # this method is called each time a new tcp packet is received and parsed
            def setNetTarget(int id, Vec2D new_target):
                particleById[id].setNewTarget(new_target)

            # this method is called each new frame
            def draw():
                for p in particleById.values:
                    p.update()
                    beginVertex(LINE_STRIP)
                    for v in p.positionHistory:
                        vertex(v.x, v.y)
                    endVertex()

    The targets that arrive are noisy, but setting some acceleration/velocity parameters gives the particles smoothed trajectories. However, if a particle's trajectory is a circle, after a while the particle position converges to the center (normal behaviour for the Euler integration model). So I decided to change the simulation and use some other interpolation (spline?) or smoothing method (Kalman filter?) between the targets. Something like:

        switch( INTERPOLATION_MODEL ):
            case EULER_MOTION: ...
            case HERMITE_INTERPOLATION: ...
            case SPLINE_INTERPOLATION: ...
            case KALMAN_FILTER_SMOOTHING: ...

    Now my question: where should the motion simulation / trajectory interpolation live? In the Particle, so that I end up with Particle subclasses like ParticleEuler, ParticleSpline, ParticleKalman, and so on? Or in the particle engine?
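
    For illustration only, here is a minimal Python sketch of one of the options discussed above: keeping the smoothing out of the Particle class and letting the engine filter incoming targets before forwarding them (a simple exponential moving average; the class, function and parameter names are made up for this sketch):

        class TargetSmoother:
            """Exponential moving average over noisy 2D targets (hypothetical helper)."""
            def __init__(self, alpha=0.3):
                self.alpha = alpha          # 0 < alpha <= 1; smaller = smoother
                self.value = None           # last smoothed target (x, y)

            def push(self, x, y):
                if self.value is None:
                    self.value = (x, y)
                else:
                    sx, sy = self.value
                    self.value = (sx + self.alpha * (x - sx),
                                  sy + self.alpha * (y - sy))
                return self.value

        # usage inside the engine, before handing the target to the particle:
        smoothers = {}                      # one smoother per particle id

        def set_net_target(particle_id, x, y):
            s = smoothers.setdefault(particle_id, TargetSmoother())
            sx, sy = s.push(x, y)
            # particle_by_id[particle_id].set_new_target(sx, sy)   # hypothetical call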

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices:

    The first matrix (I) transforms from object space into inertial space, but since my object is not rotated or translated in any way inside object space, this matrix is the 4x4 identity matrix.

    The second matrix (W) transforms from inertial space into world space. It is just a scale matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

            /a 0 0 0\
        W = |0 a 0 0|
            |0 0 a 0|
            \0 0 0 1/

    The third matrix (C) transforms from world space into camera space. This is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

            /1 0 0  0\
        C = |0 1 0  0|
            |0 0 1 10|
            \0 0 0  1/

    Finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of world space and the projection plane is defined by z = 1, the projection matrix is:

            /1 0 0   0\
        P = |0 1 0   0|
            |0 0 1   0|
            \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

            /x\
        V = |y|
            |z|
            \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently I'm doing something wrong or missing something completely, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: (screenshot) Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
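
    For reference, a small numpy sketch of the multiplication order described above, purely illustrative, using the question's values a = 14.1 and d = 1 and a made-up sample vertex:

        import numpy as np

        a, d = 14.1, 1.0

        I = np.eye(4)                                    # object -> inertial (identity)
        W = np.diag([a, a, a, 1.0])                      # inertial -> world (uniform scale)
        C = np.eye(4); C[2, 3] = 10.0                    # world -> camera (translate +10 on z)
        P = np.eye(4); P[3, 2] = 1.0 / d; P[3, 3] = 0.0  # camera -> projection plane at z = d

        def project(vertex_xyz):
            v = np.append(np.asarray(vertex_xyz, dtype=float), 1.0)  # (x, y, z, 1)
            clip = P @ C @ W @ I @ v
            return clip[:2] / clip[3]                    # divide x, y by w for screen coords

        print(project([0.5, 0.25, 2.0]))

    With this setup, w ends up being the camera-space z divided by d, which is what the final divide assumes.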

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

        Draw all opaque objects, front-to-back.
        Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you adhere to the above two rules, there's little to be done in the way of batching. I see one possibility to batch while still adhering to them: opaque objects can be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. Non-opaque objects, however, at least those that require alpha blending, must be drawn back-to-front to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
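
    As a purely illustrative sketch of the trade-off being weighed here (my own example, not from the question; the DrawCall fields are invented), a render queue might group opaque draws by state key and use depth only as a tie-breaker, while keeping alpha-blended draws strictly back-to-front:

        from dataclasses import dataclass

        @dataclass
        class DrawCall:
            texture_id: int
            vertex_buffer_id: int
            depth: float          # distance from the camera
            blended: bool

        def sort_draw_calls(calls):
            opaque = [c for c in calls if not c.blended]
            blended = [c for c in calls if c.blended]
            # Opaque: group by state first (batching); depth is only a tie-breaker.
            opaque.sort(key=lambda c: (c.texture_id, c.vertex_buffer_id, c.depth))
            # Alpha-blended: correctness requires strict back-to-front ordering.
            blended.sort(key=lambda c: -c.depth)
            return opaque + blended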

    Read the article

  • Windows 7 strange boot issue

    - by JL
    Recently I installed an SSD drive; it's now my primary boot drive (OCZ Vertex Turbo 60GB). I'm running Windows 7 (64-bit). I am not sure exactly where the fault lies, but when I do a restart from within Windows, the computer boots and enters the loading-Windows stage, but won't progress beyond that. My workaround is to hit the reset switch manually; on the next boot it enters a Windows menu that offers Startup Repair or starting Windows normally. I select "Start Windows normally" and then it boots fine. I am not entirely sure why I can't just restart the computer from within Windows, though. The SSD drive works perfectly, and I'm not even sure it is the cause of this problem. Any ideas?

    Read the article

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong. What is wrong in my rendering?

    How I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D, 0, gl::DEPTH32F_STENCIL8, width, height, 0,
            gl::DEPTH_STENCIL, gl::FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER, gl::DEPTH_STENCIL_ATTACHMENT, gl::TEXTURE_2D,
            this->textures[0], 0
        );

    Linear depth readings in my shaders.

    Vertex:

        #version 150
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;
        out vec2 uv_f;

        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150
        uniform sampler2D depth_texture;
        in vec2 uv_f;
        out vec4 Screen;

        void main(){
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z) / (f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    When rendered: (screenshot) So, gamedev.stackexchange, what's wrong with my rendering/GLSL?
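
    For comparison only (not taken from the question), a quick Python check of one commonly used linearization, which maps a [0, 1] depth-buffer value back to eye-space depth under a standard perspective projection; whether it applies here depends on how the depth was written. The near/far values below are arbitrary examples:

        def linearize_depth(z, near, far):
            """Eye-space depth from a [0, 1] depth-buffer value (standard perspective)."""
            return (near * far) / (far - z * (far - near))

        # Sanity check: z = 0 should land on the near plane, z = 1 on the far plane.
        near, far = 0.1, 100.0
        print(linearize_depth(0.0, near, far))  # -> 0.1
        print(linearize_depth(1.0, near, far))  # -> 100.0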

    Read the article

  • Why does HDTune report better performing drives 2 months after installing them?

    - by Rolnik
    OK, so this is really weird. I ran HDTune on a newly set-up home-built computer and got the following readings from my drives in mid-November:

        SSD     154 MB/s
        RAID1    87 MB/s
        RAID0   198 MB/s  (software installs)
        RAID0    98 MB/s  (swap drive)

    Today, in January, I run HDTune (same version) and get these results, in MB/s:

        SSD     186 MB/s
        RAID1    98 MB/s
        RAID0   241 MB/s
        RAID0    98 MB/s  (swap drive)

    Here are more details that HDTune reports on the SSD drive:

        HD Tune: OCZ-VERTEX Benchmark
        Transfer Rate Minimum : 135.4 MB/sec
        Transfer Rate Maximum : 219.4 MB/sec
        Transfer Rate Average : 185.7 MB/sec
        Access Time           : 0.1 ms
        Burst Rate            : 187.3 MB/sec
        CPU Usage             : -1.0%

    To get to my question: why are my hard drives improving in performance? Most of my logical drives are in some form of RAID, except for the SSD. Will this performance ever deteriorate? Note that none of my drives is a hybrid drive that uses some form of SSD to speed up reads/writes on actual platters.

    Read the article

  • SSD on VMware ESXi 4 (TRIM? Good idea?)

    - by nextgenneo
    Hi, I just posted about finding bottlenecks and have narrowed it down to having way too many VMs on my machine on one 15K SAS drive. I have plenty of cores and plenty of RAM, so I am planning on putting 6 VMs on one drive (so 5 drives for 30 VMs). I am thinking of using a 60GB Vertex 2 SSD. Each of my VMs will only need about 6GB of HDD space, so this isn't a big deal. My questions are: does ESXi support TRIM, and do I really need it if I leave 25% of the drive as free space? If I need it, should I get a different drive that handles garbage collection differently? I have a RAID controller with write caching; will I still benefit from that? Will this affect my setup differently? Is there anything else I need to consider regarding SSDs in virtualized environments? Thanks for any and all help!

    Read the article

  • Workstation Build: Single 2.66GHz i7 with overclock potential, OR dual 2.26GHz Xeon E5520s?

    - by jdc0589
    There are probably better places to ask this, but I am used to the excellent quality of responses on Stack Overflow. I am rebuilding my desktop in a few months. Aside from normal lightweight internet usage, I use it to run SQL Server, MySQL, one or two Ubuntu VMs from time to time, lots of IDEs, and a media server for my PS3. The two possible setups cost the exact same amount (within $50) and would both have 12GB of 1333MHz DDR3 RAM and a 500GB RAID-0 array (2 x 250GB). Now, if I go with a single i7 920 (2.66GHz quad-core), I can easily overclock it to 3GHz and would have cash left over for a 160GB SSD (either the OCZ Vertex or the 120GB Intel) as the main OS/program install drive. Otherwise, I could get a dual-LGA1366 motherboard with two E5520 Xeons (2.26GHz) and just use the disks I already have. So, do I go for 8 physical / 16 virtual cores at 2.26GHz (no overclocking on server boards) with normal disk I/O, or 4 physical / 8 virtual cores at 3.0GHz with really outstanding disk I/O?

    Read the article

  • HighPoint RAID Controller can't see drives

    - by Satellite
    I've just built a system with a HighPoint 2720 RAID controller. Everything appears to be OK, except that the controller doesn't see any of my drives - the RAID BIOS displays "ERROR. No Suitable disks". I have tried a WD SATA, Seagate SATA, and OCZ Vertex 3 SATA. I am using MiniSAS to 4x SATA breakout cables. How do I get this working? I've tried to sign up for HighPoint support, but their support site doesn't appear to be functioning correctly.

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor and then drawing an 8x8 grid over top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();

            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines not to be rendered?

    Update: I added the code below after drawing my quad and grid, and it started working. But I am not sure why that works, as I thought this code was only there to draw the WPF controls:

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();

    Read the article

  • Windows 7 System Image Restore to SSD - SSD not detected?

    - by steviey
    I recently bought a new SSD (OCZ Vertex 2 120GB) as a replacement for my laptop HDD. I created a system image on an external drive using the Windows 7 Backup and Restore tool and then restored this image to the SSD using the Windows 7 disc. AHCI is enabled in Windows 7 and in the BIOS, but it seems that Windows is not detecting the disk as an SSD, because in the defragmenter the disk is still available for defragging. Prefetch and SuperFetch were also still enabled. I have manually disabled scheduled defragmentation, Prefetch and SuperFetch. I have also checked that TRIM is enabled using:

        fsutil behavior query DisableDeleteNotify

    and the result is '0' (it is enabled). Performance seems OK, but not quite as fast as I expected. Was restoring from a Windows 7-generated system image a mistake? Am I getting optimal SSD performance?

    Read the article

  • How to set up my texture coordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible. The problem is that my texture shows, but not in full: only one section of it appears on the quad, not the whole texture. All I can think of is that it's the texture coordinates in main.cpp, which is linked at the bottom of this post.

    Fragment shader:

        #version 150

        in vec2 Texcoord_VSPS;
        out vec4 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D myTextureSampler;

        // Main entry point
        void main()
        {
            // Output color = color of the texture at the specified UV
            color = texture2D( myTextureSampler, Texcoord_VSPS );
        }

    Vertex shader:

        #version 150

        // Position container
        in vec3 position;

        // Container for texcoords
        attribute vec2 Texcoord0;
        out vec2 Texcoord_VSPS;
        //out vec2 ex_texcoord;

        // TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT

        // Main entry point
        void main()
        {
            // Translations and w coordinate stuff
            gl_Position = vec4(position.xyz, 1.0);
            Texcoord_VSPS = Texcoord0;
        }

    Link to main.cpp: http://pastebin.com/t7Vg9L0k

    Read the article

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, using a center for the box, its extents as a 3D vector, and a rotation matrix, which is all stuff I picked up online and seems to be more or less the standard. Detecting the center is no problem, so I'm going to leave that out here. My problem, however, is importing the data from a 3D file. Say I've placed a box two units long on each side, aligned to the world axes. The logical result here is extents of (1, 1, 1), and I use an identity matrix for the rotation - easy. However, I'm stuck when I rotate the box in the 3D program, say by 30 degrees on each axis. How would I parse the box? I only have the 8 vertices as information, and I guess what I would need to do is find out the rotation of said box, apply it to the vertices so they are aligned to the world axes, and then calculate the extents from that. How do I get the rotation of the box when I only have the vertex information of the box available?
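
    Purely as an illustration of the reconstruction step described above (my own sketch, assuming a vertex ordering in which vertex 0 shares an edge with vertices 1, 2 and 4; a real importer would have to verify or discover that adjacency):

        import numpy as np

        def obb_from_corners(corners):
            """Rough OBB from 8 box corners (assumed edge adjacency: 0-1, 0-2, 0-4)."""
            v = np.asarray(corners, dtype=float)
            center = v.mean(axis=0)
            edges = [v[1] - v[0], v[2] - v[0], v[4] - v[0]]   # three edges meeting at corner 0
            extents = np.array([np.linalg.norm(e) * 0.5 for e in edges])
            axes = np.array([e / np.linalg.norm(e) for e in edges])  # rows = local box axes
            return center, extents, axes                       # axes acts as the rotation matrix

        # Axis-aligned 2x2x2 box: expect extents (1, 1, 1) and an identity rotation.
        corners = [(x, y, z) for z in (-1, 1) for y in (-1, 1) for x in (-1, 1)]
        c, e, R = obb_from_corners(corners)
        print(c, e)
        print(R)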

    Read the article

  • What's wrong with this OpenGL ES 2.0 shader?

    - by Project Dumbo Dev
    I just can't understand this. The code works perfectly on the emulator (which is supposed to give more problems than phones...), but when I try it on an LG-E610 it doesn't compile the vertex shader. This is my log error (which contains the shader code as well).

    Edited - shader:

        uniform mat4 u_Matrix;
        uniform int u_XSpritePos;
        uniform int u_YSpritePos;
        uniform float u_XDisplacement;
        uniform float u_YDisplacement;
        attribute vec4 a_Position;
        attribute vec2 a_TextureCoordinates;
        varying vec2 v_TextureCoordinates;

        void main(){
            v_TextureCoordinates.x = (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement;
            v_TextureCoordinates.y = (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement;
            gl_Position = u_Matrix * a_Position;
        }

    The log reports this before loading/compiling the shader:

        11-05 18:46:25.579: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46
        11-05 18:46:25.629: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49

    Maybe it has something to do with that memalloc? The phone also gives a constant error while plugged in:

        ERROR FBIOGET_ESDCHECKLOOP fail, from msm7627a.gralloc

    Edited: "InfoLog:" refers to glGetShaderInfoLog, and it's returning nothing. Since I removed the log in a previous edit, I will just say I'm looking for feedback on compiling shaders.

    Solution + more questions: OK, the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That brings me to the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It surely does when I misspell something. I solved it by turning everything into floats, but if someone can shed some light on this, it would be appreciated. Thanks.

    Read the article

  • Weird order when painting triangle outlines using GL_LINE_STRIP

    - by RayDeeA
    I'm developing an app for iOS platforms using OpenGL. Currently I'm having a weird issue when painting a plane (terrain) which consists of multiple subplanes, where each subplane consists of 2 triangles forming a rect. I'm painting this terrain as a wireframe with a call to glDrawElements, passing GL_LINE_STRIP and the precalculated indices. The problem is that the triangles get painted in the wrong order, or rather vertically mirrored. They do not get painted in the order in which I specified the indices, which is confusing.

    This is the simplified code to generate the vertices:

        for(NSInteger y = - gridSegmentsY / 2; y < gridSegmentsY / 2; y ++)
        {
            for(NSInteger x = - gridSegmentsX / 2; x < gridSegmentsX / 2; x ++)
            {
                vertices[pos++] = x * 5;
                vertices[pos++] = y * 5;
                vertices[pos++] = 0;
            }
        }

    This is how I generate the indices, including degenerate ones (to use as an IBO):

        pos = 0;
        for(int y = 0; y < gridSegmentsY - 1; y ++)
        {
            if (y > 0)
            {
                // Degenerate begin: repeat first vertex
                indices[pos++] = (unsigned short)(y * gridSegmentsY);
            }

            for(int x = 0; x < gridSegmentsX; x++)
            {
                // One part of the strip
                indices[pos++] = (unsigned short)((y * gridSegmentsY) + x);
                indices[pos++] = (unsigned short)(((y + 1) * gridSegmentsY) + x);
            }

            if (y < gridSegmentsY - 2)
            {
                // Degenerate end: repeat last vertex
                indices[pos++] = (unsigned short)(((y + 1) * gridSegmentsY) + (gridSegmentsX - 1));
            }
        }

    So in this part...

        indices[pos++] = (unsigned short)((y * gridSegmentsY) + x);
        indices[pos++] = (unsigned short)(((y + 1) * gridSegmentsY) + x);

    ...I'm setting the first index in the indices array to point to the current (x, y) and the next index to (x, y+1). I'm doing this for all x's in the current strip, then I handle degenerate triangles and repeat the procedure for the next strip (y+1). This method is taken from http://www.learnopengles.com/android-lesson-eight-an-introduction-to-index-buffer-objects-ibos/

    So I expect the resulting mesh to get painted like:

        a----b----c
        |   /|   /|
        |  / |  / |
        | /  | /  |
        |/   |/   |
        d----e----f
        |   /|   /|
        |  / |  / |
        | /  | /  |
        |/   |/   |
        g----h----i

    by painting it as described using:

        glDrawElements(GL_LINE_STRIP, indexCount, GL_UNSIGNED_SHORT, 0);

    ...since I expect GL_LINE_STRIP to paint first a line from (a-d), then (d-b), then (b-e)... and so on, as specified in the index calculation. But what actually gets painted is:

        *----*----*
        |\   |\   |
        | \  | \  |
        |  \ |  \ |
        |   \|   \|
        *----*----*
        |\   |\   |
        | \  | \  |
        |  \ |  \ |
        |   \|   \|
        *----*----*

    So the triangles are somehow painted in the wrong order, and I need to know why ;). Does somebody know? Does the problem lie in using GL_LINE_STRIP, or is there a bug in my code? My eye is at (0.0f, 0.0f, 20.0f) and looks at (0, 0, 0). The mesh is painted along the x and y axes from left to right with z = 0, so the mesh should not be flipped or anything.

    Read the article

  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: Write a script capable of running on the command line as python It should take in two words on the command line (or optionally if you'd prefer it can query the user to supply the two words via the console). Given those two words: a. Ensure they are of equal length b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded. If so compute whether you can reach the second word from the first by a series of steps as follows a. You can change one letter at a time b. Each time you change a letter the resulting word must also exist in the dictionary c. You cannot add or remove letters If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much. import collections import functools import re def time_func(func): import time def wrapper(*args, **kwargs): start = time.time() res = func(*args, **kwargs) timed = time.time() - start setattr(wrapper, 'time_taken', timed) return res functools.update_wrapper(wrapper, func) return wrapper class OneLetterGame: def __init__(self, dict_path): self.dict_path = dict_path self.words = set() def run(self, start_word, end_word): '''Runs the one letter game with the given start and end words. ''' assert len(start_word) == len(end_word), \ 'Start word and end word must of the same length.' self.read_dict(len(start_word)) path = self.shortest_path(start_word, end_word) if not path: print 'There is no path between %s and %s (took %.2f sec.)' % ( start_word, end_word, find_shortest_path.time_taken) else: print 'The shortest path (found in %.2f sec.) is:\n=> %s' % ( self.shortest_path.time_taken, ' -- '.join(path)) def _bfs(self, start): '''Implementation of breadth first search as a generator. The portion of the graph to explore is given on demand using get_neighboors. Care was taken so that a vertex / node is explored only once. ''' queue = collections.deque([(None, start)]) inqueue = set([start]) while queue: parent, node = queue.popleft() yield parent, node new = set(self.get_neighbours(node)) - inqueue inqueue = inqueue | new queue.extend([(node, child) for child in new]) @time_func def shortest_path(self, start, end): '''Returns the shortest path from start to end using bfs. ''' assert start in self.words, 'Start word not in dictionnary.' assert end in self.words, 'End word not in dictionnary.' paths = {None: []} for parent, child in self._bfs(start): paths[child] = paths[parent] + [child] if child == end: return paths[child] return None def get_neighbours(self, word): '''Gets every word one letter away from the a given word. We do not keep these words in memory because bfs accesses a given vertex only once. ''' neighbours = [] p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$' for i, w in enumerate(word)] p_word = '|'.join(p_word) for w in self.words: if w != word and re.match(p_word, w, re.I|re.U): neighbours += [w] return neighbours def read_dict(self, size): '''Loads every word of a specific size from the dictionnary into memory. 
''' for l in open(self.dict_path): l = l.decode('latin-1').strip().lower() if len(l) == size: self.words.add(l) if __name__ == '__main__': import sys if len(sys.argv) not in [3, 4]: print 'Usage: python one_letter_game.py start_word end_word' else: g = OneLetterGame(dict_path = '/usr/share/dict/words') try: g.run(*sys.argv[1:]) except AssertionError, e: print e

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi ! I'll start by saying that i'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other languages experience. I've looked a lot of tutorials, of course Nehe's ones and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awsome :p). So far, I have achieved a working tile generator, I define a char map[][] array to store wich tile is on : private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working but I'm no happy with it, I'm sure there is a beter way to do those things : 1) Loading Textures : I create an ugly looking array containing the tiles I want to use on that map : private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cutted this on purpose, I currently load 27 tiles) All of theses are stored in the drawable folder, each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for a later use : int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just was unable to figure it. If someone has an idea, i'm all ears. 
2) Drawing the tiles What I did was create a single square and a single texture map : /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers) : for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the begining of the map X-wise and move 2y down before drawing the next line } This works great by I really think that on a 1000*1000 or more map, it will be lagging as hell (as a reminder, this is a typical Zelda world map : http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Object and DisplayList but I couldn't find a good tutorial and nodoby seems to be OK on wich one is the best / has the better support (T1 and Nexus One are ages away). I think that's it, I've putted a lot of code but I think it helps. Thanks in advance !

    Read the article

  • Translation of clustering problem to graph theory language

    - by honk
    I have a rectangular planar grid, with each cell assigned an integer weight. I am looking for an algorithm to identify clusters of 3 to 6 adjacent cells with higher-than-average weight. These blobs should have an approximately circular shape. In my case the average weight of the cells not containing a cluster is around 6, and that of cells containing a cluster is around 6 + 4, i.e. there is a "background weight" somewhere around 6. The weights fluctuate with Poisson statistics. For a small background, greedy or seeded algorithms perform pretty well, but this breaks down if my cluster cells have weights close to the fluctuations in the background. Also, I cannot do a brute-force search looping through all possible configurations, because my grid is large (something like 1000x1000). I have the impression there might be ways to tackle this in graph theory. I have heard of vertex covers and cliques, but I am not sure how best to translate my problem into that language.
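
    As a rough illustration of one possible graph translation (my own sketch, not from the question): treat each above-threshold cell as a vertex, connect 4-neighbours with edges, and look for connected components of the right size. This ignores the Poisson-fluctuation and circularity issues raised above; it only shows the graph framing.

        from collections import deque

        def find_clusters(grid, threshold, min_size=3, max_size=6):
            """Connected components of above-threshold cells (4-neighbour adjacency)."""
            rows, cols = len(grid), len(grid[0])
            hot = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] > threshold}
            seen, clusters = set(), []
            for start in hot:
                if start in seen:
                    continue
                component, queue = [], deque([start])
                seen.add(start)
                while queue:
                    r, c = queue.popleft()
                    component.append((r, c))
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if (nr, nc) in hot and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                if min_size <= len(component) <= max_size:
                    clusters.append(component)
            return clusters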

    Read the article

  • OpenGL ES Simple Undo Last Drawing

    - by Erika
    Hi everyone, I am trying to figure out how to implement a simple "undo" of the last drawing action on the iPhone screen. I draw by first preparing the frame buffer:

        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    I then prepare the vertex array and draw this way:

        glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
        glDrawArrays(GL_POINTS, 0, vertexCount);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    How do I simply undo this last action? There has to be a way to save the previous state, or a built-in OpenGL ES function, I would think. Thanks
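
    For what it's worth, one common pattern (sketched below in Python of my own, not tied to the question's code or to any OpenGL ES API) is to keep each finished stroke's vertices on the CPU and rebuild the frame after popping the last stroke, rather than trying to roll back GPU state:

        class DrawingHistory:
            """Keeps finished strokes so the scene can be rebuilt after an undo."""
            def __init__(self, render_stroke):
                self.render_stroke = render_stroke   # callback that draws one vertex list
                self.strokes = []                    # list of vertex lists, oldest first

            def add_stroke(self, vertices):
                self.strokes.append(list(vertices))

            def undo(self):
                if self.strokes:
                    self.strokes.pop()

            def redraw(self, clear):
                clear()                              # e.g. clear + re-present the buffer
                for stroke in self.strokes:
                    self.render_stroke(stroke)

        # Usage with stand-in callbacks:
        history = DrawingHistory(render_stroke=lambda verts: print("draw", len(verts), "points"))
        history.add_stroke([(0, 0), (1, 1)])
        history.add_stroke([(2, 2), (3, 3)])
        history.undo()
        history.redraw(clear=lambda: print("clear"))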

    Read the article

  • Problem with my texture coordinates on a square

    - by Evan Kimia
    Im very new to OpenGL ES, and have been doing a tutorial to build a square. The square is made, and now im trying to map a 256 by 256 image onto it. The problem is, im only seeing a very zoomed in portion of this bitmap; Im fairly certain my texture coords are whats wrong here. Thanks! package se.jayway.opengl.tutorial; import java.io.IOException; import java.io.InputStream; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import java.nio.ShortBuffer; import javax.microedition.khronos.opengles.GL10; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.opengl.GLUtils; public class Square { // Our vertices. private float vertices[] = { -1.0f, 1.0f, 0.0f, // 0, Top Left -1.0f, -1.0f, 0.0f, // 1, Bottom Left 1.0f, -1.0f, 0.0f, // 2, Bottom Right 1.0f, 1.0f, 0.0f, // 3, Top Right }; //Our texture. private float texture[] = { 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, }; // The order we like to connect them. private short[] indices = { 0, 1, 2, 0, 2, 3 }; // Our vertex buffer. private FloatBuffer vertexBuffer; // Our index buffer. private ShortBuffer indexBuffer; //texture buffer. private FloatBuffer textureBuffer; //Our texture pointer. private int[] textures = new int[1]; public Square() { // a float is 4 bytes, therefore we multiply the number if // vertices with 4. ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); vertexBuffer = vbb.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); // a float is 4 bytes, therefore we multiply the number of // vertices with 4. ByteBuffer tbb = ByteBuffer.allocateDirect(texture.length * 4); vbb.order(ByteOrder.nativeOrder()); textureBuffer = tbb.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); // short is 2 bytes, therefore we multiply the number if // vertices with 2. ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2); ibb.order(ByteOrder.nativeOrder()); indexBuffer = ibb.asShortBuffer(); indexBuffer.put(indices); indexBuffer.position(0); } /** * This function draws our square on screen. * @param gl */ public void draw(GL10 gl) { // Counter-clockwise winding. gl.glFrontFace(GL10.GL_CCW); // Enable face culling. gl.glEnable(GL10.GL_CULL_FACE); // What faces to remove with the face culling. gl.glCullFace(GL10.GL_BACK); //Bind our only previously generated texture in this case gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); // Enabled the vertices buffer for writing and to be used during // rendering. gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); //Enable texture buffer array gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); // Specifies the location and data format of an array of vertex // coordinates to use when rendering. gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer); // Disable the vertices buffer. gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); //Disable the texture buffer. gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); // Disable face culling. 
gl.glDisable(GL10.GL_CULL_FACE); } /** * Load the textures * * @param gl - The GL Context * @param context - The Activity context */ public void loadGLTexture(GL10 gl, Context context) { //Get the texture from the Android resource directory InputStream is = context.getResources().openRawResource(R.drawable.test); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } //Generate one texture pointer... gl.glGenTextures(1, textures, 0); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); //Clean up bitmap.recycle(); } }

    Read the article

  • Format of compiled DirectX 9 shader files?

    - by JB
    Is the format of the compiled pixel and vertex shader object files produced by fxc.exe documented anywhere, either officially or unofficially? I'd like to be able to read the constant-name-to-register assignments from the shader files. I know that the effects framework in D3DX can do this, but I need to avoid using D3DX, as it may not be installed on users' machines, and I don't need it for anything else, so I want to avoid making them run the DirectX update. If the effects framework can do it, then so can I, provided I can find out the file format, but I can't seem to find it documented anywhere.

    Read the article

  • Finding the centroid of a polygon?

    - by user146780
    I have tried: for each vertex, add it to a total, then divide by the number of vertices to get the center. I've also tried: find the topmost and bottommost points and get the midpoint; find the leftmost and rightmost points and get the midpoint. Neither of these returned the true center, and I'm relying on the center to scale a polygon. I want to scale my polygons so I can put a border around them. What is the best way to find the centroid of a polygon, given that the polygon may be concave or convex and have many sides of various lengths? Thanks
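
    For reference, a small Python sketch of the usual area-weighted centroid formula for a simple (non-self-intersecting) polygon, which handles concave shapes where the plain vertex average does not:

        def polygon_centroid(points):
            """Area-weighted centroid of a simple polygon given as [(x, y), ...]."""
            area2 = cx = cy = 0.0
            n = len(points)
            for i in range(n):
                x0, y0 = points[i]
                x1, y1 = points[(i + 1) % n]
                cross = x0 * y1 - x1 * y0      # shoelace term
                area2 += cross
                cx += (x0 + x1) * cross
                cy += (y0 + y1) * cross
            return cx / (3.0 * area2), cy / (3.0 * area2)

        # L-shaped (concave) polygon; the plain vertex average would be pulled off-centre.
        print(polygon_centroid([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)]))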

    Read the article

  • Interpolating 2d data that is piecewise constant on faces

    - by celil
    I have an irregular mesh which is described by two variables: a faces array that stores the indices of the vertices that constitute each face, and a verts array that stores the coordinates of each vertex. I also have a function that is assumed to be piecewise constant over each face, stored as an array of values, one per face. I am looking for a way to construct a function f from this data, something along the following lines:

        faces = [[0, 1, 2], [1, 2, 3], [2, 3, 4], ...]
        verts = [[0, 0], [0, 1], [1, 0], [1, 1], ...]
        vals  = [0.0, 1.0, 0.5, 3.0, ...]

        f = interpolate(faces, verts, vals)
        f(0.2, 0.2) = 0.0  # point inside face [0, 1, 2]
        f(0.6, 0.6) = 1.0  # point inside face [1, 2, 3]

    The manual way of evaluating f(x, y) would be to find the face that the point (x, y) lies in and return the value stored for that face. Is there a function that already implements this in scipy (or in MATLAB)?
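
    As a sketch of the manual approach described above (my own illustration, assuming triangular faces and a brute-force search over them; whether a library one-liner exists depends on the mesh, e.g. on whether it is a Delaunay triangulation):

        def make_piecewise_constant(faces, verts, vals):
            """Return f(x, y) -> value of the triangle containing (x, y), or None."""
            def sign(p, a, b):
                return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

            def contains(tri, p):
                a, b, c = (verts[i] for i in tri)
                d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
                has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
                has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
                return not (has_neg and has_pos)   # all signs agree (or point on an edge)

            def f(x, y):
                for face, value in zip(faces, vals):
                    if contains(face, (x, y)):
                        return value
                return None                        # point outside the mesh
            return f

        f = make_piecewise_constant([[0, 1, 2], [1, 2, 3]],
                                    [[0, 0], [0, 1], [1, 0], [1, 1]],
                                    [0.0, 1.0])
        print(f(0.2, 0.2), f(0.6, 0.6))   # -> 0.0 1.0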

    Read the article

  • Display list and transformation

    - by Gary
    Greetings! I have this question. Whenever I put a transformation (glTranslate, glRotate, glScale) inside a display list, the transformation remains a command within the display list, and every time the display list is rendered it is calculated all over again. Is there a way I can apply an OpenGL transformation and have the transformed vertex coordinates stored permanently in the display list, instead of the transformation plus the initial coordinates? Hope my question makes sense. Thank you in advance. Gary
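
    One way to read this question is: apply the transform to the vertices once on the CPU and compile only the resulting coordinates into the list. A rough numpy sketch of that pre-baking step (my own example, independent of any OpenGL binding; the triangle and translation are made up):

        import numpy as np

        def bake_transform(vertices, matrix):
            """Apply a 4x4 transform to an (N, 3) vertex array once, up front."""
            v = np.asarray(vertices, dtype=float)
            homogeneous = np.hstack([v, np.ones((len(v), 1))])      # (N, 4)
            transformed = homogeneous @ matrix.T
            return transformed[:, :3] / transformed[:, 3:4]

        # Example: translate a triangle by (2, 0, 0) before compiling it into a display list.
        translate = np.eye(4); translate[0, 3] = 2.0
        baked = bake_transform([(0, 0, 0), (1, 0, 0), (0, 1, 0)], translate)
        print(baked)   # the coordinates you would now record inside the display list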

    Read the article
