Search Results

Search found 18084 results on 724 pages for 'graphics programming'.

Page 83 of 724

  • SDL Fullscreen and Gnome-panel

    - by Daniel
    On Ubuntu 10.10, the following SDL code causes Gnome-panel to stop updating its drawing, although it does still function (i.e. windows on the panel open where they should, but you have to know where they 'would be' from instinct/memory). Gnome-panel also leaves an "Untitled window" box in the panel.

        #include <SDL.h>

        int main() {
            SDL_Surface* Screen;
            if (SDL_Init(SDL_INIT_VIDEO) < 0) {
                return 1;
            }
            Screen = SDL_SetVideoMode(1280, 1024, 32, SDL_OPENGL | SDL_FULLSCREEN);
            SDL_FreeSurface(Screen);
            SDL_Quit();
            return 0;
        }

    Is this something wrong with SDL? Something wrong with the code? Something wrong with Gnome-panel? Hopefully we can find out :) Note: perhaps an SDL tag should be requested, seeing as it is quite popular when searched: http://askubuntu.com/search?q=SDL

    Read the article

  • What's a good data structure solution for a scene manager in XNA?

    - by tunnuz
    Hello, I'm playing with XNA for a game project of mine. I had previous exposure to OpenGL and worked a bit with Ogre, so I'm trying to get the same concepts working in XNA. Specifically, I'm trying to add a scene manager to XNA to handle hierarchical transforms, frustum (maybe even occlusion) culling and transparent-object sorting. My plan was to build a tree scene manager to handle hierarchical transforms and lighting, and then use an octree for frustum culling and object sorting. The problem is how to do geometry sorting to support transparency correctly. I know that sorting is very expensive if done on a per-polygon basis, so expensive that Ogre doesn't even attempt it, yet images from Ogre still look right. Any ideas on how to do it, and which data structures to use and what their capabilities are? I know people are using:
    - Octrees
    - Kd-trees (someone on the GameDev forum said these are far better than octrees)
    - BSP trees (which should handle per-polygon ordering but are very expensive)
    - BVHs (but just for frustum and occlusion culling)
    Thank you, Tunnuz
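
    A common compromise, and roughly what engines like Ogre do, is to sort transparent objects per object rather than per polygon: draw opaque geometry first, then blend transparent objects back to front by their distance to the camera. A minimal sketch of that sort (the RenderItem structure and names are hypothetical, not part of XNA or Ogre):

        #include <algorithm>
        #include <vector>

        // Hypothetical per-object record gathered during scene-graph / octree traversal.
        struct RenderItem {
            float depthSq;   // squared distance from the camera to the object's centre
            int   meshId;    // handle into whatever mesh store the engine uses
        };

        // Transparent objects are drawn farthest-first so alpha blending composites
        // correctly; per-object sorting only breaks down for large intersecting surfaces.
        void sortTransparentBackToFront(std::vector<RenderItem>& items)
        {
            std::sort(items.begin(), items.end(),
                      [](const RenderItem& a, const RenderItem& b) {
                          return a.depthSq > b.depthSq;
                      });
        }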

    Read the article

  • ATI Catalyst doesn't retain changes after reboot when setting extended display

    - by rfc1484
    I have Ubuntu 11.10 and I'm trying to set up an extended desktop across my two displays. I have an AMD 6870; fglrx and fglrx-updates are installed. When I launch amdcccle through the terminal (using sudo), I select the option "Multi-display Desktop with Display(s)" in the "Multi-Display" tab. It then says I have to reboot my computer for the changes to take effect. Being a good and obedient lad I do just that, but after rebooting the displays are still in the same "clone" mode as before in the Catalyst Control Center and no changes are made. Any suggestions?

    Read the article

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not to dump too much frustration here... I've been running 12.04 since alpha with the ATI open source drivers and the GNOME 3 desktop, and I've generally been very happy with them, with only small issues along the way. Of course the open driver does not support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble Bundle would not play. OK, no worries, the ATI driver is in the repos, so let me have a go, I thought. With all the testing that's been done on multi-monitor support, what could go wrong...? How I use my computer: it's a laptop with an HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), in a multi-desktop environment. What's happening now: I installed the drivers and things seemed to be working, save for some other small bugs (not critical). This morning I take my machine and plug the additional monitor into it, and nothing happens... ok, fine. Open "Displays", try to configure the dual display, it won't work. Open the ATI config "thing" (because it is a thing, a crap thing) and set up the monitors there; it says to reboot (oh ffs, really... ok). Reboot, log in and wow, I get a GNOME 2 desktop (presumably the GNOME 3 fallback) and no multi-monitor... great. (screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7 ) After getting into a situation where I had to Ctrl + Alt + Del to get out of a frozen display, I eventually managed to set up a single-display desktop on the "main" monitor. OK, time to go home... unplug the monitor... nothing happens... oh boy, here we go... try "Displays" again, nothing, it just hangs the display... great. Crash all the apps and reboot... So it's been a trying day... What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that:
    - allows me to run fglrx (so I can run the games I want)
    - allows me to hot-plug a monitor to my laptop and remove it again
    - allows me to change the display setup to include the hot-plugged monitor (preferably automatically, like it did with the open drivers)
    Next best, if that's not possible: switch between laptop-only display and monitor-only display easily (i.e. not having to reboot/log out/suspend etc.). I really appreciate the time of anyone that has a solution. Thanks in advance. Regards, Brendan. PS: I guess I should file a bug about this too, so some direction as to the best place to file it would be appreciated.

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short version: do triangle strips and tangent-space normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent-space matrices in a vertex program, transform the light direction vector(s) to tangent space and then pass them on to a fragment program. However, if one is using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive the tangent-space matrices to interpolate between in fragment programs. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent-space normal mapping? Which do you think is better: having the normal and tangent encoded in the assets and just optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?
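
    For reference, the tangent such a vertex program uses is normally precomputed per triangle from positions and UVs and then accumulated per vertex, which is exactly where shared strip/index-buffer vertices on UV or smoothing seams become a problem. A minimal sketch of the per-triangle step, using GLM (the function name is hypothetical):

        #include <glm/glm.hpp>

        // Solve  e1 = du1*T + dv1*B,  e2 = du2*T + dv2*B  for the tangent T.
        // The bitangent can be taken as cross(N, T) times a handedness sign.
        glm::vec3 triangleTangent(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                                  const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2)
        {
            const glm::vec3 e1 = p1 - p0, e2 = p2 - p0;
            const glm::vec2 d1 = uv1 - uv0, d2 = uv2 - uv0;
            const float r = 1.0f / (d1.x * d2.y - d2.x * d1.y); // degenerate UVs need a guard
            return glm::normalize((e1 * d2.y - e2 * d1.y) * r);
        }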

    Read the article

  • What languages are most commonly used in medical research?

    - by Chris Taylor
    For someone about to go into a career in medical research, what language would be the most useful to learn? From my limited experience (I have been a researcher in mathematics and in finance) I have been able to recommend looking at R (for statistics), Matlab (for general numeric processing), and Python (for general-purpose programming with statistics/numerics as an add-on), but I don't know which of those (if any) are in common use -- or if there are other, more specialized languages that are used. To be clear, I'm not talking about a professional programmer working in a medical setting. I am talking about a medical or genetics researcher who uses programming to analyse data, or generally to help get their work done.

    Read the article

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap font; however, I am not sure how to generate the UVs and handle placement. I had a thread like this once before, but it was too loosely worded as to what I was looking to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (which is all multiples of 64). However, I am not sure how to properly use this information and the bitmap to render. This falls into two different categories: UV generation and kerning. Now, I believe I know how to do both of these; however, when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". Ignoring the texture quality (I will be increasing the texture size to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, then I am using the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as the angel bitmap tool or FreeType. I also want to stick with the way the bitmap font sheet is generated so I can do extra effects in the future. Rendering algorithm:

        for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
        {
            ObtainCharInformation(fontName, m_Text[c]);
            letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;
            if(c != 0)
            {
                DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
                U8 * glyphImg = new U8[BytesReq];
                DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);
                for (int k = 0; k < nKerningPairs; k++)
                {
                    if ((kerningpairs[k].wFirst == previousCharIndex) && (kerningpairs[k].wSecond == m_Text[c]))
                    {
                        letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                        break;
                    }
                }
                letterBottomLeftX -= (gm.gmCellIncX * scale);
            }
            SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
            SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
            SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
            SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
            zFight -= 0.001f;
            float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
            float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
            float TopLeftX = BottomLeftX;
            float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
            float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
            float TopRightY = TopLeftY;
            float BottomRightX = TopRightX;
            float BottomRightY = BottomLeftY;
            SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
            SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
            SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
            SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);
            /// index setting
            letterBottomLeftX += letterWidth;
            previousCharIndex = m_Text[c];
        }

    Read the article

  • Installed Ubuntu 12.04 in Legacy mode; when changed to UEFI it only runs the terminal, not the GUI

    - by jraulvc
    Well, I installed Ubuntu 12.04 on a Gateway NE 522 with Windows 8. First, I had to install it in Legacy mode, because in UEFI it would not run the bootable USB. In Legacy mode it runs perfectly. Once that was done, with the help of Boot-Repair I changed it to UEFI and disabled Secure Boot. GRUB runs fine, but when I boot Ubuntu I get the following messages:

        microcode: failed to load file amd-ucode/microcode_amd_fam16h.bin
        kvm: disabled by bios
        kvm: disabled by bios
        kvm: disabled by bios

    and then I just get access to the terminal. From there, I have already tried reinstalling unity and gdm. When I try to install amd64-microcode the same error occurs (microcode: failed to load file amd-ucode/microcode_amd_fam16h.bin) during the "updating the microcode on all online processors..." phase of the installation. Can somebody tell me how I can recover the graphical interface of Ubuntu from the terminal? Thanks a lot

    Read the article

  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly-generated non-player characters in a database. I can pull a lot of information about them - their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed as text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. The problem is, I'm no artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, those solutions weren't very dynamic and felt very amateurish. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants

    Read the article

  • Cannot assign keys to Wacom scroll wheel anymore

    - by UncleZeiv
    Hi all, my Wacom Graphire 4 used to work perfectly well until, I think, Ubuntu 10.04. At that point something changed in the configuration and I couldn't assign a key to the scroll wheel anymore (note: the pad's scroll wheel, not the mouse's), i.e. this command:

        xsetwacom set "Wacom Graphire4 6x8 pad" AbsWDn "key +"

    returns silently without error but nothing happens. Same goes for AbsWUp, RelWDn, RelWUp. Apparently, though, the problem is even deeper, as pressing the wheel in an xev window doesn't seem to have any effect. Moreover, I am thoroughly confused about how the various pieces (kernel driver, X.org driver, evdev, HAL, xinput?) are supposed to work together, and whether the wacom module that ships with Ubuntu is the one from linuxwacom or not. Any ideas? I don't want to become an X.org hacker just to understand what's going on... it used to work! NOTE: I have already read question 3940, but that's not the same problem.

    Read the article

  • Why doesn't my graphics card software support 1280*1024?

    - by Allwar
    Hi, I have an external monitor, a 20" 1280*1024 display. In Windows 7 it works fine at that resolution, but in Ubuntu it can't. Example: in Windows I connect it and activate it, done. In Ubuntu I connect it and the only resolutions available are the ones my laptop screen supports, a 12" 1366*768 panel. My laptop is an Asus 1201n. If I force it to use 1280*1024, both screens crash and I have to force a reboot. When I force it, I only force the external monitor; the laptop is already at its maximum of 1366*768. I connect it through VGA. ((The graphics card supports 1280*1024 in Windows 7, #Fail))

        alvar@alvars-laptop:~$ disper -l
        display DFP-0: HSD121PHW1
        resolutions: 320x175, 320x200, 360x200, 320x240, 400x300, 416x312, 512x384, 640x350, 576x432, 640x400, 680x384, 720x400, 640x480, 720x450, 640x512, 700x525, 800x512, 840x525, 800x600, 960x540, 832x624, 1024x768, 1366x768
        display CRT-0: CRT-0
        resolutions: 320x240, 400x300, 512x384, 680x384, 640x480, 800x600, 1024x768, 1152x864, 1360x768

    Read the article

  • Can "go" replace C++? [closed]

    - by iammilind
    I was reading the Wikipedia article about the Go programming language, where Bruce Eckel states: "The complexity of C++ (even more complexity has been added in the new C++), and the resulting impact on productivity, is no longer justified. All the hoops that the C++ programmer had to jump through in order to use a C-compatible language make no sense anymore -- they're just a waste of time and effort. Now, Go makes much more sense for the class of problems that C++ was originally intended to solve." Can Go really replace C++(11) for new development in the future? What about generic programming? I don't know Go, but the amount of time invested (wasted?) in learning C++ seems to have been in vain.

    Read the article

  • Tangent basis calculation problem

    - by Kirill Daybov
    I have a problem with seams when calculating a tangent basis in my application. I'm using an algorithm that seems to be right, but it gives wrong results at the seams. What am I doing wrong? Is there a problem with the algorithm, or with the model? The designer says that our models with our normal maps are rendered correctly by the Xoliul Shader plugin in 3ds Max, so there should be a way to calculate a correct tangent basis programmatically. Here's an example of the problem I'm talking about. Steps I've already taken:
    - Tried a different algorithm (from Gamasutra; I can't post the link because I don't have enough reputation yet). I got wrong, much worse, results.
    - Tried to average the basis vectors for vertices that are used in multiple faces.
    - Tried to average the basis vectors for vertices that have the same world coordinates (this would obviously be a wrong solution, but I tried it anyway).
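
    For context, one common way to finish such a calculation is to sum the per-face tangents into every vertex a face references and only afterwards orthogonalise each summed tangent against the vertex normal, so that every vertex ends up with a consistent, normalised basis. A minimal sketch using GLM (the function name is hypothetical, and it assumes the per-face tangents have already been accumulated into tangents):

        #include <glm/glm.hpp>
        #include <cstddef>
        #include <vector>

        // Gram-Schmidt: make each accumulated tangent orthogonal to its vertex normal,
        // then normalise. normals[i] and tangents[i] belong to the same vertex.
        void orthonormaliseTangents(const std::vector<glm::vec3>& normals,
                                    std::vector<glm::vec3>& tangents)
        {
            for (std::size_t i = 0; i < tangents.size(); ++i) {
                const glm::vec3& n = normals[i];
                const glm::vec3& t = tangents[i];
                tangents[i] = glm::normalize(t - n * glm::dot(n, t));
            }
        }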

    Read the article

  • How to install Wacom Bamboo Pen

    - by casadrya
    I have a new Wacom Bamboo Pen. I'm using Ubuntu 10.10 64-bit. After googling a little bit, I checked that xserver-xorg-input-wacom was installed. I plugged in my tablet. I rebooted my computer. Nothing special happened. I opened Inkscape. The tablet didn't work. I opened Inkscape's Input devices dialog. I didn't understand anything. I tried to blindly click some options in that dialog but nothing seemed to have any effect. Same with Gimp. After googling some more I found the linuxwacom website with source code, but that didn't seem to work either. So... any help? As requested:

        lsusb
        Bus 005 Device 002: ID 056a:00d4 Wacom Co., Ltd

        dmesg | tail
        [ 492.961267] usb 5-1: new full speed USB device using uhci_hcd and address 3
        [ 493.144862] input: Wacom Bamboo 4x5 Pen as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.0/input/input6
        [ 493.158854] input: Wacom Bamboo 4x5 Finger as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.1/input/input7

    Read the article

  • How do I get Poulsbo (GMA500) drivers to work?

    - by slayerman
    I'm trying to get the Poulsbo driver working under Ubuntu 9.10. I've installed the poulsbo-driver-2d, poulsbo-driver-3d and poulsbo-config packages from the PPA added with sudo add-apt-repository ppa:gma500/ppa. The package installation works. After that I have to set the driver to "psb" in xorg.conf. Then I reboot and I have no display at all. I have to switch back to vesa in order to get the display back. Can someone give me some kind of solution?

    Read the article

  • Achieving certain rendering styles

    - by milesmeow
    I'm trying to assess the difficulty of creating a rendering style more like the game Okami and the Quake mods (as shown on this page... search for 'okami', 'quake npr'). Here's a better page describing the Quake rendering mod. Can a game engine such as Unity be used and programmed to achieve these kinds of rendering styles? I'm doing research and am totally new to this, so any insight would help tremendously.

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. Could someone explain exactly what is costly about this process, and if possible put 'relatively' into context? For example, say I wanted to use a short shader to help with picking; I would:
    - change the effect on every object, calculating a unique color to identify it and providing it to the shader;
    - draw all the objects to a render target in memory;
    - get the color from the target and use it to look up the selected object.
    What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
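
    For reference, the unique colour in such a picking pass is normally just an object id packed into the colour channels, so the pixel read back from the render target maps directly to an object. A minimal sketch of the packing (nothing here is XNA-specific; the names are hypothetical):

        #include <cstdint>

        // Pack a 24-bit object id into RGB for the picking pass.
        void idToColor(std::uint32_t id, std::uint8_t& r, std::uint8_t& g, std::uint8_t& b)
        {
            r = (id >> 16) & 0xFF;
            g = (id >> 8)  & 0xFF;
            b =  id        & 0xFF;
        }

        // Recover the id from the pixel read back from the render target.
        std::uint32_t colorToId(std::uint8_t r, std::uint8_t g, std::uint8_t b)
        {
            return (std::uint32_t(r) << 16) | (std::uint32_t(g) << 8) | std::uint32_t(b);
        }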

    Read the article

  • Live CD boot/installation problem, blank screen - 12.04

    - by traubi
    I'm trying to install ubuntu-12.04-desktop-amd64 from my USB stick. When booting from USB I instantly get a blank screen (with a grey box at the bottom left). I checked the live system with my laptop: it works well, except it also starts off with a black screen and a grey box at the bottom. I figure the problem is my GeForce GTX 570, since older versions of Ubuntu were only able to boot with xforcevesa and nomodeset. Unfortunately I can't change the boot parameters. I tried the alternate version for a text-based install, but it has the same problem. If I press Esc in the alternate installer I get a message box with two buttons but no text. I would be happy for any advice on this matter.

    Read the article

  • Split vector vs matrix notation for transformation

    - by seahorse
    Some rendering engines, like Ogre, prefer to use an individual vector-based notation for transformations, like the following.
    Split vector notation: the net transformation is represented by
    - Scale vector = sx, sy, sz
    - Translation vector = tx, ty, tz
    - Rotation quaternion = w, x, y, z
    Matrix notation: other engines simply use a single combined transformation matrix.
    What are the advantages of the first notation over the second? Also, for animation interpolation, does it work in the first notation to interpolate the individual components and then use the interpolated parts to get the net transformation? Is this another advantage?
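
    For illustration, this is roughly what the split notation buys you for animation: each component can be interpolated with the operation that suits it (lerp for scale and translation, slerp for rotation) and the result is only then baked into a matrix, whereas interpolating two combined matrices element-wise shears and collapses. A minimal sketch using GLM rather than Ogre's actual types (all names here are hypothetical):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/quaternion.hpp>

        struct SplitTransform {
            glm::vec3 scale;
            glm::vec3 translation;
            glm::quat rotation;

            // Bake the three components into one net transformation matrix (T * R * S).
            glm::mat4 toMatrix() const {
                return glm::translate(glm::mat4(1.0f), translation)
                     * glm::mat4_cast(rotation)
                     * glm::scale(glm::mat4(1.0f), scale);
            }
        };

        // Interpolate per component, then combine.
        SplitTransform interpolate(const SplitTransform& a, const SplitTransform& b, float t)
        {
            SplitTransform out;
            out.scale       = glm::mix(a.scale, b.scale, t);
            out.translation = glm::mix(a.translation, b.translation, t);
            out.rotation    = glm::slerp(a.rotation, b.rotation, t);
            return out;
        }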

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons between Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced these main features (taken from Wikipedia):
    - Tessellation
    - Multithreaded rendering
    - Compute shaders
    - Increased texture cache
    Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have this feeling that there are features on OpenGL's side that Direct3D lacks, what are those?

    Read the article

  • Correct use of VAOs in OpenGL ES 2 for iOS?

    - by sak
    I'm migrating to OpenGL ES2 for one of my iOS projects, and I'm having trouble to get any geometry to render successfully. Here's where I'm setting up the VAO rendering: void bindVAO(int vertexCount, struct Vertex* vertexData, GLushort* indexData, GLuint* vaoId, GLuint* indexId){ //generate the VAO & bind glGenVertexArraysOES(1, vaoId); glBindVertexArrayOES(*vaoId); GLuint positionBufferId; //generate the VBO & bind glGenBuffers(1, &positionBufferId); glBindBuffer(GL_ARRAY_BUFFER, positionBufferId); //populate the buffer data glBufferData(GL_ARRAY_BUFFER, vertexCount, vertexData, GL_STATIC_DRAW); //size of verte position GLsizei posTypeSize = sizeof(kPositionVertexType); glVertexAttribPointer(kVertexPositionAttributeLocation, kVertexSize, kPositionVertexTypeEnum, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, position)); glEnableVertexAttribArray(kVertexPositionAttributeLocation); //create & bind index information glGenBuffers(1, indexId); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *indexId); glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertexCount, indexData, GL_STATIC_DRAW); //restore default state glBindVertexArrayOES(0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); glBindBuffer(GL_ARRAY_BUFFER, 0); } And here's the rendering step: //bind the frame buffer for drawing glBindFramebuffer(GL_FRAMEBUFFER, outputFrameBuffer); glClear(GL_COLOR_BUFFER_BIT); //use the shader program glUseProgram(program); glClearColor(0.4, 0.5, 0.6, 0.5); float aspect = fabsf(320.0 / 480.0); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.0f); GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); //glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, projectionMatrix.m); glUniformMatrix4fv(modelViewMatrixUniformLocation, 1, GL_FALSE, mvpMatrix.m); glBindVertexArrayOES(vaoId); glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_FLOAT, &indexId); //bind the color buffer glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer); [context presentRenderbuffer:GL_RENDERBUFFER]; The screen is rendering the color passed to glClearColor correctly, but not the shape passed into bindVAO. Is my VAO being built correctly? Thanks!

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the z component maps to blue, and since normals point out of the surface in the z direction, we see blue as the predominant component. If the above is true, then why aren't normal maps just one color, i.e. blue, without any other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal at any point, the normal should always point in the Z (blue) direction with no X (red) or Y (green) component. Thus the normal map (since it is a "normal map") should contain just the color of the normals, which is pure blue (Z = blue component = 1, R = 0, G = 0), and the normal map should be only blue with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
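
    For reference, the encoding convention being discussed remaps a unit normal from the [-1, 1] range into the [0, 1] colour range, rgb = n * 0.5 + 0.5, so the straight-up normal (0, 0, 1) stores as (0.5, 0.5, 1.0) rather than pure blue, and any tilted normal picks up red and green. A minimal sketch of that encode step (GLM used for the vector type; the function name is hypothetical):

        #include <glm/glm.hpp>

        // Remap a unit tangent-space normal from [-1, 1] to the [0, 1] colour range.
        // (0, 0, 1) -> (0.5, 0.5, 1.0); tilted normals produce the other shades.
        glm::vec3 encodeNormalAsColor(const glm::vec3& n)
        {
            return glm::normalize(n) * 0.5f + glm::vec3(0.5f);
        }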

    Read the article

  • Eye of GNOME image bug with ATI graphics driver

    - by thonixx
    I just installed the ATI driver on my Ubuntu 11.10. After some annoying bugs and errors it works for now, but there is one really stupid bug: whenever I open a picture in the default image viewer (Eye of GNOME, EOG) it shows me an overexposed picture. Example with EOG: http://ubuntuone.com/4tJHSINBUPjypmcV2EXUF5 Example of how it should be: http://ubuntuone.com/1DnwJ1pdQKUCloBcV1kcY5 How can I fix this? Update: the driver I used was 8.911-111025a-128237C-ATI with Catalyst 11.11. I installed the driver via Jockey and used the driver released with Ubuntu because the post-release driver fails every time.

    Read the article
