Search Results

Search found 639 results on 26 pages for 'goo gl'.

Page 7/26

  • Unity on Ubuntu 11.10 - The Dash Home button brings up the panel, but is empty

    - by David M. Coe
    The dash home button brings up a panel that is greyed out and totally empty. It seems to be the very same issue as this: Dash home button brings up blank window, which is unanswered. /usr/lib/nux/unity_support_test -p returns OpenGL vendor string: X.Org R300 Project OpenGL renderer string: Gallium 0.4 on ATI RV370 OpenGL version string: 2.1 Mesa 7.11 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes I've tried unity --reset but that doesn't seem to work. Unity seems to reset, but I get the following warning over and over: cs space validation failed unity What should I do next to try and fix this? Edit: Attempted fixes: I've reformatted; did not work. I've done apt-get remove unity, then apt-get update, then apt-get install unity; did not work. I've switched to Unity 2D and this seems to work. How can I get regular Unity working, or at least find the error?

    Read the article

  • Graphics hardware warning when updating to 14.04

    - by pacomet
    As I use Ubuntu at work I only update to LTS versions, but now I'm not sure if I can/should. As my computer is now ten years old I would replace it if it were mine, but as it is owned by my employer I have to work with it. It's not a bad one; it runs fine (this was not true when it still had Windows on it ;-). When updating to 14.04, it warns about possible bad/slow performance with Unity 3D, so I stopped the update since I am at work and it is not my own computer. As I understand from http://askubuntu.com/a/438958/25305, the Nvidia GeForce FX 5500 graphics card is still supported in 14.04. Now, in 12.04, I have driver version 173 and Unity 2D runs fine for me. Output of /usr/lib/nux/unity_support_test -p: OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce FX 5500/AGP/SSE2 OpenGL version string: 2.1.2 NVIDIA 173.14.39 Not software rendered: yes Not blacklisted: no GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no Should I update? Is it better to stay with 12.04? Thanks

    Read the article

  • An API for Google's URL shortening service made available to Web developers

    An API for Google's URL shortening service is made available to Web developers. Google has just published an API for Goo.gl, its URL shortening service. The API will let developers integrate the Goo.gl service into their Web pages and their various Web projects. The Goo.gl URL shortening service was rolled out by Google in 2009 to enter the "URL shortener" market and compete with Bit.ly. Since its launch...
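    For reference, a minimal Java sketch of the kind of call a Web project could make against the shortener; the endpoint https://www.googleapis.com/urlshortener/v1/url, the longUrl/id JSON field names, and the optional API key are assumptions based on the API documentation of the time, not details taken from the article above.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class GooGlSketch {
            public static void main(String[] args) throws Exception {
                // Assumed endpoint; production use would append ?key=YOUR_API_KEY
                URL api = new URL("https://www.googleapis.com/urlshortener/v1/url");
                HttpURLConnection conn = (HttpURLConnection) api.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                // Request body: the long URL to shorten
                String body = "{\"longUrl\": \"http://www.example.com/some/long/path\"}";
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body.getBytes("UTF-8"));
                }
                // The JSON response's "id" field is expected to hold the short goo.gl URL
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        }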

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself. Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place. Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor. The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
Do you think the architect role is a waste of time? Is it at odds with TDD and other practises? Can this mix of different practises be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • Why doesn't the browser recognize jQuery when <script src="...jquery.js"> is on an HTML page served by Goo

    - by indiehacker
    The jquery.js source code is not being recognised by the browser when my page.html is served by Google App Engine as an HTTP request to the SDK, BUT when I load the exact same page.html into the browser directly from my local hard drive, jquery.js works OK and is recognized, so I know my path is OK... I don't understand. In the header of my page.html I have the following: <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js" type="text/javascript"></script> I also tried without success in the header this: <script src="/static/jquery.js" type="text/javascript"></script> I am working along with the examples in the jQuery tutorial. I am sure there is something simple I don't understand about how .html pages served to the browser from App Engine interact differently with the browser than what I would normally expect... but I frustratingly just can't get it... everything else I have with my App Engine app is working fine... would love some help so I can move forward.

    Read the article

  • Tutorial for texture mapping a map onto an OpenGL ES sphere?

    - by hotpaw2
    I'm not looking for a library or even open source code. I want to learn how to do this on my own. Where do I start to find an online tutorial, a book chapter, or other educational material for generating a polygonal model of a 3D sphere suitable for feeding to OpenGL ES on an iPhone, and then mapping the polygons to some sort of 2D map data so I can texture map the sphere? Is there some sort of software tool (Blender? Maya?) with a tutorial on how to generate this data? Where is the best place to start?

    Read the article

  • SQL Server 2008: need a crosstab-like query on an XML column?

    - by user1332896
    <abc id="abc1">
      <def id="def1">
        <ghi att='ghi1'>
          <mn id="0742d2ea" name="RF" dt="0" df="3" ty="0" />
          <mn id="64d9a11b" name="CJ" dt="0" df="3" ty="0" />
          <mn id="db72d154" name="FJ" dt="2" df="4" ty="0" />
          <mn id="39af9fa1" name="BS" dt="0" df="2" ty="0" />
        </ghi>
        <jkl att='jkl1'>
          <mn id="0742d2ea" name="RF" dt="1" gl="19" />
          <mn id="64d9a11b" name="CJ" dt="0" gl="6" />
          <mn id="db72d154" name="FJ" dt="0" gl="0" />
          <mn id="39af9fa1" name="BS" dt="0" gl="12" />
          <mn id="ac4f566f" name="DJ" dt="0" gl="9" />
          <mn id="4bf3ba2f" name="RP" dt="0" gl="16" />
          <mn id="db1af021" name="SC" dt="1" gl="10" />
          <mn id="c4c93a2d" name="DN" dt="1" gl="15" />
        </jkl>
      </def>
    </abc>
    I need this output. Is this possible in SQL Server 2008?
    id        name  ghiDT  ghiDF  ghiTY  jklDT  jklGL
    0742d2ea  RF    0      3      0      1      19
    64d9a11b  CJ    0      3      0      0      6
    db72d154  FJ    2      4      0      0      0
    39af9fa1  BS    0      2      0      0      12
    ac4f566f  DJ    0      0      0      0      9
    4bf3ba2f  RP    0      0      0      0      16
    db1af021  SC    0      0      0      1      10
    c4c93a2d  DN    0      0      0      1      15

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or, if you like, you could give me other compiler messages than the ones proposed here. To summarize, the question is: what are the GLSL compiler message formats (INFO, WARNING, ERROR) for different vendors? Please give me examples or an explanation of the patterns. EDIT: Ok, it seems that this question is too broad, so, shortly: how do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).
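    Since only the AMD/ATI pattern is quoted above, here is a hedged Java sketch of the sort of parser the editor could start from; the regular expression covers only that ERROR/WARNING: <position>:<line_number>: <message> pattern and would need additional, vendor-specific patterns once their formats are known.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GlslLogParser {
            // Matches lines like "ERROR: 0:1: error(#132) Syntax error: '5' parse error"
            private static final Pattern AMD_LINE =
                    Pattern.compile("^(ERROR|WARNING):\\s*(\\d+):(\\d+):\\s*(.*)$");

            public static void main(String[] args) {
                String log = "WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.\n"
                           + "ERROR: 0:1: error(#174) Not enough data provided for construction constructor";
                for (String line : log.split("\n")) {
                    Matcher m = AMD_LINE.matcher(line.trim());
                    if (m.matches()) {
                        String severity = m.group(1);
                        int lineNumber = Integer.parseInt(m.group(3)); // the number the editor would hyperlink
                        String message = m.group(4);
                        System.out.printf("%s at line %d: %s%n", severity, lineNumber, message);
                    }
                }
            }
        }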

    Read the article

  • foreign key and index issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. I have a table and one of its columns refers to a column in another table (in the same database) as a foreign key. Here is the related SQL statement; in more detail, column [AnotherID] in table [Foo] refers to column [GID] of another table [Goo] as a foreign key. [GID] is the primary key and clustered index on table [Goo]. My question is: if I do not create an index on the [AnotherID] column of [Foo] explicitly, will an index be created automatically for [AnotherID] on [Foo] -- because its foreign key reference column [GID] on table [Goo] already has a primary clustered key index? CREATE TABLE [dbo].[Foo]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [AnotherID] [int] NULL, [InsertTime] [datetime] NULL DEFAULT (getdate()), CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ALTER TABLE [dbo].[Foo] WITH CHECK ADD CONSTRAINT [FK_Foo] FOREIGN KEY([AnotherID]) REFERENCES [dbo].[Goo] ([GID]) ALTER TABLE [dbo].[Foo] CHECK CONSTRAINT [FK_Foo] thanks in advance, George

    Read the article

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere: public Ball(GL10 gl, float radius) { ByteBuffer bb = ByteBuffer.allocateDirect(40000); bb.order(ByteOrder.nativeOrder()); sphereVertex = bb.asFloatBuffer(); points = build(); } private int build() { double dTheta = STEP * Math.PI / 180; double dPhi = dTheta; int points = 0; for(double phi = -(Math.PI/2); phi <= Math.PI/2; phi+=dPhi) { for(double theta = 0.0; theta <= (Math.PI * 2); theta+=dTheta) { sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta))); sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta))); sphereVertex.put((float) (radius * Math.cos(phi))); points++; } } sphereVertex.position(0); return points; } public void draw() { texture.bind(); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex); gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } My problem now is that I want to use this texture for the sphere, but then only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture: 0|0 0|1 1|1 1|0 That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
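    For reference, one common mapping wraps u around the sphere with theta and runs v from pole to pole with phi, giving one texture coordinate pair per generated vertex rather than four corner coordinates for the whole sphere. A hedged sketch of how build() could be extended, assuming a FloatBuffer field sphereTexCoords allocated next to sphereVertex (2 floats per point) and the radius and STEP fields from the class above:

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;
            for (double phi = -(Math.PI / 2); phi <= Math.PI / 2; phi += dPhi) {
                for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (radius * Math.cos(phi)));
                    // per-vertex texture coordinates in [0,1]
                    sphereTexCoords.put((float) (theta / (2 * Math.PI)));
                    sphereTexCoords.put((float) ((phi + Math.PI / 2) / Math.PI));
                    points++;
                }
            }
            sphereVertex.position(0);
            sphereTexCoords.position(0);
            return points;
        }

    draw() would then also enable GL10.GL_TEXTURE_COORD_ARRAY and call gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, sphereTexCoords) before glDrawArrays.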

    Read the article

  • SFML program fails to debug with glslDevil

    - by Zhen
    I'm testing the glslDevil debugger with a simple (and working) SFML application on Linux + NVidia. But it always fails in the window creation step: W! Program Start | glXGetConfig(0x86a50b0, 0x86acef8, 4, 0xbf8228c4) | glXGetConfig(0x86a50b0, 0x86acef8, 5, 0xbf8228c8) | glXGetConfig(0x86a50b0, 0x86acef8, 8, 0xbf8228cc) | glXGetConfig(0x86a50b0, 0x86acef8, 9, 0xbf8228d0) | glXGetConfig(0x86a50b0, 0x86acef8, 10, 0xbf8228d4) | glXGetConfig(0x86a50b0, 0x86acef8, 11, 0xbf8228d8) | glXGetConfig(0x86a50b0, 0x86acef8, 12, 0xbf8228dc) | glXGetConfig(0x86a50b0, 0x86acef8, 13, 0xbf8228e0) | glXGetConfig(0x86a50b0, 0x86acef8, 100000, 0xbf8228e4) | glXGetConfig(0x86a50b0, 0x86acef8, 100001, 0xbf8228e8) | glXCreateContext(0x86a50b0, 0x86acef8, (nil), 1) E! Child process exited W! Program termination forced! And the code that fails: #include <SFML/Graphics.hpp> #define GL_GLEXT_PROTOTYPES 1 #define GL3_PROTOTYPES 1 #include <GL/gl.h> #include <GL/glu.h> #include <GL/glext.h> int main(){ sf::RenderWindow window{ sf::VideoMode(800, 600), "Test SFML+GL" }; bool running = true; while( running ){ sf::Event event; while( window.pollEvent(event) ){ if( event.type == sf::Event::Closed ){ running = false; }else if(event.type == sf::Event::Resized){ glViewport(0, 0, event.size.width, event.size.height); } } window.display(); } return 0; } Is it possible to solve this problem, or to work around it so I can continue using glslDevil?

    Read the article

  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL based) and have run across some code from Pro Android Games 2 that looks as follows: /* * Copyright (C) 2007 Google Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package opengl.scenes.cubes; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.IntBuffer; import javax.microedition.khronos.opengles.GL10; public class Cube { public Cube(){ int one = 0x10000; int vertices[] = { -one, -one, -one, one, -one, -one, one, one, -one, -one, one, -one, -one, -one, one, one, -one, one, one, one, one, -one, one, one, }; int colors[] = { 0, 0, 0, one, one, 0, 0, one, one, one, 0, one, 0, one, 0, one, 0, 0, one, one, one, 0, one, one, one, one, one, one, 0, one, one, one, }; byte indices[] = { 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; // Buffers to be passed to gl*Pointer() functions // must be direct, i.e., they must be placed on the // native heap where the garbage collector cannot // move them. // // Buffers with multi-byte datatypes (e.g., short, int, float) // must have their byte order set to native order ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length*4); vbb.order(ByteOrder.nativeOrder()); mVertexBuffer = vbb.asIntBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length*4); cbb.order(ByteOrder.nativeOrder()); mColorBuffer = cbb.asIntBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length); mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); } private IntBuffer mVertexBuffer; private IntBuffer mColorBuffer; private ByteBuffer mIndexBuffer;} So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way to do things?
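    For what it's worth, OpenGL ES 1.0 exposes no GL_POLYGON or GL_QUADS primitive, so the whole cube cannot be drawn as polygons; the closest per-face equivalent is GL_TRIANGLE_FAN, which renders a convex polygon from its vertices. A hedged sketch of a face-by-face variant of draw(), assuming a hypothetical mFaceIndexBuffer array of six small index buffers, each holding one face's four corners in winding order:

        public void drawAsFans(GL10 gl) {
            gl.glFrontFace(GL10.GL_CW);
            gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
            gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
            for (int face = 0; face < 6; face++) {
                mFaceIndexBuffer[face].position(0);
                // 4 indices -> a fan of two triangles covering the quad face
                gl.glDrawElements(GL10.GL_TRIANGLE_FAN, 4, GL10.GL_UNSIGNED_BYTE, mFaceIndexBuffer[face]);
            }
        }

    Whether that is clearer is debatable: it trades one draw call for six, and the index data still has to describe the same geometry.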

    Read the article

  • problem adding bumpmap to textured gluSphere in JOGL

    - by ChocoMan
    I currently have one texture on a gluSphere that represents the Earth being displayed perfectly, but I'm having trouble figuring out how to implement a bumpmap as well. The bumpmap resides in "res/planet/earth/earthbump1k.jpg". Here is the code I have for the regular texture: gl.glTranslatef(xPath, 0, yPath + zPos); gl.glColor3f(1.0f, 1.0f, 1.0f); // base color for earth earthGluSphere = glu.gluNewQuadric(); colorTexture.enable(); // enable texture colorTexture.bind(); // bind texture // draw sphere... glu.gluDeleteQuadric(earthGluSphere); colorTexture.disable(); // texturing public void loadPlanetTexture(GL2 gl) { InputStream colorMap = null; try { colorMap = new FileInputStream("res/planet/earth/earthmap1k.jpg"); TextureData data = TextureIO.newTextureData(colorMap, false, null); colorTexture = TextureIO.newTexture(data); colorTexture.getImageTexCoords(); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); colorTexture.setTexParameteri(GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorMap.close(); } catch(IOException e) { e.printStackTrace(); System.exit(1); } // Set material properties gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR); gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S); colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T); } How would I add the bumpmap as well to the same gluSphere?

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.
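    For comparison, outside of Processing's wrapper the plain JOGL 2 route is roughly the following; the package name is javax.media.opengl in early JOGL 2 builds (com.jogamp.opengl in later ones), and how to hand these capabilities to Processing 2.0 itself remains the open question.

        import java.awt.Frame;
        import javax.media.opengl.*;            // com.jogamp.opengl.* in later JOGL 2 releases
        import javax.media.opengl.awt.GLCanvas;

        public class QuadBufferSketch implements GLEventListener {
            public static void main(String[] args) {
                GLProfile profile = GLProfile.get(GLProfile.GL2);
                GLCapabilities caps = new GLCapabilities(profile);
                caps.setDoubleBuffered(true);
                caps.setStereo(true);                  // request a quad-buffered visual
                GLCanvas canvas = new GLCanvas(caps);  // the canvas owns the drawable and context
                canvas.addGLEventListener(new QuadBufferSketch());
                Frame frame = new Frame("Quad buffer test");
                frame.add(canvas);
                frame.setSize(600, 600);
                frame.setVisible(true);
            }

            public void display(GLAutoDrawable drawable) {
                GL2 gl = drawable.getGL().getGL2();
                gl.glDrawBuffer(GL2.GL_BACK_LEFT);     // left-eye pass
                gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
                // ... draw left-eye view ...
                gl.glDrawBuffer(GL2.GL_BACK_RIGHT);    // right-eye pass
                gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
                // ... draw right-eye view ...
            }

            public void init(GLAutoDrawable drawable) {}
            public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {}
            public void dispose(GLAutoDrawable drawable) {}
        }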

    Read the article

  • Rendering 2D Tile World (With Player In The Middle)

    - by Mick
    What I have at the moment is a series of data structures I'm using, and I would like to render the world onto the screen (just the visible parts). I've actually already done this several times (lots of rewrites), but it's a bit buggy (rounding seems to make the screen jump ever so slightly every x tiles the player walks past). Basically I've been confusing myself heavily on what I feel should be a pretty simple problem... so here I am asking for some help! OK! So I have a 50x50 array holding the tiles of the world. I have the player position as 2 floats, x ([0, 49]) and y ([0, 49]) in that array. I have the application size exactly in pixels (x and y). I have an arbitrary TILE_SIZE static int (based on screen pixels). What I think is heavily confusing me is using a 2d orthogonal projection in opengl which maps (0,0) to the top left of the screen and (SCREEN_SIZE_X, SCREEN_SIZE_Y) to the bottom right of the screen. gl.glMatrixMode(GL.GL_PROJECTION); gl.glLoadIdentity(); glu.gluOrtho2D(0, getActualWidth(), getActualHeight(), 0); gl.glMatrixMode(GL.GL_MODELVIEW); gl.glLoadIdentity(); The map tiles are set so that the (0,0) in the array is the bottom left. And the player has to be in the middle of the screen (SCREEN_SIZE_X/2, SCREEN_SIZE_Y/2). What I've been doing so far is trying to render 1-2 tiles more all around what would be displayed on the screen so that I don't have to worry about figuring out rendering half a tile from the top left, depending where the player is. It seems like such an easy problem but after spending about 40+ hours on it, rewriting it many times, I think I'm at a point where I just can't think clearly anymore... Any help would be appreciated. It would be great if someone could provide some very basic pseudo code for keeping the player in the middle when your projection is mapped to screen coordinates and only rendering, basically, the tiles that you would actually be able to see. Thanks!
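    For reference, a hedged Java sketch of the basic pseudocode asked for above: compute the camera's top-left corner in pixels from the player's float position, then draw only the tiles overlapping the screen, offset by that camera position. drawTile is a hypothetical helper, and with the ortho setup above (y pointing down) the map's y axis may need flipping.

        void renderVisibleTiles(float playerX, float playerY) {
            // camera top-left corner in world pixels, so the player lands at screen centre
            float camX = playerX * TILE_SIZE - SCREEN_SIZE_X / 2.0f;
            float camY = playerY * TILE_SIZE - SCREEN_SIZE_Y / 2.0f;

            // range of tile indices that can appear on screen, clamped to the 50x50 map
            int firstX = Math.max(0, (int) Math.floor(camX / TILE_SIZE));
            int firstY = Math.max(0, (int) Math.floor(camY / TILE_SIZE));
            int lastX = Math.min(49, (int) Math.floor((camX + SCREEN_SIZE_X) / TILE_SIZE));
            int lastY = Math.min(49, (int) Math.floor((camY + SCREEN_SIZE_Y) / TILE_SIZE));

            for (int ty = firstY; ty <= lastY; ty++) {
                for (int tx = firstX; tx <= lastX; tx++) {
                    // keep the offset in floats until the draw call so scrolling stays smooth
                    float screenX = tx * TILE_SIZE - camX;
                    float screenY = ty * TILE_SIZE - camY;
                    drawTile(map[ty][tx], screenX, screenY);
                }
            }
            // the player sprite is then drawn at (SCREEN_SIZE_X / 2, SCREEN_SIZE_Y / 2)
        }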

    Read the article

  • I enabled and set up glBlendFunc, but my texture has a white outline. What am I doing wrong?

    - by vinzBad
    You can see most of my source code in this question: Instead of the specified Texture, black circles on a green background are getting rendered. Why? Now I have the problem that my texture has a white outline on its transparent parts. After googling and setting up glBlendFunc, the outline just got "softer". This is how it looks: This is how I now set up OpenGL: public static void SetupGL() { GL.Enable(EnableCap.Blend); GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); }
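    One frequently cited cause of such a halo is the RGB of fully transparent pixels bleeding in through texture filtering; a common counter-measure is premultiplied alpha, i.e. multiply RGB by alpha before upload and blend with (One, OneMinusSrcAlpha) instead of (SrcAlpha, OneMinusSrcAlpha). That is an assumption about this particular outline, not a confirmed diagnosis; a minimal sketch of the pixel pass, written in Java for illustration:

        // Premultiply ARGB pixels in place before uploading them as a texture.
        static void premultiply(int[] argbPixels) {
            for (int i = 0; i < argbPixels.length; i++) {
                int p = argbPixels[i];
                int a = (p >>> 24) & 0xFF;
                int r = ((p >>> 16) & 0xFF) * a / 255;
                int g = ((p >>> 8) & 0xFF) * a / 255;
                int b = (p & 0xFF) * a / 255;
                argbPixels[i] = (a << 24) | (r << 16) | (g << 8) | b;
            }
        }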

    Read the article

  • JavaScript WebGL error

    - by Chris
    Hi, I can load 400 textured cylinders using o3d-webgl; as soon as I push the test up to 600 cylinders I get the following: uncaught exception: [Exception... "Could not convert JavaScript argument arg 0 [nsIDOMWebGLRenderingContext.getShaderInfoLog" nsresult: "0x80570009 (NS_ERROR_XPC_BAD_CONVERT-JS)" location: "JS frame :: file:///d:/o3d-webgl/effect.js :: :: line 181" data: no) This is the error from Minefield; does anyone know what this means? This is the function: o3d.Effect.prototype.bindAttributesAndLinkIfReady = function() { if (this.vertexShaderLoaded_ && this.fragmentShaderLoaded_) { var semanticMap = o3d.Effect.semanticMap_; for (var name in semanticMap) { this.gl.bindAttribLocation( this.program_, semanticMap[name].gl_index, name); } this.gl.linkProgram(this.program_); if (!this.gl.getProgramParameter(this.program_, this.gl.LINK_STATUS)) { var log = this.gl.getShaderInfoLog(this.program_); this.gl.client.error_callback( 'Program link failed with error log:\n' + log); } this.getUniforms_(); this.getAttributes_(); } }; It falls over on ... var log = this.gl.getShaderInfoLog(this.program_);

    Read the article

  • What does setting the GL color before doing a texture mapping operation do?

    - by quixoto
    I am looking at some sample code in a book that creates a jittered antialiasing effect by repeatedly rendering a scene (at different offsets) onto an offscreen texture, then using that texture to repeatedly draw a quad in the main view with some blend stuff set up. To accumulate the color "correctly", the code is setting the color like so: glColor4f(f, f, f, 1); where f is 1.0/number_of_samples, and then binding the offscreen texture and rendering it. Since textures come with their own color and alpha data, what is the effect (mathematically and intuitively) that setting the overall "color" in advance achieves? Thanks.
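    For reference, the fixed-function default texture environment is GL_MODULATE, under which the drawn fragment is the texel multiplied component-wise by the current color, so glColor4f(f, f, f, 1) with f = 1.0/number_of_samples scales each accumulated pass to its share of the average. A small JOGL-flavored sketch of that state, purely illustrative (the book's code may target a different binding):

        // Assumes a JOGL GL2 handle; GL_MODULATE is already the default, shown here for clarity.
        static void setAccumulationPassColor(GL2 gl, int numberOfSamples) {
            gl.glTexEnvi(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL2.GL_MODULATE);
            float f = 1.0f / numberOfSamples;
            gl.glColor4f(f, f, f, 1.0f); // each pass contributes texel * f, which the blend accumulates
        }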

    Read the article

  • How to set up/calculate the texture buffer in glTexCoordPointer when importing from an OBJ file

    - by JohnMurdoch
    Hi all, I'm parsing an OBJ file in Android and my goal is to render & display the object. Everything works fine except the correct texture mapping (importing the resource/image into OpenGL etc. works fine). I don't know how to populate the texture-related data from the OBJ file into a texture buffer object. In the OBJ file I have vt lines: vt 0.495011 0.389417 vt 0.500686 0.561346 and face lines: f 127/73/62 98/72/62 125/75/62 My draw routine looks like this (only the relevant parts): gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_NORMAL_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer); gl.glTexCoordPointer(2, GL10.GL_SHORT, 0, t.getvtBuffer()); gl.glDrawElements(GL10.GL_TRIANGLES, t.getFacesCount(), GL10.GL_UNSIGNED_SHORT, t.getFaceBuffer()); Output of the counts of the OBJ file: Vertex-count: 1023 Vns-count: 1752 Vts-count: 524 ///////////////////////// Part 0 Material name:default Number of faces:2037 Number of vnPointers:2037 Number of vtPointers:2037 Any advice is welcome.
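    For reference, a hedged sketch of the usual expansion step: since OpenGL ES 1.x uses a single index per vertex, the vt index from each v/vt/vn triplet is typically used to build a float texture coordinate buffer in the same order as the faces, and glTexCoordPointer is then given GL10.GL_FLOAT rather than GL_SHORT. The names vtList and faceVtIndices below are illustrative, not from the post.

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import java.util.List;

        // vtList holds the u,v pairs from the "vt" lines in file order; faceVtIndices holds,
        // for every face vertex in draw order, the middle (vt) number of its v/vt/vn triplet.
        static FloatBuffer buildTexCoordBuffer(List<float[]> vtList, List<Integer> faceVtIndices) {
            ByteBuffer bb = ByteBuffer.allocateDirect(faceVtIndices.size() * 2 * 4);
            bb.order(ByteOrder.nativeOrder());
            FloatBuffer texBuffer = bb.asFloatBuffer();
            for (int vtIndex : faceVtIndices) {
                float[] uv = vtList.get(vtIndex - 1);   // OBJ indices are 1-based
                texBuffer.put(uv[0]);
                texBuffer.put(1.0f - uv[1]);            // OBJ and GL often disagree on the v origin
            }
            texBuffer.position(0);
            return texBuffer;
        }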

    Read the article

  • glTexImage2D + byte[]

    - by miniMe
    How can I upload pixels from a simple byte array to an OpenGl texture ? I'm using glTexImage2D and all I get is a white rectangle instead of a pixelated texture. The 9th parameter (32-bit pointer to the pixel data) is IMO the problem. I tried lots of parameter types there (byte, ref byte, byte[], ref byte[], int & IntPtr + Marshall, out byte, out byte[], byte*). glGetError() always returns GL_NO_ERROR. There must be something I'm doing wrong because it's never some gibberish pixels. It's always white. glGenTextures works correct. The first id has the value 1 like always in OpenGL. And I draw colored lines without any problem. So something is wrong with my texturing. I'm in control of the DllImport. So I can change the parameter types if necessary. GL.glBindTexture(GL.GL_TEXTURE_2D, id); int w = 4; int h = 4; byte[] bytes = new byte[w * h * 4]; for (int i = 0; i < bytes.Length; i++) bytes[i] = (byte)Utils.random(256); GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes); [DllImport(GL_LIBRARY)] public static extern void glTexImage2D(uint what, int level, int internalFormat, int width, int height, int border, int format, int type, byte[] bytes);

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D of Gnome 3 to work on my VirtualBox - I have been poring over Internet and forum posts but to no avail. Here's what I've done so far: VirtualBox 4.1.4r74921 on Windows 7; installed Ubuntu Desktop 11.10 (32 bit); enabled 3D acceleration; allocated 1.5GB of RAM; allocated 50MB video memory (hope this is not the culprit); installed Guest Additions 4.1.4; did apt-get update and apt-get upgrade; booted back into Ubuntu - it falls back to Unity 2D. Shared folders and mouse integration all work, so the Guest Additions are properly installed. Tried the command and below is the output: /usr/lib/nux/unity_support_test -p OpenGL vendor string: Mesa Project OpenGL renderer string: Software Rasterizer OpenGL version string: 2.1 Mesa 7.11 Not software rendered: no Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: no GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no I am trying to find what the "no" means but cannot find any good answers. Intel Core i5 processor, 4GB of RAM on the host, display adapter: NVIDIA GeForce 8400GS. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find a solution?

    Read the article

  • Google+ Platform Office Hours for March 21, 2012: JavaScript and the REST APIs

    Google+ Platform Office Hours for March 21, 2012: JavaScript and the REST APIs It's a blast from the past. Here's the video from our office hours held on the 20th of March. In this session Jonathan and Wolff guided you through using the REST APIs with JavaScript. Get the source code: goo.gl Discuss this video on Google+: goo.gl
    1:05 - How to use JSONP to access the REST APIs in JavaScript
    2:30 - Setting up a new project in the API console
    7:06 - The client libraries, what are they?
    8:39 - Using the JavaScript client library to reimplement our example
    13:27 - About OAuth and private resources
    14:26 - Creating an OAuth client using the API console - The JavaScript client library discussion group - goo.gl
    24:14 - Using the JavaScript client library and REST APIs from within a hangout - hangoutbots.blogspot.com
    Q&A
    19 - Are you planning to change the +1 button back?
    20:14 - Will this video be posted to YouTube? (Spoiler: the answer is yes)
    20:43 - Do you read your issues list? - The Google+ platform issue tracker: goo.gl
    From: GoogleDevelopers | Time: 27:03

    Read the article

  • Follow your friends on StackOverflow with FriendOverflow

    - by Mike Grace
    Screenshot
    About
    I created this app because I wanted to see what my friends and co-workers were doing on StackOverflow. I was previously going to their profiles to see what they were asking, answering, and commenting on because most of the time I found what they were doing was interesting or relevant to what I was doing. This app is for anyone who visits StackOverflow using their desktop browser and has 'friends' they would like to follow on StackOverflow.
    Cost
    Free
    Download
    Google Chrome extension http://goo.gl/ooE34
    Mozilla Firefox extension http://goo.gl/3Pnqa
    Bookmarklet http://goo.gl/FkuQW
    Platform
    Desktop browsers via Google Chrome extension, Mozilla Firefox extension, and bookmarklet
    Contact
    @MikeGrace
    Code
    App was built on the Kynetx platform using KRL (Kynetx Rule Language)

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi! I'll start by saying that I'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other language experience. I've looked at a lot of tutorials, of course Nehe's ones, and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awesome :p). So far, I have achieved a working tile generator; I define a char map[][] array to store which tile goes where: private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working but I'm not happy with it; I'm sure there is a better way to do these things: 1) Loading Textures : I create an ugly-looking array containing the tiles I want to use on that map : private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cut this on purpose; I currently load 27 tiles) All of these are stored in the drawable folder; each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for later use : int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just was unable to figure it out. If someone has an idea, I'm all ears.
2) Drawing the tiles What I did was create a single square and a single texture map : /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers) : for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the beginning of the map X-wise and move 2y down before drawing the next line } This works great, but I really think that on a 1000*1000 or larger map it will be lagging as hell (as a reminder, this is a typical Zelda world map : http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Objects and DisplayLists but I couldn't find a good tutorial and nobody seems to agree on which one is the best / has the better support (T1 and Nexus One are ages away). I think that's it; I've put in a lot of code but I think it helps. Thanks in advance!
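    As a hedged sketch of one common answer to question 2: pack the 27 tiles into a single texture atlas, build one position array and one texture coordinate array for the whole visible region, and issue a single glDrawArrays per frame instead of a bind plus draw per tile. Everything below is illustrative; putQuad, putQuadUv and newFloatBuffer are hypothetical small helpers (fill two triangles' positions/UVs and allocate a direct FloatBuffer).

        // One atlas texture laid out as a grid of atlasCols x atlasRows equally sized cells.
        void drawMapBatched(GL10 gl, char[][] map, int atlasTexId, int atlasCols, int atlasRows) {
            int tilesY = map.length, tilesX = map[0].length;
            FloatBuffer pos = newFloatBuffer(tilesX * tilesY * 6 * 3); // 2 triangles per tile
            FloatBuffer uv = newFloatBuffer(tilesX * tilesY * 6 * 2);
            float cellW = 1.0f / atlasCols, cellH = 1.0f / atlasRows;
            for (int y = 0; y < tilesY; y++) {
                for (int x = 0; x < tilesX; x++) {
                    int tile = map[y][x];
                    float u0 = (tile % atlasCols) * cellW;
                    float v0 = (tile / atlasCols) * cellH;
                    putQuad(pos, x * 2.0f, y * 2.0f, 2.0f);  // the 2-unit squares used above
                    putQuadUv(uv, u0, v0, cellW, cellH);     // matching atlas cell coordinates
                }
            }
            pos.position(0);
            uv.position(0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, atlasTexId); // one bind...
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, pos);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, uv);
            gl.glDrawArrays(GL10.GL_TRIANGLES, 0, tilesX * tilesY * 6); // ...one draw call
        }

    On a 1000x1000 map the same idea applies, but only the tiles inside (or just outside) the camera rectangle would be batched each frame; a VBO then mainly saves re-sending buffers that do not change.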

    Read the article

  • VB.NET custom user control graphics rotation

    - by AtharvaI
    Hi, this is my first question on here. I'm trying to build a dial control as a custom user control in VB.NET. I'm using VS2008. So far I have managed to rotate an image using Graphics.RotateTransform; however, this rotates everything. Now I have a Bitmap for the dial, which should stay stable, and another Bitmap for the needle, which I need to rotate. So far I've tried this: Dim gL As Graphics = Graphics.FromImage(bmpLongNeedle) gL.TranslateTransform(bmpLongNeedle.Width / 2, bmpLongNeedle.Height * 0.74) gL.RotateTransform(angleLongNeedle) gL.TranslateTransform(-bmpLongNeedle.Width / 2, -bmpLongNeedle.Height * 0.74) gL.DrawImage(bmpLongNeedle, 0, 0) As I understand it, the image of the needle should be rotated at angle "angleLongNeedle", although I'm placing the rotated image at 0,0. However, the result is that the needle doesn't get drawn on the control. Any pointers as to where I might be going wrong, or something else I should be doing? Thanks in advance
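    For comparison, a hedged Java/Graphics2D sketch of the usual sequence: draw the stable dial first, then draw the needle through a transform that rotates about the pivot. The key difference from the code above is that the Graphics object comes from the destination surface (the control's paint event or an off-screen buffer), not from the needle bitmap itself; the same translate/rotate ordering applies in System.Drawing.

        import java.awt.Graphics2D;
        import java.awt.geom.AffineTransform;
        import java.awt.image.BufferedImage;

        // g is the destination surface's Graphics2D; pivotX/pivotY locate the rotation
        // centre inside the needle image (e.g. width/2 and height*0.74 as above).
        static void drawDial(Graphics2D g, BufferedImage dial, BufferedImage needle,
                             double angleDegrees, double pivotX, double pivotY) {
            g.drawImage(dial, 0, 0, null);                               // stable dial face
            AffineTransform saved = g.getTransform();
            g.translate(dial.getWidth() / 2.0, dial.getHeight() / 2.0);  // origin to dial centre
            g.rotate(Math.toRadians(angleDegrees));                      // spin about that point
            g.drawImage(needle, (int) -pivotX, (int) -pivotY, null);     // pivot lands on the centre
            g.setTransform(saved);                                       // restore for later drawing
        }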

    Read the article
