Search Results


  • GUI for DirectX

    - by DeadMG
    I'm looking for a GUI library built on top of DirectX, preferably 9, but I can also do 11. I've looked at stuff like DXUT, but it's way too much for me: I only need some UI controls, which I would rather not write (and debug) myself, and its need to keep a C-compatible API is definitely a big downside. I'd rather look at UI libs that are designed to be integrated into an existing DirectX-based system, rather than forming the basis of a system. Any recommendations?

    Read the article

  • Good GUI for OpenGL

    - by Cristina
    I am starting to learn OpenGL with FreeGLUT, using the Superbible and the knowledge I have from my elementary graphics course to brush up on my skills. To get more from this experience I want to integrate a GUI to replace the one FreeGLUT uses, and my question is this: is this possible, and what library should I use? Some characteristics for the library:
    - Open source
    - Multi-platform (Linux and Windows)
    - C/C++
    If you have any other recommendations, please feel free to post them along with your answers to my problem.

    Read the article

  • What is the XACT API?

    - by EddieV223
    I wanted to use DirectMusic in my game, but it's not in the June 2010 SDK, so I thought that I had to use DirectSound. Then I saw the XAudio2.h header in the SDK's include folder and found that XAudio2 is the replacement for DirectSound. Both are low-level. During my research I stumbled across the XACT API, but can't find a good explanation on it. Is XACT to XAudio2 what DirectMusic was to DirectSound? By which I mean, is the XACT API a high-level, easier-to-use API for playing sounds that abstracts away the details of XAudio2? If not, what is it?

    Read the article

  • Best way to load rigid bodies from file

    - by Mel
    I'm trying to switch to Bullet for physics simulation. Let me just say first that I am so pleased with Bullet's accuracy and performance. After messing around with it for a bit, I'm now trying to load rigid bodies from files. Most of my models are in Blender, and with some searching I was able to export them in .bullet format. However, loading the files into Bullet doesn't look like an easy task. I've come across this page that points me to a sample application that loads .bullet files, but it then goes on to say that this loader is just a starting point. Is there any open source library out there that will allow me to load rigid bodies from a file? I don't really want to spend that much time trying to create my own loader.

    Read the article

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built on completely open standards (meaning Javascript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only possible in Flash (like the stuff here: http://www.html5rocks.com/) but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path polygon for speed. I'm having trouble finding a way to get collision with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly. I'm using Cartesian vectors and they're 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object, object.pos is its position and object.vel is its velocity; they're both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return which vertices were hit or the collision point. I'd really like to be able to understand this and make my own so I can have more meaningful collision. I'll tackle that later, though. Again, the question is just: how does one find a collision with the walls of a poly, given the poly vertices and the object's position and velocity? If more info is needed please let me know. And if all anyone can do is point me in the right direction, that's great.
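
    One way to approach it, sketched in C++ rather than JavaScript and under the assumption that the object can be treated as a circle with some radius: for each polygon edge, find the closest point on that edge to the object's position; if that point is within the radius, you have both the colliding edge and the contact point.

        #include <vector>

        struct Vec2 { float x, y; };

        // Closest point to p on the segment a-b.
        Vec2 closestPointOnSegment(Vec2 a, Vec2 b, Vec2 p) {
            Vec2 ab{b.x - a.x, b.y - a.y};
            float t = ((p.x - a.x) * ab.x + (p.y - a.y) * ab.y) / (ab.x * ab.x + ab.y * ab.y);
            if (t < 0.0f) t = 0.0f;                      // clamp to the segment's endpoints
            if (t > 1.0f) t = 1.0f;
            return {a.x + t * ab.x, a.y + t * ab.y};
        }

        // Returns the index of the first edge the circle (pos, radius) touches,
        // or -1 if there is no contact; 'hit' receives the contact point.
        int circleVsPolygon(const std::vector<Vec2>& poly, Vec2 pos, float radius, Vec2& hit) {
            for (size_t i = 0; i < poly.size(); ++i) {
                Vec2 a = poly[i];
                Vec2 b = poly[(i + 1) % poly.size()];    // wrap around to close the polygon
                Vec2 c = closestPointOnSegment(a, b, pos);
                float dx = pos.x - c.x, dy = pos.y - c.y;
                if (dx * dx + dy * dy <= radius * radius) { hit = c; return (int)i; }
            }
            return -1;
        }

    The normalised vector from the contact point back to the object's position is the collision normal for that edge, which is what a meaningful response needs; fast-moving objects can be handled by running the same test against pos plus vel, or by sweeping along the velocity.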

    Read the article

  • cocos2d/OpenGL multitexturing problem

    - by Gajoo
    I've got a simple shader to test multitexturing; the problem is that both samplers are using the same image as their reference. The shader code is basically just this:

        vec4 mid = texture2D(u_texture, v_texCoord);
        float g = texture2D(u_guide, v_guideCoord);
        gl_FragColor = vec4(g, mid.g, 0, 1);

    and this is how I'm calling the draw function:

        int last_State;
        glGetIntegerv(GL_ACTIVE_TEXTURE, &last_State);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, getTexture()->getName());
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mGuideTexture->getName());
        ccGLEnableVertexAttribs(kCCVertexAttribFlag_TexCoords | kCCVertexAttribFlag_Position);
        glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, texCoord);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_TEXTURE_2D);

    I've already checked that mGuideTexture->getName() and getTexture()->getName() are returning the correct textures, but looking at the result I can tell both samplers are reading from getTexture()->getName(). Here are some screenshots showing what is happening: the image rendered using the above code, and the image rendered when I change the textures passed to the samplers. I'm expecting to see the green objects from the first picture with the red objects hanging from the top.
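
    For what it's worth, a frequent cause of "both samplers read the same texture" is that the sampler uniforms are never told which texture unit to use, so both default to unit 0. A hedged sketch using raw GL calls (u_texture and u_guide come from the question; program stands in for whatever handle the cocos2d shader wrapper exposes):

        // Once after the program is linked (or each frame before drawing):
        GLint texLoc   = glGetUniformLocation(program, "u_texture");
        GLint guideLoc = glGetUniformLocation(program, "u_guide");
        glUseProgram(program);
        glUniform1i(texLoc,   0);   // u_texture samples from GL_TEXTURE0
        glUniform1i(guideLoc, 1);   // u_guide samples from GL_TEXTURE1

    Note also that texture2D() returns a vec4, so assigning it straight to a float (the g line in the shader) may be rejected by stricter GLSL compilers; taking .r is the safer spelling.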

    Read the article

  • How do you pack resources in a game when you have too many of them?

    - by ThePlan
    I've recently made a basic Space Invaders clone in C++ using the Allegro 5 framework. It took me a long time, but after I finished, I realized I had about 10 sprites and 13MB worth of DLLs (some people didn't even have the MinGW DLLs), which was making the people who played the game very confused. How can I "pack" all my resources in a way that lets me easily add and remove data in my game, and reduce the size taken up by the resources, basically placing them in one spot? I'm using Code::Blocks.
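
    One common pattern, offered as a sketch rather than the Allegro-specific answer: concatenate every resource into a single pack file with a small table of contents (name, offset, size) at the front, and read individual resources by seeking to their offset. Existing libraries such as PhysicsFS, which can mount plain zip archives, do the same job without a homemade format. The layout below is entirely hypothetical:

        #include <cstdint>
        #include <cstdio>
        #include <string>
        #include <vector>

        // Hypothetical layout: [uint32 count][count entries][raw file data...]
        struct PackEntry { char name[64]; uint32_t offset; uint32_t size; };

        std::vector<unsigned char> loadFromPack(const char* packPath, const std::string& wanted) {
            std::vector<unsigned char> data;
            FILE* f = std::fopen(packPath, "rb");
            if (!f) return data;
            uint32_t count = 0;
            std::fread(&count, sizeof(count), 1, f);
            std::vector<PackEntry> toc(count);
            std::fread(toc.data(), sizeof(PackEntry), count, f);
            for (const PackEntry& e : toc) {
                if (wanted == e.name) {                      // found the resource we want
                    data.resize(e.size);
                    std::fseek(f, (long)e.offset, SEEK_SET); // jump to its bytes
                    std::fread(data.data(), 1, e.size, f);
                    break;
                }
            }
            std::fclose(f);
            return data;
        }

    The DLL size is a separate issue: packing only helps with sprites and other assets, while the DLLs shrink by linking statically or shipping only the ones the game actually loads.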

    Read the article

  • Getting isometric grid coordinates from standard X,Y coordinates

    - by RoryHarvey
    I'm currently trying to add sprites to an isometric Tiled TMX map using objects in cocos2d. The problem is that the X and Y metadata from the TMX object are in standard 2D format (pixels x, pixels y) instead of isometric grid X and Y format. Usually you would just divide them by the tile size, but isometric needs some sort of transform. For example, on a 64x32 isometric tilemap of size 40 tiles by 40 tiles, an object at (20,21) has coordinates that come out as (640,584). So the question really is: what formula gets (20,21) from (640,584)?
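
    In case it is useful, the standard conversion for a diamond-shaped isometric layout is sketched below. The exact origin offset depends on where Tiled anchors object coordinates for isometric maps, so treat the constants as assumptions to check against the map rather than as the formula Tiled is guaranteed to use:

        // Pixel -> isometric grid for a diamond layout (tileW = 64, tileH = 32 here).
        // Assumes the pixel origin is at the top corner of tile (0, 0); if the map
        // stores object positions relative to a different origin, subtract it first.
        struct GridPos { float x, y; };

        GridPos pixelToIso(float px, float py, float tileW = 64.0f, float tileH = 32.0f) {
            GridPos g;
            g.x = (py / tileH) + (px / tileW);
            g.y = (py / tileH) - (px / tileW);
            return g;
        }

        // The forward transform, useful for checking that the result round-trips:
        //   px = (gx - gy) * tileW / 2;
        //   py = (gx + gy) * tileH / 2;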

    Read the article

  • Implementing Circle Physics in Java

    - by Shijima
    I am working on a simple physics-based game where 2 balls bounce off each other. I am following a tutorial, 2-Dimensional Elastic Collisions Without Trigonometry, for the collision reactions. I am using Vector2 from the libGDX library to handle vectors. I am a bit confused about how to implement step 6 of the tutorial in Java. Below is my current code; please note that the code strictly follows the tutorial and there are redundant pieces which I plan to refactor later. Note: references to this refer to ball 1, and ball refers to ball 2.

        /* Step 1: Find the Normal, Unit Normal and Unit Tangential vectors */
        Vector2 n = new Vector2(this.position[0] - ball.position[0], this.position[1] - ball.position[1]);
        Vector2 un = n.normalize();
        Vector2 ut = new Vector2(-un.y, un.x);

        /* Step 2: Create the initial (before collision) velocity vectors */
        Vector2 v1 = this.velocity;
        Vector2 v2 = ball.velocity;

        /* Step 3: Resolve the velocity vectors into normal and tangential components */
        float v1n = un.dot(v1);
        float v1t = ut.dot(v1);
        float v2n = un.dot(v2);
        float v2t = ut.dot(v2);

        /* Step 4: Find the new tangential velocities after collision */
        float v1tPrime = v1t;
        float v2tPrime = v2t;

        /* Step 5: Find the new normal velocities */
        float v1nPrime = v1n * (this.mass - ball.mass) + (2 * ball.mass * v2n) / (this.mass + ball.mass);
        float v2nPrime = v2n * (ball.mass - this.mass) + (2 * this.mass * v1n) / (this.mass + ball.mass);

        /* Step 6: Convert the scalar normal and tangential velocities into vectors??? */
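
    Regarding step 6 specifically: the tutorial's last step is just scaling the unit normal and unit tangent by the new scalar speeds and adding the two results. A sketch of the maths in C++ style (with libGDX's Vector2 you would do the same thing on copies, e.g. un.cpy().scl(v1nPrime), so the shared unit vectors are not modified in place):

        // v' = v'_n * un + v'_t * ut  (one such sum per ball)
        struct Vec2 {
            float x, y;
            Vec2 operator*(float s) const { return {x * s, y * s}; }
            Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
        };

        Vec2 finalVelocity(Vec2 un, Vec2 ut, float vnPrime, float vtPrime) {
            return un * vnPrime + ut * vtPrime;
        }

        // ball 1: velocity = finalVelocity(un, ut, v1nPrime, v1tPrime);
        // ball 2: velocity = finalVelocity(un, ut, v2nPrime, v2tPrime);

    One unrelated thing worth double-checking against the tutorial: in step 5 the whole numerator (v1n*(m1-m2) + 2*m2*v2n) should be divided by (m1+m2), so the excerpt may be missing a pair of parentheses.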

    Read the article

  • Wrong faces culled in OpenGL when drawing a rectangular prism

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code for building a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But back faces still show up when viewing from the front, and sometimes when rotating, side faces vanish. Can someone point me in the right direction?

        glPolygonMode(GL_FRONT,GL_LINE);   // draw wireframe polygons
        glColor3f(0,1,0);                  // set color green
        glCullFace(GL_BACK);               // don't draw back faces
        glEnable(GL_CULL_FACE);            // don't draw back faces
        glTranslatef(-10, 1, 0);           // position
        glBegin(GL_QUADS);
        // face 1
        glVertex3f(0,-1,0); glVertex3f(0,-1,2); glVertex3f(2,-1,2); glVertex3f(2,-1,0);
        // face 2
        glVertex3f(2,-1,2); glVertex3f(2,-1,0); glVertex3f(2,5,0); glVertex3f(2,5,2);
        // face 3
        glVertex3f(0,5,0); glVertex3f(0,5,2); glVertex3f(2,5,2); glVertex3f(2,5,0);
        // face 4
        glVertex3f(0,-1,2); glVertex3f(2,-1,2); glVertex3f(2,5,2); glVertex3f(0,5,2);
        // face 5
        glVertex3f(0,-1,2); glVertex3f(0,-1,0); glVertex3f(0,5,0); glVertex3f(0,5,2);
        // face 6
        glVertex3f(0,-1,0); glVertex3f(2,-1,0); glVertex3f(2,5,0); glVertex3f(0,5,0);
        glEnd();
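
    One thing worth checking (a guess rather than a certain diagnosis): with glCullFace(GL_BACK) and the default glFrontFace(GL_CCW), culling is decided purely by winding order, so every quad has to be specified counter-clockwise as seen from outside the prism. Any face listed clockwise from the outside is culled when it faces the camera and drawn when it faces away, which matches both symptoms. A minimal sketch of the convention:

        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);      // discard back faces
        glFrontFace(GL_CCW);      // "front" = counter-clockwise winding (the GL default)

        // A quad on the z = +2 plane, wound counter-clockwise as seen from +z
        // (i.e. from outside), so it survives culling when viewed from the front:
        glBegin(GL_QUADS);
            glVertex3f(0.0f, -1.0f, 2.0f);
            glVertex3f(2.0f, -1.0f, 2.0f);
            glVertex3f(2.0f,  5.0f, 2.0f);
            glVertex3f(0.0f,  5.0f, 2.0f);
        glEnd();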

    Read the article

  • Why don't C++ Game Developers use the boost library?

    - by James
    So if you spend any time viewing / answering questions over on Stack Overflow under the C++ tag, you will quickly notice that just about everybody uses the Boost library; some would even say that if you aren't using it, you're not writing "real" C++ (I disagree, but that's not the point). But then there is the game industry, which is well known for using C++ and not using Boost. I can't help but wonder why that is. I don't care to use Boost because I write games (now) as a hobby, and part of that hobby is implementing what I need when I am able to and using off-the-shelf libraries when I can't. But that is just me. Why don't game developers, in general, use the Boost library? Is it performance or memory concerns? Style? Something else? I was about to ask this on Stack Overflow, but I figured the question is better asked here. EDIT: I realize I can't speak for all game programmers and I haven't seen all game projects, so I can't say game developers never use Boost; this is simply my experience. Allow me to edit my question to also ask: if you do use Boost, why did you choose to use it?

    Read the article

  • Camera doesn't move

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but I couldn't make it move. #define PI_OVER_180 0.0174532925f #define GL_CLAMP_TO_EDGE 0x812F #include "metinalifeyyaz.h" #include <GL/glu.h> #include <GL/glut.h> #include <QTimer> #include <cmath> #include <QKeyEvent> #include <QWidget> #include <QDebug> metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 
0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near 
goal post front face glVertex3f(5,0.5,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }
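
    One observation, which may or may not be the whole story: in paintGL the camera rotation and translation are applied and then immediately followed by a second glLoadIdentity(), which resets the modelview matrix and discards the camera transform before drawScene() runs, so xpos/zpos/yrot never affect the image. A sketch of an ordering that keeps the camera (the extra +50/-130 offsets and exact constants from the question are left out here and would still need tuning):

        void metinalifeyyaz::paintGL() {
            movePlayer();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();                                // reset once per frame

            // Camera (player) transform first
            glRotatef(lookupdown, 1.0f, 0.0f, 0.0f);
            glRotatef(360.0f - yrot, 0.0f, 1.0f, 0.0f);
            glTranslatef(-xpos, -walkbias - 0.50f, -zpos);

            // Scene placement on top of the camera, with no second glLoadIdentity()
            glTranslatef(1.0f, 0.0f, -18.0f);
            glRotatef(45.0f, 1.0f, 0.0f, 0.0f);

            drawScene();
        }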

    Read the article

  • Quake 3 Bot Programming Example

    - by Manni
    I would like to implement an intelligent bot for Quake-3. I downloaded the and built the code successfully under Linux. My problem is that I couldn't find any complete tutorial telling me how to build an agent; telling which files to use( as there are many files in the source code). Can you give me a website or piece of source code telling me how to start? Or something like an example source code for a bot.

    Read the article

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards, and one of the objects is moving slightly downwards. In my unaligned collision avoidance steering algorithm I'm finding the points on the object's forward line and the other object's forward line where these two lines are the closest. If these closest points are within a collision avoidance distance, and if the distance between them is smaller than the two radii of the two object's bounding spheres, then the objects should steer away in the appropriate direction. The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point. This is because the two forward lines for each object are moving away from each other as the objects pass. The problem is that because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
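
    An alternative worth considering, offered as a sketch rather than the canonical steering fix: instead of intersecting the two forward lines, compute the time of closest approach of the two moving spheres from their relative velocity. That takes both radii into account directly and still behaves sensibly when the paths diverge after crossing, which is the case described above.

        struct Vec3 { float x, y, z; };
        static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Time (>= 0) at which two constant-velocity spheres are closest; also
        // reports the squared distance between their centres at that moment.
        float timeOfClosestApproach(Vec3 pA, Vec3 vA, Vec3 pB, Vec3 vB, float& distSqAtT) {
            Vec3 dp = sub(pB, pA);                 // relative position
            Vec3 dv = sub(vB, vA);                 // relative velocity
            float dv2 = dot(dv, dv);
            float t = (dv2 > 1e-6f) ? -dot(dp, dv) / dv2 : 0.0f;
            if (t < 0.0f) t = 0.0f;                // closest approach already happened
            Vec3 d = {dp.x + dv.x * t, dp.y + dv.y * t, dp.z + dv.z * t};
            distSqAtT = dot(d, d);
            return t;
        }

    Steer away when distSqAtT is below (radiusA + radiusB) squared and t falls inside the look-ahead horizon the avoidance behaviour already uses.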

    Read the article

  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the pattern that game developers use together with game designers. See the picture below with the independent parts. Questions: 1) Should I create separate image parts for the different body parts, or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to generate separate image parts for the different body parts (which I think is more logical), how would I export them to Cocos2d (vector or bitmap)?

    Read the article

  • Render a 3D scene in multiple windows - extended panoramic view

    - by teodron
    Is there any resource on how to view a 3D scene from an application or a game on multiple windows or monitors? Each window should continue drawing from where the neighbouring one left off (in the end, the result should be a mosaic of the scene). My idea is to use a camera for each window and have a reference position and orientation for a meta-camera object that is used to correctly offset the other cameras. Since there are quite a few elements to consider (window specs, viewport properties, position and orientation of each render camera), what is the correct way to update the individual cameras given the position and orientation of the central meta-camera? I currently cannot make the cameras present the scene contiguously (and I am reluctant to work out the transformations without checking whether this is the actual way of doing things).
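
    One way to structure it, as a sketch of the maths rather than engine-specific code: give every window the meta-camera's position and derive only its orientation, offsetting the yaw by that window's share of the total horizontal field of view. That produces a contiguous panorama; for physically flat, coplanar monitors the strictly correct alternative is a shared view with per-window asymmetric (off-centre) projection frustums, but the yaw-offset version is the simpler starting point.

        // Hypothetical meta-camera: a position plus yaw/pitch in degrees.
        struct CameraPose { float posX, posY, posZ, yawDeg, pitchDeg; };

        // Camera for window i of 'count' side-by-side windows, each of which
        // renders 'fovPerWindowDeg' degrees of the horizontal panorama.
        CameraPose windowCamera(const CameraPose& meta, int i, int count, float fovPerWindowDeg) {
            CameraPose c = meta;                    // every window shares the eye point
            float centre = (count - 1) * 0.5f;      // index of the middle window
            c.yawDeg = meta.yawDeg + (i - centre) * fovPerWindowDeg;
            return c;                               // pitch (and roll) stay the meta-camera's
        }

    Each window then renders with its own perspective projection whose horizontal FOV equals fovPerWindowDeg, so the right edge of one view lines up with the left edge of the next.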

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.
    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs).
    Current considerations:
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC)
    Known trade-offs:
    In 1 above: each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    In 2 above: there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.
    Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
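
    For option 2, the per-character state really does reduce to a byte offset into one static VBO. A sketch of the offset arithmetic, assuming interleaved position + normal (3 floats each) and generic attribute locations 0 and 1 (both assumptions, not anything mandated by GL; a function loader such as GLEW or GLAD is assumed to be initialised):

        // One key frame = numVertices * (3 position floats + 3 normal floats).
        void drawCharacterFrame(GLuint vbo, int frameNumber, GLsizei numVertices) {
            const GLsizei stride = 6 * sizeof(GLfloat);
            GLsizeiptr base = (GLsizeiptr)frameNumber * numVertices * stride;  // start of this frame
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (const void*)base);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride,
                                  (const void*)(base + 3 * sizeof(GLfloat)));
            glDrawArrays(GL_TRIANGLES, 0, numVertices);
        }

    If popping between key frames becomes visible, the same buffer also supports interpolation: bind the next frame's offsets to a second pair of attributes and blend between them in the vertex shader with a per-character uniform.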

    Read the article

  • How to implement soft edge areas with particles

    - by OpherV
    My game is created using Phaser, but the question itself is engine-agnostic. In my game I have several environments, essentially polygonal areas that player characters can move into and be affected by, for example ice, fire, poison, etc. The graphic element of these areas is the color-filled polygon area itself, plus particles of the suitable type (in this example, ice shards). This is how I'm currently implementing it: a polygon mask covering a tilesprite with the particle pattern. The hard edge looks bad. I'd like to improve it by doing two things: 1. Making the polygon fill area have a soft edge and blend into the background. 2. Having some of the shards go outside the polygon area, so that they are not cut off in the middle and the area doesn't end in a straight line, for example (mockup). I think 1 can be achieved by blurring the polygon, but I'm not sure how to go about 2. How would you go about implementing this?

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server.
    Situation:
    - Multiple clients connected to a server, which is doing hit detection / physics.
    - Clients have different latency for their connections to the server, ranging from 50ms to 500ms.
    - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds).
    - It is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage.
    - I need to show the time remaining for reload on the player's screen.
    - Clients can have clocks which are flat-out wrong (bad timezones, etc.).
    What I'm currently doing to deal with latency:
    - The client collects server-side state in a history, tagged with server timestamps.
    - The client assesses its time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
    - The client renders all state received from the server 200 ms behind its current time, adjusted by what it believes its time difference with server time is (whether due to wrong clocks or lag). If it has server states on both sides of that calculated time, it (mostly LERP) interpolates between them; if not, it (LERP) extrapolates.
    - No other client-side prediction of movement (e.g., to make the vehicle seem more responsive) is done so far, but maybe it will be added later.
    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update; the client then adjusts it for the clock difference and can thus estimate when the reload will be finished in client time (perhaps also accounting for the latency the shoot message from client to server will incur?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, send the shoot event. The server would get the shoot event and consider the time the shot was made as the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether it won't be the case that players with lower ping get better reload rates), and whether there are more elegant solutions to this problem.
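
    For what it's worth, the server side of the scheme described above stays small if the server remains the single authority on when a reload finished and the client's displayed timer is treated as purely cosmetic. A sketch (server milliseconds; the early-arrival grace window is an assumption, not something the approach requires):

        #include <cstdint>

        struct Weapon {
            int64_t reloadDoneAtMs = 0;       // server time at which the next shot is allowed
            int64_t reloadTimeMs   = 10000;   // 10 second cool-down
        };

        // Called on the server when a "shoot" message arrives at server time nowMs.
        // Returns true if the shot is accepted (possibly deferred by a few ticks).
        bool tryShoot(Weapon& w, int64_t nowMs, int64_t graceMs = 50) {
            if (nowMs + graceMs < w.reloadDoneAtMs)
                return false;                                    // far too early: reject
            int64_t fireAt = (nowMs >= w.reloadDoneAtMs) ? nowMs : w.reloadDoneAtMs;
            // ...queue the actual shot to be resolved at fireAt (this tick or a later one)...
            w.reloadDoneAtMs = fireAt + w.reloadTimeMs;          // next cool-down starts when the shot fires
            return true;
        }

    Starting the next cool-down from fireAt rather than from the message's arrival time is what keeps the effective fire rate the same for high-ping and low-ping players.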

    Read the article

  • collision detection problems - Javascript/canvas game

    - by Tom Burman
    Ok here is a more detailed version of my question. What i want to do: i simply want the have a 2d array to represent my game map. i want a player sprite and i want that sprite to be able to move around my map freely using the keyboard and also have collisions with certain tiles of my map array. i want to use very large maps so i need a viewport. What i have: I have a loop to load the tile images into an array: /Loop to load tile images into an array var mapTiles = []; for (x = 0; x <= 256; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "images/prototype/"+x+".jpg"; mapTiles.push(imageObj); } I have a 2d array for my game map: //Array to hold map data var board = [ [1,2,3,4,3,4,3,4,5,6,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [17,18,19,20,19,20,19,20,21,22,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [33,34,35,36,35,36,35,36,37,38,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [49,50,51,52,51,52,51,52,53,54,1,1,1,1,1,1,1,1,1,1,1,1,1,197,198,199,1,1,1,1], [65,66,67,68,146,147,67,68,69,70,1,1,1,1,1,1,1,1,216,217,1,1,1,213,214,215,1,1,1,1], [81,82,83,161,162,163,164,84,85,86,1,1,1,1,1,1,1,1,232,233,1,1,1,229,230,231,1,1,1,1], [97,98,99,177,178,179,180,100,101,102,1,1,1,1,59,1,1,1,248,249,1,1,1,245,246,247,1,1,1,1], [1,1,238,1,1,1,1,239,240,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [216,217,254,1,1,1,1,255,256,1,204,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [232,233,1,1,1,117,118,1,1,1,220,1,1,119,120,1,1,1,1,1,1,1,1,1,1,1,119,120,1,1], [248,249,1,1,1,133,134,1,1,1,1,1,1,135,136,1,1,1,1,1,1,59,1,1,1,1,135,136,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,216,217,1,1,1,1,1,1,60,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,232,233,1,1,1,1,1,1,1,1,1,1,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,248,249,1,1,1,1,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,1,1,216,217,1,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,149,150,151,1,1,1,1,1,1,1,1,1,1,232,233,1,1,1], [12,12,12,12,12,12,12,13,1,1,1,1,165,166,167,1,1,1,1,1,1,119,120,1,1,248,249,1,1,1], [28,28,28,28,28,28,28,29,1,1,1,1,181,182,183,1,1,1,1,1,1,135,136,1,1,1,1,1,1,1], [44,44,44,44,44,15,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,59,1,1,197,198,199,1,1,1,1,119,120,1], [1,1,1,1,1,27,28,29,1,1,216,217,1,1,1,1,1,1,1,1,213,214,215,1,1,1,1,135,136,1], [1,1,1,1,1,27,28,29,1,1,232,233,1,1,1,1,1,1,1,1,229,230,231,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,248,249,1,1,1,1,1,1,1,1,245,246,247,1,1,1,1,1,1,1], [1,1,1,197,198,199,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,213,214,215,28,29,1,1,1,1,1,60,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,229,230,231,28,29,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,119,120,1,1,1,1,1], [1,1,1,245,246,247,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,135,136,1,1,60,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] ]; I have my loop to place the correct tile sin the correct positions: //Loop to place tiles onto screen in correct position for (x = 0; x <= viewWidth; x++){ for (y = 0; y <= viewHeight; y++){ var width = 32; var height = 32; context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); } } I Have my player object : //Place player object context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); I have my viewport setup: //Set viewport pos viewX = playerX - Math.floor(0.5 * viewWidth); if (viewX < 0) viewX = 0; if (viewX+viewWidth > worldWidth) viewX = worldWidth - 
viewWidth; viewY = playerY - Math.floor(0.5 * viewHeight); if (viewY < 0) viewY = 0; if (viewY+viewHeight > worldHeight) viewY = worldHeight - viewHeight; I have my player movement: canvas.addEventListener('keydown', function(e) { console.log(e); var key = null; switch (e.which) { case 37: // Left if (playerY > 0) playerY--; break; case 38: // Up if (playerX > 0) playerX--; break; case 39: // Right if (playerY < worldWidth) playerY++; break; case 40: // Down if (playerX < worldHeight) playerX++; break; }
    My problem: the map loads and looks fine, but my player position thinks it's on a different tile to the one it is actually on. For instance, I know that if my player moves left 1 tile, the value of that tile should be 2, but if I print out the value of the tile it should be moving to, it comes up with a different value.
    How I've tried to solve the problem: I have tried swapping the X and Y values for the initialization of my player and for where my map prints. If I swap the x and y values in this part of my code: context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); the map doesn't get drawn correctly at all and tiles are placed in random positions or orientations. If I swap the x and y values for my player in this line: context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); the player's movements are inverted, so the up and down keys move my player left and right and vice versa.
    My question: where am I going wrong in my code, and how do I solve it so that my map looks like it should, my player moves as it should, and my player returns the correct tile ID it is standing on or moving to? Thanks again. Also, here is a link to my whole code: prototype
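
    If it helps untangle the indexing (a sketch of the convention, not a fix for the posted code): with a row-major map array like board, the first index is the row (y) and the second is the column (x), so tile lookups, the draw loop, and the key handlers all have to agree on which axis each variable names. In C++ terms:

        #include <vector>

        // board[row][col] == board[y][x]: rows run down the screen, columns run right.
        int tileAt(const std::vector<std::vector<int>>& board, int x, int y) {
            return board[y][x];                  // y picks the row, x picks the column
        }

        // Key handling then only changes the axis it names:
        //   Left / Right -> playerX-- / playerX++   (compare against world width)
        //   Up   / Down  -> playerY-- / playerY++   (compare against world height)
        // so "one tile to the left of the player" is tileAt(board, playerX - 1, playerY).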

    Read the article

  • Tiny Wings - Placing items

    - by Federico
    I'm currently developing a Flash game like Tiny Wings. I have a lot of the work done, but I'm currently working on placing the items (coins and obstacles) on the terrain. My player is moving on an auto-generated terrain (based on Emanuele Feronato's tutorials), so every time the player's x position is greater than (screenWidth + x), another hill is generated, and so on. I'm currently having problems placing the items at the correct angle and putting 5 or more items together on a hill. Could you please help me with this? Thanks, regards. PS: This is the URL to the Emanuele Feronato post and the code to make the hills: http://www.emanueleferonato.com/2011/10/04/create-a-terrain-like-the-one-in-tiny-wings-with-flash-and-box2d-%E2%80%93-adding-more-bumps/
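
    Since the hills come from a height function, one sketch of a placement routine (C++ pseudocode for the maths; heightAt stands for whatever function the Feronato-style generator uses to produce the terrain and is an assumed name): sample the height at each item's x and take its angle from the local slope, which also makes it easy to drop a run of 5 or more coins along the same hill.

        #include <cmath>

        // Hypothetical: heightAt(x) returns the terrain height used to build the hills.
        extern float heightAt(float x);

        struct Placed { float x, y, angleRad; };

        // Place 'count' items along the surface, starting at startX and spaced
        // 'spacing' pixels apart in x, each rotated to match the slope under it.
        void placeRow(float startX, float spacing, int count, Placed* out) {
            const float dx = 1.0f;                               // step used to estimate the slope
            for (int i = 0; i < count; ++i) {
                float x  = startX + i * spacing;
                float y  = heightAt(x);
                float dy = heightAt(x + dx) - y;
                out[i] = { x, y, std::atan2(dy, dx) };           // terrain angle at x
            }
        }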

    Read the article

  • Rule of thumb for enemy design

    - by Terrance
    I'm at the early stages of developing a 2D side-scrolling, open-ended platformer (think Metroidvania) and am having a bit of difficulty finding enemy design inspiration for a sci-fi / nature / fantasy setting that isn't overly familiar or obvious. I haven't seen too many articles, blogs or books that talk about the subject at great length. Is there a fair rule of thumb for coming up with enemy designs that keep the player engaged?

    Read the article

  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2D graphics. I have the following scenario of how I might go about drawing a basic (but general) 2D rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it:
    1. t = make_rectangle(width, height): build a general VBO, centered at 0, 0
    2. optionally: t.set_scale(2)
    3. optionally: t.set_angle(30)
    4. t.draw_at(x, y): calculates some sort of scale/rotate/translate matrix (or matrices), passes the VBO and the matrix to a shader program
    5. Something happens to clip the world to the view visible on screen.
    I'm really unclear on how 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3D, or are unclear about how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data required for that shader transformation?
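
    On steps 4 and 5 specifically: step 4 is usually a single matrix built as projection * translate * rotate * scale and uploaded as one uniform, and step 5 comes for free because an orthographic projection already maps everything outside the chosen view rectangle outside clip space, where it is clipped automatically. A sketch using GLM (the uniform name u_mvp and the unit-rectangle VAO are illustrative, not from the question; a core-profile function loader is assumed to be initialised):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>

        void draw_at(GLuint program, GLuint vao, float x, float y,
                     float angleRad, float scale, int screenW, int screenH) {
            // Step 5: map the pixel rectangle (0,0)-(screenW,screenH) onto clip space.
            glm::mat4 projection = glm::ortho(0.0f, (float)screenW, (float)screenH, 0.0f);

            // Step 4: scale, then rotate, then translate the unit rectangle.
            glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(x, y, 0.0f));
            model = glm::rotate(model, angleRad, glm::vec3(0.0f, 0.0f, 1.0f));
            model = glm::scale(model, glm::vec3(scale, scale, 1.0f));
            glm::mat4 mvp = projection * model;

            glUseProgram(program);
            glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"), 1, GL_FALSE,
                               glm::value_ptr(mvp));
            glBindVertexArray(vao);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // the shared rectangle VBO/VAO
        }

        // Vertex shader side:  gl_Position = u_mvp * vec4(in_position, 0.0, 1.0);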

    Read the article

  • Best strategy (tried and tested) for using Box2D in a real-time multiplayer game?

    - by Simon Grey
    I am currently tackling real-time multiplayer physics updates for a game engine I am writing. My question is how best to use Box2D for networked physics. If I run the simulation on the server, should I send position, velocity etc to every client on every tick? Should I send it every few ticks? Maybe there is another way that I am missing? How has this problem been solved using Box2D before? Anyone with some ideas would be greatly appreciated!
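
    One widely used baseline, described as a sketch rather than the Box2D-endorsed answer: keep the simulation authoritative on the server, broadcast a snapshot every few fixed ticks rather than every tick, and let clients interpolate between the two most recent snapshots (the snapshot-and-interpolate model used by many action games). Per-body data might look like this:

        #include <cstdint>
        #include <vector>

        struct BodySnapshot {
            uint32_t bodyId;
            float posX, posY;       // b2Body position
            float angle;            // b2Body angle
            float velX, velY;       // linear velocity, lets clients extrapolate briefly
            float angularVel;
        };

        struct WorldSnapshot {
            uint32_t tick;                      // fixed-timestep tick number
            std::vector<BodySnapshot> bodies;   // only bodies that are awake / changed
        };

        // Server, inside the fixed-step loop (60 Hz physics, ~20 Hz network here):
        //   world.Step(1.0f / 60.0f, 8, 3);
        //   if (tick % 3 == 0) broadcast(makeSnapshot(world, tick));
        // Clients render ~100 ms in the past, interpolating between snapshots.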

    Read the article
