Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • Are there any preexisting maps for a Minecraft-like level I could use in my engine?

    - by Rishav Sharan
    I am working on a tiny cube-based engine like Minecraft. I was wondering if there is a way for me to get large blocky terrain in a text format that I can use for rendering in my engine? I don't want to start on procedural generation now; I just want a resource where I can get the coordinate list for a pretty-looking terrain. Alternatively, is it possible for me to parse the Minecraft world files and use that data to generate terrain/buildings in my code?

    Read the article

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components. class GameObject: public Object { public: GameObject(std::string objectName); ~GameObject(void); Component * AddComponent(std::string name); Component * AddComponent(Component componentType); Component * GetComponent (std::string TypeName); Component * GetComponent (<Component Type Here>); private: std::map<std::string,Component*> m_components; }; I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component and would like to do the following: GameObject * warship = new GameObject("myLovelyWarship"); MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer); or possibly MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer"); I could make a Component Factory like this: class ComponentFactory { public: static Component * CreateComponent(const std::string &compTyp) { if(compTyp == "MeshRenderer") return new MeshRenderer; if(compTyp == "Collider") return new Collider; return NULL; } }; However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, but it is an option. Is there a cleaner way to add and retrieve these components? Would standard templates be another solution?
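
    A minimal sketch of the template-based alternative the question hints at, assuming C++11 with RTTI enabled; the type_index-keyed map and method signatures are illustrative, not part of the original code:

        #include <map>
        #include <typeindex>
        #include <utility>

        class Component { public: virtual ~Component() {} };

        class GameObject {
        public:
            // Create a component of type T and store it keyed by its static type.
            template <typename T, typename... Args>
            T* AddComponent(Args&&... args) {
                T* comp = new T(std::forward<Args>(args)...);
                m_components[std::type_index(typeid(T))] = comp;
                return comp;
            }

            // Look up a component by type; returns nullptr if none was added.
            template <typename T>
            T* GetComponent() {
                auto it = m_components.find(std::type_index(typeid(T)));
                return it == m_components.end() ? nullptr : static_cast<T*>(it->second);
            }

        private:
            std::map<std::type_index, Component*> m_components;
        };

        // Usage, mirroring the question's example:
        // MeshRenderer* meshRenderer = warship->AddComponent<MeshRenderer>();
        // MeshRenderer* sameOne      = warship->GetComponent<MeshRenderer>();

    A string-keyed factory is still useful when components must be created from data files, where only a name is available; the two approaches can coexist.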

    Read the article

  • How do I implement movement in a WPF Adventure game?

    - by ZeroPhase
    I'm working on making a short WPF adventure game. The only major hurdle I have right now is how to animate objects on the screen correctly. I've experimented with DoubleAnimation and ThicknessAnimation; both enable movement of the character, but the speed is a bit erratic. The objects I'm trying to move around are labels in a grid; I'm checking the mouse's position in terms of the canvas I have the grid in. Does anyone have any suggestions for coding the movement, while still allowing mouse clicks to pick up items when needed? It would be nice if I could continue using the Visual Studio GUI Editor. By the way, I'm fine with scrapping labels in a grid for a more ideal object to manipulate. Here's my movement code: ThicknessAnimation ta = new ThicknessAnimation(); The event handling the movement: private void Hansel_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { ta.FillBehavior = FillBehavior.HoldEnd; ta.From = Hansel.Margin; double newX = Mouse.GetPosition(PlayArea).X; double newY = Mouse.GetPosition(PlayArea).Y; if (newX < Convert.ToDouble(Hansel.Margin.Left)) { //newX = -1 * newX; ta.To = new Thickness(0, newY, newX, 0); } else if (newY < Convert.ToDouble(Hansel.Margin.Top)) { newY = -1 * newY; } else { ta.To = new Thickness(newX, newY, 0, 0); } ta.Duration = new Duration(TimeSpan.FromSeconds(2)); Hansel.BeginAnimation(Grid.MarginProperty, ta); } Screenshot with annotations: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps9d4a33cc.png Screenshot with example movement: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps51f2359f.jpg

    Read the article

  • What exactly can shaders be used for?

    - by Bane
    I'm not really a 3D person, and I've only used shaders a little in some Three.js examples, and so far I've got the impression that they are only used for the graphical part of the equation. However, the (quite cryptic) Wikipedia article and some other sources lead me to believe that they can be used for more than just graphical effects, i.e., to program the GPU (Wikipedia). So, the GPU is still a processor, right? With a larger and different instruction set for easier and faster vector manipulation, but still a processor. Can I use shaders to make regular programs (provided I've got access to the video memory, which is probable)? Edit: regular programs == "Applications", i.e. create windows/console programs, or at least have some way of drawing things on the screen, maybe even taking user input.

    Read the article

  • Want to develop my own primitive physics engine, don't know how to start with its high-level architecture. Suggestions?

    - by Violet Giraffe
    A few years ago I tried to make a simple 3D game - billiards. I completed about 50% of it and got stuck on the physics. Basically, I only need to calculate balls rolling over a flat surface, but it would be nice to make something more flexible. I know all the formulas and laws (most of them, anyway). The problem is I have no idea how to make a good physics engine architecture-wise. I tried Google and other forums but didn't find what I was looking for. The only suggestion was to look at an open-source engine, but I'm not a good enough programmer to make heads or tails of it...

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behavior but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) than of situations like one agent leaving a dead end while another one wants to enter exactly the same one. Or two agents meeting at a bottleneck which only allows one agent to pass at a time but both need to pass it (according to the optimal route found before) and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations for me on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!

    Read the article

  • What are the reasons for MMOs to have level caps [on hold]

    - by SamStephens
    In many MMOs, players' character progression is artificially capped, e.g. at level 60, 90 or 100. Why do MMOs have these level caps in the first place? Why not just allow characters to continue to arbitrary levels with a mathematically designed leveling system that keeps the leveling experience interesting and endless? Answers to this question may help us to see the reason behind the feature and decide if and how this should be implemented in our MMOs.

    Read the article

  • OpenGL ES 2 on Android: native window

    - by ThreaderSlash
    According to the OpenGL ES specification, we have the following definition: EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config, NativeWindowType native_window, EGLint const * attrib_list) More details here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html And also by definition: int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window, int32_t width, int32_t height, int32_t format); More details here: http://mobilepearls.com/labs/native-android-api I am running an Android native app on OpenGL ES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene graph environment, the following variables are defined: struct android_app { ... ANativeWindow* window; }; android_app* mApplication; ... mApplication=&pApplication; And to initialize the app, we run these commands in the code: ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat); mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); The funny thing is that ANativeWindow_setBuffersGeometry behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the parameter mApplication->window, as it should according to its definition. Instead, it looks for the following input: EGLNativeWindowType hWnd; mSurface = eglCreateWindowSurface(mDisplay,lConfig,hWnd,NULL); As an alternative, I considered using instead: NativeWindowType hWnd=android_createDisplaySurface(); But the debugger says: Function 'android_createDisplaySurface' could not be resolved Is 'android_createDisplaySurface' compatible only with OpenGL ES 1 and not OpenGL ES 2? Can someone tell me if there is a way to convert mApplication->window so that the data from the android_app is accepted for the window surface?
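
    For reference, a sketch of the common android_native_app_glue pattern, in which mApplication->window (an ANativeWindow*) is passed straight to eglCreateWindowSurface; on Android, EGLNativeWindowType is ANativeWindow*, so no conversion should be necessary, and android_createDisplaySurface() is not part of the public NDK, which would explain why it does not resolve. Error checking is omitted and the variable names follow the question:

        EGLDisplay mDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(mDisplay, NULL, NULL);

        const EGLint attribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,   // request an ES 2 config
            EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
            EGL_BLUE_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_RED_SIZE, 8,
            EGL_NONE
        };
        EGLConfig lConfig;
        EGLint numConfigs;
        eglChooseConfig(mDisplay, attribs, &lConfig, 1, &numConfigs);

        // Match the native window's pixel format to the chosen config.
        EGLint lFormat;
        eglGetConfigAttrib(mDisplay, lConfig, EGL_NATIVE_VISUAL_ID, &lFormat);
        ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);

        // ANativeWindow* is accepted directly as the native window handle.
        EGLSurface mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);

        const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext mContext = eglCreateContext(mDisplay, lConfig, EGL_NO_CONTEXT, ctxAttribs);
        eglMakeCurrent(mDisplay, mSurface, mSurface, mContext);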

    Read the article

  • Deterministic Multiplayer RTS game questions?

    - by Martin K
    I am working on a cross-platform multiplayer RTS game where the different clients and a server (Flash and C#) all need to stay deterministically synchronised. To deal with floating-point inconsistencies, I've come across this method: http://joshblog.net/2007/01/30/flash-floating-point-number-errors/#comment-49912 which basically truncates off the nondeterministic part: return Math.round(1000 * float) / 1000; However, my concern is that every time there is a division, there is a further chance of creating additional floating-point errors, in essence making it worse. So it occurred to me, how about something like this: function floatSafe(number:Number) : Number { return Math.round(number * 1024) / 1024; } i.e. dividing only by powers of 2? What do you think? Ironically, with the above method I got less accurate results: trace( floatSafe(3/5) ) // 0.599609375 whereas with the other method (dividing by 1000), or just tracing the raw value, I am getting 3/5 = 0.6. Or maybe that's what 3/5 actually is, i.e. 0.6 cannot really be represented with a floating-point datatype, and this would be more consistent across different platforms?
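
    A small worked sketch of why the power-of-two version behaves the way it does (shown in C++; the question's code is ActionScript): multiplying and dividing by 1024 only shifts the floating-point exponent, so the quantized value n/1024 is stored exactly, whereas neither 0.6 nor multiples of 1/1000 have exact binary representations. The 0.599609375 in the trace is exactly 614/1024.

        #include <cmath>
        #include <cstdio>

        // Quantize to multiples of 1/1024. Because 1024 is a power of two, the
        // result is exactly representable, so platforms that round the same way
        // end up with bit-identical values.
        double floatSafe(double v) {
            return std::round(v * 1024.0) / 1024.0;
        }

        int main() {
            double a = 3.0 / 5.0;                   // 0.6 is not exact in binary
            std::printf("%.9f\n", a);               // 0.600000000 (display rounding)
            std::printf("%.9f\n", floatSafe(a));    // 0.599609375 == 614/1024, exact
        }

    The trade-off is resolution (steps of about 0.001), not determinism; rounding to 1/1000 reintroduces a quotient that is itself not exactly representable.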

    Read the article

  • Understanding the math used to determine if a vector is clockwise / counterclockwise from your vector

    - by MTLPhil
    I'm reading Programming Game AI by Example by Mat Buckland. In the Math & Physics primer chapter there's a listing of the declaration of a class used to represent 2D vectors. This class contains a method called Sign. Its implementation is as follows: //------------------------ Sign ------------------------------------------ // // returns positive if v2 is clockwise of this vector, // minus if anticlockwise (Y axis pointing down, X axis to right) //------------------------------------------------------------------------ enum {clockwise = 1, anticlockwise = -1}; inline int Vector2D::Sign(const Vector2D& v2)const { if (y*v2.x > x*v2.y) { return anticlockwise; } else { return clockwise; } } Can someone explain the vector rules that make this hold true? What do the values of y*v2.x and x*v2.y that are being compared actually represent? I'd like to have a solid understanding of why this works rather than just accepting that it does without figuring it out. I feel like it's something really obvious that I'm just not catching on to. Thanks for your help.
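
    For comparison, a small standalone sketch (C++): the quantity being compared is the z component of the 3D cross product, x*v2.y - y*v2.x, whose sign says on which side of the first vector the second one lies; the book's test is the same expression rearranged, with the sense flipped because it assumes screen coordinates (Y down).

        #include <cstdio>

        // z component of the cross product of (x1, y1, 0) and (x2, y2, 0).
        double cross(double x1, double y1, double x2, double y2) {
            return x1 * y2 - y1 * x2;
        }

        int main() {
            // With the usual math convention (Y up): positive means the second
            // vector is counter-clockwise of the first, negative means clockwise.
            std::printf("%g\n", cross(1, 0, 0, 1));    //  1 -> counter-clockwise (Y up)
            std::printf("%g\n", cross(1, 0, 0, -1));   // -1 -> clockwise (Y up)

            // Screen coordinates mirror the Y axis, which flips the turn direction,
            // so `y*v2.x > x*v2.y` (i.e. cross < 0) is labelled anticlockwise there.
            return 0;
        }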

    Read the article

  • Pixel perfect collision with paths (Android)

    - by keysersoze
    Hi, I'm writing a game and I'm trying to do some pixel-perfect collisions with paths. The player's character has a bitmask which looks, for example, like this: Currently my code that handles the player's collision with a path looks like this: private boolean isTerrainCollisionDetected() { if(collisionRegion.op(player.getBounds(), terrain.getBottomPathBounds(), Region.Op.INTERSECT) || collisionRegion.op(player.getBounds(), terrain.getTopPathBounds(), Region.Op.INTERSECT)) { collisionRegion.getBounds(collisionRect); for(int i = collisionRect.left; i < collisionRect.right; i++) { for(int j = collisionRect.top; j < collisionRect.bottom; j++) { if(player.getBitmask().getPixel(i - player.getX(), j - player.getY()) != Color.TRANSPARENT) { return true; } } } } return false; } The problem is that the collisions aren't pixel perfect. It detects collisions in situations like this: The question is: what can I do to improve my collision detection?
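
    One likely issue in the snippet above is that only the player's mask is sampled: every opaque player pixel inside the intersection's bounding rectangle counts as a hit, even where the path region itself is empty (also checking collisionRegion.contains(i, j) per pixel may already help). A generic two-mask sketch of the idea, written in C++ rather than Android Java; the Mask type and accessors are illustrative:

        #include <algorithm>
        #include <vector>

        // Minimal opacity mask: true = opaque pixel, (x, y) is the top-left
        // corner in world coordinates. All names are illustrative.
        struct Mask {
            int x, y, w, h;
            std::vector<bool> opaque;                  // w*h entries, row-major
            bool at(int wx, int wy) const {            // sample in world coords
                int lx = wx - x, ly = wy - y;
                if (lx < 0 || ly < 0 || lx >= w || ly >= h) return false;
                return opaque[ly * w + lx];
            }
        };

        // Pixel-perfect test: walk the overlap rectangle and require BOTH masks
        // to be opaque at the same world pixel.
        bool pixelCollision(const Mask& a, const Mask& b) {
            int left   = std::max(a.x, b.x);
            int top    = std::max(a.y, b.y);
            int right  = std::min(a.x + a.w, b.x + b.w);
            int bottom = std::min(a.y + a.h, b.y + b.h);
            for (int y = top; y < bottom; ++y)
                for (int x = left; x < right; ++x)
                    if (a.at(x, y) && b.at(x, y)) return true;
            return false;
        }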

    Read the article

  • HTML5 Canvas Game Timer

    - by zghyh
    How do I create a good timer for HTML5 Canvas games? I am using requestAnimationFrame ( http://paulirish.com/2011/requestanimationframe-for-smart-animating/ ), but objects move too fast. My code is something like this: http://pastebin.com/bSHCTMmq But if I press UP_ARROW the player doesn't move one pixel; it moves 5, 8, 10 or more (or fewer) pixels. How do I make the player move 1 pixel when I press UP_ARROW? Thanks for the help.
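
    Two separate things are usually involved here, sketched below in C++ as a language-neutral illustration (the actual game would do this in JavaScript): continuous movement should be scaled by the elapsed time per frame so the speed does not depend on the requestAnimationFrame rate, while a strict one-pixel-per-press move belongs in the keydown handler, applied once per event rather than once per frame while the key is held.

        #include <chrono>

        struct Player { double y = 0.0; };

        // Continuous movement while a key is held: scale by elapsed seconds so
        // the speed is identical at 30 FPS and at 120 FPS.
        void update(Player& p, double dtSeconds, bool upHeld) {
            const double speedPixelsPerSecond = 60.0;   // illustrative value
            if (upHeld) p.y -= speedPixelsPerSecond * dtSeconds;
        }

        int main() {
            using clock = std::chrono::steady_clock;
            Player player;
            auto previous = clock::now();
            for (int frame = 0; frame < 3; ++frame) {   // stand-in for the render loop
                auto now = clock::now();
                double dt = std::chrono::duration<double>(now - previous).count();
                previous = now;
                update(player, dt, /*upHeld=*/true);
            }
            // "Move exactly 1 pixel per key press" is handled in the key event,
            // e.g. player.y -= 1.0 inside the keydown callback, not here.
        }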

    Read the article

  • Beginning OpenGL vs beginning DirectX, and some questions about the philosophical difference between them

    - by jokoon
    I'm beginning with DirectX at school, and my teacher said it was harder to begin with than OpenGL, but I've read several things saying that, in fact, DirectX is more advanced than OpenGL in terms of recent graphics card features. Since I'm far from wanting to do top-notch effects, which can already be implemented with existing engines and/or shaders, I wanted to know your opinion: can OpenGL be considered a more basic, KISS, hardware-agnostic graphics library to just do 3D with acceleration, and DirectX a top-notch, game-oriented graphics API that will always support the next-gen 3D chips? Citation from Wikipedia on http://en.wikipedia.org/wiki/Id_Tech_5 : John Carmack mentioned in his keynote at QuakeCon 2007 that the id Tech 5 engine will not be using the DirectX 10 API. I don't want to seem like I'm favouring open source just because Carmack does and because he is famous; it's just that Android and iPhone are out there, and DirectX doesn't seem to me to be the necessary API to know, since Windows supports OpenGL, and since the 360 is just a console among other consoles.

    Read the article

  • Can I use PBOs for textures in iOS?

    - by Radu
    As far as I can see, there is no GL_PIXEL_UNPACK_BUFFER. Also, the OpenGL ES 2.0 specification (and as far as I know, no iOS device currently supports anything newer than OpenGL ES 2.0) states that glMapBufferOES() can only use GL_ARRAY_BUFFER as a target, yet glTexImage2D() and glTexSubImage2D() only seem to use PBOs if GL_PIXEL_UNPACK_BUFFER is bound. The OpenGL documentation for glBindBuffer() also states that: GL_PIXEL_PACK_BUFFER and GL_PIXEL_UNPACK_BUFFER are available only if the GL version is 2.1 or greater. So, can I use PBOs for textures? Am I missing something obvious?

    Read the article

  • Obtain a rectangle indicating the 2D world space the camera can see

    - by Gareth
    I have a 2D tile based game in XNA, with a moveable camera that can scroll around and zoom. I'm trying to obtain a rectangle which indicates the area, in world space, that my camera is looking at, so I can render anything this rectangle intersects with (currently, everything is rendered). So, I'm drawing the world like this: _SpriteBatch.Begin( SpriteSortMode.FrontToBack, null, SamplerState.PointClamp, // Don't smooth null, null, null, _Camera.GetTransformation()); The GetTransformation() method on my Camera object does this: public Matrix GetTransformation() { _transform = Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0)); return _transform; } The camera properties in the method above should be self explanatory. How can I get a rectangle indicating what the camera is looking at in world space?
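
    One way to get that rectangle is to invert the camera matrix and push the four viewport corners through it, then take the min/max of the results (in XNA that would be Matrix.Invert plus Vector2.Transform). The sketch below spells out the same inverse by hand in C++-style code, mirroring the question's transform (translate by -pos, rotate, scale about the viewport centre); the types are illustrative:

        #include <algorithm>
        #include <cmath>

        struct Vec2 { float x, y; };
        struct Rect { float left, top, right, bottom; };

        struct Camera {
            Vec2  pos;              // _pos
            float zoom, rotation;   // Zoom, Rotation
            float viewW, viewH;     // _viewportWidth, _viewportHeight

            // screen -> world: undo each step of GetTransformation() in reverse.
            Vec2 screenToWorld(Vec2 s) const {
                float x = (s.x - viewW * 0.5f) / zoom;
                float y = (s.y - viewH * 0.5f) / zoom;
                float c = std::cos(-rotation), sn = std::sin(-rotation);
                return { x * c - y * sn + pos.x, x * sn + y * c + pos.y };
            }
        };

        // World-space axis-aligned rectangle covering everything the camera sees:
        // map the four viewport corners back to world space and take min/max.
        Rect visibleWorldRect(const Camera& cam) {
            Vec2 corners[4] = {
                cam.screenToWorld({0.0f, 0.0f}),
                cam.screenToWorld({cam.viewW, 0.0f}),
                cam.screenToWorld({0.0f, cam.viewH}),
                cam.screenToWorld({cam.viewW, cam.viewH}),
            };
            Rect r { corners[0].x, corners[0].y, corners[0].x, corners[0].y };
            for (const Vec2& c : corners) {
                r.left   = std::min(r.left,   c.x);
                r.top    = std::min(r.top,    c.y);
                r.right  = std::max(r.right,  c.x);
                r.bottom = std::max(r.bottom, c.y);
            }
            return r;
        }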

    Read the article

  • How to generate random bubbles from an array of sprites in cocos2d?

    - by prakash s
    I am developing a bubble shooter game in cocos2d. How do I generate random bubbles from an array of sprites? Here is my code: - (void)addTarget { CGSize winSize = [[CCDirector sharedDirector] winSize]; //CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)]; NSMutableArray * movableSprites = [[NSMutableArray alloc] init]; NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png", @"4.png",@"5.png",@"1.png",@"5.png", @"3.png", nil]; for(int i = 0; i < images.count; ++i) { NSString *image = [images objectAtIndex:i]; // generate random number based on size of array (array size is larger than 10) CCSprite*target = [CCSprite spriteWithFile:image]; float offsetFraction = ((float)(i+1))/(images.count+1); target.position = ccp(winSize.width*offsetFraction, winSize.height/2); target.position = ccp(350*offsetFraction, 460); [self addChild:target]; [movableSprites addObject:target]; //[target runAction:]; id actionMove = [CCMoveTo actionWithDuration:10 position:ccp(winSize.width/2,winSize.height/2)]; id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)]; [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]]; } } This code generates bubbles using the *.png colour images in order, but I want to generate them randomly, because the bubbles are shot by the shooter class. Help me please.
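
    A sketch of the random-selection part on its own (written in C++ as a neutral illustration; in the Objective-C above the equivalent would be indexing images with arc4random_uniform(images.count) instead of the loop counter):

        #include <random>
        #include <string>
        #include <vector>

        int main() {
            // The candidate bubble images; names mirror the question.
            std::vector<std::string> images = {"1.png", "2.png", "3.png", "4.png", "5.png"};

            std::mt19937 rng(std::random_device{}());
            std::uniform_int_distribution<std::size_t> pick(0, images.size() - 1);

            const int bubbleCount = 8;
            for (int i = 0; i < bubbleCount; ++i) {
                const std::string& file = images[pick(rng)];   // random image each time
                // create the sprite from `file` here, as in the loop above
                (void)file;
            }
        }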

    Read the article

  • Is there a way to export all the images of my tweening effect in Flash?

    - by Paul
    I'm using Flash to create the animation of my character in 2D (I'm just beginning). Is it possible to make a tween effect of a character, and then automatically export all the images/frames? So far, it's a bit tedious: I create my tweening effect, then I put a keyframe for each frame I want to copy and paste, then I select the movie clips and shapes and copy and paste them into another Flash document, I position those clips at the exact same location as in the previous image, then I erase the previous image and export the image... for 30 frames! Is there any faster way? Thanks

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
    Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or, if you like, you could give me other compiler messages than the ones I proposed. To summarize, the question is: what are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or an explanation of the patterns. EDIT: OK, it seems that this question is too broad, so in short: how do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).

    Read the article

  • Xbox thumbstick used to rotate sprite, basic formula makes it "stick" or feel "sticky" at 90 degree intervals! How do I get smooth rotation?

    - by Hugh
    Context: C#, XNA game. I am using a very basic formula to calculate what angle my sprite (a spaceship, for example) should be facing based on the Xbox controller thumbstick, i.e. you use the thumbstick to rotate the ship! In my main update method: shuttleAngle = (float) Math.Atan2(newGamePadState.ThumbSticks.Right.X, newGamePadState.ThumbSticks.Right.Y); In my main draw method: spriteBatch.Draw(shuttle, shuttleCoords, sourceRectangle, Color.White, shuttleAngle, origin, 1.0f, SpriteEffects.None, 1); As you can see it's quite simple: I take the current radians from the thumbstick, store them in a float "shuttleAngle" and then use this as the rotation angle (in radians) argument for drawing the shuttle. For some reason, when I rotate the sprite it feels sticky at the 0, 90, 180 and 270 degree angles; it wants to settle at those angles. It's not giving me the smooth and natural rotation I would feel in a game that uses a similar mechanic. PS: my Xbox controller is fine!
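
    Snapping to 0/90/180/270° is commonly caused by the default per-axis dead zone on the thumbstick, which zeroes small X or Y values independently and so pulls near-axis directions exactly onto the axes; in XNA the state can instead be read with GamePad.GetState(PlayerIndex.One, GamePadDeadZone.Circular). The sketch below shows the equivalent radial dead-zone math in C++-style code, as an illustration only:

        #include <cmath>

        struct Stick { float x, y; };

        // Radial dead zone: scale the whole vector by its length instead of
        // clamping X and Y separately, so directions near the axes are not
        // flattened onto them. 0.24 is an illustrative threshold.
        Stick radialDeadZone(Stick raw, float deadZone = 0.24f) {
            float len = std::sqrt(raw.x * raw.x + raw.y * raw.y);
            if (len < deadZone) return {0.0f, 0.0f};
            float scale = ((len - deadZone) / (1.0f - deadZone)) / len;  // remap to 0..1
            return { raw.x * scale, raw.y * scale };
        }

        // The angle is then taken from the filtered vector:
        // float shuttleAngle = std::atan2(filtered.x, filtered.y);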

    Read the article

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello, in my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How do I implement this? What happens when the gizmo changes position or orientation with respect to the camera? Etc. So far, with trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who needed the same thing and spent the time coming up with a much more elegant solution. Does anyone know of one?

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to not follow the rules of landscaping, but instead some rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this? Start with cities or population centres and then add in terrain afterwards? The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, then allows the PCG to fill in the gaps. But now I'm wondering if it may be possible to have a content generator also generate the overall design. Generate the cities and population centres, balancing them so that there is a natural-seeming need for commerce, then generate the positions and connectivity, then from the type of city produce the list of necessary resources that must be nearby, and only then, maybe given some rules on how to make the journey between cities both believable and interesting, generate the final content including the roads, the choke points, the bridges and tunnels, ferries and the terrain including the biomes and coastline necessary. If this has been done before, I'd like to know, and would like to know what went wrong, and what went right.

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs": tmp = evaluate, and the result is 0 or 1 (nothing else) if (tmp == 1) val = X1; if (tmp == 0) val = X2; I rewrote it this way, but this piece of code doesn't work correctly: tmp = evaluate, and the result is 0 or 1 (nothing else) val = tmp * X1 val = !tmp * X2 However, if I change it to: tmp = evaluate, and the result is 0 or 1 (nothing else) val = tmp * X1 if (!tmp) val = !tmp * X2 it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compiling with NO and FULL optimization; the result is the same.
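
    Since tmp is guaranteed to be 0 or 1, the two-way select can be written as a single blend with no branch; note also that in the non-working version the second assignment to val simply overwrites the first, which is likely why it misbehaves. A C-style sketch of the identity (the same expression works in HLSL, where it is equivalent to lerp(X2, X1, tmp)):

        // tmp is exactly 0.0 or 1.0
        float branchlessSelect(float tmp, float X1, float X2)
        {
            // tmp == 1 -> X1, tmp == 0 -> X2; one blend, no branch
            return X2 + tmp * (X1 - X2);
        }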

    Read the article

  • How do I know if my game's average game session time is too small?

    - by you786
    My game has only one life, and the aim is to stay alive as long as possible to get as many points as possible (it's an endless runner). Using Google Analytics I found that players are staying alive for an average of 17 seconds. I could easily increase or decrease this by manipulating acceleration or starting speed. The question is, should I change it at all? Are there any research findings or general ideas on the best playing time for a game like this? I would also like to know about any research on how long an ideal mobile game session should last.

    Read the article

  • Negamax implementation doesn't appear to work with tic-tac-toe

    - by George Jiglau
    I've implemented Negamax as it can be found on wikipedia, which includes alpha/beta pruning. However, it seems to favor a losing move, which should be an invalid result. The game is Tic-Tac-Toe, I've abstracted most of the game play so it should be rather easy to spot an error within the algorithm. Here is the code, nextMove, negamax or evaluate are probably the functions that contain the fault: #include <list> #include <climits> #include <iostream> //#define DEBUG 1 using namespace std; struct Move { int row, col; Move(int row, int col) : row(row), col(col) { } Move(const Move& m) { row = m.row; col = m.col; } }; struct Board { char player; char opponent; char board[3][3]; Board() { } void read(istream& stream) { stream >> player; opponent = player == 'X' ? 'O' : 'X'; for(int row = 0; row < 3; row++) { for(int col = 0; col < 3; col++) { char playa; stream >> playa; board[row][col] = playa == '_' ? 0 : playa == player ? 1 : -1; } } } void print(ostream& stream) { for(int row = 0; row < 3; row++) { for(int col = 0; col < 3; col++) { switch(board[row][col]) { case -1: stream << opponent; break; case 0: stream << '_'; break; case 1: stream << player; break; } } stream << endl; } } void do_move(const Move& move, int player) { board[move.row][move.col] = player; } void undo_move(const Move& move) { board[move.row][move.col] = 0; } bool isWon() { if (board[0][0] != 0) { if (board[0][0] == board[0][1] && board[0][1] == board[0][2]) return true; if (board[0][0] == board[1][0] && board[1][0] == board[2][0]) return true; } if (board[2][2] != 0) { if (board[2][0] == board[2][1] && board[2][1] == board[2][2]) return true; if (board[0][2] == board[1][2] && board[1][2] == board[2][2]) return true; } if (board[1][1] != 0) { if (board[0][1] == board[1][1] && board[1][1] == board[2][1]) return true; if (board[1][0] == board[1][1] && board[1][1] == board[1][2]) return true; if (board[0][0] == board[1][1] && board[1][1] == board[2][2]) return true; if (board[0][2] == board [1][1] && board[1][1] == board[2][0]) return true; } return false; } list<Move> getMoves() { list<Move> moveList; for(int row = 0; row < 3; row++) for(int col = 0; col < 3; col++) if (board[row][col] == 0) moveList.push_back(Move(row, col)); return moveList; } }; ostream& operator<< (ostream& stream, Board& board) { board.print(stream); return stream; } istream& operator>> (istream& stream, Board& board) { board.read(stream); return stream; } int evaluate(Board& board) { int score = board.isWon() ? 
    100 : 0; for(int row = 0; row < 3; row++) for(int col = 0; col < 3; col++) if (board.board[row][col] == 0) score += 1; return score; } int negamax(Board& board, int depth, int player, int alpha, int beta) { if (board.isWon() || depth <= 0) { #if DEBUG > 1 cout << "Found winner board at depth " << depth << endl; cout << board << endl; #endif return player * evaluate(board); } list<Move> allMoves = board.getMoves(); if (allMoves.size() == 0) return player * evaluate(board); for(list<Move>::iterator it = allMoves.begin(); it != allMoves.end(); it++) { board.do_move(*it, -player); int val = -negamax(board, depth - 1, -player, -beta, -alpha); board.undo_move(*it); if (val >= beta) return val; if (val > alpha) alpha = val; } return alpha; } void nextMove(Board& board) { list<Move> allMoves = board.getMoves(); Move* bestMove = NULL; int bestScore = INT_MIN; for(list<Move>::iterator it = allMoves.begin(); it != allMoves.end(); it++) { board.do_move(*it, 1); int score = -negamax(board, 100, 1, INT_MIN + 1, INT_MAX); board.undo_move(*it); #if DEBUG cout << it->row << ' ' << it->col << " = " << score << endl; #endif if (score > bestScore) { bestMove = &*it; bestScore = score; } } if (!bestMove) return; cout << bestMove->row << ' ' << bestMove->col << endl; #if DEBUG board.do_move(*bestMove, 1); cout << board; #endif } int main() { Board board; cin >> board; #if DEBUG cout << "Starting board:" << endl; cout << board; #endif nextMove(board); return 0; } Giving this input: O X__ ___ ___ The algorithm chooses to place a piece at 0, 1, causing a guaranteed loss, due to this trap (nothing can be done to win or end in a draw): XO_ X__ ___ Perhaps it has something to do with the evaluation function? If so, how could I fix it?
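
    For reference, a minimal sketch of the usual negamax sign convention (generic C++ against a hypothetical Board interface, not a drop-in fix for the code above): the static evaluation is always computed from one fixed side's point of view and then multiplied by color, so a board won by the opponent of the side to move scores negatively. The evaluate() above returns +100 for any won board regardless of who won it, which is one place the search can be misled.

        #include <algorithm>
        #include <climits>
        #include <list>

        // Hypothetical minimal interface; the question's Board class is similar.
        struct Move { int row, col; };
        struct Board {
            bool isTerminal() const;                 // won or drawn
            std::list<Move> legalMoves() const;
            void doMove(const Move&, int color);
            void undoMove(const Move&);
        };

        // Score from one fixed side's point of view: positive if that side has
        // won, negative if the opponent has won, 0 (or a heuristic) otherwise.
        int evaluateForFirstPlayer(const Board& b);

        // Standard negamax: the returned value is always from the perspective of
        // the player to move (`color` is +1 or -1), so the static evaluation is
        // multiplied by `color` and child values are negated.
        int negamax(Board& board, int depth, int color) {
            if (depth == 0 || board.isTerminal())
                return color * evaluateForFirstPlayer(board);

            int best = INT_MIN + 1;
            for (const Move& m : board.legalMoves()) {
                board.doMove(m, color);
                best = std::max(best, -negamax(board, depth - 1, -color));
                board.undoMove(m);
            }
            return best;
        }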

    Read the article
