Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Rotation of viewplatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program that rotates a pyramid around the y-axis, simply by adding a RotationInterpolator to the TransformGroup above the pyramid. Then I thought: can I now remove the RotationInterpolator from that TransformGroup and add it to the TransformGroup above my ViewPlatform leaf instead? This should work if I have understood how things fit together: adding the RotationInterpolator to this TransformGroup should make the children of this TransformGroup rotate, and the ViewPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch group.

        import java.awt.*;
        import java.awt.event.*;
        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class UniverseBuilder {

            // User-specified canvas
            Canvas3D canvas;

            // Scene graph elements to which the user may want access
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans;
            View view;

            public UniverseBuilder(Canvas3D c) {
                this.canvas = c;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view = new View();
                view.addCanvas3D(c);
                view.setPhysicalBody(body);
                view.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                Transform3D s = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 10.0f));
                t.rotX(-Math.PI/4);
                s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change the values here to alter the viewing position
                t.mul(s);
                ViewPlatform vp = new ViewPlatform();
                vpTrans = new TransformGroup(t);
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // Rotator stuff
                Transform3D yAxis = new Transform3D();
                //yAxis.rotY(Math.PI/2);
                Alpha rotationAlpha = new Alpha(-1, Alpha.INCREASING_ENABLE,
                        0, 0, 4000, 0, 0, 0, 0, 0);
                RotationInterpolator rotator = new RotationInterpolator(
                        rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI*2.0f);
                RotationInterpolator rotator2 = new RotationInterpolator(
                        rotationAlpha, vpTrans);
                BoundingSphere bounds =
                        new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
                rotator.setSchedulingBounds(bounds);
                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                vpRoot.addChild(vpTrans);

                view.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the
                // Locale. The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }

            public void addBranchGraph(BranchGroup bg) {
                locale.addBranchGraph(bg);
            }
        }
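
    For reference, here is a minimal, self-contained Java 3D sketch of the setup being attempted - a RotationInterpolator attached to the view platform's TransformGroup - using a hypothetical helper class rather than the poster's code:

        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class SpinningViewPlatform {
            /**
             * Minimal sketch (hypothetical helper, not the poster's code):
             * builds a TransformGroup that spins its children, including a
             * ViewPlatform, around the local y-axis.
             */
            public static TransformGroup createSpinningViewGroup(ViewPlatform vp) {
                TransformGroup vpTrans = new TransformGroup();
                // The interpolator writes into this group every frame, so the
                // capability bit must be set before the graph goes live.
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // One full revolution every 4 seconds, looping forever.
                Alpha alpha = new Alpha(-1, 4000);
                RotationInterpolator rotator =
                        new RotationInterpolator(alpha, vpTrans, new Transform3D(),
                                                 0.0f, (float) (2.0 * Math.PI));
                // Without scheduling bounds that enclose the viewer, the
                // behavior is never scheduled and nothing rotates.
                rotator.setSchedulingBounds(
                        new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0));

                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                return vpTrans;
            }
        }

    The two details the interpolator depends on are the ALLOW_TRANSFORM_WRITE capability and scheduling bounds that actually enclose the viewer; without either of them the group never rotates.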


  • How do I generate random isometric paths?

    - by user406470
    I'm working on an isometric city generator, and I am looking for a little push in the right direction. I'm looking to randomly generate roads on an isometric plane. I have never done pathfinding before, and I've googled it and didn't find any articles relating to what I am trying to do. Basically, my program generates a random isometric city, and I am hoping to add roads to it. Any help is appreciated!
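
    As a starting point, a biased random walk over the tile grid is one simple way to carve roads; here is a minimal Java sketch (hypothetical, not from the post). The isometric projection only matters when the tiles are drawn, not when the road is generated:

        import java.util.Random;

        public class RoadCarver {
            // Carve a winding road across a tile grid with a biased random walk.
            // The grid is ordinary row/column space; the isometric look is applied at draw time.
            public static boolean[][] carveRoad(int width, int height, long seed) {
                boolean[][] road = new boolean[width][height];
                Random rng = new Random(seed);
                int x = 0, y = rng.nextInt(height);
                while (x < width) {
                    road[x][y] = true;
                    int roll = rng.nextInt(4);
                    if (roll == 0 && y > 0)             y--;   // drift up
                    else if (roll == 1 && y < height-1) y++;   // drift down
                    else                                x++;   // keep heading across the map
                }
                return road;
            }
        }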


  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be .png, .jpg, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.
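
    One thing worth trying from the standard API is BitmapFactory.Options.inPreferredConfig, which asks the decoder for an 8-bpp alpha bitmap; note the config is only a hint and the decoder is free to ignore it. A minimal sketch (an illustration, not the poster's code):

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public class AlphaDecode {
            // Ask BitmapFactory to keep the decoded bitmap at 8 bits per pixel
            // (alpha only) instead of promoting it to RGB_565.
            public static Bitmap decodeAsAlpha(byte[] encoded) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inPreferredConfig = Bitmap.Config.ALPHA_8;  // 8 bpp, alpha only (hint)
                opts.inScaled = false;                           // no density scaling
                return BitmapFactory.decodeByteArray(encoded, 0, encoded.length, opts);
            }
        }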


  • Circle physics and collision using vectors

    - by Joe Hearty
    This is a problem I've been having: when making a set number of filled circles at random locations on a JPanel and applying gravity (a negative change in y), the circles collide with each other. I want them to have collision detection and to push apart in the opposite direction using vectors, but I don't know how to apply that to my scenario. Could someone help?

        public void drawballs(Graphics g){
            g.setColor (Color.white);
            //displays circles
            for(int i = 0; i<xlocationofcircles.length-1; i++){
                g.fillOval( (int) xlocationofcircles[i], (int) (ylocationofcircles[i]) ,16 ,16 );
                ylocationofcircles[i]+=.2; //gravity
                if(ylocationofcircles[i] > 550) //stops gravity at bottom of screen
                    ylocationofcircles[i]-=.2;
                //Check distance between circles(i think..)
                float distance =(xlocationofcircles[i+1]-xlocationofcircles[i]) +
                                (ylocationofcircles[i+1]-xlocationofcircles[i]);
                if( Math.sqrt(distance) <16) ...
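
    For the vector-based response the poster is after, the usual approach is to take the vector between the two centres, separate the circles along it, and exchange the velocity components along that normal (equal masses, elastic collision). A Java sketch using hypothetical parallel arrays, not the poster's code:

        public class CircleCollision {
            // Resolve overlap and velocity between circles a and b.
            public static void resolve(double[] px, double[] py,
                                       double[] vx, double[] vy,
                                       int a, int b, double radius) {
                double dx = px[b] - px[a];
                double dy = py[b] - py[a];
                double dist = Math.sqrt(dx * dx + dy * dy);   // note: square BOTH axes
                double minDist = 2 * radius;
                if (dist >= minDist || dist == 0) return;     // not touching (or same point)

                // Unit vector from a to b = the collision normal.
                double nx = dx / dist;
                double ny = dy / dist;

                // 1) Separate the circles so they no longer overlap.
                double overlap = minDist - dist;
                px[a] -= nx * overlap / 2;  py[a] -= ny * overlap / 2;
                px[b] += nx * overlap / 2;  py[b] += ny * overlap / 2;

                // 2) Push them apart: swap the normal components of velocity
                //    (equal masses, perfectly elastic), only if they are closing in.
                double relVel = (vx[b] - vx[a]) * nx + (vy[b] - vy[a]) * ny;
                if (relVel < 0) {
                    vx[a] += relVel * nx;  vy[a] += relVel * ny;
                    vx[b] -= relVel * nx;  vy[b] -= relVel * ny;
                }
            }
        }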


  • A*: how do I make a natural-looking path?

    - by user11177
    I've been reading this: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html But there are some things I don't understand. For example, the article says to use something like this for pathfinding with diagonal movement:

        function heuristic(node) =
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

    I don't know how to set D to get a natural-looking path like in the article. I set D to the lowest cost between adjacent squares, like it said, and I don't know what they meant by the stuff about the heuristic being 4*D; that does not seem to change anything. This is my heuristic function and move function:

        def heuristic(self, node, goal):
            D = 10
            dx = abs(node.x - goal.x)
            dy = abs(node.y - goal.y)
            return D * max(dx, dy)

        def move_cost(self, current, node):
            cross = abs(current.x - node.x) == 1 and abs(current.y - node.y) == 1
            return 19 if cross else 10

    Result: (screenshot in the original post) The smooth-sailing path we want to happen: (screenshot in the original post) The rest of my code: http://pastebin.com/TL2cEkeX
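
    For what it's worth, the diagonal-distance ("octile") heuristic from the same article, with D and D2 matching the straight and diagonal move costs actually charged, is the usual way to keep diagonal paths looking sensible; a Java transliteration for illustration only (the poster's code is Python):

        public final class Heuristics {
            // D is the straight-step cost and D2 the diagonal-step cost; they
            // should match what move_cost() actually charges (here 14 ~ 10*sqrt(2)
            // is assumed for diagonals).
            static final int D = 10;
            static final int D2 = 14;

            static int octile(int nodeX, int nodeY, int goalX, int goalY) {
                int dx = Math.abs(nodeX - goalX);
                int dy = Math.abs(nodeY - goalY);
                return D * Math.max(dx, dy) + (D2 - D) * Math.min(dx, dy);
            }
        }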


  • Why would you use textures that are not a power of 2?

    - by Will
    In the early days of OpenGL and DirectX, it was required that textures were powers of two. This meant that interpolation of float values could be done very quickly using shifting and such. Since OpenGL 2.0, and before that via an extension, non-power-of-two texture dimensions have been supported. Are there performance advantages to sticking to power-of-two textures on modern integrated and discrete GPUs? What advantages do non-power-of-two textures have, if any? Are there large populations of desktop users who don't have cards that support non-power-of-two textures?


  • GestureListener's fling method doesn't get called

    - by nosferat
    I'm using the SimpleDirectionGestureDetector from the libgdx-users wiki as my InputProcessor. I set it in the create() method: Gdx.input.setInputProcessor(new SimpleDirectionGestureDetector(charController)); charController is my class, which implements the DirectionListener interface defined in the SimpleDirectionGestureDetector class and is responsible for moving the player character. However, the character doesn't change direction when I perform a fling action in any direction. I've checked, and the fling() method in the SimpleDirectionGestureDetector class doesn't get called, and I have no idea why, since everything seems right. What am I doing wrong?
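
    A quick way to isolate the problem is to register a plain GestureDetector with a GestureAdapter and log flings, to confirm touch events reach an InputProcessor at all before suspecting the wiki class; a sketch (an assumption, not the wiki code):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.input.GestureDetector;

        public class FlingProbe {
            // Install a bare-bones gesture listener that only logs flings.
            public static void install() {
                Gdx.input.setInputProcessor(new GestureDetector(new GestureDetector.GestureAdapter() {
                    @Override
                    public boolean fling(float velocityX, float velocityY, int button) {
                        Gdx.app.log("FlingProbe", "fling vx=" + velocityX + " vy=" + velocityY);
                        return true;
                    }
                }));
            }
        }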


  • HUD layer not being added on my scene

    - by Shailesh_ios
    I have a CCScene which already holds my gameLayer, and I am trying to add a HUD layer on top of that. But the HUD layer is not getting added to my scene. I can say that because I have set up a CCLabel on the HUD layer, and when I run my project I cannot see that label. Here's what I am doing in my gameLayer:

        +(id) scene {
            CCScene *scene = [CCScene node];
            GameScreen *layer = [GameScreen node];
            [scene addChild: layer];
            HUDclass * otherLayer = [HUDclass node];
            [scene addChild:otherLayer];
            layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of gameLayer
            return scene;
        }

    And then in my HUD layer I have just added a CCLabelTTF in its init method, like this:

        -(id)init {
            if ((self = [super init])) {
                CCLabelTTF * label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15];
                label.position = ccp(240,160);
                [self addChild:label];
            }
            return self;
        }

    But when I run my project I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.


  • Explaining Asteroids Movement code

    - by Moaz ELdeen
    I'm writing an Asteroids Atari clone, and I want to figure out how the AI for the asteroids is done. I have come across this piece of code, but I can't work out 100% what it does:

        if ((float)rand()/(float)RAND_MAX < 0.5) {
            m_Pos.x = -app::getWindowWidth() / 2;
            if ((float)rand()/(float)RAND_MAX < 0.5)
                m_Pos.x = app::getWindowWidth() / 2;
            m_Pos.y = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
        } else {
            m_Pos.x = (int) ((float)rand()/(float)RAND_MAX * app::getWindowWidth());
            m_Pos.y = -app::getWindowHeight() / 2;
            if (rand() < 0.5)
                m_Pos.y = app::getWindowHeight() / 2;
        }
        m_Vel.x = (float)rand()/(float)RAND_MAX * 2;
        if ((float)rand()/(float)RAND_MAX < 0.5) {
            m_Vel.x = -m_Vel.x;
        }
        m_Vel.y = (float)rand()/(float)RAND_MAX * 2;
        if ((float)rand()/(float)RAND_MAX < 0.5)
            m_Vel.y = -m_Vel.y;
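
    Reading the snippet, it appears to place the asteroid just outside a randomly chosen screen edge (with the origin at the screen centre) and give it a random velocity with components in [-2, 2]. An idealized Java sketch of that idea, for illustration only (not a line-for-line equivalent of the quoted C++):

        import java.util.Random;

        public class AsteroidSpawner {
            static final Random RNG = new Random();

            // Pick a spawn point on the left/right or top/bottom edge,
            // with the coordinate origin at the screen centre.
            static float[] spawnPosition(float width, float height) {
                float x, y;
                if (RNG.nextBoolean()) {                       // left or right edge
                    x = RNG.nextBoolean() ? width / 2 : -width / 2;
                    y = RNG.nextFloat() * height - height / 2;
                } else {                                       // top or bottom edge
                    x = RNG.nextFloat() * width - width / 2;
                    y = RNG.nextBoolean() ? height / 2 : -height / 2;
                }
                return new float[] { x, y };
            }

            // Random velocity with each component in [-2, 2].
            static float[] spawnVelocity() {
                float vx = RNG.nextFloat() * 2 * (RNG.nextBoolean() ? 1 : -1);
                float vy = RNG.nextFloat() * 2 * (RNG.nextBoolean() ? 1 : -1);
                return new float[] { vx, vy };
            }
        }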


  • Whole map design vs. tiles array design

    - by Mikalichov
    I am working on a 2D RPG, which will feature the usual dungeon/town maps (pre-generated). I am using tiles, which I will then combine to make the maps. My original plan was to assemble the tiles using Photoshop, or some other graphics program, in order to have one bigger picture that I could then use as a map. However, I have read in several places about people using arrays to build their map in the engine (so you give an array of x tiles to your engine, and it assembles them as a map). I can understand how it's done, but it seems a lot more complicated to implement, and I can't see obvious advantages. What is the most common method, and what are the advantages/disadvantages of each?
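
    For context, the "array of tiles" approach the poster describes usually boils down to a 2D array of indices into a small tile set that the engine walks at draw time; a minimal Java sketch with hypothetical types:

        public class TileMap {
            private final int[][] indices;   // which tile goes where
            private final int tileSize;

            public TileMap(int[][] indices, int tileSize) {
                this.indices = indices;
                this.tileSize = tileSize;
            }

            // Assemble the map on screen by drawing each tile at its grid position.
            public void draw(TileRenderer renderer) {
                for (int row = 0; row < indices.length; row++) {
                    for (int col = 0; col < indices[row].length; col++) {
                        renderer.drawTile(indices[row][col], col * tileSize, row * tileSize);
                    }
                }
            }

            public interface TileRenderer {
                void drawTile(int tileIndex, int x, int y);  // supplied by the engine
            }
        }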


  • Using Ogre with Android [closed]

    - by Rich
    I am trying to get Ogre 3D to work on Android. I have managed to download and run the Ogre sample browser, but I am really struggling to get a basic application working; I have been trying for days now to no avail. Does anyone have any pointers on where to start with this? Thanks if anyone can help. EDIT: Very sorry for my rubbish question! I am a bit new to this and I am just trying to seek some guidance. OK, so I followed the instructions on the Ogre wiki to build Ogre for Android and the sample browser here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=CMake+Quick+Start+Guide&tikiversion=Android so it is definitely possible. The issue I am having is knowing what I need to do to get started with Ogre, e.g. just a simple hello-world-style app that might just show the Ogre head, so tutorials might actually be good, because I could not really find any simple ones and I am very new to 3D development. I just found the sample browser massive; yes, it has everything in it, but it's very difficult to understand how it all works. What I am asking for is basically some help, as I have been trying to pull out some parts of the sample browser to just create a view with a 3D model. Hope this is better?


  • OpenGL + Allegro: moving from software drawing with X/Y coordinates to OpenGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X, Y coords. Now I've started a test project with OpenGL, and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad (whatever that is), and I think I've given it a texture made from a bitmap, and then I'm drawing that:

        GLuint gl_image;
        bitmap = load_bitmap("cat.bmp", NULL);
        gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA);

        glBindTexture(GL_TEXTURE_2D, gl_image);
        glBegin(GL_QUADS);
        glColor4ub(255, 255, 255, 255);
        glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
        glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0);
        glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0);
        glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
        glEnd();

    So yeah. I've got a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help / does anyone know any tutorials on this weird coordinate thing? If it even is that. It's vastly different from X/Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works, and then write a function to try to translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
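
    A tiny sketch of the translation function the poster wants to write, in Java purely for illustration (the project is C++/Allegro); it assumes the default untransformed OpenGL state used by the quad above, where visible coordinates run from -1 to 1 on both axes and y points up:

        public final class PixelToGL {
            // Convert a pixel position (origin top-left, y down) to the default
            // OpenGL coordinate range (-1..1, y up). w, h = window size in pixels.
            public static float[] convert(float xPx, float yPx, float w, float h) {
                float xGl = 2f * xPx / w - 1f;
                float yGl = 1f - 2f * yPx / h;
                return new float[] { xGl, yGl };
            }
        }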


  • How can I replicate the look and limitations of the Super NES?

    - by Mikalichov
    I am looking to produce graphics with the same limitations / look as in the Super NES era. I am specifically looking for graphics similar to Chrono Trigger / FF6. It would be a lot easier if I had an idea of the resolution / dpi I am supposed to use. I found that the technical specs for the SNES are: Progressive: 256 × 224, 512 × 224, 256 × 239, 512 × 239; Interlaced: 512 × 448, 512 × 478. But even using these resolutions, it is pointless if I set it at 72 dpi, as I will still have possibly very detailed graphics (that is the main thing: I don't want detailed graphics, I want to go pixelated). I figured it might be related to the sprite size limit, i.e.: sprites can be 8 × 8, 16 × 16, 32 × 32, or 64 × 64 pixels, each using one of eight 16-color palettes and tiles from one of two blocks of 256 in VRAM, and up to 32 sprites and 34 8 × 8 sprite tiles may appear on any one line. This would work for sprites (characters, objects), but what about maps? Are they built entirely from 8 × 8 tiles? And then, at what resolution is the end result displayed? It might seem like I am giving the question and the answers at the same time, but all of these are suppositions I am making, so could someone confirm or correct them?


  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement to check whether the left arrow is pressed, it should update only once. I have tested with a print statement that, once pressed, it gets called twice. Problem: the position is updated twice when the key is pressed only once. Below is the structure of my code:

        void Update() {
            if (Input.GetKeyDown (KeyCode.LeftArrow)) {
                print ("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD");
            }
        }

    I looked it up on the web, and what was suggested is this:

        if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow) {
            print("pressed");
        }

    But it gives me an error that says: "Object reference not set to an instance of an object". How can I fix this?


  • exporting a maya model to work with Playstation mobile Studio?

    - by spartan2417
    I've just started programming in PlayStation Mobile Studio for the PS Vita. The models that are available to use in PS development have to be .mdk models. The studio comes with a model converter that converts from DAE, FBX, XSI and X files. However, I've tried converting a Maya scene to both DAE and FBX files, but the converter returns an error with no explanation. I've even tried to use Blender and the tutorial here: http://www.gamefromscratch.com/post/2012/04/30/Exporting-from-Blender-to-PS-Suite-Blender-to-Vita-in-under-5-minutes.aspx but this fails at the converter too. With regard to Maya, all I do is export the file with default settings. Does anyone know any good tutorials on this, or have a guess why this would be happening?


  • Ease Rotate RigidBody2D toward arbitrary angle

    - by Plastic Sturgeon
    I'm trying to make a rigidbody2D circle return to an orientation after a collision, but there is a weird behavior I do not expect: it always orients to the same direction. This is what I call in FixedUpdate():

        rotationdifference = -halfPI + rigidbody2D.rotation;
        rigidbody2D.AddTorque (rotationdifference * ease);

    I would expect this to rotate 90 degrees (1/2 pi radians) off the neutral axis, but it does not. In fact, it performs exactly the same as:

        rotationdifference = rigidbody2D.rotation;
        rigidbody2D.AddTorque (rotationdifference * ease);

    What is going on? How would I be able to set an angle I want it to ease towards, and then have it ease towards that angle when it's not colliding with some other force?


  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. And thanks to the answer provided, I now have a nice-looking main branch (image in the original post, coloured to show the different segments in the final item). Now is the time to branch things off, as discussed in the article http://drilian.com/2009/02/25/lightning-bolts/ Again, however, I am confused as to the meaning of the following pseudocode:

        splitEnd = Rotate(direction, randomSmallAngle)*lengthScale + midPoint;

    I'm unsure how to actually perform the rotation correctly. In all honesty I'm a bit unsure what to do at this part: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and then added to the midPoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
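
    The Rotate() in the pseudocode is just a rotation of the segment's direction vector by a small random angle; treating the bolt as 2D, as in the article, a Java sketch of that step looks like this (hypothetical helper, not the article's code):

        public final class LineBranch {
            // Rotate a 2D direction vector by 'angle' radians (standard rotation matrix).
            static float[] rotate(float dirX, float dirY, float angle) {
                float cos = (float) Math.cos(angle);
                float sin = (float) Math.sin(angle);
                return new float[] {
                    dirX * cos - dirY * sin,
                    dirX * sin + dirY * cos
                };
            }

            // splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint
            static float[] splitEnd(float dirX, float dirY, float midX, float midY,
                                    float randomSmallAngle, float lengthScale) {
                float[] d = rotate(dirX, dirY, randomSmallAngle);
                return new float[] { midX + d[0] * lengthScale,
                                     midY + d[1] * lengthScale };
            }
        }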


  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or set up a strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to. Rule-based: in this system the player can set up a list of rules in the form [Condition - Action] that are then ordered by priority. Examples: Gambits in Final Fantasy XII; Tactics in Dragon Age: Origins & II.
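
    A rule-based ("gambit") list like the one described is straightforward to model in code: an ordered list of condition/action pairs where the first matching rule fires each turn. A minimal Java sketch with a generic actor type (an illustration, not taken from either game):

        import java.util.List;
        import java.util.function.Consumer;
        import java.util.function.Predicate;

        public class GambitList<T> {
            public static final class Rule<T> {
                final Predicate<T> condition;
                final Consumer<T> action;
                public Rule(Predicate<T> condition, Consumer<T> action) {
                    this.condition = condition;
                    this.action = action;
                }
            }

            private final List<Rule<T>> rulesByPriority;

            public GambitList(List<Rule<T>> rulesByPriority) {
                this.rulesByPriority = rulesByPriority;
            }

            // Run one "turn": the highest-priority rule whose condition holds fires.
            public void act(T actor) {
                for (Rule<T> rule : rulesByPriority) {
                    if (rule.condition.test(actor)) {
                        rule.action.accept(actor);
                        return;
                    }
                }
            }
        }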


  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed using various textures. To provide some background, I'm writing it in C++ and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 500px x 500px textures, which are all loaded and freed individually. I have now combined them all into a single sprite/tile sheet, making it 3000 x 2000 pixels, since the number of textures/tiles I'm using is increasing. Now I'm wondering if it's more efficient to load them individually or to write extra code to extract a certain tile from the sheet. Is it better to load the sheet, then extract 23 tiles and store them from one sheet, or load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
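
    For reference, the usual way a single sheet is used is to keep one texture bound and compute each tile's UV sub-rectangle from its index, rather than cropping the sheet into separate textures; a Java sketch of that bookkeeping (hypothetical helper - the poster's project is C++):

        public final class SpriteSheet {
            private final int sheetWidth, sheetHeight;   // in pixels
            private final int tileWidth, tileHeight;     // in pixels
            private final int columns;

            public SpriteSheet(int sheetWidth, int sheetHeight, int tileWidth, int tileHeight) {
                this.sheetWidth = sheetWidth;
                this.sheetHeight = sheetHeight;
                this.tileWidth = tileWidth;
                this.tileHeight = tileHeight;
                this.columns = sheetWidth / tileWidth;
            }

            // Returns {u0, v0, u1, v1} in [0,1] for the given tile index,
            // counting tiles left-to-right, top-to-bottom.
            public float[] uv(int tileIndex) {
                int col = tileIndex % columns;
                int row = tileIndex / columns;
                float u0 = (col * tileWidth)  / (float) sheetWidth;
                float v0 = (row * tileHeight) / (float) sheetHeight;
                float u1 = u0 + tileWidth  / (float) sheetWidth;
                float v1 = v0 + tileHeight / (float) sheetHeight;
                return new float[] { u0, v0, u1, v1 };
            }
        }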


  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max to JMonkeyEngine 3, but while we succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help appreciated.


  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and am currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) the 3D position to 2D has to produce completely different numbers for small changes (especially because I am sampling in texture space, so sample values differ very, very little) and has to be "shader-wise" - so forget about branches, loops, etc. I'm looking forward to your nice and fast solutions.
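
    One widely used branch-free hash is the fract(sin(dot(...)) * largeConstant) trick; it is shown below in Java purely to spell out the arithmetic (in a shader it is a one-liner), with the usual constants for the first channel and arbitrarily chosen ones for the second - an example, not something from the post:

        public final class PositionHash {
            private static float fract(double v) {
                return (float) (v - Math.floor(v));
            }

            // Hash a 3D position to two pseudo-random values in [0,1).
            // Any large, "unrelated" dot-product constants work.
            public static float[] hash3to2(float x, float y, float z) {
                double d1 = x * 12.9898 + y * 78.233 + z * 37.719;
                double d2 = x * 39.3468 + y * 11.135 + z * 83.155;
                float u = fract(Math.sin(d1) * 43758.5453);
                float v = fract(Math.sin(d2) * 43758.5453);
                return new float[] { u, v };
            }
        }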


  • Is there a library that handles hexagon tiled 2D maps?

    - by Pete Mancini
    It would represent a map that is semi-square and of arbitrary size. It would have a simple system for representing map coordinates, such as 0101 (first column, 1st hex). I'd want the map to be able to tell me the distance between two points, and which other hexes lie between those two points, as a list or array. I don't care as much about the language, but C# or Python would be ideal. Does one exist?
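
    Not a library recommendation, but the core distance computation is short if the map uses axial hex coordinates (an assumption - the "0101" scheme would need converting first); a Java sketch:

        public final class HexMath {
            // Distance in hex steps between (q1, r1) and (q2, r2) in axial coordinates.
            public static int distance(int q1, int r1, int q2, int r2) {
                int dq = q1 - q2;
                int dr = r1 - r2;
                // Equivalent to the cube-coordinate distance (|dx| + |dy| + |dz|) / 2.
                return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
            }
        }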


  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine, which generates the terrain either with fractals or noise. How can I make dynamic shadows for the sun on this terrain if I don't know in advance what will be rendered in front of the sun? My terrain: the sun is the only light and it is directional; my terrain is generated on a plane which is positioned in front of the camera, frustum-culled, and fits the size of the viewing frustum. It is height-mapped with a generated noise texture and uses tessellation shaders. Video: http://www.youtube.com/watch?v=tk6yFwYusOs Dynamic shadows with the infinite terrain.


  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position.

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

    The reason I used the semantic 'TEXCOORD2' on the light's pixel position is that I believe the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolation. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

        struct Vertex
        {
            float3 position : POSITION;
            float3 normal   : NORMAL;
            float2 texcoord : TEXCOORD;
        };

        struct Pixel
        {
            float4 position  : SV_Position;
            float4 light_pos : TEXCOORD2;
            float3 normal    : NORMAL;
            float2 texcoord  : TEXCOORD;
        };

        cbuffer Camera : register (b0)
        {
            matrix world;
            matrix view;
            matrix projection;
        };

        cbuffer Light : register (b1)
        {
            matrix light_world;
            matrix light_view;
            matrix light_projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), world);
            output.position = mul(output.position, view);
            output.position = mul(output.position, projection);
            output.world_pos = mul(float4(input.position, 1.0f), world);
            output.world_pos = mul(output.world_pos, light_view);
            output.world_pos = mul(output.world_pos, light_projection);
            output.texcoord = input.texcoord;
            output.normal = input.normal;
            return output;
        }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident because both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time. Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost of my question on Stack Overflow.)
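
    One piece of arithmetic worth having on hand (a general property of perspective projections with these planes, not a diagnosis of the bug): after the perspective divide, the depth produced by a D3DXMatrixPerspectiveFovLH-style projection is

        \[ \frac{z'}{w'} \;=\; \frac{f}{f - n} \;-\; \frac{f\,n}{(f - n)\,z_{view}} \]

    which, with n = 0.1 and f = 1000, is already about 0.99 at a view-space depth of 10 and about 0.999 at 100 - so a Z value sitting just below W is what this projection yields over most of the visible range.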


  • How does this snippet of code create a ray direction vector?

    - by Isaac Waller
    In the Minecraft source code, this code is used to create a direction vector for a ray from pitch and yaw:

        float f1 = MathHelper.cos(-rotationYaw * 0.01745329F - 3.141593F);
        float f3 = MathHelper.sin(-rotationYaw * 0.01745329F - 3.141593F);
        float f5 = -MathHelper.cos(-rotationPitch * 0.01745329F);
        float f7 = MathHelper.sin(-rotationPitch * 0.01745329F);
        return Vec3D.createVector(f3 * f5, f7, f1 * f5);

    I was wondering how it works, and what is the constant 0.01745329F?
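
    For illustration, here is the same construction with the trig identities applied and the steps named (my simplification, not the Minecraft source): 0.01745329 is pi/180, i.e. degrees to radians, and the three components are the familiar spherical-coordinate direction.

        public final class RayDirection {
            // Pitch and yaw in degrees; returns a unit direction vector {x, y, z}.
            public static float[] fromPitchYaw(float pitchDeg, float yawDeg) {
                double pitch = Math.toRadians(pitchDeg);   // 0.01745329 = PI / 180
                double yaw = Math.toRadians(yawDeg);
                double cosPitch = Math.cos(pitch);
                return new float[] {
                    (float) (-Math.sin(yaw) * cosPitch),   // f3 * f5
                    (float) (-Math.sin(pitch)),            // f7
                    (float) ( Math.cos(yaw) * cosPitch)    // f1 * f5
                };
            }
        }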

