Search Results

Search found 17342 results on 694 pages for 'custom draw'.

Page 414/694 | < Previous Page | 410 411 412 413 414 415 416 417 418 419 420 421  | Next Page >

  • What's wrong with this OpenGL model picking code?

    - by openglNewbie
    I am making a simple model viewer using OpenGL. When I want to pick an object, OpenGL returns nothing or an object that is in another place. This is my code:

        GLuint buff[1024] = {0};
        GLint hits, view[4];
        glSelectBuffer(1024, buff);
        glGetIntegerv(GL_VIEWPORT, view);
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        gluPickMatrix(x, y, 1.0, 1.0, view);
        gluPerspective(45, (float)view[2]/(float)view[4], 1.0, 1500.0);
        glMatrixMode(GL_MODELVIEW);
        glRenderMode(GL_SELECT);
        glLoadIdentity();
        // I make the same transformations for normal render
        glTranslatef(0, 0, -zoom);
        glMultMatrixf(transform.M);
        glInitNames();
        glPushName(-1);
        for (int j = 0; j < allNodes.size(); j++) {
            glLoadName(allNodes.at(j)->id);
            allNodes.at(j)->Draw(textures);
        }
        glPopName();
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        hits = glRenderMode(GL_RENDER);

    Read the article

  • JDeveloper News – Did You Know

    - by shay.shmeltzer
    There have been a few issues lately with access to blogs.oracle.com to write new messages – and as a result I’m a little behind on reporting the latest and greatest in JDeveloper. (I’m also unable to approve comments :-( ) But just in case you missed it, here are a few noteworthy things you should know:

    ADF Mobile Client went production – this is a solution that lets you use ADF to develop on-device applications for mobile devices. You develop once and run on various devices (right now BlackBerry and Windows Mobile, with other platforms coming soon). For more information check out the ADF Mobile page on OTN.

    ADF Developer Certification goes production – you can now take the official test and get an official certification that you can include in your resume. It should be a must for any consultant looking to get ADF-related gigs. Learn more about the ADF Certification Exam.

    ADF Visio Stencils released – if you are looking for a quick way to draw prototypes of ADF screens, you probably can’t get faster or more accurate results than this. It is a set of Visio components that you can drag over to design a page visually. You can also set properties for components and do other advanced things. You do need a license for Visio to use this, but the ADF stencils are free. We’ve been using these internally at Oracle for some time now and we thought the ADF community would enjoy them too. Download the ADF Visio Stencils here (and watch a YouTube demo of how they work).

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution on terms like "soggy" against all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0 and 1.0 generated from how far into the list the terms appear and how often they are repeated (a later mention of the term is worth less, an earlier one more; greater frequency means a greater score, etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem grading search relevance across appreciable metrics (frequency, term location/collocation, recency) and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
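
    An aside, not from the post: one hedged way to write down the kind of blend described above is a weighted sum of a position term, a frequency term and a recency decay, each kept in [0, 1], with the weights left as tuning knobs:

        \mathrm{score}(d,q) \;=\; w_p\,\frac{1}{1+\mathrm{firstpos}(q,d)}
                            \;+\; w_f\,\frac{\mathrm{tf}(q,d)}{\mathrm{tf}(q,d)+k}
                            \;+\; w_r\,e^{-\lambda\,\Delta t_d},
        \qquad w_p + w_f + w_r = 1

    Here firstpos(q, d) is the token index of the earliest match, tf(q, d) the number of matches, Δt_d the age of the post, and k and λ free parameters; every symbol and weight is an assumption made for illustration, not something the question specifies.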

    Read the article

  • Fun programming or something else?

    - by gion_13
    I've recently heard about Android's isUserAGoat method and I didn't know what to think. At first I laughed my brains out, then I was embarrassed by my lack of professionalism and tried to look into it and see if it makes any normal sense. As it turns out it is a joke (as stated here), and it appears that other languages/APIs have these sorts of Easter eggs implemented in their core. While I personally like them and feel they can be a breath of fresh air sometimes, I think that they can also be both frustrating and confusing (and you begin to ask yourself: "can users be goats?" or "I get it! "goat" is slang for.... wait.."). My question is: are there any other examples of this kind of programming joke, and what is their intent? Should they be considered harmless or not (how do programmers feel about them)? Do they reach their goal (if there is any, other than a laugh)? Where do you draw the line between a good joke and a disaster? (What if the method was called isUserStupid?)

    Read the article

  • Drawing two orthogonal strings in 3d space in Android Canvas?

    - by hasanghaforian
    I want to draw two strings on a canvas. The first string must be rotated around the Y axis, for example 45 degrees. The second string must start at the end of the first string and must also be orthogonal to the first string. This is my code:

        String text = "In the";
        float textWidth = redPaint.measureText(text);
        Matrix m0 = new Matrix();
        Matrix m1 = new Matrix();
        Matrix m2 = new Matrix();
        mCamera = new Camera();
        canvas.setMatrix(null);
        canvas.save();
        mCamera.rotateY(45);
        mCamera.getMatrix(m0);
        m0.preTranslate(-100, -100);
        m0.postTranslate(100, 100);
        canvas.setMatrix(m0);
        canvas.drawText(text, 100, 100, redPaint);
        mCamera = new Camera();
        mCamera.rotateY(90);
        mCamera.getMatrix(m1);
        m1.preTranslate(-textWidth - 100, -100);
        m1.postTranslate(textWidth + 100, 100);
        m2.setConcat(m1, m0);
        canvas.setMatrix(m2);
        canvas.drawText(text, 100 + textWidth, 100, greenPaint);

    But in the result, only the first string (the text drawn with the red paint) is visible. How can I draw two orthogonal strings in 3D space?

    Read the article

  • Rendering design. How can I effectively deal with forward, deferred and transparent rendering?

    - by user1423893
    I have many objects in my game world that all derive from one base class. Each object will have different materials and will therefore need to be drawn using various rendering techniques. I currently use the following order for rendering my objects:

    1. Deferred
    2. Forward
    3. Transparent (order independent)

    Each object has a rendering flag that denotes which one of the above methods should be used. The list of base objects in the scene is then iterated through, and each object is added to a separate list of deferred, forward or transparent objects based on its rendering flag value. The individual lists are then iterated through and drawn using the order above. Each list is cleared at the end of the frame. This method works fairly well, but it requires a different draw method for each material type. For example, each object will require the following methods in order to be compatible with the possible flag settings:

        object.DrawDeferred()
        object.DrawForward()
        object.DrawTransparent()

    It is also hard to see where methods outside of materials, such as rendering shadow maps, would fit into this "flag & method" design:

        object.DrawShadow()

    I was hoping that someone may have some suggestions for improving this rendering process, possibly making it more generic and less verbose.
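
    As a hedged aside (not part of the question): one common alternative to per-material Draw* methods is a single generic draw entry point plus a small render queue that buckets objects by the passes they say they participate in. A minimal Java-flavoured sketch of that idea, with every name hypothetical:

        import java.util.ArrayList;
        import java.util.EnumMap;
        import java.util.List;
        import java.util.Map;

        enum RenderPass { DEFERRED, FORWARD, TRANSPARENT, SHADOW }

        // Each object exposes the passes it takes part in and one generic draw call.
        interface Renderable {
            Iterable<RenderPass> passes();
            void draw(RenderPass pass);   // the material decides what this means per pass
        }

        final class RenderQueue {
            private final Map<RenderPass, List<Renderable>> buckets = new EnumMap<>(RenderPass.class);

            void submit(Renderable r) {
                for (RenderPass p : r.passes()) {
                    buckets.computeIfAbsent(p, k -> new ArrayList<>()).add(r);
                }
            }

            // Draw the buckets in a fixed pass order, then clear them for the next frame.
            void flush(RenderPass... order) {
                for (RenderPass p : order) {
                    for (Renderable r : buckets.getOrDefault(p, List.of())) {
                        r.draw(p);
                    }
                }
                buckets.clear();
            }
        }

    With something along these lines, shadow mapping becomes just another pass an object opts into, rather than an extra DrawShadow() method on the base class.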

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object like this:

        void display() {
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation
            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);
            fill(temp); // temp is a color defined in the constructor
            stroke(0);
            beginShape();
            vertex(-w/2, -h/2);
            vertex(w/2, -h/2);
            vertex(w/2, h-h/2);
            vertex(-w/2, h-h/2);
            endShape(CLOSE);
            popMatrix();
        }

    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, the bar isn't filled, and I get the warning:

        texture() is not available with this renderer

    What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.
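
    For what it's worth, a hedged sketch (not from the question) of how textured shapes are normally set up in Processing: texture() needs one of the OpenGL-backed renderers (P2D or P3D) passed to size(), and each vertex() call needs u/v texture coordinates. The asset path and the w/h values below are assumptions that mirror the question's fields:

        PImage img;
        float w = 64, h = 64;

        void setup() {
          size(640, 480, P3D);          // texture() requires the P2D or P3D renderer
          img = loadImage("bar.png");   // hypothetical asset
        }

        void draw() {
          background(255);
          translate(width/2, height/2);
          beginShape();
          texture(img);                 // must come right after beginShape()
          // vertex(x, y, u, v): with the default IMAGE textureMode, u/v are pixel
          // coordinates into img
          vertex(-w/2, -h/2,   0,         0);
          vertex( w/2, -h/2,   img.width, 0);
          vertex( w/2,  h-h/2, img.width, img.height);
          vertex(-w/2,  h-h/2, 0,         img.height);
          endShape(CLOSE);
        }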

    Read the article

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture, all I get is black. If I use GLUtils.texImage2D() to load a PNG file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128*128*4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for (int row = 0; row != 128; row++) {
            for (int i = 0; i != 128; i++) {
                bb.put((byte)0x80);
                bb.put((byte)0xFF);
                bb.put((byte)0xFF);
                bb.put((byte)i);
            }
        }
        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");
        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
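
    A hedged observation, not from the post: when a GLES 2.0 texture is created without any sampler parameters, the default minification filter expects mipmaps, and a texture without a mipmap chain is incomplete and samples as black. A minimal sketch of the parameters usually set right after glBindTexture (assuming the same 128x128 RGBA upload):

        // After glBindTexture(...) and before drawing with the texture:
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);   // no mipmaps needed
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);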

    Read the article

  • How do I properly implement zooming in my game?

    - by Rudy_TM
    I'm trying to implement a zoom feature but I have a problem. I am zooming the camera in and out with a pinch gesture and I update the camera each time in render(), but my sprites keep their original position and don't change with the zoom in or zoom out. The libraries are from libgdx. What am I missing?

        private void zoomIn() {
            ((OrthographicCamera)this.stage.getCamera()).zoom += .01;
        }

        public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) {
            // TODO Auto-generated method stub
            zoomIn();
            return false;
        }

        public void render(float arg0) {
            this.gl.glClear(GL10.GL_DEPTH_BUFFER_BIT | GL10.GL_COLOR_BUFFER_BIT);
            ((OrthographicCamera)this.stage.getCamera()).update();
            this.stage.draw();
        }

        public boolean touchDown(int arg0, int arg1, int arg2) {
            this.stage.toStageCoordinates(arg0, arg1, point);
            Actor actor = this.stage.hit(point.x, point.y);
            if (actor instanceof Group) {
                ((LevelSelect)((Group) actor).getActors().get(0)).touched();
            }
            return true;
        }

    ("Zoom In" / "Zoom Out" attachments from the post omitted.)
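
    A hedged note, not from the post: if the sprites are drawn with a separate SpriteBatch rather than as actors inside the Stage, that batch has to be given the (zoomed) camera's matrix every frame, otherwise it keeps its own projection and ignores the zoom. A sketch under that assumption (batch and sprite are hypothetical names):

        OrthographicCamera camera = (OrthographicCamera) stage.getCamera();
        camera.zoom += 0.01f;    // from the pinch handler
        camera.update();

        batch.setProjectionMatrix(camera.combined);  // without this the sprites ignore the zoom
        batch.begin();
        sprite.draw(batch);
        batch.end();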

    Read the article

  • Prevent Nautilus from displaying thumbnails on a specific mount

    - by Zakhar
    I have written a filesystem over FUSE to access a remote pseudo-NAS (the French "Freebox V6"; I'll soon publish it as GPL3... when it's a little bit more polished!). The NAS is connected to a home ADSL line, so data comes down at the upload speed of ADSL, which is at best 1 Mbps. My mount works fine (read-only at the moment), but Nautilus sees the mountpoint (and all sub-directories) as a "local" filesystem and tries to make thumbnails. As I have a directory full of images, this is quite horrible, because Nautilus then opens ALL the images to try to display the thumbnails. I could switch the Nautilus preference to "Never" for thumbnails, but then I'd lose thumbnails on my "real" local filesystem. So the question is: with the preference "Only for local filesystems", how can I tell Nautilus that my mountpoint is in fact NOT a local mount, so that it will stop trying to draw thumbnails on that specific mount but continue "thumbnailing" on mounts that really are local? Edit note: the same thing happens if you use "standard worldwide" mounts such as sshfs, davfs, ..., as long as you mount over a relatively slow network (ADSL) and have images/movies on the mounted tree.

    Read the article

  • String on a model

    - by alecnash
    I am trying to put a string on a Model and I want it to be dynamic. I did some research and came up with drawing the text on a texture and then setting it on the model. I use something like this:

        public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.Transparent);
            Spritbatch.Begin();
            // have to redo the ColorTexture
            Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
            Spritbatch.End();
            GraphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    When I was working with primitives and not models, everything worked fine because I set the texture exactly where I wanted, but with the model (a RoundedRect 3D button) it looks like that (image omitted). Is there a way to have the text centered only on one side?

    Read the article

  • Rendering 8 bit graphics

    - by Matjaz Muhic
    I have a strong programming background, just not in game development. I only made some Pong and Snake in high school, and I did some OpenGL in college. I want to make my own game engine. Nothing fancy, just a simple 2D game engine. But because I'm kinda old school and feeling retro, I want the graphics to look like old 8-bit games (Mega Man, Contra, Super Mario, ...). So how were the old games made back then? I want the simplest approach. Were they also using assets (images) like newer engines do now? How do you achieve this kind of rendering using OpenGL? Keep in mind: simplest solution. I want to know how it was made back then and how I can replicate that. It doesn't even have to be OpenGL; I can draw on a window canvas. I basically want to make it from scratch.
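
    As a hedged illustration (not from the question): one simple way to get the chunky 8-bit look without OpenGL is to draw into a tiny off-screen image at a console-like resolution and scale it up with nearest-neighbour interpolation. A minimal Java/AWT sketch:

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class RetroCanvas extends JPanel {
            // Draw everything into a 256x240 buffer (roughly NES-sized), then blow it up.
            private final BufferedImage frame = new BufferedImage(256, 240, BufferedImage.TYPE_INT_RGB);

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D low = frame.createGraphics();
                low.setColor(Color.BLACK);
                low.fillRect(0, 0, 256, 240);
                low.setColor(new Color(0x92CC41));   // a palette-ish green
                low.fillRect(40, 100, 16, 16);       // a 16x16 "sprite"
                low.dispose();

                Graphics2D out = (Graphics2D) g;
                out.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                        RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR); // keep pixels crisp
                out.drawImage(frame, 0, 0, getWidth(), getHeight(), null);
            }

            public static void main(String[] args) {
                JFrame window = new JFrame("8-bit-style rendering");
                window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                window.add(new RetroCanvas());
                window.setSize(256 * 3, 240 * 3);
                window.setVisible(true);
            }
        }

    The original consoles did this with hardware tile maps and fixed palettes; for a simple engine, a low internal resolution, a limited palette and tile-sized art get you most of the same look.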

    Read the article

  • Finite Numbers and ExplorerCanvas

    - by PhubarBaz
    I was working on my online mathematical graphing application, CloudGraph, trying to make it work in IE. The app uses the HTML5 canvas element to draw graphs. Since IE doesn't support canvas yet, I use ExplorerCanvas to provide that support for IE. However, it seems that when using excanvas, if you try to moveTo or drawTo a point that is not finite, it loses its mind and stops drawing anything else after that. I had no such problems in Firefox or Chrome, so it took me a while to figure out what was going on. Next I discovered that I needed a way to check if a variable was NaN or Infinity or any other non-finite value, so I could avoid calling moveTo() in that case. I started writing a long if statement, then I thought there has to be a better way. Sure enough there was: there just happens to be an isFinite() function built into JavaScript for exactly this purpose. Who knew! It works great. Another difference I discovered with excanvas is that you must specify a starting point using moveTo() when beginning a drawing path. Again, Chrome and Firefox are a lot more forgiving in this area, so it took me a while to figure out why my lines weren't drawing. But all is happy now, and I'm a little wiser to the ways of the canvas.

    Read the article

  • XNA 3D coordinates seem off

    - by Peteyslatts
    I'm going through a book, and the example it gives seems like it should work, but when I try to implement it, it falls short. My Camera class takes three vectors in to generate the View and Projection matrices. I'm giving it a position vector of (0,0,5), a target vector of Vector.Zero and a top vector (which way is up) of Vector.Up. My three vertices are placed at (0,1,0), (-1,-1,0), (1,-1,0). It seems like it should work because the vertices are centered around the origin, and that's where I'm telling the camera to look, but when I run the game the only way to get the camera to see the vertices is to set its position to (0,0,-5), and even then the triangle is skewed. Not sure what's wrong here. Any suggestions would be helpful. Just to make sure I've given you guys everything (I don't think these are important, as the problem seems to be related to the coordinates, not the ability of the game to draw them): I'm using a VertexBuffer and a BasicEffect. My render code is as follows:

        effect.World = Matrix.Identity;
        effect.View = camera.view;
        effect.Projection = camera.projection;
        effect.VertexColorEnabled = true;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserPrimitives<VertexPositionColor>
                (PrimitiveType.TriangleStrip, verts, 0, 1);
        }

    Read the article

  • Pixel Shader Issues

    - by Morphex
    I have issues with a pixel shader; my issue is mostly that I get nothing drawn on the screen.

        float4x4 MVP;

        // TODO: add effect parameters here.

        struct VertexShaderInput
        {
            float4 Position : POSITION;
            float4 normal : NORMAL;
            float2 TEXCOORD : TEXCOORD;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            input.Position.w = 0;
            VertexShaderOutput output;
            output.Position = mul(input.Position, MVP);
            // TODO: add your vertex shader code here.
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : SV_TARGET
        {
            return float4(1, 0, 0, 1);
        }

        technique
        {
            pass
            {
                Profile = 11.0;
                VertexShader = VertexShaderFunction;
                PixelShader = PixelShaderFunction;
            }
        }

    My matrix is calculated like this:

        Matrix MVP = Matrix.Multiply(
            Matrix.Multiply(Matrix.Identity,
                Matrix.LookAtLH(new Vector3(-10, 10, -10), new Vector3(0), new Vector3(0, 1, -0))),
            Camera.Projection);
        VoxelEffect.Parameters["MVP"].SetValue(MVP);

    Visual Studio Graphics Debug shows me that my vertex shader is actually working, but not the pixel shader. I stripped the shader down to the bare minimum so that I was sure it was correct. But why is my screen still black?

    Read the article

  • What makes you look like a bad developer (i.e. a hacker) [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to look at myself, and I wonder what makes one a bad developer (i.e. a hacker). These are a few things about me:

    - I use the IDE intensively, all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I am not used to (I can't have fluency in it, it is new to me), I only have very rough high-level ideas. I don't use the standard modeling diagrams for early detailed planning, but unorthodox diagrams that I invented for when I need to draw the design in detail. I don't use UML or similar; I find them not enough. I divide the sorts of diagrams I draw into three types: very high-level diagrams which can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, plus trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises. I am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hacky way, just so that the module works in the general cases and I can play around with it. The module gets refactored intensively after playing around, so that I can see more corner cases that I couldn't (or wouldn't want to) anticipate before writing code.
    - I use JUnit for integration-like tests, by using the TestSuite class and ordering the unit test classes in the suite.
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do something with a library that I haven't used a lot.

    So, judgment: am I a bad developer? A hacker? Put in other words, to make sure this is not considered off-topic: Is it bad practice to make your code too agile during the incubating/prototyping phase of software development? Is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but those are for specific products, not general ones.)

    Read the article

  • How difficult is it for an artist to make their own artwork cohesive with another artist's style? [on hold]

    - by user36200
    I have a lot of artwork I purchased from a website, but the artist who drew the game assets is unavailable. I need to create additional artwork which fits this style, but I am not an artist, nor do I have any idea of how artists work. Obviously, the solution is to find a new artist whom I can pay to draw this artwork while keeping it looking a certain way. I am scared of wasting money, though. I don't want to contract an artist only to find out it is extremely difficult for someone to match another person's art style. I don't need it to be identical, I just need it to be cohesive. I also want to know what I'm asking of people before I ask them. Artists are workers just like me, and deserve to be understood when contracted. As an artist, is it extra difficult or time-consuming to alter your artwork to match a certain style? Does it require a lot of talent to make a cohesive piece of art? To be specific, I am talking about structures, such as 2D symbols of towns for a map. They all have this "gritty" penciling effect to them and lots of saturated colors, which is what will be required when I say "cohesive", along with the architecture looking like it belongs in the same world.

    Read the article

  • What's wrong with my lookAt and move forward code?

    - by alaslipknot
    So I am still in the process of getting familiar with libGDX, and one of the fun things I love to do is write basic methods for reuse in future projects. For now I am stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I am using is this:

        // set angle
        public void lookAt(Vector2 target) {
            float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x);
            angle = (float) (angle * (180 / Math.PI));
            setAngle(angle);
        }

        // move forward
        public void moveForward() {
            this.position.x += Math.cos(getAngle()) * this.speed;
            this.position.y += Math.sin(getAngle()) * this.speed;
        }

    And this is my render method:

        @Override
        public void render(float delta) {
            // TODO Auto-generated method stub
            Gdx.gl.glClearColor(0, 0, 0.0f, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            // groupUpdate();
            Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            ball.lookAt(new Vector2(mousePos.x, mousePos.y));
            //
            if (Gdx.input.isTouched()) {
                ball.moveForward();
            }
            batch.begin();
            batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y,
                    ball.getSprite().getOriginX(), ball.getSprite().getOriginY(),
                    ball.getSprite().getWidth(), ball.getSprite().getHeight(),
                    .5f, .5f, ball.getAngle());
            batch.end();
        }

    The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I am also using this camera:

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);

    aaaand the result was so creepy lol. Thank you
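
    A hedged note, not from the post: lookAt() stores the angle in degrees (it multiplies by 180/PI for the sprite), while Math.cos and Math.sin expect radians, so a moveForward() built on that stored angle normally needs a conversion. A sketch of what that could look like, assuming the same fields:

        // move forward, treating the stored angle as degrees
        public void moveForward() {
            float radians = (float) Math.toRadians(getAngle()); // convert back before cos/sin
            this.position.x += Math.cos(radians) * this.speed;
            this.position.y += Math.sin(radians) * this.speed;
        }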

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page that will exit Facebook entirely and go to a client's website, where people land on a page (the client's) where they can enter their e-mail address to be entered into a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches 100, for instance. Once the number of entries reaches 100, a random winner is picked and notified via e-mail. The functionality is as follows:

    1. A link is placed on a Facebook page leading to an external page.
    2. The page is a form to merely enter an e-mail address for a contest.
    3. The e-mail address is placed in a temporary file.
    4. An automatic e-mail is sent to the address used, for confirmation, using a SHA-256 hash.
    5. The person receives the e-mail, saying something to the effect of "Please confirm your e-mail address etc. - If you did not authorize this, simply ignore this message and no further action will be taken."
    6. If the person clicks on the confirmation link, the e-mail address is then stored in the database and the person is again notified, saying "Thank you for signing up etc."
    7. Once others do the same and the database reaches a certain number, the form is no longer accessible and a random e-mail address is automatically picked.
    8. Once picked, an e-mail is automatically sent to the winner stating the instructions, and notifying me also.
    9. Once that person clicks yet another confirmation link, the database is then automatically deleted.

    I have built this myself and have no intention of breaking any rules, nor jeopardizing the work/time/energy I have put into this project. Is this allowed?
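
    Purely as an illustration of the confirmation-token step described above (this says nothing about the policy question, and none of it is from the post): a double-opt-in link is typically built from a hash of the address plus a server-side secret, for example:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        final class ConfirmationToken {
            // SECRET is hypothetical; in practice it would come from server-side configuration.
            private static final String SECRET = "change-me";

            static String forEmail(String email) throws NoSuchAlgorithmException {
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                byte[] digest = sha256.digest((email + ":" + SECRET).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();   // appended to the confirmation URL and checked on click
            }
        }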

    Read the article

  • Android Bitmap: Collision Detecting

    - by Aekasitt Guruvanich
    I am writing an Android game right now and I need some help with the collision of the pawns on screen. I figured I could run a for loop in the Player class over all Pawn objects on the screen, checking whether or not their width*height rectangles intersect with each other, but is there a more efficient way to do this? And if you do it this way, many of the transparent pixels inside the rectangular area will also be counted as a collision. Is there a way to check for collision between Bitmaps on a Canvas that disregards transparent pixels? The class for Player is below; the Pawn class uses the same method of display.

        class Player {
            private Resources res;   // Used for referencing Bitmaps from a predefined location
            private Bounds bounds;   // Class that holds the boundary of the screen
            private Bitmap image;
            private float x, y;
            private Matrix position;
            private int width, height;
            private float velocity_x, velocity_y;

            public Player(Resources resources, Bounds boundary) {
                res = resources;
                bounds = boundary;
                image = BitmapFactory.decodeResource(res, R.drawable.player);
                width = image.getWidth();
                height = image.getHeight();
                position = new Matrix();
                x = bounds.xMax / 2;  // Initially puts the Player in the middle of the screen
                y = bounds.yMax / 2;
                position.preTranslate(x, y);
            }

            public void draw(Canvas canvas) {
                canvas.drawBitmap(image, position, null);
            }
        }
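
    A hedged sketch (not from the question) of the usual two-stage approach: a cheap bounding-box test first, and only if the boxes overlap, an alpha check of the overlapping pixels. It assumes each object can report its bitmap and integer screen position:

        import android.graphics.Bitmap;
        import android.graphics.Color;
        import android.graphics.Rect;

        final class CollisionUtil {
            // Broad phase: axis-aligned bounding boxes built from position and bitmap size.
            static boolean boxesOverlap(int ax, int ay, Bitmap a, int bx, int by, Bitmap b) {
                Rect ra = new Rect(ax, ay, ax + a.getWidth(), ay + a.getHeight());
                Rect rb = new Rect(bx, by, bx + b.getWidth(), by + b.getHeight());
                return Rect.intersects(ra, rb);
            }

            // Narrow phase: scan the overlapping region and report a hit only where
            // both bitmaps have a non-transparent pixel.
            static boolean pixelsOverlap(int ax, int ay, Bitmap a, int bx, int by, Bitmap b) {
                int left   = Math.max(ax, bx);
                int top    = Math.max(ay, by);
                int right  = Math.min(ax + a.getWidth(),  bx + b.getWidth());
                int bottom = Math.min(ay + a.getHeight(), by + b.getHeight());
                for (int y = top; y < bottom; y++) {
                    for (int x = left; x < right; x++) {
                        int alphaA = Color.alpha(a.getPixel(x - ax, y - ay));
                        int alphaB = Color.alpha(b.getPixel(x - bx, y - by));
                        if (alphaA != 0 && alphaB != 0) {
                            return true;
                        }
                    }
                }
                return false;
            }
        }

    Calling getPixel() for every pixel is slow; if this runs every frame, copying each bitmap's pixels once into an int[] with getPixels() and indexing into that array is the usual optimisation.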

    Read the article

  • Is spreading code with refactoring comments a good idea?

    - by Uooo
    I am working on a "spaghetti-code" project, and while I am fixing bugs and implementing new features, I also do some refactoring in order to make the code unit-testable. The code is often so tightly coupled or complicated that fixing a small bug would result in a lot of classes being rewritten. So I decided to draw a line somewhere in the code where I stop refactoring. To make this clear, I drop some comments in the code explaining the situation, like: class RefactoredClass { private SingletonClass xyz; // I know SingletonClass is a Singleton, so I would not need to pass it here. // However, I would like to get rid of it in the future, so it is passed as a // parameter here to make this change easier later. public RefactoredClass(SingletonClass xyz) { this.xyz = xyz; } } Or, another piece of cake: // This might be a good candidate to be refactored. The structure is like: // Version String // | // +--> ... // | // +--> ... // | // ... and so on ... // Map map = new HashMap<String, Map<String, Map<String, List<String>>>>(); Is this a good idea? What should I keep in mind when doing so?

    Read the article

  • Which jar has JBox2d's p5 package

    - by Brantley Blanchard
    Using Eclipse, I'm trying to write a simple hello-world program in Processing that draws a rectangle on the screen and then has gravity drop it, as seen in this tutorial. The problem is that when I try to import the p5 package, it doesn't resolve, so I can't declare my Physics object. I tried two things:

    1. Download the zip, unzip it, then import the 3 jars (library, serialization & testbed)
       a. import org.jbox2d.p5.*; doesn't resolve, but the others do
       b. Physics physics; doesn't resolve
    2. Download the older standalone testbed jar, then import it
       a. Physics physics; doesn't resolve

    Here is basically where I'm starting:

        import org.jbox2d.util.nonconvex.*;
        import org.jbox2d.dynamics.contacts.*;
        import org.jbox2d.testbed.*;
        import org.jbox2d.collision.*;
        import org.jbox2d.common.*;
        import org.jbox2d.dynamics.joints.*;
        import org.jbox2d.p5.*;
        import org.jbox2d.dynamics.*;

        import processing.core.PApplet;

        public class MyFirstJBox2d extends PApplet {

            Physics physics;

            public void setup() {
                size(640, 480);
                frameRate(60);
                initScene();
            }

            public void draw() {
                background(0);
                if (keyPressed) {
                    // Reset everything
                    physics.destroy();
                    initScene();
                }
            }

            public void initScene() {
                physics = new Physics(this, width, height);
                physics.setDensity(1.0f);
                physics.createRect(300, 200, 340, 300);
            }
        }

    Read the article

  • Moving in the wrong direction

    - by Will
    Solution: To move a unit forward:

        forward = Quaternion(0,0,0,1)
        rotation.normalize() # occasionally
        ...
        pos += ((rotation * forward) * rotation.conjugated()).xyz().normalized() * speed

    I think the trouble stemmed from how the Euclid math library was doing Quaternion * Vector3 multiplication, although I can't see it.

    I have a vec3 position, a quaternion for rotation and a speed. I compute the player position like this:

        rot *= Quaternion().rotate_euler(0., roll_speed, pitch_speed)
        rot.normalize()
        pos += rot.conjugated() * Vector3(0., 0., -speed)

    However, printing pos to the console, I can see that I only ever seem to travel on the x-axis. When I draw the scene using the rot quaternion to rotate my camera, it shows a proper orientation. What am I doing wrong? Here's an example: you start off with rotation being the identity quaternion (w=1, x=0, y=0, z=0). You move forward; the code correctly decrements Z. You then pitch right over to face the other way; if you spin only 175 degrees it goes in the right direction; you have to spin past 180 degrees. It doesn't matter which direction you spin in, up or down, though. Your quaternion can then be something like w=0.1, x=0.1, y=0, z=0, and moving forward, you actually move backward?! (I am using the euclid Python module, but it's the same as every other conjugate.)

    The code can be tried online at http://williame.github.com/ludum_dare_24_evolution/ The only keys that adjust the speed are W and S. The arrow keys only adjust the pitch/roll. At first you can fly ok, but after a bit of weaving around you end up getting sucked towards one of the sides. The code is https://github.com/williame/ludum_dare_24_evolution/blob/cbacf61a7159d2c83a2187af5f2015b2dde28687/tiny1web.py#L102
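
    For reference, a standard identity rather than anything from the post: rotating a vector by a unit quaternion treats the vector as a pure quaternion and conjugates it by the rotation, which is what the working solution at the top does; multiplying by the quaternion (or its conjugate) on one side only is not a rotation:

        v' = q\,(0,\vec{v})\,q^{-1}, \qquad q^{-1} = \bar{q} \ \text{when} \ \lVert q \rVert = 1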

    Read the article

  • How to manage a lot of Action Listeners for multiple buttons

    - by Wumbo4Dayz
    I have this Tic-Tac-Toe game and I thought of a really cool way to draw out the grid of 9 little boxes. I was thinking of putting buttons in each of those boxes. How should I give each button (9 buttons in total) an ActionListener that draws either an X or an O? Should they each have their own, or should I write some sort of code that detects turns? Could I even make a JButton array and use some for loops to place the 9 buttons? So many possibilities, but which one is the most proper? Code so far:

        import javax.swing.*;
        import java.awt.event.*;
        import java.awt.*;

        public class Board extends JPanel implements ActionListener {

            public Board() {
                Timer timer = new Timer(25, this);
                timer.start();
            }

            @Override
            protected void paintComponent(Graphics g) {
                for (int y = 0; y < 3; y++) {
                    for (int x = 0; x < 3; x++) {
                        g.drawRect(x * 64, y * 64, 64, 64);
                    }
                }
            }

            public void actionPerformed(ActionEvent e) {
                repaint();
            }
        }
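
    A hedged sketch (not from the post) of the JButton-array idea: nine buttons created in a loop, each wired to the same click logic, with the turn state kept in the panel. All names are hypothetical:

        import javax.swing.*;
        import java.awt.*;

        public class ButtonBoard extends JPanel {
            private boolean xTurn = true;  // whose turn it is

            public ButtonBoard() {
                setLayout(new GridLayout(3, 3));
                JButton[] cells = new JButton[9];
                for (int i = 0; i < 9; i++) {
                    JButton cell = new JButton("");
                    // Every button shares the same logic and the same turn state.
                    cell.addActionListener(e -> {
                        if (cell.getText().isEmpty()) {       // ignore clicks on taken cells
                            cell.setText(xTurn ? "X" : "O");
                            xTurn = !xTurn;
                        }
                    });
                    cells[i] = cell;
                    add(cell);
                }
            }
        }

    Whether the Xs and Os are then painted by the buttons themselves (as here) or by a custom paintComponent that reads the board state is mostly a design preference; the array-plus-loop setup works for both.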

    Read the article

  • Shared pointers causing weird behaviour

    - by Setzer22
    I have the following code in SFML 2.1.

    Class ResourceManager:

        shared_ptr<Sprite> ResourceManager::getSprite(string name) {
            shared_ptr<Texture> texture(new Texture);
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }

    Main method (I'll omit most of the irrelevant code):

        shared_ptr<Sprite> sprite = ResourceManager::getSprite("sprite.png");
        ...
        while (renderWindow.isOpen())
            renderWindow.draw(*sprite);

    Oddly enough, this makes my sprite render completely white, but if I do this instead:

        shared_ptr<Sprite> ResourceManager::getSprite(string name) {
            Texture* texture = new Texture;   // <------- from shared pointer to raw pointer
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }

    it works perfectly. So what's happening here? I assumed the shared pointer would work just like a pointer. Could it be that it's getting deleted? My main method is keeping a reference to the sprite, so I don't really understand what's going on here :S

    EDIT: I'm perfectly aware that deleting the sprite won't delete the texture, and that this generates a memory leak I'd have to handle; that's why I'm trying to use smart pointers in the first place...

    Read the article
