Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 293/657 | < Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >

  • How do I identify mouse clicks versus mouse down in games?

    - by Tristan
    What is the most common way of handling mouse clicks in games, given that all you can detect from the mouse is whether a button is up or down? I currently just treat mouse down as a click, but this simple approach limits me greatly in what I want to achieve. For example, I have some code that should run only once per click, but using mouse down as the click can cause it to run more than once, depending on how long the button is held down. So I need to do it on a click! But what is the best way to handle a click? Is a click when the button goes from up to down, or from down to up? Or is it a click only if the button was down for less than x frames/milliseconds? And in that case, is it considered mouse down rather than a click if it's held for x frames/milliseconds, or a click and then mouse down? I can see that each of these approaches has its uses, but which is the most common in games? And maybe I'll ask more specifically: which is the most common in RTS games?
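
    A common pattern here is edge detection: compare this frame's button state with the previous frame's, and treat the up-to-down transition (or down-to-up, for fire-on-release) as the click. A minimal XNA-style sketch, with illustrative names:

      using Microsoft.Xna.Framework.Input;

      class ClickDetector
      {
          MouseState previousMouse;

          // True only on the frame the left button goes from released to pressed.
          public bool LeftClicked()
          {
              MouseState current = Mouse.GetState();
              bool clicked = current.LeftButton == ButtonState.Pressed
                          && previousMouse.LeftButton == ButtonState.Released;
              previousMouse = current; // remember this frame for the next call
              return clicked;
          }
      }

    Call it exactly once per Update. Fire-on-release works the same way with the two comparisons swapped, and is often preferred where a press may begin a drag, as with RTS selection boxes.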

    Read the article

  • How to do reflective collisions with particles hitting background tiles?

    - by Shawn LeBlanc
    In my 2d pixel old-school platformer, I'm looking for methods for bouncing particles off of background tiles. Particles aren't affected by gravity and collisions are "reflective". By that I mean a particle hitting the side of a square tile at 45 degrees should bounce off at 45 degrees as well. We can assume that tiles will always be perfectly square. No slopes or anything. What are efficient methods and algorithms to do this? I'd be implementing this on a Sega Genesis.
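
    For axis-aligned square tiles the bounce itself is cheap: the surface normal is always (+/-1, 0) or (0, +/-1), so a reflective bounce reduces to negating one velocity component, which also suits the Genesis's integer-only math. A sketch (C# for clarity; names are illustrative):

      struct Particle { public int X, Y, Vx, Vy; }

      // Reflect off whichever tile face the particle crossed this step:
      // a left/right face flips the x velocity, a top/bottom face flips y.
      static void Bounce(ref Particle p, bool hitVerticalFace, bool hitHorizontalFace)
      {
          if (hitVerticalFace)   p.Vx = -p.Vx; // face normal (+/-1, 0)
          if (hitHorizontalFace) p.Vy = -p.Vy; // face normal (0, +/-1)
      }

    A 45-degree particle hitting a side keeps its speed and leaves at 45 degrees, as required; hitting a corner flips both components.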

    Read the article

  • Multiple Key Presses in XNA?

    - by Bryan Harrington
    I'm actually trying to do something fairly simple. I cannot get multiple key presses to work in XNA. I've tried the following pieces of code. else if (keyboardState.IsKeyDown(Keys.Down) && (keyboardState.IsKeyDown(Keys.Left))) { //Move Character South-West } and I tried. else if (keyboardState.IsKeyDown(Keys.Down)) { if (keyboardState.IsKeyDown(Keys.Left)) { //Move Character South-West } } Neither worked for me. Single presses work just fine. Any thoughts?
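
    The combined IsKeyDown test is valid XNA, so when it fails the usual suspect is a stale snapshot: KeyboardState is a value type, and one captured outside Update never changes. A sketch, assuming the check runs inside your Game subclass:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Input;

      protected override void Update(GameTime gameTime)
      {
          // Take a fresh snapshot every frame; a state captured once
          // (e.g. in the constructor) will never report new presses.
          KeyboardState keyboardState = Keyboard.GetState();

          if (keyboardState.IsKeyDown(Keys.Down) && keyboardState.IsKeyDown(Keys.Left))
          {
              // Move character south-west
          }

          base.Update(gameTime);
      }

    If the state is already fresh, keyboard ghosting on some hardware can also swallow particular key combinations; testing with a different pair of keys rules that out.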

    Read the article

  • Trade-offs of linking versus skinning geometry

    - by Jeff
    What are the trade-offs inherent in linking geometry to a node versus using skinned geometry? Specifically: What capabilities do you gain or lose with each method? What are the performance impacts of doing one over the other? In which specific situations would you want to do one over the other? In addition, do the answers to these questions tend to be engine-specific? If so, how much?

    Read the article

  • Platformer gravity where gravity is greater than tile size

    - by Sara
    I am making a simple grid-tile-based platformer with basic physics. I have 16 px tiles, and after playing with gravity it seems that to get a nice, quick, Mario-like jump feel, the player ends up falling faster than 16 px per update by the time they reach the ground. The problem is that they clip through the first layer of tiles before collisions are detected. Then, when I move the player to the top of the colliding tile, they end up on the bottom-most tile. I have tried limiting their maximum velocity to less than 16 px per update, but it does not look right. Are there any standard approaches to solving this? Thanks.
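
    One standard approach is sub-stepping: split each frame's motion into steps no larger than a tile so the collision check can never be jumped over (swept AABB tests are the other common option). A C# sketch, where CollidesAt and SnapToTileTop are hypothetical stand-ins for your own collision code:

      using Microsoft.Xna.Framework;

      const float TileSize = 16f;

      // Move vertically in steps of at most one tile per collision check.
      void MoveVertically(ref float y, float velocityY, float dt)
      {
          float remaining = velocityY * dt;
          while (remaining != 0f)
          {
              float step = MathHelper.Clamp(remaining, -TileSize, TileSize);
              y += step;
              remaining -= step;
              if (CollidesAt(y))        // hypothetical tile-map query
              {
                  y = SnapToTileTop(y); // hypothetical: rest flush on the tile
                  break;
              }
          }
      }

    This keeps the fast Mario-style fall speed while guaranteeing the first tile layer is tested before the player passes it.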

    Read the article

  • Setting up cube map texture parameters in OpenGL

    - by KaiserJohaan
    I see a lot of tutorials and sources use the following code snippet when defining each face of a cube map: for (i = 0; i < 6; i++) glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, InternalFormat, size, size, 0, Format, Type, NULL); Is it safe to assume GL_TEXTURE_CUBE_MAP_POSITIVE_X + i will properly iterate through the cube map targets, i.e. GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and so on?

    Read the article

  • How do I draw a scene with 2 nested frames?

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just 2 frames: one of them is called x3ds_Torso and it has a child frame called x3ds_Arm_01. Each of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top; sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by that of its parent frame, starting from the top and going down, and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this: public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) { System.out.println("-->" + bone.name); if (boneParent != null) { bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined); } else { bone.matrixCombined = bone.matrixTransform; } bone.matrixFinal = bone.matrixCombined; for (Bone childBone : bone.boneChilds) { calculeFinalMatrixPosition(bone, childBone); } } Then I have to multiply every vertex of the mesh: public void transformVertex(Bone bone) { for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) { Mesh mesh = iterator.next(); if (mesh.boneName.equals(bone.name)) { float[] vertex = new float[4]; double[] newVertex = new double[3]; if (mesh.skinnedVertexBuffer == null) { mesh.skinnedVertexBuffer = new FloatDataBuffer( mesh.numVertices, 3); } mesh.vertexBuffer.buffer.rewind(); while (mesh.vertexBuffer.buffer.hasRemaining()) { vertex[0] = mesh.vertexBuffer.buffer.get(); vertex[1] = mesh.vertexBuffer.buffer.get(); vertex[2] = mesh.vertexBuffer.buffer.get(); vertex[3] = 1; newVertex = bone.matrixFinal.transpose().multiply(vertex); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0])); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1])); mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2])); } mesh.vertexBuffer = new FloatDataBuffer( mesh.numVertices, 3); mesh.skinnedVertexBuffer.buffer.rewind(); mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer); } } for (Bone childBone : bone.boneChilds) { transformVertex(childBone); } } I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.
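
    For reference, the CPU side usually only needs the traversal; the combined matrix is handed to the renderer rather than baked into the vertex buffers every frame. A sketch of that pattern (C# for brevity; Bone, LocalTransform, Children and DrawMeshesFor are illustrative stand-ins):

      using Microsoft.Xna.Framework;

      // Combine on the way down the tree, starting with DrawBone(root, Matrix.Identity).
      void DrawBone(Bone bone, Matrix parentCombined)
      {
          // Row-vector convention (as in XNA): local first, then parent.
          // If your math library multiplies column vectors, swap the operands.
          Matrix combined = bone.LocalTransform * parentCombined;
          DrawMeshesFor(bone, combined); // e.g. load combined into the modelview stack
          foreach (Bone child in bone.Children)
              DrawBone(child, combined);
      }

    When the arm ends up detached, the usual culprits are exactly that multiplication order, or a transpose applied at the wrong point for the chosen convention.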

    Read the article

  • How do people get around the Carmack's Reverse patent?

    - by Rei Miyasaka
    Apparently, Creative has a patent on Carmack's Reverse, and they successfully forced Id to modify their techniques for the source drop, as well as to include EAX in Doom 3. But Carmack's Reverse is discussed quite often and apparently it's a good choice for deferred shading, so it's presumably used in a lot of other high-budget productions too. Even though it's unlikely that Creative would go after smaller companies, I'm wondering how the bigger studios get around this problem. Do they just cross their fingers and hope Creative doesn't troll them, or do they just not use Carmack's Reverse at all?

    Read the article

  • Why does Farseer 2.x store temporaries as members and not on the stack? (.NET)

    - by Andrew Russell
    UPDATE: This question refers to Farseer 2.x. The newer 3.x doesn't seem to do this. I'm using the Farseer Physics Engine quite extensively at the moment, and I've noticed that it seems to store a lot of temporary value types as members of the class, and not on the stack as one might expect. Here is an example from the Body class: private Vector2 _worldPositionTemp = Vector2.Zero; private Matrix _bodyMatrixTemp = Matrix.Identity; private Matrix _rotationMatrixTemp = Matrix.Identity; private Matrix _translationMatrixTemp = Matrix.Identity; public void GetBodyMatrix(out Matrix bodyMatrix) { Matrix.CreateTranslation(position.X, position.Y, 0, out _translationMatrixTemp); Matrix.CreateRotationZ(rotation, out _rotationMatrixTemp); Matrix.Multiply(ref _rotationMatrixTemp, ref _translationMatrixTemp, out bodyMatrix); } public Vector2 GetWorldPosition(Vector2 localPosition) { GetBodyMatrix(out _bodyMatrixTemp); Vector2.Transform(ref localPosition, ref _bodyMatrixTemp, out _worldPositionTemp); return _worldPositionTemp; } It looks like it's a by-hand performance optimisation, but I don't see how this could possibly help performance. (If anything, I think it would hurt by making objects much larger.)
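
    For comparison, a stack-based version of the same method is sketched below, reusing the member names quoted above. The ref/out overloads avoid the large-struct copies with plain locals just as well, so the fields add per-instance size and make the methods non-reentrant without an obvious win:

      public Vector2 GetWorldPosition(Vector2 localPosition)
      {
          Matrix bodyMatrix; // a local: lives on the stack, no instance state touched
          GetBodyMatrix(out bodyMatrix);

          Vector2 worldPosition;
          Vector2.Transform(ref localPosition, ref bodyMatrix, out worldPosition);
          return worldPosition;
      }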

    Read the article

  • Homemaking a 2d soft body physics engine

    - by Griffin
    Hey, so I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how physics work and could be simulated: by giving points and the connections between points properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually make this, such as equations and the required physics knowledge. It would be great if anyone out there would also share their attempts or ideas. Finally, I was wondering whether it would be possible to: use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based? use the source code of a 2D rigid-body physics engine such as Box2D as a starting point?
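
    The usual starting point is a mass-spring system: point masses connected by Hooke's-law springs give elasticity and damping directly, and shape retention, pressure and the rest are built on top of the same scaffolding. A minimal sketch of one spring update (explicit Euler; C# here for brevity, all names illustrative):

      using System;

      struct PointMass { public float X, Y, Vx, Vy, Mass; }

      // Hooke's law plus axial damping between two point masses.
      static void ApplySpring(ref PointMass a, ref PointMass b,
                              float restLength, float stiffness, float damping, float dt)
      {
          float dx = b.X - a.X, dy = b.Y - a.Y;
          float dist = (float)Math.Sqrt(dx * dx + dy * dy);
          if (dist == 0f) return;
          float nx = dx / dist, ny = dy / dist;     // unit axis from a to b
          float f = stiffness * (dist - restLength) // stretch pulls the pair together
                  + damping * ((b.Vx - a.Vx) * nx + (b.Vy - a.Vy) * ny);
          a.Vx += f * nx / a.Mass * dt;  a.Vy += f * ny / a.Mass * dt;
          b.Vx -= f * nx / b.Mass * dt;  b.Vy -= f * ny / b.Mass * dt;
      }

    Searching for "mass-spring soft body" and Verlet integration turns up most of the equations this needs, and Box2D's solver code is readable as a reference even if you don't reuse it.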

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if an implementation was already available, even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that only two are returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
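
    Lacking a canonical routine, a simple greedy pass already reproduces the ideal answer here: take each uncovered horizontal run of set pixels as a seed and grow it downward while the identical run repeats. A sketch (XNA's Rectangle used for convenience; overlap is allowed, as stated):

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;

      static List<Rectangle> MaskToRects(bool[,] mask)
      {
          int h = mask.GetLength(0), w = mask.GetLength(1);
          var covered = new bool[h, w];
          var rects = new List<Rectangle>();
          for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
              {
                  if (!mask[y, x] || covered[y, x]) continue;
                  int runW = 1;                 // maximal horizontal run at (x, y)
                  while (x + runW < w && mask[y, x + runW]) runW++;
                  int runH = 1;                 // grow down while the run repeats
                  while (y + runH < h && RowHasRun(mask, y + runH, x, runW)) runH++;
                  for (int yy = y; yy < y + runH; yy++)
                      for (int xx = x; xx < x + runW; xx++)
                          covered[yy, xx] = true;
                  rects.Add(new Rectangle(x, y, runW, runH));
              }
          return rects;
      }

      static bool RowHasRun(bool[,] mask, int row, int x, int w)
      {
          for (int i = 0; i < w; i++)
              if (!mask[row, x + i]) return false;
          return true;
      }

    On a plus-shaped mask like the example, the row-major scan finds the tall bar first and the wide bar second, i.e. exactly { 3, 1, 2, 6 } and { 1, 3, 6, 2 }; on other inputs the cover is correct but not guaranteed minimal.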

    Read the article

  • Finding the contact point with SAT

    - by Kai
    The Separating Axis Theorem (SAT) makes it simple to determine the Minimum Translation Vector, i.e., the shortest vector that can separate two colliding objects. However, what I need is the vector that separates the objects along the direction the penetrating object is moving (i.e., up to the contact point). I drew a picture to help clarify. There is one box, moving from the before to the after position. In its after position, it intersects the grey polygon. SAT can easily return the MTV, which is the red vector. I am looking to calculate the blue vector. My current solution performs a binary search between the before and after positions until the length of the blue vector is known to within a certain threshold. It works, but it's a very expensive calculation, since the collision between the shapes needs to be recalculated every iteration. Is there a simpler and/or more efficient way to find the contact point vector?
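
    When the motion direction is known, the search can often be replaced by scaling the MTV: pulling the box back a distance d along its (unit) motion direction changes the separation along the MTV normal n by d * |dot(dir, n)|, so d = p / |dot(dir, n)| for penetration depth p. This is exact when the MTV axis is the face actually hit first, and only approximate in corner-versus-corner cases. A sketch (XNA-style Vector2):

      using System;
      using Microsoft.Xna.Framework;

      // motionDir must be normalized; mtv is the minimum translation vector.
      static Vector2 ContactVector(Vector2 motionDir, Vector2 mtv)
      {
          float p = mtv.Length();                    // penetration depth
          Vector2 n = Vector2.Normalize(mtv);        // separation normal
          float align = Vector2.Dot(motionDir, n);
          if (Math.Abs(align) < 1e-6f) return mtv;   // sliding parallel to the face
          return motionDir * (-p / Math.Abs(align)); // pull back along the motion path
      }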

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics: All the "holes" in the voxelized volume are correct. All the connections are correct, i.e. seamless. The surface appears smooth. In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountain ladders to be smooth instead of blocky.

    Read the article

  • How to label a cuboid?

    - by usha
    Hi this is how my 3dcuboid looks, I have attached the complete code. I want to label this cuboid using different names across sides, how is this possible using opengl on android? public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length); 
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }

    Read the article

  • Using Ogre particle point billboards with shaders

    - by Jay
    I'm learning about using Ogre particles and have some questions about how point-type particles work. Q. I believe point-type particles are implemented as a single position. Is one single vertex passed to the vertex shader? Q. If one vertex is passed to the vertex shader, then what gets sent to the fragment shader? Q. Can I pass the particle size to the shader? Perhaps with a custom parameter?

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?

    Read the article

  • How can I find a position between 4 vertices in a fragment shader?

    - by c4sh
    I'm creating a shader with SharpDX (DirectX 11 in C#) that takes a segment (2 points) from the output of a vertex shader and passes it to a geometry shader, which converts the line into a rectangle (4 points) and assigns each of the four corners a texture coordinate. After that I want a fragment shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, on the line that passes through the middle of the rectangle). The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. This happens because I have the texture coordinates interpolated, but I don't know how to use them to get the fragment I want, because the coordinate systems of a) the texture and b) the position of my fragment in screen space are not the same. Thanks a lot for any help.

    Read the article

  • Can't load vector font in Nuclex Framework

    - by ProgrammerAtWork
    I've been trying to get this to work for the last 2 hours and I can't see what I'm doing wrong... I've added Nuclex.TrueTypeImporter to the references in my content project, and I've added Nuclex.Fonts & Nuclex.Graphics to my main project. I've put Arial-24-Vector.spritefont & Lindsey.spritefont in the root of my content directory. _spriteFont = Content.Load<SpriteFont>("Lindsey"); // works _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes I get this error on the _testFont line: File contains Microsoft.Xna.Framework.Graphics.SpriteFont but trying to load as Nuclex.Fonts.VectorFont. So I've searched around, and by the looks of it it has something to do with the content importer & the content processor. For the content importer I have no new choices, so I leave it as it is, Sprite Font Description - XNA Framework; for the content processor I select Vector Font - Nuclex Framework. And then I try to run it. _testFont = Content.Load<VectorFont>("Arial-24-Vector"); // crashes again I get the following error: Error loading "Arial-24-Vector". It does work if I load a sprite, so it's not a pathing problem. I've checked the samples; they do work, but I think they also use a different version of the XNA framework, because in my version the "Content" class starts with a capital letter. I'm at a loss, so I ask here. Edit: Something super weird is going on. I've just added the following two lines to a method inside FreeTypeFontProcessor::FreeTypeFontProcessor( Microsoft::Xna::Framework::Content::Pipeline::Graphics::FontDescription ^fontDescription, FontHinter hinter, just to check if the code would even get there: System::Console::WriteLine("I AM HEEREEE"); System::Console::ReadLine(); So, I compile it, put it in my project, I run it and... it works! What the hell?? This is weird because I've downloaded the binaries, which didn't work, and I've compiled the binaries myself, which didn't work either, but now I make a small change to the code and it works? So, now I remove the two lines, compile it again, and it works again. Does anyone care to elaborate on what is going on? Probably some weird caching problem!

    Read the article

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) can be used to manage high-risk, privileged accounts. If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive). For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A and was only able to answer a few of the questions that came in. So I wanted to devote this blog to answering the outstanding questions. Here they are.
    1. Can OPAM track admin or DBA activity details during a password check-out session? Oracle Audit Vault monitors these activities, which can be correlated to check-out events.
    2. How would OPAM handle simultaneous requests? OPAM can be configured to allow for shared passwords. By default, sharing is turned off.
    3. How long are the passwords valid? Are the admins required to manually check them in? Password expiration can be configured and set in the password policy according to your corporate standards. You can specify whether you want forced check-in or not.
    4. Can 2-factor authentication be used with OPAM? Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager and Oracle Adaptive Access Manager.
    5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts? OPAM provides separation of duties by using admin roles to manage access to targets and privileged accounts and to control which operations admins can perform.
    6. How and where are the passwords stored in OPAM? OPAM uses the Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords. This is the same system used by Oracle Applications.
    7. Does OPAM support hierarchical/level-based privileges? Is the log maintained for independent review/audit? Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM-related events in a dedicated audit database.
    8. Does OPAM support emergency access in the case where approvers are not available until later? Yes. OPAM can be configured to release a password under a "break-glass" emergency scenario.
    9. Does OPAM work with AIX? Yes, supported UNIX versions are listed in the "certified component section" of the UNIX connector guide at: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0
    10. Does OPAM integrate with Sun Identity Manager? Yes. OPAM can be integrated with SIM using the REST APIs. OPAM has direct integration with Oracle Identity Manager 11gR2.
    11. Is OPAM available today and what does it cost? Yes. OPAM is available now. Ask your Oracle Account Manager for pricing.
    12. Can OPAM be used in SAP environments? Yes, supported SAP versions are listed in the "certified component section" of the SAP connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0
    13. How would this product integrate, if at all, with access to a particular field in the DB that needs additional security, such as SSNs? OPAM can work with DB Vault and DB Firewall to provide fine-grained access control for databases.
    14. Is VM supported? As a deployment platform, Oracle VM is supported. For further details about supported virtualization technologies, see Oracle Fusion Middleware Supported System Configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html
    15. Where did this (OPAM) technology come from? OPAM was built by Oracle Engineering.
    16. Are all Linux flavors supported? How about BSD? BSD is not supported. For supported UNIX versions, see the "certified component section" of the UNIX connector guide: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0
    17. What happens if users don't check passwords in at the end of a work task? In OPAM, a time frame can be defined for how long a password can be checked out. The security admin can force a check-in at any given time.
    18. Is MySQL supported? Yes, supported DB versions are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA
    19. What happens when OPAM crashes and you need to use the password? OPAM can be configured for high availability, but if required, OPAM data can be backed up and recovered. See the OPAM admin guide.
    20. Is OPAM a standalone product or does it leverage other components from IDM? OPAM can be run stand-alone, but it will also leverage other IDM components.

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to create and to have less code in the actual project file. So I started out making a basic XML document, something just to assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; I added one, and it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values; then, in the Texture tag of my XML document, I would set it to that instance too. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My class to load the weapon XML: public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.
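
    For what it's worth, the XML importer matches child element names to the public fields and properties of the runtime type named in the Asset Type attribute, and it can only build plain data, so a live Texture2D can't come out of the XML directly. A sketch of a Weapon class shaped to match the file above (member names assumed from the XML), storing the texture as an asset-name string to resolve afterwards:

      namespace ConversationEngine
      {
          public class Weapon
          {
              public int weaponStrength;
              public int damageModifiers;
              public int speed;
              public int magicDefense;
              public string description;
              public string identifier;
              public string weaponTexture; // an asset name such as "Sword"
          }
      }

    After content.Load<Weapon>(...), a follow-up content.Load<Texture2D>(weapon.weaponTexture) fetches the texture itself.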

    Read the article

  • What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

    - by sebf
    I am trying to read the contents of the depth buffer into main memory so that my CPU-side code can do Some Stuff™ with it. I am attempting to do this by creating a staging resource, readable by the CPU, into which I copy the contents of the depth buffer before reading it. However, I keep encountering errors, because of, I believe, incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have it interpreted in the view as another, but I cannot get it to work. What are the valid formats for a resource to be used as the depth buffer? Which of these are also valid for a CPU-accessible staging resource?
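
    From memory: the four depth-stencil view formats are DXGI_FORMAT_D16_UNORM, D24_UNORM_S8_UINT, D32_FLOAT and D32_FLOAT_S8X24_UINT; to also read the resource it must be created with the matching typeless format (R16_TYPELESS, R24G8_TYPELESS, R32_TYPELESS, R32G8X24_TYPELESS) and cast per view, and a staging resource takes no bind flags at all, so you copy into it rather than mapping the depth texture itself. A hedged SharpDX-style sketch (field and enum names from memory; verify against the docs):

      using SharpDX.Direct3D11;
      using SharpDX.DXGI;

      int width = 1280, height = 720; // your render-target size

      // Depth texture: typeless resource, viewed as D32_Float by the DSV
      // and as R32_Float by an SRV.
      var depthDesc = new Texture2DDescription
      {
          Width = width, Height = height, MipLevels = 1, ArraySize = 1,
          Format = Format.R32_Typeless,
          SampleDescription = new SampleDescription(1, 0),
          Usage = ResourceUsage.Default,
          BindFlags = BindFlags.DepthStencil | BindFlags.ShaderResource
      };

      // CPU-readable staging copy: same size and format family, no bind flags.
      var stagingDesc = depthDesc;           // struct copy
      stagingDesc.Format = Format.R32_Float; // same typeless group, so CopyResource is legal
      stagingDesc.BindFlags = BindFlags.None;
      stagingDesc.Usage = ResourceUsage.Staging;
      stagingDesc.CpuAccessFlags = CpuAccessFlags.Read;

      // After rendering:
      // context.CopyResource(depthTexture, stagingTexture);
      // var box = context.MapSubresource(stagingTexture, 0, MapMode.Read, MapFlags.None);
      // ... read box.DataPointer row by row using box.RowPitch ...
      // context.UnmapSubresource(stagingTexture, 0);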

    Read the article

  • My raycaster is putting out strange results, how do I fix it?

    - by JamesK89
    I'm working on a raycaster in ActionScript 3.0 for the fun of it, and as a learning experience. I've got it up and running and its displaying me output as expected however I'm getting this strange bug where rays go through corners of blocks and the edges of blocks appear through walls. Maybe somebody with more experience can point out what I'm doing wrong or maybe a fresh pair of eyes can spot a tiny bug I haven't noticed. Thank you so much for your help! Screenshots: http://i55.tinypic.com/25koebm.jpg http://i51.tinypic.com/zx5jq9.jpg Relevant code: function drawScene() { rays.graphics.clear(); rays.graphics.lineStyle(1, rgba(0x00,0x66,0x00)); var halfFov = (player.fov/2); var numRays:int = ( stage.stageWidth / COLUMN_SIZE ); var prjDist = ( stage.stageWidth / 2 ) / Math.tan(toRad( halfFov )); var angStep = ( player.fov / numRays ); for( var i:int = 0; i < numRays; i++ ) { var rAng = ( ( player.angle - halfFov ) + ( angStep * i ) ) % 360; if( rAng < 0 ) rAng += 360; var ray:Object = castRay(player.position, rAng); drawRaySlice(i*COLUMN_SIZE, prjDist, player.angle, ray); } } function drawRaySlice(sx:int, prjDist, angle, ray:Object) { if( ray.distance >= MAX_DIST ) return; var height:int = int(( TILE_SIZE / (ray.distance * Math.cos(toRad(angle-ray.angle))) ) * prjDist); if( !height ) return; var yTop = int(( stage.stageHeight / 2 ) - ( height / 2 )); if( yTop < 0 ) yTop = 0; var yBot = int(( stage.stageHeight / 2 ) + ( height / 2 )); if( yBot > stage.stageHeight ) yBot = stage.stageHeight; rays.graphics.moveTo( (ray.origin.x / TILE_SIZE) * MINI_SIZE, (ray.origin.y / TILE_SIZE) * MINI_SIZE ); rays.graphics.lineTo( (ray.hit.x / TILE_SIZE) * MINI_SIZE, (ray.hit.y / TILE_SIZE) * MINI_SIZE ); for( var x:int = 0; x < COLUMN_SIZE; x++ ) { for( var y:int = yTop; y < yBot; y++ ) { buffer.setPixel(sx+x, y, clrTable[ray.tile-1] >> ( ray.horz ? 1 : 0 )); } } } function castRay(origin:Point, angle):Object { // Return values var rTexel = 0; var rHorz = false; var rTile = 0; var rDist = MAX_DIST + 1; var rMap:Point = new Point(); var rHit:Point = new Point(); // Ray angle and slope var ra = toRad(angle) % ANGLE_360; if( ra < ANGLE_0 ) ra += ANGLE_360; var rs = Math.tan(ra); var rUp = ( ra > ANGLE_0 && ra < ANGLE_180 ); var rRight = ( ra < ANGLE_90 || ra > ANGLE_270 ); // Ray position var rx = 0; var ry = 0; // Ray step values var xa = 0; var ya = 0; // Ray position, in map coordinates var mx:int = 0; var my:int = 0; var mt:int = 0; // Distance var dx = 0; var dy = 0; var ds = MAX_DIST + 1; // Horizontal intersection if( ra != ANGLE_180 && ra != ANGLE_0 && ra != ANGLE_360 ) { ya = ( rUp ? TILE_SIZE : -TILE_SIZE ); xa = ya / rs; ry = int( origin.y / TILE_SIZE ) * ( TILE_SIZE ) + ( rUp ? TILE_SIZE : -1 ); rx = origin.x + ( ry - origin.y ) / rs; mx = 0; my = 0; while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y ) { mx = int( rx / TILE_SIZE ); my = int( ry / TILE_SIZE ); mt = getMapTile(mx,my); if( mt > 0 && mt < 9 ) { dx = rx - origin.x; dy = ry - origin.y; ds = ( dx * dx ) + ( dy * dy ); if( rDist >= MAX_DIST || ds < rDist ) { rDist = ds; rTile = mt; rMap.x = mx; rMap.y = my; rHit.x = rx; rHit.y = ry; rHorz = true; rTexel = int(rx % TILE_SIZE) } break; } rx += xa; ry += ya; } } // Vertical intersection if( ra != ANGLE_90 && ra != ANGLE_270 ) { xa = ( rRight ? TILE_SIZE : -TILE_SIZE ); ya = xa * rs; rx = int( origin.x / TILE_SIZE ) * ( TILE_SIZE ) + ( rRight ? 
TILE_SIZE : -1 ); ry = origin.y + ( rx - origin.x ) * rs; mx = 0; my = 0; while( mx >= 0 && my >= 0 && mx < world.size.x && my < world.size.y ) { mx = int( rx / TILE_SIZE ); my = int( ry / TILE_SIZE ); mt = getMapTile(mx,my); if( mt > 0 && mt < 9 ) { dx = rx - origin.x; dy = ry - origin.y; ds = ( dx * dx ) + ( dy * dy ); if( rDist >= MAX_DIST || ds < rDist ) { rDist = ds; rTile = mt; rMap.x = mx; rMap.y = my; rHit.x = rx; rHit.y = ry; rHorz = false; rTexel = int(ry % TILE_SIZE); } break; } rx += xa; ry += ya; } } return { angle: angle, distance: Math.sqrt(rDist), hit: rHit, map: rMap, tile: rTile, horz: rHorz, origin: origin, texel: rTexel }; }

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system, (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance, (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing the component / subcomponent classification of the JDK JIRA project along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme. The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect. client-libs (Predominantly discussed on 2d-dev and awt-dev and swing-dev.) 2d demo java.awt java.awt:i18n java.beans (See beans-dev.) javax.accessibility javax.imageio javax.sound (See sound-dev.) javax.swing core-libs (See core-libs-dev.) java.io java.io:serialization java.lang java.lang.invoke java.lang:class_loading java.lang:reflect java.math java.net java.nio (Discussed on nio-dev.) java.nio.charsets java.rmi java.sql java.sql:bridge java.text java.util java.util.concurrent java.util.jar java.util.logging java.util.regex java.util:collections java.util:i18n javax.annotation.processing javax.lang.model javax.naming (JNDI) javax.script javax.script:javascript javax.sql org.openjdk.jigsaw (See jigsaw-dev.) security-libs (See security-dev.) java.security javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC) javax.crypto:pkcs11 (JCE: PKCS11 only) javax.net.ssl (JSSE, includes javax.security.cert) javax.security javax.smartcardio javax.xml.crypto org.ietf.jgss org.ietf.jgss:krb5 other-libs corba corba:idl corba:orb corba:rmi-iiop javadb other (When no other subcomponent is more appropriate; use judiciously.) Most of the subcomponents in the xml component are related to jaxp. xml jax-ws jaxb javax.xml.parsers (JAXP) javax.xml.stream (JAXP) javax.xml.transform (JAXP) javax.xml.validation (JAXP) javax.xml.xpath (JAXP) jaxp (JAXP) org.w3c.dom (JAXP) org.xml.sax (JAXP) For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine. hotspot (See hotspot-dev.) build compiler (See hotspot-compiler-dev.) gc (garbage collection, see hotspot-gc-dev.) jfr (Java Flight Recorder) jni (Java Native Interface) jvmti (JVM Tool Interface) mvm (Multi-Tasking Virtual Machine) runtime (See hotspot-runtime-dev.) svc (Servicability) test core-svc (See serviceability-dev.) 
debugger java.lang.instrument java.lang.management javax.management tools The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs. vm-legacy jit (Sun Exact VM) jit_symantec (Symantec VM, before Exact VM) jvmdi (JVM Debug Interface ) jvmpi (JVM Profiler Interface ) runtime (Exact VM Runtime) Notable command line tools in the $JDK/bin directory have corresponding subcomponents. tools appletviewer apt (See compiler-dev.) hprof jar javac (See compiler-dev.) javadoc(tool) (See compiler-dev.) javah (See compiler-dev.) javap (See compiler-dev.) jconsole launcher updaters (Timezone updaters, etc.) visualvm Some aspects of JDK infrastructure directly affect JDK Hg repositories, but other do not. infrastructure build (See build-dev and build-infra-dev.) licensing (Covers updates to the third party readme, licenses, and similar files.) release_eng (Release engineering) staging (Staging of web pages related to JDK releases.) The specification subcomponent encompasses the formal language and virtual machine specifications. specification language (The Java Language Specification) vm (The Java Virtual Machine Specification) The code for the deploy and install areas is not currently included in OpenJDK. deploy deployment_toolkit plugin webstart install auto_update install servicetags In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology. docs doclet guides hotspot release_notes tools tutorial embedded build hotspot libraries globalization locale-data translation performance hotspot libraries The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Shadow Mapping and Transparent Quads

    - by CiscoIPPhone
    Shadow mapping uses the depth buffer to calculate where shadows should be drawn. My problem is that I'd like some semi-transparent textured quads to cast shadows - for example, billboarded trees. As the depth value will be set across the whole quad, not just its visible parts, it will cast a solid quad-shaped shadow, which is not what I want. How can I make my transparent quads cast correct shadows using shadow mapping?

    Read the article

  • Sphere-Sphere intersection and Circle-Sphere intersection

    - by cagirici
    I have code for circle-circle intersection, but I need to extend it to 3-D. How do I calculate: The radius and center of the intersection circle of two spheres? The points of intersection of a sphere and a circle? Given two spheres (sc0, sr0) and (sc1, sr1), I need to calculate the circle of intersection, whose center is ci and whose radius is ri. Moreover, given a sphere (sc0, sr0) and a circle (cc0, cr0), I need to calculate the two intersection points (pi0, pi1). I have checked this link and this link, but I could not understand the logic behind them or how to code them. I tried the ProGAL library for sphere-sphere-sphere intersection, but the resulting coordinates are rounded. I need precise results.
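
    For the first part there is a closed form. With d = |c1 - c0|, the intersection circle lies in the plane perpendicular to the center axis at distance h = (d^2 + r0^2 - r1^2) / (2d) from c0, and its radius is sqrt(r0^2 - h^2). A sketch (XNA-style Vector3):

      using System;
      using Microsoft.Xna.Framework;

      static bool IntersectSpheres(Vector3 c0, float r0, Vector3 c1, float r1,
                                   out Vector3 center, out float radius)
      {
          Vector3 axis = c1 - c0;
          float d = axis.Length();
          center = Vector3.Zero;
          radius = 0f;
          if (d == 0f || d > r0 + r1 || d < Math.Abs(r0 - r1))
              return false; // concentric, separate, or one sphere inside the other
          float h = (d * d + r0 * r0 - r1 * r1) / (2f * d);
          center = c0 + axis * (h / d);
          radius = (float)Math.Sqrt(Math.Max(0f, r0 * r0 - h * h));
          return true;
      }

    The circle-sphere case reduces to code you already have: intersect the sphere with the circle's plane (the same distance-and-radius construction, against the plane) to get a second coplanar circle, then run your 2-D circle-circle routine inside that plane.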

    Read the article
