Search Results

Search found 10707 results on 429 pages for 'scroll position'.

Page 283/429 | < Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >

  • Top tweets SOA Partner Community – November 2012

    - by JuergenKress
    Dear SOA partner community member, Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM with your customers you can find many use case ideas in the paper BPM 11g Patterns and industry specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available; they are an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentations at the OTN website. The Oracle SOA proactive support team is running an introductory series of blog posts about SOA and JMS. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge this month again. Please feel free to send us the link to your blog post via twitter @soacommunity: Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel; Fault Handling Slides and Q&A by Vennester; Installing Oracle Event Processing 11g by Antoney Reynolds; Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers; Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile by Michelle Kimihira; A brief note for customers running SOA Suite on AIX platforms by Christian; ACM - Adaptive Case Management by Peter Paul; BPM 11g - Dynamic Task Assignment with Multi-level Organization Units by Mark Foster; Oracle Real User Experience Insight: Oracle's Approach to User Experience. Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Java SE 6 Update 23 is released!

    - by [email protected]
    Java SE 6 Update 23 includes the following key changes: Java VisualVM 1.3.1 introduces several features and enhancements, including Java version and vendor information added to the application Overview view, and a build based on NetBeans Platform and profiler 6.9.1. Swing JMenuItem corrections for right-to-left languages: non-default alignment and text orientation for menu items in Swing have been fixed, the position of the icon and the text (where the text used to overlap the icon in a menu item) was corrected, and all platform Look and Feel configurations now handle menu items in right-to-left language situations. An updated HotSpot 19 JVM engine brings improvements to overall performance and reliability. Additional language support in SuSE Linux Enterprise Server 10 and 11 covers the Chinese (Simplified), Chinese (Traditional), Japanese, and Korean locales. For supported system configurations, check http://java.sun.com/javase/6/webnotes/install/system-configurations.html. See the Release Notes for additional details.

    Read the article

  • Complexity of defense AI

    - by Fredrik Johansson
    I have a non-released game, and currently it's only possible to play with another human being. As the game rules are made up by me, I think it would be great if new players could learn basic game play by playing against an AI opponent. I mean, it's not like Tennis, where the majority knows at least the fundamental rules. On the other hand, I'm a bit concerned that this AI implementation could be quite complex. I hope you can help me with a complexity estimate. I've tried to summarize the gameplay below. Is this defense AI very hard to do? Basic Defense Game Play: Player Defender can move within his land, i.e. inside a random, non-convex polygon. This land will also contain obstacles modeled as polygons that Defender has to move around. Player Attacker also has a land, modeled as another such polygon. Assume that Defender shall defend against Attacker. Attacker will then throw a thingy towards Defender's land. To be rewarded, Attacker wants to hit Defender's land, and Defender will want to strike the thingy away from his land before it stops, to prevent Attacker from scoring. To feint Defender, Attacker might run around within his land before the throw, and based on these attacker movements Defender shall then continuously move to the best defense position within his land.
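
    Much of the geometry above is standard: keeping Defender inside the non-convex land polygon (and outside the obstacle polygons) comes down to a point-in-polygon test, which works for arbitrary simple polygons. A minimal even-odd ray-casting sketch in Java (the vertex arrays are assumed to describe one simple polygon; everything here is illustrative, not code from the game):

        // Even-odd (ray casting) point-in-polygon test; valid for non-convex simple polygons.
        final class PolygonTest {
            static boolean contains(double[] xs, double[] ys, double px, double py) {
                boolean inside = false;
                for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
                    // Does edge (j -> i) straddle the horizontal line through py,
                    // and does it cross that line to the right of (px, py)?
                    if ((ys[i] > py) != (ys[j] > py)) {
                        double crossX = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                        if (px < crossX) inside = !inside;
                    }
                }
                return inside;
            }
        }

    The harder, more open-ended part is the decision layer (predicting the throw from Attacker's movements and picking a defense position), which can start out as a few simple heuristics and grow from there.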

    Read the article

  • Sending changes to a terrain heightmap over UDP

    - by Floomi
    This is a more conceptual, thinking-out-loud question than a technical one. I have a 3D heightmapped terrain as part of a multiplayer RTS that I would like to terraform over a network. The terraforming will be done by units within the gameworld; the player will paint a "target heightmap" that they'd like the current terrain to resemble and units will deform towards that on their own (a la Perimeter). Given my terrain is 257x257 vertices, the naive approach of sending heights when they change will flood the bandwidth very quickly - updating a quarter of the terrain every second will hit ~66kB/s. This is clearly way too much. My next thought was to move to a brush-based system, where you send e.g. the centre of a circle, its radius, and some function defining the influence of the brush from the centre going outwards. But even with reliable UDP the "start" and "stop" messages could still be delayed. I guess I could compare timestamps and compensate for this, although it'd likely mean that clients would deform verts too much on their local simulations and then have to smooth them back to the correct heights. I could also send absolute vert heights in the "start" and "stop" messages to guarantee correct data on the clients. Alternatively I could treat brushes in a similar way to units, and do the standard position + velocity + client-side prediction jazz on them, with the added stipulation that they deform terrain within a certain radius around them. The server could then intermittently do a pass and send (a subset of) recently updated verts to clients as and when there's bandwidth to spare. Any other suggestions, or indications that I'm on the right (or wrong!) track with any of these ideas would be greatly appreciated.
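
    If the brush route is taken, the per-edit payload becomes tiny, which is exactly what makes it attractive over raw heights. A sketch of what one brush message might look like on the wire, using plain java.net/java.nio (the field layout is entirely hypothetical, not taken from any existing protocol):

        import java.io.IOException;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import java.nio.ByteBuffer;

        // Hypothetical brush-deformation message: sequence number, brush centre, radius,
        // strength - a handful of bytes per edit instead of thousands of vertex heights.
        final class BrushMessage {
            static void send(DatagramSocket socket, InetAddress host, int port,
                             int sequence, float centerX, float centerY,
                             float radius, float strength) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4 + 4 * 4);
                buf.putInt(sequence);    // lets the receiver drop stale or duplicated edits
                buf.putFloat(centerX);
                buf.putFloat(centerY);
                buf.putFloat(radius);
                buf.putFloat(strength);  // signed: deform up or down toward the target map
                socket.send(new DatagramPacket(buf.array(), buf.position(), host, port));
            }
        }

    The server-authoritative pass described at the end fits on top of this: clients apply brushes predictively, and an occasional batch of authoritative vertex heights corrects any drift.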

    Read the article

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine): "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools. Responsibilities include: managing a team of senior and junior developers; designing and coding high-quality software..." For the full background story, requirements, qualifications and responsibilities please visit the official page. Comments about this post welcome at the original blog.

    Read the article

  • Oracle SOA Governance EMEA Workshop for Partners & System Integrators: Nov 5-7th | Madrid, Spain

    - by Lionel Dubreuil
    The EMEA Fusion Middleware Product Management team is delighted to announce an exciting and much-awaited workshop on our market-leading SOA Governance offering. The Oracle SOA Governance solution is Oracle Fusion Middleware's strategic approach to governing SOA. Whether just embarking on an SOA program, or expanding from a project or pilot to broader deployment, the Oracle SOA Governance solution closes the loop on measuring SOA success from project inception through to realization, and provides proof of the ROI of SOA. Would your prospects and customers like to: align their SOA vision and execution, improve decision making, effectively manage business and technology change, enable control, foster enterprise-wide collaboration, reduce development costs, track their SOA investments and returns, and demonstrate the business value and ROI of SOA? This FREE hands-on workshop is dedicated to EMEA Partners & System Integrators (SIs). It'll be delivered by Oracle HQ Product Management and will primarily focus on: SOA Governance as a strategy and methodology; hands-on work with Oracle Enterprise Repository (OER) and Oracle Service Registry (OSR); when, how and to whom to position our SOA Governance offerings; our SOA Governance Rapid Start Service; and hands-on sessions for the most popular customer use cases. Seats are limited, book now - you cannot afford to miss this training! If you're interested please contact Yogesh Sontakke: [email protected].

    Read the article

  • DB Schema for ACL involving 3 subdomains

    - by blacktie24
    Hi, I am trying to design a database schema for a web app which has 3 subdomains: a) internal employees b) clients c) contractors. The users will be able to communicate with each other to some degree, and there may be some resources that overlap between them. Any thoughts about this schema? Really appreciate your time and thoughts on this. Cheers!
        -- Table structure for table locations
        CREATE TABLE IF NOT EXISTS locations ( id bigint(20) NOT NULL, name varchar(250) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
        -- Table structure for table privileges
        CREATE TABLE IF NOT EXISTS privileges ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, resource_id int(11) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10 ;
        -- Table structure for table resources
        CREATE TABLE IF NOT EXISTS resources ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, user_type enum('internal','client','expert') NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
        -- Table structure for table roles
        CREATE TABLE IF NOT EXISTS roles ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, type enum('position','department') NOT NULL, parent_id int(11) DEFAULT NULL, user_type enum('internal','client','expert') NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
        -- Table structure for table role_perms
        CREATE TABLE IF NOT EXISTS role_perms ( id int(11) NOT NULL AUTO_INCREMENT, role_id int(11) NOT NULL, privilege_id int(11) NOT NULL, mode varchar(250) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
        -- Table structure for table users
        CREATE TABLE IF NOT EXISTS users ( id int(10) unsigned NOT NULL AUTO_INCREMENT, email varchar(255) NOT NULL, password varchar(255) NOT NULL, salt varchar(255) NOT NULL, type enum('internal','client','expert') NOT NULL, first_name varchar(255) NOT NULL, last_name varchar(255) NOT NULL, location_id int(11) NOT NULL, phone varchar(255) NOT NULL, status enum('active','inactive') NOT NULL DEFAULT 'active', PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;
        -- Table structure for table user_perms
        CREATE TABLE IF NOT EXISTS user_perms ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, privilege_id int(11) NOT NULL, mode varchar(250) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
        -- Table structure for table user_roles
        CREATE TABLE IF NOT EXISTS user_roles ( id int(11) NOT NULL, user_id int(11) NOT NULL, role_id int(11) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

  • Move a 2D square on y axis on android GLES2

    - by Dan
    I am trying to create a simple game for Android. To start, I am trying to make the square move down the y axis, but the way I am doing it doesn't move the square at all, and I can't find any tutorials for GLES20. The onDrawFrame function in the render class updates the user's position based on acceleration due to gravity, gets the transform matrix from the user class (which is used to move the square down), and then the program draws it. All that happens is that the square is drawn; no motion happens.
        public void onDrawFrame(GL10 gl) {
            user.update(0.0, phy.AccelerationDewToGravity);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // Re draws black background
            GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 12, user.SquareVB); //triangleVB);
            GLES20.glEnableVertexAttribArray(maPositionHandle);
            GLES20.glUniformMatrix4fv(maPositionHandle, 1, false, user.getTransformMatrix(), 0);
            GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        }
    The update function in the player class is
        public void update(double vh, double vv) {
            Vh += vh; // Increase horizontal velocity
            Vv += vv; // Increase vertical velocity
            //Matrix.translateM(mMMatrix, 0, (int)Vh, (int)Vv, 0);
            Matrix.translateM(mMMatrix, 0, mMMatrix, 0, (float)Vh, (float)Vv, 0);
        }
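
    One thing worth double-checking (the shader source is not shown, so this is an assumption): glUniformMatrix4fv is being given maPositionHandle, which is the vertex-attribute handle, while the transform matrix normally goes to its own uniform location obtained with glGetUniformLocation. A minimal sketch of that pattern, with an assumed uniform name:

        import android.opengl.GLES20;
        import android.opengl.Matrix;

        // Sketch only: upload the object's transform through a dedicated uniform handle
        // rather than the vertex-attribute handle. "uMVPMatrix" is an assumed name -
        // adjust it to whatever the vertex shader actually declares.
        final class MvpUpload {
            static void uploadMvp(int program, float[] viewProj, float[] model) {
                int mvpHandle = GLES20.glGetUniformLocation(program, "uMVPMatrix");
                float[] mvp = new float[16];
                Matrix.multiplyMM(mvp, 0, viewProj, 0, model, 0);       // mvp = viewProj * model
                GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvp, 0); // uniform, not attribute
            }
        }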

    Read the article

  • XNA: How to make the Vaus Spacecraft move left and right on directional keys pressed?

    - by Will Marcouiller
    I'm currently learning XNA per the suggestion from this question's accepted answer: Where to start writing games, any tutorials or the like? I have since installed everything to get ready to work with XNA Game Studio 4.0. General objective: writing an Arkanoid-like game. I want to make my ship move when I press either the left or right key. Code sample:
        protected override void Update(GameTime gameTime) {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            // TODO: Add your update logic here
        #if WINDOWS
            if (Keyboard.GetState().IsKeyDown(Keys.Escape))
                this.Exit();
            else {
                if (Keyboard.GetState().IsKeyDown(Keys.Left))
                    MoveLeft(gameTime);
            }
        #endif
            // Move the sprite around.
            BounceEnergyBall(gameTime);
            base.Update(gameTime);
        }

        void MoveLeft(GameTime gameTime) {
            // I'm not sure how to play with the Vector2 object and its position here!...
            _vausSpacecraftPos /= _vausSpacecraftSpeed.X; // This line makes the spacecraft move diagonal-top-left.
        }
    Question: What formula shall I use, or what algorithm shall I consider, to make my spaceship move left and right properly as expected? Thanks for your thoughts! Any clue will be appreciated. I am on my learning curve, though I have years of development behind me (already)!
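
    The usual pattern, whatever the engine, is to scale a per-second speed by the frame's elapsed time and change only the X component of the position, rather than dividing the whole position vector. A small engine-agnostic Java sketch of the idea (all names and values are illustrative, not XNA API):

        // Frame-rate-independent horizontal movement: position += speed * elapsed time.
        final class Ship {
            float x = 400f;             // horizontal position in pixels (example value)
            final float speed = 300f;   // movement speed in pixels per second (example value)

            void update(double elapsedSeconds, boolean leftHeld, boolean rightHeld) {
                if (leftHeld)  x -= speed * elapsedSeconds;
                if (rightHeld) x += speed * elapsedSeconds;
                x = Math.max(0f, Math.min(800f, x));   // clamp to an 800-pixel-wide playfield
            }
        }

    In XNA terms the elapsed time would come from the gameTime parameter already passed into MoveLeft, and only the X component of the position Vector2 would change.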

    Read the article

  • How do I get Graphics drivers / bluetooth / card reader working on an Acer Aspire V3-571G?

    - by Adam
    A couple of days ago I bought an Acer Aspire V3-571G laptop without a system installed on it. The only thing on it was Linux Linpus. I created a bootable CD with Ubuntu 12.04 64-bit - I read that my processor is 64-bit and that it might be a good configuration for my gear (I'm not especially fluent with all the computer stuff, still trying to learn) - and replaced Linpus with Ubuntu. Everything seemed to work fine, but a few problems have come my way. My bluetooth doesn't work. It seems to be switched on, but when I check my system settings the button is actually off, and I can't drag it permanently to the 'on' position. I tried a couple of commands I found on the net, none of them helped, and there was no word whatsoever in my BIOS settings about enabling bluetooth. My card reader has some serious problems with copying more than one file at a time. I tried to put some music on my phone through a MicroSD card adapter (because my bluetooth doesn't work) and it got stuck every single time I copied an album onto it. I'm not sure if all my drivers were properly installed, so I checked in the terminal what it could tell me about my graphics. I typed sudo lshw -c display and what I got was:
        *-display UNCLAIMED
            description: VGA compatible controller
            product: NVIDIA Corporation
            vendor: NVIDIA Corporation
            physical id: 0
            bus info: pci@0000:01:00.0
            version: a1
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress vga_controller cap_list
            configuration: latency=0
            resources: memory:b2000000-b2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:2000(size=128)
        *-display
            description: VGA compatible controller
            product: Ivy Bridge Graphics Controller
            vendor: Intel Corporation
            physical id: 2
            bus info: pci@0000:00:02.0
            version: 09
            width: 64 bits
            clock: 33MHz
            capabilities: msi pm vga_controller bus_master cap_list rom
            configuration: driver=i915 latency=0
            resources: irq:44 memory:b3000000-b33fffff memory:c0000000-cfffffff ioport:3000(size=64)
    As I said, I'm no expert and not generally English-speaking, but it doesn't seem right. I've got an NVIDIA GeForce GT 640M.

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation over to Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked it all: bones, IK (and the animation, by selecting everything in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with animation and bake checked. In Unity I imported it, and in the preview I am able to see the animation. When I load the model into the scene and play (after assigning the controller), I am able to see the animation too. But now when I try to script it and control the animation, nothing happens. Even as a test, I tried the following under the Update method:
        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");
    Neither message is ever logged. My animation is called "bend", so just to try I did the following, and nothing happens:
        animation.Play("bend");
    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample:
        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;
        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;
        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();
    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checked on MSDN, these matrices are 16-byte aligned and optimised for the Intel Pentium 4. So, as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:
        cbuffer cbNeverChanges { matrix View; };
        cbuffer cbChangeOnResize { matrix Projection; };
        cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; };
    Then how would I set these buffers at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers are in the same positions in the array as the order in which they appear in the shader?

    Read the article

  • Shooter in iOS and a visible Aim line before shooting

    - by London2423
    I have two questions. I am trying to develop a game for iOS, but I built it first on my computer so I could test it there. I was able to do most of it for PC, but I am having a very hard time with the iOS port. The problem I have is that I don't know how to shoot in iOS - to be more specific, how to use the line renderer in iOS. This is the script I use on my computer:
        using UnityEngine;
        using System.Collections;

        public class NewBehaviourScript : MonoBehaviour {
            LineRenderer line;

            void Start () {
                line = gameObject.GetComponent<LineRenderer>();
                line.enabled = false;
            }

            void Update () {
                if (Input.GetButtonDown ("Fire1")) {
                    StopCoroutine ("FireLaser");
                    StartCoroutine ("FireLaser");
                }
            }

            IEnumerator FireLaser () {
                line.enabled = true;
                while (Input.GetButton("Fire1")) {
                    Ray ray = new Ray(transform.position, transform.forward);
                    RaycastHit hit;
                    line.SetPosition (0, ray.origin);
                    if (Physics.Raycast (ray, out hit, 100)) {
                        line.SetPosition(1, hit.point);
                        if (hit.rigidbody) {
                            hit.rigidbody.AddForceAtPosition(transform.forward * 5, hit.point);
                        }
                    }
                    else
                        line.SetPosition (1, ray.GetPoint (100));
                    yield return null;
                }
                line.enabled = false;
            }
        }
    Which part do I have to change for iOS? I already handled the touch GUI events on iOS, so my player moves around on the iPhone build from Xcode, but I need some help with the shooting part. The second part of the question is: where do I have to insert or change the script so that the player first aims, I actually see the aim line, and then shoots? Right now the player can only shoot; it cannot aim at the game object, see the line coming out of the gun aiming at the object, and then shoot. How can I do that? Everyone tells me "line renderer", but that's what I did. Thank you.

    Read the article

  • why the difference in google search result using script for search and using a browser for search

    - by Jayapal Chandran
    I wrote code to find the position in the Google search results for a search keyword. I also did the same with a browser, and the two results are different. Let me explain in detail. I have a website and I wanted to know on which page number my domain appears for a search string. For example, when I search for 'code snippets' I want to find on which page of the Google results a certain domain appears. I wrote PHP code to search page by page, starting from page 1 up to page n. I did the same task using a browser. The script returned page 4, whereas when browsing I can see the domain appearing on the second page. Here is the search string I use in my code: /search?hl=en&output=search&sclient=psy-ab&q=code+snippets&start=0&btnG= and for each request I change start=0 to start=1, start=2, etc... and in the response I check whether my domain appears in it. Any idea why the search results differ?

    Read the article

  • How do you motivate peers to become better developers?

    - by Brian Rasmussen
    In my experience there seem to be two kinds of developers (if we simplify matters a great deal, of course). On the one hand we have the developers who may do a perfectly acceptable job, but who do not really care about the computer science part of their craft. They usually know few languages/technologies and are happy to let things stay that way. For whatever reason, they don't try to improve their computer science skills unless this is required in their current position. On the other hand, we have the geeks, or the pragmatic programmers if you subscribe to that idea. They play around with other languages and technologies and usually have knowledge about several topics outside the technical domain of their current job. I would like to see more developers who are enthusiastic about software development. If you share this point of view, what do you do to push your peers in that direction? Edit: follow-up question inspired by one of the answers: as non-managers, should we really care about this? And why/why not?

    Read the article

  • Synaptics drivers are not loading on Kubuntu 13.10 on Dell Vostro 2420

    - by Alok Singh Mahor
    I freshly installed Kubuntu 13.10, 32-bit version, on a newly purchased Dell Vostro 2420. Everything is working fine except the scrolling and multitouch features of the touchpad. I am able to change the position of the cursor using the touchpad and able to tap (single click and double click), but scrolling is not working. I tried to find a solution by searching on Google but could not find a proper way to load the synaptics driver. I am listing some details:
    Laptop: Dell Vostro 2420
    Linux kernel version and distribution: 3.11.0-12-generic, Kubuntu 13.10
    Output of xinput list is:
        ⎡ Virtual core pointer                    id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer          id=4    [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                  id=12   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                   id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard         id=5    [slave  keyboard (3)]
            ↳ Power Button                        id=6    [slave  keyboard (3)]
            ↳ Video Bus                           id=7    [slave  keyboard (3)]
            ↳ Power Button                        id=8    [slave  keyboard (3)]
            ↳ Sleep Button                        id=9    [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD         id=10   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard        id=11   [slave  keyboard (3)]
            ↳ Dell WMI hotkeys                    id=13   [slave  keyboard (3)]
    Output of synclient -l is: Couldn't find synaptics properties. No synaptics driver loaded?
    The output of lshw is at http://paste.ubuntu.com/6645687/. The X server log and dmesg have no trace of synaptics. Kindly tell me how to troubleshoot this problem.

    Read the article

  • Smooth Camera Zoom Factor Change

    - by Siddharth
    I have a gameplay scene in which the user can zoom in and out, for which I use a SmoothCamera in the following manner:
        public static final int CAMERA_WIDTH = 1024;
        public static final int CAMERA_HEIGHT = 600;
        public static final float MAXIMUM_VELOCITY_X = 400f;
        public static final float MAXIMUM_VELOCITY_Y = 400f;
        public static final float ZOOM_FACTOR_CHANGE = 1f;

        mSmoothCamera = new SmoothCamera(0, 0, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT, Constants.MAXIMUM_VELOCITY_X, Constants.MAXIMUM_VELOCITY_Y, Constants.ZOOM_FACTOR_CHANGE);
        mSmoothCamera.setBounds(0f, 0f, Constants.CAMERA_WIDTH, Constants.CAMERA_HEIGHT);
    But this creates a problem for me. When the user zooms in and then leaves the gameplay scene, the other scenes' behaviour does not look good. I already set the zoom factor back to 1 for this purpose, but now the camera translation is visible in the other scene, because the scene switch is so quick that the player can easily see the camera translating, which I don't want to show. After the camera repositions, everything works perfectly, but how do I put the camera back in its proper position immediately? For example, my loading text moves from bottom to top or vice versa based on the camera movement. If you want any more detail, I can give it to you.
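
    One option, assuming the AndEngine build in use exposes SmoothCamera's direct setters (worth verifying against the actual version), is to snap the camera back to its home zoom and centre at the moment the gameplay scene is left, so the smooth interpolation never plays out in front of the next scene. A rough sketch:

        import org.andengine.engine.camera.SmoothCamera;

        // Sketch only: snap the camera home when leaving the gameplay scene so the next
        // scene never sees the smooth pan/zoom. setZoomFactorDirect/setCenterDirect are
        // assumed to bypass the smooth interpolation - verify they exist in the AndEngine version used.
        final class CameraReset {
            static void snapHome(SmoothCamera camera) {
                camera.setZoomFactorDirect(1.0f);
                camera.setCenterDirect(1024 / 2f, 600 / 2f);   // CAMERA_WIDTH / 2, CAMERA_HEIGHT / 2 from the constants above
            }
        }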

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst-case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
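
    For reference, run-length encoding one row of tile ids is only a few lines in each direction, and re-encoding a single 128-entry row on every tile edit is cheap compared with everything else a map editor does per click. A minimal Java sketch (plain int ids rather than the HashMap layout above):

        import java.util.ArrayList;
        import java.util.List;

        // Minimal run-length coding of one row of tile ids as (count, id) pairs.
        final class RowRle {
            static int[] encode(int[] row) {
                List<Integer> out = new ArrayList<>();
                int i = 0;
                while (i < row.length) {
                    int id = row[i], count = 1;
                    while (i + count < row.length && row[i + count] == id) count++;
                    out.add(count);
                    out.add(id);
                    i += count;
                }
                int[] packed = new int[out.size()];
                for (int k = 0; k < packed.length; k++) packed[k] = out.get(k);
                return packed;
            }

            static int[] decode(int[] packed, int rowLength) {
                int[] row = new int[rowLength];
                int pos = 0;
                for (int k = 0; k < packed.length; k += 2) {
                    for (int c = 0; c < packed[k]; c++) row[pos++] = packed[k + 1];
                }
                return row;
            }
        }

    Only the edited row needs re-encoding, so the per-edit cost stays proportional to the row length rather than the whole map.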

    Read the article

  • Elliptical orbit modeling

    - by Nathon
    I'm playing with orbits in a simple 2-d game where a ship flies around in space and is attracted to massive things. The ship's velocity is stored in a vector and acceleration is applied to it every frame as appropriate given Newton's law of universal gravitation. The point masses don't move (there's only 1 right now) so I would expect an elliptical orbit. Instead, I see this: I've tried with nearly circular orbits, and I've tried making the masses vastly different (a factor of a million) but I always get this rotated orbit. Here's some (D) code, for context:
        void accelerate(Vector delta)
        {
            velocity = velocity + delta; // Velocity is a member of the ship class.
        }

        // This function is called every frame with the fixed mass. It's a
        // method of the ship's.
        void fall(Well well)
        {
            // f=(m1 * m2)/(r**2)
            // a=f/m
            // Ship mass is 1, so a = f.
            float mass = 1;
            Vector delta = well.position - loc;
            float rSquared = delta.magSquared;
            float force = well.mass/rSquared;
            accelerate(delta * force * mass);
        }
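
    One thing that stands out in the fall method above: delta is never normalized, so the applied acceleration scales as mass/r rather than mass/r^2, and a 1/r central force produces exactly this kind of precessing, non-closed orbit instead of an ellipse. A minimal Java sketch of a corrected step (variable names illustrative), with the direction divided by r and semi-implicit Euler so the velocity update happens before the position update:

        // Gravity step with a proper inverse-square law: a = M / r^2 along the unit
        // vector toward the well (G folded into the well mass for simplicity).
        final class OrbitStep {
            static void step(double[] pos, double[] vel,
                             double wellX, double wellY, double wellMass, double dt) {
                double dx = wellX - pos[0], dy = wellY - pos[1];
                double r2 = dx * dx + dy * dy;
                double r = Math.sqrt(r2);
                double a = wellMass / r2;
                vel[0] += (dx / r) * a * dt;   // accelerate along the *unit* direction
                vel[1] += (dy / r) * a * dt;
                pos[0] += vel[0] * dt;         // then move with the updated velocity
                pos[1] += vel[1] * dt;
            }
        }

    Even with the correct force law, plain explicit Euler lets orbits drift slowly; the semi-implicit ordering above keeps them stable over long runs.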

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now, I want to make the rope properly collide with the rectangle, such that the rope will not pass through a rectangle, and will wrap around the rectangle and all that good stuff. Currently, I have it set up so that no rope node can pass through a rect (successfully); however, this means a rope segment can still pass through a block. Ex: So the question is, what can I do to fix this? What I have tried: I create a rectangle between two nodes of a rope, calculate the rotation between the nodes, and get myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle. I create a new node or pivot point around the corner of the block, and rearrange nodes to point to the corner node. The trouble is determining what corner the rope segment is passing through. And then the current rope setup goes wonky (it's based on Verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earthquake). Among other issues that might be solvable, but it's turning into a case-by-case thing, which doesn't seem right. I think the best answer here would just be a link to a tutorial (I simply can't find any; most lead to Box2D or Farseer, but I want to at least learn how it works before I hide behind an engine).
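
    The segment-versus-rectangle piece of this is usually done with a slab (Liang-Barsky style) clip: it reports whether the segment between two rope nodes crosses an axis-aligned rectangle at all, and the slab that produces the entry value identifies which face is hit first, which is also the information needed when picking a wrap corner. A minimal Java sketch (axis-aligned only; a rotated rectangle would first be transformed into its local space):

        // Liang-Barsky style slab test: does the segment (x0,y0)->(x1,y1) cross the
        // AABB [minX,maxX] x [minY,maxY]? Returns the entry parameter t in [0,1], or -1 for no hit.
        final class SegmentAabb {
            static double intersect(double x0, double y0, double x1, double y1,
                                    double minX, double minY, double maxX, double maxY) {
                double dx = x1 - x0, dy = y1 - y0;
                double tEnter = 0.0, tExit = 1.0;
                double[] p = { -dx, dx, -dy, dy };                           // left, right, bottom, top slabs
                double[] q = { x0 - minX, maxX - x0, y0 - minY, maxY - y0 };
                for (int i = 0; i < 4; i++) {
                    if (p[i] == 0.0) {
                        if (q[i] < 0.0) return -1.0;    // parallel to this slab and outside it
                    } else {
                        double t = q[i] / p[i];
                        if (p[i] < 0.0) tEnter = Math.max(tEnter, t);   // entering this slab
                        else            tExit  = Math.min(tExit, t);    // leaving this slab
                        if (tEnter > tExit) return -1.0;                // exits before entering: no hit
                    }
                }
                return tEnter;   // 0 means the segment starts inside the rectangle
            }
        }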

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it's when I shoot at a border between two blocks. Here is the update method for my Bullet class:
        def update(self, dt):
            if self.can_bounce:
                # if the bullet hasn't bounced, find its vector using the mouse click pos and player pos
                speed = -10.
                range = 200
                distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]]
                norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
                direction = [distance[0] / norm, distance[1] / norm]
                bullet_vector = [direction[0] * speed, direction[1] * speed]
                self.dx = bullet_vector[0]
                self.dy = bullet_vector[1]

            # check each block for collision
            for block in self.game.solid_blocks:
                last = self.rect.copy()
                if self.rect.colliderect(block):
                    topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                    bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                    rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                    leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                    # each test checks if it hit the top, bottom, left or right side of the block it's colliding with
                    if self.can_bounce:
                        if topcheck:
                            self.rect = last
                            self.dy *= -1
                            self.can_bounce = False
                            print "top"
                        if bottomcheck:
                            self.rect = last
                            self.dy *= -1  # Bottom check
                            self.can_bounce = False
                            print "bottom"
                        if rightcheck:
                            self.rect = last
                            self.dx *= -1  # right check
                            self.can_bounce = False
                            print "right"
                        if leftcheck:
                            self.rect = last
                            self.dx *= -1  # left check
                            self.can_bounce = False
                            print "left"
                    else:
                        # if it has already bounced and is colliding again, kill it
                        self.kill()

            for enemy in self.game.enemies_list:
                if self.rect.colliderect(enemy):
                    self.kill()

            # update position
            self.rect.x -= self.dx
            self.rect.y -= self.dy
    This definitely isn't the best way to do it, but I can't think of another way. If anyone has done this or can help, that would be awesome!

    Read the article

  • How does font rendering actually work?

    - by Andrea
    I realize that I know essentially nothing about the way fonts get rendered in my computer. From what I can observe, font rendering is generally made in a consistent way throughout the system. For instance, the subpixel font hinting settings that I configure in my DE control panel have influence on text which appears on window borders, in my browser, in my text editor and so on. (I should observe that some Java applications show a noticeable difference, so I guess they are using a different font rendering mechanism). What I get from the above is that probably all applications that need font rendering make use of some OS (or DE)-wide library. On the other hand, browsers usually manage their own rendering through a rendering engine, that takes care of positioning various items - including text - according to specific flow rules. I am not sure how these two facts are compatible. I would assume that the browser would have to ask the OS to draw a glyph at a given position, but how can it manage the flow of text without knowing beforehand how much space the glyph will take? Are there separate calls to determine the glyph sizes, so that the browser can manage the flow as if characters were little boxes that are later filled in by the OS? (Although this does not take care of kerning). Or is the OS responsible for drawing a whole text area, including text flow? Does the OS return the rendered glyph as a bitmap and leaves it to the application to draw that on the screen?
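
    As one concrete illustration of the "measure first, draw later" split (using Java's AWT text API, since Java applications were already mentioned): the toolkit exposes string and glyph metrics separately from drawing, so layout code can flow text into boxes before a single pixel is rasterized. A small sketch:

        import java.awt.Font;
        import java.awt.FontMetrics;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        // Measure text without drawing it; layout engines rely on metrics like these to
        // flow text, and only afterwards ask the rasterizer to paint the glyphs.
        public final class MeasureText {
            public static void main(String[] args) {
                BufferedImage scratch = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = scratch.createGraphics();
                Font font = new Font(Font.SERIF, Font.PLAIN, 16);
                FontMetrics fm = g.getFontMetrics(font);
                System.out.println("advance of 'Hello': " + fm.stringWidth("Hello") + " px");
                System.out.println("line height: " + fm.getHeight() + " px"); // ascent + descent + leading
                g.dispose();
            }
        }

    Browsers do essentially the same thing with whichever text stack they use: query advances and kerning to lay the line out first, then hand the positioned glyph runs to the rasterizer (which may or may not be the system one).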

    Read the article

  • List has no value after adding values in

    - by Sigh-AniDe
    I am creating a ghost sprite that will mimic the main sprite after 10 seconds of the game. I am storing the user's movements in a List<string> and I am using a foreach loop to run the movements. The problem is, when I run through the game with breakpoints, the movements are being added to the List<string>, but when the foreach runs it shows that the list has nothing in it. Why does it do that? How can I fix it? This is what I have:
        public List<string> ghostMovements = new List<string>();

        public void UpdateGhost(float scalingFactor, int[,] map)
        {
            // At this foreach, ghostMovements has nothing in it
            foreach (string s in ghostMovements)
            {
                // current position of the ghost on the tiles
                int mapX = (int)(ghostPostition.X / scalingFactor);
                int mapY = (int)(ghostPostition.Y / scalingFactor);

                if (s == "left")
                {
                    switch (ghostDirection)
                    {
                        case ghostFacingUp:
                            angle = 1.6f;
                            ghostDirection = ghostFacingRight;
                            Program.form.direction = "";
                            break;
                        case ghostFacingRight:
                            angle = 3.15f;
                            ghostDirection = ghostFacingDown;
                            Program.form.direction = "";
                            break;
                        case ghostFacingDown:
                            angle = -1.6f;
                            ghostDirection = ghostFacingLeft;
                            Program.form.direction = "";
                            break;
                        case ghostFacingLeft:
                            angle = 0.0f;
                            ghostDirection = ghostFacingUp;
                            Program.form.direction = "";
                            break;
                    }
                }
            }
        }

        // The movement is captured here and added to the list
        public void captureMovement()
        {
            ghostMovements.Add(Program.form.direction);
        }

    Read the article

  • Interesting 3d zooming technique

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the camera, so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object on screen and remembered it. Then on each frame I calculate the direction to it in camera space and construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use this final vector as the camera's new direction. And it's actually "kinda working"; the problem is that it is more or less off from the camera's rotation before starting to zoom, depending on the area you are trying to zoom in on (larger error near the edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guests, I can understand these things well. Thanks. Edit: I'll check often for responses, I'm really curious about this :D

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent Freespace: it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz that I then transformed to clip space, putting a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, after finding x in the XoZ plane and then in XoY, the x is not the same, so something must be wrong.
        float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed);
        float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y);
        float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y);
        ENG_Math.solveQuadraticEquation(a, b, c, collisionTime);
    The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is
        crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z);
    and after XoY
        crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y;
    Is my approach of separating the problem into 2 planes any good, or is there a whole different approach for 3D? (sqr() is square, not sqrt - avoiding confusion.)
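
    For what it's worth, the same quadratic can be set up directly in 3D, without splitting into planes: with relative target position p, relative target velocity v and projectile speed s, solve (|v|^2 - s^2) t^2 + 2 (p.v) t + |p|^2 = 0 and aim at p + v t for the smallest positive root. A minimal Java sketch (plain arrays standing in for the engine's vector type; names are illustrative):

        // Intercept of a constant-velocity target by a constant-speed projectile fired
        // from the origin; returns the 3D aim point, or null if no positive-time solution exists.
        final class Intercept {
            static double[] aimPoint(double[] relPos, double[] relVel, double projectileSpeed) {
                double a = dot(relVel, relVel) - projectileSpeed * projectileSpeed;
                double b = 2.0 * dot(relPos, relVel);
                double c = dot(relPos, relPos);
                double t;
                if (Math.abs(a) < 1e-9) {                       // speeds match: the equation is linear
                    t = (Math.abs(b) < 1e-9) ? -1.0 : -c / b;
                } else {
                    double disc = b * b - 4.0 * a * c;
                    if (disc < 0.0) return null;                // the target can never be reached
                    double sq = Math.sqrt(disc);
                    t = smallestPositive((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a));
                }
                if (t < 0.0) return null;
                return new double[] { relPos[0] + relVel[0] * t,
                                      relPos[1] + relVel[1] * t,
                                      relPos[2] + relVel[2] * t };
            }

            static double dot(double[] u, double[] v) { return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]; }

            static double smallestPositive(double t1, double t2) {
                double lo = Math.min(t1, t2), hi = Math.max(t1, t2);
                return lo > 0.0 ? lo : (hi > 0.0 ? hi : -1.0);
            }
        }

    This uses the same a, b, c as the 2D snippet above, just with the z terms included in the dot products, which avoids the two per-plane solutions disagreeing on x.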

    Read the article
