Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 347/1021

  • Text contains characters that cannot be resolved by this SpriteFont

    - by Appeltaart
    I'm trying to draw currency symbols with an Arial SpriteFont. According to this site, the font contains these symbols. I've included the Unicode character region for these symbols in the Content Processor SpriteFont file, like so:

        <CharacterRegions>
          <CharacterRegion>
            <Start>&#32;</Start>
            <End>&#126;</End>
          </CharacterRegion>
          <CharacterRegion>
            <!-- Currency symbols -->
            <Start>&#8352;</Start>
            <End>&#8378;</End>
          </CharacterRegion>
        </CharacterRegions>

    However, whenever I try to draw a € (Euro) symbol, I get this exception: "Text contains characters that cannot be resolved by this SpriteFont". Inspecting the SpriteFont in the IDE at that breakpoint shows the € symbol is in there. I'm at a loss as to what could be wrong. Why is this exception thrown?
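
    A quick runtime check can narrow this kind of thing down. A minimal sketch (XNA 4.0; the asset name and the euro literal are placeholders, not taken from the question):

        // Sketch: verify at runtime which glyphs the content pipeline actually baked in.
        SpriteFont font = Content.Load<SpriteFont>("Arial");

        // Characters lists every glyph present in the built SpriteFont.
        bool hasEuro = font.Characters.Contains('\u20AC');
        System.Diagnostics.Debug.WriteLine("Euro glyph present: " + hasEuro);

        // Optional: substitute a fallback glyph instead of throwing on unknown characters.
        font.DefaultCharacter = '?';

    If hasEuro comes back false at runtime, the built .xnb differs from what the IDE inspection suggested, which would at least localize the problem to the content build.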

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
    I'm currently developing a platform game and I've run into a problem with scaling resolutions. I want a different resolution of the game to still display the foreground unscaled (characters, tiles, etc.) but I want the background to be scaled to fit into the window. To explain this better, my viewport has 4 variables: (x, y, width, height), where x and y are the top left corner and width and height are the dimensions. These can be either 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running at a lower resolution. I have tried the following to make it work, but nothing I've come up with solves it so far:

        scale = view->width / 1280;
        drawX = x * scale;
        drawY = y * scale;

    (this makes the translation too small for low resolutions) and

        scale = view->width / 1280;
        bgWidth = background->width * scale;
        bgHeight = background->height * scale;
        drawX = x + background->width / 2 - bgWidth / 2;
        drawY = y + background->height / 2 - bgHeight / 2;

    (this makes the translation completely wrong at the edges of the map). The thing is, no matter what resolution the game is run at, the map remains the same size, and the foreground is unscaled. (With a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
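
    For what it's worth, one common arrangement is to scale only the background draw and run the foreground through a separate, unscaled camera offset. A minimal sketch in XNA-style C# (viewport, camera and texture names are illustrative, not from the question's engine):

        // Sketch: background stretched to the window, foreground drawn 1:1 via a camera offset.
        float scale = (float)viewport.Width / 1280f;   // 1280x960 is the design resolution

        // Background: scaled from the screen origin (a parallax factor can be folded in here).
        spriteBatch.Draw(backgroundTexture, Vector2.Zero, null, Color.White,
                         0f, Vector2.Zero, scale, SpriteEffects.None, 0f);

        // Foreground: world coordinates shifted by the camera, no scaling applied.
        Vector2 screenPos = new Vector2(tileWorldX - camera.X, tileWorldY - camera.Y);
        spriteBatch.Draw(tileTexture, screenPos, Color.White);

    The key design point is that the two layers use different transforms, so the background scale never leaks into the foreground translation.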

    Read the article

  • XNA matrix order problem

    - by user1990950
    I want a matrix that scales first and then rotates. I tried the code below, but it didn't work. zRotation, yRotation and xRotation are rotations that shouldn't be affected by the origin; allrot should be affected. xScale, yScale and zScale are the scaling variables. The code below works, except that it rotates and then scales.

        Matrix worldMatrix =
            ( Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
              Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
              Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) ) *
            ( Matrix.CreateTranslation(origin) *
              Matrix.CreateRotationY(MathHelper.ToRadians(allrot)) *
              Matrix.CreateScale(xScale, yScale, zScale) );
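
    As a point of reference: XNA multiplies row vectors on the left (v * M), so the left-most factor in a matrix product is applied to the vertex first. A hedged sketch of one possible reordering that scales before rotating under that convention, using the question's own variable names:

        // Sketch: with XNA's row-vector convention, the left-most matrix acts first.
        Matrix worldMatrix =
            Matrix.CreateScale(xScale, yScale, zScale) *               // 1. scale first
            Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *  // 2. origin-independent rotations
            Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
            Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) *
            Matrix.CreateTranslation(origin) *                         // 3. offset by origin
            Matrix.CreateRotationY(MathHelper.ToRadians(allrot));      // 4. rotation affected by origin

    This keeps the question's grouping (origin-independent rotations, then origin plus allrot) and only moves the scale to the front of the chain.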

    Read the article

  • Handling player logoff and logon in a persistent world without breaking immersion

    - by Boreal
    One problem I've never seen fixed in any persistent online game is how to handle player logon and logoff without the characters just popping in and out of the world. My first thought is to simply make a player's offline state as their character being asleep, but that doesn't make sense in the event of a disconnect and not an intentional logoff. How would you fix this, if you would even bother fixing it at all?

    Read the article

  • Sprites are sometimes blurry in Flash

    - by Tim Cooper
    I am playing around with drawing an SVG sprite (imported through [Embed]). Depending on the coordinates of the image, sometimes it appears crisper than others. The following image shows how it is rendered differently at different locations: (Image link - you may have to download and zoom in with an image editor to see it.) You'll notice that the middle sprite is blurrier than the ones on the sides. Does anyone know why this is? Any help would be appreciated.

    Read the article

  • Convert vector interpolation to quaternion interpolation? (Catmull-Rom)

    - by edA-qa mort-ora-y
    I have some existing code which does Catmull-Rom interpolation on two vectors (facing and up). I'm converting this to use quaternions instead (to replace the two vectors). Is there a general way to convert the vector-based interpolation to a quaternion one? The approach I'm using now is to extract the axis and angle from the quaternion. I then interpolate each of those independently and convert back to a quaternion. Is there a more direct method?
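
    For reference, the usual quaternion analogue of a Catmull-Rom segment is Shoemake's squad construction; a sketch in standard notation (not taken from the question's code):

        \mathrm{squad}(q_i, q_{i+1}; t) = \mathrm{slerp}\big(\mathrm{slerp}(q_i, q_{i+1}; t),\ \mathrm{slerp}(s_i, s_{i+1}; t);\ 2t(1-t)\big)

        s_i = q_i \exp\left( -\frac{\log(q_i^{-1} q_{i+1}) + \log(q_i^{-1} q_{i-1})}{4} \right)

    Here the inner terms s_i play the role the Catmull-Rom tangents play for the vectors, and all slerps are taken along the shortest arc.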

    Read the article

  • Morph a sphere to a cube and a cube to a sphere with GLSL

    - by nkint
    Hi, I'm getting started with GLSL with Quartz Composer. I have a patch with a particle system in which each particle is mapped onto a sphere with a blend value. With blend=0 the particles are in random positions; with blend=1 the particles are on the sphere. The code is here:

        vec3 sphere(vec2 domain) {
          vec3 range;
          range.x = radius * cos(domain.y) * sin(domain.x);
          range.y = radius * sin(domain.y) * sin(domain.x);
          range.z = radius * cos(domain.x);
          return range;
        }

        // in main:
        normal = sphere(p0) * blend + gl_Normal * (1.0 - blend);

    I'd like the particles to be on a cube if blend=0. I've tried to find a parametric equation for the cube, but I can't figure one out. Maybe it is not the right way?
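
    One standard trick, offered as a sketch rather than the canonical answer: a point on the sphere of radius r can be pushed out to the surface of the axis-aligned cube of half-size r by renormalizing with the infinity norm instead of the Euclidean one,

        \mathbf{c} = r\,\frac{\mathbf{p}}{\lVert \mathbf{p} \rVert_\infty}, \qquad \lVert \mathbf{p} \rVert_\infty = \max(|p_x|, |p_y|, |p_z|)

    Blending between p (sphere) and c (cube) then gives a direct sphere-to-cube morph without needing a separate parametric equation for the cube.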

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having worked with neither Scala, OpenGL ES 2 nor libgdx before), I have a basic tech demo going where you spin around a single procedurally textured planet. The problem I'm running into is the performance of the texture generation.

    A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, an n x n (e.g. 256 x 256) texture is applied, and these are bundled in one 8n x n texture that is sent to the fragment shader. The last two spaces are not used; they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'.

    The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128x128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot, and since the game's target resolution is 1280x720 that means I'd prefer to use 512x512 textures. Combine that with the fact that the actual textures will of course be more complicated than basic noise (there will be a day and a night texture, blended in the fragment shader based on sunlight, and a specular mask; I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone. In the final game there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable, since I can calculate the texture in the background while traveling, though obviously the faster the better.

    Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse:

        128x128 : 0.1s
        256x256 : 0.4s
        512x512 : 1.7s

    This is after I moved all performance-critical code to Java, since trying to do so in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result:

        128x128 : 2s
        256x256 : 7s
        512x512 : 29s

    Already far too long, and that's not even factoring in the fact that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet:

    - Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible, because at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone.
    - Use the GPU to render the texture once, store it and use the stored texture from then on. Upside: might be faster than doing it on the CPU, since the GPU is supposed to be faster at floating point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java.
    - Calculate a large number of noise textures and ship them with the application. I'd like to avoid this if at all possible.
    - Lower the resolution. Buys me a 4x performance gain, which isn't really enough, plus I lose a lot of quality.
    - Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than Perlin.
    - Adopt a pixel art style, allowing for lower resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach.
    - I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know.

    If anyone has any suggestions, tips, workarounds, or other comments regarding this problem I'd love to hear them.

    Read the article

  • Is there a market for a Text-based empire-building game?

    - by Vishnu
    I am working on building a text-based in-browser empire building game. The screen will be split into a console and an EXTREMELY rough vector map of your empire (just squares in a bigger square). Commands such as building and expanding would be typed into the console and automatically reflected in the map. Would there be any market for such a game? Would anyone want to play? To clarify, it would be online and everyone's empire would be in the same 'world'.

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I were first creating a rendering from the "front view", then rotating that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the X-step need to be "added" to the Z-step. Somehow, none of the many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos:  vec3 for the ray starting position in world-space (calculated as below)
        rayDir:  vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos:  vec3 for the camera position in world space
        camRad:  vec3 for camera rotation in radians
        pmat:    typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye
        //    and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
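
    For comparison, the conventional setup builds one direction per pixel in camera space, rotates it once into world space, and leaves the projection matrix out of the ray entirely. A minimal sketch in XNA-style C# (fov, aspect, width, height, camRadY and camPos mirror the question's variables; the FOV formula is an assumption, not taken from the post):

        // Sketch: per-pixel ray generation with a Y-axis camera rotation.
        float px = ((fx + 0.5f) / width - 0.5f) * aspect;
        float py = ((fy + 0.5f) / height - 0.5f);
        float planeDist = 0.5f / (float)Math.Tan(fov * 0.5f);   // view-plane distance from vertical FOV

        // Build the direction in camera space, rotate once into world space,
        // and never multiply the ray by the projection matrix.
        Vector3 dirCamera = new Vector3(px, -py, planeDist);
        Vector3 rayDir = Vector3.Normalize(Vector3.Transform(dirCamera, Matrix.CreateRotationY(camRadY)));
        Vector3 rayPos = camPos;                                // rays always start at the camera

    The perspective divergence comes entirely from px and py varying per pixel; the rotation is then a single rigid transform of the whole ray fan, which is what keeps depth intact under rotation.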

    Read the article

  • Need help starting with a game in Flash/AS3

    - by Hossein
    Hi, I am kinda new to Flash/AS3. I have written some stuff and now I know the basics of AS. I want to develop a game in which my character moves exactly like the way Super Mario moves (the background changes and new obstacles come in the way, like an animation with which you can interact by climbing and shooting etc.). There is one big thing that I don't understand: how do I make my background move and introduce new obstacles when my character moves forward? Currently my scene is static, which means the background is the same and the obstacles are also stationary. Can someone point me to a tutorial or give me an answer that solves my confusion? Thanks
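
    The underlying idea is usually a scrolling camera: the world stays in world coordinates and every draw is offset by the camera's position, which tracks the player. A language-agnostic sketch of that logic, written here in C# for concreteness (all names are illustrative):

        // Sketch: side-scroller camera. The world never moves; the view does.
        float cameraX = Math.Max(0f, player.X - screenWidth / 2f);  // follow the player, clamped at level start

        foreach (var obstacle in level.Obstacles)
        {
            float screenX = obstacle.WorldX - cameraX;              // world -> screen
            if (screenX > -obstacle.Width && screenX < screenWidth)
                Draw(obstacle, screenX, obstacle.WorldY);           // draw only what is visible
        }

        float backgroundX = -cameraX * 0.5f;  // background scrolls at half speed for a parallax feel

    New obstacles then "appear" simply because their world positions scroll into the visible window as cameraX grows.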

    Read the article

  • Exporting the frames in a Flash CS5.5 animation and possibly creating the spritesheet

    - by Adam Smith
    Some time ago, I asked a question here about the best way to create animations when making an Android game, and I got great answers. I did what people suggested, exporting each frame from a Flash animation manually and creating the spritesheet manually as well, and it was very tedious. Now I've changed projects, and this one is going to contain a lot more animations, and I feel like there has to be a better way to export each frame individually and possibly create my spritesheets in an automated manner. My designer is using Flash CS5.5 and I was wondering if all of this is possible, as I can't find an option or code examples on how to save each frame individually. If this is not possible using Flash, please recommend another program that can be used to create animations without having to create each frame on its own. I'd rather keep Flash, as my designer knows how to use it and it's giving great results.

    Read the article

  • Rule of thumb for enemy art design in 2D platformer

    - by Terrance
    I'm at the early stages of developing a 2D side-scrolling, open-ended platformer (think Metroidvania) and am having a bit of difficulty finding enemy design inspiration for something of a sci-fi, nature, fantasy setting that isn't overly familiar or obvious. I haven't seen many articles, blogs or books that discuss the subject at great length. Is there a fair rule of thumb for coming up with enemy art that keeps your player engaged?

    Read the article

  • Why is a fully transparent pixel still rendered?

    - by Mr Bell
    I am trying to make a pixel shader that achieves an effect similar to this video: http://www.youtube.com/watch?v=f1uZvurrhig&feature=related

    My basic idea is to render the scene to a temp render target, then:

    - Render the previously rendered image with a slight fade onto another temp render target
    - Draw the current scene on top of that
    - Draw the results onto a render target that persists between draws
    - Draw the results onto the screen

    But I am having problems with the fading portion. If I have my pixel shader return a color with its A component set to 0, shouldn't that basically amount to drawing nothing? (Assuming that the sprite batch blend mode is set to AlphaBlend.) To test this I have my pixel shader return a transparent red color. Instead of nothing being drawn, it draws a partially transparent red box. I hope that my question makes sense, but if it doesn't please ask me to clarify.

    Here is the drawing code:

        public override void Draw(GameTime gameTime)
        {
            GraphicsDevice.SamplerStates[1] = SamplerState.PointWrap;
            drawImageOnClearedRenderTarget(presentationTarget, tempRenderTarget, fadeEffect);
            drawImageOnRenderTarget(sceneRenderTarget, tempRenderTarget);
            drawImageOnClearedRenderTarget(tempRenderTarget, presentationTarget);
            GraphicsDevice.SetRenderTarget(null);
            drawImage(backgroundTexture);
            drawImage(presentationTarget);
            base.Draw(gameTime);
        }

        private void drawImage(Texture2D image, Effect effect = null)
        {
            spriteBatch.Begin(0, BlendState.AlphaBlend, SamplerState.PointWrap, null, null, effect);
            spriteBatch.Draw(image, new Rectangle(0, 0, width, height), Color.White);
            spriteBatch.End();
        }

        private void drawImageOnRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            drawImage(image, effect);
        }

        private void drawImageOnClearedRenderTarget(Texture2D image, RenderTarget2D target, Effect effect = null)
        {
            GraphicsDevice.SetRenderTarget(target);
            GraphicsDevice.Clear(Color.Transparent);
            drawImage(image, effect);
        }

    Here is the fade pixel shader:

        sampler TextureSampler : register(s0);

        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = 0;
            c = tex2D(TextureSampler, texCoord);
            //c.a = clamp(c.a - 0.05, 0, 1);
            c.r = 1;
            c.g = 0;
            c.b = 0;
            c.a = 0;
            return c;
        }

        technique Fade
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
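
    One detail worth noting as a possible explanation (a hint, not a confirmed diagnosis): in XNA 4.0, BlendState.AlphaBlend expects premultiplied alpha, i.e. it blends as src.rgb + dest.rgb * (1 - src.a), so a shader output of (1, 0, 0, 0) still adds full red. A sketch of one way around that, switching the batch to straight alpha:

        // Sketch, assuming XNA 4.0 semantics: AlphaBlend is premultiplied, so rgb
        // must be scaled by alpha for "a = 0 draws nothing" to hold. Option 1 is
        // to use straight (non-premultiplied) blending for this pass instead.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied,
                          SamplerState.PointWrap, null, null, fadeEffect);

    The other usual option is to premultiply inside the shader, multiplying c.rgb by c.a before returning.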

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send 1 matrix to the GPU (the projectionModelView combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise the number of GL calls, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU-bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at StackOverflow.

    Read the article

  • Voxel Engine in Multiplayer?

    - by Oliver Schöning
    This is a question more out of interest for now, because I am not even near the point where I could create this project at the moment. I really like the progress on the Atomontage Engine, a voxel engine that is WIP at the moment. I would like to create a voxel SERVER eventually. First in JavaScript (that's what I am learning right now), later perhaps in C++ for speed. Remember, I am perfectly aware that this is very hard! This is a brainstorm for the next 10 years as of now.

    What I would like to achieve one day is a multiplayer game in the browser where the voxel positions are updated by XYZ input from the server. The browser does only 3 things: sending player input to the server, updating voxel positions sent from the server, and rendering the world. I imagine using something like the Three.js library on the client side. So that would be my programming dream right there...

    Now to something simpler for the near future. Right now I am learning JavaScript, and I am making games with Construct2 (a really cool JavaScript "game maker"). The plan is to create a 2D voxel environment (block voxels) on the Socket.IO server* and send the positions of the voxels and players to the client side, which then positions the voxel blocks at the coordinates the server sends. I think that is a bit more manageable than the other, bigger idea. And also there should be no worries about speed with this type of project in JavaScript (I hope).

    Extra info: *I am using Node.js (without really knowing what it does besides making Socket.IO work).

    So now some questions: Is the "dream project" doable in JavaScript? Or is C++ just the better option because it does not have to be interpreted at run time like JavaScript? What are the limitations? I can think of some:

    - Need of a powerful server, depending on how much information the server has to process.
    - Internet speed: sending the data of the voxel positions to every player could add up to a very high volume! One way of reducing the packages could be to let the browser calculate some of the voxel positions from several values, but that would slow down the client side too.
    - The browser FPS might go down quickly if rendering too many objects.

    What about the more achievable project? I am almost 100% convinced that this is possible in JavaScript, and that there are several ways of doing this. This is just XY position updating for now... Hope this did make some sense. Please comment if you got something to say :D

    Read the article

  • AdMob banner not getting removed from superview

    - by Anil gupta
    I am developing a 2D game using the cocos2d framework, and I am using AdMob for advertising. I only set it up in some classes, not in all of them, yet the AdMob banner is visible in every class, and after some time the game crashes as well. I don't understand how the AdMob banner ends up in every class; in fact, I have not declared it in the RootViewController class. Can anyone suggest how to integrate AdMob in a cocos2d game so the banner appears only in particular classes, not in every class? I am using the latest Google AdMob SDK; my code is below. Thanks in advance.

        -(void)AdMob {
            NSLog(@"ADMOB");
            // note: 'winSize' is declared here, but 'size' is what the frames below use
            CGSize winSize = [[CCDirector sharedDirector] winSize];

            // Create a view of the standard size at the bottom of the screen.
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2 - 364,
                    size.height - GAD_SIZE_728x90.height, GAD_SIZE_728x90.width, GAD_SIZE_728x90.height)];
            } else {
                // It's an iPhone
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2 - 160,
                    size.height - GAD_SIZE_320x50.height, GAD_SIZE_320x50.width, GAD_SIZE_320x50.height)];
            }

            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_.adUnitID = @"a15062384653c9e";
            } else {
                bannerView_.adUnitID = @"a15062392a0aa0a";
            }

            bannerView_.rootViewController = self;
            [[[CCDirector sharedDirector] openGLView] addSubview:bannerView_];
            [bannerView_ loadRequest:[GADRequest request]];

            GADRequest *request = [[GADRequest alloc] init];
            request.testing = [NSArray arrayWithObjects:GAD_SIMULATOR_ID, nil]; // Simulator
            [bannerView_ loadRequest:request];
        }

        // best practice for removing the bannerView_
        -(void)removeSubviews {
            NSArray *subviews = [[CCDirector sharedDirector] openGLView].subviews;
            for (id SUB in subviews) {
                [(UIView *)SUB removeFromSuperview];
                // note: releasing subviews this class does not own is a likely
                // over-release, and a plausible source of the crashes described above
                [SUB release];
            }
            NSLog(@"remove from view");
        }

        // this makes the refreshTimer count
        -(void)targetMethod:(NSTimer *)theTimer {
            // increase the timer and seconds
            elapsedTime++;
            seconds++;
            // increase the minutes every 60 seconds
            if (seconds >= 60) {
                seconds = 0;
                minutes++;
                [self removeSubviews];
                [self AdMob];
            }
            NSLog(@"TIME: %02d:%02d", minutes, seconds);
        }

    Read the article

  • Tiled game: how to correctly load a background image?

    - by stighy
    Hi, I'm a newbie. I'm trying to develop a 2D game (top-down view). I would like to load a standard background, a textured ground... My "world" is big, for example 3000px x 3000px. I think it is not a good idea to load a 3000px x 3000px image and move it... So, what is the best practice? Load a single small image (64x64) and repeat it N times? If yes, OK, but how can I manage the "background" movement? Thanks, bye!
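
    For what it's worth, a sketch of the usual approach (XNA-style C#; all names are illustrative): keep one small tile in memory and draw just enough copies to cover the screen, offset by the camera position modulo the tile size.

        // Sketch: cover the viewport with a repeating 64x64 ground tile.
        // Assumes camera coordinates are non-negative within the 3000x3000 world.
        const int tileSize = 64;

        // Wrap the camera offset so the first tile starts just off-screen.
        int startX = -((int)camera.X % tileSize);
        int startY = -((int)camera.Y % tileSize);

        for (int y = startY; y < screenHeight; y += tileSize)
            for (int x = startX; x < screenWidth; x += tileSize)
                spriteBatch.Draw(groundTile, new Vector2(x, y), Color.White);

    The "movement" is then just the camera value changing; the tile grid itself never moves, and only a screenful of tiles is ever drawn per frame.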

    Read the article

  • Farseer Physics Xbox samples not working

    - by Hugh
    I have downloaded a few of the sample projects from the official Farseer Physics website and I just can't get them to run on my Xbox.

    - My connection to the Xbox is fine; other Xbox projects debug fine on it.
    - I have tried running both the Xbox versions of the samples (for example the Farseer hello world sample project) and the Windows versions, by right-clicking the project and making a copy for Xbox.

    I get a bunch of errors, but what I always get is "unreachable code detected", referring to code in the Farseer library. It seems to be a problem to do with referencing/linking the Farseer library to the main game project. Help please!

    Read the article

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?

    "Frequent" binding example:

        for object
            for material in object
                render part of object using that material

    "Low count" binding example:

        for material
            for object in material
                render part of object using that material

    I'm planning to use an octree later, and with this "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
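
    As a sketch of why the second ordering is usually preferred: binding is a pipeline state change, and sorting draw calls by state lets each bind be amortized over many draws. A hypothetical C# render queue illustrating the idea (Material and Mesh are assumed placeholder types with Bind()/Draw(); System.Linq is assumed):

        // Sketch: sort the render queue so each material is bound once per frame.
        var queue = new List<(Material material, Mesh mesh)>();
        // ... filled from the scene (or later, from the octree query) ...

        Material bound = null;
        foreach (var (material, mesh) in queue.OrderBy(e => e.material.Id))
        {
            if (!ReferenceEquals(material, bound))
            {
                material.Bind();    // texture + shader bind happens only on a change
                bound = material;
            }
            mesh.Draw();            // many draws share one bind
        }

    Note that this sorts a flat list of draw calls rather than duplicating scene data per material, which is one way to get the low bind count without the memory cost the question worries about.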

    Read the article

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, rotation of objects isn't allowed, the number of characters is limited, etc. So I'm doing some research into how to store the level data and map file.

    Reasoning for a DB: from my perspective, I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database.

    Reasoning for JSON: visually editable files, and changes can be tracked via SVN a lot easier. But there is repeated content.

    Do either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry?

    Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - Player start point, X - Level exit, . - Empty space, # - Platform, G - Gem
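
    To make the JSON option concrete, here is one possible (entirely hypothetical) shape for a level like the one above; the field names are illustrative, not any engine's standard, and the grid is abbreviated to three rows:

        {
          "name": "level-01",
          "layers": [
            {
              "type": "tiles",
              "grid": [
                ".........GGG........",
                ".........###........",
                ".1................X."
              ]
            },
            {
              "type": "objects",
              "items": [
                { "kind": "playerStart", "x": 1,  "y": 13 },
                { "kind": "exit",        "x": 18, "y": 13, "rotation": 0 }
              ]
            }
          ]
        }

    A separate object layer like this sidesteps the grid restriction, the rotation limit and the one-character-per-tile limit of the .txt format, while staying diffable in SVN.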

    Read the article

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I think it shouldn't make any difference.)

    - The base path: the Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?

    - Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMware, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.

    - The C# methods AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID), where the int I got from the .h header, I get Ak_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?

    - Converting the ID header to C#: the integration comes with a C# script that seems to fork a process to call Python, which in turn converts the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. It might be a permissions thing, but has anyone experienced this?

    - The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMware does not see it. This, I'm thinking, might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.

    - Different versions of the integration plugin: it's not clear to me from the documentation whether I have to manually (or write a script to) remove the "Profile" version and install the "Release" version when I'm going to do a Release build, or if I should install both versions in Unity and it'll select the right one.

    Thanks!

    Read the article

  • Where can I get the OpenAL SDK for C++?

    - by Peter Short
    The OpenAL site I'm looking at is a crappy, outdated and broken SharePoint portal, and the SDK in the downloads section gives me a 500 HTTP code when I request it: http://connect.creativelabs.com/openal/Downloads/OpenAL11CoreSDK.zip

    I found an OpenAL SDK on Softpedia and it has headers, but not alu.h or alut.h, which the tutorials I'm looking at apparently require for loading WAVs etc. What am I missing? Is OpenAL dead or something?

    Read the article

  • SFML fails to load image as texture

    - by zyeek
    I have come to a problem with the code below... Using SFML 2.0.

        #include <SFML/Graphics.hpp>
        #include <iostream>
        #include <list>

        int main()
        {
            float speed = 5.0f;

            // create the window
            sf::RenderWindow window(sf::VideoMode(sf::VideoMode::getDesktopMode().height - 300, 800), "Bricks");

            // Set game window position on the screen
            window.setPosition(sf::Vector2i(sf::VideoMode::getDesktopMode().width/4 + sf::VideoMode::getDesktopMode().width/16, 0));

            // Allow library to accept repetitive key presses (i.e. holding key)
            window.setKeyRepeatEnabled(true);

            // Hide mouse cursor
            //window.setMouseCursorVisible(false);

            // Limit 30 frames per sec; the minimum for all games
            window.setFramerateLimit(30);

            sf::Texture texture;
            if (!texture.loadFromFile("tile.png", sf::IntRect(0, 0, 125, 32)))
            {
                std::cout << "Could not load image\n";
                return -1;
            }

            // Empty list of sprites
            std::list<sf::Sprite> spriteContainer;

            bool gameFocus = true;

            // run the program as long as the window is open
            while (window.isOpen())
            {
                sf::Vector2i mousePos = sf::Mouse::getPosition(window);

                // check all the window's events that were triggered since the last iteration of the loop
                sf::Event event;
                while (window.pollEvent(event))
                {
                    float offsetX = 0.0f, offsetY = 0.0f;

                    if (event.type == sf::Event::GainedFocus)
                        gameFocus = !gameFocus;
                    else if (event.type == sf::Event::LostFocus)
                        gameFocus = !gameFocus;

                    if (event.type == sf::Event::KeyPressed)
                    {
                        if (event.key.code == sf::Keyboard::Space)
                        {
                            if (gameFocus)
                            {
                                // Create sprite and add features before putting it into container
                                sf::Sprite sprite(texture);
                                sprite.scale(.9f, .7f);
                                sf::Vector2u textSize = texture.getSize();
                                sprite.setPosition(sf::Vector2f(mousePos.x - textSize.x/2.0f, mousePos.y - textSize.y/2.0f));
                                spriteContainer.push_front(sprite);
                            }
                        }
                        if (event.key.code == sf::Keyboard::P)
                            std::cout << spriteContainer.size() << std::endl;
                        if (event.key.code == sf::Keyboard::W)
                            offsetY -= speed;
                        if (event.key.code == sf::Keyboard::A)
                            offsetX -= speed;
                        if (event.key.code == sf::Keyboard::S)
                            offsetY += speed;
                        if (event.key.code == sf::Keyboard::D)
                            offsetX += speed;
                    }

                    // "close requested" event: we close the window
                    if (event.type == sf::Event::Closed || event.key.code == sf::Keyboard::Escape)
                        window.close();

                    // Move all sprites synchronously
                    for (std::list<sf::Sprite>::iterator sprite = spriteContainer.begin(); sprite != spriteContainer.end(); ++sprite)
                        sprite->move(offsetX, offsetY);
                    //sprite.move(offsetX,offsetY);
                }

                // clear the window with black color
                window.clear(sf::Color::Black);

                // draw everything here...
                // window.draw(...);

                // Draw all sprites in the container
                for (std::list<sf::Sprite>::iterator sprite = spriteContainer.begin(); sprite != spriteContainer.end(); ++sprite)
                    window.draw(*sprite);

                // end the current frame
                window.display();
            }

            return 0;
        }

    A couple of weeks ago it worked flawlessly, as I expected, but now that I come back to it I am having problems importing the image as a texture ("tile.png"). I don't understand why this is even happening, and the only message I get via the terminal is "Cannot load image ..." followed by a bunch of random characters. My libraries are definitely working, but now I am not sure why the image is not loading. My image is in the same directory as my .h and .cpp files.

    This is an irritating problem that keeps coming up for some reason and is always a problem to fix. I import my libraries via my own directory, "locals", which contains many APIs, but I specifically get SFML, and it is set up appropriately, as I am able to open a window and do many other things.

    Read the article

  • Why don't more games use vector art?

    - by Parris
    It would seem to me that vector art is more efficient in terms of resources/scalability; however, in most cases I have seen artists using bitmap/rasterized art. Is this a limitation put on the artists by the game programmers/designers? As a programmer I think vector art would be more ideal, since it allows for scaling up resolution without having to recreate the art, creating really large graphics or causing graphics to become blurry. The questions: why aren't more people using SVG/AI to create 2D game art? Would it actually be preferred (and who prefers it)? Are bitmap graphics a standard or a limitation (or maybe neither)? Background: I am working on an engine, and I had some kinda cool ideas for vector based graphics; however, I don't want to piss off artists in the future. I guess this is more a question centered around pragmatism and developing games.

    Read the article
