Search Results

Search found 44501 results on 1781 pages for 'software development life'.

Page 723/1781

  • Material System

    - by Towelie
    I'm designing a material/shader system (the target API is DX10+, possibly OpenGL 3+ later; for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of script compilation or parsing at run time. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code, and then to the final shader. There are also some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float    time;
            float    delta;
        };

    The shader uses shared constant buffers to get its parameters. For each mesh in the scene I gather what it needs and what it can provide (normals, binormals, etc.) and find the corresponding shader permutation, or generate the missing parts. Also, at build time I calculate the render states and a hash for each permutation, which is later used for sorting; I might even assign each shader an ID from 0 to ShaderCount without gaps, for sorting. A FinalShader has only one technique and one pass. After that each mesh is assigned a shader, and it's ready to render. Some pseudo code:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
        {
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes())
            {
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();
            }
        }

        class FinalShader
        {
        public:
            UUID            m_ID;
            RenderState     m_RenderState;
            CBufferBindings m_BufferBindings;
        };

    But I have no idea how to create this Cg-like language, and do I really need it?
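
    A minimal sketch of the sort-key idea mentioned above, in C++. FinalShader and RenderState come from the question; the MakeSortKey helper, the single-uint32 state representation, and the 32/32 bit split are assumptions made purely for illustration:

        #include <cstdint>
        #include <functional>

        struct RenderState { uint32_t bits = 0; };   // stand-in for blend/depth/raster settings

        struct FinalShader
        {
            uint32_t    id;      // dense 0..ShaderCount-1, assigned at build time
            RenderState state;
        };

        // 64-bit key: render-state hash in the high bits so draws sharing state sort
        // next to each other, shader id in the low bits to break ties without gaps.
        uint64_t MakeSortKey(const FinalShader& s)
        {
            uint64_t stateHash = std::hash<uint32_t>{}(s.state.bits) & 0xFFFFFFFFull;
            return (stateHash << 32) | s.id;
        }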

    Read the article

  • Is this the most effective, simple way to display a moving image? (SDL2)

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along to wherever it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
            SDL_Surface *png = IMG_Load("character.png");

            SDL_Rect src;
            src.x = 0;   src.y = 0;   src.w = 161; src.h = 159;
            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;

            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false)
            {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }
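
    A minimal sketch of how the main loop is often structured instead, reusing the win/ren/tex/src/dest set up above; polling events lets the loop actually exit, and setting an explicit draw colour makes the per-frame clear predictable. This is just one common pattern, not necessarily the most effective one:

        // Sketch: event-driven main loop (assumes win, ren, tex, src, dest from the code above).
        SDL_Event event;
        bool running = true;
        while (running)
        {
            while (SDL_PollEvent(&event))                // drain pending events
            {
                if (event.type == SDL_QUIT)              // window close requested
                    running = false;
            }

            dest.x++;                                    // move the sprite one pixel per frame

            SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);   // colour used by SDL_RenderClear
            SDL_RenderClear(ren);                        // erase the previous frame
            SDL_RenderCopy(ren, tex, &src, &dest);
            SDL_RenderPresent(ren);                      // present (vsync-limited here)
        }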

    Read the article

  • Creating a shooting arrow class [on hold]

    - by I.Hristov
    OK, I am trying to write an XNA game with one player-controlled entity, while the rest are bots (enemy and friendly) wandering around and... shooting each other from range. Now the shooting, I suppose, should be done with a separate class, Arrow (for example). The resulting object would be an arrow appearing on screen and moving from the shooting entity to the target entity. When the target is reached, the arrow is no longer active and is probably removed from the list. I plan to make a class with these fields:

        Vector2 shootingEntity;
        Vector2 targetEntity;
        float   arrowSpeed;
        float   arrowAttackSpeed;
        int     damageDone;
        bool    isActive;

    Then, when enemy entities get closer than an int rangeToShoot (which each entity will have as a field/property), I plan to make a list of arrows emerging from each entity and going to the closest opposing one. I wonder if that logic will later let me have many entities shooting independently at different enemy entities at the same time. I know the question is broad, but it would be wise to ask whether the foundations of the idea are correct.
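
    A minimal sketch of what such an arrow might look like, written in C++ rather than XNA/C# purely for illustration; the field names, the snapshotted target position, and the straight-line motion are assumptions, not part of the question:

        #include <cmath>

        struct Vec2 { float x = 0.f, y = 0.f; };

        struct Arrow
        {
            Vec2  position;       // current position (starts at the shooter)
            Vec2  target;         // target position captured when the arrow was fired
            float speed  = 200.f; // world units per second
            int   damage = 10;
            bool  active = true;

            void Update(float dt)
            {
                float dx = target.x - position.x;
                float dy = target.y - position.y;
                float dist = std::sqrt(dx * dx + dy * dy);
                float step = speed * dt;
                if (dist <= step)                 // arrived: apply damage, then let the owner remove it
                {
                    position = target;
                    active = false;
                    return;
                }
                position.x += dx / dist * step;   // advance along the normalized direction
                position.y += dy / dist * step;
            }
        };

    Each frame the game would update every arrow in the list and erase the inactive ones, which keeps many entities shooting independently at the same time.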

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures, in my case). I have a simple (I think :P) geometry + fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse  = texture(gColorMap, TexCoord0).xyz;
            Normal   = CalcBumpedNormal();
        }

    If my render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
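
    For reference, a minimal sketch of the host-side setup such a shader usually assumes, in C++ (a current GL context is assumed, and the width/height variables and missing error handling are placeholders). Each layout(location = n) output lands in the attachment listed at index n of the array passed to glDrawBuffers(); the textures only have to be attached to the bound FBO while rendering into them, not bound to texture units:

        // Sketch: an FBO with three floating-point colour attachments.
        GLuint fbo, rt[3];
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        glGenTextures(3, rt);
        for (int i = 0; i < 3; ++i)
        {
            glBindTexture(GL_TEXTURE_2D, rt[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            // Attachment i receives the fragment output declared with layout(location = i).
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, rt[i], 0);
        }

        const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);   // index in this array == output location in the shader

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        {
            // handle the incomplete-framebuffer error
        }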

    Read the article

  • Drawing a grid in 3D view - mathematically calculate points and draw lines between them (not working)

    - by Deukalion
    I'm trying to draw a simple grid from a starting point, expanded to a given size: calculating the points mathematically and drawing lines between them. But DrawPrimitives(LineList) doesn't work the way I expect it to; this method can't even draw lines between four points to create a simple rectangle, so how does it work exactly? Some sort of coordinate system:

        [ ][ ][ ][ ][ ][ ][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.0][ ][0.0][ ][2.0][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][ ][ ][ ][ ][ ][ ]

    I've checked my method and it's working as it should: it calculates all the points needed to form a grid. This way I should be able to create points to draw lines between, right? If I supply the method with Size = 2, it starts at 0,0 and works its way through all the corners (2,2) on each side. So I have the positions of each point. How do I draw lines between these? VerticeCount must be the number of points in this case, right? So tell me: if I can't supply this method with Point A, Point B, Point C, Point D to draw a four-vertex rectangle (Point A - B - C - D), how do I do it? How do I even begin to understand it? As far as I'm concerned, that's a "line list", or a list of points to draw lines between. Can anyone explain what I'm missing? I want to do this mathematically so I can create a custom grid that can be altered.
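
    A minimal sketch of how a line list is usually fed, in C++ for illustration (the original code is XNA): every segment is an independent pair of vertices, so a rectangle outline needs 8 vertices (4 segments), and the primitive count passed to the draw call is the number of segments, not the number of points:

        #include <vector>

        struct Vec3 { float x, y, z; };

        // Build the vertex pairs for an (n x n)-cell grid of lines on the XZ plane.
        std::vector<Vec3> BuildGridLineList(int n, float cellSize)
        {
            std::vector<Vec3> v;                  // every 2 consecutive entries = one line segment
            float extent = n * cellSize;
            for (int i = 0; i <= n; ++i)
            {
                float d = i * cellSize;
                v.push_back({ d, 0.f, 0.f });     // line parallel to Z: start
                v.push_back({ d, 0.f, extent });  // line parallel to Z: end
                v.push_back({ 0.f, 0.f, d });     // line parallel to X: start
                v.push_back({ extent, 0.f, d }); // line parallel to X: end
            }
            return v;                             // primitive count for a line list = v.size() / 2
        }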

    Read the article

  • Can Google Translate's audio files be used in a game?

    - by ashes999
    For my game, I need text-to-speech. Since it's Android, I decided to settle for MP3s, since the range of words spoken is small. For my prototype, I'm using Google Translate to generate the audio, since it has awesome pronunciation across multiple languages. But can I use it in production? What if I sell my game for $1 on the app store? All I can find on SE is that the API may be LGPL, and that the licensing page mentions the API is only available for academic research -- nothing more. My usage is a bit different; I'm actually capturing the audio bits and using those instead. I'm curious to know the license for this; I can't find anything with my Google-fu.

    Read the article

  • How to adjust the shooting angle of an object

    - by Blue
    I've been trying to add an angle adjustment feature to a power bar that I got from unity3dStudents, but I can't seem to get the code right. I'm using AddForce on a rigidbody; it works, but the power is too great. I also found that rotating the object it's shooting from changes the angle, but I don't know how to proceed from that. Can somebody show me the problem with the script below, i.e. how to add height to the AddForce without it going too far up or to the side? Or how to change the angle of the object?

        var theAngle : int;
        var maxAngle : int = 130;
        var minAngle : int = 0;
        var angleIncreasing : boolean = false;
        var angleDecreasing : boolean = false;
        var rotationSpeed : float = 10;
        var ball : Rigidbody;
        var spawnPos : Transform;
        var shotForce : float = 25;

        function Update () {
            if(Input.GetKeyDown("k")){
                angleIncreasing = true;
                angleDecreasing = false;
            }
            if(Input.GetKeyUp("k")){
                angleIncreasing = false;
            }
            if(Input.GetKeyDown("l")){
                angleIncreasing = false;
                angleDecreasing = true;
            }
            if(Input.GetKeyUp("l")){
                angleDecreasing = false;
            }

            -------

            if(angleIncreasing){
                theAngle += Time.deltaTime * rotationSpeed;
                if(theAngle > maxAngle){
                    theAngle = maxAngle;
                }
            }
            if(angleDecreasing){
                theAngle -= Time.deltaTime * rotationSpeed;
                if(theAngle < minAngle){
                    theAngle = minAngle;
                }
            }
        }

        function Shoot(power : float, angle : int){
            ----

            var forward : Vector3 = spawnPos.forward;
            var upward : Vector3 = spawnPos.up;
            pFab.AddForce(forward * power * shotForce);
            pFab.AddForce(upward * angle * 10);

            ----
        }
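
    For what it's worth, a common way to turn an angle and a power into a single launch impulse is basic trigonometry rather than two separate AddForce calls. A small sketch in C++ purely to show the math (the names are illustrative, not Unity API); the same expression can be built from spawnPos.forward and spawnPos.up in the script above:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // 'forward' is assumed to be a unit-length horizontal direction; the result is
        // that direction tilted upward by angleDegrees and scaled by power.
        Vec3 LaunchVelocity(Vec3 forward, float angleDegrees, float power)
        {
            float rad = angleDegrees * 3.14159265f / 180.f;
            return { forward.x * std::cos(rad) * power,
                     std::sin(rad) * power,
                     forward.z * std::cos(rad) * power };
        }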

    Read the article

  • How to create reproducible probability in map generation?

    - by nickbadal
    So for my game, I'm using Perlin noise to generate regions of my map (water/land, forest/grass), but I'd also like to have some probability-based generation too. For instance:

        if(nextInt(10) > 2 && tile.adjacentTo(Type.WATER))
            tile.setType(Type.SAND);

    This works fine, and is even reproducible (based on a common seed) if the nextInt() calls are always made in the same order. The issue is that in my game, the world is generated on demand, based on the player's location. This means that if I explore the map differently, and the chunks of the map are generated in a different order, the randomness is no longer consistent. How can I get this sort of randomness to be consistent, independent of call order? Thanks in advance :)
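
    A minimal sketch of one standard approach, shown in C++ for illustration: derive each tile's random value from a hash of the world seed and the tile's coordinates, so the result depends only on where the tile is and never on the order in which chunks are generated. The specific mixing constants below are arbitrary illustrative choices (a murmur-style finalizer), not the only option:

        #include <cstdint>

        // Deterministic per-tile value in [0, 1), independent of generation order.
        double TileRandom(uint64_t worldSeed, int32_t x, int32_t y)
        {
            uint64_t h = worldSeed;
            h ^= static_cast<uint64_t>(static_cast<uint32_t>(x)) * 0x9E3779B97F4A7C15ull;
            h ^= static_cast<uint64_t>(static_cast<uint32_t>(y)) * 0xC2B2AE3D27D4EB4Full;
            h ^= h >> 33;  h *= 0xFF51AFD7ED558CCDull;   // bit-mixing steps
            h ^= h >> 33;  h *= 0xC4CEB9FE1A85EC53ull;
            h ^= h >> 33;
            return (h >> 11) * (1.0 / 9007199254740992.0);   // top 53 bits -> [0, 1)
        }

        // Usage: if (TileRandom(seed, x, y) > 0.2 && adjacentToWater(x, y)) setType(x, y, SAND);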

    Read the article

  • Prevent collisions between mobs/NPCs/units piloted by computer AI: How to avoid mobile obstacles?

    - by Arthur Wulf White
    Let's say we have character a starting at point A and character b starting at point B; character a is headed to point B and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is: how do I take preventative action in the code to stop the two from colliding with one another?

    Case 2: characters a and b start from the same point at different times. Character b starts later and is the faster of the two. How do I make character b walk around character a without going through it?

    Case 3: let's say we have m such characters on each side, and there is sufficient room to pass without the characters overlapping with one another. How do I stop the two groups of characters from "walking on top of one another" and allow them to pass around one another in a natural, organic way?

    A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path, or stops without stopping all units when there is sufficient room to traverse.
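
    A minimal sketch of one simple family of techniques (tile reservation on the pathfinding grid), in C++; steering-based local avoidance is the other common family. Everything here, names included, is an illustrative assumption rather than a canonical algorithm:

        #include <set>
        #include <utility>

        // Each unit reserves the tile it intends to occupy next and only steps onto a
        // tile it managed to reserve; otherwise it waits a step or asks the pathfinder
        // to route around the blocked tile.
        struct TileReservations
        {
            std::set<std::pair<int, int>> reserved;

            bool TryReserve(int x, int y)
            {
                return reserved.insert({ x, y }).second;   // false if another unit got there first
            }

            void Release(int x, int y) { reserved.erase({ x, y }); }
        };

        // Per unit, per step (pseudocode in comments):
        //   next = path.front();
        //   if (reservations.TryReserve(next.x, next.y)) { move; reservations.Release(prev.x, prev.y); }
        //   else { wait one step, or re-run Dijkstra treating 'next' as temporarily blocked }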

    Read the article

  • Rotation: the further I go from 0:0, the further the object orbits around the origin while rotating

    - by Serguei Fedorov
    For some reason I am having an issue where the following code:

        global.spriteBatch.Draw(obj.sprite, obj.getPosition(), null, Color.White, obj.rotation, obj.center, 2f, SpriteEffects.None, 1);

    causes the object to rotate around the origin as though there were an offset applied to the position relative to its location. The calculation for the center is correct, and this happens even if I set the pivot to be the location of the object. The further I get from 0:0, the larger the radius of the rotation. I am not sure what is going on here, because according to the following tutorial I have set the code up correctly: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Rotation.php Any ideas? Any help is greatly appreciated!!!
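
    If it helps to see it written out: rotating a point p about a pivot c is p' = R(θ)·(p - c) + c, so any error in where the pivot sits produces an orbit whose radius grows with the distance between the intended and the actual pivot -- which matches the symptom of the radius growing the further the object is from 0:0. (In SpriteBatch.Draw the origin argument is normally expressed in the sprite's own texture pixels, e.g. half the frame's width and height, rather than in world coordinates; that is a general observation about typical usage, not a diagnosis of this specific project.)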

    Read the article

  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off-track, but I have this gut feeling it's possible... Why do I want this? I'm working on a skin shader and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be transformed into an N·L + N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
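
    For reference, the identity involved, assuming L and V are unit vectors: the unnormalized half vector is H' = L + V, and |H'|² = |L|² + 2(L·V) + |V|² = 2 + 2(L·V), so |H'| = sqrt(2 + 2(L·V)). The length therefore depends on L·V, and L·V is not fixed by N·L and N·V alone (L and V can each rotate around N without changing those two dot products), which suggests the half-vector length cannot be recovered from N·L and N·V in the general case.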

    Read the article

  • Defining and selecting an area from an image

    - by meth0d_
    I want to create a game where the world is loaded from an image file, much like Paradox Interactive does for their games. If I have this image: then the red, green, blue, cyan, magenta, white, black and grey areas should be different provinces. I know how to loop through the pixels and check whether I've hit a new province; the problem is that I don't know how to define the region of each province for selection. I don't know how to load in the data so that you can click anywhere on a province and have that province selected.
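
    A minimal sketch of the colour-key lookup idea, in C++ with an assumed getPixel(x, y) accessor (not a real library call): the province image itself acts as the selection mask, so a click simply reads the pixel under the cursor and maps its colour to a province id.

        #include <cstdint>
        #include <unordered_map>

        uint32_t getPixel(int x, int y);                     // assumed accessor, returns packed 0xRRGGBB

        std::unordered_map<uint32_t, int> provinceByColor;   // colour -> province id

        void BuildProvinceTable(int width, int height)
        {
            int nextId = 0;
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x)
                {
                    uint32_t c = getPixel(x, y);
                    if (provinceByColor.find(c) == provinceByColor.end())
                        provinceByColor[c] = nextId++;       // first time this colour appears
                }
        }

        // On click: the selected province is provinceByColor[getPixel(mouseX, mouseY)].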

    Read the article

  • How do I create a game that runs on Windows, iOS and Android?

    - by AspaApps
    I use C++ to create Windows games, and now I want to step into other OSes like Android and iOS. I'm totally familiar with C++, so I tried to create an app for iOS using Objective-C, and it worked great. However, I also want to publish games for Android, but not by using Java. I don't want to create a single game 5-6 times over for the different platforms. Is there any way that if I create a game for Windows, it will also work on Android and iOS? Or should I use ActionScript 3.0? If I use ActionScript 3.0, will it require Flash Player to run the game on Windows, Android and iOS?

    Read the article

  • Rotating a quad around its center

    - by Trixmix
    How can you rotate a quad around its center? This is what I'm trying to do, but it isn't working:

        GL11.glTranslatef(x - getWidth() / 2, y - getHeight() / 2, 0);
        GL11.glRotatef(30, 0.0f, 0.0f, 1.0f);
        GL11.glTranslatef(x + getWidth() / 2, y + getHeight() / 2, 0);
        // DRAW

    My main problem is that it renders the quad off the screen. Draw code:

        GL11.glBegin(GL11.GL_QUADS);
        {
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(0, getTexture().getHeight());
            GL11.glVertex2f(0, height);
            GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight());
            GL11.glVertex2f(width, height);
            GL11.glTexCoord2f(getTexture().getWidth(), 0);
            GL11.glVertex2f(width, 0);
        }
        GL11.glEnd();
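
    For comparison, the order that usually works with the legacy matrix stack: the transform issued last is applied to the vertices first, so you position the centre, rotate, and then offset back by half the size. A short sketch in C-style OpenGL (the LWJGL GL11 calls map one-to-one), assuming x, y is the quad's top-left corner and the quad is drawn from (0, 0) to (width, height) as in the draw code above:

        glPushMatrix();
        glTranslatef(x + width / 2.0f, y + height / 2.0f, 0.0f);  // place the quad's centre on screen
        glRotatef(30.0f, 0.0f, 0.0f, 1.0f);                       // spin about that centre
        glTranslatef(-width / 2.0f, -height / 2.0f, 0.0f);        // the quad itself starts at (0, 0)
        // ... draw the quad exactly as in the draw code above ...
        glPopMatrix();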

    Read the article

  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but it is not even close to what I am after. I want the light to move with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)?

    Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3 (gl_NormalMatrix * gl_Normal));

            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;

            fposition = ECposition;

            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));

            float l_value = lightIntensity / (l_distance / lightDistance);

            vec4 l_color = vec4(l_value * lightColor.r,
                                l_value * lightColor.g,
                                l_value * lightColor.b,
                                l_value * lightColor.a);

            vec4 color;
            color = texture2D(texture, gl_TexCoord[0].st);

            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }
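
    One observation that may help when reading the code above: the Euclidean distance between the fragment and the light is d = sqrt((p.x - l.x)² + (p.y - l.y)² + (p.z - l.z)²), i.e. the length of the difference of the two positions, whereas the l_distance expression above sums products of coordinates, which behaves more like an (incomplete) dot product. Likewise, a fixed light position is usually transformed by multiplying it with the model-view matrix alone; the extra component-wise multiply by gl_Vertex ties the light's position to each vertex, which would make it follow the geometry in an unintended way. Both remarks are general observations, not a tested fix for this particular shader.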

    Read the article

  • Circular class dependency

    - by shad0w
    Is it bad design to have two classes which need each other? I'm writing a small game in which I have a GameEngine class that owns a few GameState objects. To access several rendering methods, these GameState objects also need to know about the GameEngine class - so it's a circular dependency. Would you call this bad design? I'm just asking because I'm not quite sure, and at this point I'm still able to refactor these things.
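
    For what it's worth, in C++ this kind of mutual reference is usually expressed with a forward declaration so the two headers don't have to include each other; a minimal sketch (the class names come from the question, everything else is assumed):

        #include <memory>
        #include <vector>

        // --- What GameState.h would contain ---
        class GameEngine;                  // forward declaration breaks the include cycle

        class GameState
        {
        public:
            explicit GameState(GameEngine& engine) : engine(engine) {}
            void Render();                 // defined in GameState.cpp, which can include GameEngine.h
        private:
            GameEngine& engine;            // non-owning back-reference
        };

        // --- What GameEngine.h would contain (it includes GameState.h) ---
        class GameEngine
        {
        public:
            void DrawSprite(/* ... */) {}  // rendering helpers the states call back into
        private:
            std::vector<std::unique_ptr<GameState>> states;   // the engine owns its states
        };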

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites/tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game, and that I can import a single square sprite and shade it appropriately in Unity, as opposed to importing squares in many different colors. My experience with adjusting hue and saturation in Photoshop is that white is not an easy color to change: things that are white tend to stay white. My testing in Unity, though, shows that I can change the "color" of a sprite to anything other than white and the sprite is shaded appropriately, despite what I would have expected from my Photoshop experience. Since white objects do seem to take on the appropriate color shading when changed within Unity, my gut tells me that white is the best base color to begin with, meaning I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best base color to begin with, and why does something like this work in Unity when adjusting the hue and saturation within Photoshop does not?
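
    If it helps to reason about it: sprite tinting of this kind is typically a per-channel multiply, final = texel × tint, so a pure white texel (1, 1, 1) reproduces the tint colour exactly, while any darker or coloured texel can only darken or shift it. Photoshop's hue/saturation tool behaves differently because white has zero saturation, so there is no hue to rotate. This describes the usual behaviour of a default sprite shader rather than a guarantee about any particular Unity version.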

    Read the article

  • HTML5 Game (Canvas) - UI Techniques?

    - by Jason L.
    Hi! I'm in the process of building a JavaScript / HTML5 game (using Canvas) for mobile (Android / iPhone / WebOS) with PhoneGap. I'm currently trying to design out how the UI and the playing board should be built and how they should interact, but I'm not sure what the best solution is. Here's what I can think of:

    1. Build the UI right into the canvas using things like drawImage and fillText.

    2. Build parts of the UI outside of the canvas using regular DOM objects, and then float a div over the canvas when UI elements need to overlap the playing board canvas.

    Are there any other possible techniques I can use for building the game UI that I haven't thought of? Also, which of these would be considered the "standard" way (I know HTML5 games are not very popular, so there probably isn't a "standard" way yet)? And finally, which way would YOU recommend / use? Many thanks in advance!

    Read the article

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move but there is some action going on. For example Chess, Checkers, Solitaire. The game I'm working on is not one of these but it's a good reference. What are some methods you can apply to the game or the design that increases the appeal of the game to the user? Of course you can make it prettier but what are some other methods you can use? For example: Visual cues, game design changes, user interface arrangement, etc.

    Read the article

  • Double Buffering in Panda3D (C++)

    - by jsvcycling
    How would I go about using Double Buffering (to create a loading screen) in Panda3D using C++? I've searched Google and found some forums that talk about the concept of swapping buffers, but I haven't seen any that show any type of source code (specifically Panda3D/C++). I'd like to try and stay away from using pure OpenGL code and work it through Panda3D, but if I have no other choice, then I'll have to go with OpenGL coding.

    Read the article

  • How do I randomly generate a top-down 2D level with separate sections and is infinite?

    - by Bagofsheep
    I've read many other questions and answers about random level generation, but most of them deal with either randomly/procedurally generating 2D levels viewed from the side, or 3D levels. What I'm trying to achieve is sort of like looking straight down on a Minecraft map: there is no height, but the borders of each "biome" or "section" of the map are random and varied. I already have basic code that can generate a perfectly square level with the same tileset (randomly picking segments from the tileset image), but I've encountered a major issue with making the level infinite: beyond a certain point, tile positions become negative on one or both of the axes. The code I use to draw only the tiles the player can see relies on taking a tile's position and converting it to the index that represents it in the array, and as you well know, arrays cannot have a negative index. Here is some of my code. This generates the square (or rectangle) of tiles:

        // Scale is in tiles
        public void Generate(int sX, int sY)
        {
            scaleX = sX;
            scaleY = sY;

            for (int y = 0; y <= scaleY; y++)
            {
                tiles.Add(new List<Tile>());
                for (int x = 0; x <= scaleX; x++)
                {
                    tiles[tiles.Count - 1].Add(tileset.randomTile(x * tileset.TileSize, y * tileset.TileSize));
                }
            }
        }

    Before I changed the code (after realizing an array index couldn't be negative), my for loops looked something like this, to center the map around (0, 0):

        for (int y = -scaleY / 2; y <= scaleY / 2; y++)
            for (int x = -scaleX / 2; x <= scaleX / 2; x++)

    Here is the code that draws the tiles:

        int startX = (int)Math.Floor((player.Position.X - (graphics.Viewport.Width) - tileset.TileSize) / tileset.TileSize);
        int endX   = (int)Math.Ceiling((player.Position.X + (graphics.Viewport.Width) + tileset.TileSize) / tileset.TileSize);
        int startY = (int)Math.Floor((player.Position.Y - (graphics.Viewport.Height) - tileset.TileSize) / tileset.TileSize);
        int endY   = (int)Math.Ceiling((player.Position.Y + (graphics.Viewport.Height) + tileset.TileSize) / tileset.TileSize);

        for (int y = startY; y < endY; y++)
        {
            for (int x = startX; x < endX; x++)
            {
                if (x >= 0 && y >= 0 && x <= scaleX && y <= scaleY)
                    tiles[y][x].Draw(spriteBatch);
            }
        }

    So, to summarize what I'm asking: first, how do I randomly generate a top-down 2D map with different sections (not chunks per se, but areas with different tile sets), and second, how do I get past this negative array index issue?
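
    A minimal sketch of the usual workaround, in C++ for illustration (the original code is C#/XNA): store tiles, or whole chunks, in a hash map keyed by their integer coordinates instead of a 2D array, so negative positions are just ordinary keys and the world can grow in any direction on demand. Which biome a tile belongs to can then come from a second, lower-frequency noise or hash function sampled at the same (x, y):

        #include <cstdint>
        #include <unordered_map>

        struct Tile { int type = 0; };

        // Stand-in for the existing random/biome generation of a single tile.
        Tile GenerateTile(int32_t x, int32_t y) { return Tile{}; }

        // Pack a signed (x, y) pair into one 64-bit key.
        inline uint64_t Key(int32_t x, int32_t y)
        {
            return (static_cast<uint64_t>(static_cast<uint32_t>(x)) << 32)
                 |  static_cast<uint64_t>(static_cast<uint32_t>(y));
        }

        std::unordered_map<uint64_t, Tile> world;

        // Fetch a tile, generating it on first access; works for any sign of x and y.
        Tile& TileAt(int32_t x, int32_t y)
        {
            auto it = world.find(Key(x, y));
            if (it == world.end())
                it = world.emplace(Key(x, y), GenerateTile(x, y)).first;
            return it->second;
        }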

    Read the article

  • correct pattern to handle a lot of entities in a game

    - by lezebulon
    In my game I usually have every NPC, item, etc. derive from a base class "Entity". They all have a virtual method called "update" that I call for each entity in my game every frame. I am assuming that this is a pattern that has a lot of downsides. What are some other ways to manage the different "game objects" throughout the game? Are there other well-known patterns for this? My game is an RPG, if that changes anything.
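
    For contrast, a minimal sketch of the component-style alternative often mentioned in this context, in C++; data is grouped by aspect and updated in bulk by systems rather than through one virtual update() per entity. This is only an illustration of the idea, not a recommendation that it fits every RPG:

        #include <cstddef>
        #include <vector>

        struct Position { float x = 0.f, y = 0.f; };
        struct Velocity { float dx = 0.f, dy = 0.f; };

        struct World
        {
            std::vector<Position> positions;    // index i belongs to entity i
            std::vector<Velocity> velocities;
        };

        // A "system": one plain loop over one aspect of every entity.
        void MovementSystem(World& w, float dt)
        {
            for (std::size_t i = 0; i < w.positions.size(); ++i)
            {
                w.positions[i].x += w.velocities[i].dx * dt;
                w.positions[i].y += w.velocities[i].dy * dt;
            }
        }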

    Read the article

  • Best database setup for one click games

    - by ewizard
    I am building a one click game website/mobile app, and I am debating between using MySQL and MongoDB for the backend. The way I have been exploring it is with a NodeJS/Express/Angular/Passport/MongoDB stack - I have also implemented Socket.io. I have gotten to the point where I am sending data from the flash game to the server (NodeJS). The only data that needs to be sent is basic user information, the players score at the end of each game, and some x,y positions for each players game (for anti-cheating). It seems like MySQL would work fine, but as I am already using MongoDB - are there any major drawbacks to continuing to work with MongoDB on this project?

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory-mapped block, so is there any way to do that with pygame instead of passing it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial

    Read the article

  • libGDX using Stage and Actor produces different camera angles on desktop and Android Phone

    - by Brandon
    libGDX using Stage and Actor produces different camera angles on the desktop and on an Android phone. Here are pictures demonstrating the problem: http://brandonyuh.minus.com/mFpdTSgN17VUq On the desktop version, the image takes up almost all of the screen; on the Android phone it only takes up a small part of it. Here's the code (not my actual project, but I isolated the problem):

        package com.me.mygdxgame2;

        import com.badlogic.gdx.*;
        import com.badlogic.gdx.graphics.*;
        import com.badlogic.gdx.graphics.Texture.TextureFilter;
        import com.badlogic.gdx.graphics.g2d.*;
        import com.badlogic.gdx.scenes.scene2d.*;

        public class MyGdxGame2 implements ApplicationListener {
            private Stage stage;

            public void create() {
                stage = new Stage();
                stage.addActor(new ActorHi());
            }

            public void render() {
                Gdx.gl.glClearColor(0, 1, 0, 1);
                Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                stage.draw();
            }

            public void dispose() {}
            public void resize(int width, int height) {}
            public void pause() {}
            public void resume() {}

            public class ActorHi extends Actor {
                private Sprite sprite;

                public ActorHi() {
                    Texture texture = new Texture(Gdx.files.internal("data/hi.png"));
                    texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
                    sprite = new Sprite(new TextureRegion(texture, 0, 0, 128, 128));
                    sprite.setBounds(0, 0, 300.0f, 300.0f);
                }

                public void draw(SpriteBatch batch, float parentAlpha) {
                    sprite.draw(batch);
                }
            }
        }

    hi.png is included in the above link. Thank you very much for answering my question; I've spent 3 days trying to figure it out.

    Read the article
