Search Results

Search found 5654 results on 227 pages for '3d rendering'.

Page 156/227 | < Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen the edges come out clean, but when I render to a texture first I get edge artifacts. Anti-aliasing is turned off in both cases. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO:

        GLuint frameBufferID;
        glGenFramebuffers(1, &frameBufferID);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

        glGenTextures(1, &coloursTextureID);
        glBindTexture(GL_TEXTURE_2D, coloursTextureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Depth buffer setup
        GLuint depthrenderbuffer;
        glGenRenderbuffers(1, &depthrenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);
        GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
        glDrawBuffers(1, DrawBuffers);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return false;

    Thank you so much for any help :)
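
    Even though anti-aliasing is reportedly off in both paths, a frequently suggested fix for edge artifacts in render-to-texture setups is to render into a multisampled FBO and then resolve it into the texture. This is only a hedged sketch: it reuses the names from the code above, the sample count of 4 is an assumption, and it presumes OpenGL 3.3 core (where all of these calls exist):

        GLuint msaaFBO, msaaColour, msaaDepth;
        glGenFramebuffers(1, &msaaFBO);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);

        // Multisampled colour and depth storage (4x)
        glGenRenderbuffers(1, &msaaColour);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaColour);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGB8, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColour);

        glGenRenderbuffers(1, &msaaDepth);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepth);

        // ...draw the scene into msaaFBO instead of frameBufferID...

        // Resolve the multisampled image into the single-sample texture FBO
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, frameBufferID);
        glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                          0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);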

    Read the article

  • Draw contour around object in Opengl

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around each object (plus points to fill the gaps), but due to the line width about half of the contour covered the object itself. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are in front. This changes when I move the camera, but there is always some part covering the object. Here is the code I use for drawing an object:

        glColorMask(1, 1, 1, 1);
        std::list<CObjectOnScene*>::iterator objIter = ptr->objects.begin(), objEnd = ptr->objects.end();
        int countStencilBit = 1;
        while (objIter != objEnd)
        {
            glColorMask(1, 1, 1, 1);
            glStencilFunc(GL_ALWAYS, countStencilBit, countStencilBit);
            glStencilOp(GL_REPLACE, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourVertices();

            glStencilFunc(GL_NOTEQUAL, countStencilBit, countStencilBit);
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourBorder();

            ++objIter;
            ++countStencilBit;
        }

    I've tried different stencil buffer settings, but I always get something like that. My questions: 1. Am I setting up the stencil buffer wrong? 2. Are there any other simple ways to create a contour around such objects? Thanks in advance.
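
    For comparison, here is a minimal two-pass stencil outline, sketched with hypothetical draw calls (drawObject/drawObjectOutline) rather than the poster's engine API. The detail that usually cures the "partly behind, partly in front" symptom is disabling the depth test while drawing the border, so nearer geometry cannot cut into the contour:

        // Pass 1: mark every pixel the object covers in the stencil buffer
        glEnable(GL_STENCIL_TEST);
        glStencilMask(0xFF);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawObject();                  // hypothetical: fill the object normally

        // Pass 2: draw the outline only where the object is NOT, with the
        // depth test off so the contour is never occluded by the object itself
        glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
        glStencilMask(0x00);           // keep the stencil contents intact
        glDisable(GL_DEPTH_TEST);
        glLineWidth(4.0f);
        drawObjectOutline();           // hypothetical: same geometry drawn as lines
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_STENCIL_TEST);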

    Read the article

  • Brightness going up to 100% on loading certain websites in Chrome

    - by picheto
    I'm using Google Chrome version 21.0.1180.89 on Ubuntu 12.04 and my laptop is a Sony VAIO VPCCW15FL (spec sheet). My video driver is the proprietary "NVIDIA accelerated graphics driver (post-release updates) (version-current updates)". After installing Ubuntu, I discovered that neither the hardware brightness buttons nor the software brightness slider worked, and found that I could get the hardware buttons working by installing the nvidiabl.deb package and the oBacklight script. I'm using nvidiabl-dkms 0.77 and oBacklight 0.3.8. The slider in the Ubuntu "Settings" still does not work, but I don't care. The annoying thing happens when loading certain pages in Google Chrome: the brightness goes up to 100% when loading the page or when leaving it (closing the tab or typing a different URL in the omnibox). However, the "brightness tooltip" (the default brightness notification) remembers the position it was set to, so if I adjust the brightness with the hardware buttons, the level is adjusted relative to the value it had before "going 100%". I disabled the Flash PPAPI plugin but left the NPAPI plugin enabled, and the problem went away for pages with Flash content. Still, the same thing happens when viewing HTML5 video, or when loading, for example, the Chrome Web Store or using the Scratchpad extension. I suppose it has to do with certain elements being rendered on the GPU, but this is just a guess. This brightness issue does not happen in Firefox 15.0 or any other application I have used so far. Does anybody know why this may be happening and what I could do to fix it without changing browser? Thanks a lot.

    Read the article

  • Impact Earth Lets You Simulate Asteroid Impacts

    - by Jason Fitzpatrick
    If you’re looking for a little morbid simulation to cap off your Friday afternoon, this interactive asteroid impact simulator makes it easy to see the results of asteroid impacts big and small. The simulator is the result of a collaboration between Purdue University and the Imperial College of London. You can adjust the size, density, impact angle, and impact velocity of the asteroid, as well as change the target from water to land. The only feature missing is the ability to select a specific location as the point of impact (if you want to know what a direct strike to Paris would yield, for example, you’ll have to do your own layering). Once you plug all that information in, you’re treated to a little 3D animation as the simulator crunches the numbers. After it finishes you’ll see a breakdown of a variety of effects including the size of the crater, the energy of the impact, seismic effects, and more. Hit up the link below to take it for a spin. Impact Earth [via Boing Boing]

    Read the article

  • Faster 2D Collision detection

    - by eShredder
    Recently I've been working on a fast-paced 2D shooter and I came across a mighty problem: collision detection. Sure, it is working, but it is very slow. My goal is: have lots of enemies on screen and have them not touch each other. All of the enemies chase the player entity. Most of them have the same speed, so sooner or later they all end up occupying the same space while chasing the player. This really drops the fun factor since, for the player, it looks like you are being chased by one enemy only. To prevent them from occupying the same space I added a collision check (a very basic 2D test, the only method I know of), which runs in the enemy class's update method:

        Loop through all enemies (continue; if the loop points at this object)
            If enemy object intersects with this object
                Push enemy object away from this enemy object

    This works fine - as long as I only have fewer than 200 enemy entities, that is. When I get closer to 300-350 enemy entities my frame rate begins to drop heavily. First I thought it was bad rendering, so I removed their draw call. This did not help at all, so of course I realised it was the update method. The only heavy part in their update method is this each-enemy-loops-through-every-enemy part. When I get closer to 300 enemies the game does a 90000 (300x300) step iteration. My my~ I'm sure there must be another way to approach this collision detection, though I have no idea how. The pages I find are about how to actually do the collision between two objects or how to check collision between an object and a tile. I already know those two things. tl;dr? How do I approach collision detection between LOTS of entities? Quick edit: if it is of any help, I'm using C# XNA.
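
    The usual answer here is a "broad phase" that only tests nearby pairs instead of all pairs, for example a uniform grid (spatial hash): each enemy is bucketed by cell, and only enemies in the same or neighbouring cells are tested. This is a hedged, minimal sketch in C++ (the question is C#/XNA, but the structure translates directly); all names and the cell size are made up for illustration:

        #include <cmath>
        #include <unordered_map>
        #include <vector>

        struct Enemy { float x, y, radius; };

        const float CELL = 64.0f;  // assumption: roughly the largest enemy diameter

        static long long cellKey(int cx, int cy) {
            return (static_cast<long long>(cx) << 32) ^ static_cast<unsigned int>(cy);
        }

        void resolveCollisions(std::vector<Enemy>& enemies) {
            // Bucket every enemy by grid cell
            std::unordered_map<long long, std::vector<int>> grid;
            for (int i = 0; i < (int)enemies.size(); ++i)
                grid[cellKey((int)std::floor(enemies[i].x / CELL),
                             (int)std::floor(enemies[i].y / CELL))].push_back(i);

            for (int i = 0; i < (int)enemies.size(); ++i) {
                int cx = (int)std::floor(enemies[i].x / CELL);
                int cy = (int)std::floor(enemies[i].y / CELL);
                for (int dx = -1; dx <= 1; ++dx)
                    for (int dy = -1; dy <= 1; ++dy) {
                        auto it = grid.find(cellKey(cx + dx, cy + dy));
                        if (it == grid.end()) continue;
                        for (int j : it->second) {
                            if (j <= i) continue;  // test each pair only once
                            float ddx = enemies[j].x - enemies[i].x;
                            float ddy = enemies[j].y - enemies[i].y;
                            float r = enemies[i].radius + enemies[j].radius;
                            if (ddx * ddx + ddy * ddy < r * r) {
                                // Push the pair apart along the separating axis
                                float d = std::sqrt(ddx * ddx + ddy * ddy) + 1e-6f;
                                float push = 0.5f * (r - d);
                                enemies[i].x -= ddx / d * push; enemies[i].y -= ddy / d * push;
                                enemies[j].x += ddx / d * push; enemies[j].y += ddy / d * push;
                            }
                        }
                    }
            }
        }

    With ~300 enemies spread over a typical screen this replaces the 90000-step loop with a few hundred narrow-phase tests; the grid is rebuilt every frame, which is cheap at these counts.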

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene containing many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how should I organise the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing/switching from one buffer to another would carry a performance penalty. Alternatively, I could create several big buffers, one per object type, but I don't understand how to update such buffers: a buffer holding, for example, all walls is too big to update as a whole, and what do I do if I want to delete an object from the middle of the buffer? I have essentially the same question as this one: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models. They tend to create a single vertex buffer with their list of points, which is then only manipulated by matrix transformations. I, on the other hand, will be updating the scene very often.
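
    One common pattern, sketched below under stated assumptions, is one large dynamic vertex buffer per object type treated as a pool of fixed-size slots: adding a wall claims a slot and writes only that range, deleting one just returns the slot to a free list (no buffer-wide update needed), and drawing uses each object's base vertex. This is a hedged D3D11 illustration, not a drop-in solution; the slot size and all names are invented:

        #include <d3d11.h>
        #include <cstring>
        #include <vector>

        struct VertexPool {
            ID3D11Buffer* buffer = nullptr;
            UINT slotBytes = 0;            // fixed space reserved per object
            std::vector<UINT> freeSlots;   // slots not currently owned by an object

            bool init(ID3D11Device* dev, UINT bytesPerSlot, UINT slotCount) {
                slotBytes = bytesPerSlot;
                D3D11_BUFFER_DESC desc = {};
                desc.ByteWidth = bytesPerSlot * slotCount;
                desc.Usage = D3D11_USAGE_DYNAMIC;
                desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
                desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
                for (UINT i = 0; i < slotCount; ++i) freeSlots.push_back(i);
                return SUCCEEDED(dev->CreateBuffer(&desc, nullptr, &buffer));
            }

            // Claim a slot for a new object; returning it to freeSlots "deletes" it.
            UINT alloc() { UINT s = freeSlots.back(); freeSlots.pop_back(); return s; }
            void free(UINT slot) { freeSlots.push_back(slot); }

            // Write one object's vertices into its slot. NO_OVERWRITE promises the
            // driver we won't touch ranges the GPU may still be reading, so there
            // is no stall and no whole-buffer update.
            void write(ID3D11DeviceContext* ctx, UINT slot, const void* verts, UINT bytes) {
                D3D11_MAPPED_SUBRESOURCE mapped;
                if (SUCCEEDED(ctx->Map(buffer, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &mapped))) {
                    std::memcpy(static_cast<char*>(mapped.pData) + slot * slotBytes, verts, bytes);
                    ctx->Unmap(buffer, 0);
                }
            }
        };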

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. The same problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've assumed that all I'm really doing is drawing a load of 2D textured quads on the screen, treating the orthographic projection as an old-style 2D display with (0,0) in the upper left and (screenWidth, screenHeight) in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coordinates 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustum() calls.
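
    For the first approach (0,0 top-left), the projection matrix is cheap to rebuild whenever the screen dimensions change, which sidesteps the rotation confusion: always pass the current width and height after rotation. A minimal sketch, assuming a column-major float[16] matrix as OpenGL uses:

        // glOrthof-style matrix with (0,0) top-left, (screenW, screenH) bottom-right
        void ortho2D(float m[16], float screenW, float screenH)
        {
            const float l = 0.0f, r = screenW, t = 0.0f, b = screenH, n = -1.0f, f = 1.0f;
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  = 2.0f / (r - l);
            m[5]  = 2.0f / (t - b);            // t and b swapped: y grows downward
            m[10] = -2.0f / (f - n);
            m[12] = -(r + l) / (r - l);
            m[13] = -(t + b) / (t - b);
            m[14] = -(f + n) / (f - n);
            m[15] = 1.0f;
        }

    On Retina versus non-Retina screens the matrix handles positioning, but texture quality is a separate choice: supplying a 2x asset set and selecting it by screen scale avoids the blurriness that pure scaling would introduce.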

    Read the article

  • To sell or give for free

    - by QAH
    Hello everyone! I am currently making a game that I was originally planning to sell. It is a simple 2D arcade-style game for the PC. I've seen many indie games become popular and generate revenue from advertisements while the game itself remains free. I need some advice on whether I should sell my game, release it for free with advertisements, or keep the game free and ask for donations. I feel that my game is fun, but of course the graphics aren't tip-top, because I am a programmer, not an artist. I just take screenshots of 3D models I get from Turbosquid and crop around them to make sprites. Also, and I could be very wrong about this, it seems that there are more legal issues surrounding selling a game than making it free and generating revenue from advertisements or donations. If I am wrong, someone please correct me. I am very interested in generating some revenue for my work, but that isn't at the very top of my list. I am in my last year of high school, soon to be going to college, and I am going to major in computer science/software engineering, so I am trying to gain some preliminary experience at home by coding stuff every day. One way of getting this experience is by making this game. So what do you think? What route should I take? What has worked well for other indie games? Thanks in advance.

    Read the article

  • How to get Nvidia graphics working on Sony Z laptop?

    - by projectshave
    I have an older Sony VAIO Z 590 laptop with switchable graphics between Intel and Nvidia GeForce 9300M. It is NOT Optimus. I did a clean install of Ubuntu 12.04. Everything works, but it's using Unity 2D with the Intel drivers. I've tried loading the Nvidia drivers from "Additional Drivers", but it says "this driver is activated but not currently in use". When I run "nvidia-settings", an error window pops up to say "You do not appear to be using the NVIDIA X drivers." "lspci" shows both graphics cards. Let me know if I should add more info. How do I get the Nvidia graphics and Unity 3D working? More info:

        $ lshw -short -class display
        H/W path           Device   Class     Description
        ==============================================
        /0/100/1/0                  display   G98 [GeForce 9300M GS]
        /0/100/2                    display   Mobile 4 Series Chipset Integrated Graphics C

        $ glxinfo
        name of display: :0
        Xlib: extension "GLX" missing on display ":0".
        (the same line repeats several more times)
        Error: couldn't find RGB GLX visual or fbconfig

    Excerpts from Xorg.0.log:

        [ 16.373] (II) LoadModule: "glx"
        [ 16.373] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so
        [ 16.386] (II) Module glx: vendor="NVIDIA Corporation"
        [ 16.386]     compiled for 4.0.2, module version = 1.0.0
        [ 16.386]     Module class: X.Org Server Extension
        [ 16.386] (II) NVIDIA GLX Module 295.49 Tue May 1 00:09:10 PDT 2012
        [ 16.608] (II) NVIDIA dlloader X Driver 295.49 Mon Apr 30 23:48:24 PDT 2012
        [ 16.608] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [ 17.693] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)

    Read the article

  • Text on a model

    - by alecnash
    I am trying to put some text on a Model and I want it to be dynamic. I did some research and came up with drawing the text onto a texture and then setting that texture on the model. I use something like this:

        public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.Transparent);

            Spritbatch.Begin();
            // have to redo the ColorTexture
            Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
            Spritbatch.End();

            GraphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    When I was working with primitives rather than models everything worked fine, because I could place the texture exactly where I wanted. With the model (a RoundedRect 3D button) it now looks like that: Is there a way to have the text centered only on one side?

    Read the article

  • Is there a typical career path to learn game development "on the job"?

    - by mac
    The extended version of the question is: what are the typical career paths that a developer without specific experience in game development should take if he/she wishes to work in the game development industry? In other words, what are the positions such a programmer might aspire to be hired for in the game industry? I am asking because it seems to me that, even without direct experience with 3D modelling, physics engines, shaders, etc., and however complex these topics might be, they are still "just" top layers one can learn "on the job" if he/she already has good programming skills and experience in software design (for example during pair-programming sessions). I have no knowledge whatsoever of the game industry, so maybe I am being naïve here, but in all the other programming jobs I previously took, I learnt most of the specifics while working on concrete projects... so I wonder if there is a chance to do the same with game development. Thanks for your time and advice! :) PS: I don't know if this is important for answering the question, but scripting languages are the ones I am most proficient in. /mac

    Read the article

  • Ubuntu 13.10, kernel 3.11 blank screen issue with hybrid graphics

    - by Lagerbaer
    On my HP Envy, which has both an Intel on-chip graphics card and an Nvidia GeForce:

        *-display UNCLAIMED
             description: 3D controller
             product: GK208M [GeForce GT 740M]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: cap_list
             configuration: latency=0
             resources: memory:d2000000-d2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:5000(size=128) memory:b2000000-b207ffff
        *-display
             description: VGA compatible controller
             product: 4th Gen Core Processor Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 06
             width: 64 bits
             clock: 33MHz
             capabilities: vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:46 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64)

    I have trouble with all newer kernels. I basically had to install 12.04 LTS and use its 3.5 kernel family to get the system to boot. The 3.8 kernel from 12.10 and the newest 3.11 from Ubuntu 13.10 leave me with a black screen on boot. On one occasion I did hear the "log in" sound, but the screen did not display anything. I have purged all the nvidia drivers, so I guess it should just use the Intel driver, but apparently this is all messed up with newer kernel versions. This is different from the other "nvidia boots into blank screen" bug in that I don't rely solely on an nvidia card. Surely the Intel on-chip card should be supported and leave me with something other than a blank screen? Again, it only works with kernel version 3.5.0-41-generic, not with the 3.11.0-12 one that ships with Ubuntu 13.10. When I go into the grub menu and change the boot options from 'quiet splash' to 'nomodeset' I am able to boot the system, but then I don't get any graphics, and trying 'sudo service lightdm start' doesn't succeed (I get 100% CPU for apport, but this doesn't do anything either, so I kill it). Help, I'm all out of ideas. EDIT: Let me add that I'm using the EFI boot system and have a dual-boot installation with Windows 8.
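
    For experimenting beyond one-off edits in the GRUB menu, kernel parameters can be made persistent in /etc/default/grub. This is only a sketch of the mechanism; whether 'nomodeset' (or some other parameter) is the right value for this machine is exactly what the question is trying to find out:

        # /etc/default/grub -- edit as root, then run: sudo update-grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"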

    Read the article

  • Effective and simple matching for 2 unequal small-scale point sets

    - by Pavlo Dyban
    I need to match two sets of 3D points, but the number of points in each set can differ. Most algorithms seem to be designed to align images and are tuned to work with hundreds of thousands of points; my case is 50 to 150 points in each of the two sets. So far I have acquainted myself with the Iterative Closest Point and Procrustes matching algorithms. Implementing a Procrustes algorithm seems like total overkill for this small quantity. ICP has many implementations, but I haven't found any ready-made version accounting for the so-called "outliers" - points without a matching pair. Besides the implementation expense, algorithms like Fractional and Sparse ICP use statistical information to discard points that are considered outliers, and for series of 50 to 150 points statistical measures are often biased or the significance criteria are not met. I know of the Assignment Problem in linear optimization, but it is not suitable for unequal sets of points. Are there other, small-scale algorithms that solve the problem of matching two point sets? I am looking for algorithm names, scientific papers or C++ implementations. I need some hints on where to start my search.
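
    As a starting point, at 50-150 points a brute-force mutual-nearest-neighbour step with a distance threshold is often enough to establish correspondences while leaving unmatched points as outliers - the core idea behind trimmed ICP variants, without the statistics. A hedged C++ sketch; the threshold and all names are illustrative:

        #include <cmath>
        #include <limits>
        #include <utility>
        #include <vector>

        struct P3 { double x, y, z; };

        static double d2(const P3& a, const P3& b) {
            double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        }

        // Pairs (i into A, j into B) that are mutually nearest and within maxDist.
        // Points left out of the result are outliers. Assumes A and B are non-empty.
        std::vector<std::pair<int, int>> matchPairs(const std::vector<P3>& A,
                                                    const std::vector<P3>& B,
                                                    double maxDist)
        {
            std::vector<int> nnAB(A.size(), 0), nnBA(B.size(), 0);
            for (size_t i = 0; i < A.size(); ++i) {            // brute force is
                double best = std::numeric_limits<double>::max();  // fine at this scale
                for (size_t j = 0; j < B.size(); ++j) {
                    double d = d2(A[i], B[j]);
                    if (d < best) { best = d; nnAB[i] = (int)j; }
                }
            }
            for (size_t j = 0; j < B.size(); ++j) {
                double best = std::numeric_limits<double>::max();
                for (size_t i = 0; i < A.size(); ++i) {
                    double d = d2(A[i], B[j]);
                    if (d < best) { best = d; nnBA[j] = (int)i; }
                }
            }
            std::vector<std::pair<int, int>> pairs;
            for (size_t i = 0; i < A.size(); ++i) {
                int j = nnAB[i];
                if (nnBA[j] == (int)i && d2(A[i], B[j]) <= maxDist * maxDist)
                    pairs.push_back({(int)i, j});
            }
            return pairs;
        }

    Alternating this correspondence step with a rigid-transform estimate from the accepted pairs gives a small trimmed-ICP-style loop.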

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android and Windows 8 Mobile. All three platforms use different programming languages, so the only 'reuse' I can see is that of the boxes-and-lines drawings (UML :) and nothing else. How do companies/programmers manage the variation of the same product across different platforms when the implementation languages differ? It's 'easier' in the desktop world, IMO, given the plethora of languages and cross-platform libraries that make your life easier. Not so in the mobile world. Moreover, product-line management principles don't seem all that applicable: what is common and what is variant doesn't really matter, because the application is the same (conceptually) while the implementation varies. Some difficulties that come to mind:

    - Bug fixing: The applications may be designed in a similar manner, but bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android, and a fix on one platform may not carry over to another (unless it's a semantic bug like a != b instead of a == b, which would require the same 'approach' to fixing, in essence).
    - Enhancements: Making a change on one platform would be radically different than on another.
    - Code-design divergence: The way the code is written/organized, the class structures, etc., could be very different given the different implementation environments, limiting further reuse of the (above) UML models.

    There are of course many others - just keeping development in sync and making sure all applications are at the same version with the same set of features, etc. The effort seems to be 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts: split the application into client/server to limit the per-platform effort to the client side (not always doable); use frameworks like Unity-3D that take care of the cross-platform problem (mostly applicable to games and probably not to other applications). Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on Codeplex) is more of a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    - Super simple: all that is needed is a way to load, seek and render a frame programmatically; ideally it would use the system's codecs and not require an assortment of plugins.
    - Permissive license (LGPL or freer).
    - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight (.NET) library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • RasterizerState set to null after calling DrawText in Nuclex

    - by ProgrammerAtWork
    I have the following code in XNA:

        // class members
        Text t1;
        Text t2;
        Text t3;

        // init (DebugFont24 is a size-24 vector font)
        t1 = MM.DebugFont24.Fill("hello");
        t1 = MM.DebugFont24.Extrude("hello");
        t2 = MM.DebugFont24.Fill("hello");
        t2 = MM.DebugFont24.Extrude("hello");
        t3 = MM.DebugFont24.Fill("hello");
        t3 = MM.DebugFont24.Extrude("hello");

        // draw
        TextBatch test = new TextBatch(MM.GD);
        test.DrawText(t1, Color.Red);
        test.DrawText(t2, Color.Red);
        test.DrawText(t3, Color.Red);
        test.End();

    After the second call to the TextBatch, the RasterizerState of the GraphicsDevice is set to null, but I don't get any runtime errors or any indication that something is wrong. Is this supposed to happen? Or am I doing something wrong? I've discovered that this happened because culling was set to None when I was rendering textures.

    Read the article

  • Will Unity skills be interchangeable?

    - by Starkers
    I'm currently learning Unity and working my way through a video-game maths primer textbook. My goal is to create a racing game for WebGL (using Three.js and maybe Physic.js). I'm well aware that the Unity program shields you from a lot of what's going on, and from a lot of the grunt work attached to developing even a simple video game, but if I power through a bunch of Unity tutorials, will a lot of the skills I learn translate over to other frameworks/engines? I'm pretty proficient at level design with WebGL, and I'm a good 3D modeller. My weaknesses are definitely AI and physics. While I am rapidly shoring up my maths, and while physics is undeniably interesting, there are only so many hours in the day and there is a wealth of engines out there to take care of this sort of thing. AI appeals to me a lot more, and is a lot more necessary: AI changes drastically from game to game and is tweaked heavily during development, whereas physics stays much more constant. Will learning AI concepts in Unity allow me to transfer this knowledge pretty much anywhere? Or will I just be paddling up Unity creek with these skills?

    Read the article

  • Unity , libgdx, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses way too many roadblocks in terms of which devices it supports (and even if it does support them, it doesn't work well on all of them). I've been looking around for alternatives, and found something called libgdx. Well, it's nothing like Unity unfortunately, but at least it seems like I may be able to reach a larger audience in the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be alright with libgdx too. But I really want to know if ditching Unity is a smart idea, considering how comfortable I am with it already, and how much I like it. Finally, is libgdx something you would recommend considering my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

    Read the article

  • Moviebarcodes Showcases Entire Movies as Frame-based Barcodes

    - by Jason Fitzpatrick
    If you’ve ever wanted a chance to look at an entire movie in a single glance, here’s your chance. Moviebarcodes shares mock-barcodes generated by turning each frame of a movie into a thin stripe, offering a glimpse into the color choices and shot lengths in popular movies. The barcode seen above was generated from The Matrix; you can see where the green indicates scenes that were shot inside the matrix and thus given a subtle green tint. In the barcode below, generated from the movie Pleasantville, you can see the transition in the movie between the color and black-and-white scenes. In the case of Pleasantville, elements of the black-and-white world turning to color represent pivotal moments in the plot development, which are now neatly mapped out below. Check out the hundreds of barcodes at the link below; you can even order prints of your favorite movies. Find a great rendering in the mix? Share a link in the comments below. Moviebarcodes [via Cool Inforgraphics]

    Read the article

  • What is the breakdown of jobs in game development?

    - by Destry Ullrich
    There's a project I'm trying to start for indie game development; specifically, it's going to be a social networking website that lets developers meet through (it's a secret). One of the key components is showing what skills members have. Question: I need to know what MAJOR game development roles are not represented in the following list, keeping in mind that many specialist roles are being condensed into broader, generalist roles:

    Art
    - Animator (characters, creatures, props, etc.)
    - Concept Artist (2D scenes, environments, props, silhouettes, etc.)
    - Technical Artist (UI artists, typefaces, graphic designers, etc.)
    - 3D Artist (modeling, rigging, texturing, lighting, etc.)

    Audio
    - Composer (scores, music, etc.)
    - Sound Engineer (SFX, mood setting, audio implementation, etc.)
    - Voice (dialog, acting, etc.)

    Design
    - Creative Director (initial direction, team management, communications, etc.)
    - Gameplay Designer (systems, mechanics, control mapping, etc.)
    - World Designer (level design, aesthetics, game progression, events, etc.)
    - Writer (story, mythos, dialog, flavor text, etc.)

    Programming
    - Engine Programmer (engine creation, scripting, physics, etc.)
    - Graphics Engineer (sprites, lighting, GUI, etc.)
    - Network Engineer (LAN, multiplayer, server support, etc.)
    - Technical Director (I don't know what a technical director would even do.)

    Post script: I have an art background, so I'm not familiar with what the others behind game creation actually do. What's missing from this list, and if you feel some things should be changed around, how so?

    Read the article

  • Support our movement?

    - by Mirchi Sid
    | Imagine Freedom 2013 | mnearth Student Programs

    This is to inform you about the world's first online student competition, called Imagine Freedom, conducted by mnearth corporation India. Imagine Freedom aims to become one of the most popular student IT competitions in the world. The program will be run entirely with free and open source software and operating systems such as Ubuntu and Linux. The competition has many categories, including web designing, software development and much more. The program coordinators will contact your schools for the selection process and the first steps of registration. Are you an expert in open source free software? Then this is the time for you! Otherwise, do you know anyone who has the skills? Then inform them about the program. The competitions will be held as part of the mnearth Student Programs. The program schedule and local competition information will be sent out once applications are received.

    | Categories of participants:
    - Animation Films
    - Multimedia Presentation
    - 3D Animation
    - Web Designing
    - Software Development
    - Innovations
    - Cloud Apps
    - Games
    - etc.

    | Levels of participants:
    - High School Level
    - Higher Secondary Level
    - College Level
    - University Level

    | High School Level: this level is for school students; the age limit is 12-24 years. Competitions start this year, to select talented students.

    For more information, write to: [email protected]
    Call us on: 04936312206 (India)
    Follow us on Twitter: www.twitter.com/imaginatingkids
    Join the community on Facebook: www.facebook.com/imaginefreedomonce

    Read the article

  • how does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, but it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth, plus a small point list of vertices to act as stars; how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post-process? (Considering the output will be 8 bpp anyway.) Basically: what are the steps for HDR, and how does it work? I can't seem to find any good papers/articles that describe the process other than this one, but it seems to skim over the basics a little, so it's confusing.
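
    To make the pipeline concrete, here is a hedged D3D11-flavoured sketch of the two pieces asked about: a floating-point render target so lighting can exceed 1.0, and the tone-mapping pass that brings values back into [0, 1]. All names are assumptions, and the shader itself is summarised in the comments:

        // A 16-bit-float colour target: values are not clamped to [0, 1]
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;   // HDR-capable format
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D* hdrTex = nullptr;
        device->CreateTexture2D(&desc, nullptr, &hdrTex);
        // ...create a render-target view and a shader-resource view for hdrTex...

        // Render the scene into hdrTex, then draw a fullscreen quad with a
        // tone-mapping pixel shader. The simplest is the Reinhard operator:
        //     ldr = hdr / (hdr + 1.0)        // maps [0, inf) into [0, 1)
        // Exposure (a scale applied to hdr before the mapping) is usually driven
        // by the average scene luminance, obtained by repeatedly downsampling the
        // HDR buffer -- that is what produces the eye-adaptation effect. Bloom is
        // then an extra pass: threshold the bright pixels, blur, and add back.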

    Read the article

  • How to disable discrete GPU using NVIDIA drivers?

    - by penzoiders
    I have a DELL Studio XPS 13 (aka 1340). As of 12.04 most things run smoothly out of the box, but I have some power-drain and warmness issues (if they are not to be called terrible heat issues). The system came with an NVIDIA GeForce 9500M (which has Hybrid SLI), and it shows up in "lspci" as these two cards:

        02:00.0 VGA compatible controller: NVIDIA Corporation G98 [GeForce 9200M GS] (rev a1)
        03:00.0 VGA compatible controller: NVIDIA Corporation C79 [GeForce 9400M G] (rev b1)

    I had to install nvidia-current over the nouveau driver because nouveau freezes the system after suspension. Installing nvidia-current and running nvidia-xconfig fixes the resume-after-suspend process. However, with both nvidia-current and nouveau the system drains a lot of battery and heats up a lot; I suppose this is because the discrete GPU is always on. I don't really need 3D graphics on this system, beyond the minimum to run Unity and Compiz for window management. So my question is: how do I disable, while using nvidia-current, the discrete 9200M GPU and use only the integrated 9400M? Notes:

    - In the BIOS I have no option to disable the discrete GPU.
    - This, I think, is not applicable because of the suspend-freeze issue (with nouveau): https://help.ubuntu.com/community/HybridGraphics
    - I've found this, but I don't know which --sli option I should choose to fit my needs: http://manpages.ubuntu.com/manpages/hardy/man1/nvidia-xconfig.1.html
    - My system does not have Optimus or CUDA, but can anyone tell me if bumblebee would work for me?
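
    One mechanism worth trying (an assumption, not a verified fix for this model): an explicit Device section in /etc/X11/xorg.conf that binds X to the integrated 9400M by PCI bus ID, so the discrete 9200M is never initialised by the X server:

        # Hypothetical xorg.conf fragment: bind X to the integrated GPU only.
        # BusID must match lspci ("03:00.0" -> "PCI:3:0:0").
        Section "Device"
            Identifier "IntegratedGPU"
            Driver     "nvidia"
            BusID      "PCI:3:0:0"
        EndSection

    Whether the 9200M is then also powered down (as opposed to merely unused) depends on the platform; on pre-Optimus Hybrid SLI hardware that part may need vendor tools or ACPI tricks.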

    Read the article

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:

    - I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time; apparently varying the opacity in this manner can degrade drawing performance.
    - I'm rendering at only 30 fps (via NSTimer); perhaps 2D games like iFighter are rendered at a higher frame rate?
    - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
    - All of my textures are PNG24; perhaps PNG8 would help (due to smaller size etc.)?
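
    On the update side, a fixed-timestep loop decouples the 100 Hz state updates from however fast frames are drawn, and interpolating between the last two states keeps motion smooth even at 30 fps. Below is a platform-neutral sketch (updateGameState and render are hypothetical names); on iOS specifically, driving the render from CADisplayLink rather than NSTimer aligns draws with the display refresh, which is often the visible difference:

        #include <chrono>

        static double now() {   // seconds from a monotonic clock
            using namespace std::chrono;
            return duration<double>(steady_clock::now().time_since_epoch()).count();
        }

        const double DT = 1.0 / 100.0;   // 100 Hz, matching the accelerometer rate
        double accumulator = 0.0;
        double previousTime = now();
        bool running = true;

        while (running) {
            double t = now();
            accumulator += t - previousTime;
            previousTime = t;

            while (accumulator >= DT) {
                updateGameState(DT);          // hypothetical: advance one fixed tick
                accumulator -= DT;
            }
            double alpha = accumulator / DT;  // 0..1 between the last and next tick
            render(alpha);                    // hypothetical: draw interpolated state
        }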

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a game engine: I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single player for now. The programming language is C++ and it uses OpenGL.

    Data-centred design decision: My design decision is to use a data-centred architecture with a global event manager and a global data manager. There are many components: physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to use a relational database: Should I use an SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content: items, characters, inventories, ... (except meshes and textures, which are even more performance-critical, so I will keep them in memory). Is an SQL database fast enough for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages would be: The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation, and that I would not need objects for representing entities. I could easily query data for objects near the player, as needed for rendering, and I wouldn't have to care about data for objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there would already be a single place where the whole game state is stored.
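
    For scale, here is a hedged sketch of what per-entity reads look like through the SQLite C API (the likely embedded, cross-platform choice here); the schema and the character id are invented:

        #include <sqlite3.h>
        #include <cstdio>

        int main() {
            sqlite3* db = nullptr;
            if (sqlite3_open("game.db", &db) != SQLITE_OK) return 1;

            // Read one character's position (hypothetical schema)
            sqlite3_stmt* stmt = nullptr;
            const char* sql = "SELECT x, y, z FROM characters WHERE id = ?;";
            sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr);
            sqlite3_bind_int(stmt, 1, 42);

            if (sqlite3_step(stmt) == SQLITE_ROW) {
                double x = sqlite3_column_double(stmt, 0);
                double y = sqlite3_column_double(stmt, 1);
                double z = sqlite3_column_double(stmt, 2);
                std::printf("pos: %f %f %f\n", x, y, z);
            }
            sqlite3_finalize(stmt);
            sqlite3_close(db);
            return 0;
        }

    Even with prepared statements, a round-trip like this per entity per frame would dominate a 16 ms frame budget, so the common compromise is to keep hot data (positions, velocities) in plain arrays and persist to SQLite at checkpoints or on change, rather than treating the database as the live game state.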

    Read the article

< Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >