Search Results


  • Infinite detail inside Perlin noise procedural mapping

    - by Dave Jellison
    I am very new to game development, but I was able to scour the internet and figure out Perlin noise well enough to implement a very simple 2D-tile infinite procedural world. Here's the question, and I think the answer is more conceptual than code-based. I understand the concept of "I plug in (x, y) and get back p from Perlin noise" (I'll call it p). P will always be the same value for the same (x, y) (as long as the Perlin algorithm parameters haven't changed, like altering the number of octaves, et cetera). What I want to do is be able to zoom into a square and generate smaller squares inside the already generated overhead tile of terrain. Let's say I have a jungle tile for overhead terrain, but I want to zoom in and maybe see a small river tile that would only be a creek, not large enough to be a full "big tile" of water in the overhead view. Of course, I want the same net effect as a Perlin equation inside a Perlin equation, if that makes sense (i.e. I want two people playing the game with the same settings to get the same terrain and details every time). I can conceptually wrap my head around the large tile being based on a "zoomed-out" coordinate leaving enough room to drill into, but this approach doesn't make sense in my head (maybe I'm wrong). I'm guessing that with this approach my overhead terrain would lose all of the cohesiveness delivered by the Perlin noise. Imagine I calculate (0, 0) as overhead tile 1 and then, to the east of that, I plug in (50, 0). OK, great, I now have 49 pixels of detail I could then "drill down" into. The issue I have in my head with this approach (without attempting it) is that there's no guarantee from my Perlin noise that (0, 0) would be a good neighbor to (50, 0), as they could have wildly different "elevations" (p values returned from the Perlin equation) when I generate the overhead map. I think I could use the Perlin noise for the overhead tile and then reuse the p value as a seed for the "detail" level of noise once I zoom in. That would ensure my detail Perlin is always the same configuration for (0, 0), (1, 0), etc. ad nauseam, but I'm not sure whether there are better approaches out there or whether this is a sound approach at all.
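
    One way to realise the "Perlin inside Perlin" idea without the bad-neighbor problem is to keep a single global noise field and simply sample it at finer coordinates when zoomed in: the detail is then deterministic for every player and still coherent across tile borders. A minimal sketch, assuming a hypothetical deterministic noise2(x, y) sampler (the placeholder below exists only so the sketch compiles):

        public class DetailSampler {
            static final int DETAIL = 64; // sub-cells per overhead tile edge (assumed)

            // Overhead value for tile (tx, ty): one sample per tile.
            static double overhead(int tx, int ty) {
                return noise2(tx, ty);
            }

            // Detail value for sub-cell (dx, dy) inside tile (tx, ty): the same
            // global field sampled at fractional coordinates, so sub-cells near a
            // tile border blend coherently with the neighboring tile's detail.
            static double detail(int tx, int ty, int dx, int dy) {
                double x = tx + dx / (double) DETAIL;
                double y = ty + dy / (double) DETAIL;
                // Add a higher-frequency octave for fine features (creeks, etc.).
                return 0.7 * noise2(x, y) + 0.3 * noise2(x * 8.0, y * 8.0);
            }

            // Stand-in so the sketch compiles; swap in a real Perlin implementation.
            static double noise2(double x, double y) {
                return Math.sin(x * 12.9898 + y * 78.233) * 0.5; // placeholder only
            }
        }

    Seeding a second noise instance with the overhead p value, as the question suggests, would also be deterministic, but the detail would then be discontinuous at tile edges; sampling one field at several frequencies avoids that.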

    Read the article

  • T4Toolbox and Visual Studio 2010

    - by Ben Griswold
    I’ve been using the T4Toolbox to help generate my ASP.NET MVC models and scaffolding for a while now.  Another developer tried using my generator project last week and ran into trouble due to a breaking change around the RenderCore() and TransformText() methods, made to support VS 2010.  If you upgraded to the latest version of T4Toolbox and receive a build error similar to the following, you are probably in the same boat: GeneratedTextTransformation.[Template].RenderCore(): no suitable method found to override. We took the easy way out.  I had him uninstall the latest version of T4Toolbox and install version 9.7.25.1, which my templates were initially coded against.  For now, that worked great, but it sounds like I’ll be doing some rework of the 20+ templates in my project to support Visual Studio 2010 when we migrate later this month.

    Read the article

  • Right approach to convert a word document that contains forms in a web app

    - by carlo
    I would like to know if someone can suggest a good approach for converting a Word document that contains forms into a web app, specifically an application built with WaveMaker (but I'm also curious about a general approach not strictly dependent on the technology I have mentioned). For example, if I have a page in a Word document that maps to the fields of a user entity, what would be my "programmer approach" to converting it without much copy-paste, using a dynamic methodology?

    Read the article

  • Design Code Outside of an IDE (C#)?

    - by ryanzec
    Does anyone design code outside of an IDE? I think code design is great and all, but the only place I find myself actually designing code (besides in my head) is in the IDE itself. I generally think about it a little beforehand, but when I go to type it out, it is always in the IDE; no UML or anything like that. Now, I think having UML of your code is really good because you are able to see a lot more of the code on one screen; however, the issue I have is that once I express it in UML, I then have to type the actual code, and that is just a big duplication of effort for me. For those who work with C# and design code outside of Visual Studio (or at least outside Visual Studio's text editor), what tools do you use? Do those tools allow you to convert your design to actual skeleton code? Is it also possible to convert code back to the design (when you update the code and need an updated UML diagram or whatnot)?

    Read the article

  • Atmospheric scattering sky from space artifacts

    - by ollipekka
    I am in the process of implementing atmospheric scattering for planets viewed from space. I have been using Sean O'Neil's shaders from http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html as a starting point. I have pretty much the same problem related to fCameraAngle, except with the SkyFromSpace shader, as opposed to the GroundFromSpace shader, as described here: http://www.gamedev.net/topic/621187-sean-oneils-atmospheric-scattering/ I get strange artifacts with the sky-from-space shader when not using fCameraAngle = 1 in the inner loop. What is the cause of these artifacts? The artifacts disappear when fCameraAngle is limited to 1. I also seem to lack the hue that is present in O'Neil's sandbox (http://sponeil.net/downloads.htm). Camera position X=0, Y=0, Z=500: GroundFromSpace on the left, SkyFromSpace on the right. Camera position X=500, Y=500, Z=500: GroundFromSpace on the left, SkyFromSpace on the right. I've found that the camera angle seems to be handled very differently depending on the source. In the original shaders, the camera angle in SkyFromSpace is calculated as: float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight; whereas in the GroundFromSpace shader the camera angle is calculated as: float fCameraAngle = dot(-v3Ray, v3Pos) / length(v3Pos); However, various sources online tinker with negating the ray. Why is this? Here is a C# Windows.Forms project that demonstrates the problem and that I've used to generate the images: https://github.com/ollipekka/AtmosphericScatteringTest/ Update: I have found out from the ScatterCPU project on O'Neil's site that the camera ray is negated when the camera is above the point being shaded, so that the scattering is calculated from the point to the camera. Changing the ray direction does indeed remove the artifacts, but introduces other problems, as illustrated here: Furthermore, in the ScatterCPU project, O'Neil guards against situations where the optical depth for light is less than zero: float fLightDepth = Scale(fLightAngle, fScaleDepth); if (fLightDepth < float.Epsilon) { continue; } As pointed out in the comments, along with these new artifacts this still leaves the question: what is wrong with the images where the camera is positioned at (500, 500, 500)? It feels like the halo is focused on completely the wrong part of the planet. One would expect the light to be closer to the spot where the sun hits the planet, rather than where it changes from day to night. The github project has been updated to reflect the changes in this update.
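
    To make the update's description concrete, here is a minimal sketch of the two behaviours it describes: negating the ray when the camera sits above the point being shaded, and skipping samples whose optical depth for light would not be positive. This is plain Java, not O'Neil's actual shader code; the scale() polynomial is his analytic approximation reproduced from memory, and the helpers are local stand-ins:

        public class ScatterSketch {
            static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
            static double length(double[] a) { return Math.sqrt(dot(a, a)); }

            // Camera angle at a sample point; the ray is negated when the camera
            // is farther from the planet centre than the sample point, so the
            // scattering integral runs from the point toward the camera.
            static double cameraAngle(double[] ray, double[] samplePoint, double[] cameraPos) {
                double sign = length(cameraPos) > length(samplePoint) ? -1.0 : 1.0;
                double[] r = { sign * ray[0], sign * ray[1], sign * ray[2] };
                return dot(r, samplePoint) / length(samplePoint);
            }

            // The ScatterCPU guard: skip a sample when the light's optical depth
            // is not positive (1e-6 mirrors the float.Epsilon check above).
            static boolean skipSample(double lightAngle, double scaleDepth) {
                return scale(lightAngle, scaleDepth) < 1e-6;
            }

            static double scale(double cos, double scaleDepth) {
                double x = 1.0 - cos;
                return scaleDepth * Math.exp(-0.00287 + x * (0.459 + x * (3.83 + x * (-6.80 + x * 5.25))));
            }
        }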

    Read the article

  • Why is permadeath essential to a roguelike design?

    - by Gregory Weir
    Roguelikes and roguelike-likes (Spelunky, The Binding of Isaac) tend to share a number of game design elements:
    - Procedurally generated worlds
    - Character growth by way of new abilities and powers
    - Permanent death
    I can understand why starting with permadeath as a premise would lead you to the other ideas: if you're going to be starting over a lot, you'll want variety in your experiences. But why do the first two elements imply a permadeath approach?

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    Hi. I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). "Pseudo-infinite" is, I think, the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading logic, and have one patch of noise generate each region. But this would result in huge amounts of islands. On the other extreme, I don't think I can realistically generate one supermassive sheet of Perlin noise; and it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking. And you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D -TheCommieDuck
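
    For what it's worth, the region idea stops producing isolated islands if every region samples the same global noise field at its own world-space offsets, instead of owning an independent patch of noise: adjacent regions then agree along their borders, and no supermassive sheet ever has to exist in memory. A minimal sketch, assuming a hypothetical deterministic noise2(x, y) sampler (the placeholder below exists only so the sketch compiles):

        public class ChunkedWorld {
            static final int CHUNK = 64;         // tiles per chunk edge (assumed)
            static final double FEATURE = 256.0; // world-space size of large features

            // Heights for chunk (cx, cy); adjacent chunks line up because
            // everything is sampled from the same global field.
            static double[][] generateChunk(int cx, int cy) {
                double[][] h = new double[CHUNK][CHUNK];
                for (int y = 0; y < CHUNK; y++) {
                    for (int x = 0; x < CHUNK; x++) {
                        double wx = cx * CHUNK + x; // world coordinates
                        double wy = cy * CHUNK + y;
                        // A couple of octaves: continents plus local roughness.
                        h[y][x] = noise2(wx / FEATURE, wy / FEATURE)
                                + 0.25 * noise2(wx / 32.0, wy / 32.0);
                    }
                }
                return h;
            }

            // Placeholder so the sketch compiles; use a real Perlin/simplex sampler.
            static double noise2(double x, double y) {
                return Math.sin(x * 12.9898 + y * 78.233) * 0.5;
            }
        }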

    Read the article

  • How to keep word document, html and pdf documentation aligned

    - by dendini
    Is there a way to write documentation in a WYSIWYG editor which can then be exported to HTML, Word and PDF, keeping the copies synchronized? This documentation consists mostly of technical notes and some contextual help for software, so it must contain images and some styling; it is not programmer documentation (API or function listings), for which a tool like Javadoc or Doxygen would probably be the best choice. For example, how do companies with hundreds of different software lines and thousands of programmers deal with this? I have several candidate solutions, but they all seem lacking in some respect:
    - LaTeX/TeX: very good PDF and HTML export; not very user-friendly, and no full-blown WYSIWYG editor available.
    - LibreOffice/OpenOffice: a full-blown WYSIWYG editor; however, the HTML export is not so good (the exported HTML needs manual editing and then has to be maintained separately).
    - MediaWiki (or any other wiki): documentation could be kept in wikitext, so the HTML is generated automatically, and PDF export is quite good with the many available plugins. However, the staff would need some training to use it, and a server has to be set up for it.
    Notice I'm not asking for software A vs. software B; I'm asking for general advice, the procedures big companies use for documentation, and, yes, some product names if available.

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper-right grid in the image below). Since the board will never change, and the spaces on the board will always be linked to the same other spaces around them, should I just hard-code every space with the values that I need? Or should I use algorithms to calculate links and traversals? To be more specific, my board game is a 4-player game where each player has a 5x5x5x5x5x5 hex grid, i.e. a hexagon with five cells along each of its six sides (again, the upper-right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack other players from the edge of their grid based on a range multiplier. Since a player's grid will never change, and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard-code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard-coding everything is that I'd be coding 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else I'm missing that should make me consider using more complex algorithms?
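
    For what it's worth, a breadth-first search over 61 cells is cheap enough to run once at startup and cache, which gives hard-coded lookup speed without hand-typing 61 values. A minimal sketch, assuming cells are stored as axial hex coordinates with a neighbours() helper (both of which are assumptions about the representation):

        import java.util.ArrayDeque;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Queue;

        public class HexDistances {
            // Axial hex coordinate (q, r); a record keeps the sketch short.
            record Hex(int q, int r) {
                List<Hex> neighbours() {
                    return List.of(new Hex(q + 1, r), new Hex(q - 1, r),
                                   new Hex(q, r + 1), new Hex(q, r - 1),
                                   new Hex(q + 1, r - 1), new Hex(q - 1, r + 1));
                }
            }

            // BFS from the set of edge cells; returns distance-from-edge for every
            // reachable cell of the board. Run once, then cache the map.
            static Map<Hex, Integer> distancesFromEdge(List<Hex> edgeCells, List<Hex> board) {
                Map<Hex, Integer> dist = new HashMap<>();
                Queue<Hex> queue = new ArrayDeque<>();
                for (Hex e : edgeCells) { dist.put(e, 0); queue.add(e); }
                while (!queue.isEmpty()) {
                    Hex cur = queue.remove();
                    for (Hex n : cur.neighbours()) {
                        if (board.contains(n) && !dist.containsKey(n)) {
                            dist.put(n, dist.get(cur) + 1);
                            queue.add(n);
                        }
                    }
                }
                return dist;
            }
        }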

    Read the article

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler 'proxy' mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone, which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point, rather on distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, a single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed), as if each vertex were linked to a point on the plane created by the attached control points and deformed with it. I have tried implementing the weight-computation algorithm in this paper (page 4), but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin'* an arbitrary mesh to another arbitrary mesh? *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.
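
    As a reference point, the "deform linearly with the control points" requirement is essentially linear blend skinning with the proxy vertices acting as bones: once the weights are chosen, each mesh vertex simply follows the weighted sum of its control points' displacements. A minimal translation-only sketch (the weight computation, the hard part the paper addresses, is assumed already done):

        public class ProxySkin {
            // Deform one vertex: restPos plus the weighted sum of how far each
            // attached control point has moved from its own rest position.
            // weights[i] pairs with controlRest[i]/controlNow[i] and the weights
            // are assumed normalized (summing to 1).
            // Translation-only: a full implementation would blend whole transforms
            // rather than plain offsets, but the linearity is the same.
            static double[] deform(double[] restPos, double[][] controlRest,
                                   double[][] controlNow, double[] weights) {
                double[] out = { restPos[0], restPos[1], restPos[2] };
                for (int i = 0; i < weights.length; i++) {
                    for (int k = 0; k < 3; k++) {
                        out[k] += weights[i] * (controlNow[i][k] - controlRest[i][k]);
                    }
                }
                return out;
            }
        }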

    Read the article

  • Stack Overflow Error

    - by dylanisawesome1
    I recently created a recursive cave algorithm, and would like to have more extensive caves, but get a stack overflow after recursing a couple of times. Any advice? Here's my code:

        for (int i = 0; i < 100; i++) {
            int rand = new Random().nextInt(100);
            if (rand <= 20) {
                if (curtile.bounds.y - 40 > 500 + new Random().nextInt(20))
                    digDirection(Direction.UP);
            }
            if (rand <= 40 && rand > 20) {
                if (curtile.bounds.y + 40 < m.height)
                    digDirection(Direction.DOWN);
            }
            if (rand <= 60 && rand > 40) {
                if (curtile.bounds.x - 40 > 0)
                    digDirection(Direction.LEFT);
            }
            if (rand <= 80 && rand > 60) {
                if (curtile.bounds.x + 40 < m.width)
                    digDirection(Direction.RIGHT);
            }
        }

        public void digDirection(Direction d) {
            if (new Random().nextInt(100) <= 10) {
                new Miner(curtile, map); // each Miner runs the loop above again, recursively
                // try {
                //     Thread.sleep(2);
                // } catch (InterruptedException e) {
                //     e.printStackTrace();
                // }
                // Tried this to avoid the stack overflow. Didn't work.
            }
        }
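
    Sleeping cannot help here, because the deep stack frames are still live while the thread sleeps: each new Miner is constructed inside the previous one's call. A hedged way around it (a sketch, not a drop-in replacement for the classes above, whose fields are guessed at) is to make the digging iterative: keep a queue of pending dig sites and loop until it is empty, so the call depth stays constant no matter how large the cave grows.

        import java.util.ArrayDeque;
        import java.util.Queue;
        import java.util.Random;

        public class IterativeMiner {
            record Point(int x, int y) {}

            static final Random RNG = new Random(); // one shared Random, not one per call

            // Dig from a start point; instead of recursing, push future dig sites
            // onto a work queue. The Java call stack never grows.
            static void dig(Point start, int maxSites) {
                Queue<Point> pending = new ArrayDeque<>();
                pending.add(start);
                int dug = 0;
                while (!pending.isEmpty() && dug < maxSites) {
                    Point cur = pending.remove();
                    dug++;
                    // carve(cur) would mark this tile as empty here.
                    if (RNG.nextInt(100) <= 10) { // same 10% "spawn a miner" chance as above
                        pending.add(new Point(cur.x() + 40, cur.y()));
                    }
                    int dir = RNG.nextInt(100);
                    if (dir <= 25)      pending.add(new Point(cur.x(), cur.y() - 40));
                    else if (dir <= 50) pending.add(new Point(cur.x(), cur.y() + 40));
                    else if (dir <= 75) pending.add(new Point(cur.x() - 40, cur.y()));
                    else                pending.add(new Point(cur.x() + 40, cur.y()));
                }
            }
        }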

    Read the article

  • Dynamic Environment Creation

    - by Jack
    I was wondering (I'm thinking on a smaller-scale, abstracted level) how one creates a dynamic environment a la Minecraft. Specifically, I'm thinking of the world as a 3-dimensional array of block objects; how is it made so that large features such as oceans are created? The language isn't important, I'm thinking on a conceptual level, but if it helps, I use C# or C++. Thanks for any help!
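
    As one conceptual sketch of how a large feature like an ocean can fall out of a block array: generate a heightmap first, then fill the array column by column, placing water in any cell that is above the ground but below a fixed sea level. The block IDs and the noise sampler below are assumptions made for illustration:

        public class BlockWorld {
            static final int AIR = 0, STONE = 1, WATER = 2; // assumed block ids
            static final int SIZE = 128, HEIGHT = 64, SEA_LEVEL = 30;

            static int[][][] generate() {
                int[][][] blocks = new int[SIZE][HEIGHT][SIZE];
                for (int x = 0; x < SIZE; x++) {
                    for (int z = 0; z < SIZE; z++) {
                        // Terrain height for this column; oceans appear wherever
                        // the heightmap dips below SEA_LEVEL.
                        int ground = (int) (HEIGHT * 0.5 + 20 * noise2(x / 40.0, z / 40.0));
                        for (int y = 0; y < HEIGHT; y++) {
                            if (y <= ground)          blocks[x][y][z] = STONE;
                            else if (y <= SEA_LEVEL)  blocks[x][y][z] = WATER;
                            else                      blocks[x][y][z] = AIR;
                        }
                    }
                }
                return blocks;
            }

            static double noise2(double x, double z) { // placeholder coherent noise
                return Math.sin(x) * Math.cos(z) * 0.5;
            }
        }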

    Read the article

  • documentation of typescript code

    - by Max Beikirch
    My question is rather short: how do I document TypeScript code properly? I have found that as projects become bigger and bigger, it is important to be able to look at a function and immediately know its parameters, what it returns, its side effects, etc. It is tiring to have just a bunch of comments before a function; most of the time these 'blocks' don't even look consistent in style. What I am looking for is a documentation tool like Javadoc or Doxygen for TypeScript. Is there anything out there? Or is it possible to 'abuse' an existing documentation tool and get it to work with TypeScript?

    Read the article

  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly generated non-player characters in a database. I can pull a lot of information about them - their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed as nicely as possible in text, but I believe it's worth generating these faces for a more... human experience. Problem is, I'm no artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, that solution wasn't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?

    Read the article

  • What is the best way to "carve" a terrain created from a heightmap?

    - by tigrou
    I have a 3D landscape created from a heightmap. I'd like to "carve" some holes in that terrain, which will allow me to create bridges, caverns and tunnels inside it. The operation will be done in the game editor, so it doesn't need to be real-time. In the end, rendering is done using traditional polygons. What would be the best/easiest way to do this? I have already thought about several solutions:
    Solution 1
    1) Create voxels from the heightmap (very easy). In other words, fill a 3D array like voxels[32][32][32] from the heightmap values.
    2) Carve holes in the voxels as I want (easy too).
    3) Convert the voxels to polygons using an iso-surface extraction technique (like marching cubes).
    4) Reduce (decimate) the polygons created in 3).
    This technique seems the most promising for giving good results (untested). However, the problem with marching cubes is that it tends to produce lots of polygons, so reducing them is mandatory. Implementing 4) also seems non-trivial; I have read several papers on the web and it seems pretty complex. I was also unable to find an example, code snippet or anything else to start writing a triangle-mesh decimation algorithm from. Maybe there is a special (simpler) decimation algorithm for meshes created from marching cubes?
    Solution 2
    1) Create a triangle mesh from the heightmap (easy).
    2) Apply several 3D boolean operations (e.g. subtraction with a sphere) to carve the mesh.
    3) Apply some procedure to reduce polygons (optional).
    Operation 2) seems very complex and, to be honest, I have no idea how to do it. Applying many boolean operations also seems slow, and may degrade the triangle mesh each time a boolean operation is applied.
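
    Steps 1) and 2) of Solution 1 are short enough to sketch. A hedged version, assuming the heightmap is a normalized 2D array; the iso-surface extraction of step 3), which is the genuinely hard part, is not shown:

        public class VoxelCarve {
            static final int N = 32; // grid resolution, matching voxels[32][32][32]

            // Step 1: solid below the heightmap surface, empty above.
            static boolean[][][] fromHeightmap(double[][] height) { // heights in [0, 1]
                boolean[][][] v = new boolean[N][N][N];
                for (int x = 0; x < N; x++)
                    for (int z = 0; z < N; z++) {
                        int top = (int) (height[x][z] * (N - 1));
                        for (int y = 0; y <= top; y++) v[x][y][z] = true;
                    }
                return v;
            }

            // Step 2: carve a spherical hole (a tunnel is several of these in a row).
            static void carveSphere(boolean[][][] v, int cx, int cy, int cz, int r) {
                for (int x = 0; x < N; x++)
                    for (int y = 0; y < N; y++)
                        for (int z = 0; z < N; z++) {
                            int dx = x - cx, dy = y - cy, dz = z - cz;
                            if (dx * dx + dy * dy + dz * dz <= r * r) v[x][y][z] = false;
                        }
            }
        }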

    Read the article

  • Should generated documentation go in version control history?

    - by dukeofgaming
    I'm against compiled artifacts going into version control, especially compiled binaries; however, my principles are now in question after adding Doxygen support to a project. Should the hundreds of files generated by Doxygen go into version control? What is the recommended practice here? I think the ideal would be automating the process on a server that publishes the documentation at the same time; however, there is no such server now, nor will there be for some time.

    Read the article

  • Better way to generate enemies of different sub-classes

    - by KDiTraglia
    So let's pretend I have an enemy class that has some generic implementation, and inheriting from it I have all the specific enemies of my game. There are points in my code where I need to check whether an enemy is a specific type, but in Java I have found no easier way than this monstrosity...

        // Must be a better way to do this
        if (enemy.getClass().isAssignableFrom(Ninja.class)) { ... }

    My partner on the project saw these and changed them to use an enum system instead:

        public class Ninja extends Enemy {
            // EnemyType is an enum containing all our enemy types
            public EnemyType type = EnemyType.NINJA;
        }

        if (enemy.type == EnemyType.NINJA) { ... }

    I also have found no way to generate enemies with varying probabilities besides this:

        for (EnemyType type : enemyTypes) {
            if ((randomNext = (randomNext - type.getFrequency())) < 0) {
                enemy = createEnemy(type);
                break;
            }
        }

        private static Enemy createEnemy(EnemyType type) {
            switch (type) {
                case NINJA:
                    return new Ninja(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case GORILLA:
                    return new Gorilla(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case TREX:
                    return new TRex(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                // etc.
            }
            return null;
        }

    I know Java is a little weak at dynamic object creation, but is there a better way to implement this, something like:

        for (EnemyType type : enemyTypes) {
            if ((randomNext = (randomNext - type.getFrequency())) < 0) {
                // Change enemyTypes to hold the classes of the enemies I can spawn
                enemy = type.getEnemyClass().newInstance();
                break;
            }
        }

    Is the above possible? How would I declare enemyTypes to hold the classes if so? Everything I have tried so far has generated compile errors and general frustration, but I figured I might ask here before I completely give up to the huge mass that is the createEveryEnemy() method. All the enemies do inherit from the Enemy class (which is what the enemy variable is declared as). Also, is there a way to check the type of a particular enemy that is shorter than enemy.getClass().isAssignableFrom(Ninja.class)? I'd like to ditch the enums entirely if possible, since they seem redundant when the class name itself holds that information.
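
    For the record, the "hold the classes" idea is expressible in Java: an enum constant can carry a Class<? extends Enemy>, or, more flexibly, a factory, which removes both the switch statement and the reflection. A minimal sketch with the constructor arguments simplified away (the names here are illustrative, not the actual classes above); note too that the shorter type check asked about at the end is Java's built-in instanceof operator, as in if (enemy instanceof Ninja):

        import java.util.Random;
        import java.util.function.Supplier;

        public class Spawning {
            static final Random RAND = new Random();

            static class Enemy {}
            static class Ninja extends Enemy {}
            static class Gorilla extends Enemy {}

            // Each constant carries its spawn frequency and a factory, so no
            // switch statement and no reflection are needed.
            enum EnemyType {
                NINJA(40, Ninja::new),
                GORILLA(60, Gorilla::new);

                final int frequency;
                final Supplier<Enemy> factory;

                EnemyType(int frequency, Supplier<Enemy> factory) {
                    this.frequency = frequency;
                    this.factory = factory;
                }
            }

            // Weighted pick: walk the constants, subtracting frequencies,
            // exactly like the loop in the question (frequencies sum to 100).
            static Enemy spawn() {
                int roll = RAND.nextInt(100);
                for (EnemyType t : EnemyType.values()) {
                    if ((roll -= t.frequency) < 0) return t.factory.get();
                }
                return EnemyType.values()[0].factory.get(); // defensive fallback
            }
        }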

    Read the article

  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (well, since making Breakout in 2003 with Visual Basic, anyway)?
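
    One classic fit for this kind of side-view terrain is one-dimensional midpoint displacement: start with two endpoint heights, repeatedly insert midpoints offset by a random amount, and shrink the offset range each pass so the peaks and troughs stay sane. A hedged sketch, written in Java rather than JavaScript, though it translates line for line:

        import java.util.Random;

        public class Terrain1D {
            // Returns (2^steps + 1) heights for a Scorched Earth style skyline.
            static double[] midpointDisplacement(int steps, double roughness, Random rng) {
                int n = (1 << steps) + 1;
                double[] h = new double[n];
                h[0] = rng.nextDouble();      // left endpoint
                h[n - 1] = rng.nextDouble();  // right endpoint
                double range = 0.5;           // max random offset this pass
                for (int step = n - 1; step > 1; step /= 2) {
                    for (int i = 0; i + step < n; i += step) {
                        int mid = i + step / 2;
                        // midpoint = average of its neighbours plus shrinking jitter
                        h[mid] = (h[i] + h[i + step]) / 2.0
                               + (rng.nextDouble() * 2.0 - 1.0) * range;
                    }
                    range *= roughness; // e.g. 0.5 halves the jitter every pass
                }
                return h;
            }
        }

    Scaling each h[i] to the canvas height and filling every pixel column below it gives the Scorched Earth look.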

    Read the article

  • Is BDD actually writable by non-programmers?

    - by MattiSG
    Behavior-Driven Development, with its emblematic “Given-When-Then” scenario syntax, has lately been quite hyped for its possible uses as a boundary object for software functionality assessment. I definitely agree that Gherkin, or whichever feature-definition script you prefer, is a business-readable DSL, and already provides value as such. However, I disagree that it is writable by non-programmers (as does Martin Fowler). Does anyone have accounts of scenarios being written by non-programmers, then instrumented by developers? If there is indeed a consensus on the lack of writability, would you see a problem with a tool that, instead of starting with the scenarios and instrumenting them, generated business-readable scenarios from the actual tests?

    Read the article

  • Procedural Mesh: UV mapping

    - by Esa
    I made a procedural mesh and now I want to apply a texture to it. The problem is, I cannot get it to stick the way I want it to. The idea is to have the texture painted only once over the whole mesh, so that there is no repeating. How should I map the UVs to make that happen? My mesh is a simple plane consisting of 56 triangles. I'd add pictures to clear things up, but I cannot since my reputation is below 10 points. Any help is appreciated. EDIT (kind people gave me upvotes, thank you): Meet my mesh: And when textured (I tried to repeat the texture): And my texture:
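
    For a flat plane, non-repeating UVs are just the vertex positions normalized into [0, 1] across the whole mesh, so the texture stretches exactly once over it. A hedged sketch with plain arrays standing in for Vector3/Vector2 (and assuming the plane lies in the XZ plane):

        public class PlanarUV {
            // vertices[i] = {x, y, z}; returns uvs[i] = {u, v} in [0, 1],
            // so the texture covers the mesh exactly once.
            static float[][] planarUVs(float[][] vertices) {
                float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
                float minZ = Float.MAX_VALUE, maxZ = -Float.MAX_VALUE;
                for (float[] v : vertices) {
                    minX = Math.min(minX, v[0]); maxX = Math.max(maxX, v[0]);
                    minZ = Math.min(minZ, v[2]); maxZ = Math.max(maxZ, v[2]);
                }
                float[][] uvs = new float[vertices.length][2];
                for (int i = 0; i < vertices.length; i++) {
                    uvs[i][0] = (vertices[i][0] - minX) / (maxX - minX); // u
                    uvs[i][1] = (vertices[i][2] - minZ) / (maxZ - minZ); // v
                }
                return uvs;
            }
        }

    In Unity, the result would be assigned to mesh.uv, and the texture's wrap mode should be Clamp rather than Repeat so nothing tiles at the borders.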

    Read the article

  • Character movement on a 2D tile map

    - by Chris Morris
    I'm working on making an HTML5 game. Top-down; the closest thing I can equate it to is the Game Boy Zeldas, but open-world and with no rooms. What I have so far is a procedurally generated map in a multidimensional array, and a starting position on the map. Along with this I have an array of walkable and non-walkable tile IDs. I also have a class for my player, and have him rendered in the center of the starting tile. My problem, however, is getting the movement sorted out for the player. I want the character to move freely around the map (pixel by pixel, essentially) on top of this 2D generated world. Ideally this would allow the user to move around the walkable area of the canvas; this is simple enough for me to do, but I am having problems with moving the world. If the user is within 20% of the edge of the screen, I want the world to start panning in the direction the player is heading. But I'm rather lacking in ideas of how to do this. I've looked around for some tutorials, but am coming up blank on how to generate the playable area (zoomed in) and then move this generated area under the player when they reach near the edge of the screen. My current idea was to generate a certain number of tiles at full size to fill the screen and place the player in the middle. Then, when the user approaches the edge of the screen, start generating the tiles offset by the distance moved and the direction. I can kind of see this working, but I really have no idea if this is the best or easiest-to-code method for generating the world. Sorry for the lack of code, but I'm still just in the theory stages of working this all out.
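
    What is being described is usually implemented as a scrolling camera with an edge margin: keep a camera offset in world pixels, and when the player's on-screen position crosses into the outer 20% band, move the offset instead of the sprite; everything is then drawn at its world position minus the camera offset. A minimal per-axis sketch (in Java for illustration; the logic ports directly to JavaScript):

        public class ScrollCamera {
            double camX, camY;         // top-left of the view, in world pixels
            final int viewW, viewH;    // canvas size in pixels
            final double margin = 0.2; // start panning 20% from the edge

            ScrollCamera(int viewW, int viewH) { this.viewW = viewW; this.viewH = viewH; }

            // Call every frame after moving the player (world coordinates).
            void follow(double playerX, double playerY) {
                camX = pan(camX, playerX, viewW);
                camY = pan(camY, playerY, viewH);
            }

            private double pan(double cam, double player, int view) {
                double lo = cam + view * margin;       // inner edge of the left/top band
                double hi = cam + view * (1 - margin); // inner edge of the right/bottom band
                if (player < lo) cam -= lo - player;   // push the view toward the player
                if (player > hi) cam += player - hi;
                return cam;
            }
            // Draw a tile at (tileWorldX - camX, tileWorldY - camY); only tiles
            // whose screen position falls inside the canvas need rendering, and
            // the offset can additionally be clamped to the map bounds.
        }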

    Read the article

  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months. I have been creating a planet generator of sorts, and after more than six months of work I am no closer to finishing it than I was four months ago. My problem: the terrain does not subdivide in the correct locations; it almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 The best example of the problem occurs around 0:36. For detail limiting, I am going for a chunked-LOD approach, which subdivides the terrain based on how far away you are from it. I use a "depth table" to determine how many subdivisions should take place:

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of the quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad:

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }

            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);
            if (distance < tree[depth])
            {
                return true;
            }
            return false;
        }

    The root quad has four children, which could be visualized like the following:

        [] []
        [] []

    where each [] is a child. Each child has the same number of children, up until the detail limit; quads six iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh; each Mesh structure holds 16x16 Quad-shapes, and each Mesh's Quad-shapes halve in size at each deeper detail level, creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the Quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad. I feel that it may play a role in the problem, but I can't determine what is causing it:

        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);
            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                        case YIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                            break;
                        case YDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                            break;
                        case XIncreasing:
                            vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case XDecreasing:
                            vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case ZIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                            break;
                        case ZDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                            break;
                    }

                    // Reposition the bottom, right, front vertex of the cube from (0, 0, 0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
                case YIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                    break;
                case YDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                    break;
                case XIncreasing:
                    this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case XDecreasing:
                    this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case ZIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                    break;
                case ZDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                    break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.

    Read the article

  • What tools are available to generate end user documentation?

    - by Rowland Shaw
    End-user documentation on how to use applications is an important part of the user experience, irrespective of whether they are WinForms, WPF or even ASP applications. In a startup or internal development team, where there isn't a dedicated documentation department, it can take a lot of resources to maintain screenshots and the associated user documentation, such as online help or even printable manuals. What tools are available to assist in creating screenshots of all the "screens" within an application (be they WinForms, WPF or ASPX), to help automate the capture of screenshots and associate them with the relevant documentation? In addition, are there any that allow automated annotation of a particular control (for use cases like: draw a red box around the Username control with a callout saying "This is where you'd enter your user name, in the form '[email protected]'")?

    Read the article

  • Algorithm for creating spheres?

    - by Dan the Man
    Does anyone have an algorithm for procedurally creating a sphere with la latitude lines, lo longitude lines, and a radius of r? I need it to work with Unity, so the vertex positions need to be defined and then the triangles defined via indices (more info). EDIT: I managed to get the code working in Unity. But I think I might have done something wrong. When I turn up the detailLevel, all it does is add more vertices and polygons without moving them around. Did I forget something?
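
    The standard construction is a UV sphere: loop over latitude rings and longitude segments, place each vertex from spherical coordinates, then stitch neighbouring rings into two triangles per quad. A hedged sketch in plain Java (Unity would take the same positions and indices into a Mesh, possibly with the winding flipped):

        public class UvSphere {
            // Packed positions (x, y, z per vertex) for (la+1)*(lo+1) vertices.
            static float[] vertices(int la, int lo, float r) {
                float[] v = new float[(la + 1) * (lo + 1) * 3];
                int i = 0;
                for (int lat = 0; lat <= la; lat++) {
                    double theta = Math.PI * lat / la;         // 0 at north pole, PI at south
                    for (int lon = 0; lon <= lo; lon++) {
                        double phi = 2.0 * Math.PI * lon / lo; // around the equator
                        v[i++] = (float) (r * Math.sin(theta) * Math.cos(phi));
                        v[i++] = (float) (r * Math.cos(theta));
                        v[i++] = (float) (r * Math.sin(theta) * Math.sin(phi));
                    }
                }
                return v;
            }

            // Two triangles per lat/lon quad, as indices into the array above.
            static int[] indices(int la, int lo) {
                int[] idx = new int[la * lo * 6];
                int i = 0;
                for (int lat = 0; lat < la; lat++) {
                    for (int lon = 0; lon < lo; lon++) {
                        int a = lat * (lo + 1) + lon; // current ring
                        int b = a + lo + 1;           // ring below
                        idx[i++] = a;     idx[i++] = b; idx[i++] = a + 1;
                        idx[i++] = a + 1; idx[i++] = b; idx[i++] = b + 1;
                    }
                }
                return idx;
            }
        }

    If raising the detail level only adds vertices without moving them, a common culprit is integer division when computing the angles (lat / la truncates to zero in C-family languages), so make sure those divisions happen in floating point.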

    Read the article
