Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Explaining Asteroids Movement code

    - by Moaz ELdeen
    I'm writing an Atari Asteroids clone, and I want to figure out how the AI for the asteroids is done. I came across this piece of code, but I can't fully work out what it does:

        // 50/50 chance: spawn just off a vertical edge, otherwise just off a horizontal edge.
        if ((float)rand() / (float)RAND_MAX < 0.5)
        {
            // x is set to the left edge, or (another 50% chance) the right edge; y is random.
            m_Pos.x = -app::getWindowWidth() / 2;
            if ((float)rand() / (float)RAND_MAX < 0.5)
                m_Pos.x = app::getWindowWidth() / 2;
            m_Pos.y = (int)((float)rand() / (float)RAND_MAX * app::getWindowWidth());
        }
        else
        {
            // x is random; y is set to the top edge, or (apparently intended) the bottom edge.
            m_Pos.x = (int)((float)rand() / (float)RAND_MAX * app::getWindowWidth());
            m_Pos.y = -app::getWindowHeight() / 2;
            if (rand() < 0.5) // note: looks like a bug -- probably meant (float)rand()/(float)RAND_MAX < 0.5
                m_Pos.y = app::getWindowHeight() / 2;
        }

        // Random velocity up to 2 units per axis, with a 50% chance of flipping each sign.
        m_Vel.x = (float)rand() / (float)RAND_MAX * 2;
        if ((float)rand() / (float)RAND_MAX < 0.5)
        {
            m_Vel.x = -m_Vel.x;
        }
        m_Vel.y = (float)rand() / (float)RAND_MAX * 2;
        if ((float)rand() / (float)RAND_MAX < 0.5)
            m_Vel.y = -m_Vel.y;

    Read the article

  • Strategy to prevent players from seeing through walls in an online FPS?

    - by geneotech
    Why do we still have to put up with wallhackers in multiplayer first-person shooters? Isn't it possible to perform occlusion culling for all players server-side? For example, send a player's x,y,z information to a client only when that player is visible in the client's frustum and not occluded by any object. Even if the collision geometry is very simplified, most of the time a cheater won't receive any tactical information. Why not do this?
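
    One rough way to picture the idea (a hypothetical server-side sketch in C++, not taken from any particular engine; frustumContains, occludedBySimplifiedGeometry and sendUpdate are made-up helpers): each tick, for every viewer, only replicate another player if that player passes a frustum test and a coarse occlusion test against simplified world geometry.

        #include <vector>

        struct Vec3 { float x, y, z; };

        struct Player {
            int  id;
            Vec3 position;
            // view frustum, network connection, etc. omitted
        };

        // Hypothetical helpers, assumed to exist on the server:
        bool frustumContains(const Player& viewer, const Vec3& point);        // frustum test
        bool occludedBySimplifiedGeometry(const Vec3& from, const Vec3& to);  // coarse ray test
        void sendUpdate(Player& viewer, const Player& subject);               // replicate state

        // Called once per server tick: only replicate players the viewer could actually see.
        void replicateVisiblePlayers(std::vector<Player>& players)
        {
            for (Player& viewer : players) {
                for (const Player& other : players) {
                    if (other.id == viewer.id)
                        continue;
                    if (!frustumContains(viewer, other.position))
                        continue;                  // outside the view: never sent
                    if (occludedBySimplifiedGeometry(viewer.position, other.position))
                        continue;                  // behind a wall: never sent
                    sendUpdate(viewer, other);     // visible (or nearly so): send it
                }
            }
        }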

    Read the article

  • Ease Rotate RigidBody2D toward arbitrary angle

    - by Plastic Sturgeon
    I'm trying to make a Rigidbody2D circle return to an orientation after a collision, but there is a behaviour I don't expect: it always orients to the same direction. This is what I call in FixedUpdate():

        rotationdifference = -halfPI + rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    I would expect this to settle 90 degrees (1/2 pi radians) off the neutral axis, but it does not. In fact it behaves exactly the same as:

        rotationdifference = rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    What is going on? How can I set an angle I want the body to ease towards, and then have it ease towards it when it's not colliding with some other force?

    Read the article

  • Behavior tree implementation details

    - by angryInsomniac
    I have been looking around for implementation details of behavior trees. The best descriptions I found were by Alex Champandard, along with some of Damian Isla's talk about the AI in Halo 2 (the video of which is sadly locked up in the GDC vault). However, both descriptions fall short of helping one actually create a BT, and one particular question has been bugging me for a while: when is the tree in a behavior tree evaluated? Furthermore: if the tree is in the middle of executing a sequence of actions (patrolling waypoints) and a higher-priority impulse comes in (a distraction sound), how do you switch to that side of the tree seamlessly without resorting to a state-machine-like system? And if it is decided that the impulse was irrelevant (the distraction is too far away to affect this guard), how do you go back to the last thing the guard was doing? I have quite a few questions like this and I don't wish to flood the board with separate queries, so if you know of any resource where questions like these are answered I would be very grateful.
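
    For what it's worth, one common pattern (a sketch of my own, not something taken from Champandard's or Isla's material) is to re-evaluate a priority selector every AI tick, letting a newly relevant high-priority branch interrupt a running low-priority one, and letting the interrupted branch simply be ticked again (and resume from its own stored state) once the impulse turns out to be irrelevant:

        #include <memory>
        #include <vector>

        enum class Status { Success, Failure, Running };

        struct Node {
            virtual ~Node() = default;
            virtual Status tick() = 0;   // evaluated every AI update
            virtual void   abort() {}    // called when a higher-priority branch takes over
        };

        // Children are ordered by priority. Every tick we start again from the
        // highest-priority child, so a newly relevant branch (e.g. "react to sound")
        // can pre-empt a running lower-priority one (e.g. "patrol waypoints").
        // A sequence child keeps track of its own current step, so it resumes
        // naturally when it is ticked again after being skipped or aborted.
        struct PrioritySelector : Node {
            std::vector<std::unique_ptr<Node>> children; // index 0 = highest priority
            Node* running = nullptr;

            Status tick() override {
                for (auto& child : children) {
                    Status s = child->tick();
                    if (s == Status::Failure)
                        continue;                      // e.g. distraction too far away: fall through
                    if (running && running != child.get())
                        running->abort();              // interrupt the lower-priority branch
                    running = (s == Status::Running) ? child.get() : nullptr;
                    return s;
                }
                running = nullptr;
                return Status::Failure;
            }
        };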

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command:

        glVertexAttribPointer(index, size, type, normalized, stride, pointer);

    That is, when would normalized == GL_TRUE? In what situations, and why would you choose to let the GPU do the conversion instead of preprocessing the data? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).
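
    The usual example (an assumption on my part, not something stated in the question) is keeping attributes in small integer types and letting the GPU expand them: vertex colors stored as unsigned bytes mapped to [0,1], or normals as signed bytes/shorts mapped to [-1,1], which shrinks the vertex and saves bandwidth compared with converting everything to floats up front. A minimal sketch (the Vertex layout and attribute indices are made up; a GL loader is assumed to be set up elsewhere):

        #include <cstddef>   // offsetof
        // include your GL loader header (GLEW, GLAD, ...) before this

        // Hypothetical vertex layout: positions as floats, color packed into 4 unsigned bytes.
        struct Vertex {
            float         px, py, pz;
            unsigned char r, g, b, a;   // 0..255 in memory
        };

        void setupVertexAttributes()
        {
            // Position: already floats, so no normalization.
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, px));
            glEnableVertexAttribArray(0);

            // Color: normalized == GL_TRUE, so 0..255 arrives in the shader as 0.0..1.0.
            glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, r));
            glEnableVertexAttribArray(1);
        }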

    Read the article

  • How to create games with scrolling?

    - by Chandan Shetty SP
    In games like City Story or We Farm, how do they implement scrolling? To do scrolling using a UIScrollView, the EAGLView would have to be bigger, and in those games the scrollable area looks larger than 1024x1024. But there is a limitation on viewport size on iPhone devices (on the iPhone 3G the maximum is 1024), and I played those games on a 3G iPhone and they work fine. Any idea how they implemented their scrolling mechanism?
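
    One way such games could do it (a guess, not knowledge of their actual code): keep the EAGLView at screen size and scroll a camera offset instead, translating the world by that offset each frame and drawing only the tiles that fall inside the screen. A rough GL ES 1.x sketch with made-up names (TILE_SIZE, screenWidth, screenHeight, drawTile):

        #include <OpenGLES/ES1/gl.h>

        const float TILE_SIZE    = 32.0f;   // assumed tile size in pixels
        const float screenWidth  = 320.0f;  // assumed screen dimensions
        const float screenHeight = 480.0f;
        void drawTile(int col, int row);    // hypothetical helper that draws one tile

        // scrollX/scrollY come from touch dragging; the world can be any size,
        // the view stays at screen resolution.
        void drawWorld(float scrollX, float scrollY)
        {
            glPushMatrix();
            glTranslatef(-scrollX, -scrollY, 0.0f);   // move the world, not the view

            // Only draw tiles that intersect the visible rectangle (bounds clamping omitted).
            int firstCol = (int)(scrollX / TILE_SIZE);
            int firstRow = (int)(scrollY / TILE_SIZE);
            int lastCol  = (int)((scrollX + screenWidth)  / TILE_SIZE) + 1;
            int lastRow  = (int)((scrollY + screenHeight) / TILE_SIZE) + 1;

            for (int row = firstRow; row <= lastRow; ++row)
                for (int col = firstCol; col <= lastCol; ++col)
                    drawTile(col, row);

            glPopMatrix();
        }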

    Read the article

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x,y,z in a structure called Point, with float members x, y, z. I want to know how to get this data onto the screen. Here is my attempt at doing so:

        // make a file object and store list and the index of that list in a C string
        ifstream file(list[index].c_str());
        std::vector<int> faces;
        std::vector<Point> points;
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof())
        {
            char modelbuffer[10000];
            // Get lines and store it in line string
            file.getline(modelbuffer, 10000);
            switch (modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    cout << "Getting Vertex Positions" << endl;
                    cout << "v" << p.x << endl;
                    cout << "v" << p.y << endl;
                    cout << "v" << p.z << endl;
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face + 1, face + 2, face + 3);
                    cout << face[0] << endl;
                    cout << face[1] << endl;
                    cout << face[2] << endl;
                    cout << face[3] << endl;
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }

            GLuint vertexbuffer;
            glGenBuffers(1, &vertexbuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
            //glBufferData(GL_ARRAY_BUFFER, sizeof(points), &(points[0]), GL_STATIC_DRAW);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
            glIndexPointer(GL_DOUBLE, 0, faces.data());
            glDrawArrays(GL_QUADS, 0, points.size());
            glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
        }

    As you can see I've clearly failed at the end part, but I really don't know why it's not rendering the data onto the frustum. Does anyone have a solution for this?

    Read the article

  • Bubble shooter search algorithm

    - by Fofole
    So I have an NxM matrix. At a given position (for example [2][5]) I have a value which represents a color; if there is nothing at that point the value is -1. What I need to do, after I add a new point, is check all of its neighbours with the same color value, and if there are more than 2, set them all to -1. In other words, I'm trying to write an algorithm that destroys all the same-colored bubbles on my screen, where the bubbles are stored in a matrix in which -1 means no bubble and {0,1,2,...} means there is a bubble of a specific color. This is what I tried, and it fails:

        public class Testing {
            static private int[][] gameMatrix = {
                {3, 3, 4, 1, 1, 2, 2, 2, 0, 0},
                {1, 4, 1, 4, 2, 2, 1, 3, 0, 0},
                {2, 2, 4, 4, 3, 1, 2, 4, 0, 0},
                {0, 1, 2, 3, 4, 1, 0, 0, 0, 0},
                {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
                {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
            };
            static int Rows = 6;
            static int Cols = 10;
            static int count;
            static boolean[][] visited = new boolean[15][15];
            static int NOCOLOR = -1;
            static int color = 1;

            public static void dfs(int r, int c, int color, boolean set) {
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if (!(dr == 0 && dc == 0) && ok(r + dr, c + dc)) {
                            int nr = r + dr;
                            int nc = c + dc;
                            // if it is the same color and we haven't visited this location before
                            if (gameMatrix[nr][nc] == color && !visited[nr][nc]) {
                                visited[nr][nc] = true;
                                count++;
                                dfs(nr, nc, color, set);
                                if (set) {
                                    gameMatrix[nr][nc] = NOCOLOR;
                                }
                            }
                        }
            }

            static boolean ok(int r, int c) {
                return r >= 0 && r < Rows && c >= 0 && c < Cols;
            }

            static void showMatrix() {
                for (int i = 0; i < gameMatrix.length; i++) {
                    System.out.print("[");
                    for (int j = 0; j < gameMatrix[0].length; j++) {
                        System.out.print(" " + gameMatrix[i][j]);
                    }
                    System.out.println(" ]");
                }
                System.out.println();
            }

            static void putValue(int value, int row, int col) {
                gameMatrix[row][col] = value;
            }

            public static void main(String[] args) {
                System.out.println("Initial Matrix:");
                putValue(1, 4, 1);
                putValue(1, 5, 1);
                showMatrix();
                for (int n = 0; n < 15; n++)
                    for (int m = 0; m < 15; m++)
                        visited[n][m] = false;
                // reset count
                count = 0;
                // dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, false);
                // get the contiguous count
                dfs(5, 1, color, false);
                // if there are more than 2, set the color to NOCOLOR
                for (int n = 0; n < 15; n++)
                    for (int m = 0; m < 15; m++)
                        visited[n][m] = false;
                if (count > 2) {
                    // dfs(bubbles.get(i).getRow(), bubbles.get(i).getCol(), color, true);
                    dfs(5, 1, color, true);
                }
                System.out.println("Matrix after dfs:");
                showMatrix();
            }
        }

    Read the article

  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max into JMonkeyEngine 3. While we have succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using the OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help is appreciated.

    Read the article

  • Numbers not adding up? (What am I not understanding here?) [closed]

    - by Milo
    I have the following output: Short version: the last numbers on the S= lines increase by H and SHOULD theoretically be decreasing linearly, e.g. -285, -290, -295... but the fourth one jumps to -252. Yet every other number changes linearly. Why is that, and how could I fix it? To explain the numbers: they come from the slider's value-changed handler. I have a slider whose value is used to generate the float on the next line. Everything should be growing linearly here. This value is used to determine the size of a flow layout, and it is also used in conjunction with a scrollbar. Basically I have a background for the flow layout, and that number is the start location for rendering it. The numbers should change linearly to create a smooth transition, but when that one jumps, it looks weird on screen, and I don't understand why the numbers jump every X slider value changes. Mathematically, what could be causing this? Here is the code for rendering the background and the function that is called when the value changes:

        void LobbyTableManager::renderBG(GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect)
        {
            float scale = 0.35f;
            int w = m_bgSprite->getWidth() * getTableScale() * scale;
            int h = m_bgSprite->getHeight() * getTableScale() * scale;

            int numX = ceil(absRect.getWidth() / (float)w) + 2;
            int numY = ceil(absRect.getHeight() / (float)h) + 2;

            int startY = childRect.getY();
            int numAttempts = 0;
            while (startY + h < absRect.getY() && numAttempts < 1000)
            {
                startY += h;
                if (moo)
                {
                    std::cout << startY << ",";
                }
                numAttempts++;
            }

            g->holdDrawing();
            for (int i = 0; i < numX; ++i)
            {
                for (int j = 0; j < numY; ++j)
                {
                    g->drawScaledSprite(m_bgSprite, 0, 0, m_bgSprite->getWidth(), m_bgSprite->getHeight(),
                        absRect.getX() + (i * w) + (offsetX), absRect.getY() + (j * h) + startY, w, h, 0);
                }
            }
            g->unholdDrawing();

            g->setClippingRect(cx, cy, cw, ch);
        }

        void LobbyTableManager::setTableScale(float scale)
        {
            scale += 0.3f;
            scale *= 2.0f;
            float scrollRel = m_vScroll->getRelativeValue();
            setScale(scale);
            rescaleTables();
            resizeFlow();
            updateScrollBars();
            float newVal = scrollRel * m_vScroll->getMaxValue();
            m_vScroll->setValue(newVal);
        }

        void LobbyTableManager::valueChanged(agui::VScrollBar* source, int val)
        {
            m_flow->setLocation(0, -val);
        }

    Any insight into why, mathematically, the anomaly might happen every Nth time would be helpful. I just don't understand why, if every number changes linearly, it jumps from -295 to -252! Thanks

    Read the article

  • HLSL: What do you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's vertex shaders (the metal one) I found the following code:

        // transform object normals, tangents, & binormals to world-space:
        float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;
        // provide tranform from "view" or "eye" coords back to world-space:
        float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;
        ...
        float4 Po = float4(IN.Position.xyz, 1);  // homogeneous location coordinates
        float4 Pw = mul(Po, WorldXf);            // convert to "world" space
        OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz);

    The term OUT.WorldView is subsequently used in a pixel shader to compute lighting:

        float3 Ln = normalize(IN.LightVec.xyz);
        float3 Nn = normalize(IN.WorldNormal);
        float3 Vn = normalize(IN.WorldView);
        float3 Hn = normalize(Vn + Ln);
        float4 litV = lit(dot(Ln, Nn), dot(Hn, Nn), SpecExpon);
        DiffuseContrib = litV.y * Kd * LightColor + AmbiColor;
        SpecularContrib = litV.z * LightColor;

    Can anyone tell me what exactly is WorldView here? And why do they add it to the normal?

    Read the article

  • Scale DIV with tiles

    - by user15350
    I am trying to create a repeating background. I have a main DIV with a grid of small 16x16 DIVs, and I am trying to scale the main DIV in CSS. When the small DIVs simply have a red background color everything works great, but when there is a background image in the small DIVs, borders become visible between the tiles. This image explains the problem: http://cl.ly/FpNW/o Check the HTML in these examples: With BG-COLOR: http://jsfiddle.net/pTLXw/ With BG-IMG: http://jsfiddle.net/vkpuY/ Does anyone know what is causing this problem and how to fix it? If it is not possible to fix while using DIVs, is there another way to do this? Thank you so much!

    Read the article

  • How do I render an animation where some frames appear twice?

    - by hustlerinc
    I am animating a sprite. The sprite has 7 different frames, but the animation is 10 frames long. This is because 3 of the original frames appear twice in the animation: 3 -> 4 -> 5 -> 6 -> 4 -> 3 -> 2 -> 1 -> 0 -> 2 Frames 2, 3 and 4 appear twice. This avoids having to store duplicate frames in the spritesheet. How can I render the animation in this sequence with repeated frames?
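
    One straightforward way (a generic sketch, not tied to any particular engine) is to keep only the 7 unique frames in the sheet and drive playback from an array of frame indices, so the repeated frames cost nothing extra; drawSpriteFrame below is a hypothetical helper:

        #include <cstddef>

        void drawSpriteFrame(int frame);   // hypothetical: draws one of the 7 unique frames from the sheet

        // The playback order references unique frames by index; frames 2, 3 and 4 repeat.
        const int sequence[] = { 3, 4, 5, 6, 4, 3, 2, 1, 0, 2 };
        const std::size_t sequenceLength = sizeof(sequence) / sizeof(sequence[0]);

        std::size_t cursor = 0;

        void updateAnimation()
        {
            cursor = (cursor + 1) % sequenceLength;   // loop the 10-step animation
        }

        void drawAnimation()
        {
            int frame = sequence[cursor];             // which of the 7 unique frames to show
            drawSpriteFrame(frame);
        }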

    Read the article

  • Box2D relations

    - by Valentino Ru
    As far as I know, the unit in Box2D is meters. When I use Box2D in Processing with JBox2D, I set the "world size" to the window size specified in setup(). Now I'm wondering if there is any function that scales down the world. For example, how can I simulate the throw of a tennis ball within a room, without using a window of only 5 x 5 pixels? Additionally, is there any good documentation, like the Java API?
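
    The usual Box2D advice (sketched in C++ here; JBox2D mirrors the same API) is not to make the world window-sized at all: keep the physics in meters (a room a few meters across) and convert to pixels only when drawing, using an arbitrary pixels-per-meter constant. A minimal sketch:

        // Keep physics in meters; convert only at the rendering boundary.
        const float PIXELS_PER_METER = 100.0f;   // arbitrary choice: 100 px == 1 m

        struct Pixel  { float x, y; };
        struct Meters { float x, y; };

        Pixel toScreen(Meters world, float windowHeight)
        {
            // Box2D's Y axis points up; most window coordinates point down.
            return { world.x * PIXELS_PER_METER,
                     windowHeight - world.y * PIXELS_PER_METER };
        }

        Meters toWorld(Pixel screen, float windowHeight)
        {
            return { screen.x / PIXELS_PER_METER,
                     (windowHeight - screen.y) / PIXELS_PER_METER };
        }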

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen; however, I've run into a problem. Like I said, I'm trying to instance the quad, so I'm trying to use the same geometry several times in one draw call. The issue is that I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do so when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm starting to assume it's simply not possible. Is there any way I could achieve this?

    Read the article

  • Finding vectors with two points

    - by Christian Careaga
    We're trying to get the direction of a projectile, but we can't find out how. For example: [1,1] will go SE, [1,-1] will go NE, [-1,-1] will go NW and [-1,1] will go SW. We need an equation of some sort that will take the player position and the mouse position and find which direction the projectile needs to go. Here is where we are plugging in the vectors:

        def update(self):
            self.rect.x += self.vector[0]
            self.rect.y += self.vector[1]

    Then we are blitting the projectile at the rect's coords.
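
    The missing piece is probably normalizing the difference between the two points; the sign pattern in the examples above then falls out automatically. A small sketch in C++ (the same math drops straight into the Python update method above):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Direction from the player to the mouse, scaled to the projectile speed.
        Vec2 projectileVelocity(Vec2 player, Vec2 mouse, float speed)
        {
            Vec2 d = { mouse.x - player.x, mouse.y - player.y };
            float len = std::sqrt(d.x * d.x + d.y * d.y);
            if (len == 0.0f)
                return { 0.0f, 0.0f };                 // mouse exactly on the player
            return { d.x / len * speed, d.y / len * speed };
        }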

    Read the article

  • OOP implementation of buffs and stats: suggestions?

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS, and I am not sure how to implement buffs. Equipped objects or used skills have effects on the Player(), which has many Stats(), some of which have a max cap. Effects can change a stat's value, increasing or decreasing it by a flat amount or by a percentage, or completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (spell). Anyway, I need suggestions on how to implement it: keep an array of all active buffs for a stat and have a function recalculate the stat's buffed value each time I need it, or...? Are there other, more OOP ways to do it? I have read "What's a way to implement a flexible buff/debuff system?", but that implements only a percentage system, where buffs can only say "+10%, +20%, etc...". I would love to have a hybrid system that supports percentage values or static values (like WoW does), and that's hard to implement with modifiers, because modifiers refer to the current value of the stat :/ Thanks for suggestions :)
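
    One hybrid layout that avoids the "modifier refers to the current value" trap is to keep the base value untouched, store every active buff as a modifier record, and recompute the final value from the base in a fixed order (flat first, then percentages, then overrides, then the cap) whenever it is needed. A sketch in C++ for brevity, under my own assumptions about the order of application; the structure maps one-to-one to JavaScript objects:

        #include <vector>
        #include <algorithm>

        enum class ModType { Flat, Percent, Override };

        struct Modifier {
            ModType type;
            float   amount;   // Flat: +amount, Percent: +amount% of (base + flat), Override: = amount
        };

        struct Stat {
            float baseValue;
            float maxCap;
            std::vector<Modifier> modifiers;   // one entry per active buff/debuff

            float value() const {
                float v = baseValue;
                float percent = 0.0f;
                bool  overridden = false;
                float overrideValue = 0.0f;

                for (const Modifier& m : modifiers) {
                    if (m.type == ModType::Flat)     v += m.amount;
                    if (m.type == ModType::Percent)  percent += m.amount;
                    if (m.type == ModType::Override) { overridden = true; overrideValue = m.amount; }
                }
                v += v * percent / 100.0f;             // percentages apply to base + flat, not to each other
                if (overridden) v = overrideValue;     // a full rewrite wins
                return std::min(v, maxCap);            // respect the stat's cap
            }
        };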

    Read the article

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I'm learning 2D programming, so excuse my approximations, and please don't hesitate to correct me. I am just trying to fire a bullet from a player. I'm using an HTML canvas (top-left origin). Here is a representation of my problem: the black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red dotted line). The blue cross represents the point from which I really want to fire the bullet (and the blue dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.drawImage(this.image,
                      Math.round(-this.dimensions.x / 2), Math.round(-this.dimensions.y / 2),
                      this.dimensions.x, this.dimensions.y);
        ctx.restore();

    This is how I instantiate a new bullet:

        var bulletPosition = playerPosition.clone(); // Copy of the player position
        var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized
        new Bullet(bulletPosition, bulletDirection);

    This is how I move the bullet (this is the bullet object):

        var speed = 5;
        this.position.add(Vector2D.multiply(this.direction, speed));

    And this is how I draw the bullet (this is the bullet object):

        ctx.save();
        ctx.translate(this.position.x, this.position.y);
        ctx.rotate(this.direction.getAngle());
        ctx.fillRect(0, 0, 3, 3);
        ctx.restore();

    How can I change the direction and position vectors of the bullet to ensure it stays on the blue dotted line? I think I should represent the shift with a vector, but I can't see how to use it.
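
    One way to get onto the blue dotted line (a sketch of the usual approach, not a fix for the exact code above) is to express the blue cross as an offset in the player's local space, rotate that offset by the player's current angle, add it to the player's position, and then aim the bullet at the target from that new start point. In C++, with offsetLocal standing in for wherever the muzzle sits relative to the player's center:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Rotate a local-space offset by the player's angle (radians), then place it in the world.
        Vec2 muzzleWorldPosition(Vec2 playerPos, Vec2 offsetLocal, float playerAngle)
        {
            float c = std::cos(playerAngle);
            float s = std::sin(playerAngle);
            Vec2 rotated = { offsetLocal.x * c - offsetLocal.y * s,
                             offsetLocal.x * s + offsetLocal.y * c };
            return { playerPos.x + rotated.x, playerPos.y + rotated.y };
        }

        // The bullet then starts at that point and aims at the target from there:
        //   bulletPos       = muzzleWorldPosition(playerPos, offsetLocal, angle);
        //   bulletDirection = normalize(targetPos - bulletPos);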

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got as far as calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right, I get an angle around the Y axis, and moving my mouse up/down gives me an angle about X. I'm not sure where to go from here: what do I need to do with my axis so I can apply the angle and simulate camera rotation about the viewpoint? If I apply the axis/angle rotation directly to the camera/view transform, I get what you'd expect: the view is rotated about the world axes that the mouse movement over the virtual sphere on the screen corresponds to. So if I move the mouse up/down the view rotates about the world's X axis (what I get reminds me of a first-person view), but this isn't what I want. I think the axis I get needs to be transformed so that it passes through the camera viewpoint and is oriented correctly relative to the camera, but I don't know if that's right or how to do that.
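
    If the axis is computed in camera/screen space, one plausible missing step is to transform it into world space with the inverse of the view rotation before using it, then rotate the camera position around the orbit pivot about that axis. A short sketch using GLM, under the assumption that the arcball pivot is the lookAt target:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // axisView: rotation axis computed from mouse movement, expressed in view space.
        // view:     the current lookAt matrix; target: the point the camera orbits.
        glm::vec3 orbitCamera(glm::vec3 cameraPos, glm::vec3 target,
                              glm::vec3 axisView, float angle, const glm::mat4& view)
        {
            // Take the axis from view space into world space (rotation part only).
            glm::vec3 axisWorld = glm::normalize(glm::mat3(glm::inverse(view)) * axisView);

            // Rotate the camera position around the pivot about that world-space axis.
            glm::mat4 rot = glm::rotate(glm::mat4(1.0f), angle, axisWorld);
            glm::vec3 offset = cameraPos - target;
            offset = glm::vec3(rot * glm::vec4(offset, 0.0f));
            return target + offset;
        }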

    Read the article

  • Why doesn't there seem to be any development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that way back around 1995 there was a big craze with VR in the media, and a whole bunch of (mostly mediocre) games labeled "virtual-reality interactive movie (...)" were published. If I recall correctly, the first 3D VR helmet was called the VFX-1 and was sold bundled with Descent and a dedicated joystick. I never owned one, and I read just one review, which was mostly enthusiastic but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reasons it wasn't successful were that the hardware of the day was not powerful enough, the VR gear's design wasn't refined enough to make it comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware, mostly targeted at consoles (Kinect, Wii Remote, TrackIR), but all projects for creating a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard of again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice but the movement-scale translation is awkward) and other games with a first-person point of view. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares / everyone is scared of repeating the '90s failure?

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

        texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: as you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" also proved no help. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel so that a derivative can be calculated, in order to find the "normal" of the given pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords as seen above). Java:

        for (int i = 0; i < vertices.length; i++) {
            Vector2f coord = new Vector2f((vertices[i].x) / (worldSize), (vertices[i].z) / (worldSize));
            texCoords[i] = coord;
        }

    The quad used as input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords are seen, as the quad used as input for this method is not centered around the origin. Is there something I am missing here? Thanks.

    Read the article

  • Vector.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha,
                         levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0);

    Picture levelCannon as being a laser beam that goes across the entire screen. I need to see if my 3D model intersects with the screen space inhabited by the sprite. I managed to dig up Vector.Unproject, but that seems to only be useful when dealing with a single point in 2D space, rather than an area. What can I do in my case?

    Read the article

  • Kinect joint coordinates and XNA animation

    - by Sweta Dwivedi
    I have written a program to record the x,y,z coordinates of the hand joint, and I want to animate my 2D or 3D models according to these coordinates. However, the output x,y,z coordinates fluctuate from -0 to 1 but not more than that, so I assume I will need to multiply them back by the screen width and height. Even so, it still doesn't seem to animate according to the original x,y,z points. Are there any transformations I might be missing?

        while ((line = r.ReadLine()) != null)
        {
            string[] temp = line.Split(',');
            int x = (int)(float.Parse(temp[0]) * maxWidth);
            int y = (int)(float.Parse(temp[1]) * maxHeight);
        }

    Read the article

  • Line Intersection from parametric equation

    - by Sidar
    I'm sure this question has been asked before. However, I'm trying to connect the dots by translating an equation on paper into an actual function, and I thought it would be interesting to ask here instead of on the math sites (since it's going to be used for games anyway). Let's say we have our vector equation: x = s + L*r, where x is the resulting vector, s our starting point/vector, L our parameter and r our direction vector. The normal equation (not sure that's what it's called, please correct me) is: x.n = c. If we substitute our vector equation into it we get (s + L*r).n = c. We now need to isolate L, which gives L = (c - s.n) / (r.n). L needs to satisfy 0 < L < 1, meaning it needs to be between 0 and 1. My question: I want to know what L is, so that if I substitute L into both vector equations (for the two lines) they give me the same intersection coordinates, if they intersect at all. But I can't wrap my head around how to use this for two lines and find the parameter that fits the intersection point. Could someone show, with a simple example, how I could translate this into a function/method?
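
    To reuse the same derivation for two lines: build n as the perpendicular of the second line's direction and c as n dotted with any point on that line; then L = (c - s.n) / (r.n) is exactly the formula above. A small sketch (assuming a simple 2D point type; a zero r.n means the lines are parallel):

        #include <cmath>

        struct Vec2 { float x, y; };

        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Line 1: x = s + L*r   (a segment from s to s+r when 0 <= L <= 1)
        // Line 2: passes through p with direction d.
        // Returns true and writes L when the infinite lines intersect.
        bool intersectParam(Vec2 s, Vec2 r, Vec2 p, Vec2 d, float& L)
        {
            Vec2  n = { -d.y, d.x };          // normal of line 2
            float c = dot(n, p);              // its "x.n = c" constant
            float denom = dot(r, n);
            if (std::fabs(denom) < 1e-6f)
                return false;                 // parallel: no single intersection point
            L = (c - dot(s, n)) / denom;      // the formula from the question
            return true;
        }

        // For two segments, also compute the second line's parameter the same way
        // (swap the roles of the lines) and require both values to lie in [0, 1].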

    Read the article

  • Set vertex position

    - by user1806687
    Can anyone tell me how to set the positions of a model's vertices? I want to be able to change the position of some of the vertices of a Model. Is there any way to make that happen, and to make the change visible at that moment? EDIT: Well, the thing is, I have a model, a cube, that is made up of four "thin" cubes (top, bottom, left side, right side), so I get a cube with a "hole" in the middle, and I want to scale it on the Y axis. If I do Scale(0,2,0) it will scale the whole object, meaning it will double the Y size of the left and right sides, but also double the size of the top and bottom cubes, which I do not want. Same for the X axis: I want to double the size of the top and bottom cubes but not the left and right ones. Hope you can help.

    Read the article
