Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 609/1169

  • Will you still play a good Red Alert 3 mission map? [closed]

    - by W.N.
    I've been creating an RA3 mission map (played in Skirmish), essentially a remake of the RA2 Yuri mission "To the Moon", with more interesting elements. However, because of my work, progress was interrupted for more than a year, and now I see that very few people still play RA3. So, should I continue making this map? There is still a lot of work left to finish it. I can assure you the mission will be interesting, but if few people will play it, there's no need to spend the time on it. Please give me some advice. Thank you.

    Read the article

  • Fourth texture = segmentation fault

    - by Robin92
    I keep getting a segmentation fault each time I load a fourth texture - which texture, I mean which file, does not matter. I checked the value of GL_TEXTURES_STACK_SIZE, which turned out to be 10, so quite a bit more than 4, isn't it? Here are the code fragments. The function to load a texture from a PNG:

        static GLuint gl_loadTexture(const char filename[])
        {
            static int iTexNum = 1;
            GLuint texture = 0;
            img_s *img = NULL;
            img = img_loadPNG(filename);
            if (img) {
                glGenTextures(iTexNum++, &texture);
                glBindTexture(GL_TEXTURE_2D, texture);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, img->iGlFormat, img->uiWidth, img->uiHeight,
                             0, img->iGlFormat, GL_UNSIGNED_BYTE, img->p_ubaData);
                img_free(img); // it may cause errors on windows
            } else {
                printf("Error: loading texture '%s' failed!\n", filename);
            }
            return texture;
        }

    The actual loading:

        static GLuint textures[4];

        static void gl_init()
        {
            (...) // setting up OpenGL

            /* loading textures */
            textures[0] = gl_loadTexture("images/background.png");
            textures[1] = gl_loadTexture("images/spaceship.png");
            textures[2] = gl_loadTexture("images/asteroid.png");
            textures[3] = gl_loadTexture("images/asteroid2.png"); // this is causing SegFault no matter which file I load!
        }

    Any ideas? The problem is present on both Linux and Windows.
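
    A likely culprit, offered as a reading of the code above rather than a confirmed answer: glGenTextures(iTexNum++, &texture) asks OpenGL to generate an increasing number of texture names on each call while providing storage for only a single GLuint, so the fourth call writes four names into space for one and corrupts the stack. A minimal sketch of the fix, generating exactly one name per call:

        // Inside gl_loadTexture, generate exactly one texture name per call:
        GLuint texture = 0;
        glGenTextures(1, &texture);              // writes one name into one GLuint
        glBindTexture(GL_TEXTURE_2D, texture);
        // ... rest of the setup as in the question ...

    This would also explain why the crash appears exactly on the fourth load regardless of which file is loaded.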

    Read the article

  • How do you set the movement speed of a sprite?

    - by rphello101
    I'm using Slick 2D/Java to play around with graphics. Getting an image to move is easy:

        Input input = gc.getInput();
        if (input.isKeyDown(sprite.up)) {
            sprite.y--;
        } else if (input.isKeyDown(sprite.down)) {
            sprite.y++;
        } else if (input.isKeyDown(sprite.left)) {
            sprite.x--;
        } else if (input.isKeyDown(sprite.right)) {
            sprite.x++;
        }

    However, this is called on every update, so if you hold up, the sprite moves to the edge of the screen in a few hundred milliseconds. Since the coordinates are integers, I can't add less than 1 to slow the sprite down. I'm assuming I have to implement a timer of some sort. Any advice?
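
    The usual approach (sketched below in C++ rather than Slick's Java API, with invented names such as posX and MOVE_SPEED purely for illustration) is to keep the position as a float, scale movement by the frame's delta time, and round to an int only when drawing:

        // Hypothetical sketch: float position, speed in pixels per second,
        // delta is the frame time in seconds supplied by the game loop.
        const float MOVE_SPEED = 120.0f;   // pixels per second (assumed value)
        float posX = 0.0f, posY = 0.0f;

        void update(bool up, bool down, bool left, bool right, float delta) {
            if (up)    posY -= MOVE_SPEED * delta;
            if (down)  posY += MOVE_SPEED * delta;
            if (left)  posX -= MOVE_SPEED * delta;
            if (right) posX += MOVE_SPEED * delta;
        }

        // When drawing, cast to int: draw((int)posX, (int)posY);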

    Read the article

  • Are there any resources for motion-planning puzzle design?

    - by Salano Software
    Some background: I'm poking at a set of puzzles along the lines of Rush Hour/Sokoban/etc; for want of a better description, call them 'motion planning' puzzles - the player has to figure out the correct sequence of moves to achieve a particular configuration. (It's the sort of puzzle that's generically PSPACE-complete if that actually helps anyone's mental image). While I have a few straightforward 'building blocks' that I can use for puzzle crafting and I have a few basic examples put together, I'm trying to figure out how to avoid too much sameness over a large swath of these kinds of puzzles, and I'm also trying to figure out how to make puzzles that have more of a feel of logical solution than trial-and-error. Does anyone know of good resources out there for designing instances of this sort of puzzle once the core puzzle rules are in place? Most of what I've found on puzzle design only covers creating the puzzle rules, not building interesting puzzles out of a set of rules.

    Read the article

  • How to determine which thrusters to turn on to rotate the ship?

    - by migimunz
    The configuration of the ship changes dynamically, so I have to determine which thrusters to turn on when I want to rotate the ship clockwise or counterclockwise. The thrusters are always axis-aligned with the ship (never at an angle) and are either on or off. Here's one of the possible setups: What I've tried so far is to visualize the firing vector and the direction vector to the ship's center of mass: Unfortunately, I didn't get very far with that.
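
    One way to pick thrusters, sketched below with assumed types (Vec2, Thruster) since the question doesn't show its data structures: compute the torque each thruster would produce about the center of mass (the 2D cross product of the lever arm and the thrust direction) and fire the ones whose sign matches the desired rotation.

        struct Vec2 { float x, y; };

        // 2D cross product: positive means counter-clockwise torque.
        float cross(const Vec2& a, const Vec2& b) { return a.x * b.y - a.y * b.x; }

        struct Thruster { Vec2 position; Vec2 thrustDir; };  // assumed layout

        // wantCCW = true to rotate counter-clockwise, false for clockwise.
        bool shouldFire(const Thruster& t, const Vec2& centerOfMass, bool wantCCW) {
            Vec2 r = { t.position.x - centerOfMass.x, t.position.y - centerOfMass.y };
            float torque = cross(r, t.thrustDir);
            return wantCCW ? torque > 0.0f : torque < 0.0f;
        }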

    Read the article

  • Vector Troubles in C++

    - by DistortedLojik
    I am currently working on a project that deals with a vector of objects of a People class. The program compiles and runs just fine, but when I use the debugger it dies when trying to do anything with the PersonWrangler object. I currently have 3 different classes: one for the person, a PersonWrangler which handles all of the people collectively, and a Game class that handles the game input and output.

    Edit: My basic question is to understand why it is dying when it calls outputPeople. I would also like to understand why my program works exactly as it should unless I use the debugger. The outputPeople function works the way I intended.

    Edit 2: The call stack has 3 bad calls, which are:

        std::vector<Person*>::begin(this=0xbaadf00d)
        std::vector<Person*>::size(this=0xbaadf00d)
        PersonWrangler::outputPeople(this=0xbaadf00d)

    Relevant code:

        class Game
        {
        public:
            Game();
            void gameLoop();
            void menu();
            void setStatus(bool inputStatus);
            bool getStatus();

            PersonWrangler* hal;

        private:
            bool status;
        };

    which calls outputPeople, where it promptly dies from a baadf00d error:

        void Game::menu()
        {
            hal->outputPeople();
        }

    where hal is an object of PersonWrangler type:

        class PersonWrangler
        {
        public:
            PersonWrangler(int inputStartingNum);
            void outputPeople();

            vector<Person*> peopleVector;
            vector<Person*>::iterator personIterator;
            int totalPeople;
        };

    and the outputPeople function is defined as:

        void PersonWrangler::outputPeople()
        {
            int totalConnections = 0;
            cout << " Total People:" << peopleVector.size() << endl;
            for (unsigned int i = 0; i < peopleVector.size(); i++)
            {
                sort(peopleVector[i]->connectionsVector.begin(), peopleVector[i]->connectionsVector.end());
                peopleVector[i]->connectionsVector.erase(
                    unique(peopleVector[i]->connectionsVector.begin(), peopleVector[i]->connectionsVector.end()),
                    peopleVector[i]->connectionsVector.end());
                peopleVector[i]->outputPerson();
                totalConnections += peopleVector[i]->connectionsVector.size();
            }
            cout << "Total connections:" << totalConnections / 2 << endl;
        }
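
    For what it's worth, 0xbaadf00d is the fill pattern the Windows debug heap writes into allocated but uninitialized memory, so a call stack full of this=0xbaadf00d usually means hal was never pointed at a constructed PersonWrangler before menu() ran. A minimal sketch of one way to rule that out (the constructor argument 10 is just a placeholder):

        Game::Game()
            : hal(new PersonWrangler(10)),   // make sure hal points at a real object
              status(true)
        {
        }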

    Read the article

  • Toggle Fullscreen at Runtime

    - by sharethis
    Using the GLFW library, I can create a fullscreen window with this line of code:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_FULLSCREEN);

    The line for creating a standard window looks like this:

        glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0, GLFW_WINDOW);

    What I want to do is let the user switch between the standard window and fullscreen with a keypress, let's say F11. Is there a common practice for toggling fullscreen mode? What do I have to consider?
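
    In the GLFW 2.x API used above there is no in-place switch, so the usual pattern (a sketch, not a drop-in solution) is to close the window and reopen it in the other mode, then restore the OpenGL state, since the context and its textures are lost with the window:

        // Sketch assuming GLFW 2.x (glfwOpenWindow) and a hypothetical helper
        // that reloads textures/shaders after the context is recreated.
        bool fullscreen = false;

        void ToggleFullscreen(int Width, int Height)
        {
            fullscreen = !fullscreen;
            glfwCloseWindow();
            glfwOpenWindow(Width, Height, 8, 8, 8, 8, 24, 0,
                           fullscreen ? GLFW_FULLSCREEN : GLFW_WINDOW);
            ReloadGraphicsResources();   // hypothetical: GL resources are gone with the old context
        }

        // In the main loop, edge-detect the key so one press toggles once:
        static bool wasDown = false;
        bool isDown = (glfwGetKey(GLFW_KEY_F11) == GLFW_PRESS);
        if (isDown && !wasDown)
            ToggleFullscreen(Width, Height);
        wasDown = isDown;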

    Read the article

  • How can I derive force vectors from velocity vectors?

    - by PixelRouter
    I'm making a 2D shooter à la Geometry Wars. I've got my own simple physics at work driving the background grid and all my entities. To move anything in the world I apply a Vector2d force to it. The 'engine' calculates the resulting acceleration and therefore the velocity. I am trying to port some code I found which implements the classic 'Boids' flocking algorithm, but the code I have works by calculating the Boids' velocities directly, so if I use it as is, it bypasses my physics engine. How can I translate the velocity vectors into force vectors that I can apply to the Boids and which will result in the proper velocities via my physics engine?
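
    One common translation (a sketch with assumed names, not taken from the question's engine): treat the velocity the flocking code produces as a desired velocity and apply the force that would close the gap over one time step, F = m * (v_desired - v_current) / dt, optionally clamped to a maximum thrust.

        struct Vec2 { float x, y; };

        // Force that moves the current velocity toward the desired one over dt seconds.
        Vec2 ForceForDesiredVelocity(Vec2 desired, Vec2 current, float mass, float dt)
        {
            Vec2 f;
            f.x = mass * (desired.x - current.x) / dt;
            f.y = mass * (desired.y - current.y) / dt;
            // Optionally clamp f to the entity's maximum thrust here so boids
            // can't change direction instantly.
            return f;
        }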

    Read the article

  • does glBindAttribLocation silently ignore names not found in a shader?

    - by rwols
    Does glBindAttribLocation silently ignore names that are not found? For example, in a shader:

        // Some vertex shader
        in vec3 position;
        in vec3 normal;
        // ...

    And in some set up code:

        // While setting up shader
        GLuint program = glCreateProgram();
        glBindAttribLocation(program, 0, "position");
        glBindAttribLocation(program, 1, "normal");
        glBindAttribLocation(program, 2, "color"); // What about this one?
        glLinkProgram(program);
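
    For reference (my understanding of the GL spec, not stated in the question): yes, binding a name that never appears as an active attribute is allowed and simply has no effect; bindings only take hold at glLinkProgram, and you can confirm what was actually used by querying the locations afterwards.

        glLinkProgram(program);

        // Active attributes report their final locations; unused names return -1.
        GLint posLoc    = glGetAttribLocation(program, "position"); // expected 0
        GLint normalLoc = glGetAttribLocation(program, "normal");   // expected 1
        GLint colorLoc  = glGetAttribLocation(program, "color");    // -1 if "color" is not active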

    Read the article

  • Create a Texture2D from larger image

    - by Dialock
    I am having trouble with the basic logic of this solution: Xna: Splitting one large texture into an array of smaller textures (specifically, I'm looking at the second answer), in respect to my specific problem. How can I use the source rectangle that I already use for drawing to create a new Texture2D?

        spriteBatch.Draw(CurrentMap.CollisionSet,
                         currentMap.CellScreenRectangle(x, y),
                         CurrentMap.TileSourceRectangle(currentMap.MapCells[x, y].TileDepths[4]),
                         Color.FromNonPremultiplied(0, 0, 0, 45),
                         0.0f, Vector2.Zero, SpriteEffects.None, 0.91f);

    I know I want a method, which I started like so:

        // In the Update method of, say, the player's character.
        Texture2D CollisionTexture = ExtractTexture(MapManager.CurrentMap.CollisionSet,
                                                    MapManager.TileWidth, MapManager.TileHeight);

        // In the MapManager class, which knows everything about the tiles that make up a level.
        public Texture2D ExtractTexture(Texture2D original, int partWidth, int partheight, MapTile mapCell)
        {
            var dataPerPart = partWidth * partheight;
            Color[] originalPixelData = new Color[original.Width * original.Height];
            original.GetData<Color>(originalPixelData);
            Color[] newTextureData = new Color[dataPerPart];
            original.GetData<Color>(0, CurrentMap.TileSourceRectangle(mapCell.TileDepths[4]), originalPixelData, 0, originalPixelData.Count());
            Texture2D outTexture = new Texture2D(original.GraphicsDevice, partWidth, partheight);
        }

    I think the problem is that I'm just not understanding the overload of Texture2D.GetData<T>. Part of my concern is creating an array of the whole texture in the first place. Can I target the original texture and create an array of colors for copying, based on what I already get from the TileSourceRectangle(int) method?

    Read the article

  • Can XNA Content Pipeline split one content file into several .xnb?

    - by Zeta Two
    Let's say I have an xml file which looks like this:

        <Weapons>
          <Weapon>
            <Name>Pistol</Name>
            ...
          </Weapon>
          <Weapon>
            <Name>MachineGun</Name>
            ...
          </Weapon>
        </Weapons>

    Would it be possible to use a custom importer/writer/reader to create two files, Pistol.xnb and MachineGun.xnb, which I can load individually with Content.Load()? While writing this I realized I could just import a Weapon[] list and split them up with a helper, but I'm still wondering if this is possible.

    Read the article

  • How do I render from one render target to another?

    - by Chaotikmind
    I have two render targets:

    - a fake backbuffer: a special render target where I do all my rendering.
    - a light render target: where I render my light fx.

    I'm sure I'm rendering correctly on both. The problem arises when I overlay the light render target onto the fake backbuffer by drawing a quad covering it:

        DxEngine.DrawSprite(0.0f, 0.0f, 0.0f,
                            (float)DxEngine.GetWidth(), (float)DxEngine.GetHeight(),
                            0xFFFFFFFF, LightSurface->GetTexture());

    Regardless of what's in the light target, nothing is rendered onto the other target. I tried clearing the light target with full-white or full-black, but still get nothing.

    Fake backbuffer created with:

        Direct3dDev->CreateTexture(Width, Height, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &Texture, nullptr);

    Light render target created with:

        Direct3dDev->CreateTexture(Width, Height, 1, D3DUSAGE_RENDERTARGET,
                                   D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &Texture, nullptr);

    I also tried to create both with D3DFMT_A8R8G8B8, again without difference. Both targets have the same width and height. Only the fixed pipeline is used. DirectX setup for rendering:

        Direct3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        Direct3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        Direct3dDev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);
        Direct3dDev->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
        Direct3dDev->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
        Direct3dDev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
        Direct3dDev->SetRenderState(D3DRS_LIGHTING, false);
        Direct3dDev->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
        Direct3dDev->SetRenderState(D3DRS_ZWRITEENABLE, D3DZB_TRUE);
        Direct3dDev->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
        Direct3dDev->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
        Direct3dDev->SetRenderState(D3DRS_ALPHAREF, 0x00000000ul);
        Direct3dDev->SetRenderState(D3DRS_ALPHATESTENABLE, true);
        Direct3dDev->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
        Direct3dDev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
        Direct3dDev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
        Direct3dDev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
        Direct3dDev->SetTextureStageState(0, D3DTSS_RESULTARG, D3DTA_CURRENT);
        Direct3dDev->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_PASSTHRU);
        // ensure the first stage is not used for now
        Direct3dDev->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_DISABLE);

    How can I do this right?
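
    It is hard to say what's wrong from the snippet alone, but one thing worth checking (a sketch under the assumption that the surrounding code looks roughly like this, with assumed member names): with D3D9 render-target textures you render through the texture's surface via SetRenderTarget, and you have to switch back to the fake backbuffer's surface before drawing the light quad, otherwise the quad goes to whichever target is still bound.

        // Assumed members: LightTexture and FakeBackbufferTexture are the
        // IDirect3DTexture9* objects created with CreateTexture above.
        IDirect3DSurface9 *lightSurf = nullptr, *fakeSurf = nullptr;
        LightTexture->GetSurfaceLevel(0, &lightSurf);
        FakeBackbufferTexture->GetSurfaceLevel(0, &fakeSurf);

        // 1) Render the light fx into the light target.
        Direct3dDev->SetRenderTarget(0, lightSurf);
        Direct3dDev->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 0, 0), 1.0f, 0);
        // ... draw lights ...

        // 2) Switch to the fake backbuffer, then overlay the light texture.
        Direct3dDev->SetRenderTarget(0, fakeSurf);
        // ... DrawSprite(..., LightSurface->GetTexture()) as in the question ...

        lightSurf->Release();
        fakeSurf->Release();

    Also note that with D3DRS_ALPHATESTENABLE set and D3DCMP_GREATER against an ALPHAREF of 0, any pixel whose alpha is 0 in the light texture will be discarded when the quad is drawn.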

    Read the article

  • If I project a sphere in 3D will it be a circle?

    - by yuumei
    Assuming I have infinite vertices to represent the sphere, if I project the sphere from any position/scale in 3D to 2D, will it be a circle? I know it will not be a circle on the screen, because of scaling and different resolutions, but do field of view and aspect ratio affect the result? Edit: Sorry, yes, I am talking about perspective projection. It seems the answer is no then; perspective will distort the sphere. Thanks!

    Read the article

  • Limiting the speed of a dragged sprite in Cocos2dx

    - by Frozsht
    I am trying to drag a row of sprites using ccTouchesMoved. By that I mean that there is a row of sprites (colored squares) lined up next to each other, and if I grab one with a touch, the rest follow it. However, if the sprite that is selected by touch moves too fast, it creates a slight gap between it and the following sprites. How do I go about limiting the speed at which I can drag the sprite with ccTouchesMoved? This is the only solution I could think of to my problem. If anyone has another suggestion to prevent this sprite gap from happening, I would appreciate it.
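
    One way to keep the followers attached (a sketch with invented names like MAX_DRAG_PER_FRAME, not cocos2d-x API specifics): instead of snapping the grabbed sprite to the touch position, clamp how far it may move per ccTouchesMoved call and move the rest of the row by the same clamped delta.

        #include <algorithm>

        const float MAX_DRAG_PER_FRAME = 12.0f;   // assumed limit, in points

        // Clamp the requested movement so the sprite never jumps ahead of its row.
        float clampDelta(float requested)
        {
            return std::max(-MAX_DRAG_PER_FRAME, std::min(requested, MAX_DRAG_PER_FRAME));
        }

        // In ccTouchesMoved (pseudostructure):
        //   float requested = touchX - grabbedSprite->getPositionX();
        //   float delta = clampDelta(requested);
        //   for each sprite in the row: sprite->setPositionX(sprite->getPositionX() + delta);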

    Read the article

  • EXC_BAD_ACCESS error when box2d joint is destroyed

    - by colilo
    When I destroy the weldJoint in the update method (see below) I get an EXC_BAD_ACCESS error pointing to the line world->DestroyJoint(weldJoint); in the update method below:

        -(void) update: (ccTime) dt
        {
            int32 velocityIterations = 8;
            int32 positionIterations = 1;

            // Instruct the world to perform a single step of simulation. It is
            // generally best to keep the time step and iterations fixed.
            world->Step(dt, velocityIterations, positionIterations);

            // using the iterator pos over the set
            std::set<BodyPair *>::iterator pos;
            for (pos = bodiesForJoints.begin(); pos != bodiesForJoints.end(); ++pos)
            {
                b2WeldJointDef weldJointDef;
                BodyPair *bodyPair = *pos;
                b2Body *bodyA = bodyPair->bodyA;
                b2Body *bodyB = bodyPair->bodyB;
                weldJointDef.Initialize(bodyA, bodyB, bodyA->GetWorldCenter());
                weldJointDef.collideConnected = false;
                weldJoint = (b2WeldJoint*) world->CreateJoint(&weldJointDef);

                // Free the structure we allocated earlier.
                free(bodyPair);
                // Remove the entry from the set.
                bodiesForJoints.erase(pos);
            }

            for (b2Body *b = world->GetBodyList(); b; b = b->GetNext())
            {
                if (b->GetUserData() != NULL)
                {
                    CCSprite *mainSprite = (CCSprite*)b->GetUserData();
                    if (mainSprite.tag == 1)
                    {
                        mainSprite.position = CGPointMake(b->GetPosition().x * PTM_RATIO,
                                                          b->GetPosition().y * PTM_RATIO);
                        CGPoint mainSpritePosition = mainSprite.position;
                        if (mainSprite.isMoved)
                        {
                            world->DestroyJoint(weldJoint);
                        }
                    }
                }
            }
        }

    In the HelloWorldLayer.h I set the weldJoint with the assign property. Am I destroying the joint in the wrong way? I would really appreciate any help. Thanks
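
    A guess based on the code above rather than a confirmed diagnosis: once mainSprite.isMoved is true, DestroyJoint(weldJoint) runs on every update (and for every matching body), but the pointer still refers to the joint that was already freed the first time, so the next call touches freed memory. A minimal sketch of guarding and clearing the pointer:

        if (mainSprite.isMoved && weldJoint != NULL)
        {
            world->DestroyJoint(weldJoint);
            weldJoint = NULL;   // never destroy the same joint twice
        }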

    Read the article

  • How to make an object fly out of a slingshot?

    - by Deza
    At the moment I'm improvising a slingshot: the user can click and drag the projectile and let go. The force on the object is calculated from the distance between the point midway between the slingshot's two forks and the point the user pulled it to. However, this will always result in a positive number and does not take into account the angle of the projectile relative to the slingshot. How can I make it fly out of the slingshot correctly?
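
    One sketch of the usual approach (names are invented for illustration): keep the pull as a vector rather than a distance. The launch direction is from the pulled-back position toward the point between the forks, and the force scales with how far back it was pulled, so both angle and magnitude fall out of the same subtraction.

        struct Vec2 { float x, y; };

        // forkCenter: midpoint between the two forks; pullPos: where the user dragged the projectile.
        Vec2 LaunchForce(Vec2 forkCenter, Vec2 pullPos, float strength)
        {
            Vec2 pull = { forkCenter.x - pullPos.x, forkCenter.y - pullPos.y };
            // pull already points from the hand toward the forks, i.e. the firing direction,
            // and its length grows with how far back the projectile was dragged.
            Vec2 force = { pull.x * strength, pull.y * strength };
            return force;
        }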

    Read the article

  • Multiple passes in direct3d10

    - by innochenti
    I'm beginning to learn Direct3D 10 and am stuck on multiple passes. As input I have a triangle (stored in a vertex/index buffer) and this effect file:

        // some vertex shader and globals go here; skipped to preserve simplicity

        float4 ColorPixelShader(PixelInputType input) : SV_Target
        {
            return float4(1, 0, 0, 0);
        }

        float4 ColorPixelShader1(PixelInputType input) : SV_Target
        {
            return float4(0, 1, 0, 0);
        }

        technique10 ColorTechnique
        {
            pass pass0
            {
                SetVertexShader(CompileShader(vs_4_0, ColorVertexShader()));
                SetPixelShader(CompileShader(ps_4_0, ColorPixelShader()));
                SetGeometryShader(NULL);
            }
            pass pass1
            {
                SetVertexShader(CompileShader(vs_4_0, ColorVertexShader()));
                SetPixelShader(CompileShader(ps_4_0, ColorPixelShader1()));
                SetGeometryShader(NULL);
            }
        }

    And some render code:

        pass1->Apply(0);
        device->DrawIndexed(indexCount, 0, 0);
        pass2->Apply(0);
        device->DrawIndexed(indexCount, 0, 0);

    What I'd expect to see is a green triangle, but it always shows me a red triangle. What am I doing wrong? Also, I've got another question: should I set the vertex shader in every pass? I added a ColorVertexShader1 that translates the vertex position by some delta, and got the following picture: http://imgur.com/Oe7Qj

    Read the article

  • GL_INVALID_OPERATION in glEnd

    - by Killrazor
    Hello, I'm having problems drawing a simple sprite. When I draw:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            //CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_TRIANGLE_STRIP));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t)); // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t)); // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t)); // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glNormal3i(0, 0, 1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t)); // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glEnd());
            //CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    I always get a GL_INVALID_OPERATION in glEnd(). I suspect the error is not really here, but I can't detect where it may be. Actually, the output render looks fine, but I want to sort this out before it hides a subtle bug tomorrow. Any idea of what it could be?
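
    One likely explanation, assuming CHECKGL calls glGetError after each wrapped statement: between glBegin and glEnd only vertex-stream calls are allowed, and calling glGetError there is itself an invalid operation, so the wrappers generate the very error they are checking for, which then shows up at glEnd. A sketch of the usual fix is to leave everything inside the begin/end pair unwrapped and check once afterwards:

        // Sketch: no glGetError-based checking between glBegin and glEnd.
        glBegin(GL_TRIANGLE_STRIP);

        glNormal3i(0, 0, 1);
        glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t);
        glVertex3i(m_position.x, m_position.y, 0);
        // ... the other three vertices, unwrapped, as in the question ...

        glEnd();

        GLenum err = glGetError();   // safe to query again here, outside the pair
        if (err != GL_NO_ERROR) { /* report */ }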

    Read the article

  • Android - Unity3D: setVisibility(View.VISIBLE) crashes

    - by Kazoeja
    I have a Unity project and I use an Android (Java) plugin to get camera data. I draw this on a TextureView. I want to hide/show this view when I press a button in Unity, but my app crashes when I call setVisibility.

        // onCreate
        UnityPlayer.currentActivity.addContentView(texView, new FrameLayout.LayoutParams(400, 400));

        // java
        public void HideVideo() {
            // Hide view
            _TextureView.setVisibility(View.INVISIBLE);
        }

    Is there an extra function I need to call, or may I only call it at certain times? None of these things work; they all make my app crash:

        _TextureView.setVisibility(View.INVISIBLE);
        _TextureView.setActivated(false);
        _TextureView.setAlpha(0);
        _TextureView.setTranslationY(-1000);

    Read the article

  • fast 3d point -> cuboid volume intersection test

    - by user1130477
    I'm trying to test whether a point lies within a 3D volume defined by 8 points. I know I can use the plane equation to check that the sign of the distance is always -1 for all 6 sides, but does anyone know of a faster way, or could you point me to some code? Thanks.

    EDIT: I should add that ideally the test would produce 3 linear interpolation parameters, which would lie in the range 0..1 to indicate that the point is within the volume for each axis (since I will have to calculate these later if the point is found to be in the volume).
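
    If the cuboid is a rectangular box (mutually perpendicular edges, possibly rotated), one sketch that yields the three 0..1 parameters directly, using invented names: take one corner as the origin, the three edge vectors leaving it as axes, and project the point onto each axis.

        #include <array>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        Vec3 sub(const Vec3& a, const Vec3& b)  { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

        // corner: one vertex of the box; u, v, w: the three edges leaving that vertex.
        // Returns true if p is inside; params receives the three 0..1 interpolation values.
        bool pointInBox(const Vec3& p, const Vec3& corner,
                        const Vec3& u, const Vec3& v, const Vec3& w,
                        std::array<float, 3>& params)
        {
            Vec3 d = sub(p, corner);
            params[0] = dot(d, u) / dot(u, u);
            params[1] = dot(d, v) / dot(v, v);
            params[2] = dot(d, w) / dot(w, w);
            for (float t : params)
                if (t < 0.0f || t > 1.0f) return false;
            return true;
        }

    For a sheared box (a general parallelepiped) the per-axis projection above is not exact; there you would solve the 3x3 system with u, v, w as columns, or fall back to the six plane tests mentioned in the question.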

    Read the article

  • How do I get a collision event from a KActor subclass?

    - by Almo
    I have a subclass of KActor, and I want an event when it collides with things. The event RigidBodyCollision seems to be what I want, according to http://wiki.beyondunreal.com/UE3:Actor_events_%28UDK%29#RigidBodyCollision:

        Called when a PrimitiveComponent this Actor owns has:
        - bNotifyRigidBodyCollision set to true
        - ScriptRigidBodyCollisionThreshold greater than 0
        - it is involved in a physics collision where the relative velocity exceeds ScriptRigidBodyCollisionThreshold

    As far as I can tell, I have these set up, and the event is not called for any collisions (KActor-KActor, KActor-world geometry, etc.). Is there something else I need to do?

    Read the article

  • I have a problem with my 2nd quad's bottom-left vertex position - weird

    - by RubyKing
    Hey all, I'm just trying to add another quad to my frustum, and when doing so I get this weird little error: the bottom-left side of my quad seems to stick to the center point, for no apparent reason that I can think of or figure out. Has anyone else experienced this issue and knows a solution? If you would like more information, please do ask. Here is my main.cpp file: http://pastebin.com/g9q8uAsd - I think it's because the vertex array data for the two different quads is in the same array.

    Read the article

  • Scripted Motion Paths (?) (XNA)

    - by Peteyslatts
    Ok, so the title isn't the greatest because this is a lot more general. Say I want the player to be able to hit A and have their ship model roll to the right and shift to the right of the screen, while the camera stays centered. Would I do that through programming (i.e. set waypoints for the model and keep the camera focus still), or through animation (so the ship model actually rolls and moves right, and I just play those frames)? (I actually don't know how to do this kind of 3D animation yet; haven't looked into it. Adding it to my To Do List.) This is a really vague question, I know. I'll try and answer any questions. Thanks, Peter

    Read the article

  • using per pixel collision for an elastic response

    - by Codejoy
    I realize this might be open ended, but I'm curious if I just did some overkill... I had this http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and I reworked it to work with my animation code in XNA and whatnot. It works well, but now I want to use this to decide if there was a collision and have the items (characters) bounce off each other elastically. Was the per-pixel check too much, and could I have just used a bounding box? (In fact, would that have been preferred for what needs to be calculated in the response to an elastic collision?) Looking for guidance, really.

    Read the article

  • Is there a prohibition against scaling collision shapes at runtime?

    - by Almo
    So, I have a StaticMeshComponent attached to an Actor:

        Begin Object Class=StaticMeshComponent Name=StaticMeshComponentObject
            StaticMesh=StaticMesh'QF_Art_Powers.Mesh.GP_ForcePush'
            CollideActors=true
            BlockActors=false
            //Scale3D=(X=5, Y=1.5, Z=3) // ALMODEBUG
        End Object
        CollisionComponent=StaticMeshComponentObject
        Components.Add(StaticMeshComponentObject)

    Ordinarily, the actor gets spawned, anything touching it gets bumped, and the actor despawns itself. If I set the Scale3D as a default property, everything works as I expect. But I want to scale it at runtime, like this:

        function SetImpulseComponentTemplate(QuadForceBoxImpulseComponent Value)
        {
            Local Vector ScaleVec;
            ScaleVec.X = Value.Length;
            ScaleVec.Y = Value.Width;
            ScaleVec.Z = Value.Height;
            CollisionComponent.SetScale3D(ScaleVec);
        }

    When I do this, the thing only collides as if it were not scaled. If I leave the actor spawned so I can see it, it is scaled. If I also "show collision", the collision displays correctly as well. Is there a prohibition against scaling collision shapes at runtime?

    Read the article
