Search Results

Search found 24037 results on 962 pages for 'game design'.


  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a Chess matchmaking system based on a lobby view with gaming rooms, general chat, etc. So far I have a working prototype, but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience for me, so I don't have a clear or precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming, 3rd edition" from Amazon and am still waiting for shipment; hopefully I'll find some useful examples and information in that book. Meanwhile, I'd like to gather your opinions and see how you would handle some things so I can learn how to write a server correctly. Here are a few questions off the top of my head (maybe more will come): First, let's define what a server does. Its primary function is to hold TCP connections with clients, listen to the events they generate, and dispatch them to the other players. But is there more to it than that? Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that, and roughly how much bandwidth does a lobby consume? What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (e.g. a player enters or exits the lobby) while iterating through it to dispatch an event, without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there other techniques I should know? When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events bubble up? Screenshot: http://imageshack.us/photo/my-images/695/sansrewyh.png/
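
    On the ConcurrentHashMap question: its iterators are weakly consistent, which makes the broadcast loop safe without extra locking. A minimal sketch of that part in Java (names and message framing are made up for illustration; a real lobby would also need per-client write queues and error handling):

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.util.concurrent.ConcurrentHashMap;

        public class Lobby {
            // Maps a player name to the writer for that player's socket.
            private final ConcurrentHashMap<String, PrintWriter> clients =
                    new ConcurrentHashMap<String, PrintWriter>();

            public void join(String name, Socket socket) throws IOException {
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                clients.put(name, out);
                broadcast(name + " entered the lobby");
            }

            public void leave(String name) {
                clients.remove(name);
                broadcast(name + " left the lobby");
            }

            // Safe while other threads add/remove entries: ConcurrentHashMap's
            // iterators are weakly consistent and never throw
            // ConcurrentModificationException.
            public void broadcast(String message) {
                for (PrintWriter out : clients.values()) {
                    out.println(message);
                }
            }
        }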

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels that will be run by a Pi (over the Ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they pass the data out to a screen via the pygame.display functions. I have access to pass pixel information using a memory map block, so is there any way to do that with pygame instead of sending it out the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial
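
    For what it's worth, pygame can render to an off-screen Surface without ever touching pygame.display; you then pull the raw pixel bytes out and write them wherever you like. A rough sketch (the mmap path and RGB pixel format are assumptions for illustration):

        import mmap
        import pygame

        WIDTH, HEIGHT = 75, 20

        pygame.init()
        surface = pygame.Surface((WIDTH, HEIGHT))  # off-screen; no pygame.display needed

        with open("/dev/shm/matrix", "w+b") as f:
            f.truncate(WIDTH * HEIGHT * 3)              # 3 bytes per RGB pixel
            shared = mmap.mmap(f.fileno(), WIDTH * HEIGHT * 3)

            # One frame of "pong": clear, draw the ball, push the pixels out.
            surface.fill((0, 0, 0))
            pygame.draw.circle(surface, (255, 255, 255), (10, 10), 2)

            shared.seek(0)
            shared.write(pygame.image.tostring(surface, "RGB"))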

  • Procedural Planets, Heightmaps and Textures

    - by henryprescott
    I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere. I am familiar with most fractal heightmap generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know). My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like this, leaving the tiling obvious. Could anyone advise me on the best route to take? Any input would be much appreciated. Thanks, Henry.
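
    One seam-free route is to skip the 2D net entirely and sample a 3D noise field at each vertex's position on the sphere; the six cube faces then agree along their shared edges automatically. A minimal sketch in Python (pnoise3 is from the third-party noise package; the scale and octave values are arbitrary):

        import numpy as np
        from noise import pnoise3  # pip install noise

        def height_at(cube_point, octaves=6, scale=1.5):
            # Project the cube-face vertex onto the unit sphere...
            p = cube_point / np.linalg.norm(cube_point)
            # ...and evaluate fractal noise at that 3D position. Because the
            # noise lives in 3D space rather than on an unwrapped 2D map,
            # there are no seams to hide.
            return pnoise3(p[0] * scale, p[1] * scale, p[2] * scale, octaves=octaves)

        # e.g. a vertex in the middle of the +Z cube face:
        print(height_at(np.array([0.25, -0.4, 1.0])))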

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.x and 4.x APIs are well documented, but I'm wondering if there are ANY benefits to sticking with the old OpenGL, or if learning OpenGL 2.x is now a complete waste of time no matter what. In particular, I've wondered whether using the OpenGL 2.x API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.x. Would a driver update on said target platform allow programs written against the modern OpenGL APIs to be released on this old platform? If they both work, which would be faster? Thanks

  • Graphics hardware

    - by Vanangamudi
    Which vendor provides better GPGPU support? My requirements are confined to rendering, utilising the GPU for BSDF building, for example. Intel has started providing Ivy Bridge chipset GPUs, which are said to be comparably fast to HD5960 cards. I'm not against NVIDIA or AMD, but I'm a fan of Intel. How does it compare to NVIDIA in price and performance? If possible, may I know how all of them perform with OpenCL? I'm not sure if it is right to ask this here, but I don't know where else to ask.

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on them myself, but I'd like to know if these are known methods and whether they can be used in practice.

    Hybrid rendering: I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known idea. However, I had the following thought: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks those parts of the image with a special colour, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects?

    Rendered physics: I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) travelling along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would be drawn in a different colour. You only need to render a 1x1 pixel window - the centre of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the colour tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method. This could be extended in a few ways: for larger (non-bullet) objects you render a larger portion of the screen; if you put a special-coloured plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works like the traditional slow-moving iterative physics test as well; and you could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick.

    So are these techniques in use anywhere, and/or are they practical at all?
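
    For reference, the 1x1 read-back idea is essentially classic OpenGL colour picking. A minimal sketch of the read-back half in C (it assumes the scene was just rendered with each object's id encoded as a flat RGB colour, with lighting, blending and anti-aliasing disabled):

        #include <GL/gl.h>

        /* Read the single pixel at the centre of the bullet's view and
         * decode the 24-bit object id that was packed into its RGB colour. */
        unsigned int pick_object_at(int center_x, int center_y)
        {
            unsigned char pixel[4];

            glReadPixels(center_x, center_y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

            /* Decode: id was rendered as (id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF). */
            return (unsigned int)pixel[0]
                 | ((unsigned int)pixel[1] << 8)
                 | ((unsigned int)pixel[2] << 16);
        }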

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open source projects; looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at code? I'm asking this in general terms because if I ask it specifically, the question will only apply to one or two projects rather than an entire group of projects spanning different types of games and levels of difficulty.

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top one. Instead, the transparent parts are opaque black (the clear colour), and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

        uniform sampler2D texture;
        varying vec2 f_texcoord;

        void main() {
            gl_FragColor = texture2D(texture, f_texcoord);
        }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it uses A* to plan a path through Actions that mutate the world state. To give all Actions and Goals a fair chance to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example, estimating the cost of running away from an enemy versus attacking it - how should the costs be calculated so that they are comparable?
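
    One common approach is to express every action's cost in a single shared unit, such as estimated seconds to complete, so that A* compares like with like. A sketch in Java (all names are made up for illustration):

        // Domain-specific facts would live here: positions, health, ammo, ...
        class WorldState { }

        interface Action {
            boolean preconditionsMet(WorldState state);
            WorldState apply(WorldState state);

            // Every action reports cost in the same unit (seconds), so
            // "flee" and "attack" plans are directly comparable in A*.
            float estimatedCostSeconds(WorldState state);
        }

        class Flee implements Action {
            public boolean preconditionsMet(WorldState s) { return true; }
            public WorldState apply(WorldState s) { return s; /* mark "safe" in a real system */ }
            public float estimatedCostSeconds(WorldState s) {
                float distanceToSafety = 20.0f;  // would be read from the world state
                float runSpeed = 5.0f;
                return distanceToSafety / runSpeed;  // 4 seconds
            }
        }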

  • Drawing Grid in 3D view - Mathematically calculate points and draw line between them (Not working)

    - by Deukalion
    I'm trying to draw a simple grid from a starting point, expanded to a given size. I do this mathematically and draw the lines between each point, but DrawPrimitives(LineList) doesn't work the way I expect: I can't even get it to draw lines between four points to create a simple rectangle. So how does this method work, exactly? Some sort of coordinate system:

        [ ][ ][ ][ ][ ][ ][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.0][ ][0.0][ ][2.0][ ]
        [ ][2.1][1.1][ ][1.1][2.1][ ]
        [ ][2.2][ ][0.2][ ][2.2][ ]
        [ ][ ][ ][ ][ ][ ][ ]

    I've checked my method and it works as it should: it calculates all the points needed to form a grid. This way I should be able to create the points between which to draw lines, right? If I supply the method with Size = 2, it starts at 0,0 and works through all the corners (2,2) on each side. So I have the positions of each point. How do I draw lines between them? The vertex count must be the number of points in this case, right? So tell me: if I can't supply this method with Point A, Point B, Point C, Point D to draw a four-vertex rectangle (Point A - B - C - D), how do I do it? How do I even begin to understand it? As far as I'm concerned, that's a "line list": a list of points between which to draw lines. Can anyone explain what I'm missing? I want to do this mathematically so I can create a custom grid that can be altered.
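
    For reference, a LineList treats each consecutive pair of vertices as one independent segment, so a rectangle A-B-C-D needs 8 vertices and a primitive count of 4, not 4 vertices. A sketch in XNA-style C# (assumes this runs in a Game class with an effect pass already active):

        var A = new Vector3(0, 0, 0);
        var B = new Vector3(1, 0, 0);
        var C = new Vector3(1, 1, 0);
        var D = new Vector3(0, 1, 0);

        // Each pair is one segment: A-B, B-C, C-D, D-A.
        var verts = new VertexPositionColor[]
        {
            new VertexPositionColor(A, Color.White), new VertexPositionColor(B, Color.White),
            new VertexPositionColor(B, Color.White), new VertexPositionColor(C, Color.White),
            new VertexPositionColor(C, Color.White), new VertexPositionColor(D, Color.White),
            new VertexPositionColor(D, Color.White), new VertexPositionColor(A, Color.White),
        };

        // primitiveCount is the number of segments: verts.Length / 2 = 4.
        GraphicsDevice.DrawUserPrimitives(PrimitiveType.LineList, verts, 0, verts.Length / 2);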

  • 2D ball collision code problem in XNA: over-accelerated balls that sometimes stick together. Help please? [closed]

    - by Sivan
        public static void Collision(Ball ball1, Ball ball2)
        {
            // Collision normal: the axis between the two ball centres.
            Vector3 x = new Vector3(
                (ball1.BallPosition.X - ball2.BallPosition.X),
                (ball1.BallPosition.Y - ball2.BallPosition.Y), 0);
            x.Normalize();

            // Split ball1's velocity into components along and perpendicular to the normal.
            Vector3 v1 = new Vector3(ball1.Speed, 0);
            float x1 = Vector3.Dot(x, v1);
            Vector3 v1x = x * x1;
            Vector3 v1y = v1 - v1x;

            x = -x;

            // Same split for ball2.
            Vector3 v2 = new Vector3(ball2.Speed, 0);
            float x2 = Vector3.Dot(x, v2);
            Vector3 v2x = x * x2;
            Vector3 v2y = v2 - v2x;

            // One-dimensional elastic collision along the normal, with hardcoded masses.
            float m1 = 12, m2 = 4;
            float combinedMass = m1 + m2;
            Vector3 newVelA = (v1x * ((m1 - m2) / combinedMass)) + (v2x * ((2f * m2) / combinedMass)) + v1y;
            Vector3 newVelB = (v1x * ((2f * m1) / combinedMass)) + (v2x * ((m2 - m1) / combinedMass)) + v2y;

            ball1.Speed = new Vector2(newVelA.X, newVelA.Y);
            ball2.Speed = new Vector2(newVelB.X, newVelB.Y);
        }

  • Modular building technique with angles? (A roof)

    - by Mungoid
    I've been spending a bit of time lately studying the modular buildings of many games and reading/viewing several tutorials about them as well, but almost every example I see uses a plain square building that does not have an angled roof or anything similar. In all my applications (CS6, Blender/Max, UDK) I adhere to the same grid spacing and I get pretty good results, but trying to make modular angled pieces is confusing me, as I'm not sure of the best way to approach it. Below are some shots of my template sheet and the workflow I have been using. Should I do the roof separately, or is it possible for me to keep it in the same texture sheet? The main issue is below. I have made a couple of modular roof pieces, but when I try to use them I end up needing to model multiple other parts to fill gaps based on what roof shape I want. I then model those 'filler' pieces, and now I have that much less space left in my texture sheet, and those pieces are usually not that reusable for anything else. This is where I'm not sure how to proceed. If anyone has any links to documents or papers talking about this, or advice, I would greatly appreciate it! =-)

    My main roof pieces with the gaps.
    My power-of-2 texture sheet, with 16x16 grid squares.
    The texture sheet loaded into Blender on a 16x16 plane, starting to separate and extrude.

  • ScissorStack LIBGDX example?

    - by user36531
    I can't find a good resource/tutorial on how to do this. I would appreciate it if someone could provide a ScissorStack example from an entity class, i.e. using ScissorStack on a PlayerClass such that the map renders only around the player sprite, say 5 tiles, which would then allow me to create a Pawn class and apply the same methodology to give a pawn sprite a lower number, like only rendering 1 tile around the location of the pawn.
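
    Not a full tutorial, but a minimal sketch of the usual libgdx pattern (TILE_SIZE and the player's coordinates are assumptions; the key calls are calculateScissors, pushScissors and popScissors, with a batch flush before popping):

        import com.badlogic.gdx.graphics.Camera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.math.Rectangle;
        import com.badlogic.gdx.scenes.scene2d.utils.ScissorStack;

        public class PlayerClipRenderer {
            private static final float TILE_SIZE = 32f;  // assumption

            // Clip map rendering to a 5-tile radius around the player
            // (px, py are the player's position in world units).
            public void renderClipped(Camera camera, SpriteBatch batch,
                                      float px, float py, Runnable drawMap) {
                Rectangle clipBounds = new Rectangle(
                        px - 5 * TILE_SIZE, py - 5 * TILE_SIZE,
                        11 * TILE_SIZE, 11 * TILE_SIZE);
                Rectangle scissors = new Rectangle();
                ScissorStack.calculateScissors(
                        camera, batch.getTransformMatrix(), clipBounds, scissors);

                if (ScissorStack.pushScissors(scissors)) {
                    drawMap.run();   // whatever draws the map; only pixels inside survive
                    batch.flush();   // flush before popping so the clipping applies
                    ScissorStack.popScissors();
                }
            }
        }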

  • How to calculate continuous motion with angular velocity in 2d

    - by Rulk
    I'm really new to physics. Maybe someone would be able to help me solve the following problem: I need to calculate the position of an agent on the plane (2D) at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion:

        - Initial position
        - Direction (normalised vector)
        - Velocity (linear function of time) - the object always moves along its direction
        - Angular velocity (linear function of time)
        - Optional: external force direction
        - Optional: external force (linear function of time)

    Running a discrete simulation with small time steps is not an option.
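
    For the special case where speed v and angular velocity ω are constant over the step, the position integrates in closed form, so one large step is exact:

        θ(t) = θ0 + ωt
        x(t) = x0 + (v/ω)·(sin θ(t) − sin θ0)
        y(t) = y0 − (v/ω)·(cos θ(t) − cos θ0)

    With v and ω varying linearly in time, the heading becomes quadratic in t and the position integrals are Fresnel-type, which generally need numeric evaluation (e.g. a handful of large substeps of a higher-order integrator rather than many tiny Euler steps).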

  • How are the effects found in "Autodesk Fluid FX" implemented using OpenGL ES?

    - by afds
    How are these kinds of effects technically implemented using OpenGL ES? Is the simulation performed on the GPU (using shaders) or on the CPU, with some smart vertex positioning and texturing? Why does it appear so fast (in terms of performance)? You can check out a video of the app here: http://www.youtube.com/watch?v=F4KOk6QP6kQ

    Edit: here is the presentation for the app: http://www.futuregameon.com/FGO2010_JosStam.pdf

  • In what kind of variable type is the player position stored on a MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk about it briefly... How can software track a player's position so accurately in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:

        1. Set a render target.
        2. Draw each character as a quadrangle, using an orthographic projection, to the render target.
        3. Unset the render target.
        4. Draw the render target texture using a perspective projection and a world transform.

    My problem is how to deal with strings whose character length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be set each frame, but wouldn't that be far too costly?
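
    If this is XNA-style SpriteBatch text (an assumption; the question doesn't say), one option is to measure the string first and size the render target to fit, recreating it only when the text changes rather than every frame. A sketch, assuming the usual Game-class fields:

        // Size the render target to the measured string instead of a fixed size.
        Vector2 size = font.MeasureString(text);      // width/height in pixels
        RenderTarget2D target = new RenderTarget2D(
            GraphicsDevice, (int)Math.Ceiling(size.X), (int)Math.Ceiling(size.Y));

        GraphicsDevice.SetRenderTarget(target);
        GraphicsDevice.Clear(Color.Transparent);
        spriteBatch.Begin();
        spriteBatch.DrawString(font, text, Vector2.Zero, Color.White);
        spriteBatch.End();
        GraphicsDevice.SetRenderTarget(null);
        // Cache 'target' and rebuild it only when 'text' changes.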

  • XNA: Retrieve texture file name during runtime

    - by townsean
    I'm trying to retrieve the names of the texture files (or their locations) on a mesh. I realize that the texture file name information is not preserved when the model is loaded. I've done tons of searching and some experimenting, but with no luck. I've gathered that I need to extend the content pipeline and store the file location somewhere like ModelMeshPart.Tag. My problem is that even when I try to write my own custom processor, I still can't figure out where the texture file name is. :( Any thoughts? Thanks! UPDATE: Okay, so I found something kind of promising: NodeContent.Identity.SourceFilename, only that returns the location of my .X model, and when I go down the node tree it is always null. Then there's the ContentItem.Name property. It seems to have the names of my meshes, but not my actual texture file names. :(
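
    A sketch of the custom-processor route (this assumes the model's materials come through as BasicMaterialContent; the other names are standard XNA 4.0 pipeline types):

        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;

        [ContentProcessor(DisplayName = "Model With Texture Paths")]
        public class TexturePathModelProcessor : ModelProcessor
        {
            public override ModelContent Process(NodeContent input, ContentProcessorContext context)
            {
                ModelContent model = base.Process(input, context);
                foreach (ModelMeshContent mesh in model.Meshes)
                {
                    foreach (ModelMeshPartContent part in mesh.MeshParts)
                    {
                        BasicMaterialContent material = part.Material as BasicMaterialContent;
                        if (material != null && material.Texture != null)
                        {
                            // Filename is the texture's source path; the Tag is
                            // serialized into the .xnb and shows up at runtime
                            // as ModelMeshPart.Tag.
                            part.Tag = material.Texture.Filename;
                        }
                    }
                }
                return model;
            }
        }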

  • Drawing a texture line between two vectors in XNA WP7

    - by Krav
    I want to create a simple graph maker in WP7. The goal is to draw a textured line between two vectors that the user defines with touch. I have already implemented the rotation, and it works, but not correctly, because it doesn't take the line texture's height into account, so there are too many overlapping textures. It does draw the line, but with far too many copies. How could I calculate this correctly? Here is the code:

        public void DrawLine(Vector2 st, Vector2 dest, NodeUnit EdgeParent, NodeUnit EdgeChild)
        {
            float d = Vector2.Distance(st, dest);
            float rotate = (float)(Math.Atan2(st.Y - dest.Y, st.X - dest.X));
            direction = new Vector2(((dest.X - st.X) / (float)d), (dest.Y - st.Y) / (float)d);
            Vector2 _pos = st;

            World.TheHive.Add(new LineHiveMind(linetexture, _pos, rotate, EdgeParent, EdgeChild, new List<LineUnit>()));
            for (int i = 0; i < d; i++)
            {
                World.TheHive.Last()._lines.Add(new LineUnit(linetexture, _pos, rotate, EdgeParent, EdgeChild));
                _pos += direction;
            }
        }

    - d is the distance between st (the starting node) and dest (the destination node)
    - rotate is the rotation
    - direction is the direction from the starting node to the destination node
    - _pos is the moving start position

    Thanks for any suggestions/help!
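
    An alternative that sidesteps the tiling problem entirely: draw the texture once and stretch it between the two points using SpriteBatch's rotation and scale parameters. A sketch (assumes an active spriteBatch and a horizontal line texture):

        // One stretched quad instead of many overlapping tiles.
        Vector2 delta = dest - st;
        float length = delta.Length();
        float rotation = (float)Math.Atan2(delta.Y, delta.X);

        spriteBatch.Draw(
            linetexture,
            st,                        // start point
            null,                      // whole texture
            Color.White,
            rotation,                  // angle toward dest
            Vector2.Zero,              // origin at the texture's left edge
            new Vector2(length / linetexture.Width, 1f),  // stretch to the distance
            SpriteEffects.None,
            0f);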

  • How can I render multiple windows with DirectX 9 in C++?

    - by Friso1990
    I'm trying to render multiple windows, using DirectX 9 and swap chains, but even though I create 2 windows, I only see the first one that I've created. My RendererDX9 header is this:

        #include <d3d9.h>
        #include <Windows.h>
        #include <vector>
        #include "RAT_Renderer.h"

        namespace RAT_ENGINE
        {
            class RAT_RendererDX9 : public RAT_Renderer
            {
            public:
                RAT_RendererDX9();
                ~RAT_RendererDX9();

                void Init(RAT_WindowManager* argWMan);
                void CleanUp();
                void ShowWin();

            private:
                LPDIRECT3D9 renderInterface;       // Used to create the D3DDevice
                LPDIRECT3DDEVICE9 renderDevice;    // Our rendering device
                LPDIRECT3DSWAPCHAIN9* swapChain;   // Swapchain to make multi-window rendering possible
                WNDCLASSEX wc;
                std::vector<HWND> hwindows;

                void Render(int argI);
            };
        }

    And my .cpp file is this:

        #include "RAT_RendererDX9.h"

        static LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam );

        namespace RAT_ENGINE
        {
            RAT_RendererDX9::RAT_RendererDX9() : renderInterface(NULL), renderDevice(NULL)
            {
            }

            RAT_RendererDX9::~RAT_RendererDX9()
            {
            }

            void RAT_RendererDX9::Init(RAT_WindowManager* argWMan)
            {
                wMan = argWMan;

                // Register the window class
                WNDCLASSEX windowClass = { sizeof( WNDCLASSEX ), CS_CLASSDC, MsgProc, 0, 0,
                    GetModuleHandle( NULL ), NULL, NULL, NULL, NULL, "foo", NULL };
                wc = windowClass;
                RegisterClassEx( &wc );

                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    HWND hWnd = CreateWindow( "foo", argWMan->getWindow(i)->getName().c_str(),
                        WS_OVERLAPPEDWINDOW,
                        argWMan->getWindow(i)->getX(), argWMan->getWindow(i)->getY(),
                        argWMan->getWindow(i)->getWidth(), argWMan->getWindow(i)->getHeight(),
                        NULL, NULL, wc.hInstance, NULL );
                    hwindows.push_back(hWnd);
                }

                // Create the D3D object, which is needed to create the D3DDevice.
                renderInterface = (LPDIRECT3D9)Direct3DCreate9( D3D_SDK_VERSION );

                // Set up the structure used to create the D3DDevice. Most parameters are
                // zeroed out. We set Windowed to TRUE, since we want to do D3D in a
                // window, and then set the SwapEffect to "discard", which is the most
                // efficient method of presenting the back buffer to the display. And
                // we request a back buffer format that matches the current desktop display
                // format.
                D3DPRESENT_PARAMETERS deviceConfig;
                ZeroMemory( &deviceConfig, sizeof( deviceConfig ) );
                deviceConfig.Windowed = TRUE;
                deviceConfig.SwapEffect = D3DSWAPEFFECT_DISCARD;
                deviceConfig.BackBufferFormat = D3DFMT_UNKNOWN;
                deviceConfig.BackBufferHeight = 1024;
                deviceConfig.BackBufferWidth = 768;
                deviceConfig.EnableAutoDepthStencil = TRUE;
                deviceConfig.AutoDepthStencilFormat = D3DFMT_D16;

                // Create the Direct3D device. Here we are using the default adapter (most
                // systems only have one, unless they have multiple graphics hardware cards
                // installed) and requesting the HAL (which is saying we want the hardware
                // device rather than a software one). Software vertex processing is
                // specified since we know it will work on all cards. On cards that support
                // hardware vertex processing, though, we would see a big performance gain
                // by specifying hardware vertex processing.
                renderInterface->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwindows[0],
                    D3DCREATE_SOFTWARE_VERTEXPROCESSING, &deviceConfig, &renderDevice );

                this->swapChain = new LPDIRECT3DSWAPCHAIN9[wMan->getWindows().size()];
                this->renderDevice->GetSwapChain(0, &swapChain[0]);
                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    renderDevice->CreateAdditionalSwapChain(&deviceConfig, &swapChain[i]);
                }

                renderDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW); // Set cullmode to counterclockwise culling to save resources
                renderDevice->SetRenderState(D3DRS_AMBIENT, 0xffffffff);   // Turn on ambient lighting
                renderDevice->SetRenderState(D3DRS_ZENABLE, TRUE);         // Turn on the zbuffer
            }

            void RAT_RendererDX9::CleanUp()
            {
                renderDevice->Release();
                renderInterface->Release();
            }

            void RAT_RendererDX9::Render(int argI)
            {
                // Clear the backbuffer to a blue color
                renderDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 255 ), 1.0f, 0 );

                // Set draw target
                LPDIRECT3DSURFACE9 backBuffer = NULL;
                this->swapChain[argI]->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
                this->renderDevice->SetRenderTarget(0, backBuffer);

                // Begin the scene
                renderDevice->BeginScene();
                // End the scene
                renderDevice->EndScene();

                swapChain[argI]->Present(NULL, NULL, hwindows[argI], NULL, 0);
            }

            void RAT_RendererDX9::ShowWin()
            {
                for (int i = 0; i < wMan->getWindows().size(); ++i)
                {
                    ShowWindow( hwindows[i], SW_SHOWDEFAULT );
                    UpdateWindow( hwindows[i] );

                    // Enter the message loop
                    MSG msg;
                    while( GetMessage( &msg, NULL, 0, 0 ) )
                    {
                        if (PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) )
                        {
                            TranslateMessage( &msg );
                            DispatchMessage( &msg );
                        }
                        else
                        {
                            Render(i);
                        }
                    }
                }
            }
        }

        LRESULT CALLBACK MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
        {
            switch( msg )
            {
                case WM_DESTROY:
                    //CleanUp();
                    PostQuitMessage( 0 );
                    return 0;
                case WM_PAINT:
                    //Render();
                    ValidateRect( hWnd, NULL );
                    return 0;
            }
            return DefWindowProc( hWnd, msg, wParam, lParam );
        }

    I've made a sample function to make multiple windows:

        void RunSample1()
        {
            //Create the window manager.
            RAT_ENGINE::RAT_WindowManager* wMan = new RAT_ENGINE::RAT_WindowManager();

            //Create the render manager.
            RAT_ENGINE::RAT_RenderManager* rMan = new RAT_ENGINE::RAT_RenderManager();

            //Create a window.
            //This is currently needed to initialize the render manager and create a renderer.
            wMan->CreateRATWindow("Sample 1 - 1", 10, 20, 640, 480);
            wMan->CreateRATWindow("Sample 1 - 2", 150, 100, 480, 640);

            //Initialize the render manager.
            rMan->Init(wMan);

            //Show the window.
            rMan->getRenderer()->ShowWin();
        }

    How do I get the multiple windows to work?

  • How to adjust the shooting angle of an object

    - by Blue
    I've been trying to add an angle-adjustment feature to a power bar that I got from unity3dStudents, but I can't seem to get the code right. I'm using AddForce on a rigidbody; it works, but the power is too great. I also found that rotating the object it shoots from changes the angle, but I don't know how to proceed from there. Can somebody show me the problem with the script below, i.e. how to add height to the AddForce without it going too far up or to the side? Or how to change the angle of the object?

        var theAngle : int;
        var maxAngle : int = 130;
        var minAngle : int = 0;
        var angleIncreasing : boolean = false;
        var angleDecreasing : boolean = false;
        var rotationSpeed : float = 10;
        var ball : Rigidbody;
        var spawnPos : Transform;
        var shotForce : float = 25;

        function Update () {
            if (Input.GetKeyDown("k")) {
                angleIncreasing = true;
                angleDecreasing = false;
            }
            if (Input.GetKeyUp("k")) {
                angleIncreasing = false;
            }
            if (Input.GetKeyDown("l")) {
                angleIncreasing = false;
                angleDecreasing = true;
            }
            if (Input.GetKeyUp("l")) {
                angleDecreasing = false;
            }

            -------

            if (angleIncreasing) {
                theAngle += Time.deltaTime * rotationSpeed;
                if (theAngle > maxAngle) {
                    theAngle = maxAngle;
                }
            }
            if (angleDecreasing) {
                theAngle -= Time.deltaTime * rotationSpeed;
                if (theAngle < minAngle) {
                    theAngle = minAngle;
                }
            }
        }

        function Shoot (power : float, angle : int) {
            ----

            var forward : Vector3 = spawnPos.forward;
            var upward : Vector3 = spawnPos.up;
            pFab.AddForce(forward * power * shotForce);
            pFab.AddForce(upward * angle * 10);

            ----
        }
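
    For what it's worth, one way to keep the total force constant while varying elevation is to fold the angle into a single launch direction instead of adding a separate upward force (a sketch, with θ in radians; not a drop-in fix for the script above):

        direction = forward·cos(θ) + up·sin(θ)
        force     = direction · power · shotForce

    Because direction is a unit vector for any θ, raising the angle trades forward push for upward push rather than adding extra force on top.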

  • How exactly does app ranking work?

    - by qweasdzxc1
    So I've been in the app industry for around half a year and I still don't know exactly how ranking higher helps an app get more downloads. That sounds like a question with an obvious answer, but here's what's going through my mind, so hear me out: unless your app is ranked within the top 100, no one can see it in the featured categories. So even if my app jumped from 400th to 300th place, would there really be a difference in downloads? And I'm talking about ranking 400th to 300th in my specific category; indie developers like me don't even come close to ranking in the overall category. So far, the only usefulness I can see in a higher rank is getting featured or something like that, but big companies have tons of money to throw at marketing... so the chances of any indie developer getting featured are slim. The only other thing I can see ranking being good for is ranking for your keywords, so that when someone searches for that word your app will hopefully appear in the top 10-25 results. Can anyone confirm my thoughts or add anything else that I might have missed? How exactly do users find your app if you're not in the top 100 apps in your category?

  • What is the most serious limitation of Unity?

    - by ashes999
    Having read this heated question about Unity vs. UDK vs. ID something, I'm curious to know: what are the repeatedly-hit, most crippling limitations of Unity? To keep this question non-subjective, again: I'm asking what the top repeated offenders are - the things that, as a Unity user, you really wish someone had told you about before you started using it. I have heard from someone that Unity does not deal well with version control, since it generates a lot of binary files (which are un-diffable). To me that is not really crippling, as I work alone. Thoughts?

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object containing a byte[20] is passed into a BlockingCollection on one thread, and another thread takes the object back out with a byte[0] via BlockingCollection.Take(). I think this is a threading issue, but I do not know where or why it happens, considering that BlockingCollection is a concurrent collection. Sometimes, on thread 2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated.

    MessageBuffer.cs:

        public class MessageBuffer : BlockingCollection<Message>
        {
        }

    In the class that has Listener() and ReceivedMessageHandler(object messageProcessor):

        private MessageBuffer RecievedMessageBuffer;

    On thread 1:

        private void Listener()
        {
            while (this.IsListening)
            {
                try
                {
                    Message message = Message.ReadMessage(this.Stream, this);
                    if (message != null)
                    {
                        this.RecievedMessageBuffer.Add(message);
                    }
                }
                catch (IOException ex)
                {
                    if (!this.Client.Connected)
                    {
                        this.OnDisconnected();
                    }
                    else
                    {
                        Logger.LogException(ex.ToString());
                        this.OnDisconnected();
                    }
                }
                catch (Exception ex)
                {
                    Logger.LogException(ex.ToString());
                    this.OnDisconnected();
                }
            }
        }

    Message.ReadMessage(NetworkStream stream, iTcpConnectClient client):

        public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client)
        {
            int ClassType = -1;
            Message message = null;
            try
            {
                ClassType = stream.ReadByte();
                if (ClassType == -1)
                {
                    return null;
                }
                if (!Message.IDTOCLASS.ContainsKey((byte)ClassType))
                {
                    throw new IOException("Class type not found");
                }
                message = Message.GetNewMessage((byte)ClassType);
                message.Client = client;
                message.ReadData(stream);
                if (message.Buffer.Length < message.MessageSize + Message.HeaderSize)
                {
                    return null;
                }
            }
            catch (IOException ex)
            {
                Logger.LogException(ex.ToString());
                throw ex;
            }
            catch (Exception ex)
            {
                Logger.LogException(ex.ToString());
                //throw ex;
            }
            return message;
        }

    On thread 2:

        private void ReceivedMessageHandler(object messageProcessor)
        {
            if (messageProcessor != null)
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage(messageProcessor);
                }
            }
            else
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage();
                }
            }
        }

    PlayerStateMessage.cs:

        public class PlayerStateMessage : Message
        {
            public GameObject PlayerState;

            public override int MessageSize
            {
                get { return 12; }
            }

            public PlayerStateMessage() : base()
            {
                this.PlayerState = new GameObject();
            }

            public PlayerStateMessage(GameObject playerState)
            {
                this.PlayerState = playerState;
            }

            public override void Reconstruct()
            {
                this.PlayerState.Poisiton = this.GetVector2FromBuffer(0);
                this.PlayerState.Rotation = this.GetFloatFromBuffer(8);
                base.Reconstruct();
            }

            public override void Deconstruct()
            {
                this.CreateBuffer();
                this.AddToBuffer(this.PlayerState.Poisiton, 0);
                this.AddToBuffer(this.PlayerState.Rotation, 8);
                base.Deconstruct();
            }

            public override void HandleMessage(object messageProcessor)
            {
                ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this);
            }
        }

    Message.GetVector2FromBuffer(int bufferlocation) - this is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]:

        public Vector2 GetVector2FromBuffer(int bufferlocation)
        {
            return new Vector2(
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation),
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4));
        }

  • How do I optimize searching for the nearest point?

    - by Rootosaurus
    For a little project of mine I'm trying to implement a space colonization algorithm in order to grow trees. The current implementation of the algorithm works fine, but I have to optimize the whole thing to make it generate faster. I work with 1 to 300K random attraction points per tree, and it takes a lot of time to compute and compare the distances between attraction points and tree nodes in order to keep only the closest tree node for each attraction point. So I was wondering whether solutions exist (I know they must) to avoid the time lost looping over every tree node for every attraction point to find the closest, and so on until the tree is finished.
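
    The usual fix is a spatial index, so each attraction point queries only its neighbourhood instead of scanning every node; a k-d tree is the classic choice. A sketch with SciPy (array shapes and sizes are placeholders):

        import numpy as np
        from scipy.spatial import cKDTree

        # Placeholder data: 300k attraction points, current tree nodes.
        attraction_points = np.random.rand(300_000, 3)
        tree_nodes = np.random.rand(500, 3)

        # Rebuild the k-d tree whenever the node set changes, then query all
        # attraction points at once: one O(log n) lookup per point instead of
        # a full scan over every node.
        kdtree = cKDTree(tree_nodes)
        distances, nearest_node_index = kdtree.query(attraction_points)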
