Search Results

Search found 97855 results on 3915 pages for 'code performance'.

Page 139 of 3915

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:
    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).
    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, it multiplies them by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:
    - The player starts to drag a piece.
    - The UI asks the game logic for the legal movement area of that piece and lets the player drag it within that area.
    - The player lets go of the piece.
    - The UI snaps the piece to the grid (so that it is at a valid logical position).
    - The UI tells the game logic the new logical position (via mutator methods, which I'd rather avoid).
    I'm not quite happy with that. I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state. I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI. I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:
    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping-to-grid and piece-dragging code belong to the UI or to the physics layer?
    - Would such a physics layer typically work with pixel sizes, or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. Which is more common, this approach or asking for the legal movement area first?
    [1] "Layer" is probably not the right word for what I mean, but "subsystem" sounds overblown and "class" is misleading, because each layer can consist of several classes.
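    For illustration only: the post names no language, so here is a minimal Java sketch of the direction the questions point at, with every identifier (GameLogic, legalRange, movePiece) made up rather than taken from the project, and movement simplified to one horizontal row. The idea is that the logic layer owns the grid and answers the legal-range query, so the collision and snapping rules become unit-testable and the UI only converts cells to pixels and reports the drop position.

        // Hypothetical sketch, not the poster's code: the logic layer owns the grid,
        // answers "how far may this piece move?", and clamps the reported drop position,
        // so collision and snapping rules stay out of the UI.
        import java.util.HashMap;
        import java.util.Map;

        final class GameLogic {
            private final int gridWidth;                                           // width in cells
            private final Map<Integer, Integer> piecePositions = new HashMap<>();  // pieceId -> leftmost cell
            private final Map<Integer, Integer> pieceWidths = new HashMap<>();     // pieceId -> width in cells

            GameLogic(int gridWidth) {
                this.gridWidth = gridWidth;
            }

            void addPiece(int pieceId, int position, int width) {
                piecePositions.put(pieceId, position);
                pieceWidths.put(pieceId, width);
            }

            /** Leftmost and rightmost legal cells for a horizontal drag of one piece. */
            int[] legalRange(int pieceId) {
                int pos = piecePositions.get(pieceId);
                int width = pieceWidths.get(pieceId);
                int min = 0;
                int max = gridWidth - width;
                for (Map.Entry<Integer, Integer> other : piecePositions.entrySet()) {
                    if (other.getKey() == pieceId) continue;
                    int otherPos = other.getValue();
                    int otherWidth = pieceWidths.get(other.getKey());
                    if (otherPos + otherWidth <= pos) {
                        min = Math.max(min, otherPos + otherWidth);  // blocker on the left
                    } else if (otherPos >= pos + width) {
                        max = Math.min(max, otherPos - width);       // blocker on the right
                    }
                }
                return new int[] { min, max };
            }

            /** The UI reports only the drop cell; the logic clamps it to a legal position. */
            void movePiece(int pieceId, int requestedPosition) {
                int[] range = legalRange(pieceId);
                int clamped = Math.max(range[0], Math.min(range[1], requestedPosition));
                piecePositions.put(pieceId, clamped);
            }
        }

    With a shape like this, the UI's only jobs are multiplying cell coordinates by a pixel constant for drawing and constraining the drag to whatever legalRange() returns, while the rules themselves can be unit-tested without any UI at all.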

    Read the article

  • Performance of a VMware machine on different computers

    - by bxshi
    I'm working on a filesystem improvement project, and I found a paper that discusses cheating on benchmarks; it suggests that using VMs could help others reproduce our results. The question is: if I have made a specific VMware virtual machine, will it run the same on different computers and platforms? For example, I have a virtual machine with 1 GB of RAM, a 4 GB hard disk and one 2 GHz CPU core. Will that run the same on a quad-core 3 GHz CPU as on a 2.4 GHz P4? What if the computer has 4 GB of RAM? Will VMware use some buffering mechanism to improve performance? If so, does that mean the VM will run slower on a 2 GB RAM host than on a 4 GB host? I hope you can help me with that, or just tell me where I can find the answer.

    Read the article

  • NFS performance troubleshooting

    - by aix
    I am troubleshooting NFS performance issues on Linux, and I'm looking at the following nfsiostat output:

        host:/path mounted on /path:

           op/s       rpc bklog
           96.75      0.01

        read:  ops/s     kB/s        kB/op     retrans     avg RTT (ms)   avg exe (ms)
               86.561    1408.294    16.269    0 (0.0%)    34.595         89.688

        write: ops/s     kB/s        kB/op     retrans     avg RTT (ms)   avg exe (ms)
               10.113    326.282     32.265    0 (0.0%)    19.688         72446.246

    What exactly is the meaning of avg RTT (ms) and avg exe (ms)? avg exe for writes is 72 seconds(!) -- would you say this is abnormal and, if so, how do I go about troubleshooting this further? I'm using NFS over TCP. Both the client and the server are on the same GigE LAN.

    Read the article

  • More Chicago Code Camp Information

    - by Tim Murphy
    It seems the guys have posted the venue. The Chicago Code Camp will be held at the Illinois Institute of Technology on May 1, 2010. Sign up and join in. IIT, Stuart Building, 10 West 31st, Chicago, IL 60616. del.icio.us Tags: Chicago Code Camp

    Read the article

  • Structuring multi-threaded programs

    - by davidk01
    Are there any canonical sources for learning how to structure multi-threaded programs? Even with all the concurrency utility classes that Java provides, I'm having a hard time properly structuring multi-threaded programs. Whenever threads are involved, my code becomes very brittle: any little change can potentially break the program, because the code that jumps back and forth between the threads tends to be very convoluted.
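    Not a substitute for such a source, but as an illustration of the structure most of them recommend, here is a minimal, hypothetical Java sketch (the word-counting task is just a stand-in): all threading is confined to a single ExecutorService, and each task works on its own immutable input and returns a value through a Future, so no code has to jump back and forth between threads.

        // Hypothetical sketch: confine concurrency to one coordinator, submit pure
        // tasks, and collect results -- the rest of the program never touches a Thread.
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        final class WordCountDemo {
            public static void main(String[] args) throws InterruptedException, ExecutionException {
                List<String> documents = List.of("alpha beta", "gamma delta epsilon", "zeta");

                ExecutorService pool = Executors.newFixedThreadPool(4);
                try {
                    List<Future<Integer>> results = new ArrayList<>();
                    for (String doc : documents) {
                        // Each task reads only its own immutable input -- no shared mutable state.
                        Callable<Integer> task = () -> doc.split("\\s+").length;
                        results.add(pool.submit(task));
                    }

                    int total = 0;
                    for (Future<Integer> f : results) {
                        total += f.get();  // blocks until that task has finished
                    }
                    System.out.println("Total words: " + total);
                } finally {
                    pool.shutdown();
                }
            }
        }

    The payoff of this shape is that each Callable can be unit-tested on the calling thread, and the only concurrency-aware code is the handful of lines around the pool, which is where brittleness usually hides.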

    Read the article

  • Bill Gates and Mark Zuckerberg to teach programming through the Code.org initiative

    Bill Gates and Zuckerberg join the Hour of Code campaign, a code.org initiative aimed at teaching programming to the youngest students. The harvest is plentiful in the vast US job market, but the workers are few. The US Bureau of Labor Statistics estimates that the coming years should give rise to nearly 122,000 computing-related job openings. The requirement for applying to these positions will be to have at least...

    Read the article

  • What's the best book for coding conventions?

    - by Joschua
    What's the best book about coding conventions (and perhaps design patterns) that you would highly recommend (ideally with code samples in Python, C++ or Java)? It would be good if the book (or another one) also covered project management and agile software development where appropriate (for example, how projects fail through spaghetti code). I will accept the answer with the book(s) that look the most interesting (maximum two books per answer, please), because the reading might take a while :)

    Read the article

  • Slow Network Performance with Windows Server 2008 SP1

    - by Axeva
    I recently installed Service Pack 1 for Windows Server 2008. Since that time, network performance has been awful. Both Windows 7 and Mac Snow Leopard clients have seen miserable speeds when trying to read or write to the server. This is the exact update: Windows Server 2008 R2 Service Pack 1 x64 Edition (KB976932) It's a very simple file server setup. No Domain or Active Directory. Essentially just shared folders. It's Windows Web Server that I'm running. Are there any settings I can tweak? Should I roll back the update (doesn't seem wise)? Update: I've turned off the Power Management for the Network Adapter. That may help. If it doesn't have to be powered on at the start of a request, it should speed things up. Or so I would assume.

    Read the article

  • My Interview With DevExpress Regarding Silicon Valley Code Camp

    Last week, while at Microsoft's TechEd 2010, Mehul Harry, Technical Evangelist for Developer Express, interviewed me about our upcoming Silicon Valley Code Camp (of which Dev Express is a platinum...

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food. In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a Windows version of the Java runtime, with bounds checking, automatic garbage collection, structured exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name.
    The logical step for Microsoft would be to base the entire operating system on the .NET Framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET, "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive yet dormant as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security.
    So has Microsoft converted its existing applications to the .NET Framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, as in server applications with IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters, who committed themselves fully to the .NET Framework, only to find parts of the Ambivalent Microsoft Empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X architecture. Cheers, Laila

    Read the article

  • T-SQL Tuesday 24: Ode to Composable Code

    - by merrillaldrich
    I love the T-SQL Tuesday tradition, started by Adam Machanic and hosted this month by Brad Shulz . I am a little pressed for time this month, so today’s post is a short ode to how I love saving time with Composable Code in SQL. Composability is one of the very best features of SQL, but sometimes gets picked on due to both real and imaginary performance worries. I like to pick composable solutions when I can, while keeping the perf issues in mind, because they are just so handy and eliminate so much...(read more)

    Read the article

  • MySQL Linked Server and SQL Server 2008 Express Performance

    - by Jeffrey
    Hi All, I am currently trying to set up a MySQL linked server via SQL Server 2008 Express. I have tried two methods: creating a DSN using the MySQL 5.1 ODBC driver, and using the Cherry Software OLE DB driver. The method that I prefer would be using the ODBC driver, but both run horrendously slowly (doing one simple join takes about 5 minutes). Is there any way I can get better performance? We are trying to cross-query between multiple MySQL databases on different servers, and this seems to be the method we think would work well. Any comments, suggestions, etc. would be greatly appreciated. Regards, Jeffrey

    Read the article

  • Using Code Rocket's Flowchart and Pseudocode Tool Support

    This article provides a walkthrough of a couple of iterations of using Code Rocket's pseudocode and flowchart tool support for designing and implementing a form of binary search algorithm with the Code Rocket plug-in for Visual Studio.

    Read the article

  • Ruby: how to step through ruby code

    - by user1647484
    I'm trying to learn how to step through Ruby code (written by more experienced programmers) in order to improve my knowledge of Ruby. The problem is that I don't really know how to do it well. Googling the topic brought me to an about.com page on Ruby; I thought a higher-quality, more comprehensive answer should belong on StackOverflow, so if anyone can write an answer (with an example) for it, showing how a beginner can step through code to learn as much as possible about it, it would be much appreciated.

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: "On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns." Has that amount of transparency happened? Is there a TechNet article somewhere?
    Say my score was limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help: "If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9." You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from three Windows blog entries and an article:

    Memory subscore
    Score     Conditions
    =======   ================================
    1.0       < 256 MB
    2.0       < 500 MB
    2.9       <= 512 MB
    3.5       < 704 MB
    3.9       < 944 MB
    4.5       <= 1.5 GB
    5.9       < 4.0 GB - 64 MB on a 64-bit OS; Windows Vista highest score
    7.9       Windows 7 highest score

    Graphics subscore
    Score     Conditions
    =======   ======================
    1.0       doesn't support DX9
    1.9       doesn't support WDDM
    4.9       does not support Pixel Shader 3.0
    5.9       doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    7.9       Windows 7 highest score

    Gaming graphics subscore
    Score     Result
    =======   =============================
    1.0       doesn't support D3D
    2.0       supports D3D9, DX9 and WDDM
    5.9       doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    6.0-6.9   good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9   even higher framerates at even higher resolutions
    7.9       Windows 7 highest score

    Processor subscore
    Score     Conditions
    =======   ==========================================================================
    5.9       Windows Vista highest score
    6.0-6.9   many quad-core processors will be able to score in the high 6 / low 7 ranges
    7.0+      many quad-core processors will be able to score in the high 6 / low 7 ranges
    7.9       8-core systems will be able to approach 8.9; Windows 7 highest score

    Primary hard disk subscore (note)
    Score     Conditions
    =======   ========================================
    1.9       Limit for pathological drives that stop responding when pending writes
    2.0       Limit for pathological drives that stop responding when pending writes
    2.9       Limit for pathological drives that stop responding when pending writes
    3.0       Limit for pathological drives that stop responding when pending writes
    5.9       highest you're likely to see without SSD; Windows Vista highest score
    7.9       Windows 7 highest score

    Bonus Chatter
    You can find your WEI detailed test results in C:\Windows\Performance\WinSAT\DataStore, e.g. 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
          <WinSPR>
            <DiskScore>5.9</DiskScore>
          </WinSPR>
          <Metrics>
            <DiskMetrics>
              <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
              <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
              <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
          </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine, how do I increase my hard drive's random I/O throughput?

    Update - Amount of memory limits rating
    Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. And from xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
          <LimitsApplied>
            <MemoryScore>
              <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
          </LimitsApplied>
        </WinSPR>

    References
    - Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    - Understand and improve your computer's performance in Windows Vista
    - Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"

    Read the article

  • Why does a better isolation level mean better performance in MS SQL Server?

    - by Oleg Zhylin
    When measuring performance of my query, I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a better isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.

    Read the article

  • Diff annotation tool

    - by l0b0
    Among the 11 proven practices for more effective, efficient peer code review, diff annotation seems to be the one particularly well suited to tool assistance. The article is written by the architect of SmartBear's CodeCollaborator, so he of course recommends using that. Does anyone know of any alternatives? I can't think of anything that would be even close to paper+pen+marker in pure developer efficiency when it comes to explaining a piece of code.

    Read the article

  • Running GL ES 2.0 code under Linux (no Android, no iOS)

    - by user827992
    I need to write OpenGL ES 2.0 code, and I would like to do this and run the programs on my desktop for practical reasons. Now, I have already tried the official GLES SDK from ATI for my video card, but it doesn't even run the examples that come with the SDK itself. I'm not looking for performance here; even a software-based rendering pipeline would be enough. I just need full support for GLES 2.0 and GLSL to write and run GL code. Is there a reliable solution for this under Ubuntu Linux?

    Read the article

  • Speaking At The Chicago Code Camp

    - by Tim Murphy
    I just got news that my talk on Office Open XML has been accepted for the Chicago Code Camp.  I hear that they will be announcing the full schedule of sessions soon.  Be sure to register and join us.  As a bonus the guys from .NET Rocks will be there. http://www.chicagocodecamp.com del.icio.us Tags: .NET Rocks,Chicago Code Camp,Speaking,OOXML SDK 2.0,OOXML,Office Open XML,PSC Group

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

        if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" }
        if (shExpMatch(host, "youtube.com"))     { return "PROXY proxy.domain.tld:8080; DIRECT" }

    At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains to it, and I'm guessing it will eventually be a list of 500-1000 domains. It's important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks
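    One common way to keep a PAC file fast as the list grows is to replace the long chain of shExpMatch() calls with a single lookup in an object keyed by host name, so the cost stays roughly constant instead of growing with the number of domains. A minimal sketch of that idea follows; it is written as a PAC file (which is plain JavaScript), and the domains and proxy address are just the placeholders from the question.

        // Hypothetical sketch: one hash lookup instead of hundreds of shExpMatch() calls.
        // Domains and proxy address are placeholders taken from the question.
        var proxiedHosts = {
            "www.youtube.com": true,
            "youtube.com": true
            // ...add the remaining domains here...
        };

        function FindProxyForURL(url, host) {
            if (proxiedHosts[host.toLowerCase()] === true) {
                return "PROXY proxy.domain.tld:8080; DIRECT";
            }
            return "DIRECT";
        }

    Matching on the exact host does lose the wildcard behaviour of shExpMatch(), so hosts like www.youtube.com and youtube.com each need their own entry, as above.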

    Read the article
