Search Results

Search found 28411 results on 1137 pages for 'think'.

  • What type of pattern would be used in this case?

    - by Admiral Kunkka
    I want to know how to tackle this type of scenario. We are building a person's background from scratch, and I want to know, conceptually, how to proceed with a sound object-oriented design in both structure and execution. I've been reading about Factory patterns, Model-View-Controller, dependency injection, and Singletons, and I can't seem to fit these kinds of design decisions into what I'm trying to do. First and foremost, I started with a big jack-of-all-trades class. Then I read some more, and the advice was to make sure your classes have only a single purpose, which makes sense, so I started breaking certain things out into other classes. Okay, cool. Now I'm looking at dependency injection and don't really follow what's going on. Here is an example of the kind of hierarchy I need to accomplish:

      - class Person needs to access and build from a multitude of different classes
      - class Culture needs to access a sub-class for culture benefits
      - class Social needs to access class Culture and other sub-classes
      - class Birth needs to access Social, Culture, and other sub-classes
      - classes Childhood/Adolescence/Adulthood need to access everything

    Depending on different rolls, this hierarchy also needs to create multiple people, such as a Family, and their backgrounds, using some if not all of these same classes. Think of it as a people generator: all random, with backgrounds and things that happen to them (ageing, deaths of loved ones, military careers, etc.). Most of the generation is done randomly, making calls to mt_rand to pick from most of the selections inside the classes, so the data is absolutely random. I have most of the bulk data down and was looking for some insight from fellow programmers. What do you think?
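    As an illustration only, here is a minimal sketch of composition with constructor injection for this kind of generator, in Python rather than PHP, with invented class and attribute names. The factory function is the single place that wires the object graph together:

        import random

        class Culture:
            def __init__(self, rng):
                # the random source is injected, never created internally
                self.name = rng.choice(["Nomad", "Urban", "Seafaring"])
                self.benefit = rng.choice(["trade", "navigation", "herbalism"])

        class Social:
            def __init__(self, rng, culture):
                self.culture = culture
                self.standing = rng.choice(["peasant", "merchant", "noble"])

        class Person:
            def __init__(self, culture, social):
                self.culture = culture
                self.social = social
                self.events = []  # life events appended by later stages

        def make_person(rng):
            # the factory wires collaborators together in one place
            culture = Culture(rng)
            social = Social(rng, culture)
            return Person(culture, social)

        def make_family(rng, size):
            return [make_person(rng) for _ in range(size)]

        hero = make_person(random.Random())
        print(hero.social.standing, hero.culture.benefit)

    The point of injecting the collaborators (including the random source) is that each class keeps a single purpose, and passing a seeded random.Random makes whole generated families reproducible in tests.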

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers one user needs to connect may not be the same as those another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E.

    [Image generated with the help of NCTM Graph Creator]

    I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least without non-trivial modifications):

      - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
      - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
      - If two computers are connected by more than one cable, each user may own no more than one of those cables.

    I'll need to do several operations on this graph, such as:

      - determining whether any particular pair of computers is connected for a given user
      - identifying the optimal path for a given user to connect target computers
      - identifying the highest-latency computer connection for a given user (i.e. the longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
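    For what it's worth, a minimal sketch of that modified adjacency list (Python, with invented names): edges are stored as explicit objects, so parallel cables stay distinct and each carries its own length and owner.

        from collections import defaultdict
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Edge:
            a: str
            b: str
            length: float
            owner: Optional[str] = None   # None while the cable is unclaimed

        class Network:
            def __init__(self):
                # multigraph: each node maps to its incident Edge objects
                self.adj = defaultdict(list)

            def add_cable(self, a, b, length):
                e = Edge(a, b, length)
                self.adj[a].append(e)
                self.adj[b].append(e)
                return e

            def connected(self, start, goal, user):
                # depth-first search over edges this user may still use
                seen, stack = {start}, [start]
                while stack:
                    node = stack.pop()
                    if node == goal:
                        return True
                    for e in self.adj[node]:
                        nxt = e.b if e.a == node else e.a
                        usable = e.owner is None or e.owner == user
                        if usable and nxt not in seen:
                            seen.add(nxt)
                            stack.append(nxt)
                return False

        net = Network()
        first = net.add_cable("D", "E", 1.0)
        net.add_cable("D", "E", 1.0)           # the second, parallel cable
        net.add_cable("B", "D", 2.0)
        first.owner = "green"                  # green claims one D-E cable
        print(net.connected("B", "E", "red"))  # True, via the free cable

    Path-finding and longest-path queries would filter edges with the same ownership test. Per-user copies of the graph would also work, but a single shared structure keeps the one-owner-per-cable rule in exactly one place.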

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player controls his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing from every article I read.

    Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and a latency of 200ms. How do I know when 100ms have passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? The client would then start its clock at t=150, and at t=250 it would think 100ms have passed since it connected to the server, when in fact only 50ms have. Hopefully the above paragraph is understandable. The summarized question is: how do I know at which tick to start simulating the client?

    EDIT: This is how I ended up doing it. The client keeps a clock (approximately) in sync with the server. The client then simulates the world at

        simulationTime = syncedTime - avg(RTT)/2 - interpolationTime

    The round-trip time can fluctuate, so I average it over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions; I'm currently simulating bad network connections, but it's looking good so far. Does anyone see any possible problems?
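    To make the EDIT concrete, here is a rough sketch of that bookkeeping (Python; names are invented, and it assumes at least one ping/pong has completed before simulation_time is read):

        import time
        from collections import deque

        class ClockSync:
            def __init__(self, window=16, interp_delay=0.100):
                self.rtts = deque(maxlen=window)  # only recent samples kept
                self.offset = 0.0                 # estimated serverTime - clientTime
                self.interp_delay = interp_delay

            def on_pong(self, sent_at, server_time):
                # a ping sent at sent_at came back stamped with the server clock
                now = time.monotonic()
                self.rtts.append(now - sent_at)
                # assume the stamp was taken halfway through the round trip
                self.offset = server_time + (now - sent_at) / 2 - now

            def simulation_time(self):
                synced = time.monotonic() + self.offset
                avg_rtt = sum(self.rtts) / len(self.rtts)
                return synced - avg_rtt / 2 - self.interp_delay

    Because the deque discards old samples, a lasting change in latency shifts the average within a window's worth of pings instead of being diluted by the whole session's history.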

  • Need a good quality bitmap rotation algorithm for Android

    - by Lumis
    I am creating a kaleidoscopic effect on an Android tablet. I am using the code below to rotate a slice of an image, but as you can see in the image, rotating a bitmap by 60 degrees distorts it quite a lot (red rectangles); it is smudging the image. I have set the dither and anti-alias flags, but that does not help much. I think it is just not a very sophisticated bitmap rotation algorithm:

        canvas.save();
        canvas.rotate(angle, screenW/2, screenH/2);
        canvas.drawBitmap(picSlice, screenW/2, screenH/2, pOutput);
        canvas.restore();

    So I wonder if you can help me find a better way to rotate a bitmap. It does not have to be fast, because I intend to use the high-quality rotation only when saving the screen to the SD card; I would redraw the screen in memory before saving. Do you know any comprehensible or replicable algorithm for bitmap rotation that I could program or use as a library? Or any other suggestion?

    EDIT: The answers below made me wonder whether Android had a bilinear or bicubic interpolation option, and after some searching I found that it does, called FilterBitmap. After applying it to my paint:

        pOutput.setFilterBitmap(true);

    I get a much better result.
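    For reference, bilinear sampling during rotation amounts to the following (a plain-Python sketch for a single-channel image; production code would do this in native code or via the platform's bitmap APIs):

        import math

        def rotate_bilinear(src, w, h, angle_deg):
            # src: row-major list of grayscale pixel values, length w*h
            a = math.radians(angle_deg)
            cos_a, sin_a = math.cos(a), math.sin(a)
            cx, cy = w / 2.0, h / 2.0
            out = [0.0] * (w * h)
            for y in range(h):
                for x in range(w):
                    # map each destination pixel back through the inverse
                    # rotation, then blend the four surrounding source pixels
                    dx, dy = x - cx, y - cy
                    sx = cos_a * dx + sin_a * dy + cx
                    sy = -sin_a * dx + cos_a * dy + cy
                    x0, y0 = math.floor(sx), math.floor(sy)
                    if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                        fx, fy = sx - x0, sy - y0
                        p00 = src[y0 * w + x0]
                        p10 = src[y0 * w + x0 + 1]
                        p01 = src[(y0 + 1) * w + x0]
                        p11 = src[(y0 + 1) * w + x0 + 1]
                        top = p00 * (1 - fx) + p10 * fx
                        bottom = p01 * (1 - fx) + p11 * fx
                        out[y * w + x] = top * (1 - fy) + bottom * fy
            return out

    This weighted blend of four neighbours is essentially what setFilterBitmap(true) enables; bicubic filtering (sixteen neighbours) is the next step up in quality, at more cost.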

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra that history is sacred. However, now that Mercurial comes bundled with the rebase extension, and rebasing is a popular practice in git, I'm wondering whether it can really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing after pushing is dangerous. On the other hand, I see the point of packaging five commits into a single one to make it look niftier (especially in a production branch). Personally, though, I think it is better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would, IMHO, have real value to those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase because it hides mistakes, and I don't think that is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty well-packed commits and disregard the programmers' experimentation process? Whichever you choose, why does that work for you? (Having other team members keep history intact, or alternatively, rebasing it.)

  • Algorithm for rating books: Relative perception

    - by suneet
    So I am developing an application for rating books (think IMDB for books) using a relational database.

    Problem statement: let's say book "A" deserves an 8.5 in an absolute sense. If A is the best book I have ever read, I'll most probably rate it 9.5, whereas for someone else it might be just an average book, and they will rate it lower, say around 8. Let's assume 4 such people rate it 8, and 10 people like me (who have never read great literature) all rate it 9.5-10. This makes its cumulative rating greater than 9: (9.5*10 + 8*4) / 14 = 9.1, whereas we wanted the result to be 8.5. How can I take care of (normalize) this bias due to the differing perceptions of individuals?

    My proposed solution: here is one way I think it could be solved. We can have a variable Lit_coefficient that tells us how much knowledge a user has of literature. If I rate "A" 9.5 and person "X" rates it 8, then he must have read books much better than "A", so his Lit_coefficient should be higher. We can then normalize the ratings according to each user's Lit_coefficient. Could there be a better algorithm/solution for this?
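    One hedged sketch of the idea (Python; the dict-of-dicts stands in for the relational tables, and the weighting scheme is invented for illustration): score each rating relative to the rater's own habits, which plays the role of Lit_coefficient.

        from statistics import mean, stdev

        def normalized_ratings(ratings):
            # ratings: {user: {book: score}}
            everything = [s for per_user in ratings.values()
                            for s in per_user.values()]
            g_mean = mean(everything)
            g_std = stdev(everything) if len(everything) > 1 else 1.0
            totals = {}
            for per_user in ratings.values():
                u_mean = mean(per_user.values())
                u_std = stdev(per_user.values()) if len(per_user) > 1 else g_std
                for book, score in per_user.items():
                    # z-score within the user's own scale, mapped back
                    # onto the global scale before averaging
                    z = (score - u_mean) / (u_std or 1.0)
                    totals.setdefault(book, []).append(g_mean + z * g_std)
            return {book: mean(vals) for book, vals in totals.items()}

        ratings = {
            "me":     {"A": 9.5, "B": 7.0, "C": 6.0},
            "critic": {"A": 8.0, "B": 9.0, "C": 5.0},
        }
        print(normalized_ratings(ratings)["A"])

    Under this scheme, a 9.5 from someone who rates everything highly counts for less than an 8 from a hard grader. Established rating systems tend to add Bayesian averaging on top, to damp books with few votes.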

  • Kindle App for WP7 arrives

    - by dotgeek
    If you had asked me a couple of months back whether I was interested in owning a Kindle device, my response would have been a flat-out no. So what has changed between now and then, you might ask? Watching my wife enjoy the many books in digital form has piqued my interest, to start with. Was that enough for me to buy a Kindle, though? Not just yet, but that is where Windows Phone 7 comes in for me once again. It's no secret that I've wanted this device, and yes, it has a number of apps that I think I'll really enjoy. The latest must-have app for me will most certainly be the Kindle app for Windows Phone 7, released by Amazon today. I've already started playing around with the PC version of the Kindle software, and I'm looking at purchasing any future books in digital form. So it's a given that once I have a Windows Phone 7 device, I'll be enjoying some of my favorite books on the go via this app. Tonight I actually had a chance to check out the WP7 Kindle app on a friend's device, and I was really pleased with how it looked and performed. Now if the planets could only line up and allow Verizon and Microsoft to come to terms on releasing WP7 CDMA devices, I just might be able to really enjoy some of this myself. Do I rule out ever owning an actual Kindle device? Not at all, but I do plan on having the WP7 device well before that day ever comes.

  • What is the best way to exploit multicores when making multithreaded games?

    - by Keeper
    Many people suggest writing a program first and then optimizing it. But I think that when it comes to multithreading on multicore systems, a little thinking ahead is required. I've read about using threads and experienced them myself during some courses at university (I'm still a student). The big question is simple, but a bit abstract: what thread-related steps in game design do I need to take before implementation?

    Now, trying to be more specific: let's say, as an example, that I'm making a small board game (like Monopoly) that I want to be multithreaded. My goal is for this multithreaded game to exploit the best of a multicore system, say 4-6 cores (as in i7 processors). My answer to this question at the moment is one thread for each of these four basic components:

      - GUI
      - user input/output
      - AI (the computer rival)
      - other game-related calculations (like the shortest path from A to B, or level-up status changes)

    I'm not an expert (yet!), and I'm sure there are better answers out there. Any suggestion, answer, or different approach will be helpful. One more thought: maybe splitting the main database is a good way to go (or a total disaster).
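    A sketch of that thread-per-component idea, with subsystems communicating through queues instead of shared state (Python for brevity; note that CPython's GIL means this shows the structure, not real multicore scaling, so C++/Java threads or separate processes would be needed to actually spread across cores):

        import threading
        import queue

        def ai_worker(inbox, outbox):
            # the AI rival runs in its own thread, consuming game states
            # and producing moves; no shared mutable state is touched
            while True:
                state = inbox.get()
                if state is None:               # sentinel: shut down cleanly
                    break
                move = min(state["moves"])      # stand-in for real AI search
                outbox.put(("ai_move", move))

        ai_in, ai_out = queue.Queue(), queue.Queue()
        thread = threading.Thread(target=ai_worker, args=(ai_in, ai_out))
        thread.start()

        # the main loop (GUI + input) hands work over and polls results
        ai_in.put({"moves": [3, 1, 2]})
        print(ai_out.get())                     # ('ai_move', 1)
        ai_in.put(None)
        thread.join()

    Many engines instead use a pool of worker threads fed by a job queue, which scales with the core count rather than with the number of subsystems; that may be worth considering before committing to one thread per component.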

  • How to turn off screen (DPMS) together with locking session in KDE?

    - by gertvdijk
    First of all, I'm aware a similar question for GNOME has been asked here: "Switch off laptop backlight when locking screen".

    Objective: I would like to turn off my screen when locking the session, for power-saving reasons.

    Actual problem: Locking the screen in Kubuntu (KDE) inevitably triggers the screensaver, as far as I can see. There's no screensaver option other than 'Blank screen' that, together with its background colour set to black, comes even close to my goal. It blanks the screen but doesn't turn it off; the screen's backlight stays on and no power is saved.

    Current workaround: A workaround via a script plus a shortcut key is possible; however, it's just a workaround, since it isn't triggered by all the ways a session can be locked. I think it should be possible to do this more elegantly, for example by providing the option in KDE's screensaver configuration dialog. The workaround I am now using is a script that locks the screen and turns it off:

        #!/bin/bash
        qdbus org.freedesktop.ScreenSaver /ScreenSaver Lock
        xset dpms force standby

    run via a shortcut key bound to a custom menu entry. It works. Here's why I consider it a workaround rather than a solution: it doesn't cover the other ways of triggering a session lock.

    My actual questions: Do I need to touch/patch KDE's source? If not, what are my options? If so, could someone point me to where to get started, and where do you think the recommended place in the GUI for this configuration would be? I'm using Kubuntu 12.04 and am willing to upgrade to KDE 4.9 or wait for the 12.10 release.

  • Indie devs working with publishers

    - by MrDatabase
    I'm an independent game developer considering working with a publisher. This question is very informative, but I have more questions. Please give feedback on the following issues; I think this can be helpful to many indie devs in the same situation.

      - Source code: is it common for developers to give the publisher the source code?
      - Code quality: does this matter more when working with a publisher than when working on your own (or in a small team)? I'm wondering whether developers working for the publisher might scoff at the code quality and perhaps sour the relationship between developer and publisher.
      - Unique game concepts: are publishers generally biased towards new/novel game concepts?
      - Intellectual property: if I send a playable demo to a publisher, what's to stop them from just reproducing the new/novel game mechanic? I think the answer is basically nothing, but I'm wondering if this is a realistic concern.
      - Revenue sharing: how does it work? What's a common ratio, 70/30? 30/70?
      - Flaky publishers: how common is it for a publisher to string developers along for a while and then just drop them? Can this be guarded against with a contract of some kind?

    And any other issues you've encountered or heard of.

  • Being stupid to get better productivity?

    - by loki2302
    I've spent a lot of time reading different books about "good design", "design patterns", etc. I'm a big fan of the SOLID approach, and every time I need to write a simple piece of code, I think about the future. So if implementing a new feature or a bug fix requires just adding three lines of code like this:

        if (xxx) {
            doSomething();
        }

    it doesn't mean I'll do it that way. If I feel this piece of code is likely to grow in the near future, I'll think about adding abstractions, moving the functionality somewhere else, and so on. The goal I'm pursuing is to keep the average complexity the same as it was before my changes. I believe that, from the code standpoint, it's quite a good idea: my code is never too long, and it's easy to understand the meaning of the different entities, like classes and methods, and the relations between classes and objects. The problem is that it takes too much time, and I often feel it would be better to just implement the feature "as is". It's the difference between "three lines of code" and "a new interface plus two classes implementing it". From a product standpoint (looking at the result), the things I do are quite senseless. I know that having good code is really great when working on the next version. But on the other hand, the time you spend making your code "good" could have been spent implementing a couple of useful features. I often feel very unsatisfied with my results: good code that can only do A is worse than bad code that can do A, B, C, and D. Are there any books, articles, blogs, or ideas of your own that could help me develop this "being stupid" approach?

  • What does an individual need to be aware of when signing an NDA with a client?

    - by doNotCheckMyBlog
    I am very new to the IT industry and have no prior experience. However, I have come into contact with a party that is gearing up to build a mobile application, and they want me to sign an NDA (Non-Disclosure Agreement). The definitions seem vague:

        The following definitions apply in this Agreement: Confidential Information means information relating to the online and mobile application concepts discussed and that: (a) is disclosed to the Recipient by or on behalf of XYZ; (b) is acquired by the Recipient directly or indirectly from XYZ; (c) is generated by the Recipient (whether alone or with others); or (d) otherwise comes to the knowledge of the Recipient.

    When they say "otherwise comes to the knowledge of the Recipient", does that mean that if I think of an idea from my own creative mind that happens to be similar to their idea, it would be a breach of this agreement? Also, would it be reasonable to ask them to include the application name in the definition? As it stands, "Confidential Information means information relating to the online and mobile application concepts discussed" sounds as though I may not disclose any online or mobile application concept they think of to anybody. I am more concerned about this part:

        Without limiting XYZ's rights at law, the Recipient agrees to indemnify XYZ in respect of all claims, losses, liabilities, costs or expenses of any kind incurred directly or indirectly as a result of or in connection with a breach by it or any of its officers, employees, or consultants of this Agreement.

    Is it really common in the IT industry to sign such an agreement between client and developer? Is there anything in particular I should be concerned about?

  • Grub2 won't detect Ubuntu 11.10 after restoring the Windows XP hal.dll

    - by yoopian
    Hi, I'm an Ubuntu newbie. I've installed Ubuntu 11.10 to dual-boot with Windows XP on a single HDD. I did a manual partition and have basically forgotten which sda my /boot partition is on. The installation worked out just fine, and I installed updates with it. After a while, when I wanted to boot into Windows, it reported a missing "hal.dll" file. I fixed that problem using the Windows resource CD, but after booting up my PC it then went straight into Windows XP. I tried to manually reinstall GRUB 2 using a live CD/USB, and it worked, but I think I installed it on a different sda (sda5, to be exact), because even though GRUB 2 loads when I boot my PC, only Windows XP shows up as an OS and Ubuntu 11.10 is missing. I then tried boot-repair from the live CD/USB to solve my problems. boot-repair tells me the boot configuration was successful, but now a basic GRUB interface shows up (the black one with a grub command line), and I can't even boot into Windows XP. Any help would be really appreciated. BTW, here are the notes from boot-repair that I was asked to save: http://paste.ubuntu.com/890228/ As you can see, there are boot files on sda5 and sda7; I think that's the core of the problem I have right now. Thanks in advance!

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated hosting service. They offer:

      - roughly the same server spec
      - ticketing-system support
      - managed daily backups
      - a virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)

    This managed hosting comes at extra expense, somewhere in the region of $500 per month, and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is that it would be better and cheaper to:

      - stay with the same host, since the dedicated box is fine
      - get an Amazon AWS account and use it to manage backups; there are a number of good tools that can automate the process (see the sketch below)
      - configure iptables myself, so that I have complete control of the firewall

    I want to know:

      - Is a managed virtual firewall likely to be more secure than me configuring iptables?
      - In your opinion, is it best to let someone else take care of backups?
      - From your experience, is there anything else I'm missing that warrants using managed hosting over a DIY approach?

    I think some of the reluctance to drop managed hosting is that a managed host in effect takes responsibility for your server, whereas any hardware or security issue with a server that we manage ourselves would force us to hold our hands up when a client site goes down. That said, I personally don't think a managed host does all that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).
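    On the backup point, here is a minimal sketch of the kind of automation involved (Python with boto3; the database names, dump path, and bucket are placeholders, mysqldump stands in for whatever your sites use, and AWS credentials are assumed to be configured already):

        import datetime
        import subprocess

        import boto3

        def backup_to_s3(databases, bucket):
            # dump each database and push the dump to an S3 bucket,
            # keyed by date so old backups are retained
            s3 = boto3.client("s3")
            stamp = datetime.date.today().isoformat()
            for db in databases:
                dump = f"/tmp/{db}-{stamp}.sql"
                with open(dump, "w") as out:
                    subprocess.run(["mysqldump", db], stdout=out, check=True)
                s3.upload_file(dump, bucket, f"backups/{db}/{stamp}.sql")

        backup_to_s3(["site_one", "site_two"], "my-backup-bucket")

    Run nightly from cron, this covers what the managed offering calls "managed daily backups"; the parts it does not cover are off-site monitoring and someone to call when a restore goes wrong.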

  • What does SVN do better than git?

    - by doug
    No question, the majority of debates over programmer tools come down to either personal preference (of the user) or design emphasis, i.e. optimizing the design for particular use cases (by the tool builder). Text editors are probably the most prominent example: a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample: some task, relevant to a programmer's usual workflow, that Subversion does better than git. The only conclusion I can draw from this is that I don't have any data, not that git is better. My guess is that such counterexamples exist, hence this question.

  • sharing life experience

    - by gcc
    I am a student of computer engineering. I have never done any programming before, and as you can understand, I don't know how to study it or how to write my own programs. My English is weak [edited for clarity - ed], so if you don't like the choices I list, please feel free to provide others. How should I study? How should I learn programming languages?

      1. Study completely from a book.
      2. Don't study from a book; just try writing code.
      3. A mix of the two: study from a book, then try writing code.
      4. Study half the book, then write the code by hand on paper.
      5. Listen to the teacher, then try to solve general problems (those not from any specific chapter).

    I sent that question to Stack Overflow when I was in my first year. Now I want to build a webpage to guide new students by passing on your advice and mine. Maybe you wonder why I want to build such a webpage: I just want to help other students. Here is a link to that question: http://stackoverflow.com/questions/3389465/how-should-i-study-programming-languagess If you have other advice, feel free to share it. EDIT: This website, I think, is built for members to share their life experience, and I know these experiences are valuable. So while I have no right to demand your opinion, I would like your opinion and experience, even if you think it is not so helpful to others.

  • Extreme Portability: OpenJDK 7 and GlassFish 3.1.1 on Power Mac G5!

    - by MarkH
    Occasionally you hear someone grumble about platform support for some portion or combination of the Java product "stack". As you're about to see, this really is not as much of a problem as you might think. Our friend John Yeary was able to pull off a pretty slick feat with his vintage Power Mac G5. In his words: "Using a build script sent to me by Kurt Miller, build recommendations from Kelly O'Hair, and the great work of the BSD Port team... I created a new build of OpenJDK 7 for my PPC-based system using the Zero VM. The results are fantastic. I can run GlassFish 3.1.1 along with all my enterprise applications." I recently had the opportunity to pick up an old G5 for little money and passed on it. What would I do with it? At the time, I didn't think it would be more than a space-consuming novelty. Turns out... I could have had some fun and a useful piece of hardware at the same time. Maybe it's time to go bargain-hunting again. For more information about repurposing classic Apple hardware, and to learn a few JDK-related tricks in the process, visit John's site for the full article. All the best, Mark

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I used Python for this and it was OK, but now I'm having serious trouble managing a lot of dependencies in the two very big projects I'm working on, so I want to minimize dependencies everywhere. Someone suggested using C++ to write my build scripts, instead of adding a language dependency just for that. The projects themselves already use C++, so I can see several advantages:

      - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
      - C++ type safety (when using modern C++) makes everything easier to get "correct";
      - it's also the language I know best, so I'm more at ease with it, even though I can write some good Python code;
      - a potential gain in execution speed (though I don't think it will really be perceptible).

    However, I think there might be some drawbacks, and I'm not sure of their real impact, as I haven't tried this yet:

      - it might take longer to write the code (that said, I'm efficient enough in C++ to write working code quickly, so maybe it wouldn't take so long) (compilation time shouldn't be a problem in this case);
      - I must assume that all the text files I read as input are in UTF-8; I'm not sure that can be easily checked at runtime in C++, and the language will not check it for you;
      - libraries in C++ are harder to manage than in scripting languages.

    I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together until I was 30. I've had a real job for a couple of years doing C#/SQL. I've received several raises, but I'm making less than most developers, and the atmosphere is... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree, and I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools, because they all look like fake degrees. If they had programs equivalent to a real CS degree, I don't think they would have the odd-sounding names they do. University of Phoenix has a B.S. in Information Technology/Software Engineering. DeVry has a B.S. in Computer Engineering Technology. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable; some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box saying I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there for at least 3 months, which rather defeats the purpose of an online degree fitting around work.

  • Which things instantly ring alarm bells when looking at code? [closed]

    - by FinnNk
    I attended a software craftsmanship event a couple of weeks ago, and one of the comments made was "I'm sure we all recognize bad code when we see it", and everyone nodded sagely without further discussion. This sort of thing always worries me, given the truism that everyone thinks they're an above-average driver. Although I think I can recognize bad code, I'd love to learn more about what other people consider code smells, as it's rarely discussed in detail on people's blogs and only in a handful of books. In particular, I think it would be interesting to hear about anything that's a code smell in one language but not another. I'll start off with an easy one: code in source control with a high proportion of commented-out code. Why is it there? Was it meant to be deleted? Is it a half-finished piece of work? Maybe it shouldn't have been commented out at all and was only done while someone was testing something? Personally I find this sort of thing really annoying, even if it's just the odd line here and there; when you see large blocks interspersed with the rest of the code, it's totally unacceptable. It's also usually an indication that the rest of the code is likely to be of dubious quality as well.

  • How does a segment-based rendering engine (as in Descent) work?

    - by Calmarius
    As far as I know, Descent was one of the first games to feature a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed, as long as they remain convex and the sides remain roughly flat). The cubes are connected by their sides. Connected sides are traversable (doors or grids may be placed on them), while unconnected sides are untraversable walls, so the game is played inside the complex. Descent was software-rendered and had to be very fast to be playable on the 10-100MHz processors of that age. Some later levels are huge and contain thousands of segments, yet they still render reasonably fast, so I think the engine somehow minimized the number of cubes rendered. How did it choose which cubes to render for a given location? As far as I know it used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
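    The usual name for this family of techniques is cells-and-portals: render the cell the camera is in, then recurse through each connected side, clipping the view down to the portal's on-screen extent and stopping when it clips away to nothing. A simplified sketch (Python, with 2D screen rectangles standing in for the real clipped portal polygons):

        class Cell:
            def __init__(self, cid):
                self.id = cid
                self.portals = []   # list of (neighbour Cell, screen rect)

        def clip(a, b):
            # axis-aligned intersection of two (x0, y0, x1, y1) rects,
            # or None when they don't overlap
            x0, y0 = max(a[0], b[0]), max(a[1], b[1])
            x1, y1 = min(a[2], b[2]), min(a[3], b[3])
            return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

        def visible_cells(camera_cell, screen, max_depth=64):
            # cells can legitimately be reached more than once through
            # different portals; the shrinking rectangle (not a visited
            # set) is what bounds the recursion
            order = []
            def walk(cell, rect, depth):
                order.append(cell.id)
                if depth >= max_depth:
                    return
                for neighbour, portal in cell.portals:
                    smaller = clip(rect, portal)
                    if smaller is not None:
                        walk(neighbour, smaller, depth + 1)
            walk(camera_cell, screen, 0)
            return order

        a, b, c = Cell("A"), Cell("B"), Cell("C")
        a.portals.append((b, (100, 0, 200, 100)))
        b.portals.append((c, (300, 0, 400, 100)))  # not visible through A-B
        print(visible_cells(a, (0, 0, 640, 480)))  # ['A', 'B']; C clips away

    The convexity constraint is what makes this exact: because every segment is convex, anything visible in a neighbouring cell must be visible through the shared face, so only cells whose portal chain survives the clipping are ever rendered, regardless of how many thousands of segments the level contains.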

  • Models, collections...and then what? Processes?

    - by Dan
    I'm a LAMP-stack dev who's been more on the JavaScript side for the last few years, and I'm really enjoying the model-plus-collection approach to data entities that BackboneJS, etc. uses. It's helped me organize my code in a way that is extremely portable, keeping all my properties and methods in the scope (model, collection, etc.) to which they apply. One thing keeps bugging me, though: how should I organize the next level up, the 'process layer' as you might call it, which can potentially operate on instances of either models or collections or anything else? Where should methods like find() (which returns a collection) and create() (which returns a model) reside? I know some people would put create() on the collection prototype, but while a collection operates on models, I don't think it's exactly right for it to create them. And while find() returns a collection, I don't think that action belongs inside the collection prototype itself (it should be a layer up). Can anyone offer examples of patterns that employ some kind of OOP-friendly 'process' layer? I'm sorry if this is a fairly well-known discussion, but I can't seem to find the right terminology to search for.
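    One common shape for that layer is a repository: an object that knows how to mint models and collections but is neither. A rough sketch (in Python for neutrality rather than JavaScript; the db object and its select()/insert() methods are hypothetical stand-ins for whatever persistence you use):

        class Book:
            # the model: one record's data and behaviour
            def __init__(self, record):
                self.id = record["id"]
                self.title = record["title"]

        class BookCollection:
            # the collection: operations across many models
            def __init__(self, models):
                self.models = models

            def titles(self):
                return [m.title for m in self.models]

        class BookRepository:
            # the 'process layer': owns persistence and construction,
            # sitting one level above both model and collection
            def __init__(self, db):
                self.db = db

            def find(self, **criteria):
                rows = self.db.select("books", criteria)
                return BookCollection([Book(r) for r in rows])

            def create(self, title):
                row = self.db.insert("books", {"title": title})
                return Book(row)

    With this split, create() no longer sits awkwardly on the collection, and find() has an obvious home a layer up, which seems to be exactly the separation described above.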

  • Employers and intellectual property 2

    - by Rick
    I have a question about intellectual property. I am currently a manager in a small manufacturing firm. The owners are driven by greed, don't appreciate the development process behind complex machinery, and are happy to send things out half-done. I, on the other hand, think it should be done properly, as a breakdown in the field can be costly and embarrassing. They have all of us doing most of the work out of hours, with an attitude of "be grateful to have a job", yet no one has a contract or any security or agreement in place. For a couple of the projects I am using PLCs, writing the code in my own time and doing the testing during company time, and I am aware that they could not support their own machines if I left. But as I created the code in my own time, who owns it? They have asked me to put in a shutdown code that stops the machine after a given length of time, pending a maintenance request (we sell the machines with a 12-month warranty; the shutdown would trigger after it expires). Could this be classed as criminal damage, or anything illegal apart from immoral? As time goes on, I'm getting rather fed up with the company's attitude toward its clients. I am considering keeping the clients as my own by having the shutdown code tell them to contact me directly, something like "this is a trial version, contact me for a full license". I wouldn't feel bad for my current employer, as he is not afraid to walk over people: he has been involved in numerous lawsuits and has over 30 failed companies behind him, leaving employees and customers high and dry. We have taken the company this far on the reputation of the workers, and I can see things heading the way of all the other companies he has owned, taking our reputations with him. So, now that I have set the scene: if I code it to contact me directly in the shutdown, could there be any legal impact on me, given that I (rightly or wrongly) think I own the code and designs? Cheers, R

  • Windows Azure v1.7 Spring Release Today – New Management Dashboard

    - by ToStringTheory
    Today, Microsoft is publicly releasing a new version of Azure. The web conference, at http://www.meetwindowsazure.com, will be airing at 1 PM PST. They have already released an update to the service dashboard, which can be accessed at http://manage.windowsazure.com. I have gathered some images of the new dashboard here, with any PII removed. Let me know what you think!

    Images (you should be able to click any of the images for a full-resolution version):

      - Tutorial: the first thing you get after signing in is the tutorial.
      - Landing: after the tutorial completes, you get a screen with the services active on your account on the left, and a list of ALL services (DB/blob/SQL Azure) on the right. I like the quick access to services across any of my subscriptions.
      - Service information: these are images from a running web site with several roles. I love how easy they have made many of the features.
      - SQL Azure: they provide some great quick functionality for looking at your DB information.
      - Storage: here is the basic information they give you for any storage accounts you have.
      - Adding services: super quick and easy to add services with the new UI.

    Conclusion: I am EXCITED! As you may have seen in the left side of my blog, I am an MCPD in Azure development, and I must say I am excited to see Microsoft moving forward with the technology and not letting it stagnate. After fighting with the old Azure dashboard as much as I have, I like the friendliness and fluidity of this one. The important thing to note about ALL of the images above: this is HTML, not Silverlight. The responsiveness was FAST for all of the actions I performed, and I believe this is a big step forward for Azure. So, what do you think?

  • Whole continent simulation [on hold]

    - by user2309021
    Let's suppose I am planning to create a simulation of an entire continent at some point in the past (say, around 0 A.D.). Is it feasible to spawn a hundred million actors that interact with each other and their environments, having them reproduce, extract resources, etc.? In fact, I want to create a simulation that allows me to zoom in from a view of the entire continent down to a single village, and interact with it. (Think of being able to keep zooming in on the campaign map of any Total War game, with the transition to the battle map being seamless rather than a change of "game mode".) By the way, I have never made a game in my entire life (I have programmed normal desktop applications, though), so I am really having trouble wrapping my head around how to implement such a thing. Even while thinking about how to implement a simple population simulator without a graphical interface, I suspect that the O(n) cost of traversing an array and telling all people to get one year older on each tick of the program is a dead end. Any kind help would be greatly appreciated :) EDIT: After being put on hold, I shall specify a question: how would you implement a simulation of all basic human dynamics (reproduction, resource consumption) for an entire continent (with millions of people)?
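    On the O(n) worry, one standard escape hatch is aggregation: group people into regional age cohorts so a yearly tick costs O(regions x ages) instead of O(population). A hedged sketch in Python, where every rate is an invented placeholder:

        import random

        class Region:
            def __init__(self, pop_by_age, food):
                self.pop = pop_by_age    # pop[a] = number of people aged a
                self.food = food

            def tick_year(self, rng):
                # births from the 16-45 cohort, with a little noise
                births = int(sum(self.pop[16:45]) * rng.uniform(0.018, 0.022))
                # everyone ages one year in a single list shift
                self.pop = [births] + self.pop[:-1]
                # crude mortality that rises when food runs short
                shortage = max(0.0, 1.0 - self.food / max(1, sum(self.pop)))
                death_rate = 0.01 + 0.10 * shortage
                self.pop = [int(n * (1.0 - death_rate)) for n in self.pop]

        region = Region(pop_by_age=[10_000] * 80, food=900_000)
        rng = random.Random(0)
        for year in range(100):
            region.tick_year(rng)
        print(sum(region.pop))

    Individual villagers would then be instantiated lazily, sampled from the cohort statistics only when the camera zooms in on a village, and folded back into the aggregates when it zooms out; the continent view never needs per-person state at all.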
