Search Results

Search found 418 results on 17 pages for 'race bannon'.

Page 7/17 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Cliché monsters to populate a dwarven dungeon in a steampunk fantasy setting?

    - by Alexander Gladysh
    I'm looking for a list of cliché monsters for a steampunk computer game (assume one kind or another of casual rogue-like RPG) to populate the lower levels of ancient dwarven-built dungeons. Dwarves are a technology/science race in the setting I am aiming for. The world is a low-magic one. I'm stuck after listing various mechanical golems, gigantic spiders (every dungeon must have some of them!), and maybe a mechanical balrog as a megaboss. What would players expect? What are the key cultural references for such a setting? I know a couple of games with suitable steampunk dwarves, but none are detailed enough in the underworld-monster department. Please point me in the right direction. (If you have a single funny monster suggestion, please mention it in the comments, not in an answer. ;-) )

    Read the article

  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed using various textures. To provide some background, I'm writing it in C++ and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 textures of 500 x 500 px, which are all loaded and freed individually. Since the number of textures/tiles I'm using is increasing, I have now combined them all into a single 3000 x 2000 px sprite/tile sheet. Now I'm wondering if it's more efficient to load them individually or to write extra code to extract a certain tile from the sheet. Is it better to load the sheet once, extract all 23 tiles and store them, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
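    A third option is to keep the whole sheet bound as one texture and simply remap texture coordinates per quad, so nothing is ever cropped or re-uploaded. A minimal C++ sketch, assuming a 3000 x 2000 sheet of 500 x 500 tiles and the legacy fixed-function OpenGL the question implies (the struct and function names are hypothetical):

        #include <GL/gl.h>

        // Draw tile (col, row) from a sprite sheet by remapping UVs; the sheet
        // itself stays bound as a single texture the whole time.
        struct SpriteSheet {
            float sheetW, sheetH;   // e.g. 3000, 2000
            float tileW,  tileH;    // e.g. 500, 500
        };

        void drawTile(const SpriteSheet& s, int col, int row,
                      float x, float y, float w, float h) {
            float u0 = (col * s.tileW) / s.sheetW;
            float v0 = (row * s.tileH) / s.sheetH;
            float u1 = ((col + 1) * s.tileW) / s.sheetW;
            float v1 = ((row + 1) * s.tileH) / s.sheetH;

            glBegin(GL_QUADS);
                glTexCoord2f(u0, v0); glVertex2f(x,     y);
                glTexCoord2f(u1, v0); glVertex2f(x + w, y);
                glTexCoord2f(u1, v1); glVertex2f(x + w, y + h);
                glTexCoord2f(u0, v1); glVertex2f(x,     y + h);
            glEnd();
        }

    This avoids both the per-texture binds of the individual approach and the cost of cropping sub-images out of the sheet at load time.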

    Read the article

  • What is the point in using real time?

    - by bobobobo
    I understand that a lot of frameworks provide the real elapsed time per frame (which should average around 16-17 ms), e.g. GetTimeElapsedSinceLastFrame, which gives you wall-clock time. But should we use this information in a basic physics simulation? It looks like a bad idea to me. Say there is a slight lag on the machine, for whatever reason (say a virus scanner starts up). The calculations all jump, and there is no need for this. Why not use a virtual second and ignore wall-clock time? For gameplay on the level of Commander Keen, shouldn't you always use the virtual second and not real time? (Besides stopwatch timing for race games.) I don't see a need to use real time rather than a fixed 16 ms time step.
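    For reference, the usual middle ground is a fixed timestep driven by an accumulator: physics always advances in constant virtual ticks, and wall-clock time only decides how many ticks to run per frame. A minimal C++ sketch of that idea (gameIsRunning, updatePhysics, and render are hypothetical stand-ins):

        #include <chrono>

        bool gameIsRunning();                              // hypothetical
        void updatePhysics(std::chrono::milliseconds dt);  // hypothetical
        void render();                                     // hypothetical

        void runGameLoop() {
            using clock = std::chrono::steady_clock;
            const std::chrono::milliseconds step(16);      // the "virtual" tick
            auto previous = clock::now();
            std::chrono::nanoseconds accumulator(0);

            while (gameIsRunning()) {
                auto now = clock::now();
                auto frameTime =
                    std::chrono::duration_cast<std::chrono::nanoseconds>(now - previous);
                previous = now;
                // Clamp after a stall (virus scanner, debugger break) so the
                // simulation never tries to catch up with one huge jump.
                if (frameTime > std::chrono::milliseconds(250))
                    frameTime = std::chrono::milliseconds(250);
                accumulator += frameTime;

                while (accumulator >= step) {              // catch up in fixed ticks
                    updatePhysics(step);                   // dt is always constant
                    accumulator -= step;
                }
                render();
            }
        }

    This keeps the physics deterministic while still letting rendering run at the machine's real frame rate.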

    Read the article

  • How to define default gateway with multiple DHCP interfaces?

    - by DrumEater
    How does the system determine which network interface to use for the default route when DHCP assigns a default route for each NIC? It seems like there's a race condition, and I need a more reliable solution. Is there a setting in /etc/network/interfaces that could define the preferred gateway? I read about "metric" but that did not seem to work. 10.04 LTS Server with two NICs on a managed network. IP addresses are assigned via DHCP, which I do not manage. eth0 is assigned a private NAT address; eth1 is assigned a public IP.

    Read the article

  • F1 Pit Pragmatics

    Occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. In Formula 1 Racing, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, while the servers are exposed to carbon dust, oil,...

    Read the article

  • Asynchronous callback for network in Objective-C iPhone

    - by vodkhang
    I am working with network request-response in Objective-C. There is something about the asynchronous model that I don't understand. In summary, I have a view that shows my statuses from 2 social networks: Twitter and Facebook. When I click refresh, it calls a model manager. That model manager calls 2 service helpers to request the latest items. When the 2 service helpers receive data, they pass it back to the model manager, which adds all the data into a sorted array. What I don't understand here is: when the responses from the social networks come back, how many threads handle them? From my understanding of multithreading and networking (in Java), there must be 2 threads handling the 2 responses, and those 2 threads will execute the code that adds the responses to the array. So it can have a race condition and the program can go wrong, right? Is this the correct working model for iPhone Objective-C? Or is it done in a different way, so that it can never have a race condition and we don't have to care about locking and synchronization? Here is my example code:

    ModelManager.m

        - (void)updateMyItems:(NSArray *)items {
            self.helpers = [self authenticatedHelpersForAction:NCHelperActionGetMyItems];
            for (id<NCHelper> helper in self.helpers) {
                [helper updateMyItems:items]; // NETWORK request here
            }
        }

        - (void)helper:(id <NCHelper>)helper didReturnItems:(NSArray *)items {
            [self helperDidFinishGettingMyItems:items];
        }

        // some private attributes
        int _currentSocialNetworkItemsCount = 0; // counts the items of one social network

        - (void)helperDidFinishGettingMyItems:(NSArray *)items {
            for (Item *item in items) {
                _currentSocialNetworkItemsCount++;
            }
            NSLog(@"count: %d", _currentSocialNetworkItemsCount);
            _currentSocialNetworkItemsCount = 0;
        }

    I want to ask if there is a case where the method helperDidFinishGettingMyItems: is called concurrently. That means, for example, if Facebook returns 10 items and Twitter returns 10 items, will the output of count ever be larger than 10? And if there is only one thread, how can that thread finish parsing one response and jump to the other response when, IMO, a thread only executes sequentially, block of code by block of code?

    Read the article

  • Multi-thread in C question

    - by REALFREE
    Does a mutex guarantee that threads enter the critical section in the order they arrive? That is, if thread 2 and thread 3 are waiting while thread 1 is in the critical section, what exactly happens after thread 1 exits the critical section? If thread 2 arrived at the mutex lock before thread 3, will thread 2 be allowed to enter the critical section before thread 3, or will a race condition occur?
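    In general, a plain mutex makes no promise about wakeup order; which waiter gets in next is up to the implementation and the scheduler. If arrival order matters, one option is a ticket lock. A minimal C++ sketch of the idea (the question is about C, but the technique carries over directly):

        #include <atomic>
        #include <thread>

        // Each thread takes a ticket on arrival and waits until its number is
        // being served, so the critical section is entered strictly in arrival
        // order, regardless of how the underlying scheduler wakes threads.
        class TicketLock {
            std::atomic<unsigned> next{0};     // next ticket to hand out
            std::atomic<unsigned> serving{0};  // ticket currently allowed in

        public:
            void lock() {
                unsigned ticket = next.fetch_add(1, std::memory_order_relaxed);
                while (serving.load(std::memory_order_acquire) != ticket)
                    std::this_thread::yield();  // spin politely until our turn
            }
            void unlock() {
                serving.fetch_add(1, std::memory_order_release);
            }
        };

    Whether plain mutexes happen to be fair in practice depends on the platform; without a construct like this, arrival order is simply not guaranteed.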

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here. Yesterday I wrote a multiprogram. Modifications to sharable resources were put inside critical sections protected by P(mutex) and V(mutex), and that critical-section code was put in a common library. The library will be used by concurrent applications (of my own). I had three applications that use the common code from the library and do their stuff independently.

    my library
    ---------
    work_on_shared_resource {
        P(mutex)
        get_shared_resource
        work_with_it
        V(mutex)
    }
    ---------

    my application
    -----------
    application1 {
        *[ work_on_shared_resource
           do_something_else_non_critical ]
    }

    application2 {
        *[ work_on_shared_resource
           do_something_else_non_critical ]
    }

    application3 {
        *[ work_on_shared_resource ]
    }

    *[...] denotes a loop.
    ------------

    I had to run the applications on Linux. I had long held the belief that the OS schedules all the processes running under it with complete fairness; in other words, that it gives every process an equal share of resource usage. When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, it always got the resource: since it does nothing in its non-critical region, it grabs the shared resource again while the other tasks are busy doing something else. So the other two applications were almost totally halted. When the third application was forcefully terminated, the previous two applications resumed their work as before. I think this is a case of starvation; the first two applications had to starve. Now how can we ensure fairness? I have started believing that the OS scheduler is innocent and blind: it depends on who wins the race, and the winner gets the largest share of CPU and resources. Shall we attempt to ensure fairness among resource users in the critical-section code in the library? Or shall we leave it up to the applications to ensure fairness by being liberal, not greedy? To my knowledge, adding code to the common library to ensure fairness would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness. An application that does very little work after using the shared resource will win the race, whereas an application that does heavy processing after its work with the shared resource will always starve. What is the best practice in this case? Where do we ensure fairness, and how? Sincerely, Srinivas Nayak

    Read the article

  • Rails Association Question (addendum)...

    - by keruilin
    My original question and accepted solution were posted here: http://stackoverflow.com/questions/2483640/rails-association-question. Check that out first. My follow-up question is this: I want to return an object that has both the user attributes and the race attributes, so that I can access, for example, the user's name and the fastest_time. How can this be accomplished? I've tried several approaches, but none that I've been satisfied with.

    Read the article

  • In which scenarios is it useful to use disassembly in Python?

    - by systempuntoout
    The dis module can be used effectively to disassemble Python methods, functions and classes into low-level interpreter instructions. I know that dis information can be used to: 1. find race conditions in programs that use threads, 2. find possible optimizations. From your experience, do you know any other scenarios where Python's disassembly feature could be useful?

    Read the article

  • How do you pronounce "Enum"?

    - by Davy8
    In the spirit of this question, how do you pronounce Enum? Tagging as subjective and community wiki, obviously. I've heard E-Nuhm and E-Nnoom; any others? Edit: Looks like we have a winner. Thought it'd be a closer race, since most of the people at my work use the 2nd one.

    Read the article

  • General ORM design question

    - by Calvin
    Suppose you have 2 classes, Person and Rabbit. A person can do a number of things to a rabbit: s/he can feed it, buy it and become its owner, or give it away. A rabbit can have no owner or at most 1 owner at a time. And if it is not fed for a while, it may die.

    class Person {
        void Feed(Rabbit r);
        void Buy(Rabbit r);
        void Giveaway(Person p, Rabbit r);
        Rabbit[] rabbits;
    }

    class Rabbit {
        bool IsAlive();
        Person owner;
    }

    There are a couple of observations from the domain model: Person and Rabbit can have references to each other; any action on one object can also change the state of the other object; and even if no explicit actions are invoked, there can still be a change of state in the objects (e.g. the Rabbit can starve to death, which causes it to be removed from the Person.rabbits array). As far as DDD is concerned, I think the correct approach is to synchronize all calls that may change the state of the domain model. For instance, if a Person buys a Rabbit, s/he would need to acquire a lock on the Person to change the rabbits array AND another lock on the Rabbit to change its owner, before releasing the first one. This would prevent a race condition where 2 Persons claim to be the owner of the little Rabbit. The other approach is to let the database handle all this synchronization. Whoever makes the first call wins, but then the DB needs some kind of business logic to figure out whether it is a valid transaction (e.g. if a Rabbit already has an owner, it cannot change its owner unless the Person gives it away). There are pros and cons to either approach, and I'd expect the "best" solution to be somewhere in between. How would you do it in real life? What's your take and experience? Also, is it a valid concern that there can be another race condition where the domain model has committed its change but it is not yet fully committed in the database? And for the 3rd observation (i.e. state change due to the time factor), how would you handle it?
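    For the lock-in-the-domain-model option, here is a minimal C++ sketch (the names are hypothetical, not the poster's model) of the ownership transfer: std::scoped_lock acquires both mutexes with a deadlock-avoidance algorithm, so two Persons racing for the same Rabbit cannot deadlock and only one of them ends up as the owner.

        #include <mutex>
        #include <vector>

        struct Rabbit;

        struct Person {
            std::mutex m;
            std::vector<Rabbit*> rabbits;
            void buy(Rabbit& r);
        };

        struct Rabbit {
            std::mutex m;
            Person* owner = nullptr;
        };

        void Person::buy(Rabbit& r) {
            std::scoped_lock lock(m, r.m);   // lock Person and Rabbit together
            if (r.owner != nullptr) return;  // somebody else won the race
            r.owner = this;
            rabbits.push_back(&r);
        }

    The database-enforced variant would instead rely on something like a unique constraint or a conditional update, so only one ownership change can ever succeed.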

    Read the article

  • Find a free X11 display number

    - by James
    I have some unit tests that need an X11 display, so I plan to start Xvfb before running them, but to start Xvfb I will need a free display number to connect it to. My best guess is to see what's free in /tmp/.X11-unix, but I'm not sure how to handle the race if many tests try to start simultaneously. sshd must do this; does anyone know how?
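    One way to make the probe race-free is to claim a display number atomically rather than just look for it, using the conventional /tmp/.X<n>-lock files that X servers create. A minimal C++ sketch of that idea (claimFreeDisplay is a hypothetical helper, and the lock-file convention is an assumption about your X setup):

        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>

        // Claim a display number by atomically creating its lock file with
        // O_EXCL, so two test runs probing at the same time can't pick the
        // same one. Returns the claimed display number, or -1 if none is free.
        int claimFreeDisplay(int maxDisplay = 99) {
            for (int n = 1; n <= maxDisplay; ++n) {
                char path[64];
                std::snprintf(path, sizeof(path), "/tmp/.X%d-lock", n);
                int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
                if (fd >= 0) {                   // we created it: display n is ours
                    char pid[16];
                    int len = std::snprintf(pid, sizeof(pid), "%10d\n", (int)getpid());
                    if (write(fd, pid, len) < 0) { /* lock file still created */ }
                    close(fd);
                    return n;                    // now start "Xvfb :n"
                }
                // EEXIST (or any failure): someone else holds it, try the next number
            }
            return -1;
        }

    If your Xvfb build is recent enough, its -displayfd option sidesteps the problem entirely by letting the server pick a free display itself and report the number back over a file descriptor.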

    Read the article

  • Can .NET Task instances go out of scope during run?

    - by Henry Jackson
    If I have the following block of code in a method (using .NET 4 and the Task Parallel Library): var task = new Task(() => DoSomethingLongRunning()); task.Start(); and the method returns, will that task go out of scope and be garbage collected, or will it run to completion? I haven't noticed any issues with GCing, but want to make sure I'm not setting myself up for a race condition with the GC.

    Read the article

  • How do you create a transaction that spans multiple statements in Python with MySQLdb?

    - by Fast Fish
    I know that with an InnoDB table, statements are autocommitted, but I understand that to apply to a single statement. For example, I want to check whether a user exists in a table and then, if it doesn't, create it. However, therein lies a race condition. I believe that starting a transaction prior to doing the select will ensure that the table remains untouched until the subsequent insert and the transaction is committed. How can you do this with MySQLdb and Python?

    Read the article

  • C# Getting a node with attributes from SelectSingleNode

    - by bdefreese
    Hi folks, I am quite the n00b, but lately I have been playing with parsing some XML data. I actually found a nice feature on this site where I can get to a specific node with a specific attribute by doing:

        docFoo.SelectSingleNode("foo/bar/baz[@name='qux']");

    However, the data looks like this:

        <saving-throws>
          <saving-throw>
            <name>Fortitude</name>
            <abbr>Fort</abbr>
            <ability>Con</ability>
            <modifiers>
              <modifier name="base" value="2"/>
              <modifier name="ability" value="5"/>
              <modifier name="magic" value="0"/>
              <modifier name="feat" value="0"/>
              <modifier name="race" value="0"/>
              <modifier name="familar" value="0"/>
              <modifier name="feature" value="0"/>
              <modifier name="user" value="0"/>
              <modifier name="misc" value="0"/>
            </modifiers>
          </saving-throw>
          <saving-throw>
            <name>Reflex</name>
            <abbr>Ref</abbr>
            <ability>Dex</ability>
            <modifiers>
              <modifier name="base" value="6"/>
              <modifier name="ability" value="1"/>
              <modifier name="magic" value="0"/>
              <modifier name="feat" value="0"/>
              <modifier name="race" value="0"/>
              <modifier name="familar" value="0"/>
              <modifier name="feature" value="0"/>
              <modifier name="user" value="0"/>
              <modifier name="misc" value="0"/>
            </modifiers>
          </saving-throw>

    And I want to be able to get the modifier node with name="base", but for the saving-throw node whose child node "abbr" equals a given value. Can I somehow do that in a single SelectSingleNode, or am I going to have to stop at saving-throw and walk through the rest of the tree? Thanks!

    Read the article

  • C++ volatile multithreading variables

    - by anon
    I'm writing a C++ app. I have a class variable that more than one thread is writing to. In C++, anything that can be modified without the compiler "realizing" that it's being changed needs to be marked volatile, right? So if my code is multi-threaded, and one thread may write to a var while another reads from it, do I need to mark the var volatile? [I don't have a race condition since I'm relying on writes to ints being atomic.] Thanks!
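    For context, volatile only tells the compiler not to cache the value; it gives no atomicity or ordering guarantees between threads, and the assumption that int writes are atomic is platform-specific. A minimal sketch of the C++11-and-later alternative, std::atomic (variable names are made up for the example):

        #include <atomic>
        #include <iostream>
        #include <thread>

        // A writer thread publishes values; a reader thread polls them.
        // std::atomic guarantees the reader never sees a torn write and that
        // the "done" flag is properly ordered with respect to the data.
        std::atomic<int>  sharedValue{0};
        std::atomic<bool> done{false};

        int main() {
            std::thread writer([] {
                for (int i = 1; i <= 1000; ++i)
                    sharedValue.store(i, std::memory_order_release);
                done.store(true, std::memory_order_release);
            });
            std::thread reader([] {
                int last = 0;
                while (!done.load(std::memory_order_acquire))
                    last = sharedValue.load(std::memory_order_acquire);
                std::cout << "last value seen: " << last << "\n";
            });
            writer.join();
            reader.join();
        }

    Before C++11, the usual answer was a mutex or the platform's atomic intrinsics rather than volatile.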

    Read the article

  • Does SQL Server guarantee sequential inserting of an identity column?

    - by balpha
    In other words, is the following "cursoring" approach guaranteed to work:

    1. retrieve rows from the DB
    2. save the largest ID from the returned records for later, e.g. in LastMax
    3. later, "SELECT * FROM MyTable WHERE Id > {0}", LastMax

    In order for that to work, I have to be sure that every row I didn't get in step 1 has an Id greater than LastMax. Is this guaranteed, or can I run into weird race conditions?

    Read the article

  • Is it possible in Clojure to create a deadlock (or any other bad case) using agents?

    - by hsestupin
    Clojure agents are a powerful tool. Actions are sent to agents asynchronously using the functions "send" and "send-off", and in theory something like a deadlock shouldn't be able to appear. Is it possible to write some Clojure code using agents (for example, invoking an action on another agent from within an action) in which we have some concurrency problem? It could be a deadlock, a race condition, or anything else. (Guys, I'm very sorry for my English.)

    Read the article

  • Multithreaded ActiveRecord requests in rspec

    - by jeem
    I'm trying to recreate a race condition in a test, so I can try out some solutions. I find that in the threads I create in my test, ActiveRecord always returns 0 for counts and nil for finds. For example, with 3 rows in the table "foos":

        it "whatever" do
          puts Foo.count
          5.times do
            Thread.new do
              puts Foo.count
            end
          end
        end

    will print

        3
        0
        0
        0
        0
        0

    test.log shows the expected query, the expected 6 times:

        SELECT count(*) AS count_all FROM `active_agents`

    Any idea what's going on here?

    Read the article

  • How expensive is synchronization?

    - by someguy
    I am writing a networking application using the java.nio API. My plan is to perform I/O on one thread and handle events on another. To do this, though, I need to synchronize reading and writing so that I never hit a race condition. Bearing in mind that I need to handle thousands of connections concurrently, is synchronization worth it, or should I use a single thread for both I/O and event handling?
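    Either way, the synchronization typically boils down to a single hand-off point between the I/O thread and the event thread. A minimal C++ sketch of such a hand-off (the question is Java NIO, but the trade-off is the same): the cost is roughly one lock/unlock pair per event, which is usually small next to the network I/O itself.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        // The I/O thread pushes events; the event thread pops them. Only the
        // queue is shared, so only the queue needs locking.
        class EventQueue {
            std::mutex m;
            std::condition_variable cv;
            std::queue<std::string> events;

        public:
            void push(std::string ev) {                 // called by the I/O thread
                {
                    std::lock_guard<std::mutex> lock(m);
                    events.push(std::move(ev));
                }
                cv.notify_one();
            }
            std::string pop() {                         // called by the event thread
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return !events.empty(); });
                std::string ev = std::move(events.front());
                events.pop();
                return ev;
            }
        };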

    Read the article

  • unprotected access to member in property get

    - by Lenik
    I have a property:

        public ObservableCollection<string> Name
        {
            get { return _nameCache; }
        }

    _nameCache is updated by multiple threads in other class methods. The updates are guarded by a lock. The question is: should I use the same lock around my return statement? Will not using a lock lead to a race condition?

    Read the article
