Search Results

Search found 60 results on 3 pages for 'randomness'.

Page 1/3 | 1 2 3  | Next Page >

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control? Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc). But my understanding of this is that there is no way to control or predict whether one thread's next command will execute before or after another. The reason I'm asking is because this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire -- so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really was like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next? If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?

    Read the article
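
    A minimal illustration of the scheduling nondeterminism the question describes, sketched in Python (the question is language-agnostic; the player function and its message are hypothetical): even when the threads are started in a fixed order, which one finishes first can differ from run to run.

        import threading

        def player(name):
            # each "final attack" is a separate thread; the OS scheduler
            # decides which one actually runs first
            print(f"{name} lands the final attack")

        threads = [threading.Thread(target=player, args=(n,)) for n in ("A", "B")]
        for t in threads:
            t.start()      # start order is fixed: A then B...
        for t in threads:
            t.join()       # ...but the print order can vary between runs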

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response, but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be to have it almost like a bar graph, illustrating how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose: A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"? B) How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm misapplying this to hash tables rather than to hashes in general, but in that case I don't know how it would be "bounded" for the image.

    Read the article
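
    For what it's worth, a guess at how such images might be produced, as a Python sketch (the grid size and the use of MD5 here are assumptions, not taken from Ian's answer): hash a large set of keys, let each hash value pick one cell of a fixed grid, and darken a cell each time it is hit. A uniform hash gives an even grey field; clustering shows up as dark patches and whitespace.

        import hashlib

        def hash_map_image(keys, w=256, h=256, fname="hashmap.pgm"):
            # each key's hash picks one cell in a w*h grid;
            # collisions darken the cell
            grid = [0] * (w * h)
            for k in keys:
                hv = int.from_bytes(hashlib.md5(k.encode()).digest()[:4], "little")
                grid[hv % (w * h)] += 1
            peak = max(grid) or 1
            with open(fname, "w") as f:
                f.write(f"P2 {w} {h} 255\n")   # plain-text PGM header
                f.write("\n".join(str(255 - v * 255 // peak) for v in grid))

        hash_map_image(str(i) for i in range(100_000))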

  • Where does that randomness come from?

    - by Jules Olléon
    I'm working on a data mining research project and use code from a big SVN repository. Apparently one of the methods I use from that repository uses randomness somewhere without asking for a seed, which makes two calls to my program return different results. That's annoying for what I want to do, so I'm trying to locate that "uncontrolled" randomness. Since the classes I use depend on many others, that's pretty painful to do by hand. Any idea how I could find where that randomness comes from?

    Read the article
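
    The question doesn't name a language, but the generic technique is the same everywhere: instrument the randomness entry points so every call prints the stack that reached it (in Java, a debugger breakpoint on the java.util.Random constructor does the same job). A Python sketch of the idea:

        import random
        import traceback

        _real_random = random.random

        def traced_random():
            traceback.print_stack()   # shows exactly which caller wanted randomness
            return _real_random()

        random.random = traced_random   # every unseeded use now identifies itself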

  • Distributing entropy to virtual machines

    - by Louis
    Dear All, I'm interested in generating secret keys for SSL on virtual machines using true randomness. By true randomness I mean the same level of entropy that can be generated by UNIX's /dev/random and the entropy gathering daemon (EGD). Is there a "general knowledge" recipe to route entropy from the physical layer to the virtual machines via the hypervisor, regardless of the hypervisor/guest OS combination? Example: suppose one "hypervises" with VMware vSphere and instantiates a Windows guest OS. Can the hypervisor collect entropy from its peripherals (like /dev/random would) and distribute it to those Windows guests? When considering the big vendors (VMware, Hyper-V, Citrix, etc.), do they have entropy pools that gather entropy which can easily be pushed to their respective virtual machines? Louis

    Read the article
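
    There is no hypervisor-agnostic recipe I know of (KVM has virtio-rng; the other vendors differ, and a Windows guest would need vendor tooling), but entropy received from the host or any other trusted machine can be credited to a Linux guest's pool with the RNDADDENTROPY ioctl, which is what daemons like rngd do. A minimal sketch, assuming a Linux guest and root; received_bytes stands in for entropy shipped over a secure channel:

        import fcntl
        import struct

        RNDADDENTROPY = 0x40085203   # Linux ioctl: mix bytes into the pool, credit them

        def add_entropy(data: bytes, device: str = "/dev/random") -> None:
            # struct rand_pool_info { int entropy_count;  /* in bits */
            #                         int buf_size;  __u32 buf[]; }
            req = struct.pack(f"ii{len(data)}s", len(data) * 8, len(data), data)
            with open(device, "wb") as dev:
                fcntl.ioctl(dev, RNDADDENTROPY, req)

        # add_entropy(received_bytes)   # hypothetical bytes from the host's source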

  • nginx hackery: change image file every Xth request

    - by Vangel
    Let me describe what I am trying to do first. I have a bunch of pictures in a directory called /images/*.(jpg|gif|png|blah blah|). Now say these images are embedded in an HTML page, and I don't really care which image or where it's embedded. For every 10th request for the same picture file (if possible), or for any picture, I want to display a fixed image (e.g. trollface.jpg). That's it! I have searched around a bit, but I am not even sure what I am looking for. Rewrite might help, but then it's a permanent thing; this has got to do something with counting requests. I have heard perl scripts can be used with nginx. I can't write an nginx module (though I did bravely look up the docs and then gave up). Before you ask "But why don't you do it in the application, noob?": this is a static-files-only server. The point is to not execute any binary at all.

    Read the article
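
    Not quite "every 10th request" (a strict counter needs state, e.g. the embedded perl module), but plain nginx can serve the fixed image for roughly 10% of image requests with split_clients, which hashes a per-request string into percentage buckets. A hedged sketch; the paths and file names are the question's, the rest is an assumption:

        # http context
        split_clients "${request_uri}${msec}" $troll {
            10%  1;                  # ~10% of requests get a non-empty $troll
        }

        server {
            location /images/ {
                if ($troll) {
                    # negative lookahead stops trollface.jpg rewriting to itself
                    rewrite ^/images/(?!trollface).+\.(jpg|gif|png)$ /images/trollface.jpg last;
                }
            }
        }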

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN endpoint and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps the pool topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.

    Read the article
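
    While experimenting with feeds like the /dev/hda hack, the pool level and its configured size can be watched from /proc. A small monitor, assuming Linux's /proc interface:

        import time

        def watch_entropy(interval: float = 5.0) -> None:
            while True:
                with open("/proc/sys/kernel/random/entropy_avail") as f:
                    avail = int(f.read())
                with open("/proc/sys/kernel/random/poolsize") as f:
                    size = int(f.read())
                print(f"entropy: {avail}/{size} bits")
                time.sleep(interval)

        watch_entropy()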

  • NTUSER.DAT and UsrClass.dat files building up by the thousands: why, and can I delete them?

    - by Anthony
    I've noticed that my web server, a 2008 Xen VM, is gradually losing free space, more than I would have thought from normal use, so I decided to investigate. There are two problem areas:

    C:\Users\Administrator\ (6,755.0 MB), with files:
    NTUSER.DAT{randomness}.TMContainer'0000 randomness'.regtrans-ms
    NTUSER.DAT{randomness}.TM.blf

    AND C:\Users\Administrator\AppData\Local\Microsoft\Windows\ (6,743.8 MB), with files:
    UsrClass.dat{randomness}.TMContainer'0000 randomness'.regtrans-ms
    UsrClass.dat{randomness}.TM.blf

    From what I understand these are point-in-time backups of registry changes. If that is the case, I cannot possibly understand why there would be 10,000+ changes (that's how many files there are per folder location, over 20,000 in total). The files are using almost 15 GB of space and I want rid of them; I'm just wondering whether I can safely remove them. However, I also need to understand why they are being created so I can avoid this in the future. Any ideas why there would be so many? Is there a way I can check to see what is making the modifications? Are they created by login attempts? Are they created by everyday web-server use? And so on.

    Read the article

  • How to create reproducible probability in map generation?

    - by nickbadal
    So for my game, I'm using Perlin noise to generate regions of my map (water/land, forest/grass), but I'd also like to create some probability-based generation too. For instance: if(nextInt(10) > 2 && tile.adjacentTo(Type.WATER)) tile.setType(Type.SAND); This works fine, and is even reproducible (based on a common seed) as long as the nextInt() calls always happen in the same order. The issue is that in my game the world is generated on demand, based on the player's location. This means that if I explore the map differently, and the chunks of the map are generated in a different order, the randomness is no longer consistent. How can I get this sort of randomness to be consistent, independent of call order? Thanks in advance :)

    Read the article
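
    The usual fix is to stop sharing one sequential RNG across chunks and instead derive an independent stream per tile from (world seed, x, y), so the result no longer depends on visit order. A sketch in Python; adjacent_to/set_type stand in for the question's tile API:

        import hashlib
        import random

        def tile_rng(world_seed: int, x: int, y: int) -> random.Random:
            # the stream depends only on the seed and the coordinates,
            # never on the order in which chunks are generated
            digest = hashlib.sha256(f"{world_seed}:{x}:{y}".encode()).digest()
            return random.Random(int.from_bytes(digest[:8], "big"))

        # rng = tile_rng(seed, tile.x, tile.y)
        # if rng.randrange(10) > 2 and tile.adjacent_to(WATER):  # hypothetical API
        #     tile.set_type(SAND)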

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution, not just a single component part in isolation from what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I’ve not included the FusionIO ioDrive because there is no public pricing available for it – something I never understand; personally, when companies do this I immediately wonder what they are hiding. Luckily, in FusionIO’s case the product is proven, though it is expensive compared to OCZ’s enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people!

    What is the requirement? The database will have a static size of 400GB, kept static through archiving so growth and trim will balance the database size. The client requires resilience. There will be several hundred call centre staff querying the database; queries will read a small amount of data, but there will be no hot spot in the data, so the randomness will come across the entire 400GB of the database. Estimates predict that approximately 4,000 IOps will be required at peak times, and because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before the thread can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity. The hardware solution needs to be defined, with two possible solutions: hard disk or solid state based. The real question now is how many hard disks are required to achieve the IO throughput, the latency and the resilience – ditto for the solid state.

    Hard Drive solution: On a test on an HP DL380 (P410i controller) using IOMeter against a single 15Krpm 146GB SAS drive, with a transfer size of 8KiB against a 40GiB file on a freshly formatted disk – the partition being the only one on the disk, so the 40GiB file sits on the outer edge of the drive and more sectors can be read before head movement is required – the throughput was: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but with a 130GiB test file: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 528 IOps at an average latency of 217.49ms (4 MiB/s).

    From the results it is clear that random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40GiB of the 146GB is used gives near the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology; for the same test above on a 130GiB file, the performance boost per drive added is near linear – for each drive added, throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces by nearly 50% per drive (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130 / 1, 130 / 2, 130 / 3, etc.), so implicit short stroking is occurring: there is less file on each drive, so less head movement is required. The best latency is still 30 ms, but we have the IOps required now – though that’s on a 130GiB file and not the 400GiB we need. Some reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect. For argument’s sake let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience – RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won’t take the drives; a quick spec from DELL for a PowerVault MD1220, which gives the dual pathing, with 16 x 146GB 15Krpm 2.5” disks, is priced at £7,438.00 – note it’s probably more once we add the two controller cards to sit in the server, racking, etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} = £7,438.00 / 400 = £18.595 per GB. £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400GB and the extra disks have had to be purchased to get the IOps up.

    Solid State Drive solution: A single card significantly exceeds the IOps and latency required; for resilience, two will be required. (£2,316.54 * 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required – no external disk units, no additional controllers, no redundant controllers, etc.

    Conclusion: I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled – £11.58 per GB for SSD compared to £18.59 for hard disk. I’ve not even touched on the running costs; compare the costs of running 16 hard disks – that’s a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date as well – that concern is way overdone now; five years ago, yes, but not today.

    Read the article
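
    The arithmetic behind the post's two headline figures, restated as a quick check:

        hdd_solution = 7438.00       # DELL PowerVault quote, 16 x 146GB 15Krpm
        ssd_solution = 2 * 2316.54   # two OCZ Z-Drive R4 cards for resilience
        required_gb = 400            # the actual business requirement

        print(hdd_solution / required_gb)   # 18.595 -> the post's ~GBP 18.59/GB
        print(ssd_solution / required_gb)   # 11.5827 -> the post's ~GBP 11.58/GB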

  • What do different patterns mean in Windows 8 file copy dialog

    - by MainMa
    When copying or extracting files, Windows 8 shows a chart with the speed of the operation. I noticed several patterns:
    1. Randomness / nice mountains.
    2. High speed at the beginning, then low speed for most of the operation.
    3. Low speed at the beginning, then high speed for most of the operation (similar to the previous pattern, but inverted).
    4. Mostly constant speed (same as the previous pattern, but without the fast start).
    I'm curious: what does each of those patterns mean? Do some indicate that there may be a problem with hard disk performance? Why is the nearly constant speed so rare, even when copying a single large file from and to a spinning drive, or when copying a single large file or a bunch of small files from and to an SSD?

    Read the article

  • Generating strongly biased random numbers for tests

    - by nobody
    I want to run tests with randomized inputs and need to generate 'sensible' random numbers, that is, numbers that match well enough to pass the tested function's preconditions, but hopefully wreak havoc deeper inside its code. math.random() (I'm using Lua) produces uniformly distributed random numbers. Scaling these up will give far more big numbers than small numbers, and there will be very few integers. I would like to skew the random numbers (or generate new ones using the old function as a randomness source) in a way that strongly favors 'simple' numbers, but will still cover the whole range, i.e. extending up to positive/negative infinity (or ±1e309 for double). This means: numbers up to, say, ten should be most common; integers should be more common than fractions; numbers ending in 0.5 should be the most common fractions, followed by 0.25 and 0.75, then 0.125, and so on. A different description: fix a base probability x such that the probabilities sum to one, and define the probability of a number n as x^k, where k is the generation in which n is constructed as a surreal number [1]. That assigns x to 0, x^2 to -1 and +1, x^3 to -2, -1/2, +1/2 and +2, and so on. This gives a nice description of something close to what I want (it skews a bit too much), but is near-unusable for computing random numbers. The resulting distribution is nowhere continuous (it's fractal!), I'm not sure how to determine the base probability x (I think for infinite precision it would be zero), and computing numbers based on this by iteration is awfully slow (spending near-infinite time to construct large numbers). Does anyone know of a simple approximation that, given a uniformly distributed randomness source, produces random numbers very roughly distributed as described above? I would like to run thousands of randomized tests; quantity/speed is more important than quality. Still, better numbers mean fewer inputs get rejected. Lua has a JIT, so performance can't be reasonably predicted: jumps based on randomness will break every prediction, and many calls to math.random() will be slow, too. This means a closed formula will be better than an iterative or recursive one. [1] Wikipedia has an article on surreal numbers, with a nice picture. A surreal number is a pair of two surreal numbers, i.e. x := {n|m}, and its value is the number in the middle of the pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as a rational). If one side of the pair is empty, that's interpreted as incrementing by one (or decrementing, if the left side is empty). If both sides are empty, that's zero. Initially, there are no numbers, so the only number one can build is 0 := { | }. In generation two one can build the numbers {0| } =: 1 and { |0} =: -1; in three we get {1| } =: 2, {|1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some more complex representations of known numbers, e.g. {-1|1} = 0). Note that e.g. 1/3 is never generated by finite numbers because it is an infinite fraction – the same goes for floats; 1/3 is never represented exactly.

    Read the article
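
    One cheap approximation, sketched in Python since that is easier to check here (porting to Lua is direct: math.random() replaces rng.random()): pick a scale with geometrically decaying probability, a uniform integer within the scale, and a dyadic fraction whose extra bits are each half as likely. It skews less precisely than the surreal-number description, but runs in expected constant time.

        import random

        def simple_biased(rng=random):
            # scale: P(e = k) = 2^-(k+1), so small magnitudes dominate
            # but arbitrarily large ones stay reachable
            e = 0
            while rng.random() < 0.5:
                e += 1
            n = rng.randrange(2 ** e)   # integer part, uniform within the scale
            # dyadic fraction: stop immediately with probability 1/2 (pure
            # integer); each further bit is half as likely as the previous one
            frac, denom = 0.0, 2.0
            while rng.random() < 0.5:
                frac += rng.randrange(2) / denom
                denom *= 2.0
            x = n + frac
            return -x if rng.random() < 0.5 else x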

  • How can I make an even more random number in ActionScript 2.0

    - by Theo
    I write a piece of software that runs inside banner ads and generates millions of session IDs every day. For a long time I've known that the random number generator in Flash isn't random enough to generate sufficiently unique IDs, so I've employed a number of tricks to get even more random numbers. However, in ActionScript 2.0 it's not easy, and I'm seeing more and more collisions, so I wonder if there is something I've overlooked. As far as I can tell, the problem with Math.random() is that it's seeded by the system time, and when you have a sufficient number of simultaneous attempts you're bound to see collisions. In ActionScript 3.0 I use System.totalMemory, but there's no equivalent in ActionScript 2.0. AS3 also has Font.enumerateFonts, and a few other things that differ from system to system. On the server side I also add the IP address to the session ID, but even that isn't enough (for example, many large companies use a single proxy server, which means thousands of people all have the same IP – and since they tend to look at the same sites, with the same ads, at roughly the same time, there are many session ID collisions). What I need isn't something perfectly random, just something that is random enough to dilute the randomness I get from Math.random(). Think of it this way: there is a certain chance that two people will generate the same random number sequence using only Math.random(), but the chance of two people generating the same sequence and having, say, the exact same list of fonts is significantly lower. I cannot rely on having sufficient script access to use ExternalInterface to get hold of things like the user agent or the URL of the page. I don't need suggestions for how to do it in AS3, or any other system, only AS2 – using only what's available in the standard APIs. The best I've come up with so far is to use the list of microphones (Microphone.names), but I've also tried to do some fingerprinting using some of the properties in System.capabilities; I'm not sure how much randomness I can get out of that, though, so I'm not using it at the moment. I hope I've overlooked something.

    Read the article
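
    Without claiming anything about AS2's API surface beyond what the question lists, the underlying pattern is: concatenate every cheap, machine-varying value available (Microphone.names, System.capabilities strings, timer jitter, a per-SWF counter) and hash the lot, so two clients must agree on all sources at once to collide. The shape of it, in Python:

        import hashlib
        import itertools
        import time

        _counter = itertools.count()

        def session_id(*weak_sources: str) -> str:
            # each extra source multiplies the work needed for two
            # clients to produce the same ID
            material = "|".join((
                str(time.time_ns()),    # clock
                str(next(_counter)),    # per-process counter
                *weak_sources,          # e.g. font list, capability string...
            ))
            return hashlib.sha1(material.encode()).hexdigest()

        print(session_id("Microphone.names...", "System.capabilities..."))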

  • MySQL random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you only read the title, obviously... and if you don't know the solution, don't answer; then I won't have to downvote you. -.-'' I'm entertaining the idea of getting random rows directly from MySQL. What I found was SELECT * FROM tablename WHERE somefield='something' ORDER BY RAND() LIMIT 5, but even I can see how slow that would be. Is the only way to do this something like SELECT * FROM tablename WHERE somefield='something' LIMIT RAND(autoincrementvalue-5), 1 five times? Or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want random indexes. I hate the idea of them...) @commenters: please first look, then think, then look again, think again and then post. I won't point fingers, but I dislike stupid comments. And why do I think random indexes are a nasty hack? They don't give you random results; they give you x results from a random index in a predefined order. It's like a gapless ID, only in the wrong order. If you fetch one row at a time to get true randomness, you fall back to my method, but with an additional junk field. Finally, the field exists only to serve as a helper for something that can be done without it with almost the same performance (but with better quality, i.e. randomness), so it is a nasty hack ;) I solved it, look at my answer... if you think it's incorrect, please tell me :)

    Read the article
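
    For the archive, the standard workaround the question is circling: count the matching rows once, then fetch single rows at random offsets, which avoids ORDER BY RAND() sorting the whole table. Sketched in Python with a DB-API cursor; the table and column names are the question's, the driver is assumed:

        import random

        def random_rows(cursor, n=5):
            cursor.execute(
                "SELECT COUNT(*) FROM tablename WHERE somefield = %s",
                ("something",))
            total = cursor.fetchone()[0]
            rows = []
            # distinct random offsets; note LIMIT offset,1 still walks
            # `offset` rows, so this suits modest tables better than huge ones
            for offset in random.sample(range(total), min(n, total)):
                cursor.execute(
                    "SELECT * FROM tablename WHERE somefield = %s LIMIT %s, 1",
                    ("something", offset))
                rows.append(cursor.fetchone())
            return rows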

  • Difference between Procedural Generation and Random Generation

    - by U-No-Poo
    Today, I got into an argument about the term "procedural generation". My point was that it's different from "classic" random generation in that procedural generation is based on a more mathematical, fractal-based algorithm, leading to a more "realistic" distribution, whereas the usual randomness of most languages is based on a pseudo-random-number generator, leading to an "unrealistic", in a way ugly, distribution. This discussion was had with a heightmap in mind. The discussion left me somehow unconvinced about my own arguments, though, so: is there more to it? Or am I the one who is, in fact, simply wrong?

    Read the article
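
    One way to ground the argument: both kinds of generation typically draw from the same PRNG; "procedural" refers to the structured algorithm layered on top. A 1-D sketch contrasting raw white noise with value noise (a simpler cousin of the Perlin noise mentioned):

        import random

        def white_noise(width, seed=0):
            rng = random.Random(seed)
            return [rng.random() for _ in range(width)]   # independent: jagged

        def value_noise(width, seed=0, scale=8):
            # same PRNG, but samples smoothly interpolate a sparse lattice;
            # this structure is what makes the result look "procedural"
            rng = random.Random(seed)
            lattice = [rng.random() for _ in range(width // scale + 2)]
            out = []
            for x in range(width):
                i, t = divmod(x, scale)
                t /= scale
                t = t * t * (3 - 2 * t)   # smoothstep easing
                out.append(lattice[i] * (1 - t) + lattice[i + 1] * t)
            return out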

  • Where can I find "magic numbers" for classic game play mechanics?

    - by MrDatabase
    I'd like to find some "magic numbers" for the classic helicopter game. For example the numbers that determine how fast the helicopter accelerates up and down. Also perhaps the "randomness" of the obstacles (uniformly distributed? Gaussian?). Where can I find these numbers? p.s. I don't care about the particular platform... Flash on the desktop browser is just as good as some implementation on a mobile device.

    Read the article

  • Database for survey

    - by zfm
    One of my jobs now is to design a database for a survey. Let's say we have a series of questions (web-based), in which one page contains one question. Not every person will be given the same questions; those are based on their previous answers and also randomness. I would like to know whether it is better to have a database like this:

        user   question   answer
        userX  question1  answer1A
        userX  question2  answer2C
        userX  question5  answer5F
        userY  question1  answer1B
        userY  question3  answer3B
        userY  question6  answer6D
        ...

    or:

        user   q1  q2    q3    q4    q5    q6
        userX  1A  2C    null  null  5F    null
        userY  1B  null  3B    null  null  6D
        ...

    My idea here is that the second approach seems better; however, I would like to know whether updating the table is (much) slower than inserting a new row. Also, with the first approach I can omit having some null answers. The total number of questions given is fixed; the client won't add any more questions later on. So my question is: what would you do if you were me?

    Read the article

  • What are best practices for testing programs with stochastic behavior?

    - by John Doucette
    Doing R&D work, I often find myself writing programs that have some large degree of randomness in their behavior. For example, when I work in Genetic Programming, I often write programs that generate and execute arbitrary random source code. A problem with testing such code is that bugs are often intermittent and can be very hard to reproduce. This goes beyond just setting a random seed to the same value and starting execution over. For instance, code might read a message from the kernel ring buffer, and then make conditional jumps on the message contents. Naturally, the ring buffer's state will have changed when one later attempts to reproduce the issue. Even though this behavior is a feature, it can trigger other code in unexpected ways and thus often reveals bugs that unit tests (or human testers) don't find. Are there established best practices for testing systems of this sort? If so, some references would be very helpful. If not, any other suggestions are welcome!

    Read the article
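
    One widely used practice, shown as a Python sketch (exercise_system is a hypothetical stand-in): give every run a fresh seed, route all randomness through one RNG object, and print the seed on failure so the run can be replayed. Genuinely external inputs, like the ring-buffer contents mentioned, have to be recorded at the boundary and replayed the same way.

        import os
        import random

        def exercise_system(rng):
            # stand-in for the real randomized system under test
            data = [rng.randint(0, 9) for _ in range(100)]
            assert len(set(data)) > 1   # a property that could (rarely) fail

        def run_test(seed=None):
            if seed is None:
                seed = int.from_bytes(os.urandom(8), "big")  # fresh seed per run
            rng = random.Random(seed)   # the ONLY randomness source in the test
            try:
                exercise_system(rng)
            except Exception:
                print(f"reproduce with run_test(seed={seed})")
                raise

        run_test()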

  • How does one unit test an algorithm

    - by Asa Baylus
    I was recently working on a JS slideshow which rotates images using a weighted average algorithm. Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he's noted under todos: "unit tests!". What I'd like to know is how one goes about unit testing an algorithm. In the case of a weighted average, how would you create a proof that the averages are accurate when there is an element of randomness? Code samples or similar would be very helpful to my understanding.

    Read the article
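
    The usual approach has two parts: pin the seed so the test is deterministic, and assert statistical properties with a tolerance rather than exact values. A self-contained sketch; the weighted-choice function is a stand-in, not timgilbert's script:

        import random
        from collections import Counter

        def weighted_choice(weights, rng):
            # stand-in implementation under test
            r = rng.random() * sum(weights.values())
            for item, w in weights.items():
                r -= w
                if r < 0:
                    return item
            return item

        def test_weighted_choice():
            rng = random.Random(42)   # fixed seed: the run is reproducible
            counts = Counter(weighted_choice({"a": 1, "b": 3}, rng)
                             for _ in range(10_000))
            # expectation is 2500/7500; the tolerance is several standard
            # deviations wide, so the assertion is stable
            assert abs(counts["a"] - 2_500) < 300
            assert abs(counts["b"] - 7_500) < 300

        test_weighted_choice()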

  • Algorithm to increase odds of matching when randomly selecting

    - by Bryan
    I am building a mobile game loosely based on dual n-back: http://brainworkshop.sourceforge.net/tutorial.html Now, in the game I have 9 squares (numbered 1 through 9) and 9 letters (A through K). In the current code, I randomly select a square (e.g. 3) and a letter (e.g. C), then repeat the random selection for the next turn. For 1-back, I test whether either, neither or both match the previous turn. The problem with my current code is that I get very few matches – I can go through many turns without having either match. How can I increase the match frequency, or alternatively decrease the randomness so a match is more likely? I am not looking for specific code (but pseudo-code would be fine), just an approach to increase the match frequency.

    Read the article
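
    The common trick in n-back trainers is to stop sampling uniformly and instead, with a tunable probability, deliberately repeat the 1-back stimulus. Pseudo-code was requested; a compact Python version, with the 9-letter set assumed from the question (I and J skipped to make "A through K" come to nine):

        import random

        LETTERS = "ABCDEFGHK"   # 9 letters, assuming I and J are skipped

        def next_stimulus(previous, match_rate=0.35, rng=random):
            # with probability match_rate, force a 1-back match outright;
            # otherwise fall back to a uniform pick (which may match by luck)
            if previous is not None and rng.random() < match_rate:
                return previous
            return rng.randrange(1, 10), rng.choice(LETTERS)

        turn = None
        for _ in range(5):
            turn = next_stimulus(turn)
            print(turn)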

  • Random text generator

    - by verbatim64x
    What is the best way to generate a random string which is composed of letters of the alphabet and is a maximum of 8 million characters, to be tested using string-searching algorithms? Is Math.random still OK for the randomness, i.e. the statistical reliability of the spread of characters? Any comment is appreciated; correct me if I'm wrong in my ideas.

    Read the article
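
    For this use case a seeded PRNG is entirely adequate: the spread over letters is uniform enough for exercising search algorithms, and seeding makes failing cases reproducible. A Python sketch:

        import random
        import string

        def random_text(length=8_000_000, seed=1234):
            rng = random.Random(seed)   # seeded, so test corpora are reproducible
            return "".join(rng.choices(string.ascii_lowercase, k=length))

        haystack = random_text()
        print(len(haystack), haystack[:20])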

  • Setting last N bits in an array

    - by Martin
    I'm sure this is fairly simple; however, I have a major mental block on it, so I need a little help here! I have an array of 5 integers, already filled with some data, and I want to set the last N bits of the array to be random noise.

        [int][int][int][int][int]

    Set the last 40 bits:

        [unchanged][unchanged][unchanged][24 bits of old data followed by 8 bits of randomness][all random]

    This is largely language-agnostic, but I'm working in C#, so bonus points for answers in C#.

    Read the article
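
    The asker wants C#; here is the masking logic as a Python sketch to keep this page to one language (it translates to C# operator-for-operator): walk the words from the end, and in each one replace min(32, bits remaining) low bits with noise.

        import random

        def set_last_bits(words, n, rng=random):
            # words: 32-bit ints, most significant word first; overwrites
            # the last n bits of the array with random noise
            done = 0
            for i in reversed(range(len(words))):
                take = min(32, n - done)
                if take <= 0:
                    break
                mask = (1 << take) - 1
                words[i] = (words[i] & ~mask & 0xFFFFFFFF) | rng.getrandbits(take)
                done += take
            return words

        # last word fully random, low 8 bits of the fourth word random:
        print(set_last_bits([1, 2, 3, 4, 5], 40))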

  • Cheap and cheerful rand() replacement

    - by Mick
    After profiling a large game-playing program, I have found that the library function rand() is consuming a considerable fraction of the total processing time. My requirements for the random number generator are not very onerous: it's not important that it pass a great battery of statistical tests of pure randomness. I just want something cheap and cheerful that is very fast. Any suggestions?

    Read the article
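
    The stock answer here is one of Marsaglia's xorshift generators: three shifts and xors per number, no multiplications or tables, and quality that is fine for game playouts though not for cryptography. Sketched in Python for consistency with the rest of this page; in C the body is the same three lines without the masking.

        class XorShift32:
            """Marsaglia xorshift with the classic (13, 17, 5) triple."""

            def __init__(self, seed=2463534242):
                self.state = (seed & 0xFFFFFFFF) or 1   # state must be nonzero

            def next(self):
                x = self.state
                x ^= (x << 13) & 0xFFFFFFFF
                x ^= x >> 17
                x ^= (x << 5) & 0xFFFFFFFF
                self.state = x
                return x   # uniform in [1, 2**32)

        rng = XorShift32(42)
        print([rng.next() % 100 for _ in range(5)])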

1 2 3  | Next Page >