Search Results

Search found 5070 results on 203 pages for 'algorithm'.


  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "Minecraft mode" (one block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result whether I use boolean or float values.

    I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that, given a very low isovalue, the accuracy of the surface being rendered should increase, so in short "fewer 45-degree slopes, more rolling hills" type mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of fixing?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. Firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too ... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the marching cubes image there are sharp falling edges even in the most central areas, where the gradient in the first image looks smoothest. The data is regenerated each time I run this, so no two islands come out the same, and it's purely random, not based on noise; but still, how can it look so smooth in one image and so unsmooth in the other?
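
    For reference, the "move vertex along the edge" step is normally a single linear interpolation per intersected cube edge, using the two corner densities. A minimal sketch (C++ rather than C#, and the names are illustrative, not taken from the linked implementation):

        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Place the surface vertex on the edge between corners p1 and p2,
        // where the density field has values v1 and v2 and the surface sits
        // at density == isolevel. With boolean (0/1) densities t is always
        // the same constant, which is why float and boolean input can look
        // identical if densities get thresholded before this step.
        Vec3 vertexInterp(float isolevel, Vec3 p1, Vec3 p2, float v1, float v2) {
            float t = (isolevel - v1) / (v2 - v1);  // v1 != v2 on a crossing edge
            return { p1.x + t * (p2.x - p1.x),
                     p1.y + t * (p2.y - p1.y),
                     p1.z + t * (p2.z - p1.z) };
        }

        int main() {
            Vec3 a{0, 0, 0}, b{1, 0, 0};
            Vec3 v = vertexInterp(0.5f, a, b, 0.2f, 0.9f);
            std::printf("%.3f %.3f %.3f\n", v.x, v.y, v.z);  // 0.429 0.000 0.000
        }

    If a port of this always produces t = 0.5, the usual culprit is that the densities are being converted to 0/1 before interpolation, which would explain the sharp edges described in the edit.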

    Read the article

  • Looking for an algorithm to connect dots - shortest route

    - by e4ch
    I have written a program to solve a special puzzle, but now I'm kind of stuck at the following problem: I have about 3200 points/nodes/dots. Each of these points is connected to a few other points (usually 2-5, the theoretical limit is 1-26). I have exactly one starting point and about 30 exit points (probably all of the exit points are connected to each other). Many of these 3200 points are probably connected to neither the start nor the end point in any way, like a separate net, but every point is connected to at least one other point. I need to find the smallest number of hops to go from entry to exit. There is no distance between the points (unlike the road or train routing problem); only the number of hops counts. I need to find all solutions with the smallest number of hops, not just one solution, and potentially also solutions with one more hop, and so on. I expect a solution to take about 30-50 hops from start to exit.

    I already tried: 1) Randomly trying possibilities and starting over whenever the count exceeded a previous solution. I got a first solution with 3500 hops, and it came down to about 97 after some minutes, but looking at the solutions I saw problems like unnecessary loops, so I optimised a bit (such as not going back where it came from). More optimisations are possible, but this random approach doesn't find all best solutions, or takes too long. 2) Recursively running through all paths from the start (chess-problem-like) and abandoning a try when it reaches a previously visited point. This was looping at a length of about 120 nodes, so it tries chains that are (probably) far too long. With 4 possibilities and 120 nodes we reach about 1.7E72 possibilities, which is impossible to compute through. I found out in the meantime that this is called depth-first search (DFS). Maybe I should try breadth-first search by adding some queue?

    The connections between the points are actually moves you can make in the game, and the points are what the game looks like after you make the move. What would be the right algorithm for this problem? I'm using C#.NET, but the language shouldn't matter.
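
    Breadth-first search with a queue is indeed the standard answer here: it visits nodes in order of hop count, so the first time an exit is reached, that is the minimum. To enumerate all shortest solutions, record every predecessor that reaches a node along some shortest path. A hedged sketch (C++; the tiny graph is only an illustration):

        #include <cstdio>
        #include <queue>
        #include <vector>

        int main() {
            // Adjacency list of a toy graph; replace with the ~3200-node game graph.
            std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
            int start = 0, n = int(adj.size());
            std::vector<int> dist(n, -1);
            std::vector<std::vector<int>> parents(n);  // all shortest-path predecessors
            std::queue<int> q;
            dist[start] = 0;
            q.push(start);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v : adj[u]) {
                    if (dist[v] == -1) {               // first visit: shortest distance
                        dist[v] = dist[u] + 1;
                        parents[v].push_back(u);
                        q.push(v);
                    } else if (dist[v] == dist[u] + 1) {
                        parents[v].push_back(u);       // another equally short route
                    }
                }
            }
            std::printf("hops to node 3: %d\n", dist[3]);  // 2
        }

    Walking the parents lists backwards from each exit yields every minimum-hop solution; unconnected "separate nets" simply keep dist = -1. Paths with one extra hop need a small extension (for example, also keeping predecessors at the same level and filtering cycles during reconstruction).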

    Read the article

  • Do encryption algorithms require an internal hashing algorithm?

    - by Rudi
    When I use C# to implement the AES symmetric encryption cipher, I noticed this: PasswordDeriveBytes derivedPassword = new PasswordDeriveBytes(password, saltBytesArray, hashAlgorithmName, numPasswordIterations); Why do I need to use a hashing algorithm for AES encryption? Aren't they separate? Or is the hashing algorithm only used to create a secure key? The AES algorithm doesn't use a hashing algorithm internally, does it?
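
    For reference, the hash belongs to the key-derivation step, not to AES itself: the block cipher only needs key bytes, and the hash is how a variable-length password gets stretched into them. A toy sketch of the idea (deliberately not secure; FNV-1a stands in for the real hash, and real PBKDF schemes emit full-size keys):

        #include <cstdint>
        #include <string>

        // Toy PBKDF1-like derivation: hash(password || salt), then re-hash
        // `iterations` times. Illustrative only; use a vetted KDF in practice.
        uint64_t fnv1a(const std::string& data) {
            uint64_t h = 1469598103934665603ull;
            for (unsigned char c : data) { h ^= c; h *= 1099511628211ull; }
            return h;
        }

        uint64_t deriveKey(const std::string& password, const std::string& salt,
                           int iterations) {
            uint64_t k = fnv1a(password + salt);
            for (int i = 0; i < iterations; ++i)
                k = fnv1a(std::string(reinterpret_cast<const char*>(&k), sizeof k));
            return k;  // a real KDF would produce 128/192/256 key bits for AES
        }

    AES itself uses only substitutions, permutations and XORs in its key schedule and rounds; no hash function is involved internally.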

    Read the article

  • How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N

    - by tucuxi
    The question gives all the necessary data: what is an efficient algorithm to generate a sequence of K non-repeating integers within a given interval? The trivial algorithm (generating random numbers and, before adding them to the sequence, looking them up to see whether they are already there) is very expensive if K is large and close to N. The algorithm provided here seems more complicated than necessary, and requires some implementation. I've just found another algorithm that seems to do the job fine, as long as you know all the relevant parameters, in a single pass.
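
    One standard answer worth recording here is Floyd's sampling algorithm: K iterations, one random draw each, no retry loop even when K is close to N. A sketch (C++, using a hash set for the "already chosen" test):

        #include <random>
        #include <unordered_set>
        #include <vector>

        // Floyd's algorithm: returns K distinct integers in [0, N).
        std::vector<int> sampleK(int n, int k, std::mt19937& rng) {
            std::unordered_set<int> chosen;
            std::vector<int> result;
            for (int j = n - k; j < n; ++j) {
                int t = std::uniform_int_distribution<int>(0, j)(rng);
                if (chosen.count(t)) t = j;  // collision: take j, which cannot be chosen yet
                chosen.insert(t);
                result.push_back(t);
            }
            return result;
        }

    Each value in [0, N) ends up equally likely to be selected, and no draw is ever rejected. The output is not in random order, so shuffle it afterwards if the sequence order matters; when K is close to N, a partial Fisher-Yates shuffle of the full range is the other common choice.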

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), does some processing, and then stores the results in the database again. This whole process can be considered a single task. A new task can be assigned while another task is running; newly assigned tasks are added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks running, and "Done" for tasks that are completed.

    Main issue: I first need an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted. There, the current state and the state after total execution are both known, so progress can be tracked. But I am not able to devise anything analogous for my case. Maybe what I am trying to achieve is not possible programmatically. If it isn't, please suggest a workaround or a completely new approach. If some details are missing, do ask. Sorry for my ignorance, this is my first post here; I welcome you to point out my mistakes.

    EDIT: Database outline: the URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Keywords are then extracted from all the links in this table, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Sub-links are extracted from the domain links and stored in a table called sub_link_master; keywords are extracted from these sub-links in turn and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page varies. Only the cardinality of the link_result table is known in advance: it equals the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: the controller extracts keywords from a web page and also from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords, generating results and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in their respective tables. Yes, the processing depends on the number of records; the more records, the longer it takes. Consider this scenario:

    Number of domain links = 1
    Number of keywords = 3
    Number of domain link results generated = 3 (3 x 1, as described above)
    Number of sub-links generated = 41
    Number of sub-link results = 117 (41 x 3 = 123, but some links are not valid or searchable)
    Approximate time for the above process to complete = 55 seconds

    The above result is for a single link. I want to track the progress of these results being stored in the database: when all results are stored, the task is complete; while results are being stored, the task is in progress. I am not clear on how I can track this progress.
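
    One workable pattern (an assumption about your setup, not something CodeIgniter provides out of the box): once the links are extracted, the total number of expected result rows is known (links x keywords), so the task can carry a total and a done counter that the crawler updates after each stored result and the UI polls. Sketched here generically in C++; in the PHP/MySQL setting the two counters would live as columns on a task row:

        #include <atomic>
        #include <cstdio>

        struct TaskProgress {
            std::atomic<int> total{0}, done{0};
            int percent() const {
                int t = total.load();
                return t == 0 ? 0 : 100 * done.load() / t;
            }
        };

        int main() {
            TaskProgress p;
            p.total = 123;                            // 41 sub-links x 3 keywords
            for (int i = 0; i < 117; ++i) p.done++;   // some links turn out invalid
            std::printf("%d%% complete\n", p.percent());  // 95% complete
        }

    Because invalid links mean done may never reach total, mark the task "Done" when the crawl loop exits rather than when the counters match; "Pending" is simply a task row whose worker has not started.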

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't long before we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it several steps further: items take more than one slot, an advanced placement system (items try their best to find the closest fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work.

    What we would like to have in our game is an auto-organizing feature, not auto-sort. We want this feature because our inventory is going to be in real time, not like in Resident Evil 1, 2, 3, etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets. You look around and see that there are bullets nearby on the ground, so you go to pick them up, but they don't fit! You look at your inventory and find that if you reorganize some of the items, they will fit! But the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops to make space (remember, the inventory is in real time, no pausing). Wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in games like Dungeon Siege, so it's certainly doable.) Take a look at this picture for example:

    Yes, auto-sorting would free the spaces, but it's bad because: 1- It's expensive: it doesn't take a whole sort operation to free those spaces; in the first picture, just slide the red item at the bottom to the very left and you get the same spaces you would get from the auto-sort. 2- It's annoying to the player: "Who the F told you to re-order my stuff?"

    I'm not asking for "how to write the code" for this; I'm just asking for some guidance. Where should I look, and what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, because I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the relevant material. Notice there could be more than one solution. So I guess the first thing I have to do is figure out whether the situation is solvable: if I know how to determine whether a situation is solvable, then I can solve it. I just need to know the conditions that make it solvable, and I believe there must be some algorithm/data structure for this. Here's a picture showing more than one solution for trying to fit a 1x3 item: the arrows show just one of the solutions, but if you look you will find more. This is what I'm ultimately after: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it would be like holding a car wheel with your feet instead of your hands! XD Or like trying to solve a problem that calls for arrays when you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps; thanks a lot in advance :)
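
    For orientation: deciding whether the items (plus the new pickup) can be arranged in the bag is a 2D rectangle-packing feasibility question, which is NP-hard in general but entirely tractable for bag-sized grids by exhaustive backtracking. A hedged sketch of that feasibility check (C++; the recorded placements of a successful run are the reorganised layout):

        #include <vector>

        struct Item { int w, h; };

        // Can all rectangular items be placed in a W x H grid? Tries each
        // item at every position, marks cells, backtracks on failure.
        static bool place(std::vector<std::vector<bool>>& g,
                          const std::vector<Item>& items, size_t idx) {
            if (idx == items.size()) return true;
            int H = int(g.size()), W = int(g[0].size());
            const Item& it = items[idx];
            for (int y = 0; y + it.h <= H; ++y)
                for (int x = 0; x + it.w <= W; ++x) {
                    bool blocked = false;
                    for (int dy = 0; dy < it.h && !blocked; ++dy)
                        for (int dx = 0; dx < it.w && !blocked; ++dx)
                            blocked = g[y + dy][x + dx];
                    if (blocked) continue;
                    for (int dy = 0; dy < it.h; ++dy)           // occupy
                        for (int dx = 0; dx < it.w; ++dx) g[y + dy][x + dx] = true;
                    if (place(g, items, idx + 1)) return true;
                    for (int dy = 0; dy < it.h; ++dy)           // undo
                        for (int dx = 0; dx < it.w; ++dx) g[y + dy][x + dx] = false;
                }
            return false;
        }

        bool fits(int W, int H, const std::vector<Item>& items) {
            std::vector<std::vector<bool>> grid(H, std::vector<bool>(W, false));
            return place(grid, items, 0);
        }

    Ordering candidate positions by distance from each item's current slot keeps the auto-arrangement close to what the player had, which addresses the "who re-ordered my stuff?" complaint. The search terms to look up are "2D bin packing" and "rectangle packing".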

    Read the article

  • How is schoolbook long division an O(n^2) algorithm?

    - by eSKay
    Premise: This Wikipedia page suggests that the computational complexity of schoolbook long division is O(n^2). Deduction: Instead of taking two n-digit numbers, if I take one n-digit number and one m-digit number, the complexity should be O(n*m). Contradiction: If you divide 100000000 (n digits) by 1000 (m digits), you get 100000, which takes six steps to arrive at. Now, if you divide 100000000 (n digits) by 10000 (m digits), you get 10000, which takes only five steps. Conclusion: So it seems that the order of computation should be something like O(n/m). Question: Who is wrong, me or Wikipedia, and where?
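
    For what it's worth, the two views can be reconciled by counting the cost of each step, not just the number of steps. The quotient has about n - m + 1 digits, so there are about n - m + 1 steps, but each step compares and subtracts an m-digit number, costing O(m) digit operations:

        T(n, m) = O(m * (n - m + 1))

    In the two examples above: 9 digits / 4 digits gives 6 steps of roughly 4-digit work (about 24 digit operations), while 9 digits / 5 digits gives 5 steps of roughly 5-digit work (about 25 digit operations); the step count drops, but the total work barely changes. The expression m * (n - m + 1) is maximised near m = n/2, where it is about n^2/4, which is how Wikipedia's O(n^2) arises as a worst-case bound. The O(n/m) reading counts only the steps, not their cost.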

    Read the article

  • How to sort in-place using the merge sort algorithm?

    - by eSKay
    I know the question is too open. All I want is for someone to tell me how to convert a normal merge sort into an in-place merge sort (or a merge sort with constant extra space overhead). All I can find (on the net) are pages saying "it is too complex" or "out of scope of this text". "The only known ways to merge in-place (without any extra space) are too complex to be reduced to practical program." (from here) Even if it is too complex, can somebody outline the basic concept of how to make the merge sort in-place?
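
    One practical middle ground, short of the genuinely complex O(n)-time in-place merges from the literature, is merging by block rotation: no auxiliary buffer (beyond the recursion stack) at the cost of an extra log factor, so the whole sort is O(n log^2 n) instead of O(n log n). A sketch of that approach (this is essentially the SymMerge idea, and what typical std::inplace_merge implementations fall back to when no buffer is available):

        #include <algorithm>
        #include <vector>

        // Merge sorted ranges [lo, mid) and [mid, hi) of a, in place.
        void mergeInPlace(std::vector<int>& a, int lo, int mid, int hi) {
            if (lo >= mid || mid >= hi) return;
            if (hi - lo == 2) {
                if (a[lo] > a[lo + 1]) std::swap(a[lo], a[lo + 1]);
                return;
            }
            // Cut the larger half at its midpoint, binary-search the matching
            // cut in the other half, rotate the two middle blocks together,
            // then merge the two smaller subproblems recursively.
            int i, j;
            if (mid - lo >= hi - mid) {
                i = lo + (mid - lo) / 2;
                j = int(std::lower_bound(a.begin() + mid, a.begin() + hi, a[i]) - a.begin());
            } else {
                j = mid + (hi - mid) / 2;
                i = int(std::upper_bound(a.begin() + lo, a.begin() + mid, a[j]) - a.begin());
            }
            std::rotate(a.begin() + i, a.begin() + mid, a.begin() + j);
            int newMid = i + (j - mid);
            mergeInPlace(a, lo, i, newMid);
            mergeInPlace(a, newMid, j, hi);
        }

        void mergeSort(std::vector<int>& a, int lo, int hi) {
            if (hi - lo < 2) return;
            int mid = lo + (hi - lo) / 2;
            mergeSort(a, lo, mid);
            mergeSort(a, mid, hi);
            mergeInPlace(a, lo, mid, hi);
        }

    The basic concept behind all these schemes is the same: instead of copying into a buffer, exchange whole blocks (here via rotations) so that everything smaller than a chosen pivot value ends up left of everything larger, then recurse on the two halves.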

    Read the article

  • What is a convenient base for a bignum library & primality testing algorithm?

    - by nn
    Hi, I am to program the Solovay-Strassen primality test presented in the original paper on RSA. Additionally, I will need to write a small bignum library, and while searching for a convenient representation for bignums I came across this specification:

    struct {
        int sign;
        int size;
        int *tab;
    } bignum;

    I will also be writing a multiplication routine using the Karatsuba method. So, my question: what base would be convenient for storing integer data in the bignum struct? Note: I am not allowed to use third-party or built-in bignum implementations such as GMP. Thank you.
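
    A common rule of thumb: pick the largest base whose limb products still fit your widest integer type during Karatsuba's inner multiplications. With 32-bit limbs and 64-bit accumulation, base 10^9 is convenient (10^9 squared is below 2^63, and decimal printing is trivial: nine digits per limb); a power of two such as 2^16 or 2^30 instead makes shifts and bit operations cheap. A sketch of addition in base 10^9, as one illustration:

        #include <cstdint>
        #include <vector>

        const uint32_t BASE = 1000000000;  // 1e9: nine decimal digits per limb

        // Add two non-negative bignums stored little-endian (tab[0] is the
        // least significant limb), carrying in 64 bits.
        std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                                  const std::vector<uint32_t>& b) {
            std::vector<uint32_t> r;
            uint64_t carry = 0;
            for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
                uint64_t s = carry;
                if (i < a.size()) s += a[i];
                if (i < b.size()) s += b[i];
                r.push_back(uint32_t(s % BASE));
                carry = s / BASE;
            }
            return r;
        }

    For the Solovay-Strassen test you will mainly need modular multiplication and exponentiation, so whichever base you choose, make sure the multiply-then-reduce path stays within your accumulator type.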

    Read the article

  • The algorithm used to generate recommendations in Google News?

    - by Siddhant
    Hi everyone. I'm studying recommendation engines, and I went through the paper that describes how Google News generates recommendations for news items which might interest a user, based on collaborative filtering. One interesting technique they mention is MinHashing. I went through what it does, but I'm pretty sure what I have is a fuzzy idea, and there is a strong chance that I'm wrong. This is what I could make of it:

    1. Collect the set of all news items.
    2. Define a hash function for a user. This hash function returns the index, in the list of all news items, of the first item this user viewed.
    3. Collect, say, "n" such values and represent a user with this list of values.
    4. Based on the similarity between these lists, calculate the similarity between users as the number of common items. This greatly reduces the number of comparisons.
    5. Based on these similarity measures, group users into different clusters.

    This is just what I think it might be. In step 2, instead of defining a constant hash function, it might be that we vary the hash function so that each one returns the index of a different element: one hash function could return the index of the first element in the user's list, another the index of the second element, and so on. Given that the hash functions must satisfy the min-wise independent permutations condition, this does sound like a possible approach. Could anyone please confirm whether what I think is correct, or does the MinHashing portion of Google News recommendations work some other way? I'm new to the internals of recommender systems. Any help is much appreciated. Thanks!
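
    For comparison, the textbook formulation of MinHash is slightly different from step 2 above: each of the n hash functions is a (pseudo-)random permutation of all items, and a user's signature slot is the minimum hash value over the set of items that user clicked, not the index of their first-viewed item. The probability that two users agree in one slot then equals the Jaccard similarity of their click sets. A hedged sketch (C++; the hash family here is only illustrative):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // Build an n-slot MinHash signature for a user's set of item ids.
        std::vector<uint64_t> signature(const std::vector<uint64_t>& items, int n) {
            std::vector<uint64_t> sig(n, UINT64_MAX);
            for (int h = 0; h < n; ++h) {
                uint64_t a = 0x9E3779B97F4A7C15ull * (2 * h + 1);  // odd multiplier per function
                for (uint64_t x : items)
                    sig[h] = std::min(sig[h], a * (x ^ (x >> 31)));
            }
            return sig;
        }

        // Estimated Jaccard similarity: the fraction of matching slots.
        double similarity(const std::vector<uint64_t>& s1,
                          const std::vector<uint64_t>& s2) {
            int match = 0;
            for (size_t i = 0; i < s1.size(); ++i) match += (s1[i] == s2[i]);
            return double(match) / double(s1.size());
        }

    Clustering then amounts to bucketing users whose signatures collide (concatenating a few slots per bucket key), which is what makes the comparison count manageable at Google News scale.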

    Read the article

  • Algorithm: how to check intersections of recurring events definitions?

    - by glaz666
    The question comes from MS Outlook calendar behaviour. Imagine I have two recurring events (both starting from today): "every second Monday" and "every odd date". Is there any way to check for intersections and/or find the first intersecting date algorithmically, without brute-forcing over each date? Definitions can be given in cron notation or iCal notation; I don't think it matters. Are there any known solutions for this in the Gregorian calendar?
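
    A useful observation for the purely day-periodic case: two recurrences with periods p1 and p2 days jointly repeat every lcm(p1, p2) days, so any intersection must appear within one lcm-sized window of the later start date. That bounds the search, and the window can even be skipped entirely via the Chinese remainder theorem (d ≡ s1 mod p1 and d ≡ s2 mod p2 has a solution iff gcd(p1, p2) divides s2 - s1). A sketch of the bounded scan (C++17 for std::lcm; days counted as integer offsets from some epoch):

        #include <cstdio>
        #include <numeric>

        // First day that lies on both recurrences, or -1 if they never meet.
        long long firstIntersection(long long s1, long long p1,
                                    long long s2, long long p2) {
            long long start = s1 > s2 ? s1 : s2;
            long long cycle = std::lcm(p1, p2);
            for (long long d = start; d < start + cycle; ++d)
                if ((d - s1) % p1 == 0 && (d - s2) % p2 == 0)
                    return d;
            return -1;
        }

        int main() {
            // "every 14 days from day 1" vs "every 3 days from day 0"
            std::printf("%lld\n", firstIntersection(1, 14, 0, 3));  // 15
            // "every 14 days from day 3" vs "every 2 days from day 0": never (parity)
            std::printf("%lld\n", firstIntersection(3, 14, 0, 2));  // -1
        }

    Calendar-dependent rules such as "every second Monday of the month" or "every odd date" are not a fixed number of days apart, but they still repeat (over weeks, months, or at worst the 400-year Gregorian cycle), so the same bound-the-window idea applies with a longer cycle.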

    Read the article

  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello, Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000, etc., not depending on N). How can one tell in O(n) whether the array represents a permutation? What I have achieved so far: I use the limited memory area to store the sum and the product of the array; I compare the sum with N*(N+1)/2 and the product with N!. I know that if condition (2) is true I might have a permutation. I'm wondering whether there's a way to prove that condition (2) is sufficient to tell that I have a permutation. So far I haven't figured this out... Thanks, Iulian
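
    As a data point: the sum-plus-product condition is not sufficient on its own. A counterexample for N = 9 is the multiset {1, 2, 4, 4, 4, 5, 7, 9, 9}: it sums to 45 = 9*10/2 and its product is 362880 = 9!, yet it is not a permutation. A quick check:

        #include <cstdio>

        int main() {
            int a[] = {1, 2, 4, 4, 4, 5, 7, 9, 9};  // not a permutation of 1..9
            long long sum = 0, prod = 1;
            for (int x : a) { sum += x; prod *= x; }
            std::printf("sum = %lld, product = %lld\n", sum, prod);  // 45 and 362880 = 9!
        }

    So any O(n) answer has to come from a different invariant than sum and product alone.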

    Read the article

  • How to optimize neural network by using genetic algorithm?

    - by Billy Coen
    I'm quite new to this topic, so any help would be great. What I need is to optimize a neural network in MATLAB using a GA. My network has a [2x98] input and a [1x98] target. I've tried consulting the MATLAB help, but I'm still kind of clueless about what to do :( so any help would be appreciated. Thanks in advance. EDIT: I guess I didn't say what is to be optimized, as Dan pointed out in the first answer. I guess the most important thing is the number of hidden neurons, and maybe the number of hidden layers and training parameters like the number of epochs. Sorry for not providing enough info; I'm still learning about this.
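
    The usual framing, sketched below under stated assumptions: each chromosome encodes the hyper-parameters to tune (hidden neurons, layers, epochs), and fitness is the validation error of a network trained with those settings. The train-and-score function here is a toy surrogate standing in for the actual MATLAB toolbox call:

        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        struct Genome { int hiddenNeurons, hiddenLayers, epochs; double fitness; };

        // Stub fitness: in reality, build and train the network with these
        // settings and return the validation error.
        double trainAndScore(const Genome& g) {
            return std::abs(g.hiddenNeurons - 10) + 0.1 * g.hiddenLayers;
        }

        Genome mutate(Genome g) {
            g.hiddenNeurons = std::max(1, g.hiddenNeurons + std::rand() % 5 - 2);
            g.hiddenLayers  = std::max(1, g.hiddenLayers  + std::rand() % 3 - 1);
            g.epochs        = std::max(10, g.epochs      + std::rand() % 21 - 10);
            return g;
        }

        void evolve(std::vector<Genome>& pop, int generations) {
            for (int gen = 0; gen < generations; ++gen) {
                for (Genome& g : pop) g.fitness = trainAndScore(g);
                std::sort(pop.begin(), pop.end(),
                          [](const Genome& a, const Genome& b) { return a.fitness < b.fitness; });
                for (size_t i = pop.size() / 2; i < pop.size(); ++i)
                    pop[i] = mutate(pop[i - pop.size() / 2]);  // worst half replaced by mutated best
            }
        }

    In MATLAB, the same loop is what ga() from the Global Optimization Toolbox automates: you supply a fitness function that trains the network for a given parameter vector and returns its validation error.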

    Read the article

  • Can SHA-1 algorithm be computed on a stream? With low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know whether it is even possible to do that. If you know the SAX XML parser, then what I'm looking for is something similar: computing the SHA-1 checksum by loading only a small part into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (in any language), please let me know!
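
    It is possible: SHA-1 processes its input in 64-byte blocks and keeps only a small fixed-size internal state, so every mainstream API exposes an init/update/final interface that accepts input in chunks. In Java that is MessageDigest.update(buffer, 0, bytesRead) in a read loop; the same pattern with OpenSSL's C API (assumed available here) looks like this:

        #include <openssl/sha.h>  // link with -lcrypto
        #include <cstdio>

        // Hash a file of any size while holding only one 8 KiB buffer.
        int main() {
            std::FILE* f = std::fopen("huge.bin", "rb");
            if (!f) return 1;
            SHA_CTX ctx;
            SHA1_Init(&ctx);
            unsigned char buf[8192];
            size_t n;
            while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
                SHA1_Update(&ctx, buf, n);   // fold in one chunk at a time
            std::fclose(f);
            unsigned char digest[SHA_DIGEST_LENGTH];
            SHA1_Final(digest, &ctx);
            for (int i = 0; i < SHA_DIGEST_LENGTH; ++i) std::printf("%02x", digest[i]);
            std::printf("\n");
        }

    The Java examples that load everything at once are just using the one-shot convenience call; switching to the streaming update() calls gives an identical digest.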

    Read the article

  • How to find minimum weight with maximum cost in 0-1 Knapsack algorithm?

    - by Nitin9791
    I am trying to solve the SPOJ problem Party Schedule. The problem statement is:

    You just received another bill which you cannot pay because you lack the money. Unfortunately, this is not the first time this has happened, and now you decide to investigate the cause of your constant monetary shortness. The reason is quite obvious: the lion's share of your money routinely disappears at the entrance of party localities. You make up your mind to solve the problem where it arises, namely at the parties themselves. You introduce a limit for your party budget and try to have the most possible fun with regard to this limit. You inquire beforehand about the entrance fee to each party and estimate how much fun you might have there. The list is readily compiled, but how do you actually pick the parties that give you the most fun and do not exceed your budget? Write a program which finds this optimal set of parties that offer the most fun. Keep in mind that your budget need not necessarily be reached exactly. Achieve the highest possible fun level, and do not spend more money than is absolutely necessary.

    Input: The first line of the input specifies your party budget and the number n of parties. The following n lines contain two numbers each. The first number indicates the entrance fee of each party; parties cost between 5 and 25 francs. The second number indicates the amount of fun of each party, given as an integer ranging from 0 to 10. The budget will not exceed 500 and there will be at most 100 parties. All numbers are separated by a single space. There are many test cases. Input ends with 0 0.

    Output: For each test case your program must output the sum of the entrance fees and the sum of all fun values of an optimal solution. Both numbers must be separated by a single space.

    Sample input:
    50 10
    12 3
    15 8
    16 9
    16 6
    10 2
    21 9
    18 4
    12 4
    17 8
    18 9
    50 10
    13 8
    19 10
    16 8
    12 9
    10 2
    12 8
    13 5
    15 5
    11 7
    16 2
    0 0

    Sample output:
    49 26
    48 32

    Now, I know that this is an advanced version of the 0/1 knapsack problem, where along with the maximum fun we also have to find the minimum cost that achieves that maximum fun within the given budget. I have used DP to solve it, but I still get a wrong answer on submission, even though it works on the given test cases. My code is:

    #include <iostream>
    #include <vector>
    using namespace std;

    typedef vector<int> vi;
    #define pb push_back
    #define FOR(i,n) for(int i=0;i<n;i++)

    int main()
    {
        //freopen("input.txt","r",stdin);
        while(1)
        {
            int W,n;
            cin>>W>>n;
            if(W==0 && n==0) break;
            int K[n+1][W+1];
            vi val,wt;
            FOR(i,n)
            {
                int x,y;
                cin>>x>>y;
                wt.pb(x);
                val.pb(y);
            }
            FOR(i,n+1)
            {
                FOR(w,W+1)
                {
                    if(i==0 || w==0)
                        K[i][w]=0;
                    else if(wt[i-1] <= w)
                    {
                        if(val[i-1] + K[i-1][w-wt[i-1]] >= K[i-1][w])
                            K[i][w]=val[i-1] + K[i-1][w-wt[i-1]];
                        else
                            K[i][w]=K[i-1][w];
                    }
                    else
                        K[i][w]=K[i-1][w];
                }
            }
            // a2 must start at W: if the maximum fun first appears only in
            // column W, the loop below never assigns it (it was previously
            // left uninitialized, which is undefined behaviour).
            int a1=K[n][W], a2=W;
            for(int j=0;j<W;j++)
            {
                if(K[n][j]==a1) { a2=j; break; }
            }
            cout<<a2<<" "<<a1<<"\n";
        }
        return 0;
    }

    Could anyone suggest what I am missing?

    Read the article

  • How could I apply a genetic algorithm to a simple game that follows rollercoaster tracks?

    - by Chris
    I have free rein over what I do for a final assignment at school, with respect to modifying a simple DirectX game that currently just has the camera follow some rollercoaster rails. I've developed an interest in genetic algorithms and would like to take this opportunity to apply one and learn something about them. However, I can't think of any way I could apply one in this case. What are some options available to me?

    Read the article

  • What is the best data structure and algorithm for comparing a list of strings?

    - by Chiraag E Sehar
    I want to find the longest possible sequence of words that matches the following rules:

    1. Each word can be used at most once.
    2. All words are strings.
    3. Two strings sa and sb can be concatenated if the LAST two characters of sa match the FIRST two characters of sb. The concatenation is performed by overlapping those characters. For example: sa = "torino", sb = "novara", sa concat sb = "torinovara".

    For example, I have the following input file, "input.txt":

    novara
    torino
    vercelli
    ravenna
    napoli
    liverno
    messania
    noviligure
    roma

    The output for the above file, according to the above rules, should be: torino novara ravenna napoli livorno noviligure, since the longest possible concatenation is: torinovaravennapolivornovilligure. Can anyone please help me out with this? What would be the best data structure for it?
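
    Structurally this is a longest-path search over an overlap graph (edge a -> b when the last two characters of a equal the first two of b), which is NP-hard in general, so for small word lists plain backtracking works. A sketch (C++; note it uses the spelling "livorno" that the expected output implies, and a map from two-character prefixes to words would be the natural data structure to speed up neighbour lookup):

        #include <iostream>
        #include <string>
        #include <vector>

        static void dfs(const std::vector<std::string>& w, std::vector<bool>& used,
                        std::vector<int>& cur, std::vector<int>& best) {
            if (cur.size() > best.size()) best = cur;
            const std::string& last = w[cur.back()];
            std::string tail = last.substr(last.size() - 2);
            for (size_t i = 0; i < w.size(); ++i)
                if (!used[i] && w[i].compare(0, 2, tail) == 0) {
                    used[i] = true; cur.push_back(int(i));
                    dfs(w, used, cur, best);
                    cur.pop_back(); used[i] = false;
                }
        }

        int main() {
            std::vector<std::string> words = {"novara", "torino", "vercelli",
                "ravenna", "napoli", "livorno", "messania", "noviligure", "roma"};
            std::vector<int> best;
            for (size_t s = 0; s < words.size(); ++s) {
                std::vector<bool> used(words.size(), false);
                std::vector<int> cur{int(s)};
                used[s] = true;
                dfs(words, used, cur, best);
            }
            for (int i : best) std::cout << words[i] << " ";  // torino novara ravenna napoli livorno noviligure
            std::cout << "\n";
        }

    For larger inputs, the standard refinements are memoising on (current word, visited bitmask) while the word count is small, or heuristics, since longest path has no known polynomial algorithm.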

    Read the article

  • Is there any algorithm for determining 3d position in such case? (images below)

    - by Ole Jak
    So, first of all, I have an image like this (and of course I have all the point coordinates in 2D, so I can regenerate the lines and check where they cross each other). But then I have another image of the same lines (I know they are the same) with new coordinates for my points, as in the second image. So, having the point coordinates from the first image, how can I determine the plane rotation and Z depth in the second image (assuming the first one's centre was at point (0,0,0) with no rotation)?
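
    If the points of the first view are taken to lie on a reference plane at Z = 0, recovering the second view's rotation and depth is the classic perspective-n-point (PnP) problem. With a library such as OpenCV (an assumption; the coordinates and intrinsics below are placeholders, not from the question), the call looks like this:

        #include <opencv2/calib3d.hpp>
        #include <vector>

        int main() {
            // Points from the first image, placed on the Z = 0 reference plane.
            std::vector<cv::Point3f> object = {{0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}};
            // The same points as seen in the second image.
            std::vector<cv::Point2f> image  = {{10,10}, {90,20}, {80,85}, {15,95}};
            // Assumed pinhole camera intrinsics (focal length, principal point).
            cv::Mat K = (cv::Mat_<double>(3,3) << 100,0,50, 0,100,50, 0,0,1);
            cv::Mat rvec, tvec;
            cv::solvePnP(object, image, K, cv::Mat(), rvec, tvec);
            // rvec now holds the plane rotation, tvec the translation (its z is the depth).
            return 0;
        }

    At least four non-collinear point correspondences are needed, and the result is only as good as the camera intrinsics; without a calibrated camera, estimating a homography between the two point sets and decomposing it is the usual alternative.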

    Read the article

  • Algorithm for assigning a unique series of bits for each user?

    - by Mark
    The problem seems simple at first: just assign an ID and represent it in binary. The issue arises because the user is capable of changing any number of 0 bits to 1 bits. To clarify, the hash could go from 0011 to 0111 or 1111, but never to 1010. Each bit has an equal chance of being changed, independently of other changes. What would you have to store in order to get from hash back to user, assuming a low percentage of bit tampering by the user? I also assume failure in some cases, so the correct solution should have an acceptable error rate. I would estimate the maximum number of tampered bits at about 30% of the total set. I guess the acceptable error rate depends on the number of hashes needed and the number of bits set per hash. I'm worried that with enough manipulation the ID cannot be reconstructed from the hash. The question I am asking, I guess, is: what safeguards or encoding schemes can I use to make sure it can be?
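
    One angle worth noting: because tampering only ever turns 0s into 1s (a one-way, "Z-channel" style corruption), an original 1-bit is never lost. If user codewords are chosen so that no codeword is a subset of another (for example, constant-weight codes with small pairwise overlap), decoding reduces to finding the stored codeword whose 1-bits all survive in the received value, preferring the fewest unexplained extra bits. A hedged sketch of that decoder:

        #include <bitset>
        #include <cstdint>
        #include <vector>

        // Returns the index of the most plausible sender, or -1.
        int decode(uint64_t received, const std::vector<uint64_t>& codewords) {
            int best = -1, bestExtra = 65;
            for (size_t i = 0; i < codewords.size(); ++i) {
                if (codewords[i] & ~received) continue;  // a 1-bit vanished: impossible sender
                int extra = int(std::bitset<64>(received & ~codewords[i]).count());
                if (extra < bestExtra) { bestExtra = extra; best = int(i); }
            }
            return best;
        }

    With about 30% of the 0-bits flipping, the achievable error rate comes down to how far apart the codewords are; the literature on superimposed codes and asymmetric-error-correcting codes is the place to look for concrete bounds.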

    Read the article

  • How do you avoid an invalid search space in a genetic algorithm?

    - by Dave
    I am developing a GA for a school project and I've noticed that, when evaluating fitness, an individual is equivalent to its inverse. For example, the set (1, 1, -1, 1) is equivalent to (-1, -1, 1, -1). To shrink my search space and reach a solution more efficiently, how can I keep my crossovers from searching this second half of the search space?
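
    One common remedy, sketched under the assumption that genes are +/-1 values: map every individual to a canonical representative before evaluating or storing it, for example by negating the whole vector whenever its first gene is negative. Both (1, 1, -1, 1) and (-1, -1, 1, -1) then collapse to the same point, halving the effective space without touching the crossover operator itself:

        #include <vector>

        // Canonical form: first gene forced positive by global negation.
        std::vector<int> canonical(std::vector<int> v) {
            if (!v.empty() && v[0] < 0)
                for (int& g : v) g = -g;
            return v;
        }

    Applying canonical() right after crossover and mutation keeps the population free of mirrored duplicates.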

    Read the article

  • Is there a name for this type of algorithm?

    - by rehanift
    I have a 2-dimensional array forming a table:

    [color][number][shape   ]
    -------------------------
    [black][10    ][square  ]
    [black][10    ][circle  ]
    [red  ][05    ][triangle]
    [red  ][04    ][triangle]
    [green][11    ][oval    ]

    What I want to do is group by the largest common denominators, such that we get three groups:

    group #1: color=black, number=10, shapes = [square, circle]
    group #2: color=red, shape=triangle, numbers = [05, 04]
    group #3: color=green, number=11, shape = oval

    I wrote code that handles the 2-"column" scenario, then I needed to adjust it for 3, and I figured I might as well do it for n. I wanted to check first whether there is some literature around this, but I can't think of what to start looking for!
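
    One reading of the operation, sketched here as an assumption about the general n-column case: for each choice of a "varying" column, bucket the rows that agree on every other column; buckets with two or more rows become groups, and leftovers stay as singletons. On the table above this reproduces exactly the groups listed:

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::vector<std::string>> rows = {
                {"black", "10", "square"}, {"black", "10", "circle"},
                {"red", "05", "triangle"}, {"red", "04", "triangle"},
                {"green", "11", "oval"}};
            size_t cols = rows[0].size();
            for (size_t vary = 0; vary < cols; ++vary) {
                std::map<std::string, std::vector<std::string>> buckets;
                for (const auto& r : rows) {
                    std::string key;
                    for (size_t c = 0; c < cols; ++c)
                        if (c != vary) key += r[c] + "|";  // the fixed columns form the key
                    buckets[key].push_back(r[vary]);
                }
                for (const auto& entry : buckets)
                    if (entry.second.size() > 1) {
                        std::cout << "fixed=" << entry.first << " varying:";
                        for (const auto& v : entry.second) std::cout << " " << v;
                        std::cout << "\n";
                    }
            }
        }

    As for literature: grouping rows by shared attribute combinations is essentially frequent-itemset / association-rule mining (and, for the hierarchy flavour, attribute-oriented induction), which should give search terms to start from.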

    Read the article

  • Is this a bad version of the Merge Sort algorithm?

    - by SebKom
    merge1(int low, int high, int S[], U[]) {
        int k = (high - low + 1)/2
        for q from low to high
            U[q] = S[q]
        int j = low
        int p = low
        int i = low + k
        while (j <= low + k - 1) and (i <= high) do {
            if ( U[j] <= U[i] ) {
                S[p] := U[j]
                j := j+1
            } else {
                S[p] := U[i]
                i := i+1
            }
            p := p+1
        }
        if (j <= low + k - 1) {
            for q from p to high do {
                S[q] := U[j]
                j := j+1
            }
        }
    }

    merge_sort1(int low, int high, int S[], U[]) {
        if low < high {
            int k := (high - low + 1)/2
            merge_sort1(low, low+k-1, S, U)
            merge_sort1(low+k, high, S, U)
            merge1(low, high, S, U)
        }
    }

    I am really sorry for the terrible formatting; as you can tell, I am not a regular visitor here. So, basically, this is from my lecture notes. I find it quite confusing in general, but I understand the biggest part of it. What I don't understand is the need for the "if (j <= low + k - 1)" part. It looks like it checks whether there are any elements "left over" in the left part. Is that even possible when merge-sorting?

    Read the article

  • Algorithm design: can you provide a solution to the multiple knapsack problem?

    - by MalcomTucker
    I am looking for a pseudo-code solution to what is effectively the Multiple Knapsack Problem (the optimisation statement is halfway down the page). I think this problem is NP-complete, so the solution doesn't need to be optimal; rather, if it is fairly efficient and easily implemented, that would be good. The problem is this: I have many work items, each taking a different (but fixed and known) amount of time to complete. I need to divide these work items into groups so as to have the smallest number of groups (ideally), with each group of work items taking no longer than a given total threshold, say 1 hour. I am flexible about the threshold; it doesn't need to be rigidly applied, though it should be close. My idea was to allocate work items into bins where each bin represents 90% of the threshold, 80%, 70% and so on. I could then match items that take 90% with those that take 10%, and so on. Any better ideas?
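
    Minimising the number of capacity-limited groups is, strictly speaking, the bin packing problem rather than multiple knapsack, and its standard cheap heuristic generalises the 90/10 matching idea: first-fit decreasing (FFD) sorts items by descending duration and drops each into the first group with room. It is guaranteed to use at most 11/9 of the optimal number of groups plus a small constant, which comfortably fits a soft 1-hour threshold. A sketch:

        #include <algorithm>
        #include <cstdio>
        #include <functional>
        #include <vector>

        // First-fit decreasing bin packing: returns the groups of item durations.
        std::vector<std::vector<int>> pack(std::vector<int> items, int capacity) {
            std::sort(items.begin(), items.end(), std::greater<int>());
            std::vector<std::vector<int>> groups;
            std::vector<int> load;
            for (int it : items) {
                size_t g = 0;
                while (g < groups.size() && load[g] + it > capacity) ++g;
                if (g == groups.size()) { groups.push_back({}); load.push_back(0); }
                groups[g].push_back(it);
                load[g] += it;
            }
            return groups;
        }

        int main() {
            auto groups = pack({50, 45, 30, 25, 10}, 60);   // minutes, 1-hour groups
            std::printf("%zu groups\n", groups.size());     // 3 groups: {50,10} {45} {30,25}
        }

    Since the threshold is soft, a post-pass that merges the last, lightest group into slightly-over-capacity neighbours is an easy refinement.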

    Read the article
