Search Results

Search found 5119 results on 205 pages for 'genetic algorithm'.

Page 169 of 205

  • F# replace ref variable with something fun

    - by Stephen Swensen
    I have the following F# function, which makes use of a ref variable to seed and keep track of a running total. Something tells me this isn't in the spirit of FP, or even particularly clear on its own. I'd like some direction on the clearest way to express this in F# (preferably functional, but if an imperative approach is clearer I'd be open to that). Note that selectItem implements a random weighted selection algorithm.

        type WeightedItem(id: int, weight: int) =
            member self.id = id
            member self.weight = weight

        let selectItem (items: WeightedItem list) (rand: System.Random) =
            let totalWeight = List.sumBy (fun (item: WeightedItem) -> item.weight) items
            let selection = rand.Next(totalWeight) + 1
            let runningWeight = ref 0
            List.find (fun (item: WeightedItem) ->
                runningWeight := !runningWeight + item.weight
                !runningWeight >= selection) items

        let items = [new WeightedItem(1,100); new WeightedItem(2,50); new WeightedItem(3,25)]
        let selection = selectItem items (new System.Random())
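
    For reference, the running total in the selection step can also be expressed without a mutable cell by precomputing cumulative weights. A minimal sketch of the same weighted-selection idea in Python (an illustration of the algorithm only, not the F# answer itself):

        import bisect
        import random
        from itertools import accumulate

        def select_item(items, rng=random):
            # items is a list of (id, weight) pairs; build the cumulative weights once
            cumulative = list(accumulate(w for _, w in items))   # e.g. [100, 150, 175]
            pick = rng.randint(1, cumulative[-1])                # 1..totalWeight, inclusive
            return items[bisect.bisect_left(cumulative, pick)]   # first cumulative >= pick

        items = [(1, 100), (2, 50), (3, 25)]
        print(select_item(items))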

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common file format from the hard drive, change the color of one pixel, and display the resulting image to the screen, in C++. Which (open-source) libraries would you recommend to accomplish the above with the least amount of code? Alternatively, which libraries would do the above in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms which I'd like to implement. However, while the algorithm itself would usually be straightforward to implement, the amount of framework code needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.
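
    One open-source library that covers exactly this read/modify/display loop is OpenCV. A minimal sketch of the sequence, shown here in Python for brevity (the C++ API uses the same calls as cv::imread, cv::imshow and so on; the file names are placeholders):

        import cv2

        img = cv2.imread("input.png")       # placeholder file name; loads the image as a BGR array
        img[10, 20] = (0, 0, 255)           # set the pixel at row 10, column 20 to red (B, G, R)
        cv2.imshow("result", img)           # display the modified image in a window
        cv2.waitKey(0)                      # wait for a key press before the window closes
        cv2.imwrite("output.png", img)      # optionally write the result back to disk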

    Read the article

  • how to diff / align Python lists using an arbitrary matching function?

    - by James Tauber
    I'd like to align two lists in a similar way to what difflib.Differ would do, except that I want to be able to define a match function for comparing items, not just use string equality, and preferably a match function that can return a number between 0.0 and 1.0, not just a boolean. So, for example, say I had the two lists:

        L1 = [('A', 1), ('B', 3), ('C', 7)]
        L2 = ['A', 'b', 'C']

    and I want to be able to write a match function like this:

        def match(item1, item2):
            if item1[0] == item2:
                return 1.0
            elif item1[0].lower() == item2.lower():
                return 0.5
            else:
                return 0

    and then do:

        d = Differ(match_func=match)
        d.compare(L1, L2)

    and have it diff using the match function. Like difflib, I'd rather the algorithm gave more intuitive Ratcliff-Obershelp-type results than a purely minimal Levenshtein distance.
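
    difflib itself does not accept a scoring function, but the alignment described here can be done directly with a small dynamic program. A minimal Needleman-Wunsch-style sketch in Python (the gap penalty is an arbitrary assumption, and match is the function from the question):

        def match(item1, item2):     # the match function from the question
            if item1[0] == item2:
                return 1.0
            if item1[0].lower() == item2.lower():
                return 0.5
            return 0.0

        def align(a, b, match, gap=-0.5):
            # dp[i][j] = best score aligning a[:i] with b[:j]
            n, m = len(a), len(b)
            dp = [[0.0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                dp[i][0] = i * gap
            for j in range(1, m + 1):
                dp[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    dp[i][j] = max(dp[i-1][j-1] + match(a[i-1], b[j-1]),
                                   dp[i-1][j] + gap,    # a[i-1] aligned to a gap
                                   dp[i][j-1] + gap)    # b[j-1] aligned to a gap
            # walk back to recover the aligned pairs (None marks a gap)
            pairs, i, j = [], n, m
            while i > 0 and j > 0:
                if dp[i][j] == dp[i-1][j-1] + match(a[i-1], b[j-1]):
                    pairs.append((a[i-1], b[j-1])); i -= 1; j -= 1
                elif dp[i][j] == dp[i-1][j] + gap:
                    pairs.append((a[i-1], None)); i -= 1
                else:
                    pairs.append((None, b[j-1])); j -= 1
            while i > 0:
                pairs.append((a[i-1], None)); i -= 1
            while j > 0:
                pairs.append((None, b[j-1])); j -= 1
            return list(reversed(pairs))

        L1 = [('A', 1), ('B', 3), ('C', 7)]
        L2 = ['A', 'b', 'C']
        print(align(L1, L2, match))   # [(('A', 1), 'A'), (('B', 3), 'b'), (('C', 7), 'C')]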

    Read the article

  • Enumerating all hamiltonian paths from start to end vertex in grid graph

    - by Eric
    Hello, I'm trying to count the number of Hamiltonian paths from a specified start vertex that end at another specified vertex in a grid graph. Right now I have a solution that uses backtracking recursion but is incredibly slow in practice (roughly O(n!); about 3 hours for a 7x7 grid). I've tried a couple of speedup techniques, such as maintaining a list of reachable nodes, making sure the end node is still reachable, and checking for isolated nodes, but all of these slowed my solution down. I know that the problem is NP-complete, but it seems like some reasonable speedups should be achievable given the grid structure. Since I'm trying to count all the paths, the search must be exhaustive, but I'm having trouble figuring out how to prune out paths that aren't promising. Does anyone have suggestions for speeding the search up? Or an alternate search algorithm?
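
    As a reference point, the bare backtracking count looks like this (a Python sketch with no pruning beyond the visited set; start and end are (row, col) pairs):

        def count_hamiltonian_paths(rows, cols, start, end):
            total_cells = rows * cols
            visited = set()

            def dfs(cell, depth):
                if cell == end:
                    # a valid path must have visited every cell when it reaches the end
                    return 1 if depth == total_cells else 0
                count = 0
                r, c = cell
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                        visited.add((nr, nc))
                        count += dfs((nr, nc), depth + 1)
                        visited.discard((nr, nc))
                return count

            visited.add(start)
            return dfs(start, 1)

        print(count_hamiltonian_paths(3, 3, (0, 0), (2, 2)))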

    Read the article

  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. create a temp table for each large table and populate it with the newer data
        2. truncate the original tables
        3. copy the tmp data back to the original tables using: insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.

    Read the article

  • Architecture of a secure application that encrypts data in the database.

    - by Przemyslaw Rózycki
    I need to design an application that protects some data in a database against a root attack. This means that even if an attacker takes control of the machine where the data is stored, or of the machine running the application server, he can't read some business-critical data from the database. This is a customer's requirement. I'm going to encrypt the data with some asymmetric algorithm, and I need some good ideas on where to store the private keys so that the data stays secure while the application remains reasonably comfortable to use. We can assume, for simplicity, that only one key pair is used.
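
    As an illustration of the asymmetric split (the question names no language or library, so the Python cryptography package used below is an assumption): the application server only ever needs the public key to write encrypted values, while the private key can live somewhere the server cannot read.

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # the app server holds only the public key, so a compromised server can still write...
        ciphertext = public_key.encrypt(b"business critical value", oaep)

        # ...but only the holder of the private key (kept elsewhere) can read the value back;
        # RSA/OAEP only fits short payloads, so real systems usually wrap a symmetric key instead
        plaintext = private_key.decrypt(ciphertext, oaep)
        print(plaintext)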

    Read the article

  • Selecting item from set given distribution

    - by JH
    I have a set of X items, such as {blower, mower, stove}, and each item has a certain percentage of times it should be selected from the overall set {blower=25%, mower=25%, stove=75%}, along with a certain distribution that these items should follow (blower should be selected more at the beginning of selection and stove more at the end). We are given a number of objects to be selected overall (i.e. 100) and an overall time to do this in (say 100 seconds). I was thinking of using a roulette wheel algorithm where the weights on the wheel are affected by the current distribution as a function of the elapsed time (and the allowed duration), so that simple functions could be used to determine the weight. Are there any common approaches to problems like this that anyone is aware of? Currently I have programmed something similar to this in Java, using functions such as x^2 (with correct normalization for the weights) to ensure that a good distribution occurs. Other suggestions or common practices would be welcome :-)
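
    A minimal sketch of the roulette-wheel step with time-dependent weights (Python; the particular weight function below is an arbitrary placeholder, not the normalized x^2 scheme mentioned in the question):

        import random

        def spin(items, weight_fn, elapsed, duration):
            # items: list of names; weight_fn(name, t) gives a weight for progress t in [0, 1]
            t = elapsed / duration
            weights = [weight_fn(name, t) for name in items]
            pick = random.uniform(0, sum(weights))
            running = 0.0
            for name, w in zip(items, weights):
                running += w
                if pick <= running:
                    return name
            return items[-1]   # guard against floating-point rounding

        # example: "blower" favored early, "stove" favored late (placeholder weights)
        def weight_fn(name, t):
            base = {"blower": 0.25, "mower": 0.25, "stove": 0.5}[name]
            if name == "blower":
                return base * (1.0 - t)
            if name == "stove":
                return base * t
            return base

        print(spin(["blower", "mower", "stove"], weight_fn, elapsed=10, duration=100))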

    Read the article

  • PHP: Loop through text file and isolate lines with a specific "starting point"

    - by Mestika
    Hi everyone, I'm trying to reduce some editing time within some text files where there are approximately 10,000 lines of text, but I only need around 200 or so. The text file mostly follows a specific pattern, though it deviates from time to time. My "focus" for selecting the right lines to keep is that each line always starts with z3455 and then has something variable afterwards, e.g.: z3455 http://url.com/data1/data1.1/data1.3/ (342kb). I have an algorithm to capture the URL and its content, but now I need some way to loop through the text file, deleting all lines except those that start with z3455, and then "push" them together so they are listed underneath each other. I've tried different approaches for this in PHP but can't seem to find the right function. I can "isolate" a specific line number, but when the file deviates I can't use this approach fully. I hope that someone can help me, either by providing the code or by pointing me in the right direction to solve this problem. Thanks in advance Sincerely - Mestika
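
    The filtering step itself is just "keep the lines that start with the token". A sketch of the idea in Python (the question asks for PHP, where fopen/fgets and strpos would play the same roles; the file names are placeholders):

        with open("input.txt") as src, open("filtered.txt", "w") as dst:
            for line in src:
                if line.startswith("z3455"):
                    dst.write(line)     # kept lines end up listed directly underneath each other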

    Read the article

  • How to hash and check for equality of objects with circular references

    - by mfya
    I have a cyclic, graph-like structure that is represented by Node objects. A Node is either a scalar value (leaf) or a list of n >= 1 Nodes (inner node). Because of the possible circular references, I cannot simply use a recursive HashCode() function that combines the HashCode() of all child nodes: it would end up in infinite recursion. While the HashCode() part seems at least doable by flagging and ignoring already-visited nodes, I'm having some trouble thinking of a working and efficient algorithm for Equals(). To my surprise I did not find any useful information about this, but I'm sure many smart people have thought about good ways to solve these problems... right? Example (python):

        A = [ 1, 2, None ]; A[2] = A
        B = [ 1, 2, None ]; B[2] = B

    A is equal to B, because it represents exactly the same graph. BTW, this question is not targeted at any specific language, but implementing hashCode() and equals() for the described Node object in Java would be a good practical example.
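
    One standard way to keep the equality recursion from looping is to carry a set of node pairs currently being compared and treat a revisited pair as equal (a bisimulation-style check). A minimal Python sketch of that idea for list-shaped nodes, matching the example above:

        def graphs_equal(a, b, in_progress=None):
            # in_progress holds pairs of object ids currently being compared;
            # meeting the same pair again means we've gone around a cycle,
            # so nothing new can be learned from it and we treat it as a match
            if in_progress is None:
                in_progress = set()
            if isinstance(a, list) != isinstance(b, list):
                return False
            if not isinstance(a, list):
                return a == b                       # leaves: plain value comparison
            pair = (id(a), id(b))
            if pair in in_progress:
                return True
            if len(a) != len(b):
                return False
            in_progress.add(pair)
            try:
                return all(graphs_equal(x, y, in_progress) for x, y in zip(a, b))
            finally:
                in_progress.discard(pair)

        A = [1, 2, None]; A[2] = A
        B = [1, 2, None]; B[2] = B
        print(graphs_equal(A, B))   # True: both describe the same cyclic shape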

    Read the article

  • PHP 2-way encryption: I need to store passwords that can be retrieved

    - by gAMBOOKa
    I am creating an application that will store passwords, which the user can retrieve and see. The passwords are for a hardware device, so checking against hashes is out of the question. What I need to know is:

      - How do I encrypt and decrypt a password in PHP?
      - What is the safest algorithm to encrypt the passwords with?
      - Where do I store the private key?
      - Instead of storing the private key, is it a good idea to require users to enter the private key any time they need a password decrypted? (Users of this application can be trusted.)
      - In what ways can the password be stolen and decrypted? What do I need to be aware of?

    Read the article

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    Hi. I'm writing a ray tracer. Recently, I added threading to the program to exploit the additional cores on my i5 quad-core. In a weird turn of events, the debug version of the application is now running slower, but the optimized build is running faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.4 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained, i.e. a faster algorithm will always run faster in both debug and optimized builds. Any idea why I'm seeing this behavior?

    Read the article

  • How to find the entity with the greatest primary key?

    - by simpatico
    I have an entity LearningUnit that has an int primary key. Actually, it has nothing more. The entity Concept has the following relationship with it:

        @ManyToOne
        @Size(min=1,max=7)
        private LearningUnit learningUnit;

    In a constructor of Concept I need to retrieve the LearningUnit with the greatest primary key. If no LearningUnit exists yet, I instantiate one. I then set this.learningUnit to the retrieved/instantiated one. Finally, I call the empty constructor of Concept in a try-catch block, so that the entity manager does the cardinality check. If an exception is thrown (I expect one in the case where seven other Concepts already refer to the same LearningUnit), I can instantiate a new LearningUnit with a new, greater primary key. Please also point out any clear pitfalls in the algorithm outlined above.

    Read the article

  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping an eye on the file I noticed it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the database table is across my network, this is taking days to complete. Thus I'm concerned that, since the file isn't growing, it isn't actually writing to the file anymore. Which is odd: I have no "catch"es in my code, so if there was a problem writing to the file, wouldn't it have thrown an error? Should I let the task complete (estimated 2-3 days from today), or is there something else going on here that I don't know about, making my application not write to the file? My algorithm goes something like this:

        declare file
        create new file
        open file for writing
        get database connection
        get resultset from database
        for each row in the resultset
            - write column "A" to file
            - if row# % 100000 then write to screen "completed " + row# + " rows"
        when no more rows exist, close file
        write to screen "completed"

    Read the article

  • make a lazy var in scala

    - by ayvango
    Scala does not permit creating lazy vars, only lazy vals. That makes sense. But I've bumped into a use case where I'd like a similar capability. I need a lazy variable holder. It may be assigned a value that should be calculated by a time-consuming algorithm, but it may later be reassigned to another value, and in that case I'd like the first value's calculation not to run at all. Example, assuming there were some magic var definition:

        lazy var value : Int = _
        val calc1 : () => Int = ... // some calculation
        val calc2 : () => Int = ... // other calculation
        value = calc1
        value = calc2
        val result : Int = value + 1

    This piece of code should only call calc2(), not calc1(). I have an idea how I could write this container with implicit conversions and a special container class. I'm curious whether there is any built-in Scala feature that doesn't require me to write unnecessary code.
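
    The behaviour being asked for (store a thunk, run it only when the value is first read, and let a reassignment replace the thunk before it ever runs) can be sketched in a few lines. A Python illustration of the idea (not Scala; the class and function names are made up):

        class LazyVar:
            """Holds a zero-argument function; evaluates it only on first read."""
            def __init__(self):
                self._thunk = None
                self._value = None
                self._evaluated = False

            def set(self, thunk):
                # reassigning discards any pending (never-evaluated) computation
                self._thunk = thunk
                self._evaluated = False

            def get(self):
                if not self._evaluated:
                    self._value = self._thunk()
                    self._evaluated = True
                return self._value

        def expensive_calc():
            print("expensive computation running")   # we never want to see this
            return 1_000_000

        value = LazyVar()
        value.set(expensive_calc)   # never runs...
        value.set(lambda: 41)       # ...because it is replaced before anyone reads it
        print(value.get() + 1)      # 42; only the second thunk is ever evaluated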

    Read the article

  • Visual C++ 9 Linker file size limitation.

    - by Raindog
    It appears that the Visual C++ 9 linker has a file allocation algorithm that doubles the size of the file on every allocation, so you get 512 MB, 1024 MB, 2048 MB, 4096 MB. The problem is that it is using a library that cannot handle files larger than 2048 MB, and as such it crashes with an error such as "cannot read file at ... is the disk full or write protected". Is there a way to bypass this limitation, or otherwise replace the linker with something else that works? A bit of background: I have a code generator that generates a large number of files, ~15k cpp files. I've managed to reduce the number of files to about 6k to get something that at least completes the linking process, but I would like to be able to include all 15k without having to create multiple libs.

    Read the article

  • troubles with integration on matlab

    - by user648666
    I'd like some help please, I really need to solve this problem. Before anything else, thank you for your time... My problem: I have a matrix (826x826 double) and I want to integrate this matrix with respect to a vector (826x1 double). I don't have the underlying functions for any of this, only the sampled values. Is there a command or an algorithm to take the integral of a matrix with respect to a vector? Please, I really need help; I'm such a newbie at MATLAB. Sincerely, George
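
    Numerically, this amounts to trapezoidal integration of each column of the matrix against the vector of sample points (in MATLAB, trapz(x, M) integrates each column of M with respect to x). A NumPy sketch of the same computation, just to show the shape of the operation (the sample values are placeholders):

        import numpy as np

        x = np.linspace(0.0, 1.0, 826)     # the 826x1 vector of sample points
        M = np.random.rand(826, 826)       # the 826x826 matrix of sampled values

        # trapezoidal rule down each column of M with respect to x,
        # giving one integral per column, i.e. a length-826 vector
        column_integrals = np.trapz(M, x=x, axis=0)
        print(column_integrals.shape)      # (826,)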

    Read the article

  • Streaming data to the browser as a file of unknown size

    - by Sir Psycho
    I have some data which is queried from the database, and I'd like to send it to the client as a CSV file. The file size varies each time, due to the fact that the DB data returned can be of any size. Instead of saving this file to the hard disk, I'd like to send it to the browser at the same time it's being processed into a CSV by my algorithm. Response.Write seems useless. For some reason, the file download dialog is only displayed once my processing is finished. This seems odd, as I'm writing all my output to the Response.Output stream. I have downloaded files on the web before where the file size is not known and the browser just keeps on downloading. Is there any way to achieve this? The following Stack Overflow thread did not offer any good advice: http://stackoverflow.com/questions/873995/asp-net-downloading-large-files-of-unknown-size Thanks

    Read the article

  • What FPGAs (Field-Programmable Gate Arrays) can one buy to experiment with at home?

    - by Joe Blow
    What the heck is an FPGA, and where can I buy one? What sort of system do you need to experiment with them? How do you program them? Can you "load" (if that's the right term) an FPGA using an ordinary Mac, or perhaps another *nix or Windows computer? Where can I buy some FPGAs today to experiment with? Are they expensive and only available to industry, or can I buy one today? Does anyone know about this? Thanks! I became interested in FPGAs after reading this question... Holistic Word Recognition algorithm in detail

    Read the article

  • Realtime processing and callbacks with Python and C++

    - by Doughy
    I need to write code to do some realtime processing that is fairly computationally complex. I would like to create some Python classes to manage all my scripting, and leave the intensive parts of the algorithm coded in C++ so that they can run as fast as possible. I would like to instantiate the objects in Python and have the C++ algorithms chime back into the script through callbacks in Python. Something like:

        def myCallback(val):
            """Do something with the value passed back to the python script."""
            pass

        myObject = MyObject()
        myObject.setCallback(myCallback)
        myObject.run()

    Will this be possible? How can I run a callback in Python from a loop that is running in a C++ module? Does anyone have a link or a tutorial to help me do this correctly?

    Read the article

  • Measure CPU performance via JS

    - by Nicholas Kyriakides
    A webapp has as a central component a relatively heavy algorithm that handles geometric operations. There are two options for making the whole thing accessible from both high-end machines and relatively slower mobile devices: I will use RPCs if I detect that the user's machine is "slow"; otherwise, if I detect that the user's machine can handle it, I provide the webapp with the script to handle it client-side. Now, what would be a reliable way to detect the speed of the user's machine? I was thinking of providing a sample script as a test when the page loads and measuring the time it took to execute. Any ideas?

    Read the article

  • Scraping html WITHOUT unique identifiers using python

    - by Nicholas Law
    I would like to design an algorithm using Python that scrapes thousands of pages like this one and this one, gathers all the data and inserts it into a MySQL database. The script will be run on a weekly or bi-weekly basis to update the database with any new information added to each individual page. Ideally I would like a scraper that is easy to work with for table-structured data, but also for data that does not have unique identifiers (i.e. id and class attributes). Which scraper add-on should I use: BeautifulSoup, Scrapy or Mechanize? Are there any particular tutorials/books I should be looking at for this desired result? In the long run I will be implementing a mobile app that works with all this data by querying the database.
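
    When rows carry no id or class attributes, selecting by position within the table is the usual fallback. A minimal sketch with requests and BeautifulSoup (the URL and the assumption that the data sits in plain table rows are placeholders, since the linked pages are not shown here):

        import requests
        from bs4 import BeautifulSoup

        html = requests.get("http://example.com/page").text      # placeholder URL
        soup = BeautifulSoup(html, "html.parser")

        rows = []
        for table in soup.find_all("table"):          # no id/class needed to find the tables
            for tr in table.find_all("tr"):
                cells = [td.get_text(strip=True) for td in tr.find_all("td")]
                if cells:
                    rows.append(cells)                # each row as a list of column values

        print(rows[:5])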

    Read the article

  • Rounding a positive number to a power of another number

    - by Sagekilla
    I'm trying to round a number to the next smallest power of another number. The number I'm trying to round is always positive. I'm not particular about which direction it rounds, but I prefer downwards if possible. I would like to be able to round towards arbitrary bases, but the ones I'm most concerned with at the moment are base 2 and fractional powers of 2 like 2^(1/2), 2^(1/4), and so forth. Here's my current algorithm for base 2 (the log2 I multiply by is actually the inverse of log 2):

        double roundBaseTwo(double x)
        {
            return 1.0 / (1 << (int)(log(x) * log2));
        }

    Any help would be appreciated!
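
    For an arbitrary base b (including fractional powers of 2 such as 2^(1/2)), the same idea generalizes to flooring the exponent and raising the base back to it. A small Python sketch (floating-point rounding near exact powers may need an epsilon in practice):

        import math

        def round_down_to_power(x, base):
            # largest base**k (k an integer) that is <= x, for x > 0 and base > 1
            k = math.floor(math.log(x) / math.log(base))
            return base ** k

        print(round_down_to_power(100.0, 2.0))        # 64.0
        print(round_down_to_power(100.0, 2 ** 0.5))   # 2**6.5, roughly 90.51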

    Read the article

  • finding the total number of distinct shortest paths between 2 nodes in undirected weighted graph in linear time?

    - by logan
    I was wondering: if there is a weighted graph G(V,E) and I need to find a single shortest path between two vertices S and T in it, I could use Dijkstra's algorithm. But I am not sure how this can be done when we need to find all the distinct shortest paths from S to T. Is it solvable in O(n) time? I have one more question: if we assume that the weights of the edges in the graph can only take values in a certain range, let's say 1 <= w(e) <= 2, will this affect the time complexity?
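
    Counting (rather than listing) the distinct shortest paths fits naturally into Dijkstra itself: carry a path count per vertex and add counts whenever an equally short route to a vertex is found. A Python sketch under the assumption of strictly positive weights (graph given as adjacency lists of (neighbor, weight) pairs):

        import heapq

        def count_shortest_paths(graph, s, t):
            dist = {v: float("inf") for v in graph}
            count = {v: 0 for v in graph}
            dist[s], count[s] = 0, 1
            heap = [(0, s)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u]:
                    continue                       # stale queue entry
                for v, w in graph[u]:
                    if d + w < dist[v]:            # strictly better route: reset the count
                        dist[v] = d + w
                        count[v] = count[u]
                        heapq.heappush(heap, (dist[v], v))
                    elif d + w == dist[v]:         # equally short route: accumulate
                        count[v] += count[u]
            return dist[t], count[t]

        graph = {
            "S": [("A", 1), ("B", 1)],
            "A": [("S", 1), ("T", 1)],
            "B": [("S", 1), ("T", 1)],
            "T": [("A", 1), ("B", 1)],
        }
        print(count_shortest_paths(graph, "S", "T"))   # (2, 2): two shortest paths of length 2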

    Read the article

  • iteration on numbers with no 2 same digits

    - by rahmivolkan
    I don't know if this has already been asked (I couldn't find anything). I want to iterate over this kind of sequence, implemented as an array:

        int a[10];
        int i = 0;
        for( ; i < 10; i++ )
            a[i] = i+1;

    Now the array holds "1 2 3 4 5 6 7 8 9 10" and I want to get "1 2 3 4 5 6 7 8 10 9", then "1 2 3 4 5 6 7 9 8 10", then "1 2 3 4 5 6 7 9 10 8", and so on. I tried to come up with an algorithm but I couldn't figure it out. Is there an easy way to implement a "next" iterator for this kind of problem? Thanks in advance
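
    The sequence being described is exactly the "next lexicographic permutation" step (the same algorithm behind C++'s std::next_permutation). A minimal Python sketch of that step:

        def next_permutation(a):
            # rearrange list a in place into the next lexicographic permutation;
            # return False if a is already the last (descending) permutation
            i = len(a) - 2
            while i >= 0 and a[i] >= a[i + 1]:
                i -= 1                             # rightmost ascent a[i] < a[i+1]
            if i < 0:
                return False
            j = len(a) - 1
            while a[j] <= a[i]:
                j -= 1                             # rightmost element greater than a[i]
            a[i], a[j] = a[j], a[i]
            a[i + 1:] = reversed(a[i + 1:])        # put the tail in its smallest order
            return True

        a = list(range(1, 11))
        next_permutation(a)
        print(a)   # [1, 2, 3, 4, 5, 6, 7, 8, 10, 9]
        next_permutation(a)
        print(a)   # [1, 2, 3, 4, 5, 6, 7, 9, 8, 10]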

    Read the article

  • Bitmap.Save problems

    - by user284026
    Hello, can anyone tell me if there is a known problem with Bitmap and steganography for WM 6? I am working on a project and I have to hide a digital signature in a bitmap. The algorithm works perfectly, in the sense that as long as I have the image in memory the bitmap contains the modified bytes. But after I save the image (Bitmap.Save()) and reopen it, those bytes are lost. When I say lost, I mean they are back to the original bytes from when the picture was taken. Thank you.

    Read the article
