Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.


  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the positions of the characters within the string that make the string unique. So for a list of three strings

        abcd
        abcc
        bbcb

    For the first string I want to identify the character in 4th position, d, since d does not appear in the 4th position in any other string. For the second string I want to identify the character in 4th position, c. For the third string I want to identify the character in 1st position, b, AND the character in 4th position, also b. This could be concisely represented as

        abcd -> ...d
        abcc -> ...c
        bbcb -> b..b

    If you consider the same problem but with a list of binary numbers

        0101
        0011
        1111

    then the result I want would be

        0101 -> ..0.
        0011 -> .0..
        1111 -> 1...

    Staying with the binary theme, I can use XOR to identify which bits are unique within two binary numbers, since 0101 ^ 0011 = 0110, which I can interpret as meaning that in this case the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless somehow it can be extended to the larger list. A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list

        abcd
        abcc
        bbcb

    I would start with abcd and iterate through vertical slices of

        abcc
        bbcb

    where these vertical slices would be

        a | b | c | c
        b | b | c | b

    or in list form, "ab", "bb", "cc", "cb". This would result in four comparisons:

        a : ab -> . (a is not unique)
        b : bb -> . (b is not unique)
        c : cc -> . (c is not unique)
        d : cb -> d (d is unique)

    or concisely

        abcd -> ...d

    Maybe it's wishful thinking, but I have a feeling that there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency wasn't a concern I would use a simple hash of each image. Can you improve on brute force?
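
    One way to beat the pairwise brute force is to count, for each column, how often each character occurs; a character is then unique in its position exactly when its column count is 1, which makes the whole pass O(n·k) for n strings of length k. A minimal Python sketch of that idea (the function name is mine):

        from collections import Counter

        def unique_positions(strings):
            """For each string, keep the characters whose column count is 1."""
            if not strings:
                return []
            width = len(strings[0])
            # One frequency table per column, built in a single pass over the list.
            columns = [Counter(s[i] for s in strings) for i in range(width)]
            result = []
            for s in strings:
                marked = "".join(
                    s[i] if columns[i][s[i]] == 1 else "." for i in range(width)
                )
                result.append(marked)
            return result

        print(unique_positions(["abcd", "abcc", "bbcb"]))  # ['...d', '...c', 'b..b']
        print(unique_positions(["0101", "0011", "1111"]))  # ['..0.', '.0..', '1...']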

    Read the article

  • Semantic Diff Utilities

    - by rubancache
    I'm trying to find some good examples of semantic diff/merge utilities. The traditional paradigm of comparing source code files works by comparing lines and characters, but are there any utilities out there (for any language) that actually consider the structure of code when comparing files? For example, existing diff programs will report "difference found at character 2 of line 125. File x contains v-o-i-d, where file y contains b-o-o-l". A specialized tool should be able to report "Return type of method doSomething() changed from void to bool". I would argue that this type of semantic information is actually what the user is looking for when comparing code, and should be the goal of next-generation programming tools. Are there any examples of this in available tools?
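
    As a rough illustration of the idea rather than an existing tool, a structural comparison can be made on parse trees instead of characters. The Python sketch below (assuming Python 3.9+ for ast.unparse) reports changed function signatures rather than changed lines:

        import ast

        def function_signatures(source):
            """Map each function name to (argument names, return annotation)."""
            sigs = {}
            for node in ast.walk(ast.parse(source)):
                if isinstance(node, ast.FunctionDef):
                    args = tuple(a.arg for a in node.args.args)
                    returns = ast.unparse(node.returns) if node.returns else None
                    sigs[node.name] = (args, returns)
            return sigs

        old = "def do_something(x) -> None:\n    pass\n"
        new = "def do_something(x) -> bool:\n    return True\n"

        before, after = function_signatures(old), function_signatures(new)
        for name in sorted(before.keys() & after.keys()):
            if before[name] != after[name]:
                print(f"{name}(): signature changed from {before[name]} to {after[name]}")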

    Read the article

  • translate by replacing words inside existing text

    - by Berry Tsakala
    What are common approaches for translating certain words (or expressions) inside a given text, when the text must be reconstructed (with punctuation and everything)? The translation comes from a lookup table, and covers words, collocations, and emoticons like L33t, CUL8R, :-), etc. Simple string search-and-replace is not enough, since it can replace part of longer words (e.g. translating cat to dog would turn caterpillar into dogerpillar). Assume the following input:

        s = "dogbert, started a dilbert dilbertion proces cat-bert :-)"

    After translation, I should receive something like:

        result = "anna, started a george dilbertion process cat-bert smiley"

    I can't simply tokenize, since I lose punctuation and word positions. Regular expressions work for normal words, but don't catch special expressions like the smiley :-) :

        re.sub(r'\bword\b', 'translation', s)   ==> translation
        re.sub(r'\b:-\)\b', 'smiley', s)        ==> :-)   (unchanged)

    For now I'm using the above-mentioned regex, plus simple replace for the non-alphanumeric words, but it's far from being bulletproof. (P.S. I'm using Python.)
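
    One common approach is to compile the whole lookup table into a single alternation, escaping each key and adding word boundaries only where the key begins or ends with a word character, so that emoticons still match. A Python sketch along those lines (the table entries are taken from the example above):

        import re

        table = {"dogbert": "anna", "dilbert": "george", ":-)": "smiley"}

        def pattern_for(key):
            # \b only exists next to word characters, so add it conditionally;
            # this lets plain words and emoticons share one regex.
            left = r"\b" if (key[0].isalnum() or key[0] == "_") else ""
            right = r"\b" if (key[-1].isalnum() or key[-1] == "_") else ""
            return left + re.escape(key) + right

        # Longest keys first, so longer entries win over their prefixes.
        keys = sorted(table, key=len, reverse=True)
        regex = re.compile("|".join(pattern_for(k) for k in keys))

        def translate(text):
            return regex.sub(lambda m: table[m.group(0)], text)

        s = "dogbert, started a dilbert dilbertion proces cat-bert :-)"
        print(translate(s))
        # anna, started a george dilbertion proces cat-bert smiley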

    Read the article

  • How to learn as a lone developer?

    - by fearofawhackplanet
    I've been lucky to work in a small team with a couple of experienced and knowledgeable developers for the first year of my career. I've learned a huge amount. But I'm now getting transferred within my company, and will be working on solo projects. I'll cope, but I know I'll make mistakes and won't always produce the best solutions without someone to guide me and review my output. I'm wondering if anyone has any tips in this situation. How can I keep learning? What's the best way to monitor and assess the quality of my work? How can I ensure that my career and skills don't stagnate?

    Read the article

  • Generating video or images of geometrical objects from data

    - by Jonathan Barbero
    Hello, I'm working on a course project to predict the velocity and position of the solar system planets (and other objects). It would be really cool if I could visualize the predicted object data: generating 3D images would be good, and video would be amazing. Do you know of any library that lets me use this data to generate an image or video? (I don't care which language.) Data:
        - simulation step (time-line step for a video)
        - positions of the objects
        - radius and/or colours of the objects
    Thanks in advance, any suggestion is welcome.
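
    As one possibility (a sketch, not a full video pipeline): matplotlib can render one 3D scatter frame per simulation step, and the saved frames can then be stitched into a video with an external tool such as ffmpeg. The data below is made up, and a reasonably recent matplotlib is assumed for the projection keyword:

        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- only needed on older matplotlib

        # Hypothetical data: one (x, y, z) position per object per simulation step.
        steps = [
            [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (1.5, -0.3, 0.2)],
            [(0.0, 0.0, 0.0), (0.9, 0.6, 0.0), (1.4, -0.2, 0.3)],
        ]
        sizes = [200, 20, 10]                       # marker sizes standing in for radii
        colours = ["gold", "steelblue", "firebrick"]

        for i, positions in enumerate(steps):
            fig = plt.figure()
            ax = fig.add_subplot(projection="3d")
            xs, ys, zs = zip(*positions)
            ax.scatter(xs, ys, zs, s=sizes, c=colours)
            ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
            fig.savefig(f"frame_{i:04d}.png")
            plt.close(fig)

        # The frames can then be combined externally, e.g.:
        #   ffmpeg -i frame_%04d.png orbits.mp4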

    Read the article

  • What is the single most effective thing you did to improve your programming skills?

    - by Oded
    Looking back at my career and life as a programmer, there were plenty of different ways I improved my programming skills - reading code, writing code, reading books, listening to podcasts, watching screencasts and more. My question is: What is the most effective thing you have done that improved your programming skills? What would you recommend to others that want to improve? I do expect varied answers here and no single "one size fits all" answer - I would like to know what worked for different people. Edit: Wow - what great answers! Keep 'em coming people!!!

    Read the article

  • Continue Considered Harmful?

    - by brian
    Should developers avoid using continue in C# or its equivalent in other languages to force the next iteration of a loop? Would arguments for or against overlap with arguments about Goto?
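
    For concreteness, here is the same loop written with continue as a guard clause and with nested conditions; the behaviour is identical, so the argument is largely about readability (a small illustrative Python sketch):

        items = [3, None, 7, -2, 10]

        # With continue: guard clauses keep the real work at one indentation level.
        total = 0
        for x in items:
            if x is None:
                continue
            if x < 0:
                continue
            total += x

        # Without continue: the same logic expressed as nested conditions.
        total2 = 0
        for x in items:
            if x is not None:
                if x >= 0:
                    total2 += x

        assert total == total2 == 20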

    Read the article

  • Code golf: combining multiple sorted lists into a single sorted list

    - by Alabaster Codify
    Implement an algorithm to merge an arbitrary number of sorted lists into one sorted list. The aim is to create the smallest working programme, in whatever language you like. For example:

        input:  ((1, 4, 7), (2, 5, 8), (3, 6, 9))
        output: (1, 2, 3, 4, 5, 6, 7, 8, 9)

        input:  ((1, 10), (), (2, 5, 6, 7))
        output: (1, 2, 5, 6, 7, 10)

    Note: solutions which concatenate the input lists and then use a language-provided sort function are not in keeping with the spirit of golf, and will not be accepted:

        sorted(sum(lists, []))  # cheating: out of bounds!

    Apart from anything else, your algorithm should be (but doesn't have to be) a lot faster! Clearly state the language, any foibles and the character count. Only include meaningful characters in the count, but feel free to add whitespace to the code for artistic / readability purposes. To keep things tidy, suggest improvements in comments or by editing answers where appropriate, rather than creating a new answer for each "revision". EDIT: if I were submitting this question again, I would expand the "no language-provided sort" rule to "don't concatenate all the lists then sort the result". Existing entries which do concatenate-then-sort are actually very interesting and compact, so I won't retroactively introduce a rule they break, but feel free to work to the more restrictive spec in new submissions. Inspired by http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python
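
    Outside the golfing constraints, Python's standard library already provides a true k-way merge that consumes the inputs lazily instead of concatenating and re-sorting; a non-golfed sketch:

        from heapq import merge

        def merge_sorted(lists):
            # heapq.merge keeps a small heap of "current heads", one per input list.
            return list(merge(*lists))

        print(merge_sorted([(1, 4, 7), (2, 5, 8), (3, 6, 9)]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
        print(merge_sorted([(1, 10), (), (2, 5, 6, 7)]))        # [1, 2, 5, 6, 7, 10]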

    Read the article

  • Code golf: Word frequency chart

    - by ChristopheD
    The challenge: Build an ASCII chart of the most commonly used words in a given text.

    The rules:
        - Only accept a-z and A-Z (alphabetic characters) as part of a word.
        - Ignore casing (She == she for our purpose).
        - Ignore the following words (quite arbitrary, I know): the, and, of, to, a, i, it, in, or, is.
        - Clarification: consider don't: this would be taken as 2 different 'words' in the ranges a-z and A-Z (don and t).
        - Optionally (it's too late to be formally changing the specifications now) you may choose to drop all single-letter 'words' (this could potentially shorten the ignore list too).

    Parse a given text (read a file specified via command line arguments or piped in; presume us-ascii) and build us a word frequency chart with the following characteristics:
        - Display the chart (also see the example below) for the 22 most common words (ordered by descending frequency).
        - The bar width represents the number of occurrences (frequency) of the word (proportionally). Append one space and print the word.
        - Make sure these bars (plus space-word-space) always fit: bar + [space] + word + [space] should always be <= 80 characters (make sure you account for possibly differing bar and word lengths: e.g. the second most common word could be a lot longer than the first while not differing as much in frequency).
        - Maximize bar width within these constraints and scale the bars appropriately (according to the frequencies they represent).

    An example: the text for the example can be found here (Alice's Adventures in Wonderland, by Lewis Carroll). This specific text would yield the following chart:

        [ASCII bar chart of the 22 most common words, from "she" (553, the widest bar) down to "so" (152), scaled to fit 80 columns]

    For your information, these are the frequencies the above chart is built upon:

        [('she', 553), ('you', 481), ('said', 462), ('alice', 403), ('was', 358), ('that', 330), ('as', 274), ('her', 248), ('with', 227), ('at', 227), ('s', 219), ('t', 218), ('on', 204), ('all', 200), ('this', 181), ('for', 179), ('had', 178), ('but', 175), ('be', 167), ('not', 166), ('they', 155), ('so', 152)]

    A second example (to check that you implemented the complete spec): replace every occurrence of you in the linked Alice in Wonderland file with superlongstringstring:

        [ASCII bar chart for the modified text: "she" still has the widest bar, but every bar is narrower so that the row for "superlongstringstring" still fits within 80 columns]

    The winner: shortest solution (by character count, per language). Have fun!

    Edit: table summarizing the results so far (2012-02-15) (originally added by user Nas Banov):

        Language             Relaxed  Strict
        =========            =======  ======
        GolfScript           130      143
        Perl                          185
        Windows PowerShell   148      199
        Mathematica                   199
        Ruby                 185      205
        Unix Toolchain       194      228
        Python               183      243
        Clojure                       282
        Scala                         311
        Haskell                       333
        Awk                           336
        R                             298
        Javascript           304      354
        Groovy                        321
        Matlab                        404
        C#                            422
        Smalltalk                     386
        PHP                           450
        F#                            452
        TSQL                 483      507

    The numbers represent the length of the shortest solution in a specific language. "Strict" refers to a solution that implements the spec completely (draws |____| bars, closes the first bar on top with a ____ line, accounts for the possibility of long words with high frequency, etc.). "Relaxed" means some liberties were taken to shorten the solution. Only solutions shorter than 500 characters are included. The list of languages is sorted by the length of the 'strict' solution. 'Unix Toolchain' is used to signify various solutions that use a traditional *nix shell plus a mix of tools (like grep, tr, sort, uniq, head, perl, awk).
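
    For reference, a non-golfed Python sketch of the spec (readability over character count); the bar rendering is approximate, but the 80-column scaling follows the rule above:

        import re
        import sys
        from collections import Counter

        IGNORE = {"the", "and", "of", "to", "a", "i", "it", "in", "or", "is"}

        def chart(text, top=22, width=80):
            words = re.findall(r"[a-z]+", text.lower())
            counts = Counter(w for w in words if w not in IGNORE).most_common(top)
            # '|' + bar + '| ' + word must fit in `width` columns for every row,
            # so the scale factor is set by the worst (frequency, word length) pair.
            scale = min((width - 3 - len(word)) / count for word, count in counts)
            lines = [" " + "_" * int(counts[0][1] * scale)]   # closes the top of the first bar
            for word, count in counts:
                lines.append("|" + "_" * int(count * scale) + "| " + word)
            return "\n".join(lines)

        if __name__ == "__main__":
            print(chart(sys.stdin.read()))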

    Read the article

  • Word Jumble Algorithm

    - by MasterMax1313
    Given a word jumble (i.e. ofbaor), what would be an approach to unscramble the letters to create a real word (i.e. foobar)? I could see this having a couple of approaches, and I think I know how I'd do it in .NET, but I'm curious to see what some other solutions look like (always happy to see whether my solution is optimal or not). This isn't homework or anything like that; I just saw a word jumble in the local comics section of the paper (yes, good ol' fashioned newsprint), and the engineer in me started thinking.
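
    The usual approach is to index a word list by its sorted letters, so unscrambling becomes a dictionary lookup; a Python sketch with a stand-in word list:

        from collections import defaultdict

        WORDS = ["foobar", "unscramble", "jumble", "bar", "foo"]  # stand-in for a real dictionary file

        # Anagrams share the same letters, so sorting the letters gives a canonical key.
        index = defaultdict(list)
        for word in WORDS:
            index["".join(sorted(word))].append(word)

        def unjumble(jumble):
            return index.get("".join(sorted(jumble.lower())), [])

        print(unjumble("ofbaor"))  # ['foobar']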

    Read the article

  • Based on your development stack, which is easier for you and why? Debugging or logging?

    - by leeand00
    Please state whether you are developing on the front end, the back end, or a mobile/desktop application. List your development stack (language, IDE, etc.) and whether or not you use unit testing; be sure to include any AOP frameworks if used. Tell me whether it is easier for you to use debugging or logging during development, and why you feel it is easier. I'm just trying to get a feel for why people choose debugging or logging based on their development stack.

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
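
    For what it's worth, the access pattern described (pop the minimum, push zero to two larger items) is exactly what a binary heap provides in O(log n) per operation, as the update above notes. A Python sketch with placeholder work:

        import heapq
        from fractions import Fraction

        def process(item):
            """Placeholder for the real work: return 0-2 new items, all larger."""
            if item > 100:
                return []
            return [item * 2, item * 3]

        heap = [Fraction(1)]                     # initialised with one item
        while heap:
            smallest = heapq.heappop(heap)       # O(log n) removal of the minimum
            for new_item in process(smallest):   # always larger than `smallest`
                heapq.heappush(heap, new_item)   # O(log n) insertion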

    Read the article

  • java vs python. In what way is Java Better?

    - by oxinabox.ucc.asn.au
    What are the advantages of Java over Python? What are the disadvantages of Python over Java? Why isn't Java more like Python? Like, why doesn't Java have a command-line interpreter? I believe Java must have some advantages, but... I'm yet to see them. Logically all languages have an advantage, as far as I can tell. I learnt Java before Python, in a 6-month uni course. I spent a couple of weeks using Python (writing a script to make a C source file). I hated it at first (as it was so different from C). I realised I had fallen in love with it when I noticed, on going back to do a follow-on Java course at uni, that I'd stopped giving my variables types and was trying to multiply strings.

    Read the article

  • How to use a DHT for a social trading environment

    - by Lirik
    I'm trying to understand if a DHT can be used to solve a problem I'm working on: I have a trading environment where professional option traders can get an increase in their risk limit by requesting that fellow traders lend them some of their risk limit. The lending trader can either search for traders with certain risk parameters, which are part of every trader's profile (i.e. Greeks), or subscribe to requests from certain traders. I want this environment to be scalable and decentralized, but I don't know how traders can search for specific profile parameters when the data is contained in a DHT. Could anybody explain how this can be done?
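
    A DHT only supports exact-key lookups, so one common workaround is to publish each trader under additional keys derived from bucketed profile parameters; a search then becomes a lookup on those attribute keys. A toy Python sketch of that indexing idea, with a plain dict standing in for the DHT and made-up parameter values:

        from collections import defaultdict

        dht = defaultdict(set)   # stand-in for put/get on a real DHT

        def bucket(value, step=0.1):
            """Quantise a continuous Greek so it can serve as an exact DHT key."""
            return round(value / step) * step

        def publish(trader_id, greeks):
            for name, value in greeks.items():
                dht[f"{name}:{bucket(value)}"].add(trader_id)

        def find(name, value):
            return dht[f"{name}:{bucket(value)}"]

        publish("alice", {"delta": 0.52, "vega": 1.3})
        publish("bob",   {"delta": 0.49, "vega": 2.1})
        print(find("delta", 0.5))   # both trader IDs (a set, so order may vary)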

    Read the article

  • Where does "foo" come from in coding examples? [closed]

    - by ThePower
    Possible Duplicates:
        Using “Foo” and “Bar” in examples
        To foo bar, or not to foo bar: that is the question.

    Bit of a general question here, but it's something I would like to know! Whenever I am looking for resolutions to my C# problems online, I always come across "foo" being used as an example. Does this represent anything, or is it just one of those unexplained catchy object names, used by many people in examples?

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels, and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of cells that are adjacent within the double array and that all share one label in common. (Diagonal adjacency doesn't count, and I'm not clustering empty cells.)

        |-------|-------|------|
        | Jane  | Joe   |      |
        | Jack  | Jane  |      |
        |-------|-------|------|
        | Jane  | Jane  |      |
        |       | Joe   |      |
        |-------|-------|------|
        |       | Jack  | Jane |
        |       | Joe   |      |
        |-------|-------|------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper left cells.

    What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system: I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hashing would avoid having to iterate through every cell in every preexisting segment to find an adjacency.

    Why I Don't Like It: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then a method that both associates the element under inspection with a segment and concatenates the two adjacent identical segments. On the whole, this method and the intricate hashing system it would require feel very inelegant. Additionally, I really only care about finding the large segments in the double array, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
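
    For reference, the bookkeeping described above can be avoided with a flood fill (connected-component search) run once per cell-label pair; a cell can then belong to several segments, one per label. A Python sketch over the grid from the question:

        from collections import deque

        grid = [
            [{"Jane", "Jack"}, {"Joe", "Jane"}, set()],
            [{"Jane"},         {"Jane", "Joe"}, set()],
            [set(),            {"Jack", "Joe"}, {"Jane"}],
        ]

        def segments(grid):
            rows, cols = len(grid), len(grid[0])
            seen = set()          # (row, col, label) triples already assigned to a segment
            result = []           # list of (label, [cells]) segments
            for r in range(rows):
                for c in range(cols):
                    for label in grid[r][c]:
                        if (r, c, label) in seen:
                            continue
                        # Breadth-first flood fill over 4-connected cells sharing `label`.
                        cells, queue = [], deque([(r, c)])
                        seen.add((r, c, label))
                        while queue:
                            cr, cc = queue.popleft()
                            cells.append((cr, cc))
                            for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                                if (0 <= nr < rows and 0 <= nc < cols
                                        and label in grid[nr][nc]
                                        and (nr, nc, label) not in seen):
                                    seen.add((nr, nc, label))
                                    queue.append((nr, nc))
                        result.append((label, cells))
            return result

        largest = max(segments(grid), key=lambda seg: len(seg[1]))
        print(largest)   # ('Jane', [(0, 0), (1, 0), (0, 1), (1, 1)]) -- the four upper-left cells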

    Read the article

  • Why is it useful to count the number of bits?

    - by Scorchin
    I've seen the numerous questions about counting the number of set bits in a given type of input, but why is it useful? For those looking for algorithms for bit counting, look here:
        http://stackoverflow.com/questions/1517848/counting-common-bits-in-a-sequence-of-unsigned-longs
        http://stackoverflow.com/questions/472325/fastest-way-to-count-number-of-bit-transitions-in-an-unsigned-int
        http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer
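
    One concrete use: a population count gives Hamming distance almost for free, which turns up in error detection, similarity hashing, and bitboard game engines. A small Python sketch using Kernighan's clear-the-lowest-set-bit trick:

        def popcount(x):
            """Count set bits; each iteration clears the lowest set bit (Kernighan)."""
            count = 0
            while x:
                x &= x - 1
                count += 1
            return count

        def hamming_distance(a, b):
            """Number of bit positions in which two values differ."""
            return popcount(a ^ b)

        print(popcount(0b101101))                # 4
        print(hamming_distance(0b0101, 0b0011))  # 2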

    Read the article

  • Why do static Create methods exist?

    - by GeReV
    I was wondering, why do static Create methods exist? For instance, why use this code:

        System.Xml.XmlReader reader = System.Xml.XmlReader.Create(inputUri);

    over this code:

        System.Xml.XmlReader reader = new System.Xml.XmlReader(inputUri);

    I cannot find the rationale for using one over the other, and can't find any relation between the classes that use this construct and those that don't. Can anyone shed some light on this?
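
    The usual rationale is that a factory method can do things a constructor cannot: return a cached instance, choose a concrete subclass based on the arguments, or hide a multi-step setup. The Python sketch below illustrates that pattern in general; it is not how System.Xml.XmlReader.Create is actually implemented:

        class Reader:
            @classmethod
            def create(cls, uri):
                # A factory can pick the concrete type at runtime;
                # a constructor is locked to the class it is called on.
                if uri.startswith("http://") or uri.startswith("https://"):
                    return UrlReader(uri)
                return FileReader(uri)

        class FileReader(Reader):
            def __init__(self, path):
                self.path = path

        class UrlReader(Reader):
            def __init__(self, url):
                self.url = url

        print(type(Reader.create("https://example.com/feed.xml")).__name__)  # UrlReader
        print(type(Reader.create("data/local.xml")).__name__)                # FileReader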

    Read the article
