Search Results

Search found 4176 results on 168 pages for 'graph algorithms'.

Page 14/168

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to work with multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (managing timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (Node/Link insertion/deletion/modification) and merge changes just as programmers do with source code files? The system should also do dependency management. E.g. an incoming Link must not be accepted when one of its two Nodes is absent. That is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.
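
    A minimal sketch of the dependency-management rule described above (the function and exception names here are hypothetical, not from any existing tool): a change set is rejected when an incoming Link references a Node that is absent after the merge, so partial updates can never break the graph.

        # Hypothetical sketch: validate incoming link insertions against the
        # merged node set before accepting a change set.
        class MergeConflict(Exception):
            pass

        def apply_change_set(nodes, links, added_nodes, added_links, removed_nodes=()):
            """nodes: set of node ids; links: set of (src, dst) tuples."""
            merged_nodes = (nodes | set(added_nodes)) - set(removed_nodes)
            for src, dst in added_links:
                if src not in merged_nodes or dst not in merged_nodes:
                    raise MergeConflict("link (%s, %s) references a missing node" % (src, dst))
            return merged_nodes, links | set(added_links)

    A full tool would also need to re-check acyclicity after the merge, since two individually valid edge insertions can combine to form a cycle.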

    Read the article

  • How to get large pictures from a photo album via the Facebook Graph API

    - by Nav
    I am currently using the following code to retrieve all the photos from a user profile:

    FB.api('/me/albums?fields=id,name', function(response) {
        //console.log(response.data.length);
        for (var i = 0; i < response.data.length; i++) {
            var album = response.data[i];
            FB.api('/' + album.id + '/photos', function(photos) {
                if (photos && photos.data && photos.data.length) {
                    for (var j = 0; j < photos.data.length; j++) {
                        var photo = photos.data[j];
                        // photo.picture contains the link to the picture
                        var image = document.createElement('img');
                        image.src = photo.picture;
                        document.body.appendChild(image);
                        image.className = "border";
                        image.onclick = function() {
                            //this.parentNode.removeChild(this);
                            document.getElementById(info.id).src = this.src;
                            document.getElementById(info.id).style.width = "220px";
                            document.getElementById(info.id).style.height = "126px";
                        };
                    }
                }
            });
        }
    });

    However, the photos it returns are of poor quality – basically thumbnails. How do I retrieve larger photos from the albums? For the profile picture I generally use ?type=large, which returns a decent image, but type=large does not work for photos from photo albums. Also, is there a way to get the photos from Facebook in zip format once I have the photo URLs? I want users to be able to download the photos.
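
    A possible direction (a sketch, not a verified answer): the Graph API photo object also carries a full-size source URL and an images list of renditions, so requesting those fields instead of relying on photo.picture should yield larger files. The snippet below shows the idea in Python with the requests library; the placeholder token and the assumption that images is ordered largest-first are mine.

        # Sketch: fetch larger renditions via the "images"/"source" fields of
        # each photo node (ACCESS_TOKEN is a placeholder).
        import requests

        ACCESS_TOKEN = "..."

        def album_photo_urls(album_id):
            url = "https://graph.facebook.com/%s/photos" % album_id
            params = {"fields": "images,source", "access_token": ACCESS_TOKEN}
            for photo in requests.get(url, params=params).json().get("data", []):
                images = photo.get("images") or []
                # take the largest rendition if present, else the full-size source
                yield images[0]["source"] if images else photo.get("source")

    As for zipping: the Graph API does not bundle photos for you, so you would download each URL server-side and build the archive yourself (e.g. with Python's zipfile module) before serving it to users.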

    Read the article

  • PHP SDK for Facebook: Uploading an Image for an Event created using the Graph API

    - by wenbert
    Can anyone shed some light on my problem?

    <?php
    $config = array();
    $config['appId'] = "foo";
    $config['secret'] = "bar";
    $config['cookie'] = true;
    $config['fileUpload'] = true;
    $facebook = new Facebook($config);

    $eventParams = array(
        "privacy_type" => $this->request->data['Event']['privacy'],
        "name" => $this->request->data['Event']['event'],
        "description" => $this->request->data['Event']['details'],
        "start_time" => $this->request->data['Event']['when'],
        "country" => "NZ"
    );

    // around 300x300 pixels; I have set the permissions to Everyone
    $imgpath = "C:\\Yes\\Windows\\Path\\Photo_for_the_event_app.jpg";
    $eventParams["@file.jpg"] = "@".$imgpath;

    $fbEvent = $facebook->api("me/events", "POST", $eventParams);
    var_dump($fbEvent); // I get the event id

    I also have this in my "scope" when the user is asked to allow the app to post on his behalf: user_about_me,email,publish_stream,create_event,photo_upload. This works: it creates the event with all the details I have specified, EXCEPT for the event image. I have been through most of the Stack Overflow posts related to my problem (e.g. http://stackoverflow.com/a/4245260/66767), but none of them are working for me. I also do not get any error. Any ideas? Thanks!

    Read the article

  • A Plot Graph .NET WindowsForm Component Free

    - by user255946
    Hello all, I'm searching for a .NET plotting component to draw a 2D line chart, given an array of data. It will be used with Windows Forms (C#), and it would be very helpful if it were freeware. It is for a scientific application. This is my first question on Stack Overflow; please excuse my poor written English.

    Read the article

  • How to familiarize myself with Python

    - by Zel
    I am a Python beginner; I started Python 1.5 months back. I downloaded the Python docs and read some of the tutorial. I have been programming on codechef.com and solving Project Euler problems. I am thinking of reading Introduction to Algorithms and following the corresponding course on MIT OpenCourseWare, as I haven't been improving much at programming and I waste a lot of time just deciding what to do when faced with a programming problem. But I think I still don't know the right way to learn the language itself. Should I start on the library reference or continue with the Python tutorial? Is learning algorithms useful for a language such as C but not so much for Python, since it has "batteries included"? Are there other resources for familiarizing myself with the language, and in general for learning to solve programming problems? Or do I just need to devote more time?

    Read the article

  • What modern alternatives to Numerical Recipes exist?

    - by Stewart
    In the past, the Numerical Recipes book was considered the gold-standard reference for numerical algorithms. The earliest Fortran edition was followed by editions in C, C++ and other languages, bringing it more up to date. Through these, it provided reference code for the state-of-the-art algorithms of the day. Older editions are available online for free nowadays. Unfortunately, I think it is now mostly useful as a historical tome: the "software engineering" practices seem outdated to me, and the actual content hasn't kept pace with the literature. What comprehensive yet approachable references should the modern programmer look at instead?

    Read the article

  • Hopcroft–Karp algorithm in Python

    - by Simon
    I am trying to implement the Hopcroft-Karp algorithm in Python, using networkx as the graph representation. Currently I have got this far:

    # Algorithms for bipartite graphs
    import networkx as nx
    import collections

    class HopcroftKarp(object):
        INFINITY = -1

        def __init__(self, G):
            self.G = G

        def match(self):
            self.N1, self.N2 = self.partition()
            self.pair = {}
            self.dist = {}
            self.q = collections.deque()
            # init
            for v in self.G:
                self.pair[v] = None
                self.dist[v] = HopcroftKarp.INFINITY
            matching = 0
            while self.bfs():
                for v in self.N1:
                    if self.pair[v] and self.dfs(v):
                        matching = matching + 1
            return matching

        def dfs(self, v):
            if v != None:
                for u in self.G.neighbors_iter(v):
                    if self.dist[self.pair[u]] == self.dist[v] + 1 and self.dfs(self.pair[u]):
                        self.pair[u] = v
                        self.pair[v] = u
                        return True
                self.dist[v] = HopcroftKarp.INFINITY
                return False
            return True

        def bfs(self):
            for v in self.N1:
                if self.pair[v] == None:
                    self.dist[v] = 0
                    self.q.append(v)
                else:
                    self.dist[v] = HopcroftKarp.INFINITY
            self.dist[None] = HopcroftKarp.INFINITY
            while len(self.q) > 0:
                v = self.q.pop()
                if v != None:
                    for u in self.G.neighbors_iter(v):
                        if self.dist[self.pair[u]] == HopcroftKarp.INFINITY:
                            self.dist[self.pair[u]] = self.dist[v] + 1
                            self.q.append(self.pair[u])
            return self.dist[None] != HopcroftKarp.INFINITY

        def partition(self):
            return nx.bipartite_sets(self.G)

    The algorithm is taken from http://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm. However, it does not work. I use the following test code:

    G = nx.Graph([
        (1, "a"), (1, "c"),
        (2, "a"), (2, "b"),
        (3, "a"), (3, "c"),
        (4, "d"), (4, "e"), (4, "f"), (4, "g"),
        (5, "b"), (5, "c"),
        (6, "c"), (6, "d")
    ])
    matching = HopcroftKarp(G).match()
    print matching

    Unfortunately this ends up in an endless loop. Can someone spot the error? I am out of ideas, and I must admit that I have not yet fully understood the algorithm; so far it is mostly an implementation of the pseudocode on Wikipedia.
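
    One observation, reading the code against Wikipedia's pseudocode (so treat this as a hedged diagnosis rather than a verified fix): the pseudocode augments from left vertices that are still unmatched, but the loop above runs dfs only when self.pair[v] is truthy – i.e. only for already-matched vertices. Since nothing is matched initially, no augmenting path is ever taken, the matching never grows, and bfs() keeps succeeding forever. The BFS phase also needs a FIFO queue to build correct level numbers, so popleft() rather than pop(). A sketch of the two corrected lines:

        # inside match(): augment only from *unmatched* vertices of N1
        if self.pair[v] is None and self.dfs(v):   # was: if self.pair[v] and self.dfs(v)
            matching = matching + 1

        # inside bfs(): take vertices from the front so the scan is breadth-first
        v = self.q.popleft()                       # was: v = self.q.pop()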

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General: Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. In this post I'm going to cover briefly: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, and Heapsort (not complete).

    Selection Sort: Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection: it starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. The first iteration goes something like this: go through the entire array of data, find the lowest value, and place that value at the front of the array. The second iteration goes through the array from position two (position one has already received the smallest value), finds the next lowest value, and places it at the second position in the array. This process is repeated until the entire array has been sorted.

    A positive about selection sort is that it does not make many item movements; in fact, in a worst-case scenario every item is moved only once. Selection sort is, however, a comparison-intensive sort. If you had 10 items in a collection, just parsing the collection would cost 9+8+7+6+5+4+3+2+1 = 45 comparisons, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection that was already sorted, you would still perform essentially the same number of comparisons as if it were not sorted at all. Many of the following algorithms try to reduce the number of comparisons when the list is already sorted, leaving one with a best-case and a worst-case scenario for comparisons. Likewise, different approaches have different levels of item movement. Depending on which is more expensive – a comparison or an item move – one may favour one approach over another.

    Insertion Sort: Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" if things are already sorted. Assume you had a collection of numbers in the following order: 10 18 25 30 23 17 45 35. There are 8 elements in the list. If we start at the front of the list, 10, 18, 25 & 30 are already sorted. Element 5 (23), however, is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder and then shifting the elements before it up one. So:
    Element 5 is copied to a temporary holder: 10 18 25 30 23 17 45 35 – T 23
    Element 4 shifts to element 5: 10 18 25 30 30 17 45 35 – T 23
    Element 3 shifts to element 4: 10 18 25 25 30 17 45 35 – T 23
    Element 2 (18) is smaller than the temporary holder, so we put the temporary holder's value into element 3: 10 18 23 25 30 17 45 35 – T 23
    We now have a sorted list up to element 5, and so we repeat the same process: move element 6 to a temporary value and then shift everything from element 2 to element 5 up by one.
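
    To make the two walkthroughs concrete, here are minimal sketches of both array-based sorts (in Python rather than the C++ mentioned above, purely for brevity):

        def selection_sort(a):
            for i in range(len(a) - 1):
                smallest = i
                for j in range(i + 1, len(a)):          # scan the unsorted tail
                    if a[j] < a[smallest]:
                        smallest = j
                a[i], a[smallest] = a[smallest], a[i]   # one move per pass

        def insertion_sort(a):
            for i in range(1, len(a)):
                temp = a[i]                             # the "temporary holder"
                j = i
                while j > 0 and a[j - 1] > temp:        # shift larger elements up one
                    a[j] = a[j - 1]
                    j -= 1
                a[j] = temp
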
    As you can see, one major setback of this technique is the shifting of values up one at a time – this is because up to now we have been considering the collection to be an array. If the collection were instead a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and "reinsert" it at its sorted position, which would reduce the number of transactions performed on the collection. So insertion sort seems to perform better than selection sort – however, its implementation is slightly more complicated. This is typical of most sorting algorithms: generally, greater performance comes with greater complexity. Also, insertion sort performs better if the collection of data is already sorted: if you were handed a sorted collection of size n, then only n comparisons would be needed to verify that it is sorted. It's important to note that (array-based) insertion sort performs a number of item moves – every time an item is "out of place", several items before it get shifted up.

    Shellsort – Diminishing Increment Sort: So up to now we have covered selection sort & insertion sort. Selection sort makes many comparisons, and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections at a particular spacing. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements:
    10 20 15 45 36 48 7 60 18 50 2 19 43 30 55
    First we may view the collection as 7 sub-collections, let's say at intervals of 7, and sort each sublist:
    10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30
    10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (sorted)
    We then sort each sublist at a smaller interval, let's say 4:
    10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30
    10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (sorted)
    We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort):
    10 18 55 60 2 15 20 50 19 36 43 45 7 30 48
    2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (sorted)
    The important thing with Shellsort is deciding on the increment sequence for the sub-collections. From what I can tell, there isn't any definitive method, and depending on the order of your elements, different increment sequences may perform better than others. There are, however, certain increment sequences you may want to avoid: an even-based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided, because it does not allow elements at even positions to be compared with elements at odd positions until the final sort phase – which in a way negates many of the benefits of using sub-collections. The performance of Shellsort in terms of comparisons and item movements is hard to determine exactly; however, it is considered to be considerably better than the normal insertion sort.

    Quicksort: Quicksort uses a divide-and-conquer approach to sort a collection of items. The collection is divided into two sub-collections, and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The general algorithm in pseudocode is: divide the collection into two sub-collections; quicksort the lower sub-collection; quicksort the upper sub-collection; combine the lower & upper sub-collections. As hinted at above, quicksort uses recursion in its implementation.
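
    A minimal Python sketch of the Shellsort described above, using Knuth's 1, 4, 13, 40, … increments as one reasonable odd-based gap sequence (per the caveat about even-based sequences):

        def shellsort(a):
            gap = 1
            while gap < len(a) // 3:
                gap = 3 * gap + 1                # Knuth's increment sequence
            while gap >= 1:
                for i in range(gap, len(a)):     # insertion sort at stride "gap"
                    temp = a[i]
                    j = i
                    while j >= gap and a[j - gap] > temp:
                        a[j] = a[j - gap]
                        j -= gap
                    a[j] = temp
                gap //= 3
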
    The real trick with quicksort is to get the lower and upper sub-collections to be of roughly equal size. The size of a sub-collection is determined by the choice of pivot value. Once a pivot is determined, the collection is partitioned into the two sub-collections, and the process is repeated on each sub-collection until the base case is reached. With quicksort, the work is done when dividing into the lower & upper collections; the actual combining of the lower & upper sub-collections at the end is relatively simple, since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection.

    Mergesort: With quicksort, the average-case complexity is O(n log n), but the worst-case complexity is still O(n²). Mergesort improves on quicksort by always having a complexity of O(n log n), regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide-and-conquer approach to partition a collection into two sub-collections; it then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is: divide the collection into two sub-collections; mergesort the first sub-collection; mergesort the second sub-collection; merge the first and second sub-collections. As you can see, it still pretty much looks like quicksort – so let's see where it differs. Firstly, mergesort differs from quicksort in how it partitions the sub-collections: instead of choosing a pivot, merge sort partitions each sub-collection based on size, so that the first and second sub-collections are of roughly the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element – and a sub-collection of one element is sorted! So the trick is how we put all these sub-collections back together so that they maintain their sorted order: sorted sub-collections are merged into a sorted collection by comparing the front elements of the sub-collections and building up the sorted collection. Let's have a look at a few examples.

    Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection, take the smallest of the two, and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection is 10 20.

    Now assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So, again, we would:
    Compare 10 with 15 – 10 is the winner, so we add it to our sorted collection (10), leaving us with 20 & 15 19.
    Compare 20 with 15 – 15 is the winner, so we add it to our sorted collection (10 15), leaving us with 20 & 19.
    Compare 20 with 19 – 19 is the winner, so we add it to our sorted collection (10 15 19), leaving us with 20 & _.
    20 is by default the winner, so our sorted collection is 10 15 19 20. Make sense?

    Heapsort (still needs to be completed): So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn't we just skip to mergesort & quicksort?
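
    The merge step walked through above, plus the top-level recursion, as a short Python sketch:

        def merge(left, right):
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:              # take the smaller front element
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            merged.extend(left[i:])                  # one side is exhausted;
            merged.extend(right[j:])                 # append whatever remains
            return merged

        def mergesort(a):
            if len(a) <= 1:                          # a one-element list is sorted
                return a
            mid = len(a) // 2
            return merge(mergesort(a[:mid]), mergesort(a[mid:]))
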
    I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know they aren't the best way of implementing things and that you don't need to implement them in future. Anyhow, luckily this is going to be the last of my sorts for today. The first step in heapsort is to convert a collection of data into a heap; after the data is converted into a heap, sorting begins. So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: a heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the keys in the elements at positions 2k and 2k+1 (counting positions from 1), if those positions exist. Does that make sense? At first glance I'm thinking, what the heck? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort, while a heap represents data in a slightly different way. Although the data is stored in a sequential collection, a sequential collection of data that forms a valid heap is only "semi-sorted". Let me try to explain a bit further with an example.

    Example 1 of potential heap data: assume we had a collection of numbers as follows: 1[1] 2[2] 3[3] 4[4] 5[5] 6[6]. For this to be a valid heap, the element with value 1 at position [1] would need to be greater than or equal to the elements at position [2] (2k) and position [3] (2k+1). It is not, so this collection of numbers is not a valid heap.

    Example 2 of potential heap data: let's look at another collection of numbers: 6[1] 5[2] 4[3] 3[4] 2[5] 1[6]. Is this a valid heap? Well, the element with value 6 at position [1] must be greater than or equal to the elements at positions [2] and [3]. Is 6 > 5 and 6 > 4? Yes, it is. Now look at the element with value 5 at position [2]: it must be greater than or equal to the elements at positions [4] and [5]. Is 5 > 3 and 5 > 2? Yes, it is. If you continue to examine this second collection of data (4 at position [3] against 1 at position [6]), you will find that it is a valid heap based on the definition above.
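
    A small checker for the heap definition above (positions counted from 1, so the children of position k sit at 2k and 2k+1), run against both examples:

        def is_max_heap(keys):
            n = len(keys)
            for k in range(1, n + 1):
                for child in (2 * k, 2 * k + 1):
                    if child <= n and keys[k - 1] < keys[child - 1]:
                        return False
            return True

        print(is_max_heap([1, 2, 3, 4, 5, 6]))  # False -- example 1
        print(is_max_heap([6, 5, 4, 3, 2, 1]))  # True  -- example 2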

    Read the article

  • How to find an optimal path visiting every node with parallel workers, complicated by dynamic edge costs?

    - by Aaron Anodide
    Say you have an acyclic directed graph with weighted edges and create N workers. My goal is to calculate the optimal way those workers can traverse the entire graph in parallel. However, edge costs may change along the way. Example:
    A -1-> B
    A -2-> C
    B -3-> C (if A has already been visited)
    B -5-> C (if A has not already been visited)
    Does what I describe lend itself to a standard algorithmic approach, or, alternatively, can someone suggest whether I'm looking at this in an inherently flawed way (I have an intuition I might be)?
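
    One way to make the difficulty concrete (an illustrative encoding I am assuming, not something from the post): if an edge's cost depends on which vertices have already been visited, the cost is a function of the search state, so the state must include the visited set rather than the current vertex alone, and standard single-source shortest-path algorithms no longer apply directly.

        # Hypothetical encoding of the example above: edge costs as functions
        # of the set of already-visited vertices.
        graph = {
            "A": [("B", lambda visited: 1),
                  ("C", lambda visited: 2)],
            "B": [("C", lambda visited: 3 if "A" in visited else 5)],
            "C": [],
        }

        def edge_cost(u, v, visited):
            for dest, cost in graph[u]:
                if dest == v:
                    return cost(visited)
            raise KeyError((u, v))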

    Read the article

  • How can I use the Boost Graph Library to lay out vertices?

    - by Mike
    I'm trying to lay out vertices using the Boost Graph Library. However, I'm running into some compilation issues which I'm unsure about. Am I using the BGL in an improper manner? My code is:

    PositionVec position_vec(2);
    PositionMap position(position_vec.begin(), get(vertex_index, g));
    int iterations = 100;
    double width = 100.0;
    double height = 100.0;
    minstd_rand gen;
    rectangle_topology<> topology(gen, 0, 0, 100, 100);
    fruchterman_reingold_force_directed_layout(g, position, topology); // Compile fails on this line

    The diagnostics produced by clang++ (I've also tried GCC) are:

    In file included from test.cpp:2: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: error: no member named 'dimensions' in 'boost::simple_point<double>' BOOST_STATIC_ASSERT (Point::dimensions == 2); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from test.cpp:2: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:13: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/graph_traits.hpp:15: In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/tuple/tuple.hpp:24: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/static_assert.hpp:118:49: note: instantiated from: sizeof(::boost::STATIC_ASSERTION_FAILURE< BOOST_STATIC_ASSERT_BOOL_CAST( B ) >)>\ ^ In file included from test.cpp:2: /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: note: instantiated from: BOOST_STATIC_ASSERT (Point::dimensions == 2); ^ ~~~~~~~ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:31: note: instantiated from: BOOST_STATIC_ASSERT (Point::dimensions == 2); ~~~~~~~^ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:417:19: note: in instantiation of template class 'boost::grid_force_pairs<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here make_grid_force_pairs(topology, position, g)), ^ /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:431:3: note: in instantiation of function template specialization 'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &>, boost::square_distance_attractive_force, boost::attractive_force_t, boost::no_property>' requested here fruchterman_reingold_force_directed_layout ^ test.cpp:48:3: note: in instantiation of function template specialization
'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here fruchterman_reingold_force_directed_layout(g, position, topology); ^ 1 error generated.

    Read the article

  • Finding the shortest valid path in a colored-edge graph

    - by user1067083
    Given a directed graph G, with edges colored either green or purple, and a vertex s in G, I must find an algorithm that finds the shortest path from s to each vertex in G such that the path includes at most two purple edges (and as many green edges as needed). I thought of running BFS on G after removing all the purple edges, and then, for every vertex whose shortest path is still infinity, doing something to try to find it, but I'm kind of stuck, and it costs a lot of running time as well... Any other suggestions? Thanks in advance.
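
    One standard trick worth considering (my suggestion, not from the post): run BFS on a layered copy of G. A state (v, k) means "at vertex v having used k purple edges"; green edges stay within a layer, purple edges step from layer k to k+1, and only layers 0-2 exist, so any path found uses at most two purple edges. Since BFS explores in order of edge count, the whole thing is linear in the size of the (tripled) graph. A sketch:

        from collections import deque

        def shortest_with_two_purple(adj, s):
            # adj: dict v -> list of (neighbor, color), color "g" or "p"
            INF = float("inf")
            dist = {(v, k): INF for v in adj for k in range(3)}
            dist[(s, 0)] = 0
            q = deque([(s, 0)])
            while q:
                v, k = q.popleft()
                for u, color in adj[v]:
                    nk = k + (1 if color == "p" else 0)
                    if nk <= 2 and dist[(u, nk)] == INF:
                        dist[(u, nk)] = dist[(v, k)] + 1
                        q.append((u, nk))
            # best distance to each vertex over the three layers
            return {v: min(dist[(v, k)] for k in range(3)) for v in adj}

    If the edges were weighted, Dijkstra on the same layered graph would apply instead of BFS.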

    Read the article

  • List of generic algorithms and data structures

    - by Jake Petroules
    As part of a library project, I want to include a plethora of generic algorithms and data structures. This includes algorithms for searching and sorting, data structures like linked lists and binary trees, path-finding algorithms like A*... the works. Basically, any generic algorithm or data structure you can think of that might be useful in such a library, please post it or add it to the list. Thanks! (NOTE: Because there is no single right answer I've of course placed this in community wiki... and also, please don't suggest algorithms which are too specialized to be provided by a generic library.)
    The list:
    Data structures: AVL tree, B-tree, B*-tree, B+-tree, binary tree, binary heap, binary search tree, linked lists (singly linked list, doubly linked list), stack, queue
    Sorting algorithms: binary tree sort, bubble sort, heapsort, insertion sort, merge sort, quicksort, selection sort
    Searching algorithms

    Read the article

  • Drawing Directed Acyclic Graphs: Minimizing edge crossing?

    - by Robert Fraser
    Laying out the vertices in a DAG in a tree form (i.e. vertices with no in-edges on top, vertices dependent only on those on the next level, etc.) is rather simple without graph-drawing algorithms such as Sugiyama's. However, is there a simple algorithm to do this that minimizes edge crossing? (For some graphs, it may be impossible to completely eliminate edge crossings.) A picture says a thousand words, so: is there an algorithm that would suggest the first layout instead of the second? (images omitted) EDIT: As the picture suggests, a vertex's inputs are always on top and its outputs are always below, which is another barrier to just pasting in an existing layout algorithm.
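
    For reference, the usual "simple" answer inside the Sugiyama framework is the barycenter (or median) heuristic: sweep through the layers, ordering each layer by the average position of each vertex's neighbours in the adjacent layer. It only reduces crossings rather than minimizing them – even the two-layer crossing-minimization problem is NP-hard. A sketch of one downward sweep:

        def barycenter_sweep(layers, preds):
            # layers: list of lists of vertices, top layer first
            # preds: dict v -> list of parent vertices
            for i in range(1, len(layers)):
                above = {v: x for x, v in enumerate(layers[i - 1])}
                def bary(v):
                    ps = [above[p] for p in preds.get(v, []) if p in above]
                    return sum(ps) / len(ps) if ps else 0.0
                layers[i].sort(key=bary)   # order by mean parent position
            return layers

    Repeating a few up-and-down sweeps until the ordering stabilizes is the usual recipe.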

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in mergesort that I am, sad to say, having difficulty solving. The problem is below:

    Sublinear extra space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth.

    My trouble with the problem is that, based on the idea Sedgewick recommends, the following set of blocks will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following: divide the full array into subarrays of size M; run selection sort on each of the subarrays; merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.) This leads to wanting to increase the size of the auxiliary space to handle at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and the auxiliary space for the second subarray; however, I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done? NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.
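
    On the specific sticking point of where the merge results go: merging two *adjacent* sorted blocks of size M in place needs only M extra slots – copy the left block into the buffer, then merge the buffer and the right block back over both blocks' positions. A sketch of that primitive (the surrounding pass order is the part of the exercise left open here):

        def merge_adjacent(a, lo, m):
            # Merge sorted a[lo:lo+m] and sorted a[lo+m:lo+2*m] using m extra slots.
            aux = a[lo:lo + m]                  # the only extra storage, size M
            i, j, k = 0, lo + m, lo
            while i < len(aux) and j < lo + 2 * m:
                if aux[i] <= a[j]:
                    a[k] = aux[i]; i += 1
                else:
                    a[k] = a[j]; j += 1
                k += 1
            while i < len(aux):                 # right side exhausted; copy the rest
                a[k] = aux[i]; i += 1; k += 1

    The writer index k can never catch up with the reader index j while unread right-block elements remain, which is why the single M-sized buffer suffices.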

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint. Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

    Parallel.Invoke(
        () => {
            Console.WriteLine("Action 1 executing in thread {0}",
                              Thread.CurrentThread.ManagedThreadId);
        },
        () => {
            Console.WriteLine("Action 2 executing in thread {0}",
                              Thread.CurrentThread.ManagedThreadId);
        },
        () => {
            Console.WriteLine("Action 3 executing in thread {0}",
                              Thread.CurrentThread.ManagedThreadId);
        }
    );

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let's look at this simple, sequential implementation of quicksort:

    public static void QuickSort<T>(T[] array) where T : IComparable<T>
    {
        QuickSortInternal(array, 0, array.Length - 1);
    }

    private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T>
    {
        if (left >= right)
        {
            return;
        }
        SwapElements(array, left, (left + right) / 2);
        int last = left;
        for (int current = left + 1; current <= right; ++current)
        {
            if (array[current].CompareTo(array[left]) < 0)
            {
                ++last;
                SwapElements(array, last, current);
            }
        }
        SwapElements(array, left, last);
        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }

    static void SwapElements<T>(T[] array, int i, int j)
    {
        T temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

    // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );
    }

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right-left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how "deep" in the "tree" we currently are, and if we are below some number of levels, stop parallelizing. This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

    // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }
    }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better. On average, I get the following timings:
    Framework via Array.Sort: 7.3 seconds
    Serial quicksort implementation: 9.3 seconds
    Naive parallel implementation: 14 seconds
    Parallel implementation restricting depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

  • Algorithm to use for shop floor layout?

    - by jkohlhepp
    I ran into a classroom problem yesterday (business-oriented class, not computer science) and I found it interesting from an algorithmic perspective. The problem goes something like this: assume there is a shop floor with N different rooms, and you have N different departments that need to go in those rooms. The departments and the rooms are all the same size, so any department could go in any room. There is a known travel distance from each room to each other room. There is also a known number of trips necessary from one department to another (trips are counted the same regardless of direction, so a trip from A to B is equivalent to a trip from B to A). Given those inputs, determine a layout of departments into rooms which minimizes travel time. What is the best way to approach this problem algorithmically? Is there already a particular algorithm or class of algorithms designed to solve this type of problem? Does this type of problem have a name in computer science? I am not looking for you to design an algorithm to solve this, although feel free to do so if you would like; I'm wondering whether this is a problem space that has already been well defined and studied algorithmically, and if so, to get some links for further research. I can see a lot of different data structures and algorithms that might apply, and I'm curious which approach would be "best". And don't worry, you are not doing my homework for me. This is not a homework problem per se, as this is a business course and we were simply discussing the concepts, not trying to solve the problem algorithmically.
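
    For what it's worth, this is the classic facility-layout formulation of the quadratic assignment problem (QAP), which is NP-hard; the objective is just the sum over department pairs of trips times distance under a room assignment. A brute-force baseline makes the objective concrete (feasible for small N only):

        from itertools import permutations

        def travel_cost(assign, trips, dist):
            # assign[d] = room of department d; trips and dist are N x N matrices
            n = len(assign)
            return sum(trips[i][j] * dist[assign[i]][assign[j]]
                       for i in range(n) for j in range(i + 1, n))

        def best_layout(trips, dist):
            n = len(trips)
            return min(permutations(range(n)),
                       key=lambda p: travel_cost(p, trips, dist))

    Beyond toy sizes, the usual tools are local search and metaheuristics, e.g. swapping pairs of departments under simulated annealing or tabu search.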

    Read the article

  • Why is the use of abstractions (such as LINQ) so taboo?

    - by Matthew Patrick Cashatt
    I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy, and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual, so I politely asked my last interviewer for some constructive feedback, and he delivered! The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense – in fact, it made the other rejections make sense too, because I talked about LINQ in those interviews as well, and it didn't seem that the interviewers knew much about it (even though they were .NET guys). So now I am left with this question: if we are supposed to be "standing on the shoulders of giants" and using abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code "off the shelf" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it is an abstraction, is simply an abstraction of all the same algorithms one would write to accomplish exactly the same end. Only a performance test could tell you whether your custom approach was better, but if something like LINQ met the requirements, why bother writing your own classes in the first place? I don't mean to focus on LINQ here; I am sure the Java world has something comparable. I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write. UPDATE: As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try to make use of it? Is it possible that people just don't fully understand what it does?

    Read the article

  • Efficient algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: the brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto other PMs using the First Fit Decreasing algorithm; alternatively, we can select victim PMs and target PMs based on their load level and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on data from Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand much of it. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please clarify the solution or suggest any new approach/resources for better performance? I am basically searching for algorithms, not physical optimizations, and I also know that many commercial organizations provide these solutions; I just want to know more about the underlying algorithms. Thanks in advance.
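
    To make the greedy baseline concrete, here is a sketch of the First Fit Decreasing evacuation step mentioned above (the data layout is my assumption): try to empty a victim PM by repacking its VMs, largest RAM first, into the spare capacity of the target PMs; if every VM fits, the victim can be shut down.

        def try_evacuate(victim_vms, targets):
            # victim_vms: list of (ram, cpu) requirements
            # targets: list of [spare_ram, spare_cpu] per candidate PM (mutated)
            # Returns a plan of (vm_index, target_index) pairs, or None on failure.
            plan = []
            order = sorted(range(len(victim_vms)),
                           key=lambda i: victim_vms[i][0], reverse=True)
            for i in order:
                ram, cpu = victim_vms[i]
                for t in range(len(targets)):
                    if ram <= targets[t][0] and cpu <= targets[t][1]:  # first fit
                        targets[t][0] -= ram
                        targets[t][1] -= cpu
                        plan.append((i, t))
                        break
                else:
                    return None          # some VM fits nowhere; abort this victim
            return plan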

    Read the article
