Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.

Page 3 of 55

  • Handle complexity in large software projects

    - by Oliver Vogel
    I am a lead developer on a larger software project. From time to time it gets hard to handle the complexity within this project, e.g. keeping the whole big picture in mind at all times, keeping track of teammates' work results, doing code reviews, supplying management with information, and so on. Are there best practices or time-management techniques for handling these tasks? Are there any tools that help you keep an overview?

    Read the article

  • Latest Fusion DOO White Paper - Overcoming Order Management Complexity in Global Organizations

    - by Pam Petropoulos
    Check out this latest Fusion Distributed Order Orchestration white paper, entitled “Overcoming Order Management Complexity in Global Organizations”. Discover how Oracle Fusion DOO enables large, complex organizations to streamline their order management processes and take advantage of lower costs, higher margins, and improved customer service. Click here to read the white paper.

    Read the article

  • What is the complexity of the below code with respect to memory?

    - by Cshah
    Hi, I read about Big-O notation from here and had a few questions on calculating complexity. For the code below I have worked out the complexity and need your input on it.

    private void reverse(String strToRevers) {
        if (strToRevers.length() == 0) {
            return;
        } else {
            reverse(strToRevers.substring(1));
            System.out.print(strToRevers.charAt(0));
        }
    }

    If the memory factor is considered, then the complexity of the above code for a string of n characters is O(n^2). The explanation: for a string consisting of n characters, the function is called recursively n-1 times, and each call creates a string of a single character (strToRevers.charAt(0)). Hence it is n*(n-1)/2, which translates to O(n^2). Let me know if this is right?
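    A rough memory accounting, as a sketch (my own note; it assumes substring copies its characters, as current Java versions do): at the deepest point of the recursion there are n live stack frames, and the frame that received a string of length k still holds it, so the live strings have lengths n, n-1, ..., 1 and the peak extra space is

        n + (n-1) + ... + 1 = n(n+1)/2 = O(n^2),

    while the call stack itself accounts for only O(n) of that; on an implementation where substring shares the backing array, the whole thing would stay at O(n).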

    Read the article

  • How to Edit Domain Password Complexity?

    - by Milad K. Awawdeh
    Hi all. :) My domain environment is: 2 domain controllers (main and secondary), DHCP, a mail server, an Internet server with ISA Server, and 2 DNS servers (primary and secondary). My problem: I tried to remove the password complexity requirement on both domain controllers, but I still receive an error message that the password doesn't meet the complexity requirements. I ran gpupdate /force after disabling password complexity and checked the other conditions. Does anyone know why? I am using Windows Server 2003 standalone.

    Read the article

  • Video White Paper: Mega-Project Management: Reducing Risk & Complexity across the Value Chain

    - by Melissa Centurio Lopes
    Watch this short video white paper to learn how Oracle Primavera can help you keep projects on track and protect your investments. You can also download the full white paper, “Mega-Project Management: Reducing Risk & Complexity Across the Value Chain”, to gain more in-depth information about strategies for collaborating and sharing information and data in a systematic way across the value chain. Download the white paper to learn how your company can get the expected payoff from your next mega project. Register now to download the full complimentary white paper and discover how to improve decision-making and accountability through enterprise-wide visibility, workflows, and collaboration, and reduce financial and performance risk.

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points. They actually represent the separation of land and water on a map. I don't have complete polygons, just the ordered set of points. I'm looking for a neat but efficient way to visually convey Side A and Side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a glow-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there or a visual effect I haven't considered.

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually more than one way to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were simpler, you would have fewer choices. Sometimes that's a good thing: just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty trade-off that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing!

    Read the article

  • Complexity of defense AI

    - by Fredrik Johansson
    I have a non-released game, and currently it's only possible to play it against another human being. As the game rules are made up by me, I think it would be great if new players could learn the basic gameplay by playing against an AI opponent. I mean, it's not like tennis, where the majority knows at least the fundamental rules. On the other hand, I'm a bit concerned that this AI implementation could be quite complex. I hope you can help me with a complexity estimation. I've tried to summarize the gameplay below. Is this defense AI very hard to do? Basic defense gameplay: the player Defender can move within his land, i.e. inside a random, non-convex polygon. This land also contains obstacles modeled as polygons, which Defender has to move around. The player Attacker also has a land, modeled as another such polygon. Assume that Defender shall defend against Attacker. Attacker will throw a thingy towards Defender's land. To be rewarded, Attacker wants to hit Defender's land, and Defender will want to strike the thingy away from his land before it stops, to prevent Attacker from scoring. To feint Defender, Attacker might run around within his land before the throw, and based on these attacker movements Defender shall continuously move to the best defense position within his land.

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy. Scenario 1: Now I have a process of reasonable complexity that can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and whatnot will all be the same—but with different content in each. The problem that I am running across is that the file in the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository. Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So, now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops). The problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can. The question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.

    Read the article

  • What guarantees are there on the run-time complexity (Big-O) of LINQ methods?

    - by tzaman
    I've recently started using LINQ quite a bit, and I haven't really seen any mention of run-time complexity for any of the LINQ methods. Obviously, there are many factors at play here, so let's restrict the discussion to the plain IEnumerable LINQ-to-Objects provider. Further, let's assume that any Func passed in as a selector / mutator / etc. is a cheap O(1) operation. It seems obvious that all the single-pass operations (Select, Where, Count, Take/Skip, Any/All, etc.) will be O(n), since they only need to walk the sequence once; although even this is subject to laziness. Things are murkier for the more complex operations; the set-like operators (Union, Distinct, Except, etc.) work using GetHashCode by default (afaik), so it seems reasonable to assume they're using a hash-table internally, making these operations O(n) as well, in general. What about the versions that use an IEqualityComparer? OrderBy would need a sort, so most likely we're looking at O(n log n). What if it's already sorted? How about if I say OrderBy().ThenBy() and provide the same key to both? I could see GroupBy (and Join) using either sorting, or hashing. Which is it? Contains would be O(n) on a List, but O(1) on a HashSet - does LINQ check the underlying container to see if it can speed things up? And the real question - so far, I've been taking it on faith that the operations are performant. However, can I bank on that? STL containers, for example, clearly specify the complexity of every operation. Are there any similar guarantees on LINQ performance in the .NET library specification?

    Read the article

  • Form, function and complexity in rule processing

    Tim Bass posted on Orwellian Event Processing. I was involved in a heated exchange in the comments, and he has more recently published a post entitled Disadvantages of Rule-Based Systems (Part 1). Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule processing and an explanation...

    Read the article

  • Python — Time complexity of built-in functions versus manually-built functions in finite fields

    - by stackuser
    Generally, I'm wondering about the advantages versus disadvantages of using the built-in arithmetic functions versus rolling your own in Python. Specifically, I'm taking in GF(2) finite field polynomials in string format, converting them to base-2 values, performing arithmetic, then outputting them back as polynomials in string format. So a small example of this, for multiplication:

    Rolling my own:

    def multiply(a, b):
        bitsa = reversed("{0:b}".format(a))
        g = [(b << i) * int(bit) for i, bit in enumerate(bitsa)]
        return reduce(lambda x, y: x + y, g)

    Versus the built-in:

    def multiply(a, b):
        # a, b are GF(2) polynomials in binary form ....
        return a * b  # returns product of 2 polynomials in gf2

    Currently, operations like multiplicative inverse (with, for example, 20-bit exponents) take a long time to run in my program, as it's using all of Python's built-in mathematical operations like // floor division and % modulus, etc., as opposed to making my own division, remainder, etc. I'm wondering how much of a gain in efficiency and performance I can get by building these manually (as shown above). I realize the gains depend on how well the manual versions are built; that's not the question. I'd like to find out 'basically' how much advantage there is over the built-ins. So for instance, if multiplication (as in the example above) is well suited to base-10 (decimal) arithmetic but has to jump through more hoops to change bases to binary and then even more hoops in operating (so it's lower efficiency), that's what I'm wondering. I'm wondering if it's possible to bring the time down significantly by building these functions myself, in ways that maybe some professionals here have already come across.
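    A minimal timing sketch (my own suggestion, not from the question) for putting a number on the gap on your own machine; it assumes Python 3, where reduce lives in functools, and uses arbitrary example operands:

    import timeit
    from functools import reduce  # a builtin on Python 2, imported from functools on Python 3

    def multiply_manual(a, b):
        # Shift-and-add version from the question.
        bitsa = reversed("{0:b}".format(a))
        g = [(b << i) * int(bit) for i, bit in enumerate(bitsa)]
        return reduce(lambda x, y: x + y, g)

    def multiply_builtin(a, b):
        return a * b

    a, b = 0b10011011, 0b11001010  # arbitrary 8-bit example operands

    print(timeit.timeit(lambda: multiply_manual(a, b), number=100000))
    print(timeit.timeit(lambda: multiply_builtin(a, b), number=100000))

    On a typical CPython build the built-in multiply should win by a wide margin, since the manual version pays Python-level loop and string-formatting overhead per bit, but the harness lets you measure it for your own operand sizes.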

    Read the article

  • Evolution of mainstream programming languages: simplicity versus complexity.

    - by Giorgio
    I had posted this question on http://stackoverflow.com but it was suggested that it may be more appropriate to post it on this forum. I did a quick search on this site and it seems to me that this question has not been asked yet. Please give me a hint if the topic has already been raised by someone else. Update: I have rephrased this question, removed personal opinions and made it shorter. I hope that in this way it is better suited for this forum. Looking at the recent development of Java (Java 7) and C++ (C++0x), I see that new features are being added to these languages. For sure this makes it easier to use certain programming idioms, adding to the productivity of developers. On the other hand, there might be the following risks: a language becomes too big, complex, and difficult to understand; it lacks coherence in its design, e.g. if it mixes different paradigms like object orientation and functional programming, which might not fit well together. Questions: What is more important to you as a developer: to have a rich language that captures a large collection of programming idioms, or to have a small language that aims at coherence and simplicity (of course, with a good deal of libraries and tools accompanying it)? Or is it possible to have both? With respect to these issues: how do you judge the current evolution of mainstream programming languages like Java or C++? Are they becoming too complex, less intuitive? Do they have enough features? Do they need more? Are they still easy enough to understand and use?

    Read the article

  • Is Domain Driven Design useful / productive for not so complex domains?

    - by Elijah
    When assessing a potential project at work, I suggested that it might be advantageous to use a domain-driven design approach for its object model. The project does not have an excessively complex domain, so my coworker threw this at me: it has been said that DDD is favorable in instances where there is a complex domain model (“...It applies whenever we are operating in a complex, intricate domain” - Eric Evans). What I'm lost on is: how do you define the complexity of a domain? Can it be defined by the number of aggregate roots in the domain model? Is the complexity of a domain in the interaction of its objects? The domain that we are assessing is related to online publishing and content management.

    Read the article

  • Web Service Standard Complexity

    Are we over-standardizing web services and hindering their adoption? No, and in fact I feel that standardization is helping their adoption in the modern corporate world. Standards, although they can be daunting and tedious, provide a universal framework within which we can all operate. These frameworks provide a common interface for all of us to use when interacting with various computing environments, so that data can be transferred freely. Standards are protocols by which computers communicate with one another. If we take this to the living world, the United Nations hires interpreters for each country's dignitaries so that they can understand what the other countries are talking about. Imagine if the president of the United States wanted to talk to the ruler of China. How would these two communicate? The interpreter would translate back and forth, acting as an intermediary using both standard American English and Chinese. Without knowing the standards of either language, no one would be able to communicate. Working within the framework of standards does not mean that we are stuck with those standards. As technology evolves, standards fall out of touch, and when this occurs they need to be refactored or replaced with new standards that are current with the technology of the time. How else are we as developers, and the technology itself, going to grow? What do you guys think?

    Read the article

  • The Complexity of SEO - Made Easy For You

    The very first thing in the search engine optimization process is deciding which keywords you want to optimize your website for; in other words, which keywords you want your website to be ranked for. You want keywords that have a large number of searches but few competing results. The easiest and most effective way to find these is to look into Google's keyword tool.

    Read the article

  • Array sum of differences of the k smallest elements

    - by prateeak ojha
    Hi, I came across this challenge on an online programming site. The task is simple: we are given two values, N and K, where N is the length of an array, and we have to find the sum of differences of the K smallest elements of the array. Sample input: 10 4 followed by 1 2 3 4 10 20 30 40 100 200. Sample output: 10. Explanation: N = 10 is the size of the array and K = 4; the next N values are the array's elements. To find the sum of differences we take the K smallest values from the array, which are 1, 2, 3, 4, and then perform the operation (sum of differences): |1-2| + |1-3| + |1-4| + |2-3| + |2-4| + |3-4| = 10. I tried solving the problem and found a way to solve it with N^3 complexity, but my solution was rejected. I need an approach that solves the problem with N complexity and I still can't figure out a way. I looked at some solutions but couldn't find the exact approach. If anybody has a better idea and would like to share it, it would be appreciated. Thanks in advance.
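    A sketch of one common approach (my own suggestion, not from the challenge): sort and keep the K smallest values; in sorted order each value b[i] is the larger element in i of the pairs and the smaller element in (K-1-i) of them, so the whole sum falls out of one pass. The sort makes this O(N log N); a selection step (e.g. heapq.nsmallest) would bring it closer to O(N).

    def sum_of_differences(values, k):
        # Sum of |x - y| over all pairs among the k smallest elements.
        smallest = sorted(values)[:k]
        total = 0
        for i, v in enumerate(smallest):
            # In sorted order, v is >= the i elements before it and
            # <= the (k - 1 - i) elements after it.
            total += v * i - v * (k - 1 - i)
        return total

    print(sum_of_differences([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4))  # prints 10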

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:

    merge :: Tree a -> Tree a -> Tree a   --// O(log n) where n is size of input trees
    singleton :: a -> Tree a              --// O(1)
    empty :: Tree a                       --// O(1)

    listToTree :: [a] -> Tree a           --// Supposedly O(n)
    listToTree = listToTreeR . (map singleton)

    listToTreeR :: [Tree a] -> Tree a
    listToTreeR [] = empty
    listToTreeR (x:[]) = x
    listToTreeR xs = listToTreeR (mergePairs xs)

    mergePairs :: [Tree a] -> [Tree a]
    mergePairs [] = []
    mergePairs (x:[]) = [x]
    mergePairs (x:y:xs) = merge x y : mergePairs xs

    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list, and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step, and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
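    One way to set up the sum, as a sketch (my own note; it indexes the rounds the other way, so that round h = 1 is the first call to mergePairs on the n singletons): at round h there are about n/2^(h-1) trees of size 2^(h-1) each, so mergePairs performs about n/2^h merges, each costing O(log 2^h) = O(h). Over the roughly log n rounds the total work is

        sum_{h=1}^{log n} (n / 2^h) * O(h) = O(n * sum_{h >= 1} h / 2^h) = O(n),

    since the series sum h/2^h converges (to 2); the expensive merges near the end are also the rare ones.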

    Read the article

  • Big O complexity of a simple for loop: not always linear?

    - by i30817
    I'm sure most of you know that a nested loop has O(n^2) complexity if the function's input size is n:

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            ...
        }
    }

    I think that this is similar, by an analogous argument, but I'm not sure; can anyone confirm?

    for (int i = 0, max = n * n; i < max; i++) {
        ...
    }

    If so, I guess that there are some kinds of code whose big-O mapping is not immediately obvious, besides recursion and subroutines.
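    A one-line count, for what it's worth (my own note): the single loop's trip count is

        max = n * n = n^2 iterations,

    so with respect to the function's input size n it is O(n^2), exactly like the nested version; the loop is only linear in its own bound max, not in n.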

    Read the article

  • Do "if" statements affect the time complexity analysis?

    - by FranXh
    According to my analysis, the running time of this algorithm should be N^2, because each of the loops goes once through all the elements. I am not sure whether the presence of the if statement changes the time complexity?

    for (int i = 0; i < N; i++) {
        for (int j = 1; j < N; j++) {
            System.out.println("Yayyyy");
            if (i <= j) {
                System.out.println("Yayyy not");
            }
        }
    }
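    A quick count, as a sketch (my own note, not part of the question): the guard only decides whether its body runs; it does not add iterations. The two loops execute

        N * (N - 1) iterations in total, of which sum_{j=1}^{N-1} (j + 1) = (N - 1)(N + 2) / 2 satisfy i <= j,

    so both print statements run on the order of N^2 times (assuming each body is O(1)) and the if does not change the asymptotic complexity. A guard whose condition is expensive to evaluate, or one whose body contains a break or return, could.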

    Read the article

  • Where does complexity bloat come from?

    - by AareP
    Many of our design decisions are based on gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified: we have plenty of painful experience with throwing away deprecated code. At other times we learn that some particular task isn't really as complex as we thought it to be. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program isn't really adding complexity to your system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, code line count, code lines per function, or something else?

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04, running on a 4 GB RAM laptop with an Intel DUO CPU at 2 GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

    void merge_sort(const char **lines, int start, int end);
    void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i such that start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2), I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well; for 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort. So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its complexity is quadratic will become evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e.

    merge_sort(lines, start, (start + end) / 2);
    merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

    start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

    quick_sort(lines, start, pivotIndex - 1);
    quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory, therefore it is more memory efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect a better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in-place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation, I have uploaded the source code to github (in case you would like to take a look at it).
Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
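    One worked recurrence that may be relevant here, as a sketch (my own note, not from the thread): with the last element as pivot, an already-sorted range, or a run of keys equal to the pivot (which all land on the "<= pivot" side), produces the most unbalanced split possible on every call, so the running time satisfies

        T(n) = T(n-1) + O(n) = O(n^2).

    Files of millions of random strings averaging 4.5 characters are likely to contain many duplicate strings, so long runs of equal keys can push a last-element-pivot implementation toward this worst case; randomized or median-of-three pivot selection, combined with three-way partitioning of elements equal to the pivot, is the usual remedy.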

    Read the article

  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the method presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java

    12. countTrees() Solution (Java)

    /**
     For the key values 1...numKeys, how many structurally unique
     binary search trees are possible that store those keys?

     Strategy: consider that each value could be the root.
     Recursively find the size of the left and right subtrees.
    */
    public static int countTrees(int numKeys) {
        if (numKeys <= 1) {
            return(1);
        }
        else {
            // there will be one value at the root, with whatever remains
            // on the left and right each forming their own subtrees.
            // Iterate through all the values that could be the root...
            int sum = 0;
            int left, right, root;
            for (root = 1; root <= numKeys; root++) {
                left = countTrees(root - 1);
                right = countTrees(numKeys - root);
                // number of possible trees with this root == left*right
                sum += left * right;
            }
            return(sum);
        }
    }

    I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
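    For reference, a sketch of the two quantities involved (my own note, not from the question). The value the code returns satisfies

        C(0) = C(1) = 1,   C(n) = sum_{root=1}^{n} C(root-1) * C(n-root),

    which is the Catalan number recurrence, so countTrees(n) = (2n choose n) / (n+1), growing roughly like 4^n / n^(3/2). The running time of the un-memoized recursion is governed by T(0) = T(1) = 1 and T(n) = 1 + sum_{root=1}^{n} (T(root-1) + T(n-root)) = 1 + 2 * sum_{k=0}^{n-1} T(k), which works out to on the order of 3^n calls: exponential, though well short of n!. Memoizing on numKeys (or filling the table bottom-up) brings the whole computation down to O(n^2) arithmetic operations.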

    Read the article