Search Results

Search found 1366 results on 55 pages for 'complexity'.


  • Complexity of a web application

    - by Dominik G
    I am currently writing my Master's thesis on the maintainability of web applications. I found some methods like the "Maintainability Index" by Coleman et al. and the "Software Maintainability Index" by Muthanna et al. For both of them one needs to calculate the cyclomatic complexity. So my question is: is it possible to measure the cyclomatic complexity of a web application? In my opinion there are three parts to a web application: server code (PHP, C#, Python, Perl, etc.), client code (JavaScript), and HTML (links and forms as operators, GET parameters and form fields as operands?). What do you think? Is there another point of view on the complexity of web applications? Did I miss something?
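    As a rough illustration of the kind of measurement being asked about, the sketch below approximates McCabe's metric (complexity is roughly the number of decision points plus one) by counting decision tokens in each layer. The keyword lists and file extensions are illustrative assumptions, not a validated operator/operand set.

        import re

        # Decision tokens per layer -- illustrative only, not a validated metric.
        DECISIONS = {
            ".py":   [r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\belif\b", r"\bexcept\b"],
            ".js":   [r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\bcase\b", r"\bcatch\b", r"&&", r"\|\|"],
            ".html": [r"<a\s", r"<form\s", r"<input\s"],   # links/forms treated as branch points
        }

        def approx_cyclomatic(path, ext):
            """McCabe-style approximation: number of decision points + 1."""
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
            return 1 + sum(len(re.findall(p, text)) for p in DECISIONS[ext])

        # e.g. approx_cyclomatic("views.py", ".py") for the server-side layer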

    Read the article

  • Linear complexity and quadratic complexity

    - by jasonline
    I'm just not sure... Suppose you have code that can be executed with either of the following complexities: a sequence of O(n) passes (for example, two O(n) loops in sequence), or a single O(n²) pass. The preferred version would be the one that runs in linear time. But could there be a point where the sequence of O(n) passes becomes too long and the O(n²) version would be preferred? In other words, is C·O(n) < O(n²) always true for any constant C? Why or why not? What factors would make it better to choose the O(n²) version?
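    A quick way to see the trade-off numerically: for any fixed constant C, C·n eventually falls below n², but for small inputs the quadratic version can do less work. A minimal sketch (the constant c = 50 is an arbitrary assumption):

        def linear_cost(n, c=50):      # e.g. c sequential O(n) passes
            return c * n

        def quadratic_cost(n):         # one O(n^2) pass
            return n * n

        for n in (10, 50, 100, 1000):
            print(n, linear_cost(n), quadratic_cost(n))
        # For n < c the quadratic count is smaller; once n > c the linear version
        # wins and the gap keeps growing -- asymptotics only describe large n.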

    Read the article

  • When should complexity be removed?

    - by ElGringoGrande
    Prematurely introducing complexity by implementing design patterns before they are needed is not good practice. But if you follow all (or even most) of the SOLID principles and use common design patterns, you will introduce some complexity as features and requirements are added or changed, to keep your design as maintainable and flexible as needed. However, once that complexity is introduced and working like a champ, when do you remove it? Example: I have an application written for a client. When originally created, there were several ways to give raises to employees. I used the strategy pattern and a factory to keep the whole process nice and clean. Over time certain raise methods were added or removed by the application owner. Time passes and a new owner takes over. This new owner is hard-nosed, keeps everything simple and has only one single way to give a raise. The complexity needed by the strategy pattern is no longer needed. If I were to code this from the requirements as they are now, I would not introduce this extra complexity (but I would make sure I could introduce it with little or no work should the need arise). So do I remove the strategy implementation now? I don't think this new owner will ever change how raises are given. But the application itself has demonstrated that this could happen. Of course this is just one example in an application where a new owner has taken over and simplified many processes. I could remove dozens of classes, interfaces and factories and make the whole application much simpler. Note that the current implementation works just fine and the owner is happy with it (and surprised, and even happier, that I was able to implement her changes so quickly because of the discussed complexity). I admit that a small part of this doubt is because it is highly likely the new owner isn't going to use me any longer. I don't really care that somebody else will take this over, since it has not been a big income generator. But I do care about two (related) things. I care a bit that the next maintainer will have to think a bit harder when trying to understand the code. Complexity is complexity and I don't want to anger the psycho maniac coming after me. But even more I worry about a competitor seeing this complexity and thinking I just implement design patterns to pad my hours on jobs, then spreading this rumor to hurt my other business. (I have heard this mentioned.) So... in general, should previously needed complexity be removed even though it works and there has been a historically demonstrated need for it, but you have no indication that it will be needed in the future? Even if the answer to that is generally "no", is it wise to remove this "un-needed" complexity when handing off the project to a competitor (or stranger)?
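    To make the example concrete, here is a minimal sketch of the two shapes being weighed against each other; the class and method names are hypothetical, not the poster's actual code.

        from abc import ABC, abstractmethod

        # The "flexible" shape: strategy + factory, useful while several raise types existed.
        class RaiseStrategy(ABC):
            @abstractmethod
            def apply(self, salary: float) -> float: ...

        class FlatPercentRaise(RaiseStrategy):
            def __init__(self, pct: float):
                self.pct = pct
            def apply(self, salary: float) -> float:
                return salary * (1 + self.pct)

        def raise_factory(kind: str) -> RaiseStrategy:
            return {"flat": FlatPercentRaise(0.03)}[kind]   # only one entry left today

        # The "simple" shape the current requirements actually need.
        def give_raise(salary: float) -> float:
            return salary * 1.03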

    Read the article

  • Finding complexity of a program as a service [on hold]

    - by Seshu
    I would like to find the complexity of a specific code chunk written in Java. Is there a place/website/service where I can find out the complexity of an arbitrary program? This program might include loops/recursion. Using theory we can compute the complexity ourselves, but I'm just curious whether any service is out there to find such complexity. We have several code-quality tools; will any of them also find the complexity of given code? Could anyone point me to such a utility/site/service?
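    There is no fully general automatic answer (deciding the asymptotic complexity of arbitrary code is undecidable in general), but a common rough substitute is empirical: run the code at doubled input sizes and look at the growth ratio of the timings. A minimal sketch, where run(n) is a hypothetical stand-in for the code being measured, not a real service:

        import time

        def run(n):
            # Hypothetical stand-in for the code under measurement.
            total = 0
            for i in range(n):
                for j in range(n):
                    total += i * j
            return total

        def doubling_ratios(sizes=(250, 500, 1000, 2000)):
            times = []
            for n in sizes:
                t0 = time.perf_counter()
                run(n)
                times.append(time.perf_counter() - t0)
            # Ratio near 2 suggests O(n), near 4 suggests O(n^2), near 8 suggests O(n^3), ...
            return [t2 / t1 for t1, t2 in zip(times, times[1:])]

        print(doubling_ratios())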

    Read the article

  • Time complexity for Search and Insert operations in sorted and unsorted arrays that include duplicates

    - by iecut
    1) For a sorted array I have used binary search. We know that the worst-case complexity for the SEARCH operation in a sorted array is O(lg N) with binary search, where N is the number of items in the array. What is the worst-case complexity for search in an array that includes duplicate values, using binary search? Will it still be O(lg N)? Please correct me if I am wrong. Also, what is the worst case for the INSERT operation in a sorted array using binary search? My guess is O(N); is that right? 2) For an unsorted array I have used linear search. Now we have an unsorted array that also accepts duplicate elements/values. What are the worst-case complexities for both SEARCH and INSERT? I think linear search gives us O(N) worst-case time for both operations. Can we do better than this for an unsorted array, and does the complexity change if we accept duplicates in the array?
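    For the duplicate-values part, a leftmost-match (lower-bound) binary search still inspects O(lg N) elements; a minimal sketch of that variant (not necessarily what the poster implemented):

        def lower_bound(a, target):
            """Index of the first element >= target in sorted list a; O(log n) even with duplicates."""
            lo, hi = 0, len(a)
            while lo < hi:
                mid = (lo + hi) // 2
                if a[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid
            return lo

        def insert_sorted(a, value):
            """Find the slot in O(log n), but shifting elements makes the insert O(n) overall."""
            a.insert(lower_bound(a, value), value)

        nums = [1, 3, 3, 3, 7, 9]
        print(lower_bound(nums, 3))   # 1 -- first of the duplicates
        insert_sorted(nums, 5)
        print(nums)                   # [1, 3, 3, 3, 5, 7, 9]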

    Read the article

  • Is there a correlation between complexity and reachability?

    - by Saladin Akara
    I've been studying cyclomatic complexity (McCabe) and reachability of software at uni recently. Today my lecturer said that there's no correlation between the two metrics, but is this really the case? I'd think there would definitely be some correlation, as less complex programs (from the scant few we've looked at) seem to have 'better' results in terms of reachability. Does anyone know of any attempt to look at the two metrics together, and if not, what would be a good place to find data on both complexity and reachability for a large(ish) number of programs? (As clarification, this isn't a homework question. Also, if I've put this in the wrong place, let me know.)

    Read the article

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but widely shared experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally get everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis. The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these types of functional changes whose need arises - often as a result of environmental factors rather than specification change - later in the development process or as a result of testing? How do you balance the premature optimisation / YAGNI / over-engineering risks of designing a solution that mitigates possible but not necessarily probable issues, against developing a simpler and easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality?
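    The architectural fix described (move slow remote calls onto their own threads and serve cached results that refresh in the background) can be sketched roughly like this; the fetch callable and refresh interval are placeholder assumptions, not the poster's actual design:

        import threading, time

        class BackgroundCache:
            """Serve the last known value immediately; refresh it on a worker thread."""
            def __init__(self, fetch, interval=60.0):
                self._fetch = fetch            # slow call, e.g. a remote SOAP request
                self._interval = interval
                self._value = None
                self._lock = threading.Lock()
                threading.Thread(target=self._refresh_loop, daemon=True).start()

            def _refresh_loop(self):
                while True:
                    result = self._fetch()     # may take seconds; no page request blocks on it
                    with self._lock:
                        self._value = result
                    time.sleep(self._interval)

            def get(self):
                with self._lock:
                    return self._value         # stale-but-fast; None until the first refresh completes

        # usage sketch: cache = BackgroundCache(lambda: call_remote_service(), interval=30)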

    Read the article

  • Adding complexity to remove duplicate code

    - by Phil
    I have several classes that all inherit from a generic base class. The base class contains a collection of several objects of type T. Each child class needs to be able to calculate interpolated values from the collection of objects, but since the child classes use different types, the calculation varies a tiny bit from class to class. So far I have copy/pasted my code from class to class and made minor modifications to each. But now I am trying to remove the duplicated code and replace it with one generic interpolation method in my base class. However that is proving to be very difficult, and all the solutions I have thought of seem way too complex. I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy. How much complexity is too much when trying to remove code duplication? EDIT: The best solution I can come up with goes something like this:

    Base class:

        protected T GetInterpolated(int frame)
        {
            var index = SortedFrames.BinarySearch(frame);
            if (index >= 0)
                return Data[index];

            index = ~index;
            if (index == 0)
                return Data[index];
            if (index >= Data.Count)
                return Data[Data.Count - 1];

            return GetInterpolatedItem(frame, Data[index - 1], Data[index]);
        }

        protected abstract T GetInterpolatedItem(int frame, T lower, T upper);

    Child class A:

        public IGpsCoordinate GetInterpolatedCoord(int frame)
        {
            ReadData();
            return GetInterpolated(frame);
        }

        protected override IGpsCoordinate GetInterpolatedItem(int frame, IGpsCoordinate lower, IGpsCoordinate upper)
        {
            double ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var x = GetInterpolatedValue(lower.X, upper.X, ratio);
            var y = GetInterpolatedValue(lower.Y, upper.Y, ratio);
            var z = GetInterpolatedValue(lower.Z, upper.Z, ratio);
            return new GpsCoordinate(frame, x, y, z);
        }

    Child class B:

        public double GetMph(int frame)
        {
            ReadData();
            return GetInterpolated(frame).MilesPerHour;
        }

        protected override ISpeed GetInterpolatedItem(int frame, ISpeed lower, ISpeed upper)
        {
            var ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var mph = GetInterpolatedValue(lower.MilesPerHour, upper.MilesPerHour, ratio);
            return new Speed(frame, mph);
        }

    Read the article

  • What is the Big-O time complexity of this algorithm

    - by grebwerd
    I was wondering what the run time of this small program would be:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char* argv[])
        {
            int i;
            int j;
            int inputSize;
            int sum = 0;

            if (argc == 1)
                inputSize = 16;
            else
                inputSize = atoi(argv[1]);

            for (i = 1; i <= inputSize; i++) {
                for (j = i; j < inputSize; j *= 2) {
                    printf("The value of sum is %d\n", ++sum);
                }
            }
        }

    I came up with Σ_{i=1}^{n} floor(log n - log(n - i)), i.e. each summand is the floor of log(n) - log(n - i). Would the run time be n log n?
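    One way to sanity-check the asymptotics without settling the sum by hand is to count the inner-loop iterations directly and watch how the count grows as the input doubles; a small sketch of that check:

        def iteration_count(n):
            """Count how often the inner doubling loop body runs for input size n."""
            count = 0
            for i in range(1, n + 1):
                j = i
                while j < n:
                    count += 1
                    j *= 2
            return count

        for n in (1_000, 2_000, 4_000, 8_000):
            print(n, iteration_count(n), iteration_count(n) / n)
        # If the count per n settles near a constant, the loop is linear in n;
        # if it keeps growing like log n, the n log n guess is the better fit.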

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The question above asks you to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution that solves the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex, and one such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here: runtime complexity, i.e. how fast the code runs, and code complexity, i.e. how difficult the code is to read and understand. The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • Windows Web Server 2008 R2 Server Core local password complexity

    - by Dennis Allen
    How can I disable the local user account password complexity settings on Windows 2008 R2 "Server Core"? I am trying to migrate our Windows 2003 web server to Windows 2008 R2. I am trying to see if I can use the "Server Core" install, and it has been a very search-intensive experience. What I can't find is how to disable password complexity for local user accounts. While our user account generator currently creates nice strong passwords, there was a time when this was not the case, and unfortunately forcing the users to change their passwords is not an option at this time. Any help greatly appreciated. Dennis

    Read the article

  • Big-O complexity of c^n + n*(logn)^2 + (10*n)^c

    - by zebraman
    I need to derive the Big-O complexity of this expression: c^n + n*(log(n))^2 + (10*n)^c where c is a constant and n is a variable. I'm pretty sure I understand how to derive the Big-O complexity of each term individually, I just don't know how the Big-O complexity changes when the terms are combined like this. Ideas? Any help would be great, thanks.
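    A quick numerical probe of which term dominates for sample constants (the values of c tried here are arbitrary assumptions):

        import math

        def terms(n, c):
            # the three terms of c^n + n*(log n)^2 + (10n)^c
            return c ** n, n * math.log(n) ** 2, (10 * n) ** c

        for c in (0.5, 1, 2):                      # arbitrary sample constants
            for n in (10, 100, 1000):
                a, b, d = terms(n, c)
                print(f"c={c} n={n}:  c^n={a:.3g}  n(log n)^2={b:.3g}  (10n)^c={d:.3g}")
        # For c > 1 the exponential term c^n grows fastest; for c <= 1 it is the
        # n*(log n)^2 term that dominates, so the Big-O has to be stated in terms of c.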

    Read the article

  • Using Completed User Stories to Estimate Future User Stories

    - by David Kaczynski
    In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story. Is there a methodology for breaking down the complexity of user stories into quantifiable attributes? For example, User Story X requires a rich, new view in the GUI, but User Story X can perform most of its functionality using existing business logic on the server. On a scale of 1 to 10, User Story X has a complexity of 7 on the client and a complexity of 2 on the server. After User Story X is completed, someone asks how long it would take to complete User Story Y, which has a complexity of 3 on the client and 6 on the server. Looking at how long it took to complete User Story X, we can make an educated estimate of how long it might take to complete User Story Y. I can imagine some other details: the complexity of one attribute (such as complexity on the client) could have sub-attributes, such as the number of steps in a sequence, function points, etc. Several other attributes could be considered as well, such as the programmer's familiarity with the system or the number of components/interfaces involved. These attributes could be accumulated into some sort of user-story checklist. To reiterate: is there an existing methodology for decomposing the complexity of a user story into the complexity of attributes/sub-attributes, or is using completed user stories as indicators when estimating future user stories more of an informal process?

    Read the article

  • Asymptotic complexity of a compiler

    - by Meinersbur
    What is the maximal acceptable asymptotic runtime of a general-purpose compiler? For clarification: the complexity of the compilation process itself, not of the compiled program, as a function of the program size - for instance, the number of source code characters, statements, variables, procedures, basic blocks, intermediate-language instructions, assembler instructions, or whatever. This depends heavily on your point of view, so this is a community wiki. See it from the view of someone who writes a compiler: will the optimisation level -O4 ever be used for larger programs when one of its optimisations takes O(n^6)? Related questions: When is superoptimisation (exponential complexity or even incomputable) acceptable? What is acceptable for JITs? Does it have to be linear? What is the complexity of established compilers? GCC? VC? Intel? Java? C#? Turbo Pascal? LCC? LLVM? (Reference?) If you do not know what asymptotic complexity is: how long are you willing to wait until the compiler has compiled your project? (Scripting languages excluded.)

    Read the article

  • Time complexity of a certain program

    - by HokageSama
    In a discussion with my friend I am not able to predict the correct and tight time complexity of a program. The program is as below.

        /* This function reads input array "input" and updates array "output" in such a way that
           B[i] = index of the nearest greater value from A[i], A[i+1], ..., A[n], for all i in [1, n]
           (here A = input and B = output).
           Time complexity: ?? */
        void createNearestRightSidedLargestArr(int* input, int size, int* output) {
            if (!input || size < 1) return;

            // The last element of output is always -1, since no element is present on its right.
            output[size - 1] = -1;

            int curr = size - 2;
            int trav;
            while (curr >= 0) {
                if (input[curr] < input[curr + 1]) {
                    output[curr] = curr + 1;
                    curr--;
                    continue;
                }
                trav = curr + 1;
                while (output[trav] != -1 && input[output[trav]] < input[curr])
                    trav = output[trav];
                output[curr--] = output[trav];
            }
        }

    I said the time complexity is O(n^2) but my friend insists that this is not correct. What is the actual time complexity?

    Read the article

  • Time complexity with bit cost

    - by Keyser
    I think I might have completely misunderstood bit-cost analysis. I'm trying to wrap my head around studying an algorithm's time complexity with respect to bit cost (instead of unit cost), and it seems to be impossible to find anything on the subject. Is this considered so trivial that no one ever needs to have it explained to them? Well, I do. (Also, there doesn't even seem to be anything on Wikipedia, which is very unusual.) Here's what I have so far: the bit cost of multiplication and division of two numbers with n bits is O(n^2) (in general?). So, for example:

        int number = 2;
        for (int i = 0; i < n; i++) {
            number = i * i;
        }

    has a time complexity with respect to bit cost of O(n^3), because it does n multiplications (right?). But in a regular scenario we want the time complexity with respect to the input. So, how does that scenario work? The number of bits in i could be considered a constant, which would make the time complexity the same as with unit cost, except with a bigger constant (and both would be linear). Also, I'm guessing addition and subtraction can be done in constant time, O(1). I couldn't find any info on it, but it seems reasonable since it's one assembler operation.
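    The O(n^2) figure for multiplying two n-bit numbers comes from the schoolbook shift-and-add method; a small sketch that multiplies via single-bit operations and roughly counts them (faster algorithms such as Karatsuba exist, so O(n^2) is "in general" only for the naive method):

        def schoolbook_multiply(a: int, b: int):
            """Multiply non-negative ints by shift-and-add, counting single-bit work."""
            bit_ops = 0
            result = 0
            shift = 0
            while b:
                if b & 1:
                    result += a << shift        # an n-bit addition ...
                    bit_ops += a.bit_length()   # ... costs roughly n bit operations
                b >>= 1
                shift += 1
                bit_ops += 1                    # inspecting/shifting one bit of b
            return result, bit_ops

        for bits in (8, 16, 32, 64):
            x = (1 << bits) - 1                 # worst-ish case: all bits set
            _, ops = schoolbook_multiply(x, x)
            print(bits, ops)                    # the op count grows roughly with bits^2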

    Read the article

  • Reducing Time Complexity in Java

    - by Koeneuze
    Right, this is from an older exam which I'm using to prepare for my own exam in January. We are given the following method:

        public static void Oorspronkelijk() {
            String bs = "Dit is een boodschap aan de wereld";
            int max = -1;
            char let = '*';
            for (int i = 0; i < bs.length(); i++) {
                int tel = 1;
                for (int j = i + 1; j < bs.length(); j++) {
                    if (bs.charAt(j) == bs.charAt(i)) tel++;
                }
                if (tel > max) {
                    max = tel;
                    let = bs.charAt(i);
                }
            }
            System.out.println(max + " keer " + let);
        }

    The questions are: What is the output? Since the code is just an algorithm to determine the most frequently occurring character, the output is "6 keer " (6 times space). What is the time complexity of this code? Fairly sure it's O(n²), unless someone thinks otherwise? Can you reduce the time complexity, and if so, how? Well, you can. I've received some help already and managed to get the following code:

        public static void Nieuw() {
            String bs = "Dit is een boodschap aan de wereld";
            HashMap<Character, Integer> letters = new HashMap<Character, Integer>();
            char max = bs.charAt(0);
            for (int i = 0; i < bs.length(); i++) {
                char let = bs.charAt(i);
                if (!letters.containsKey(let)) {
                    letters.put(let, 0);
                }
                int tel = letters.get(let) + 1;
                letters.put(let, tel);
                if (letters.get(max) < tel) {
                    max = let;
                }
            }
            System.out.println(letters.get(max) + " keer " + max);
        }

    However, I'm uncertain of the time complexity of this new code: is it O(n) because you only use one for loop, or does the fact that we require the HashMap's get methods make it O(n log n)? And if someone knows an even better way of reducing the time complexity, please do tell! :)

    Read the article

  • What is the complexity of this specialized sort

    - by ADB
    I would like to know the complexity (as in O(...)) of the following sorting algorithm: there are B barrels that contain a total of N elements, spread unevenly across the barrels, and the elements in each barrel are already sorted. The sort combines all the elements from the barrels into a single sorted list: using an array of size B to store the index of the last sorted element of each barrel (starting at 0), check each barrel (at the last stored index) and find the smallest element, copy that element into the final sorted array and increment the array counter, increment the last-sorted index for the barrel we picked from, and perform those steps N times. Or in pseudocode:

        for i from 0 to N
            smallest = MAX_ELEMENT
            foreach b in B
                if bIndex[b] < b.length && b[bIndex[b]] < smallest
                    smallest_barrel = b
                    smallest = b[bIndex[b]]
            result[i] = smallest
            bIndex[smallest_barrel] += 1

    I thought that the complexity would be O(n), but the problem I have with finding the complexity is that if B grows, it has a larger impact than if N grows, because it adds another round in the B loop. But maybe that has no effect on the complexity?

    Read the article

  • Time complexity of Sieve of Eratosthenes algorithm

    - by eSKay
    From Wikipedia: the complexity of the algorithm is O(n (log n)(log log n)) bit operations. How do you arrive at that? That the complexity includes the log log n term tells me that there is a sqrt(n) somewhere. Suppose I am running the sieve on the first 100 numbers (n = 100) and assume that marking a number as composite takes constant time (array implementation). The number of times we use mark_composite() would be something like n/2 + n/3 + n/5 + n/7 + ... + n/97 = O(n), and to find the next prime number (for example, to jump to 7 after crossing out all the numbers that are multiples of 5), the number of operations would be O(n). So, the complexity would be O(n^2). Do you agree?
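    For reference, a minimal array-based sieve like the one being analysed (the marking loop is where the n/2 + n/3 + n/5 + ... work comes from):

        def sieve(n):
            """Return all primes <= n with the Sieve of Eratosthenes."""
            is_prime = [True] * (n + 1)
            is_prime[0] = is_prime[1] = False
            p = 2
            while p * p <= n:               # the scan for the next prime is O(sqrt(n)) in total here
                if is_prime[p]:
                    for multiple in range(p * p, n + 1, p):   # about n/p markings for each prime p
                        is_prime[multiple] = False
                p += 1
            return [i for i, prime in enumerate(is_prime) if prime]

        print(sieve(100))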

    Read the article

  • Time complexity of a sorting algorithm

    - by Passonate Learner
    The two programs below read n integers from a file and compute the sum of the ath to bth integers, q (the number of queries) times. I think the upper program has worse time complexity than the lower one, but I'm having problems calculating the time complexity of these two algorithms.

        [input sample]
        5 3
        5 4 3 2 1
        2 3
        3 4
        2 4

        [output sample]
        7
        5
        9

    Program 1:

        #include <stdio.h>

        FILE *in;
        FILE *out;
        int n, q, a, b, sum;
        int data[1000];

        int main()
        {
            int i, j;
            in = fopen("input.txt", "r");
            out = fopen("output.txt", "w");
            fscanf(in, "%d%d", &n, &q);
            for (i = 1; i <= n; i++)
                fscanf(in, "%d", &data[i]);
            for (i = 0; i < q; i++) {
                fscanf(in, "%d%d", &a, &b);
                sum = 0;
                for (j = a; j <= b; j++)
                    sum += data[j];
                fprintf(out, "%d\n", sum);
            }
            return 0;
        }

    Program 2:

        #include <stdio.h>

        FILE *in;
        FILE *out;
        int n, q, a, b;
        int data[1000];
        int sum[1000];

        int main()
        {
            int i, j;
            in = fopen("input.txt", "r");
            out = fopen("output.txt", "w");
            fscanf(in, "%d%d", &n, &q);
            for (i = 1; i <= n; i++)
                fscanf(in, "%d", &data[i]);
            for (i = 1; i <= n; i++)
                sum[i] = sum[i - 1] + data[i];
            for (i = 0; i < q; i++) {
                fscanf(in, "%d%d", &a, &b);
                fprintf(out, "%d\n", sum[b] - sum[a - 1]);
            }
            return 0;
        }

    The program below reads n integers between 1 and m and sorts them. Again, I cannot calculate the time complexity.

        [input sample]
        5 5
        2 1 3 4 5

        [output sample]
        1 2 3 4 5

    Program:

        #include <stdio.h>

        FILE *in;
        FILE *out;
        int n, m;
        int data[1000];
        int count[1000];

        int main()
        {
            int i, j;
            in = fopen("input.txt", "r");
            out = fopen("output.txt", "w");
            fscanf(in, "%d%d", &n, &m);
            for (i = 0; i < n; i++) {
                fscanf(in, "%d", &data[i]);
                count[data[i]]++;
            }
            for (i = 1; i <= m; i++) {
                for (j = 0; j < count[i]; j++)
                    fprintf(out, "%d ", i);
            }
            return 0;
        }

    It's ironic (or not) that I cannot calculate the time complexity of my own algorithms, but I have a passion to learn, so please, programming gurus, help me!

    Read the article

  • Computational complexity of Fibonacci Sequence

    - by Juliet
    I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:

        int Fib(int n)
        {
            if (n <= 1) return 1;
            else return Fib(n - 1) + Fib(n - 2);
        }

    What is the computational complexity of the Fibonacci sequence and how is it calculated?
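    One concrete way to get a feel for the growth (short of solving the recurrence T(n) = T(n-1) + T(n-2) + O(1) by hand) is to count the recursive calls the naive version actually makes; a small sketch:

        def fib_calls(n):
            """Return (fib(n), number of calls made by the naive recursion)."""
            if n <= 1:
                return 1, 1
            a, ca = fib_calls(n - 1)
            b, cb = fib_calls(n - 2)
            return a + b, ca + cb + 1

        for n in (10, 20, 30):
            value, calls = fib_calls(n)
            print(n, value, calls)
        # The call count itself grows like the Fibonacci numbers, i.e. exponentially
        # (roughly phi^n with phi about 1.618), which is where the exponential bound comes from.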

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort []     = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.

    Read the article
