Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.

  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for testing the performance of Java modules. One problem I faced is the algorithm of throughput measurement. Tests are executed in several threads (the client configures how many times the test should be repeated), and execution time is logged. So, when the tests are finished, we have the following history (4 test executions, 2 threads, 36ms overall time; "-" = idle, "*" = test execution):

             5ms    9ms     4ms       13ms
        T1 |-*****-*********-****-*************-|
             3ms   6ms      7ms      11ms
        T2 |-***-******-*******-***********-----|
           <----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem: what if one thread completes its own tests more quickly (for whatever reason)?

             3ms 3ms 3ms 3ms
        T1 |-***-***-***-***----------------|
             3ms   6ms      7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
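
    One alternative that avoids the slowest-thread bound is to count completed executions against the wall-clock window of the whole run. A minimal sketch, assuming each test logs a (start_ms, end_ms) pair — that layout is my assumption, not the tool's:

        def throughput_per_second(executions):
            # executions: list of (start_ms, end_ms) pairs gathered from all threads.
            # Counting completions over the wall-clock window credits a fast
            # thread for finishing early instead of hiding it.
            wall_start = min(start for start, _ in executions)
            wall_end = max(end for _, end in executions)
            return len(executions) * 1000.0 / (wall_end - wall_start)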

  • What is your longest-held programming assumption that turned out to be incorrect?

    - by Demi
    I am doing some research into common errors and poor assumptions made by junior (and perhaps senior) software engineers. What was your longest-held poor assumption that was eventually corrected? For example: I at one point failed to understand that the size of an integer is not standard (it depends on the language and target). A bit embarrassing to state, but there it is. Be frank: what hard-held belief did you have, and roughly how long did you maintain the assumption? It can be about an algorithm, a language, a programming concept, testing, anything under the computer science domain.

  • How should I use random.jumpahead in Python

    - by Peter Smit
    I have an application that runs a certain experiment 1000 times (multi-threaded, so that multiple experiments are done at the same time). Every experiment needs approximately 50,000 random.random() calls. What is the best approach to make this really random? I could copy a random object to every experiment and then do a jumpahead of 50,000 * expid. The documentation suggests that jumpahead(1) already scrambles the state, but is that really true? Or is there another way to do this 'the best way'? (No, the random numbers are not used for security, but for a Metropolis-Hastings algorithm. The only requirement is that the experiments are independent, not that the random sequence is somehow unpredictable.)
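
    A minimal sketch of one common alternative, assuming plain independence (not unpredictability) is all that matters: give every experiment its own random.Random instance with a distinct seed, so no jumpahead bookkeeping is needed at all. The seeding scheme here is illustrative only.

        import random

        def make_generators(n_experiments, base_seed=12345):
            # Each Mersenne Twister instance carries its own state, so the
            # experiments draw from fully separate streams.
            return [random.Random(base_seed + expid) for expid in range(n_experiments)]

        rngs = make_generators(1000)
        value = rngs[42].random()  # experiment 42 uses only its own stream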

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update: Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning):

    Obvious: Count, Sum, Mean, Max*, Min*, Median**

    Less obvious: Variance, Standard Deviation, Skewness, Kurtosis, Mode***, Weighted Average, Weighted Moving Average****

    OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now.

    *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections.
    **Recalculated in O(1) for a sorted, indexed collection only.
    ***Requires a fairly complex data structure to recalculate in O(1).
    ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure.

    Original question: Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could...

    Option 1: Iterate through the collection, adding all the values:

        double sum = 0.0;
        for (int i = 0; i < values.Count; i++)
            sum += values[i];

    Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum:

        void Add(double value)
        {
            values.Add(value);
            sum += value;
        }

        void Remove(double value)
        {
            values.Remove(value);
            sum -= value;
        }

    EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation. Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and I then say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said "11" you thought "11," when I said "16" you thought "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and then summing up 11 + 13 + 12.

    So my question is, what other calculations, other than the obvious ones like sum and average, are like this?

    SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "How many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again.

    So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories -- those that are maintainable (my own term; maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed -- what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
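
    One concrete instance of a "maintainable" statistic beyond the sum: the mean and variance can be kept with Welford-style O(1) updates, and the update is algebraically reversible, so removals are O(1) too. A minimal sketch (the class is illustrative, not from the original post):

        class RunningStats:
            # Maintains count, mean, and variance with O(1) add/remove.
            def __init__(self):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0  # sum of squared deviations from the mean

            def add(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            def remove(self, x):
                # Exact algebraic inverse of add(); assumes x is in the collection.
                if self.n == 1:
                    self.n, self.mean, self.m2 = 0, 0.0, 0.0
                    return
                old_mean = (self.n * self.mean - x) / (self.n - 1)
                self.m2 -= (x - old_mean) * (x - self.mean)
                self.mean = old_mean
                self.n -= 1

            def variance(self):
                return self.m2 / self.n if self.n else 0.0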

  • SharePoint Permissions

    - by Greg
    I have a custom workflow that removes permissions on items when an item is added (for example, an item is added by a service account, and once added, those permissions need to be removed from that item). This works because I have the service account hard-coded in the custom workflow. Now I would like to remove this hard-coding: when an item is added to a list, I would like to iterate through all users that have access to the list item, and if a user matches some algorithm, remove that user from the item permissions (which will be 0 to many users). The piece I'm struggling with is how to iterate over all users with permission to an SPListItem. Any thoughts on how to accomplish this? Thanks in advance!

  • Math for a geodesic sphere

    - by Marcelo Cantos
    I'm trying to create a very specific geodesic tessellation, but I can't find anything online about it. It is normal to subdivide the triangles of an icosahedron into triangle patches and project them onto the sphere. However, I noticed an animated GIF on the Wikipedia entry for geodesic domes that appears not to follow this scheme. Geodesic spheres generally comprise a mixture of mostly hexagonal triangle patches, with pentagonal patches forming at the vertices of the original icosahedron; in most cases, these pentagons are linked together, that is, following a straight edge from the center of one pentagon leads to the center of another pentagon. In the Wikipedia animation, however, the edge from the center of one pentagon doesn't appear to intersect the center of an adjacent pentagon; instead it intersects the side of the other pentagon. Hopefully the drawing below makes this clear. Where can I go to learn about the math behind this particular geometry? Ideally, I'd like to know of an algorithm for generating such tessellations.

  • OpenSSL SSL encryption

    - by deddihp
    Hello, I want to discuss the OpenSSL write and read methods. Assume I have a data structure like the one below:

        /-----------------------------------------------------\
        |      my_header       |           PAYLOAD            |
        \-----------------------------------------------------/
                 |                            |
                 v                            v
           not encrypted                  encrypted

    I think the proper algorithm would be like this:

    SEND:
    1. Build my_header with my own header.
    2. Encrypt PAYLOAD with an encryption function.
    3. Attach my_header and PAYLOAD (encrypted) to one buffer.
    4. Send it using a common POSIX function such as send or sendto.

    RECV:
    1. Receive using a common POSIX function such as recv or recvfrom.
    2. Extract my_header and PAYLOAD (encrypted).
    3. Decrypt PAYLOAD with a decryption function.
    4. At last I have my_header and PAYLOAD (decrypted).

    What would your approach be if you faced a problem like the above, given that OpenSSL encrypts all data that is sent to the SSL_write function (CMIIW)? Thanks
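
    A minimal sketch of the send side under that scheme, using application-layer encryption over a plain socket rather than OpenSSL's own framing. The third-party cryptography package's Fernet is a stand-in for "an encryption function," and the length-prefix framing is my assumption:

        import socket
        import struct
        from cryptography.fernet import Fernet  # stand-in cipher; any AEAD would do

        key = Fernet.generate_key()   # in practice, shared out of band
        cipher = Fernet(key)

        def send_message(sock, header_bytes, payload_bytes):
            encrypted = cipher.encrypt(payload_bytes)
            # my_header stays in the clear; two length prefixes frame the parts.
            frame = struct.pack("!II", len(header_bytes), len(encrypted))
            sock.sendall(frame + header_bytes + encrypted)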

  • Mass Ball-to-Ball Collision Handling (as in, lots of balls)

    - by BlueThen
    Update: Found out that I was using the radius as the diameter, which was why the mtd was overcompensating.

    Hi, Stack Overflow. I wrote a Processing program a while back simulating ball physics. Basically, I have a large number of balls (1000), with gravity turned on. Detection works great, but my issue is that they start acting weird when they're bouncing against other balls in all directions. I'm pretty confident this involves the handling. For the most part, I'm using Jay Conrod's code. One part that's different is

        if (distance > 1.0) return;

    which I've changed to

        if (distance < 1.0) return;

    because the collision wasn't even being performed with the first bit of code; I'm guessing that's a typo. The balls overlap when I use his code, which isn't what I was looking for. My attempt to fix it was to move the balls to the edge of each other:

        float angle = atan2(y - collider.y, x - collider.x);
        float distance = dist(x, y, balls[ID2].x, balls[ID2].y);
        x = collider.x + radius * cos(angle);
        y = collider.y + radius * sin(angle);

    This isn't correct, I'm pretty sure of that. I tried the correction algorithm in the previous ball-to-ball topic:

        // get the mtd
        Vector2d delta = (position.subtract(ball.position));
        float d = delta.getLength();
        // minimum translation distance to push balls apart after intersecting
        Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);
        // resolve intersection --
        // inverse mass quantities
        float im1 = 1 / getMass();
        float im2 = 1 / ball.getMass();
        // push-pull them apart based off their mass
        position = position.add(mtd.multiply(im1 / (im1 + im2)));
        ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

    except my version doesn't use vectors, and every ball's weight is 1. The resulting code I get is this:

        PVector delta = new PVector(collider.x - x, collider.y - y);
        float d = delta.mag();
        PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d),
                                  delta.y * ((radius + collider.radius - d) / d));
        // push-pull apart based on mass
        x -= mtd.x * 0.5;
        y -= mtd.y * 0.5;
        collider.x += mtd.x * 0.5;
        collider.y += mtd.y * 0.5;

    This code seems to over-correct collisions, which doesn't make sense to me, because in no other way do I modify the x and y values of each ball. Some other part of my code could be wrong, but I don't know.

    Here's the snippet of the entire ball-to-ball collision handling I'm using:

        // if the ball has already collided with this one, we don't need to
        // reperform the collision algorithm
        if (alreadyCollided.contains(new Integer(ID2)))
            return;
        Ball collider = (Ball) objects.get(ID2);
        PVector collision = new PVector(x - collider.x, y - collider.y);
        float distance = collision.mag();
        if (distance == 0) {
            collision = new PVector(1, 0);
            distance = 1;
        }
        if (distance < 1)
            return;
        PVector velocity = new PVector(vx, vy);
        PVector velocity2 = new PVector(collider.vx, collider.vy);
        collision.div(distance); // normalize the distance
        float aci = velocity.dot(collision);
        float bci = velocity2.dot(collision);
        float acf = bci;
        float bcf = aci;
        vx += (acf - aci) * collision.x;
        vy += (acf - aci) * collision.y;
        collider.vx += (bcf - bci) * collision.x;
        collider.vy += (bcf - bci) * collision.y;
        alreadyCollided.add(new Integer(ID2));
        collider.alreadyCollided.add(new Integer(ID));
        PVector delta = new PVector(collider.x - x, collider.y - y);
        float d = delta.mag();
        PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d),
                                  delta.y * ((radius + collider.radius - d) / d));
        // push-pull apart based on mass
        x -= mtd.x * 0.2;
        y -= mtd.y * 0.2;
        collider.x += mtd.x * 0.2;
        collider.y += mtd.y * 0.2;

    Thanks. (Apologies for the lack of sources; Stack Overflow thinks I'm a spammer.)

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes its body about 200 times. In each loop iteration, it does a sophisticated calculation, and then, as debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along the lines of:

        import numpy
        import matplotlib.pyplot as plt

        for i in range(200):
            matrix = complex_calculation()
            plt.set_cmap("gray")
            plt.imshow(matrix)
            plt.savefig("frame{0}.png".format(i))

    The matrix, from numpy, is not huge -- 300 x 600 doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
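
    One likely culprit is that each plt.imshow call adds a new image (and more state) to the same ever-growing figure. A minimal sketch of one common fix, assuming complex_calculation() is the poster's own function: create the figure and image artist once, then only swap the data each iteration.

        import numpy as np
        import matplotlib
        matplotlib.use("Agg")  # non-interactive backend; no window overhead
        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        im = ax.imshow(np.zeros((300, 600)), cmap="gray")

        for i in range(200):
            matrix = complex_calculation()   # poster's function, assumed defined
            im.set_data(matrix)              # reuse the artist instead of imshow()
            im.set_clim(matrix.min(), matrix.max())
            fig.savefig("frame{0}.png".format(i))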

  • Can this be done with the ORM? - Django

    - by RadiantHex
    Hi folks, I have a few items listed in a database, ordered through Reddit's ranking algorithm. This is it:

        def reddit_ranking(post):
            t = time.mktime(post.created_on.timetuple()) - 1134000000
            x = post.score
            if x > 0:
                y = 1
            elif x == 0:
                y = 0
            else:
                y = -1
            if x < 0:
                z = 1
            else:
                z = x
            return (log(z) + y * t / 45000)

    I'm wondering if there is any clever way of using Django's ORM to UPDATE the models in bulk, without doing this:

        items = Item.objects.filter(created_on__gte=datetime.now() - timedelta(days=7))
        for item in items:
            item.reddit_rank = reddit_rank(item)
            item.save()

    I know about the F() object, but I can't figure out if this function can be performed inside the ORM. Any ideas? Help would be very much appreciated!
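
    F() expressions can't express log(), so one hedged workaround is a single raw UPDATE that pushes the whole formula into SQL. A sketch assuming PostgreSQL (LN(), EXTRACT(EPOCH ...)), a hypothetical app_item table name, and positive scores only -- it only approximates the sign/zero handling of the Python version:

        from django.db import connection

        def rank_recent_items():
            # One round trip instead of one UPDATE per row.
            with connection.cursor() as cursor:
                cursor.execute("""
                    UPDATE app_item
                    SET reddit_rank = LN(GREATEST(score, 1))
                        + SIGN(score) * (EXTRACT(EPOCH FROM created_on) - 1134000000) / 45000
                    WHERE created_on >= NOW() - INTERVAL '7 days'
                """)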

  • Does the 80/20 rule of time management apply to developers?

    - by Dean
    Jeff's recent article linked to a time management example of the First Fit Decreasing algorithm, which talked about the Pareto principle (or the 80/20 rule) of time management, that is, that we produce 80% of our work in 20% of our time. Now we've all heard the programmer quote: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." But all jokes aside, it is often as if 20% of your code is to do what you want, and the other 80% is to handle exceptions... so does the 80/20 rule really apply to developers? Does anyone have any examples of why it does / does not apply to us?

  • Is there such a thing as a randomly accessible pseudo-random number generator? (preferably open-source)

    - by lucid
    First off, is there such a thing as a random access random number generator, where you could not only sequentially generate random numbers as we're all used to, assuming rand100() always generates a value from 0-100:

        for (int i = 0; i < 5; i++)
            print rand100()

        output: 14 75 36 22 67

    but also randomly access any random value, like:

        rand100(0)  // would output 14 as long as you didn't change the seed
        rand100(3)  // would always output 22
        rand100(4)  // would always output 67

    and so on... I've actually found an open-source generator algorithm that does this, but you cannot change the seed. I know that pseudorandomness is a complex field; I wouldn't know how to alter it to add that functionality. Is there a seedable random access random number generator, preferably open source? Or is there a better term for this I can google for more information? If not, part 2 of my question would be: is there any reliably random, open-source, conventional seedable pseudorandom number generator that I could port to multiple platforms/languages while retaining a consistent sequence of values for each platform for any given seed?
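
    The search term that usually turns this up is "counter-based PRNG": instead of evolving hidden state, derive value i by hashing (seed, i), so any index is O(1) and the whole sequence is determined by the seed. A minimal sketch of the idea using SHA-256 -- illustrative, not a vetted generator:

        import hashlib

        def rand100(index, seed=b"my-seed"):
            # Same (seed, index) always yields the same value, on any platform.
            digest = hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
            return int.from_bytes(digest[:8], "big") % 101  # 0-100 (tiny modulo bias)

        print([rand100(i) for i in range(5)])  # sequential access
        print(rand100(3))                      # random access: stable across runs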

  • Query size of block device file in Python

    - by ??O?????
    Hello. I have a Python script that reads a file (typically from optical media), marking the unreadable sectors, to allow a re-attempt to read said unreadable sectors on a different optical reader. I discovered that my script does not work with block devices (e.g. /dev/sr0), in order to create a copy of the contained ISO9660/UDF filesystem, because os.stat().st_size is zero. The algorithm currently needs to know the file size in advance; I can change that, but the issue (of knowing the block device size) remains, and it's not answered here, so I open this question. I am aware of the following two related SO questions: "Determine the size of a block device" (/proc/partitions, ioctl through ctypes) and "How to check file size in Python?" (about non-special files). Therefore, I'm asking: in Python, how can I get the file size of a block device file?
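
    A minimal sketch of one approach that needs no ioctl: seeking to the end of the opened device reports its size, which works for both regular files and block devices on Linux (portability elsewhere is not guaranteed):

        import os

        def file_or_device_size(path):
            # os.lseek to SEEK_END returns the offset in bytes, i.e. the size,
            # even where os.stat().st_size reports zero for a block device.
            fd = os.open(path, os.O_RDONLY)
            try:
                return os.lseek(fd, 0, os.SEEK_END)
            finally:
                os.close(fd)

        # e.g. file_or_device_size("/dev/sr0")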

  • Writing a search engine

    - by wvd
    Hello all, the title might be a bit misleading, but I couldn't figure out a better one. I'm writing a simple search engine that will search several sites for a specific domain. To be concrete: I'm writing a search engine for hardstyle livesets/aftermovies/tracks. To do this, I will search the sites that provide livesets, tracks, and such. The problem here is speed: I need to pass the search query to 5-7 sites, get the results, and then use my own algorithm to display the results in sorted order. I could just "multithread" it, but that's easier said than done, so I have a few questions. What would be the best solution to this problem? Should I just multithread/multiprocess this application so I get a bit of a speed-up? Are there any other solutions, or am I doing something really wrong? Thanks, William van Doorn
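
    Since the per-site work is network-bound rather than CPU-bound, a small thread pool is usually enough. A minimal sketch, where query_site() and my_ranking() are hypothetical placeholders for the per-provider fetch and the poster's own ordering algorithm:

        from concurrent.futures import ThreadPoolExecutor

        def search_all(sites, query):
            # Fan the query out to all providers at once; total latency is then
            # roughly the slowest site, not the sum of all of them.
            with ThreadPoolExecutor(max_workers=len(sites)) as pool:
                per_site = pool.map(lambda s: query_site(s, query), sites)
            merged = [r for site_results in per_site for r in site_results]
            return sorted(merged, key=my_ranking)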

  • Python ValueError: not allowed to raise maximum limit

    - by Ricky Bobby
    I'm using Python 2.7.2 on Mac OS 10.7.3. I'm doing a recursive algorithm in Python with more than 50,000 recursion levels. I tried to increase the maximum recursion level to 1,000,000, but my Python shell still exits after 18,000 recursion levels. I tried to increase the resources available:

        import resource
        import sys

        resource.setrlimit(resource.RLIMIT_STACK, (2**29, -1))
        sys.setrecursionlimit(10**6)

    and I get this error:

        Traceback (most recent call last):
          File "<pyshell#58>", line 1, in <module>
            resource.setrlimit(resource.RLIMIT_STACK, (2**29, -1))
        ValueError: not allowed to raise maximum limit

    I don't know why I cannot raise the maximum limit. Thanks for your suggestions.
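
    The -1 in that setrlimit call is RLIM_INFINITY, i.e. an attempt to raise the hard limit, which an unprivileged process is not allowed to do. A hedged sketch of the usual workaround: raise only the soft limit, up to whatever hard limit the OS already grants (on Mac OS X that hard limit is itself capped fairly low, so very deep recursion may still need an iterative rewrite or a worker thread with a larger stack):

        import resource
        import sys

        soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
        # Setting soft = hard needs no privileges; only raising `hard` does.
        resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
        sys.setrecursionlimit(10**6)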

  • How to pass an arbitrary signature to a certificate

    - by eskoba
    I am trying to sign a certificate (X.509) using secret sharing; that is, shareholders combine their signature shares to produce the final signature, which will in this case be the signed certificate. However, practically, to my understanding only one entity can sign a certificate. Therefore I want to know: which fields or data of the X.509 certificate are actually taken as input to the signing algorithm? Ideally I want this data to be signed by the shareholders, and then the final combination passed to the X.509 certificate as a valid signature. Is this possible? How could it be done? If not, are there other alternatives?

  • Efficiency of while(true) ServerSocket Listen

    - by Submerged
    I am wondering if a typical while(true) ServerSocket listen loop takes an entire core just to wait for and accept a client connection (even when implementing Runnable and using Thread.start()). I am implementing a type of distributed computing cluster, and each computer needs every core it has for computation. A master node needs to communicate with these computers (invoking static methods that modify the algorithm's functioning). The reason I need to use sockets is the cross-platform/cross-language capabilities; in some cases, PHP will be invoking these Java static methods. I used a Java profiler (YourKit) and I can see my running ServerSocket listen thread, and it never sleeps and is always running. Is there a better approach to do what I want? Or will the performance hit be negligible? Please feel free to offer any suggestion if you can think of a better way. (I've tried RMI, but it isn't supported cross-language.) Thanks, everyone

  • Question about frequency of updating Access

    - by I__
    I have a table in an Access database. This Access database is used on a regular basis, basically from 9-5. Someone else has a copy of this exact table. Sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access database table with the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5,000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.
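
    A minimal sketch of one generic reconciliation pass, assuming each row has a primary key and comparable field values; with ~5,000 rows this is cheap enough to run hourly, and only the changed rows need write locks:

        def diff_tables(local_rows, remote_rows):
            # Both inputs: dict mapping primary key -> tuple of field values.
            local_keys, remote_keys = set(local_rows), set(remote_rows)
            to_insert = remote_keys - local_keys   # new offsite records
            to_delete = local_keys - remote_keys   # records removed offsite
            to_update = {k for k in local_keys & remote_keys
                         if local_rows[k] != remote_rows[k]}
            return to_insert, to_delete, to_update

        # Apply the three sets in one short transaction to keep lock time small.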

  • Java text classification problem

    - by yox
    Hello, I have a set of Book objects. Class Book is defined as follows:

        class Book {
            String title;
            ArrayList<tags> taglist;
        }

    Where title is the title of the book, for example: Javascript for dummies, and taglist is a list of tags, for our example: Javascript, jquery, "web dev", ... As I said, I have a set of books on different topics: IT, biology, history, ... Each book has a title and a set of tags describing it. I have to automatically classify those books into separate sets by topic, for example:

    IT BOOKS:
    - Java for dummies
    - Javascript for dummies
    - Learn flash in 30 days
    - C++ programming

    HISTORY BOOKS:
    - World wars
    - America in 1960
    - Martin Luther King's life

    BIOLOGY BOOKS:
    - ...

    Do you guys know a classification algorithm/method to apply to that kind of problem? A solution would be to use an external API to determine the category of the text, but the problem here is that the books are in different languages: French, Spanish, English, ...
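
    If a small seed set of tags per topic is acceptable, a simple first cut is to score each book's tags against each topic's seeds and take the best match. A minimal sketch -- the seed lists are made-up examples, and multilingual books would need seed entries per language:

        TOPIC_SEEDS = {
            "IT":      {"javascript", "jquery", "java", "c++", "programming", "web dev"},
            "HISTORY": {"war", "history", "biography", "politics"},
            "BIOLOGY": {"biology", "genetics", "cells", "evolution"},
        }

        def classify(taglist):
            tags = {t.lower() for t in taglist}
            # Jaccard overlap between the book's tags and each topic's seeds.
            score = lambda seeds: len(tags & seeds) / float(len(tags | seeds))
            best = max(TOPIC_SEEDS, key=lambda topic: score(TOPIC_SEEDS[topic]))
            return best if score(TOPIC_SEEDS[best]) > 0 else "UNKNOWN"

        print(classify(["Javascript", "jquery", "web dev"]))  # -> IT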

  • C# how to create functions that are interpreted at runtime

    - by Lirik
    I'm making a genetic program, but I'm hitting a limitation with C#: I want to present new functions to the algorithm, but I can't do it without recompiling the program. In essence, I want the user of the program to provide the allowed functions, and the GP will automatically use them. It would be great if the user were required to know as little about programming as possible. I want to plug in the new functions without compiling them into the program. In Python this is easy, since it's all interpreted, but I have no clue how to do it with C#. Does anybody know how to achieve this in C#? Are there any libraries, techniques, etc.?

  • How to create a custom ADO Multidimensional catalog with no database

    - by Alan Clark
    Does anyone know of an example of how to dynamically define and build ADO MD (ActiveX Data Objects Multidimensional) catalogs and cube definitions from a set of data other than a database? Background: we have a huge amount of data in our application that we export to a database and then query using the usual SQL joins, groups, sums, etc. to produce reports. The data in the application is originally in objects and arrays. The problem is that the amount of data is so large the export can take 2 hours. So I am trying to figure out a good way of querying the objects in memory, either with a custom OLAP algorithm or library, or ADO MD. But I haven't been able to find an example of using ADO MD without a database behind it. We are using Delphi 2010, so we would use the ADO ActiveX, but I imagine ADO.NET MD is similar. I realize that if the application data were already stored in a database, the problem would solve itself. Also, if Delphi had LINQ capability, I could query the objects and arrays that way.

  • C# BinarySearch breaks when inheriting from something that implements IComparable<T>?

    - by Ender
    In .NET, the BinarySearch algorithm (on Lists, Arrays, etc.) appears to fail if the items you are trying to search inherit from a class that implements IComparable<T> instead of implementing it directly:

        List<B> foo = new List<B>(); // B inherits from A, which implements IComparable<A>
        foo.Add(new B());
        foo.BinarySearch(new B());   // InvalidOperationException,
                                     // "Failed to compare two elements in the array."

    Where:

        public abstract class A : IComparable<A>
        {
            public int x;
            public int CompareTo(A other) { return x.CompareTo(other.x); }
        }

        public class B : A {}

    Is there a way around this? Implementing CompareTo(B other) in class B doesn't seem to work.

  • Any tips on how to handle hierarchical trees in a relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction. That means each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, like the flat table model, the preorder tree traversal algorithm, and so on. Do you guys have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
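
    With a plain adjacency-list table (id, parent_id, name), one query can fetch every row, and the tree can then be assembled client-side in a single O(n) pass. A minimal sketch of that assembly step, with column names assumed:

        def build_tree(rows):
            # rows: iterable of (id, parent_id, name); parent_id is None for roots.
            nodes = {rid: {"id": rid, "name": name, "children": []}
                     for rid, _, name in rows}
            roots = []
            for rid, parent_id, _ in rows:
                if parent_id is None:
                    roots.append(nodes[rid])
                else:
                    nodes[parent_id]["children"].append(nodes[rid])
            return roots  # whole forest from one SELECT, no per-node queries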

  • Vectorizing sums of different diagonals in a matrix

    - by reve_etrange
    I want to vectorize the following MATLAB code. I think it must be simple, but I'm finding it confusing nevertheless.

        r = some constant less than m or n
        [m,n] = size(C);
        S = zeros(m-r,n-r);
        for i=1:m-r
            for j=1:n-r
                S(i,j) = sum(diag(C(i:i+r-1,j:j+r-1)));
            end
        end

    The code calculates a table of scores, S, for a dynamic programming algorithm from another score table, C. The diagonal summing is to generate scores for individual pieces of the data used to generate C, for all possible pieces (of size r). Thanks in advance for any answers! Sorry if this one should be obvious...
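
    The double loop is just a sum of r diagonally shifted copies of C, so it reduces to r slice-adds instead of (m-r)(n-r) diag calls. A NumPy rendering of the same idea (verify against the original loop before trusting it):

        import numpy as np

        def diagonal_sums(C, r):
            # S[i, j] = sum over k of C[i+k, j+k], k = 0..r-1 (0-based here);
            # output shape matches the MATLAB zeros(m-r, n-r).
            m, n = C.shape
            S = np.zeros((m - r, n - r))
            for k in range(r):
                S += C[k:k + m - r, k:k + n - r]
            return S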

  • MATLAB fit exp2

    - by HelloWorld
    I'm unsuccessfully looking for documentation of the fit function using exp2 (a sum of two exponentials). How to call the function is clear:

        [curve, gof] = fit(x, y, 'exp2');

    But since there are multiple ways to fit a sum of exponentials, I'm trying to find out what algorithm is used. In particular, what happens when I fit one exponential (the raw data) with a bit of noise -- how are the exponents spread? I've simulated several cases, and it seems that it "drops" all the weight onto the second set of coefficients, but raw data analysis often shows different behavior. Does anyone have suggestions for documentation?
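
    For probing the behavior empirically, one option is to refit the same data with an explicit two-exponential model where the solver and the starting point are under your control. A sketch using SciPy least squares, mirroring the a*exp(b*x) + c*exp(d*x) form of MATLAB's exp2 (the true parameters and noise level below are arbitrary):

        import numpy as np
        from scipy.optimize import curve_fit

        def exp2(x, a, b, c, d):
            return a * np.exp(b * x) + c * np.exp(d * x)

        x = np.linspace(0, 4, 200)
        y = 2.0 * np.exp(-1.3 * x) + 0.05 * np.random.randn(x.size)  # one exponent + noise

        # Starting guesses matter a lot for sums of exponentials; vary p0 to
        # watch how the weight spreads between the (a, b) and (c, d) pairs.
        params, cov = curve_fit(exp2, x, y, p0=(1.0, -1.0, 1.0, -2.0), maxfev=10000)
        print(params)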
