Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.

Page 217/300

  • Creating an Excel Template for different data size

    - by dassouki
    I created an Excel template for a routine work calculation. The file takes data from a data logger, does some analysis on it, and outputs one number regardless of the input size. The problem I'm having is that I have to modify the sheet to suit the number of rows, as every day the data logger outputs a different number of rows. There are about 15 sheets in the workbook and it's annoying to have to change every one of them every day. What I'd like to do is input the data logger CSV and, boom, the result gets outputted. Is there a way, through VBA or not, to achieve this?
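
    One way to avoid editing all 15 sheets is to let VBA find today's row count and re-point a named range at it. A minimal sketch, assuming the CSV is imported onto a sheet named "Data" and the analysis sheets reference a named range "LoggerData"; both names are hypothetical:

        Sub ImportAndRecalc()
            Dim ws As Worksheet, lastRow As Long
            Set ws = ThisWorkbook.Worksheets("Data")

            ' How many rows did today's logger file produce?
            lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row

            ' Re-point the named range the analysis sheets use,
            ' instead of editing each sheet by hand
            ThisWorkbook.Names.Add Name:="LoggerData", _
                RefersTo:="=" & ws.Range(ws.Cells(2, 1), _
                    ws.Cells(lastRow, 3)).Address(External:=True)

            Application.CalculateFull
        End Sub

    The formulas on the other sheets then refer to LoggerData instead of fixed row ranges, so they pick up the new size automatically.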

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high-level analysis of a simple parallel-for application, it seems that TBB is specifying the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even when a different core is left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.
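
    TBB doesn't expose an OS-thread-affinity knob directly, but a task_scheduler_observer gets a callback on every thread that joins the scheduler, and you can set affinity there. A hedged sketch for Windows; the every-other-logical-CPU mask assumes hyper-thread siblings are adjacent, which you should verify for your machine:

        #include <tbb/task_scheduler_observer.h>
        #include <windows.h>

        class PinningObserver : public tbb::task_scheduler_observer {
        public:
            PinningObserver() { observe(true); }   // start receiving callbacks
            /*virtual*/ void on_scheduler_entry(bool /*is_worker*/) {
                // Hand out one physical core per worker by skipping every
                // other logical CPU (assumed to be the HT sibling).
                static volatile LONG next = 0;
                LONG slot = InterlockedIncrement(&next) - 1;
                SetThreadAffinityMask(GetCurrentThread(),
                                      DWORD_PTR(1) << (slot * 2));
            }
        };

        // Instantiate one observer before the task_scheduler_init is created.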

    Read the article

  • JJIL Android Java Problem

    - by Danny_E
    Hey guys, long-time reader, never posted until now. I'm having some trouble with Android; I'm implementing a library called JJIL, an open-source imaging library. My problem is this: I need to run some analysis on an image, and to do so I need to have it in jjil.core.Image format; once those processes are complete, I need to convert the changed image from jjil.core.Image to java.awt.Image. I can't seem to find a method of doing this. Does anyone have any ideas, or any experience with this? I would be grateful for any help. Danny
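
    If your pipeline ends in a jjil.core.RgbImage, its pixels are already packed ints, which java.awt.image.BufferedImage.setRGB() can consume directly. A hedged sketch; it assumes RgbImage exposes getData()/getWidth()/getHeight(), so check the exact accessors in your JJIL version:

        import java.awt.image.BufferedImage;
        import jjil.core.RgbImage;

        public static BufferedImage toAwt(RgbImage src) {
            int w = src.getWidth(), h = src.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            // RgbImage stores one packed RGB int per pixel, row-major,
            // which is the layout setRGB() expects.
            out.setRGB(0, 0, w, h, src.getData(), 0, w);
            return out;
        }

    Note that java.awt.image isn't available on Android itself, so this conversion only makes sense in a desktop or server component of your app.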

    Read the article

  • Manual stack backtrace on Windows mobile (SEH)

    - by caahab
    Following situation: I'm developing a Windows Mobile application using SDK 6. The target machine is a Nautiz X7. To improve the error reporting I want to catch structured exceptions (SEH) and do a stack backtrace to store some information for analysis. So far I have the information where the exception was thrown (in Windows coredll.dll), and I can backtrace the return addresses through the stack. But what I want to know is: which instruction in my code caused the exception? Does anyone know how to use the available exception and context information to get the appropriate function/instruction address? Unfortunately the Windows Mobile 6 SDK for Pocket PC does not support all the helper functions to do stack walks or minidumps.
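
    On ARM the CONTEXT record captured with the exception carries both the interrupted instruction (Pc) and the link register (Lr); when the fault is raised inside a system DLL, Lr is often the return address back into your own code, which you can resolve offline against your linker map file. A hedged sketch:

        #include <windows.h>

        LONG ReportCrash(EXCEPTION_POINTERS* ep)
        {
            DWORD code = ep->ExceptionRecord->ExceptionCode;
            DWORD pc   = (DWORD)ep->ExceptionRecord->ExceptionAddress; // faulting instruction
            DWORD lr   = ep->ContextRecord->Lr;  // likely return address into your code
            // Persist code/pc/lr plus your module's load address, then
            // map the offsets to functions via the .map file.
            return EXCEPTION_EXECUTE_HANDLER;
        }

        // usage:
        // __try { DoWork(); }
        // __except (ReportCrash(GetExceptionInformation())) { /* logged */ }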

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling code of a game I wrote, and I'm wondering how it is possible that the following snippet causes a heap increase of 4 KB (I'm profiling with the Heapshot Analysis of Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);      // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler, the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, memory which is never released. This seems to happen with other calls to fread around the code, but this was the most blatant one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).
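
    The likely explanation is that stdio lazily allocates the stream's buffer (commonly 4 KB) on the first read and only frees it at fclose(), so the profiler attributes the allocation to the first fread and, while the FILE stays open, reports it as live. A hedged way to confirm, assuming the FILE* outlives the snapshot:

        /* Disable buffering before the first read on this FILE; if the
         * 4 KB allocation disappears from Heapshot, it was the stdio
         * stream buffer, not a leak. */
        setvbuf(file, NULL, _IONBF, 0);

    If that is the cause, it's internal bookkeeping you can ignore, as long as every fopen() is paired with an fclose().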

    Read the article

  • Bitmap manipulation in C++ on Windows

    - by Oliver
    Hi, I have a handle to a bitmap, in C++, on Windows: HBITMAP hBitmap; On this image I want to do some image recognition, pattern analysis, that sort of thing. In my studies at university I did this in Matlab, where it is quite easy to get at the individual pixels based on their position, but I have no idea how to do this in C++ under Windows; I haven't really been able to understand what I have read so far. I have seen some references to a nice-looking Bitmap class that lets you setPixel() and getPixel() and that sort of thing, but I think this is with .NET. How should I go about turning my HBITMAP into something I can play with easily? I need to be able to get at the RGBA information. Are there libraries that allow me to work with the data without having to learn about DCs and BitBlt and that sort of thing?
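
    The setPixel()/getPixel() class is probably GDI+ (Gdiplus::Bitmap), which can wrap an HBITMAP directly, but per-pixel calls are slow for image analysis. A hedged sketch of the plain-GDI route instead: one GetDIBits call copies the whole image into a buffer you can index like a Matlab matrix:

        #include <windows.h>
        #include <vector>

        // Copies hBitmap into a top-down 32-bit BGRA buffer;
        // pixel (x, y) is pixels[y * w + x], bytes are B,G,R,A.
        std::vector<DWORD> GetPixels(HBITMAP hBitmap, int& w, int& h)
        {
            BITMAP bm;
            GetObject(hBitmap, sizeof(bm), &bm);
            w = bm.bmWidth;
            h = bm.bmHeight;

            BITMAPINFO bmi = {};
            bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth       = w;
            bmi.bmiHeader.biHeight      = -h;      // negative -> top-down rows
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;      // one DWORD per pixel
            bmi.bmiHeader.biCompression = BI_RGB;

            std::vector<DWORD> pixels(w * h);
            HDC hdc = GetDC(NULL);
            GetDIBits(hdc, hBitmap, 0, h, &pixels[0], &bmi, DIB_RGB_COLORS);
            ReleaseDC(NULL, hdc);
            return pixels;
        }

    One caveat: a plain HBITMAP carries no alpha unless it was created as a 32-bit DIB section, so the A byte may simply be zero.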

    Read the article

  • Random List of millions of elements in Python Efficiently

    - by eWizardII
    Hello, I have read this answer, which is potentially the best way to randomize a list of strings in Python. I'm just wondering whether that's the most efficient way to do it, because I have a list of about 30 million elements built via the following code:

        import json
        from sets import Set
        from random import shuffle

        a = []
        for i in range(0, 193):
            json_data = open("C:/Twitter/user/user_" + str(i) + ".json")
            data = json.load(json_data)
            for j in range(0, len(data)):
                a.append(data[j]['su'])

        new = list(Set(a))
        print "Cleaned length is: " + str(len(new))

        ## Take Cleaned List and Randomize it for Analysis
        shuffle(new)

    If there is a more efficient way to do it, I'd greatly appreciate any advice on how to do it. Thanks,
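
    random.shuffle() is already an in-place O(n) Fisher-Yates shuffle, so there is little to gain on the shuffling itself; the time more likely goes into building and deduplicating the list. A hedged, slightly tidier version of the same pipeline (the built-in set replaces the long-deprecated sets.Set):

        import json
        from random import shuffle

        seen = set()
        for i in range(193):
            f = open("C:/Twitter/user/user_%d.json" % i)
            try:
                # accumulate directly into the set: no 30M-element
                # intermediate list, duplicates dropped as they arrive
                for item in json.load(f):
                    seen.add(item['su'])
            finally:
                f.close()

        cleaned = list(seen)
        print "Cleaned length is: " + str(len(cleaned))
        shuffle(cleaned)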

    Read the article

  • What is the most useful R trick?

    - by Dirk Eddelbuettel
    In order to share some more tips and tricks for R, what is your single most useful feature or trick? Clever vectorization? Data input/output? Visualization and graphics? Statistical analysis? Special functions? The interactive environment itself? One item per post, and we will see if we get a winner by means of votes. [Edit 25-Aug 2008]: So after one week, it seems that the simple str() won the poll. As I like to recommend that one myself, it is an easy answer to accept.
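
    For readers who haven't met it, str() compactly displays the structure of any R object, which is why it polls so well as a first inspection tool:

        str(iris)
        # 'data.frame':  150 obs. of  5 variables:
        #  $ Sepal.Length: num  5.1 4.9 4.7 4.6 5 ...
        #  $ Species     : Factor w/ 3 levels "setosa","versicolor",..
        #  (remaining columns elided)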

    Read the article

  • XNA or C# Pop-up progress bar for the LoadContent() method

    - by Warlax
    Hey people, we wrote a small game using Microsoft's XNA Game Studio 3.1. LoadContent() takes a long time because, other than loading models and config files, we're also running some one-time (per run) terrain analysis. We are not C# or XNA programmers (we're Java programmers), and we want to be able to give the user some feedback that the system is loading. Preferably, this will be through a simple pop-up with a progress bar that will say something simple like "loading, please wait". The progress bar doesn't have to be a 0-to-1 progress bar; it can instead be one of those 'back and forth' progress bars. I was hoping for some quick copy-paste-ready code to do just that, as it is not a central piece of our project, nor do we have a need to delve into too much documentation. I appreciate your time, effort, and possible donation. Thanks.
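
    XNA has no built-in dialogs, so the usual pattern is to kick the heavy loading onto a background thread and keep Draw() running with a bouncing bar until it finishes. A hedged sketch of fields and overrides in your Game subclass; pixel is assumed to be a 1x1 white Texture2D you create here, and LoadEverything() stands in for your real loading code:

        volatile bool loading = true;   // field of your Game subclass

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // pixel = ... create or load a 1x1 white texture here ...
            new System.Threading.Thread(() =>
            {
                LoadEverything();       // models, config, terrain analysis
                loading = false;
            }).Start();
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            if (loading)
            {
                // 100-px bar bouncing across a 300-px track
                int x = 100 + (int)(150 + 150 *
                        Math.Sin(gameTime.TotalGameTime.TotalSeconds * 3));
                spriteBatch.Begin();
                spriteBatch.Draw(pixel, new Rectangle(x, 200, 100, 20), Color.White);
                spriteBatch.End();
                return;
            }
            // ... normal game drawing ...
        }

    If LoadEverything() touches the GraphicsDevice, keep that part on the main thread; only the CPU-bound terrain analysis is safe to move off it.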

    Read the article

  • Java source code generation frameworks

    - by Superfilin
    I have a set of Java 5 source files with old-style Doclet tags, comments and annotations, and based on them I would like to write a generator for another set of Java classes. What is the best way to do that? And are there any good standalone libraries for code analysis/generation in Java? Any shared experience in this field is appreciated. So far I have found these:

    JaxMe's Java Source Reflection: seems good, but it does not appear to support annotations. Also, it has had no release since 2006.

    Annogen: uses the JDK's Doclet generator, which has some bugs under the 1.5 JDK. It has also had no releases for a long time.

    Javaparser: seems good as well, and is fairly recent, but it only supports the Visitor pattern for a single class, i.e. no query mechanism like in the two packages above.
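
    Of the three, Javaparser is the one still moving; a hedged sketch of the visitor approach it supports (package names as of javaparser 1.x, which lived under japa.parser; later versions renamed them):

        import japa.parser.JavaParser;
        import japa.parser.ast.CompilationUnit;
        import japa.parser.ast.body.MethodDeclaration;
        import japa.parser.ast.visitor.VoidVisitorAdapter;

        public class DocletTagLister {
            public static void main(String[] args) throws Exception {
                CompilationUnit cu = JavaParser.parse(new java.io.File(args[0]));
                cu.accept(new VoidVisitorAdapter<Object>() {
                    @Override
                    public void visit(MethodDeclaration m, Object arg) {
                        // the javadoc text carries the old-style doclet tags;
                        // feed it to your generator from here
                        System.out.println(m.getName() + " -> " + m.getJavaDoc());
                    }
                }, null);
            }
        }

    For the generation side, a dedicated emitter such as CodeModel is a common companion, since Javaparser is stronger at reading than writing.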

    Read the article

  • fortran error I/O

    - by jpcgandre
    I get this error when running:

        forrtl: severe (256): unformatted I/O to unit open for formatted transfers, unit 27, file C:\Abaqus_JOBS\w.txt

    The error occurs at the beginning of the analysis. At the start, the file w.txt is created but is empty, so the error may be related to the fact that I want to read from an empty file. My code is:

        OPEN(27, FILE = "C:/Abaqus_JOBS/w.txt", STATUS = "UNKNOWN")
        READ(27, *, IOSTAT=stat) w
        IF (stat .NE. 0) CALL del_file(27, stat)

        SUBROUTINE del_file(uFile, stat)
          IMPLICIT NONE
          INTEGER uFile, stat
        C If the unit is not open, stat will be non-zero
          CLOSE(UNIT=uFile, STATUS='delete', IOSTAT=stat)
        END SUBROUTINE

    Ref: Close multiple files. If you agree with my opinion about the cause of the error, is there a way to solve it? Thanks
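
    Reading an empty file should normally raise an end-of-file condition (a negative IOSTAT), not severe error 256; that message may instead mean unit 27 collides with a unit the solver has already opened for unformatted I/O. A hedged sketch that both avoids a clash (unit 107 here is hypothetical; check the Abaqus documentation for unit numbers it reserves) and traps the empty-file case explicitly:

        OPEN(107, FILE='C:/Abaqus_JOBS/w.txt', STATUS='UNKNOWN')
        READ(107, *, IOSTAT=stat) w
        IF (stat .LT. 0) THEN
           ! end of file: w.txt exists but is still empty
           w = 0.0
        ELSE IF (stat .GT. 0) THEN
           ! a genuine I/O error: clean up as before
           CALL del_file(107, stat)
        END IF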

    Read the article

  • What was that tutorial on pointers?

    - by pecker
    Hello, I once read a tutorial/article on pointers somewhere. It was not a general tutorial, but it explained how to clearly understand complex and confusing pointer declarations (especially the ones that are usually asked about in interviews). It was something like http://www.codeweblog.com/right-left-rule-complex-pointer-analysis/ but I'm unable to find it. Could anyone post it here? PS: I did try to google it but couldn't find it. I'm asking here because I thought it was popular.
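
    For anyone landing here, the linked "right-left rule" works like this: start at the identifier, read right while you can (array brackets, function parentheses), then read left, popping out one level of parentheses at a time. One classic interview declaration as a worked example:

        /* x is: an array of 3 pointers to functions returning a
           pointer to an array of 5 pointers to char. */
        char *(*(*x[3])())[5];

        /* reading order: x -> [3] array of -> * pointer to -> () function
           returning -> * pointer to -> [5] array of -> * pointer to char */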

    Read the article

  • How can I use spline() with ggplot?

    - by David
    I would like to fit my data using spline(y~x), but all of the examples that I can find use a spline with smoothing, e.g. lm(y~ns(x), df=_). I want to use spline() specifically because I am using it in the analysis represented by the plot that I am making. Is there a simple way to use spline() in ggplot? I have considered the hackish approach of fitting a line using geom_smooth(aes(x=spline(y~x)$x, y=spline(y~x)$y)), but I would prefer not to have to resort to this. Thanks!
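
    One tidy route is to evaluate spline() once into its own data frame and draw it as an extra layer, which avoids in-lining the fit inside aes(). A hedged sketch, assuming your data frame df has columns x and y:

        sp <- as.data.frame(spline(df$x, df$y))   # interpolating spline, no smoothing
        ggplot(df, aes(x, y)) +
          geom_point() +
          geom_line(data = sp, aes(x, y))         # spline() names its output x and y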

    Read the article

  • Performance Overhead of Perf Event Subsystem in Linux Kernel

    - by Bo Xiao
    Performance counters for Linux are a new kernel-based subsystem that provides a framework for all things performance analysis. It covers hardware-level features (CPU/PMU, the Performance Monitoring Unit) as well as software features (software counters, tracepoints). Since 2.6.33, the kernel provides the 'perf_event_create_kernel_counter' API for developers to create kernel counters that collect system runtime information. What concerns me most is the performance impact on the overall system when tracepoints/ftrace are enabled. There are no docs I can find about that. I was once told that ftrace was implemented by dynamically patching code; will it slow the system dramatically?
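
    On the patching question: with dynamic ftrace, the compiler-inserted mcount call sites are rewritten to no-ops at boot, so the steady-state cost of an unused tracepoint is close to zero; you pay only while a tracepoint is enabled and firing. For in-kernel counters, a heavily hedged sketch of the API mentioned above (the signature shown is roughly the 2.6.33-era one; later kernels take a task_struct pointer and an extra context argument, so check your kernel's include/linux/perf_event.h):

        #include <linux/perf_event.h>
        #include <linux/err.h>

        static struct perf_event *evt;

        static int start_cycle_counter(void)
        {
            struct perf_event_attr attr = {
                .type   = PERF_TYPE_HARDWARE,
                .config = PERF_COUNT_HW_CPU_CYCLES,
                .size   = sizeof(attr),
            };
            /* count cycles on CPU 0, across all tasks, no overflow callback */
            evt = perf_event_create_kernel_counter(&attr, 0, -1, NULL);
            return IS_ERR(evt) ? PTR_ERR(evt) : 0;
        }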

    Read the article

  • Reference-counted object is used after it is released

    - by EndyVelvet
    Doing code analysis of the project, I get the message "Reference-counted object is used after it is released" on the line [defaults setObject:deviceUuid forKey:@"deviceUuid"]; I watched the topic "Obj-C, Reference-counted object is used after it is released?" but the solution is not found there. ARC is disabled.

        // Get the user's device model, display name, unique ID, token & version number
        UIDevice *dev = [UIDevice currentDevice];
        NSString *deviceUuid;
        if ([dev respondsToSelector:@selector(uniqueIdentifier)])
            deviceUuid = dev.uniqueIdentifier;
        else {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            id uuid = [defaults objectForKey:@"deviceUuid"];
            if (uuid)
                deviceUuid = (NSString *)uuid;
            else {
                CFStringRef cfUuid = CFUUIDCreateString(NULL, CFUUIDCreate(NULL));
                deviceUuid = (NSString *)cfUuid;
                CFRelease(cfUuid);
                [defaults setObject:deviceUuid forKey:@"deviceUuid"];
            }
        }

    Please help find the cause.
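
    The analyzer is right: cfUuid and deviceUuid are the same pointer, so CFRelease(cfUuid) frees the string before setObject:forKey: uses it (and the inline CFUUIDCreate(NULL) result is leaked as well). A hedged rewrite of just the inner branch, for manual reference counting:

        CFUUIDRef uuidRef = CFUUIDCreate(NULL);
        CFStringRef cfUuid = CFUUIDCreateString(NULL, uuidRef);
        CFRelease(uuidRef);                    // fixes the leaked CFUUIDRef

        // hand ownership to the autorelease pool instead of freeing early
        deviceUuid = [(NSString *)cfUuid autorelease];
        [defaults setObject:deviceUuid forKey:@"deviceUuid"];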

    Read the article

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, only the interpretation of the effects of individual predictors. One way to spot collinearity is to put each predictor in turn as the dependent variable, with the other predictors as independent variables, determine R2, and if it's larger than .9 (or .95), consider that predictor redundant. This is one "method"... what about other approaches? Some of them are time consuming, like excluding predictors from the model and watching for b-coefficient changes: they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
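
    The R2-per-predictor screen described above is exactly what variance inflation factors automate: VIF_j = 1/(1 - R2_j), so the R2 > .9 cutoff corresponds to VIF > 10. A hedged R sketch, assuming the car package is installed and dat holds your data:

        fit <- lm(y ~ x1 + x2 + x3, data = dat)

        library(car)
        vif(fit)                   # VIF > 10 is the R2 > .9 rule above

        kappa(model.matrix(fit))   # condition number: another quick screen;
                                   # very large values signal collinearity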

    Read the article

  • Java code optimization leads to numerical inaccuracies and errors

    - by rano
    I'm trying to implement a version of the Fuzzy C-Means algorithm in Java, and I'm trying to do some optimization by computing just once everything that can be computed just once. This is an iterative algorithm, and regarding the updating of a matrix, the clusters x pixels membership matrix U, this is the update rule I want to optimize:

        u(i,j) = 1 / sum_k ( ||x_i - v_j|| / ||x_i - v_k|| )^(2 / (m-1))

    where the x are the elements of a matrix X (pixels x features) and v belongs to the matrix V (clusters x features). m is a parameter that ranges from 1.1 to infinity, and the distance used is the Euclidean norm (the matrix D below holds squared distances, which is why the exponent in the code is 1/(m-1)). If I had to implement this formula in a naive way I'd do:

        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < V.length; j++) {
                double num = D[i][j];
                double sumTerms = 0;
                for (int k = 0; k < V.length; k++) {
                    double thisDistance = D[i][k];
                    sumTerms += Math.pow(num / thisDistance, (1.0 / (m - 1.0)));
                }
                U[i][j] = (float) (1f / sumTerms);
            }
        }

    In this way some optimization is already done: I precomputed all the possible squared distances between X and V and stored them in a matrix D. But that is not enough, since I'm cycling through the elements of V two times, resulting in two nested loops. Looking at the formula, the numerator of the fraction is independent of the sum, so I can compute the numerator and denominator independently, and the denominator can be computed just once for each pixel. So I came to a solution like this:

        int nClusters = V.length;
        double exp = (1.0 / (m - 1.0));
        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < nClusters; j++) {
                double distance = D[i][j];
                double denominator = D[i][nClusters];
                double numerator = Math.pow(distance, exp);
                U[i][j] = (float) (1f / (numerator * denominator));
            }
        }

    where I precomputed the denominator into an additional column of the matrix D while I was computing the distances:

        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < V.length; j++) {
                double sum = 0;
                for (int k = 0; k < nDims; k++) {
                    final double d = X[i][k] - V[j][k];
                    sum += d * d;
                }
                D[i][j] = sum;
                D[i][nClusters] += Math.pow(1 / D[i][j], exp);
            }
        }

    By doing so I encounter numerical differences between the naive computation and the second one, which lead to different numerical values in U (not in the first iterations, but soon enough). I guess the problem is that exponentiating very small numbers to high powers (the elements of U can range from 0.0 to 1.0, and exp, for m = 1.1, is 10) leads to very small values, whereas dividing the numerator by the denominator and THEN exponentiating the result seems to be better numerically. The problem is that it involves many more operations. Am I doing something wrong? Is there a possible solution to get the code both optimized and numerically stable? Any suggestion or criticism will be appreciated.
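
    A hedged middle ground is to keep the one-pass structure but rescale each row of D by its largest entry before exponentiating, so Math.pow never sees extreme magnitudes; algebraically U is unchanged because the scale factor cancels between numerator and denominator:

        double exp = 1.0 / (m - 1.0);
        for (int i = 0; i < X.length; i++) {
            double dMax = 0;
            for (int k = 0; k < nClusters; k++)
                dMax = Math.max(dMax, D[i][k]);

            // S_i = sum_k (dMax / D[i][k])^exp  -- every ratio is >= 1
            double sum = 0;
            for (int k = 0; k < nClusters; k++)
                sum += Math.pow(dMax / D[i][k], exp);

            // U[i][j] = 1 / ((D[i][j]/dMax)^exp * S_i); these ratios are <= 1,
            // so neither factor underflows the way (1/D)^exp can
            for (int j = 0; j < nClusters; j++)
                U[i][j] = (float) (1.0 / (Math.pow(D[i][j] / dMax, exp) * sum));
        }

    This stays O(pixels x clusters) per sweep, with two extra passes over each row of D; whether the stability gain is worth it for m near 1.1 is something to verify against the naive loop.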

    Read the article

  • to get columns from Excel files using Apache POI??

    - by posdef
    Hi, in order to do some statistical analysis I need to extract values in a column of an Excel sheet. I have been using the Apache POI package to read from Excel files, and it works fine when one needs to iterate over rows. However, I couldn't find anything about getting columns, either in the API or through Google searching. As I need to get the max and min values of different columns and generate random numbers using these values, then without picking up individual columns the only other option is to iterate over rows and columns to get the values and compare them one by one, which doesn't sound all that time-efficient. Any ideas on how to tackle this problem? Thanks,
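
    POI has no column object, but iterating rows and asking each one for the cell at a fixed index is effectively a column scan, and the worksheet is in memory either way, so it costs little. A hedged sketch with the POI API of that era (the CELL_TYPE_* int constants were replaced by an enum in later versions):

        import org.apache.poi.ss.usermodel.*;

        // min and max of one column, skipping blanks and non-numeric cells
        static double[] columnRange(Sheet sheet, int colIdx) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (Row row : sheet) {
                Cell cell = row.getCell(colIdx);
                if (cell != null && cell.getCellType() == Cell.CELL_TYPE_NUMERIC) {
                    double v = cell.getNumericCellValue();
                    if (v < min) min = v;
                    if (v > max) max = v;
                }
            }
            return new double[] { min, max };
        }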

    Read the article

  • Screen capture during testing

    - by Edwward
    This is an application for reviewing performance tests. Simple in concept, tricky to describe. Picture:

    1) Recording interactions with a WPF program so the inputs can be played back.
    2) Playing the inputs back while doing a continuous screen capture.
    3) Capturing wall time as well as continuous CPU percentages during playback.
    4) Repeating steps (2) and (3) lots of times.
    5) Writing the relevant stuff out to files/db.
    6) Reading it and putting it all in a fancy UI for easy review/analysis.

    The killer for me is (2). I could use some guidance on a good, possibly commercial, screen capture SDK. I would also welcome the news that my whole problem already has a solution. And of course any thoughts on the overall idea would also be great. Thanks. Ed
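
    Before reaching for an SDK, it may be worth checking whether plain GDI capture is fast enough for step (2); Graphics.CopyFromScreen grabs a full frame in one call. A hedged C# sketch (frameIndex and the save path are placeholders; a real harness would feed a video encoder rather than writing PNGs):

        using System.Drawing;
        using System.Windows.Forms;   // for Screen.PrimaryScreen

        void CaptureFrame(int frameIndex)
        {
            Rectangle bounds = Screen.PrimaryScreen.Bounds;
            using (var bmp = new Bitmap(bounds.Width, bounds.Height))
            using (var g = Graphics.FromImage(bmp))
            {
                g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                bmp.Save(string.Format(@"C:\captures\frame_{0:D6}.png", frameIndex));
            }
        }

    At 10-15 fps this tends to be workable; for full-rate capture, a commercial SDK or a DirectX-based grabber is the usual step up.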

    Read the article

  • How to make Visual C++ 9 not emit code that is actually never called?

    - by sharptooth
    My native C++ COM component uses ATL. In DllRegisterServer() I call CComModule::RegisterServer():

        STDAPI DllRegisterServer()
        {
            return _Module.RegisterServer(FALSE);  // <<< notice FALSE here
        }

    FALSE is passed to indicate that the type library should not be registered. ATL is available as source, so I in fact compile the implementation of CComModule::RegisterServer(). Somewhere down the call stack there's an if statement:

        if (doRegisterTypeLibrary) {  // <<< FALSE goes here
            // do some stuff, then call RegisterTypeLib()
        }

    The compiler sees all of the above code, so it can see that the if condition is always false, yet when I inspect the linker progress messages I see that the reference to RegisterTypeLib() is still there, so the if statement is not eliminated. Can I make Visual C++ 9 perform better static analysis, actually see that some code is never called, and not emit that code?
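
    The condition is a runtime parameter several calls deep, so folding it away requires the optimizer to propagate the constant FALSE across function (and translation-unit) boundaries; that generally only happens under whole-program optimization, after which the linker can strip the now-unreferenced code. A hedged set of flags to try with VC++ 9:

        REM /GL + /LTCG enable cross-TU inlining and constant propagation;
        REM /Gy + /OPT:REF let the linker drop functions nothing references
        cl /O2 /GL /Gy mymodule.cpp /link /LTCG /OPT:REF /OPT:ICF

    Even then there is no guarantee: if RegisterServer() isn't inlined deep enough, the call survives. The leftover import is harmless unless you are specifically trying to avoid a dependency on oleaut32.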

    Read the article

  • Java resource management: please help to understand Findbugs results.

    - by java.is.for.desktop
    Hello, everyone! Findbugs bugs me about a method which opens two Closeable instances, but I can't understand why.

    Source:

        public static void sourceXmlToBeautifiedXml(File input, File output)
                throws TransformerException, IOException, JAXBException {
            FileReader fileReader = new FileReader(input);
            FileWriter fileWriter = new FileWriter(output);
            try {
                // may throw something
                sourceXmlToBeautifiedXml(fileReader, fileWriter);
            } finally {
                try {
                    fileReader.close();
                } finally {
                    fileWriter.close();
                }
            }
        }

    Findbugs analysis: Findbugs tells me "Method [...] may fail to clean up java.io.Reader [...]" and points to the line with FileReader fileReader = ...

    Question: who is wrong, me or Findbugs?
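
    Findbugs has a point: if "new FileWriter(output)" throws (say, the output path is not writable), fileReader is already open but the try/finally has not been entered yet, so the reader leaks. A hedged rearrangement that closes the window:

        FileReader fileReader = new FileReader(input);
        try {
            FileWriter fileWriter = new FileWriter(output);
            try {
                sourceXmlToBeautifiedXml(fileReader, fileWriter);
            } finally {
                fileWriter.close();
            }
        } finally {
            fileReader.close();
        }

    On Java 7+ the same intent collapses into one try-with-resources statement.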

    Read the article

  • Python: how to run several scripts (or functions) at the same time under windows 7 multicore processor 64bit

    - by Gianni
    Sorry for this question, because there are several examples on Stack Overflow. I am writing in order to clarify some of my doubts, because I am quite new to the Python language. I wrote a function:

        def clipmyfile(inFile, poly, outFile):
            ...  # doing something with inFile and poly, and return outFile

    Normally I do this:

        clipmyfile(inFile="File1.txt", poly="poly1.shp", outFile="res1.txt")
        clipmyfile(inFile="File2.txt", poly="poly2.shp", outFile="res2.txt")
        clipmyfile(inFile="File3.txt", poly="poly3.shp", outFile="res3.txt")
        ......
        clipmyfile(inFile="File21.txt", poly="poly21.shp", outFile="res21.txt")

    I had read in this example (Run several python programs at the same time) that I can use (but probably I am wrong):

        from multiprocessing import Pool
        p = Pool(21)  # like in your example, running 21 separate processes

    to run the function at the same time and speed up my analysis. I am really honest to say that I didn't understand the next step. Thanks in advance for help and suggestions. Gianni
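
    The missing step is handing the argument tuples to the pool; Pool.map calls the function once per item, spread across worker processes. A hedged sketch (the wrapper exists because Pool.map passes a single argument, and the __main__ guard is mandatory on Windows):

        from multiprocessing import Pool

        def clipmyfile_star(args):
            # unpack one (inFile, poly, outFile) tuple per call
            return clipmyfile(*args)

        jobs = [("File%d.txt" % i, "poly%d.shp" % i, "res%d.txt" % i)
                for i in range(1, 22)]

        if __name__ == "__main__":
            p = Pool()                  # defaults to one worker per core
            p.map(clipmyfile_star, jobs)
            p.close()
            p.join()

    On a multicore machine, Pool() with the default worker count usually beats Pool(21), since 21 simultaneous processes just contend for the same cores.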

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments no longer fit in an integer. But should this matter? The parameter is declared long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > int.MaxValue? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3l);
        TestLong(sw, int.MaxValue - 2l);
        TestLong(sw, int.MaxValue - 1l);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1l);
        TestLong(sw, int.MaxValue + 2l);
        TestLong(sw, int.MaxValue + 3l);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I now tried the same with C, and the issue does not occur here; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL * 100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", clock() - t, n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);
            getchar();
            return 0;
        }

    EDIT 2: Thanks for the great suggestions. I found that neither .NET nor C (in debug as well as in release mode) computes the 64-bit remainder with a single CPU instruction; both call a helper function instead. In the C program I could get its name, "_allrem". The file also has full source comments, so I found the information that this algorithm special-cases 32-bit divisors rather than dividends, which was the case in the .NET application. I also found out that the performance of the C program really is affected only by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: Even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT 3: I have now run the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behavior stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

  • Have you switched from CodeIgniter to Kohana?

    - by Eli
    Hi All, I usually just work with straight PHP, but want to try MVC and see if a framework will really speed up development. After much waffling, analysis paralysis, and many dumb SO questions, I thought I had settled on CodeIgniter for my next PHP project. However, I am now seriously considering Kohana. Has anyone made the switch from CI to Kohana? If so, why? What's better about the actual code, libraries, etc? Edit: Hi All, I did end up going with Kohana. It's easy to use, but more importantly, it's easy NOT to use, since there are a lot of things I like to work with native PHP for. It's ridiculously extensible, well coded, and seems like it is beginning to pull out ahead of CI in a few things like putting views in views, passing subview data, etc. I am sure CI will catch up, but Kohana should be 3 steps ahead by then =o)

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know the functional components of the software. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc. So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long-term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project was starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories. So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database. But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the Agile Development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.
I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article
