Search Results

Search found 265 results on 11 pages for 'regression'.


  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which approach is preferable for reducing maintenance cost and the likelihood of refactoring, given the task of interpreting system state and settings for a lay user?

    1. Delegate the responsibility of interpreting inspection results to the modules which perform the inspection, or
    2. Separate the concerns of interpretation and inspection into two modules.

    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok. The second creates a strong coupling: the interpretation module essentially has to know what to expect from the inspection routines, and will have to adapt to OS changes just as much as the inspection code does. I would normally choose the second option for its separation of concerns, foreseeing the possibility that inspection routines could be reused. But a developer updating the product to handle a new OS feature would then have to write not only an inspection routine but also an interpretation routine, and link the two correctly - and it gets worse for a developer who has to change which inspection routines are used to read a certain system setting, or worse yet, has to fix an inspection routine which broke after an OS patch. Is it better to have to patch one package a lot, or two packages, each somewhat less so?
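
    For concreteness, a minimal sketch of the second option (Python; all module and setting names are hypothetical) makes the coupling visible - the interpretation side depends on the exact shape of whatever inspection returns:

        # inspection module -- OS-facing: reads raw state, and is the part
        # that changes with each OS update.
        def inspect_firewall():
            # Stubbed for the sketch; real code would query the OS here.
            return {"enabled": True, "exception_count": 3}

        # interpretation module -- user-facing: turns raw state into lay
        # language. Coupled to the dict contract above: any change to the
        # keys inspection returns forces a matching change here.
        def interpret_firewall(raw):
            if raw["enabled"]:
                return "Your firewall is on, with %d exceptions." % raw["exception_count"]
            return "Your firewall is off."

        print(interpret_firewall(inspect_firewall()))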

    Read the article

  • How to approach scrum task burn-down when tasks involve multiple people?

    - by AgileMan
    In my company, a single task can never be completed by one individual; a separate person does QA and code review for each task. Each individual therefore gives a per-task estimate of how much time their part will take. The problem is, how should I approach burn-down? If I aggregate the hours together, assume the following estimate:

    10 hrs - dev time
    4 hrs - QA
    4 hrs - code review
    Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with how much time is left until it is done. However, each person generally thinks only about their own part. Should they mark their own effort remaining, and then add the other estimates to that? How are you handling this?

    UPDATE: To clarify, at my organization each task within a story requires three people: someone to develop the task (unit tests, etc.), a QA specialist to review it (primarily integration and regression tests), and a tech lead to do code review. I don't think there is a wrong way or a right way, but this is our way - and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot actually test whether something works until it is dev-complete, and you cannot review the quality of the code before then either, so the best you can do is split things into small logical slices so that the bare minimum of functionality can be tested and reviewed as early in the process as possible. My question to those who work this way: how do you burn down a "task" that is set up like this? Unless a task has its own sub-tasks (which JIRA doesn't allow), I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating off-the-shelf ALM products to aid our development lifecycle; we currently use homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find out how this process is handled elsewhere - I have read a bit about builds, automated testing, and various ALM products, but nowhere does any of it state how builds interact with initial change requests, what the triggers are, how dependencies are managed, or how the various forms of testing are accommodated (e.g. unit, integration, and regression testing). Can anyone point me to resources detailing specific workflows, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for - not sure I want to shell out the cash for it, though.

    Read the article

  • If the bug is 5+ years old, then is it a feature?

    - by Job
    Allow me to add details: I work at an institutional place with many coders, testers, QA analysts, product owners, etc., and here is something that bugs me: we have been able to sell crappy (albeit pretty functional) software for over a decade. It has many features and the product is competitive, but there are some serious bugs out there, as well as thousands of "paper cuts" - little annoyances that clients need to get used to. It pains me to look at some of these things, because I firmly believe that if computers do not help make our lives easier, then we should not use them. I have confidence in my colleagues - they are smart and able, and can improve things when the focus is on doing that. But it can be difficult to file bugs against old functionality without seeing them closed or forgotten. "It has worked like that for eons" is a typical answer. Also, when QA does regression testing, they tend to look for anything that is different as much as for anything that does not seem right. So a fix to an old problem can be written up as a bug, because "it has been like that since before even my time". The young coder in me thinks: rewrite this freaking thing! But as someone who has had the opportunity to be close to sales and clients, I want to give this approach the benefit of the doubt. I am interested in your opinion/experience as well. Please try to consider risk, cost-to-benefit, and other non-technical factors.

    Read the article

  • OpenGL Colorspace Conversion

    - by Steven Behnke
    Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV-to-RGB colorspace conversion without having to use a fragment shader? I'm using an NVidia 9400 and I don't see an obvious GL extension that does the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the regression testing necessary. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the NVidia card.

    Read the article

  • Error reading in dates using zoo in R

    - by SASnewby
    I am trying to use the zoo package to read in daily observations which do not occur on every day. The dates are in the first column of the dataset, in a format like 25-May-07. The command I use is:

        z <- read.zoo("multiplier7.csv", sep = ";", header = TRUE, format = "%d-%b-%y")

    I get this error:

        Error in strptime(x, format, tz = "GMT") : input string is too long

    Two questions: 1. Is zoo the right package to use if I want to run an lm regression with a spline time dummy? (The bs command does not work because I need to convert the data into a time series first.) 2. Why is this error occurring, and how do I fix it?

    Read the article

  • Good projects to learn OCaml and F#

    - by Yin Zhu
    After learning the basic syntax, reading some non-trivial code is a fast way to learn a language. We can also learn how to design a library or piece of software by reading others' code. I have the following list: a chess program in OCaml by Tomek Czajka; several machine learning libraries in OCaml by Hal Daumé, including decision trees, logistic regression, and SVMs, all near-production-quality code; and a chess game analysis program in F# from Microsoft Research. These three are my favorites. Can you suggest some other sources? General-purpose open source software is good; specialized open source projects like the three listed here are even more welcome.

    Read the article

  • TortoiseSVN 1.6.8 missing repository browser in Branch/Tag "To URL" dialog?

    - by Ash
    It seems that in TortoiseSVN 1.6.8 (on Windows), when you click the "To URL..." button in the Branch/Tag dialog, it now pops up a generic "browse for folders" dialog. It used to pop up a Repository Browser. Displaying a regular folder browser isn't much use, since you can't navigate to any of the tags/branches via the file system. Does anyone know if this is a regression or a deliberate change? Any possible workarounds (other than reverting to 1.6.7, which works fine)? Notes: (1) I am running a repository on the local file system, which may behave differently from one accessed across a network. (2) I'm definitely using an FSFS repository, so changes to BDB access via file:/// shouldn't apply. (3) The only reference I could find to this problem is here: http://groups.google.com/group/tortoisesvn/browse_thread/thread/f3406d1bad89f1d9.

    Read the article

  • From a Perl test file, how do I check the contents of a file?

    - by justintime
    I want to test a script I have written in Perl, and specifically check what output it writes to file. I wrote the script some time ago and don't want to modify it to the extent of turning it into a module, but I would like to regression-test it before adding some small functional changes. So far I have:

        use Test::Command tests => 10;
        exit_is_num($cmd, 0);
        ....

    But the command produces some files, and I want to check that those files are what I expect (either equal to a reference copy or matching some regexp). Any suggestions?

    Read the article

  • Mapping functions of 2D numpy arrays

    - by perimosocordiae
    I have a function foo that takes an NxM numpy array as an argument and returns a scalar value. I have an AxNxM numpy array data, over which I'd like to map foo to give me a resultant numpy array of length A. Currently, I'm doing this:

        result = numpy.array([foo(x) for x in data])

    It works, but it seems like I'm not taking advantage of the numpy magic (and speed). Is there a better way? I've looked at numpy.vectorize and numpy.apply_along_axis, but neither works for a function of 2D arrays. EDIT: I'm doing boosted regression on 24x24 image patches, so my AxNxM is something like 1000x24x24. What I called foo above applies a Haar-like feature to a patch (so, not terribly computationally intensive).
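
    Since a Haar-like feature is just a weighted sum of pixels (a dot product with a fixed mask), the whole AxNxM batch can be reduced in one call instead of a Python-level loop. A minimal sketch, assuming foo has this weighted-sum form (the left-minus-right mask is hypothetical):

        import numpy as np

        A, N, M = 1000, 24, 24
        data = np.random.rand(A, N, M)

        # A hypothetical Haar-like feature: sum of the left half of the
        # patch minus the sum of the right half.
        mask = np.ones((N, M))
        mask[:, M // 2:] = -1.0

        # Per-patch loop, as in the question:
        result_loop = np.array([np.sum(patch * mask) for patch in data])

        # Vectorized over all A patches at once: contract the (N, M) axes
        # of data against the mask, leaving an array of shape (A,).
        result_vec = np.tensordot(data, mask, axes=([1, 2], [0, 1]))

        assert np.allclose(result_loop, result_vec)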

    Read the article

  • Essential skills of a Data Scientist

    - by harshsinghal
    I would like to know more about the relevant skills in the arsenal of a data scientist, and, with new technologies coming in every day, how one picks and chooses the essentials. A few ideas germane to this discussion: knowing SQL and a DB such as MySQL or PostgreSQL was great until the advent of NoSQL and non-relational databases; MongoDB, CouchDB, etc. are becoming popular for working with web-scale data. Knowing a stats tool like R is enough for analysis, but to create applications one may need to add Java, Python, and others to the list. Data now comes in the form of text, URLs, and multimedia, to name a few, and there are different paradigms associated with manipulating each. What about cluster computing, parallel computing, the cloud, Amazon EC2, Hadoop? OLS regression now has artificial neural networks, random forests, and other relatively exotic machine learning/data mining algorithms for company. Thoughts?

    Read the article

  • How to figure out optimal C / Gamma parameters in libsvm?

    - by Cuga
    I'm using libsvm for multi-class classification of datasets with a large number of features/attributes (around 5,800 per item). I'd like to choose better parameters for C and gamma than the defaults I am currently using. I've already tried running easy.py, but for the datasets I'm using the estimated time is near forever (I ran easy.py at 20, 50, 100, and 200 data samples, and fitting a regression to the super-linear growth in runtime projected the necessary runtime at years). Is there a way to arrive at better C and gamma values than the defaults more quickly? I'm using the Java libraries, if that makes any difference.
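
    The usual shortcut is a coarse log-scale grid search on a random subsample, then refitting the best (C, gamma) on the full training set. A sketch of the idea in Python with scikit-learn - illustrative only, since the asker uses the Java libraries, and the synthetic data is a stand-in:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        # Stand-in data; in practice this is the real feature matrix.
        X, y = make_classification(n_samples=2000, n_features=100, random_state=0)

        # Coarse log2-scale grid, as suggested in the libsvm authors' guide.
        param_grid = {
            "C": 2.0 ** np.arange(-5, 16, 4),      # 2^-5 .. 2^15
            "gamma": 2.0 ** np.arange(-15, 4, 4),  # 2^-15 .. 2^3
        }

        # Tune on a subsample so each fit stays cheap; refit the winning
        # (C, gamma) on the full training set afterwards.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=500, replace=False)

        search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, n_jobs=-1)
        search.fit(X[idx], y[idx])
        print("best parameters:", search.best_params_)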

    Read the article

  • Should I unit test my JavaScript?

    - by Joseph Silvashy
    I'm curious whether it would be valuable. I'd like to start using QUnit, but I really don't know where to get started. Actually, I'm not going to lie: I'm new to testing in general, not just with JS. I'm hoping to get some tips on how to start unit testing an app that already has a fair amount of JavaScript (OK, about 500 lines - not huge, but enough to make me wonder if I have regressions that go unnoticed). How would you recommend getting started, and where would I write my tests? (For example, it's a Rails app - where is a logical place to keep my JS tests? It would be cool if they could go in the /test directory, but that's outside the public directory and thus not possible... err, is it?)

    Read the article

  • Is JBehave a good choice for Web Service Automated Testing?

    - by Vanchinathan
    We have a requirement at my workplace to automate our web service testing, which we have been doing with QTP scripts. As a team, we are leaning towards JBehave. Is JBehave a good choice for automating functional tests of web services? We currently test manually with SoapUI, but we plan to automate the functional and regression testing to reduce release cycle time. Suggestions welcome.

    Read the article

  • Automatic VM deployment

    - by Robert Wilson
    I have an idea to streamline deployments of prototypes within our team using VMs. A developer would deploy their artifacts to Maven, then use a web interface to pull them onto a development VM for integration/regression testing. They would then be able to push those artifacts to a reference system, and finally on to production. I'm currently thinking of doing this myself using the vSphere Java API ( http://vijava.sourceforge.net/ ) and some simple scripting to grab artifacts from the Maven repository and configuration from SVN, and then start up a JBoss server. It feels like the kind of thing that may already be available, though - has anyone heard of something similar?

    Read the article

  • How to make double[,] x_List in C# 3.0?

    - by Newbie
    I need to implement multiple linear regression in C# (3.0) by using the LINEST function of Excel. Basically I am trying to achieve:

        =LINEST(ACL_returns!I2:I10, ACL_returns!J2:K10, FALSE, TRUE)

    So I have the data below:

        double[] x1 = new double[] { 0.0330, -0.6463, 0.1226, -0.3304, 0.4764, -0.4159, 0.4209, -0.4070, -0.2090 };
        double[] x2 = new double[] { -0.2718, -0.2240, -0.1275, -0.0810, 0.0349, -0.5067, 0.0094, -0.4404, -0.1212 };
        double[] y = new double[] { 0.4807, -3.7070, -4.5582, -11.2126, -0.7733, 3.7269, 2.7672, 8.3333, 4.7023 };

    I have to write a function whose signature will be:

        Compute(double[,] x_List, double[] y_List)
        {
            LinEst(x_List, y_List, true, true); // <- the Excel function that I will call
        }

    My question is: how, using double[] x1 and double[] x2, do I make double[,] x_List? I am using C# 3.0 and .NET Framework 3.5. Thanks in advance.
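
    Not C#, but for illustration, the same construction in NumPy shows the shape logic: stacking the two 1-D arrays column-wise yields the 9x2 design matrix (the double[,] analogue), and a no-intercept least-squares solve mirrors LINEST(y, x, FALSE, ...):

        import numpy as np

        # The two predictor columns and the response from the question.
        x1 = np.array([0.0330, -0.6463, 0.1226, -0.3304, 0.4764,
                       -0.4159, 0.4209, -0.4070, -0.2090])
        x2 = np.array([-0.2718, -0.2240, -0.1275, -0.0810, 0.0349,
                       -0.5067, 0.0094, -0.4404, -0.1212])
        y = np.array([0.4807, -3.7070, -4.5582, -11.2126, -0.7733,
                      3.7269, 2.7672, 8.3333, 4.7023])

        # column_stack turns two length-9 vectors into a 9x2 matrix:
        # each input array becomes one column.
        X = np.column_stack((x1, x2))

        # With the third LINEST argument FALSE (no intercept), the fit is
        # an ordinary least-squares solve on the raw design matrix.
        coeffs, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
        print(coeffs)  # [b1, b2]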

    Read the article

  • Postgres pg_dump dumps database in a different order every time

    - by behrk2
    I am writing a PHP script (which also uses Linux bash commands) that will run through test cases by doing the following, using a PostgreSQL database (8.4.2):

    1.) Create a DB.
    2.) Modify the DB.
    3.) Store a database dump of the DB (pg_dump).
    4.) Do regression testing by repeating steps 1.) and 2.), then taking another database dump and comparing it (diff) with the original dump from step 3.)

    However, I am finding that pg_dump will not always dump the database in the same way; it dumps things in a different order every time. Therefore, when I diff the two database dumps, the comparison reports the two files as different when they actually have the same content, just in a different order. Is there a different way I can go about doing the pg_dump? Thanks!
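
    If no pg_dump option produces a canonical ordering, one pragmatic workaround is to compare sorted lines rather than the raw files - in bash, diff <(sort a.sql) <(sort b.sql). A sketch of the same idea in Python (file names hypothetical; note that sorting can mask genuine ordering differences inside multi-line statements, so this is a workaround, not a canonical dump):

        def dumps_match(path_a, path_b):
            # Compare the two dumps line by line, ignoring line order.
            with open(path_a) as fa, open(path_b) as fb:
                return sorted(fa) == sorted(fb)

        print(dumps_match("dump_step3.sql", "dump_step4.sql"))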

    Read the article

  • Devising a test strategy

    - by Simon Callan
    As part of a new job, I have to devise and implement a complete test strategy for the company's new product. So far, all I really know about it is that it is written in C++, uses an SQL database, and has a web API that is consumed by a browser client written using GWT. As far as I know, there isn't much of an existing strategy, except for some Python scripts that test the web API. I need to develop and implement a suitable strategy for unit, system, regression, and release testing, preferably a fully automated one. I'm looking for good references on: devising the complete test strategy; testing the web API; testing the GWT-based application; and unit testing C++ code. In addition, any suitable tools would be appreciated.

    Read the article

  • How has test first development changed the way you write software?

    - by Toran Billups
    I've started to find that I can't write software without writing a test first. I ask this subjective question because I want to hear what others in the community think about the reasons I can't go back to writing production code without a test first:

    - If you can't write a test for something, you don't understand it.
    - Without a regression test, you can't clean up the code.
    - You are going to test it anyway; spend the time to do it right.
    - Evolutionary design is possible without fear.
    - You actually write less code yourself.
    - Fast feedback cycles save time and money.
    - Job security (fewer bugs make your boss happy).
    - It actually makes my work more enjoyable.

    Read the article

  • R error message about variable lengths

    - by Abraham
    I ran the following code to recode the variable. Unfortunately, when I move on to run a logit model (using the Zelig package), I get an error message that the variable lengths differ for this variable.

        ## Independent Variable - Partisanship (ANES 2004)
        data04$V043114
        part <- data04$V043114
        attributes(part)
        summary(part)
        partb <- part
        partb[part %in% levels(part)[4]] <- NA
        partb[part %in% levels(part)[5]] <- NA
        partb[part %in% levels(part)[6]] <- NA
        partb[part %in% levels(part)[7]] <- NA
        partb <- factor(partb)
        attributes(partb)
        summary(partb)
        table(partb)
        table(part, partb)
        cbind(part, partb)
        partisan041 <- partb
        partisan042 <- as.numeric(partb)
        summary(partisan041)
        summary(partisan042)

        ## Regression Model - ANES 2004 ##
        anes04one <- zelig(trade041a ~ age042 + education042 + personal042 +
            economy042 + partisan042 + employment042 + union042 + home042 +
            market042 + race042 + income042 + gender042,
            model = "logit", data = data04)
        summary(anes04one)

        # Error in model.frame.default(formula = trade041a ~ age042 + education042 + :
        #   variable lengths differ (found for 'partisan042')

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've seen System.Diagnostics.Stopwatch. However, I want to avoid writing measuring code into my classes; I would prefer to separate measuring from the actual code. Just as, when debugging, you can set breakpoints on several lines and jump from one to the next with "Continue Execution", I imagine something similar for measuring: mark two lines of code and have Visual Studio display the time elapsed between them. Is there any feature or add-in along those lines?

    Read the article

  • Java: How to test methods that call System.exit()?

    - by Chris Conway
    I've got a few methods that should call System.exit() on certain inputs. Unfortunately, testing these cases causes JUnit to terminate! Putting the method calls in a new Thread doesn't help, since System.exit() terminates the JVM, not just the current thread. Are there any common patterns for dealing with this? For example, can I substitute a stub for System.exit()? [EDIT] The class in question is actually a command-line tool which I'm attempting to test inside JUnit. Maybe JUnit is simply not the right tool for the job? Suggestions for complementary regression-testing tools are welcome (preferably something that integrates well with JUnit and EclEmma).
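
    A common pattern in Java is to install a SecurityManager whose checkExit throws a runtime exception, so the test catches the attempted exit instead of losing the JVM. For comparison, a minimal sketch of the same intercept-the-exit idea in Python (names hypothetical), where sys.exit raises a catchable SystemExit:

        import sys
        import pytest

        def tool_main(argv):
            # Stand-in for the command-line tool under test: it "calls
            # exit" on bad input, just like the Java class in question.
            if not argv:
                sys.exit(2)

        def test_exits_with_code_2():
            # sys.exit raises SystemExit, so the test can assert on the
            # exit code without the test runner being terminated.
            with pytest.raises(SystemExit) as excinfo:
                tool_main([])
            assert excinfo.value.code == 2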

    Read the article

  • How often should we write unit tests?

    - by Midnight Blue
    Hi, I was recently introduced to the test-driven approach to development by my mentor at work, and he encourages me to write a unit test whenever "it makes sense." I understand some benefits of having a thorough unit-test suite for both regression testing and refactoring, but I do wonder how often, and how thoroughly, we should write unit tests. My mentor/development lead asks me to write a new unit-test case for a newly written control flow in a method that is already covered by the existing test class, and I think that is overkill. How often do you write your unit tests, and how detailed do you think your unit tests should be? Thanks!

    Read the article

  • Panel data with binary dependent variable in R

    - by Abiel
    Is it possible to do regressions in R using a panel data set with a binary dependent variable? I am familiar with using glm for logit and probit and plm for panel data, but am not sure how to combine the two. Are there any existing code examples? Thank you. EDIT: It would also be helpful if I could figure out how to extract the model matrix that plm() uses when it runs a regression. For instance, you could use plm to do fixed effects, or you could create a matrix with the appropriate dummy variables and then run that through glm(). In a case like this, however, it is annoying to generate the dummies yourself, and it would be easier to have plm do it for you. Abiel

    Read the article
