Search Results

Search found 265 results on 11 pages for 'regression'.

  • Linux kernel regression on power usage

    - by dago
    Webupd8 reported this power management fix for the 2.6.38 Linux kernel regression: add "pcie_aspm=force" to the kernel boot line in GRUB. My question: how does this suggested fix differ from this hint from powertop? Suggestion: Enable Device Power Management by pressing the P key, which executes the following action: find /sys/devices/pci* -path "*power/control" -exec bash -c "echo auto > '{}'" \;

  • How to troubleshoot a wireless networking regression?

    - by fluteflute
    I've been experiencing what is perhaps a slightly odd bug. Wireless works flawlessly in Lucid, but not in Maverick or Natty. It seems to work on a partition I boot every day (as I do with my main 10.10 partition), but on my 11.04 testing partition it's a real pain, usually refusing to connect. So, given that I have both a working install (10.04) and non-working installs (10.10 and 11.04), how can I troubleshoot my problem?

  • How can I connect my HP Photosmart C3100 printer in 10.04 (regression from 9.x)

    - by Brian
    My printer was working under 9.x. It is an HP Photosmart C3100 series. When I open Admin > Printing, no local printers are found. I tried to add it via Other (my local choices are Serial and Other). I have tried many URIs: ipp://localhost:631/ipp, http://localhost/ipp, localhost, 127.0.0.1, etc. None have worked. Under networked printers I have tried JetDirect, using localhost and 127.0.0.1 with port 631. I have tried many options under IPP with different variants of the host, trying to verify a printer. No luck. I tried LPD/LPR with localhost and tried the probe; no luck. I tried the CUPS admin page via localhost:631 and that didn't work either. On the old version it simply found the local printer; I might have picked the driver, I can't remember, but it was the Photosmart C3100 series driver that was working. I just can't get 10.04 to print.

  • Regression testing for firewall changes

    - by James C
    We have a number of firewalls in place around our organisation, and in some cases packets can pass through four levels of firewall limiting the flow of TCP traffic. A concept I'm used to from software testing is regression testing: running a test suite against a changed application to verify that the new changes haven't affected any old features. Does anyone have experience with, or can anyone offer solutions for, doing the same kind of thing with firewall changes and network testing? The problem becomes a lot more complicated because you'd ideally want to originate (and test receipt of) packets across many machines.

  • Exponential regression : p-value and F significance

    - by Saravanan K
    I am new to statistics. I have a set of independent and dependent data (X, Y) for which I would like to do an exponential regression to obtain its p-value and Significance F (I have already obtained R² and the coefficients through direct calculation). What is the natural path from the (X, Y) data to mathematically calculating those values? I have spent a week on the internet studying this but have been unable to find the right answer. Often exponential data, y = be^(mx), is first converted to linear form, ln y = mx + ln b. A linear regression is then done on the converted data, yielding its p-value and so on. Assume we use a statistical tool such as Excel's Analysis ToolPak (Data Analysis > Regression); it will produce a result such as the statistics output shown in the image. I believe the p-value and Significance F there represent the converted linear data and not the original exponential data. Questions:

    1. What approach/steps does Excel use to get the p-value and Significance F for the converted linear data, as shown in the statistics output in the image above? It is not clear from its help page or website.
    2. Can the p-value and Significance F be calculated mathematically for an exponential regression without using a statistical tool?

    Can you point me to the right link if this has been answered before?
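
    A minimal sketch in R of the linearize-then-fit procedure described above, on made-up data; the per-coefficient p-value and the overall F statistic it extracts are the same quantities Excel labels "P-value" and "Significance F".

        # Made-up data assumed to follow y = b * exp(m * x)
        x <- c(1, 2, 3, 4, 5, 6, 7, 8)
        y <- c(2.1, 3.4, 5.3, 8.2, 13.0, 20.5, 32.4, 51.1)

        # Linearize: ln(y) = m*x + ln(b), then ordinary least squares
        fit <- lm(log(y) ~ x)
        s   <- summary(fit)

        s$coefficients["x", "Pr(>|t|)"]   # p-value of the slope m

        # Overall F statistic and its p-value ("Significance F")
        f <- s$fstatistic
        pf(f["value"], f["numdf"], f["dendf"], lower.tail = FALSE)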

  • Graphing perpendicular offsets in a least squares regression plot in R

    - by D W
    I'm interested in making a plot with a least-squares regression line and line segments connecting the data points to the regression line, as illustrated in the graphic called "perpendicular offsets" here: http://mathworld.wolfram.com/LeastSquaresFitting.html

    I have the plot and regression line done here:

        ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html
        ## Disease severity as a function of temperature

        # Response variable, disease severity
        diseasesev <- c(1.9, 3.1, 3.3, 4.8, 5.3, 6.1, 6.4, 7.6, 9.8, 12.4)

        # Predictor variable, temperature (Centigrade)
        temperature <- c(2, 1, 5, 5, 20, 20, 23, 10, 30, 25)

        # Collect the vectors into the data frame that lm() below expects
        severity <- data.frame(diseasesev, temperature)

        ## Fit a linear model for the data and summarize the output from lm()
        severity.lm <- lm(diseasesev ~ temperature, data=severity)

        # Take a look at the data
        plot(
          diseasesev ~ temperature,
          data=severity,
          xlab="Temperature",
          ylab="% Disease Severity",
          pch=16
        )
        abline(severity.lm, lty=1)
        title(main="Graph of % Disease Severity vs Temperature")

    Should I use some kind of for loop with segments() (http://www.iiap.res.in/astrostat/School07/R/html/graphics/html/segments.html) to draw the perpendicular offsets? Is there a more efficient way? Please provide an example if possible.
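
    A minimal sketch of one way to do it, reusing severity.lm from above: project each point onto the fitted line and draw all the connecting segments in one vectorized segments() call, no loop needed. Note that lm() minimizes vertical offsets, and the drawn segments only look truly perpendicular if the plot uses asp=1.

        a <- coef(severity.lm)[1]   # intercept
        b <- coef(severity.lm)[2]   # slope

        # Foot of the perpendicular from each (x0, y0) to the line y = a + b*x
        x0 <- temperature
        y0 <- diseasesev
        xf <- (x0 + b * (y0 - a)) / (1 + b^2)
        yf <- a + b * xf

        segments(x0, y0, xf, yf, lty=2)

        # For the *vertical* offsets that lm() actually minimizes, use instead:
        # segments(x0, y0, x0, fitted(severity.lm), lty=2)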

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, only the interpretation of the effect of individual predictors. One way to spot collinearity is to take each predictor in turn as a dependent variable, with the other predictors as independent variables, determine R², and if it's larger than .9 (or .95), consider that predictor redundant. This is one "method"... what about other approaches? Some of them are time-consuming, like excluding predictors from the model and watching for changes in the b-coefficients: they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
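
    A minimal sketch in R of the screen described above, on made-up data; the same R² values, re-expressed as 1/(1 − R²), are the variance inflation factors that packaged functions such as car::vif report.

        set.seed(1)
        dat <- data.frame(x1 = rnorm(100), x3 = rnorm(100))
        dat$x2 <- dat$x1 + rnorm(100, sd = 0.1)   # nearly collinear with x1

        # Regress each predictor on all the others and collect R^2
        r2 <- sapply(names(dat), function(v) {
          summary(lm(reformulate(setdiff(names(dat), v), v), data = dat))$r.squared
        })

        r2              # values above .9 flag redundant predictors
        1 / (1 - r2)    # the equivalent variance inflation factors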

  • How much detail is in a good UI regression test?

    - by GlenPeterson
    We use a detailed step-by-step user-interface regression test for our commercial web application. It has a "backbone" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high-quality software. But having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or not notice fairly obvious problems such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, "Just demonstrate the system to yourself," they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new untrained tester can test with it without asking any questions. Yet the detail seems to be counter-productive. How much detail do you put in a regression test to make it effective? What techniques make the tester focus more on the system than on checking off items on the test?

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy? The reason I ask is that there seems to be a lot of waste (and risk) in an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH even though they had no feature changes or hot fixes. They have to be tested simply because the business is rolling out updates every month. And consider the risk involved: if a mission-critical application takes a few weeks, and multiple departments, to test after deployment, is it realistic to expect to regression-test it constantly? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and constant deployments of it) means the mission-critical app never gets to sit on the same code base for long. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but those don't exist here at the moment. Any advice?

  • Automation testing tool for Regression testing of desktop application

    - by user285037
    Hi, I am working on a desktop application which uses Infragistics grids, and we need to automate its regression tests. QTP alone does not support this; we would need to buy an additional plug-in, which my company is not very interested in doing. Is there any open-source tool for automating regression testing of a desktop application? The application is in .NET, but I do not think that makes much of a difference. Please advise; I had zeroed in on TestComplete, but again, it is a licensed product. I need something open source.

  • Multiple outliers for two variable linear regression

    - by Dave Jarvis
    Problem

    Building on my previous question, the "extreme" outliers in the following graph are somewhat obvious:

    Question

    Given:

        T    - set of all temperatures
        Y    - set of all years
        ST   - sum of temperatures
        SY   - sum of years
        N    - number of elements
        T(n) - temperature of the nth element in the temperature set

    How would you implement an efficient MySQL stored procedure or user-defined function (UDF) to determine whether T(n) is an outlier? (If such an implementation already exists, that would be good to know as well.)

    Related Sites

    I am slowly working through these sites to get a better understanding of the problem:

        Multiple Outliers Detection Procedures in Linear Regression
        M-estimator
        Measure of Surprise for Outlier Detection
        Ordinary Least Squares Linear Regression

    Many thanks!
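
    A minimal sketch, in R rather than MySQL, of one common rule a stored procedure could reproduce: fit the line, then flag points whose standardized residual exceeds a cutoff. The data here is made up.

        set.seed(42)
        year <- 1900:1999
        temp <- 10 + 0.02 * (year - 1900) + rnorm(100, sd = 0.5)
        temp[c(10, 60)] <- temp[c(10, 60)] + 5    # inject two outliers

        fit <- lm(temp ~ year)
        z   <- rstandard(fit)        # residuals scaled by their estimated SD
        which(abs(z) > 3)            # indices flagged as outliers

        # In SQL this amounts to: compute SLOPE and INTERCEPT from the usual
        # closed-form sums, compute the residual T(n) - (SLOPE*year + INTERCEPT),
        # and compare it against k times the residual standard deviation.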

  • Efficient Multiple Linear Regression in C# / .Net

    - by mrnye
    Does anyone know of an efficient way to do multiple linear regression in C#, where the number of simultaneous equations may be in the thousands (with 3 or 4 different inputs)? After reading this article on multiple linear regression I tried implementing it with a matrix equation:

        Matrix y = new Matrix(
            new double[,]{{745},
                          {895},
                          {442},
                          {440},
                          {1598}});

        Matrix x = new Matrix(
            new double[,]{{1, 36, 66},
                          {1, 37, 68},
                          {1, 47, 64},
                          {1, 32, 53},
                          {1,  1, 101}});

        Matrix b = (x.Transpose() * x).Inverse() * x.Transpose() * y;

        for (int i = 0; i < b.Rows; i++)
        {
            Trace.WriteLine("INFO: " + b[i, 0].ToDouble());
        }

    However it does not scale well to thousands of equations, due to the matrix inversion operation. I can call the R language and use that, but I was hoping there would be a pure .NET solution which will scale to these large sets. Any suggestions?

    EDIT #1: I have settled on using R for the time being. Using statconn (downloaded here) I have found it to be both fast and relatively easy to use. I.e. here is a small code snippet; it really isn't much code at all to use the R statconn library (note: this is not all the code!).

        _StatConn.EvaluateNoReturn(string.Format("output <- lm({0})", equation));

        object intercept = _StatConn.Evaluate("coefficients(output)['(Intercept)']");
        parameters[0] = (double)intercept;

        for (int i = 0; i < xColCount; i++)
        {
            object parameter = _StatConn.Evaluate(string.Format("coefficients(output)['x{0}']", i));
            parameters[i + 1] = (double)parameter;
        }
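
    A minimal sketch, in R, of the normal-equations route that sidesteps the scaling concern: with only 3 or 4 inputs, X'X is a tiny (p+1)-square matrix, forming it is a single linear pass over the rows, and the small system can be solved directly rather than inverting anything large. The same idea ports to C# with any matrix library; the data below is made up.

        set.seed(7)
        n <- 5000                                   # thousands of equations
        X <- cbind(1, matrix(rnorm(n * 3), n, 3))   # intercept column + 3 inputs
        y <- X %*% c(2, 0.5, -1, 3) + rnorm(n)

        # Solve the 4x4 system (X'X) b = X'y; no large inversion involved
        b <- solve(crossprod(X), crossprod(X, y))
        b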

  • CSS regression tool?

    - by ronaldwidha
    I'm looking for a visual regression testing tool for CSS refactoring, to see whether there is any unintended cascading behavior in a website. Ideally, a tool that can crawl a website (even locally), grab snapshots of each page, and store them in a single repository. When run a second time, it would show the pages that are visually different since the last run. Even better if it can: show an overlapped XOR view of the two versions of a page; compare rendering results of different browsers (almost like an automated Microsoft Expression Web compare feature). Thanks

  • Optimal two variable linear regression calculation

    - by Dave Jarvis
    Problem

    Am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     =   0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) +
                  POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized here:

    Question

    The following results (used to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).

    Related Sites

        Least absolute deviations
        Robust regression

    Thank you!
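
    As a sanity check on the closed-form sums, a minimal sketch in R: feed it the same (YEAR, AMOUNT) pairs the inner query returns (made-up values shown here) and compare lm()'s coefficients with the SQL output. If they agree, the formulas are fine and the ~10-degree gap comes from the rows themselves (outliers, or unexpected AMOUNT units).

        # Stand-in for the inner query's result set
        year   <- 1900:2009
        amount <- 10 + 0.0125 * (year - 1900) + rnorm(length(year), sd = 1)

        coef(lm(amount ~ year))   # compare INTERCEPT and SLOPE with the SQL values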

  • Linear Regression and Java Dates

    - by Smithers
    I am trying to find the linear trend line for a set of data. The set contains pairs of dates (x values) and scores (y values). I am using a version of this code as the basis of my algorithm. The results I am getting are off by a few orders of magnitude. I assume that there is some problem with round off error or overflow because I am using Date's getTime method which gives you a huge number of milliseconds. Does anyone have a suggestion on how to minimize the errors and compute the correct results?
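
    A minimal sketch, shown in R for brevity, of the usual remedy for exactly this symptom: shift the time axis to a nearby origin and rescale milliseconds to days before fitting, so the x values stay small and well-conditioned; in Java the same transformation applies to the getTime() values. Only the slope's units change: slope per millisecond = slope per day / 86,400,000.

        # Ten daily timestamps in epoch milliseconds, like Date.getTime()
        ms <- 1.3e12 + (0:9) * 86400000
        y  <- 3 + 0.5 * (0:9) + rnorm(10, sd = 0.1)

        days <- (ms - ms[1]) / 86400000   # small, well-conditioned x values
        coef(lm(y ~ days))                # slope is per day; divide by 86400000
                                          # to recover the per-millisecond slope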

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem

    Am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     =   0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) +
                  POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized here (with five outliers highlighted):

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (used to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).

    Thank you!

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem

    Am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     =   0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) +
                  POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized here:

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (used to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).

    Thank you!

  • Looping through covariates in regression using R

    - by Kyle Peyton
    I'm trying to run 96 regressions and save the results as 96 different objects. To complicate things, I want the subscript on one of the covariates in the model to also change 96 times. I've almost solved the problem but I've unfortunately hit a wall. The code so far is:

        for (i in 1:96) {
          assign(paste("z.out", i, sep=""),
                 lm(rMonExp_EGM ~ TE_i + Month2 + Month3 + Month4 + Month5 +
                      Month6 + Month7 + Month8 + Month9 + Month10 + Month11 +
                      Month12 + Yrs_minus_2004 + as.factor(LGA),
                    data=Pokies))
        }

    This works on the object-creation side (e.g., I have z.out1 through z.out96) but I can't seem to get the subscript on the covariate to change as well. I have 96 variables called TE_1, TE_2, ..., TE_96 in the dataset. As such, the subscript on TE_, the "i", needs to change to correspond to each of the objects I create. That is, z.out1 should hold the results from this model:

        z.out1 <- lm(rMonExp_EGM ~ TE_1 + Month2 + Month3 + Month4 + Month5 +
                       Month6 + Month7 + Month8 + Month9 + Month10 + Month11 +
                       Month12 + Yrs_minus_2004 + as.factor(LGA),
                     data=Pokies)

    And z.out96 should be:

        z.out96 <- lm(rMonExp_EGM ~ TE_96 + Month2 + Month3 + Month4 + Month5 +
                        Month6 + Month7 + Month8 + Month9 + Month10 + Month11 +
                        Month12 + Yrs_minus_2004 + as.factor(LGA),
                      data=Pokies)

    Hopefully this makes sense. I'm grateful for any tips/advice. cheers, kyle
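
    A minimal sketch of one standard fix: build the formula as a string with paste0() and convert it with as.formula(), so TE_1 through TE_96 enter the model in turn. This assumes the asker's Pokies data frame; collecting the fits in a list is also usually tidier than 96 assign()ed objects.

        # Month2 + Month3 + ... + Month12 as a single string
        rhs <- paste0("Month", 2:12, collapse = " + ")

        z.out <- vector("list", 96)
        for (i in 1:96) {
          f <- as.formula(paste0(
            "rMonExp_EGM ~ TE_", i, " + ", rhs,
            " + Yrs_minus_2004 + as.factor(LGA)"))
          z.out[[i]] <- lm(f, data = Pokies)   # assumes the asker's data frame
        }
        # z.out[[1]] now uses TE_1, ..., z.out[[96]] uses TE_96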
