Search Results

Search found 2480 results on 100 pages for 'dave jarvis'.

Page 26 of 100

  • MDX: How To Aggregate Hierarchy Level Members With Same Name

    - by Dave Frautnick
    Greetings, I am new to MDX, and am having trouble understanding how to perform an aggregation on a hierarchy level with members that have the same names. This query is particular to Microsoft Analysis Services 2000 cubes. I have a given hierarchy dimension with levels defined as follows: [Segment].[Flow].[Segment Week] Within the [Segment Week] level, I have the following members: [Week- 1] [Week- 2] [Week- 3] ... [Week- 1] [Week- 2] [Week- 3] The members have the same names, but are aligned with a different [Flow] in the parent level. So, the first occurrence of the [Week- 1] member aligns with [Flow].[A] while the second occurrence of [Week- 1] aligns with [Flow].[B]. What I am trying to do is aggregate all the members within the [Segment Week] level that have the same name. In SQL terms, I want to GROUP BY the member names within the [Segment Week] level. I am unsure how to do this. Thank you. Dave

    Read the article

  • How can I make my search more broad?

    - by user1804952
    I created this MySQL search string and it is too literal, or maybe not literal enough. If I search for Dave, for example, it will only find items like "Dave something" and not find Dave. $queryArtist = mysql_query("SELECT * FROM artists WHERE artist LIKE '%$ArtistNameSearch%' ORDER BY artist ASC"); I know mysql_query is outdated and will change it to mysqli as soon as I get this worked out. Stuck here. An example of it not working: Example of search. Could it be because of the %20 space? I got it partly figured out, but it still does NOT find "one direction" or other things even though they exist. Here is what I have now: $queryArtist = mysql_query("SELECT * FROM artists WHERE match(artist) against('$SafeSearchTerm' in boolean mode)");
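
    A minimal sketch of one possible direction (not from the original question; it assumes the search term has already been escaped before being placed in the query). With a MyISAM FULLTEXT index, boolean-mode searches drop words shorter than ft_min_word_len (4 by default) and stopwords, which could be part of why short terms such as "one" never match; ORing in a LIKE fallback catches those cases:

        SELECT *
        FROM artists
        WHERE MATCH(artist) AGAINST('one direction' IN BOOLEAN MODE)
           OR artist LIKE '%one direction%'
        ORDER BY artist ASC;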

    Read the article

  • iPhone 4.0 Screen Resolution and writing robust code...

    - by Magic Bullet Dave
    Does anyone know what will happen with existing apps when they run on the iPhone 4.0 in terms of the new screen resolution? I am assuming, just as when developing for the iPad, that there should be no hard-coded screen resolutions in your code. I'd also like advice on the best way of writing robust code to work well on any device. For instance, detecting the screen resolution is not enough: on the iPad the screen is physically bigger, so you can display more items on it. On the new iPhone the screen is the same physical size but higher resolution, so you likely won't want to display more items, just higher-resolution versions of them. Any help would be useful. Regards, Dave. EDIT: I have read the other similar posts; I guess what I really would like to know is the recommended way to write robust code for all App Store devices so they a) all work and b) make best use of the device.

    Read the article

  • MSSQL 2000 Stored Procedure to Split Shift Times

    - by JClaspill
    I am being asked to alter a system to include the ability to have pay differentials based on hours worked. The old method included a stored procedure (MSSQL 2000 database) that did the basics, but it only knows the start and end of every shift. This is the information I start with:
        EMPLOYEE | TYPE   | HOURS    | INSTAMP                 | OUTSTAMP
        Dave     | Hourly | 8.643055 | 2011-01-08 07:57:35.557 | 2011-01-08 16:36:10.120
    And I need to turn that into something like:
        EMPLOYEE | TYPE       | HOURS | INSTAMP                 | OUTSTAMP
        Dave     | Hourly     | 4.00  | 2011-01-08 08:00:00.000 | 2011-01-08 12:00:00.000
        Dave     | ShiftDiff1 | 4.50  | 2011-01-08 12:00:00.000 | 2011-01-08 16:30:00.000
    The shift differentials range from certain hours, to certain days, to a combination of both. Should I try to make the SQL 2000 stored procedure do this, or pass the information back to my ASP.NET (C#) app, let it handle the split, and then send it back?
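
    A rough sketch of one way the split could be expressed in T-SQL (not from the original post; the table name TimePunches and the fixed 12:00 differential boundary are assumptions for illustration). A punch that spans the boundary is split into an Hourly row and a ShiftDiff1 row:

        DECLARE @diffStart DATETIME
        SET @diffStart = '2011-01-08 12:00:00'

        -- Portion of the shift worked before the differential starts
        SELECT EMPLOYEE, 'Hourly' AS [TYPE],
               DATEDIFF(minute, INSTAMP, @diffStart) / 60.0 AS [HOURS],
               INSTAMP, @diffStart AS OUTSTAMP
        FROM TimePunches
        WHERE INSTAMP < @diffStart AND OUTSTAMP > @diffStart
        UNION ALL
        -- Portion of the shift worked after the differential starts
        SELECT EMPLOYEE, 'ShiftDiff1',
               DATEDIFF(minute, @diffStart, OUTSTAMP) / 60.0,
               @diffStart, OUTSTAMP
        FROM TimePunches
        WHERE INSTAMP < @diffStart AND OUTSTAMP > @diffStart

    Whether this lives in the stored procedure or in C# mostly depends on where the differential rules are maintained; keeping them in one place avoids duplicating the logic.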

    Read the article

  • Flex: How can I use the @ContextRoot in a Button or LinkButton

    - by Dave Meurer
    I'm trying to create a button that will simply link back to the context root. I noticed Flex has a @ContextRoot attribute that appears to work only in certain cases. For example, if I try to use it in the following MXML: <mx:Button label="Back to Root" click="navigateToURL(new URLRequest(@ContextRoot()), '_parent')"/> I get the following error: Error: Attributes are not callable. I can't seem to find this technique explained anywhere; is there another way? Thanks for the help! Dave

    Read the article

  • Possible to map a new file extension to an existing handler in ASP.NET?

    - by Dave
    I have a scenario where my application is going to be publishing services that are consumed by both PCs and mobile devices, and I have an HttpModule that I want to perform work on only the mobile requests. So I thought the best way of doing this was to point the mobile requests to a different file extension and have the HttpModule decide to process only if the request targets this new extension. I don't need a custom HttpHandler for the new extension; I want to program the services like a normal .ASMX service, just with a different extension. First, can I do this? If so, how do I do it so that requests to my new extension are handled just like .ASMX requests? Second, is this the right approach? Am I going about separating and managing the mobile vs. PC requests the wrong way? Thanks, Dave

    Read the article

  • Removing a UIView from its superView and expanding its frame to full screen

    - by Magic Bullet Dave
    I have an object that is a subclass of UIView that can be added to a view hierarchy as a subview. I want to be able to remove the UIView from its superview, add it as a subview of the main window, and then expand it to full screen. Something along the lines of: // Remove from superView and add to mainWindow [self retain]; [self removeFromSuperview]; [mainWindow addSubview:self]; // Animate to full screen [UIView beginAnimations:@"expandToFullScreen" context:nil]; [UIView setAnimationDuration:1.0]; self.frame = [[UIScreen mainScreen] applicationFrame]; [UIView commitAnimations]; [self release]; Firstly, am I on the right lines? Secondly, is there an easy way for the object to get a pointer to the mainWindow? Thanks, Dave

    Read the article

  • Best way to store application images taken via camera

    - by Dave
    Hi all, I'm just looking for some insight into what would be the best way for me to store images as part of my app. I have an activity that represents a 'Job' which has a couple of EditText fields, and underneath I was planning on using the Gallery component to show images relevant to this job. The job data is stored in a database (on the SD card), so I was also thinking of creating a table to store 'JobImages' and having each image stored as a byte array. But I'm not sure if it would be better to store the images directly on the SD card under a folder structure specific to my application and the job, e.g. using the job ID number as a folder name. The method I choose will greatly determine the code that goes into an 'adapter' that lets me bind to the Gallery component, so before I begin I was wondering if anyone has had the same design problem and which option they chose. Thanks, Dave

    Read the article

  • rails controller defaults to respond with application/xml in production

    - by Dave Paroulek
    I have a standard contacts_controller.rb with index action that responds as follows: respond_to do |format| format.html format.xml { render :xml => @contacts } end In development, it works as intended: when I browse to http://localhost:3000/contacts, I get an html response. But, when I start the app using capistrano on a remote Ubuntu server and browse to the same url, I get an xml response. If I go to http://remote_host:8000/contacts.html, then I see the html response. If I comment out the format.xml { render :xml => @contacts }, then I see the desired html response. Pretty sure I'm missing something subtle about difference between Rails development and production modes. Any ideas about what I'm overlooking? Thanks, - Dave

    Read the article

  • PHP Default Timezone issue on Fedora + Zend Server CE

    - by Dave Morris
    I have ZendServer CE (PHP 5.2) installed on a Fedora VM, and I have the system timezone set to 'America/Chicago'. I have date.timezone = 'UTC' in my php.ini file, and when I call date_default_timezone_get(), or display date('T') on a web page, it says 'CDT'. The documentation on php.net for date_default_timezone_get() says it follows this order when choosing a default timezone: - Reading the timezone set using the date_default_timezone_set() function (if any) - Reading the TZ environment variable (if non empty) - Reading the value of the date.timezone ini option (if set) - Querying the host operating system (if supported and allowed by the OS) If I change the system timezone through the 'setup' GUI, and reboot the server, date('T') returns whatever I changed the system timezone to, regardless of what php.ini says. I also don't have a TZ environment variable, and I am not currently using date_default_timezone_set() anywhere in my code. Any idea what might be going on? I realize I can always override the system timezone by calling date_default_timezone_set('UTC'), but I'd rather rely on the php.ini file if possible. Thanks for the help, Dave

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    A long time ago in a galaxy far, far away... Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).
    A New Hope. Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):
        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres
    All good to here. Next, restart the server and configure the database using these installation instructions:
        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf   (change data_directory to /home/postgres/main)
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3
    Use pgadmin3 to configure the database and create a schema. The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy.
        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc   (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual
    Extract a PostgreSQL-friendly DDL and all the MySQL data:
        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p
    The Database Strikes Back. Recreate the structure in PostgreSQL as follows:
      - pgadmin3 (switch to it)
      - Click the Execute arbitrary SQL queries icon
      - Open climate-pg-ddl.sql
      - Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
      - Search for on " and replace with on climate." (insert the schema name climate)
      - Press F5 to execute
    This results in: Query returned successfully with no result in 122 ms.
    Replies of the Jedi. At this point I am stumped.
      - Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
      - How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)?
      - How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?
      - How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts?
    Resources. A fair bit of information was needed to get this far:
      - https://help.ubuntu.com/community/PostgreSQL
      - http://articles.sitepoint.com/article/site-mysql-postgresql-1
      - http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
      - http://pgfoundry.org/frs/shownotes.php?release_id=810
      - http://sqlfairy.sourceforge.net/
    Thank you!
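
    On the question about new rows continuing from the last inserted key, a small sketch (not from the original post; the table, column, and schema names are assumptions) is to reset each serial column's backing sequence after the data load:

        -- Repeat per table; pg_get_serial_sequence finds the sequence behind the serial column.
        SELECT setval(
            pg_get_serial_sequence('climate.station', 'id'),
            (SELECT COALESCE(MAX(id), 1) FROM climate.station)
        );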

    Read the article

  • JQuery ajax call to httpget webmethod (c#) not working

    - by Tim Jarvis
    I am trying to get an ajax get to a webmethod in code behind. The problem is I keep getting the error "parserror" from the JQuery onfail method. If I change the GET to a POST everything works fine. Please see my code below. Ajax Call <script type="text/javascript"> var id = "li1234"; function AjaxGet() { $.ajax({ type: "GET", url: "webmethods.aspx/AjaxGet", data: "{ 'id' : '" + id + "'}", contentType: "application/json; charset=utf-8", dataType: "json", async: false, success: function(msg) { alert("success"); }, error: function(msg, text) { alert(text); } }); } </script> Code Behind [System.Web.Services.WebMethod] [System.Web.Script.Services.ScriptMethod(UseHttpGet = true, ResponseFormat = System.Web.Script.Services.ResponseFormat.Json)] public static string AjaxGet(string id) { return id; } Web.config <webServices> <protocols> <add name="HttpGet"/> </protocols> </webServices> The URL being used ......../webmethods.aspx/AjaxGet?{%20%27id%27%20:%20%27li1234%27} As part of the response it is returning the html for the page webmethods. Any help will be greatly appreciated.

    Read the article

  • High-quality ERD generator for PostgreSQL under Linux?

    - by Dave Jarvis
    Background: MySQL Workbench can produce appealing and high-quality ERDs.
    Research: I have not found a tool that even comes close for PostgreSQL. Tools I have found:
      - dbVisualizer - Yellow squares.
      - AquaFold - Yellow squares.
      - SQL Developer - Coloured squares.
      - Dia - Coloured squares.
      - SchemaBank - Can't export to PNG; looks okay, nothing stellar.
      - SchemaSpy - XML export makes it possible to write an XSL skin...
      - Gliffy - Incompatible Flash version.
      - Druid - No.
      - pgAdmin3 - Not applicable?
      - phpPgAdmin - Couldn't log in without a 30-minute configuration battle.
    Requirements: Looking for an ERD tool:
      - Visually stunning by default
      - Can reverse-engineer a PostgreSQL (or JDBC-compliant) database
      - Runs on Linux (or under WINE)
      - Export high-resolution PNG (or SVG)
      - FOSS

    Read the article

  • Loading, listing, and using R Modules and Functions in PL/R

    - by Dave Jarvis
    I am having difficulty with: Listing the R packages and functions available to PostgreSQL. Installing a package (such as Kendall) for use with PL/R Calling an R function within PostgreSQL Listing Available R Packages Q.1. How do you find out what R modules have been loaded? SELECT * FROM r_typenames(); That shows the types that are available, but what about checking if Kendall( X, Y ) is loaded? For example, the documentation shows: CREATE TABLE plr_modules ( modseq int4, modsrc text ); That seems to allow inserting records to dictate that Kendall is to be loaded, but the following code doesn't explain, syntactically, how to ensure that it gets loaded: INSERT INTO plr_modules VALUES (0, 'pg.test.module.load <-function(msg) {print(msg)}'); Q.2. What would the above line look like if you were trying to load Kendall? Q.3. Is it applicable? Installing R Packages Using the "synaptic" package manager the following packages have been installed: r-base r-base-core r-base-dev r-base-html r-base-latex r-cran-acepack r-cran-boot r-cran-car r-cran-chron r-cran-cluster r-cran-codetools r-cran-design r-cran-foreign r-cran-hmisc r-cran-kernsmooth r-cran-lattice r-cran-matrix r-cran-mgcv r-cran-nlme r-cran-quadprog r-cran-robustbase r-cran-rpart r-cran-survival r-cran-vr r-recommended Q.4. How do I know if Kendall is in there? Q.5. If it isn't, how do I find out what package it is in? Q.6. If it isn't in a package suitable for installing with apt-get (aptitude, synaptic, dpkg, what have you), how do I go about installing it on Ubuntu? Q.7. Where are the installation steps documented? Calling R Functions I have the following code: EXECUTE 'SELECT ' 'regr_slope( amount, year_taken ),' 'regr_intercept( amount, year_taken ),' 'corr( amount, year_taken ),' 'sum( measurements ) AS total_measurements ' 'FROM temp_regression' INTO STRICT slope, intercept, correlation, total_measurements; This code calls the PostgreSQL function corr to calculate Pearson's correlation over the data. Ideally, I'd like to do the following (by switching corr for plr_kendall): EXECUTE 'SELECT ' 'regr_slope( amount, year_taken ),' 'regr_intercept( amount, year_taken ),' 'plr_kendall( amount, year_taken ),' 'sum( measurements ) AS total_measurements ' 'FROM temp_regression' INTO STRICT slope, intercept, correlation, total_measurements; Q.8. Do I have to write plr_kendall myself? Q.9. Where can I find a simple example that walks through: Loading an R module into PG. Writing a PG wrapper for the desired R function. Calling the PG wrapper from a SELECT. For example, would the last two steps look like: create or replace function plr_kendall( _float8, _float8 ) returns float as ' agg_kendall(arg1, arg2) ' language 'plr'; CREATE AGGREGATE agg_kendall ( sfunc = plr_array_accum, basetype = float8, -- ??? stype = _float8, -- ??? finalfunc = plr_kendall ); And then the SELECT as above? Thank you!
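
    For Q.2, one possible sketch (an assumption based on the plr_modules mechanism, where each modsrc row is executed in the R interpreter at startup; not something stated in the original post) is to make the module row load the library:

        INSERT INTO plr_modules
        VALUES (1, 'library("Kendall")');

    This only loads the package; Kendall itself would first need to be installed into an R library visible to the postgres user (for example via install.packages("Kendall") from an R session), since plr_modules runs R code but does not install packages.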

    Read the article

  • UTF-8 character encoding battles json_encode()

    - by Dave Jarvis
    Quest I am looking to fetch rows that have accented characters. The encoding for the column (NAME) is latin1_swedish_ci. The Code The following query returns Abord â Plouffe using phpMyAdmin: SELECT C.NAME FROM CITY C WHERE C.REGION_ID=10 AND C.NAME_LOWERCASE LIKE '%abor%' ORDER BY C.NAME LIMIT 30 The following displays expected values (function is called db_fetch_all( $result )): while( $row = mysql_fetch_assoc( $result ) ) { foreach( $row as $value ) { echo $value . " "; $value = utf8_encode( $value ); echo $value . " "; } $r[] = $row; } The displayed values: 5482 5482 Abord â Plouffe Abord â Plouffe The array is then encoded using json_encode: $rows = db_fetch_all( $result ); echo json_encode( $rows ); Problem The web browser receives the following value: {"ID":"5482","NAME":null} Instead of: {"ID":"5482","NAME":"Abord â Plouffe"} (Or the encoded equivalent.) Question The documentation states that json_encode() works on UTF-8. I can see the values being encoded from LATIN1 to UTF-8. After the call to json_encode(), however, the value becomes null. How do I make json_encode() encode the UTF-8 values properly? One possible solution is to use the Zend Framework, but I'd rather not if it can be avoided.
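
    One low-impact direction to try (a sketch, not from the original post): ask MySQL to hand the latin1 column back already converted to UTF-8, so every value reaching json_encode() is valid UTF-8 without calling utf8_encode(). Both statements below are standard MySQL; the first would be issued once right after connecting:

        SET NAMES 'utf8';

        -- Or convert just the one column in the query itself:
        SELECT C.ID, CONVERT(C.NAME USING utf8) AS NAME
        FROM CITY C
        WHERE C.REGION_ID = 10 AND C.NAME_LOWERCASE LIKE '%abor%'
        ORDER BY C.NAME
        LIMIT 30;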

    Read the article

  • Clone behaviour - cannot set attribute value for clone?

    - by Dave Jarvis
    This code does not function as expected: // $field contains the name of a subclass of WMSInput. $fieldClone = clone $field; echo $fieldClone->getInputName(); // Method on abstract WMSInput superclass. $fieldClone->setInputName( 'name' ); echo $fieldClone->getInputName(); The WMSInput class: abstract class WMSInput { private $inputName; public function setInputName( $inputName ) { $this->inputName = $inputName; } } There are no PHP errors (error reporting is set to E_ALL). Actual Results email email Expected Results email name Any ideas?

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the optimizer to not optimize using STRAIGHT_JOIN. PostgreSQL offers no such option. Here is the explain: Here is the query: SELECT avg(d.amount) AS amount, y.year FROM station s, station_district sd, year_ref y, month_ref m, daily d LEFT JOIN city c ON c.id = 10663 WHERE -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 AND -- Ignore stations outside the given elevations -- s.elevation BETWEEN 0 AND 2000 AND sd.id = s.station_district_id AND -- Gather all known years for that station ... -- y.station_district_id = sd.id AND -- The data before 1900 is shaky; insufficient after 2009. -- y.year BETWEEN 1980 AND 2000 AND -- Filtered by all known months ... -- m.year_ref_id = y.id AND m.month = 12 AND -- Whittled down by category ... -- m.category_id = '001' AND -- Into the valid daily climate data. -- m.id = d.month_ref_id AND d.daily_flag_id <> 'M' GROUP BY y.year It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!
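
    A sketch of one way to pin the join order in PostgreSQL (not from the original post): rewrite the comma-separated FROM list as explicit JOIN ... ON clauses with the CITY table listed first, then lower join_collapse_limit so the planner keeps the written order, which is roughly what STRAIGHT_JOIN did in MySQL:

        -- Session-local setting; takes effect only if the query uses explicit JOIN syntax,
        -- written starting from the city table.
        SET join_collapse_limit = 1;
        -- EXPLAIN ANALYZE <the rewritten query>;
        RESET join_collapse_limit;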

    Read the article

  • Wireframe mock-up software

    - by Dave Jarvis
    Requirements. Looking for wireframe mock-up software for web apps with the following constraints:
      - Desktop application (supports Linux) (can be online)
      - Export all pages to separate image files (PNG or SVG preferred)
      - Template for all pages
      - Drag and drop standard HTML widgets for <forms>
      - Tabbed panels (that allow content on the different tabs)
      - Shuttle controls
      - Saves in an XML format (XSLT could convert to HTML)
      - Widget alignment and resizing (relative to other widgets)
      - Under $100.00 USD
    Examples that come close:
      - Cacoo - Not a desktop application; does not have a true tabbed panel widget
      - Pencil - Export feature has serious bugs (missing text); no template?
      - Balsamiq - Installation proved cantankerous on Linux (due to Adobe AIR)
      - Mockingbird - No shuttle controls; no auto-resize of widgets
      - Pencil & paper - Not a good look for a formal document presented to clients
    Any others that meet the requirements?

    Read the article

  • Trend analysis using iterative value increments

    - by Dave Jarvis
    We have configured iReport to generate the following graph: The real data points are in blue, the trend line is green. The problems include: Too many data points for the trend line Trend line does not follow a Bezier curve (spline) The source of the problem is with the incrementer class. The incrementer is provided with the data points iteratively. There does not appear to be a way to get the set of data. The code that calculates the trend line looks as follows: import java.math.BigDecimal; import net.sf.jasperreports.engine.fill.*; /** * Used by an iReport variable to increment its average. */ public class MovingAverageIncrementer implements JRIncrementer { private BigDecimal average; private int incr = 0; /** * Instantiated by the MovingAverageIncrementerFactory class. */ public MovingAverageIncrementer() { } /** * Returns the newly incremented value, which is calculated by averaging * the previous value from the previous call to this method. * * @param jrFillVariable Unused. * @param object New data point to average. * @param abstractValueProvider Unused. * @return The newly incremented value. */ public Object increment( JRFillVariable jrFillVariable, Object object, AbstractValueProvider abstractValueProvider ) { BigDecimal value = new BigDecimal( ( ( Number )object ).doubleValue() ); // Average every 10 data points // if( incr % 10 == 0 ) { setAverage( ( value.add( getAverage() ).doubleValue() / 2.0 ) ); } incr++; return getAverage(); } /** * Changes the value that is the moving average. * @param average The new moving average value. */ private void setAverage( BigDecimal average ) { this.average = average; } /** * Returns the current moving average average. * @return Value used for plotting on a report. */ protected BigDecimal getAverage() { if( this.average == null ) { this.average = new BigDecimal( 0 ); } return this.average; } /** Helper method. */ private void setAverage( double d ) { setAverage( new BigDecimal( d ) ); } } How would you create a smoother and more accurate representation of the trend line?

    Read the article

  • Fast file search algorithm for IP addresses

    - by Dave Jarvis
    Question: What is the fastest way to find if an IP address exists in a file that contains IP addresses sorted as:
        219.93.88.62
        219.94.181.87
        219.94.193.96
        220.1.72.201
        220.110.162.50
        220.126.52.187
        220.126.52.247
    Constraints:
      - No database (e.g., MySQL, PostgreSQL, Oracle, etc.)
      - Infrequent pre-processing is allowed (see possibilities section)
      - Would be nice not to have to load the file each query (131 KB)
      - Uses under 5 megabytes of disk space
    File details: one IP address per line, 9500+ lines.
    Possible solutions: create a directory hierarchy (radix tree?) then use is_dir() (sadly, this uses 87 megabytes).

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background Users can pick dates as shown in the following screen shot: Any starting month/day and ending month/day combinations are valid, such as: Mar 22 to Jun 22 Dec 1 to Feb 28 The second combination is difficult (I call it the "tricky date scenario") because the year for the ending month/day is before the year for the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be: Dec 22, 1900 to Feb 28, 1901 Dec 22, 1901 to Feb 28, 1902 ... Dec 22, 2007 to Feb 28, 2008 Dec 22, 2008 to Feb 28, 2009 Problem Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year wrapping problem. Inputs The query receives as parameters: Year1, Year2: The full range of years, independent of month/day combination. Month1, Day1: The starting day within the year to gather data. Month2, Day2: The ending day within the year (or the next year) to gather data. Previous Attempt Consider the following MySQL code (that worked): end_year = start_year + greatest( -1 * sign( datediff( date( concat_ws('-', year, end_month, end_day ) ), date( concat_ws('-', year, start_month, start_day ) ) ) ), 0 ) How it works, with respect to the tricky date scenario: Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900. Count the difference, in days, between the two dates. If the result is negative, it means the year for the second date must be incremented by 1. In this case: Add 1 to the current year. Create a new end date: Feb 28, 1901. Check to see if the date range for the data falls between the start and calculated end date. If the result is positive, the dates have been provided in chronological order and nothing special needs to be done. This worked in MySQL because the difference in dates would be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of their relative chronological order. Question How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)? SELECT m.amount FROM measurement m WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1) AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2) Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.) Versions PostgreSQL 8.4.4 and PHP 5.2.10
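
    One possible shape for the predicate (a sketch, not from the original question; it relies on PostgreSQL row-value comparisons and uses the same month1/day1/month2/day2 placeholders as the question). When the start month/day sorts after the end month/day, the range wraps the year boundary, so the test becomes an OR instead of an AND:

        SELECT m.amount
        FROM measurement m
        WHERE CASE
            WHEN (month1, day1) <= (month2, day2) THEN
                 (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) >= (month1, day1)
             AND (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) <= (month2, day2)
            ELSE
                 (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) >= (month1, day1)
              OR (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) <= (month2, day2)
        END;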

    Read the article

  • Best fit curve for trend line

    - by Dave Jarvis
    Problem Constraints Size of the data set, but not the data itself, is known. Data set grows by one data point at a time. Trend line is graphed one data point at a time (using a spline/Bezier curve). Graphs The collage below shows data sets with reasonably accurate trend lines: The graphs are: Upper-left. By hour, with ~24 data points. Upper-right. By day for one year, with ~365 data points. Lower-left. By week for one year, with ~52 data points. Lower-right. By month for one year, with ~12 data points. User Inputs The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June. Trend Weight To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used: data points / trend weight Where data points is derived from user inputs and trend weight is 6.4. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary, and might not be appropriate for different user inputs. Question How should trend weight be calculated given the constraints of this problem?

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website and the simplest page allows users to choose category and city. They then get back a very simple report (without the parameters and calculations section): The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.) Tool Set The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports. Poor Model Choice Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows: SELECT regr_slope( amount, year_taken ), regr_intercept( amount, year_taken ), corr( amount, year_taken ) FROM temp_regression INTO STRICT slope, intercept, correlation; The results are returned to JasperReports using: SELECT year_taken, amount, year_taken * slope + intercept, slope, intercept, correlation, total_measurements INTO result; JasperReports calls into PostgreSQL using the following parameterized analysis function: SELECT year_taken, amount, measurements, regression_line, slope, intercept, correlation, total_measurements, execute_time FROM climate.analysis( $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius}, $P{CategoryId}, $P{Year1}, $P{Year2} ) ORDER BY year_taken This is not an optimal solution because it gives the false impression that the climate is changing at a slow, but steady rate. Questions Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CPAN-R packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best fit curve? Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) that extracts climate data from weather stations within a given radius of a city using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively: CREATE UNIQUE INDEX measurement_001_stc_idx ON climate.measurement_001 USING btree (station_id, taken, category_id); Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 had a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan: sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1997-12-31'::date AND How do I persuade PostgreSQL to use the indexes regardless of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you! Query SELECT extract(YEAR FROM m.taken) AS year, avg(m.amount) AS amount FROM climate.city c, climate.station s, climate.station_category sc, climate.measurement m WHERE c.id = 5182 AND earth_distance( ll_to_earth(c.latitude_decimal,c.longitude_decimal), ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND s.elevation BETWEEN 0 AND 3000 AND s.applicable = TRUE AND sc.station_id = s.id AND sc.category_id = 1 AND sc.taken_start >= '1900-01-01'::date AND sc.taken_end <= '1996-12-31'::date AND m.station_id = s.id AND m.taken BETWEEN sc.taken_start AND sc.taken_end AND m.category_id = sc.category_id GROUP BY extract(YEAR FROM m.taken) ORDER BY extract(YEAR FROM m.taken) 1900 to 1996: Index "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)" " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))" " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)" " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))" " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)" " Index Cond: (s.id = sc.station_id)" " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))" " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)" 
" -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)" " Filter: (m.category_id = 1)" " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)" " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)" " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))" "Total runtime: 2269.264 ms" 1900 to 1997: Full Table Scan "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)" " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))" " Sort Method: quicksort Memory: 32kB" " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)" " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)" " Hash Cond: (m.station_id = sc.station_id)" " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))" " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)" " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)" " Filter: (category_id = 1)" " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)" " Filter: (category_id = 1)" " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)" " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)" " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)" " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)" " Index Cond: (id = 5182)" " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)" " Hash Cond: (s.id = sc.station_id)" " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)" " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))" " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)" " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)" " Recheck Cond: (category_id = 1)" " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))" " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)" " Index Cond: (category_id = 1)" "Total runtime: 86165.936 ms"

    Read the article
