Search Results

Search found 3721 results on 149 pages for 'postgresql newbie'.


  • Get the last N rows in the database in order?

    - by Kristopher
    Let's say I have the following database table:

        record_id | record_date | record_value
        ----------+-------------+--------------
                1 | 2010-05-01  |       195.00
                2 | 2010-07-01  |       185.00
                3 | 2010-09-01  |       175.00
                4 | 2010-05-01  |       189.00
                5 | 2010-06-01  |       185.00
                6 | 2010-07-01  |       180.00
                7 | 2010-08-01  |       175.00
                8 | 2010-09-01  |       170.00
                9 | 2010-10-01  |       165.00

    I want to grab the last 5 rows with the data ordered by record_date ASC. This is easy to do with:

        SELECT * FROM mytable ORDER BY record_date ASC LIMIT 5 OFFSET 4

    which would give me:

        record_id | record_date | record_value
        ----------+-------------+--------------
                6 | 2010-07-01  |       180.00
                7 | 2010-08-01  |       175.00
                3 | 2010-09-01  |       175.00
                8 | 2010-09-01  |       170.00
                9 | 2010-10-01  |       165.00

    But how do I do this when I don't know how many records there are and can't compute the magic number of 4? I've tried this query, but if there are fewer than 5 records it results in a negative OFFSET, which is invalid:

        SELECT * FROM mytable ORDER BY record_date ASC LIMIT 5 OFFSET (SELECT COUNT(*) FROM mytable) - 5;

    So how do I accomplish this?
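
    One common way to avoid computing the OFFSET is to take the newest rows in descending order and re-sort that small result; a minimal sketch, assuming the table and columns above:

        SELECT *
        FROM (
            SELECT * FROM mytable
            ORDER BY record_date DESC
            LIMIT 5
        ) AS latest
        ORDER BY record_date ASC;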

    Read the article

  • Why is expanding a varchar column so slow?

    - by francs
    Hi. We need to modify a column of a big product table. Usually DDL statements execute quickly, but the statement below takes about 10 minutes, and I would like to know why. All I want to do is widen a varchar column. The details follow.

        --table size
        wapreader_log= select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

        --table ddl
        wapreader_log= \d log_foot_mark
             Table "wapreader_log.log_foot_mark"
           Column     |            Type             | Modifiers
        --------------+-----------------------------+-----------
         id           | integer                     | not null
         create_time  | timestamp without time zone |
         sky_id       | integer                     |
         url          | character varying(1000)     |
         refer_url    | character varying(1000)     |
         source       | character varying(64)       |
         users        | character varying(64)       |
         userm        | character varying(64)       |
         usert        | character varying(64)       |
         ip           | character varying(32)       |
         module       | character varying(64)       |
         resource_id  | character varying(100)      |
         user_agent   | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

        --alter column
        wapreader_log= \timing
        Timing is on.
        wapreader_log= ALTER TABLE wapreader_log.log_foot_mark ALTER column user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms
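
    A couple of things are worth checking here. On older PostgreSQL releases, widening a varchar forces a rewrite of the whole table, which on a 5 GB relation can easily take minutes (newer releases treat a pure length increase as a metadata-only change). It is also worth confirming that the ALTER is not simply waiting for an exclusive lock held by another session; a small diagnostic sketch to run from a second session while the ALTER is in progress:

        SELECT l.pid, l.mode, l.granted
        FROM pg_locks l
        JOIN pg_class c ON c.oid = l.relation
        WHERE c.relname = 'log_foot_mark';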

    Read the article

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item     Columns: ItemID, Title, Content, NoChange (Date)
        Table: Tag      Columns: TagID, Title
        Table: ItemTag  Columns: ItemID, TagID

    The Item table has a field NoChange; if this field is set, no ItemTag row with that ItemID may be inserted. How can I check this at insert time? For updates I have this statement:

        UPDATE ItemTag SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
          AND exists (select ItemID from Item where ItemID = ? AND NoChange is null);

    Thank you.
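
    For the insert itself, the same EXISTS guard can be folded into an INSERT ... SELECT, so the row is only written when the parent Item allows it; a minimal sketch (the placeholders are the ItemID and TagID being inserted):

        INSERT INTO ItemTag (ItemID, TagID)
        SELECT ?, ?
        WHERE EXISTS (
            SELECT 1 FROM Item
            WHERE ItemID = ? AND NoChange IS NULL
        );

    A BEFORE INSERT trigger on ItemTag would enforce the same rule no matter how the insert is written.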

    Read the article

  • Postgres vs Firebird

    - by Tedi
    I'm looking to use either Firebird or Postgres in my next development project, largely because both are available under a BSD-like license. I found a great comparison of the two databases at http://www.amsoftwaredesign.com/pg_vs_fb, but that comparison is a good 2+ years old and both databases have come a long way since. Does anyone mind updating the comparison table to be relevant to the current versions of both Firebird and Postgres, or have a link to a site that does a good recent comparison of the two databases?

    Read the article

  • Rails: Converting from MySQL to Postgres breaks Geokit distance calculations?

    - by Kevin
    I recently switched my database from MySQL to Postgres. I also use Geokit. When I started my app with the new database already seeded, I got the following error:

        PGError: ERROR:  function radians(character varying) does not exist
        LINE 1: ...COS(0.661045389762993)*COS(-2.12957994527573)*COS(RADIANS(ti...
                                                                             ^
        HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

    Anyone know why this is breaking now? I know Geokit still works, because it still performs the geocoding in the model per ticket when the database is seeded; it just won't do the distance calculations correctly.
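
    The hint points at the likely cause: RADIANS() is being applied to a character varying column, which suggests the latitude/longitude columns came through the migration as text rather than a numeric type (MySQL casts implicitly; Postgres does not). A hedged sketch of the kind of fix involved, with hypothetical table and column names:

        -- Assumes a "tickets" table whose lat/lng columns ended up as varchar; adjust to the real schema.
        ALTER TABLE tickets
            ALTER COLUMN lat TYPE double precision USING lat::double precision,
            ALTER COLUMN lng TYPE double precision USING lng::double precision;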

    Read the article

  • Escape SQL "LIKE" value for Postgres with psycopg2

    - by Evgeny
    Does psycopg2 have a function for escaping the value of a LIKE operand for Postgres? For example, I may want to match strings that start with the string "20% of all", so I want to write something like this:

        sql = '... WHERE ... LIKE %(myvalue)s'
        cursor.execute(sql, { 'myvalue': escape_sql_like('20% of all') + '%' })

    Is there an existing escape_sql_like function that I could plug in here? (Similar question to "How to quote a string value explicitly (Python DB API/Psycopg2)", but I couldn't find an answer there.)
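
    For reference, the statement such a helper builds toward escapes the pattern's own % and _ characters and names the escape character with an ESCAPE clause; a minimal sketch with hypothetical table and column names:

        -- Matches values that start with the literal string "20% of all"
        SELECT * FROM mytable
        WHERE myfield LIKE '20#% of all%' ESCAPE '#';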

    Read the article

  • Problems writing a query to join two tables

    - by Psyche
    Hello, I'm working on a script whose purpose is to grant site users access to different sections of the site menu. For this I have created two tables, "menu" and "rights":

        menu
          - id
          - section_name

        rights
          - id
          - menu_id (references column id from menu table)
          - user_id (references column id from users table)

    How can a query be written that returns all menu sections and marks the ones to which a given user has access? I'm using PHP and Postgres. Thank you.
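
    A LEFT JOIN from menu to the given user's rights rows keeps every section and lets the presence of a match flag access; a minimal sketch (the parameter is the user's id):

        SELECT m.id,
               m.section_name,
               (r.id IS NOT NULL) AS has_access
        FROM menu m
        LEFT JOIN rights r
               ON r.menu_id = m.id
              AND r.user_id = ?
        ORDER BY m.id;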

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries, because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name FROM organizations
             WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
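
    For comparison, the first example written as a join might look like this (a sketch assuming users.user_id is the key being correlated on):

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;

    The planner can often flatten correlated scalar subqueries into a similar plan, but the join form states the relationship once and scales to more columns without repeating the lookup.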

    Read the article

  • Check if a row already exists and, if so, give the referencing table its id

    - by flhe
    Let's assume I have a table magazine:

        CREATE TABLE magazine
        (
          magazine_id integer NOT NULL DEFAULT nextval(('public.magazine_magazine_id_seq'::text)::regclass),
          longname character varying(1000),
          shortname character varying(200),
          issn character varying(9),
          CONSTRAINT pk_magazine PRIMARY KEY (magazine_id)
        );

    And another table issue:

        CREATE TABLE issue
        (
          issue_id integer NOT NULL DEFAULT nextval(('public.issue_issue_id_seq'::text)::regclass),
          number integer,
          year integer,
          volume integer,
          fk_magazine_id integer,
          CONSTRAINT pk_issue PRIMARY KEY (issue_id),
          CONSTRAINT fk_magazine_id FOREIGN KEY (fk_magazine_id)
              REFERENCES magazine (magazine_id) MATCH SIMPLE
              ON UPDATE NO ACTION ON DELETE NO ACTION
        );

    Current INSERTs:

        INSERT INTO magazine (longname, shortname, issn) VALUES ('a long name', 'ee', '1111-2222');
        INSERT INTO issue (fk_magazine_id, number, year, volume) VALUES (currval('magazine_magazine_id_seq'), '8', '1982', '6');

    Now a row should only be inserted into magazine if it does not already exist. However, if it does exist, the issue table needs the magazine_id of the existing row in order to establish the reference. How can I do this? Thanks in advance!
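
    One way to express both steps in plain SQL is to make the magazine insert conditional and then look the id up by a natural key; a sketch assuming issn uniquely identifies a magazine (ideally backed by a UNIQUE constraint):

        -- Insert the magazine only if no row with this issn exists yet.
        INSERT INTO magazine (longname, shortname, issn)
        SELECT 'a long name', 'ee', '1111-2222'
        WHERE NOT EXISTS (SELECT 1 FROM magazine WHERE issn = '1111-2222');

        -- Reference whichever row now holds that issn, new or pre-existing.
        INSERT INTO issue (fk_magazine_id, number, year, volume)
        SELECT magazine_id, 8, 1982, 6
        FROM magazine
        WHERE issn = '1111-2222';

    Under concurrent writers the UNIQUE constraint (plus a retry on conflict) is still needed to make this safe.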

    Read the article

  • Postgres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query for the next batch? So, what I want to do is run a query and build an array, like this:

        $result = pg_query("SELECT * FROM myTable");
        $i = 0;
        while ($row = pg_fetch_array($result)) {
            $myArray[$i]['id'] = $row['id'];
            $myArray[$i]['name'] = $row['name'];
            $i++;
        }

    But I know that there will be several hundred thousand rows, so I wanted to do it in batches of 10,000 or so: rows 1-10,000, then 10,001-20,000, etc. The reason is that I keep getting this error:

        Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes)

    Incidentally, I don't understand how 3 bytes could exhaust 512M... so if that's something I can just change, that'd be great, although it still might be better to do this in batches.
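
    On the SQL side, batching is usually done with LIMIT/OFFSET or, more efficiently, with keyset pagination that remembers the last id seen; a minimal sketch of the latter, assuming an indexed id column:

        -- First batch
        SELECT id, name FROM myTable ORDER BY id LIMIT 10000;

        -- Next batch: pass in the largest id returned by the previous batch
        SELECT id, name FROM myTable WHERE id > ? ORDER BY id LIMIT 10000;

    A server-side cursor (DECLARE ... CURSOR / FETCH) is another option: it keeps the result set on the server and streams it in chunks instead of loading everything into PHP at once.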

    Read the article

  • Error when pushing data to Heroku: time zone displacement out of range

    - by J. Pablo Fernández
    I run the following command to push the contents of my local database to Heroku:

        heroku db:push --app my-app

    From my home computer it works flawlessly, but from my work computer I get this error:

        Taps Server Error: PGError: ERROR: time zone displacement out of range: "2011-11-15 12:00:00.000000+5894114400"

    I'm not sure where that date is coming from; I can't find it in the data anywhere. Any ideas what's going on and/or how to fix it? Thanks.

    Read the article

  • Nesting queries in SQL

    - by ZAX
    The goal of my query is to return the country name and its head of state if the head of state has a name starting with 'A' and the capital of the country has greater than 100,000 people, using a nested query. Here is my query:

        SELECT country.name as country,
               (SELECT country.headofstate from country where country.headofstate like 'A%')
        from country, city
        where city.population > 100000;

    I've tried reversing it, placing it in the where clause, etc. I don't get nested queries. I'm just getting errors back, like "subquery returns more than one row" and such. If someone could help me out with how to order it, and explain why it needs to be a certain way, that'd be great.
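
    The nesting usually goes the other way around: keep country in the outer query and push the population test into a subquery against city. A sketch assuming the common "world" sample schema, where country.capital holds the id of the capital city:

        SELECT co.name AS country,
               co.headofstate
        FROM country co
        WHERE co.headofstate LIKE 'A%'
          AND co.capital IN (SELECT ci.id
                             FROM city ci
                             WHERE ci.population > 100000);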

    Read the article

  • INSERT data from a TextBox into PostgreSQL

    - by user1479013
    I have just learned how to connect C# and PostgreSQL. I want to INSERT data from tb1 and tb2 (TextBoxes) into the database, but I don't know how to write the code. My previous code does a SELECT from the database. This is my code:

        private void button1_Click(object sender, EventArgs e)
        {
            bool blnfound = false;
            NpgsqlConnection conn = new NpgsqlConnection("Server=127.0.0.1;Port=5432;User Id=postgres;Password=admin123;Database=Login");
            conn.Open();
            NpgsqlCommand cmd = new NpgsqlCommand("SELECT * FROM login WHERE name='" + tb1.Text + "' and password = '" + tb2.Text + "'", conn);
            NpgsqlDataReader dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                blnfound = true;
                Form2 f5 = new Form2();
                f5.Show();
                this.Hide();
            }
            if (blnfound == false)
            {
                MessageBox.Show("Name or password is incorrect", "Message Box", MessageBoxButtons.OK, MessageBoxIcon.Exclamation, MessageBoxDefaultButton.Button1);
                dr.Close();
                conn.Close();
            }
        }

    So please help me with the code.
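
    The INSERT itself is a single statement; the important part is passing the textbox values as command parameters rather than concatenating them into the SQL string (the SELECT above is open to SQL injection for the same reason). The statement the C# code needs to issue looks roughly like this, with :name and :password supplied as parameters:

        INSERT INTO login (name, password) VALUES (:name, :password);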

    Read the article

  • Finding all areas that intersect with a point and vice-versa - PostGIS

    - by ForeignerBR
    I'm developing a project using PostGIS to hold spatial data where I have records that hold geometry point data and records that hold geometry area data. To solve my problem I'm looking for two queries that can take geographic shapes rather than geometric shapes as parameters. For query A I need it to return all points that intersect with a given area. For query B I need it to return all areas that intersect with a given point.
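
    Both directions come down to ST_Intersects; with geography-typed columns (PostGIS 1.5 or later) the test is done on the spheroid. A sketch with hypothetical table and column names:

        -- Query A: all points that intersect a given area
        SELECT p.*
        FROM points p
        WHERE ST_Intersects(p.geog, (SELECT a.geog FROM areas a WHERE a.id = ?));

        -- Query B: all areas that intersect a given point
        SELECT a.*
        FROM areas a
        WHERE ST_Intersects(a.geog, ST_GeographyFromText('POINT(-71.06 42.36)'));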

    Read the article

  • PostgreSQL select from 2 tables. Joins?

    - by Daniel
    I have 2 tables that look like this:

                 Table "public.phone_lists"
          Column  |       Type        | Modifiers
        ----------+-------------------+--------------------------------------------------------------------
         id       | integer           | not null default nextval(('"phone_lists_id_seq"'::text)::regclass)
         list_id  | integer           | not null
         sequence | integer           | not null
         phone    | character varying |
         name     | character varying |

    and

                 Table "public.email_lists"
         Column  |       Type        | Modifiers
        ---------+-------------------+--------------------------------------------------------------------
         id      | integer           | not null default nextval(('"email_lists_id_seq"'::text)::regclass)
         list_id | integer           | not null
         email   | character varying |

    I'm trying to get the list_id, phone, and emails out of the tables in one result set. I'm looking for output like:

         list_id |    phone    | email
        ---------+-------------+--------------------------------
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               0 |             | [email protected]
               1 | 15555555555 |
               1 | 15555551806 |
               1 | 15555555508 |
               1 | 15055555506 |
               1 | 15055555558 |
               1 |             | [email protected]
               1 |             | [email protected]

    I've come up with:

        select pl.list_id, pl.phone, el.email
        from phone_lists as pl
        left join email_lists as el using (list_id);

    but that's not quite right. Any suggestions?
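
    Since each output row comes from only one of the two tables, this is a UNION ALL rather than a join; a minimal sketch:

        SELECT list_id, phone, NULL AS email FROM phone_lists
        UNION ALL
        SELECT list_id, NULL AS phone, email FROM email_lists
        ORDER BY list_id;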

    Read the article

  • PropelBundle database:create for postgres

    - by Karol85
    I've installed the Propel bundle for Symfony2. My database configuration is:

        propel:
            dbal:
                driver:     pgsql
                user:       postgres
                password:   postgres
                dsn:        pgsql:host=localhost;port=5432;dbname=test_database
                options:    {}
                attributes: {}

    When I want to create this database from the console (console propel:database:create), I get a strange error:

        Unable to open PDO connection [wrapped: SQLSTATE[08006] [7] FATAL: database "pgsql" does not exist.

    I created the pgsql database on my localhost and everything was fine: the database "test_database" was successfully created. Can somebody explain why I got the previous error? On MySQL I created the database without any problems.

    Read the article

  • Backup of folder + database - Python

    - by RadiantHex
    Hi there, I feel like this is quite delicate. I have various folders with projects I would like to back up into a zip/tar file, but I would like to avoid backing up files such as pyc files and temporary files. I also have a Postgres db I need to back up. Any tips for running this operation as a Python script? Also, would there be any way to stop the process from hogging resources along the way? Help would be very much appreciated.

    Read the article

  • Paginating data: there has to be a better way

    - by John Tyler
    I've read 10 or so "tutorials", and they all involve the same thing: pull a count of the data set, then pull the relevant data set (LIMIT, OFFSET). I.e.:

        SELECT COUNT(*) FROM table WHERE something = ?
        SELECT * FROM table WHERE something = ? LIMIT ? OFFSET ?

    Two very similar queries, no? There has to be a better way to do this; my dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses, and vary from user to user, but are properly indexed of course).
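
    One way to avoid issuing the count as a separate statement is a window aggregate, which attaches the total filtered row count to every row of the page in a single query (PostgreSQL 8.4 or later); a minimal sketch with a placeholder table name:

        SELECT t.*,
               COUNT(*) OVER () AS total_rows
        FROM table_name t
        WHERE something = ?
        ORDER BY id
        LIMIT ? OFFSET ?;

    The full filtered set still has to be visited to produce the count, so this saves a round trip and a second pass through the planner rather than the counting work itself.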

    Read the article

  • Is this Postgres function efficient, or does it still need cleanup?

    - by kiranking
    There are two tables in a Postgres db: english_all and english_glob. The first table contains words like international, confidential, booting, cooler, etc. I have written a function that fetches words from english_all and then, for each word, loops to build the list of prefixes that have not yet been inserted into english_glob. The prefix list looks like:

        I In Int Inte Inter ..
        b bo boo boot ..
        c co coo cool etc..

    For some reason a zwnj (zero-width non-joiner) gets added during insertion into english_all, but in the function I remove that character with regexp_replace. The function for_loop_test takes two parameters, min and max, and selects words from english_all whose length falls in that range. The function code looks like this:

        DECLARE
            inMinLength ALIAS FOR $1;
            inMaxLength ALIAS FOR $2;
            mviews RECORD;
            outenglishListRow english_word_list; -- custom data type: eng_id, english_text
        BEGIN
            FOR mviews IN
                SELECT id, english_all_text FROM english_all
                WHERE wlength BETWEEN inMinLength AND inMaxLength
                ORDER BY english_all_text LIMIT 30
            LOOP
                FOR i IN 1..char_length(regexp_replace(mviews.english_all_text, '(?)$', '')) LOOP
                    FOR outenglishListRow IN
                        SELECT DISTINCT ON (regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', ''))
                               mviews.id,
                               regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                        WHERE regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                              NOT IN (SELECT english_glob.english_text FROM english_glob WHERE i = english_glob.wlength)
                        ORDER BY regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                    LOOP
                        RETURN NEXT outenglishListRow;
                    END LOOP;
                END LOOP;
            END LOOP;
        END;

    Once I get the word list, I insert it into the other table, english_glob. My question: is there anything I can add to or remove from this function to make it more efficient?

    Edit: Assume english_all holds the words footer, settle, question, overflow, database, kingdom. If inMinLength = 5 and inMaxLength = 7, the outer loop selects footer, settle, and kingdom. For those 3 words the inner two loops produce prefixes like f, fo, foo, foot, foote, footer, s, se, set, sett, settl, ... etc. In the final step, the prefixes that are proper words are entered into english_glob with a flag of 1; the remaining prefixes are stored with a flag of 0, so that words already saved in the database are not fetched again on the next call.

    Edit 2: This is the complete setup:

        CREATE TABLE english_all
        (
          id serial NOT NULL,
          english_all_text text NOT NULL,
          wlength integer NOT NULL,
          CONSTRAINT english_all PRIMARY KEY (id),
          CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text)
        );

        CREATE TABLE english_glob
        (
          id serial NOT NULL,
          english_text text NOT NULL,
          is_prop integer default 1,
          CONSTRAINT english_glob PRIMARY KEY (id),
          CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text)
        );

        insert into english_all(english_text) values ('ant'), ('forget'), ('forgive');

    On a function call with parameters 3 and 6, the following rows should be fetched:

        a an ant f fo for forg forge forget

    Next, the results are inserted into the other table based on the rows above:

        insert into english_glob(english_text, is_prop) values
            ('a', 1), ('an', 1), ('ant', 1),
            ('f', 0), ('fo', 0), ('for', 1),
            ('forg', 0), ('forge', 1), ('forget', 1);

    On the next function call, with parameters 3 and 7, the following rows should be fetched (because f, fo, for, forg are all already entered in the english_glob table):

        forgi forgiv forgive

    Read the article

  • How to get min/max of two integers in Postgres/SQL?

    - by HRJ
    How do I find the maximum (or minimum) of two integers in Postgres/SQL? One of the integers is not a column value. I will give an example scenario: I would like to subtract an integer from a column (in all rows), but the result should not be less than zero. So, to begin with, I have:

        UPDATE my_table SET my_column = my_column - 10;

    But this can make some of the values negative. What I would like (in pseudo code) is:

        UPDATE my_table SET my_column = MAXIMUM(my_column - 10, 0);
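
    PostgreSQL's GREATEST and LEAST functions do exactly this for two or more scalar values, so the pseudo code translates almost verbatim:

        UPDATE my_table SET my_column = GREATEST(my_column - 10, 0);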

    Read the article

  • How can I selectively override a Django .count() method?

    - by Tom Viner
    I'm using PostgreSQL and my main table has about 20,000 rows. Sometimes count() calls can take ages or even time out:

        Mod.manager.filter(...).count()

    I need to selectively override the count() method depending on what filter has been applied. Just having a cache of results would be a great gain, but I'd like to be able to say: if the filter query is just {'enabled': True}, then return 20,000 without touching the db. Note: I can't prevent the call to .count(), as it's inside Django's pagination, which always does a count.
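
    When an exact figure is not required, another option is to read PostgreSQL's own planner estimate instead of counting, which is essentially free; a sketch of the underlying SQL (the table name is a hypothetical Django-style name):

        SELECT reltuples::bigint AS estimated_rows
        FROM pg_class
        WHERE relname = 'myapp_mod';

    The estimate is refreshed by VACUUM and ANALYZE, so it can lag the true count slightly.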

    Read the article

  • PostGres if query?

    - by KnockKnockWhosThere
    Is there a way to select records using an if statement? My table looks like this:

         id | num | dis
        ----+-----+-----------
          1 |   4 | 0.5234333
          2 |   4 | 8.2234
          3 |   8 | 2.3325
          4 |   8 | 1.4553
          5 |   4 | 3.43324

    And I want to select the num and dis where dis is the lowest number for each num... so, a query that will produce the following results:

         id | num | dis
        ----+-----+-----------
          1 |   4 | 0.5234333
          4 |   8 | 1.4553
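
    PostgreSQL's DISTINCT ON is a compact way to keep, for each num, the row with the smallest dis; a minimal sketch:

        SELECT DISTINCT ON (num) id, num, dis
        FROM mytable
        ORDER BY num, dis;

    The ORDER BY makes the first row within each num the one with the lowest dis, and DISTINCT ON keeps only that first row.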

    Read the article

  • Adding Postgres table cells based on the same value

    - by russell kinch
    I have a table called expenses. There are numerous columns, but the ones involved in my PHP page are date, supplierinv, and amount. I have created a page that lists all the expenses in a given month and totals them at the end. However, each row has a value, and many rows might be on the same supplier invoice. This means adding up the rows with the same supplierinv to get a total that matches my bank statement. Is there any way I can get a total for the rows based on the supplierinv? I mean, say I have 10 rows: 5 on supplierinv 4, two on supplierinv 5, and 3 on supplierinv 12. How can I get 3 figures (inv 4, 5 and 12) and the grand total at the bottom? Many thanks
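
    GROUP BY gives the per-invoice figures in one query; a minimal sketch assuming the column names above (the date range shown is a hypothetical month):

        SELECT supplierinv, SUM(amount) AS invoice_total
        FROM expenses
        WHERE date >= '2013-01-01' AND date < '2013-02-01'  -- replace with the month being listed
        GROUP BY supplierinv
        ORDER BY supplierinv;

    The grand total can be summed from those rows in PHP, taken from a second SUM(amount) query over the same range, or, on PostgreSQL 9.5 and later, produced in the same statement with GROUP BY ROLLUP (supplierinv).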

    Read the article

  • How do I select and group by a portion of a string?

    - by Russ Bradberry
    Given data like the following, how can I select and group by portions of a string?

        Version   Users
        1.1.1     1
        1.1.23    3
        1.1.45    1
        2.1.24    3
        2.1.12    1
        2.1.45    3
        3.1.10    1
        3.1.23    3

    What I want is to sum up the users on version 1.1.x, 2.1.x, 3.1.x, etc., but I'm not sure how I can group on a partial string in a select statement.

    Edit: What the data should return is this:

        Version   Users
        1.1.XX    5
        2.1.XX    7
        3.1.XX    4

    There is an open-ended number of versions. Some are in this format (major, minor, build), some are just major and minor, and some are just major. The only time I want to "roll up" the versions is when there is a build component.
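
    One way to group on the partial string is to strip the build component only when it is present and leave shorter version strings untouched; a sketch with a hypothetical table name (assuming standard_conforming_strings, the default on recent releases):

        -- regexp_replace drops the trailing ".build" only when the version has all three parts,
        -- so "1.1.23" groups as "1.1" while "2.1" and "3" stay as they are.
        SELECT regexp_replace(version, '^([0-9]+[.][0-9]+)[.][0-9]+$', '\1') AS version_group,
               SUM(users) AS users
        FROM app_versions
        GROUP BY 1
        ORDER BY 1;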

    Read the article
