Search Results

Search found 11198 results on 448 pages for 'ruby postgresql'.


  • GIS: line_locate_point() in Python

    - by miracle2k
    I'm pretty much a beginner when it comes to GIS, but I think I understand the basics - it doesn't seem too hard. But all these acronyms and different libraries - GEOS, GDAL, PROJ, PCL, Shapely, OpenGEO, OGR, OGC, OWS and what not, each seemingly depending on any number of the others - are slightly overwhelming me. Here's what I would like to do: given a number of points and a linestring, I want to determine the location on the line closest to a certain point. In other words, what PostGIS's line_locate_point() does: http://postgis.refractions.net/documentation/manual-1.3/ch06.html#line%5Flocate%5Fpoint Except I want to use plain Python. Which library or libraries should I look at generally for doing these kinds of spatial calculations in Python, and is there one that specifically supports a line_locate_point() equivalent?
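
    For reference, the library that seems to map most directly here is Shapely (a Python wrapper around GEOS): LineString.project() returns the distance along the line to the point nearest another geometry, which is the line_locate_point() equivalent, and interpolate() turns that distance back into a point. A minimal sketch:

        from shapely.geometry import LineString, Point

        line = LineString([(0, 0), (10, 0)])
        point = Point(4, 3)

        # Distance along the line to the location closest to `point`;
        # normalized=True gives a 0..1 fraction, like line_locate_point()
        distance = line.project(point)
        fraction = line.project(point, normalized=True)

        # The closest point on the line itself
        closest = line.interpolate(distance)
        print(distance, fraction, closest)  # 4.0 0.4 POINT (4 0)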

    Read the article

  • How to manage db connections on server?

    - by simpatico
    I have a severe problem with my database connection in my web application. Since I use a single database connection for the whole application, held in a singleton Database class, concurrent db operations by two users make the database roll back the transactions. All threads/servlets call static Database.doSomething(...) methods, which in turn call the method below:

        private static /* synchronized */ Connection getConnection(final boolean autoCommit) throws SQLException {
            if (con == null) {
                con = new MyRegistrationBean().getConnection();
            }
            con.setAutoCommit(true); // TODO: the autoCommit parameter is ignored here
            return con;
        }

    What's the recommended way to manage this db connection (or connections) so that I don't incur the same problem?
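
    The usual remedy (sketched here as a general pattern, not a verified fix for this exact code): stop sharing one connection across threads and hand each request a connection borrowed from a pool. In the Java/servlet world that would be a pooling DataSource such as DBCP or c3p0; the same idea in Python with psycopg2's built-in pool, to show the shape of it:

        import psycopg2
        from psycopg2 import pool

        # One pool per application, sized for the expected concurrency
        conn_pool = pool.ThreadedConnectionPool(
            minconn=1, maxconn=10, dsn="dbname=app user=app")

        def do_something():
            conn = conn_pool.getconn()   # borrow a connection for this request
            try:
                cur = conn.cursor()
                cur.execute("SELECT 1")
                conn.commit()            # each request manages its own transaction
            finally:
                conn_pool.putconn(conn)  # always return it to the pool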

    Read the article

  • How to test crontab entry?

    - by Mark
    I have an entry in my crontab that looks like this:

        0 3 * * * pg_dump mydb | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz

    That command works perfectly when I execute it from the shell, but it doesn't seem to be running every night. I'm assuming there's something wrong with the permissions, or maybe crontab is running under a different user or something. How can I debug this? I'm in a shared hosting environment (WebFaction).
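
    One thing worth checking before permissions (a likely culprit, though not confirmed here): cron treats an unescaped % specially - the command is cut at the first % and the rest is fed to it as standard input - so $(date +%Y-%m-%d) never reaches the shell intact from a crontab, even though it works interactively. Escaping the percent signs and capturing stderr makes any remaining failure visible:

        0 3 * * * (pg_dump mydb | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz) 2>> ~/backup/db/cron-errors.log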

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when dumping large data sets where you don't have enough local disk space to create the backup file first and then copy it over. I've tried using python + paramiko, which provides this functionality, but the performance is much worse than using the native openssh/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on linux or with some library like paramiko (but one that performs close to the native sftp client)?
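
    With the stock OpenSSH tools, sftp itself will not read from a pipe, but plain ssh will: pg_dump mydb | gzip | ssh user@host 'cat > /backups/mydb.gz'. If it has to be SFTP from Python, the sketch below streams a dump through paramiko without a local temp file (hostnames and paths are placeholders); set_pipelined() matters here, since waiting for a server acknowledgement after every write accounts for much of paramiko's slowness:

        import subprocess
        import paramiko

        client = paramiko.SSHClient()
        client.load_system_host_keys()
        client.connect("backup.example.com", username="backup")
        sftp = client.open_sftp()

        dump = subprocess.Popen(["pg_dump", "mydb"], stdout=subprocess.PIPE)
        remote = sftp.open("/backups/mydb.sql", "wb")
        remote.set_pipelined(True)  # don't block on an ack after each write
        try:
            while True:
                chunk = dump.stdout.read(1024 * 1024)
                if not chunk:
                    break
                remote.write(chunk)
        finally:
            remote.close()
            sftp.close()
            client.close()
            dump.wait()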

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that the original values are intact on the DB side; for the most part they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn's DataType is DateTime, while the database (Postgres) column type is timestamp. Up until recently this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that it is not related to data binding by using the 'Preview Data' option in the VS DataSet editor. Usually when I notice unexpected values popping up in my application, it's related to a misconfigured property, a type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • Problems writing a query to join two tables

    - by Psyche
    Hello, I'm working on a script whose purpose is to grant site users access to different sections of the site menu. For this I have created two tables, "menu" and "rights":

        menu
          - id
          - section_name

        rights
          - id
          - menu_id (references menu.id)
          - user_id (references users.id)

    How can a query be written to get all menu sections and mark the ones a given user has access to? I'm using PHP and Postgres. Thank you.
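
    A sketch of one way to write it (column names assumed from the description above): LEFT JOIN the rights rows for the user in question, so every menu section comes back and the join column is NULL wherever the user has no access. Shown with psycopg2 for brevity; the SQL is the same from PHP's pg_* functions.

        import psycopg2

        conn = psycopg2.connect("dbname=site")
        cur = conn.cursor()
        cur.execute("""
            SELECT m.id,
                   m.section_name,
                   (r.user_id IS NOT NULL) AS has_access
            FROM menu m
            LEFT JOIN rights r
                   ON r.menu_id = m.id
                  AND r.user_id = %(user_id)s
            ORDER BY m.id
        """, {"user_id": 42})
        for section_id, name, has_access in cur.fetchall():
            print(section_id, name, has_access)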

    Read the article

  • PropelBundle database:create for postgres

    - by Karol85
    I've installed the Propel bundle for symfony2. My database configuration is:

        propel:
            dbal:
                driver:     pgsql
                user:       postgres
                password:   postgres
                dsn:        pgsql:host=localhost;port=5432;dbname=test_database
                options:    {}
                attributes: {}

    When I want to create this database from the console (console propel:database:create) I get a strange error: Unable to open PDO connection [wrapped: SQLSTATE[08006] [7] FATAL: database "pgsql" does not exist. After I created a "pgsql" database on my localhost everything was good, and the database "test_database" was successfully created. Can somebody explain to me why I got the earlier error? With MySQL I created the database without any problems.

    Read the article

  • Nesting queries in SQL

    - by ZAX
    The goal of my query is to return the country name and its head of state, if the head of state has a name starting with A and the capital of the country has more than 100,000 people, using a nested query. Here is my query:

        SELECT country.name as country,
               (SELECT country.headofstate
                FROM country
                WHERE country.headofstate LIKE 'A%')
        FROM country, city
        WHERE city.population > 100000;

    I've tried reversing it, placing it in the WHERE clause, etc. I don't get nested queries. I'm just getting errors back, like "subquery returns more than one row" and such. If someone could help me out with how to order it, and explain why it needs to be a certain way, that'd be great.
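
    For comparison, here is the shape such a query usually takes (a sketch that assumes the classic "world" sample schema, where country.capital references city.id; adjust the join columns to the real schema): the subquery belongs in the WHERE clause, tying the capital to the city table, while the outer query keeps the head-of-state filter. The "more than one row" error came from putting an uncorrelated subquery in the SELECT list, which returns every matching head of state at once.

        import psycopg2

        conn = psycopg2.connect("dbname=world")
        cur = conn.cursor()
        cur.execute("""
            SELECT name, headofstate
            FROM country
            WHERE headofstate LIKE 'A%'
              AND capital IN (SELECT id
                              FROM city
                              WHERE population > 100000)
        """)
        for country_name, head in cur.fetchall():
            print(country_name, head)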

    Read the article

  • Expanding a varchar column is very slow, why?

    - by francs
    Hi, we need to modify a column of a big product table. Usually normal DDL statements execute quickly, but this statement takes about 10 minutes, and I would like to know the reason. I just want to expand a varchar column. The details follow.

    Table size:

        wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=> \d log_foot_mark
              Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)      |
         user_agent  | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    Altering the column:

        wapreader_log=> \timing
        Timing is on.
        wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark
                        ALTER COLUMN user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms
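
    A likely explanation, worth checking against the server version: on the PostgreSQL releases current when this was asked, ALTER COLUMN ... TYPE rewrites the whole table (and its indexes) even when only the varchar length limit grows, so a 5441 MB table means minutes of rewriting and heavy I/O. PostgreSQL 9.2 and later recognize a pure length increase and change only the catalog, which takes milliseconds.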

    Read the article

  • Tomcat Postgres Connection

    - by user191207
    Hi, I'm using a singleton class for a PostgreSQL connection inside a servlet. The problem is that once it is open it works for a while (I guess until some timeout), and then it starts throwing an I/O exception. Any idea what is happening to the singleton class inside the Tomcat VM? Thanks

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel that making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered that using subqueries felt easier and quicker to write and maintain. Commonly, I'll write subqueries that may contain one or more subqueries against related tables. Consider this example:

        SELECT (SELECT username
                FROM users
                WHERE records.user_id = user_id) AS username,
               (SELECT last_name || ', ' || first_name
                FROM users
                WHERE records.user_id = user_id) AS name,
               in_timestamp,
               out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll use subqueries in the WHERE clause. Consider this example:

        SELECT user_id,
               (SELECT name
                FROM organizations
                WHERE (SELECT organization
                       FROM locations
                       WHERE records.location = location_id) = organization_id
               ) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As a more general question, what are the advantages/disadvantages of using subqueries versus a JOIN? Is one way more correct or accepted than the other?
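
    For a concrete comparison, here is a sketch of the first query rewritten with a JOIN (same table and column names as above): one scan joined against users instead of two correlated subqueries fired per row, which also lets the planner choose a hash or merge join.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        cur.execute("""
            SELECT u.username,
                   u.last_name || ', ' || u.first_name AS name,
                   r.in_timestamp,
                   r.out_timestamp
            FROM records r
            JOIN users u ON u.user_id = r.user_id
            ORDER BY r.in_timestamp
        """)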

    Read the article

  • Search a string to find which records in table are inside said string

    - by Improfane
    Hello. Say I have a string, and a number of unique tokens or keywords in a database, potentially a large number. I want to find out which of these database strings occur inside the string I provide (and get their IDs). Is there a way to make a query search the provided string, or must this be done in application space? Am I right in thinking that this is not a 'full text search'? Would the best method be to insert the provided string into the database to make it a full-text search?
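
    It can stay in the database by turning the comparison around (a sketch; table and column names are assumed): pass the long string as a parameter and let each stored keyword be the needle. And no, this is not what "full text search" normally means; full-text search indexes the stored documents, whereas here the stored rows are the patterns.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        haystack = "the long string the user provided"
        # strpos(string, substring) > 0 means the keyword occurs in the input
        cur.execute("""
            SELECT id, token
            FROM keywords
            WHERE strpos(%s, token) > 0
        """, (haystack,))
        matches = cur.fetchall()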

    Read the article

  • Postgres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query again? So, what I want to do is run a query and build an array, like this:

        $result = pg_query("SELECT * FROM myTable");
        $i = 0;
        while ($row = pg_fetch_array($result)) {
            $myArray[$i]['id'] = $row['id'];
            $myArray[$i]['name'] = $row['name'];
            $i++;
        }

    But I know that there will be several hundred thousand rows, so I wanted to do it in batches of 10,000: 1-9,999, then 10,000-19,999, etc. The reason is that I keep getting this error:

        Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes)

    Incidentally, I don't understand how 3 bytes could exhaust 512M... So, if that's something I can just change, that'd be great, although it still might be better to do this in batches?
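
    The 3 bytes in the error are just the first allocation to fail after the array has already eaten the 512 MB, so raising memory_limit only delays the problem. The usual alternatives, sketched below with psycopg2 (in PHP the second one is spelled DECLARE ... CURSOR plus FETCH in a loop): page explicitly with LIMIT/OFFSET, or better, let a server-side cursor stream the rows so the client never holds the full result set.

        import psycopg2

        conn = psycopg2.connect("dbname=app")

        # A named cursor is server-side: rows come over in batches
        # of itersize instead of all at once
        cur = conn.cursor(name="mytable_scan")
        cur.itersize = 10000
        cur.execute("SELECT id, name FROM myTable")
        for row_id, name in cur:
            pass  # handle one row at a time; memory stays flat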

    Read the article

  • Postgres: Post-statement (or insert) asynchronous, non-blocking processing

    - by Hassan Syed
    I'm wondering if it is possible, after a collection of rows is inserted, to initiate an operation that is executed asynchronously, is non-blocking, and doesn't need to report its result back to the originator of the request. I am working with large volumes of events and I can guarantee that the post-insert logic will not fail -- I just want to have a single insert thread in my event sources, and I want this thread to keep flying without blocking, and without being responsible for any post-delivery book-keeping. I can tell you that I would potentially have 100 of these jobs executing concurrently, and each job might operate on 5 tables with anywhere between 200-1000 inserts on each of them. A hint in the right direction should be enough.
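
    One hint in that direction (a sketch; channel and table names are made up): have the inserting session fire NOTIFY after its inserts and let a separate worker process LISTEN and do the post-insert bookkeeping, so the insert threads never wait on it.

        import select
        import psycopg2
        import psycopg2.extensions

        # Worker process: reacts to notifications, outside any transaction
        conn = psycopg2.connect("dbname=events")
        conn.set_isolation_level(
            psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
        cur = conn.cursor()
        cur.execute("LISTEN rows_inserted;")

        while True:
            # Block until the connection has something to read
            if select.select([conn], [], [], 60) == ([], [], []):
                continue  # timeout; loop and wait again
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("run post-insert logic for:", note.payload)

    The event sources simply execute NOTIFY rows_inserted (directly, or from a trigger) after their inserts and carry on without blocking.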

    Read the article

  • unique constraint (w/o Trigger) on "one-to-many" relation

    - by elgcom
    To illustrate the problem, I'll make an example: a tag_bundle consists of one or more tags. A unique tag combination maps to a unique tag_bundle, and vice versa.

        tag_bundle                tag            tag_bundle_relation
        +---------------+         +--------+     +---------------+--------+
        | tag_bundle_id |         | tag_id |     | tag_bundle_id | tag_id |
        +---------------+         +--------+     +---------------+--------+
        | 1             |         | 100    |     | 1             | 100    |
        +---------------+         | 101    |     | 1             | 101    |
                                  +--------+     +---------------+--------+

    There can't be another tag_bundle having the combination of tag 100 and tag 101. How can I ensure such a unique constraint when executing SQL concurrently, i.e., prevent two bundles with the same tag combination from being added at the same time? Adding a simple unique constraint on any one table does not work. Is there any solution other than a trigger or an explicit lock? The only simple way I have come up with is to turn the tag combination into a string and make that unique:

        tag_bundle (unique on tags)      tag            tag_bundle_relation
        +---------------+---------+      +--------+     +---------------+--------+
        | tag_bundle_id | tags    |      | tag_id |     | tag_bundle_id | tag_id |
        +---------------+---------+      +--------+     +---------------+--------+
        | 1             | 100,101 |      | 100    |     | 1             | 100    |
        +---------------+---------+      | 101    |     | 1             | 101    |
                                         +--------+     +---------------+--------+

    but it doesn't seem like a good way :(
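
    A variation on that last idea that keeps the data relational (a sketch; PostgreSQL syntax, names assumed): store the bundle's tags as a sorted integer array column on tag_bundle, maintained in the same transaction as the tag_bundle_relation rows, with a plain unique index on it. Two concurrent attempts to create the same combination then collide on the index instead of both succeeding.

        import psycopg2

        conn = psycopg2.connect("dbname=tags")
        cur = conn.cursor()
        cur.execute("""
            ALTER TABLE tag_bundle ADD COLUMN tag_ids integer[];
            CREATE UNIQUE INDEX tag_bundle_tags_uniq ON tag_bundle (tag_ids);
        """)
        conn.commit()

        def create_bundle(tag_ids):
            canonical = sorted(set(tag_ids))  # canonical order makes equal sets equal arrays
            # Raises a unique violation if the combination already exists
            cur.execute(
                "INSERT INTO tag_bundle (tag_ids) VALUES (%s) RETURNING tag_bundle_id",
                (canonical,))
            bundle_id = cur.fetchone()[0]
            for tag_id in canonical:
                cur.execute(
                    "INSERT INTO tag_bundle_relation (tag_bundle_id, tag_id) "
                    "VALUES (%s, %s)", (bundle_id, tag_id))
            conn.commit()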

    Read the article

  • Postgres vs Firebird

    - by Tedi
    I'm looking to use either Firebird or Postgres in my next development project, largely because both are available under a BSD-like license. I found a great comparison of the two databases at http://www.amsoftwaredesign.com/pg_vs_fb but it is a good 2+ years old, and both databases have come a long way since. Does anyone mind updating the comparison table to be relevant for the current versions of both Firebird and Postgres, or have a link to a site with a good recent comparison between the two?

    Read the article

  • PostgreSQL: Arrays Data Type with PHP

    - by ArchJ
    I'm working with PostgreSQL and PHP, and I know that PostgreSQL allows columns of a table to be defined as arrays. So let's say I have a table like this:

        CREATE TABLE sal_emp (
            a text ARRAY,
            b text ARRAY,
            c text ARRAY
        );

    These are my arrays:

        $a = array('aa', 'bb', 'cc');
        $b = array('dd', 'dd', 'aa');
        $c = array('bb', 'ff', 'ee');

    and I want to insert them, one into each respective column, like this:

             a      |     b      |     c
        ------------+------------+------------
         {aa,bb,cc} | {dd,dd,aa} | {bb,ff,ee}

    Can I insert it this way?

        $a = implode(',', $a);
        $b = implode(',', $b);
        $c = implode(',', $c);
        $data = array('a' => $a, 'b' => $b, 'c' => $c);
        pg_insert($dbconn, 'sal_emp', $data);

    Or is there a better way to achieve the same result?
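
    Almost: the braces are what's missing. A PostgreSQL array literal looks like {aa,bb,cc}, so the imploded value needs to be wrapped, e.g. '{' . implode(',', $a) . '}', before the server will accept it as an array. For comparison, psycopg2 on the Python side performs that adaptation automatically from a list, which shows the target format (a sketch):

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        # psycopg2 converts Python lists into PostgreSQL array literals
        cur.execute(
            "INSERT INTO sal_emp (a, b, c) VALUES (%s, %s, %s)",
            (["aa", "bb", "cc"], ["dd", "dd", "aa"], ["bb", "ff", "ee"]))
        conn.commit()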

    Read the article

  • Paginating data, has to be a better way

    - by John Tyler
    I've read like 10 or so "tutorials", and they all involve the same thing: pull a count of the data set, then pull the relevant data set (LIMIT, OFFSET), i.e.:

        SELECT COUNT(*) FROM table WHERE something = ?
        SELECT * FROM table WHERE something = ? LIMIT ? OFFSET ?

    Two very similar queries, no? There has to be a better way to do this; my dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses, and vary from user to user, but are properly indexed of course).
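
    On PostgreSQL 8.4 and later the two queries can be folded into one (a sketch): attach count(*) OVER () to the page query, so the filtered total rides along with every returned row and the 30 WHERE clauses are evaluated once instead of twice.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        cur.execute("""
            SELECT *, count(*) OVER () AS full_count
            FROM mytable
            WHERE something = %s
            ORDER BY id
            LIMIT %s OFFSET %s
        """, ("value", 50, 0))
        rows = cur.fetchall()
        total = rows[0][-1] if rows else 0  # same value on every row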

    Read the article

  • Get the last N rows in the database in order?

    - by Kristopher
    Let's say I have the following database table:

         record_id | record_date | record_value
        -----------+-------------+--------------
         1         | 2010-05-01  | 195.00
         2         | 2010-07-01  | 185.00
         3         | 2010-09-01  | 175.00
         4         | 2010-05-01  | 189.00
         5         | 2010-06-01  | 185.00
         6         | 2010-07-01  | 180.00
         7         | 2010-08-01  | 175.00
         8         | 2010-09-01  | 170.00
         9         | 2010-10-01  | 165.00

    I want to grab the last 5 rows with the data ordered by record_date ASC. This is easy to do with:

        SELECT * FROM mytable ORDER BY record_date ASC LIMIT 5 OFFSET 4

    Which would give me:

         record_id | record_date | record_value
        -----------+-------------+--------------
         6         | 2010-07-01  | 180.00
         7         | 2010-08-01  | 175.00
         3         | 2010-09-01  | 175.00
         8         | 2010-09-01  | 170.00
         9         | 2010-10-01  | 165.00

    But how do I do this when I don't know how many records there are and can't compute the magic number of 4? I've tried this query, but if there are fewer than 5 records it results in a negative OFFSET, which is invalid:

        SELECT * FROM mytable ORDER BY record_date ASC
        LIMIT 5 OFFSET (SELECT COUNT(*) FROM mytable) - 5;

    So how do I accomplish this?
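
    A common way around the OFFSET arithmetic (a sketch): sort descending, take the first 5, then flip the small result back to ascending in an outer query; adding record_id to the ORDER BY keeps ties on record_date deterministic.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        cur.execute("""
            SELECT *
            FROM (SELECT *
                  FROM mytable
                  ORDER BY record_date DESC, record_id DESC
                  LIMIT 5) AS last_five
            ORDER BY record_date ASC, record_id ASC
        """)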

    Read the article

  • Escape SQL "LIKE" value for Postgres with psycopg2

    - by Evgeny
    Does psycopg2 have a function for escaping the value of a LIKE operand for Postgres? For example, I may want to match strings that start with the string "20% of all", so I want to write something like this:

        sql = '... WHERE ... LIKE %(myvalue)s'
        cursor.execute(sql, {'myvalue': escape_sql_like('20% of all') + '%'})
        rows = cursor.fetchall()

    Is there an existing escape_sql_like function that I could plug in here? (Similar question to "How to quote a string value explicitly (Python DB API/Psycopg2)", but I couldn't find an answer there.)
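
    There does not appear to be a built-in helper for this, but it is only three replacements (a sketch; the backslash must be handled first, since it is the default LIKE escape character):

        import psycopg2

        def escape_sql_like(value):
            # Escape the escape character itself, then the two LIKE wildcards
            return (value.replace('\\', '\\\\')
                         .replace('%', '\\%')
                         .replace('_', '\\_'))

        conn = psycopg2.connect("dbname=app")
        cursor = conn.cursor()
        cursor.execute(
            "SELECT title FROM docs WHERE title LIKE %(myvalue)s",
            {'myvalue': escape_sql_like('20% of all') + '%'})
        rows = cursor.fetchall()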

    Read the article

  • Finding all areas that intersect with a point and vice-versa - PostGIS

    - by ForeignerBR
    I'm developing a project using PostGIS to hold spatial data where I have records that hold geometry point data and records that hold geometry area data. To solve my problem I'm looking for two queries that can take geographic shapes rather than geometric shapes as parameters. For query A I need it to return all points that intersect with a given area. For query B I need it to return all areas that intersect with a given point.
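
    Both directions are the same ST_Intersects() call with the arguments' roles swapped; with the geography type (PostGIS 1.5 or later) it takes geographic shapes directly. A sketch, with table and column names assumed:

        import psycopg2

        conn = psycopg2.connect("dbname=gis")
        cur = conn.cursor()

        # Query A: all points that intersect a given area
        cur.execute("""
            SELECT p.id
            FROM points p
            WHERE ST_Intersects(p.geog, ST_GeographyFromText(%s))
        """, ("POLYGON((-71.1 42.3, -71.0 42.3, -71.0 42.4, -71.1 42.4, -71.1 42.3))",))

        # Query B: all areas that intersect a given point
        cur.execute("""
            SELECT a.id
            FROM areas a
            WHERE ST_Intersects(a.geog, ST_GeographyFromText(%s))
        """, ("POINT(-71.06 42.35)",))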

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for selectnothing I'm sure I'll have no hits:

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE:

        Seq Scan on myTable  (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t, as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and currently it's taking around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?
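
    One index method that does serve unanchored %...% patterns (a sketch; it needs the pg_trgm contrib module, and PostgreSQL 9.1 or later for LIKE/ILIKE to use it): a trigram GIN index on the column.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm")
        cur.execute("""
            CREATE INDEX idx_m_nome_t_trgm
            ON myTable USING gin (nome_t gin_trgm_ops)
        """)
        conn.commit()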

    Read the article

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item
        Columns: ItemID, Title, Content, NoChange (Date)

        Table: Tag
        Columns: TagID, Title

        Table: ItemTag
        Columns: ItemID, TagID

    The Item table has a NoChange field; if it is set, no ItemTag row with that ItemID may be inserted. How can I check this during the insert? For updates I have this statement:

        UPDATE ItemTag
        SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
          AND exists (select ItemID from Item where ItemID = ? AND NoChange is null);

    Thank you.
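
    The same guard works on the insert (a sketch): write it as INSERT ... SELECT instead of INSERT ... VALUES, so the row only materializes when the subquery passes.

        import psycopg2

        conn = psycopg2.connect("dbname=app")
        cur = conn.cursor()
        cur.execute("""
            INSERT INTO ItemTag (ItemID, TagID)
            SELECT %(item)s, %(tag)s
            WHERE EXISTS (SELECT 1 FROM Item
                          WHERE ItemID = %(item)s AND NoChange IS NULL)
        """, {"item": 7, "tag": 3})
        # cur.rowcount is 0 when NoChange blocked the insert
        conn.commit()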

    Read the article

  • INSERT data from Textbox to Postgres SQL

    - by user1479013
    I just learned how to connect C# and PostgreSQL. I want to INSERT data from tb1 (a TextBox) and tb2 into the database, but I don't know how to code it. My previous code does a SELECT from the database:

        private void button1_Click(object sender, EventArgs e)
        {
            bool blnfound = false;
            NpgsqlConnection conn = new NpgsqlConnection(
                "Server=127.0.0.1;Port=5432;User Id=postgres;Password=admin123;Database=Login");
            conn.Open();
            NpgsqlCommand cmd = new NpgsqlCommand(
                "SELECT * FROM login WHERE name='" + tb1.Text + "' and password = '" + tb2.Text + "'", conn);
            NpgsqlDataReader dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                blnfound = true;
                Form2 f5 = new Form2();
                f5.Show();
                this.Hide();
            }
            if (blnfound == false)
            {
                MessageBox.Show("Name or password is incorrect", "Message Box",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation,
                    MessageBoxDefaultButton.Button1);
                dr.Close();
                conn.Close();
            }
        }

    So please help me with the code.
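
    The INSERT follows the same pattern, but use parameters rather than string concatenation (which would also remove the SQL injection in the SELECT above). A sketch of the idea in Python/psycopg2 for brevity; Npgsql exposes the same mechanism through the Parameters collection on NpgsqlCommand:

        import psycopg2

        name_value, password_value = "alice", "secret"  # stand-ins for tb1.Text / tb2.Text

        conn = psycopg2.connect(
            host="127.0.0.1", port=5432,
            user="postgres", password="admin123", dbname="Login")
        cur = conn.cursor()
        # Parameters keep quoting problems and SQL injection out of the SQL
        cur.execute(
            "INSERT INTO login (name, password) VALUES (%s, %s)",
            (name_value, password_value))
        conn.commit()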

    Read the article
