Search Results

Search found 1505 results on 61 pages for 'postgresql'.

Page 24/61

  • How can I temporarily redirect printf output to a c-string?

    - by Ben S
    I'm writing an assignment which involves adding some functionality to PostgreSQL on a Solaris box. As part of the assignment, we need to print some information on the client side (i.e., using elog). PostgreSQL already has lots of helper methods which print out the required information; however, the helper methods are packed with hundreds of printf calls, and the elog method only works with C-style strings. Is there a way I could temporarily redirect printf calls to a buffer so I could easily send it over elog to the client? If that's not possible, what would be the simplest way to modify the helper methods to end up with a buffer as output?
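    One low-effort approach, sketched below, is to divert printf into an in-memory buffer with a printf-compatible wrapper plus a macro defined only in the helper translation units, then hand the buffer to elog. The buffer size and names here are arbitrary, not taken from the PostgreSQL sources.

    ```c
    #include <stdarg.h>
    #include <stdio.h>

    static char   elog_buf[8192];   /* arbitrary scratch buffer */
    static size_t elog_len = 0;

    /* printf-compatible: appends to elog_buf instead of writing to stdout */
    static int
    buf_printf(const char *fmt, ...)
    {
        va_list ap;
        int     n;

        va_start(ap, fmt);
        n = vsnprintf(elog_buf + elog_len, sizeof(elog_buf) - elog_len, fmt, ap);
        va_end(ap);

        if (n > 0)
        {
            elog_len += (size_t) n;
            if (elog_len >= sizeof(elog_buf))
                elog_len = sizeof(elog_buf) - 1;   /* output was truncated */
        }
        return n;
    }

    /* In the helper source files only, divert the existing calls: */
    #define printf buf_printf

    /* ...and once the helpers have run, ship the text to the client: */
    /* elog(INFO, "%s", elog_buf); */
    ```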

    Read the article

  • Unable to install gem "pg" on Ubuntu 12.10 (AMD64)

    - by Lynx_Eyes
    I've been (unsuccessfully) trying to install the "pg" gem on my Ruby 1.9.3-p286, but nothing seems to work. I've already installed PostgreSQL (9.1), libpq-dev and a few others like postgresql-server-dev-9.1. I've tried to pass the "with-pg-config" flag to gem install, but still nothing works. Every time I try to install the gem it outputs something like this: Building native extensions. This could take a while... ERROR: Error installing pg: ERROR: Failed to build gem native extension. /home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby extconf.rb checking for pg_config... yes Using config values from /usr/bin/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for pg_config_manual.h... yes checking for PQconnectdb() in -lpq... no checking for PQconnectdb() in -llibpq... no checking for PQconnectdb() in -lms/libpq... no Can't find the PostgreSQL client library (libpq) *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby --with-pg --without-pg --with-pg-dir --without-pg-dir --with-pg-include --without-pg-include=${pg-dir}/include --with-pg-lib --without-pg-lib=${pg-dir}/lib --with-pg-config --without-pg-config --with-pg_config --without-pg_config --with-pqlib --without-pqlib --with-libpqlib --without-libpqlib --with-ms/libpqlib --without-ms/libpqlib Gem files will remain installed in /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1 for inspection. Results logged to /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1/ext/gem_make.out What am I doing wrong? Is there something else I should do before trying to install the gem? Thank you in advance. [EDIT] OK, so joelparkerhenderson's answer got me thinking that there might be something wrong with paths and libraries, and I went on digging a little further. I've found this awesome post and it solved it! Basically the problem lies with RVM. So my problem is solved, and for anyone out there who might suffer from the same thing, follow the link!

    Read the article

  • Switch role after connecting to database

    - by Chris Gow
    Is it possible to change the PostgreSQL role a user is using when interacting with Postgres after the initial connection? The database(s) will be used in a web application, and I'd like to employ database-level rules on tables and schemas together with connection pooling. From reading the PostgreSQL documentation it appears I can switch roles if I originally connect as a user with the superuser role, but I would prefer to initially connect as a user with minimal permissions and switch as necessary. Having to specify the user's password when switching would be fine (in fact I'd prefer it). What am I missing?
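    For what it's worth, SET ROLE does exactly this without superuser rights, provided the login role has been granted membership in the target role beforehand; note that it never prompts for a password, so role membership is the only gate. A minimal sketch with made-up role names:

    ```sql
    -- done once by an administrator
    CREATE ROLE webapp LOGIN PASSWORD 'secret';
    CREATE ROLE reporting NOLOGIN;
    GRANT reporting TO webapp;

    -- inside a pooled connection that was opened as "webapp"
    SET ROLE reporting;       -- privileges of "reporting" now apply
    SELECT current_user;      -- reporting
    RESET ROLE;               -- back to "webapp" before releasing the connection
    ```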

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
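    One cheap thing to rule out on the PostgreSQL side before digging into Unicorn itself is whether requests are stuck waiting on queries or locks. A sketch against pg_stat_activity is below; the column names are the pre-9.2 ones (procpid and current_query were renamed to pid and query in 9.2).

    ```sql
    SELECT procpid,
           usename,
           waiting,
           now() - query_start AS runtime,
           current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>'
    ORDER BY query_start
    LIMIT 20;
    ```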

    Read the article

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord and I would appreciate any feedback on a better place to ask for help... In a nutshell : when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at the system startup, everything is OK. But, if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once that gunicorn_django has relaunched, every request is answered with a weird Traceback : (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here : http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration : backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration : [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice ? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special : $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.

    Read the article

  • GWT date picker format problem when saving a Java Date through Hibernate in PostgreSQL

    - by Noor
    Hi, I am using a Java Date with Hibernate, which is then saved in the database (PostgreSQL). I am not that good with Hibernate. This is the relevant part of the mapping file: <property name="DateOfBirth" type="java.util.Date"> <column name="DATEOFBIRTH" /> </property> I am using the GWT date picker with the short date format, i.e. yyyy-MM-dd, and I get the value from the picker using View.getUserDateOfBirth().getValue(). But when I save the date 2010-11-30 into the database it is stored as 2010-11-30 00:00:00 instead of 2010-11-30. I want it saved in the database in the 2010-11-30 format. I have tried various things such as timestamp, but have not been able to configure it. I think the <property> mapping above is what should be changed, but I do not know what to change it to.
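    If a date-only column is the goal, a likely fix is to map the property with Hibernate's built-in "date" type (java.util.Date on its own defaults to a timestamp) and make the column a SQL DATE. The sql-type below is an assumption about the schema rather than something taken from the question; annotation-based mappings would use @Temporal(TemporalType.DATE) for the same effect.

    ```xml
    <property name="DateOfBirth" type="date">
        <column name="DATEOFBIRTH" sql-type="date" />
    </property>
    ```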

    Read the article

  • Help to convert PostgreSQL dates into SQL Server dates

    - by Earlz
    Hello, I'm doing some data conversion from PostgreSQL to Microsoft SQL Server. So far it has all gone well and I almost have the entire database dump script running. There is only one thing that is still messed up: dates. The dates are dumped in a string format; these are two example formats I've seen so far: '2008-01-14 12:00:00' and the more precise '2010-04-09 12:23:45.26525'. I would like a regex (or set of regexes) that I could run to replace these with SQL Server-compatible dates. Does anyone know how I can do that?
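    If the dump can be regenerated, one way to sidestep the regex entirely is to have PostgreSQL emit the dates in a form SQL Server already parses (ISO 8601 with milliseconds, which CONVERT style 126 accepts). Table and column names below are placeholders.

    ```sql
    SELECT to_char(created_at, 'YYYY-MM-DD"T"HH24:MI:SS.MS') AS created_at_iso
    FROM some_table;

    -- on the SQL Server side:
    -- SELECT CONVERT(datetime, '2010-04-09T12:23:45.265', 126);
    ```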

    Read the article

  • Mixed surrogate composite key insert in JPA 2.0, PostgreSQL and Hibernate 3.5

    - by Gerald
    First off, we are using JPA 2.0 and Hibernate 3.5 as persistence provider on a PostgreSQL database. We successfully use the sequence of the database via the JPA 2.0 annotations as an auto-generated value for single-field-surrogate-keys and all works fine. Now we are implementing a bi-temporal database-scheme that requires a mixed key in the following manner: Table 1: id (pk, integer, auto-generated-sequence) validTimeBegin (pk, dateTime) validTimeEnd (dateTime) firstName (varChar) Now we have a problem. You see, if we INSERT a new element, the field id is auto-generated and that's fine. Only, if we want to UPDATE the field within this scheme, then we have to change the validTimeBegin column WITHOUT changing the id-field and insert it as a new row like so: BEFORE THE UPDATE OF THE ROW: |---|-------------------------|-------------------------|-------------------| | id| validTimeBegin | validTimeEnd | firstName | |---|-------------------------|-------------------------|-------------------| | 1| 2010-05-01-10:00:00.000 | NULL | Gerald | |---|-------------------------|-------------------------|-------------------| AFTER THE UPDATE OF THE ROW happening at exactly 2010-05-01-10:35:01.788 server-time: (we update the person with the id:1 to reflect his new first name...) |---|-------------------------|-------------------------|-------------------| | id| validTimeBegin | validTimeEnd | firstName | |---|-------------------------|-------------------------|-------------------| | 1| 2010-05-01-10:00:00.000 | 2010-05-01-10:35:01.788 | Gerald | |---|-------------------------|-------------------------|-------------------| | 1| 2010-05-01-10:35:01.788 | NULL | Jerry | |---|-------------------------|-------------------------|-------------------| So our problem is, that this doesn't work at all using an auto-generated-sequence for the field id because when inserting a new row then the id ALWAYS is auto-generated although it really is part of a composite key which should sometimes behave differently. So my question is: Is there a way to tell hibernate via JPA to stop auto-generating the id-field in the case I want to generate a new variety of the same person and go on as usual in every other case or do I have to take over the whole id-generation with custom code? Thanks in advance, Gerald

    Read the article

  • AbstractMethodError when invoking createArrayOf, with postgresql 8.4 jdbc4 and JBoss 5.1GA

    - by Francesco
    Hi, when using this method public List<Field> getFieldWithoutId(List<Integer> idSections) throws Exception { try { Connection conn = this.getConnection(); Array arraySections = conn.createArrayOf("int4", idSections.toArray()); this.log.info("Recupero field"); List<Field> fields = this.getJdbcTemplate().query(getFieldWithoutIdQuery, new Object[] {arraySections},ParameterizedBeanPropertyRowMapper.newInstance(Field.class)); /*if (!conn.isClosed()) conn.close(); */ releaseConnection(conn); return fields; } catch (Exception e) { e.printStackTrace(); throw new Exception("Errore."); } } I have an exception at conn.createArrayOf("int4", idSections.toArray());. The exception is: javax.ejb.EJBException : Unexpected Error java.lang.AbstractMethodError: org.jboss.resource.adapter.jdbc.jdk5.WrappedConnectionJDK5.createArrayOf(Ljava/lang/String;[Ljava/lang/Object;)Ljava/sql/Array; postgresql-8.4-701.jdbc4.jar is in jboss/server/all/lib dir. Application is spring based with ejb3. When working locally with the same setup everything is fine. This only happens on a preproduction environment. Only difference is locally I have jboss run in default mode, in the other case there are 2 jbosses in all configuration. I can't track down the cause of this error. Could someone help me please?

    Read the article

  • em.persist doesn't seem to persist data to a PostgreSQL db

    - by Mario
    I've got a simple Java main which must write bean data to a PostgreSQL database. I use the EntityManager to persist or update objects, with Hibernate and the TopLink driver connection specified in the persistence.xml file. When I call em.persist(obj), nothing is saved to the database and I don't know why. Here is my simple code: private static void importa(FileReader f) throws IOException { EntityManagerFactory emf = Persistence .createEntityManagerFactory("orpt2"); EntityManager em = emf.createEntityManager(); dispositivoMedico = new DispositivoMedico(); dispositivoMedico.setCategoria("prova"); dispositivoMedico.setCodice("323"); em.persist(dispositivoMedico); And here is my persistence.xml http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" it.ariadne.orpt2.entities.AccessoriScheda it.ariadne.orpt2.entities.CampiSchede it.ariadne.orpt2.entities.CampiSchedeSalvati it.ariadne.orpt2.entities.CampoAggiuntivo it.ariadne.orpt2.entities.Categorie it.ariadne.orpt2.entities.CategorieCampi it.ariadne.orpt2.entities.CategorieCampiPK it.ariadne.orpt2.entities.ClasseCivab it.ariadne.orpt2.entities.DecodificaStato it.ariadne.orpt2.entities.DispositivoMedico it.ariadne.orpt2.entities.Ente it.ariadne.orpt2.entities.FormaNegoziazione it.ariadne.orpt2.entities.Fornitore it.ariadne.orpt2.entities.LogSession it.ariadne.orpt2.entities.Modello it.ariadne.orpt2.entities.Periodicita it.ariadne.orpt2.entities.Produttore it.ariadne.orpt2.entities.Ruolo it.ariadne.orpt2.entities.RuoloPK it.ariadne.orpt2.entities.RuoloUtente it.ariadne.orpt2.entities.Scheda it.ariadne.orpt2.entities.SchedaSalvata it.ariadne.orpt2.entities.Tipologia it.ariadne.orpt2.entities.Utente Thank you for your help. Mario
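    A common cause in a standalone main is simply the missing transaction: outside a container, persist() only reaches the database once a transaction commits (and the EntityManager should then be closed). A minimal sketch along the lines of the snippet above, assuming a RESOURCE_LOCAL persistence unit and that DispositivoMedico is on the classpath:

    ```java
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import it.ariadne.orpt2.entities.DispositivoMedico;

    public class ImportSketch {
        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("orpt2");
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();              // required outside a container
            DispositivoMedico dm = new DispositivoMedico();
            dm.setCategoria("prova");
            dm.setCodice("323");
            em.persist(dm);
            em.getTransaction().commit();             // nothing is written until this

            em.close();
            emf.close();
        }
    }
    ```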

    Read the article

  • Divide a path into N sections using Java or PostgreSQL/PostGIS

    - by Guido
    Imagine a GPS tracking system that is following the position of several objects. The points are stored in a database (PostgreSQL + PostGIS). Each path is composed by a different number of points. That is the reason why, in order to compare a pair of paths, I need to divide every path in a set of 100 points. Do you know any PostGIS function that already implement this algorithm? I've not been able to find it. If not, I'd like to solve it using Java. In this case I'd like to know an efficient and easy to implement algorithm to divide a path into N points. The most simple example could be to divide this path into three points: position 1 : x=1, y=2 position 2 : x=1, y=3 And the result should be: position 1 : x=1, y=2 (starting point) position 2 : x=5, y=2.5 position 3 : x=9, y=3 (end point) Edit: By 'compare a pair of paths' I mean to calculate the distance between two paths. I plan to divide each path in 100 points, and sum the euclidean distance between each one of these points as the distance between the two paths.
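    On the PostGIS side, line interpolation plus generate_series gets close to this without any Java. A sketch, assuming each path can be assembled into a LINESTRING; the function is named ST_Line_Interpolate_Point in PostGIS 1.x and ST_LineInterpolatePoint in 2.x+, the ORDER BY inside the aggregate needs PostgreSQL 9.0 (sort in a subquery on older servers), and track_points/track_id/recorded_at are placeholder names.

    ```sql
    WITH path AS (
        SELECT ST_MakeLine(geom ORDER BY recorded_at) AS line
        FROM track_points
        WHERE track_id = 42
    )
    SELECT n AS point_no,
           ST_Line_Interpolate_Point(line, n / 99.0) AS pt   -- 100 evenly spaced points
    FROM path, generate_series(0, 99) AS n;
    ```

    With both paths resampled this way, the distance described above is then the sum of ST_Distance between corresponding points.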

    Read the article

  • Get most left|right|top|bottom point contained in box

    - by skyman
    I'm storing Points Of Interest (POI) in a PostgreSQL database and retrieve them via a PHP script for an Android application. To reduce internet usage I want my mobile app to know whether there are any points in the neighborhood of the currently displayed area. My idea is to store the bounds of the rectangle containing all points already retrieved (in other words: the nearest point to the left (west) of the most western point already retrieved, the nearest point above (north) of the most northern one already retrieved, etc.), and I will make the next query when any edge of the screen goes outside of these bounds. Currently I can retrieve the points which are in a "single screen" (the area covered by the currently displayed map) using: SELECT * FROM ch WHERE loc <@ (box '((".-$latSpan.", ".$lonSpan."),(".$latSpan.", ".-$lonSpan."))' + point '".$loc."') Now I need to know the most remote point in each of the four directions; then I will be able to retrieve the next four "more remote" points. Is there any way to get those points (or the box) directly from PostgreSQL (maybe using some "aggregate points to box" function)?
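    If loc is the built-in point type used in the query above, the extent of everything already fetched can be built directly from the min/max of the coordinates (point[0] is x, point[1] is y). The literal box and point below are placeholders for the PHP variables.

    ```sql
    SELECT box(point(min(loc[0]), min(loc[1])),
               point(max(loc[0]), max(loc[1]))) AS fetched_extent
    FROM ch
    WHERE loc <@ (box '((-0.05,0.08),(0.05,-0.08))' + point '(52.2,21.0)');
    ```

    PostGIS offers the same thing as an aggregate via ST_Extent, but for plain point columns the min/max construction above is about as direct as it gets.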

    Read the article

  • Can't install pg gem on Windows

    - by sNiCKY
    I've got two Ruby versions, 1.8.7 and 1.9.2, and PostgreSQL 8.3. I can't install the pg gem on either of them. I'm getting this error: C:/Development/Ruby187/bin/ruby.exe extconf.rb checking for pg_config... yes not recorded checking for libpq-fe.h... no Can't find the 'libpq-fe.h header *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=C:/Development/Ruby187/bin/ruby --with-pg --without-pg --with-pg-config --without-pg-config --with-pg-dir --without-pg-dir --with-pg-include --without-pg-include=${pg-dir}/include --with-pg-lib --without-pg-lib=${pg-dir}/lib I know it's a common problem, but I haven't found a working solution yet... Oh, and I have added C:\Program Files (x86)\PostgreSQL\8.3\bin to my PATH.
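    A sketch of the usual workaround, using the options the extension itself lists above and the 8.3 install path mentioned at the end (the paths contain spaces, so they need quoting):

    ```sh
    gem install pg -- --with-pg-include="C:/Program Files (x86)/PostgreSQL/8.3/include" --with-pg-lib="C:/Program Files (x86)/PostgreSQL/8.3/lib"
    ```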

    Read the article

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data: station_id,taken,amount,category_id,flag 1,'1984-07-1',0,4, 1,'1984-07-2',0,4, 1,'1984-07-3',0,4, 1,'1984-07-4',0,4,T Here is the table structure (with one rule included): CREATE TABLE climate.measurement ( id bigserial NOT NULL, station_id integer NOT NULL, taken date NOT NULL, amount numeric(8,2) NOT NULL, category_id smallint NOT NULL, flag character varying(1) NOT NULL DEFAULT ' '::character varying ) WITH ( OIDS=FALSE ); ALTER TABLE climate.measurement OWNER TO postgres; CREATE OR REPLACE RULE i_measurement_01_001 AS ON INSERT TO climate.measurement WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1 DO INSTEAD INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag) VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag); I can generate the data into any format. Am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL and am eager to use its PL/R extensions for stats. I was also thinking about using: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
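    Since COPY skips rules but does fire triggers, one common route is to swap the rules for a single BEFORE INSERT trigger and then COPY into the parent table. A sketch following the names in the question; the branch shown covers only the one partition, and the rest would be analogous (or built with dynamic SQL).

    ```sql
    CREATE OR REPLACE FUNCTION climate.measurement_route() RETURNS trigger AS $$
    BEGIN
        IF date_part('month', NEW.taken)::int = 1 AND NEW.category_id = 1 THEN
            INSERT INTO climate.measurement_01_001 VALUES (NEW.*);
        -- ELSIF ... one branch per child table ...
        END IF;
        RETURN NULL;   -- keep the routed row out of the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurement_route
        BEFORE INSERT ON climate.measurement
        FOR EACH ROW EXECUTE PROCEDURE climate.measurement_route();

    -- bulk load; far faster than row-by-row INSERTs
    COPY climate.measurement (station_id, taken, amount, category_id, flag)
        FROM '/path/to/measurements.csv' WITH CSV;
    ```

    Splitting the file beforehand and COPYing straight into each child table is faster still, since no per-row trigger fires.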

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), server roundtrips make the difference. I don't want to focus on the roundtrips created in application code, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in my RDBMS comparison feature matrix, to understand which is the best RDBMS to choose as an alternative to MS SQL. That claim would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add information? And where does MS SQL sit? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question with a simple list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where roundtrips per RDBMS are compared, that would be great.

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find the strings that match my query. Now I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                    QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, the "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                    QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                              QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
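    When the rows of interest are a known minority, a partial index can serve the inequality directly. A sketch for the second example; it assumes tld() is IMMUTABLE (a requirement for expression and partial indexes), and the index predicate has to match the query's WHERE clause for the planner to consider it.

    ```sql
    CREATE INDEX emailspersons_not_fr_idx
        ON emailspersons (person)
        WHERE tld(email) <> 'fr';

    EXPLAIN ANALYZE
    SELECT person FROM emailspersons WHERE tld(email) <> 'fr';
    ```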

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are always meant to be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried this with a database constraint... CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC')) The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is that this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client: SELECT now() -- returns the correct local time; SELECT date_part('epoch', now()) -- returns a unix timestamp at UTC (tested by feeding the value into PHP's date function, correcting for its compensation to my time zone); SELECT date_part('epoch', now() at time zone 'UTC') -- returns a unix timestamp two time zone offsets west, e.g. I am at GMT+2 and I get a GMT-2 timestamp. I've figured out that dropping the "at time zone 'UTC'" solves my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?
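    For the constraint itself, the zone conversion can simply be dropped: extracting 'epoch' from a timestamp with time zone is already measured from UTC, so no AT TIME ZONE step is needed. A sketch of the check with a placeholder table name (the column really is called timestamp in the question, hence the quoting):

    ```sql
    ALTER TABLE events
        ADD CONSTRAINT not_future
        CHECK ("timestamp" - 300 <= date_part('epoch', now()));
    ```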

    Read the article

  • SQL return ORDER BY result as an array

    - by EarthMind
    Is it possible to return groups as an associative array? I'd like to know if a pure SQL solution is possible. Note that I realize I could be making things unnecessarily complex; this is mainly to give me an idea of the power of SQL. My problem: I have a list of words in the database that should be sorted alphabetically and grouped according to the first letter of each word. For example: ape, broom, coconut, banana, apple should be returned as array( 'a' => 'ape', 'apple', 'b' => 'banana', 'broom', 'c' => 'coconut' ) so I can easily create sorted lists by first letter (i.e. clicking "A" only shows words starting with a, "B" with b, etc.). This should make it easier for me to load everything in one query and make the sorted list JavaScript-based, i.e. without having to reload the page (or use AJAX). Side notes: I'm using PostgreSQL, but a solution for MySQL would be fine too and I can try to port it to PostgreSQL. The scripting language is PHP.
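    PostgreSQL can hand back one row per letter with the words already collected into an array, which maps straight onto the PHP structure above. A sketch assuming a placeholder words(word) table; array_agg exists from 8.4, the ORDER BY inside the aggregate needs 9.0 (sort in a subquery on 8.4), and MySQL would use GROUP_CONCAT and return a delimited string instead of an array.

    ```sql
    SELECT lower(substr(word, 1, 1))     AS first_letter,
           array_agg(word ORDER BY word) AS words
    FROM words
    GROUP BY lower(substr(word, 1, 1))
    ORDER BY first_letter;
    ```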

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar" a_id = 4 is missing names "foo" and "bar" Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it's at all fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
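    An anti-join over the list of required names avoids string aggregation entirely and runs unchanged on PostgreSQL, SQLite and MySQL. A sketch, where the two-row derived table just enumerates the mandatory names; an index on b (a_id, name) keeps the probe side cheap.

    ```sql
    SELECT a.a_id, req.name AS missing_name
    FROM a
    CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
    LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
    WHERE b.b_id IS NULL;
    ```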

    Read the article

  • Heroku Push Problem part 2 - Postgresql - PGError Relations does not exist - Ruby on Rails

    - by bgadoci
    Ok so got through my last problem with the difference between Postgresql and SQLite and seems like Heroku is telling me I have another one. I am new to ruby and rails so a lot of this stuff I can't decipher at first. Looking for a little direction here. The error message and PostsController Index are below. I checked my routes.rb file and all seems well there but I could be missing something. I will post if you need. Processing PostsController#index (for 99.7.50.140 at 2010-04-23 15:19:22) [GET] ActiveRecord::StatementInvalid (PGError: ERROR: relation "tags" does not exist : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"tags"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum ): PostsController#index def index @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes unless(params[:tag_name] || "").empty? conditions = ["tags.tag_name = ? ", params[:tag_name]] joins = [:tags, :votes] end @posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id ", :order => "created_at DESC", :page => params[:page], :per_page => 5) @popular_posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id", :order => "vote_total DESC", :page => params[:page], :per_page => 3) respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } format.json { render :json => @posts } format.atom end end
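    A relation-does-not-exist error on Heroku usually just means the schema was never created on the remote PostgreSQL database. A guess at the fix, in the 2010-era heroku gem syntax (newer toolbelts spell it heroku run rake db:migrate):

    ```sh
    heroku rake db:migrate
    heroku restart
    ```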

    Read the article

  • SQL: convert tokens in a string or elements of an array into rows of a table

    - by slowpoison
    Is there a simple way in SQL to convert a string or an array to rows of a table? For example, let's say the string is 'a,b,c,d,e,f,g'. I'd prefer an SQL statement that takes that string, splits it at commas and inserts the resulting strings into a table. In PostgreSQL I can use regexp_split_to_array() and split the string into an array. So, if you know a way to insert an array's elements as rows into a table, that would work too.
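    PostgreSQL has this built in: regexp_split_to_table splits straight into rows, and unnest does the same for an existing array, so either can feed an INSERT ... SELECT. A sketch with a placeholder target table items(name); unnest needs 8.4+.

    ```sql
    INSERT INTO items (name)
    SELECT regexp_split_to_table('a,b,c,d,e,f,g', ',');

    -- or, going through an array first
    INSERT INTO items (name)
    SELECT unnest(regexp_split_to_array('a,b,c,d,e,f,g', ','));
    ```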

    Read the article

  • Optimal size for Database partitions

    - by Adrian Mouat
    Hi all, I am creating a very simple, very large Postgresql database. The database will have around 10 billion rows, which means I am looking at partitioning it into several tables. However, I can't find any information on how many partitions we should break it into. I don't know what type of queries to expect as of yet, so it won't be possible to come up with a perfect partitioning scheme, but are there any rules of thumb for partition size? Cheers, Adrian.
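    For orientation, the inheritance-style partitioning of that era is usually kept to no more than roughly a hundred children, since constraint exclusion examines every partition's CHECK constraint at plan time. A sketch of one range partition; the names, ranges and the routing trigger are all placeholders.

    ```sql
    CREATE TABLE readings (
        id    bigint  NOT NULL,
        taken date    NOT NULL,
        value numeric NOT NULL
    );

    CREATE TABLE readings_2010 (
        CHECK (taken >= DATE '2010-01-01' AND taken < DATE '2011-01-01')
    ) INHERITS (readings);

    CREATE INDEX readings_2010_taken_idx ON readings_2010 (taken);

    -- one child per range, plus a trigger (or rule) routing INSERTs;
    -- constraint_exclusion = on/partition lets the planner skip other children
    ```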

    Read the article

  • Scrubbing IPv4 dotted-quad addresses in SQL

    - by pilcrow
    Given a table containing dotted quad IPv4 addresses stored as a VARCHAR(15), for example: ipv4 -------------- 172.16.1.100 172.16.50.5 172.30.29.28 what's a convenient way to SELECT all "ipv4" fields with the final two octets scrubbed, so that the above would become: ipv4 ------------ 172.16.x.y 172.16.x.y 172.30.x.y Target RDBMS is postgresql 8.4, but the more portable the better! Thanks.
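    Two interchangeable sketches for 8.4, with addresses as a placeholder table name; split_part is PostgreSQL-specific, the regexp variant even more so, and the E'' strings keep the backslashes intact under the pre-9.1 default of standard_conforming_strings = off.

    ```sql
    SELECT split_part(ipv4, '.', 1) || '.' || split_part(ipv4, '.', 2) || '.x.y' AS ipv4
    FROM addresses;

    SELECT regexp_replace(ipv4, E'^(\\d+\\.\\d+)\\.\\d+\\.\\d+$', E'\\1.x.y') AS ipv4
    FROM addresses;
    ```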

    Read the article
