Search Results

Search found 27519 results on 1101 pages for 'sql learner'.


  • XML Import how would you do it?

    - by Rico
    XML is one of our main integration points. It arrives from many clients at a time, and too many clients importing simultaneously can slow our database to a crawl. Someone must have solved a problem like this before. I am currently using VB to parse the data and filter what to import and what to skip. Is there a better way?
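
    One approach worth considering (a minimal sketch, assuming SQL Server as the target; the staging table, column names, and file path are all hypothetical) is to bulk-load each client's XML into a staging table and shred it set-based on the server, instead of parsing it row by row in VB:

      -- load the raw document in a single statement (path is illustrative)
      INSERT INTO xml_staging (client_id, doc)
      SELECT 42, CAST(BulkColumn AS XML)
      FROM OPENROWSET(BULK 'C:\feeds\client42.xml', SINGLE_BLOB) AS src;

      -- shred only the elements of interest, set-based
      INSERT INTO orders (order_id, amount)
      SELECT x.n.value('(id)[1]', 'int'),
             x.n.value('(amount)[1]', 'decimal(10,2)')
      FROM xml_staging s
      CROSS APPLY s.doc.nodes('/orders/order') AS x(n);

    Moving the shredding into the database keeps each client's work to one bulk insert, which tends to contend less than many small row-by-row inserts.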

    Read the article

  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there is any way to make this query faster. Currently it takes approximately 11.6 seconds to execute with just one search parameter, and I'm worried that as soon as I add more than one, the query will become massively slow.

      SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
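
    If the search really is a substring or regex match, one option (a sketch, assuming the pg_trgm extension can be installed; regex support for trigram indexes depends on the PostgreSQL version, and older releases only accelerate ILIKE '%canon%') is a trigram index, so the count does not need a full scan:

      CREATE EXTENSION IF NOT EXISTS pg_trgm;

      -- GIN trigram index on the searched column
      CREATE INDEX exif_parse_description_trgm
          ON exif_parse USING gin (description gin_trgm_ops);

      -- the existing query can then use the index
      SELECT count(*) FROM exif_parse WHERE description ~* 'canon';

    Additional search parameters then become additional indexed predicates rather than additional sequential scans.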

    Read the article

  • count(*) vs count(column-name) - which is more correct?

    - by bread
    Does it make a difference if you do count(*) vs count(column-name), as in these two examples? I have a tendency to always write count(*) because it seems to fit better in my mind with the notion of it being an aggregate function, if that makes sense. But I'm not sure if it's technically best, as I tend to see example code written without the * more often than not.

    count(*):

      SELECT customerid, count(*), sum(price)
      FROM items_ordered
      GROUP BY customerid
      HAVING count(*) > 1;

    vs. count(column-name):

      SELECT customerid, count(customerid), sum(price)
      FROM items_ordered
      GROUP BY customerid
      HAVING count(customerid) > 1;
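
    The practical difference shows up with NULLs: count(*) counts rows, while count(column-name) counts only the rows where that column is not NULL. A small sketch (the table and values are made up for illustration):

      CREATE TABLE t (customerid INT, price DECIMAL(10,2));
      INSERT INTO t VALUES (1, 9.99), (1, NULL), (2, 5.00);

      SELECT customerid,
             COUNT(*)     AS all_rows,
             COUNT(price) AS non_null_prices
      FROM t
      GROUP BY customerid;
      -- customerid 1 -> all_rows = 2, non_null_prices = 1

    When counting a NOT NULL column such as customerid, the two forms return the same numbers.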

    Read the article

  • oracle call stored procedure inside select

    - by CC
    Hi everybody. I'm working on a query (a SELECT) and I need to insert its result into a table. Before doing the insert I have some checking to do, and if all columns are valid I will do the insert. The checking is done in a stored procedure, which is also used somewhere else, so I'm thinking of reusing the same procedure for my checks. The procedure does the checking and inserts the values if everything is OK. I tried to call the procedure inside my SELECT, but it does not work:

      SELECT field1, field2, myproc(field1, field2) FROM MYTABLE;

    This kind of code does not work. I think it could be done with a cursor, but I would like to avoid cursors. I'm looking for the simplest solution. Anybody, any idea? Thanks a lot. C.C.
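
    One possibility (a minimal sketch; the function body and names are hypothetical) is to expose the checks as a function rather than a procedure, since only functions can appear in a select list, and a function called from a query must not itself perform DML. The insert then stays outside the query:

      CREATE OR REPLACE FUNCTION is_valid_row(p_field1 IN VARCHAR2,
                                              p_field2 IN VARCHAR2)
        RETURN NUMBER  -- 1 = passes the checks, 0 = fails
      IS
      BEGIN
        -- same validation logic the existing procedure applies
        RETURN CASE WHEN p_field1 IS NOT NULL AND p_field2 IS NOT NULL
                    THEN 1 ELSE 0 END;
      END;
      /

      INSERT INTO target_table (field1, field2)
      SELECT field1, field2
      FROM   mytable
      WHERE  is_valid_row(field1, field2) = 1;

    The existing procedure could delegate to the same function so the checking logic lives in one place.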

    Read the article

  • MySQL count statements error - operand should contain 1 column(s)

    - by jason
    I am trying to do multiple counts. Everything was working except the first sub-select (list1), which gives the error "Operand should contain 1 column(s)". I'm guessing it has something to do with the AND, but I'm not sure how to fix it.

      SELECT Count(list0.ustatus) AS finished_count,
             (SELECT list1.ustatus, Count(*)
              FROM listofupdates list1
              WHERE list1.ustarted != '0000-00-00 00:00:00'
                AND list1.ustatus != 1) AS start_count,
             (SELECT Count(list2.udifficulty)
              FROM listofupdates list2
              WHERE list2.udifficulty = 2) AS recheck_count,
             (SELECT Count(list3.udifficulty)
              FROM listofupdates list3
              WHERE list3.udifficulty = 4) AS question_count
      FROM listofupdates list0
      WHERE list0.ustatus = 1
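
    The error comes from the first sub-select returning two columns (list1.ustatus and the count); a scalar subquery may only return one. One way to avoid the issue entirely (a sketch over the same listofupdates table, relying on MySQL treating boolean expressions as 0/1) is conditional aggregation in a single pass:

      SELECT SUM(ustatus = 1)                                        AS finished_count,
             SUM(ustarted != '0000-00-00 00:00:00' AND ustatus != 1) AS start_count,
             SUM(udifficulty = 2)                                    AS recheck_count,
             SUM(udifficulty = 4)                                    AS question_count
      FROM listofupdates;

    This scans the table once instead of four times.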

    Read the article

  • Conditional sorting in MySQL?

    - by serg555
    I have a "tasks" table with 3 fields:

      date
      priority (0,1,2)
      done (0,1)

    What I am trying to achieve is that, with the whole table sorted by the done flag, tasks that are not done are sorted by priority, while tasks that are done are sorted by date:

      SELECT * FROM tasks ORDER BY done ASC
      -- if done = 0, additionally order by priority DESC
      -- if done = 1, additionally order by date DESC

    Is it possible to do this in MySQL without unions? Thanks.
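
    One way to do it without unions (a sketch against the tasks table described above) is to make the secondary sort key conditional with CASE; the branch that does not apply evaluates to NULL and therefore does not affect the other group's ordering:

      SELECT *
      FROM tasks
      ORDER BY done ASC,
               CASE WHEN done = 0 THEN priority END DESC,
               CASE WHEN done = 1 THEN date     END DESC;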

    Read the article

  • Does UNIQ constraint mean also an index on that field(s)?

    - by Gremo
    As the title says: should I define a separate index on the email column (for searching purposes), or is an index "automatically" created along with the UNIQ_EMAIL_USER constraint?

      CREATE TABLE IF NOT EXISTS `customer` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `user_id` int(11) NOT NULL,
        `first` varchar(255) NOT NULL,
        `last` varchar(255) NOT NULL,
        `slug` varchar(255) NOT NULL,
        `email` varchar(255) NOT NULL,
        `created_at` datetime NOT NULL,
        `updated_at` datetime NOT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `UNIQ_SLUG` (`slug`),
        UNIQUE KEY `UNIQ_EMAIL_USER` (`email`,`user_id`),
        KEY `IDX_USER` (`user_id`)
      ) ENGINE=InnoDB;
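
    In MySQL a UNIQUE KEY is implemented as an index, and since email is the leftmost column of UNIQ_EMAIL_USER, lookups by email alone can already use it; a separate index on email would be redundant. A quick way to confirm this on the table above:

      SHOW INDEX FROM customer;   -- lists UNIQ_EMAIL_USER as a unique BTREE index

      EXPLAIN SELECT * FROM customer
      WHERE email = 'someone@example.com';   -- the key column should show UNIQ_EMAIL_USER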

    Read the article

  • How can I treat a sequence value like a generated key?

    - by Jeff Knecht
    Here is my situation and my constraints: I am using Java 5, JDBC, and DB2 9.5. My database table contains a BIGINT value which represents the primary key. For various reasons that are too complicated to go into here, the way I insert records into the table is by executing an INSERT against a VIEW; an INSTEAD OF trigger retrieves the NEXT_VAL from a SEQUENCE and performs the INSERT into the target table. I can change the triggers, but I cannot change the underlying table or the general approach of inserting through the view. I want to retrieve the sequence value from JDBC as if it were a generated key. Question: how can I get access to the value pulled from the SEQUENCE? Is there some statement I can fire within DB2 to float this sequence value back to the JDBC driver?
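
    One possibility (a sketch, assuming the lookup runs on the same connection that performed the insert, and that the sequence is named customer_seq, which is hypothetical) is to read the sequence's previous value immediately after the insert; DB2 scopes PREVIOUS VALUE to the current session, so inserts from other connections do not interfere:

      -- the insert still goes through the view / INSTEAD OF trigger
      INSERT INTO customer_view (name, email) VALUES ('Ada', 'ada@example.com');

      -- then, on the same connection, fetch the value the trigger consumed
      SELECT PREVIOUS VALUE FOR customer_seq AS generated_id
      FROM SYSIBM.SYSDUMMY1;

    In JDBC that second statement is just an ordinary query executed on the same Connection; it only works reliably if the trigger pulls exactly one value from the sequence per insert.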

    Read the article

  • SQL Server Reporting Services using of pictures with dynamic links

    - by YvesR
    I have set up SSRS 2008 and am building reports. So far so good. SSRS has an image control where you can set the image source to an external link. When I use a web link (http://anyurl/download_picture.aspx?id=123) it doesn't work for me, even though calling the URL in a web browser (tested in IE, Safari, Chrome, and FF) delivers the picture and the headers, including the content type, are OK. Does this work at all in SSRS, or do I have to copy the picture to a temp folder and link to a URL like http://anyurl/mypicture.jpg?

    Read the article

  • My update query executes but doesn't update

    - by Kindson
    I have this update query:

      UPDATE production_shr_01
      SET total_hours = hours,
          total_weight = weight,
          percentage = total_hours / 7893.3
      WHERE (status = 'X')

    The query executes fine, but it doesn't update the percentage field. What might be the problem?
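
    A likely cause (in most databases, including SQL Server and Oracle, all SET expressions are evaluated against the row's values as of the start of the statement) is that total_hours / 7893.3 divides the old total_hours, which may be NULL or stale, rather than the hours value assigned in the same statement. A sketch that computes the percentage from the source column instead:

      UPDATE production_shr_01
      SET total_hours  = hours,
          total_weight = weight,
          percentage   = hours / 7893.3   -- use the source value, not the column being assigned
      WHERE (status = 'X')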

    Read the article

  • Limit calls to external database with Python CGI

    - by Matt Ball
    I've got a Python CGI script that pulls data from a GPS service; I'd like this information to be updated on the webpage about once every 10s (the maximum allowed by the GPS service's TOS). But there could be, say, 100 users viewing the webpage at once, all calling the script. I think the users' scripts need to grab data from a buffer page that itself only updates once every ten seconds. How can I make this buffer page auto-update if there's no one directly viewing the content (and not accessing the CGI)? Are there better ways to accomplish this?

    Read the article

  • update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table:

      With rs
          .AddNew ' create a new record
          ' add values to each field in the record
          .Fields("datapath") = dpath
          .Fields("analysistime") = atime
          .Fields("reporttime") = rtime
          .Fields("lastcalib") = lcalib
          .Fields("analystname") = aname
          .Fields("reportname") = rname
          .Fields("batchstate") = "bstate"
          .Fields("instrument") = "NA"
          .Update ' stores the new record
      End With

    When the schema of the table is this, updating works:

      create table batchinfo(datapath text, analysistime text, reporttime text, lastcalib text,
                             analystname text, reportname text, batchstate text, instrument text);

    But when there is an auto_increment column it does not work:

      CREATE TABLE batchinfo (
        rowid int(11) NOT NULL AUTO_INCREMENT,
        datapath text,
        analysistime text,
        reporttime text,
        lastcalib text,
        analystname text,
        reportname text,
        batchstate text,
        instrument text,
        PRIMARY KEY (rowid)
      ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1

    Has anyone experienced a problem like this, where updating does not work when an auto_increment field is involved? Connection string:

      Private Sub ConnectDB()
          Set oConn = New ADODB.Connection
          oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                     "SERVER=localhost;" & _
                     "DATABASE=employees;" & _
                     "USER=root;" & _
                     "PASSWORD=pas;" & _
                     "Option=3"
      End Sub

    Also, here's the rs.Open:

      rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable

    Read the article

  • How can I find all records for a model without doing a long list of "OR" conditions?

    - by gomezuk
    I'm having trouble composing a CakePHP find() which returns the records I'm looking for. My associations go like this: User -(has many)- Friends, User -(has many)- Posts. I'm trying to display a list of all of a user's friends' recent posts, in other words, every post that was created by a friend of the currently logged-in user. The only way I can think of doing this is by putting all the user's friends' user_ids in a big array and then looping through each one, so that the find() call would look something like:

      $posts = $this->Post->find('all', array(
          'conditions' => array(
              'Post.user_id' => array(
                  'OR' => array(
                      $user_id_array[0], $user_id_array[1], $user_id_array[2] # .. etc
                  )
              )
          )
      ));

    I get the impression this isn't the best way of doing things, because if that user is popular that's a lot of OR conditions. Can anyone suggest a better alternative?
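
    For what it's worth, passing the plain array of ids ('Post.user_id' => $user_id_array) makes CakePHP generate an IN clause, so the long OR chain isn't needed. Either way the SQL boils down to roughly the sketch below (the friends table's column names are assumptions), and the same result can also come from a join so the id list never has to be built in PHP at all:

      -- what the IN-list version amounts to
      SELECT * FROM posts WHERE user_id IN (2, 7, 15);

      -- or let the database resolve the friendships itself
      SELECT p.*
      FROM posts p
      JOIN friends f ON f.friend_id = p.user_id
      WHERE f.user_id = 1          -- the logged-in user
      ORDER BY p.created DESC;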

    Read the article

  • Oracle: how to UPSERT (update or insert into a table?)

    - by Mark Harrison
    The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:

      if table t has a row whose key is X:
          update t set mystuff... where mykey = X
      else:
          insert into t mystuff...

    Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
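
    Oracle's closest built-in is MERGE (available since 9i). A minimal sketch matching the pseudocode above, with bind variables standing in for the key and the data:

      MERGE INTO t
      USING (SELECT :x AS mykey, :stuff AS mystuff FROM dual) src
      ON (t.mykey = src.mykey)
      WHEN MATCHED THEN
        UPDATE SET t.mystuff = src.mystuff
      WHEN NOT MATCHED THEN
        INSERT (mykey, mystuff) VALUES (src.mykey, src.mystuff);

    Unlike a separate update-then-insert, MERGE does the match and the write in one statement.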

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

      SELECT DATE(timestamp) AS day, COUNT(*)
      FROM actions
      WHERE DATE(timestamp) >= '20100101'
        AND DATE(timestamp) <  '20110101'
      GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

      GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
        ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
              Sort Key: (date("timestamp"))
              Sort Method:  external merge  Disk: 372496kB
              ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                    Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
      Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried an index on the date expression:

      CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

      HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
        ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
              Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
      Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
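
    One thing to try (a sketch; how much it helps depends on the data distribution and on work_mem) is an index on the raw timestamp combined with a plain range predicate, so the planner can use an index scan instead of computing DATE() for every row before filtering:

      CREATE INDEX actions_timestamp_idx ON actions (timestamp);

      SELECT DATE(timestamp) AS day, COUNT(*)
      FROM actions
      WHERE timestamp >= '2010-01-01'
        AND timestamp <  '2011-01-01'
      GROUP BY day;

    The "external merge Disk" line in the first plan also shows the sort spilling to disk, so raising work_mem for the session may help independently of any index.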

    Read the article

  • Fetch multiple rows from SQL in PHP foreach item in array

    - by TrySpace
    I am trying to request an array of IDs and return each row with a matching ID, pushing each into an array $finalArray. But only the first result from the query is output; on the second foreach iteration it skips the while loop. I have this working in another script, so I don't understand where it's going wrong. $arrayItems is an array containing "home" and "info".

      $finalArray = array();
      foreach ($arrayItems as $UID_get) {
          $Query = "SELECT * FROM items WHERE (uid = '" . cleanQuery($UID_get) . "' ) ORDER BY uid";
          if ($Result = $mysqli->query($Query)) {
              print_r($UID_get);
              echo "<BR><-><BR>";
              while ($Row = $Result->fetch_assoc()) {
                  array_push($finalArray, $Row);
                  print_r($finalArray);
                  echo "<BR><><BR>";
              }
          } else {
              echo '{ "returned" : "FAIL" }'; //. mysqli_connect_errno() . ' ' . mysqli_connect_error() . "<BR>";
          }
      }

    (cleanQuery escapes and strips slashes.) What I'm trying to get is an array of multiple rows (after I json_encode it), like:

      {"finalArray" : {
          "home": {"id":"1","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"red\" }"} },
          { "info": {"id":"2","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"blue\" }"} }
      }

    But that's after I get both (or more) results from the db. The print_r($UID_get) does print "info", but then nothing. So why am I not getting the second row, for "info"? I am essentially re-querying for each $arrayItem, right?
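
    Independent of the PHP loop structure, a single query can fetch the rows for every uid at once (a sketch; the IN list mirrors the contents of $arrayItems), which avoids re-querying per item and tends to make this kind of grouping problem easier to debug:

      SELECT * FROM items
      WHERE uid IN ('home', 'info')
      ORDER BY uid;

    The rows can then be keyed by uid while looping over the one result set.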

    Read the article

  • Creating temporary tables in MySQL Stored Procedure

    - by burntblark
    The following procedure gives me an error when I invoke it using the CALL statement:

      CREATE DEFINER=`user`@`localhost` PROCEDURE `emp_performance`(id VARCHAR(10))
      BEGIN
          DROP TABLE IF EXISTS performance;
          CREATE TABLE performance AS
              SELECT time_in, time_out, day
              FROM attendance
              WHERE employee_id = id;
      END

    The error says "Unknown table 'performance'". This is my first time actually using stored procedures and I got my sources from Google; I just can't figure out what I am doing wrong.
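
    One thing to try (a sketch of the same body; whether it resolves the CALL error depends on what is actually failing, e.g. privileges on the permanent table) is a session-scoped TEMPORARY table, which only requires the CREATE TEMPORARY TABLES privilege and disappears when the connection closes:

      -- run with a non-default DELIMITER so the ; inside the body does not end the statement
      CREATE DEFINER=`user`@`localhost` PROCEDURE `emp_performance`(id VARCHAR(10))
      BEGIN
          DROP TEMPORARY TABLE IF EXISTS performance;
          CREATE TEMPORARY TABLE performance AS
              SELECT time_in, time_out, day
              FROM attendance
              WHERE employee_id = id;
      END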

    Read the article
