Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

Page 673 of 1102

  • copy an identity column into another table

    - by slake
    I have 2 tables that are related; both have identity columns for primary keys, and I am using a VB form to insert data into them. My problem is that I cannot get the child table to pick up the primary key of the parent table and use it as its foreign key in my database. The data is inserted fine, though no foreign key link is made. I am wondering if a trigger will do it, and if so, how. All my inserting of data is done in VB. The user won't insert any keys; these are all identity columns that are auto-generated. If a trigger is my way out, please illustrate with an example. If there is another way I can do this in VB itself, please advise; an example would be greatly appreciated. Thanks in advance
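
    A minimal sketch of a trigger-free approach, assuming SQL Server and hypothetical Parent/Child table and column names: capture the generated key with SCOPE_IDENTITY() right after the parent insert, then use it for the child row.

        DECLARE @parentId int;

        INSERT INTO Parent (Name) VALUES ('example');
        SET @parentId = SCOPE_IDENTITY();  -- identity value generated by the insert above

        INSERT INTO Child (ParentId, Detail) VALUES (@parentId, 'example detail');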

  • select distinct over specific columns

    - by Midhat
    A query in a system I maintain returns:

        QID  AID  DATA
        1    2    x
        1    2    y
        5    6    t

    Per a new requirement, I do not want the (QID, AID) = (1, 2) pair to be repeated. We also don't care which value is selected from the DATA column; either x or y will do. What I have done is to enclose the original query like this:

        SELECT * FROM (<original query text>) Results GROUP BY QID, AID

    Is there a better way to go about this? The original query uses multiple joins and unions and whatnot, so I would prefer not to touch it unless it's absolutely necessary.
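
    A minimal sketch that makes the same idea standard SQL, assuming any DATA value per pair is acceptable: aggregate the extra column instead of selecting * alongside a GROUP BY.

        SELECT QID, AID, MIN(DATA) AS DATA
        FROM (<original query text>) Results
        GROUP BY QID, AID;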

  • PHP, MySQL - would results-array shuffle be quicker than "select... order by rand()"?

    - by sombe
    I've been reading a lot about the disadvantages of using ORDER BY RAND(), so I don't need an update on that. I was thinking, since I only need a limited number of rows retrieved from the db to be randomized, maybe I should do:

        $r = $db->query("select * from table limit 500");
        for ($i = 0; $i < 500; $i++)
            $arr[$i] = mysqli_fetch_assoc($r);
        shuffle($arr);

    (I know this only randomizes the first 500 rows, so be it.) Would that be faster than:

        $r = $db->query("select * from table order by rand() limit 500");

    Let me just mention, say the db tables were packed with more than 10,000 rows. "Why don't you do it yourself?!" Well, I have, but I'm looking for your experienced opinion. Thanks!

  • Getting #Error value from SSRS reporting

    - by deepa
    I have created a dataset with fields "LastRunBuild" and "project". The LastRunBuild field contains a string of data separated by commas according to each project, but some projects have no value in the LastRunBuild field. When I use this expression:

        iif(Fields!LastRunBuild.Value = Nothing, Nothing, Split(Fields!LastRunBuild.Value, ",").GetValue(3))

    a #Error value is returned every time. Please reply...

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes: I'm developing a solution that will retrieve data from a remote SOAP-based web service and will then push this data to an Oracle database on the network. This will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval, and once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval. Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables on Oracle, with some of them having 100+ columns. Moreover, every 15-minute cycle brings around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole. Current test code (on every cycle):

    1. Connects to the remote service to get events.
    2. Creates a connection with the DB (single connection object).
    3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
    4. Calls the respective method based on the type of operation and table.
    5. Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service and assigns it to the statement parameters.
    6. Commits the statement and returns to the get-event class to process the next event.

    The above is repeated until all the retrieved events are processed, after which the program closes and then starts on the next cycle, and everything repeats again. Thanks for help!

  • SQLite: Simple DELETE statement did not work

    - by user186446
    I have a table MRU that has 3 columns (VALUE varchar(255), TYPE varchar(20), DT_ADD datetime). This is a table that simply stores an entry and records the date and time it was added. What I want to do is delete the oldest entry whenever I add a new entry that exceeds a certain number. Here is my query:

        delete from MRU where type = 'FILENAME' ORDER BY DT_ADD limit 1;

    The query returns an error. Thanks
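
    A minimal workaround sketch, assuming the table keeps its implicit rowid: standard SQLite builds do not accept ORDER BY/LIMIT on DELETE, so target the oldest row through a subquery instead.

        DELETE FROM MRU
        WHERE rowid = (SELECT rowid
                       FROM MRU
                       WHERE type = 'FILENAME'
                       ORDER BY DT_ADD
                       LIMIT 1);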

  • Query to check the consistency of records

    - by orunner
    I have four tables:

        TableA: id1, id2, id3, value
        TableB: id1, desc
        TableC: id2, desc
        TableD: id3, desc

    What I need to do is to check if all combinations of id1, id2, id3 from tables B, C and D exist in TableA. In other words, TableA should contain all possible combinations of id1, id2 and id3 which are stored in the other three tables.
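
    A minimal sketch that lists the missing combinations (an empty result means TableA is complete):

        SELECT b.id1, c.id2, d.id3
        FROM TableB b
        CROSS JOIN TableC c
        CROSS JOIN TableD d
        WHERE NOT EXISTS (SELECT 1
                          FROM TableA a
                          WHERE a.id1 = b.id1
                            AND a.id2 = c.id2
                            AND a.id3 = d.id3);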

  • MySQL - accessing a table sum and comparing it to another table?

    - by assignment_operator
    This is for a homework assignment. I just plain don't understand how to do it. The instructions for this particular question are: list the branch name for all branches that have at least one book that has at least 4 copies on hand. The tables in question are:

        Branch:
        BranchName        | BranchId
        Henry Downtown    | 1
        16 Riverview      | 2
        Henry On The Hill | 3

        Inventory:
        BookId | BranchId | OnHand
        1      | 1        | 2
        2      | 3        | 4
        3      | 1        | 8
        4      | 3        | 1
        5      | 1        | 2
        6      | 2        | 3

    From what I understand, I can get the number of OnHand per branch name with:

        SELECT BranchName, SUM(OnHand)
        FROM Branch B, Inventory I
        WHERE B.BranchId = I.BranchId
        GROUP BY BranchName;

    but I don't get how I'd do the comparison between the sum of OnHand per branch and 4. Any help would be appreciated, guys!
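
    A minimal sketch of the usual tool for comparing a per-group aggregate with a constant: HAVING, which filters groups the way WHERE filters rows. Note that "at least one book with at least 4 copies" is a per-book test, so MAX rather than SUM fits the stated requirement.

        SELECT BranchName
        FROM Branch B
        JOIN Inventory I ON B.BranchId = I.BranchId
        GROUP BY BranchName
        HAVING MAX(I.OnHand) >= 4;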

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table: an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried an index on the date expression:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
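
    A minimal sketch of one common rewrite, assuming the column is a plain timestamp: filter on a timestamp range instead of on DATE(timestamp), so an ordinary index on the column can serve the predicate. Whether the planner actually uses the index depends on how much of the table the range covers.

        CREATE INDEX ON actions (timestamp);

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp <  '2011-01-01'
        GROUP BY day;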

  • R equivalent of SELECT DISTINCT on two or more fields/variables

    - by wahalulu
    Say I have a dataframe df with two or more columns. Is there an easy way to use unique() or another R function to create a subset of unique combinations of two or more columns? I know I can use sqldf() and write an easy "SELECT DISTINCT var1, var2, ... varN" query, but I am looking for an R way of doing this. It occurred to me to try ftable coerced to a dataframe and use the field names, but I also get the cross-tabulations of combinations that don't exist in the dataset:

        uniques <- as.data.frame(ftable(df$var1, df$var2))

  • Is a payment table needed when you have an invoice table like this?

    - by EBAGHAKI
    This is my invoice table:

        Invoice:
        invoice_id
        creation_date
        due_date
        payment_date
        status enum('not paid', 'paid', 'expired')
        user_id
        total_price

    I wonder if it's useful to have a payment table in order to record user payments for invoices. The payment table could be like this:

        Payment:
        payment_id
        payment_date
        invoice_id
        price_paid
        status enum('successful', 'not successful')

  • How does LINQ decide between inner & outer joins

    - by user287795
    Hi. Usually LINQ uses a left outer join for its queries, but in some cases it decides to use an inner join instead. I have a situation where that decision produces wrong results, since the second table doesn't always have suitable records, and that removes the records from the first table. I'm using a LinqDataSource over a DBML where the relevant tables are identical, but one holds historical records removed from the first; both have the same primary key. And I'm using DataLoadOptions to load both tables at once without round trips. Would you explain why LINQ decided to use an inner join here? Thanks

  • Problem with sending "setcookie" first in PHP code

    - by Camran
    According to this manual: http://us2.php.net/setcookie I have to set the cookie before anything else. Here is my cookie code:

        if (isset($_COOKIE['watched_ads'])) {
            $expir = time() + 1728000; // 20 days
            $ad_arr = unserialize($_COOKIE['watched_ads']);
            $arr_elem = count($ad_arr);
            if (in_array($ad_id, $ad_arr) == FALSE) {
                if ($arr_elem > 10) {
                    array_shift($ad_arr);
                }
                $ad_arr[] = $ad_id;
                setcookie('watched_ads', serialize($ad_arr), $expir, '/');
            }
        } else {
            $expir = time() + 1728000; // 20 days
            $ad_arr[] = $ad_id;
            setcookie('watched_ads', serialize($ad_arr), $expir, '/');
        }

    As you can see, I am using variables in setting the cookie. The variables come from a mysql_query, and I have to do the query first. But then, if I do, I get an error message: "Cannot modify header information - headers already sent by ...". The error points to the line where I set the cookie above. What should I do?

  • SSRS run SQL/DataSet conditionally

    - by MikeTWebb
    Hello... I have an SSRS report that contains several subreports. The user has the ability to select/deselect which subreports they want to produce, using several Boolean parameters. If a subreport is deselected, it is not rendered, via the Visibility property. However, the DataSet associated with the deselected subreport still executes, causing the execution time to take longer than expected. Is there any way to tell a dataset on a subreport or Tablix not to execute, based on a parameter selection? Thanks

  • Getting records from a table based on a filter field and Between but also having the OR logic for mu

    - by Pentium10
    I have this table, where I store multiple ids and an age range (def1, def2):

        CREATE TABLE "template_requirements" (
            "_id" INTEGER NOT NULL,
            "templateid" INTEGER,
            "def1" VARCHAR(255),
            "def2" VARCHAR(255),
            PRIMARY KEY("_id")
        )

    with values such as:

        templateid | def1 | def2
        100        | 7    | 25
        200        | 40   | 90
        300        | 7    | 25
        300        | 40   | 60

    As you see, for templateid 300 we have OR logic: age between 7 and 25, or age between 40 and 60. I want to get all the template ids that are not for a certain age, like 25. What's the problem? If I run a query like this one:

        SELECT group_concat(templateid)
        FROM template_requirements
        WHERE '25' NOT BETWEEN cast(def1 AS integer) AND cast(def2 AS integer)

    it returns 200, 300, which is wrong: 300 matched on the 40-to-60 row, but it shouldn't be included in the result, because there is a condition with the same templateid, 7 to 25, that fails the NOT BETWEEN test. What would be the correct query in SQLite? I would like to keep the group_concat stuff.
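
    A minimal sketch that keeps group_concat: exclude any templateid that has at least one range containing the age.

        SELECT group_concat(templateid)
        FROM (SELECT DISTINCT templateid FROM template_requirements) t
        WHERE NOT EXISTS (SELECT 1
                          FROM template_requirements r
                          WHERE r.templateid = t.templateid
                            AND 25 BETWEEN cast(r.def1 AS integer)
                                       AND cast(r.def2 AS integer));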

  • What sort of schema can I use to accommodate manual date based data entries?

    - by meder
    I have an admin where users from multiple properties can enter monthly statistics for Twitter/Facebook followers. We do not have access to the real data/db, which is why entry is manual. The form looks like this:

        Type (radio, select one only): Twitter / Facebook
        Followers/Fans (textfield)
        Property (dropdown): Hotel A, Hotel B
        Date Start: mm/dd/yyyy (textfield)
        Date End: mm/dd/yyyy (textfield)

    Question 1.1: Since I am only keeping track of month per month, the date start/end fields which I have already created might be too specific. Would it be a better idea just to have a start month/year and an end month/year, if that's the only thing I care about?

    Question 1.2: What schema could I use for month-to-month statistics if I were to change the date start and end textfields to start month/year and end month/year dropdowns?
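
    A minimal schema sketch under the month/year assumption, with hypothetical names and MySQL types: one row per property, network and month, which makes a start/end pair unnecessary for single-month entries.

        CREATE TABLE social_stats (
            stat_id     INT AUTO_INCREMENT PRIMARY KEY,
            property_id INT NOT NULL,                        -- FK to the properties table
            network     ENUM('twitter', 'facebook') NOT NULL,
            stat_month  DATE NOT NULL,                       -- first day of the month, e.g. 2010-06-01
            followers   INT NOT NULL,
            UNIQUE (property_id, network, stat_month)
        );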

  • Proper chart scaling in Reporting Services 2005

    - by lastas
    I'm developing a simple bar chart in Reporting Services 2005 with a stored procedure as the data source. The values in this graph can be both positive and negative, and can span a very big range, so I cannot specify any non-dynamic scale that will work for all scenarios. The problem I'm facing is that the automatic scaling pretty much sucks. I get no line to show where the zero point is, and the y-scale labels are, from top to bottom:

        8818
        -191181
        -391181
        etc...

    So my question is, what is the best approach to make the scale more adapted to human reading? Is there any guide out there? Does Reporting Services 2008 handle this better? Also, moving away from Reporting Services is not really an option. I know how to put values and expressions in the max, min, and gridline interval fields; it's more a question of what expressions I should put there.

  • An attempt to attach an auto-named database for file failed in VB.NET

    - by user2454135
    I am trying to connect to a database for the first time, and I am getting this error: "An attempt to attach an auto-named database for file VBTestDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." I get the error on myconnect.Open(). Here's my code:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim myconnect As New SqlClient.SqlConnection
            myconnect.ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=VBTestDB.mdf;Integrated Security=True;User Instance=True;"
            Dim mycommand As SqlClient.SqlCommand = New SqlClient.SqlCommand()
            mycommand.Connection = myconnect
            mycommand.CommandText = "INSERT INTO Card (CardNo,Name) VALUES (@cardno,@name)"
            myconnect.Open()
            Try
                mycommand.Parameters.Add("@cardno", SqlDbType.Int).Value = TextBox1.Text
                mycommand.Parameters.Add("@name", SqlDbType.NVarChar).Value = TextBox2.Text
                mycommand.ExecuteNonQuery()
                MsgBox("Success")
            Catch ex As System.Data.SqlClient.SqlException
                MsgBox(ex.Message)
            End Try
            myconnect.Close()
        End Sub

  • MySQL: managing memory usage

    - by every_answer_gets_a_point
    I am doing a DELETE with a LIKE statement. My key buffer is 25M and the sort buffer size is 256K. The delete has been taking over 2 hours. Should I increase memory usage? There are about 50 megs of data in the table from which I am deleting; that's about 500,000 rows. Is there anything else I can do on the administration side to speed up this delete?
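
    A minimal sketch of two common mitigations, with hypothetical table, column and pattern names: add an index the LIKE can use (only patterns without a leading wildcard qualify), and delete in batches so each transaction stays small.

        -- an index helps LIKE 'prefix%' patterns, not LIKE '%substring%'
        CREATE INDEX idx_name ON mytable (name);

        -- repeat until zero rows are affected
        DELETE FROM mytable WHERE name LIKE 'prefix%' LIMIT 10000;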
